Recent discussions in the field of artificial intelligence have highlighted the challenges of training large language models (LLMs), in particular the substantial additional effort and compute that is often required after the initial pre-training run. A framework called POA (Pre-training Once for All) has been introduced to address this: a self-supervised learning approach intended to make a single pre-training pass serve subsequent needs, with the goal of improving training efficiency without degrading model performance. The concept of latent space has also emerged as a recurring focal point, with discussion of its role in optimizing self-supervised pretrained models and in analyzing the distribution of latent features. Together, these threads point to a growing interest in refining AI training methodology and in understanding the internal representations that machine learning models learn.
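
To make the notion of a "latent feature distribution" concrete, the sketch below passes a batch of inputs through an encoder and summarizes the distribution of the resulting latent vectors. It is a minimal illustration only, not the POA method or any specific framework mentioned above: the encoder architecture, latent dimensionality, input shape, and choice of summary statistics are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the POA implementation): inspect the
# latent feature distribution produced by an encoder.
import torch
import torch.nn as nn

# Stand-in encoder; in practice this would be a self-supervised pretrained
# backbone loaded from a checkpoint.
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512),
    nn.ReLU(),
    nn.Linear(512, 128),  # 128-dimensional latent space (assumed size)
)

# Random tensors standing in for a real batch of images.
images = torch.randn(256, 3, 32, 32)

with torch.no_grad():
    latents = encoder(images)  # shape: (256, 128)

# Summary statistics of the latent feature distribution: per-dimension
# mean/std and the effective rank of the covariance, one common way to
# gauge how uniformly the dimensions of the latent space are used.
mean = latents.mean(dim=0)
std = latents.std(dim=0)
cov = torch.cov(latents.T)
eigvals = torch.linalg.eigvalsh(cov).clamp(min=1e-12)
p = eigvals / eigvals.sum()
effective_rank = torch.exp(-(p * p.log()).sum())

print(f"latent dim: {latents.shape[1]}")
print(f"mean norm:  {mean.norm().item():.3f}")
print(f"std range:  [{std.min().item():.3f}, {std.max().item():.3f}]")
print(f"effective rank of covariance: {effective_rank.item():.1f}")
```

Statistics like these are one simple way to probe how a self-supervised pretrained model occupies its latent space; a collapsed or highly anisotropic distribution (low effective rank) is often a sign that further optimization of the pretraining objective is needed.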