"Protein Language Model Fitness Is a Matter of Preference" https://t.co/fbz7aFFPPw Abstract Leveraging billions of years of evolution, scientists have trained protein language models (pLMs) to understand the sequence and structure space of proteins aiding in the design of more… https://t.co/MctCObuvih
"Single-Sequence, Structure Free Allosteric Residue Prediction with Protein Language Models" https://t.co/5gY9JwlyCG Abstract Large language models trained on protein amino acid sequences have shown the ability to learn general coevolutionary relationships at scale, which in… https://t.co/ohNtzyz2Oa
Cool new preprint from @BrianHie and Peter Kim’s groups @StanfordEng and @Stanford_ChEMH Single-Sequence, Structure Free Allosteric Residue Prediction with Protein Language Models | bioRxiv https://t.co/jAxCtci32L

Recent advances in protein language models (pLMs) have shown significant potential for protein design and engineering. One paper explores deep learning models that co-generate protein sequences and structures, aiming to improve design accuracy by modeling both modalities simultaneously. Another study examines how pLMs use sequence likelihood for zero-shot fitness estimation, a core ingredient of protein engineering and mutation-effect prediction. Additionally, research from Brian Hie's and Peter Kim's groups at Stanford Engineering and Stanford ChEM-H demonstrates that pLMs can predict allosteric residues from a single sequence, without structural data or multiple sequence alignments, indicating that the models learn allosteric relationships without explicit supervision. Together, these results suggest that evaluating pLMs solely on how well they capture 3D structure may overlook important functional information. The preprints are available on bioRxiv.
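
As a concrete illustration of the sequence-likelihood idea, here is a minimal sketch of zero-shot mutation scoring with an ESM-2 model from the public fair-esm package, using the widely used wild-type-marginal rule (log-probability of the mutant residue minus that of the wild-type residue at the same position). The specific model checkpoint, the toy sequence, and the `score_mutation` helper are illustrative assumptions, not the exact protocol of any paper above.

```python
# Sketch: zero-shot fitness scoring from pLM sequence likelihoods (assumptions:
# ESM-2 650M checkpoint, toy sequence, wild-type-marginal scoring rule).
import torch
import esm

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
model.eval()
batch_converter = alphabet.get_batch_converter()

wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequence, not from any paper
_, _, tokens = batch_converter([("wt", wild_type)])

with torch.no_grad():
    logits = model(tokens)["logits"]          # shape: (1, L + special tokens, vocab)
log_probs = torch.log_softmax(logits, dim=-1)[0]

def score_mutation(pos: int, wt_aa: str, mut_aa: str) -> float:
    """Wild-type-marginal score: log p(mutant aa) - log p(wild-type aa) at `pos`.
    `pos` is 1-indexed over the protein; token 0 is the BOS token, so residue
    `pos` sits at token index `pos`."""
    return (log_probs[pos, alphabet.get_idx(mut_aa)]
            - log_probs[pos, alphabet.get_idx(wt_aa)]).item()

# Higher scores indicate substitutions the model finds more plausible, a common
# zero-shot proxy for mutational fitness.
print(score_mutation(5, wild_type[4], "A"))
```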