Recent studies highlight rapid progress in protein language modeling. A notable development is the Long-Context Protein Language Model (LC-PLM), built on BiMamba-S, a bidirectional structured state-space architecture. It targets efficient modeling of long-range dependencies in protein sequences, sidestepping the quadratic attention cost that limits the context length of standard transformers. Separately, the DeepTracer-LowResEnhance framework combines deep learning with AlphaFold to improve protein structure prediction from low-resolution cryo-electron microscopy maps. Other contributions include megaDNA, a multiscale transformer for processing and generating long genomic sequences, demonstrated on bacteriophage genomes; a study on predicting protein folding stability with generative models; and BALM, a framework that predicts protein-ligand binding affinities by fine-tuning pretrained protein and ligand language models. Together, these works reflect a growing trend of applying advanced sequence models to complex biological problems.
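Of these, megaDNA's multiscale design is the easiest to illustrate compactly. The sketch below is a hypothetical rendering of the general idea (fine nucleotide-level attention combined with a coarse patch-level pass), not the paper's actual architecture; the patch size, depths, and dimensions are arbitrary choices.

```python
# Hypothetical sketch of a multiscale transformer in the spirit of megaDNA:
# a fine-grained encoder over nucleotide tokens is combined with a coarse
# encoder over pooled "patch" tokens, mixing local detail with long-range
# context at lower cost. Not the paper's exact architecture.
import torch
import torch.nn as nn

class MultiscaleEncoder(nn.Module):
    def __init__(self, vocab=5, dim=64, patch=16):
        super().__init__()
        self.patch = patch
        self.embed = nn.Embedding(vocab, dim)            # A, C, G, T, N
        layer = lambda: nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, batch_first=True)
        self.fine = nn.TransformerEncoder(layer(), num_layers=2)
        self.coarse = nn.TransformerEncoder(layer(), num_layers=2)

    def forward(self, tokens):                           # (B, L), L % patch == 0
        x = self.fine(self.embed(tokens))                # nucleotide-level mixing
        B, L, D = x.shape
        pooled = x.view(B, L // self.patch, self.patch, D).mean(2)
        ctx = self.coarse(pooled)                        # patch-level mixing
        # broadcast coarse context back to every nucleotide in its patch
        ctx = ctx.repeat_interleave(self.patch, dim=1)
        return x + ctx

seq = torch.randint(0, 5, (2, 256))                      # toy genomic batch
print(MultiscaleEncoder()(seq).shape)                    # torch.Size([2, 256, 64])
```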
Learning Binding Affinities via Fine-tuning of Protein and Ligand Language Models • This paper introduces BALM, a deep learning framework that predicts protein-ligand binding affinity by fine-tuning pretrained protein and ligand language models, achieving high accuracy with… https://t.co/vCQirzDrKc
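A minimal sketch of this fine-tuning setup, assuming mean-pooled embeddings from off-the-shelf checkpoints feeding a small regression head; the checkpoint names, the concatenation head, and the MSE objective are illustrative assumptions, not details taken from the paper:

```python
# BALM-style affinity prediction, sketched (not the authors' exact recipe):
# pooled embeddings from a pretrained protein LM and a pretrained SMILES LM
# are concatenated and passed through an MLP head; the full stack can then
# be fine-tuned with an MSE loss against measured affinities.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Illustrative checkpoint choices, not necessarily those used in the paper.
prot_tok = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
prot_lm = AutoModel.from_pretrained("facebook/esm2_t6_8M_UR50D")
chem_tok = AutoTokenizer.from_pretrained("seyonec/ChemBERTa-zinc-base-v1")
chem_lm = AutoModel.from_pretrained("seyonec/ChemBERTa-zinc-base-v1")

head = nn.Sequential(  # regression head over concatenated embeddings
    nn.Linear(prot_lm.config.hidden_size + chem_lm.config.hidden_size, 256),
    nn.ReLU(),
    nn.Linear(256, 1),
)

def predict_affinity(sequence: str, smiles: str) -> torch.Tensor:
    p = prot_lm(**prot_tok(sequence, return_tensors="pt")).last_hidden_state.mean(1)
    c = chem_lm(**chem_tok(smiles, return_tensors="pt")).last_hidden_state.mean(1)
    return head(torch.cat([p, c], dim=-1)).squeeze(-1)   # e.g. a predicted pKd

# Fine-tuning would minimize MSE between predictions and measured affinities.
print(predict_affinity("MKTAYIAKQR", "CCO"))
```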
A long-context protein language model with a bidirectional Mamba architecture. @yingheng_wang @ZichenWangPhD, Gil Sadeh, Luca Zancato, Alessandro Achille, @karypis @DMfun https://t.co/HsZh8kvmlx
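The bidirectional idea behind BiMamba-S can be sketched with a stand-in sequence mixer: the same causal module is run over the sequence and its reversal, and the two passes are combined. The toy diagonal SSM below replaces Mamba's selective scan purely for illustration, and weight sharing across the two directions is an assumption inferred from the model's parameter-efficient design.

```python
# Hypothetical sketch of a bidirectional state-space block: one weight-shared
# causal mixer applied forward and backward, outputs averaged. A simple
# diagonal linear SSM stands in for Mamba's selective scan.
import torch
import torch.nn as nn

class ToyCausalSSM(nn.Module):
    """Diagonal linear state-space scan: h_t = a * h_{t-1} + b * x_t."""
    def __init__(self, dim: int):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(dim))   # per-channel decay
        self.b = nn.Parameter(torch.ones(dim))
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # (B, L, D)
        a = torch.sigmoid(self.log_a)                 # keep |a| < 1 for stability
        h = torch.zeros_like(x[:, 0])
        outs = []
        for t in range(x.size(1)):                    # sequential O(L) scan
            h = a * h + self.b * x[:, t]
            outs.append(h)
        return self.out(torch.stack(outs, dim=1))

class BiSSMBlock(nn.Module):
    """Run the same (weight-shared) causal mixer forward and backward."""
    def __init__(self, dim: int):
        super().__init__()
        self.mixer = ToyCausalSSM(dim)                # shared across directions
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fwd = self.mixer(x)
        bwd = self.mixer(x.flip(1)).flip(1)           # reverse, mix, re-reverse
        return self.norm(x + 0.5 * (fwd + bwd))       # residual + average

# Usage: embed amino-acid tokens, then stack such blocks.
tokens = torch.randint(0, 25, (2, 512))               # 25-symbol protein vocab (assumed)
emb = nn.Embedding(25, 64)(tokens)
print(BiSSMBlock(64)(emb).shape)                      # torch.Size([2, 512, 64])
```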