MIA @ Broad -- Hybrid protein language models for fitness prediction and design


The ability to accurately model the fitness landscape of protein sequences is critical to a wide range of applications, from quantifying the effects of human variants on disease likelihood, to predicting immune-escape mutations in viruses and designing novel biotherapeutic proteins. Deep generative models of protein sequences trained on family-specific sets of homologous sequences have so far been the most successful approaches to these tasks. The performance of these methods is, however, contingent on the availability of sufficiently deep and diverse alignments for reliable training. Their potential scope is thus limited by the fact that many protein families are hard, if not impossible, to align. Large protein language models trained on non-aligned sequences across protein families are not subject to these limitations and have achieved increasingly high predictive performance – but have not yet fully bridged the gap with their alignment-based counterparts. We introduce and discuss various approaches for hybrid methods between family-specific and family-agnostic models that seek to build on the relative strengths of each approach.
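One simple family of hybrid methods interpolates between the fitness scores of the two model classes. The sketch below is illustrative only: the function names and the alignment-depth weighting scheme are assumptions for exposition, not the specific method presented in the talk.

```python
# Hypothetical sketch of a hybrid fitness score. A family-specific
# (alignment-based) model and a family-agnostic protein language model
# each assign a log-likelihood to a mutated sequence; we interpolate
# between them with a weight that grows with alignment depth, since
# deep, diverse alignments make the family-specific model more reliable.

def hybrid_fitness_score(
    plm_log_likelihood: float,
    family_log_likelihood: float,
    alignment_depth: int,
    depth_scale: float = 100.0,  # illustrative scale; a tunable assumption
) -> float:
    """Weighted combination of two per-sequence log-likelihood scores."""
    w_family = alignment_depth / (alignment_depth + depth_scale)
    return w_family * family_log_likelihood + (1.0 - w_family) * plm_log_likelihood


# With no alignment available, the score falls back to the language model alone.
assert hybrid_fitness_score(-2.0, -1.0, alignment_depth=0) == -2.0
# At depth equal to depth_scale, the two models are weighted equally.
assert hybrid_fitness_score(-2.0, -1.0, alignment_depth=100) == -1.5
```

For hard-to-align families the alignment depth is effectively zero, so the score degrades gracefully to the family-agnostic model rather than failing outright.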

Feb 14, 2024
Cambridge, MA, USA
Pascal Notin
Scientific Lead

Research in AI for Protein Design