
Google has introduced Gemma Scope, an open suite of sparse autoencoders (SAEs) for the Gemma 2 model family, presented in the paper "Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2." Sparse autoencoders are an unsupervised method for learning a sparse decomposition of a neural network's latent representations into interpretable features. The release includes SAEs trained on Gemma 2 2B, 9B, and 27B, along with details of how they were trained and evaluated. The initiative aims to provide a more open and accessible foundation for understanding the inner workings of neural network models.
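To make the idea concrete, below is a minimal sketch of a sparse autoencoder in PyTorch. This is an illustration only: it uses the textbook ReLU encoder with an L1 sparsity penalty, whereas the Gemma Scope SAEs use a JumpReLU activation, and the dimensions and coefficient here (d_model, d_features, l1_coeff) are hypothetical placeholders rather than Gemma Scope's actual configuration.

```python
# Minimal sparse autoencoder sketch (illustrative, not the Gemma Scope recipe).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        # Encoder maps a model activation to a wide, sparse feature vector.
        self.encoder = nn.Linear(d_model, d_features)
        # Decoder reconstructs the activation from the active features.
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))   # sparse feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(x, reconstruction, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparsity.
    mse = (reconstruction - x).pow(2).mean()
    sparsity = features.abs().sum(dim=-1).mean()
    return mse + l1_coeff * sparsity

# Example: decompose a batch of hypothetical residual-stream activations.
sae = SparseAutoencoder(d_model=2304, d_features=16384)
x = torch.randn(8, 2304)
recon, feats = sae(x)
loss = sae_loss(x, recon, feats)
```

The sparsity penalty pushes each activation to be explained by only a handful of features, which is what makes the learned decomposition a candidate for human interpretation.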




Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2
discussion: https://t.co/hLAdvC1FJC
abs: https://t.co/ylAbcKYvPb
weights and tutorial: https://t.co/ByIX7tn6TJ
demo: https://t.co/gsgLNyyQBM