Google has launched Gemma 2, a new open model family available in 9-billion and 27-billion-parameter sizes. Gemma 2 is optimized for fast inference and supports an 8K-token context window. It is released to researchers and developers globally, delivering higher performance and efficiency than the first-generation Gemma models.
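For anyone who wants to try it immediately, below is a minimal sketch of loading the 9B instruction-tuned checkpoint with Hugging Face Transformers. The checkpoint name, dtype, and generation settings are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: loading Gemma 2 9B with Hugging Face Transformers.
# The model ID and settings below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed instruction-tuned checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 9B weights manageable on a single GPU
    device_map="auto",
)

prompt = "Explain grouped-query attention in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```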
Google releases Gemma 2: https://t.co/NUWMgjirNP
Gemma 2 is out with 9b and 27b! A few things I really liked in the tech report, on pretraining:
- GQA (finally lol)
- interleaving global attn w/ local (but 4k vs. 8k? it feels like it should support longer and was kneecapped...)
- 16x expansion ratio (very wide!)
(1/n) https://t.co/3NEjzBoAg3
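The interleaving of global and local attention called out in that thread is easiest to picture as alternating attention masks: global layers use a full causal mask over the 8K context, while local layers restrict each query to a 4K sliding window. Below is a minimal NumPy sketch of that masking pattern; the alternation order and layer-index convention are illustrative assumptions, not details from the tech report.

```python
# Sketch of alternating local/global causal attention masks.
# Window size, context length, and the "odd layers are local" rule are assumptions.
import numpy as np

CONTEXT_LEN = 8192      # full context handled by global-attention layers
LOCAL_WINDOW = 4096     # sliding-window size for local-attention layers

def attention_mask(layer_idx: int, seq_len: int, window: int = LOCAL_WINDOW) -> np.ndarray:
    """Boolean (seq_len, seq_len) mask: True where a query may attend to a key."""
    q = np.arange(seq_len)[:, None]
    k = np.arange(seq_len)[None, :]
    causal = k <= q                       # every layer is causal
    if layer_idx % 2 == 0:
        return causal                     # global layer: attend to all previous tokens
    return causal & (q - k < window)      # local layer: only the trailing window

# Scaled-down illustration (seq_len=8, window=4) of the same pattern:
print(attention_mask(0, 8).astype(int))             # global: full lower triangle
print(attention_mask(1, 8, window=4).astype(int))   # local: banded lower triangle
```

In the full-size setting, a token near the end of an 8K sequence sees all prior tokens in global layers but only the trailing 4K tokens in local layers, which is why the thread questions the 4K-vs-8K choice.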
Gemma 2 released in 9B and 27B variants with faster inference and improved capabilities. I need to try this out and check whether it actually tops Llama 3 or not https://t.co/QRJC0j8OZk https://t.co/QryiKVz7uX