The latest release from Allen AI, OLMo 1.7, marks a significant step forward for the open language model family. Improvements to data quality, training procedure, and model architecture combine for a notable performance leap: the OLMo 1.7-7B model gains 24 points on MMLU, outperforming Llama 2-7B and approaching Llama 2-13B, and it surpasses Llama 2-13B on GSM8K. The release also doubles the context length to 4096 tokens and trains on the improved Dolma 1.7 dataset.
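For readers who want to try the release, here is a minimal sketch of loading and sampling from OLMo 1.7-7B with Hugging Face transformers. The Hub identifier `allenai/OLMo-1.7-7B` and the generation settings are assumptions for illustration, not details from the announcement; older transformers versions may also need the `hf_olmo` package and `trust_remote_code=True`.

```python
# Minimal sketch: loading OLMo 1.7-7B via Hugging Face transformers.
# Assumes the checkpoint is published as "allenai/OLMo-1.7-7B" on the Hub
# and that the installed transformers version supports the OLMo architecture.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "allenai/OLMo-1.7-7B"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# OLMo 1.7 doubles the context window to 4096 tokens,
# so prompts up to that length should fit.
prompt = "Language models are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```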
Great to see Allen AI iterating here. OLMo 1.7 is a solid step up, and with fully open data! https://t.co/UAbFNcICnA
OLMo 1.7 - 7B model is out! Scores 52 on MMLU, outperforming Llama 2-7B and approaching Llama 2-13B. Excels on GSM8K, surpassing Llama 2-13B. Longer context length of 4096 tokens. Improved data quality with new Dolma 1.7 dataset release. Read more https://t.co/T64h0aC4jD
Announcing our latest addition to the OLMo family, OLMo 1.7! Our team's efforts to improve data quality, training procedures and model architecture have led to a leap in performance. See how OLMo 1.7 stacks up against its peers and peek into the technical details on the blog: … https://t.co/T6DMfiGsgg