OpenThinker-32B, a new open-data reasoning model, has been introduced as a leading performer on the MATH500 and GPQA Diamond benchmarks, outperforming all other 32B models. Developed by the Open Thoughts team, it is fine-tuned from Qwen2.5-32B-Instruct on the OpenThoughts-114k dataset. The model is fully open source: the model weights, dataset, data-generation code, evaluation code, and training code are all publicly available. Users can run the model locally by pasting its GGUF model link into Jan Hub. Separately, LocalAI has announced several new models, including nvidia_aceinstruct-1.5b and nvidia_aceinstruct-7b, both of which bring improved capabilities on tasks such as coding and mathematics.
you can try out OpenThinker, the best *uncensored* 😉 open-source 32B reasoning model, on our playground now! https://t.co/A6UqsUo6RV
AI grounding ensures that your models connect their data to your intended real-world context. What are some AI grounding techniques to get you there? #datascience #AI #artificialintelligence #opensource #ODSC https://t.co/BuMckXzmDT