Meta has announced the release of nine new open-source AI research artifacts from FAIR, timed to coincide with the #NeurIPS2024 conference. The collection spans the lab's work on developing agents, improving robustness and safety, and introducing new architectures. Alongside Meta's release, other NeurIPS-timed contributions stand out. A collaborative team from Cisco Meraki, Cohere For AI Community, Indiana University Bloomington, Imperial College London, and Georgia Institute of Technology has released 'Maya', an 8-billion-parameter open-source multilingual multimodal model designed to generate culturally aware content in eight languages using toxicity-filtered datasets, with the aim of improving inclusivity in AI applications. Separately, researchers from Carnegie Mellon University, KAIST, and the University of Washington have introduced 'AGORA BENCH', a benchmark for systematically evaluating language models as synthetic data generators.
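To make the AGORA BENCH framing concrete, the snippet below is a minimal, hypothetical sketch of the data-generation step such a benchmark evaluates: prompting a language model to turn a few seed instructions into synthetic training pairs. The model name, prompt format, and sampling settings are illustrative assumptions, not the benchmark's actual protocol.

```python
# Illustrative sketch of using a language model as a synthetic data generator.
# Model choice and prompt format are placeholders, not AGORA BENCH's setup.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

seed_instructions = [
    "Explain the difference between a list and a tuple in Python.",
    "Summarize why data deduplication matters for pretraining corpora.",
]

synthetic_pairs = []
for instruction in seed_instructions:
    prompt = f"Instruction: {instruction}\nResponse:"
    out = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
    # The pipeline returns the prompt plus the continuation; keep only the continuation.
    response = out[0]["generated_text"][len(prompt):].strip()
    synthetic_pairs.append({"instruction": instruction, "response": response})

# In the benchmark's framing, a generator's quality is then judged by how much
# a student model improves when trained on data produced this way.
print(synthetic_pairs[0])
```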
Maya advances vision-language models by enabling safe, culturally-aware content generation across 8 languages through toxicity-filtered datasets and multilingual model architecture. ----- 🌍 Original Problem: Vision Language Models excel mainly in English, creating… https://t.co/KWxn5D3ckt
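As a rough illustration of the dataset side of this approach, the sketch below scores candidate captions with an off-the-shelf toxicity classifier and drops high-scoring ones before they would enter a training set. The classifier (unitary/toxic-bert) and the 0.5 threshold are assumptions chosen for illustration, not Maya's actual filtering pipeline.

```python
# Hedged sketch of toxicity-filtering text before it enters a training corpus.
# "unitary/toxic-bert" and the 0.5 cutoff are illustrative choices only.
from transformers import pipeline

toxicity_clf = pipeline("text-classification", model="unitary/toxic-bert")

candidate_captions = [
    "A family sharing a meal during a festival.",
    "Children flying kites on a windy afternoon.",
]

def is_toxic(text: str, threshold: float = 0.5) -> bool:
    result = toxicity_clf(text)[0]
    # The classifier returns a label (e.g. "toxic") with a confidence score.
    return result["label"].lower() == "toxic" and result["score"] >= threshold

filtered_captions = [c for c in candidate_captions if not is_toxic(c)]
print(f"Kept {len(filtered_captions)} of {len(candidate_captions)} captions")
```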
A collection of new AI research released out of FAIR today. Check out the models, papers, code and datasets: https://t.co/kOKwASJZnP https://t.co/rcwmzgNgQO
Wrapping up the year and coinciding with #NeurIPS2024, today at Meta FAIR we’re releasing a collection of nine new open source AI research artifacts across our work in developing agents, robustness & safety and new architectures. More in the video from @jpineau1. All of this… https://t.co/rNvZ5dmdYp