Snorkel AI, a data labeling and AI evaluation startup, has raised $100 million in a Series D funding round led by Addition, bringing its valuation to $1.3 billion. Investors including StepStone also participated in the round. The company is focused on building specialized datasets and evaluation systems that help enterprises test and improve their AI models, particularly in high-stakes domains such as healthcare and finance. Snorkel AI's approach pairs subject matter experts with a proprietary method called "programmatic labeling" to label large volumes of data efficiently. The company has hired tens of thousands of skilled contractors, including STEM professors and lawyers, to generate datasets for AI developers. The funding comes amid increased competition in the AI evaluation space, with rivals such as Scale AI, Turing, and Invisible Technologies offering similar services. Snorkel AI's business has rebounded since a slowdown following the launch of ChatGPT, and it is now expanding its focus on evaluation.

LMArena, an open community platform for evaluating AI models, has also secured $100 million in seed funding led by a16z and UC Investments, with participation from Felicis, Lightspeed, and others, at a $600 million valuation. The rebuilt LMArena platform, launching next week with a mobile-first design, aims to provide rigorous, transparent, and human-centered AI evaluation. LMArena has conducted over 400 model evaluations and received more than 3 million votes to date.
The future of AI evaluation: real-world feedback, from real users. @lmarena_ai makes that possible: models tested side by side, in public, and voted on by the people who use them. Hear how it started — and why human preference is the foundation of reliable AI in the full https://t.co/qe3abcEXJp
1/ Humanity doesn't need more AI benchmarks. We need real-time, real-world, continuous testing of AI systems. I sat down with @istoica05 @infwinston @ml_angelopoulos to unpack what @lmarena_ai is building and why it's critical for AI reliability https://t.co/G1I3Kssd6U
1/ To make AI truly reliable, as an industry we need to move on from static exams to continuous, real-time testing in the wild. I sat down with @istoica05 @infwinston @ml_angelopoulos to unpack what they are building and why this is such a critical moment for AI: https://t.co/GMEQSgR7YU