
Mistral AI has released a new model, mistral-large, which has been praised for how well it extracts details from inputs using its new function calling abilities, a capability highlighted in tests against the LangChainAI LangSmith extraction dataset. Mistral AI also introduced three new Large Language Models (LLMs) and a function calling API, both now integrated into LangChainAI's JavaScript/TypeScript (JS/TS) packages. The integration includes support for structured output, which is seen as a significant step toward using these models in production environments. Separately, LiteLLM v1.27.15 adds litellm.supports_function_calling, a utility that checks whether any of 100+ supported LLMs can do function or tool calling, along with support for Mistral AI's tool calling.
☎️ LiteLLM v1.27.15 - Check if an LLM supports Function/Tool Calling with litellm.supports_function_calling, works across 100+ LLMs https://t.co/hozLSyioCc 👉 Support for Mistral AI Tool Calling Live now https://t.co/cET9GGxiXm 🛠️ Major Fix for using Async Success Callbacks on… https://t.co/niJvQECJWa
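For readers who want to try the capability check mentioned in the release note above, here is a minimal sketch using litellm.supports_function_calling. The specific model names passed in are illustrative assumptions, and models missing from LiteLLM's capability map may not resolve.

```python
import litellm

# Minimal sketch: ask LiteLLM's capability map whether a model supports
# function/tool calling. The model names below are illustrative assumptions.
for model in ["mistral/mistral-large-latest", "gpt-3.5-turbo"]:
    supported = litellm.supports_function_calling(model=model)
    print(f"{model}: function calling supported = {supported}")
```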
🏛️ MistralAI Function Calling @MistralAI's new models and function calling API are now available in @LangChainAI JS/TS 🦜🔗! In addition to Mistral's new state of the art LLM, they also released support for structured output, a total game changer in terms of productionizing… https://t.co/iLB8tFY5uK
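The tweet above refers to the LangChain JS/TS integration; the sketch below shows the equivalent structured-output pattern in Python, for consistency with the earlier snippet. The Person schema, prompt text, and model name are assumptions made for illustration, and the example assumes a MISTRAL_API_KEY is available in the environment.

```python
from pydantic import BaseModel, Field
from langchain_mistralai import ChatMistralAI

# Hypothetical schema the model should populate via Mistral's function calling.
class Person(BaseModel):
    """Details about a person mentioned in the input text."""
    name: str = Field(description="The person's full name")
    role: str = Field(description="The person's job or role, if stated")

# Assumes MISTRAL_API_KEY is set in the environment; the model name is an assumption.
llm = ChatMistralAI(model="mistral-large-latest")

# with_structured_output uses the function-calling API under the hood,
# so invoke() returns a Person instance instead of raw text.
structured_llm = llm.with_structured_output(Person)
print(structured_llm.invoke("Ada Lovelace worked as a mathematician."))
```

The same pattern is available in the JS/TS package via withStructuredOutput on the ChatMistralAI class.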
🤖 Issue #342 is live! This week: Mistral AI’s three new LLMs, LLM-Powered API Agent for Task Execution, How to Unit Test Machine Learning Code & Models, a paper on Unified Training of Universal Time Series Forecasting Transformers, and many more! https://t.co/LGdjOgJLBA


