
AporiaAI has developed a new model, Aporia Guardrails, that catches hallucinations produced by large language models and explains why they're wrong. Hallucinations remain a significant problem: off-the-shelf GPT-4 hallucinates roughly 5-10% of the time, and open-source LLMs perform even worse. To address this, the company built an automated fact-checking pipeline that catches 99% of errors before they reach users. Aporia Guardrails has outperformed NeMo, GPT-4o, and GPT-3.5 in both AI hallucination detection and latency.
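
The sketch below illustrates the general guardrails pattern the article describes: every model response is screened by a hallucination detector before it reaches the user. This is a minimal, hypothetical example in Python; `call_llm` and `detect_hallucination` are placeholder names, not Aporia's actual API.

```python
# Minimal sketch of a guardrails-style pipeline: screen each LLM answer
# with a hallucination detector before returning it to the user.
# `call_llm` and `detect_hallucination` are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class GuardrailResult:
    answer: str
    blocked: bool
    reason: Optional[str] = None


def call_llm(prompt: str) -> str:
    """Placeholder for the underlying LLM call (e.g., a chat-completion request)."""
    raise NotImplementedError


def detect_hallucination(prompt: str, answer: str) -> Tuple[bool, Optional[str]]:
    """Placeholder detector: returns (is_hallucination, explanation)."""
    raise NotImplementedError


def guarded_completion(prompt: str) -> GuardrailResult:
    answer = call_llm(prompt)
    is_hallucination, explanation = detect_hallucination(prompt, answer)
    if is_hallucination:
        # Block the suspect response instead of surfacing it to the user,
        # and keep the detector's explanation for logging or review.
        return GuardrailResult(
            answer="I'm not confident in that answer; please verify independently.",
            blocked=True,
            reason=explanation,
        )
    return GuardrailResult(answer=answer, blocked=False)
```

Because the check sits in the response path, detector latency matters as much as accuracy, which is why the article compares the systems on both.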



"Other possible avenues include utilizing conversational AI (e.g., ChatGPT) for intelligent assistant functions in AR & VR headsets." https://t.co/22WgirPTrI
Aporia Guardrails Outperforms NeMo, GPT-4o, and GPT 3.5 in AI Hallucination Detection and Latency https://t.co/u7Dqiwj4RJ @AporiaAI #datanami #TCIwire
🤫 Keep your use of ChatGPT a secret from your educator with HIX Bypass! https://t.co/kfXbYk5vlS