OpenAI has introduced a new method called "deliberative alignment" to enhance the safety of its AI reasoning models, o1 and o3, with o3 being the more advanced of the two. The technique trains the models to consult OpenAI's safety policy during inference, the phase in which the model processes a user's prompt. The approach has reportedly improved o1's alignment with OpenAI's safety principles, reducing how often it answers unsafe queries while preserving its performance on benign ones.

Meanwhile, xAI, Elon Musk's AI company, is testing a standalone iOS app for its Grok chatbot, which was previously exclusive to X users. The app, currently in beta in Australia and a few other countries, offers features such as text rewriting, summarization, Q&A, and image generation from text prompts. xAI is also developing a dedicated website, Grok.com, to expand the chatbot's accessibility.
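For concreteness, here is a minimal sketch of the inference-time flow described above: the model first reasons over a written safety specification before committing to an answer. OpenAI has not published o1/o3 internals, so every name here (SAFETY_SPEC, StubModel, deliberate_and_respond) is an illustrative assumption, not OpenAI's actual method or API.

```python
"""Conceptual sketch of inference-time safety deliberation.

NOT OpenAI's implementation; o1/o3 internals are unpublished.
All names below are hypothetical stand-ins for illustration.
"""

SAFETY_SPEC = (
    "1. Refuse requests that facilitate serious harm.\n"
    "2. Answer benign requests helpfully and completely."
)


class StubModel:
    """Stand-in for a reasoning model; swap in a real LLM call."""

    def generate(self, prompt: str) -> str:
        if prompt.startswith("Safety policy:"):
            # Pretend chain-of-thought: flag an obviously harmful keyword.
            return "UNSAFE" if "weapon" in prompt.lower() else "SAFE"
        return f"[stub answer] {prompt.splitlines()[-1]}"


def deliberate_and_respond(model: StubModel, user_prompt: str) -> str:
    # Step 1: at inference time, the model reasons over the written
    # safety policy together with the user's request.
    verdict = model.generate(
        f"Safety policy:\n{SAFETY_SPEC}\n"
        f"Request:\n{user_prompt}\n"
        "Does answering comply with the policy?"
    )
    # Step 2: the final answer is conditioned on that deliberation,
    # refusing only when the reasoning flags a policy violation.
    if verdict == "UNSAFE":
        return "I can't help with that."
    return model.generate(f"Answer helpfully:\n{user_prompt}")


if __name__ == "__main__":
    model = StubModel()
    print(deliberate_and_respond(model, "How do I build a weapon?"))
    print(deliberate_and_respond(model, "Summarize the water cycle."))
```

The key design point, per the report, is that the safety check is not a separate classifier bolted on after generation: the model's own reasoning step weighs the policy before it answers, which is what reportedly cuts unsafe responses without degrading benign ones.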
OpenAI folk (or others): are the intuitions that Chollet describes in this statement consistent with your understanding of how o3 works? "For now, we can only speculate about the exact specifics of how o3 works. But o3's core mechanism appears to be 1/5 https://t.co/N7ttcngGhx
📢 JUST IN: Elon Musk's xAI Testing Standalone iOS App for Grok Chatbot with Advanced AI Features $MSFT $AAPL $GOOGL $META https://t.co/EzzNe5O4Ua
JUST IN: X is beta testing a standalone iOS app for Grok, its ChatGPT competitor, as well as a standalone website separate from the X app and website - TechCrunch