DeepSeek continuous self-improvement: DeepSeek and China’s Tsinghua University say they have found a way that could make AI models more intelligent and efficient. In collaboration with Tsinghua, DeepSeek has developed a technique called “Self-Principled Critique Tuning” (SPCT). https://t.co/Ow19AqQEYu
In Depth: Unlike in the U.S., where tech giants and venture capitalists have dominated funding of large language models like ChatGPT, DeepSeek and many other Chinese AI firms have been financed by hedge funds seeking to profit from the technology. https://t.co/XaSoxi0hOD
DeepSeek unveils new technique for smarter, scalable AI reward models https://t.co/LEAQKlFONU https://t.co/wFbG2r8BjE
Chinese AI firm DeepSeek has partnered with Tsinghua University to enhance the capabilities of large language models (LLMs). The collaboration aims to improve reasoning in AI through a novel approach, Self-Principled Critique Tuning (SPCT), which combines generative reward modeling with self-generated critiques to produce faster, more intelligent responses. DeepSeek's open-source models are positioned as a disruptive force in the AI sector, offering smaller, more efficient, and cost-effective alternatives to traditional models. Unlike in the U.S., where funding for LLMs is largely dominated by tech giants and venture capitalists, DeepSeek and other Chinese AI firms have attracted investment from hedge funds seeking to capitalize on the technology.
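To make the idea concrete, here is a minimal sketch of what SPCT-style inference could look like. This is an illustration only, not DeepSeek's implementation: the `llm` stub, the scoring format, and the voting scheme are all assumptions. The key pattern from the description above is that the reward model first writes its own evaluation principles for a query, then critiques and scores candidate responses against those principles, and repeats the process several times so that results can be aggregated at inference time.

```python
import re
from collections import Counter

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a generative reward model call.
    A real SPCT setup would query a trained LLM here."""
    if prompt.startswith("PRINCIPLES"):
        return "1. Factual accuracy\n2. Clarity of reasoning"
    # Pretend the model critiques both candidates and emits scores.
    return "Critique: A cites sources, B does not. A: 9/10, B: 6/10"

def spct_best_response(query: str, candidates: list[str], samples: int = 4) -> str:
    """Sketch of SPCT-style inference: (1) the reward model generates its
    own evaluation principles for this query, (2) it writes a critique that
    scores each candidate against those principles, and (3) repeated
    sampling with majority voting provides inference-time scaling."""
    votes = Counter()
    for _ in range(samples):
        principles = llm(f"PRINCIPLES for judging answers to: {query}")
        critique = llm(
            f"Apply these principles:\n{principles}\n"
            f"Critique and score:\n"
            + "\n".join(f"{chr(65 + i)}: {c}" for i, c in enumerate(candidates))
        )
        # Extract per-candidate scores like "A: 9/10" from the critique text.
        scores = {m[0]: int(m[1]) for m in re.findall(r"([A-Z]): (\d+)/10", critique)}
        if scores:
            votes[max(scores, key=scores.get)] += 1
    winner = votes.most_common(1)[0][0]
    return candidates[ord(winner) - 65]
```

Sampling the principle-plus-critique step multiple times and voting is what lets a fixed-size reward model trade extra compute for better judgments, which fits the "smaller, more efficient" positioning described above.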