Recent research has found that ChatGPT exhibits cognitive biases similar to those of humans, challenging perceptions of AI's decision-making capabilities. The study ran the model through 18 classic decision-making scenarios and found that it mirrored human biases such as overconfidence and risk aversion in roughly half of the tests, raising concerns about the reliability of AI in critical decision-making contexts. The findings suggest that while AI excels at mathematical and logical tasks, it retains human-like blind spots in subjective judgments, prompting calls for increased oversight and refinement of AI applications.
MatthewBerman writes: "We knew very little about how large language models (LLMs) actually work... until now. Anthropic just released a stunning research paper detailing some of the ways AI 'thinks' — and they are completely different from what we expected." Here are some of their most surprising findings: 🧵 https://t.co/PS5mxyI3fZ
AI Thinks Like Us: Flaws, Biases, and All, Study Finds https://t.co/AxqEjltvis https://t.co/0QqtmoT0Rw
This is one of the most interesting research papers I've read this year. If you're curious about how LLMs work and think, read this thread: https://t.co/snqu4DNe1m