Experts in artificial intelligence are debating the concept and feasibility of Artificial General Intelligence (AGI). Some argue that AGI is more complex and less well understood than rocket science: rocketry was mastered over decades of iteration, culminating in the first moon landing in 1969, whereas AGI has no clear roadmap and poses unprecedented risks. On this view, there may be only one chance to get AGI right, and the relevant science is in a lamentable state. Others counter that human intelligence is itself specialized rather than general, which calls the appropriateness of the term AGI into question. It is further argued that once AGI is achieved, it could immediately and dramatically surpass human capabilities, potentially rendering human jobs pointless. There are also warnings about keeping simulations aligned with reality during AGI development, since "reality wins when simulations disagree." Finally, the term AGI itself is said to have sown confusion in the field, further complicating discussions of its development and implications.
AGI is the hypothetical point where machines not only outperform humans in every task but can also question their own existence, realize that human jobs are pointless, and still decide to keep us around, for entertainment purposes.
This misunderstanding of AGI is the real problem. Given that humans have made progress in every imaginable field, we can say that the ability to specialize in any subject is itself generality. Isolating agents from their cognitive tools is also a misunderstanding of intelligence. https://t.co/0yuQBPGhoD
Human intelligence is highly specialized. Using the term AGI to designate human-level intelligence is nonsense. https://t.co/ssV9OEnoCP