
A recent analysis has raised questions about ChatGPT's proficiency in coding. In a study of GPT-3.5-based ChatGPT's ability to solve 728 coding problems, reported by Michelle Hampson in IEEE Spectrum, the AI performed fairly well on problems that existed before 2021 but struggled with newer ones. Its ability to generate functional code for "hard" problems dropped from 40% on pre-2021 problems to 0.66% on later ones. The researchers suggest this decline stems from the AI's reliance on its training data, which predominantly features problems published before 2021, arguing that ChatGPT lacks a human's critical thinking skills and can only address problems it has previously encountered.
