A recent study by the nonprofit research organization METR found that artificial intelligence (AI) coding tools can reduce the productivity of experienced software developers by roughly 19%. The study tracked 16 veteran developers as they addressed 246 real issues in large GitHub projects containing over one million lines of code. The developers worked on codebases they already knew well, and AI tools such as Cursor and Claude were allowed on half of the tasks. Contrary to the popular belief that AI accelerates coding, the findings indicate that AI tools can introduce delays: developers spent extra time providing context, reviewing AI-generated code, and correcting its mistakes, and doubts about the accuracy of AI output added further overhead. The research suggests that while AI promises faster coding, it may instead hinder seasoned programmers working in complex, familiar codebases. The study has sparked discussion about the role of AI in software development and, because AI-generated code can introduce more bugs, about the growing importance of testing and quality assurance in the AI era.
AI isn’t the end of coding, but of bad computer science training https://t.co/eyikHJXyjS
Thanks to AI, I’m feeling more productive than ever coding as a PM. But for some reason, engineers won’t accept the 3,000-line PR that Claude and I vibe coded. Anyone know why?
👍 A well-put summary of AI's impact on testing. First, the opportunities AI brings to testing: - The value of testing actually becomes more visible in the AI era. When development relies on AI to write code, output appears higher, but there are in fact more bugs, so testing becomes even more important. - "Shift-left" testing takes concrete form. https://t.co/MORS7Lq58A