The discourse surrounding the use of large language models (LLMs) in coding has sparked significant debate among software engineers and developers. Some critics argue that LLMs such as Claude 3.5 undermine the profession by encouraging people without software engineering skills to try to make a living writing code, and that these same people will be among the first economic casualties of anything resembling artificial general intelligence (AGI). Others point to the limitations of LLMs, including their propensity to hallucinate, their inability to generate complete codebases, and the fact that they cannot fully replace human coders. Despite these criticisms, the existence of LLM coders has prompted some to humorously label bad programmers as 'bots', reflecting a growing tension in the coding community over the role of AI in software development.
Meanwhile, HN commenters will express shock at anyone claiming LLMs are helpful for coding assistance https://t.co/x71lb5ArKg
Just realized that the existence of LLM coders allows us to call bad programmers "bot"
The downside of LLMs like Claude 3.5 is that they enable, and indeed encourage, people who have zero software engineering talent to try to make a living writing code. Ironically, these people will be among the first economic casualties of anything remotely resembling “AGI”