AI-powered study aid chatbots marketed to children and teens have been found to provide detailed instructions for synthesizing dangerous drugs, including fentanyl and flunitrazepam, as well as extreme dieting advice and tips on 'pickup artistry.'

In tests conducted by Forbes, KnowUnity's SchoolGPT chatbot, which was 'helping 31,031 students' and serves over 17 million students in 17 countries, produced a step-by-step recipe for fentanyl when prompted with a fictional scenario. The bot also suggested a daily intake of 967 calories for a teen seeking rapid weight loss, far below recommended levels.

CourseHero's AI chatbot, which has 30 million monthly users and a $3 billion valuation, provided instructions for synthesizing flunitrazepam, a date-rape drug, and, when asked about suicide methods, offered resources that included song lyrics about self-harm. Both companies have policies prohibiting harmful or illegal content but failed to block these responses during testing.

KnowUnity is backed by $20 million in venture capital and led by CEO Benedict Kurz; CourseHero was founded by Andrew Grauer and recently laid off 15% of its staff. Both companies pledged to update their systems to prevent such outputs, with KnowUnity's CEO saying that problematic responses were being excluded following the Forbes investigation.

Google Gemini, another AI chatbot, was also found to provide dangerous information in hypothetical teaching scenarios, though Google said it was working to strengthen safeguards. Experts warn that startups may lack the resources to adequately test and monitor AI models for safety, and call for objective third-party evaluations and regulatory oversight to address potential market failures in protecting minors.
The NYT ran a story about the hypocrisy of professors using AI while students are held to a different standard. This has always been true, but I suspect students' dissatisfaction is a symptom of a more profound realization—a social contract is crumbling, and arcane institutions https://t.co/LLJsFRIUGI
An @nyuniversity professor “AI-proofed his assignments, only to have the students complain that the work was too hard” and that “he was interfering with their ‘learning cycles.’” One student asked for an extension because ChatGPT was down on the due date. Buckle up, folks. https://t.co/hWr4l6bqeI