
Researchers from Cornell have identified significant issues with OpenAI's Whisper, a speech-to-text AI tool. In testing, Whisper hallucinated violent language, false facts, and fake websites in its transcriptions. These hallucinations are more likely to occur for speakers who leave long pauses between words, such as people with speech impairments or aphasia. Whisper also conjured random names, fragments of addresses, and irrelevant websites, and even wove YouTuber lingo into transcripts. These findings, reported by Tech Xplore, raise concerns about the reliability and ethical implications of relying on AI for speech-to-text applications.
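For context on the tool under discussion, here is a minimal sketch of how audio is typically transcribed with the open-source `openai-whisper` Python package. The model size and file name below are placeholder assumptions, not details from the article; the relevant point is that the transcript comes back as plain text with no confidence flag for hallucinated content, so output has to be verified against the source audio.

```python
# Minimal sketch using the open-source openai-whisper package
# (pip install openai-whisper). "base" and "audio.wav" are
# placeholder choices for illustration only.
import whisper

model = whisper.load_model("base")      # small general-purpose model
result = model.transcribe("audio.wav")  # hypothetical input file

# The result is a dict; "text" holds the full transcript as plain
# text. Nothing marks hallucinated spans, so per the research above,
# transcripts of speech with long pauses warrant manual review.
print(result["text"])
```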
"Unlike other widely used speech-to-text tools, Whisper is more likely to hallucinate when analyzing speech from people who speak with longer pauses between their words, such as those with speech impairments, researchers found." #ethics #tech #data #AI #research https://t.co/sIADIq2mEl
"In other examples of hallucinated transcriptions, #Whisper conjured random names, fragments of addresses and irrelevant... websites. Hallucinated traces of YouTuber lingo, like 'Thanks for watching and Electric Unicorn,' also wormed into transcriptions." #ethics #AI #data #LLMs https://t.co/sIADIq2mEl
AI speech-to-text can hallucinate violent language - Tech Xplore https://t.co/hiI1PKC0Fo