A recent security assessment by Guardio Labs found that Lovable AI scored just 1.8 out of 10 on its VibeScamming security test, making it the platform most susceptible to abuse by cybercriminals, particularly for phishing scams. Testers were able to get the platform to generate fake Microsoft login pages that harvest credentials, allowing phishing sites to be stood up quickly and with little effort. In contrast, Anthropic's Claude and OpenAI's ChatGPT scored 4.3 and 8 out of 10, respectively, indicating stronger resistance to such misuse. The findings underscore growing concern about AI tools being used to facilitate scams and have prompted startups such as Outtake to develop protective measures against these exploits.
👀 According to @GuardioSecurity, @lovable_dev seems most vulnerable to VibeScamming, scoring 1.8/10 on security tests, allowing cybercrooks to quickly create phishing sites. In comparison, @AnthropicAI’s Claude scored 4.3, and #ChatGPT 8/10 - @thehackersnews article 👇 https://t.co/RHABM3w7jV
Lovable AI Found Most Vulnerable to VibeScamming — Enabling Anyone to Build Live Scam Pages https://t.co/erT7yygrU0