🤖🇺🇸 Patch now: 'Easy-to-exploit' RCE in open source Ollama! 💥 Even with a fix available, over 1,000 instances remain vulnerable to remote code execution. Don't risk your AI business - update to version 0.1.34 ASAP! https://t.co/wMGG7eC59q
Make sure you've updated your Ollama instances to at least v0.1.34 to close off a security hole in the API server. This is particularly bad for those exposing that server to the world – there are 1,000+ vulnerable instances still doing that, we're told https://t.co/a6sDHBTgyh
We found a Remote Code Execution (RCE) vulnerability in @Ollama - one of the most popular AI inference projects on GitHub. Here is everything you need to know about #Probllama (CVE-2024-37032) 🧵👇 https://t.co/DcYmrwisPC

A critical remote code execution (RCE) vulnerability (CVE-2024-37032), dubbed Probllama, has been discovered in Ollama, a popular open-source tool for running large language models locally and one of the most widely used AI inference projects on GitHub. The flaw, identified by security researchers at Wiz, could allow attackers to execute arbitrary code on affected systems, and it is especially dangerous for deployments that expose the Ollama API server to the internet. Although a fix shipped in version 0.1.34, more than 1,000 internet-facing instances reportedly remain vulnerable. Users are urged to update to version 0.1.34 or later and to avoid exposing the API server publicly.
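For administrators who want a quick sanity check, the minimal Python sketch below queries a running Ollama server's version endpoint (GET /api/version, assumed to be available on recent releases) and compares the reported version against 0.1.34, the first patched release. The default base URL of http://127.0.0.1:11434 is Ollama's usual local binding; adjust it for your deployment. This is an illustrative check only, not an exploit test, and it assumes plain x.y.z version strings.

import json
import urllib.request

# First release containing the fix for CVE-2024-37032 (Probllama).
PATCHED = (0, 1, 34)

def is_patched(base_url="http://127.0.0.1:11434"):
    # Ask the server for its version string, e.g. {"version": "0.1.34"}.
    with urllib.request.urlopen(f"{base_url}/api/version", timeout=5) as resp:
        version = json.load(resp)["version"]
    # Compare numerically rather than lexically ("0.1.9" < "0.1.34").
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= PATCHED

if __name__ == "__main__":
    print("Patched against CVE-2024-37032:", is_patched())

Regardless of the result, instances reachable from the internet should be placed behind authentication or a firewall, since Ollama's API server does not require credentials by default.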
