
Executives at AI companies, including OpenAI, are reluctant to disclose how their models are trained, raising concerns about potential misuse. Researchers warn that attackers could manipulate generative AI tools by poisoning their training data. The lack of transparency extends beyond OpenAI to other leading large language model (LLM) providers, raising privacy and security issues. Meanwhile, the Chinese government is intensifying its efforts to regulate generative AI.
As Generative AI Takes Off, Researchers Warn of Data Poisoning https://t.co/2rXBYIebYP
Chinese government escalates its own push to police generative AI - https://t.co/iKWOVA9Rmd
It's not just OpenAI that isn't public about what data its models are trained on. That's true of virtually every leading LLM, including some of the major open source ones. https://t.co/sSSLj9P2JV