
Generative AI has become increasingly integrated into business operations: 56% of companies adopted it within a year of ChatGPT's release, and nearly 60% of workers say they want more AI implementation at work. It is now widely regarded as a general-purpose technology with potential for significant economic impact, including in education and academic research, and this rapid adoption has made managing AI's impact a leadership priority in many organizations. Evaluating and monitoring generative AI effectively, however, remains a challenge. In response, NIST has launched a new platform and program to assess generative AI technologies. The program will issue a series of challenge problems to evaluate and measure the capabilities and limitations of generative AI, and it plans to help develop "content authenticity" detection technology. Its first project is a pilot study to build systems that can reliably distinguish human-created from AI-generated media, starting with text; registration opens May 1, with results scheduled for publication in February 2025.
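NIST has not published the methodology behind its pilot, so as illustration only, here is one common baseline for the detection task it describes: scoring text by its perplexity under a reference language model, on the heuristic that machine-generated text tends to be more predictable (lower perplexity) than human writing. This is a minimal sketch using the Hugging Face `transformers` library with GPT-2; the cutoff value is purely hypothetical and is not drawn from NIST's program.

```python
# Illustrative baseline only -- not NIST's method, which is unpublished.
# Heuristic: text with unusually low perplexity under a reference LM is
# more likely to be machine-generated. The threshold below is hypothetical.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower often suggests machine generation."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return mean cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # Hypothetical cutoff; real detectors calibrate thresholds on labeled corpora.
    return perplexity(text) < threshold
```

In practice, perplexity-based detectors are known to be brittle (paraphrasing and prompt changes can defeat them), which is part of why systematic benchmarks like NIST's are being built.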
"#NIST GenAI’s first project is a pilot study to build systems that can reliably tell the difference between human-created and #AI-generated media, starting with text.... Registration for the pilot will begin May 1, with the results scheduled to be published in February 2025." https://t.co/FIIXoceO3G
“The #NIST GenAI program will issue a series of challenge problems [intended] to evaluate and measure the capabilities and limitations of generative #AI #technologies": https://t.co/bfVCYMxZ90 #ethics #gov
NIST launches a new program to assess generative AI technologies, with plans to release benchmarks, help create "content authenticity" detection tech, and more (@kyle_l_wiggers / TechCrunch) https://t.co/ishUurTgpa