A recent study from Stanford University's Hoover Institution analyzed more than 180,000 user comparisons across 24 major AI language models and found that these models exhibit a left-leaning political bias in their responses to political questions. The research, led by Justin Grimmer, Andrew Hall, and Sean J. Westwood, found that all prominent AI models, including those developed by Google, Meta, and OpenAI, are perceived as skewing left in their political outputs, with OpenAI's ChatGPT rated the most left-leaning. The findings were highlighted in a Fox Business report and underscore ongoing concerns about political bias in artificial intelligence technologies.
🇺🇸 STANFORD STUDY FINDS POPULAR AI MODELS SKEW LEFT ON POLITICAL TOPICS A new study from Stanford’s Hoover Institution finds that several widely used AI models show a left-leaning bias when responding to political prompts. Researchers gathered over 180,000 user judgments https://t.co/B4OMHWjtc1 https://t.co/DxUDO2VWPS
A new study from Stanford University found that leading AI models from Google, Meta, and OpenAI lean left in their responses to political questions. OpenAI's ChatGPT was the most left-leaning of them all. Thoughts? https://t.co/TWyMrwimUa
Based on a Hoover study using +180k user comparisons across 24 AI models, @JustinGrimmer, @AHall_Research, and @SeanJWestwood find that all major language models are perceived as having a left-leaning political bias. Read @CCreitzPolitics in @FoxBusiness: https://t.co/nQe4VZD4nd