MIT @techrview: Scientists are diving into AI and human cognition, crafting models like Centaur to predict behavior. But can a billion-parameter machine really unlock the mind? Or is it just a calculator in disguise? The quest for understanding continues… https://t.co/WQum9W6i87
🤖 AI doesn't just destroy jobs: a new study suggests it can improve wages and working conditions. Learn more 👇 https://t.co/0brBcgDUQn
🤖 Researchers claim a new AI can predict human decisions. Learn more 👇 https://t.co/bwtoyqCgpT
A cluster of new surveys points to a rapid, largely unchecked shift toward algorithmic control over personnel decisions. Data gathered this week by industry researchers and HR platforms indicates that roughly two-thirds of U.S. managers already use artificial-intelligence tools at work, and 94 percent rely on them to decide promotions or dismissals. One poll reported that 40 percent of managers now allow the software to act without human supervision, while another found that 66 percent consult large-language models such as ChatGPT, Copilot or Gemini when weighing layoffs; nearly one in five let the system cast the decisive vote.

The trend extends beyond the United States. An Observer Research Foundation review says 93 percent of Fortune 500 chief HR officers have introduced AI into their workflows, and 88 percent of companies globally deploy automated screening for the first cut of applicants. The technology's advocates cite faster hiring cycles and lower costs, yet critics warn the tools can encode or amplify bias and leave rejected candidates with little transparency or recourse.

Regulation remains patchy. The European Union's forthcoming AI Act classifies hiring algorithms as "high-risk," mandating strict documentation and human oversight. In the United States, the Equal Employment Opportunity Commission has reminded employers they are liable for discriminatory outcomes, but enforcement is limited. India's 2023 Digital Personal Data Protection Act contains no provisions on automated decision-making, creating what analysts call an accountability vacuum as the country's firms accelerate adoption.

Labour advocates say the widening use of opaque models threatens to erode trust unless companies keep humans in the loop and audit systems regularly. Policy specialists are urging legislators to establish clear standards for explainability, contestability and data governance before AI becomes the default boss on hiring, promotion and firing.