Google on 21 August released a technical paper and blog post detailing, for the first time, the per-prompt environmental footprint of its Gemini AI apps. The company's internal measurements put the median text request at 0.24 watt-hours of electricity, 0.26 milliliters of water (about five drops), and 0.03 grams of carbon-dioxide emissions, amounts Google likens to running a microwave for one second or watching television for less than nine seconds.

The Alphabet unit says hardware and software optimizations, including custom Tensor Processing Units and more efficient model architectures, cut the energy needed for a typical Gemini prompt 33-fold and its carbon footprint 44-fold between May 2024 and May 2025. Google argues the findings show rapid progress toward cleaner AI and offers its methodology as a template for wider industry disclosure.

Independent researchers welcomed the rare transparency but warned that the numbers may understate Gemini's true toll. Shaolei Ren of the University of California, Riverside, and Alex de Vries-Gao of Digiconomist said the study omits the indirect water consumed in power generation and uses market-based rather than location-based carbon accounting, masking impacts on local grids. The paper has not yet undergone peer review.

The publication gives policymakers and investors one of the most detailed views yet into the operating costs of large language models, but experts called for sector-wide, independently audited standards, akin to an Energy Star label, to compare AI systems on equal terms as companies accelerate data-center expansion.
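The published figures and the appliance equivalences above are simple arithmetic, as is the critics' accounting point. A minimal sketch reproduces them, assuming illustrative appliance wattages (~900 W microwave, ~100 W television) and a hypothetical local-grid emission factor (400 g CO2e/kWh) that are not stated in Google's paper:

```python
# Google's published medians per Gemini Apps text prompt (Aug 2025 paper)
ENERGY_WH = 0.24   # watt-hours of electricity
CARBON_G = 0.03    # grams of CO2e (market-based accounting)

# Assumed appliance power draws for the equivalences (not from the paper)
MICROWAVE_W = 900.0
TV_W = 100.0

def seconds_of_use(energy_wh: float, watts: float) -> float:
    """How long an appliance of the given wattage runs on `energy_wh`."""
    return energy_wh * 3600.0 / watts

microwave_s = seconds_of_use(ENERGY_WH, MICROWAVE_W)  # ~0.96 s: "one second"
tv_s = seconds_of_use(ENERGY_WH, TV_W)                # ~8.6 s: "under nine seconds"

# The claimed 33x efficiency gain implies a May 2024 median of roughly:
may_2024_wh = ENERGY_WH * 33                          # ~7.9 Wh per prompt

# Market- vs location-based accounting: the published pair of figures implies
# an effective emission factor of CARBON_G / (ENERGY_WH / 1000) g CO2e/kWh.
# A location-based estimate would use the physical grid's intensity instead;
# 400 g/kWh below is a hypothetical value for comparison, not from the paper.
implied_market_factor = CARBON_G / (ENERGY_WH / 1000.0)  # 125 g CO2e/kWh
location_based_g = (ENERGY_WH / 1000.0) * 400.0          # 0.096 g per prompt

print(f"microwave: {microwave_s:.2f} s, TV: {tv_s:.2f} s")
print(f"implied market-based intensity: {implied_market_factor:.0f} g CO2e/kWh")
```

The gap between the 0.03 g market-based figure and the location-based estimate (about 3x under the assumed grid intensity) is the "masking local grid impacts" concern the researchers raise.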
"…we find the median Gemini Apps text prompt consumes 0.24 Wh of energy—a figure substantially lower than many public estimates...have driven a 33x reduction in energy consumption and a 44x reduction in carbon footprint for the median Gemini Apps text prompt over one year." https://t.co/y9EyrpndQw
Google just dropped a paper showing that the energy used for a Gemini text prompt has dropped 33x in only 12 months btw https://t.co/gAZRPPtUmB https://t.co/ebYiMDx6dB
"These figures are more comprehensive than many previously published metrics, but also end up being one or two orders of magnitude smaller than many existing estimates or measurements of AI inference benchmarks." $GOOG $GOOGL https://t.co/sxkr6eHzfJ