Apple has quietly bundled a revamped Speech framework, featuring new SpeechAnalyzer and SpeechTranscriber classes, into the macOS Tahoe beta and the other version 26 operating-system betas released to developers after WWDC25. The modules form part of the company's on-device "Apple Intelligence" stack and are designed to replace network-based transcription services with faster, privacy-preserving alternatives that run entirely on Apple Silicon.

Early independent testing indicates a sizeable speed advantage over OpenAI's widely used Whisper model. MacStories found the Apple APIs generated an SRT transcript of a 34-minute, 7 GB 4K video in 45 seconds, compared with 1 minute 41 seconds for MacWhisper's Whisper Large V3 Turbo and nearly four minutes for Whisper Large V2, making Apple's pipeline roughly 55 percent faster than even the quickest Whisper configuration. AppleInsider reported similar results, saying the new tools are "typically double the speed" of Whisper while matching its accuracy. AI research collective Argmax said its own benchmarks confirm that Apple's implementation outperforms both Whisper and Nvidia's latest speech-to-text models while offering comparable feature depth, giving developers a viable, fully local alternative for real-time and batch transcription workloads.

The Speech framework update is available today to registered developers and is expected to ship publicly with iOS, iPadOS, macOS, tvOS and watchOS 26 later this year, positioning Apple to challenge cloud-based transcription providers and strengthen the company's broader generative-AI push.
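For a sense of how the new classes fit together, here is a minimal Swift sketch of transcribing a local audio file. SpeechAnalyzer and SpeechTranscriber are the announced class names; the specific initializer parameters, preset name, and method names below are assumptions based on the general shape of the API and may differ from what ships.

```swift
import Speech
import AVFoundation

// Sketch only: transcribe an audio file fully on-device.
// Exact signatures are assumptions, not the confirmed shipping API.
func transcribe(file url: URL) async throws -> String {
    // A transcriber module configured for the current locale.
    // The `preset` parameter name and value are assumed.
    let transcriber = SpeechTranscriber(locale: Locale.current,
                                        preset: .offlineTranscription)

    // An analyzer drives one or more analysis modules over the audio.
    let analyzer = SpeechAnalyzer(modules: [transcriber])

    // Feed the file to the analyzer; the method name is assumed.
    let audioFile = try AVAudioFile(forReading: url)
    try await analyzer.analyzeSequence(from: audioFile)

    // Collect finalized results as they stream in.
    var transcript = ""
    for try await result in transcriber.results {
        transcript += String(result.text.characters)
    }
    return transcript
}
```

The module-based design matters for the benchmarks above: because the analyzer batches an entire file through the on-device model rather than streaming it to a server, long recordings can be processed far faster than real time.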