Alibaba's Qwen team has launched Qwen-Image, a new open-source text-to-image generation model with 20 billion parameters. Built on an MMDiT (Multimodal Diffusion Transformer) architecture, the model excels at rendering embedded text within images in both English and Chinese, including complex Chinese prompts. Qwen-Image can generate images in diverse styles such as photorealistic, anime, cyberpunk, sci-fi, minimalist, retro, surreal, and ink wash, and it also offers advanced image-editing features including style changes and object manipulation. Released under the Apache 2.0 license, Qwen-Image can be freely deployed by developers and is integrated into platforms such as AnyCoder via Replicate. Benchmark evaluations indicate that Qwen-Image rivals or surpasses other leading models, such as OpenAI's GPT-4o Images, Imagen 3, and FLUX.1 Kontext, in text rendering quality and image generation. Alibaba's DAMO Academy made the model publicly available on August 5, 2025, continuing the Qwen team's rapid pace of AI innovation, with recent releases including various Qwen3 versions and coder models. The release contrasts with OpenAI's delay of its open-weight model over safety concerns.
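Because the weights are released under Apache 2.0, the model can be run locally. The sketch below shows one plausible way to do so via the Hugging Face diffusers library; the "Qwen/Qwen-Image" repo id, dtype, and step count are assumptions to verify against the official model card, and a 20B-parameter model will need a large GPU.

```python
# Minimal local-inference sketch, assuming the diffusers integration
# and the "Qwen/Qwen-Image" Hub repo id (check the model card).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image",           # assumed Hub repo id
    torch_dtype=torch.bfloat16,  # 20B parameters: expect high VRAM usage
)
pipe = pipe.to("cuda")

# Text rendering inside the image is the model's headline feature,
# so the prompt asks for an in-image sign.
image = pipe(
    prompt='A neon cyberpunk street market with a storefront sign reading "Qwen-Image"',
    num_inference_steps=50,      # assumed setting; tune per the model card
).images[0]
image.save("qwen_image_demo.png")
```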
📢 New Model Drop: Qwen-Image by @Alibaba_Qwen is live on Yupp! It offers exceptional text-to-image generation, is especially strong at creating stunning graphic posters with native text, and is now open-source! We gave Qwen-Image a try with a few fun & creative prompts: https://t.co/spIixcnZ7w
Does Qwen Image work on MacBooks yet? /lazy
There's a new mode on the Replicate Qwen Image model that lets you generate slightly smaller images faster, taking only 5.5 seconds. Set image_size to optimize_for_speed to try it: https://t.co/XlmIsxIgMw > a portrait photo of a man and a woman, on the guy's pink t-shirt it https://t.co/eJUB2EEMvO
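A hedged sketch of that Replicate call using the official Python client: the model slug "qwen/qwen-image" and the image_size value mirror the tweet but are assumptions, so check the live model schema on Replicate before relying on them.

```python
# Sketch of the speed-optimized Replicate call described above.
# Requires `pip install replicate` and the REPLICATE_API_TOKEN env var.
import replicate

output = replicate.run(
    "qwen/qwen-image",  # assumed Replicate model slug
    input={
        "prompt": "a portrait photo of a man and a woman",
        "image_size": "optimize_for_speed",  # smaller image, ~5.5 s per the tweet
    },
)
print(output)  # typically a URL (or list of URLs) pointing at the generated image
```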