Details
Uni-1 is built on a decoder-only autoregressive transformer in which text and image tokens are processed together in a single sequence. Unlike diffusion models, it performs structured internal reasoning before and during image synthesis: it decomposes instructions, resolves constraints, and plans composition before rendering. It accepts up to nine reference images per request and supports natural-language editing (e.g., swapping backgrounds or shifting lighting). The Uni-1.1 API became available to developers in May 2026, with production commitments from platforms including Envato, Comfy, Fal, and Magnific.
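The unified-sequence idea can be sketched as follows. This is a minimal illustration, not Uni-1's actual implementation: the vocabulary layout, the begin/end-of-image markers, and the toy next-token policy are all assumptions made for the example.

```python
# Sketch of a decoder-only setup where text tokens and image-patch tokens
# share one flat sequence. All ids and markers here are hypothetical.

TEXT_VOCAB = 1000          # assumed: ids 0..999 are text tokens
IMG_VOCAB_OFFSET = 1000    # assumed: ids 1000+ are image-patch tokens
BOI, EOI = 2000, 2001      # assumed begin-of-image / end-of-image markers

def build_sequence(text_ids, image_patch_ids):
    """Interleave text and image tokens into one autoregressive sequence."""
    return (text_ids
            + [BOI]
            + [IMG_VOCAB_OFFSET + p for p in image_patch_ids]
            + [EOI])

def toy_next_token(seq):
    """Stand-in for the transformer's prediction head (illustration only)."""
    return sum(seq) % TEXT_VOCAB

def generate(seq, n):
    """Autoregressive loop: each new token conditions on the whole sequence."""
    out = list(seq)
    for _ in range(n):
        out.append(toy_next_token(out))
    return out

seq = build_sequence([1, 2], [0, 1])
print(seq)            # → [1, 2, 2000, 1000, 1001, 2001]
print(len(generate(seq, 3)))  # → 9
```

Because both modalities live in one token stream, the same attention stack can condition image tokens on the preceding instruction text (and on reference images placed earlier in the sequence), which is what enables the planning-before-rendering behavior described above.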