Details
Act-One launched in October 2024 as part of the Gen-3 Alpha platform. It takes as input a short video of a person performing, recorded on any consumer camera, and maps that performance onto a still character image or video to generate an animated clip. Act-Two, its successor, extends this capability to body gestures and environmental motion. Runway describes the prior industry approach as requiring "complex, multi-step workflows" involving motion-capture equipment, multiple footage references, and manual face rigging, but it does not explicitly confirm that Runway itself previously employed such a process for its own users.