With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos

AI video startup Runway announced the availability of its newest video synthesis model today. Dubbed Gen-4, the model purports to solve several key problems with AI video generation.
Chief among those is the difficulty of maintaining consistent characters and objects across shots. If you’ve watched any short films made with AI, you’ve likely noticed that they tend to be dream-like sequences of thematically but not realistically connected images: mood pieces more than coherent narratives.
Runway claims Gen-4 can maintain consistent characters and objects across shots, provided it’s given a single reference image of the character or object in question as part of the project in Runway’s interface.