Towards Real-Time Text2Video via CLIP-Guided, Pixel-Level Optimization
Peter Schaldenbrand, Zhixuan Liu, & Jean Oh
The Robotics Institute, Carnegie Mellon University
We introduce an approach to generating videos from a given sequence of language descriptions.
Frames of the video are generated sequentially, each optimized under guidance from the CLIP image-text encoder; the method iterates through the language descriptions, weighting the current description more heavily than the others.
Rather than optimizing through an image generator model, which tends to be computationally heavy, the proposed approach computes the CLIP loss directly at the pixel level and optimizes the pixel values themselves, capturing the general content of each description at a speed suitable for near real-time systems.
The approach can generate videos at up to 720p resolution, with variable frame rates and arbitrary aspect ratios, at a rate of 1-2 frames per second.
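To make the core idea concrete, below is a minimal sketch of pixel-level, CLIP-guided optimization of a single frame with a weighted multi-description loss, assuming PyTorch and the open-source CLIP package (https://github.com/openai/CLIP). The variable names, the 2:1 weighting scheme, the learning rate, and the step count are illustrative placeholders, not the authors' exact settings.

```python
# A sketch of CLIP-guided, pixel-level frame optimization (illustrative, not
# the paper's exact implementation). Requires: pip install torch git+https://github.com/openai/CLIP.git
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Sequence of language descriptions; the "current" one is weighted higher.
descriptions = ["a sunrise over the ocean", "a ship sailing at noon"]
current_idx = 0  # index of the description the current frame should match
weights = torch.tensor(
    [2.0 if i == current_idx else 1.0 for i in range(len(descriptions))],
    device=device,
)
with torch.no_grad():
    text_feats = model.encode_text(clip.tokenize(descriptions).to(device))
    text_feats = F.normalize(text_feats, dim=-1)

# Optimize the frame's pixels directly; no generator network in the loop.
frame = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([frame], lr=0.05)

# CLIP's expected input normalization statistics.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(100):  # few iterations per frame keeps generation fast
    optimizer.zero_grad()
    # Clamp keeps pixels in [0, 1] before CLIP normalization (a simple choice;
    # a sigmoid parameterization would avoid dead gradients at the bounds).
    img = (frame.clamp(0, 1) - mean) / std
    img_feat = F.normalize(model.encode_image(img), dim=-1)
    # Weighted sum of cosine distances to every description's embedding.
    sims = (img_feat @ text_feats.T).squeeze(0)
    loss = (weights * (1.0 - sims)).sum() / weights.sum()
    loss.backward()
    optimizer.step()
```

In a sequential setting, each new frame can presumably be initialized from the optimized pixels of the previous one so that only a few optimization steps are needed per frame; the resolution, weighting, and any image augmentations used to stabilize the CLIP loss are left out here for brevity.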