Meta AI’s new text-to-video AI generator is like DALL-E for video


Facebook’s parent company Meta has unveiled a text-to-video AI model called Make-A-Video. Like DALL-E, the system takes text prompts and generates short, roughly five-second videos from users’ descriptions.


Make-A-Video is a new AI system that lets people turn text prompts into brief, high-quality video clips. It builds on Meta AI’s recent progress in generative technology research and has the potential to open new opportunities for creators and artists. The system learns what the world looks like from paired text-image data and how the world moves from video footage with no associated text. As part of its continued commitment to open science, Meta AI is sharing details in a recent research paper.
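To make that two-part training idea concrete, the sketch below is a minimal, hypothetical illustration (not Meta's actual code or architecture): per-frame spatial layers stand in for what would be learned from paired text-image data, and a separate temporal layer stands in for the motion modeling learned from unlabeled video. All module names and shapes are invented for demonstration.

```python
# Illustrative sketch only -- not Meta's Make-A-Video implementation.
# Spatial layers model appearance per frame; a temporal layer mixes
# information across frames, mirroring the "images for appearance,
# unlabeled video for motion" split described in the article.
import torch
import torch.nn as nn


class SpatialBlock(nn.Module):
    """Per-frame 2-D convolution (the 'image' part of the model)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):          # x: (batch*frames, C, H, W)
        return torch.relu(self.conv(x))


class TemporalBlock(nn.Module):
    """1-D convolution over the frame axis (the 'motion' part)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, frames):  # x: (batch*frames, C, H, W)
        bf, c, h, w = x.shape
        b = bf // frames
        # Fold spatial positions into the batch so the conv runs over time.
        t = x.view(b, frames, c, h * w).permute(0, 3, 2, 1).reshape(b * h * w, c, frames)
        t = torch.relu(self.conv(t))
        return t.reshape(b, h * w, c, frames).permute(0, 3, 2, 1).reshape(bf, c, h, w)


class VideoBlock(nn.Module):
    """One combined block: appearance first, then motion."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = SpatialBlock(channels)    # could be pretrained on text-image data
        self.temporal = TemporalBlock(channels)  # could be trained on unlabeled video

    def forward(self, video):      # video: (batch, frames, C, H, W)
        b, f, c, h, w = video.shape
        x = video.reshape(b * f, c, h, w)
        x = self.spatial(x)
        x = self.temporal(x, frames=f)
        return x.reshape(b, f, c, h, w)


if __name__ == "__main__":
    block = VideoBlock(channels=8)
    clip = torch.randn(2, 5, 8, 16, 16)   # two toy 5-frame clips
    print(block(clip).shape)               # torch.Size([2, 5, 8, 16, 16])
```

In this kind of factored design, the spatial layers can be trained where text supervision is plentiful (image datasets), while the temporal layers only need raw video, which is why no paired text-video data is required.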


Generative AI research is pushing creative expression forward by giving people tools to quickly and easily create new content. With just a few words or lines of text, Make-A-Video can bring imagination to life and create one-of-a-kind videos full of vivid colors, characters, and landscapes. The system can also generate videos from still images or produce variations of existing videos.


Make-A-Video follows Meta AI’s announcement earlier this year of Make-A-Scene, a multimodal generative AI method that gives people more control over the AI-generated content they create. With Make-A-Scene, Meta demonstrated how people can create photorealistic illustrations and storybook-quality art using words, lines of text, and freeform sketches.


Meta AI says it wants to be thoughtful about how it builds new generative AI systems like this one. Make-A-Video uses publicly available datasets, which adds an extra level of transparency to the research. The team is openly sharing this generative AI research and its results with the community for feedback, and will continue to use a responsible AI framework to refine and evolve its approach to this emerging technology.


Learn more about Make-A-Video by visiting Meta AI’s site and reading the paper.

Read the full article at: www.theverge.com
