[Figure: Video of a sea turtle, animated from a still image with Make-A-Video.]
“Using function-preserving transformations, we extend the spatial layers at the model initialization stage to include temporal information,” Meta wrote in a white paper. “The extended spatial-temporal network includes new attention modules that learn temporal world dynamics from a collection of videos.”
Meta has not announced how or when Make-A-Video might become available to the public, or who would have access to it. The company does provide a sign-up form for people interested in trying the tool in the future.
Meta acknowledges that the ability to create photorealistic videos on demand presents certain social hazards. At the bottom of the announcement page, Meta says that all AI-generated video content from Make-A-Video contains a watermark to “help ensure viewers know the video was generated with AI and is not a captured video.”
If history is any guide, competitive open source text-to-video models may follow (some, like CogVideo, already exist), which could make Meta’s watermark safeguard irrelevant.