Google Veo, a serious swing at AI-generated video, debuts at Google I/O 2024
Google’s gunning for OpenAI’s Sora with Veo, an AI model that can create 1080p video clips around a minute long given a text prompt.
Unveiled on Tuesday at Google’s I/O 2024 developer conference, Veo can capture different visual and cinematic styles, including shots of landscapes and timelapses, and make edits and adjustments to already-generated footage.
‘We’re exploring features like storyboarding and generating longer scenes to see what Veo can do,’ Demis Hassabis, head of Google’s AI R&D lab DeepMind, told reporters during a virtual roundtable. ‘We’ve made incredible progress on video.’
Veo builds on Google’s preliminary commercial work in video generation, previewed in April, which tapped the company’s Imagen 2 family of image-generating models to create looping video clips.
But unlike the Imagen 2-based tool, which could only produce low-resolution clips a few seconds long, Veo appears to be competitive with today’s leading video generation models — not only Sora, but models from startups like Pika, Runway and Irreverent Labs.
In a briefing, Douglas Eck, who leads research efforts at DeepMind in generative media, showed me some cherry-picked examples of what Veo can do. One in particular — an aerial view of a bustling beach — demonstrated Veo’s strengths over rival video models, he said.
‘The detail of all the swimmers on the beach has proven to be hard for both image and video generation models — having that many moving characters,’ he said. ‘If you look closely, the surf looks pretty good. And the sense of the prompt word ‘bustling,’ I would argue, is captured with all the people — the lively beachfront filled with sunbathers.’
Veo was trained on lots of footage. That’s generally how it works with generative AI models: Fed example after example of some form of data, the models pick up on patterns in the data that enable them to generate new data — videos, in Veo’s case.
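For the curious, the ‘fed example after example’ loop can be made concrete. Google hasn’t published Veo’s architecture or training objective, so the sketch below is a generic, heavily simplified diffusion-style training step in PyTorch; every class, shape and hyperparameter in it is a hypothetical stand-in, not Veo’s method.

```python
# A minimal sketch of how generative video models learn from example footage:
# corrupt a real clip with noise, train the model to predict that noise.
# Everything here is a toy stand-in; Veo's internals are not public.
import torch
import torch.nn as nn

class TinyVideoDenoiser(nn.Module):
    """Stand-in for a real video model: predicts the noise added to a clip."""
    def __init__(self, frames=8, size=16):
        super().__init__()
        dim = frames * 3 * size * size
        self.net = nn.Sequential(
            nn.Linear(dim, 256),
            nn.ReLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x):
        flat = self.net(x.reshape(x.shape[0], -1))
        return flat.reshape(x.shape)

def training_step(model, optimizer, clips):
    """One step: blend real footage toward noise, learn to undo the damage."""
    noise = torch.randn_like(clips)
    t = torch.rand(clips.shape[0], 1, 1, 1, 1)  # per-clip corruption level
    noisy = (1 - t) * clips + t * noise          # partially corrupted footage
    loss = nn.functional.mse_loss(model(noisy), noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyVideoDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
clips = torch.randn(4, 8, 3, 16, 16)  # stands in for batches of real footage
print(training_step(model, optimizer, clips))
```

The real system operates on text-conditioned representations of video at vastly larger scale, but the core loop is the same: corrupt real footage, train the model to undo the corruption, and repeat until it can produce plausible footage on its own.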
Where did the footage to train Veo come from? Eck wouldn’t say precisely, but he did admit that some might’ve been sourced from Google’s own YouTube.
‘Google models may be trained on some YouTube content, but always in accordance with our agreement with YouTube creators,’ he said.
The ‘agreement’ part may technically be true. But it’s also true that, considering YouTube’s network effects, creators don’t have much choice but to play by Google’s rules if they hope to reach the widest possible audience.
Reporting by The New York Times in April revealed that Google broadened its terms of service last year in part to allow the company to tap more data to train its AI models. Under the old ToS, it wasn’t clear whether Google could use YouTube data to build products beyond the video platform. Not so under the new terms, which loosen the reins considerably.
Google’s far from the only tech giant leveraging vast amounts of user data to train in-house models. (See: Meta.) But what’s sure to disappoint some creators is Eck’s insistence that Google’s setting the ‘gold standard’ here, ethics-wise.
‘The solution to this [training data] challenge will be found with getting all of the stakeholders together to figure out what are the next steps,’ he said. ‘Until we make those steps with the stakeholders — we’re talking about the film industry, the music industry, artists themselves — we won’t move fast.’
Yet Google’s already made Veo available to select creators, including Donald Glover (AKA Childish Gambino) and his creative agency Gilga. (Like OpenAI with Sora, Google’s positioning Veo as a tool for creatives.)
Eck noted that Google provides tools to allow webmasters to prevent the company’s bots from scraping training data from their websites. But the settings don’t apply to YouTube. And Google, unlike some of its rivals, doesn’t offer a mechanism to let creators remove their work from its training data sets post-scraping.
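Concretely, that web-facing opt-out works through robots.txt: Google’s Google-Extended crawler token lets site owners keep their pages out of Google’s AI training data, though it doesn’t affect Search indexing and, again, doesn’t cover YouTube uploads:

```
# robots.txt: opt this site out of Google AI model training
User-agent: Google-Extended
Disallow: /
```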
I asked Eck about regurgitation as well, which in the generative AI context refers to when a model generates a mirror copy of a training example. Tools like Midjourney have been found to spit out exact stills from movies including ‘Dune,’ ‘Avengers’ and ‘Star Wars’ when given a timestamp, creating a potential legal minefield for users. OpenAI has reportedly gone so far as to block trademarks and creators’ names in prompts for Sora to try to deflect copyright challenges.
So what steps did Google take to mitigate the risk of regurgitation with Veo? Eck didn’t have an answer, short of saying the research team implemented filters for violent and explicit content (so no porn) and is using DeepMind’s SynthID tech to mark videos from Veo as AI-generated.
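SynthID itself is proprietary: a learned watermark woven imperceptibly into the pixels and designed to survive compression and edits. It does not work like the toy below, but as a rough illustration of the general idea of an invisible, machine-checkable mark, here is a least-significant-bit sketch in Python:

```python
# Toy invisible watermark: hide ID bits in the least-significant bits of a
# frame's pixels. Purely illustrative; SynthID is a learned, far more robust
# scheme, and these ID bits are hypothetical.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(frame: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the LSBs of the first few pixel values."""
    out = frame.copy().ravel()
    out[: WATERMARK.size] = (out[: WATERMARK.size] & 0xFE) | WATERMARK
    return out.reshape(frame.shape)

def detect(frame: np.ndarray) -> bool:
    """Check whether the watermark bits are present in the frame."""
    return bool(np.array_equal(frame.ravel()[: WATERMARK.size] & 1, WATERMARK))

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # one 1080p frame
print(detect(embed(frame)))  # True: the mark is recoverable from pixels alone
```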
‘We’re going to make a point of — for something as big as the Veo model — to gradually release it to a small set of stakeholders that we can work with very closely to understand the implications of the model, and only then fan out to a larger group,’ he said.
Eck did have more to share on the model’s technical details.
Eck described Veo as ‘quite controllable’ in the sense that the model understands camera movements and VFX reasonably well from prompts (think descriptors like ‘pan,’ ‘zoom’ and ‘explosion’). And, like Sora, Veo has somewhat of a grasp on physics — things like fluid dynamics and gravity — which contribute to the realism of the videos it generates.
Veo also supports masked editing for changes to specific areas of a video and can generate videos from a still image, a la generative models like Stability AI’s Stable Video. Perhaps most intriguing, given a sequence of prompts that together tell a story, Veo can generate longer videos — videos beyond a minute in length.
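Neither Veo nor VideoFX has a public API as of this writing, so any code here is speculative, but the ‘sequence of prompts’ idea is easy to picture. The structure below is invented purely for illustration:

```python
# Hypothetical storyboard for a longer Veo generation: one prompt per scene,
# with camera terms ("pan," "zoom") steering motion. No real API is implied.
storyboard = [
    "Aerial establishing shot: slow pan across a bustling beach at golden hour",
    "Zoom in on a lone surfer paddling out past the break",
    "Tracking shot alongside the surfer riding a wave",
    "Slow zoom out as the sun sets and the crowd thins",
]
for scene_number, prompt in enumerate(storyboard, start=1):
    print(f"Scene {scene_number}: {prompt}")
```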
That’s not to suggest Veo’s perfect. Reflecting the limitations of today’s generative AI, objects in Veo’s videos disappear and reappear without much explanation or consistency. And Veo often gets its physics wrong — for example, cars will inexplicably, impossibly reverse on a dime.
That’s why Veo will remain behind a waitlist on Google Labs, the company’s portal for experimental tech, for the foreseeable future, inside a new frontend for generative AI video creation and editing called VideoFX. As it improves, Google aims to bring some of the model’s capabilities to YouTube Shorts and other products.
‘This is very much a work in progress, very much experimental … there’s much more left undone than done here,’ Eck said. ‘But I think this is sort of the raw materials for doing something really great in the filmmaking space.’
Ref: TechCrunch