Adobe rolled out a new After Effects feature on Wednesday that, for the first time, can automatically scrub objects from a video.
While Adobe Photoshop has long offered a Content-Aware Fill tool that can conceal areas of a still image, the software giant said the ability to do so across multiple frames of video was made possible by improvements to its machine learning platform, Adobe Sensei. The feature is the latest example of how artificial intelligence is transforming the video production process, making professional content quicker and easier to produce at scale.
The new tool can track a discrete object across a given clip, remove it, and fill the space it occupied with pixels that blend with the surrounding imagery. Adobe suggests it can be used for anything from removing anachronistic giveaways from a period piece to erasing a stray boom mic.
The offering is one of several video editing software updates that Adobe is releasing ahead of next week’s National Association of Broadcasters trade show in Las Vegas. Others include audio enhancement tools, a real-time Twitch stream animator and new ways of organizing storyboards.
“Video is experiencing a golden age as video professionals across broadcast, film, streaming services and digital marketing are facing higher consumer demand for content creation. Meanwhile, production timelines are shorter and the list of deliverables is longer,” Steven Warner, vice president of digital video and audio at Adobe, said in a statement. “Through optimized performance and intelligent new features powered by Adobe Sensei, video professionals can cut out more tedious production tasks to focus on their creative vision.”
The tool is one of many ways AI is changing the content production process, which now allows for algorithmically generated graphics ranging from photorealistic images to faux-classical art. While this added functionality makes professional-grade video editing more accessible, it also opens the door to more realistic visual manipulation for disinformation purposes, including so-called “deepfakes.”