Snap Inc. has launched new generative AI tools to enhance its AR capabilities, enabling more realistic and complex special effects for Snapchat users and developers. The upgraded Lens Studio platform significantly reduces AR creation time and introduces advanced AI features, expanding creative possibilities.
19 June 2024 – Santa Monica-based Snap Inc. has unveiled its latest iteration of generative AI technology, which promises to deliver more realistic special effects for users filming themselves with their phone cameras. This strategic move is part of Snap’s ongoing effort to maintain its edge in the competitive social media landscape, dominated by larger players like Meta.
Snap has long been a trailblazer in augmented reality (AR), a technology that superimposes digital effects onto real-world images and videos. With this latest enhancement, the company is aiming to attract new users and advertisers to its platform by offering advanced, whimsical AR effects known as lenses. These AI-powered lenses will enable Snapchat users to create more engaging content, while AR developers will have new tools to innovate and expand their creative capabilities.
A significant component of Snap’s announcement is the upgraded version of its developer program, Lens Studio. The enhanced platform allows artists and developers to create AR features not only for Snapchat but also for other websites and applications. Bobby Murphy, Snap’s Chief Technology Officer, said the improved Lens Studio will cut the time required to create AR effects from weeks to hours, enabling the production of far more intricate work.
“What’s exciting for us is that these tools expand the creative possibilities for users, yet they remain user-friendly enough for newcomers to quickly create unique content,” Murphy said in an interview.
The new Lens Studio includes a suite of generative AI tools, among them an AI assistant that answers developers’ questions. Another notable tool lets artists generate three-dimensional images from text prompts, eliminating the need for manual 3D modeling. These advancements mark a significant leap from earlier AR technology, which was limited to simpler effects such as adding static accessories to videos.
Snap’s improvements now enable AR developers to create more dynamic and realistic lenses, where digital objects can move naturally and adapt to the video’s lighting conditions. Murphy also mentioned Snap’s future plans to expand AR experiences to full-body applications, such as generating new outfits, a feature that is currently challenging to develop.