Google Photos is expanding its use of artificial intelligence to help users edit and enhance their photos. The company has already applied AI in tools like the distraction-removing Magic Eraser and the corrective Photo Unblur, and with the introduction of Magic Editor it is turning to AI for more complex edits. Google says the new tool will combine AI techniques, including generative AI, for editing and reimagining photos.
At the Google I/O developer conference this week, the company offered a sneak peek at the new experimental feature's capabilities.
With Magic Editor, users will be able to edit specific parts of a photo, like the foreground or background, fill in gaps, and even reposition the subject to get a better shot.
Google, for instance, demonstrated how a shot of a person standing in front of a waterfall could be enhanced with Magic Editor.
In a demo of the technology, a user first removes other people from the background of the photo, then removes a bag strap from the subject’s shoulder for a cleaner look. These kinds of edits were previously available in Google Photos via Magic Eraser, but the ability to reposition the subject is new. Here, the AI “cuts out” the subject in the photo’s foreground, allowing the user to drag and drop them elsewhere in the image.
This is similar to the image cutout feature that Apple introduced with iOS 16 last year. This feature allowed users to separate the subject from the rest of the photo for a variety of purposes, including copying and pasting a portion of the image into another app, grabbing the subject from images found through Safari search, and placing the subject in front of the clock on the iOS Lock Screen.
In Google Photos, however, the feature is designed to help users create better photos.
Another demonstration showed how Magic Editor’s AI-based ability to fill in gaps in an image could be combined with its ability to reposition a subject.
In this example, a boy is sitting on a bench holding a bunch of balloons, but the bench is off to the left side of the photo. As you move the boy and the bench closer to the center of the image, Magic Editor uses generative AI to create more of the bench and balloons to fill in the rest of the photo. As a final touch, the original’s gray, overcast sky can be brightened to a more vivid blue with fluffy white clouds.
Sky-editing features like this are available in other photo editing apps, including Lensa and Lightricks’ Photoleap, among others. In this case, though, rather than requiring users to download a separate tool, the capability is integrated into the main photo organizing app they already use.
The result of the edits, in the demos, is natural-looking, well-composed pictures, not images that appear heavily edited or obviously AI-generated.
Google says Magic Editor will be made available as an experimental feature later this year, but it warns that there will be times when it doesn’t work as intended. Testing and user feedback will help the feature improve over time; users currently edit 1.7 billion photos every month in Google Photos, the company said.
It’s unclear, however, whether Google will eventually charge for the feature or keep it exclusive to Pixel devices. As it did with Magic Eraser earlier this year, the company may make Magic Editor a Google One benefit.
The feature will initially be available on “select” Pixel devices, but Google has declined to say which ones will receive it first.
The company won’t yet go into detail about the AI technology behind the feature, but said it will share more as the early access release gets closer.