Lanon Wee

Debate Over the Effects of Google Pixel's AI Photo-Manipulation Features

The camera never lies, or so the saying goes, but it seems to be straying from the truth more and more often. Throughout the smartphone era, photos have been digitally altered on the fly for better results, from boosting colours to adjusting lighting. Now a new breed of AI-powered smartphone tools is raising questions about what it even means to photograph reality.

Last week Google released its two newest smartphones, the Pixel 8 and Pixel 8 Pro, which go a step further than rival devices: they use AI to modify people's facial expressions in photographs.

We have all been there: someone in a group shot looks away from the camera or fails to smile. Google's phones can now use machine learning to take an expression from a different photo of the same person and swap it into the picture. Google calls the feature Best Take.

A second tool, Magic Editor, lets users erase, move and resize unwanted elements in an image, from people to buildings, with the resulting gap filled back in. The filling is done with deep learning: an AI algorithm trained on millions of photographs works out, from the surrounding pixels, what texture should plausibly occupy the space (a simplified sketch of the idea follows below). Nor do the pictures have to be taken on a Pixel 8 Pro; Magic Editor and Best Take can be used on any image stored in a Google Photos library.

For some observers, this raises fresh questions about what it means to take a photograph. Tech commentators and reviewers have variously described Google's new AI tools as icky, creepy and even a threat to people's already limited trust in online content.

Andrew Pearsall, a professional photographer and senior lecturer in journalism at the University of South Wales, voiced concern about the dangers of AI manipulation. He cautioned that a single, seemingly superficial adjustment could lead us to a dark place, and said that while the risks were greater in professional settings, it was something for everyone to think about. Great care is needed in deciding where the limits lie, he argued; it is worrying that you can take a picture and delete part of it instantly on your phone, and we seem to be entering a kind of artificial world.

Isaac Reynolds, who leads the Google team responsible for the smartphones' camera systems, told the BBC that the company is keenly aware of the ethical implications of its consumer technology. He was quick to stress that features such as Best Take are not faking anything.

Camera quality and software are seen as essential if the company is to compete with Samsung, Apple and other rivals, and these AI features are regarded as a distinctive advantage. Notably, every reviewer who raised concerns about the technology also praised the camera system's photos for their quality.

Reynolds said that with this camera you can finally capture a shot in which everyone looks the way you want them to, something no smartphone, or indeed any camera, has managed before. He explained that if a version of the photo existed in which the person was smiling, it would be offered; in the absence of such a version, no smiling image would appear. For Mr Reynolds, the resulting composite is a "depiction of an instant". Put another way, while the precise scene may never have taken place, it is the picture you wanted to create, assembled from several genuine moments.
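The gap-filling behind Magic Editor, described above, is a form of image inpainting. Google's own implementation relies on learned generative models that the article only gestures at, so the sketch below substitutes OpenCV's classical Telea algorithm as a minimal, runnable stand-in for the same idea: synthesising a plausible fill from the pixels around a masked region. The file names and mask coordinates are hypothetical.

```python
import cv2
import numpy as np

# Load the photo and build a mask marking the region to remove
# (hypothetical file name; any 8-bit colour image will do).
image = cv2.imread("group_photo.jpg")
mask = np.zeros(image.shape[:2], dtype=np.uint8)
mask[200:350, 400:550] = 255  # rectangle covering the unwanted object

# cv2.inpaint fills the masked region using only the surrounding pixels.
# INPAINT_TELEA propagates texture inward from the mask boundary; tools
# like Magic Editor use trained generative models instead, but the goal
# is the same: a plausible fill, not the true scene.
result = cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("group_photo_filled.jpg", result)
```

Whether the fill comes from diffusing nearby pixels or from a model trained on millions of photos, the point stands: the algorithm invents a plausible substitute for what was removed; it does not recover what was really there.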
Rafal Mantiuk, a specialist in graphics and displays at the University of Cambridge, said it was essential to remember that AI on smartphones is not used to make photos match reality. People do not want to capture reality, he said, they want beautiful pictures; the entire image-processing pipeline in a mobile phone is built to make pictures look good, not to make them true to life.

Because of the physical constraints of smartphones, machine learning is used to supply information that is not in the photo itself. That improves zoom, brightens low-light shots and, in the case of Google's Magic Editor feature, adds elements that were never in the photograph, or swaps in parts of other photos, replacing a frown with a grin, for example.

Manipulating images is not new; it is as old as photography itself. But AI has made it easier than ever to blur the line between reality and fiction.

Earlier this year, Samsung drew criticism for its use of deep learning to improve photos of the Moon taken with its smartphones. Tests showed that however blurry the original shot, the phone always produced a usable image; in other words, the Moon picture you got was not necessarily a picture of the exact Moon you were looking at. The company acknowledged the criticism, saying it was working to "diminish any possible uncertainty that could stem from photographing the true Moon versus an image of the Moon".

Mr Reynolds says that, in line with an emerging industry standard, Google has built metadata, the digital fingerprint embedded in an image file, into its new tools to flag when AI has been used (a sketch of how such a flag might be written appears below). He says the issue is one the team has discussed internally at length, given how long it has been engaging with these questions, and that conversations with users have been at the centre of it. Google is evidently confident users will agree: the AI features sit at the heart of the new phones' advertising campaign.

Is there a line Google would not cross when manipulating images? Mr Reynolds said the debate around AI was too nuanced to point to a single line between acceptable and unacceptable uses. As you get deeper into building features, he added, you begin to realise that a line is a simplification of what is in reality a difficult, feature-by-feature set of choices.

Even as these new technologies arrive, Professor Mantiuk said, we should not forget the limits of our own vision. It is only because the brain reconstructs information and infers what is missing that we see sharp, colourful images at all, he said. You may complain that cameras falsify reality, yet the human brain does much the same thing in its own way.
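The article does not name the metadata standard Mr Reynolds refers to. One widely used convention is the IPTC photo-metadata vocabulary, whose DigitalSourceType field can mark a picture as a composite involving trained algorithmic media; whether Google uses exactly this field is an assumption here. The sketch below requires the exiftool command-line utility and uses a hypothetical file name.

```python
import subprocess

PHOTO = "edited_photo.jpg"  # hypothetical file
# Assumed label: the IPTC NewsCodes URI for a composite that includes
# output from a trained algorithm (i.e. an AI-assisted edit).
SOURCE_TYPE = ("http://cv.iptc.org/newscodes/digitalsourcetype/"
               "compositeWithTrainedAlgorithmicMedia")

# Write the provenance flag into the image's XMP metadata.
subprocess.run(
    ["exiftool", f"-XMP-iptcExt:DigitalSourceType={SOURCE_TYPE}", PHOTO],
    check=True,
)

# Read it back; a gallery app or browser could surface this as an
# "edited with AI" label.
result = subprocess.run(
    ["exiftool", "-s", "-XMP-iptcExt:DigitalSourceType", PHOTO],
    check=True, capture_output=True, text=True,
)
print(result.stdout.strip())
```

A flag like this travels with the file, but it only informs viewers if the software they use chooses to read and display it.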
