Lanon Wee

Google Trials Watermark to Identify AI-Generated Images

Google is trialling a digital watermark to identify images made by artificial intelligence (AI), in an effort to combat disinformation.

Developed by DeepMind, Google's AI arm, the system, called SynthID, identifies machine-generated images. It works by embedding changes in individual pixels so that the watermark is invisible to the human eye but still detectable by a computer (a toy illustration of the general idea appears at the end of this article).

DeepMind acknowledged, however, that the system is not foolproof against extreme image manipulation.

As the technology advances, it is becoming ever harder to tell real images apart from artificially generated ones - as BBC Bitesize's AI or Real quiz shows.

AI image generators have become commonplace; the popular tool Midjourney alone has more than 14.5 million users. They let people create images in seconds by typing simple text prompts, which has fuelled debate around the world over copyright and ownership.

For now, the watermark is available only for images created with Imagen, Google's own image generator.

A traditional watermark usually overlays a logo or text on an image to signal ownership and make the picture harder to copy and use without permission. Images on the BBC News website, for example, normally carry a copyright watermark in the bottom-left corner.

Watermarks of that kind are of little use for identifying AI-generated images, though, because they can easily be edited or cropped out.

Tech firms instead use a technique called hashing to create digital "fingerprints" of known abuse videos, so the material can be recognised and removed quickly if it starts to spread online. But those fingerprints can break if the video is cropped or edited (a second sketch at the end of this article illustrates why).

Google's system creates a watermark that is effectively invisible, so people can use its software to find out instantly whether a picture is real or machine-generated.

Pushmeet Kohli, head of research at DeepMind, told the BBC that the system alters images so subtly that "it appears unchanged to you and me, to a human".

Unlike hashing, he said, the company's software can still identify the watermark even if the image is cropped or edited afterwards. However the image is modified - whether the colours are changed, the contrast adjusted or the picture resized - DeepMind's tool would still recognise it as AI-generated, he said.

He cautioned, though, that this is an experimental launch, and that the company needs people to use the system to learn more about how robust it is.

In July, Google was one of seven leading AI companies to sign a voluntary agreement in the US on the safe development and use of AI, which included ensuring that people can identify computer-generated images through watermarks.

Mr Kohli said the launch reflected those commitments, but Claire Leibowicz of the campaign group Partnership on AI argued that more coordination between businesses is needed. She said standardisation would benefit the field, and that the industry needs to keep track of which methods are being tried and to get better reporting on which of them are working, and to what end.
She noted that different institutions are pursuing different methods, which adds complexity, because the wider information ecosystem ends up with different ways of decoding and identifying which content is AI-generated.

Microsoft and Amazon are among the other big tech firms that have, like Google, pledged to watermark some AI-generated material.

Meta has published a research paper on its as-yet-unreleased video generator Make-A-Video, which says watermarks will be added to generated videos to ensure transparency about works created by AI.

China banned AI-generated images without watermarks earlier this year, with firms such as Alibaba applying them to creations made with the text-to-image tool of its cloud division, Tongyi Wanxiang.
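DeepMind has not published the details of how SynthID embeds its signal, so the snippet below is only a minimal sketch of the general idea the article describes: a toy least-significant-bit mark that changes each affected pixel by at most one intensity level, leaving the image looking unchanged to the eye while remaining trivially readable by software. Unlike SynthID, a naive mark like this would not survive cropping, resizing or recompression. The bit pattern, function names and image sizes are illustrative assumptions, not anything taken from Google.

```python
import numpy as np

# Hypothetical tag for illustration only; a real system would embed a far more
# structured, robust signal across the whole image.
WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> np.ndarray:
    """Write `bits` into the least-significant bits of the first len(bits) pixels."""
    marked = image.copy().ravel()  # flat view of a copy, so the original is untouched
    marked[: bits.size] = (marked[: bits.size] & 0xFE) | bits  # each pixel changes by at most 1
    return marked.reshape(image.shape)

def detect(image: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> bool:
    """Check whether the expected bit pattern is present in the same pixel positions."""
    return np.array_equal(image.ravel()[: bits.size] & 1, bits)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in grayscale image
    marked = embed(original)
    print("largest pixel change:", int(np.abs(marked.astype(int) - original.astype(int)).max()))  # at most 1
    print("watermark found in marked image:", detect(marked))    # True
    print("watermark found in original:", detect(original))      # almost always False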
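The "fingerprinting" the article contrasts with SynthID also refers to proprietary systems whose details are not public. The sketch below uses a generic average hash, a common perceptual-hashing idea, purely to illustrate why such fingerprints tolerate a mild edit such as a brightness change but can break when a frame is cropped. All names and parameters here are assumptions made for the example.

```python
import numpy as np

def average_hash(frame: np.ndarray, size: int = 8) -> int:
    """Average a grayscale frame into size x size blocks and set one bit per
    block depending on whether it is brighter than the overall block mean."""
    h, w = frame.shape
    trimmed = frame[: h - h % size, : w - w % size].astype(float)  # crop to a multiple of `size`
    blocks = trimmed.reshape(size, trimmed.shape[0] // size,
                             size, trimmed.shape[1] // size).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).ravel()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)  # stand-in video frame
    fingerprint = average_hash(frame)

    brighter = np.clip(frame.astype(int) + 10, 0, 255).astype(np.uint8)  # mild global edit
    cropped = frame[10:, 20:]  # cropping shifts which pixels fall into each block

    print("bits changed by brightening:", hamming_distance(fingerprint, average_hash(brighter)))  # few
    print("bits changed by cropping:", hamming_distance(fingerprint, average_hash(cropped)))      # many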
