Google recently introduced a new tool designed to watermark and identify AI-generated images. The technology embeds a digital watermark directly into the pixels of an image, imperceptible to the human eye but detectable by software for verification. The tool, called 'SynthID', was launched in beta by Google DeepMind, Google's AI research division, in partnership with Google Cloud.
SynthID is currently available to a limited number of users of Vertex AI, Google's platform for building AI applications and models. It is designed to work with Imagen, one of Google's latest text-to-image models, which generates highly realistic images from text prompts.
While generative AI has immense creative potential, it also poses risks, including the spread of misinformation, whether intentional or not. Google DeepMind emphasised that identifying AI-generated content empowers people to recognise when they are interacting with generated media and helps combat misinformation.
Importantly, SynthID remains effective even if modifications are made to the images, such as applying filters, altering colors, or compressing them.
To create SynthID, DeepMind trained two AI models on a diverse range of images: one to apply the watermark and another to identify it. Notably, SynthID cannot definitively confirm whether an image is watermarked. Instead, it distinguishes between images that likely carry a watermark and those that probably do not.
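Google has not published SynthID's internals or its API, but the reporting above describes a banded, probabilistic verdict rather than a hard yes/no. As a rough illustration only, the toy sketch below maps a hypothetical detector confidence score to three bands; the function name and thresholds are invented, not SynthID's.

```python
# Hypothetical sketch, NOT the real SynthID API: a detection model
# typically produces a confidence score, and the tool reports a banded
# verdict instead of a definitive answer. Thresholds are illustrative.

def watermark_verdict(score: float) -> str:
    """Map a detector confidence score in [0, 1] to a banded verdict."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.9:
        return "watermark detected"
    if score >= 0.5:
        return "possibly watermarked"
    return "watermark not detected"

if __name__ == "__main__":
    # A high score suggests a watermark is present; a low score suggests not.
    for s in (0.97, 0.6, 0.1):
        print(f"{s} -> {watermark_verdict(s)}")
```

The middle band is the key design choice: because edits such as filtering or compression can weaken the signal, reporting "possibly watermarked" is more honest than forcing a binary answer.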
While it is not foolproof against extreme image alterations, Google DeepMind believes SynthID represents a promising step towards responsible use of AI-generated content. The tool may also evolve to handle other types of AI-generated content, including audio, video, and text.
Google has plans to integrate this tool into more of its products and intends to make it available to third-party users in the near future. This initiative aims to enhance accountability and transparency in the era of AI-generated content.