After criticism from experts and AI companies, the government has issued a fresh advisory in which the Ministry of Electronics and IT (MeitY) has removed the earlier requirement of seeking the government's approval before launching any untested or unreliable AI models in the country.
The platforms have been advised to label their experimental AI models and software as 'under testing' before releasing them to the public. They have also been advised to implement a consent mechanism that informs users about the possibility of erroneous output from the generative AI model.
On March 1, MeitY had released an advisory for all intermediaries that use artificial intelligence (AI) models, software, or algorithms. The advisory asked these intermediaries to obtain permission from the government before making their platforms available to the public. It also required these platforms to label their experimental models as 'under testing'.
A group of startups voiced its discontent over the screening of large language models, calling the move regressive. The advisory was released after various incidents of biased content and misinformation were reported in the experimental models of generative AI platforms.
The government has also instructed intermediaries to find a way to embed metadata or a unique identification code in all synthetic content created on their platforms, to enable identification of the source of any misinformation or deepfakes. The earlier advisory had asked companies to submit an action-taken report within 15 days; the revised advisory drops that deadline and instead asks platforms to comply "with immediate effect".
“Further, in case any changes are made by a user, the metadata should be so configured to enable identification of such user or computer resource that has effected such change,” the revised advisory said.
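The advisory does not prescribe how such metadata should be embedded or structured. As a minimal sketch of one possible approach, assuming hypothetical helper names and a simple JSON-style record (not anything specified by MeitY), a platform might attach a unique identifier and an edit trail to each piece of generated content:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone


def tag_synthetic_content(content: str, model_name: str) -> dict:
    """Attach illustrative provenance metadata to freshly generated content."""
    return {
        "content": content,
        "metadata": {
            "content_id": str(uuid.uuid4()),          # unique identifier for tracing the source
            "origin_model": model_name,               # which model produced the content
            "created_at": datetime.now(timezone.utc).isoformat(),
            "content_hash": hashlib.sha256(content.encode()).hexdigest(),
            "modifications": [],                      # later user edits are appended here
        },
    }


def record_user_edit(record: dict, new_content: str, user_id: str) -> dict:
    """Log a user modification so the editing user or resource can be identified."""
    record["metadata"]["modifications"].append({
        "edited_by": user_id,
        "edited_at": datetime.now(timezone.utc).isoformat(),
        "new_hash": hashlib.sha256(new_content.encode()).hexdigest(),
    })
    record["content"] = new_content
    return record


if __name__ == "__main__":
    record = tag_synthetic_content("AI-generated caption", "example-model-v1")
    record = record_user_edit(record, "edited caption", "user-42")
    print(json.dumps(record["metadata"], indent=2))
```

This is only an illustration of the idea of traceable provenance metadata; actual implementations could rely on watermarking or content-credential standards rather than a plain record like this.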