
Major tech companies partner with US government for ethical AI innovation

As part of the agreement, the tech companies have committed to subjecting their AI systems to security testing by both internal and external experts before release, a step intended to identify potential security vulnerabilities and make AI deployments safer and more reliable.

Edited By: Vishal Upadhyay, New Delhi | Published on: July 22, 2023 16:58 IST
Image Source: Pexels

Seven major artificial intelligence (AI) tech companies, including Google, OpenAI, and Meta, have come together to forge an agreement with the Joe Biden administration to address the risks associated with AI technology. The collective effort aims to implement new measures and guardrails to ensure responsible and safe AI innovation.

According to IANS, the companies involved in the deal are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The proposed measures include conducting security testing of AI systems and making the results of those tests publicly available, a move towards transparency that is crucial to enhancing accountability and building trust with users and the wider public.


During a meeting at the White House, President Biden emphasized the importance of responsible AI innovation. He acknowledged the significant impact AI will have on people's lives worldwide and stressed the critical role of those guiding this innovation responsibly and with safety in mind.

"AI should benefit the whole of society. For that to happen, these powerful new technologies need to be built and deployed responsibly," said Nick Clegg, Meta's president of global affairs.

"As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society," he added.

Moreover, the companies will implement watermarks in their AI-generated content, enabling users to identify such content more easily. They will also publish regular public reports on their AI systems' capabilities and limitations, contributing to a more transparent AI landscape.


In addition to security testing, the companies will undertake research to address risks associated with AI, such as bias, discrimination, and privacy invasion. This research is crucial to ensure AI technologies are developed responsibly and ethically.

OpenAI highlighted that the watermarking commitment will require companies to develop tools or APIs to determine whether specific content was generated using their AI systems. Google had already committed to deploying similar disclosures earlier this year.
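The agreement does not spell out what such a tool or API would look like. The snippet below is only a minimal illustrative sketch, not any company's actual interface: it assumes a hypothetical REST endpoint, a placeholder API key, and an invented response field, and shows how a client might ask a provider whether a piece of text carries its watermark.

```python
# Hypothetical sketch only: no provider has published this endpoint or schema.
# It illustrates the kind of "was this generated by our model?" check the
# commitments describe, assuming a fictional provenance-checking REST API.
import requests

API_URL = "https://api.example-ai-provider.com/v1/provenance/check"  # assumed, not real
API_KEY = "YOUR_API_KEY"  # assumed placeholder credential


def check_ai_provenance(text: str) -> bool:
    """Ask the (hypothetical) provider whether `text` carries its watermark."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"content": text},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"watermark_detected": true, "confidence": 0.97}
    result = response.json()
    return bool(result.get("watermark_detected", False))


if __name__ == "__main__":
    sample = "A paragraph whose origin we want to verify."
    print("Watermark detected by provider:", check_ai_provenance(sample))
```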

Furthermore, Meta recently announced its decision to open-source its large language model, Llama 2, making it freely available to researchers. 

Inputs from IANS
