AI Standards

WASHINGTON: The Biden administration said on Tuesday (Dec 19) it was taking the first step toward writing key standards and guidance for the safe deployment of generative artificial intelligence, and for how to test and safeguard these systems.

The Commerce Department's National Institute of Standards and Technology (NIST) said it was seeking public input until Feb 2 on conducting the key testing needed to ensure the safety of AI systems.

Commerce Secretary Gina Raimondo said the effort was prompted by President Joe Biden's October executive order on AI.

The effort aims to develop industry standards around AI safety, security, and trust that will keep America at the forefront of the responsible development and use of this rapidly evolving technology.

The agency is developing guidelines for evaluating AI, facilitating the development of standards, and providing testing environments for assessing AI systems. The request for input, directed at AI companies and the wider public, covers managing the risks of generative AI and reducing the spread of AI-generated misinformation.

Generative AI, which can create text, images, and videos in response to open-ended prompts, has stirred excitement as well as fears in recent months. There are concerns it could make some jobs obsolete, upend elections, and potentially overpower humans, with catastrophic consequences.

Biden's order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.

NIST is developing guidelines for that testing, including where so-called "red-teaming" would be most beneficial for AI risk assessment and management, and establishing best practices for doing so.

External red-teaming has been used for years in cybersecurity to identify new risks; the term refers to Cold War simulations in which the adversary was labelled the "red team".

In August, the first US public assessment "red-teaming" event was held during a major cybersecurity conference, organised by AI Village, SeedAI, and Humane Intelligence.

The White House said thousands of participants attempted to see if they "could induce undesirable outcomes or failures in these systems", with the aim of better understanding the risks the systems pose. The event demonstrated how external red-teaming can be an effective way to identify novel AI risks, the White House said.