NIST releases framework to boost risk-free adoption of AI

The National Institute of Standards and Technology (NIST), the US federal agency responsible for developing technology standards, has released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), which companies can use to develop and deploy AI systems ethically while managing the associated risks.

Developed in collaboration with private and public sector organisations, the AI RMF is voluntary, meaning its use is not binding on any company.

However, NIST director Laurie E. Locascio believes that it can help large and small organisations across sectors manage their AI-related risks more effectively.

The framework is part of NIST’s larger goal of “cultivating trust” in AI technologies within all communities, added Locascio. 

“It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all,” Don Graves, Deputy Commerce Secretary, said in a statement. 

NIST has divided the AI RMF into two parts. The first focuses on helping organisations identify the characteristics of trustworthy AI systems, while the second helps them apply a set of functions to govern, map, measure and manage AI systems and avoid risky practices. NIST says these functions can be applied to specific use cases and at any stage of an AI system's life cycle.

Though the application of AI and its various self-learning technologies has grown in recent times, many have raised concerns that AI models built without ethical safeguards can reinforce stereotypes and worsen discrimination against marginalised communities.

Amid general mistrust in AI, several US cities have banned the use of facial recognition technology by law enforcement agencies to prevent the misuse of AI-based technologies.

Several big tech firms, such as Google, have also committed not to share their AI technology with militaries or use it for weapons development.

Last year, OpenAI, the research firm behind ChatGPT and DALL-E, also found in its own tests that DALL-E 2 was reinforcing racial and gender stereotypes, generating images of white men by default and overly sexualised images of women, according to a Wired report published last May.

The framework could also help protect organisations from potential fines, as many jurisdictions have begun regulating AI technologies.

For instance, the EU is working on the Artificial Intelligence Act, which will create a legal framework to regulate AI applications, products and services.

