About 34% of firms using or implementing AI security tools: Report

A recent survey by Gartner, Inc. found that 34% of organizations are already using or implementing artificial intelligence (AI) application security tools to mitigate the risks posed by generative AI (GenAI).

In addition, more than half (56%) of respondents said they are exploring such solutions to strengthen their AI security posture.

The Gartner Peer Community survey was conducted from April 1 to April 7, 2023, among 150 IT and information security leaders at organizations that are using, planning to use, or exploring GenAI or foundation models.

Among the other notable findings: 26% of respondents said they are implementing or using privacy-enhancing technologies (PETs), 25% are doing the same for ModelOps, and 24% for model monitoring.

Avivah Litan, Distinguished VP Analyst at Gartner, emphasized the importance of an enterprise-wide strategy for AI TRiSM (trust, risk, and security management). She stated, "IT and security and risk management leaders must, in addition to implementing security tools, consider supporting an enterprise-wide strategy for AI TRiSM. AI TRiSM manages data and process flows between users and companies who host generative AI foundation models, and must be a continuous effort, not a one-off exercise to continuously protect an organization." 

While 93% of the IT and security leaders surveyed reported some level of involvement in their organization's GenAI security and risk management efforts, only 24% said they own this responsibility. Among respondents who do not own it themselves, 44% said ultimate responsibility for GenAI security rests with IT, while 20% pointed to their organization's governance, risk, and compliance (GRC) department.

The survey also shed light on the top risks associated with GenAI, which respondents see as significant and constantly evolving: 58% are concerned about incorrect or biased outputs, and 57% about secrets leaked in AI-generated code.
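To make the leaked-secrets risk concrete, below is a minimal, illustrative Python sketch of the kind of pattern-based scan a team might run on AI-generated code before accepting it. The patterns, names, and sample snippet here are hypothetical; production secret scanners rely on far larger rule sets.

```python
import re

# Illustrative detection rules only; real scanners ship hundreds of patterns.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hard-coded credential": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan_generated_code(code: str) -> list[str]:
    """Return the names of any secret patterns found in AI-generated code."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(code)]

if __name__ == "__main__":
    # Pretend this snippet came back from a GenAI coding assistant.
    generated = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\nprint("deploying")'
    findings = scan_generated_code(generated)
    if findings:
        print("Rejected generated code; possible secrets:", ", ".join(findings))
```

A check like this would be only one layer of the continuous AI TRiSM effort Litan describes, not a substitute for it.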

Litan warned, "Organizations that don't manage AI risk will witness their models not performing as intended and, in the worst case, can cause human or property damage. This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical, or biased outcomes. AI malperformance can also cause organizations to make poor business decisions."
