Customers lack trust in artificial intelligence (AI) and are not extracting its full potential, according to a study by customer engagement and digital process automation firm Pegasystems Inc.
The study, which surveyed more than 5,000 consumers on AI, morality, ethical behaviour, and empathy, was conducted by research firm Savanta.
Sixty-five per cent of respondents said they do not trust that companies providing AI have their best interests at heart. Sixty-eight per cent of respondents said that organisations have an obligation to do what is morally right.
Additionally, 27% of respondents cited the “rise of robots and enslavement of humanity” as a concern.
“Our study found that only 25% of consumers would trust a decision made by an AI system over that of a person regarding their qualification for a bank loan,” said Rob Walker, vice-president, decisioning and analytics, Pegasystems.
Walker also pointed out that consumers likely prefer speaking to people because there is a greater degree of trust.
The report stressed the need for AI systems to help companies make ethical decisions. For example, if a bank offers a loan to a customer, the AI tool should be able to determine whether or not it is the right thing to do.
Only nine per cent of respondents said they were “very comfortable” with the idea of AI-based decisions, while more than half believed that today’s AI systems are unable to make unbiased decisions. Fifty-three per cent also said that AI would make decisions reflecting the biases of the person who built its infrastructure and code.
Twelve per cent believed that AI can tell the difference between good and evil, and the same percentage said they had previously interacted with a machine capable of showing empathy.
To instil more trust in AI systems, the report stated, companies need to combine AI-based insights with human-supplied ethical considerations.