71% of organisations state that AI ethics and responsible AI are not a core part of their operational strategies

Demand for AI products and tools is on the rise according to the results of a new study, with more than half of respondents (52%) saying these are a higher priority than they were 12 months ago.

Yet the vast majority (71%) have not made ethical and responsible AI a core part of their strategies.

Conducted by Corinium and sponsored by FICO, the report surveyed 100 C-level AI leaders in the financial services sector to examine where enterprises stand in implementing their AI strategy, approaches to AI ethics and governance, and their outlook on the future of AI initiatives.  

“Even though we have seen a growth in demand for AI-driven financial products and offerings, many financial services firms have yet to develop and hold themselves accountable to responsible AI standards,” said Scott Zoldi, chief analytics officer at FICO. 

“Beyond fulfilling an ethical responsibility to their customers, implementing standards for responsible AI that is explainable, auditable, and ethical helps to improve brand loyalty, reduces reputational risks, and better enables regulatory standards to be met.”

Brand protection 

Currently, only 8% of respondents say that their AI strategies are fully mature with model development standards consistently scaled across their organisations.

The study found that the number one benefit of responsible AI was improving the customer experience, followed by creating new revenue opportunities (69%) and protecting brand equity/minimising reputational risk (63%).

“In order to succeed in AI digital transformation, it is imperative that organisations are committed to strong ethical AI practice and governance,” said Chun Schiros, SVP and head of Enterprise Data Science Group at Regions Bank. 

“As we increase the use of AI applications, continuous learning and considerations of fairness and explainability are crucial.”