Taming AI algorithms — finance without prejudice

When the computer says ‘no’, is it doing so for the right reasons?

Human biases all too readily creep into AI technology

That is the question increasingly being asked by financial regulators, who are concerned about possible bias in automated decision-making.

With the Bank of England and Financial Conduct Authority (FCA) both highlighting that new technologies could negatively affect lending decisions and the Competition and Markets Authority (CMA) scrutinising the impact of algorithms on competition, this is a topic that’s set to have extensive governance implications. So much so that the European Banking Authority (EBA) is questioning whether the use of artificial intelligence (AI) in financial services is “socially beneficial”.

However, as consumers increasingly expect loan and mortgage approvals at the click of a button, and with some estimates suggesting that AI applications could save firms more than $400 billion, banks have plenty of incentive to adopt the technology with alacrity.

But if bias in financial decisions is “the biggest risk arising from the use of data-driven technology”, as the findings of the Centre for Data Ethics and Innovation’s AI Barometer report suggest, then what is the answer?

Algorithmovigilance.

In other words, financial services firms can systematically monitor the algorithms they use to evaluate customer behaviour and to carry out credit referencing, anti-money laundering checks and fraud detection, as…
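In practice, such monitoring often takes the form of recurring statistical audits of decision outcomes. The sketch below is a minimal illustration only, assuming a hypothetical batch of loan decisions and using the widely cited “four-fifths” rule of thumb as a screening threshold; it is not a method prescribed by any of the regulators mentioned above.

```python
# A minimal sketch of "algorithmovigilance": auditing an automated lending
# model's decisions for disparate impact across applicant groups.
# The 0.8 threshold (the "four-fifths rule") and the sample data are
# illustrative assumptions, not regulatory requirements.
from collections import defaultdict

FOUR_FIFTHS_THRESHOLD = 0.8  # common rule of thumb for adverse-impact screening


def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> {group: approval rate}."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}


def disparate_impact_alerts(decisions):
    """Flag groups whose approval rate is below 80% of the best group's rate."""
    rates = approval_rates(decisions)
    benchmark = max(rates.values())
    return {g: round(r / benchmark, 2)
            for g, r in rates.items()
            if r / benchmark < FOUR_FIFTHS_THRESHOLD}


# Hypothetical batch of loan decisions: (applicant group, approved?)
batch = [("A", True)] * 80 + [("A", False)] * 20 + \
        [("B", True)] * 55 + [("B", False)] * 45

print(approval_rates(batch))           # {'A': 0.8, 'B': 0.55}
print(disparate_impact_alerts(batch))  # {'B': 0.69} -> below the 0.8 threshold
```

Run on a regular schedule against live decision logs, a check like this turns vague concerns about bias into a concrete, auditable metric that can trigger human review before a regulator comes asking.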