Exploring the relevance of ethics in intelligent systems used in the financial services industry
Artificial Intelligence (AI) has given the banking and financial services industry entirely new ways of delivering its services. Consequently, we are witnessing the dramatic rise of the fintech industry today. One would assume this improves efficiency and lessens the probability of human folly. However, is that truly the reality? To answer that, let's first understand AI.
AI and its benefits
AI is the discipline of creating computers capable of intelligent behaviour by mimicking human cognitive functions such as learning, problem-solving and decision-making. Its benefits are many: reduced costs, increased vigilance, leaner organisations, real-time delivery and more.
Industrial Economist (IE) has written on AI and the technology behind it in previous issues: 'Bots' to 'Chatbots' (November 2019) and We'll All Be Impacted By AI (July 2019).
However, risks and challenges often accompany change and advancement. Remember the credit card's arrival in the 1950s, which revolutionised shopping? And do you also remember the subsequent consumer-debt culture and the shift of the US towards a dominantly consumerist economy? Similarly, securitisation helped the capital markets in the 1980s and yet fuelled the subprime crisis.
Risks of AI
Financial market risk: Financial markets may face stability issues when several market participants apply AI at the same time. For example, if AI-based traders outperform other traders, more traders may adopt similar technologies; the resulting herd of near-identical trading strategies may then be vulnerable to manipulation of market prices.
Consumer privacy risk: Machines learning from data is the motto of AI. AI systems continuously collect and analyse data, and the amount of personal data being recorded and stored is therefore colossal. Such personal and sensitive data is exposed to confidentiality and integrity risks.
Consumer bias risk: Data collected or computed by AI models may unintentionally encode the biases and prejudices to which humans are so often susceptible. For instance, if you train a computer system to predict which convicted felons will re-offend, you are using inputs from a criminal justice system biased against low-income and minority communities, so its outputs will likely reproduce that bias. Similar issues may arise when predicting loan defaulters: rural and small businesses may be discriminated against. Say a bank employs natural language processing to evaluate loan application forms and make credit decisions; a farmer who makes spelling mistakes on the application form may be rejected for reasons that have nothing to do with creditworthiness.
Thus, the data sources on which these new-age systems thrive are prone to issues like misrepresentation; historical prejudices and the opacity of AI algorithms further exacerbate the problem.
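The mechanism of inherited bias can be illustrated with a deliberately tiny sketch. The data and feature names below are entirely hypothetical: a naive scoring rule trained on historical lending outcomes simply reproduces whatever skew those outcomes already contain.

```python
# Toy illustration (hypothetical data): a scorer built from historical
# repayment records inherits the skew present in those records.

historical_loans = [
    # (area_type, repaid) -- past lending outcomes, skewed against rural areas
    ("urban", True), ("urban", True), ("urban", False),
    ("rural", False), ("rural", False), ("rural", True),
]

def historical_repayment_rate(records, area_type):
    """Share of past borrowers from this area type who repaid."""
    outcomes = [repaid for area, repaid in records if area == area_type]
    return sum(outcomes) / len(outcomes)

# A model that scores new applicants by their area's historical repayment
# rate penalises every rural applicant, regardless of individual merit:
# here the urban rate (2/3) is double the rural rate (1/3).
print(historical_repayment_rate(historical_loans, "urban"))
print(historical_repayment_rate(historical_loans, "rural"))
```

Nothing in the code mentions income or community, yet the "area" feature acts as a proxy for them, which is exactly how the discrimination described above enters a system whose designers never intended it.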
Risk of market loopholes: Trading algorithms involve a degree of unpredictability. When they move financial markets, it becomes harder to match causes to effects. In addition, if AI is widely used in high-frequency trading, large volumes of transactions may execute at the same time, increasing market volatility.
This is supported by research undertaken in the context of gaming. An AI system playing the Atari exploration game Montezuma's Revenge found a bug that let it force a key to reappear, allowing it to earn a higher score by exploiting the glitch. Systems may thus have bugs, unintended behaviour, or behaviour that humans don't fully understand; and a sufficiently powerful AI system might act unpredictably, pursuing its goals through an unexpected avenue.
Risk of market concentration: Emerging technologies provide a data-rich environment, and with growing work in behavioural science, consumer profiling and data analytics, data increasingly shapes our thoughts and choices. Concentration of such technologies in the hands of a small number of players, as they master the art of ruthlessly collecting and exploiting data points, would lead to monopolistic or oligopolistic markets and the erosion of freedom of choice.
In closing…
The financial services sector is deeply interwoven with the very fabric of our economies and societies, and while AI brings many benefits, the consequences of a single folly would be manifold and widespread. In the face of this probable doom and disorientation, we must recognise the values that empower us (privacy, accountability, responsibility, transparency and free will) and seek to integrate them with innovation, weaving meaning into our narrative, lest we fall prey to the expediency and profitability targets of a few innovators and investors.