
The Impact of Artificial Intelligence on Discrimination in Banking

AMSTERDAM — Artificial intelligence has a clear racial bias problem. From facial recognition technology that misidentifies Black people and other minorities more often, to speech recognition software that struggles with regional accents, AI has a lot of ground to cover when it comes to eliminating discrimination. And the problem of amplifying existing bias can be even more severe in banking and financial services.

According to Deloitte, AI systems can only be as objective as the data they are supplied with: if a dataset is incomplete or unrepresentative, the system's objectivity suffers, and biases from the teams that build such algorithms may be perpetuated.

Nabil Manji, head of crypto and Web3 at Worldpay by FIS, emphasized that the quality of an AI product depends heavily on the material used to train it. "The two determinants of how good an AI product turns out are the data it is provided with and the quality of the underlying language model. For this reason, several companies like Reddit have publicly stated that they will not allow any scraping of data and that anyone has to pay for it," Manji told CNBC. In financial services, he noted, "a lot of the back-end data systems are fragmented in different languages and formats, which makes it difficult to consolidate and harmonize them. This implies that AI-driven products in the financial services space are likely to be considerably less effective than in other industries where there is more data uniformity and usage of more up-to-date systems."

Manji suggested that blockchain, or distributed ledger technology, could offer better visibility into the siloed data buried in conventional banks' messy systems. He added, however, that banks, being heavily regulated and slow-moving institutions, are unlikely to match the speed of tech companies in adopting new AI tools. "Microsoft and Google, who have been seen as drivers of progress over the past decade or two, can't keep up with that speed. And then consider financial services. Banks aren't renowned for being speedy," Manji said.

Rumman Chowdhury, formerly Twitter's head of machine learning ethics, transparency and accountability, said lending is a prime example of how AI systems' bias against marginalized communities can play out. Speaking on a panel at Money20/20 in Amsterdam, she described how, in 1930s Chicago, banks denied loans to predominantly Black neighborhoods through the practice of "redlining," under which a property's creditworthiness was judged by the demographic makeup of its neighborhood.

Angle Bush, founder of Black Women in Artificial Intelligence, has found instances where AI systems are used to make loan approval decisions, potentially reinforcing existing racial and gender disparities because of the historical data used to train the algorithms.
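The mechanism Deloitte and Bush describe, a model faithfully learning the prejudice baked into its training labels, is easy to demonstrate on toy data. The Python sketch below is purely illustrative: the data is synthetic, the "redlined" flag and every other name is hypothetical, and it describes no real lender's system. It trains a model on historical approvals that penalized one neighborhood, then compares the resulting approval rates using the disparate impact ratio, where values below roughly 0.8 are a common red flag.

```python
# Illustrative sketch only: synthetic data, hypothetical column names.
# Shows how a model trained on historically biased approvals can
# reproduce that bias, measured via the disparate impact ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: income plus a neighborhood flag that, as in
# redlining, historically drove denials independent of income. In real
# systems the flag usually enters through proxies such as ZIP code;
# it is explicit here for clarity.
income = rng.normal(50, 15, n)
redlined = rng.integers(0, 2, n)  # 1 = historically redlined area
# Historical label: approval depended on income AND the neighborhood flag.
approved = ((income + rng.normal(0, 5, n) - 12 * redlined) > 45).astype(int)

X = np.column_stack([income, redlined])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Disparate impact ratio: approval rate of the disadvantaged group divided
# by that of the advantaged group (values below ~0.8 are a common red flag).
rate_redlined = pred[redlined == 1].mean()
rate_other = pred[redlined == 0].mean()
print(f"approval rates: {rate_redlined:.2f} vs {rate_other:.2f}, "
      f"ratio = {rate_redlined / rate_other:.2f}")
```

The model is never instructed to discriminate; it simply reproduces the pattern present in the historical labels, which is exactly the dynamic redlining left behind in real lending data.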
Frost Li, a longtime AI and machine learning developer, pointed to the problems posed by the "personalization" dimension of AI integration. According to Li, when AI is applied to banking it can be hard to identify where a problematic bias enters, because the underlying calculations are so complex. Li illustrated how this can play out differently for people from different backgrounds: a graduate of Tokyo University might be unable to get a credit card from certain banks, even while working for a high-profile company like Google, whereas someone from a local community college would face no such difficulty.

Niklas Guske, chief operating officer at Taktile, noted that generative AI is generally not used to create credit scores or to risk-score consumers; its strength lies in pre-processing unstructured data, such as classifying transactions, to be fed into a traditional underwriting model, a pattern sketched in the code example at the end of this article.

Proving that AI-based discrimination has actually occurred is difficult. Kim Smouter, director of the European Network Against Racism, said: "One of the difficulties in the mass deployment of AI is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination." He added: "Individuals have little knowledge of how AI systems work and that their individual case may, in fact, be the tip of a systems-wide iceberg. As a result, it is also hard to detect specific cases where something has gone wrong."

Smouter cited the Dutch child welfare scandal, in which thousands of benefit claims were wrongly branded as fraudulent. A 2020 report found that victims were "treated with an institutional bias," which led the Dutch government to resign. "This demonstrates how swiftly such dysfunctions can spread and how hard it is to demonstrate them and get redress once they are discovered, resulting in often irreversible damage," Smouter said.

Chowdhury has suggested that a global regulatory body, along the lines of the United Nations, be established to address the ethical and moral questions raised by AI. Among the concerns voiced by technologists and ethicists are misinformation, bias in AI algorithms, and the "hallucinations" generated by ChatGPT-like tools. "I worry that AI is creating a post-truth world where nothing online can be trusted, and how can we attain reliable information?" Chowdhury asked.

Because regulation such as the EU's AI Act will take time to implement, some worry it won't arrive quickly enough. Smouter calls for greater transparency and accountability in algorithms: non-experts should be able to assess them for themselves, testing should be evidenced, independent complaint procedures, audits and reporting should be in place, and racialized communities should be involved in designing the technology. The AI Act is set to be enforced in two years' time, though Smouter hopes that period can be shortened so transparency and accountability sit at the heart of AI innovation.
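To make Guske's division of labor concrete, here is a hypothetical sketch of the pipeline he describes. Every name here is an invented placeholder: in practice a generative model or fine-tuned classifier would stand behind classify_transaction, and only the resulting category shares, not the raw transaction text, would reach the traditional underwriting model.

```python
# Hypothetical sketch of the pattern Guske describes: a language model
# pre-processes unstructured data (here, raw transaction descriptions)
# into structured features, which a traditional underwriting model consumes.
# classify_transaction is a stand-in; in practice a generative model or
# fine-tuned classifier would assign the category.
from collections import Counter

CATEGORIES = ("rent", "salary", "gambling", "groceries", "other")

def classify_transaction(description: str) -> str:
    """Placeholder for an LLM/classifier call; keyword rules stand in here."""
    d = description.lower()
    if "rent" in d:
        return "rent"
    if "payroll" in d or "salary" in d:
        return "salary"
    if "casino" in d or "bet" in d:
        return "gambling"
    if "grocery" in d or "supermarket" in d:
        return "groceries"
    return "other"

def build_features(descriptions: list[str]) -> dict[str, float]:
    """Turn raw transaction text into category shares for an underwriting model."""
    counts = Counter(classify_transaction(d) for d in descriptions)
    total = sum(counts.values()) or 1
    return {c: counts[c] / total for c in CATEGORIES}

transactions = ["ACME PAYROLL MAY", "CITY CASINO", "GREEN GROCERY", "RENT MAY"]
features = build_features(transactions)
print(features)  # these shares, not the raw text, feed the traditional model
```

The appeal of this split is that the component actually making the credit decision remains a conventional, auditable model, while the generative AI is confined to data preparation.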
