AMSTERDAM — Artificial intelligence has a racial bias problem. AI systems have misidentified the faces of Black people and members of other minority groups and struggled to distinguish voices with different accents, problems the technology has yet to solve. And in banking and financial services, the consequences of amplifying such biases can be severe. Deloitte notes that AI systems are ultimately only as good as the data fed into them: incomplete or unrepresentative datasets can limit AI's objectivity, while biases in the development teams that build and train those systems can perpetuate the cycle of discrimination.
Nabil Manji, head of crypto and Web3 at Worldpay by FIS, explained that the success of AI products depends largely on the quality of the data used to train them. In an interview with CNBC, Manji said, "The strength of these products is really based on two factors. One is the data available to it, and the second is how powerful the language model is. For example, Reddit has recently said that companies must pay to access their data, so they will not be allowing scraping." Manji added that within financial services, back-end data is not harmonized, which could hinder the effectiveness of AI-driven products: "The data is in multiple, different languages and formats, which prevents it from being consolidated. That is why these products may not be as successful in financial services as they may be in other industries."
Manji suggested that blockchain, or distributed ledger technology, could be a useful way to get a clearer view of the disparate data buried in the cluttered systems of traditional banks. He cautioned, however, that banks, as heavily regulated and slow-moving institutions, are unlikely to match the speed of nimbler tech companies in adopting new artificial intelligence tools. "Microsoft and Google, who have been perceived as driving innovation in the last decade or two, are unable to compete with that velocity," Manji said. "When you consider financial services, banks are not known for their speed."
Rumman Chowdhury, Twitter's former head of machine learning ethics, transparency and accountability, said lending is a prime example of how prejudice against marginalized groups can surface in AI systems. Speaking on a panel at Money20/20 in Amsterdam, Chowdhury said, "Algorithmic discrimination is actually very tangible in lending. Chicago had a history of literally denying those [loans] to primarily Black neighborhoods." In the 1930s, the city was known for redlining, a discriminatory practice in which the creditworthiness of properties was determined largely by the racial demographics of a given neighborhood. "There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all the districts that were primarily African American, and not give them loans," she explained.
Angle Bush, founder of Black Women in Artificial Intelligence, an initiative to empower Black women in the AI sector, told CNBC that when AI systems are used for loan approval decisions, they risk replicating the biases already present in the historical data used to train the algorithms. "This can result in automatic loan denials for individuals from marginalized demographics, thus reinforcing gender or racial disparities," Bush added.
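The mechanism Bush describes can be shown with a toy model. In the sketch below, written purely for demonstration, a standard classifier is trained on synthetic loan decisions into which a historical penalty against one group has been baked; every feature name and number is invented, and no real lender's data or model is implied.

```python
# Toy demonstration only: every feature, number, and name below is invented.
# It shows how a model trained on biased historical approvals reproduces
# that bias, which is the mechanism Bush describes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: an income figure and a 0/1 flag marking membership
# in a hypothetical marginalized group.
income = rng.normal(50_000, 15_000, n)
group = rng.integers(0, 2, n)

# Invented biased history: equally creditworthy applicants from the
# marginalized group were approved 30% less often in the past.
creditworthy = income + rng.normal(0, 5_000, n) > 45_000
past_approval = creditworthy & (rng.random(n) > 0.3 * group)

# Train a standard classifier on those historical decisions.
X = np.column_stack([income / 1e5, group])
model = LogisticRegression().fit(X, past_approval)

# Two applicants with identical incomes get different approval odds:
# the model has faithfully learned the historical penalty.
applicants = np.array([[0.55, 0], [0.55, 1]])
print(model.predict_proba(applicants)[:, 1])
```

In practice, a group flag is rarely an explicit input; as the next example suggests, ordinary-looking proxy features can carry the same signal.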
Frost Li, a developer with more than a decade of experience in AI and machine learning, told CNBC that the "personalization" dimension of AI integration can also be a source of concern. "What's interesting in AI is how we select the 'core features' for training," he explained. "Sometimes, we select features unrelated to the results we want to predict." Li said that when AI is applied in banking, it is difficult to identify the "culprit" behind a bias because the calculations are confounded. "A good example is how many fintech startups are especially for foreigners, because a Tokyo University graduate won't be able to get any credit cards even if he works at Google; yet a person can easily get one from a community college credit union, because bankers know the local schools better," he said.
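Li's "core features" point can be sketched in the same hypothetical style: a feature with no causal link to repayment (here a made-up local_school flag) ends up dominating a model because it correlates with past approvals, and untangling which input drove a given denial becomes difficult. All names and figures below are assumptions for illustration.

```python
# Hypothetical illustration of Li's "core features" point: a feature with no
# causal link to repayment (a made-up local_school flag) dominates because it
# correlates with past approvals. All names and figures are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

salary = rng.normal(60_000, 10_000, n)
local_school = rng.integers(0, 2, n)  # did the banker recognize the school?

# Invented history: approvals tracked school familiarity, not salary.
approved = rng.random(n) < 0.2 + 0.6 * local_school

X = np.column_stack([salary / 1e5, local_school])
model = LogisticRegression().fit(X, approved)
print(dict(zip(["salary", "local_school"], model.coef_[0])))

# The high earner from an unfamiliar school (Li's "Tokyo University
# graduate at Google") scores below a modest earner from a local school.
print(model.predict_proba([[0.9, 0], [0.5, 1]])[:, 1])
```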
Generative AI is generally not used to create credit scores or to risk-score consumers, according to Niklas Guske, chief operating officer at Taktile, a startup that helps fintechs automate decision-making. "That is not what the tool was built for," he said. Instead, Guske said the most effective applications of generative AI lie in pre-processing unstructured data such as text files, for example by categorizing transactions. "Those signals can then be provided to a more typical underwriting model," he said. "Therefore, generative AI will enhance the underlying data quality for such decisions instead of replacing common scoring processes."
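A minimal sketch of the pipeline Guske describes follows, with a simple keyword stub standing in for the generative-AI step. The category names, signals, and sample transactions are all illustrative assumptions, not Taktile's actual system.

```python
# A minimal sketch of the pipeline Guske describes, with a keyword stub
# standing in for the generative-AI step. Category names, signals, and the
# transactions are all illustrative; this is not Taktile's actual system.

def categorize(description: str) -> str:
    """Stand-in for an LLM call that labels a free-text transaction."""
    text = description.lower()
    if "payroll" in text or "salary" in text:
        return "income"
    if "casino" in text or "betting" in text:
        return "gambling"
    return "other"

def underwriting_signals(transactions: list[str]) -> dict[str, float]:
    """Aggregate the derived categories into features for a scoring model."""
    labels = [categorize(t) for t in transactions]
    total = len(labels) or 1
    return {
        "income_share": labels.count("income") / total,
        "gambling_share": labels.count("gambling") / total,
    }

# These signals feed a conventional underwriting model; generative AI
# improves the input data rather than producing the score itself.
txns = ["ACME CORP PAYROLL 06/01", "LUCKY STAR CASINO", "GROCERY MART"]
print(underwriting_signals(txns))
```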
Proving that AI-based discrimination has taken place is difficult, said Kim Smouter, director of the European Network Against Racism. "One of the issues with the widespread use of AI is the lack of visibility into how decisions are made, as well as the lack of a remedy when a racially marginalized person notices discrimination," he said. People often do not understand how AI works, and those affected may not realize that their case is just the tip of the iceberg. Smouter pointed to the Dutch child welfare scandal, in which thousands of people were wrongfully accused of fraud and over which the Dutch government resigned, as an example. "This shows how quickly such problems can spread and how tricky it is to prove them and obtain redress; by that time, a great amount of damage has usually been done," he added.
Chowdhury believes a global regulatory body, along the lines of the United Nations, is needed to address the risks surrounding AI. Although AI has proven to be a valuable tool, technologists and ethicists have raised concerns about its moral and ethical dimensions, from disinformation to bias baked into AI algorithms to the "hallucinations" produced by ChatGPT-like tools. "I'm really concerned that AI is bringing us to an atmosphere wherein nothing we meet online can be depended on, not any text, video, or audio, so how do we get our info? And how do we guarantee that the info we get is reliable?" Chowdhury asked.

Regulation of AI needs to arrive soon, but the European Union's AI Act will take time to apply, and some worry that even this will not come quickly enough. Smouter called for "more transparency and responsibility regarding algorithms and how they work, a basic demonstration that will enable everyday people with no AI experience to decide for themselves, evidence of testing and publication of outcomes, an independent complaints system, regular reviews and reports, involvement of People of Color when tech is being designed and assessed for implementation."

The AI Act, the first regulation of its kind, incorporates a fundamental-rights approach and concepts such as redress, Smouter noted, and will be fully enforceable in about two years. "It would be great if this period could be shortened to ensure transparency and responsibility are in the very core of innovation," he said.