Imagine you’re a young mother living on the outskirts of Kampala, Uganda. Due to unexpected medical expenses for your son, you decide to apply for a digital loan. You download a credit app that you heard about from a neighbor, answer a few questions about yourself, consent to have data on your phone shared, and wait 30 minutes. Then, you receive a rejection message: “Your credit limit is 0 UGX.”
You heard that having an active mobile wallet would increase your chances of being approved for a loan and made sure your MTN mobile wallet was active before applying. You’re confused about why your loan application was rejected and wonder how to improve your future chances. Most importantly, you’re still unsure how you will cover your son’s medical expenses.
Understanding the reasons behind credit decisions is increasingly difficult as algorithms, rather than front-line staff, make and communicate outcomes. Complex algorithms, fueled by alternative data, are driving financial decisions about and for customers, from underwriting to insurance to eligibility for social protection. As algorithms become more commonplace, we must look for ways to increase transparency around decisions and inform low-income customers about their options for rectification.
In the Dark on Automated Decisions
In honor of World Consumer Rights Day, CFI is researching how credit decisions are made and communicated to low-income consumers, the rights consumers deserve, and the tools they need to understand – and be able to challenge – those decisions.
In the not-so-distant past, most financial institutions relied on front-line staff to make and communicate decisions to customers. If a customer was rejected for a loan or insurance product, she could ask a loan officer to explain how her history was evaluated and could work with the officer to correct data errors. However, as digital models leveraging algorithms gain traction, the transparency that customers used to rely on from their in-person engagements with loan officers or agents has disappeared.
Because most customers are unaware of how an underwriting algorithm works or what data inputs it uses, many are also unclear about how a decision was made or whom to ask for more information. This lack of clarity was evident in CFI’s survey-based research in Rwanda. Of a sample of 30 digital borrowers in Rwanda, 16 respondents had been rejected for a digital loan, but only 10 of those recalled receiving an explanation for the denial. Of those 10, six were dissatisfied with the provider’s communication; some received only a basic explanation such as, “Your credit limit is zero.”
When the full survey sample was asked to describe what an acceptable explanation of loan denial might look like, there was a clear ask for more specificity and more communication. A 52-year-old woman shared her experience: “They denied me a loan because I delayed to repay when I was in a hospital and very sick. If they had called to explain the reason for the denial, I would have explained my condition.” A 38-year-old woman reasoned: “When you delay to repay, they call to remind you about the loan. They should then do the same when you are denied a loan.”
Help for Consumers: Legislation, Awareness, Support
So, what can consumers do in this increasingly digital era? Most data protection frameworks give consumers the right to access and rectify their data. However, our Rwanda research suggests that consumers are largely unaware of what data is being used as input in the first place. While data protection efforts – like the 2019 Kenyan Data Protection Act, which gives individuals the right to request the rectification of personal data that is “inaccurate, out-of-date, incomplete or misleading” – are a good first step, more needs to be done to inform customers about data inputs, decisions, and their digital rights.
Many nascent data protection frameworks also give consumers the right to be informed if they have been subjected to an automated decision made by an algorithm. For instance, the Rwandan Data Privacy Law mandates that individuals be informed about the logic involved in automated decisions at the time of personal data collection. Brazil’s Data Protection Act gives consumers the right to request a review of any decision made solely through automated processing of their personal data; the review should include the criteria and procedures used for the decision.
But what this will look like in practice, and how it will be enforced, remains to be seen. And given what we know about the challenges low-income consumers face in accessing and feeling empowered to use grievance redressal mechanisms, there is concern that these data rights will not be fully exercised in a way that holds companies accountable.
Consumer advocacy organizations are a potential avenue for raising the voice of consumers vis-à-vis their digital rights. For instance, our research on government-to-person digital payments during COVID-19 found that civil society organizations were often the first to sound the alarm when digital systems failed or customers were poorly served.
Whether it’s a mother in Uganda applying for a credit product to pay for medical expenses, or a farmer looking to insure his crops in the face of an increasingly unpredictable climate, we must support consumers’ voices and increase transparency around how digital decisions are made.
Along with other leading consumer advocates around the world, and in partnership with Consumers International, we’re excited to participate in the Fair Digital Finance Forum this week. Take a look at their program to get a lay of the land, and stay up to date with CFI’s consumer protection workstream.