Screening and approving the customer

As with other aspects of digital credit, the screening methods enabled by technology and big data are a two-edged sword. Moving beyond traditional methods and information sources opens up possibilities for hitherto excluded consumers while also heightening risks of bias and misuse of data.

Digital lenders and credit scoring providers use increasingly sophisticated algorithms to assess credit risk, many of them incorporating generative artificial intelligence. GenAI’s ability to process very large and diverse data sets and to generate content in accessible and easily usable formats (including conversational) is proving very useful in enhancing efficiency and improving customer experience, risk mitigation, and compliance reporting by financial providers. However, the deployment of GenAI in the financial sector has its own risks that need to be fully understood and mitigated by the industry and prudential authorities.1

On the one hand, algorithmic credit-scoring systems can predict default risk better than traditional methods – i.e., actual risk rather than risk attributed on the basis of social norms or skewed information. They can do so because they draw on a wide range of information, including real-time transaction data, attitudinal and behavioral variables, and network indicators. This enables lenders to look beyond balance sheets and assets, and helps ensure more accurate and equitable risk assessment than traditional banking methods provide.
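
To make this concrete, the sketch below (in Python) shows the basic shape of such a scorer: a logistic regression trained to estimate default probability from alternative data. The features, data, and labels are entirely hypothetical and synthetic; production systems draw on far richer data and more complex models.

    # Illustrative sketch only: all features, data, and labels are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1_000

    # Hypothetical alternative-data features a digital lender might derive:
    # mobile-money transaction volume, airtime top-up regularity, and the
    # size of the borrower's contact network.
    X = np.column_stack([
        rng.gamma(2.0, 50.0, n),    # txn_volume
        rng.uniform(0.0, 1.0, n),   # topup_regularity
        rng.poisson(40, n),         # network_size
    ])
    y = rng.integers(0, 2, n)       # 1 = defaulted (synthetic labels)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Score an applicant: the model returns a default probability.
    print(f"Estimated default probability: {model.predict_proba(X[:1])[0, 1]:.2f}")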

On the other hand, credit scoring algorithms pose several risks, including:

  • Biased outcomes due to poor algorithm design or incomplete, unrepresentative, or biased input data
  • Discrimination based on proxies reflecting sensitive attributes
  • Consumers being unaware or powerless regarding use of the algorithm
  • Regulators lacking the technical expertise to evaluate algorithmic systems, or being blocked or hampered by the proprietary nature of the algorithms.2

Algorithmic bias usually stems from conscious or unconscious prejudices introduced by the individuals — data scientists, coders, developers, or others — who create the algorithms. The algorithm itself may have baked-in biases, or it may be “trained” on incomplete, faulty, or prejudicial data sets. To take the case of gender bias, lenders have historically used marital status and gender to determine creditworthiness. Eventually, these discriminatory practices were replaced by ones considered more neutral. But by then, women had accumulated less formal financial history and had suffered discrimination, impairing their ability to get credit. Data points tracking individuals’ credit limits capture these discriminatory trends. When AI systems that determine creditworthiness learn from historical data, they reproduce the same inequitable access to credit along gender (and race) lines.3
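
The feedback loop described above is easy to reproduce. In the synthetic sketch below, historical credit limits were set lower for women; a model trained on the resulting approval decisions never sees gender, yet it disadvantages women anyway because the credit limit acts as a proxy. Everything here is invented for illustration.

    # Synthetic illustration of bias reproduced from historical data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5_000
    gender = rng.integers(0, 2, n)  # 0 = men, 1 = women (synthetic)

    # Discriminatory history: women received systematically lower credit
    # limits, independent of repayment ability.
    credit_limit = rng.normal(100 - 30 * gender, 10, n)

    # Historical approvals keyed to the limit, baking the bias into the
    # labels the model learns from.
    approved = (credit_limit + rng.normal(0, 5, n) > 85).astype(int)

    # Gender is excluded from the features, yet the proxy remains.
    X = credit_limit.reshape(-1, 1)
    model = LogisticRegression(max_iter=1000).fit(X, approved)
    preds = model.predict(X)

    print("Predicted approval rate, men:  ", round(preds[gender == 0].mean(), 2))
    print("Predicted approval rate, women:", round(preds[gender == 1].mean(), 2))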

Financial consumer protection laws sometimes prohibit discrimination on the basis of characteristics such as religion, race, or gender. But imposing algorithmic transparency—disclosure of the nature, uses, and consequences of algorithms—is challenging. This is especially true where creditworthiness checks depend on “black box” algorithms, i.e., AI-based algorithms whose outputs are difficult or impossible to explain to the consumer. In principle, regulators should (but mostly do not) police the design and implementation of algorithms used in lending, e.g., by imposing nondiscrimination constraints on the algorithm’s code.4 Regulators typically consider this task too complicated and technical and are reluctant to take it on.
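
What a duty to explain could look like in practice depends on the model. For a linear scorer, the basis of a decision can be reported directly, as in the hypothetical sketch below (the feature names and coefficients are invented); genuinely black-box models instead require post-hoc explanation techniques, which is precisely where the transparency challenge lies.

    # Minimal sketch of a consumer-facing decision explanation for a
    # linear scoring model. Names and numbers are hypothetical.
    import numpy as np

    feature_names = ["txn_volume", "topup_regularity", "network_size"]
    coefs = np.array([-0.004, -1.2, -0.01])  # from a fitted logistic model
    intercept = 1.5
    applicant = np.array([120.0, 0.3, 25.0])

    # For a linear model, each feature's contribution to the score
    # (log-odds of default) is simply coefficient * value.
    contributions = coefs * applicant
    for name, c in zip(feature_names, contributions):
        print(f"{name:>18}: {c:+.2f} to the default log-odds")
    print(f"Overall score: {intercept + contributions.sum():+.2f}")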

Recommendation: Regulation of algorithmic scoring could take forms such as the following:

  • Apply fair treatment and anti-discrimination rules to algorithms
  • Require appropriate procedures, controls, and safeguards during development, testing, and deployment of algorithms to assess and manage risks of bias
  • Require regular auditing of algorithmic systems by external experts (one such audit check is sketched after this list)
  • Ensure transparency to consumers regarding use of algorithms, including a duty to explain the basis for algorithmic decisions
  • Provide consumers with the right not to be subject to decisions based solely on automated processing, and the right to request human intervention.
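
As an illustration of the external-audit recommendation above, the sketch below computes one widely used fairness check: the disparate-impact ratio (the approval rate of a protected group relative to a reference group), with 0.8 as the conventional “four-fifths” alert threshold. The decisions and group labels are synthetic, and a real audit would examine many more metrics.

    # Sketch of one check an external fairness audit might run.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000
    group = rng.integers(0, 2, n)  # 1 = protected group (synthetic)

    # Synthetic model decisions with a built-in disparity, so the
    # check below has something to find.
    approved = (rng.random(n) < np.where(group == 1, 0.4, 0.7)).astype(int)

    rate_ref = approved[group == 0].mean()
    rate_prot = approved[group == 1].mean()
    ratio = rate_prot / rate_ref

    print(f"Disparate-impact ratio: {ratio:.2f}")
    if ratio < 0.8:                # US "four-fifths" rule of thumb
        print("Potential adverse impact: flag for human review")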

Country examples of credit screening and algorithm regulation

  • United States
  • European Union