Posted to Linkedin at https://www.linkedin.com/today/post/article/20140609165942-5425117-score-cut-offs-can-blow-up-in-your-face
Risk scores are extremely powerful tools for determining the final disposition of credit applications. Typically, scores are used in consumer lending, but they can be used in a commercial environment as well (e.g. the SME segment).
Most scores include variables covering application details, bureau variables (including a generic bureau-derived score – e.g. a FICO Score or VantageScore, or in India the CIBIL TransUnion Score) and internal bank variables if the customer already has a relationship with the bank. In the absence of a specialized application score, the generic bureau score can also be used to grade applications.
Operationally, scores can be used to give a Yes/No decision on each application, though in some scenarios applications scoring near the margin can be referred for manual review, or decisions taken on partial exposures.
Most banks and financial institutions will calibrate the scores using extensive analysis to identify the odds or bad rate at each score band. A business-specific bad-rate definition can be used here – e.g. 2 or 3 missed payments in the next 12 months (i.e. the loan going bad within a fixed period after sanction). This calibration is done by retrospective analysis of past applications and their performance after sanction. (The assumption being that past patterns will propagate into the future without too much variance, macro or otherwise.) Based on the retrospective analysis, a score cut-off is identified which allows the bank to target a specific bad rate. The score cut-off also forces a rejection rate on incoming applications.
To illustrate the impact of score cut-offs on bad rates, I am going to assume the score has been calibrated to grade incoming applications on a normal distribution with a mean score of 600 and a standard deviation of 50 points. Additionally, the score has been equalized at an anchor of 600 with a PDO (points to double the odds) of 25 points. (There is no fixed rule that a scorecard needs to be centred at the mean/median point – this is done for illustration only.) Odds at the centre are calibrated at 70:1, i.e. roughly 1.4% of customers with a score of 600 will go delinquent on their loan.
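The bad rate implied at any score follows directly from these calibration parameters. A minimal sketch in Python, assuming the usual scorecard convention that good:bad odds double every PDO points (the function names are mine; the bad rate is taken as 1/(1 + odds), which at the 600 anchor gives roughly the 1.4% quoted above):

```python
ANCHOR, ANCHOR_ODDS, PDO = 600.0, 70.0, 25.0  # calibration from the example above

def good_bad_odds(score):
    """Good:bad odds at a given score; odds double every PDO points above the anchor."""
    return ANCHOR_ODDS * 2.0 ** ((score - ANCHOR) / PDO)

def bad_rate(score):
    """Probability of going bad implied by the odds: 1 / (1 + odds)."""
    return 1.0 / (1.0 + good_bad_odds(score))

print(round(bad_rate(600), 4))  # 0.0141 -- roughly the 1.4% at the anchor
print(round(bad_rate(625), 4))  # 0.0071 -- odds double to 140:1, bad rate roughly halves
print(round(bad_rate(550), 4))  # 0.0541 -- two PDOs below the anchor
```

Note that a band-level table built from such a formula depends on where within each band the rate is evaluated, so published tables may differ slightly.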
The graphic and table below give the score distribution of 10,000 applicants and the bad rate by score band.
The table below gives the bad rate by score cut-off for the same population:
| Score Band | % Bad Rate |
| --- | --- |
The table essentially tells us that out of the 10,000-odd customers, the expected bad rate if the bank approves everybody is 2.6%; i.e. with no cut-off, we get an approval rate of 100% and a bad rate of 2.6%.
As is evident, there is a trade-off between the approval rate and the expected bad rate. To reach our target bad rate of 1.5%, the table can be used to identify 525 as a potential score cut-off.
That is, banks can continue to approve applications in score bands which in isolation may be considered high risk; pooled with a larger number of customers in higher bands, the overall portfolio bad rate is still maintained. And why would a bank lend to customers in, say, the 550-599 band when it clearly has an elevated bad rate? There can be a multitude of reasons: capturing market share, approval-rate pressures, sales targets – you name it. After all, sub-prime customers are the most profitable, as long as we can predict the bad rates and have a pool of good customers to balance them out. Sub-prime customers are theoretically charged a higher interest rate, which is supposed to take care of the extra risk the bank is taking.
So now, by enforcing a cut-off of 525 on incoming applications (instead of 575), we get an approval rate of approximately 93% (calculating the area under the curve of a normal distribution with a known mean and standard deviation). That is, approximately 7% of incoming applications will be deemed high risk and rejected, and the approved population will have a target bad rate of 1.5%. Now, with a 93% approval rate, both the risk and sales teams are happy! Or are they?
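The approval rate for a given cut-off is just the upper-tail area of the assumed normal score distribution. A quick check using only the standard library (the function name is mine):

```python
import math

MEAN, SD = 600.0, 50.0  # assumed applicant score distribution from the example

def approval_rate(cutoff, mu=MEAN, sigma=SD):
    """Share of applicants scoring at or above the cutoff: the normal upper-tail area."""
    return 0.5 * math.erfc((cutoff - mu) / (sigma * math.sqrt(2.0)))

print(round(approval_rate(525), 3))  # 0.933 -- the ~93% approval quoted above
print(round(approval_rate(575), 3))  # 0.691 -- a much harsher cut-off
```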
Let the Bad times roll
One major weakness of using score cut-offs is the long list of assumptions inherent in the score-building and deployment process. Even slight deviations from these assumptions can have a disproportionate impact on the bank's risk exposure.
One of the most critical assumptions concerns the probability distribution of the applications. Score cut-offs are calculated by studying past distributions (which need not be normal); in our example, based on the chart above, a cut-off of 525 gives an approval rate of 93% and a bad rate of 1.5%. If the distribution remains stable, the cut-off gives a predictable, controllable bad rate, and the bank can confidently lend to subprime customers as well, cornering market share along with a much healthier interest spread while relying on its ‘Million Dollar Statistical Model’.
However, take the scenario of a worsening macro-economic situation (not unlike that witnessed in 2008), or a new sourcing channel opening up. A distribution shift can happen for any number of reasons, and even slight deviations can have a large impact.
For example, let's assume the distribution of incoming applications shifts left to a mean of 580 (from 600 previously). With the standard deviation and PDO remaining constant, the table below gives the impact on bad rates at different cut-offs:
The figure above shows the new application distribution compared to the original.
Assuming the score anchor and PDO remain unchanged, the new incoming application distribution shifts the score-based cut-offs. Previously, a cut-off of 525 gave a bad rate of 1.5%. When the applicant mean shifts from 600 to 580, the same cut-off of 525 now gives a bad rate of 2.1% (an increase of more than 30% in the bad rate!). And that's not all: the approval rate has fallen to 86%, i.e. a rejection rate of 14%.
| Score Band | % Bad Rate (New Dist.) |
| --- | --- |
The reason is that as the distribution shifts slightly to the left, the percentage of applicants in the higher score bands goes down; these customers were supposed to drive the portfolio bad rate down. Meanwhile, the percentage of customers sourced in the not-so-good score bands shoots up (but hey, we didn't compromise on the score cut-off, did we?).
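Under the normal assumption this mix shift is straightforward to quantify: holding the 525 cut-off fixed, the risky 550-599 band grows as a share of approvals while the strong 650+ bands shrink. A sketch (the helper names are mine):

```python
import math

SD, CUTOFF = 50.0, 525.0  # assumed score std deviation and the cut-off from the example

def normal_cdf(x, mu, sigma=SD):
    """Cumulative probability of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def band_share_of_approved(lo, hi, mu):
    """Fraction of approved applicants (score >= CUTOFF) that fall in [lo, hi)."""
    band = normal_cdf(hi, mu) - normal_cdf(max(lo, CUTOFF), mu)
    approved = 1.0 - normal_cdf(CUTOFF, mu)
    return band / approved

# Share of approvals in the risky 550-599 band, before and after the mean shift:
print(round(band_share_of_approved(550, 600, 600), 3))  # 0.366
print(round(band_share_of_approved(550, 600, 580), 3))  # 0.441
# Share of approvals in the strong 650+ bands shrinks at the same time:
print(round(band_share_of_approved(650, float("inf"), 600), 3))  # ~0.17
print(round(band_share_of_approved(650, float("inf"), 580), 3))  # ~0.093
```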
The sales team is now hopping mad, with rejection rates having more than doubled; and the risk team is under pressure – even after rejecting so many applications, the bad rates are shooting up!
A cursory look at the recalculated bad rates on the updated distribution shows that the score cut-off needs to be revised from 525 to 550 to maintain the same bad rate as before. The approval rate then has to be 72%!
This illustrates how a small shift in the incoming population forces the risk team to quickly revise the score cut-off, bringing the approval rate down from 93% to 72% just to maintain the target bad rate.
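Finding the revised cut-off can be mechanized: compute the expected bad rate among approved applicants for a candidate cut-off, then bisect for the target. A sketch under the continuous assumptions used here (normal score mix, bad rate of 1/(1 + odds), function names mine); exact figures from a continuous model will differ somewhat from a band-level table like the one above, but the direction of the required move is the same:

```python
import math

ANCHOR, ANCHOR_ODDS, PDO, SD = 600.0, 70.0, 25.0, 50.0

def bad_rate(score):
    """Bad rate implied by good:bad odds that double every PDO points."""
    return 1.0 / (1.0 + ANCHOR_ODDS * 2.0 ** ((score - ANCHOR) / PDO))

def portfolio_bad_rate(cutoff, mu, n=4000):
    """Average bad rate over approved applicants (score >= cutoff), by numerical integration."""
    hi = mu + 6.0 * SD
    step = (hi - cutoff) / n
    num = den = 0.0
    for i in range(n):
        s = cutoff + (i + 0.5) * step
        w = math.exp(-0.5 * ((s - mu) / SD) ** 2)  # unnormalized normal density weight
        num += w * bad_rate(s)
        den += w
    return num / den

def cutoff_for_target(target, mu, lo=400.0, hi=800.0):
    """Bisect for the cut-off whose approved pool just meets the target bad rate."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if portfolio_bad_rate(mid, mu) > target:
            lo = mid
        else:
            hi = mid
    return hi

# The same target bad rate demands a higher cut-off once the mean shifts down:
print(round(cutoff_for_target(0.015, 600.0), 1))
print(round(cutoff_for_target(0.015, 580.0), 1))  # higher than the line above
```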
What this essentially means is that the bank's risk exposure has suddenly shot up: the subprime customers are no longer priced correctly under this model, because the interest-rate calculation did not take this scenario into consideration. The bank continues to source on the new distribution, confident that the score will continue to perform (which it does – just not as assumed).
It may not end here. When macro-economic parameters deteriorate, the worsening credit quality of incoming customers discussed above is one symptom; the other impact is on the credit scores themselves. Scores built or calibrated in ‘good times’ will almost certainly begin to wander when the ‘bad times’ come. The score odds are not set in stone and do change based on how the industry is performing.
In ‘bad times’, deterioration of the odds ratio itself at each score interval can be expected, as many banks found out in 2008 (FICO faced some heat for this). However, the basic purpose of the score still holds regardless, which is to rank-order customers from highest risk to lowest risk. Where macroeconomic parameters impact individual behaviour, any score will need to be recalibrated to capture the new behaviour; the basic presumption of past behaviour propagating into the future is invalidated, because behaviour is now changing rapidly.
For our example, where the score was anchored at 600 with odds of 70:1 and a PDO of 25, let's assume a deterioration of the anchor odds to 60:1 with the PDO unchanged. The new interval bad-rate table is below (capped at 99% for the lowest interval):
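The effect of the odds deterioration on interval bad rates can be sketched the same way, simply re-anchoring the odds at 60:1 (the function name is mine; the cap mirrors the article's 99% cap on the lowest interval):

```python
def bad_rate(score, anchor_odds, anchor=600.0, pdo=25.0, cap=0.99):
    """Bad rate implied by good:bad odds doubling every PDO points, capped at 99%."""
    odds = anchor_odds * 2.0 ** ((score - anchor) / pdo)
    return min(1.0 / (1.0 + odds), cap)

# Interval bad rates before (70:1 at the anchor) and after (60:1) the deterioration:
for score in (500, 550, 600, 650):
    before, after = bad_rate(score, 70.0), bad_rate(score, 60.0)
    print(score, round(before, 4), round(after, 4))
# e.g. at the 600 anchor the bad rate moves from ~1.41% to ~1.64%
```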
| Score Band | % New Bad Rate | Original Bad Rate |
| --- | --- | --- |
The difference may not look very high, but let's explore what happens when we combine this new data with our updated probability distribution for the cut-off bad rates.
| Score Band | % Bad Rate (New Dist.) | % Bad Rate (Old Dist.) |
| --- | --- | --- |
At the previous cut-off of 525, after both the odds shift and the population shift, the actual bad rate faced by the bank is 2.4% instead of the expected 1.5% – the bad rate has suddenly spiked by 60%.
To compensate, the score cut-off now needs to be revised significantly north of 550, with an approval rate even lower than the 72% required when only the population (and not the odds) had shifted.
Both factors together – a population shift along with an odds change – can deliver a double whammy to the risk team of any bank. There are practical problems a risk team will face in convincing the sales head that the approval rate needs to be cut from 93% to less than 70% because of the small matter of the score mean shifting by 20 points (on a score scale ranging from 400 to 800) and the odds shifting from 70:1 to 60:1.
While the scores continue to do their job of ranking customers, reliance on pure cut-offs by banks can be suicidal and can invalidate the scorecard needlessly. Like any other tool, a scorecard is only as good as the risk manager behind it. If a risk manager does not have the authority or freedom – as in this example – to cut approval rates from 93% down to 70%, cut-offs will simply not work. Quite the opposite: enforcing a score cut-off can be spectacularly counterproductive.
While the illustration above is fairly simplistic, with assumptions unlikely to present themselves so neatly in the real world, the scenario discussed has unfortunately replicated itself at many banks and lenders throughout the world.
Raghuram Rajan (ex-IMF chief economist and current RBI governor) recounts in his book ‘Fault Lines’ a talk he gave to a group of risk managers (a while before 2008 happened) about tail risk and its possible impact. The talk was not well received by the audience, and afterwards someone pulled him aside and told him that the risk managers who could understand and push what he was saying inside their banks had long since been fired for being Cassandras. The whole concept of tail risk is that while the probability of the event is low, when it does happen it wipes out all the profits accumulated over the so-called good times. The idea that the risk of lending to subprime can be fully modelled out and priced is inherently faulty; while using scores you may be able to generate handsome profits over years and years, tail risk is actually much higher than what our models estimate.