Financial institutions across the country are now actively preparing for the bank accounting transition from the incurred loss model to the current expected credit loss (CECL) model. By now, most banks and credit unions are well aware of the methodology options under CECL. However, many still face challenges interpreting the results of their CECL modeling exercises. Common questions that arise include:

  • What happens if I don’t have enough loan-level historical data?
  • What is the next step if my results are zero?
  • Are there shortcuts for anticipating when certain approaches won’t work before building models to test?

During a recent webinar on interpreting CECL modeling results, Abrigo risk management consultants Brandon Quinones and Danny Sharman polled the audience on the most challenging aspect of analyzing CECL modeling results at their financial institutions. Forty percent of the 185 attendees polled said they do not have CECL modeling results yet. The second most popular answer, unmatched expectations vs. reality, was chosen by 19 percent of respondents.

“I’m not at all surprised by the responses,” said Quinones. “I would be more surprised if they had not yet done anything, but financial institutions are quickly realizing that adhering to the standard is a bit more involved than pushing a button. It relies heavily on the data they have at their disposal, so although you may want to go in a specific direction, you may be unable to do so without the data. The question becomes ‘how do you react when the first attempt goes wrong?’”

CECL modeling process: The first attempt

Attendees of the webinar were polled on which methodology to start with and selected vintage analysis. Quinones and Sharman agreed that it is a logical first step: the methodology is easily understood, many institutions already prepare vintage disclosures today, and it is relatively easy to build in an Excel model. A vintage analysis looks at a cohort of loans based on origination periods and then tracks charge-offs and recoveries in each year following that period. If the loss rate is not yet known for recent origination years, an average of earlier origination years can be used to help set expectations for the future.

This methodology does require loan-level data. For this calculation, a financial institution should have on hand the origination and renewal dates of its loans, the dates transactions occurred, and the loan numbers they tie to.

“One of the great ways to see if a vintage methodology makes sense as a loss rate under CECL is to layer these years on top of each other. What we’re really looking for is consistency. Charge-offs should be occurring in the beginning since origination, with recoveries pulling through in recent years. However, many times this is not the case. Charge-offs may be occurring at sporadic times since origination, which is not a good indicator of what will happen in the future. It’s likely that these charge-offs occurred due to an event that happened,” said Quinones.

A vintage analysis is a good predictor of expected losses if a set of loans is consistently subject to charge-offs after the same amount of time has passed since origination, according to Quinones and Sharman. Additionally, a financial institution should have enough loan-level history to cover the average life of the pool. If that is not the case, a vintage analysis might not be the best option.
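As a rough illustration of the mechanics described above, the sketch below (Python, with hypothetical column names and figures, not data from the webinar) builds a simple vintage table of loss rates by origination year and years since origination, then fills in the ages that recent cohorts have not yet reached with the average of earlier, completed vintages.

```python
import pandas as pd

# Hypothetical loan-level history: one observation per loan (or cohort)
# per year since origination. Column names are illustrative.
loans = pd.DataFrame({
    "origination_year": [2015, 2015, 2016, 2016, 2017],
    "years_since_orig": [1, 2, 1, 2, 1],
    "net_charge_off":   [1_000, 500, 2_000, 0, 1_500],
    "orig_balance":     [100_000, 100_000, 150_000, 150_000, 120_000],
})

# Loss rate per origination cohort and age: net charge-offs / originated balance.
agg = loans.groupby(["origination_year", "years_since_orig"]).agg(
    nco=("net_charge_off", "sum"),
    bal=("orig_balance", "sum"),
)
vintage = (agg["nco"] / agg["bal"]).unstack("years_since_orig")

# Layer the vintages on top of each other: ages not yet observed for recent
# cohorts (NaN) fall back to the average of earlier, completed vintages.
expected = vintage.fillna(vintage.mean())
print(expected)
```

Printing `vintage` before the fill is also a quick way to eyeball the consistency Quinones describes, since each row should show losses emerging at roughly the same ages.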

CECL modeling process: The second attempt

If a vintage analysis did not produce the results a financial institution expected, the methodology attendees said they would try next was a migration analysis. A migration analysis examines the loans that make up a loan segment over a specific period of time. It monitors each loan's status on a specific risk characteristic, e.g., days currently past due or risk rating. When charge-offs occur within the designated time frame, they are tied back to the loan's risk characteristic at the start of the period considered. This methodology derives a unique loss rate for each risk characteristic within the loan segment.
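A minimal sketch of that calculation, using a purely hypothetical rating scale and balances, might look like the following: each charge-off is tied back to the loan's risk rating at the start of the look-back period, and a loss rate is derived per rating.

```python
import pandas as pd

# Hypothetical snapshot of a loan segment at the start of the look-back period.
snapshot = pd.DataFrame({
    "loan_id":     [1, 2, 3, 4, 5],
    "risk_rating": [3, 3, 5, 5, 7],
    "balance":     [200_000, 150_000, 100_000, 80_000, 50_000],
})

# Net charge-offs observed over the following period (recoveries would be negative).
charge_offs = pd.DataFrame({
    "loan_id":        [4, 5],
    "net_charge_off": [10_000, 25_000],
})

# Tie each charge-off back to the loan's rating at the start of the period,
# then derive a unique loss rate for each rating within the segment.
merged = snapshot.merge(charge_offs, on="loan_id", how="left").fillna({"net_charge_off": 0})
migration = merged.groupby("risk_rating").agg(
    balance=("balance", "sum"),
    net_charge_off=("net_charge_off", "sum"),
)
migration["loss_rate"] = migration["net_charge_off"] / migration["balance"]
print(migration)
```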

The “dream scenario” for this methodology is historical loss rates that increase with the risk level of the selected characteristic, according to Quinones and Sharman. However, sometimes a financial institution does not capture losses on a majority of its loan balances, so the resulting loss rates apply only to a smaller number of loans with higher risk ratings.

Additionally, there may have been partial charge-offs. When this happens, the financial institution will likely downgrade the loan as well, so the remaining balance sits in a riskier rating while later recoveries offset the earlier charge-off. This can lead to net recoveries showing up on the riskiest loans in a segment. In such a case, a risk rating approach may be too granular for a migration analysis.

A migration analysis is a good predictor of expected losses when it is applied to larger loan pools and when risk ratings and charge-offs have been maintained consistently. If a financial institution has limited historical losses but likes the idea of a migration analysis, a static pool approach may be a better-suited methodology to test, because the calculation does not rely on such granular risk factors.

CECL Modeling Process: The third attempt

If other methodologies don’t work out, financial institutions can try a static pool analysis when building out their CECL modeling process. A static pool analysis is essentially a migration analysis aggregated at the pool level, with no risk-characteristic sub-segments. For this methodology, it makes sense to use a period length that parallels the average life of the pool.

A static pool analysis includes the loans that were active as of the start date of the chosen period. This applies to both charge-offs and recoveries: if a charge-off or recovery occurs outside that period, it is not considered in the net charge-off calculation.
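Under those rules, a bare-bones static pool calculation could look like the sketch below (Python, hypothetical dates and balances): only loans active on the start date form the pool, and only charge-offs and recoveries dated within the period, which parallels the average life, count toward the net charge-off rate.

```python
import pandas as pd

START = pd.Timestamp("2015-01-01")
AVG_LIFE_YEARS = 3                                   # period length parallels average life
END = START + pd.DateOffset(years=AVG_LIFE_YEARS)

# Hypothetical inputs: balances of loans active on the start date, and
# charge-off/recovery transactions (recoveries entered as negative amounts).
pool = pd.DataFrame({
    "loan_id": [1, 2, 3],
    "balance": [250_000, 400_000, 350_000],
})
transactions = pd.DataFrame({
    "loan_id":        [2, 3, 3, 9],
    "date":           pd.to_datetime(["2016-06-30", "2017-03-31", "2019-01-31", "2016-09-30"]),
    "net_charge_off": [12_000, 8_000, -2_000, 5_000],
})

# Keep only charge-offs/recoveries dated within the period AND belonging
# to loans that were in the pool on the start date.
in_window = transactions[
    transactions["date"].between(START, END)
    & transactions["loan_id"].isin(pool["loan_id"])
]
loss_rate = in_window["net_charge_off"].sum() / pool["balance"].sum()
print(f"Static pool net charge-off rate: {loss_rate:.4%}")
```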

If a financial institution has a lot of data, it has to consider how far back to look in its dataset when conducting a static pool analysis. The institution may have to make a reasonable and supportable forecast to decide which past periods to use in deriving a loss rate. For example, if the forecast is that unemployment will stay around 3-6 percent over the next three years, only historical periods where that condition held need to be considered. Those periods can then be aligned with the forecasted unemployment to determine the loss rates.
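That period-selection step might be sketched as follows, with made-up historical figures: keep only the periods whose unemployment rate falls within the forecasted 3-6 percent range and use their loss rates to inform the estimate.

```python
import pandas as pd

# Hypothetical history: the loss rate observed in each period and the
# unemployment rate that prevailed at the time.
history = pd.DataFrame({
    "period":       ["2009", "2012", "2015", "2018"],
    "unemployment": [9.3, 8.1, 5.3, 3.9],
    "loss_rate":    [0.021, 0.013, 0.006, 0.004],
})

# Reasonable and supportable forecast: unemployment stays around 3-6 percent,
# so only periods meeting that condition inform the expected loss rate.
relevant = history[history["unemployment"].between(3.0, 6.0)]
expected_loss_rate = relevant["loss_rate"].mean()
print(relevant)
print(f"Forecast-aligned loss rate: {expected_loss_rate:.4%}")
```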

If an institution has limited loan-level data, questions that need to be asked when conducting a static pool are:

  • Are a few data points enough?
  • Are the loans on my books similar to loans that were on my books this far back in time?
  • Can I ignore forecasting altogether given limited data?
  • What happens if my average life is even longer?

On the other hand, if an institution has a lot of loan-level data but little charge-off experience, it will be difficult to conduct a static pool without relying on qualitative adjustments. This may warrant using more forward-looking approaches.

CECL Modeling Process: The final attempt

If a financial institution is still not getting the results it expected from the previous methods, the last methodology webinar attendees said they would like to try was a discounted cash flow (DCF) approach for their CECL modeling. A DCF is an analysis of each individual loan’s expected cash flows. According to Quinones and Sharman, it takes into consideration a specific loan type’s tendency to default and incur losses, or to pay off earlier than its contractual term. In essence, an amortization schedule layers in assumptions around default and prepayment rates. It also allows financial institutions to forecast key assumptions through statistical regression analysis.

A DCF looks at each individual loan’s expected cash flows. Each loan is driven by different inputs, such as payment type, maturity date, payment amount, interest rate, payment frequency and amortization days. These contractual terms are known and don’t need to be estimated. The more impactful items that do need to be addressed are the probability of default (PD) and loss given default (LGD), prepayment rate and curtailment rate. An advantage of this methodology is that peer and industry data can be layered into a DCF.
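To make that distinction concrete, here is a highly simplified single-loan sketch (Python, illustrative numbers only, not the presenters’ model): the contractual terms drive a standard amortization schedule, while assumed monthly PD, LGD and prepayment rates determine the expected, discounted credit loss.

```python
# Contractual inputs: known terms that are taken as given.
balance, annual_rate, term_months = 100_000.0, 0.06, 60
monthly_rate = annual_rate / 12
payment = balance * monthly_rate / (1 - (1 + monthly_rate) ** -term_months)

# Estimated assumptions: these are the items that must be defended
# (or supplemented with peer/industry data).
annual_pd, lgd, annual_prepay = 0.02, 0.40, 0.10
monthly_pd = annual_pd / 12
monthly_prepay = annual_prepay / 12

expected_loss, surviving_balance = 0.0, balance
for month in range(1, term_months + 1):
    # Expected loss this month: defaulting balance times loss severity,
    # discounted back to the reporting date at the loan's effective rate.
    defaulted = surviving_balance * monthly_pd
    expected_loss += defaulted * lgd / (1 + monthly_rate) ** month

    # Remaining balance amortizes contractually, then shrinks further
    # for defaults and prepayments/curtailments.
    interest = surviving_balance * monthly_rate
    principal = payment - interest
    surviving_balance = max(
        surviving_balance - principal - defaulted - surviving_balance * monthly_prepay,
        0.0,
    )

print(f"Expected credit loss (DCF): {expected_loss:,.2f}")
```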

Typical challenges are having no charge-offs for a loan type and receiving results of $0. This is where industry-driven assumptions for PD/LGD can be leveraged. Another challenge with a DCF is defending a prepayment/curtailment rate. If an institution is uncomfortable deriving that number itself, it can leverage industry data or hire consultants to calculate it.

CECL Modeling Takeaways

CECL modeling is a complex process, but once a financial institution determines the policy election for a single loan type, it can use average life and data availability to help determine which loss rate approaches are feasible for other loan types, aside from a DCF.

“It would take a lot of time to test every loss rate on each loan pool,” said Sharman during the webinar. “If you can be familiar with your data, such as the average or remaining life of a given loan pool, and how much loan-level data you have, you can make an informed decision on which loss rate methodologies will be a good fit before beginning an analysis. You don’t have to test everything for your CECL modeling practices to be effective.”

For more information, access the on-demand webinar, “Interpreting CECL Modeling Results.”

