Lenders, investors, and insurers have certainly taken their losses in the mortgage crisis. Now they are engaged in a new phase of that crisis as they grapple with the question of who ends up holding the bad assets, and how many of them—a veritable game of hot potato. Lenders are facing repurchase requests on mortgages underlying securities sold to investors. Mortgage guaranty insurers are going back over the loans they insured and rescinding coverage due to alleged fraud and misrepresentations. Financial guaranty insurers are trying to identify loans underlying wrapped securities with breaches or misrepresentations and requesting that the mortgage-backed security transaction sponsor repurchase, cure, or substitute the suspect underlying mortgages (Ambac 10-K, p. 66).
For all the players, this raises issues of counterparty risk, affecting financial reporting and capital management decisions. Deciding which assets get repurchased or rescinded could mean the difference between survival and insolvency. Buyers and sellers face the complex task of quantifying the impact of this hot-potato game, making it essential that the quantitative models they use to estimate the probability of resolution and the severity of impact are as accurate as possible.
For banks, this means getting a handle on the mortgages they are being requested to repurchase, to help them accurately estimate earnings impacts. For investors and insurers, the modeling of mortgage remediation efforts will help them prioritize their loan review process and assess financial reporting issues like establishing loss provisions.
These days, with the financial strength of banks, investors, and insurers at stake, those who use the best methods to navigate these decisions will be better off than those who resort to guesswork.
The status quo in remediation estimation
Typically, industry participants use so-called aggregate and development methods to estimate future remediation results. These approaches rely on the experience of a group as a whole to forecast future performance. Yet these methods are crude at best. They do not account for changes in the mix of loans between those that have already gone through the review process and those that are still outstanding. Nor do they factor in detailed differences in underlying (or innate) loan characteristics from one portfolio to the next.
Parties trying to estimate resolution need to scrutinize each loan, using databases with information about mortgages that have already been considered and reviewed for repurchase or rejection. Using predictive models, analysts can perform statistical analysis on each loan already reviewed in order to develop assumption ranges and confidence levels to estimate the probability and severity of loss for unresolved loans not yet reviewed.
Passing the potato
Investors and insurers have some recourse as they contemplate possession of these hot-potato assets. Loans acquired by investors carry representations and warranties that protect them from fraud and misrepresentation. For example, an early payment default is one warranty trigger. The reasoning goes like this: if the borrower went delinquent early, then proper underwriting wasn’t done. Other reps and warranties cover misrepresentations or fraud in the loan application, such as false statements about income, occupancy, and appraisals.
Investors, in an effort to lessen their pain, are requesting that originators buy back the mortgages. Money center banks have reported losses due to this phenomenon to the tune of billions of dollars. Lenders are establishing repurchase reserves, which in turn weighs on earnings.
The federal government is participating in the game of hot potato too. The FHFA has been increasing pressure to send back loans that don't measure up to the representations and warranties.1 Freddie Mac sent back $4.1 billion of single-family mortgages to lenders in 2009, according to the agency's annual report. That compares with $1.8 billion in 2008.2
Mortgage insurers are also rescinding coverage under the same premise: what they agreed to insure is not in fact what they were given. Lenders have filed claims with insurers, asking for payment when the loans default. But on closer inspection, the insurers are finding the information provided was either inaccurate or false, leading them to rescind coverage. Until recently, rescission was typically reserved for grossly inaccurate loan applications, running about 1% of claims. Now 10% to 20% of claims are being rejected, and in some instances much more. Mortgage insurers, too, could refine their estimates for rescission by implementing sophisticated predictive models.
Financial guaranty insurers—which insure a security that represents a pool of loans—are taking a similar approach. These companies are in a similar predicament as investors and mortgage insurers. For these mortgage-backed securities, financial guaranty insurers are asking lenders to repurchase misrepresented non-performing loans.
Digging deeper into data
All parties need a good handle on the assets they have on the books. Information about the failure rate of loan repurchase requests, rescissions, and putbacks can be used to calibrate a predictive model that estimates future activity on a loan-by-loan basis, using loan characteristics. This approach is often used for pricing auto insurance and other exposures. With ample data, it is relatively easy to calibrate and implement, and it is superior to the methods currently being used.
Often, examinations of loan pools are too simplistic. A bank may review a pool of 500 loans and repurchase, say, 35% of them. Going forward, the bank might then use that aggregate number, with some back-of-the-envelope adjustments, to assign a future probability of repurchase to other loans. Yet say that first pool consists of vacation homes in California, where unemployment now hovers around 12.5%.3 If the bank were to apply the 35% number to a pool of primary-residence mortgages from Wisconsin, it would likely misestimate the likelihood of resolution, as the two groups are grossly different and would tend to resolve at different rates.
Using only aggregate data methods would be akin to lumping all drivers into one group to estimate automobile insurance premiums. For example, a simple pure premium analysis may show a book of business where older cars have higher claim frequencies than newer cars. Yet in reality, older cars tend to be driven by younger drivers, who tend to generate higher claim frequencies. While both older cars and young drivers look more claim-prone, the poorer performance of the older autos might be mostly due to the young drivers. A robust model for determining automobile insurance premiums might take into consideration a variety of factors, such as age, car model, driving record, and credit score, among others.
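The confounding described above can be made concrete with a small numeric sketch. All figures are invented for illustration: aggregated by car age alone, old cars look far riskier, yet within each driver-age group the car's age adds nothing.

```python
# Hypothetical auto book illustrating confounding between car age and
# driver age. Each cell maps (car age, driver age) -> (policies, claims);
# all counts are invented for illustration.
book = {
    ("old_car", "young_driver"): (800, 160),  # 20% claim frequency
    ("old_car", "older_driver"): (200, 10),   # 5% claim frequency
    ("new_car", "young_driver"): (200, 40),   # 20% claim frequency
    ("new_car", "older_driver"): (800, 40),   # 5% claim frequency
}

def frequency(cells):
    """Claims per policy across a list of (policies, claims) cells."""
    policies = sum(p for p, _ in cells)
    claims = sum(c for _, c in cells)
    return claims / policies

# Aggregated by car age alone, old cars look riskier (17% vs. 8%)...
old = frequency([v for k, v in book.items() if k[0] == "old_car"])
new = frequency([v for k, v in book.items() if k[0] == "new_car"])

# ...but within each driver-age group the car's age adds nothing:
# young drivers run 20% on either car, older drivers run 5%.
print(f"old cars: {old:.0%}, new cars: {new:.0%}")
```

The aggregate view attributes to car age an effect that belongs entirely to driver age, which is exactly why a multi-factor model is needed.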
Similarly, loan-level predictive models take into account the characteristics of each loan individually (as the name suggests) and respond accordingly to differences between loans when estimating resolution.
Analysis: Data gathering
One way to tackle this game of hot potato is to set up a database with information about loans that have already been considered and reviewed for repurchase, rescission, or putback. There also needs to be a field identifying the outcome of the effort—in other words, whether the loan was repurchased. This data can be developed only once a thorough review of a subset of loans has been conducted and the outcomes of the efforts have been documented.
For those loans not yet considered, a database should be set up with the same fields as those relied on for the model calibration. Fields found to be helpful include documentation type, geography, borrower credit score, occupancy type, presence of interest-only (I/O) or negative-amortization features, fixed vs. adjustable interest rate, and loan purpose.
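As a sketch, the fields above might be captured in a record like the following; the field names and category values are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoanRecord:
    """One row of the loan-review database (illustrative field names)."""
    loan_id: str
    doc_type: str          # e.g. "full_doc" or "low_no_doc"
    state: str             # geography
    credit_score: int      # borrower credit score
    occupancy: str         # e.g. "primary", "second_home", "investor"
    interest_only: bool    # I/O feature present
    neg_am: bool           # negative-amortization feature present
    rate_type: str         # "fixed" or "adjustable"
    loan_purpose: str      # e.g. "purchase", "refinance"
    repurchased: Optional[bool] = None  # outcome; None = not yet reviewed
```

Reviewed loans carry a True/False outcome used to calibrate the model; unreviewed loans leave `repurchased` as `None` and receive an estimated probability instead.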
Using predictive models, analysts can calibrate against the underlying data of loans that have already been reviewed. This calibration shapes a model that estimates probability of resolution based on loan-level characteristics. The model can then be used to assign a probability of resolution (success or failure) to each loan in the database of loans not yet manually reviewed. This explicit probability estimate can then feed the loss reserving and forecasting exercise.
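A minimal sketch of this calibrate-then-score workflow: a one-factor logistic regression fit by gradient descent on synthetic review outcomes. A real model would use many factors and a proper fitting library; all rates and data below are invented.

```python
import math
import random

random.seed(0)

# Synthetic review outcomes: (is_low_no_doc, was_repurchased). The true
# repurchase rates (5% full doc, 65% low/no doc) are invented.
reviewed = [(0, random.random() < 0.05) for _ in range(500)] + \
           [(1, random.random() < 0.65) for _ in range(500)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Calibrate: fit intercept b0 and slope b1 by gradient descent on the
# Bernoulli log-likelihood of the reviewed loans.
b0 = b1 = 0.0
lr = 1.0
for _ in range(1500):
    g0 = g1 = 0.0
    for x, y in reviewed:
        err = sigmoid(b0 + b1 * x) - y
        g0 += err
        g1 += err * x
    b0 -= lr * g0 / len(reviewed)
    b1 -= lr * g1 / len(reviewed)

# Score: assign a repurchase probability to unreviewed loans by doc type.
p_full = sigmoid(b0)       # estimated full-doc repurchase probability
p_low = sigmoid(b0 + b1)   # estimated low/no-doc repurchase probability
print(f"full doc: {p_full:.2f}, low/no doc: {p_low:.2f}")
```

The fitted probabilities recover the segment-level repurchase rates from the review data; with more fields, the same fitting step extends naturally to a multi-factor model.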
Models also need to consider outside factors such as changes in operations. The models currently being used by lenders, investors, and insurers might rely on the development of time from one date—say, the origination or documentation request date—to the date of a trigger, such as the repurchase request date. In their most basic form, these methods assume no change in operational activities, despite the rise of a cottage industry focused on loan scrutiny. The mere hiring of additional review staff by investors and insurers means more loans will likely be sent back to the lenders.
[Chart: repurchase rates for Pool A and Pool B]
Let’s look at an example where an aggregate model is used without consideration for loan-level underwriting characteristics. Assume an analyst is asked to estimate the proportion of loans in Pool A that will be repurchased. Of the loans in Pool A, 20% are categorized as “Full Doc” and 80% as “Low/No Doc.” Full Doc means the borrower supplied the mortgage originator with verifiable documentation of assets and income; Low/No Doc means the borrower did not. The analyst finds that other pools (say, Pool B) in a similar economic, litigious, and operational environment had repurchase rates of 23% (i.e., 23% of the loans were repurchased and 77% were not). The analyst then applies this 23% rate to Pool A.

In reality, the mix of Full Doc to Low/No Doc loans is a key driver of repurchase likelihood. Pool B had a Full Doc to Low/No Doc mix of 70%/30%. A more exhaustive analysis finds that Full Doc loans had a repurchase rate of 5%, whereas Low/No Doc loans had a repurchase rate of 65%. The aggregate repurchase rate of 23% (i.e., 70% × 5% + 30% × 65%) may be appropriate for Pool B, but a repurchase rate of 53% (i.e., 20% × 5% + 80% × 65%) is more appropriate for Pool A.

This example highlights the weakness of an aggregate approach. Here, a single-factor model (with documentation type as the independent variable) was shown to be superior. In practice, additional fields are available, so a multi-factor model should be considered.
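The arithmetic of the example can be checked directly; the weighted-average calculation below reproduces the 23% and 53% figures from the segment-level rates.

```python
# Segment-level repurchase rates from the example above.
rate = {"full_doc": 0.05, "low_no_doc": 0.65}

def pool_rate(mix):
    """Weighted-average repurchase rate for a pool's documentation mix."""
    return sum(mix[seg] * rate[seg] for seg in mix)

pool_b = pool_rate({"full_doc": 0.70, "low_no_doc": 0.30})
pool_a = pool_rate({"full_doc": 0.20, "low_no_doc": 0.80})
print(f"Pool B: {pool_b:.0%}, Pool A: {pool_a:.0%}")  # Pool B: 23%, Pool A: 53%
```

Applying Pool B's 23% aggregate rate to Pool A understates its repurchase exposure by 30 percentage points, purely because of the documentation mix.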
Use predictive modeling
Lenders, investors, and insurers would all benefit from using these kinds of methods to gauge the exposure of their existing mortgage portfolios or the loans they insure. Current aggregate and development methods fail to adequately estimate future activity. By using information about the failure rate of loan repurchase requests, rescissions, and putbacks, industry participants can calibrate a predictive model to estimate future activity on a loan-by-loan basis, based on loan characteristics and operational activity. This type of actuarial approach, often used for pricing auto insurance and other data-rich insurance exposures, is relatively easy to calibrate and implement. The predictive modeling approach gives a better picture of exposure than the methods currently used.
Sidebar: How we got here
The market for mortgage-backed securities came into vogue in the 1990s, growing in popularity each year. It ballooned over time, with gross issuance reaching nearly $2.2 trillion in 2005, up from $318 billion a decade earlier. Banks and mortgage lenders granted loans to higher-risk borrowers, often with little due diligence, believing that the underlying collateral would support their decisions. Mortgage production extended its reach to riskier groups of borrowers, approving subprime loans and so-called alt-A loans, mortgages that required little documentation or no money down.
But borrowers also contributed to the frenzy, misrepresenting key underwriting information, such as inflating income and assets to make it look like they could afford the payments. Borrowers who speculated on real estate fudged applications, claiming they would live in the residence rather than rent it out, which earned them lower mortgage rates and mortgage insurance premiums.
All of these loans were pooled and sold off as mortgage-backed securities, giving investors a steady stream of interest payments—so long as the mortgages stayed healthy. Mortgage guaranty insurers insured the loans against default. Financial guaranty insurers insured the securities against untimely payment of principal and interest. When the mortgages soured, both investors and insurers faced huge losses.