Can EBITDA normalization adjustments be automated?
If you have ever worked on an M&A transaction, you know that a lot of time is spent understanding the normalized EBITDA of the business. While complete due diligence on a company involves much more than looking at normalized EBITDA, by the end of the process most findings are translated into an EBITDA impact. It is one of the key metrics used for benchmarking enterprise value and for transaction financing.
While EBITDA is not a GAAP-defined term, base EBITDA can easily be calculated from a company's financial statements. Calculating normalized EBITDA requires knowledge of the applicable accounting standards, of the company's recent operations, and of any actions taken that may have affected the underlying earnings. Simply put, normalized EBITDA is a measure of the underlying profitability of the company that excludes the impact of any one-time actions and, in certain cases, also includes the pro forma impact of any growth or value creation initiatives that the company may have undertaken recently.
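The build-up from base EBITDA to normalized EBITDA can be made concrete with a small sketch. All line items and amounts below are hypothetical, invented purely to illustrate the bridge; real adjustments come out of the diligence work itself.

```python
# Illustrative EBITDA normalization bridge. All figures and adjustment
# descriptions are hypothetical.

def base_ebitda(net_income, interest, taxes, depreciation, amortization):
    """Base EBITDA built up from income statement line items."""
    return net_income + interest + taxes + depreciation + amortization

def normalized_ebitda(base, adjustments):
    """Apply normalization adjustments (positive = add-back, negative = removal)."""
    return base + sum(adjustments.values())

base = base_ebitda(net_income=4_000, interest=600, taxes=1_100,
                   depreciation=900, amortization=400)   # 7,000

adjustments = {
    "one-time litigation settlement (add-back)": 500,
    "gain on asset sale (remove)": -300,
    "pro forma run-rate of new contract": 250,
}
normalized = normalized_ebitda(base, adjustments)        # 7,450
```

The first function is purely definitional; the adjustments dictionary is where all of the judgement (and, potentially, automation) lives.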
Improvements in technology and the ease of use of self-service BI tools and other low/no-code data-wrangling tools have allowed firms that undertake due diligence to dig deeper and better understand the underlying drivers of profitability. This has enabled firms to provide better and deeper insights in a much shorter timeframe.
While a due diligence process involves much more data analysis now than it did 5-10 years ago, with significant investment across the industry in upskilling staff and in building platforms that can ingest and analyze large datasets, automating the adjusted EBITDA/quality of earnings analysis, or a portion of it, still seems like a distant reality. The purpose of this article is to explore (i) whether the process of calculating normalized EBITDA can be automated, and (ii) to what extent. On paper, it seems like a difficult task. Let us break EBITDA adjustments down into broad categories:
- Definitional adjustments: These would comprise items that follow from the definition itself: interest, depreciation, tax, etc. These do not need to be automated as they are generally reported separately.
- Misstatements: These would comprise any accounting misstatements.
- One-off items: These would comprise any one-off expenses or income considered non-recurring in nature.
- Pro forma adjustments: These would comprise pro forma impact of any recent growth and/or value creation initiative undertaken by the company.
- Stand-alone adjustments: These would comprise estimates of the costs required on a stand-alone basis in instances where the Target is being carved out of a larger entity.
Definitional adjustments are reported, so there is no need for automation to identify them. Pro forma adjustments, by definition, require judgement, and there is generally not enough data to enable automation. Stand-alone adjustments require an understanding of the underlying operations as well as assumptions and analyses on the future state of the business. This leaves us with misstatements and one-off items. Conveniently, these are also the adjustments that take a significant amount of time, if not the majority of time, on a large proportion of deals.
Misstatements
Considering that the starting point for a financial due diligence is a set of audited financial statements, there is some comfort that the underlying financial information has been audited and therefore should not contain any material misstatements. Having said that, due diligence would not be complete without considering the potential impact of misstatement and/or misrepresentation as part of the scope. Misstatement and/or misrepresentation of financial information has been a topic of interest to regulators, academic researchers and auditors. While the body of research on this topic is not large, there has been some. We could start with one of the empirically tested models, such as the M-score (Beneish, M. D., "The Detection of Earnings Manipulation", 1999), and modify it to cover private companies and growth-stage companies.
This would provide only directional automation, but even directional automation can save a lot of time and money, as the scope of work can be scaled up or down depending on the probability of manipulation indicated by the model.
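As a sketch of what such a screen looks like, the published eight-variable Beneish model is a simple weighted sum. The coefficients below are from Beneish (1999); the input indices (DSRI, GMI, etc.) would need to be computed from two years of the target's financials, and the -2.22 red-flag threshold is one commonly cited convention rather than a hard rule.

```python
# Sketch of the Beneish (1999) eight-variable M-score.
# Coefficients are from the published model; inputs are the eight
# year-over-year indices computed from a company's financials.

COEFFS = {
    "DSRI": 0.920,   # Days Sales in Receivables Index
    "GMI":  0.528,   # Gross Margin Index
    "AQI":  0.404,   # Asset Quality Index
    "SGI":  0.892,   # Sales Growth Index
    "DEPI": 0.115,   # Depreciation Index
    "SGAI": -0.172,  # SG&A Expense Index
    "TATA": 4.679,   # Total Accruals to Total Assets
    "LVGI": -0.327,  # Leverage Index
}
INTERCEPT = -4.84
THRESHOLD = -2.22  # scores above this are commonly treated as red flags

def m_score(indices):
    """Weighted sum of the eight Beneish indices plus the intercept."""
    return INTERCEPT + sum(COEFFS[k] * indices[k] for k in COEFFS)

# A hypothetical "steady state" company: every index at 1.0, no net accruals.
steady = {k: 1.0 for k in COEFFS}
steady["TATA"] = 0.0
score = m_score(steady)  # -2.48, below the red-flag threshold
```

Adapting the model to private and growth-stage companies, as suggested above, would mean re-estimating these coefficients on an appropriate sample rather than reusing the public-company ones.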
One-off items
The typical process of identifying one-off items and/or unusual trends (which can also inform any misrepresentation/misstatement) is to (i) understand the nature of the underlying GL account (and the applicable accounting standards and the company's policy); (ii) analyze trends; (iii) based on the trend analysis and what we know about the nature of the account, identify unusual trends and any accounts comprising discretionary accounting accruals; (iv) discuss these with the company's Management, if access is available; and (v) conclude whether any adjustments to EBITDA are required.
The above process is very simplified. The underlying nature of the account dictates the type of analysis required to understand it further (e.g. revenue/income accounts require a different analysis than direct costs, which in turn require a different analysis than indirect costs).
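Step (ii) of the process above, the trend analysis, is the most mechanical and therefore the most automatable. A minimal sketch, assuming monthly GL balances and an illustrative two-standard-deviation cut-off, might flag the months an analyst should drill into:

```python
# Hypothetical trend check on a single GL account: flag months whose
# balance deviates by more than ~2 standard deviations from the mean.
# The account data and the 2-sigma cut-off are illustrative assumptions.
from statistics import mean, stdev

def flag_unusual_months(balances, z_cutoff=2.0):
    """Return the indices of months with unusually large deviations."""
    mu, sigma = mean(balances), stdev(balances)
    return [i for i, b in enumerate(balances)
            if sigma > 0 and abs(b - mu) / sigma > z_cutoff]

# Twelve months of a repairs & maintenance account with one spike.
rm_expense = [10, 11, 9, 10, 12, 10, 11, 48, 10, 9, 11, 10]
print(flag_unusual_months(rm_expense))  # the spike month stands out
```

In practice the flagged months would feed into steps (iii) and (iv): the analyst investigates the spike and discusses it with Management before concluding on any adjustment.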
I will not delve into the analysis of revenue accounts, as that is a topic in itself and the type of analysis required differs by sector/industry. Also, firms generally already have platforms and tools in place to enable analysis of the top line. These can and should be improved, but at least some headway has been made in that space.
While investment has been made in automating the analysis of revenue, there has not been much investment in automating the analysis of expense accounts. End-to-end automation of expense account analysis would be difficult without standardization of the chart of accounts and accounting policies across companies in a given sub-sector. Perhaps we will get there one day (maybe ERP systems will adopt blockchain and we will have a better audit trail for analysis), but for now let us explore whether we can use statistical methods to at least directionally automate this part of the diligence.
The process of identifying one-off items is similar to the process of identifying anomalies in a dataset. There is certainly potential for using established statistical methods and clustering methodologies (as a starting point) to identify these anomalies in the trial balance. It would not lead to end-to-end automation, at least not at the start, but even directionally automating the process at this stage would save a lot of time and cost.
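To make the idea concrete, here is a deliberately simple screen over a toy trial balance, standing in for the more sophisticated statistical and clustering methods mentioned above (k-means over account features, isolation forests, etc.). It scores each account's month-to-month volatility (coefficient of variation) and flags erratic accounts as candidates for one-off items. The account names, figures, and the 0.5 cut-off are all illustrative assumptions.

```python
# Toy trial-balance anomaly screen: volatile expense accounts are
# flagged as candidates for one-off items. All names, balances, and
# the cv_cutoff value are hypothetical.
from statistics import mean, stdev

def coefficient_of_variation(series):
    """Volatility of monthly balances relative to their average level."""
    mu = mean(series)
    return stdev(series) / mu if mu else float("inf")

def screen_trial_balance(tb, cv_cutoff=0.5):
    """Return accounts whose volatility exceeds the cut-off."""
    return [acct for acct, series in tb.items()
            if coefficient_of_variation(series) > cv_cutoff]

tb = {
    "rent":       [50, 50, 50, 50, 50, 50],  # stable: ignore
    "utilities":  [8, 9, 8, 10, 9, 8],       # mild variation: ignore
    "insurance":  [12, 12, 12, 12, 12, 12],
    "consulting": [5, 4, 90, 6, 5, 88],      # erratic: review for one-offs
}
print(screen_trial_balance(tb))  # flags the erratic account
```

A real implementation would use many more features per account (seasonality, correlation with revenue, accrual reversals) and an unsupervised method rather than a fixed cut-off, but the shape of the workflow is the same: the screen narrows the trial balance down to the accounts worth an analyst's attention.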
I will not get into the exact statistical methods and how they could be automated, as the purpose of this article is to explore and perhaps start a discussion on the topic. I would love to collaborate on automating this or simply discuss it. If you are interested in collaborating or exchanging ideas, please feel free to reach out to me.
--------------------------------------------------------------------------------------
Important Note: All comments and opinions are my own and do not represent those of my employer
Nice perspective Arsalan. I agree that some sort of automation can definitely be done for one-off items by looking for anomalies in the TB. A thought to add here: if we are looking for automation in EBITDA normalisation, then rather than starting from the adjustments as mentioned in the article, we can see if we can build in some automation to analyse sell-side proposed adjustments. The seller's consultants have already done diligence. If the buyer can have some automation to analyse the assumptions behind sell-side proposed adjustments, this could definitely bring efficiencies, letting the buyer spend time identifying new adjustments rather than just questioning the seller's adjustments. But again, there cannot be any fully automated process. It will surely require human intervention.
Henry TrungHieu T.