When the Basel Committee on Banking Supervision set out to draw up rules on capital calculations under the current Basel II initiative, it defined operational risk as “direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events.” Peter Keppler, a senior analyst in Risk Management on the Capital Markets team at Financial Insights, an IDC company, offers his perspective from more than 14 years of experience in financial services and risk management.
IT Focus: What is operational risk management and why is IT operational risk management a relatively new focus?
Peter Keppler: Operational risk management is an attempt to measure and monitor actual loss events and then to quantify the financial loss that results from whatever kind of error event caused it.
Examples are the big headline grabbers: fraud where people fiddle with trade tickets and get into deeper and deeper holes. It is also every sort of data processing error that costs the firm something, whether a small interest charge over a short period of time, or computer outages and whatever financial losses can be attributed to infrastructure or operational downtime.
There is also the idea of trying to estimate future potential operational losses. That is the basis for calculating the amount of capital a given business line might attract, and therefore its capital charge.
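For a sense of what a per-business-line capital charge looks like in its simplest form, Basel II's Standardized Approach ties the charge mechanically to each business line's gross income rather than to modelled losses. The sketch below uses the beta factors published in the Basel II framework; the gross-income figures are purely illustrative.

```python
# Sketch of the Basel II Standardized Approach capital charge.
# Beta factors are those published in the Basel II framework;
# the gross-income figures below are purely illustrative.

BETA = {
    "corporate_finance": 0.18,
    "trading_and_sales": 0.18,
    "retail_banking": 0.12,
    "commercial_banking": 0.15,
    "payment_and_settlement": 0.18,
    "agency_services": 0.15,
    "asset_management": 0.12,
    "retail_brokerage": 0.12,
}

def standardized_capital_charge(gross_income_by_year):
    """gross_income_by_year: list of {business_line: gross income} dicts,
    one per year (Basel II averages over three years).  A negative total
    in a given year is floored at zero before averaging."""
    yearly = []
    for year in gross_income_by_year:
        total = sum(BETA[line] * gi for line, gi in year.items())
        yearly.append(max(total, 0.0))
    return sum(yearly) / len(yearly)

# Illustrative figures (in millions) for a bank with two business lines.
history = [
    {"retail_banking": 800, "trading_and_sales": 300},
    {"retail_banking": 850, "trading_and_sales": 250},
    {"retail_banking": 900, "trading_and_sales": 400},
]
print(standardized_capital_charge(history))  # -> 159.0 (million)
```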
Basel has definitely brought it to the surface. Large institutions had been looking at it and working on it. The fact that there is going to be an explicit capital charge for operational risk within the Basel regime has put everyone’s feet to the fire. In tandem with that, in the United States the Sarbanes-Oxley legislation is closely aligned with operational risk.
Regardless of the specific regulatory regime an institution faces, these practices and issues have reached critical mass and brought attention to the processes and methodologies for looking at operational risk; the concerns existed before the recent regulatory initiatives. If a financial institution is trying to do economic capital allocation, things are methodologically quite advanced on the market risk side, and credit risk has been picking up steam over the last five to seven years. Prior to these regulatory initiatives, operational or business risk had always been the least sophisticated in terms of measurement methods.
IT Focus: Why is that?
Keppler: Operational risk doesn’t lend itself as cleanly to the quantitative methods used for other types of risk, such as market risk and, to a greater and greater extent, credit risk. It’s not really about the behaviour of financial instruments. You can collect a lot of data on credit defaults, equity price volatility and interest rate movements. Operational risk is about people, and processes devised by people, to get X accomplished. Another big problem is that the big operational losses firms are afraid of – the kinds of things that can bring an institution down – are so infrequent that the numerical methods applied to credit and market risk don’t work, because there isn’t a large enough pool of experience data.
So from the quantification side it is quite difficult to project forward potential risk or potential for big loss. How do I quantify the chance of having a multi-million dollar trading fraud loss? It is extremely difficult.
IT Focus: So other than being very difficult to quantify, how does operational risk management differ from managing financial risk?
Keppler: There are two different goals, or areas of emphasis, in operational risk that I would point out. One is actually developing better controls and monitoring processes to reduce operational losses – actually reducing the cost of those losses. Then there is the idea of taking the next step and trying to quantify the potential loss. Am I trying to reduce my losses, or am I trying to reduce the amount of capital the regulators want me to hold because of my loss profile?
IT Focus: But aren’t you doing both by doing one?
Keppler: You are not necessarily reducing your capital charge by doing the former. You still have to take all the data you collect and all the monitoring you’ve done and run it through an analytical engine that is, to the regulator’s satisfaction, capable of quantifying your potential losses.
You can work very hard at reducing your operational losses and still not go far enough to qualify for a reduced capital charge.
In some ways, the Basel regulations actually inhibited the sophistication of some of the approaches institutions were taking of their own accord, because they then had the added burden of complying with a regulatory measure that didn’t necessarily match the way they were viewing the world in terms of operational risk. So they just said, ‘the heck with taking a more sophisticated approach if it’s not even going to get us that reduced capital charge; we’ll just take whatever is the cheapest way to comply with the regulation.’ They unfortunately abandoned or mothballed what in our opinion were more sophisticated approaches to measuring and managing their actual operational losses and risks.
The Basel Accord is very prescriptive about how banks will be allowed to approach these calculations, and it doesn’t necessarily align fully with some of the most interesting things people were doing. The Basel regulations are the product of a lot of compromise.
IT Focus: How does operational risk management differ between the different types of financial services firms?
Keppler: For things like data entry validation of balances or prices, confirming transaction details for securities trading or other sorts of financial contracts like letters of credit, the processes themselves are largely similar. Where they would start to differ is the way those processes would aggregate up to a certain business type.
Also, the question is: what is the goal of the operational risk exercise? Is it to reduce those losses, or is there the further goal of getting into the quantification and capitalization question, which at this point is really more about banks?
IT Focus: What best practices have you seen banks developing?
Keppler: There are two approaches, and they bifurcate along the two basic goals we’ve been talking about – those geared to monitoring and understanding the risks that are out there, versus those intended to quantify the risk, come up with a numeric estimate of its size and attach capital to that.
In terms of the former, there are components that are necessary. You’ve got to be collecting loss data, and Basel requires five years of loss data by 2008. Banks are also looking to supplement their internal loss data by participating in consortia where they share loss and proximate-cause information, in order to build a critical mass of data so that the quantitative methods they hope to apply have enough data to produce a statistically robust result.
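To make the data-pool point concrete: the usual quantitative treatment fits a frequency distribution to the number of loss events per period and a severity distribution to the loss amounts, and both fits need enough observations to be stable. The sketch below uses a Poisson/lognormal pairing – one common modelling choice, not a prescribed Basel method – on made-up loss figures.

```python
# Illustrative fit of frequency and severity distributions to a pooled
# loss-event history.  The Poisson/lognormal pairing is one common
# modelling choice, not a prescribed method; the data are made up.
import numpy as np
from scipy import stats

# Pooled loss amounts (dollars) observed over five years -- hypothetical.
losses = np.array([
    12_000, 45_000, 3_200, 260_000, 8_900, 71_000, 19_500,
    1_400_000, 5_600, 33_000, 97_000, 14_800, 52_000, 7_300,
])
years_observed = 5

# Frequency: average number of events per year (Poisson intensity).
lam = len(losses) / years_observed

# Severity: fit a lognormal to the individual loss amounts.
shape, loc, scale = stats.lognorm.fit(losses, floc=0)

print(f"Poisson intensity (events/year): {lam:.1f}")
print(f"Lognormal mu={np.log(scale):.2f}, sigma={shape:.2f}")
```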
Self-assessment is a mechanical way of informing management how confident the managers within an operation are in their own controls. It is a high/medium/low or green light/yellow light/red light measure, at a fairly granular level, of the controls around [for example] foreign currency transactions. A manager in that area might rate themselves on perhaps eight items and input the results, monthly or quarterly, into more advanced or Web-based software. Those self-assessment results are aggregated, ranked in relative importance and mapped up to an executive dashboard application, where executives can get an at-a-glance feel for where their biggest operational issues might be and direct their efforts accordingly – whether that is sending budget towards a particular area or process – and see what kind of improvements can be made.
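A toy version of that aggregation, assuming a simple green/yellow/red scoring scheme and hypothetical control and area names, might look like this:

```python
# Toy aggregation of control self-assessment ratings into a dashboard
# score per business area.  The scoring scheme and control names are
# hypothetical; real tools are far more granular.
RATING_SCORE = {"green": 0, "yellow": 1, "red": 3}

assessments = {
    "fx_transactions": {
        "trade_confirmation": "green",
        "limit_monitoring": "yellow",
        "settlement_reconciliation": "red",
    },
    "retail_payments": {
        "fraud_screening": "green",
        "duplicate_payment_check": "green",
    },
}

def area_scores(assessments):
    """Average the control ratings in each area so the worst areas
    float to the top of an executive dashboard."""
    return {
        area: sum(RATING_SCORE[r] for r in controls.values()) / len(controls)
        for area, controls in assessments.items()
    }

for area, score in sorted(area_scores(assessments).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{area}: {score:.2f}")
```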
Key risk indicators (KRIs) are a step above that, where you’re starting to track things like employee turnover or computer downtime. That’s a little more quantitative. It’s not dependent on people’s assessments of themselves, which of course can be influenced by human nature.
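A minimal sketch of KRI monitoring – the indicators and thresholds here are hypothetical – simply compares each metric against amber and red limits:

```python
# Minimal key-risk-indicator check: compare each metric to amber/red
# thresholds.  Indicator names and limits are hypothetical.
KRI_LIMITS = {
    "monthly_staff_turnover_pct": (2.0, 5.0),   # (amber, red)
    "system_downtime_hours":      (4.0, 12.0),
    "failed_trades_per_1000":     (3.0, 8.0),
}

def kri_status(observations):
    out = {}
    for name, value in observations.items():
        amber, red = KRI_LIMITS[name]
        out[name] = "red" if value >= red else "amber" if value >= amber else "green"
    return out

print(kri_status({"monthly_staff_turnover_pct": 3.1,
                  "system_downtime_hours": 1.5,
                  "failed_trades_per_1000": 9.0}))
```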
That’s moving up the scale of collection and tracking. Then we move up a step into analysis tools: tools that explore the loss data pool in order to estimate future loss frequencies and severities and calculate aggregate exposure, and then tools that might allow you to do scenario analysis. Given my status right now, my balance sheet and my estimate of which parts of my institution are risky from an operational perspective, what happens if I throw a particular situation at it – what if the equity market has a huge spike in volumes and we have an increase in manual processing errors, or what if there is a terrorist attack somewhere and it hits one of our major counterparties? That’s where you start to get much more sophisticated, going beyond assessing where you’ve been, what your current state is and the controls you’ve put in place. Maybe you have an up-to-date view of where you stand from a risk perspective, but what could the potential exposures be if X, Y or Z were to come to pass?
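One common way to turn fitted frequency and severity assumptions into an aggregate exposure figure, and to run a crude "what if" on top of it, is Monte Carlo simulation of the annual loss distribution. The parameters below, and the stress applied (a doubling of event frequency to mimic a volume spike), are illustrative only.

```python
# Monte Carlo sketch of an aggregate annual loss distribution and a
# crude scenario stress.  Parameters are illustrative; a Poisson
# frequency / lognormal severity model is one common choice, not the
# only one.
import numpy as np

rng = np.random.default_rng(0)

def annual_loss_distribution(lam, mu, sigma, n_sims=50_000):
    """Simulate total operational loss per year: a Poisson number of
    events, each with a lognormal severity."""
    counts = rng.poisson(lam, size=n_sims)
    return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

base = annual_loss_distribution(lam=2.8, mu=10.5, sigma=1.8)
# Scenario: a spike in volumes doubles the expected number of
# processing-error events.
stressed = annual_loss_distribution(lam=5.6, mu=10.5, sigma=1.8)

for label, dist in [("base", base), ("stressed", stressed)]:
    print(f"{label}: mean={dist.mean():,.0f}  "
          f"99.9% quantile={np.quantile(dist, 0.999):,.0f}")
```

Under the advanced measurement approaches, it is a high quantile of a distribution like this – Basel works to a 99.9th percentile over a one-year horizon – that ultimately drives the capital figure.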
Then from there you get into capital calculation and the high-end stuff that is really focused on the financial implications of operational risk – i.e. what is my capital charge going to be – versus actually reducing exposure to losses.
Frankly, we don’t think these initiatives should be driven, by and large, by a need to comply. It is more important for institutions to focus on identifying and controlling the losses and improving the processes than to come up with exposure calculations and quantifications of the potential loss. As you pointed out, the former should at least potentially influence the latter, to the point that by improving the processes and controls, one would hope the capital requirement would then be reduced. In absolute dollars, an institution can save a heck of a lot more money by reducing its processing errors than by trying to escape a certain level of capital charge by jumping through all these quantitative hoops to estimate that charge in a more sophisticated way.