The next decade or so of innovation in Computing and Information Technology will once again provide a driving force for fundamental changes in business and, consequently, in risk management. It is worth acknowledging that IT change will not be the only factor driving widespread change; climate change, ageing populations and other, perhaps as yet unrecognised, factors will no doubt also have a material impact and should be considered. However, the IT changes are likely to be pervasive and far-reaching:
- The extension of Moore’s law (that computing power doubles roughly every 2 years) for yet another decade, driven by ongoing innovation in extreme-ultraviolet (EUV) chip production, will provide ever more computing power. This generation of change, however, is likely to see computing power and ubiquitous data collection outstrip our ability to store or transmit information. This is likely to push much more computing to the “edge”, i.e. closer to the devices where the data is captured (a short calculation of the compounding involved follows this list). In 2021, Satya Nadella, CEO of Microsoft, argued that we have reached “peak centralisation” and that the future lies in decentralised data and processing.
- Machine Learning will become more ubiquitous, which, combined with greater data availability, will ensure there is always a digital model of the behaviour of real-world systems. Further, in many contexts humans will increasingly not know whether they are dealing with a fellow human being or an automated representation of one.
- Quantum Computers, while still in their early phases, are just becoming commercially useful. They will accelerate certain types of algorithms, in particular rendering current encryption technologies obsolete (and so forcing their replacement) and enhancing simulations of complex systems.
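To make the compounding behind Moore’s law concrete, here is a minimal Python sketch (the 2-year doubling period is simply the assumption stated above, not a measured figure):

```python
# Back-of-the-envelope compounding implied by Moore's law:
# a doubling every 2 years gives 2**(years / 2) growth overall.

def compute_growth(years: float, doubling_period: float = 2.0) -> float:
    """Relative growth in computing power after `years`."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (2, 4, 10):
        print(f"After {years:>2} years: ~{compute_growth(years):.0f}x the computing power")
    # After 10 years: ~32x. If data capture grows at a similar rate, it
    # quickly outstrips storage and network capacity, pushing work to the edge.
```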
Each of these changes may significantly affect how best to manage risk within an enterprise. Risk management is typically broken down into three processes: identification, measurement/modelling, and management.
Identification: The decentralisation of data and processes makes the identification of risks more challenging. Each component of a decentralised system may be working effectively, yet their combined behaviour may cause undesired outcomes overall. Similarly, with a proliferation of system components it becomes easier for any element of a system to be compromised (or to fail for more mundane reasons), so it must be a requirement that there is zero trust between elements. By way of example, you could think of a centralised system as a train network running to a timetable with centralised signalling and train monitoring: such transport systems have extremely rare accidents and fatalities. Conversely, the road network has no timetable and each driver makes their own decisions independently. As a result, road traffic accidents are much more common (per mile travelled you are roughly ten times more likely to die in a car than in a train). To mitigate the damage we have to ensure that drivers follow rules, that cars are built for safety, and that roads are designed to avoid dangerous junctions and excessive bottlenecks. Equivalent procedures will be required in all decentralised systems; centralised risk management alone will not be effective.
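As a minimal sketch of what zero trust between elements could look like in code, the example below has every message between components carry an authentication tag that the receiver verifies before acting. The component names and shared-key scheme are hypothetical; a real deployment would use per-component keys, key rotation, and mutual authentication.

```python
# Minimal sketch of "zero trust" between components of a decentralised
# system: no message is acted on unless its authenticity can be verified.
import hmac
import hashlib

SECRET_KEY = b"per-component-secret"  # illustrative; never hard-code keys in practice

def sign(message: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Produce an authentication tag so the receiver can verify the sender."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_and_handle(message: bytes, tag: bytes, key: bytes = SECRET_KEY) -> str:
    """Zero trust: reject anything that fails verification, however mundane."""
    if not hmac.compare_digest(sign(message, key), tag):
        return "REJECTED: untrusted or corrupted message"
    return f"ACCEPTED: {message.decode()}"

msg = b"sensor-7: temperature nominal"
print(verify_and_handle(msg, sign(msg)))                    # accepted
print(verify_and_handle(b"sensor-7: tampered", sign(msg)))  # rejected
```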
Measurement and Modelling: As computing power grows (potentially dramatically with the advent of quantum computing), certain types of problem become many times quicker to solve. This will make it feasible to model systems using more data, more quickly and more accurately. This is a clear benefit to risk management, as enhanced models mean a better ability to predict when a failure or risk event will emerge. Increasing data availability will also improve measurement, helping to identify when models are starting to fail and can no longer be trusted. However, it must also be remembered that increasing computing power will not be a panacea: some systems are intrinsically unforecastable. Chaotic systems are characterised by widely differing outcomes from only very slightly different starting positions or conditions; they are created by non-linear relationships and feedback loops. Identifying such mechanisms clearly, and finding regions of stable performance, then becomes key to successful risk management. As an example, credit risk in bank lending is likely uncertain at the portfolio level because of external economic factors, which may become more forecastable using, for example, better real-time granular data across the economy. By contrast, the volatility of prices in some markets (e.g. shares or crypto-currencies) is likely a chaotic system with strong feedback loops (more buyers join in as they see the price rise, and vice versa); there is typically little fundamental news on a day-to-day basis to drive the price changes that are seen. In an intrinsically chaotic environment, therefore, improved models are unlikely to help risk management: any new predictability found would quickly be arbitraged away.
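The logistic map, a standard textbook example of chaos (used here purely for illustration, not as a model of the markets above), makes the point concrete: two trajectories starting a millionth apart become entirely different within a few dozen steps, so more computing power buys no longer forecast horizon.

```python
# The logistic map x_{t+1} = r * x_t * (1 - x_t) is a classic chaotic system:
# a simple non-linear feedback rule whose trajectories diverge from almost
# identical starting points. The parameter r = 3.9 lies in the chaotic regime.

def logistic_trajectory(x0: float, r: float = 3.9, steps: int = 50) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)  # starting point differs by one millionth

for t in (0, 10, 20, 30, 40, 50):
    print(f"t={t:>2}  |difference| = {abs(a[t] - b[t]):.6f}")
# The gap grows from 1e-6 to order 1 within a few dozen steps: beyond that
# horizon, no improvement in model precision restores predictability.
```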
Management: One of the key issues in risk management is ensuring that human actors are appropriately incentivised to perform the required task efficiently without taking undue risk, where they might be rewarded up front but the downsides are only identified later. As machine learning becomes more sophisticated, replacing human risk decision-makers with machines becomes ever more feasible. Many decisions in finance, from credit to asset allocation to trading, have already been automated or are assisted in large measure by models. The benefit of automation to risk management is that the trade-off between risk and expected return must be made explicit in the model; there is no need to incentivise the model to make the right choice. Instead, the challenge increasingly becomes one of managing model risk.
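As a minimal, purely illustrative sketch of this point, the example below encodes the risk/return trade-off as an explicit penalty parameter in the objective of an automated allocation rule (a mean-variance style utility). The assets, numbers, and `risk_aversion` parameter are all hypothetical.

```python
# Sketch of an automated allocation rule where the trade-off between expected
# return and risk is explicit in the objective itself, not in human incentives.

def score(expected_return: float, variance: float, risk_aversion: float) -> float:
    """Utility = expected return minus an explicit penalty for risk."""
    return expected_return - risk_aversion * variance

candidates = {                       # (expected return, variance) -- illustrative
    "low-risk bond": (0.03, 0.0004),
    "equity index":  (0.07, 0.0225),
    "crypto basket": (0.15, 0.2500),
}

risk_aversion = 2.0  # the risk appetite is a visible, auditable parameter
best = max(candidates, key=lambda k: score(*candidates[k], risk_aversion))
for name, (mu, var) in candidates.items():
    print(f"{name:13s} utility = {score(mu, var, risk_aversion):+.4f}")
print(f"Chosen allocation: {best}")
# Governance shifts from incentivising a trader to validating the model:
# is risk_aversion right, and are the return/variance estimates still reliable?
```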
Ensuring that organisations maintain control of risk will require risk management operating models to evolve to match the strengths and weaknesses of the new generation of IT infrastructure: to take advantage of the opportunities presented, and to mitigate the challenges posed by increasing complexity.