It is commonly said in scientific and analytical circles that “data is king”. For property catastrophe modeling, detailed data on the exposed risk, together with key data on the peril and its historic impact, currently underpins any successful model.

Unlike property natcat risks, cyber catastrophe risk currently lacks the same level of key company-specific data (the analogue of location data in property) as well as global historic claims data.

To further complicate the picture, neither attritional nor non-attritional historic data is a reliable predictor of the future.

We are managing against plausible systemic events that exist only in the creative minds of criminals and have never occurred. We must therefore look at a company’s cyber exposure not just through past experience, but with a forward-looking view. Given this landscape, it is no surprise that lack of data is recognized as a key issue in the market.

How do we fill this data gap?

The risk is real and insurable, and the good news is that we have the analytical and technological expertise to provide solutions, both through access to data and through data science analytics.

This point was raised at two recent conferences in London:  The Aventedge European Catastrophe Risk Conference and the Advisen Cyber Risk Insights Conference.

At both conferences it became clear that it is important to understand the value of both “exposure data” and “cyber risk model data”. The two are arguably interconnected, and how this information is implemented in cyber catastrophe models dramatically influences results.

As Ashwin Kashyap, co-founder of CyberCube, emphasized on a Cyber Risk Model Comparison panel in London, it is not surprising that modeled views differ significantly when the underlying data, and not just the model methodology, is a major differentiator.

Differences in key inputs such as revenue, depending on whether the insured’s submission or the modeler’s underlying database is used, were shown to produce materially different results. Rigorous modeler data, and the insurer’s ability to accurately incorporate available insured information, are both critical.
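To illustrate the point, the following sketch (a purely hypothetical toy, not any vendor’s actual model) scales a modeled loss by firm revenue. Applying the same simple loss function to an insured-reported revenue figure versus a modeler-database figure shows how the choice of input alone can move results materially; the severity fraction and cap are invented for illustration.

```python
def modeled_loss(revenue_musd, event_severity=0.02, cap_musd=50.0):
    """Toy loss function: a severity fraction of revenue, capped at a limit.
    All parameters are illustrative assumptions, not calibrated values."""
    return min(revenue_musd * event_severity, cap_musd)

insured_reported = 800.0    # revenue (USD millions) from the submission
modeler_database = 1_250.0  # revenue from the vendor's firmographic data

loss_a = modeled_loss(insured_reported)   # 16.0
loss_b = modeled_loss(modeler_database)   # 25.0
print(f"Insured input:  {loss_a:.1f}M")
print(f"Database input: {loss_b:.1f}M ({loss_b / loss_a - 1:.0%} higher)")
```

Here a roughly 55% gap in the revenue input flows straight through to a matching gap in the modeled loss, which is why reconciling insured submissions against modeler databases matters.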

The time-sensitive nature of cyber risk metrics was also highlighted: what is true today is not true tomorrow. A panel of cyber risk modeling experts echoed the need to capture current insurance industry cyber risk metrics from insureds. Oli Brew of CyberCube highlighted that sources of “raw” inside-the-firewall data on a global basis are critical to providing insight. Equally important, however, is the effort required to assess the quality of data sources through data scrubbing and analytics.
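The kind of scrubbing implied above can be sketched minimally: because cyber metrics go stale quickly, one basic check is to reject records older than a freshness window or with implausible values before they enter a model. The record fields, the 90-day window, and the “open ports” metric are all assumptions chosen for illustration.

```python
from datetime import date, timedelta

def scrub(records, as_of, max_age_days=90):
    """Keep only fresh, plausible records; return (clean, rejected).
    The freshness window and validity rule are illustrative assumptions."""
    clean, rejected = [], []
    for r in records:
        too_old = (as_of - r["scanned"]) > timedelta(days=max_age_days)
        invalid = r["open_ports"] < 0  # a negative count cannot be real
        (rejected if (too_old or invalid) else clean).append(r)
    return clean, rejected

records = [
    {"firm": "A", "open_ports": 12, "scanned": date(2024, 5, 1)},
    {"firm": "B", "open_ports": -3, "scanned": date(2024, 5, 20)},  # invalid
    {"firm": "C", "open_ports": 48, "scanned": date(2023, 11, 2)},  # stale
]

clean, rejected = scrub(records, as_of=date(2024, 6, 1))
# only firm A survives; B and C are set aside for review
```

Real scrubbing pipelines are far richer, but the principle is the same: quality and recency checks must run before, not after, the data shapes a modeled view.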

With emerging technology trends and the capability to acquire relevant data, we can model what is plausibly possible. While it will take time, and regrettably future cyber events, to acquire the necessary data, it is also important to recognize that this risk differs in fundamental ways from other catastrophe risks.

In the meantime, much can be drawn from available avenues that require teamwork between insureds, insurers, and model vendors. Models, while not perfect, can provide meaningful and actionable insights, especially when used not in isolation but as one cornerstone supporting decisions.

The goal is not only to narrow loss uncertainty in cyber risk with data, but also to understand the sources of that uncertainty relative to the data available.