Your Application May Be High Risk

Security practitioners often express risk with qualitative measurements. We use broad, imprecise risk ratings such as high, medium, and low, and we apply them inconsistently. We struggle to measure whether security work is actually driving down risk, which means we may be making ill-informed decisions. Infosec practitioners may have access to risk professionals, but engaging risk professionals for assessments doesn't scale to thousands of applications.

To address this problem, we've developed an approach that leverages the industry-standard FAIR framework to perform quantitative application risk analysis at scale. This work is a collaboration between engineers from application security, risk, data science, incident response, and detection. We can accurately articulate the risk, in dollars, of an external actor impacting the confidentiality of sensitive data across thousands of applications, where dollars include productivity losses, response costs, and estimates of regulatory fines and judgments.
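To make the FAIR idea concrete: annualized loss is typically simulated as loss event frequency times loss magnitude, run many times to produce a distribution in dollars. The sketch below is purely illustrative and is not the speakers' actual model; the distributions, parameters, and `simulate_annual_loss` helper are all invented for the example.

```python
# Hypothetical FAIR-style Monte Carlo sketch: simulate many years, each with
# a random number of loss events and a random dollar magnitude per event.
# All parameters below are made-up assumptions, not calibrated estimates.
import random
import statistics

def simulate_annual_loss(n_trials=10_000, seed=7):
    rng = random.Random(seed)
    annual_losses = []
    for _ in range(n_trials):
        # Loss event frequency: crude stand-in for a calibrated estimate
        events = rng.choice([0, 0, 1, 1, 2])
        # Loss magnitude: lognormal captures rare but very large losses
        loss = sum(rng.lognormvariate(11.0, 1.0) for _ in range(events))
        annual_losses.append(loss)
    return annual_losses

losses = simulate_annual_loss()
ale = statistics.mean(losses)                    # annualized loss expectation ($)
p95 = sorted(losses)[int(0.95 * len(losses))]    # tail loss for reporting
```

Reporting a full distribution (expectation plus tail percentiles) rather than a single "high/medium/low" label is what lets risk be compared consistently across applications.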

We'll walk through how we use machine learning to forecast adversarial impact at scale. This will include an overview of our asset inventory and how we use risk factors, vulnerabilities, and security controls to train our model. We'll discuss how we evaluate this risk model and use it to produce defensible forecasts that represent risk in dollars.
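As a rough illustration of the kind of model this describes, the toy example below maps per-application attributes (a risk factor, open vulnerabilities, a security control) to an incident-likelihood estimate that could feed the frequency side of a FAIR analysis. The feature names, data, and choice of logistic regression are all assumptions for the sketch; the talk's actual features and training pipeline are not described here.

```python
# Illustrative sketch only: a toy supervised model over invented features.
# Columns: [handles_sensitive_data, open_critical_vulns, mfa_enforced]
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [1, 5, 0],
    [1, 0, 1],
    [0, 2, 1],
    [1, 3, 0],
    [0, 0, 1],
    [1, 4, 0],
    [0, 1, 1],
    [1, 1, 1],
])
y = np.array([1, 0, 0, 1, 0, 1, 0, 0])  # 1 = loss event observed (synthetic)

model = LogisticRegression().fit(X, y)

# Per-application probability of a loss event over the training window;
# this estimate can parameterize loss event frequency in a FAIR simulation.
p_event = model.predict_proba(np.array([[1, 6, 0]]))[0, 1]
```

The point of the sketch is the shape of the pipeline, not the model class: inventory attributes in, a calibrated per-application event probability out, which then scales to thousands of applications without a manual assessment for each.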

We'll explore our results, including failures and missteps we made along the way and what we learned from them. We'll provide strategies for articulating the value of this work to your business to get buy-in. Demos will be woven throughout the talk to clarify our methodology.

Attendees will leave this talk with a practical approach to performing application risk analysis at scale. Attendees adopting this methodology can speak more consistently with their stakeholders, with loss measured in frequency and dollars, to drive a better security posture.


Shannon Morrison

Senior Security Engineer - Detection Engineering @Netflix
Shannon Morrison is a senior security engineer on the Detection Engineering team at Netflix, where she builds data-driven detections. Previously, she was a data scientist building anomaly detection models and a container-based machine learning platform at a Fortune 50 insurance company.


Scott Behrens

Senior Security Engineer @Netflix
Scott Behrens is a senior security engineer who drives technical strategy for the Product and Application Security organization at Netflix. Before Netflix, Scott worked as a senior security consultant at Neohapsis (Cisco) and as an adjunct professor at DePaul University.

Thursday May 20 / 10:10AM EDT (40 minutes)

Track: Building Secure Systems
Topics: Security, Quantitative Risk, Programming