ICSA Colloquium/SPT Seminar - 17/06/2021

Title: Security Engineering and Machine Learning

Abstract:
Statistical machine-learning techniques have been used in security applications for over 20 years, starting with spam filtering, fraud engines and intrusion detection. In the process we have become familiar with attacks from poisoning to polymorphism, and issues from redlining to snake oil. The neural network revolution has recently brought many people into ML research who are unfamiliar with this history, so it should surprise nobody that many new products are insecure. In this talk I will describe some recent research projects where we examine whether we should try to make machine-vision systems robust against adversarial samples, or fragile enough to detect them when they appear; whether adversarial samples have constructive uses; how we can mount service-denial attacks on neural-network models; the need to sanity-check outputs; and the need to sanitise inputs. We need to shift the emphasis from the design of "secure" ML classifiers to the design of secure systems that use ML classifiers as components.
 
Bio:
Ross Anderson is Professor of Security Engineering at the University of Edinburgh and the University of Cambridge. He is widely recognised as one of the world's foremost authorities on security. In 2015 he won the Lovelace Medal, Britain's top award in computing. He is a Fellow of the Royal Society and the Royal Academy of Engineering. He is one of the pioneers of the economics of information security, peer-to-peer systems, API analysis and hardware security. Over the past 40 years, he has also worked or consulted for most of the tech majors.
Date: Jun 17 2021
Speaker: Ross Anderson
Venue: Zoom