Adversarial Machine Learning: The Pitfalls of Artificial Intelligence-based Security

  • Date 06 Jun 2017

06 Jun 2017, 13:40 - 14:25

Intelligent Defence

Language:
English

AI has recently been touted as the next pivotal technology in malware classification, autonomous hacking, and network behavior anomaly detection.

However, the current state of the art is often not well understood, and the risks that come from adversarial machine learning are often understated.

This presentation reviews how AI is (mis)used in various security-related domains, and describes the possible attacks against AI-based security systems.

Leveraging the experience gained from participating in the first-ever AI-based hacking competition and from researching the latest machine learning approaches to malware analysis, the presentation provides new insights into what AI can (and cannot) do for security.
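The adversarial attacks alluded to above can be made concrete with a minimal sketch (not taken from the talk): an FGSM-style evasion against a toy linear classifier. Every name, weight, and parameter here is an illustrative assumption, not a real detection model.

```python
# Minimal sketch of an FGSM-style evasion attack against a toy linear
# "malware classifier". All weights and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: score = w @ x + b, flagged as malicious if score > 0.
w = rng.normal(size=20)
b = 0.0

def score(x):
    return float(w @ x + b)

# Start from a feature vector the classifier flags as malicious.
x = rng.normal(size=20)
while score(x) <= 0:
    x = x + w  # nudge along w until it is flagged

# FGSM-style perturbation: step each feature against the sign of the
# score's gradient (for a linear model, the gradient is simply w).
eps = 0.5
x_adv = x - eps * np.sign(w)

# The perturbed sample always scores strictly lower, pushing it
# toward (or past) the decision boundary.
print(score(x), score(x_adv))
```

For a linear model the score drop is exactly `eps * sum(|w|)`, which is why even small per-feature perturbations can flip a classification decision.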

Learning Outcomes:

  1. Gain a deeper understanding of what artificial intelligence can do to improve the security of systems and networks
  2. Understand how artificial intelligence and machine learning are used today in available commercial tools
  3. Learn about the risks associated with adversarial machine learning and how to mitigate them
  4. Obtain an insider's view into autonomous hacking competitions and real-world AI-based malware analysis
  5. Learn how to make sense of the buzzwords and marketing lingo used to describe AI-based security solutions


Contributors

  • Giovanni Vigna

    Speaker

    Professor and CTO

    University of California Santa Barbara and Lastline

    Giovanni Vigna is a Professor in the Department of Computer Science at the University of California, Santa Barbara, and the CTO of Lastline, Inc....
