Let's fool modern AI systems with physical stickers!
[ UUID ] 66b11951-5aaf-4e2d-9d88-029171028b9a
[ Session Name ] Let's fool modern AI systems with physical stickers!
[ Primary Space ] Privacy and Security
[ Submitter's Name ] Anant Jain
[ Submitter's Affiliated Organisation ] Commonlounge (Compose Labs)
[ Submitter's GitHub ] @anant90
What will happen in your session?
This session will start with a short visual introduction to machine learning. I will keep the explanation free of prerequisites and math, and will model this part of the session on http://www.r2d3.us/visual-intro-to-machine-learning-part-1/
Next, we'll dive into a demo of an ML application that identifies objects in real time. Once the participants are convinced that it works well, I'll briefly introduce them to "adversarial attacks", an emerging area of research in this field. To demo an adversarial attack, we'll circulate physical stickers that look like nothing in particular but trick the ML application into believing that anything in front of it is a "toaster". Here's a demo video from the original "Adversarial Patch" paper (Brown et al., 2017): https://www.youtube.com/watch?v=i1sp4X57TL4
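For participants who want to peek under the hood, here is a minimal sketch of the kind of live demo described above, assuming a webcam, OpenCV, and a pretrained ImageNet classifier from torchvision. The model choice (MobileNetV2) and the imagenet_classes.txt label file are my assumptions for illustration, not fixed parts of the session:

```python
# Classify webcam frames with a pretrained ImageNet model.
# Requires: pip install opencv-python torch torchvision
import cv2
import torch
from torchvision import models, transforms

model = models.mobilenet_v2(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet class names, one per line (this file path is an assumption).
with open("imagenet_classes.txt") as f:
    labels = [line.strip() for line in f]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV delivers BGR; the model expects RGB.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch)[0], dim=0)
    idx = int(probs.argmax())
    # Overlay the top prediction and its confidence on the frame.
    cv2.putText(frame, f"{labels[idx]}: {probs[idx]:.2f}",
                (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Pointing the camera at a printed adversarial patch, such as the artwork released with the paper, should flip the top prediction to "toaster" without any change to this code.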
What is the goal or outcome of your session?
The goal of the session is to demystify machine learning for the participants and show them a real machine-learning system in action. The secondary goal is to show that machine learning is itself just another tool, one that is susceptible to adversarial attacks. Such attacks can have serious implications, especially in a world with self-driving cars and other automated systems. The session aims to be highly collaborative and audience-driven, and can be adjusted to suit the participants' familiarity with machine learning and coding.
Time needed
60 mins