MozFest (Mozilla Festival) / mozfest-program-2018 / Issues / #619
Closed
Issue created Aug 01, 2018 by mozfest-bot@mozfest-bot

Let's fool modern AI systems with physical stickers!

[ UUID ] 66b11951-5aaf-4e2d-9d88-029171028b9a

[ Session Name ] Let's fool modern AI systems with physical stickers! [ Primary Space ] Privacy and Security

[ Submitter's Name ] Anant Jain [ Submitter's Affiliated Organisation ] Commonlounge (Compose Labs) [ Submitter's GitHub ] @anant90

What will happen in your session?

This session will start with a short visual introduction to machine learning. I will keep the explanation free of prerequisites and math, and will model this part of the session on http://www.r2d3.us/visual-intro-to-machine-learning-part-1/

Next, we'll dive into a demo of an ML application that identifies objects in real time. Once the participants are convinced that it works well, I'll briefly introduce them to "adversarial attacks", an emerging area of research in this field. To demo an adversarial attack, we'll circulate physical stickers that depict nothing recognizable yet trick the ML application into believing that anything in front of it is a "toaster". Here's a demo video from the original paper: https://www.youtube.com/watch?v=i1sp4X57TL4
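For participants who want to see the core idea behind such attacks in code, here is a minimal sketch, assuming only NumPy. It attacks a toy logistic-regression "model" rather than the deep network from the demo, and the weights and step size are made up for illustration, but the principle is the same one the toaster sticker exploits: perturb the input in the direction that increases the model's loss.

```python
import numpy as np

# Minimal FGSM-style adversarial example on a toy logistic regression.
# Real patch attacks target deep networks and optimize a printable sticker,
# but the core idea is identical: step the input along the loss gradient.

rng = np.random.default_rng(0)

# Toy "model": fixed random weights, zero bias (hypothetical, for illustration).
w = rng.normal(size=16)
b = 0.0

def predict(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# An input the model confidently labels class 1 (constructed to align with w).
x = 0.5 * np.sign(w)

# For cross-entropy loss with true label y=1, the gradient w.r.t. x is (p - 1) * w.
# FGSM takes one step of size epsilon in the *sign* of that gradient.
epsilon = 0.6
p = predict(x)
grad = (p - 1.0) * w
x_adv = x + epsilon * np.sign(grad)

print("clean confidence:", predict(x))
print("adversarial confidence:", predict(x_adv))
```

A tiny, nearly uniform perturbation is enough to flip the model's decision, which is why a printed sticker that "looks like nothing" to humans can dominate a classifier's output.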

What is the goal or outcome of your session?

The goal of the session is to demystify machine learning for the participants and show them a real machine learning system in action. The secondary goal is to show that machine learning is itself just another tool, one that is susceptible to adversarial attacks. Such attacks can have serious implications, especially in a world of self-driving cars and other automation. The session aims to be highly collaborative and audience-driven, and can be adjusted to suit the participants' familiarity with machine learning and coding.

Time needed

60 mins
