Understanding Adversarial AI Vulnerabilities
Researchers have shown that AI systems can be manipulated through tiny, often imperceptible alterations to their machine-learning inputs. Such adversarial manipulation can have serious consequences, such as an AI system mistaking one object for another and launching an attack as a result.
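To illustrate the general idea (this is not part of any Pentagon system), the sketch below shows one widely studied attack, the Fast Gradient Sign Method, in which a small, bounded change to each pixel is enough to push a classifier towards a wrong prediction. The `fgsm_perturb` function, the `epsilon` bound, and the use of PyTorch are illustrative assumptions, not details taken from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Illustrative Fast Gradient Sign Method (FGSM) attack.

    Adds a small perturbation (bounded by epsilon per pixel) in the
    direction that most increases the model's loss. Assumes `images`
    are scaled to [0, 1] and `model` is a differentiable classifier.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by epsilon in the direction of the loss gradient's sign.
    adversarial = images + epsilon * images.grad.sign()
    return torch.clamp(adversarial, 0.0, 1.0).detach()
```

Even with `epsilon` small enough that the change is invisible to a human, perturbations of this kind are often sufficient to flip the predicted class, which is the behaviour GARD is concerned with.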
Risks of Autonomous AI in Warfare
There are concerns that autonomous AI-driven ‘slaughterbots’ or ‘killer robots’ could go rogue on the battlefield. The US Pentagon is actively addressing vulnerabilities in its artificial-intelligence systems to mitigate such risks.
DARPA’s Concerns and Initiatives
The Defense Advanced Research Projects Agency (DARPA) has highlighted that AI systems are susceptible to exploitation through visual tricks and manipulated signals. In safety-critical and military settings, such exploitation could have severe consequences, for example causing a system to misidentify its target.
The GARD Research Programme
The Pentagon has initiated the Guaranteeing AI Robustness Against Deception (GARD) research programme to tackle these challenges. GARD aims to establish theoretical foundations for identifying vulnerabilities in AI systems, enhancing system robustness, and developing effective defences.
Dr. Hava Siegelmann of DARPA emphasised the critical need for ML defences, especially as AI technology becomes more deeply integrated into essential infrastructure. The GARD programme aims to ensure that AI systems remain safe and reliable in the face of attempted deception.
Objectives of GARD
- Develop theoretical foundations for defensible ML and create new defence mechanisms based on them.
- Create and test defensible systems across various settings.
- Construct a test bed for evaluating ML defensibility in different threat scenarios.
Through these objectives, GARD seeks to build deception-resistant ML technologies with robust evaluation criteria.
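GARD's specific defence mechanisms are not described in detail here, so the following is only a hedged sketch of one generic, widely studied hardening technique: adversarial training, in which the model is updated on deliberately perturbed inputs. The training-step function, optimiser, and reuse of the earlier `fgsm_perturb` helper are assumptions for illustration, not GARD's actual method.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One illustrative adversarial-training step.

    Generates FGSM-perturbed inputs (using the fgsm_perturb sketch above)
    and trains the model on them, so the model learns to resist the same
    kind of small input manipulation. Generic example only, not a GARD method.
    """
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()                       # clear gradients left by the attack pass
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training loops of this kind trade some clean-input accuracy for robustness, which is why GARD also stresses evaluation criteria and test beds for measuring how defensible a system actually is.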