AI Security Fundamentals
This covers the foundational concepts essential for AI Red Teaming, bridging traditional cybersecurity with AI-specific threats. An AI Red Teamer must understand common vulnerabilities in ML models (such as evasion and data poisoning), security risks across the AI lifecycle (from data collection to deployment), and how AI capabilities can be misused. This knowledge forms the basis for designing effective tests against AI systems.
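To make the evasion idea concrete, the snippet below is a minimal, hypothetical sketch (not taken from the resources listed here) of the Fast Gradient Sign Method in PyTorch: it perturbs an input in the direction that increases the model's loss, bounded by a small `epsilon`, so the result looks nearly identical to the original yet may be misclassified. The toy model, input, and label are placeholders for illustration only.

```python
import torch
import torch.nn as nn

def fgsm_evasion(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an evasion (adversarial) example with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each feature in the sign of its gradient, then clamp to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: attack a toy classifier on a random "image" in [0, 1].
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # placeholder input
y = torch.tensor([3])          # placeholder true label
x_adv = fgsm_evasion(model, x, y)
print("max perturbation:", (x_adv - x).abs().max().item())
```

Data poisoning works at the other end of the lifecycle: instead of perturbing inputs at inference time, the attacker corrupts or injects training data so the deployed model learns attacker-chosen behavior.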
Learn more from the following resources:
- @article@Building Trustworthy AI: Contending with Data Poisoning - Nisos - Explores data poisoning threats in AI/ML.
- @article@What Is Adversarial AI in Machine Learning? - Palo Alto Networks - Overview of adversarial attacks targeting AI/ML systems.
- @course@AI Security | Coursera - Foundational course covering AI risks, governance, security, and privacy.