# Defensive Measures Defensive measures protect AI models from prompt attacks. Techniques include input sanitization, model fine-tuning, and prompt engineering. These strategies aim to enhance AI system security, prevent unauthorized access, and maintain ethical output generation. Visit the following resources to learn more: - [@article@Defensive Measures](https://learnprompting.org/docs/prompt_hacking/defensive_measures/overview) - [@opensource@Prompt Injection Defenses](https://github.com/tldrsec/prompt-injection-defenses?tab=readme-ov-file#prompt-injection-defenses)