# OpenAI Moderation API

The OpenAI Moderation API helps detect and filter harmful content by analyzing text for issues like hate speech, violence, self-harm, and sexual content. It uses machine learning models to identify inappropriate or unsafe language, allowing developers to build safer online environments and enforce community guidelines. The API is designed to be integrated into applications, websites, and platforms, providing real-time content moderation to reduce the spread of harmful or offensive material.

Learn more from the following resources:

- [@official@Moderation](https://platform.openai.com/docs/guides/moderation)
- [@article@How to use the moderation API](https://cookbook.openai.com/examples/how_to_use_moderation)
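
A minimal sketch of a moderation check, assuming the official `openai` Python SDK and an `OPENAI_API_KEY` environment variable; the `flagged_categories` helper and the sample input are illustrative, not part of the API itself:

```python
def flagged_categories(result: dict) -> list[str]:
    """Return the names of the categories the moderation endpoint flagged.

    `result` is one entry of the response's `results` list, which maps
    category names (e.g. "hate", "violence", "self-harm") to booleans.
    """
    return [name for name, hit in result["categories"].items() if hit]


if __name__ == "__main__":
    # Live call: requires the `openai` package and OPENAI_API_KEY to be set.
    from openai import OpenAI

    client = OpenAI()
    response = client.moderations.create(
        model="omni-moderation-latest",
        input="Sample text to check before posting",
    )
    result = response.results[0].model_dump()
    print("flagged:", result["flagged"])
    print("categories:", flagged_categories(result))
```

The helper works on the plain response shape, so the same logic can gate user input (block, queue for review, or allow) regardless of how the request itself is made.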