Risk Control Recognition
Overview
The Risk Control Recognition API offers a set of models that help users identify and handle sensitive content in text and images.
Model List
Text Moderation
An efficient and accurate text content moderation API service that identifies inappropriate content such as pornography, violence, and abuse. It suits scenarios like social platforms, comment sections, and instant messaging, helping keep content safe and compliant.
nsfw-classifier
NSFW Classifier is a high-precision deep learning model that identifies and filters, in real time, sensitive images unsuitable for public environments.
Security-semantic-filtering
Security-semantic-filtering strengthens system security by semantically analyzing and filtering data, helping keep sensitive information protected and compliant. A hedged usage sketch appears at the end of the Sample Code section below.
Sample Code
Text Content Risk Control Check
from openai import OpenAI

client = OpenAI(
    base_url="https://moark.ai/v1",
    api_key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # Please replace with your access token
)

response = client.moderations.create(
    input=[
        {
            "type": "text",
            "text": "...text to classify goes here...",
        }
    ],
    model="moark-text-moderation",  # Replace with the desired model name
)
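Since the service is accessed through the OpenAI client, the response is presumably the standard moderation result object. The sketch below assumes that schema (a results list whose entries carry flagged and per-category booleans); those field names come from the OpenAI client library and are an assumption here, not something this document confirms. The same pattern applies to the image check that follows.

# A minimal sketch, assuming the response follows the OpenAI moderation
# schema; the field names below are assumptions, not confirmed by these docs.
result = response.results[0]
if result.flagged:
    # model_dump() turns the categories object into a plain dict of
    # category name -> bool, so we can list which checks were triggered.
    triggered = [name for name, hit in result.categories.model_dump().items() if hit]
    print("Flagged categories:", triggered)
else:
    print("Content passed moderation.")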
Image Check
from openai import OpenAI

client = OpenAI(
    base_url="https://moark.ai/v1",
    api_key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # Please replace with your access token
)

response = client.moderations.create(
    input=[
        {
            "type": "image_url",
            "image_url": {
                "url": "https://example.com/image.jpg"
            }
        }
    ],
    model="nsfw-classifier",
)
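Semantic Security Filtering Check
The document does not include a dedicated sample for Security-semantic-filtering. The sketch below assumes it is invoked through the same moderations endpoint as the other models, and that the model identifier matches the name in the model list above; the exact identifier may differ in practice.

from openai import OpenAI

client = OpenAI(
    base_url="https://moark.ai/v1",
    api_key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # Please replace with your access token
)

response = client.moderations.create(
    input=[
        {
            "type": "text",
            "text": "...text containing potentially sensitive data goes here...",
        }
    ],
    # Assumption: the model identifier matches the name in the model list above.
    model="security-semantic-filtering",
)
print(response)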