
Feedback Aide API Evaluation Options

moderation

Configuration options for moderating sensitive, inappropriate, or crisis-related content in responses.

See Content moderation with Feedback Aide for more information.

Examples

// Example for evaluate endpoints
{
    "options": {
        "moderation": {
            "inappropriate_content": true,
            "critical_safety_content": true
        }
    }
}

// Example using the JavaScript API
const feedbackSession = await feedbackApp.feedbackSession(
    { /* security token */ },
    {
        state: '...',
        session_uuid: '...',
        // ...

        options: {
            moderation: {
                inappropriate_content: true,
                critical_safety_content: true
            }
        },
    }
);
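If the moderation flags come from application settings, the options object can be assembled before the session is created. The sketch below is a minimal, hypothetical helper; buildModerationOptions and the settings names are illustrative and not part of the Feedback Aide API.

// Hypothetical helper that assembles the moderation options object
// from application settings before it is passed to feedbackSession.
function buildModerationOptions(settings) {
    return {
        moderation: {
            inappropriate_content: Boolean(settings.checkInappropriate),
            critical_safety_content: Boolean(settings.checkCriticalSafety)
        }
    };
}

// Usage: pass the result as the `options` value shown above.
const options = buildModerationOptions({
    checkInappropriate: true,
    checkCriticalSafety: true
});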

Values

Type object

Properties

  • inappropriate_content boolean

    Controls whether content moderation checks responses for inappropriate content; see the single-check example after this list.

  • critical_safety_content boolean

    Controls whether content moderation checks responses for critical safety content.

  • enabled boolean Deprecated

    Controls whether content moderation checking is applied to responses.
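Each check can be toggled independently. The example below enables only the inappropriate content check; whether an omitted flag defaults to on or off is not stated here.

// Example enabling a single moderation check
{
    "options": {
        "moderation": {
            "inappropriate_content": true,
            "critical_safety_content": false
        }
    }
}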
