Sanitize prompts before sending to LLMs. Detects PII, prompt injection, toxicity, and off-topic content. Returns cleaned text + risk score. Use when: sanitiz...
by clawhubcommunitySource: clawhub
Quality: mediumSafety: communityCategory: AI & MLUpdated: 2026-03-02