If you discover a safety issue with any Etude AI system — a jailbreak, an unexpected harmful capability, a failure of our refusal mechanisms, or a vulnerability in our infrastructure — we want to hear from you. Responsible disclosure helps us fix problems before they cause harm.
We commit to acknowledging all reports within 48 hours and providing a substantive response within 14 days. We will keep you informed as we investigate and will credit you in any related safety disclosure, with your permission.
We ask that you give us reasonable time to investigate and mitigate before you publish your findings. We will work with you in good faith and will not pursue legal action against researchers who act in accordance with this policy.
For vulnerabilities in third-party systems that interact with our models, please contact the relevant vendor directly. For concerns about how our models are being used by third parties, include as much context as possible so we can investigate effectively.