
Usage policies

Updated Feb 15th, 2023

We want everyone to use our tools safely and responsibly. That's why we've created usage policies that apply to all users of OpenAI's models, tools, and services. By following them, you'll ensure that our technology is used for good.

If we discover that your product or usage doesn't follow these policies, we may ask you to make necessary changes. Repeated or serious violations may result in further action, including suspending or terminating your account.

Our policies may change as we learn more about use and abuse of our models.

Platform policy

Our API is being used to power businesses across many sectors and technology platforms. From iOS Apps to websites to Slack, the simplicity of our API makes it possible to integrate into a wide array of use cases. Subject to the use case restrictions mentioned below, we allow the integration of our API into products on all major technology platforms, app stores, and beyond.

Disallowed usage

We don't allow the use of our models for the following:

- Illegal activity
- Child Sexual Abuse Material or any content that exploits or harms children
- Generation of hateful, harassing, or violent content
- Generation of malware
- Activity that has high risk of physical harm
- Activity that has high risk of economic harm
- Fraudulent or deceptive activity
- Adult content, adult industries, and dating apps
- Political campaigning or lobbying
- Activity that violates people's privacy
- Engaging in the unauthorized practice of law, or offering tailored legal advice without a qualified person reviewing the information
- Offering tailored financial advice without a qualified person reviewing the information
- Telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition
- High risk government decision-making

We have further requirements for certain uses of our models:

- Consumer-facing uses of our models in medical, financial, and legal industries; in news generation or news summarization; and wherever else warranted, must provide a disclaimer to users informing them that AI is being used and of its potential limitations.
- Automated systems (including conversational AI and chatbots) must disclose to users that they are interacting with an AI system.
- With the exception of chatbots that depict historical public figures, products that simulate another person must either have that person's explicit consent or be clearly labeled as "simulated" or "parody."
- Use of model outputs in livestreams, demonstrations, and research is subject to our Sharing & Publication Policy.

You can use our free moderation endpoint and safety best practices to help you keep your app safe.
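As a rough illustration of wiring the moderation endpoint into an app, the sketch below builds and sends a request to `https://api.openai.com/v1/moderations` using only the Python standard library. The payload shape (an `input` string) and response shape (a `results` list whose entries carry a boolean `flagged` field) follow the public API; the helper names and the pre-send filtering pattern are our own illustrative choices, not an official client.

```python
import json
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"


def build_moderation_request(text: str, api_key: str) -> urllib.request.Request:
    """Construct the HTTP request for the moderation endpoint."""
    payload = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        MODERATION_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


def is_flagged(text: str, api_key: str) -> bool:
    """Return True if the moderation endpoint flags the given text.

    Each entry in the response's `results` list includes a boolean
    `flagged` field alongside per-category scores.
    """
    req = build_moderation_request(text, api_key)
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
    return any(r["flagged"] for r in result["results"])
```

A typical integration calls `is_flagged` on user input before passing it to a completion request, and again on model output before displaying it, refusing or logging anything flagged.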

Changelog

- 2023-02-15: We've combined our use case and content policies into a single set of usage policies, and have provided more specific guidance on what activity we disallow in industries we've considered high risk.
- 2022-11-09: We no longer require you to register your applications with OpenAI. Instead, we'll be using a combination of automated and manual methods to monitor for policy violations.
- 2022-10-25: Updated App Review process (devs no longer need to wait for approval after submitting as long as they comply with our policies). Moved to an outcomes-based approach and updated Safety Best Practices.
- 2022-06-07: Refactored into categories of applications and corresponding requirements
- 2022-03-09: Refactored into "App Review"
- 2022-01-19: Simplified copywriting and article writing/editing guidelines
- 2021-11-15: Addition of "Content guidelines" section; changes to bullets on almost always approved uses and disallowed uses; renaming document from "Use case guidelines" to "Usage guidelines".
- 2021-08-04: Updated with information related to code generation
- 2021-03-12: Added detailed case-by-case requirements; small copy and ordering edits
- 2021-02-26: Clarified the impermissibility of Tweet / Instagram generators