ChatGPT Gets Its Own Bug Bounty Program

OpenAI has launched a bug bounty program that offers rewards of up to $20,000 for finding flaws and security vulnerabilities in its products, including ChatGPT. It's not a novel strategy, as almost every major tech company runs a similar bug bounty program that rewards code sleuths and independent experts handsomely for finding vulnerabilities. Microsoft, which has poured billions into OpenAI, offers six-figure rewards for critical-level issues. OpenAI has partnered with the bounty platform Bugcrowd to let experts report their findings about "vulnerabilities, bugs, or security flaws" and collect their comparatively modest rewards.


The OpenAI Bug Bounty Program will pay anywhere from $200 to $20,000 for vulnerabilities affecting OpenAI products, especially ChatGPT. Of course, the payout will vary based on the severity of an issue, the scope of the damage, and how difficult it is for a bad actor to exploit the vulnerability.

In addition to ChatGPT, OpenAI's bug discovery program covers the APIs the company offers to commercial clients that integrate services like the GPT-4 language model into their own products. The APIs, along with the associated cloud server information and linked accounts, all fall under the rewards program.

Prompt flaws don't get rewards

ChatGPT, arguably the most popular OpenAI product, is at the center of the bug bounty program. The company says it offers rewards for flaws tied to the ChatGPT Plus subscription service, user log-in, and other related functionality. ChatGPT was recently pulled down briefly after a bug exposed personal information tied to subscriptions, such as users' real names and payment details. OpenAI will also pay a bounty to experts who uncover flaws in the freshly launched plug-in system, which allows clients like Expedia to integrate the natural-language bot into their online services.


Notably, the bug bounty doesn't cover flaws associated with the text-based prompts given to ChatGPT. OpenAI notes on its Bugcrowd program page that "issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded."

Just weeks ago, some testers managed to bypass ChatGPT's sensitive-content restrictions through the prompt system alone, without touching any code-level settings. The issue received widespread attention, but surprisingly, OpenAI won't consider prompt-related flaws bounty-worthy unless they pose a direct security threat.
