According to a recently published support page on OpenAI’s website, verification will require a government-issued ID from one of the countries supported by OpenAI’s API. The new requirement is part of OpenAI’s broader commitment to ensuring that its technologies are used safely and ethically. Notably, each ID can verify only one organization every 90 days, a restriction that underscores the company’s emphasis on security and accountability.
“At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely,” states the page. The move comes in response to concerns about misuse by a minority of developers who have allegedly violated OpenAI’s usage policies. By implementing this verification requirement, OpenAI aims to mitigate the risk of unsafe applications of its AI technologies while continuing to provide advanced models to the wider developer community.
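For developers, the practical effect is likely to surface at the API level: a request for a gated model from an unverified organization would presumably be rejected. The sketch below assumes the official `openai` Python SDK (v1.x) and uses an illustrative model name; the exact error OpenAI returns for unverified organizations is not specified on the support page, so the handling shown here is an assumption rather than documented behavior.

```python
# Minimal sketch: detecting that an organization may need verification
# when requesting a gated model. Assumes the official `openai` Python
# SDK (v1.x); the model name and the error semantics for unverified
# organizations are illustrative assumptions.
from openai import OpenAI, PermissionDeniedError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    response = client.chat.completions.create(
        model="gpt-4.1",  # hypothetical gated model, for illustration only
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
except PermissionDeniedError as err:
    # A 403 here could indicate the organization has not completed the
    # Verified Organization process for this model tier.
    print("Access denied; check your organization's verification status:", err)
```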
The introduction of the Verified Organization process could be seen as a strategic response to challenges AI developers face today. The rapid advancement of AI capabilities has raised concerns over issues such as intellectual property theft and malicious use of the technology. For instance, earlier reports indicated that OpenAI was investigating a potential breach involving a group linked to the China-based AI lab DeepSeek, which allegedly exfiltrated large amounts of data through OpenAI’s API in late 2024. These allegations point to a growing need for stricter controls over who can access and use advanced AI models.
The verification process is not just about safeguarding OpenAI’s proprietary technology; it also reflects a broader trend in the tech industry towards enhanced security measures. As organizations increasingly leverage AI for critical applications, the demand for accountability and transparency is likely to become more pronounced. This shift could set a precedent for other major AI developers to adopt similar verification processes to protect their technologies and mitigate risks.
Experts in the field have noted that such a verification system could be a double-edged sword. While it enhances security, it may also create barriers for smaller developers and startups that struggle to meet the new requirements. These concerns highlight the need for a balanced approach that fosters innovation while ensuring safety.
Moreover, as OpenAI rolls out the new verification requirement, it is expected to continue releasing updates and improvements to its AI models, including what the support page describes as its “next exciting model release.” The company has been proactive in addressing security challenges associated with its technologies, publishing reports on its efforts to detect and mitigate malicious use.