OpenAI Will Award $1M for Generative AI Cybersecurity Programs
OpenAI has launched a $1 million grant program for leveraging generative AI in cybersecurity. The company hopes to “boost and quantify AI-powered cybersecurity capabilities and to foster high-level AI and cybersecurity discourse” with the program, which will award money on a rolling basis as applications are accepted.
OpenAI is looking for submissions in three categories. Applicants should have a plan to either use AI to enhance cybersecurity, measure and quantify how secure an AI model is, or develop management practices and security strategies that address current and potential future security issues. The company said it has a “strong preference” for practical ideas for employing AI in defensive cybersecurity that can be licensed or distributed for public benefit.
“Our goal is to work with defenders across the globe to change the power dynamics of cybersecurity through the application of AI and the coordination of like-minded individuals working for our collective safety,” OpenAI explained in the announcement. “If you share our vision for a secure and innovative AI-driven future, we invite you to submit your proposals and join us in our aim towards enhancing defensive cybersecurity technologies.”
This is OpenAI’s second grant program launch in less than a month, following the $1 million grant announced last week to explore democratic ways of making decisions about the rules governing AI systems. The cybersecurity interest comes after a security expert uncovered a ChatGPT vulnerability that allowed some users to see the titles of other people’s conversations with the AI. A relatively quick fix couldn’t forestall increased scrutiny from government regulators and a brief ban on ChatGPT in Italy. The incident also led OpenAI to set up a ‘bug bounty’ program that pays as much as $20,000 to users who inform the company about security issues. Meanwhile, OpenAI CEO Sam Altman said at a U.S. Senate hearing on generative AI that there should be more industry regulation, in part to address such problems.