Google's AI Bug Hunter Breaks Ground with First Security Discovery

Aug 5, 2025 23:46

Google’s AI-powered vulnerability researcher, dubbed “Big Sleep,” has successfully identified its first set of security flaws. On Monday, Heather Adkins, Vice President of Security at Google, confirmed that the large language model (LLM)-based tool has discovered a total of 20 vulnerabilities in widely used open-source software, according to a report by TechCrunch.

Developed through a collaboration between Google DeepMind and the elite security research team Project Zero, Big Sleep uncovered most of the flaws in FFmpeg, a popular multimedia framework, and ImageMagick, an open-source image-editing tool.

Because the identified bugs have not yet been fixed, specific details about them have not been disclosed.

According to Google, the AI autonomously identified and reproduced each vulnerability, though the final reports were reviewed and verified by human experts before submission. Royal Hansen, Google's Vice President of Engineering, wrote on X, "This achievement opens up a new frontier in automated vulnerability detection."

The development signals a significant milestone in using artificial intelligence for proactive cybersecurity, particularly in automating the complex task of hunting for software flaws at scale.