header banner

Regenerative AI is being used by two thirds of ethical hackers to find bugs

Generative AI tools are being used by a growing number of ethical hackers to support their work hunting for vulnerabilities, according to a new report.

A study from bug bounty platform HackerOne found that over half of ethical hackers participating in programs use generative AI in some capacity. 

Nearly two-thirds (61%) said they are actively using and developing generative AI-based hacking tools in a bid to find more vulnerabilities, expand capabilities, and streamline efficiency. 

The use of generative AI tools aren’t just limited to the technical aspects of bug hunting, HackerOne revealed. 

Two-thirds (66%) of ethical hackers said they plan to use generative AI to write better reports while 53% said the technology is being used to support writing code. 

One-third (33%) said generative AI is also being used to “reduce language barriers” for bug hunters. 

Despite an appetite among ethical hackers to integrate generative AI tools within workflows, HackerOne’s study pointed to a lingering hesitancy among many with regard to long-term security risks. 

More than one-quarter (28%) of hackers told the firm they were particularly concerned about criminal exploitation of generative AI tools while 18% held concerns over a potential increase in insecure code. 

Nearly half (43%) of hackers said that generative AI could lead to an increase in vulnerabilities moving forward. 

Generative AI LLM bug hunting

HackerOne’s study also revealed that 61% of program participants plan to specifically target vulnerabilities identified in the OWASP Top 10 flaws for large language models (LLMs).


Whitepaper cover with title over image of colleagues chatting in an office with red circular digital icons around them

(Image credit: Zscaler)

Learn about the tactics used in phishing attacks and prevent costly data breaches from affecting your organization


OWASP recently published its top ten vulnerabilities for LLM applications. The most common vulnerabilities identified included prompt injection, in which an attacker manipulates the operation of an LLM through specifically crafted inputs. 

Generative AI-related supply chain vulnerabilities were also highlighted in the list. 

OWASP has determined that the LLM supply chain has glaring vulnerabilities which have the potential to impact the “integrity of training data, machine learning (ML) models, and deployment platforms”.

“Supply chain vulnerabilities in LLMs can lead to biased outcomes, security breaches, and even complete system failures,” HackerOne said.

Record payouts for bugs

This news from HackerOne comes as the bug bounty platform announces a payout milestone for users.

The firm revealed that since its inception in 2012, it has paid out over $300 million in rewards to security researchers. The size of payouts have also been steadily rising in recent years, HackerOne said.

The median price of a bug on the HackerOne platform has now reached $500, marking an increase from $400 in 2022.

More than two dozen researchers have also been paid over $1 million in rewards, HackerOne said, with the largest payout of $4 million announced in August.

Receive our latest news, industry updates, featured resources and more. Sign up today to receive our FREE report on AI cyber crime & security - newly updated for 2023.


Article information

Author: Jose Hudson

Last Updated: 1700335922

Views: 690

Rating: 4.8 / 5 (31 voted)

Reviews: 90% of readers found this page helpful

Author information

Name: Jose Hudson

Birthday: 1997-06-08

Address: Unit 3071 Box 1348, DPO AA 38429

Phone: +4647953399365839

Job: Article Writer

Hobby: Painting, Survival Skills, Juggling, Yoga, Camping, Chess, Embroidery

Introduction: My name is Jose Hudson, I am a strong-willed, accomplished, Gifted, dedicated, exquisite, brilliant, clever person who loves writing and wants to share my knowledge and understanding with you.