May 27th, 2020


The law doesn't protect ethical hackers. This new project could help close that gap


By Derek Hawkins, The Washington Post

Published Dec. 28, 2018


A new project from the cybersecurity firm Bugcrowd and a University of California researcher aims to protect well-intentioned hackers from legal action when they reveal security vulnerabilities in an organization's networks or software.

The project offers companies, academic institutions and even government agencies a standard legal agreement they can post that says, in effect: it's okay to hack us if you do it in good faith. It's a way to tell security researchers - sometimes called white hat hackers - that they won't get sued or face criminal charges if they find a flaw in an organization's systems and report it responsibly.

The effort highlights how federal anti-hacking laws aren't keeping pace with the way security vulnerabilities are often identified and patched. Laws such as the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act don't contain protections for researchers who disclose bugs, creating a legal gray area that discourages ethical hacking. The project could help close that gap.

"It should be built into those policies, but we're not there yet. And this is a stopgap for people to take advantage of until we're there," said Jason Haddix, Bugcrowd's vice president of trust and security.

The problem is real and well known. In recent years, companies have sued or threatened legal action against researchers who have uncovered serious vulnerabilities - sometimes to prevent an embarrassing flaw from being disclosed publicly. In one extreme example last year, the FBI investigated security researchers in Georgia who discovered that millions of voter registration records were publicly accessible on the state's election website.

And it's not just the law that hasn't caught up. Private companies and other organizations have widely inconsistent approaches for handling these disclosures. Some have no policies in place for protecting security researchers. And even those that do tend to use convoluted or murky legal language, Haddix said. That makes it difficult for white hat hackers to draw the line between what an organization sees as permissible and what could get them in trouble.

"A lot of times the legal language can get like spaghetti," Haddix told me. "It's hard to unwrap if you're not a lawyer." In turn, he said, researchers are reluctant to report potentially serious security flaws because they fear the repercussions. seeks to simplify things. It offers a template with boilerplate language that spells out in plain terms what security researchers can and can't do if they decide to probe for bugs, and offers them legal safe harbor if they play by the rules. The template is open sourced, meaning anyone is free to use it or modify it. The target audience is "everyone on the Internet," Haddix said - from major tech companies to mom-and-pop shops.

It's a sign that the private sector is taking the lead on this issue rather than waiting for the government to act, Ars Technica's Sean Gallagher wrote in a post: "Given how regulated information security practices have become in some industries - and how badly legislation regarding any sort of hacking has been handled over the past few years - using 'open source,' battle-tested boilerplate contracts to speed adoption of disclosure and bug bounty programs might be a lot easier and a lot less expensive than anything mandated by new government regulation."

The project grew out of work by Amit Elazari, a doctoral candidate at the University of California at Berkeley School of Law, who has advocated for standardizing disclosure and bug bounty programs, which offer financial rewards for reporting flaws. Some early incarnations of the project have been promising. Mozilla executives recently credited Elazari with motivating them to add new safeguards to their bug bounty program. "The legal protections afforded to bounty program participants have failed to evolve," they wrote, "putting security researchers at risk and possibly stifling that research."

Other companies have rolled out programs like the one the project proposes.

Dropbox, for example, revised its disclosure terms earlier this year to better protect white hat hackers after a security firm sued a reporter for writing about an apparent bug in its software. "Anything that stifles open security research is problematic," Dropbox's head of security wrote in a blog post, "because many of the advances in security that we all enjoy come from the wonderful combined efforts of the security research community."
