At 6:30am on Tuesday, May 24, someone began banging on the front door of Justin Shafer’s home in North Richland Hills, Texas. When Shafer and his wife answered the door, they found a dozen FBI agents with guns drawn. Shafer, 36, still in his boxer shorts, was allegedly handcuffed, according to a Daily Dot report. The agents seized all of Shafer’s computers and digital devices, and pushed him into a car.
The raid happened because Shafer is a software security researcher—one best known for finding bugs in dental software. In 2015 he showed that the maker of a dental practice management platform called Dentrix misled users about the strength of the software’s encryption algorithms, compromising patients’ privacy. More recently, he’d discovered a publicly accessible File Transfer Protocol (FTP) server containing sensitive data on 22,000 dental patients. He privately notified the company responsible, and once the data was secured, published a blog post about the server and a related security vulnerability.
Shafer wasn’t thanked for his efforts. Instead, he now awaits trial. His story, though, is not as unusual as you might expect; he joins a growing list of people facing law-enforcement scrutiny for revealing embarrassing cybersecurity flaws. In fact, the antagonism from authorities has recently become severe enough that some researchers have decided to stop sharing any flaws they find.
According to Tor Ekeland, a New York-based attorney who specializes in defending hackers and cybersecurity experts, companies see security flaws as an embarrassment—and use legal force to quell what they see as negative publicity. “The Department of Justice, our government officials—they are all pro-business and want to protect corporations,” he says.
Ekeland is well-known for defending Andrew Auernheimer, the infamous hacker-troll known as Weev, who was sentenced to prison for obtaining the personal data of more than 100,000 iPad owners from AT&T’s publicly accessible website. He’d found a glaring flaw, but he hadn’t had to work hard to do so: he later compared his research to walking down the street and writing down the physical addresses of buildings, only to find himself charged with identity theft. In response to his prosecution, he wrote an opinion piece for Wired arguing, “Forget Disclosure—Hackers Should Keep Security Holes to Themselves.”
Auernheimer’s conviction was eventually vacated, but Ekeland believes he never should have been charged. “AT&T’s response was childish,” he says. “If someone points out that you shouldn’t be storing all that propane next to welding equipment, they should be rewarded for it, not prosecuted.”
Ekeland argues that prosecuting researchers who voluntarily share their findings can have a chilling effect on future disclosures—if it hasn’t already. At the heart of these aggressive actions, he says, is the flawed Computer Fraud and Abuse Act (CFAA), a statute that originated in 1984. “There’s an old-school conceptual framework behind the CFAA, and it was written by people who are computer ignorant,” says Ekeland. “The criminal scope of the CFAA needs to be shrunk.”
Chris Roberts is someone else who found himself under scrutiny after demonstrating security flaws. Roberts, a respected professional who’d founded the cybersecurity firm One World Labs, was aboard a United Airlines flight in April 2015. Using the plane’s Wi-Fi, he tweeted a rather cryptic message: “Find myself on a 737/800, lets see Box-IFE-ICE-SATCOM,? Shall we start playing with EICAS messages? ‘PASS OXYGEN ON’ Anyone? :)”
The first reply was, “…aaaaaand you’re in jail. :)” His tweet referred to research he’d conducted years earlier on vulnerabilities in in-flight infotainment networks, flaws that could allow someone to access cabin controls and deploy a plane’s oxygen masks. For years, Roberts had tried to notify Boeing and Airbus about security issues with their passenger communications systems, but he’d failed to get their attention.
His tweet, however, did draw attention. When his connecting flight touched down in Syracuse, New York, law enforcement was waiting for him. He insisted he was merely pointing out a vulnerability, but the airlines and the FBI weren’t having it. Soon, the Bureau was searching his computers and questioning him about his claim that he had physically accessed the in-flight network via a box under the seat and used it to commandeer the plane.
When the FBI released its affidavit claiming Roberts had tampered with a plane, some fellow security researchers were aghast. Yahoo’s chief information security officer, for one, reportedly tweeted, “You cannot promote the (true) idea that security research benefits humanity while defending research that endangered hundreds of innocents.”
Ultimately, Roberts didn’t face any prison time, but One World Labs executives dissolved the company, and Roberts moved on to another job. Via email, he said that while some in the FBI understand what he was trying to accomplish, others “are so tied up in processes, red-tape, and self-absorbed mannerisms that they get in the way of the good work being done.” (A request to interview FBI officials was denied; an emailed statement said that “due to limited resources at this time, we cannot accommodate your request.”)
Ekeland says that besides law enforcement’s sometimes aggressive response, the other factor threatening full-disclosure research is how companies view security flaws. “The problem,” he says, “is that companies get embarrassed and defensive. They think, ‘Oh shit, we are potentially civilly liable by exposing user identities on the internet, so let’s try to blame someone else.’” He’d like to see blame placed where he feels it belongs. “If we’re going to criminalize information security researchers for exposing flaws,” he says, “we should make it a felony for companies to have bad information security.”
Of course, not every researcher faces felony charges. Cesar Cerrudo is chief technology officer for IOActive, a global security consultancy, and in the past 15 years he’s uncovered bugs affecting companies such as Yahoo, Twitter, Microsoft, and IBM. He has the advantage of being backed by a massive firm that provides legal advice before he goes public or consults with a company.
He says there are lines to be drawn in lawful research. “It’s fine to reveal a problem with a website, but if you go a step further and collect user data and credit card numbers [in order to prove a point], then you can get into real trouble,” he says via Skype from his hometown of Paraná, Argentina.
He also says that building strong relationships with companies is critical; he’s worked with Microsoft for more than a decade, identifying flaws in the Windows kernel and operating system. Without those tight relationships, it’s hard to predict how a corporation will respond when security flaws are revealed. “Other companies get angry and can threaten legal action,” he says, “or they just ignore you, for some reason.”
Being ignored is a common complaint among security researchers. Cerrudo himself recalls finding vulnerabilities in the traffic light systems of cities such as New York and San Francisco. When he contacted the product vendors for those systems, he was met with the online equivalent of a shrug. Later, he learned that San Francisco’s traffic light system still didn’t encrypt data from traffic sensors, leaving the city open to attack. “And I remember contacting the Federal Transit Administration about this issue and they said to me, ‘We understand this is a problem and could be serious but we have a lot more serious problems to deal with than this.’” These vulnerabilities persist, Cerrudo claims, because for many vendors and corporations, security isn’t a priority.
Roberts agrees. “Security unfortunately is typically an afterthought or it’s something that someone realizes five minutes before the product launch,” he says. “Rarely is a product designed from the ground up with security AND functionality in mind.”
That lack of emphasis on security makes for a muddled gray area once products are out in the wild. “I would like to see something more publicly available that explains what a researcher can and can’t do,” says Cerrudo, “because there’s a fine line of what’s legal in this field, and I fear many young researchers, who are good guys, just don’t know the rules.”
Illustration via Max Fleishman