Internet end-users increasingly face threats of compromise by visiting seemingly innocuous websites that have themselves been compromised by malicious actors. These compromised machines are then incorporated into botnets that perpetrate further attacks across the Internet. Google attempts to protect users of its search products from these hidden threats by publicly disclosing such infections through interstitial warning pages placed behind its search results. This paper explores the effects of this policy on the economic ecosystem of webmasters, web hosts, and attackers by analyzing the experiences and data of the StopBadware project, which manages the appeals process through which websites whose infections have been disclosed by Google are fixed and unquarantined. Our results show that, in the absence of disclosure and quarantine, certain classes of webmasters and hosting providers have little incentive to secure their platforms and websites, and that the malware industry is sophisticated and adapts to this reality. A delayed disclosure policy may be appropriate for traditional software products; in the web infection space, however, silence during the delay period leads to further infection, since the attack is already in progress. We relate specific examples where disclosure has had beneficial effects, and we further support this conclusion by comparing infection rates in the U.S., where Google has high market penetration, with those in China, where its penetration is much lower.
Oliver Day is a researcher at the Berkman Center for Internet and Society, where he focuses on the StopBadware project. He was formerly a principal security consultant at @stake, where he focused on web applications and storage area networks and lectured before dozens of Fortune 500 companies and educational institutions about network security. Before @stake, Oliver was an engineer with eEye Digital Security, where he created automated security checks to detect newly discovered vulnerabilities. He has also been a staunch advocate of the disclosure process and of shielding for security researchers.
Rachel Greenstadt is an assistant professor of computer science at Drexel University, where she studies issues at the intersection of artificial intelligence, security, and privacy. She recently completed a postdoctoral fellowship at Harvard's Center for Research on Computation and Society. She tries to combine her academic work with participation in hacker conferences.