At a high level, this talk aims to help hackers understand why usability is an integral part of security. In the hacker community, we often tell users to RTFM ("read the fucking manual"), despite the fact that hackers are not known for good manuals. Reminding n00bs that true hacking skill requires hard work is important, but making privacy accessible to everyone (not just the "elites") is also important. I will quote experts like Roger Dingledine[1] to motivate the idea that usability is an essential requirement for privacy enhancing technologies (PETs), not just for lofty philosophical reasons, but because of down-to-earth concepts like "herd immunity": the more people who use PETs, the harder it is for attackers to tell which data is sensitive.
Second, we want to educate the audience on practical takeaways from some of the pioneering works in usable security. While one of the first usable security papers, "Why Johnny Can't Encrypt", was published in 1999[2], it wasn't until the one-two punch of the Arab Spring and the Snowden revelations that usable security became a hot topic outside of academia. This presentation will briefly cover some of the most useful works the academic community has produced, with an emphasis on lessons that developers can take away from said research.
Third, this talk aims to introduce basic human factors testing techniques to a nonexpert audience. The talk will emphasize scientific rigor and repeatability, rather than subjective opinions of what "good" design is. On the other hand, we want to help hackers find solutions that are "good enough". We do not aim to teach users how to A/B test 41 different shades of blue (or otherwise get bogged down in minute details). Instead, we aim to help hackers find high level "stop points" - major usability issues that cause users to get frustrated and abandon their task.

To illustrate these points, I will present a case study which uses the techniques discussed earlier in the presentation to evaluate a real-life privacy enhancing technology: the Tor Browser Bundle[3]. Rather than focusing on p-values and bar graphs, I'll walk through how the usability evaluation was conducted, why each methodological choice was made, and how hackers wishing to do usability evaluations can adopt these techniques to evaluate software they work on.
[1] Roger Dingledine and Nick Mathewson, "Anonymity Loves Company: Usability and the Network Effect," WEIS 2006. http://www.freehaven.net/anonbib/cache/usability:weis2006.pdf
[2] Alma Whitten and J. D. Tygar, "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0," Proceedings of the 8th USENIX Security Symposium (Washington, D.C., Aug. 23–26, 1999), 169–184. http://www.cs.berkeley.edu/~tygar/papers/Why_Johnny_Cant_Encrypt/OReilly.pdf
[3] Greg Norcie et al., "Why Johnny Can't Blow the Whistle: Identifying and Reducing Usability Issues in Anonymity Systems," USEC 2014. https://www.norcie.com/papers/torUSEC.pdf