Zero bugs found? Hold my Beer AFL! How To Improve Coverage-Guided Fuzzing and Find New 0days in Tough Targets

Fuzzing remains the most effective technique for hunting bugs in memory-unsafe programs. Over the last year, hundreds of security papers and talks on fuzzing have been published, and dozens of them focused on adapting or improving American Fuzzy Lop (AFL) in some way. With its simplicity and efficiency, AFL is the number-one choice for the vast majority of security researchers. This popularity means that hunting for bugs with AFL or a similar tool is becoming less and less fruitful, since many projects are already covered by other researchers. It is especially hard for projects participating in Google's OSS-Fuzz program, which utilizes AFL to generate half a trillion test cases per day.

In practice, this means that we can no longer rely blindly on AFL and should search for better fuzzing techniques. To overcome this challenge, we need to understand how AFL and similar fuzzers work and be able to exploit their weaknesses to find new 0days. This talk discusses these weaknesses using real examples, explains how we can do fuzzing better, and releases a new open-source fuzzer called Manul.
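
To make the discussion concrete, the sketch below shows the basic coverage-guided feedback loop that AFL-style fuzzers (and Manul) build on. It is purely illustrative: the names run_with_coverage(), mutate() and the single byte-flip mutation are hypothetical placeholders, not the API of AFL or Manul.

    import random

    # Minimal sketch of an AFL-style coverage-guided loop (illustrative only;
    # run_with_coverage() is an assumed callable, not a real tool's API).

    def mutate(data: bytes) -> bytes:
        """Flip one random byte; real fuzzers use many richer mutation strategies."""
        if not data:
            return bytes([random.randrange(256)])
        buf = bytearray(data)
        buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def fuzz(run_with_coverage, seeds, iterations=100_000):
        """run_with_coverage(input) -> (crashed: bool, edges: set) is assumed."""
        corpus = list(seeds)
        seen_edges, crashes = set(), []
        for _ in range(iterations):
            candidate = mutate(random.choice(corpus))
            crashed, edges = run_with_coverage(candidate)
            if crashed:
                crashes.append(candidate)
            elif edges - seen_edges:  # keep the input only if it adds new coverage
                seen_edges |= edges
                corpus.append(candidate)
        return corpus, crashes

The key detail is the coverage check: an input is kept only if it exercises new edges, and this feedback heuristic is exactly where coverage-guided fuzzers can develop the kinds of blind spots the talk examines.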

Manul is a highly scalable, coverage-guided, parallel fuzzer with the ability to search for bugs in open-source and black-box binaries on Windows and Linux. Manul was able to find 10 0-days in 4 widely used projects that had been extensively tested with AFL. These vulnerabilities were not found by chance, but by analyzing and addressing issues that exist in AFL. The authors will show several of the most critical vulnerabilities and explain why AFL overlooked them.

This talk will be of interest to experienced hackers who want to improve their bug-hunting capabilities, as well as to new researchers taking their first steps on the thorny trail of bug hunting.

Presented by