Ranking Warnings From Multiple Source Code Static Analyzers via Ensemble Learning

Authors: Athos Ribeiro (University of São Paulo), Paulo Meirelles (Federal University of São Paulo), Nelson Lago (University of São Paulo), Fabio Kon (University of São Paulo)

Abstract: While a wide variety of both open source and proprietary source code static analyzers is available on the market, each usually performs well only on a narrow class of problems, making it hard to choose a single tool to rely on when examining a program for bugs. Combining the analyses of different tools may reduce the number of false negatives, but yields a corresponding increase in the absolute number of false positives (which is already high for many tools). A possible solution, then, is to filter these results to identify the issues least likely to be false positives. In this study, we post-analyze the reports generated by three tools on synthetic test cases provided by the US National Institute of Standards and Technology. To make our technique as general as possible, we limit our data to the reports themselves, excluding other information such as change histories or code metrics. The features extracted from these reports are used to train a set of decision trees with AdaBoost to create a stronger classifier, achieving 0.8 classification accuracy (the combined false positive rate of the three tools was 0.61). Finally, we use this classifier to rank static analyzer alarms by the probability that a given alarm is an actual bug in the source code.
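To make the pipeline concrete, here is a minimal sketch of the kind of classifier-plus-ranking step the abstract describes, written with scikit-learn. The library choice, the feature matrix, and the labels are all assumptions for illustration; the paper itself only specifies decision trees boosted with AdaBoost and ranking by predicted probability.

```python
# Minimal sketch (assumptions: scikit-learn as the toolkit; the feature
# matrix and labels below are hypothetical placeholders, not the paper's data).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# X: features extracted from the static analyzer reports (e.g., reporting
# tool, warning category, severity); y: 1 if the warning is a true positive,
# 0 if it is a false positive, as determined by the labeled test cases.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))                 # placeholder feature matrix
y = (X[:, 0] + X[:, 1] > 1.0).astype(int) # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Boost shallow decision trees into a stronger combined classifier.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                         n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Rank warnings by the predicted probability of being an actual bug,
# most likely true positives first.
proba = clf.predict_proba(X_test)[:, 1]
ranking = np.argsort(proba)[::-1]
```

A reviewer would then inspect warnings in `ranking` order, so the alarms most likely to be real bugs surface first while probable false positives sink to the bottom of the list.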

Download: This contribution is part of the OpenSym 2019 proceedings and is available as a PDF file.
