2021 journal article

An Empirical Study on Type Annotations: Accuracy, Speed, and Suggestion Effectiveness

ACM Transactions on Software Engineering and Methodology, 30(2).

co-author countries: United States of America πŸ‡ΊπŸ‡Έ
author keywords: Type checking; automated static analysis; software reliability; annotations; program analysis; dimensional analysis; physical units; robotic systems
Source: Web of Science
Added: April 19, 2021

Type annotations connect variables to domain-specific types. They enable the power of type checking and can detect faults early. In practice, type annotations have a reputation for being burdensome to developers. We lack, however, an empirical understanding of how and why they are burdensome. Hence, we seek to measure the baseline accuracy and speed of developers making type annotations on previously unseen code. We also study the impact of showing one or more type suggestions. We conduct an empirical study of 97 developers using 20 randomly selected code artifacts from the robotics domain containing physical unit types. We find that subjects select the correct physical type with just 51% accuracy, and that a single correct annotation takes about 2 minutes on average. Showing subjects a single suggestion has a strong and significant impact on accuracy, both when the suggestion is correct and when it is incorrect, while showing three suggestions retains the significant benefits without the negative effects. We also find that suggestions do not come with a time penalty. We require subjects to explain their annotation choices, and we qualitatively analyze their explanations. We find that identifier names and reasoning about code operations are the primary clues for selecting a type. We also examine two state-of-the-art automated type annotation systems and find opportunities for their improvement.
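To illustrate the idea of annotating variables with physical unit types, the sketch below uses Python's typing.NewType; the unit names (Meters, Seconds, MetersPerSecond) and the function are hypothetical examples, not the type system or code artifacts studied in the paper.

from typing import NewType

# Hypothetical physical-unit types; a static type checker treats each as distinct.
Meters = NewType("Meters", float)
Seconds = NewType("Seconds", float)
MetersPerSecond = NewType("MetersPerSecond", float)

def compute_velocity(distance: Meters, duration: Seconds) -> MetersPerSecond:
    # The annotations document the intended physical unit of each parameter.
    return MetersPerSecond(distance / duration)

distance = Meters(12.0)
duration = Seconds(3.0)

ok = compute_velocity(distance, duration)      # units match the annotations
# bad = compute_velocity(duration, distance)   # a checker such as mypy flags this
                                                # swapped-argument unit fault early

In this style of annotation, the fault is caught by the type checker before the program runs, which is the kind of early fault detection the study's physical unit types are meant to provide.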