A meta-learning approach for evaluating the effect of software development policies

Date

2017

Publisher

University of New Brunswick

Abstract

Delivering high-quality software on time and on budget is a challenging endeavor, but success can be made more likely by following an approach in which guidance is provided through software development policies. Software development policies represent standards and best practices that a company has chosen to follow throughout its software development effort. For our purposes, a software policy is a statement of conduct intended to guide and constrain development activities. Policies can be written to capture company guidelines, industry best practices, empirical research, and even past experience. A simple example of a policy might read “a preliminary design must be completed before implementation begins.” Policies help to ensure the existence of environmental conditions that are conducive to a successful outcome. Depending on the situation, however, the policies in use may not have the expected effect.

Currently, there is no formal way to evaluate a company’s policy set without resorting to extensive experimentation or a case study on each policy. We propose a method that monitors weekly success indicators on project aspects such as quality, time, budget, and morale. The policies in use are then evaluated against these indicators, yielding a summary of the policies thought to affect process performance.

Because of the many complexities of this problem (e.g., policy interactions and the delayed effects of changes), our method combines several different analysis techniques to produce a more complete solution. Our set of analysis methods currently includes: a form of linear regression adapted for greater sensitivity; a check that extreme values coincide; a trend analysis that detects whether data generally deviates in the same (or opposite) direction; and a special check adapted specifically for discrete measures. The results from each method are then combined by a meta-learner that compares the similarity of the ranked results produced by each individual technique and provides a single indicator of how strongly they agree.

To ensure our method works and is practical, we validated it against industry data from a leading Canadian business-solutions provider. Despite the many challenges inherent in real-world data (e.g., missing, inconsistent, incorrect, biased, sparse, and limited data), our validation indicates that our method can identify more potential effects than traditional approaches, especially the subtler, weaker effects, which can serve as triggers for further investigation. These results should be of special interest to project managers in their efforts to deliver successful projects.
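To make the meta-learning step concrete, the following is a minimal sketch of how the ranked results from several analysis techniques could be reduced to a single agreement indicator. The abstract does not specify the similarity measure, so this sketch assumes average pairwise Spearman rank correlation; the technique names and example rankings are hypothetical and only illustrative.

    # Minimal sketch: combine rankings from several analysis techniques into one
    # agreement indicator. Spearman rank correlation is an assumption here; the
    # thesis may use a different similarity measure.
    from itertools import combinations

    def spearman(rank_a, rank_b):
        """Spearman rank correlation for two tie-free rankings of n items."""
        n = len(rank_a)
        d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
        return 1 - (6 * d2) / (n * (n ** 2 - 1))

    def agreement_indicator(rankings):
        """Average pairwise rank correlation across all technique pairs.

        rankings: dict mapping technique name -> list of rank positions,
        where rankings[t][i] is the rank technique t assigns to policy i.
        Returns a value in [-1, 1]; values near 1 mean strong agreement.
        """
        pairs = list(combinations(rankings.values(), 2))
        return sum(spearman(a, b) for a, b in pairs) / len(pairs)

    # Hypothetical example: four techniques ranking the same five policies.
    rankings = {
        "adapted_regression": [1, 2, 3, 4, 5],
        "extreme_values":     [1, 3, 2, 4, 5],
        "trend_analysis":     [2, 1, 3, 5, 4],
        "discrete_check":     [1, 2, 4, 3, 5],
    }
    print(f"agreement = {agreement_indicator(rankings):.3f}")

Under this reading, policies ranked consistently high by techniques that strongly agree would be flagged as the most likely to affect process performance, triggering further investigation.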
