T1: Data Science and Measurement in Software Reliability Engineering

Pete Rotella
Cisco Systems, Inc., USA
Sunita Chulani
Cisco Systems, Inc., USA


Date

July 16 (Monday)


Duration

09:00 - 12:30 (Half-day)


Abstract

There is an abundance of metrics and models in the field of software reliability engineering, but a significant challenge in their use is to develop a framework that is both mathematically sound and reflective of how software behaves in a large production environment. Most importantly, the metrics and models need to help engineering teams improve their practices and processes, and thereby enable them to produce high-quality products.


High-performance models are needed to enable software practitioners to identify deficient (and superior) development and test practices. Even when using standard practices and metrics, software development teams can, and do, vary substantially in practice adoption and effectiveness. One challenge for researchers and analysts in these organizations is to develop and implement mathematical models that adequately characterize the health of individual practices (such as code review, unit testing, static analysis, and function testing). These models can enable process and quality assurance groups to assist engineering teams in surgically repairing broken practices or replacing them with more effective and efficient ones.
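
As a simplified illustration of what a practice-health model can look like (a hypothetical sketch, not the model actually used at Cisco; the metric names and data are invented), one might relate per-release practice metrics to a binary reliability outcome and inspect which practices carry predictive weight:

    # Hypothetical sketch: relate practice metrics to a release reliability outcome.
    # Metric names, data, and model choice are illustrative assumptions only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # One row per release: [code-review coverage, unit-test coverage, static-analysis fix rate]
    X = rng.uniform(0.3, 1.0, size=(200, 3))
    # Synthetic outcome: releases with stronger practices are more likely to meet the reliability goal.
    p = 1 / (1 + np.exp(-(4 * X[:, 0] + 2 * X[:, 1] + 1 * X[:, 2] - 4)))
    y = rng.binomial(1, p)

    model = LogisticRegression().fit(X, y)
    for name, coef in zip(["code_review", "unit_test", "static_analysis"], model.coef_[0]):
        print(f"{name:16s} weight = {coef:+.2f}")

In practice, coefficients (or comparable importance measures) from such a model point quality assurance groups toward the practices most worth repairing first.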


In this tutorial, we will describe our experience with model building and implementation, and identify the boundaries within which certain types of models perform well. We will also address how to balance model generalizability and specificity in order to integrate computational methods into the everyday engineering workflow.


An important part of this modeling and analysis effort is the correlative ‘linking’ of development and test metric values to customer experience outcomes, and then to customer sentiment (satisfaction). These linkages are essential not only for convincing engineering leadership to use computational tools in practice, but also for enabling investigators, at an early stage, to design experiments and pilots that test model applicability going forward. After convincing experiments and pilots have been demonstrated, much work remains: choosing a useful and manageable set of metrics, establishing goals and tracking/reporting mechanisms, and planning and implementing the tooling, training, and rollout. These practical considerations invariably put a strain on the models, so the models and ancillary analyses must be resilient, ‘industrial strength.’
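
For example, a first-pass linkage analysis might compute rank correlations between release-level in-process metrics and customer experience outcomes, and then between those outcomes and satisfaction scores. The sketch below is a hypothetical illustration of that kind of analysis; the metric names and data are placeholders, not Cisco measures:

    # Hypothetical linkage sketch: in-process metric -> customer experience -> satisfaction.
    # All data and metric names are illustrative placeholders.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n_releases = 60

    test_escape_rate = rng.uniform(0.01, 0.20, n_releases)        # in-process metric
    customer_found_defects = rng.poisson(50 * test_escape_rate)   # customer experience outcome
    csat = 90 - 3 * customer_found_defects + rng.normal(0, 2, n_releases)  # satisfaction score

    rho1, p1 = spearmanr(test_escape_rate, customer_found_defects)
    rho2, p2 = spearmanr(customer_found_defects, csat)
    print(f"in-process -> customer experience: rho={rho1:+.2f} (p={p1:.3f})")
    print(f"customer experience -> satisfaction: rho={rho2:+.2f} (p={p2:.3f})")

Correlations of this kind establish the linkage; they do not by themselves establish causality, which the tutorial outline below takes up separately.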


Understanding a model’s practical limitations and strengths is an important part of its use – just as its mathematical and statistical limitations and strengths underscore its scientific validity. Both factors, mathematical and practical, need to mesh in a workable way in a computation-driven engineering environment. The tutorial addresses the integration of these factors.


We will describe our experience in building and implementing models used by engineering teams that employ diverse development approaches, including waterfall, hybrid, and agile. We will show how we link in-process measures (from development and test) to customer experience and customer satisfaction, which in turn correlate strongly with company revenue. We will discuss the steps involved in choosing the most valuable metrics, setting goals for them, and using them to help improve development and test practices and processes.


Tutorial Outline

  • Industrial mathematical models and simulations – strengths, weaknesses, limitations
  • Generalizability and specificity – balancing these in a software development environment
  • Choosing variables (including the question ‘to fish or not to fish?’)
  • Correlation and causality – can we test for causality, and if so, how?
  • Linkages from in-process to customer experience, then to customer sentiment metrics
  • Case studies – general and specific models that have worked well and that haven’t
  • Impact v. precision/recall – how to identify which problems to address, and where to invest
  • Practical limitations of models in industrial settings
  • Customer sentiment models and models characterizing non-functional requirements
  • Reporting/goaling/governance and the best-in-class paradigm (illustrated in the sketch after this outline)
  • Measuring model adherence/adoption and effectiveness
  • Practical considerations in integrating models into the engineering workflow
  • Use of computational engineering in corporate quality programs
  • What has worked and what has not – what are the next steps for computational engineering
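
To make the best-in-class paradigm concrete, the toy sketch below (hypothetical teams, metric, and threshold choice, not Cisco data) sets a metric goal at the level achieved by the top decile of teams and reports each team’s gap to that goal:

    # Hypothetical best-in-class goaling sketch: goal = 90th percentile across teams.
    # Team names, metric, and threshold are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    teams = [f"team_{i}" for i in range(12)]
    unit_test_coverage = rng.uniform(0.40, 0.95, len(teams))   # fraction of code covered

    goal = np.percentile(unit_test_coverage, 90)                # "best-in-class" level
    print(f"best-in-class goal: {goal:.2f}")
    for team, cov in zip(teams, unit_test_coverage):
        gap = max(0.0, goal - cov)
        print(f"{team:8s} coverage={cov:.2f} gap_to_goal={gap:.2f}")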

About the Speakers

Pete Rotella has over 30 years of experience in the software industry as a leader of large-scale development projects and as a senior software engineering researcher. He has led major system development projects at IBM Corporation, the U.S. Environmental Protection Agency, the U.S. National Institutes of Health, GlaxoSmithKline plc, Unisys Corporation, and several statistical systems startups. For the past 16 years, he has focused on improving software reliability at Cisco Systems, Inc.


Sunita Chulani is an Advisory Engineer/Senior Manager of Analytical Models and Insights at Cisco Systems. She has deep subject-matter expertise in software metrics, measurement, and modeling, and is responsible for developing insights from descriptive and prescriptive quality data analytics. She combines an understanding of both engineering and management with strong analytical, communication, and leadership skills, and her team’s charter focuses on analytic models and customer/product insights. A go-to expert with a nine-year tenure at Cisco, she holds several patents and has co-authored a book, several book chapters, encyclopedia articles, and more than five dozen papers and presentations at prestigious conferences. She is also very active in IEEE, with a strong influence in the field of software reliability, and has taught graduate-level courses at Carnegie Mellon University. Prior to Cisco, she was a Research Staff Member in the Center for Software Engineering at IBM Research. She received her Ph.D. and Master’s degrees in Computer Science (with an emphasis on statistics/data analysis and software economics) from the University of Southern California.


Previous Tutorial

  • QRS 17 – Prague, Czech Republic, July 25, 2017
