Yegor Bugayenko
26 December 2017

The Formula for Software Quality

How do you define the quality of a software product? There is definitely an intrinsic emotional component to it, which means satisfaction for the user, willingness to pay, appreciation, positive attitude, and all that. However, if we put emotions aside, how can we really measure it? The IEEE says that quality is the degree to which a product meets its requirements or user expectations. But what is the formula? Can we say that it satisfies requirements and expectations to, say, 73%?

Here is the formula and the logic I'm suggesting.

As we know, any software product has an unlimited number of bugs. Some of them are discovered and fixed by the development team; let's call them F. Some of them are discovered by the end users; let's call them U. Thus, the total number of bugs we are aware of, out of an infinity of them, is F+U.

Obviously, the smaller U is, the higher the quality. Ideally, U should be zero, which would mean that users don't see any bugs at all. How can we achieve that if the total number of bugs is infinite? The only possible way is to increase F, hoping that U will decrease as a result.

Thus, the quality of a product can be measured as:

Q = F / (F + U)

We simply divide the number of bugs found by the team by the total number of bugs known. Thus, the more bugs we manage to find before our users see them, the higher the quality.

A quality of 100% means that no bugs are found by the users. A quality of 0% means that all bugs are found by them.
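As a sketch, the ratio can be computed directly (the function name and the sample numbers here are mine, for illustration only):

```python
def quality(found_by_team, found_by_users):
    """Share of known bugs caught by the team before users saw them."""
    total = found_by_team + found_by_users
    if total == 0:
        return 1.0  # no known bugs at all; nothing escaped to users
    return found_by_team / total

# If the team found 90 bugs and users reported 10:
print(quality(90, 10))  # 0.9, i.e. 90% quality
```

Note the edge case: when no bugs are known at all, the ratio is undefined, so the sketch treats it as 100%, since no bug has escaped to a user.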

Does it make sense?

P.S. It seems that I'm not the inventor of the formula. Here is a quote from Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing (2009) by Rex Black, page 109: "A common metric of test team effectiveness measures whether the test team manages to find a sizeable majority of the bugs prior to release. The production or customer bugs are sometimes called test escapes. The implication is that your test team missed these problems but could reasonably have detected them during test execution. You can quantify this metric as follows:" The formula that follows in the book is the same ratio: bugs found by the test team divided by the total of bugs found by the team and by customers.

P.P.S. Here is another similar metric, from Capers Jones, "Software Defect Removal Efficiency," Computer, Volume 29, Issue 4, 1996: "Serious software quality control involves measurement of defect removal efficiency (DRE). Defect removal efficiency is the percentage of defects found and repaired prior to release. In principle the measurement of DRE is simple. Keep records of all defects found during development. After a fixed period of 90 days, add customer-reported defects to internal defects and calculate the efficiency of internal removal. If the development team found 90 defects and customers reported 10 defects, then DRE is of course 90%."