Stop Foolish Rounding/Six Sigma

Six Sigma has been suggested as a methodological solution to foolish rounding in business programs. See Stop Foolish Rounding.

Six Sigma Project Initiation

I think there may be a misconception here about how a Six Sigma project is initiated and how it operates. A project is initiated by a person knowledgeable in the defect area being targeted for improvement, usually the person suffering some pain from that defect. In my project, I was one of the "test people" who performed the contracted system acceptance test, and I was trying to demonstrate to customers systems that simply were not ready for acceptance. Now that was painful.

The team leader requests that a project be formed and provides some justification. I had no problems with that; I was not the only one hurting. Then the team leader forms a team from other knowledgeable people who can contribute in needed ways. As the project progresses, team members may be added as needed. One of the driving forces behind systems not being ready was program managers being held to scheduled events, particularly financial ones like customer acceptance, so I added a program manager to my team. Higher management felt strongly enough about this that they added another program manager. It seems they were hurting, too, because income projections were not being met. This lent our results and conclusions credibility with management.

So one thing that Six Sigma did for us individual contributors was to provide us a path for correcting defective processes. (Part of the Six Sigma activity is to define the process causing the problem; typical problematic processes contain free loops. In my case, the customer was sent home, the system was "readied" again (NOT!), and we attempted to demonstrate the system again: a loop bounded only by when the customer could be negotiated into accepting a system still failing tests, usually with an "engineer-in-a-box".)

Now in the case of your example problem, there is someone inside the company dealing with "+" defects (and don't forget ignored hardware overflow!) who suffers pain from this continuing problem. That person might not even know that the problem (or part of it) is a defective implementation of "+". That person would initiate the Six Sigma project. Note that for Six Sigma to work, there needs to be a company-wide commitment to this activity; if there is not, then there are portions of the company that will not buy into a solution, and the effort will be doomed to failure. One important part of the organization is accounting, for they must weed out the phony cost savings associated with a defect to provide a clear financial picture of what the quality improvement will save the company. The rationale for supporting Six Sigma within a company is that product quality improvements save the company money. Increased sales due to improved quality are not included in the cost assessment.
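
To make the "+" defect concrete, here is a minimal sketch in Python; the names are invented for illustration, and the repair shown (decimal arithmetic with explicit rounding, plus an explicit overflow check for fixed-width integer cents) is one conventional fix, not necessarily the one from Ward's example.

    # The defect: binary floats cannot represent most decimal fractions
    # exactly, so a naive "+" silently loses cents.
    print(0.10 + 0.20)  # 0.30000000000000004, not 0.30

    # One repair: keep currency in Decimal and round at a defined point.
    from decimal import Decimal, ROUND_HALF_EVEN

    def add_currency(a: str, b: str) -> Decimal:
        total = Decimal(a) + Decimal(b)
        return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

    print(add_currency("0.10", "0.20"))  # 0.30

    # And the overflow half of the complaint: with fixed-width integer
    # cents, "+" must be checked explicitly rather than ignored.
    INT32_MAX = 2**31 - 1
    INT32_MIN = -2**31

    def add_cents_checked(a: int, b: int) -> int:
        total = a + b
        if not INT32_MIN <= total <= INT32_MAX:
            raise OverflowError("currency addition overflowed 32-bit cents")
        return total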

Again, I am not really defending Six Sigma as a software quality improvement methodology, because I think the likelihood of universal acceptance within a software development company is minuscule. One reason for that is the quantity of disinformation about Six Sigma. I have pointed out resistance by software developers, and Ward points out resistance to the software development team's suggestions for quality improvement.

Alan Jorgensen 19:32, 13 September 2007 (PDT)

Six Sigma and Software Development

Now, I'm not saying that Six Sigma would be a good idea for software development, and I am asserting that Six Sigma would be a hard, hard sell in the software development community; a large reason for that is the current (non-engineering-oriented) software development culture.

But first, a little background. After Ward posted the above "a + b" example and James Bach posted another comment, I posted a response, also commenting on a problem I had trying to copy and paste from the Yahoo Groups email:

Of course we all use our own internal processes every day and when we do something wrong we sometimes understand what we did and why we did it so that we can avoid doing it again.
Take Ward's "a + b" example. My assumption here is that in some code, somewhere, this particular code failed. Once we know that, we know that the person who injected that code into the implementation did it for some reason. It was part of that person's internal process. Not only that, that coder was following a set of rules that he or she learned somewhere as the right set of rules to learn (i.e., it is always safe to add two values of currency with "+"). But somewhere that person's internal process broke down and a mistake was made. So how can we avoid that particular mistake as a general practice? First of all, we'd need to know who did it. (Not likely.) Next, we'd need to find out why that person made that mistake. (Not likely.) Then we would need to figure out how to establish a rule (or to provide training for an existing rule) so that our entire organization is unlikely to make that mistake again.
These problems are buried in the software development culture. There are some very good efforts to change that culture (see the list of XP rules; my favorite is YAGNI, violation of which is a major source of errors) and to provide a uniform set of rules that everyone can follow. We can argue about quality, but I know of no one who, to their knowledge, respects the quality of the software they are using.
One universal rule of engineering practice: Sign your work.
When the developers at Microsoft tried doing this, what happened and why?
There seems to be a lot of disinformation about the nature of Six Sigma. For instance, it is not about statistical control of inputs to ensure proper outputs; rather, it is about:
  • What defects appear in the product?
  • How did they get there?
  • What can we do to prevent that in the future?
  • How can we ensure that the problem doesn't come back?
This is pretty much Six Sigma in a nutshell.
But I agree with James: it ain't going to happen. Still, as in the TV commercial where the guy asks about all his women friends calling at the same time and taking down the phone system: "So it IS a possibility!"
Alan A. Jorgensen
BTW, I tried to copy Ward's comments into here and I couldn't do it. I'm using Thunderbird. Is that a bug?

Which elicited a question from Michael Bolton:

Now, my question is: what would Six Sigma suggest that we do with this problem?

And that really tripped my trigger, provoking the following rant, designed to provide a detailed explanation of how Six Sigma could be applied to software development problems.

The first step in the Six Sigma process as I learned it is "Define". Just what is the problem, anyway? The starting place for me is the symptom of what I perceive to be a bug. I selected a section of the display. It was highlighted just like any other time I wanted to do a copy/paste operation. I tried Alt->Edit->Copy. Nothing bad seemed to happen. No warning that I hadn't really copied. (No "Garbage In, Apology Out"; if selecting across a portion of two text boxes cannot be copied, or whatever the problem is, I expect to be told that I can't do what I am trying to do. Is that nuts?) When I subsequently did a paste, nothing happened. What did I do wrong? (I think Hendrickson pointed out that that is the first human response to a bug.) I tried CTRL+c. Nothing easy that I knew of worked. In any case, having decided that that is a bug, or in reality the symptom of a bug, somewhere there is code doing the wrong thing.
Maybe in this case it would be the failure to notify the user that this particular selection cannot be copied, or that this string cannot be selected. Let's assume the former. Now, in Six Sigma, there is a problem: defining an "opportunity". There is a lot of flexibility in this area, requiring a certain creativity. Indeed, we are not necessarily counting apples and oranges, but we do need to select something we can get a real handle on. In this example, perhaps we should select the nature of this bug: "notification of an illegal user operation". Please keep in mind that this is only the first step, and later steps may prove our problem definition to be faulty, inadequate, or overly difficult in some way.
Then in the next step, "Measure", we would have to count all of the times that our sample set should notify of an illegal operation and the number of times that it does not. This ratio is our defect rate, which we would like to drive to zero; to be more practical, let's set the limit to a nearly impossible task: 3.4 defects per 1,000,000 opportunities. (Even more realistically, let's simply try to reduce the error rate by 80% on our first attempt and by 50% on each attempt thereafter, until we achieve a defect rate of 0.034 per myriad (‱), one for which there is a meaningful name by which this entire process may be identified.)
As in any process, there is an art to applying it properly. Maybe, in order to make "Measure" easier, we simply want to define an "opportunity" as a function calling a function that returns an error code, where the calling function does not examine or otherwise utilize the returned error code. Now I think that might prove to be a very useful quality statistic. As is often the case in Six Sigma, the problem we end up solving may not even be the one we originally set out to solve, but it is still a valid quality improvement.
A fundamental assumption of Six Sigma is that defects cost real money, and that learning to avoid making the same mistakes produces tax-free income in the form of money that does not need to be spent (cost avoidance). Done properly, the cost of quality is negative, not even counting the customer satisfaction benefits. Other "cost avoidance" techniques: "Let the customer test it for us." "Let's don't fix that bug."
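For concreteness, here is a back-of-the-envelope sketch of that "Measure" arithmetic in Python; the defect counts are invented, and the ignored-return-value count is only a rough proxy for the "unexamined error code" opportunity defined above.

    # DPMO means defects per million opportunities; 3.4 DPMO is the
    # classic Six Sigma target (the same quantity as 0.034 per myriad).
    def dpmo(defects: int, opportunities: int) -> float:
        return defects / opportunities * 1_000_000

    rate = dpmo(defects=412, opportunities=18_500)  # about 22,270 DPMO

    # The staged goal from the text: cut the rate by 80% on the first
    # pass, then by 50% each pass thereafter, until we reach 3.4 DPMO.
    passes = 0
    while rate > 3.4:
        rate *= 0.2 if passes == 0 else 0.5
        passes += 1
    print(passes, "improvement passes to reach", round(rate, 2), "DPMO")

    # A rough proxy for the "unexamined error code" opportunity in
    # Python source: count calls whose return value is simply discarded.
    import ast

    def ignored_call_count(source: str) -> int:
        tree = ast.parse(source)
        return sum(isinstance(node, ast.Expr) and isinstance(node.value, ast.Call)
                   for node in ast.walk(tree))

    sample = "def f():\n    open('log.txt')\n    n = len('abc')\n"
    print(ignored_call_count(sample))  # 1: open()'s result is ignored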
And yes, there is a lot of complicated statistics involved, but I don't need to know why a car works in order to drive one (though it certainly helps, particularly when it doesn't work). Companies implementing Six Sigma have on-call statistics help desks.
Then the next step, "Analyze", is to find all the reasons that we can for the failures: root cause analysis. We can use Pareto analysis, or other techniques, to determine how many, and which, of those causes we need to fix in order to achieve our quality improvement goal. Then we need to settle on fixes for those particular causes.
For instance, maybe the software designer didn't know that he or she was supposed to notify the user when the user attempted to do something the software couldn't do; maybe training could fix that. Maybe the code was there but didn't do its job correctly, or warned in a way the user didn't notice, like commanding a sound when the sound is turned off. There are lots of ways of doing this poorly (as Ward pointed out). Or maybe we simply need to implant the rule: "Every function must return an error indicator, and every function must process every error indicator returned by the functions it calls." Implementing the autonomic requirements makes this rule even more specific in terms of how returned errors must be processed. Having a standard way of doing this makes implementation and auditing much easier. (What? Audit MY work?)
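As a minimal sketch of that Pareto step in Python (the root-cause tallies here are invented for illustration):

    # Pick the smallest set of root causes that covers 80% of the
    # observed failures of the notification bug above.
    from collections import Counter

    failures = Counter({
        "no notification code at all": 46,
        "notification by sound only (sound may be off)": 21,
        "error code returned but ignored by the caller": 18,
        "message shown behind the active window": 9,
        "miscellaneous": 6,
    })

    total = sum(failures.values())
    covered, chosen = 0, []
    for cause, count in failures.most_common():
        chosen.append(cause)
        covered += count
        if covered / total >= 0.80:
            break

    print("fixing", len(chosen), "causes covers",
          round(100 * covered / total), "% of failures")
    for cause in chosen:
        print(" -", cause)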
In any case, the next step is "Implement", where all of the selected fixes are put into place.
The final step, "Control", is management's method of maintaining the fix. It usually involves a policy that "Measure" and "Analyze" continue periodically, to ensure that the quality goal continues to be met, with an action plan for when it does not (like adding training for new hires).
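Boiled down to a sketch (the goal number and the action here are assumptions for illustration), the control policy is just a periodic re-measurement with a tripwire:

    # Periodic "Control" check: re-run Measure and trip the action plan
    # when the defect rate drifts above the goal.
    GOAL_DPMO = 3.4

    def control_check(defects: int, opportunities: int) -> str:
        rate = defects / opportunities * 1_000_000
        if rate > GOAL_DPMO:
            # e.g., schedule refresher training, reopen the Analyze step
            return "ALERT: %.1f DPMO exceeds goal of %.1f" % (rate, GOAL_DPMO)
        return "OK: %.1f DPMO is within the goal" % rate

    print(control_check(defects=2, opportunities=1_000_000))  # OK: 2.0 ...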
Now if your marketing paradigm is to charge customers for fixing bugs, avoiding the creation of bugs is not your goal and Six Sigma is not for you.
Alan A. Jorgensen
Not in a Nutshell
"Garbage In, Apology Out"

This is really great. I have a lot more rants with real ideas about how to do something about them. At least there is this place to vent. Thank you, Ward.

Alan Jorgensen 23:22, 12 September 2007 (PDT)

There are many interesting arguments about the applicability of Six Sigma to the software development process at Sigma Meets Software Development Is off Track.

Alan Jorgensen 00:43, 22 October 2007 (PDT)

Seeking Software Quality Improvement

My recent rant posted on Software-Testing:

There are so many things that could be done to improve software quality. So why is software quality so disrespected? I am surrounded by ordinary people who use computers every day. Every one of them complains about something the computer is doing wrong. I am at my Dad's house. How can I explain to him why his computer experienced the blue screen of death (now filled with tons of white text that is, to him, gibberish)? What reasonable person would read down to the end to see that you have to manually restart the computer? He turns off the machine and walks away, knowing he did something wrong. Just another bad hair day. When he comes back and turns the computer on, lo! Another miracle! But what if he does the same thing wrong again? He errs in thinking that we, the people who make and test his software, have delivered him a quality product and that he just doesn't use it properly.
If software quality improvement is so easy, why isn't it done?
I should think if it were really important to software companies, it would be done.

Yeah. Now here is a root cause analysis problem. Why is software quality such a hard sell?

Alan Jorgensen 00:50, 14 September 2007 (PDT)


