Ken Schwaber and others talk of Empirical Process Control ideas as
being key to understanding Scrum. I think this makes some sense.
Mr. Schwaber got these ideas from Babatunde Ogunnaike and W. Harmon Ray, who wrote the process bible:
Process Dynamics, Modeling, and Control. Big ole book, mainly about chemical processes.
We
are talking about how to build new products. How to get business
results, in the form of new products. Innovation. Some of you don't
even want to call it a 'process'.
But to process geeks, if you
build something in a half-way regular way, then that way that you build
things, even if fairly irregular, is a process. Of a sort.
Ken Schwaber uses this (to the Scrum world) famous quote from Ogunnaike and Ray's book:
“It is typical to adopt the defined (theoretical) modeling approach when the underlying mechanisms by which a process operates are reasonably well understood. When the process is too complicated for the defined approach, the empirical approach is the appropriate choice.”
In very simple terms, we Agile folks take 'defined' to mean 'waterfall', roughly as defined by Dr. Winston Royce's famous 1970 waterfall article, 'Managing the Development of Large Software Systems'.
Two
things must be said about 'waterfall'. First, Dr. Royce defines and
shows lots of feedback loops, and most people, when they speak of
waterfall, do not mean that. Or they mean that those feedback loops are
very weak and work poorly, very poorly.
Second, Dr. Royce calls for
builders to 'build it once, throw it away, and build it again
(correctly).' This is virtually never done in real life, and is
typically not meant when people say 'waterfall.'
And by 'empirical', we take that to mean, very simply, Scrum, as defined (all too briefly) in the
Scrum Guide by Jeff Sutherland and Ken Schwaber.
So,
how do we connect the dots? A later question is: have we connected
them fairly, usefully, and with as much rigor as possible? And a final
question: is there any more light that 'process control' ideas can give
us, to enable us to do our innovation/new product development work
better?
***
Here are what I think of as the basics of process
control, as applicable and useful to understanding Scrum better. And
the two basic methods or approaches (defined vs empirical).
This
is what I think I have been told over the years. By several different
people. In fact, I may be adding to and subtracting from what others have said to me.
It is a very simple theory or set of ideas. Even though it is simple, it is still (IMO) useful.
There
is also a lot it does not 'explain.' Although perhaps one could start
to use these basic concepts to discuss these other things or issues.
In the simplest model, process control consists of a flow across three 'elements'.
1. Inputs
2. The black box ('the process')
3. Output
If
1 and 2 are both 'in control' and highly reliable, then 3 is likely to
be reliable. Ceteris paribus. These are the conditions for a defined
(waterfall) approach.
At the other 'end', if both 1 and 2 are 'out
of control' or unreliable, then 3 is, by the definition of this model,
unreliable. (Unless there is some other element that magically makes it
reliable.) These are the conditions for an empirical approach.
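If it helps to see that as toy code, here is a minimal sketch in Python. The function and flag names are all mine, purely for illustration; they come from nobody's book.

```python
def choose_approach(inputs_reliable: bool, process_reliable: bool) -> str:
    """Pick a control approach from the reliability of elements 1 and 2."""
    if inputs_reliable and process_reliable:
        # 1 and 2 both 'in control': 3 is likely reliable, ceteris paribus.
        return "defined (waterfall)"
    if not inputs_reliable and not process_reliable:
        # 1 and 2 both unreliable: 3 is unreliable by definition of the model.
        return "empirical (Scrum)"
    # Only one of 1 and 2 unreliable: the simple model is silent here
    # (see question 1 near the end).
    return "unclear"

print(choose_approach(True, True))    # defined (waterfall)
print(choose_approach(False, False))  # empirical (Scrum)
```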
What does the empirical approach mean?
a. We inspect 3 often (this is only common sense -- when 3 is
unreliable one naturally wants to inspect it more often; one needs to),
with the 'best possible' eyes, expecting it often, even usually, not to
be what we want.
b. When 3 is not what we want, if we can, we pull
it back to the beginning, and run it through again. And try to 'adapt'
either 1 or 2, to make them temporarily more reliable. And pray that 3
is or becomes -- after we run it through again -- actually what we want.
We
are assuming for now that 3 (the widget) can usefully be run through again.
Of course, this may not always be the case. For software, rerunning is
easy. For some physical products, it may be a bad option.
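Putting a and b together, the empirical loop has roughly this shape. Again, just a sketch: the names, the 'tuning' knob, and the numbers are all invented, under the assumption that rerunning is cheap (as it is for software).

```python
import random

def black_box(inputs, tuning):
    """Element 2, 'the process': unreliable, so the output varies run to run."""
    defect = random.random() > tuning          # defects less likely as tuning improves
    return {"widget": inputs, "defect": defect}

def inspect(output):
    """Step a: inspect 3 often, with the 'best possible' eyes."""
    return not output["defect"]

def empirical_run(inputs, max_attempts=10):
    """Step b: when 3 is not what we want, adapt 1 or 2 and run it through again."""
    tuning = 0.3                               # how 'in control' element 2 is
    for attempt in range(1, max_attempts + 1):
        output = black_box(inputs, tuning)
        if inspect(output):
            return output, attempt
        tuning = min(1.0, tuning + 0.2)        # 'adapt' element 2 a bit, then pray
    return None, max_attempts                  # we never got what we wanted

widget, attempts = empirical_run("a new feature")
print(widget, "after", attempts, "attempt(s)")
```

The only point of the sketch is the shape of the loop: inspect every run, adapt a little, run it through again.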
Now,
the empirical approach is terrible, obviously. If one (the 'God' of the
process) had any sense at all, one would change things so that 1 and 2
were both highly reliable. And then 3 would become reliable. That is,
one would change things so that one could use the defined approach.
But what we are saying is that, with new product innovation done by human beings, we never have 1 or 2 in a reliable state.
We are not God, and there is too much stuff hitting the fan. From every direction.
So, while we might control the inputs and the black box for 3 minutes,
after about 3 minutes things are unreliable again. Sadly. So we are
always stuck using an empirical process.
But at least we understand the process we do have.
***
Personally, I consider human beings highly unreliable inputs to any
process. We humans often compare ourselves to machines, but in fact we
are highly unreliable. Now, innovation and creativity are 'the
unexpected'. So, in innovation 'unreliability' is actually a good
thing. So, humans aren't that bad after all.
This very simple
theory or paradigm seems very real and accurate to me. It makes sense,
to me, of the mess that we are in. The tar pit, as Fred Brooks calls it.
I
think this is basically what Tunde Ogunnaike and Harmon Ray meant. At
least for us. When they compared 'defined' and 'empirical'. But, I
will ask them.
***
Now, some questions that this very simple theory does not answer.
1. What if only 1 or 2 is 'unreliable'? (And the other one is reliable.)
2. How unreliable does 1 or 2 need to be before one uses an empirical
approach (as we are calling it)? One imagines that, at very low levels of
'variation' in 1 and 2, the 'defined' approach would still work, or be
better.
(As a practical matter for software development, I find that 1 and 2 are so 'out of control' that this question becomes moot.)
3. How do you know there is not another 'element'?
4. What if you can't adapt 'enough' (on 1 or 2)?
(Logically, in the simple case, one stops working or trying to produce
anything, since 3 is highly likely to be wrong. Unless one can live
with totally random success.)