Saturday, June 27, 2009

Metrics & Velocity

I have received a few comments, both recently and in the past, that tell me some people are uncomfortable measuring velocity. And they are uncomfortable measuring the Team.

They are usually not that clear why they are uncomfortable.

Let me state my position, which I believe is also close to the position of Jeff Sutherland and Ken Schwaber.

First, as a memory device, I say: "Velocity: Don't leave home without it."

Second, any decent Team wants to know if they are really successful.

Third, the Team must measure velocity and aggressively try to improve it. Doubling velocity in the first 6 months should be a goal. In Scrum, the larger goal is to get the Team to be 5x - 10x more productive than the average Team. (Good data tell us that the average is about 2 Function Points per man-month.) Scrum does not guarantee that every team will get to 5x-10x. But none will if they don't go for it.

Improving velocity means removing the top impediments, one at a time. It does NOT mean working harder. In fact, often one of the top impediments is that we are already working too many hours per week. (To some, this will seem a paradox. Explanation another time.)

How do we use velocity? Many ways, but I will emphasize three. (1) In planning, to plan the release, for example. (2) To push back with a pattern of numbers when magical-thinking managers ask the Team to double their velocity in one Sprint. (3) To challenge ourselves, as a Team, to get impediments removed so we can enjoy some real success around here. (And often we have to ask managers and even senior management to get involved with the impediment removal.)
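As an aside, here is a sketch of what that "pattern of numbers" might look like, in a few lines of Python. The velocities are invented for illustration; the point is that the Team's own history, not an argument, does the pushing back.

```python
# A minimal sketch, with invented numbers, of using velocity history to push back.

velocities = [18, 21, 19, 22, 20]  # story points done-done in the last 5 Sprints

average = sum(velocities) / len(velocities)
best = max(velocities)
print(f"Average velocity: {average:.1f} points/Sprint; best Sprint so far: {best}")

# If someone asks for 40 points next Sprint, the numbers frame the conversation:
asked_for = 40
print(f"{asked_for} points is {asked_for / average:.1f}x our average. "
      "Which impediment will be removed to make that realistic?")
```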

What are the push-backs that I hear?

Several. This post is getting long enough that I won't state them all here.

But the cartoon represents one of the major ones, I think. People are concerned that we are putting human lives in the hands of some stupid bean counter (as we say in the South "bless his little heart"). Certainly not a happy thought.

So, a few assertions about metrics (not time here to discuss them):
* the Team does the metrics themselves, honestly because they want to use the numbers
* there should be no "managing from behind the desk" (as Lean would say)
* velocity does not reflect one single factor, but the result of all factors
* when the Team evaluates velocity, they use human judgment (Ex: "the velocity dip last Sprint was mainly due to Vikas being out sick 4 days; he's fine now")
* people want to see clearly and show that they are successful
* velocity is not supposed to be a tool for Dilbert managers to beat up the Team with
* while every metric will eventually be gamed (eg, due to Dilbert managers), these issues are part of the larger imperative of honesty and transparency in Scrum. Occasional gaming is not a reason to never do any metrics
* Velocity is not the only important metric

Thursday, June 25, 2009

Fun & Success - Learn Scrum - Durham and Montreal

Fun & Success? In the same sentence?

"You are doing Scrum right if and only if you are having fun. Serious fun."

"You are doing Scrum right if and only if you are having clear success."

How can these both be true? Pushing through success is so stressful. Fun is light-hearted, like laughter.

Well, this is one of the paradoxes of Agile that we explore in the Certified ScrumMaster course. It is only a 2-day course; it is doubtful one could be a true "master" of anything in 2 days. But we do promise you will learn a lot (more than you can possibly take on board) in 2 days. (Ok, 3 days if you include the Team Start-up workshop.)

Oh, about the Dilbert cartoon. Seriously, we recommend that you have at least some training before starting Agile. The whole team, in fact, is what we recommend (sounds self-serving, I know, but that is the best recommendation). On the fun side, we love to hate Dilbert managers as much as the next guy, but some of those managers actually drink beer and eat pizza like normal people. Who knew they could actually help remove impediments? And lead us toward a more fun life with more success.

For almost everyone, Scrum is a serious paradigm shift. Get ready. (Koan: If you think you have made the shift, you haven't.)

And get ready to have some fun. (Yes, even the course is mostly fun, with, for example, a bunch of exercises and a PG-rated Richard Pryor joke. We leave no stone unturned to help you flip those bits in your wetware. Wetware is the hardest thing to refactor.)

For Durham (Jun 30 - Jul 2), see here and here.

For Montreal (July 8-9), see here.

We have other courses on the same site.

Wednesday, June 24, 2009

Completing a Release

OK, so we have a known velocity in Story Points. And, having that, it is an exercise for a 6-year-old to figure out how many more sprints until the release.

Example: We have a velocity of 20 and the stories in the backlog for this release have a total of 100 story points, so QED, we have 5 sprints remaining until we can release.

[QED is from my old school days, meaning "which was to be proved".]
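And for the literal-minded, the same arithmetic as a tiny Python sketch (the numbers are just the example's; math.ceil handles the case where the points don't divide evenly):

```python
import math

velocity = 20           # story points done-done per Sprint
remaining_points = 100  # story points left in the release backlog

sprints_remaining = math.ceil(remaining_points / velocity)
print(sprints_remaining)  # 5
```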

Fine, for a shark, a simple project manager, or a 6-year-old.

What's the problem, you say?

In real life, we need to be cleverer than a shark.

It takes a clever, determined Product Owner (in Scrum terms) to land the release.

We know from long experience that the Product Backlog will (or should) change. New features will be discovered; the customer will "know it when he sees it" (a law of software "requirements"). And "stuff will happen" such that the current known velocity will change.

Most importantly, the PO (Product Owner) will be executing the Pareto Rule, which says that 80% of the value comes from 20% of the stories (maybe better to say 20% of the story points). Maybe not a perfect 80-20 rule, but all those stories slated for the release are NOT required.
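To make the Pareto idea a bit more concrete, here is a small sketch: rank candidate stories by estimated value per story point and see how few points might cover most of the value. The stories and numbers below are invented for illustration; a real PO uses judgment and feedback, not just a sort.

```python
# A hypothetical sketch of the Pareto Rule applied to a release backlog.

stories = [  # (name, estimated business value, story points) -- all invented
    ("login", 40, 3), ("search", 55, 8), ("export", 10, 5),
    ("reports", 30, 13), ("admin screens", 8, 8), ("help pages", 5, 5),
]

total_value = sum(value for _, value, _ in stories)
total_points = sum(points for _, _, points in stories)

chosen, value_so_far, points_so_far = [], 0, 0
for name, value, points in sorted(stories, key=lambda s: s[1] / s[2], reverse=True):
    if value_so_far >= 0.8 * total_value:
        break
    chosen.append(name)
    value_so_far += value
    points_so_far += points

print(f"{chosen} deliver {value_so_far} of {total_value} value units "
      f"for {points_so_far} of {total_points} points")
```

In this invented backlog, a bit over half the points deliver roughly 85% of the value - the PO's opening for a smaller, earlier release.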

Side note: What can happen to velocity? First, it should improve as we remove impediments. Second, "stuff happens" and the foreseeable problems (which we refused to foresee) happen. And the completely unexpected happens with known regularity (and perhaps some unknown frequency as well).

Let me emphasize again: The PO should dynamically be looking at the product as it grows to determine the Minimum Marketable Feature Set (MMFS) to release. This is a very dynamic process of discovery. Or should be. When you are creating something for the first time, there is always plenty to learn. (Or, if you waited for the "perfection" of the so-called requirements, you probably waited way too long.)

For a given product, we hope there will never come a day when we are finished improving it. When all the stories are done. We are always discovering what customers want most NOW. Customers always want more (although the "more" that they want is often less...example: less complexity).

Monday, June 22, 2009

Recommended Reading - June 2009

We have a list of recommended books, here.

In addition, we can recommend the following:

A Sense of Urgency by John Kotter.

Fearless Change: Patterns for Introducing New Ideas by Mary Lynn Manns and Linda Rising.

Toyota Production System: Beyond Large-Scale Production by Taiichi Ohno.

Taiichi Ohno's Workplace Management

The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition) by Frederick Brooks. One of his famous quotes: "How does a project get one year late? One day at a time."

Fit for Developing Software: Framework for Integrated Tests (Robert C. Martin Series) by Mugridge and Cunningham.

Continuous Integration: Improving Software Quality and Reducing Risk (Addison-Wesley Signature Series) by Duvall, Matyas, and Glover.

Agile Retrospectives: Making Good Teams Great by Esther Derby and Diana Larsen.

Test Driven Development: By Example (Addison-Wesley Signature Series) by Kent Beck.

Working Effectively with Legacy Code (Robert C. Martin Series) by Michael Feathers.

The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation by Nonaka and Takeuchi.

Good to Great: Why Some Companies Make the Leap... and Others Don't by Jim Collins.

Software by Numbers: Low-Risk, High-Return Development by Mark Denne and Jane Cleland-Huang.

The Five Dysfunctions of a Team: A Leadership Fable by Patrick Lencioni.

Comment: I have given links to Amazon, which has some benefits. There is certainly no obligation to buy from Amazon.

Suggestion: Some of these books are technical (in one area or another) and some are more about people. Mix and match. Consider what you need to learn. Consider what you are most ready to learn. And don't do all your thinking up in the sky. Quickly see how much you have really learned by putting your ideas into action.

Breaking the world record


My younger daughter has her last swim meet of the season tonight. I am excited (and still a bit affected by Father's Day yesterday).

When I talk about Agile & Scrum & Lean, I often refer to Michael Phelps' attitude. Not his attitude in SC, whatever you may think of that. (Not that I begrudge him some relaxation.) But his attitude about swimming. He broke the world record before the Olympics, broke it in the first heat, broke it again in the second heat, and intended to break it again in the finals. He is relentless.

We ordinary humans must take the same attitude.

Just about now, your colleagues would be encouraged by seeing you break your own best record.

Just about now, the other teams would be encouraged by seeing your team break its own best record.

So, what do we mean practically?

Well, first, we mean sustainable pace. We mean that we will break records in our new product development work, not by working harder, but by working more creatively. By creating knowledge -- faster, better, with more certainty, and more power.

Second, we will admit that half of what we know is wrong. (Cf Taiichi Ohno in "Workplace Management".)

Third, we will double the team's velocity. In 6 months or less.

Doubling velocity (story points done-done in each sprint) usually means we must improve several things:
* a clearer definition of done (or "done, done" if you prefer). Usually we let this be too vague. It must vary for some stories, but for most SW dev stories it must be very clear and can be consistent. And in my opinion, it must mean "no [known] bugs escape the Sprint". And testing must include at least functional testing.
* we must measure velocity. I still can't believe how many teams I find that don't have some measure of velocity. More on this next time. For now: "Velocity: don't leave home without it."
* we must prioritize the impediments, and keep removing or reducing the top one until velocity is doubled. Hint: We might want to prioritize the impediments by how much the removal/reduction will increase velocity. 25% here, 30% there; pretty soon you're talking a real increase in velocity (see the sketch after this list).
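Here is the sketch: a few lines showing how those removals compound. The percentages are hypothetical estimates of the gain each impediment's removal might give; yours will differ.

```python
# A sketch, with invented percentages, of how impediment removals compound.

starting_velocity = 20.0
estimated_gains = [0.25, 0.30, 0.15, 0.10]  # top impediments, ranked by expected gain

velocity = starting_velocity
for gain in estimated_gains:
    velocity *= (1 + gain)

print(f"{starting_velocity:.0f} -> {velocity:.1f} points/Sprint "
      f"({velocity / starting_velocity:.2f}x)")
# 20 * 1.25 * 1.30 * 1.15 * 1.10 is about 41 -- a bit more than double.
# Put differently: doubling in ~12 Sprints needs only about 6% per Sprint, compounded.
```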

Hint: Improving quality and reducing technical debt are almost always important keys to seriously increased velocity. Not the only keys, but very important.

Who is gonna feel better when the Team doubles velocity (with sustainable pace)? Yes, the Team, perhaps first and most importantly. And customers. And managers. And the widows and orphans who own the company.

Is velocity the only metric in town? Ok, good question, but for another day. Increase velocity now. Show yourself you can do that professionally. Then we talk.

"But, things are so good around here, we can't possibly double velocity." Ummm. My first thought is that your biggest impediment is that people are hiding from the truth. Every place I look, we are using such a small percentage of the potential of people, that doubling the velocity is a task any team can accomplish. Look again, and take Michael Phelps' attitude.

If you really think you can't get any better, declare yourselves the best team in the world, write up your success, and challenge other teams, anywhere, to beat you. You might just learn a thing or two. And have some serious fun.

Thursday, June 11, 2009

What is Business Value Engineering?

I made a post in the AgileBusiness Yahoo group that I thought I would repeat here:

QUOTE
I have been asked to start a conversation about BV Engineering. So, here's a
start...

What is it?

It is a framework for looking at the delivery of business value. It is called BV Engineering for two reasons. First, instead of hand-waving, we believe that BV Engineering should include quantitative measures (although not be dominated by metrics). Second, the name signals that it should be approached like the other engineering practices in Scrum: the practices are not prescribed, except to say that one must always have them, identify them, and improve them. And BV Engineering becomes one of those engineering practices.

Where do we start?

We start with a grossly simplistic framework that says: we have
* a box of customers (external and internal typically),
* a box of Business (customer facing people, internal groups like legal and compliance, and perhaps others), and
* a Team (eg, the Scrum team that will produce or improve one product for those customers).

We also start with an assumption that BV Engineering is a round-trip set of experiments that are continuously trying to prove whether our hypotheses related to BV Engineering are useful. Or, more accurately, it is a feedback loop that continuously shows us how far off our hypotheses are. (And since stuff is happening all the time, we are always at least somewhat off.)

And what do we do next?

Next, based on that simple framework, one does a simple drawing, as with Value Stream Mapping, and describes what BV Engineering is in your specific context.

The flow between all the people is diagrammed. The assumptions and hypotheses are described. The business value model is described. We lay out the current state.

Why?

So that, being visible, we can all see it, and make suggestions, little tests, for improving it. Constantly. That is, so we can continuously move toward a future state.

So, what kinds of things are you including?

Well, anything that helps or hinders us in delivering stellar business value to our customers instantly, at a lower price, solving exactly their problem (or fulfilling their need) with no adverse side-effects. And that fulfills all our constraints (eg, a good return to capital, etc, etc).

The approach also works if you make the simplifying assumption that "we only want to make money". As Drucker would say, not the correct basis for doing business, but one that some adhere to.

Back to the things. These things include, or might include: communication (who, what, how, when, etc), gathering requirements, the BV model and its assumptions, frequency of release, feedback from the release, how much we do the telephone game, who needs to understand the customer, the role of tacit and explicit knowledge (about what?), how and where we do knowledge creation, how we balance customer needs with legal/regulatory needs, how we do portfolio management, how we start and kill efforts, the Kano model, prioritizing across multiple customers (or customer sets), priority poker, value stream mapping, personas, use cases, etc, etc, etc.

For example, the use of any one of these things has one or more assumptions tied to it. One hypothesis is that these assumptions have often never been articulated, much less challenged, and even less have they been confirmed as accurate for our specific situation.

Two more observations, each fundamental. As soon as one sees this as a feedback loop (or PDCA cycle) that is trying to prove whether our theories and practices are on target, one immediately asks about the frequency of feedback. And almost always it is not fast enough (GM anyone?). And, second, one then looks at this not as a static model, but as a dynamic model that is always adapting to change. So, one asks "how are we building into our BV Engineering appropriate mechanisms for it to be continuously adjusting in a useful way?"

Lastly, let me add a "personal bias" (which I find empirically true), namely:
virtually every team member needs to understand how we do BV Engineering in our specific situation, and where they fit into the process.

Well, that's a start.
Comments or questions?
UNQUOTE
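One small, hypothetical illustration of "laying out the current state" described above: write the hypotheses down in a form plain enough that anyone can read and challenge them. The structure and the examples below are entirely invented; your context will have its own hypotheses, tests, and numbers.

```python
# A hypothetical sketch of making BV hypotheses visible. All examples are invented.

bv_hypotheses = [
    # (hypothesis, how we plan to test it, result so far)
    ("Feature X saves each user 10 minutes a day", "time 20 users before/after", "untested"),
    ("Monthly releases are frequent enough", "count requests that miss a release", "7 missed last month"),
    ("Release 2 cuts support calls by 30%", "compare call volumes", "down 12% so far"),
]

for hypothesis, test, result in bv_hypotheses:
    print(f"- {hypothesis}\n    test: {test}\n    so far: {result}")
```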

Your comments, here or on AgileBusiness, would be welcome.