Having a culture of continuous improvement means you are constantly experimenting with your processes. A regular retrospective meeting is an important part of that. It’s an opportunity for the team to inspect themselves and create a plan for improvements. In these meetings the team decide on “action points”: things that individuals or the team as a whole need to do. These action points might be fairly trivial, like buying stationery, or they might be significant changes to the way the team works. They are possibly the most important outcome of a retrospective, and a key mechanism for a team to clear obstacles and get better.

Action points aren’t enough though, at least according to my team at a recent retrospective. We decided to formalise our experiments and add more structure. Let’s look at an example:

A team wants to test working with a sprint goal. They collectively produce something that looks like this:

Sprint Goal
 
Evaluate
    2019/08/14 - 2019/09/11  (4 weeks)

What problem are we trying to solve, or what do we want to improve, and why?
     The team is working on a lot of things in parallel, spending a lot of time context switching ...
     ... the team decided on this experiment as a way to reduce work in progress,
     improve collaboration, and shorten lead times.

The thing we will try in order to (hopefully) improve.
     Sprint goal each iteration. Negotiated between the Product Owner and the Development Team... (etc)
     
What are our criteria for knowing if the experiment was successful?
    We experience less context switching, deliver more, and deliver what is most important.
    The product feels the same. We also feel more engaged.

Okay, but we already have action points to improve our process. Why is this useful? Is it even useful, or is it just unnecessary admin for our new ideas? To tell the truth, we didn’t know. So we conducted an experiment on having experiments. This is what we found:

Formalised experiments are useful.

(TL;DR)

More specifically:

They shone a light on process improvement. I find teams can very often become exclusively focused on their short-term obstacles rather than long-term improvements. Having a special “experiment” entity can nudge a team towards a culture of always trying something new. We haven’t lacked an ongoing experiment since we started doing them. But if at some point we do, I hope the team, thinking of themselves as a team that experiments, will notice the absence.

They made it clear upfront that our exciting ideas for improvement might be complete failures. They provided us with criteria for judging success, and a time to either accept the new idea more permanently into our way of working or reject it. I feel this cemented an evidence-based approach, with discussion staying focused on what we were originally trying to achieve. Ideas that didn’t help us reach the goals we articulated at the start of the experiment weren’t ones we kept.

Because of that evidence-based approach, we made an explicit contract with the more sceptical (or change-averse) members of the team. We always need their full commitment to a change, regardless of their individual doubts. That’s a much easier thing to ask for if the change is for a fixed duration and we have a clear time earmarked to reject the idea if it turns out to be rubbish. Interestingly, while any improving team with some kind of feedback loop is capable of reversing bad changes, not adopting untested ideas “officially” into our “permanent” way of working seems to make people more comfortable with change.

So it appears formalised experiments worked for us.

Maybe they’ll work for you. But of course every team is different. I wouldn’t recommend just assuming they’re a great idea. I mean… maybe run an experiment?