November 21, 2014

Make Stories Small When You Have “Wicked” Problems

By Johanna Rothman

If you read my Three Alternatives for Making Smaller Stories, you noticed one thing. In each of those examples, the problem was in the teams’ ability to show progress and create interim steps. But what about when you have a “wicked” problem, when you don’t know if you can create the answer?

If you are a project manager, you might be familiar with the idea of “wicked” problems from the book Wicked Problems, Righteous Solutions: A Catalog of Modern Software Engineering Paradigms. If you are a designer/architect/developer, you might be familiar with the term from Rebecca Wirfs-Brock’s book, Object Design: Roles, Responsibilities, and Collaborations.

You see problems like this in new product development, in research, and in design engineering. You see it when you have to do exploratory design, where no one has ever done something like this before.

Your problem requires innovation. Maybe your problem requires discussion with your customer or your fellow designers. You need consensus on what is a proper design.

When I taught agile to a group of analog chip designers, they created landing zones: they kept making tradeoffs against the timebox for the entire project, so they could produce the best possible design in the time they had available.

If you have a wicked problem, you have plenty of risks. What do you do with a risky project?

  1. Staff the project with the best people you can find. In the past, I have used a particular kind of “generalizing specialist”: testers who wrote code, and developers who were also architects. These are not people you pick off the street. These are people who are—excuse me—awesome at their jobs. They are not interchangeable with other people. They have significant domain expertise in how to solve the problem. That means they understand how to write code and test.
  2. Help those generalizing specialists learn how to ask questions at frequent points in the project. In my inch-pebble article, I said that with a research project, you use questions to discover what you need to know. The key is to make those questions small enough that you can show progress every few days, or at least once a week. Everyone in the project needs to build trust. You build trust by delivering. The project team builds trust by delivering answers, even if they don’t deliver working code.
  3. Always plan to replan. The question is how often. I like replanning often. Back in 1999, in my Reflections newsletter (the predecessor to the Pragmatic Manager), I wrote an article about wicked projects and how to manage their risks.
  4. Help the managers stop micromanaging. The job of a project manager is to remove obstacles for the team. The job of a manager is to support the team. Either of those managers might help the team generate new questions to ask each week. Neither has the job of asking “when will you be done with this?” See Adam Yuret’s article The Self-Abuse of Sprint Commitment.

Now, in return, the team solving this wicked problem owes the organization an update every week or, at most, every two weeks about what they are doing. That update needs to be a demo. If it’s not a demo, they need to show something else. If they can’t show anything on an agile project, I would want to know why.

Sometimes, they can’t show a demo. Why? Because they encountered a Big Hairy Problem.

Here’s an example. I suffer from vertigo due to loss of (at least) one semi-circular canal in my inner ear. My otoneurologist is one of the top guys in the world. He’s working on an implantable gyroscope. When I started seeing him four years ago, he said the device would be available in “five more years.”

Every year he said that. Finally, I couldn’t take it anymore. Two years ago, I said, “I’m a project manager. If you really want to make progress, start asking questions each week, not each year. You won’t like the fact that it will make your project look like it’s taking longer, but you’ll make more progress.” He admitted last year that he took my advice. He thinks they are down to four years and they are making more rapid progress.

I understand that a team might not get the answers it expects in a given week. What I want to see from each week is some form of deliverable: a demo, answers to a question or set of questions, or the fact that we learned something and generated more questions. If I, as a project manager/program manager, don’t see one of those three outcomes, I wonder if the team is running open loop.

I’m fine with any one of those three outcomes. They provide me value. We can decide what to do with any of them. The team still has my trust. I can provide information to management, because we are still either delivering or learning. Either of those provides value. (Do you see how a demo, answers, or more questions provide those outcomes? Sometimes, you even get production-quality code.)

Why do questions work? The questions work like tests. They help you see where you need to go. Because you, my readers, work in software, you can use code and tests to explore much more rapidly than my otoneurologist can. He has to develop a prototype, test it in the lab, and then work with animals, which makes everything take longer.

Even if you have hardware or mechanical devices or firmware, I bet you simulate first. You can ask the questions you need answers to each week. Then, you answer those questions.

Here are some projects I’ve worked on in the past like this:

  • Coding the inner loop of an FFT in microcode. I knew how to write the inner loop. I didn’t know if the other instructions I was also writing would make the inner loop faster or slower. (This was in 1979 or so.)
  • Lighting a printed circuit board for a machine vision inspection application. We had no idea how long it would take to find the right lighting. We had no idea what algorithm we would need. The lighting and algorithm were interdependent. (This was in 1984.)
  • With clients, I’ve coached teams working on firmware for a variety of applications. We knew the footprint the teams had to achieve and the dates that the organizations wanted to release. The teams had no idea if they were trying to push past the laws of physics. I helped the team generate questions each week to direct their efforts and see if they were stuck or making progress.
  • I used the same approach when I coached an enterprise architect for a very large IT organization. He represented a multi-thousand-person IT organization that wanted to revamp its entire architecture. I certainly don’t know architecture. I do know how to make projects succeed, and that’s what he needed. He used the questions to drive the projects.

The questions are like your tests. You take a scientific approach, asking yourself, “What questions do I need to answer this week?” You have a big question. You break that question down into smaller questions, one or two that you can answer (you hope) this week. You explore like crazy, using the people who can help you explore.
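If your wicked problem lives in software, this week’s question can often be written down as an executable check. Here is a minimal sketch in Python; the question, the prototype parser, and the threshold are all invented for illustration, not taken from any real project.

```python
# A sketch of phrasing one small question as an executable check.
# Big question: "Can we process a full day of data fast enough?"
# This week's smaller question: can a throwaway prototype handle
# 100,000 synthetic records in under a second?

import time


def prototype_parse(lines):
    # Hypothetical prototype; the real implementation is what we are exploring.
    return [line.split(",") for line in lines if line.strip()]


def answer_this_weeks_question():
    lines = [f"2014-11-21,INFO,event-{i}" for i in range(100_000)]

    start = time.perf_counter()
    parsed = prototype_parse(lines)
    elapsed = time.perf_counter() - start

    assert len(parsed) == 100_000
    print(f"Answer: parsed {len(parsed)} records in {elapsed:.3f} seconds")


answer_this_weeks_question()
```

Whether the check passes or fails, the week ends with one of the three outcomes above: an answer, or something learned plus a new, smaller question.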

Exploratory design is tricky. You can make it agile, too. Don’t assume that the rest of your project can wait for your big breakthrough. Use questions like your tests. Make progress every day.

I thank Rebecca Wirfs-Brock for her review of this post. Any remaining mistakes are mine.

November 21, 2014 02:10 PM

November 19, 2014

Three Alternatives for Making Smaller Stories

By Johanna Rothman

When I was in Israel a couple of weeks ago teaching workshops, one of the big problems people had was large stories. Why was this a problem? If your stories are large, you can’t show progress, and more importantly, you can’t change.

For me, the point of agile is the transparency—hey, look at what we’ve done!—and the ability to change. You can change the items in the backlog for the next iteration if you are working in iterations. You can change the project portfolio. You can change the features. But you can’t change anything if you continue to drag on and on and on for a given feature. You’re not transparent if you keep developing a feature. You become a black hole.

Managers start to ask, “What are you guys doing? When will you be done? How much will this feature cost?” Do you see why you need to estimate more if the feature is large? Of course, the larger the feature, the more you need to estimate and the more difficult it is to estimate well.

Pawel Brodzinski said this quite well last year at the Agile conference, with his favorite estimation scale: anything other than a size of 1 meant the story was either too big or the team had no clue.

The reason Pawel and I and many other people like very small stories—a size of 1—is that you deliver something every day or more often. You have transparency. You don’t invest a ton of work without getting feedback on it.

The people I met a couple of weeks ago felt (and were) stuck. One guy was doing intricate SQL queries. He thought that there was no value until the entire query was done. Nope, that’s where he was incorrect. There is value in interim results. Why? How else would you debug the query? How else would you discover if you had the database set up correctly for product performance?

I suggested that every single atomic transaction was a valuable piece, and that the way to build small stories was to split this hairy SQL statement at the atomic transactions. I bet there are other ways, but that was a good start. He got that aha look, so I am sure he will think of other ways.
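To make that concrete, here is a sketch of the idea in Python with SQLite. The schema, data, and queries are invented stand-ins, not the participant’s actual SQL; the point is that each atomic piece gets checked on its own before the pieces are composed into the big query.

```python
# Sketch: break one "big honking" query into atomic pieces and verify
# each interim result before composing the full statement.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    CREATE TABLE customers (id INTEGER, region TEXT);
    INSERT INTO orders VALUES (1, 1, 50.0), (2, 1, 25.0), (3, 2, 10.0);
    INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC');
""")

# Piece 1: do the per-customer totals look right on their own?
totals = conn.execute(
    "SELECT customer_id, SUM(total) FROM orders "
    "GROUP BY customer_id ORDER BY customer_id"
).fetchall()
assert totals == [(1, 75.0), (2, 10.0)]

# Piece 2: does the join to regions behave before we layer on more?
by_region = conn.execute(
    "SELECT c.region, SUM(o.total) FROM orders o "
    "JOIN customers c ON o.customer_id = c.id "
    "GROUP BY c.region ORDER BY c.region"
).fetchall()
assert by_region == [("APAC", 10.0), ("EMEA", 75.0)]

print("Interim results check out; now compose the full query.")
```

Each assertion is a small, showable result, and it doubles as a debugging aid when the composed query misbehaves.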

Another guy was doing algorithm development. Now, I know one issue with algorithm development is you have to keep testing performance or reliability or something else when you do the development. Otherwise, you fall off the deep end. You have an algorithm tuned for one aspect of the system, but not another one. The way I’ve done this in the past is to support algorithm development with a variety of tests.

This is the testing continuum from Manage It! Your Guide to Modern, Pragmatic Project Management. See the unit and component testing parts? If you do algorithm development, you need to test each piece of the algorithm—the inner loop, the next outer loop, repeat for each loop—with some sort of unit test, then a component test, then as a feature. And you can do system-level testing for the algorithm itself.

Back when I tested machine vision systems, I was the system tester for an algorithm we wanted to go “faster.” I created the golden master tests and measured the performance. I gave my tests to the developers. Then, as they changed the inner loops, they created their own unit tests. (No, we were not smart enough to do test-driven development. You can be.) I helped create the component-level tests for the next-level-up tests. We could run each new potential algorithm against the golden master and see if the new algorithm was better or not.
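A golden-master harness along those lines might look like this sketch. The algorithm and the cases here are stand-ins, not the machine-vision code: capture known-good outputs once, then run every new candidate against them and watch both correctness and speed.

```python
# Sketch of a golden-master harness with a stand-in algorithm.

import time


def candidate_algorithm(values):
    # Swap in each new "faster" attempt here.
    return sorted(values)


# The golden master: inputs paired with known-good expected outputs,
# captured from the trusted implementation.
GOLDEN_CASES = [
    ([3, 1, 2], [1, 2, 3]),
    ([5, 5, 0], [0, 5, 5]),
    (list(range(1000, 0, -1)), list(range(1, 1001))),
]


def run_golden_master():
    start = time.perf_counter()
    for inputs, expected in GOLDEN_CASES:
        actual = candidate_algorithm(inputs)
        assert actual == expected, f"Mismatch for input starting {inputs[:5]}"
    elapsed = time.perf_counter() - start
    print(f"All {len(GOLDEN_CASES)} golden cases pass in {elapsed:.4f} seconds")


run_golden_master()
```

The unit and component tests tell you whether each loop still works; the golden master tells you whether the “faster” algorithm is still the same algorithm.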

I realize that you don’t have a product until everything works. This is like saying in math that you don’t have an answer until you have finished the entire calculation. And you are allowed—in fact, I encourage you—to show your interim work. How else can you know if you are making progress?

Another participant said that he was special. (Each and every one of you is special. Don’t you know that by now??) He was doing firmware development. I asked if he simulated the firmware before he downloaded to the device. “Of course!” he said. “So, simulate in smaller batches,” I suggested. He got that far-off look. You know that look, the one that says, “Why didn’t I think of that?”

He didn’t think of it because it requires changes to their simulator. He’s not an idiot. Their simulator is built for an entire system, not small batches. The simulator assumes waterfall, not agile. They have some technical debt there.
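In code, the shift he needed looks roughly like this sketch. The Simulator class and its interface are imagined, since his real simulator only accepted a whole system image: run one small batch, check it, show it, then run the next.

```python
# Sketch: simulate in small batches with a checkpoint after each one,
# instead of one monolithic end-to-end run. The Simulator class and
# its interface are hypothetical stand-ins for a real firmware simulator.


class Simulator:
    def __init__(self):
        self.cycles = 0
        self.errors = []

    def run_module(self, module_name):
        # Imagine loading and exercising one firmware module here.
        self.cycles += 100
        return self.errors


def simulate_in_batches(modules):
    sim = Simulator()
    for module in modules:
        errors = sim.run_module(module)
        # The small-batch payoff: a demo-able checkpoint per module,
        # not one pass/fail answer at the very end of the project.
        assert not errors, f"{module} failed simulation: {errors}"
        print(f"{module}: OK after {sim.cycles} simulated cycles")


simulate_in_batches(["boot", "comms", "sensor_driver"])
```

Paying down that simulator debt is what turns “simulate in smaller batches” from advice into something the team can actually do.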

Here are the three ways, in case you weren’t clear:

  1. Use atomic transactions as a way to show value when you have a big honking transaction. Use tests for each atomic transaction to support your work and understand if you have the correct performance on each transaction.
  2. Break apart algorithm development, as in “show your work.” Support your algorithm development with tests, especially if you have loops.
  3. Simulate in small batches when you have hardware or firmware. Use tests to support your work.

You want to deliver value in your projects. Short stories allow you to do this. Long stories stop your momentum. The longer your project, and the more teams (if you work on a program), the more you need to keep your stories short. Try these alternatives.

Do you have other scenarios I haven’t discussed? Ask away in the comments.

This turned into a two-parter. Read Make Stories Small When You Have “Wicked” Problems.

November 19, 2014 06:02 PM

November 18, 2014

Have You Signed Up for the Conscious Software Development Telesummit?

By Johanna Rothman

Do you know about the Conscious Software Development Telesummit? Michael Smith is interviewing more than 20 experts about all aspects of software development, project management, and project portfolio management. He’s releasing the interviews in chunks, so you can listen and not lose work time. Isn’t that smart of him?

If you haven’t signed up yet, do it now. You get access to all of the interviews, recordings, and transcripts for all the speakers. That’s the Conscious Software Development Telesummit. Because you should make conscious decisions about what to do for your software projects.

November 18, 2014 06:14 PM