The perils of a schedule, part II

In the first part of this post, I started to answer a reader’s question about what information you need before you estimate a project and build a schedule. The reader, Wayne, said that he didn’t “get a solid sense of the relative timing of the activities (especially the requirements activity),” because it wasn’t clear how much information you need to know about the project before you get started. One thing that Jenny and I come back to again and again is that there is no single “best,” one-size-fits-all way of running a project. A schedule is a great tool for planning a project, but you have to actually take a close look at what you know about your project before you start building a schedule. And you need to come to grips with the reality that what you know today could easily change. Even if you have a perfect understanding of today’s needs (which, in reality, never actually happens), that doesn’t mean the world won’t change and your users won’t need different software tomorrow.

Don’t get me wrong: I do think schedules are great tools for planning. You can use a schedule to organize the effort and manage your project’s dependencies. And you can use it to communicate with your team. A schedule’s a great way to get a lot of difficult-to-manage information down on paper so you can see it all in one place. There have been many times over the years when it was only after I had a schedule all sorted out that I could see the project clearly. That’s when I could realize, “Hey, we can save time by doing these two things concurrently,” or, “Uh-oh, we’ve got the riskiest stuff on our critical path. That’s not a good idea!” That’s how a schedule can be a great tool for understanding your work.
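If you want to see what I mean about critical paths, here’s a minimal sketch in Python. The task names, durations, and dependencies are all made up for illustration; the point is just that once the dependencies are written down, the longest chain through them falls right out, and that chain is what actually drives your end date.

# A minimal sketch: given tasks with durations (in days) and dependencies,
# find the longest dependency chain -- the critical path. All task names and
# durations here are hypothetical.

tasks = {
    # task: (duration_in_days, [tasks it depends on])
    "requirements": (5, []),
    "design":       (8, ["requirements"]),
    "build_ui":     (10, ["design"]),
    "build_api":    (12, ["design"]),      # can run concurrently with build_ui
    "integrate":    (4, ["build_ui", "build_api"]),
    "test":         (6, ["integrate"]),
}

def critical_path(tasks):
    """Return (total_days, path) for the longest dependency chain."""
    memo = {}

    def finish(name):
        if name not in memo:
            duration, deps = tasks[name]
            best_days, best_path = max(
                (finish(d) for d in deps), default=(0, []), key=lambda x: x[0])
            memo[name] = (best_days + duration, best_path + [name])
        return memo[name]

    return max((finish(t) for t in tasks), key=lambda x: x[0])

days, path = critical_path(tasks)
print(f"Critical path ({days} days): {' -> '.join(path)}")
# Prints: Critical path (35 days): requirements -> design -> build_api -> integrate -> test

Laid out like that, it’s obvious that speeding up build_ui buys you nothing, while any slip in build_api slips the whole project.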

But really, while those are important aspects of a project schedule, they aren’t the main way that we use schedules.

That'll light a fire under their asses...

Think about what happens when you give a schedule to someone. If that person’s on your team, they’ll probably groan – maybe not out loud to you if you’re the boss, but we all groan a little inside when someone hands us a schedule that we need to meet. Just like a contractor who doesn’t really care whether the renovation on your house takes six weeks or eight weeks, your team doesn’t really care how long their work takes, as long as they have enough time to do it and don’t have to work nights and weekends to scramble to meet an unrealistic deadline. (Obviously, teams take pride in working quickly, but let’s be realistic here.)

But if you show a schedule to someone who’s not on your team, that schedule makes them happy. They’re generally relieved to see it, because now they know more about when you’re delivering the software. But it’s not the whole schedule they care about. Most of the time, when you hand a schedule to a client, a user or a manager at your company, they see one thing: the deadline. Which you just committed to.

And that’s the real nature of the schedule. Your project’s schedule contains a list of everything that you know you have to do – and it’s your way of telling the rest of the world that you’re committed to doing every single item on that list by a certain date. A schedule isn’t really about getting technical input from your team, or about planning out the work. Those things are nice side-effects of building a schedule, but there are tools that you can use to do those things that don’t involve committing to a date.

No, a schedule, at its core, is really about making commitments to other people. Schedules aren’t just there to be followed. They’re there to represent the real-life commitments that you made to other people. If you meet every commitment you made but go entirely off plan, your project will still be successful. But if you “work the plan” in perfect, excruciating detail and still manage to break the commitments that you made – even if it’s because of changes you couldn’t control – your project will be a failure. And that’s the power a schedule brings to your project. Like any tool, it can be used for good or for ill.

That’s the perilous aspect of building a schedule: as soon as you commit yourself to it, you’ve introduced potential negative consequences that weren’t there before you put dates down on paper. (No wonder programmers are so reluctant to give estimates!)

Project schedules… not for the commitment-phobic

Jenny and I do a lot of speaking, and when we do we often find ourselves bringing up the idea that the point of any document is to communicate. Let’s say you’re my client, and we’ve got a requirements specification for a piece of software that I’m building for you. The specification itself, the words printed on paper, that’s not important. What’s important is that what’s in my head matches what’s in your head, that the software I’m planning on building is as close as possible to the software that you’re expecting me to deliver. It just so happens that a software requirements specification is a great tool for making sure that what’s in your head matches what’s in mine.

But the document does something else, too. Once we both have the same understanding, writing it down in a specification and agreeing on it means that we both made a commitment. I made a commitment to build the software that’s described in the document. But just as importantly, you’re making a commitment to me: that if I deliver software that meets that specification, you’ll accept it as complete. If you have changes, that’s fine. We just need to update the specification so that it has those changes.

(Oh, and just in case I didn’t make it clear, that “specification” could be a stack of index cards with user stories written on them, and we could make those updates every week or even every day, if that’s what the business needs.)

A schedule works the same way. If we write down and agree on a schedule, that means I promised to give you a certain set of deliverables on certain dates, and you promised to accept them.

At this point, someone who’s studied for the PMP exam might bring up “progressive elaboration,” which reflects the idea that a team can’t know everything about the project they’re working on at the very beginning. We don’t know everything about how the software will be built and tested when we’re still working on the design, and we don’t know everything about the design when we’re working on requirements. When we get to the next checkpoint we may realize that our earlier estimates were wrong, or that our whole approach was wrong. If we’re lucky, we’ve put together a team that accepts this as a basic reality, and plans all work in iterations that deliver complete, working software at the end of each iteration. (And yes, if you’re studying for the PMP exam, you do need to know about iteration!)

But can you see how, even with all of that, it still revolves around commitments?

That’s my point. A schedule is first and foremost a tool for managing your commitments, and only after that is it a tool for actually planning the work. (For a distant third, it’s a record of how the project turned out that you can use to generate metrics.) But the big point is that the schedule doesn’t commit you. Your commitments commit you. The schedule just keeps your commitments on paper in one place.

Now, while all of this may sound negative, it’s not. A good software team that can meet their commitments gains trust from their users, clients and stakeholders. If you’ve got a reputation for making commitments and sticking to them, you’ve got something really powerful. You’ve got the trust of the people you depend on to drive your project forward. And that’s where the schedule can be a really positive thing. To your users, it represents stable software they can depend on. To your team, it represents normal days without crazy pressure, without working late nights or weekends. When you take your commitments seriously, your schedule represents the truth about your project at any given point, and people come to depend on it.

I want to finish off by excerpting a section from “Applied Software Project Management,” because I think it cuts to the core of the point I’m trying to make about schedules and commitments, and how you can use them effectively.

Use the Schedule to Manage Commitments

A project schedule represents a commitment by the team to perform a set of tasks. When the project manager adds a task to the schedule and it’s agreed upon by the team, the person who is assigned to that task now has a commitment to complete it by the task’s due date. Senior managers feel that they can depend on the schedule as an accurate forecast of how the project is going to go—when the schedule slips, it’s treated as an exception, and an explanation is required. For this reason, the schedule is a powerful tool for commitment management.

One common complaint among project managers attempting to improve the way their organizations build software is that the changes they make don’t take root. Typically, the project manager will call a meeting to announce a new tool or technique—he may ask the team to start performing code reviews, for example—only to find that the team does not actually perform the reviews when building the software. Things that seem like a good idea in a meeting often fail to “stick” in practice.

This is where the schedule is a very valuable tool. By adding tasks to the schedule that represent the actual improvements that need to be made—for example, by scheduling all of the review meetings—the project manager has a much better chance of gaining a real commitment from the team.

If the team does not feel comfortable making a commitment to the new practice, the disagreement will come up during the schedule review. Typically, when a project team member disagrees with implementing a new tool or technique, he does not bring it up during the meeting where it’s introduced. Instead, he will simply fail to use it, and build the software as he has on past projects. This is usually justified with an explanation that there isn’t enough time, and that implementing the change will make the task late.

By explicitly adding a task to the schedule, the project manager ensures that enough time is built in to account for the change. This cements the change into the project plan, and makes it clear up front that the team is expected to adopt the practice. More importantly, it is a good consensus-building tool because it allows team members to bring up the new practice when they review the project plan. By putting the change out in the open, the project manager encourages real discussion of it, and is given a chance to explain the reason for the practice during the review meetings. If the practice makes it past the review, then the project manager ends up with a real commitment from the team to adopt the new practice.

— Stellman & Greene, Applied Software Project Management, chapter 4 (O’Reilly, 2005)

I hope this helps explain how a schedule can be used to help you and your team manage your projects more effectively and build better software.

The perils of a schedule

The walls are closing in

We got this e-mail a few days ago from one of our readers:

Hello,

I bought your book, “Applied Software Project Management.” It seems very good overall, but I can’t get past the fact that your book seems to imply that software requirements come after the project plan/WBS/scheduling. Am I missing something?

On page 40, the script for estimating states that the input is documentation that defines the scope of the work being performed. Does this include the SRS? If so, why is this not made more explicit in your book (since requirements play such a big role)? If not, how can a good estimate and schedule be generated before the requirements analysis has been performed?

I don’t get a solid sense of the relative timing of the activities (especially the requirements activity). Can you comment on this?

Thanks!!

— Wayne M.

That’s an excellent question. I’ve got a straightforward answer, and I’ve got a more involved answer.

The straightforward answer is yes. If you’re using Wideband Delphi (or, really, almost any estimation practice) to come up with estimates that you’ll turn into a schedule, then you need to get a handle on exactly what you’re estimating. So yes, when we say in our book that the input to the process is the “Vision and Scope document, or other documentation that defines the scope of the work product being estimated,” the “other documentation” we’re referring to definitely includes any software requirements you have. (For any readers who haven’t read our book, you can download a PDF of the estimation chapter that this reader’s referring to.)

Let me be clear about something here. You’re absolutely right that requirements analysis leads to more accurate schedules. If you’re lucky enough to have a really detailed specification at the beginning of the project that describes all of the software that you’re going to build, then that will give you a much more accurate schedule than if you had a three-page Vision and Scope document that simply lets you know who the users and stakeholders are, explains their needs, and gives you the broad strokes about what features the team will build to meet those needs.

But when’s the last time you actually had the luxury of a complete specification before you had to deliver a schedule? And I’m stressing the word “complete” for a reason: it’s very rare that you’re done with the requirements before you’ve started building the software. So rare, in fact, that neither Jenny nor I have ever seen it in our entire careers, and I suspect very few (if any) of our readers have, either.

A good Agile developer might point out that this is the reasoning behind one of the core Agile principles: “Welcome changing requirements, even late in development.” And, in fact, that’s exactly why we dedicated so much of the chapter on software requirements in Applied Software Project Management to change control, because it’s important to not only accept that change happens, but to recognize it as a good thing. It’s better to change course partway through the project rather than to trundle on to an end goal that you know won’t actually meet your users’ needs or make your customers happy.

So with that in mind, go back to the process you mentioned. Specifically, take a look at the end of the script, because this is where it ties directly into the question you asked:

Exit Criteria: The project plan has been updated to reflect the impact of the change, and work to implement the change has begun.

There’s an important idea there in those first six words: the project plan has been updated. That means that any time your world changes, you need to go back and update the scope, the WBS, the schedule, all the actual stuff you plan to deliver (which might be written down in a statement of work), the list of people doing the work, a risk plan (if you built one)… all the stuff you used to plan your project.

…because there’s no single Best Way™ to build software

And that’s why we didn’t give an explicit order to the activities. Sometimes you’ll end up planning out your project and building a schedule before you do requirements analysis; sometimes you’ll build a schedule after. Our goal was to help you do it well in either case. And by not forcing our readers into a single process or methodology, we don’t have to pretend to know all the answers… because, as far as I know, there is no single Best Way™ to build software. That’s the main idea, by the way, behind our “diagnose and fix” approach to improving the way you build software. Trying to overhaul your whole software process by doing a major process improvement effort is hard; adopting specific practices that make incremental improvements to the areas that hurt the most is much easier and a lot less risky.

This may sound like we’re calling for a lot of documentation, but it doesn’t have to be like that. Obviously, if you’re working on a team with dozens or even hundreds of people (like the teams Jenny often leads), this can be a pretty big task. But if you’ve got a small team working on a project that will take a few weeks to do, then this may just amount to rearranging your task board, updating your user story cards, updating a couple of Wiki pages, and having a quick stand-up meeting to make sure everyone’s in sync. That’s why people often talk about how running a really big team is like steering an aircraft carrier, because changing course requires miles of water, while running a small team is a lot more like piloting a speedboat that can change course really quickly.

So that’s the straightforward answer. But I think it’s worth delving a little deeper and asking an important question: What’s the nature of a schedule? I know, that probably seems like an odd question, but it’s actually important to understand what schedules are and how they’re used. (Technically, it’s only important if you want your projects to run well.)

A naïve answer might be to simply defer to that old project management chestnut: “Plan the work and work the plan.” If you haven’t heard that saying, take a minute and do a Google search for it. It’s one of those sayings that people love to quote, and it pretty much summarizes how a lot of people use (abuse?) project schedules.

I don’t like that saying, and there’s a reason for that. I mean, don’t get me wrong here. “Working the plan” is fine if that plan accounts for changes. That’s one thing I really like about the PMBOK® and PMP approach: a whole lot of project planning revolves around how to handle changes, and specifically about dealing with change control. Also, it’s a great idea to make sure that you include a reserve for your project, and you can use risk planning to try to get a handle on the unknown.

To be continued in “The perils of a schedule, part II”

Taking stock of a failed project

Oops?

Some projects just go wrong.

It’s a fact of life. Projects go over budget, blow their schedules, squander their resources. Sometimes they go off the rails so spectacularly that there’s nothing you can do except (literally) pick up the pieces and try to learn whatever lessons you can so you don’t repeat the failure in the future.

Last week I got a phone call from a developer who was looking for some advice about exactly that. He’s being brought in to repair the damage from a disastrous software project. Apparently the project completely failed to deliver. I wasn’t 100% clear on the details—neither was he, since he’s just being brought in now—but it sounded like the final product was so utterly unusable that the company was simply scrapping the whole thing and starting over. This particular developer knows a lot about project management, and even teaches a project management course for other developers in his company. He’d heard me do a talk about project failure, and wanted to know if I had any advice, and maybe a postmortem report template or a lessons learned template.

I definitely had some advice for him, and I wanted to share it with you. Postmortem reports (reports you put together at the end of the project after taking stock of what went right and wrong) are an enormously valuable tool for any software team.

But first, let’s take a minute to talk about a bridge in the Pacific Northwest.

The tragic tale of Galloping Gertie

One of my favorite failed project case studies is Galloping Gertie, which was the nickname that nearby residents gave to the Tacoma Narrows Bridge. Jenny and I talk about it in our “Why Projects Fail” talk because it’s a great project failure example—and not just because it failed so spectacularly. It’s because the root causes for this particular project failure should sound really familiar to a lot of project managers, and especially to people who build software.

The Tacoma Narrows Bridge opened to the public on July 1, 1940. This photo was taken on November 7 of the same year:

Galloping Gertie

While there were no human casualties, the bridge disaster claimed the life of a cocker spaniel named Tubby, despite heroic attempts at a rescue.

Jenny and I showed a video of the bridge collapsing during a presentation of our “Why Projects Fail” talk a while back in Boston. After the talk, a woman came up to us and introduced herself as a civil engineer. She gave us a detailed explanation of the structural problems in the bridge. Apparently it’s one of the classic civil engineering project failure case studies: there were aerodynamic problems, there were structural problems due to the size of the supports, and there were other problems that combined to cause the resonance that gave the bridge its distinctive “gallop.”

(Video: the Tacoma Narrows Bridge collapse.)

But one of the most important lessons we took away from the bridge collapse isn’t technical. It has to do with the designer.

[A]ccording to Eldridge, “eastern consulting engineers” petitioned the PWA and the Reconstruction Finance Corporation (RFC) to build the bridge for less, by which Eldridge meant the renowned New York bridge engineer Leon Moisseiff, designer and consultant engineer of the Golden Gate Bridge. Moisseiff proposed shallower supports—girders 8 feet (2.4 m) deep. His approach meant a slimmer, more elegant design and reduced construction costs compared to the Highway Department design. Moisseiff’s design won out, inasmuch as the other proposal was considered to be too expensive. On June 23, 1938, the PWA approved nearly $6 million for the Tacoma Narrows Bridge. Another $1.6 million was to be collected from tolls to cover the total $8 million cost.

(Source: Wikipedia)

Think back over your own career for a minute. Have you ever seen someone making a stupid, possibly even disastrous decision? Did you warn people around you about it until you were blue in the face, only to be ignored? Did your warnings turn out to be exactly true?

Well, from what I’ve read, that’s exactly what happened to Galloping Gertie. There was plenty of warning from many people in the civil engineering community who didn’t think this design would work. But these warnings were dismissed. After all, this was designed by the guy who designed the Golden Gate Bridge! With credentials like that, how could he possibly be wrong? And who are you, without those credentials, to question him? The pointy-haired bosses and bean counters won out. Predictably, their victory was temporary.

Incidentally, some people refer to this as one kind of halo effect: a person’s past accomplishments give others undue confidence in his performance at a different job, whether or not he’s actually doing it well. It’s a nasty little problem, and it’s a really common root cause of project failure, especially on software projects. I’ve lost count of the number of times I’ve encountered really terrible source code written by a programmer who’s been referred to by his coworkers as a “superstar.” Every time it happens, I think of the Tacoma Narrows Bridge.

But there’s a bigger lesson to learn from the disaster. When you look at the various root causes—problematic design, cocky designer, improper materials—one thing is pretty clear. The Tacoma Narrows Bridge was a failure before the first yard of concrete was poured. Failure was designed into the blueprints and materials, and even the most perfect construction would fail if it used them.

Learning from project failures

This leads me back to the original question I was asked by that developer: how do you take stock of a failed project? (Or any project, for that matter!)

If you want to gain valuable experience from investigating a project—especially a failed one—it’s really important that you write down the lessons you learned from it. That shouldn’t be a surprise. If you want to do better software project planning tomorrow, you need to document your lessons learned today. You can think of a postmortem report as a kind of “lessons learned report” that helps you document exactly what happened on the project so you can avoid making the same missteps in the future.

So how do we take stock of a project that went wrong? How do we find root causes? How do we come up with ways to prevent this kind of problem in the future?

The first step is talking to your stakeholders… all of them. As many as you can find. You need to find everyone who was affected by the project, anyone who may have an informed opinion, and figure out what they know. This can be a surprisingly difficult thing to do, especially when you’re looking back at your own project. If people were unhappy (and people often are, even when the final product was nearly perfect), they’ll give you an earful.

This makes your life more difficult, because it’s hard to be objective when someone’s leveling criticisms at you (especially if they’re right!). But if you want to get the best information, it’s really important not to get defensive. You never know who will give you really valuable feedback until you ask them, and it often comes from the most unexpected places. As developers, we have a habit of dismissing users and business people because they don’t understand all of the technical details of the work we do. But you might be surprised at how much your users actually understand about what went wrong—and even if they don’t, you’ll often find that listening to them today can help make them more friendly and willing to listen to you in the future.

Talking to people is really important, and having discussions is a great way to get people thinking about what went wrong. But most effective postmortem project reviews involve some sort of survey or checklist that lets you get written feedback from everyone involved in or affected by the project. Jenny and I have a section on building postmortem reports in our first book, Applied Software Project Management, that has a bunch of possible postmortem survey questions:

  • Were the tasks divided well among the team?
  • Were the right people assigned to each task?
  • Were the reviews effective?
  • Was each work product useful in the later phases of the project?
  • Did the software meet the needs described in the vision and scope document?
  • Did the stakeholders and users have enough input into the software?
  • Were there too many changes?
  • How complete is each feature?
  • How useful is each feature?
  • Have the users received the software?
  • How is the user experience with the software?
  • Are there usability or performance issues?
  • Are there problems installing or configuring the software?
  • Were the initial deadlines set for the project reasonable?
  • How well was the overall project planned?
  • Were there risks that could have been foreseen but were not planned for?
  • Was the software produced in a timely manner?
  • Was the software of sufficient quality?
  • Do you have any suggestions for how we can improve for our next project?

We definitely recommend using a survey where the questions are grouped together and each question is scored, so that you can start your postmortem report with an overview that shows the answers in a chart. (If you’re looking for a kind of “lessons learned template,” this is a really good start.)

Postmortem survey results

The rest of the report delves into each individual section, pulling out specific (anonymous) answers that people wrote down or told you. Here’s an example:

Beta
Was the beta test effective in heading off problems before clients found them?
Score: 2.28 out of 5 (12 Negative [1 to 2], 13 Neutral [3], 9 Positive [4 to 5])
All of the comments we got about the beta were negative, and only 26% (9 of 34) of the survey respondents felt that the beta exceeded their expectations. The general perception was that many obvious defects were not caught in the beta. Suggestions for improvement included lengthening the beta, expanding it to more client sites, and ensuring that the software was used as if it were in production.
Individual comments:

  • I feel like Versions 2.0 and 2.1 could have been in the beta field longer so that we might have discovered the accounting bugs before many of the clients did.
  • We need to have a more in-depth beta test in the future. Had the duration of the beta been longer, we would have caught more problems and headed them off before they became critical situations at the client site.
  • I think that a lot of problems that were encountered were found after the beta, during the actual start of the release. Shortly thereafter, things were ironed out.
  • Overall, the release has gone well. I just feel that we missed something in the beta test, particularly the performance issues we are experiencing in our Denver and Chicago branches. In the future, we can expand the beta to more sites.

(Source: Applied Software Project Management, Stellman & Greene 2005)
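Incidentally, the arithmetic behind a summary line like the one above is simple enough to automate. Here’s a minimal sketch in Python; the function name and the sample responses are hypothetical, not from the book, but they show how the 1-to-5 answers get tallied into the score and the Negative/Neutral/Positive counts.

# A minimal sketch (hypothetical data) of tallying scored survey answers into
# the kind of summary line shown in the example above.

def summarize(question, responses):
    """responses: integer scores from 1 (worst) to 5 (best) for one question."""
    negative = sum(1 for r in responses if r <= 2)
    neutral = sum(1 for r in responses if r == 3)
    positive = sum(1 for r in responses if r >= 4)
    score = sum(responses) / len(responses)
    return (f"{question}\n"
            f"Score: {score:.2f} out of 5 "
            f"({negative} Negative [1 to 2], {neutral} Neutral [3], "
            f"{positive} Positive [4 to 5])")

# Made-up answers for one question:
responses = [1, 2, 3, 4, 2, 3, 5, 1, 3, 2, 4, 3, 2, 1, 3, 4]
print(summarize("Was the beta test effective in heading off problems?", responses))
# The score line comes out to:
# Score: 2.69 out of 5 (7 Negative [1 to 2], 5 Neutral [3], 4 Positive [4 to 5])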

There’s another approach to coming up with postmortem survey results that I think can be really useful. Jenny and I have spent the last few years learning a lot about the PMBOK® Guide, since that’s what the PMP exam is based on. If you’ve studied for the PMP exam, one thing you learned is that you need to document lessons learned throughout the entire project.

The exam takes this really seriously: you’ll actually see a lot of PMP exam questions about lessons learned, and understanding where lessons learned come from is really important for PMP exam preparation.

The PMBOK® Guide categorizes the activities on a project into knowledge areas. Since there are lessons learned in every area of the project, those categories (the knowledge area definitions) give you a useful way to approach them:

  • How well you executed the project and managed changes throughout (what the PMBOK® Guide calls “Integration Management”)
  • The scope, both product scope (the features you built) and project scope (the work the team planned to do)
  • How well you stayed within your schedule or if you had serious scheduling problems
  • Whether or not budget was tight, and if that had an effect on the decisions made during the project
  • What steps you took to ensure the quality of the software
  • How you managed the people on the team
  • Whether communication—especially with stakeholders—was effective
  • How well risks were understood and managed throughout the project
  • If you worked with consultants, whether the buyer-seller relationship had an impact on the project

For each of these areas, you should ask a few basic questions:

  1. How well did we plan? (Did we plan for this at all?)
  2. Were there any unexpected changes? How well did we handle them?
  3. Did the scope (or schedule, or staff, or our understanding of risks, etc.) look the same at the end of the project as it did at the beginning? If not, why not?
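If it helps, here’s a minimal sketch of what that can look like as a fill-in-the-blanks structure in Python, with the knowledge areas and the three questions baked in. The area labels just paraphrase the list above, and the sample answer is made up; this is one way to keep the answers organized while you’re interviewing stakeholders, not something the PMBOK® Guide prescribes.

# A minimal sketch: a reusable lessons-learned template keyed by knowledge area,
# where every area gets the same three questions. The sample answer is made up.

AREAS = [
    "Integration (execution and change management)",
    "Scope (product and project)",
    "Schedule",
    "Cost",
    "Quality",
    "Human resources",
    "Communications",
    "Risk",
    "Procurement (buyer-seller relationships)",
]

QUESTIONS = [
    "How well did we plan? (Did we plan for this at all?)",
    "Were there any unexpected changes? How well did we handle them?",
    "Did this look the same at the end as at the beginning? If not, why not?",
]

def blank_template():
    """Return a dict you can fill in while talking to stakeholders."""
    return {area: {q: "" for q in QUESTIONS} for area in AREAS}

lessons = blank_template()
lessons["Schedule"][QUESTIONS[1]] = "Two milestones slipped after a week-6 scope change."

for area, answers in lessons.items():
    print(area)
    for question, answer in answers.items():
        print(f"  {question}\n    -> {answer or '(to be filled in)'}")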

If you can get that information from your stakeholders and write it down in a way that’s meaningful and that you can come back to in the future, you’ll be in really good shape to learn the lessons you need to learn from any project. Even a failed one.