Monday, November 21, 2011

Plan like there's no tomorrow

A mistake I see a lot when I'm coaching clients and managers on developing release and iteration plans is delaying some of the most valuable stories until the second or third iteration. This is in a course project with only four iterations to begin with!

I think it arises from being used to creating schedules to fill the time allotted to the project. You know you can't do everything at once, so you take the pile of valuable stuff and spread it out. With enough repetition of the MVP mantra, they get the idea that they need to think small and high value, but that just reduces the size of the pile, not the urge to spread things out.

So I tell them "plan like there's no tomorrow."  Assume the project has been cancelled. This is your last iteration. What do you put in it to get the most value you can in this final turn? If you really take seriously the idea that this is all you get to do, you start creating an iteration plan that takes thin slices of value from those later iterations. Pack in all the goodies that your developers are willing to commit to, just not as complete and fancy as you'd hoped those goodies would be.

The danger of planning for the future is trusting that the future will still be there. So plan like there isn't a future.

Thursday, October 6, 2011

Sometimes it's not what you slice, but who

I push slices a lot in my lectures on development. Whether it's for determining a minimum viable product or breaking things down into testable releases, you can't slice too thin. "Elephant carpaccio," as Cockburn says.

But slicing what you build is not the only place to look. Sometimes, it's important to think about slicing your target users. I was just hearing about a project rollout in crisis at a large enterprise because of the number and variety of users who need to be updated. It's basically your classic update of the base-level enterprise operating system, along with all the attendant applications that affects.

Mass rollouts are famous for, if not failure, at least a lot of grief and midnight oil expenditures. If it hurts when you do that, don't do that! Slice the rollout. But just as you have to think about how to slice a product, so that you don't slice out the important part, so too you need to think about how to slice your initial targets. Doing it division by division or region by region is as bad as, if not worse than, a mass rollout. The first division in line gets all the bugs and missteps, often becomes disconnected from the rest of the company, and rightly feels like a lab rat.

Instead, look for the small audience that would be OK with glitches in the change process. What group really, really wants this change, or what group is pretty tech-savvy and hence likely to find workarounds and give useful reports on what's broken?

Don't stop with one slice. "We had a pilot rollout, now we're going live everywhere." Pilot groups are never representative of the total audience. Even if your initial slice was not a tech-savvy group, they knew they were in an experiment and almost certainly gave you the benefit of the doubt. That's not going to be true of everyone else.

Instead, you need to keep slicing. There are many criteria for the next slice. It might be the next group who most wants the change. It might be the group your first group interacts with the most. It might be the group that's most concerned about the change. Why that last group? Because at this point you can still afford to give them the extra attention and handholding they need. Plus you find out what the worst-case reactions and snafus are.

Wednesday, September 7, 2011

Designing CS101: From Causes to Learning Obstacles

Let's recap the process of designing a learning experience for introducing computer science to non-CS students. What mistake is the target audience making that they care about? I chose "many people are missing out on a satisfying career." Why do they make this mistake? There are many causes, especially sociological ones, but I chose "people don't get to see what computer scientists actually do." They may take a programming course, often with "exciting" assignments like printing an amortization table. They may see someone in a movie or TV show rapidly typing cryptic text on a computer screen. They may get to take a course on programming games or robots. Unfortunately, none of these are accurate depictions of what we spend our days doing in CS, namely coming up with new ideas for old problems, or even better, new problems. For me, the neat part of CS is that I get to come up with ideas that may change the world. I can do it with surprisingly small resources, compared to other transformational disciplines like medical research.

So now the question is to identify why a learning experience is needed. You don't need a course for walking, ordering food in a restaurant, going to school, playing videogames, etc. You learn these and many other things by watching other people do them, and/or by trying to do them yourself and learning from failure. We only need courses when this natural process doesn't work. That might be because there's no one to watch, e.g., a master violinist practicing techniques, or because learning from failure is not an option, e.g., brain surgery, or for a handful of other reasons.

So why don't people learn what computer scientists really do naturally? I think it's pretty clear that it's because there's no opportunity. Most people don't mix with computer scientists doing their job. TV doesn't show it because there's nothing to watch. An intro programming course doesn't help, because programming, while critically important, is not the career. You don't get people to understand being a doctor or medical researcher by making them take an anatomy course!

So I'm going to answer the third question with this: "people don't get to see what CS is really like simply because there's no opportunity to be exposed to people doing CS."

Monday, July 11, 2011

Fear of failure

Jonathan Rasmussen, in "Love it when you're proven wrong," argues for the benefits of being proven wrong. As a strong advocate of learning from failure, I can't argue with the benefits.

Furthermore, fear of failure is a big problem in agile teams. It occurred in 17 of 17 organizations surveyed in People over Process (PDF) by Conboy et al. in the July/August 2011 IEEE Software. [One of the best such survey articles I've seen, by the way. Seek it out for that and for the other eight challenges they discovered.]

But I don't think seeing the learning benefits addresses the real problem. The big negatives of being wrong are social -- embarrassment, loss of face, loss of future ability to influence decisions, etc. The individual cognitive gains of learning from failure are swamped by the perceived social costs.

That's why to enable learning from failure in my courses, I have most learning occur in private extended homework activities and one-on-one in-depth critiquing exchanges.

Getting team members to embrace individual failure without negative consequences is really hard. If only we could get developer teams to emulate improv groups. In a profile in the Northwestern alumni magazine, Stephen Colbert described the event that clinched for him why he wanted to work in comedy and improv, not serious theater.
“I saw someone fail onstage — terribly, massively fail onstage,” Colbert recalls. “And we backstage laughed so hard at this woman’s failure, and our laughter was so joyful and not derisive. I remember turning to a friend of mine, Dave Razowsky, and we threw our arms out wide and hugged each other in laughter and literally fell to the ground in each other’s arms over the joy of that failure.”

Friday, May 13, 2011

5 Whys, Harder than it looks, Part 1: What's a Failure?

An exercise I gave my software project management class at Northwestern this year was a 5 Whys process improvement analysis, as in this video and essay from Eric Ries. I also strongly recommend Tony Ford's experience report and example analysis.

There were three parts to the exercise:
  • identifying some key failures
  • developing a 5 Whys causal chain for each failure
  • proposing process improvements to address each step in each chain
I showed the Ries video, assigned the Ries essay, and then had them do similar analyses on several failures in the project they had just finished. I did these as critiqued exercises, figuring that it would take a few iterations to get right. I was right about the need for iterations, way off on the "few" part. Each part has been quite hard for most of the students.
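To make the three-part structure of the exercise concrete, here's a minimal sketch in Python. The failure, the "why" answers, and the proposed improvements are all invented for illustration; they are not from any student's actual analysis.

```python
# A sketch of the exercise's three parts: a key failure, a 5 Whys causal
# chain, and a proposed process improvement for each step in the chain.
# All text below is invented for illustration.

failure = "When a visitor submits the signup form, a 500 error page appears."

# Each entry pairs one "why" answer with a proportional improvement.
five_whys = [
    ("The server raised an unhandled exception.",
     "Add error monitoring so exceptions are reported immediately."),
    ("The code assumed the email field was never empty.",
     "Add validation tests for empty and malformed inputs."),
    ("The form code was merged without review.",
     "Require a second pair of eyes before merging to the main branch."),
    ("The team was rushing to meet the iteration deadline.",
     "Renegotiate scope earlier when a story runs long."),
    ("Stories weren't sliced thin enough to finish early.",
     "Practice splitting stories into one-day slices."),
]

print("Failure:", failure)
for cause, improvement in five_whys:
    print("Why?", cause)
    print("  Improvement:", improvement)
```

The point of pairing an improvement with every step, as Ries emphasizes, is to make proportional investments at each level rather than one big fix at the root.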

First up:

What is a 5YA failure?

A project has lots of failures, but only some are appropriate for a 5 Whys analysis. Both Ries and Ford use examples involving some form of web site failure. Here are some of the things my students submitted and the critiques they got. There were many similar examples of each type of problem.
  • Our team had to implement image upload three times
I classify ones like this as "bad choices led to rework." Clearly something you'd like to avoid in the future, but is it a 5YA (5 Why Appropriate) failure? I say no, because there's no user story failure. A 5YA failure is a user failure, like "the web site crashed" or "the search function returned 'page not found' errors." Without knowing what user stories were impacted and how (broken, missing, slow, ....), you have no idea how to prioritize the importance of the failure and therefore how much effort to expend in trying to avoid it in the future. Furthermore, making things run more smoothly is much less motivating than avoiding embarrassing public failures.
  • Our client changed the design of the website twice
There were a lot of these "requirement changes led to rework." This has three fundamental things wrong with it. The first problem is the same as the previous example. There's no specific user story impact. Second, this is not a developer failure. You can't fix what you can't control. What you need to fix is how your process handles the things you don't control. It's not a 5YA failure if your cloud service dies. It is a 5YA failure if you have no backup. Third, and most important, this may not even be a failure at all. Iterating on designs is a good thing, if driven by actual usage. The whole point of agile is that it's impossible to get it all correct up front. The corollary is that sometimes we'll get it wrong. That's not a failure. That's life.
  • XXX was working on a fork of the repository so almost none of his code got integrated
I leave what's wrong here as an exercise for the reader. It's one of the points already made.
  • The code XXX submitted for "index" view of the "projects" controller didn't work
  • Buggy and totally non-working code was checked in
These are just way too broad and non-specific. "Didn't work" doesn't distinguish "nothing happened" from "wrong results" from "error page appeared." The devil is in the details. Ignore those details and the devil remains. Trying to fix the problem "code is buggy" leads to ineffective measures like "test more!" and expensive ineffective measures like "everyone goes to Java camp this summer!" How the user story actually breaks leads to very different analyses, responses and priorities.

A 5YA failure is a bug report

My final recommendation for my class was that a 5YA failure should be writable as a bug report. That is, it should be put in the form "when a [user-type] does [action], [failure event] occurs." Failure events should distinguish between nothing happening, the wrong things being returned, errors being returned, and so on.
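The template above can be sketched as a tiny helper. The function name and the example user types, actions, and failure events are all hypothetical, invented here for illustration.

```python
# A sketch of the "5YA failure as bug report" template:
# "when a [user-type] does [action], [failure event] occurs."
# The helper name and example values are invented for illustration.

def five_ya_failure(user_type: str, action: str, failure_event: str) -> str:
    """Render a failure statement in the recommended bug-report form."""
    return f"When a {user_type} does {action}, {failure_event} occurs."

# Note how each failure event names a distinct kind of breakage
# (error page, nothing visible, wrong results) rather than "didn't work."
examples = [
    five_ya_failure("site visitor", "a keyword search",
                    "a 'page not found' error"),
    five_ya_failure("registered user", "a profile save",
                    "nothing visible"),
    five_ya_failure("admin", "a bulk export",
                    "a wrong-results download"),
]

for line in examples:
    print(line)
```

Statements in this form give the specificity that "the code was buggy" lacks: a user type to prioritize, and a concrete failure event to trace back through the whys.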


Tuesday, April 26, 2011

Designing CS101: From Failure to Causes

So far, I've argued that the goal for a CS principles course that makes the most sense is to deal with the failure "students who don't know what CS is about miss out on a chance to find a career they'll really enjoy in software development or computer science."

The next question then is "Why do they make this mistake?" There's a lot of data on this, particularly for females and minorities, e.g., to pick one pretty much at random, this survey-based study. In this study, as in many others, the usual reasons given for not pursuing computer science are that it's too hard and computers are boring. Other studies and articles have focused on beliefs that computer science is all about programming and that programming is a solitary activity. Yet others have focused on the presence (or absence) of role models, again particularly for females and minorities, and on the image (or lack thereof) of computer science in the popular media.

I have my doubts about that last one. Female hackers have been a staple on 24, NCIS, Warehouse 13, and elsewhere for some time. These reinforce the nerdy image but are otherwise very positive and emphasize membership in a working group over being a loner.

What about the perception that CS is about programming and programming is boring? Are those beliefs wrong?

One approach has been to make programming more fun. Alice. Scratch. Computing and Multimedia. I love these but so far I've not seen them move past the introductory level and hit the place where most agree real CS begins: algorithms and data structures. This also doesn't address the issue of what you do in CS.

Another approach is to skip the programming and go straight to the big ideas. That's what started this series of blog entries. But if the goal is to show that CS is an interesting career, then it's not the ideas students need to see. That's like thinking a biology course introduces you to what biologists do. Students need to see and try doing things that accurately reflect what computer scientists actually do. What fills their days? What makes them want to come to work? What's a moment of joy?

So, my conclusion is that students misunderstand what people do in CS, besides programming, because they've never seen it done nor had a chance to try doing it, particularly pre-college. In this, CS isn't really different from math or biology or many other fields. Most education is badly constructed around facts and ideas rather than skills and activities. If there were a glut of computer scientists, this wouldn't matter, but as far as most U.S. companies and the government are concerned, we have a growing shortage.

On to question 3!

Sunday, March 6, 2011

Designing CS101: Failure-Driven Course Design

In Agile Education and the CS Principles Course, I criticized learning objectives like:
  • "The student can analyze the effects of computing within economic, social, and cultural contexts." (Objective 4)
  • "The student can use abstractions and models to solve computational problems and analyze systems." (Objective 5)
These appear to have no answer to Question #1 in the 6-question framework: What mistakes are people making that matter, and who cares? What goes wrong when people don't or can't do Objective 4? Outside of essays and exam questions in the CS Principles Course, where does this skill matter? Is there some subgroup for whom this skill is critical? Who and when?

With Objective 5, I can think of some people to whom it might matter: computer scientists and software developers. There are two problems though. First, this is a much narrower group than the CS Principles Course is being developed for. Second, I can only say "might" because I need to see typical examples of developers not using abstractions and models, and what the negative consequences are. What's the evidence that this occurs often enough to be of concern, and is harmful when it does occur?

All of the CS Principles learning objectives have this problem, except those directly about programming, such as
  • The student can create a computational artifact through programming.
But the point of CS Principles was that there's more to CS that people should know besides programming.

So can we answer Question #1? Let's step back, as good Agile developers do, and ask "what's the goal of the CS Principles project?" Like many committee and community-built curricula, a number of goals have accrued. A summary of them can be found in the Open Letter to the Computer Science community.

One impetus certainly is the feeling by many in CS that most people, including students, teachers, parents, and so on, think computer science means programming, programming is boring, and therefore students choose not to pursue a career in computer science.

That feels like a real mistake, but Question #1 also asks who cares, and why. I can think of three major groups:
  • employers
  • the government
  • students
Employers care because they need good developers. But that's back to programming plus other developer skills. Not where the CS Principles initiative arose from.

The government, especially NSF, cares because they want the US to maintain a lead in research in new forms of information technology. That's good. A CS Principles Course about the big ideas, where they came from and where new ones are coming from, would help address a failure they care about.

Students should care. It's in their own best interests to know whether CS is a career they might actually enjoy and do well in.

So we can argue for a course on how to be a software developer or a computer scientist. But the proponents of CS Principles have a broader aim. Peter Denning in a number of essays on the Great Ideas in CS, Jeannette Wing in various essays and talks on Computational Thinking, and others, have said that a big mistake students make is not knowing how to approach problems in science, engineering, and social issues, from a computational perspective.

The relevant parts in the Open Letter are these two bullet points:
  • With a concepts-rich curriculum that emphasizes computational thinking and problem solving students taking the course will be better prepared for most careers, given the role that computing plays in most sectors of American life.
  • The population at large -- starting with present day high school students -- will become more knowledgeable and cognizant of computing and computational phenomena.
I'm unconvinced. Students certainly need to be tech-savvy. But they already are, often more so than most of the faculty in CS departments, when it comes to social media. But do they really need computational thinking? Where exactly, outside of programming and CS research, does it matter if someone doesn't apply computational thinking?

I can easily see "here's why it matters" arguments for subareas of computational thinking for subgroups of students. Students in biology should know about multi-agent simulations. Students in cognitive psychology should know about cognitive modeling. There are motivating examples of muddled thinking failures in those and other fields due to the inability to think computationally.

But I'm unable to see why poets, playwrights, and plumbers need to know about computational thinking. Without convincing examples of "why does this failure matter?" a course design is doomed. Students will view CS Principles as yet another irrelevant rite of passage, like learning the parts of a neuron, the lifecycle of stars, the dynasties of China, and so on.

So, I'll stick with the answer "students who don't know what CS is about miss out on a chance to find a career they'll really enjoy in software development or computer science." Phew! One question down, five more to go.

Agile Education and the CS Principles Course

One of the things about the Agile movement that attracted me from the beginning was how proponents would go back to the core goals of development, like "getting something working that the client really needs," and from that develop practices that turned standard development ideas on their head. That resonated with my own experiences in how to attack problems like natural language understanding and the design of learning environments.

I recently gave an internal "brown bag" talk on how several of these upside-down Agile practices could be fruitfully applied to educational design and academic research.

Then I happened across the AP CS Principles course, and specifically their learning outcomes document (PDF). I have great respect for the goal, the people involved, and the effort invested, but that "requirements" document is wrong in so many ways that I felt a need to try and articulate why.

My own approach to designing courses and other learning experiences is based on the 6-question framework developed by Alex Kass, Ray Bareiss, Danny Edelson and myself in response to the many flawed designs we'd seen (or created) over the previous decade at the Institute for the Learning Sciences at Northwestern.

Question #1 in that framework, "What mistakes are people making that matter, and who cares?" is the most critical. It defines what you need to teach. It's where at least half the designs we saw failed, including the CS Principles learning outcomes. Pull out just the statements about what a student will learn to do. Here are two:
  • "The student can analyze the effects of computing within economic, social, and cultural contexts."
  • "The student can use abstractions and models to solve computational problems and analyze systems."
Try to find any outcome that would make a student say, "I can't do that, but I'd love to learn how!"

In case it's not clear why this matters, read the UW CSE120 blog on a pilot run of this course. Look where student engagement occurred. Without engagement, nothing else happens when it comes to learning.

It's easy to criticize. That's the point of this blog. But it's not enough to criticize. What would a better answer look like? What would happen if I applied the 6-question framework to the course with the same intent as the CS Principles course?

That's my plan for the next several blog posts. Stay tuned.