A young colleague recently wrote to me complaining of frustration from having to deal with a high rate of errors in software development and data analysis. Any time you are innovating in a knowledge-intensive field such as health care, you will need to develop new software and analyze data in new ways. Errors will inevitably result. There’s no easy way to avoid them. Therefore, reducing errors in software development and analysis is a lifelong battle for healthcare innovators.
The conventional philosophy of reducing errors is the following:
- Make sure everyone clearly knows what they are responsible for
- Make sure you use a tightly controlled development process with clear steps, checkpoints, milestones and gates
- Make sure you have everything well documented, using documents created from highly detailed templates designed to assure that nothing is forgotten
- Make sure you have detailed testing scenarios designed in advance, and that you do “regression testing” to assure that changes to one part of a system or analysis do not cause the testing scenarios to fail
- Make sure everyone understands the consequences of errors, both to the organization and to them personally
These are the pillars of rigorous project management.
But, unfortunately, experience teaches that this philosophy can have unintended consequences. Errors still occur. Little errors, like bugs. And big errors, like creating something that nobody needs or wants.

When you have a tightly controlled process, you communicate to people that you intend the process to be linear, rather than iterative. Even when you say “let’s do this iteratively,” all the steps, milestones and gates tell people that you really mean the opposite.

When you create a highly detailed template, intended to assure that nothing is forgotten, you unintentionally switch people into a mode of “filling out the form,” rather than the much harder and more valuable work of figuring out how to effectively teach the most important concepts to the reader. And you unintentionally convert your quality assurance process into one that emphasizes adherence to the template, rather than the quality of the underlying ideas being taught.

When you create detailed testing scenarios, you unintentionally encourage the team to treat “passing the tests” as quality, rather than challenging the software or the analytic results with tests designed from insight into how the software or the analytic calculations are actually structured and what types of errors are most likely. A software developer I know calls that “testing smarter.”

Finally, when you communicate to people the consequences to them personally of messing up, intending to increase their motivation to do error-free work, you unintentionally tell them to allocate more of their time to avoiding blame and documenting plausible deniability. And you unwittingly tell them to bury the problems that could provide the insights needed to drive real improvement.
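To make the “testing smarter” idea concrete, here is a minimal, hypothetical sketch in Python. The function and its age bands are invented for illustration; the point is that knowing the code is built from threshold comparisons tells you where the likely bugs live, so the tests probe the bin boundaries rather than arbitrary “scenario” values:

```python
def risk_points(age):
    """Assign risk points by age band (illustrative only)."""
    if age < 45:
        return 0
    elif age < 65:
        return 1
    else:
        return 2

# Structure-aware tests: off-by-one errors (< vs <=) hide at the
# boundaries, so that is exactly where we test.
boundary_cases = {44: 0, 45: 1, 64: 1, 65: 2}
for age, expected in boundary_cases.items():
    assert risk_points(age) == expected, f"wrong points at age {age}"
```

A generic scenario like `age=30` would pass even if a developer accidentally wrote `<=` instead of `<`; the boundary cases would catch it immediately.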
W. Edwards Deming famously advocated for “driving out fear.” In his landmark book, “Out of the Crisis,” published back in 1986, Deming explains that people fear new knowledge because it could reveal their failings and because it could lead to changes that could threaten their security. Focusing on motivating people might be a good idea if the problem is inadequate motivation. But, in my experience, poor performance is usually not an issue of motivation, especially in professional settings. More likely, poor performance is an issue of poor tools, poor training (leading to inadequate knowledge or skills), or having the wrong talent mix for the job.
That last one — talent — is a tricky one. We consider it enlightened to assume that everyone could do a great job if only they received the right tools and training. Saying someone lacks the necessary talent for a particular job can be considered arrogant and wrong. I think this may be because talent is an unchangeable characteristic, and we are taught that it is wrong to judge people by other unchangeable characteristics such as gender or race. But each person was given their own unique mix of talents. They will make their best contribution and achieve their highest satisfaction if they are in a role that is a good fit for those talents. On the other hand, it is devilishly hard to tell the difference between unchangeable talents and changeable skills and knowledge. And developing people’s skills and knowledge is hard work and requires patience. As a result, it is too easy for leaders to get lazy and waste real talent. Finding the right balance between optimism and realism about people’s potential requires maturity, effort and some intuition. If in doubt, take Deming’s advice and err on the side of optimism.
I’m not arguing against processes, documentation, test scenarios or accountability. But I am suggesting that you be careful about the unintended consequences of taking those things too far and relying on them too much.
My advice to my colleague was to focus more on the following:
- Make sure you hire really talented people, and then invest heavily in developing their knowledge and skills
- Make sure you are actually analyzing data from the moment you start capturing it, rather than waiting to accumulate lots of data, only to discover later that it was messed up all along
- Make sure you do analysis and develop software iteratively, with early iterations focused on the hardest and most complex parts of the work, so you don’t discover late in the game that your approach can’t handle the difficulty and complexity
- Most importantly, create a culture of learning, where people feel comfortable sharing their best ideas, talking about errors and problems, taking risks, and making improvements
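The second point above — analyzing data the moment you start capturing it — can be as simple as a validation pass that runs on every record as it arrives, rather than a batch review months later. A minimal sketch in Python; the field names and plausibility ranges are hypothetical:

```python
def validate_record(record):
    """Return a list of problems found in one incoming record."""
    problems = []
    age = record.get("age")
    if age is None:
        problems.append("missing age")
    elif not (0 <= age <= 120):
        problems.append(f"implausible age: {age}")
    sbp = record.get("systolic_bp")
    if sbp is not None and not (50 <= sbp <= 300):
        problems.append(f"implausible systolic_bp: {sbp}")
    return problems

# Run the checks as records arrive, so data-entry errors surface on
# day one instead of after months of accumulation.
incoming = [
    {"age": 52, "systolic_bp": 128},
    {"age": 430, "systolic_bp": 120},  # typo caught immediately
]
for i, rec in enumerate(incoming):
    for problem in validate_record(rec):
        print(f"record {i}: {problem}")
```

Even a crude checklist like this, run from the first day of data capture, surfaces the “it was messed up all along” problems while they are still cheap to fix.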