Where Disruptive Innovation Came From
11 November, 2015 / Articles
After a long and successful run, the theory of disruptive innovation has come under attack of late. Last year, The New Yorker published a piece by Jill Lepore, a history professor at Harvard, attacking the whole idea as overblown and based on shoddy scholarship. In a recent Sloan Management Review article, Dartmouth professor Andrew King asked “How Useful Is the Theory of Disruptive Innovation?” and concluded it’s not nearly as valuable as its proponents argue.
My own take is that Clay Christensen and his co-authors on the theory made a substantial contribution to our understanding of innovation. In particular, Christensen’s research offers a powerful lens for understanding why incumbents so often lose to upstarts attacking from the low end of the market. Disruptors adopt a new technology and target a market segment that doesn’t matter to incumbents, then ride improvements in the technology to expand into established players’ core customer base. Leaders in the incumbent firms face a very real dilemma: Invest to sustain their current business — which is proven and profitable — or venture into new territory and jeopardize their core business through lost focus or cannibalization.
Disruptive innovation is a parsimonious theory that explains many business failures. But not all. And that is the critical insight getting lost in the cross-fire between the theory’s disciples and its critics. Businesses fail to respond to innovation for many reasons, and no single theory can be expected to explain every case. Fortunately, disruptive innovation does not have to. Christensen’s work is part of a rich body of research that, taken as a whole, explains more than any single theory can.
Christensen’s initial work was done during the 1990s when a group of scholars, many at Harvard Business School, were studying why established companies, like Polaroid or DEC, struggled to adapt to technological change despite ample resources, talented engineers, and admired leaders. For more than a decade, this community chipped away at the question — applying different frameworks, studying different industries, and generating a set of insights that deepened our understanding of why good companies go bad.
In a seminal 1990 study of the semiconductor equipment industry, Rebecca Henderson and Kim Clark argued that the organization and knowledge flows of high-technology companies reflect the architecture of their underlying products. When the product architecture changes (for example, by reconfiguring how the components are integrated into a system), established players often struggle to adapt. Their organizational structure, which still embodies the old product architecture, is difficult and time consuming to change.
Two years later, Dorothy Leonard-Barton published the findings from a study of 20 innovation projects at companies including Ford, HP, and Chaparral Steel. Firms, Leonard-Barton found, had to invest heavily to build the technical competencies required to excel along a given technical trajectory. These capabilities were deeply rooted in an organization's routines, but also in its culture. When faced with a new technology, however, market leaders often found their historical capabilities were poorly suited to the new conditions and difficult to change. Core competencies became core rigidities.
In 1995, Christensen and Joe Bower published the article that introduced the notion of disruptive technology. Joe had pioneered the study of how resources get allocated inside large companies, and their article built on his earlier work. Dealing with a disruptive technology forced firms to choose between funding existing businesses and betting on new ones. The executives running business units serving current customers typically won the fight — not because it was best for the company in the long run, but because they had the power that comes from bringing in the cash.
The following year, Bower and Tomo Noda published a study comparing two Baby Bells after AT&T’s breakup. One succeeded in mobile telephony, while the other floundered. Their key insight was that early investments in cellular technology made each subsequent investment easier to justify. If you didn’t make that initial bet, it became successively harder to catch up.
My own work in the late 1990s explored why incumbent tire companies like Firestone and Uniroyal failed to adopt the radial technology that had proven its superiority in Europe. The key insight was that leaders made a set of mutually reinforcing commitments to technical capabilities, resources such as factories, relationships with customers, cognitive models of their environment, and corporate culture. Not only did the individual commitments harden over time; their interdependencies also made the entire system fiendishly difficult to change. As a result, companies often respond to even the biggest changes with active inertia, accelerating activities that worked in the past.
The bottom line is this: No single theory, not even one as elegant as disruptive innovation, can explain everything. But that was never Christensen's intent. And it is certainly not his fault that disruption has become a buzzword synonymous with "change." The critics are right that no one theory fits all cases. But it also doesn't make sense to write disruptive innovation off as useless. The framework belongs in the intellectual toolbox of any leader who wants to understand, and harness, the power of innovation.