As everyone interested in such topics knows, Moore’s Law began as an empirical observation, a post-hoc curve-fitting exercise that highlighted the remarkable and accelerating progress of semiconductor technology. It quickly became a leitmotif in every discussion of technology. And despite the widely diverse techniques through which it has been sustained, it has gained and held the unquestioned character of a physical law, deeply believed and considered immutable by consumers, engineers, executives and financiers worldwide.
Although Moore’s original 1965 article described a doubling of the number of transistors every 12 months, rather than today’s popular form, in which microprocessor performance doubles every 18 months (as our folklore maintains), these distinctions are secondary to a much larger truth, which Moore clearly emphasized in his article: namely, the overwhelming role of increasing integration in reducing cost per function or per operation. Forty years ago, Gordon Moore understood that he was describing an economic, rather than a technological, phenomenon, one with the potential to restructure whole industries, indeed whole economies.
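To see how sharply those two doubling periods diverge, here is a quick back-of-the-envelope sketch in Python (illustrative only; the growth-factor formula is simple compound doubling, not anything from Moore’s article):

```python
# Compare cumulative improvement under the two popular doubling periods.
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative improvement after `years` at a given doubling period."""
    return 2 ** (years / doubling_period_years)

for years in (5, 10):
    f12 = growth_factor(years, 1.0)  # Moore's 1965 form: doubling every 12 months
    f18 = growth_factor(years, 1.5)  # folklore form: doubling every 18 months
    print(f"{years:>2} years: 12-month doubling -> {f12:,.0f}x, "
          f"18-month doubling -> {f18:,.0f}x")
```

Over ten years the 12-month form compounds to roughly 1000x, the 18-month form to only about 100x; yet either way, it is the relentless compounding, not the exact period, that carries the economic force described below.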
A few times a year, we hear an envious voice from the beleaguered optical components sector (or the even more oppressed disk drive sector) say something like, “Big deal! We can improve our technology even faster than Moore’s Law!” This illustrates a basic misunderstanding of what Moore’s Law has become, namely an economic law.
To make this point clear, I offer my interpretation of Moore’s Law as it applies today:
“Moore’s Law expresses that rate of semiconductor process improvement which transfers the maximum profit from the computer industry to the semiconductor industry.” It is NOT the maximum rate of process improvement or feature-size reduction that is technically possible, nor is it limited by capital spending on new equipment. It is the rate at which each product generation lasts just long enough to be (marginally) profitable for the systems vendors, yet delivers new product introductions to customers at a pace at which they will take them seriously. This economic influence has become the means by which Moore’s Law has not only driven the semiconductor sector but has also restructured, marginalized, or greatly invigorated other sectors, as we will discuss further.
One might well ask, “How does all this economic influence create a technical cross-impact?” Consider an analogy with the flight of an aircraft. Every aircraft has a speed called the maximum operating speed (or “never exceed” speed), above which the aircraft is unsafe to fly. Another, lower speed is called Va, the maneuvering speed. It is safe to fly faster than Va, but only in straight and level flight, without any changes of direction. In other words, motion is confined to one dimension, which allows greater speed but no significant changes in the other dimensions. To allow for arbitrary changes in heading or altitude, the aircraft must slow down below Va.
An enormous impact of Moore’s Law was to dramatically constrain and focus the computer industry. For decades before the advent of significant levels of integration in semiconductors, innovations in computer architecture formed the basis of competition among vendors. These innovations meant that system organization, instruction sets, memory management techniques and I/O controllers were different in every product generation and from every vendor. Before System/360, introduced by IBM in 1964, there had hardly been two consecutive models that were compatible. These dramatic changes in direction meant that no particular design ever had enough customers to become low in cost, or to allow a meaningful independent software industry to arise (virtually all software was vendor-dependent, either provided by the vendor or by the customer). Each small step produced improved cost/performance, but at great expense to both vendor and customer.
Moore’s Law raised the velocity of cost/performance improvement far above maneuvering speed for the computer industry. That is, the tremendous rate of improvement in cost/performance via feature size reduction made it unwise for computer vendors to attempt architectural changes, which meant in turn that the semiconductor suppliers took control of the architecture of their customers’ designs by integrating more function in their chips, running them faster, and reducing electronic costs steadily.
This, in turn, meant that these high-performance, low-cost processors and memories were available to all incumbent computer vendors and to industry newcomers as well. Over a fifteen-year period, the existing computer industry was commoditized, and slowly destroyed. Even IBM was forced to transform itself further in the direction of providing services and to concentrate its own architectural efforts at the very top of the market. But this brave new world proved very hospitable to low-cost manufacturers, who concentrated their efforts on power, packaging and distribution, taking out costs and improving reliability while relying on Intel, AMD or Motorola for the major value creation.
A huge collateral effect of Moore’s Law was the creation of the commercial software industry as a meaningful force in the economy. This took place in two ways. First, the falling cost and wide availability of powerful processors greatly increased the number of computers in use, and thus successful software products could be sold in enormous numbers at modest prices. Second, the end of the proliferation of architectural variations meant that a successful software product needed to run on only one, or at most two, different CPU types, secure in the knowledge that (a) this would cover virtually the entire market of all vendors and customers, and (b) there would be inexorable, steady improvements in cost/performance which would seldom require any significant changes to the programs. This allowed larger software investments to be made, in products which would surely perform better and better over time, courtesy of Moore’s Law.
These three very large phenomena, namely the enormous growth of the semiconductor industry, the commoditization of the computer industry and the emergence of a huge software industry, are of course mutually dependent, and together they have created the economic framework that has held Moore’s Law in place for so long. That the rewards have fallen disproportionately to Intel, Microsoft, Oracle and Cisco is incidental; the global wealth and productivity created by the simplicity and constancy of Moore’s Law (or rather, our belief in it) has deeply changed civilization, our very concepts of information and our access to it.
However, staying strictly on any technical path involves bypassing others, sacrificing progress in some areas to sustain it in another. It is certainly worth examining some of the approaches delayed or abandoned by the course we’ve taken as an industry.
Programming in the ’60s and ’70s was a high form of technological art, and computer science was its guiding scripture. A rich body of theory developed, as did a software engineering discipline, largely driven by the notions (true at the time) that computers were expensive, their resources limited, and programmers scarce. Thus great emphasis was placed on clever algorithms that required the fewest instructions or the smallest amount of memory, or both. Elegant, parsimonious program design was celebrated. Improvements to compilers for denser code and new languages for programmer productivity were high priorities in academia and industry alike. It was a small and much-envied priesthood of those who had access to a computer.
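For a flavor of the parsimony that culture prized, here is a classic bit-manipulation trick of the sort celebrated at the time (a hypothetical illustration of mine, shown in Python rather than the assembly or C of the era): counting the 1-bits in a word with a loop that runs once per set bit rather than once per bit position.

```python
# Kernighan-style population count: x & (x - 1) clears the lowest set bit,
# so the loop iterates only as many times as there are 1-bits in x.
def popcount(x: int) -> int:
    count = 0
    while x:
        x &= x - 1  # clear the lowest set bit
        count += 1
    return count

assert popcount(0b10110100) == 4  # four bits set
```

On machines where both instructions and memory were scarce, shaving a loop from word-length iterations down to set-bit iterations was exactly the kind of craft that earned professional admiration.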
This entire culture disappeared under the crashing wave of Moore’s Law. Clever, parsimonious designs were unnecessary in an era of ever faster and cheaper CPUs and memory, as were complex compilers that generated smaller code. Programmer productivity was suddenly a different matter when everyone had access to their own machine, night and day. Computer science has reinvented itself very successfully around topics like networking, databases, and search. Programming, however, will never be the same. It’s become a pick-and-shovel technical activity, where knowing the peculiarities of J2EE is vastly more useful than knowing complexity theory.
Another area of technology knocked completely off track by Moore’s Law is parallel computing. In the ’70s, very large scientific computers with parallelism among several arithmetic units were just beginning to work well, after a few failures. Equally important, software researchers were just beginning to make real progress on the problem of programming parallel machines, since these seemed the most promising architectural step toward a new level of performance.
Parallel computing, too, was frozen by the chill winds of Moore’s Law. In effect, every designer asked himself, “Should I really build a dual-processor machine, and make the software modifications that it requires, or just wait 18 months for a single CPU that’s twice as fast, with no additional effort?” Only now that the limits of growth in CPU clock frequency are in sight has a really serious focus on significant multi-processing entered the mainstream.
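The designer’s arithmetic can be made explicit. Framed with Amdahl’s law (my framing, not the author’s): if a fraction p of a program is parallelizable, n processors give a speedup of 1 / ((1 - p) + p/n), while the waiting option speeds up everything uniformly.

```python
# The designer's dilemma in numbers: a second CPU today vs. a single CPU
# twice as fast in 18 months. Amdahl's law caps the dual-CPU speedup by
# the serial fraction; the waiting option doubles performance across the board.
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

for p in (0.50, 0.80, 0.95):
    print(f"parallel fraction {p:.0%}: dual CPU -> {amdahl_speedup(p, 2):.2f}x, "
          f"wait 18 months -> 2.00x")
```

Even at 95 percent parallelizable, two processors yield only about a 1.90x speedup, still short of the 2.00x that simply waiting promised, and without any of the software cost. The rational choice, year after year, was to wait.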
Moore’s Law IS the story of information technology in the past 40 years. Of course it has made the semiconductor industry an unbelievable success. But its cross-impacts on the computer, communications, consumer electronics and software industries have changed our world, even beyond what Gordon Moore foresaw in that short and brilliant article in 1965.
About the Author
David Liddle joined U.S. Venture Partners in January 2000, after retiring as president and CEO of Interval Research Corporation, a Silicon Valley-based laboratory and incubator for new businesses focused on broadband applications and advanced technologies, founded in 1992. David is also a consulting professor of Computer Science at Stanford University and has spent his career in Silicon Valley, in activities spanning research, development, management and entrepreneurship. Prior to co-founding Interval with Paul Allen, he founded Metaphor Computer Systems in 1982 and served as its president and CEO. The company was acquired by IBM in 1991 and David was named vice president, Business Development, IBM Personal Systems. Before that, from 1972 to 1982, he held various R&D and management positions at Xerox Corporation and at its Palo Alto Research Center. While there, he was vice president and general manager, Office Systems Division. David has served as a director at Sybase, Broderbund Software, Borland International and Ticketmaster Group, as well as numerous private companies, and as Chair of the Board of Trustees of the Santa Fe Institute. He has served on the DARPA Information Science and Technology Committee, and the Computer Science and Telecommunications Board of the National Research Council. He earned a B.S. in Electrical Engineering at the University of Michigan, and an M.S.E.E. and Ph.D. at the University of Toledo. For his contributions to human-computer interaction design, he has been named a Senior Fellow of the Royal College of Art. He also serves as a director of the New York Times Company. David currently represents USVP as a director at Caspian Networks, MaXXan, T-RAM, Inc., Axiom Microdevices, Optichron Corp., MaxLinear, Gear6, Klocwork, Instantis and PacketHop Inc.
His primary investment areas are in RF and analog semiconductors, cellular and wireless networking, signal processing, and datacenter networks.