Machine intelligence has been a military research goal for decades, but is it worth the cost? Artificial intelligence research reaches toward long-held visions of human-machine symbiosis, and toward all the benefits such symbiosis would bring for military might. Even if scientists fall short of these lofty ambitions, or if they prove impossible to achieve in full, aiming for them may move humanity further along the path of scientific progress — but are small increments of progress worth billions of taxpayer dollars?
Such ambitions for generic AI systems have fueled research programs across the defense landscape since the late 1960s. The Strategic Computing Program grew out of the context of the early 1980s: optimism about the ability of computers to solve military problems, coupled with the Reagan administration’s Cold War push to bolster the United States through technological advancement and big defense budgets. However, despite more than $1 billion in funding, the program over-promised and under-delivered, and it was left to fade into embarrassed obscurity in the early 1990s. Its fate holds lessons on how to approach large-scale defense research projects and provides an important lens through which to view the smaller successes of AI research.
Early AI Research
From the early 1960s, the Department of Defense was the largest funder of advanced computer science in the United States, and the Pentagon’s Defense Advanced Research Projects Agency (DARPA) was responsible for major advances in the field. While this support contributed greatly to general American dominance in computing, it was particularly crucial to the military. Computers and software increasingly became vital components of major weapons systems, and without good information technology it became impossible to bring these new systems to fruition. Military interest in AI specifically stemmed from the general drive toward automation — a central theme of military research after the end of World War II, from weapons to training to factory production. The underlying focus was the endeavor to produce automated machines by codifying human intellectual functioning.
The Information Processing Techniques Office, a DARPA office first led by J. C. R. Licklider, funded projects throughout the 1960s (such as the Project on Machine-Aided Cognition) on a range of topics related to AI, including natural language processing and the use of computers to play games such as chess. There was early success in the building of expert systems, which rendered the knowledge of human experts into rules that could be applied by computers, leading to a small boom in this type of AI.
However, these systems could only cope with tightly defined inputs, and they proved to be difficult to update and maintain. Researchers had come close in several projects to mastering toy problems in microworlds, limited domains in which one could program and control all of the variables, but these simply did not scale. Moving wooden blocks around a table did not translate directly to moving cargo around a freight yard, and the compounding complexity of scaling up meant that the problems quickly ran up against limits imposed by the speed and power of early computers.
Despite these problems, the small initial successes in expert systems, speech recognition, time-sharing, and machine translation kept expectations alive and seemed to prove there was promise in the field. These fillips meant that AI remained a key priority for the defense community, even through the setbacks of the 1960s and 1970s. In the summer of 1981, the Defense Science Board, a panel of civilian experts advising the Department of Defense, ranked AI second on its list of the technologies with the most potential to make an order-of-magnitude impact on defense in the 1990s.
The key limiting factor on progress toward AI was clearly computing power, and this spurred calls for research into faster and more powerful interactive computer systems. In the early 1980s, DARPA recognized the urgent need for high-performance computing to underwrite the military’s future requirements. The agency saw that architectures capable of speeds thousands of times faster than those then possible would soon be as critical to the armed forces as traditional weapons systems. It was this need for a whole new outlook on computing technology that gave rise to the Strategic Computing Program.
The Strategic Computing Program
DARPA conceived of the Strategic Computing Program as a way of pushing toward AI along a wide front, integrating advances in chip design, processing speeds, computer architecture, and AI software. Between 1983 and 1993, DARPA spent over $1 billion of federal funding on the program before its eventual collapse.
The program differed from its predecessors in that it self-consciously set out to advance an entire research front. Individual research programs had usually focused on one specific problem, or funding had been disbursed to an entire field to promote general advancement; this program mixed the two approaches. The specific problem was AI, but the program conceived of it as a problem comprising several interrelated subsystems, each of which could be developed and then connected. Each subfield would move ahead along its own path, but each would also contribute to the advancement of the others. The various strategy documents produced over the project’s lifetime show that the Strategic Computing Program was envisaged as a pyramid of technologies: Building the infrastructure at the bottom, such as microelectronics and architecture, would lead to developments flowing up the pyramid, via connecting layers, to the capstone of AI. These mid-level technologies included a range of AI-based applications for military purposes, including photograph interpretation, applied speech understanding for a pilot’s associate, and battle simulation and planning.
The key to implementing these technologies would be orders-of-magnitude increases in computing power through the parallel connection of processors, providing the speed and capacity that had been missing in early approaches to AI. The peaks and troughs of optimism that AI had cycled through since the early 1960s had given rise to a range of projects that had all failed to deliver on their goals, but the Strategic Computing Program’s founders maintained that the computing power unlocked by the successful development of parallel processing would finally clear the way for the promise of AI.
The shape and momentum of the Strategic Computing Program were created in large part by the historical context in which it was proposed. When Ronald Reagan became president in 1981, he brought into office a desire to challenge the Soviet Union through the unleashing of free enterprise. As such, he was predisposed to look favorably upon projects that heralded significant advances for the U.S. technology sector, as well as military-focused programs. The Strategic Computing Program promised technological breakthroughs that would provide new military capabilities while energizing the private sector, so the Reagan administration saw great promise in it and was willing to back it with funding. This backing became even more vital when the Japanese government announced its Fifth Generation computer program in 1982, which aimed specifically at AI. The rapid growth of the Japanese economy and its rise as a trading power during the 1970s had caused a great deal of American paranoia about the United States’ position at the forefront of global trade and technology, paranoia that was only compounded by the domestic economic recession of the early 1980s. The fear was that if the United States failed to seize the moment in response to the Fifth Generation project, it would be overtaken by Japan and lose its place as the world leader in high technology. This context enabled DARPA to sell the technological paradigm of the Strategic Computing Program to an already willing political class.
The Strategic Computing Program was nothing less than the orchestration of the advancement of an entire suite of technologies, through an ethos of connection, toward the eventual goal of machine intelligence. There was, however, a tension among the top-level staff over which way the flow would run. Robert Kahn, the director of the Information Processing Techniques Office, favored technology “push,” in which specific applications would flow naturally from the promotion of a wide technology base. DARPA Director Robert Cooper argued for technology “pull,” whereby a focus on a specific application (in his case, the pilot’s associate) would naturally lead to a more developed technology base. This tension between “trickle down” and “bubble up” remained evident throughout the program’s existence.
The May 1983 document, “Strategic Computing and Survivability,” was an attempt to reconcile the two in a way that would sell the Strategic Computing Program to Congress, but Cooper felt that it placed insufficient focus on specific military applications to appeal to the holders of the purse strings; Congress had made funding for the program contingent upon the inclusion of concrete objectives and milestones. The document was rewritten under an unwieldy new title and published in October 1983. Instead of presenting military applications as a potential payoff of wider computer research, the second document cast these applications as the stimulus to higher levels of technology — a clear move away from Kahn’s vision and toward Cooper’s, from push to pull.
It is important to be careful when drawing conclusions from documents such as these. When funding for big science projects is distributed by a government from a limited pot, those involved in each project must sell their wares to those making the decisions; the larger the price tag, the more selling goes on. Furthermore, there were other big projects around at the time that promised both revolutionary breakthroughs and increased security for the United States; perhaps most famously, the Strategic Defense Initiative (or “Star Wars”) that would create a dome of missile protection over the whole country. These projects all shared the Strategic Computing Program’s complexity and ambition, and all demanded a huge amount of investment from the government. When competing with these projects, the Strategic Computing Program’s top team had to ensure that the politicians to whom they were advocating would see the potential, and this understandably entailed a level of glossing over the cracks in order to promote the image of new technological marvels and a computing revolution.
In 1984, the complex agenda for the Strategic Computing Program was divided into 12 programs, and four of those related to AI. The influence of Cooper in the planning meant that each of these was directly linked to specific applications that could be presented to the Pentagon: machine vision would serve the autonomous land vehicle; natural language processing would support the battle management project; speech understanding would link to both battle management and the pilot’s associate; and expert systems would underpin each of these applications. The last of these was the level directly under the capstone in the pyramid; the development of a generic expert system to serve multiple applications would be the crowning glory on the road to AI. All of these would be supported by the other programs, particularly parallel processing, which would provide the infrastructure and power necessary to achieve the pinnacle of machine intelligence.
Refocusing and Fading
In 1985, a new director arrived at the Information Processing Techniques Office: Saul Amarel. He had significant doubts about the feasibility of the Strategic Computing Program’s quest for generic AI systems and believed that this goal had been too ambitious from the start. While knowledge-based systems could be developed for a particular environment, it was proving difficult to apply them outside their original context. For example, a vision system developed for autonomous navigation could not be translated into a successful vision system for autonomous manufacturing. Amarel decided to refocus the program toward the narrower goal of advancing U.S. military capabilities through the development of advanced computer systems and identified three ways in which this could be achieved. First, machines would help personnel to operate military systems under critical time constraints, sometimes by taking primary responsibility (the autonomous land vehicle) and sometimes by advising or assisting the human (the pilot’s associate). Second, computer systems such as the battle management program would support time-critical planning and command functions. Third, complex software systems and simulators would help to train personnel and to design and manufacture defense systems.
However, Amarel’s tenure coincided with a period far less politically conducive to the funding of big research programs. At the end of 1985, Congress ordered a reduction in the Pentagon’s funding for research and development projects, and DARPA lost $47.5 million from its budget. Shortly afterwards, Reagan signed the Gramm-Rudman-Hollings Act, which aimed to balance the federal budget over five years by mandating automatic, across-the-board spending cuts; half of the specified $11.7 billion reduction for FY1986 was to be shouldered by the Pentagon, and Reagan compounded the effect on the rest of the Department of Defense by exempting personnel expenditures and the Strategic Defense Initiative.
By the end of the decade, it was becoming clear that funding constraints and a lack of progress toward generic AI systems were running the Strategic Computing Program into the ground. The management wrote a new plan for a second phase of the program, which removed “machine intelligence” from its own block of the conceptual pyramid and subsumed it into “software.” This seemingly small shift in nomenclature in fact heralded a profound reconceptualization of AI, with a more sober appraisal of its progress. While the machines developed by the Strategic Computing Program excelled at the storage and retrieval of data, they could not come close to human talent at learning, judgment, and pattern recognition. Furthermore, throwing more computing power at the problem had not solved it, despite the hopes of researchers in the early 1980s, and generic systems remained out of reach. AI simply did not scale up from carefully defined environments to generic usage. The second-phase plan was never released, no annual reports or congressional testimony mentioned the Strategic Computing Program after 1988, and the program quietly faded, overshadowed and ultimately replaced by the incoming high-performance computing initiative. The AI goals of the original Strategic Computing Program had disappeared from the landscape entirely by 1990.
Small Successes, Big Lessons
When assessing the success of the Strategic Computing Program, it is important to bear in mind the sheer scale of its goals: massive advances in machine intelligence with breakthroughs across a wide range of technologies. One of the more interesting aspects of the Strategic Computing Program is not that it failed, but that DARPA tried at all. Although DARPA is known for its high-risk, high-reward philosophy, the Strategic Computing Program represented a project of unprecedented ambition in both scale and substance.
Even at the time, some recognized the program as overly ambitious, particularly in the timescales laid out. Not only did the enormous technical scope and the ten-year schedule for completion lead many observers to doubt that DARPA could make good on its plans, but the inclusion of detailed timelines also raised concern that the program’s success depended on breakthroughs arriving on schedule; a lapse in one early part of the plan could cascade through the pyramid of connected technologies.
The Strategic Computing Program’s record in achieving its AI goals can be characterized, at best, as mixed. While the pursuit of a generic expert system produced some valuable insights and led to some progress in narrowly customized expert systems, the program failed to produce the generic tool that would have truly opened up the possibility of AI. A similar fate befell machine vision — while significant progress was made in vision systems for specific applications, the hopes for a generic machine vision capability were dashed, and the algorithms produced simply could not interpret input images with the speed or accuracy of humans. There was far greater success with both natural language understanding and speech recognition, which benefited greatly from the increases in computing power coming out of parallel processing and from the input of linguistics experts. Each of these projects was transferred out of the laboratory and into both military and commercial applications, which chimes with at least one measure of success envisaged by the Strategic Computing Program’s creators: the spinoff of technologies from the military to the commercial sector.
What the program’s record shows is that smaller successes can come from grand projects even when their ultimate goals are not achieved. Although none of the promised generic AI systems materialized out of the Strategic Computing Program, progress was made in a number of smaller and more narrowly defined areas of research, and these advances have formed the basis of further progress since the program’s end. Grand projects may fail overall, but some areas of success always emerge. We do not make scientific progress by aiming low.
However, the ability to salvage some treasures from the wreckage does not mean the ship has not sunk, and these small advancements seem to be less of a success when we consider how much funding the program consumed over its lifetime and how far short it fell of the original goals. Machine intelligence has always been a concept that requires enormous ambition, particularly when looking at generic systems rather than narrow applications. But this should not mean that those with the responsibility of funding such projects should always opt for the optimistic and romantic view of pushing the scientific frontiers over the cold light of day, especially when billions of taxpayer dollars are at stake.
Writing in the Bulletin of the Atomic Scientists in 1984, Severo Ornstein and his co-authors aptly described the Strategic Computing Program as “a combination of ordinary naivete, unwarranted optimism, and a common if regrettable tendency to exaggerate in scientific proposals.” The Department of Defense should make sure to learn the right lessons from the program’s ungainly demise and not fall prey to any temptation to over-promise and under-deliver on grand AI projects.
Emma Salisbury is working on her PhD at Birkbeck College, University of London. Her research focuses on defense research and development in the United States and the military-industrial complex. She is also a senior staffer at the U.K. Parliament. The views expressed here are solely her own. You can find her on Twitter @salisbot.