The Heroic University Defeats the NSA!
“Modern cryptography has, since its earliest days, been associated with governments. Amateurs there were, like Edgar Allan Poe, who dabbled in the art, and it has held a certain public fascination from the earliest days. But the discipline requires resources, and only governments could marshal the resources necessary to do the job seriously. By the end of World War II, American cryptology had become inextricably intertwined with the Army and Navy's codebreaking efforts at Arlington Hall and Nebraska Avenue. But this picture would begin changing soon after the war.”
The NSA, from a document released under a 2009 FOIA request (p. 231).
As a central institution in a democratic capitalist system, the public research university, many argue, is a bulwark that protects the public from government corruption, overreach, and malfeasance. By taking on the work of training intellectuals, the university, rather than the state or a political party, trained the functionaries who would fill the democratic bureaucracy (trained, therefore, in the standards of a disciplined professional body). During the late 19th century, the professional academics who created the modern university claimed research as its major function at a time when governments, the churches and other corporate entities found their resources and specialties inadequate to the demands of a restructuring world. The university claims to democratize knowledge production and, through a free-market capitalist economy, to democratize access to the benefits of that knowledge. Finally, the university provides a home, beyond the aegis of government agencies, wherein experts can engage in principled and public debate about how new theoretical and practical knowledge should or should not be implemented and, further, what alternatives to policy might look like. From the Wisconsin Idea’s attempts to write legislation to modern climate scientists discussing what a sane energy policy would look like, the research university was to be the place where the best and brightest would sharpen their thought and benefit their community, state and country. From a professional, middle-class viewpoint it is an essential--and desperately necessary--institution by which to guarantee the “American Way of Life.”
In what follows, I will use an episode from the development of Computer Science at Stanford and MIT to show that, even when the above ideals of the Research University are operative, it still fails as an institution “of the people,” while doing an admirable job of securing the rights of private property and rights holders. The NSA is, rightly, infamous for its attempts to monitor every facet of life throughout the globe. At the dawn of the computer age, the Department of Defense contracted much of its research and development needs to major research universities (particularly Stanford, MIT, and Carnegie Mellon). Very soon after, as private corporations were given the green light to convert this publicly funded research into private profit, the security of data became an important issue. Where the NSA sought to establish its right to burrow into every computer system, some faculty stood against it, arguing that data and privacy should not simply be ceded to this agency. While privacy was a major concern, even more important was the ability of the corporations these men also worked for to sell products both at home and overseas. I will argue, then, that at its best the university works not in the interest of community empowerment and liberation from arbitrary authority, but rather as the “democratic” guarantor of private property. Offering the illusion of equal opportunity, it is, in the final analysis, a central institution supporting neoliberal capitalist expansion.
The first university-based computer science programs began appearing in the 1960s, though experimental work in Mathematics, Physics, and Engineering departments preceded and laid the groundwork for these programs. Stanford, among the first to found a Computer Science department, has been an elite center for nearly every phase of computing power. In this, funding from the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF) - not to mention the Office of Naval Research - was instrumental in supplying the necessary funds for faculty, grad students, and equipment. In this way, the research capabilities of universities could begin to equal those of Bell Labs, IBM and other industry giants. Many of the researchers had an inkling that what they were doing was having terrible ramifications for those targeted by the US military, but believed that by doing university-based research, and involving themselves in anti-war groups and other such organizations (or going to a protest so as, recalls Stanford Computer professor Don Knuth, “to be counted in the statistics of people saying so-and-so many people protested today” p. 12, pdf), they nonetheless provided a valuable check on the war machine. (The Babbage Institute’s website has a tremendous collection of their oral histories.)
DARPA funding was the most important, at least until the late 1970s, for the sheer bulk of funds, though it was largely directed at its “centers of excellence” - Stanford, MIT and Carnegie Mellon University. As W. Richards Adrion, who held positions at the NSF, the National Bureau of Standards, and several universities, recalls, this allowed universities to engage in “research that was competitive with IBM and Xerox and DEC [Digital Equipment Corporation]” (p. 16, pdf). The NSF made much smaller grants and tried, as much as possible, to develop a national infrastructure of computing ability. This was to diversify employment opportunities as well as to create markets for computer technology. The NSF also funded more theoretical science, such as cryptography. Until then, modern cryptography had been dominated by corporate labs, such as Bell Labs and IBM (oil companies and banks were very interested in these products), and the government, which classified nearly everything pertaining to cryptography until the 1960s (Whitfield Diffie, in comments on a FOIA NSA document, claims that the “NSA succeeded in virtually eliminating all other cryptographic work in the US” (p. 232)).
By the late 1960s, however, cryptography was a small but growing concern for computer hardware developers, banks, the national government and academic researchers. Among them was Martin Hellman, a Stanford grad who, upon earning his Ph.D., worked at IBM, where he was exposed to cryptography. At IBM, he had seen the potential for commercial applications of cryptography in computer systems and networks, though most of that information was classified by the NSA. While he had initially avoided an academic position (so as to avoid poverty), he saw that if he were the first to publish data and theorems, he would be able to get the credit and the patents--and therefore make money. Thus he accepted a professorship at MIT and, soon after, a professorship at Stanford.
Upon publishing his first paper on cryptography--a technical report--in 1973, Hellman was approached by NSA recruiters. He declined, recalling that, “I had had normal security clearance but I knew crypto-clearances were much worse in terms of the strictures. With the normal security clearance that I had for my consulting for example, if I did work in a related area but did not build on anything classified, I didn’t need to get permission to publish it. On the other hand, in cryptography, my understanding was, and the NSA people who approached me agreed, that once I had a crypto-clearance I would have to get permission to publish anything I’d worked on” (p. 15, pdf).
In 1974, the National Bureau of Standards (NBS), working with the NSA, published in the Federal Register a request for proposals for a data encryption standard for government and commercial machines. IBM, because it was so far in advance of everyone else, easily won the project with its Data Encryption Standard (DES). Prior to this, Hellman and Whitfield Diffie, each without knowledge of the other, had argued with IBM developers that the system was not actually secure. IBM was not pleased, however, and ignored their pleas. Following the acceptance of IBM’s DES, there was a brief window for academic comment. Given the role of the NSA in determining the standards at the NBS, and the problems they believed lay in the system, Hellman and Diffie, now working together, presented two major concerns: key size and the lack of openness about the design theory of the encryption (they believed the NSA had built a trap door into the encryption process or, if not that, had mandated such a possibility). The key size was a compromise with the NSA--it wanted 48 bits; IBM had wanted 64; they agreed on 56 (p. 232). This would be strong enough to prevent all but the US government from breaking the code, but not too strong for the NSA’s own capabilities. Hellman describes the NSA reasoning quite well in an extended paragraph:
“The double bind NSA was in was they had to put out a standard that was very strong, because if it was broken not only would NBS, now NIST [National Institute of Standards and Technology], get a black eye, but NSA would get a black eye because they’d been advising NBS. They can’t put out something that’s easy to break. Plus all the commercial American data would be at risk. And yet the other side of the problem was it was pretty clear they didn’t want a standard they couldn’t break. Or they would be in danger with a standard they couldn’t break because it could be used by third world countries. We weren’t so much worried about the Soviets. The Soviets were very good in mathematics and were therefore presumably very good in cryptography. The general belief and consensus of the community was that we couldn’t break the Soviet codes and they couldn’t break ours. But third world countries were the real gold mines of information and often we could get information on Soviet intentions, or they on ours, by information going through some third world nation that was an ally. And this problem actually predated the data encryption standard. In the military you have the same problem. You want to give your soldier in the field a very secure cipher for use in getting orders, yet if the cipher is captured by the enemy, you don’t want the enemy to be able to use it. One idea that had started to develop, which led to public key cryptography, was the idea of a trap door cipher. A trap door cipher is one that appears very secure, in fact no one can break it except the designer and the designer can break it only because he’s built in trap doors. We came up with the idea of trap doors from the Hardy Boys or other mystery books I’d read as a kid” (p. 22) (italics mine).
So what was the problem with a trap-door system? With a trap door, the designer could easily break the system--making public use of it suspect. Or, if the system were captured, it could easily be broken. Hellman and Diffie, however, had developed an alternative--public-key encryption--that would eliminate this problem. By keeping half the key secret and making the other half public, security is optimized while the architecture and theory can be made publicly available.
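The core trick can be sketched with the Diffie-Hellman key exchange itself. The sketch below is a toy illustration only: the modulus, generator, and private keys are tiny hypothetical values of my choosing (real deployments use moduli of 2048 bits or more), but the arithmetic is the genuine mechanism by which two parties derive a shared secret from public information alone.

```python
# Toy Diffie-Hellman key exchange. Everything here uses deliberately
# tiny, hypothetical numbers for readability -- NOT secure parameters.
p = 23   # public prime modulus (toy value)
g = 5    # public generator (toy value)

# Each party keeps a private key secret...
alice_private = 6
bob_private = 15

# ...and publishes a public key; an eavesdropper sees only these.
alice_public = pow(g, alice_private, p)   # g^a mod p
bob_public = pow(g, bob_private, p)       # g^b mod p

# Both sides derive the same shared secret without ever transmitting it:
# (g^b)^a mod p == (g^a)^b mod p.
alice_secret = pow(bob_public, alice_private, p)
bob_secret = pow(alice_public, bob_private, p)
assert alice_secret == bob_secret
print(f"shared secret: {alice_secret}")
```

Recovering a private key from the public values requires solving a discrete logarithm, which is easy for a 23-element group but believed intractable at real key sizes; that asymmetry is what lets the "architecture and theory" be published openly.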
They presented, along with new collaborator Ralph Merkle, public-key encryption in “New Directions in Cryptography,” published in the Transactions on Information Theory of the Institute of Electrical and Electronics Engineers (IEEE) in 1976. “The reception,” Hellman tells his interviewer, “from the open community, you know the non-military community, was ecstatic. The reception from NSA was apoplectic… A fifty-six bit key size, which is what DES has, that’s roughly ten to the seventeenth keys, a hundred thousand million. We had a paper around the same time we were arguing that the key size was inadequate. And, if I put myself in the shoes that I had put myself in, which was the self appointed security officer for the public, a fifty-six bit key size was inadequate. The short version is we had estimated you could build an exhaustive search machine that could search all two to the fifty-six keys in a matter of a day at a cost of approximately ten thousand dollars” (p. 31f). This was well beyond the power of any commercial interest, but not beyond the power of the US Department of Defense (i.e., the NSA).
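Hellman’s back-of-the-envelope figures are easy to verify. A minimal sketch of the arithmetic, using only the key sizes given above (the dollar and one-day figures remain his estimates, not mine):

```python
# Back-of-the-envelope check of the key-size figures quoted above.
# A 56-bit key space holds 2**56 keys -- "roughly ten to the seventeenth."
keys_56 = 2 ** 56
print(f"2**56 = {keys_56:.2e} keys")      # on the order of 10**17

# Searching the whole space in one day implies this trial rate.
seconds_per_day = 24 * 60 * 60
rate = keys_56 / seconds_per_day
print(f"required search rate = {rate:.2e} keys/second")

# Each extra key bit doubles an exhaustive-search attacker's work, so the
# 48/56/64-bit compromise spans a factor of 2**16 in search cost.
print(f"64-bit vs 48-bit work factor: {2 ** (64 - 48):,}")
```

The agreed 56 bits thus sits exactly between the NSA’s and IBM’s preferences on a logarithmic scale: 256 times harder to search than 48 bits, and 256 times easier than 64.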
While any form of encryption was tough for the NSA to allow, what the three researchers were calling for was worse. “Before DES,” Hellman continued,
“and actually even after, most stuff was going in the clear, unencrypted. And when it is in computer readable form they could search, today, billions of words for a dollar, and even back then, millions of words for a dollar. All of a sudden DES is going to cost them ten thousand dollars per key. And, with public key cryptography, people using DES can change keys frequently and it is going to cost them ten thousand dollars per key. That’s a huge change. And when you look at the kind of vacuum cleaner intelligence operations that they were using, even a fifty-six bit key size was a major, major give on their part. And now you have public key cryptography coming into it where people can change keys as often as they want. Because prior to public key cryptography you’d need to send a courier or registered letter for a new key and people wouldn’t do that but once a year if that often. Once a month maybe if they’re very security conscious. Now you can do it every few minutes. So the two papers together I’m sure gave them apoplexy” (p. 32).
DES with a larger key size would make it extraordinarily difficult for intelligence operations to spy. The NSA threatened the researchers, saying that they were harming public security. However, in the wake of Nixon’s disastrous stay in office, concerns were different than they were in the post-9/11 world (Hellman’s interview took place in November, 2004). At the same time, many did not believe the NSA to be as manipulative as the CIA or FBI (though Hellman says that it was known to have spied on US anti-war protesters and had, of course, been front and center in Operation Shamrock)--regardless, this turn of events was worrisome. In the end, the researchers reasoned that, because there were far more computers in the US than anywhere else--at the time, they were largely housed in universities, banks, defense manufacturers and the like--US computer users had the most at stake in insecure commercial encryption, and it was on their behalf that the fight should be waged. Already we can see the university at its ideal - stocked with principled researchers who, concerned with profitability and the integrity of their products (there were very few personal computers at the time, so individual privacy had not yet become such a large issue), are willing to fight the state.
As the three researchers approached publication, an NSA operative, acting on internal NSA debates but without official permission, warned them that to publish would violate the International Traffic in Arms Regulations (ITAR) and that they must therefore cease publication. This legislation exists to prohibit US companies from exporting arms--and technical details about their production--abroad without a license. Cryptography, like nuclear secrets, fell under ITAR. Hellman decided he had better bring this to Stanford, because, “Stanford might be in violation and also because I was kind of afraid to defend myself if I were prosecuted… Stanford’s attorney, General Counsel John Schwartz was very supportive and I remember the conversation very well. He said it was his opinion after reviewing it that, if the ITAR was construed broadly enough to cover our work, it was unconstitutional. But the only way to determine that is in a court case. You can’t go get things predetermined” (p. 35). Hellman was told that the university would defend him but, if he were found guilty, it wouldn’t be the university that would go to jail or pay the fine.
In October of 1977, two of Hellman’s students, Steve Pohlig and Ralph Merkle, were set to give papers on these problems at the International Symposium on Information Theory (ISIT) at Cornell. Stanford, however, advised against it: the university was unclear on whether it could defend graduate students, since it was not sure whether graduate students were, as professors were, agents of the institution (Hellman was an agent because he brought contract money to the university). Further, as a tenured faculty member, Hellman’s career could withstand a long court case, but a graduate student’s would be destroyed by it. Pohlig and Merkle, however, still wanted to present their papers, regardless of the ramifications. As Hellman tells it, they did this because it was so clearly the correct thing to do:
“It’s interesting people have asked me and sometimes talked about how courageous I was to do this. And it’s one of those things where it’s not courage. You’re confronted with a situation where it’s so clearly right to do it and you find the courage in yourself. It’s just no question and the same with the students, their initial reaction was unquestioning, ‘No we’ll give the papers.’ Their families had other ideas however, and eventually they deferred to their families’ concerns. So, at the ISIT, I gave the talks, but had the students stand next to me and explained the situation to call attention to and credit their work even though they were not giving the talks” (p. 36).
Together they decided to wage a public and political battle. They had the support of the New York Times, Science and others, which was helpful. They also approached David Kahn, whose The Codebreakers: The Story of Secret Writing ranks among the most influential publications on the history of cryptography and who had, in 1976, helped spark a national debate about cryptography. He wrote an op-ed in the New York Times that passionately pleaded for the right to publish (concluding line: “For what shall it profit a man if he gain the whole world but lose his soul”). For the press at the time, the idea that anything could be classified from the moment the idea was conceived was anathema, so the researchers had plenty of press support. The professional ranks were, to a large extent, united against the government. The controversy raged, and the combined heat on the NSA forced the agency to recalibrate its justifications and defenses.
Admiral Inman, then director of the NSA, attempted to allay fears by setting up a committee, composed of government types and academics, to review what could be published. They came up with, according to an NSA FOIA document, three possible routes: do nothing and hope the controversy would die down (not feasible); pass stricter legislation to block any publication of cryptographic research; or seek voluntary academic and commercial “compliance” (p. 236). The document continues: “It was essential, then, to slow the rate of academic understanding of these techniques in order for the NSA to stay ahead of the game. (There was general recognition that academia could not be stopped, only slowed.)” After the legislative solution was attempted and failed, the third option was all that was left. Inman, while pursuing legislation, had also tried to make connections in the academic community. He travelled to Berkeley, the FOIA document continues, and “found himself in a room with antiestablishment faculty members, and ‘for an hour it was a dialogue of the deaf.’ Then the vice chancellor of the University of California, Michael Heyman, spoke up. Just suppose, he said, the admiral is telling the truth and that national security is being jeopardized. How would you address the issue? Instantly the atmosphere changed, and the two sides (Inman on one side, the entire faculty on the other) began a rational discussion of compromises” (p. 237). Following this success, Inman approached the NSF about collaborating. Academics, the NSF and the NSA then approached the American Council on Education (ACE) to play intermediary. The NSA became much more open about its understanding of the competing interests of commerce (the point, after all, was the ability of companies, working with university patents, to sell secure systems) and national security.
An ACE committee, which included Hellman, advocated prior review--a voluntary program whereby researchers would run their work by a board before publication. The NSA also began to fund some proposals in cryptographic research so as to play a part in the academic production of cryptographic knowledge (Hellman received the first grant). In the end, Hellman believes, the majority of papers since then have not been run by the NSA before publication. These experiences have no doubt informed Inman’s limited opposition to current and past NSA practices.
Around the spring of 1977, Stanford filed the Diffie-Hellman-Merkle (DHM) patent application for public-key encryption. The three of them, while quite successful in their later business endeavors, did not reap a large paycheck here: RSA Data Security (founded by MIT professors Rivest, Shamir and Adleman after a similar story of intimidation - which, it appears, had an effect!) and MIT, though publishing later than DHM, had beaten them to the patent and so cornered the market on public-key encryption. Stanford would, in the 1980s, license its patents to another competitor, Cylink, in order to sue RSA for inclusion; the companies merged but then fell apart (see p. 32 of the Bidzos pdf for more detail). (For an excellent, in-depth history of public-key cryptography up to 1988, see Whitfield Diffie’s “The First Ten Years of Public-Key Cryptography”.)
Hooray for Public Universities?
“Physicists lost their innocence and freedom from government controls with Los Alamos. For biologists that time came in 1976 with National Institutes of Health regulation of recombinant DNA experiments. Mathematicians have been free from restraint - until now. The National Security Agency (NSA) has asked for and received an agreement of prior review on articles concerning cryptography.”
Susan Landau, “Primes, Codes and the National Security Agency.”
In the story above, and many others, the university and its researchers play the hero against a government intent on a monopoly of information, secrecy, and power. Principled university researchers engage in the free exchange of ideas and brave potential government repression. Universities, by providing labs and legal services, also get to shine in this glory. From a capitalist point of view, this is a triumphant story where not only do the researchers win, they become fabulously wealthy to boot (through their companies or by consulting)! For those of us who are not capitalists, however, it is possible to see beyond this tale of academic valor and view this episode as a struggle between capitalists and the state for the right to export a commodity. This was not a battle fought for the right of individual Americans (in any of the Americas) or people on other continents to have the freedom to communicate away from prying eyes (how could it be? Terrorists and the mafia would use these means and then there would be no possible way to catch them!). While it was a battle over the ownership of knowledge, it was fought between universities, their researchers, and the NSA - not over open access and public ownership of these discoveries. The question of what public ownership might even mean was never posed. These discoveries, again, were paid for by tax money. Further, the close relationship between higher education, Silicon Valley and the NSA, though at times adversarial, has not ended. Just because Snowden’s leaks have not yet pointed in the direction of university involvement, we should at least be asking how much universities are involved. After all, they spy on anyone who uses a .edu email and, in the case of the University of California, have just appointed the former head of Homeland Security, Janet Napolitano, as UC President. At Johns Hopkins, academic deans can still demand that critique of the NSA be silenced (though public outcry brought the blog post back).
Further, the back door that these researchers fought against was, in the end, inserted anyway. Perhaps most damning of all, the battles were fought for the right to profit off of publicly funded research--not over the type of empire the US was constructing, the lengths to which it would go to secure its economic, military and political dominance throughout the world, and what that meant for the future of government surveillance. It was capitalist altruism--altruism with a healthy dose of self-interest.
I should stress here that I do not fault any of these researchers: none of them claim to be anti-capitalist; to expect them to have behaved as I would have wanted them to, thirty years later, is ludicrous. I want to stress that the movement to thwart the NSA was, at root, a movement by the professional class (academics and the media)--not by a class interested in ending US imperialism. Its tactics lay in the courts and the professional organs of popular opinion--not in the need to demand a new way of life. This, however, is precisely the point of the modern Research University--through research and technology, the needs of the past, present and future will find a salve.
Indeed, I bring these issues up because the university, and the battles it fights, are anchored in capitalist soil. For this reason, the university struggle cannot simply be about defending the idea of the older public university against neoliberal encroachment, but must be about disrupting the university as it has been and as it has become. This entails, from the start, elaborating a critical position regarding higher education. The public research university has itself become an owner of private property (land, patents, etc.) and a major employer. Not only is the public research university grounded in the soil of capitalism, but it receives its research orders from the government agencies (and now private corporations) that fund it. Democratic oversight consists of courts and legislation--not councils of faculty, students, computer designers, equipment users and others. The university, as the organ of a professional class that believes itself able to mediate all wrongs through research and the free exchange of ideas, provides the means for US domination of financial markets and military zones while withholding or hiding the only possibility of radical change: a working-class movement to end capital and empire. To imagine a world other than what currently exists requires two moves: A) struggle within the current institutions--strikes, building and grounds occupations, student-worker solidarity--to counter the attack on university workers, on people of color, and on the working class; and B) the development of several alternative forms of education grounded in the soil of the struggle against capitalism, empire, and structural oppression.
Mark Paschal is on the editorial board of Viewpoint Magazine, and has written for Reclamations, Classwaru.org, and Fuse Magazine. He is also a member of URGE (University Research Group Experiment). Currently he is on leave of absence from the University of California, Santa Cruz, and bartends in Madison, WI.