✍️ Column by Natalie Klym, who has been leading digital technology innovation programs in academic and private institutions for 25 years, including at MIT, the Vector Institute, and the University of Toronto. Her insights have guided the strategic decisions of business leaders and policy makers around the world. She strives for innovation that is open, creative, and responsible.
This is part 2 of Natalie’s Permission to Be Uncertain series.
The interviews in this series explore how today’s AI practitioners, entrepreneurs, policy makers, and industry leaders are thinking about the ethical implications of their work, as individuals and as professionals. My goal is to reveal the paradoxes, contradictions, ironies, and uncertainties in the ethics and responsibility debates in the growing field of AI.
I believe that validating the lack of clarity and coherence may, at this stage, be more valuable than prescribing solutions rife with contradictions and blind spots. This initiative instead grants permission to be uncertain if not confused, and provides a forum for open and honest discussion that can help inform tech policy, research agendas, academic curricula, business strategy, and citizen action.
Interview with David Clark, Senior Research Scientist, MIT Computer Science & Artificial Intelligence Lab
Artificial intelligence has emerged from its most recent winter. Many technical researchers are now facing a moral dilemma as they watch their work find its way out of the lab and into our lives in ways they had not intended or imagined and, more importantly, in ways they find objectionable.
The atomic bomb is a classic example that many commentators on contemporary technologies refer to when discussing ethics and responsibility. But a more recent and relevant example that I would like to draw lessons from is the Internet–a foundational technology that has reached maturity and is fully embedded in society.
My focus is not on the specific social issues per se, e.g., net neutrality or universal access; rather, my goal is to provide a glimpse into some of the dynamics associated with the Internet’s transition from lab to market as experienced by one prominent member of the research community, Dr. David Clark, Senior Research Scientist at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL).
Clark has been involved with the development of the Internet since the 1970s. He served as Chief Protocol Architect and chaired the Internet Activities Board throughout most of the 1980s, and more recently worked on several NSF-sponsored projects on next-generation Internet architecture. In his 2018 book, Designing an Internet, Clark looks at how multiple technical, economic, and social requirements shaped and continue to shape the character of the Internet.
In discussing his lifelong work, Clark makes an arresting statement: “The technologists are not in control of the future of technology.” In this interview, I explore the significance of those words and how they can inform today’s discussions on AI ethics and responsibility.
You describe your observation that “the technologists are not in control” as a revelation that came to you during the commercialization phase of the Internet. Can you describe this moment and why it was revelatory?
The goals of the research community in the 1970s and 1980s were purely technical. In the 70s, we were just trying to get the protocols to work. In the 1980s the challenge was scale. We went from a goal of connecting about 100,000 institutional computers to millions of personal computers, and now of course, we’re looking at billions of connected devices including cell phones and sensors of all kinds.
Commercialization of the Internet began in the 1990s. During this period, it went through a rapid transition from being an infrastructure run by the US government to a service provided by the private sector.
It was an interesting as well as surprising time, since we didn’t know what a commercial Internet would look like. We had never thought about it that way. As the source of investment changed, so did the drivers and the goals of the research, which were now being led by industry. All of a sudden, a new set of factors emerged, things like profit-seeking and competition.
The example that first brought this home to me concerned QoS, or “quality of service” controls, which enable the prioritization of packets. Our goal in designing these controls was to make time-sensitive applications like real-time voice and games work better, and the controls did that. We initially saw that as a technical enhancement. However, it’s not difficult to understand that in a commercial context packet prioritization has everything to do with industry competition and therefore money. In the early days of online phone and video services, it was difficult for providers of Voice over IP and IPTV to compete with the telcos’ and cablecos’ proprietary services, because the quality of transmission over the public Internet was still relatively poor at the time and there were no QoS capabilities. So, if you were an ISP offering traditional phone and TV services, why would you build capabilities into your IP network that would offer a means for these new entrants to compete with you? As one ISP executive said to me, “why should I spend money on QoS so that Bill Gates can make money selling Internet telephony?”
The point is that what I had considered packet-routing protocols for decades were in effect money-routing protocols. This was pointed out to me by an economist who said: “the Internet is about routing money; routing packets is a side effect, and you screwed up the money-routing protocols.” In my defense, I replied, “I didn’t design any money-routing protocols,” and his response was, “that’s what I said.” We were joking, but the point was real.
That the technical design of the Internet has implications for industry dynamics is obvious to economists and business people, and it’s obvious to me now, but until we, as technologists, were compelled to solve industry problems, none of this was obvious. We did not understand at the time that we were now engineering both a technology and an industry structure that determined who had economic power.
So, you’re saying that industry, or the private sector, is driving the future?
Societal concerns have become more important in the last decade. We now have a fundamental tussle between the objectives of the private sector–concerned with things like commercialization and profitability–and those of the public sector. I worked with kc claffy, who is the Director of the Center for Applied Internet Data Analysis at the University of California, San Diego, on research that explored the societal aspirations for the future of the Internet. We collected statements from a variety of stakeholders, including governments and public interest groups, and catalogued them into a list–things like reach, ubiquity, and trustworthiness. These social aspirations can be in direct conflict with private sector goals. And it’s this tension that shapes the future of the Internet.
You changed your focus to the social implications of the Internet and policy matters in the 2000s. Would it be accurate to say this was the result of realizing you were now also engineering a social structure?
Yes and no. There’s a difference between the Internet itself—the packet carriage system—and the applications that run on top of it, like the Web or Facebook, for example. These days, when people talk about the Internet, they are often talking about the latter.
So, yes, I have been more focused on the social implications of the Internet in the last 2 decades, but in terms of engineering a social structure, this stems from the application space, as opposed to the network itself.
And within the application space, it’s the ad-driven business model that I most have issues with. This model creates all kinds of incentives that have negative consequences for society. Facebook, for example, is designed to be addictive. They want to keep you on their site so they can show you lots of ads, so they manipulate the experience to make it “sticky,” and influence your personal behavior with all kinds of tactics. It’s an incredibly distorted space.
The 2020 documentary film, The Social Dilemma, and Shoshana Zuboff’s 2019 book The Age of Surveillance Capitalism explore this distortion. And a few years ago, in 2014, Ethan Zuckerman (formerly at the MIT Media Lab and now at the University of Massachusetts Amherst) wrote a public apology for designing the pop-up ad back in 1997, declaring advertising in general “the Internet’s original sin.” Do you feel personally responsible for the things you think are bad about today’s Internet?
We designed the Internet (the packet carriage system) for generality; that was its strength. I don’t think there’s any way I could have built an Internet that would allow for generality and at the same time preclude bad behavior at the application layer. Maybe there was a fork in the road where someone could have pushed things in a different direction—but not at the packet level.
What are your thoughts on the moral dilemmas facing AI researchers today?
I would say AI is probably more like the packet carriage layer in that it’s a basic technology, with applications that can take many forms. It’s difficult, if not impossible, to preclude bad behavior or allow only good behavior, and it’s not easy to define what “good” vs “bad” behavior is in the first place. Stephen Wolff, who ran NSFNET in the 80s, said back then that every behavior we see in the real world is going to manifest in cyberspace, including behaviors that we find unwelcome and offensive.
The sociologist and Internet historian, Manuel Castells, has said that the Internet is the mirror of society. It is neither good nor bad, nor is it neutral, his point being that its uses are socially determined.
I agree; the Internet evolved to defy both the original utopian and dystopian visions. The more general question regarding the moral responsibility of scientists has been debated over and over again. My view is that technology can be used in so many different ways, and rarely do we understand the future implications of what we have made, even if we have a clear sense of intended uses. A creative person can and will come along and use it in ways we never imagined.
The history of technology is full of stories of unintended consequences, whether good, bad, or simply frivolous. GPS is an interesting case. The early research papers stressed military applications of GPS because the researchers wanted the military to fund its development. Those who understood some of the broader societal benefits were afraid that if they put too much emphasis on these, the military would not pay for it. So, GPS emerged as a military technology. But today, we all have access to maps and directions, and my children have not had the experience of being lost. That is a good thing. The negative consequences include things like neighborhood traffic congestion that results from traffic apps sending cars down residential streets, and global tracking of everyone’s location. These outcomes are not at the level of something like autonomous weapons, even though they can lead to fatal accidents, but they are negative consequences that no one could have foreseen at the time.
I think we should teach scientists to think through the social consequences of what they’re doing, but I don’t know if we can put a burden on researchers that says they have some obligation or responsibility to predict all the consequences and then try to embed mechanisms to prevent harm. I just don’t think the world works that way.
The cryptographer Phillip Rogaway makes a strong case for computer scientists taking responsibility for their work, arguing that their work is political in nature.
I’m very sympathetic to what Rogaway says, but it’s one thing to conclude that encryption is going to shift power balances, and another to then design the technology to preclude bad behaviors by its users.
There should definitely be a sense of awareness during the more abstract exploration phase, but it’s not until you get closer to a specific application that you need to think through the ethical implications. You’re going to have to rectify problems as they emerge in each context, on a case-by-case basis.
Carly Kind from the Ada Lovelace Institute in the UK refers to an emerging “third wave” of ethical AI that addresses specific use cases framed as social problems, as opposed to philosophical concepts (the first wave) or narrowly defined technical issues focused on algorithmic bias (the second wave). As we enter this third wave, it’s clear that we need many voices at the table, but understanding each other and integrating multiple perspectives isn’t always easy or straightforward. How have you addressed this challenge?
When I began to realize that the Internet was no longer a purely technical problem, I stopped running a purely technical research group at MIT. I started by hiring an economist, and have also hired political scientists and collaborated with philosophers like Helen Nissenbaum. The last project you and I worked on, regarding convergence at the application layer, integrated ideas from media studies and other social sciences. Taking a multidisciplinary or interdisciplinary approach is key to understanding and shaping innovation in a way that benefits society.
I actually first started thinking in multidisciplinary terms much earlier, in the late 1980s. I got involved with the Computer Science and Telecommunications Board at the National Academies, which is an organization chartered by the US government to advise them in areas where a multi-stakeholder assessment of a problem is needed. I chaired an early study on computer security that was published in 1991. And I got a lot out of it, but most significantly, my experience with the National Academies taught me the importance of having conversations with people who were not like me—economists, social scientists, artists, lawyers, regulators. I chaired the board for 8 years, learning what happens when you get people with very different points of view and stakeholder biases together to produce something coherent. And so, as I moved forward with my technical research, I carried with me experiences and expertise that most pure technologists did not have.
This third wave of ethical AI is intersecting with the long-awaited regulation of big tech. What are your thoughts on regulation?
In terms of AI, the government has awakened to social implications a lot earlier in the lifecycle of the technology compared to the Internet. With the Internet, it took about 15 years after commercialization for the government to really wake up, so sometime around 2010.
There is always tremendous resistance from the private sector around regulation because they need to compete, which can mean doing things that harm society. When it comes to matters of public interest, as opposed to anti-competitive monopolistic practices, if one actor tries to be “good,” they will lose. But if you impose regulation on everyone, it levels the playing field. The financial services sector is an example. It is heavily regulated—not to address monopolistic practices, but for public interest reasons. It adds to costs, and can stifle innovation, but it affects all players equally.
It can take a while for governments to figure out how to be effective. They may, for example, impose regulation on the wrong players for the wrong reasons. In the case of the Internet, we’ve seen governments impose regulation on the ISPs regarding objectionable content, like child pornography or terrorist activities, rather than the application providers. The rationale is that it’s easier for the application providers to escape regulation by relocating operations to foreign countries. So, the ISPs are an easier target. But they are not necessarily the right target, or an effective one.
You have referred to an “abstract” exploration phase of research and “basic” vs applied technology in a way that suggests more neutral phases in the overall research and innovation process. But do you think that the relationship between basic and applied research, between academic and industry research, and the path from discovery to invention to innovation in general have changed over the years? How so? Are universities doing less basic, curiosity-driven research as collaborative innovation increases?
There has been a large growth in computer science (CS) research, and the balance has certainly shifted toward more applied research—closer to commercialization. Our government is pushing investment to drive innovation, make our country more competitive, and so on. And innovation is not the only driver of this shift. As our field matures, some of the basic questions get answered. It is important to remember that in the late 1960s and early 1970s, when the early concepts behind the Internet were emerging, this was totally a venture into the unknown. But there are still folks who look further into the future. I am actually not sure I buy the distinction between basic and applied research. Those may not be the right divisions. Some research is more speculative, more driven by curiosity—a sense of exploration. If you are looking for a venture today that is going into the great unknown, crypto-currency comes to mind. Speculate on good and bad consequences of that idea, and how it will (or may) be shaped by various forces.
Is there anything you would have done differently when designing the Internet?
We designed the Internet for generality, that was its strength. The whole idea was that you could build anything on top of that platform. Of course, I hoped that smart people would come in and build really cool, useful things. I also expected that people would come in and build frivolous things, which is fine, and I always knew that people and organizations would eventually do wicked things on the Internet.
I love the writings of Terry Pratchett. He writes social satire cast as science fiction. His view of the world is that life is about performing a series of experiments that reveal how people really are. I see the Internet as such an experiment, and what we’ve discovered is that much of the world is evil, but I guess we knew that already.