✍️ Column by Natalie Klym, who has been leading digital technology innovation programs in academic and private institutions for 25 years, including at MIT, the Vector Institute, and the University of Toronto. Her insights have guided the strategic decisions of business leaders and policy makers around the world. She strives for innovation that is open, creative, and responsible.
This is part 3 of Natalie’s Permission to Be Uncertain series.
The interviews in this series explore how today’s AI practitioners, entrepreneurs, policy makers, and industry leaders are thinking about the ethical implications of their work, as individuals and as professionals. My goal is to reveal the paradoxes, contradictions, ironies, and uncertainties in the ethics and responsibility debates in the growing field of AI.
I believe that validating the lack of clarity and coherence may, at this stage, be more valuable than prescribing solutions rife with contradictions and blind spots. This initiative instead grants permission to be uncertain if not confused, and provides a forum for open and honest discussion that can help inform tech policy, research agendas, academic curricula, business strategy, and citizen action.
Interview with Domhnaill Hernon, Global Lead of Cognitive Human Enterprise at EY and former Head of Experiments in Arts and Technology (E.A.T.) at Nokia Bell Labs
Marshall McLuhan believed that artists were the best probes into the future of technology because they lived on the frontiers. They were the most likely to take technology in directions beyond the intentions of the scientists and engineers. But according to Domhnaill Hernon, artists don’t just think outside the box in terms of features and applications; their most important contribution to tech innovation is the ability to create a much-needed human-centric vision of the future.
Domhnaill, you just ended a 5-year term leading Bell Labs’ Experiments in Arts and Technology program, one of a handful of corporate programs in the U.S. that integrated the arts with R&D. And now you are creating a new initiative at EY (Ernst & Young) called the Cognitive Human Enterprise. Can you tell us more about the work you do with artists and how it brought you to EY?
I was asked to lead a new initiative at EY to show the potential of fusing art/creativity and technology to create the most cognitively diverse organizations possible. The role builds directly on what I had achieved at Bell Labs’ Experiments in Arts and Technology program and supports EY’s commitment to what they call Humans@Center.
My unique approach is to leverage the significant differences between the world of technology/business and art/creativity. This is a lofty goal, but I truly believe that the future of human-centered innovation lies at the intersection of art and technology.
Can you tell us more about the history of the E.A.T. program and how you ended up there?
Bell Labs had been bringing engineering science and the arts together since its inception in 1925, all the way up until about the early 1980s. One of the standout moments from that period was the creation, in the 1960s, of a global not-for-profit organization called E.A.T., which stands for Experiments in Art and Technology. It emerged out of a series of art-performance events called 9 Evenings: Theatre and Engineering held in 1966. These events comprised collaborations between Bell Labs’ engineers and several prominent artists of the time, including experimental music composer John Cage, the abstract expressionist painter Robert Rauschenberg, dancer and choreographer Yvonne Rainer, and many others. But from the 1980s until about 2016 the fusion of art and engineering was largely non-existent at Bell Labs.
What happened in the 80s? Why did the E.A.T. program end?
There were major changes in U.S. socioeconomic policy that changed how industry in general was regulated and how research was funded, along with a lot of other shifts, including how employees were treated. There was pressure on corporations investing in what many perceived as frivolous artistic activity–and R&D in general–to reduce funding to those programs.
And then what happened in 2016?
I moved in late 2015 from Bell Labs in Ireland to the HQ of Bell Labs research in New Jersey. Soon after I arrived, it was the 50th anniversary of E.A.T.’s founding. A number of us at the leadership level were invited to several celebratory events in New York City that were essentially engineer and artist meetups. Through those events we learned about the history of Bell Labs’ work with artists and I realized that we, as an institution, had forgotten about that part of our history. From that point onward, we learned more and more about the critical value of fusing art and engineering, and the immensely significant role that Bell Labs had played.
And at the same time, but separately, we were having internal conversations about what was missing from our research strategy and what we wanted from new talent and our organizational culture. So, these were the conversations we had during the day, and then in the evenings, we attended the artist meetups commemorating the E.A.T. program.
Every one of the interactions blew my mind. I realized that the artists had a completely different perspective on everything — from the intersection of technology with society to life in general. More specifically, the role that humans play in technology development was at the center of every answer they provided to my questions. This was impressive because, as an engineer, I had not been trained that way and I could not believe that I was so blind to this perspective. It struck me that we needed an organizational culture that emphasized a more human-focused approach to innovation.
So I decided to establish a new initiative, based on the original E.A.T. program I had just learned about, but focused on modern day needs.
So, how was the new program different from the original one?
The original program was a not-for-profit entity, separate from Bell Labs. It grew organically out of the interpersonal connections between the artists and engineers, whereas the new initiative was a sanctioned, funded, internal initiative that evolved to become its own research lab within Bell Labs. It was designed more purposefully, based on what we learned from the original program.
The context was also very different. Back in the 60s and 70s digital technology was very new. Artists had a lot of creative ideas but didn’t have the technological know-how to manifest them. They leveraged the expertise of Bell Labs’ engineers and scientists to make the technology required to enable their creative ideas. That is not the case today, as many multimedia artists are technologically gifted.
There’s also just that much more user-friendly consumer technology today, but it wasn’t always the case. Dan Richter, who played the ape, Moonwatcher, in the opening scene of Stanley Kubrick’s iconic AI film, 2001: A Space Odyssey, was a guest on the seminar series I run at the University of Toronto’s BMO Lab, which focuses on the relationship between AI and art. He talked about how much new technology had to be created to enable Kubrick’s vision. That was back in the early 1960s.
Yes, in fact, Arthur C. Clarke (the author of the novel) spent a lot of time at Bell Labs. There was a major relationship between Bell Labs and the production of that film. They developed futuristic props such as the video phone, and Max Mathews, who’s considered the godfather of computer music, inspired some of the music in 2001, such as HAL singing “Daisy Bell” towards the end of the film.
How does the E.A.T. program benefit the engineers?
Today, the E.A.T. artists are more technologically savvy and the program is designed to be more mutually beneficial. When we pair artists with engineers and scientists, the artists, as before, get access to tech they wouldn’t otherwise have, but in the new program they infuse their human focus deep into the R&D community. In other words, the artists are there to enlighten the engineers. And by that I mean, STEM practitioners are very well trained in the scientific method, but that blinds us to other ways of thinking and problem solving. We wanted the artists to infuse R&D with an ethos of humanizing technology. We wanted the engineers and scientists to always have in their minds the human aspect of technology; to question how a technology might do good or harm to society, and how they might design out the ability to do harm in the earliest stages of a research project. We also wanted to expose our R&D community to new forms of creativity.
In my previous interview with MIT’s David Clark, one of the early Internet pioneers, he emphasized how difficult it is to predict the outcomes of technology–to “design out” those possibilities, as you put it.
I’m not saying it’s easy. It’s very difficult, but at the very least, technologists, engineers, scientists, and researchers should be asking those questions; they should be aware of that human element. That awareness isn’t part of how engineers are educated or expected by their employers to create value in the marketplace. Whereas artists have an inherent way of keeping the human in mind, first and foremost. I wanted to integrate their way of thinking into our R&D community in a deeply purposeful way. That way, we could drive real cultural change around this foundational concept of humanizing technology. I see this type of holistic approach as the driver of human-centered innovation.
That might be one of the great untapped potentials of fusing art and technology–the ability to sense and create a human-centric vision for the future.
Your point about creating value in the marketplace is interesting, and it makes me question whether it’s the technology and technologists that need to be “humanized” or business and the executives managing firms. In other words, is tech the problem, or is it the tech industry?
I don’t think the problems in society can be blamed on technology directly. Technology is just a tool that is designed and used by humans in various ways. I also don’t think it’s fair to say it’s purely a business problem either. I think every aspect of the chain needs redefining and all elements of the chain need to work in tandem. Much of my work is about getting to the core of where the tensions reside. Fundamentally, it’s about adding a human element to the design of technology, to how businesses leverage the skills of engineers to create value, and to how businesses push technology out into the market.
We’ve evolved to a point where we largely rely on markets, and we develop technology to survive and thrive as humans. A core part of the human condition is that we’re going to keep building systems, paradigms, and tools–made by humans, for humans–and they will have an impact on society.
My main issue today when I look across the chain is with the education and training aspect of science and engineering. When you’re studying or working in technology, all problems are technology problems and all solutions are technology solutions. It was a really eye-opening experience to work with artists. It made me realize the trap I had fallen into, and that I had been blind to the other lenses through which you can view the world and solve problems. There’s a lack of connection to the humanities, a gap. However, making that connection in an impactful way is not easy. The E.A.T. program, as I said earlier, was about making purposeful connections and really bringing the best of both worlds together.
You’re reminding me of my experience working with a research group at the MIT Media Lab that made the integration of art one of its goals. But it was a vaguely defined objective, and the project leaders, neither of whom were artists, didn’t know exactly what it meant or how to do it, and were very open to suggestions. I appreciated their honesty, because it isn’t easy, as you say, to make the connection in an impactful way.
There is a real lack of understanding of how to bring these different ways of thinking together. It’s very hard work. I still see a lot of efforts that are quite superficial, what I would call a “check the box” exercise. And I see a lot of efforts that are random–an artist is randomly selected and paired with a randomly selected engineer, and they are put together in some common space for a short period of time. In that model, if anything good were to come out of the interaction, it would be just a fluke. These initiatives need to be thought through purposefully and strategically and executed with precision within the bounds of what you have control over.
I also encountered an attitude from some of the engineers I worked with over the years that art, or any of the social sciences for that matter, was somehow inferior or insubstantial. The word “fluff” was used on many occasions to describe these disciplines.
Yes, and when I started the Bell Labs program I had to think through all the ways in which the program could be killed, given that kind of attitude.
But even when there’s a lot of goodwill, and good intention, there’s still not a lot of good execution. I found there were two main approaches to art and tech fusion. One was extremely transactional. A company would bring in an artist for a couple of weeks and say, here’s our new product, do something cool with it. But then that was it. The impact was short term and superficial, driven primarily by communications and branding goals.
Then there was the completely ad hoc approach where someone in the organization would say, oh, we need to bring artists in, and they would randomly select an artist and likewise a random group of employees who would engage with the artist. They would put them in a space together and think something would just emerge and that the organization would suddenly become more creative.
By contrast, I designed the modern E.A.T. program more strategically and more purposefully, with ways to measure impact. And again, I designed around the modes of failure I was aware of and I applied the concept of a pre-mortem to my analysis and design of the new initiative. I spent a lot of time getting to know artists, their personalities, their openness to collaboration and their technological capabilities. And the same thing for the scientists and engineers, so I could make the right match. I also had to factor in deliverables and schedules of those engineers and the perception of their management chain so that we covered all dimensions of success as much as possible.
I had to think it through as much as I would an actual technical product that would go to market. Also very important is the fact that I viewed it as a major cultural change initiative, and such initiatives are known to have a high failure rate.
What were some of the early proof of value experiences you had, and how did they evolve over time?
In some cases, the proof of value was an exceptionally insightful conversation that completely changed our perspective on technology and informed a new research direction.
From there we developed whole new classes of technology–not just out of the conversations with artists but also out of the collaborations where artists were using our technology in very different ways.
Technology is often used differently than how its inventors intended. In cases where the technology in question is a creative tool, you get some amazing stories. The electric guitar, for example, was a technical solution to the very practical problem of amplification, but Jimi Hendrix and other musicians created a whole new sound. Stevie Wonder did the same thing with the synthesizer, turning technical and gimmicky sounds into a whole new artistic practice. What were some of the artist-driven consequences you saw at Bell Labs?
One of the earliest examples was in the area of wearables. We had asked, what’s the next communication device after the smartphone? This was around 2016. We were looking 10 years out. You had to assume the smartphone didn’t exist anymore. We started from a technological research perspective that led to ideas of disaggregating smartphone functionality so that we could communicate, control and sense the world around us in new ways. Our earliest designs and prototypes were very utilitarian and clearly designed with technology at the center. Then we brought in artists and approached the question from completely new angles.
One of the first artists we collaborated with in the modern E.A.T. era was Jeff Thompson. He pointed out to us that, even at that time in 2016, we were all spending an order of magnitude more time on our smartphones than we were with the people we most loved in the world! This was an eye-opening observation that helped us completely rethink the design and development of these new wearable concepts to be more human centric. We designed a wearable for your arm and one for your head–the Sleeve and the Eyebud–that worked in combination in much more intuitive and non-intrusive ways and removed the need to keep looking at your smartphone. So our initial conversations focused on the problem from a technological perspective (solving the biggest tech challenges in creating wearables), but the solution we ended up with came directly out of our artistic collaborations. It showed you could sense and control the world around you in much more human centric ways, using the more natural forms of your body and leveraging the technology in a symbiotic way.
What about AI?
I think there are two main popular narratives surrounding Machine Learning (ML) and Artificial Intelligence (AI) at the moment, and both stem from different interpretations of the value of automating “mundane” tasks.
In one argument, people talk a lot about how AI can be used in industry to enhance efficiency/productivity through automation of the mundane, and the popular assumption is that this approach will lead to job losses. I think this is probably a reasonable assumption at the moment, since very few in industry or academia who are researching and developing these AI tools have provided a strong counterargument. It is clear that current business imperatives are based on cost savings and margin increases, and AI has the potential to benefit companies across all industries in that regard.
The second argument is that automation will free up people’s time and then they can be more creative, productive and strategic with that time and create more value. The difficulty with this argument is that people can’t just become more creative/productive/strategic–we need to develop tools that will help them on that journey.
So either way, we have a gap between the benefits that AI can provide and the narrative surrounding AI and its use in industry. We need to figure out a way to free up people’s time from the mundane tasks and help them be more creative and productive with that time to create more value. The value they create needs to be more than the savings created through the potential of job reductions.
I’m also very interested in counteracting the dystopian narratives around AI. These negative stories are typically based on a fundamental lack of understanding of the technology, and of its potential to enhance human creativity and potential.
For example, at Bell Labs we wanted to showcase instead the potential for AI to enhance human creativity. One project, “We Speak Music,” features the beatboxer Reeps One. We trained ML algorithms on his voice as he was beatboxing, to the point where they started generating sounds and techniques that he had never created in his life, yet the AI voice still kind of sounded like him.
Prior to this experiment, Reeps One felt that he had pushed the capability of his voice to the absolute limit. He didn’t think there was anything else he could do to augment his voice and had started branching into other areas of art to satisfy his creativity and curiosity. But through this experiment we gave him what we called a “second self”, an AI digital beatboxing twin, for him to collaborate with, and according to him this enabled him to “level up” his voice. He is now creating new sounds and techniques and composing and performing in new ways.
Think about that–we took one of the best beatboxers that ever lived and one of the most creative people I’ve had the pleasure of working with and we helped him be more creative by creating an AI digital twin of/for him to collaborate with. Can you imagine the potential for AI to enhance the creativity of all people if it was developed right and for everyone?
The seminar that I run at U of T’s BMO Lab questions the role of technology in the creative process, and there’s definitely a tension around the question of how much tech is too much. At what point is it no longer human creativity? Is that a good thing or a bad thing? Has anyone ever viewed the idea of an AI-based digital twin as a dystopian narrative?
Never. I’ve never heard anyone even question the experiment from that perspective. What was important to me was developing AI in a way that involved actual humans, that took embodied cognition or embodied intelligence into consideration, and where the technology was in service to our humanity and not viewed as a replacement.
I believe the reason this question didn’t arise out of our work is because we collaborated with this intent from day 1. The whole point of the collaboration was to dispel this sentiment.
What’s interesting to me about the work of Reeps One is that it’s not about automatically producing a piece of music “in the style of,” like a deep fake, it’s about an actual collaboration between a human and machine. Can you say a little more about embodied intelligence?
I think there is a lot of work to be done to dispel some of the myths and assumptions in AI today. For example, this notion that AI will supersede human intelligence is nonsensical to me because the way AI is developed today is based on a flawed understanding of human intelligence.
The neural net (machine learning and deep learning) architectures of today are based on the mathematical models that AI pioneers created 30-40 years ago to reflect how they thought the human brain worked. We now know, based on neuroscience, that the model is more of a metaphor than a description of how the brain actually works; however, the model is simple and pervasive and won’t change anytime soon.
The problem with the neural net model of human intelligence is the aspect of disembodiment — the absence of a human body. The human brain on its own has no intelligence, cognition, creativity or consciousness. It has to be connected to the human body. The brain is a computational pattern recognition engine that requires sensory inputs from your physical body. Without the body, the brain is nothing.
But because of the flawed foundations of AI, we have this equally flawed idea of intelligence that encourages us to imagine we can replicate or supersede human intelligence, which has all sorts of practical and ethical implications.
I have no doubt that we are creating a new type of intelligence, which may be able to do things humans can’t, but it’s not going to be more intelligent in the way humans are intelligent. It’s going to be different.
What do you hope to achieve in your new role at EY?
One of the powerful lessons I learn every day working with artists is to remember that we are human, to remember what is special about humanity, and to keep that front and center when developing technology. This is something that I am very excited about with my new role at EY.
EY have invested in diverse communities for decades. For example, they set up more than 10 global neurodiverse centers of excellence and hired hundreds of people from that community, showing the world the immense value that people with different experiences and skills can bring to any organization.
I co-founded a new initiative called the Cognitive Human Enterprise. The objective is to solve global-scale human and business challenges by investing in massively interdisciplinary collaboration and full-spectrum diversity to create the most cognitively diverse organizations possible. One aspect of accelerating towards that cognitive diversity is to leverage the benefits of fusing art and technology. Given EY’s commitment to Humans@Center I am excited to see how far we can take this and deliver on human-centered innovation.