w/k - Zwischen Wissenschaft & Kunst

Esmeralda Conde Ruiz: sensitive ears and insensitive infrastructures. Part I

Interview by Michael Klipphahn-Karge | Section: Interviews

Abstract: From April to October 2022, the internationally renowned composer, conductor and artist Esmeralda Conde Ruiz (*1980/Spain, lives in the UK) was a guest at the Schaufler Lab@TU Dresden. During her residency she worked with TU Dresden (TUD) scholars and scientists as well as experts from other institutions and researched ideas related to the Lab’s main topic of enquiry, Artificial Intelligence as a Factor and Consequence of Social and Cultural Change. Our editor Michael Klipphahn-Karge, an art scholar and currently himself a fellow of the Schaufler Kolleg@TU Dresden, interviewed Conde Ruiz about her stay and her plans during the residency in Dresden.

Esmeralda Conde Ruiz, you were Artist in Residence at the Schaufler Lab@TU Dresden from April to October 2022. There, humanities scholars and social scientists worked with you, or with the artists in residence in general, to examine artificial intelligence systems and to ask what social changes the associated technologies may entail and how these can be understood, ordered, and evaluated. Building on this, our conversation will shed light on how the collaboration between you as an interdisciplinary artist and the other participants with their different academic backgrounds was organised. How did your application process for the residency work? With what ideas and plans did you apply? And what was expected of you as an artist in residence in Dresden?
Prior to this residency I had no experience of working with AI. I have always had an interest in new technologies and the new opportunities these might bring creatively. Therefore, the idea of being able to experiment with AI in collaboration with experts was very appealing to me. I was curious to try out ideas that would lead to something interesting, as opposed to applying with a finished project idea. For this reason, I applied with a proposal consisting of three main questions to explore:

  • What would be the reasons for creating an AI choir?
  • Who would benefit from it?
  • What would this sound like?

An independent jury selected from the applications, and I then had an additional video interview with Kirsten Vincenz (director of the Office for Academic Heritage, Scientific and Art Collections of TUD and speaker of the Schaufler Lab@TU Dresden) and Gwendolin Kremer (curator of the University’s Art Collection of TUD and of the Schaufler Residency@TU Dresden), discussing my method and my approach to mistakes.
The team’s hope was that I could be part of the campus and the Schaufler Kolleg@TU Dresden. Shortly after the public announcement of my residency, Gwendolin Kremer started organising online meetings with professors at TUD for me.
So, on the one hand, the expectation was to make time during the six months to meet as many researchers as possible; on the other hand, artistic experimentation was still expected of me. Over the six months I was regularly asked to contribute to conferences and university events, for example the 5th extended symposium in visual arts in Saxony as well as internal university workshops and conferences. This culminated in my own symposium at the end of my time in Dresden, entitled Zukunftsmusik – What the future might sound like.

Symposium "Zukunftsmusik – What the future might sound like" at TUD (2022). Panel discussion on 29.09.2022. Left to right: Gwendolin Kremer, curator Schaufler Residency@TU Dresden; Prof. Carsten Nicolai, artist, musician, professor of art with a focus on digital and time-based media, Dresden University of Fine Arts; Esmeralda Conde Ruiz, artist/composer, Schaufler Residency@TU Dresden 2022 and Jun.-Prof. Miriam Akkermann, junior professor in Empirical Musicology. Photo: André Wirsig/© Schaufler Lab@TU Dresden.
Symposium at TUD: Zukunftsmusik – What the future might sound like (2022). Photo: André Wirsig/© Schaufler Lab@TU Dresden.

Now that the six months are over, the conversations continue and we are planning an exhibition together, so in this way it is much more than just a residency.

What collaborations were offered, what networks are there? And how did it feel to work as an artist at a university that is almost entirely dedicated to technology?
My studio was located on a field between three large buildings: the Sächsische Landesbibliothek (SLUB), a huge library; the Institut für Biologie, a large modern biology building; and the Otto-Mohr-Laboratorium, home of the Institute of Concrete Structures.
You sit within nature in your studio, which is a glass box, watching the butterflies and bunnies pass by whilst the Otto-Mohr-Laboratorium, one of the most modern testing facilities in the construction industry in Germany, runs tests on different materials nearby. The contrast couldn’t be greater. I loved watching their discarded experiments piling up outside their building. They were large, heavy, and constant. As my sonic work is invisible, I found it very inspiring to be surrounded by a very visible industry.

Artist Studio at the Schaufler Residency@TU Dresden (2022). Photo: Schaufler Lab@TU Dresden.

Surprisingly, the connection to technology and the direct contact with industries didn’t feel alien to me. This is probably because my collaborative projects are interdisciplinary, so I am used to working across different fields. Talking to professors, I also realised that we have very similar approaches in terms of experimentation, testing, prototypes and talking through thoughts and ideas with clients and commissioners.
In terms of collaborations on offer: Gwendolin Kremer hand-picked the professors who might be of interest to my practice. In a way it was like a blind date, to see where there was a potential spark and a shared excitement about the possibility of collaborating. I thought that was a clever way of seeing who might become involved.
For the first month I basically just went to meetings all over town, which was a wonderful introduction to such a vast topic as AI. The Office for Academic Heritage was a gold mine of inspiration, covering so many different fields of interest, such as the Historical Acoustic-Phonetic Collection under Rüdiger Hoffmann.
Additionally, I was offered an introduction to a vast network of other local universities, partners, and museums in the city. I met local musicians and artists, and I valued that creative network highly and found it to be a huge creative support during my time in Dresden.

What research questions did you pursue as artist in residence? What questions arose between your artistic practice and research on AI in general? And what gaps were there that you found exciting and productive?
My research questions changed and developed during the residency and that was supported in full by the Schaufler Lab@TU Dresden. My initial hope to work with a non-human intelligent system and to explore how that might sound was quickly shattered when I learned more about AI and how many humans are actually involved in running such systems and models. In every step of the process, I learned that there was human intelligence shaping the path or overseeing the progress and that these humans were frequently being exploited. This suddenly created an entirely different scenario from what I naively had anticipated. 
Sonically I wanted to explore the non-human. A sound that I can’t create on my own or with other human voices. I started following the sound of training data that led me to the infrastructure that is needed to run AI models. This led me to the biggest microtonal noise ever: data centres. This sparked my curiosity and inspired me to explore a different angle within my research. 
I started to think more about the continuous cycle of our human data and the technological space it occupies. My questions began to focus on highlighting the new sounds created when we communicate digitally with each other. How do these sounds differ sonically and in what ways are they similar to our traditional methods? What are the costs of digital communication? If technology is trying to sound human, what happens if humans attempt to mimic the sound of technology? 

What, in your view, are the important insights you have gained through the joint work with various researchers regarding your idea of this artistic work in the context of AI?
The researchers I met introduced me to the human workforce behind AI and to the realisation that so-called AI systems are actually fuelled by millions of low-paid workers around the world. Attached to this are the training data and the origins of that data. The speed and scale of server farms and their carbon footprints had a profound impact upon me, as did how little this side of the industry is shared with the wider public and how this knowledge isn’t commonly known.
These are all very big topics, ones which are constantly being investigated by researchers and journalists, and they are ever evolving and changing. These insights, though, have made it possible for me to look at the human-versus-technology connection through a wider lens, to see the global picture of how it is developing. It didn’t feel right for me to just use AI to create an artwork once I understood the larger picture. It needed to become something that echoes the sounds of all these conflicting connections.

Which project exactly did you pursue during your time at TUD?
I am currently creating a durational audio-visual work which combines the sounds of digital infrastructures used for storing data, humans engaging with communication technology, and colour. The project intends to draw attention to the invisible and barely tangible world of our digital data and future technologies, highlighting the role and influence of humans in this constantly changing world. Sounds which have inspired the work include the whirring of the ventilation systems at the server farms as well as the isolated microtonal sounds emitted from different racks of supercomputers. During the piece my hope is that these tones evolve into a human/machine choir humming, repeating, and re-articulating words. I’m adding synthetic voices to the cacophony of noise, which causes the listener to interrogate the authenticity of what they hear. The work is created as an infinite loop, which symbolises the sounds created continuously at server farms as users from different time zones engage and add to their personal data stores. The sounds portray the co-creation between humans and technology and ask how true symbiosis might sound in the future. Hopefully the work will call into question the continuous cycle of our human data and the technological space it occupies.

View of the server rooms of the TUD (2022). Photo: Robert Gommlich/© Courtesy of TUD’s Center for Information Services and High Performance Computing (ZIH).

What role did the sounds, sound emissions and noises that can be produced by mechanical processes play? Is this natural sound of a machine already a kind of music for you?
Yes, sometimes. It definitely influences the tonality and the sound character that I create. Pauline Oliveros was Distinguished Research Professor of Music at Rensselaer Polytechnic Institute, Troy, NY, and founder of the Deep Listening Institute. She described Deep Listening as a way of listening in every possible way to everything possible to hear, no matter what you are doing. Such intense listening includes the sounds of daily life, of nature, of one’s own thoughts as well as musical sounds.
Like her I draw a huge inspiration from listening. Not necessarily listening to music but to spaces, focusing on sounds that are specific to that space. Sounds that are already there. In a way I look for the sonic architecture of a place. These sounds are often the starting point of my inspiration and evolve into something new. 
A trained ear can hear a tonality, let’s say in the sound of a humming fridge. The brain reacts by categorising it as background noise and then forgets about it. I like listening to exactly those sounds for that reason: sounds that come from the technology we use and that are part of our daily life, yet that we don’t pay attention to. Or, as in this case, sounds we don’t have access to, such as the sounds from a server farm.
Visiting the server farm of TUD’s Center for Information Services and High Performance Computing (ZIH) was like listening to a specially arranged orchestra. Experiencing an entire building that was trying to keep the sound inside was fascinating to me. It was sonically very different from how I had imagined it, or even from the sounds of servers I found online. We are encouraged to think of our data as occupying a cloud, something light, transient, and natural, when in fact it is a massive, loud, sweating building that works around the clock. There is no beginning or end to the sound. It is a physical as well as an aural experience.
The centre’s engineers hear the differences in tones and can diagnose problems in the computers before the system tells them that something is wrong, simply by listening to the familiar sound emitted from the many racks. The sounds are being created by us: by the researchers who are running simulations or by the staff writing emails. The server farms store this data and all of it creates sound. It is just technology doing its work, but as humans use and overuse it, it starts generating more noise and changing, suddenly creating a very physical, spatial sound. All these sounds signify that the system is working.
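As a purely illustrative aside (not the ZIH’s actual monitoring tooling, and not part of the artist’s process), the idea of a rack drifting away from its familiar tone can be sketched in a few lines of Python:

```python
# Illustrative sketch only: estimate the dominant frequency of a recorded rack
# hum and flag a drift away from a known baseline tone.
import numpy as np

def dominant_frequency(samples: np.ndarray, sample_rate: int) -> float:
    """Return the strongest frequency component (in Hz) of a mono recording."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def hum_has_drifted(samples: np.ndarray, sample_rate: int,
                    baseline_hz: float, tolerance_hz: float = 2.0) -> bool:
    """Compare the current hum against the familiar baseline tone."""
    return abs(dominant_frequency(samples, sample_rate) - baseline_hz) > tolerance_hz

# Synthetic example: a hum that has crept from 120 Hz up to 127 Hz.
rate = 44100
t = np.arange(rate) / rate
recording = np.sin(2 * np.pi * 127.0 * t)
print(hum_has_drifted(recording, rate, baseline_hz=120.0))  # True
```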
The most fascinating sounds to me were those coming directly from the huge supercomputers; each rack had a different frequency and character. Some were static, some always changing. The human connection to the sound was the light-bulb moment for me: the server farm sounded softer when the university went on holiday. The difference in sound was extraordinary.
What is the natural sound of our data? The server farm is in itself not natural. It is a space not made for human conditions, an unnatural place for a human despite being created by our activity; an instrument which is the result of our entire system of user technology. Every server farm sounds different, and its sounds are strongly connected to the users’ activities. In that sense I see it as a sonification of our human activities.
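To make the notion of sonification a little more concrete, here is a minimal, purely illustrative sketch; the activity values and the pitch mapping are invented for the example and are not taken from the artist’s work:

```python
# Minimal sonification sketch (illustrative only): map a made-up stream of
# activity values onto pitches and render them as a short mono WAV file.
import wave
import numpy as np

def sonify(activity, sample_rate=44100, note_seconds=0.5,
           low_hz=110.0, high_hz=880.0):
    """Map each normalised activity value (0..1) to a frequency and synthesise a tone."""
    tones = []
    for value in activity:
        freq = low_hz + value * (high_hz - low_hz)
        t = np.arange(int(sample_rate * note_seconds)) / sample_rate
        tones.append(0.3 * np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

# Invented example data: one value per hour, quiet at night, busier around midday.
hourly_load = [0.1, 0.1, 0.2, 0.5, 0.9, 0.8, 0.6, 0.3]
audio = sonify(hourly_load)

with wave.open("activity.wav", "wb") as f:
    f.setnchannels(1)          # mono
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(44100)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```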

Cooling infrastructure of the server rooms of ZIH, TUD (2022). Photo: Robert Gommlich/© Courtesy of TUD’s Center for Information Services and High Performance Computing (ZIH).

The question of natural sound was really important to me during my research. Implementing found sounds was crucial to my approach. I did not want to work in sound design or create a different sound through AI. I wanted to understand the sonic qualities of AI and not copy human voices. I did not want to replicate human sound. I wanted to find a new sound. In a way I found it in those machine sounds. 

What role did the musicality of such sounds play? Is subtlety predestined for your handling of such discrete processes, which we normally can neither hear nor really see?
The musicality of those server-farm tones is what really got me. Microtonal frequencies are a very different tonal world, not often found in Western music scores. Discovering those tones opened up an entirely new world, complete with new rules.
That human-technology co-creation and co-dependency is what I am interested in sonically amplifying. Obviously, it is very minimal and requires very delicate listening skills; therefore, I chose to amplify in my work sounds that are not dominant, not necessarily immediately heard or understood. I find those sounds more interesting as there is more room for interpretation and exploration.
Our ears are more delicate than we might think. If we are brought up in a country in the Western world, we are very used to Western tonalities, even if we are not particularly musical. I understood that for an audience to be able to hear what I hear, I needed to translate more of the piece musically, so I began experimenting with voices humming alongside the machines. I did tests with a trained opera singer, and their struggle to sing alongside the computers was fascinating. I did different experiments with singers trying to sing along in different ways, playing with frequencies and length. I discovered that a language for articulating how to blend with such a constructed machine sound didn’t exist.
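For readers unfamiliar with microtonality, a short illustrative sketch may help; it assumes an equal-tempered quarter-tone grid (only one of many possible microtonal systems, and not necessarily the one used in the work) and snaps a measured machine hum to the nearest quarter-tone:

```python
# Sketch of a quarter-tone grid (24 equal steps per octave), one way to describe
# pitches that fall between the semitones of Western scores. Illustrative only.
import math

A4 = 440.0  # reference pitch in Hz

def nearest_quarter_tone(freq_hz):
    """Snap a frequency to the nearest quarter-tone and report the remaining offset in cents."""
    steps = round(24 * math.log2(freq_hz / A4))        # quarter-tone steps from A4
    snapped = A4 * 2 ** (steps / 24)                   # frequency of that grid step
    cents_off = 1200 * math.log2(freq_hz / snapped)    # deviation still left over
    return snapped, cents_off

# Example: a 123 Hz hum lands closest to the grid step at about 123.47 Hz (B2),
# which sits roughly 7 cents sharp of the measured tone.
print(nearest_quarter_tone(123.0))
```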

Details of the cover photo: Esmeralda Conde Ruiz (2022) in a server room of TUD’s Center for Information Services and High Performance Computing (ZIH). Photo: André Wirsig/© Schaufler Lab@TU Dresden.

How to cite this article

Esmeralda Conde Ruiz and Michael Klipphahn-Karge (2023): Esmeralda Conde Ruiz: sensitive ears and insensitive infrastructures. Part I. w/k–Between Science & Art Journal. https://doi.org/10.55597/e8620
