Stephanie Dinkins

featured by Guillermo Moreno Mirallas

Stephanie Dinkins, a transmedia artist based in Brooklyn, New York, has become a leading figure at the intersection of art and technology, exploring the complex dynamics of race, gender, aging and our future histories. Driven by a pioneering spirit, Dinkins strives to develop culturally responsive AI entities and emerging technologies that confront questions of bias, data sovereignty and social equity, working in collaboration with programmers, engineers and communities of color. Stephanie’s artistic endeavours have gained widespread recognition, with notable projects including Conversations with Bina48, an ongoing series of dialogues between Dinkins and BINA48, one of the first social humanoid robots, that offers a thought-provoking exploration of AI’s implications for identity and representation. Through Not the Only One (2018), an AI trained to tell an emotionally evocative memoir of Dinkins’ own family, she delves into the complexities of personal and collective memory.

Stephanie teaches at Stony Brook University, where she holds the Kusama Endowed Chair in Art. She earned an MFA from the Maryland Institute College of Art and is an alumna of the Whitney Independent Study Program. Her residencies, fellowships and awards include the United States Artists Fellowship, the Knight Arts & Tech Fellowship, the Artist Fellowship of the Berggruen Institute and support from the Stanford Institute for Human-Centered Artificial Intelligence, to name a few. Her trajectory has been recognised by international media such as The New York Times, which featured Dinkins as an AI influencer.

Stephanie Dinkins’ research challenges preconceived ideas about AI and draws out multiple possibilities, inviting us to question prevailing narratives. Through collaboration, mutual support and communication as strategies, she outlines possibilities for building more inclusive horizons. In this enlightening interview, we delve into Dinkins’ captivating thinking, the relevance of her practice, and her commitment to the overlooked global majority.

#WhenWordsFail #ΌτανΔενΒρίσκουμεΤιςΛέξεις (2021)
An immersive web experience
by Stephanie Dinkins.
© Stephanie Dinkins Studio.

Guillermo Moreno Mirallas: As an artist and educator, you frequently explore the realm of AI and machine learning in your work.

How do you imagine these technologies will shape the future of both physical and virtual public spaces?

Stephanie Dinkins: AI technologies are poised to revolutionise a multitude of aspects, encompassing almost every facet of human functioning. Consequently, this inevitably leads to changes within public spaces and shared environments. The fundamental question then becomes: how do we negotiate those changes? How do we make AI work for most people, or as many people as possible in good and supportive ways? How do we negotiate emerging technologies that exist in our environments with more than fear or profit in mind? Can we create AI that people can count on to be supportive and beneficial as we navigate public spaces?

How do you believe art can actively contribute to the process of questioning technology and redefining public spaces?

Art plays a significant role in shaping public spaces and technology. Artists often engage with and critically analyse new technologies ahead of the general public. We employ methods that make complex concepts more accessible, offering opportunities for deep understanding of the capabilities of far-reaching technologies like AI. This includes understanding how AI works in our environment, exploring the potential for individuals to take advantage of it, and determining what we want from this technology. This leadership role exists, even if other segments of society have yet to fully recognise it.

Furthermore, what role do you see art playing in shaping the dialogue surrounding the socio-economic impact of AI on society as a whole?

The intersection of socioeconomics and AI presents a vast question. The use and impact of AI technology depend on how we create, introduce, and nurture these systems, how we train them, and the goals we set for them. Additionally, it hinges on how the general public embraces AI technologies. Holding both the technology and its creators accountable is crucial. Will we continue to harbour fears perpetuated by popular culture, believing that AI will take away our jobs and pose a threat to humanity? Or will we invest the time and effort to forge partnerships with this technology, directing its development towards supportive human outcomes?

At the heart of my exploration lies a fundamental inquiry: Can we cultivate algorithmic systems that embody care and generosity? This thought-provoking concept prompts us to consider the necessary steps to bring it to fruition. How can we ensure the safety of traditionally overlooked and marginalised communities, including people of color and the global majority, amidst the shifting landscape of AI technologies? These technologies alter how information is perceived, decisions are made, and resources are shared. Can those furthest removed from positions of power harness AI to highlight and address the often-denied realities faced by these communities? Can those in positions of power combat biases in AI systems and utilise them to foster ideas like radical democracy, where every individual’s vote holds equal weight?

There must be collaborative ways for humans to work alongside emerging technologies, leveraging the strengths of both.

I also believe that embracing change will be paramount for the future. Humans need to deeply contemplate the concept of change and its impact on our environments, economic prospects, and the essence of human existence. Instead of expending the majority of our energy resisting change, which feels almost unavoidable, how can we adapt and flow with it? What does it really mean to be aware of the opportunities, challenges and threats that are emerging at an ever-increasing pace? Artists, innately curious, deconstruct and examine things with a critical eye, pushing them to function in unconventional ways. In this process, artists actively contribute to the moulding of public space and the technologies that inhabit it.

Conversations with Bina48: Fragments 7, 6, 5, 2. (2018). © Stephanie Dinkins Studio.

Could you share with us the story behind your project Conversations with Bina48?

We’re interested in understanding its origins and how it explores the intricate relationship between humans and AI, specifically focusing on emotional interaction and empathy.

Could you provide an overview of the project’s development process and share any valuable insights you gained from engaging in conversations with an AI robot?

Without Bina48, the work I’ve been engaged in wouldn’t exist. I wouldn’t have contemplated AI, its societal impact, the concept of representation within AI systems, or the importance of nurturing and training AI to foster improved collaboration between humans and machines. Conversations with Bina48 served as the catalyst for the work I’ve dedicated myself to over the past ten years. Prior to my initial encounter with Bina48 in 2014, my knowledge of AI systems was extremely limited, extending only to a general sense of curiosity.

When I embarked on this project, my primary objective was to establish a friendship with a unique robot that happened to bear a resemblance to me: we both present as black women. However, through our conversations, I began to sense a certain flatness in the robot’s exploration of race. As a black woman interacting with a black robot, I anticipated a deeper and more profound experience. This led me to wonder about the fate of black individuals within the AI landscape and how our rich experiences could be more fully portrayed.

Conversations with Bina48 helped me formulate a multitude of questions to pose to the technology itself and those involved in its creation. I began seeking insights from other artists and individuals engaged in AI development. Frequently, I encountered responses indicating that the approaches I had envisioned deviated from established norms. For instance, I pondered why a robot couldn’t be founded on love and encompass a more profound representation of blackness within AI. Dissatisfied with the answers I received, I resolved to embark on projects that better aligned with my understanding of how AI could authentically portray the fullness of black experiences within a specific community.

NOT THE ONLY ONE, Zuccaire Gallery Stony Brook University.
© Stephanie Dinkins Studio.

Another notable project of yours that explores the relationship between concepts such as race, identity and representation in relation to technology is Not the Only One, which aims to use AI to facilitate meaningful dialogues between generations of black women.

We would love to know more about the impact of your personal experiences on your artistic practice and how you deal with the complexity of these issues in your work.

How do you see the potential of AI as a tool to address social and political injustices rather than perpetuate them?

As a black American woman, I often encounter situations that challenge the confidence and values my family instilled in me about my identity and how I am perceived in the world. This raises the question of how I can make space for myself and for people who share similar experiences within systems that may not fully recognise or see us clearly. It’s a complex navigation through systems that render black people simultaneously hypervisible and invisible. How does one navigate that? How can we make the daily acts of violence and micro-aggressions visible to those who would rather avoid acknowledging or dealing with them? These are the questions that constantly occupy my thoughts.

I imagine that AI can be utilised to problematise the issues that are often directed towards black individuals. Through my approach to AI projects, we can make these problematics visible in ways that are difficult to ignore, compelling people to confront and consider them. The fundamental question, for me, is how do we prompt individuals to recognise what holds value for a myriad of communities and society as a whole? In terms of using AI, how do we disseminate rich and nuanced narratives told from the perspective of under-represented and under-consulted communities? These narratives should delve deeper and offer a truer portrayal than the often one-dimensional stories typically presented about the global majority. To accomplish this, I strive to explore narratives about black life, for instance, based on my understanding of them, rather than accepting oversimplified depictions of blackness that are imposed upon us.

It is important to acknowledge that AI is primarily developed by small groups of people who may not represent the diverse breadth of society. Therefore, it feels imperative to exert every effort to include our stories. Stories that shed light on the aspects and ways of life that sustain us and present ideas that push society to improve for the betterment of all its constituents. This is crucial in contrast to perpetuating systems that continue to generate profit and power for a select few individuals on Earth.

Can you share a specific instance in which you’ve witnessed technology reinforcing existing power dynamics and perpetuating violence in relation to race and gender?

I have worked with text-to-image systems, in which a text prompt is used to generate an image through machine learning. In 2016, I started experimenting with generative systems, including one of the first GANs I tried, prompting it to represent a black woman crying. To my surprise, the resulting image did not resemble a black woman crying at all. Instead, it portrayed a white-skinned muppet-like figure wearing a black cloak. I questioned why the system provided such an image when I specifically requested a representation of a black woman. Upon reflection, I realised that the dataset available to the system lacked sufficient examples of black women, let alone a crying black woman, to generate an accurate depiction. The system was not advanced enough on its own and lacked the necessary samples.

This led me to consider the implications. If I desire representation through these widely used systems, how does it function? Consider computer vision and the limitations of cameras in perceiving darker skin tones. It is concerning to think that in a world where autonomous or semi-autonomous vehicles rely on cameras and AI, the failure to clearly detect black figures poses a danger to black pedestrians. This raises the question of how we can bring attention to these issues.

Since I first engaged with text-to-image generation, there have been significant improvements. Nowadays, when I use DALL-E, for instance, and input “black woman crying”, I receive four relatively accurate depictions of a black woman crying. It is quite a revelation in many ways. However, this development also prompts concerns about what lies beneath the surface now that the results have become so seamless and visually appealing.

How do you approach the integration of AI technologies in your artistic practice to foster inclusivity and address the concerns surrounding their implementation, particularly in relation to creating opportunities for underrepresented and marginalised communities?

Ensuring that these systems or computations not only produce surface-level representations but also embody care and inclusivity has become increasingly important to me. This is the question I am currently grappling with. Although the images work well and appear seamless and beautiful, there is a certain level of apprehension, and I must delve deeper into understanding the underlying aspects.

I see myself as an artist who engages with the technologies present in our world because I recognise their profound impact on society, human relationships, economics, and opportunities. It concerns me that people might miss out on these opportunities due to the apprehensions surrounding AI technologies. I want to clarify that I’m not suggesting we overlook the potential harms. However, my genuine inquiry revolves around identifying the opportunities for communities that are often excluded from the pipelines involved in the creation and utilisation of AI technologies. How can we employ these technologies to sustain ourselves and our communities? This is primarily the focus of my work.

My approach involves working with the concept of preserving histories, establishing improved and more respectful foundations for them, and utilising AI for their archival purposes. Additionally, I strive to infuse the broader AI ecosystems with the values, ways of existence, and ethical considerations that I deem important and believe can positively impact the entire system.

I firmly believe that AI technologies are already a part of our lives and will continue to be so. Therefore, we must find ways to embrace and utilise them to support our well-being. I ponder how communities relegated to the margins can protect ourselves and also benefit from these technologies.

In your exploration of bias and underrepresentation using chatbots, what specific challenges have you encountered?

The work often receives a lot of discussion around bias, but I also focus on representation. Some challenges I face in my work revolve around creating community-sustaining and supportive pieces that are not built upon what I perceive as violent information or histories. This has been a significant challenge. When building a chatbot, you typically build upon existing prepared data, which carries its own histories. In the American context, for example, there are narratives of enslaved people that are poorly told or one-sided. When developing a chatbot based on my family’s history, I don’t want it simply placed on top of that foundation. I want a more complete and authentic history, one that depicts our survival and the fortitude it required, a history that has been absent from the histories I’ve encountered. Placing your data on top of communities that hold meaning to you feels violent, but there aren’t many alternatives available.

For my project Not the Only One, mentioned before, various foundational data sets were suggested, including Wikipedia. However, I disagreed with using Wikipedia due to its heavily curated nature and insufficient depth of information, as well as its exclusion of significant amounts of information that I would like to see included. There was also the Cornell University movie data set, which consists of dialogue from films. While it may appear innocuous, it fails to adequately represent blackness in American movies. Once again, this is not the foundational information I want to rely on for my community’s sacred data.

So, what do you do? How do you begin? For me, it often involves attempting to reinvent the wheel, such as creating my own data sets, although that is not truly feasible. This leads to the issue of working with insufficient data to obtain coherent responses from the system. I must note that this problem became a fundamental aspect of Not the Only One because, in many ways, the project is flawed. It doesn’t perform as people expect, especially when compared to AI systems like Siri and Alexa. However, what it does do is challenge us to reconsider our expectations of chatbots and why we hold certain expectations. I find this exploration far more valuable than having a chatbot that flawlessly engages in conversation or serves as a responsive answering machine.

Secret Garden (2021) Online experience, New Frontiers, Sundance Film Festival. © Stephanie Dinkins Studio.

In your artistic practice, how do you balance the incorporation of digital and physical elements? What challenges and benefits do you encounter when working with such a multidimensional approach?

In the case of Secret Garden (2021), which exists as both an online experience and an immersive installation, can you discuss the design process involved in ensuring its success in both formats? How did you adapt to accommodate the diverse experiences and interactions users might have in each setting?

To me, both the physical and digital aspects of my projects hold equal significance. The physical element serves as the seduction, the enticing factor that draws people in, while the digital component provides the substance and serves as the glue that engages them on a deeper level, holding everything together. They work in harmony.

Regarding Secret Garden, the decision to make it both an online experience and an immersive installation was intentional and significant to me. I strongly believe that people should have the opportunity to experience art regardless of their ability to afford tickets. From the very beginning, I envisioned Secret Garden as a project that could be accessed both in person and openly online, available to anyone who encounters it. However, it wasn’t solely up to me, as the in-person version of Secret Garden required an admission fee, which was beyond my control. Similarly, the online experience was behind a paywall at Sundance, but I took solace in the fact that the piece was simultaneously accessible for free. It meant that anyone who wanted to see it and could access a computer (though I acknowledge that computer access can also be a barrier) had the opportunity. This accessibility was significant to me on an intellectual level.

We strived to develop the IRL and online versions of Secret Garden to mirror each other as closely as possible. The stories told in both experiences are additive, and their proximity to the viewer is a key factor. In the in-person installation, viewers were immediately confronted by the women who served as sentinels in the piece. In the online version, however, you had to search for the women, using the volume of stories as cues to gauge your proximity to a character. Thus, both versions required the attention and active engagement of visitors in a particular way, asking them to invest effort or pay attention to fully grasp the story. In my perspective, the notion of active participation and a willingness to engage in creating meaning makes the experience more profound for those who fully immerse themselves in the process.

Please, tell us about your Complimentary Questions project and how it fits into your broader artistic practice. How do you think this project challenges and expands traditional modes of storytelling and knowledge production?

We’d love to hear more about how this project creates a space for building community and fostering new channels of interaction and communication.

Complimentary Questions was such an interesting experiment for me. It serves as a nod to my frustration with surveys. Most of the time, surveys don’t genuinely care about what folks think. They rarely provide space or options for answers that fall outside of the survey developers’ perspectives. With this in mind, I wanted to create a space where individuals could freely answer a common question in the format that best fit them. It’s my way of acknowledging the importance of radical democracies and collectivity, wondering if we can provide people platforms to speak and then use AI to analyse and package their contributions in a way that could potentially inform our leaders.

In many respects, I consider Complimentary Questions to be a failed project. However, it was crucial in setting the stage and shaping my thought process for bigger projects. For example, ‘Binary Calculations Are Inadequate to Assess Us’ is an app that allows people to speak their minds, define what’s important to them, and contribute to collaborative definitions of common concepts that inform society as a whole. In this sense, I now think a lot about self-defined and self-reflective intentional data that could lead to a better representation of people who are not fully taken into account, emphasising the importance of individual voices in shaping our collective understanding and promoting a system of representation that adheres to the principle of “one person, one vote”.

Thank you Stephanie for your valuable insights and reflections. As we come to the end of our interview, I would like to delve deeper into your inspirations. Afrofuturism and science fiction have played a significant role in shaping your creative journey.

Could you highlight any author within these genres who has left a mark on your artistic path and your thinking?

The only science fiction author I really engage with is Octavia Butler. I absolutely love her emphasis on the idea that the future is connected to the past and vice versa. Her focus on being prepared for what lies ahead and her belief in the importance of understanding and adapting to change resonate with me deeply.

Afrofuturism has always intrigued me, especially since the 1990s. I feel like I’ve witnessed multiple waves of Afrofuturism’s popularity. While it’s an intriguing concept, at some point, I grew tired of fixating solely on the Afro future. I wanted to explore what it means to manifest our dreams and desires in the present moment. Nowadays, I’m more interested in the Afro-now rather than the Afro future. In other words, I’m curious about the actions we can take to bring our deepest desires to life right now. I’m exhausted from being caught in a cycle of chasing a carrot on a stick, where Black people, people of color, and the disabled are often told to wait patiently for something to happen, as if it’s just around the corner. I can’t help but wonder when and how we’ll actually get hold of that elusive carrot. For me, grasping the carrot means not only imagining a future, but also working to reap the benefits of our dreams as soon as possible, rather than waiting for the distant future.