Unbiased vision: Eric Slyman works to address fairness and representation in AI

Artificial intelligence learns from the data that we produce, data that, intentionally or not, reflects social biases. And the sheer volume of data on the internet creates a vicious cycle that reinforces these biases.

Developers of vision and language software, which uses AI to create, search or reason about images from a user’s input, know these biases are a problem. Oregon State Ph.D. student Eric Slyman is working on a better way to address them.

Slyman is motivated to take on AI biases, in part, because they’ve experienced them firsthand as someone who is nonbinary, queer and pansexual. Being gay was much less accepted when the internet first became widely used in the 1990s, they note, and AI can still be influenced by negative social biases from the past.

Slyman also sees representational harms in how AI creates images of people based on what a dominant group looks like. They cite a common example: asked for an image of a doctor, AI will typically create one of a middle-aged white man. AI will also often identify a Black or Asian woman in a medical setting as a nurse.

Being nonbinary, “how I represent myself is dynamic,” Slyman says. If they were to ask an AI image editor to create an image of their face wearing makeup, it might also change their face to a feminine bone structure based on what AI has learned about masculinity and femininity. And if someone asks AI to create an image of a nonbinary person, “I would expect it would give you a less realistic looking person than a traditional man or woman because it just doesn't have that concept,” they say. “It's almost dehumanizing.”

Vision and language tools have become one of the fastest-growing segments of AI, and the prevailing auditing method, manually selecting and labeling images for biases, can no longer keep up, Slyman says. The process of gathering and annotating examples to form large datasets that AI can learn from is not only time-consuming but also expensive.

Their solution is VLSlice, an open-source tool that can look at thousands of images and help auditors prioritize which biases to address. Through an interactive, back-and-forth process with an AI system, which Slyman describes as a conversation, auditors can use VLSlice to quickly collect evidence of biases so that the AI learns how to better identify them.
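To make the underlying idea concrete, here is a minimal, hypothetical Python sketch, not VLSlice’s actual implementation, of the kind of comparison such an audit relies on: ranking images against a text prompt by embedding similarity and checking whether the ranking skews toward one demographic group. The image names, toy vectors, and grouping are all invented for illustration; a real audit would use embeddings from a vision-language model such as CLIP.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embedding" for the caption "a photo of a doctor".
# In a real audit this would come from a vision-language model's text encoder.
caption = [0.9, 0.1, 0.2]

# Toy image embeddings, labeled here by hypothetical demographic group.
images = {
    "img_group_a_1": [0.88, 0.12, 0.18],
    "img_group_a_2": [0.85, 0.15, 0.22],
    "img_group_b_1": [0.55, 0.60, 0.30],
    "img_group_b_2": [0.50, 0.65, 0.35],
}

# Rank images by similarity to the caption. If images of one group
# consistently rank above another for a neutral prompt, that skew is
# the kind of evidence of bias an auditor would collect.
ranked = sorted(images, key=lambda name: cosine(images[name], caption), reverse=True)
print(ranked)
```

In this toy data, the "group a" images dominate the top of the ranking for a neutral prompt, which is exactly the sort of pattern a tool like VLSlice helps auditors surface at scale instead of by hand-labeling individual images.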

Slyman published their research on VLSlice at the International Conference on Computer Vision in Paris in October 2023. They have spoken with Ofcom, the communications regulator in the United Kingdom, which is interested in using VLSlice as part of its auditing pipeline. They continue to work with Adobe and have been invited to speak to research groups at Google as well.

Slyman says VLSlice is just one tool for addressing AI biases, and they recognize “this work is not without resistance.” But beyond their personal connection to AI bias, they believe “these notions of AI fairness are applicable to everybody. We don’t want to base our perceptions of the world on what the internet says.” 

VLSlice is one way to see that we don’t.