The Alignment Problem

Machine Learning and Human Values

AI: An Unexpected Mirror of Human Bias

Everyone has a take on AI. “It’s going to revolutionize work.” “It’ll snatch away our jobs.” “AI will usher in world peace.” “It spells doomsday for us.” Uncertainty shrouds the future of AI, its users, and the applications they’ll dream up. What we can’t deny is its lightning-speed development and its hasty deployment without rigorous testing. In the process, AI is proving to be an eerie reflection of our own flaws.

Perhaps the most unsettling facet of AI is its seemingly ingrained bias and racism. The roots of this troubling behavior trace back to the 19th century and Frederick Douglass, the most photographed American of his century. We’ll journey from there to the 2010s, when a young developer discovered that her AI robot acknowledged her presence only when she donned a white mask. These episodes, from past to present, send us clear signals of both caution and hope.

The Unseen Prejudice in AI

In 2015, web developer Jacky Alciné received a notification about a photo a friend had shared on Google Photos. To his surprise, the app sported a fresh UI, with AI automatically sorting photos into categories like “graduation” and “the beach”.

A selfie of Alciné and his best friend, both Black, was labeled “gorillas”, and the auto-created album contained nothing but photos of the two of them. Stunned and upset, Alciné took to Twitter to call out Google Photos. Within a couple of hours, Google acknowledged the issue and got to work. Yet as of 2023, its best workaround is simply to remove the “gorillas” category from the app, because the algorithm still mislabels Black people.

The seeds of this issue were planted in the 19th century, with the prolifically photographed Frederick Douglass.

Douglass, a celebrated abolitionist, saw photography as an opportunity for fairer representation of Black individuals. Before photography, Black people were typically depicted in caricatured drawings by white artists, their features exaggerated to the point of dehumanization. Douglass championed photography and encouraged Black people to embrace the medium.

However, representation was just one part of the problem. The technology itself was inherently biased. Film was treated with a chemical coating optimized for particular lighting conditions and skin tones. To calibrate this coating, photo labs used a standard reference image of a white woman, known as a “Shirley card” after its first model. The chemistry was thus tailored to render Shirley well, neglecting people with darker skin tones in the process.

Kodak finally rectified this bias in the 1970s. Not due to civil rights activism, mind you, but because furniture and candy manufacturers wanted their dark-toned products (wood grain and chocolate) depicted more faithfully. In response, Kodak refined its film to capture a broader range of darker tones, inadvertently tapping into a new demographic along the way. The downside? Decades of visual media lacked accurate representation of non-white individuals.

Now, fast forward to Alciné’s era, and we witness AI either dehumanizing Black people or completely ignoring their presence.

With this understanding of tech’s racist legacy, we’ll discuss in the next section why modern AI is still wrestling with this issue and what’s being done to rectify it.

Forging an AI Future with Inclusion

In the early 2010s, computer science student Joy Buolamwini embarked on a robotics project involving facial recognition. Her goal was to train a robot to play peek-a-boo, but the robot failed to recognize her face, forcing her to enlist a friend’s help to finish the demo. A similar incident occurred later in Hong Kong, where she attended a social robot demonstration. Again, the robot failed to identify her; it ran the same open-source facial recognition code as her earlier peek-a-boo robot.

The heart of the problem, Buolamwini discovered, lay in the program’s training data, primarily drawn from a dataset called Labeled Faces in the Wild. Upon examination, she found a glaring bias: the images predominantly featured white males, with less than five percent depicting darker-skinned women. No wonder the robots couldn’t recognize her.
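To make the scale of that skew concrete, here is a minimal sketch, in Python, of the kind of audit Buolamwini performed: tallying how each demographic group is represented in a face dataset. The metadata fields and records below are hypothetical illustrations, not the real Labeled Faces in the Wild annotations.

```python
from collections import Counter

# Hypothetical per-image metadata; a real audit would have one
# record per photo, typically derived from human annotation.
records = [
    {"gender": "male", "skin_tone": "lighter"},
    {"gender": "male", "skin_tone": "lighter"},
    {"gender": "female", "skin_tone": "lighter"},
    {"gender": "female", "skin_tone": "darker"},
]

# Count every (gender, skin tone) combination in the dataset.
counts = Counter((r["gender"], r["skin_tone"]) for r in records)
total = len(records)

# Report each group's share; a balanced dataset would show roughly
# equal percentages, while a skewed one mirrors Buolamwini's finding.
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.1%})")
```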

Eager to share her findings, Buolamwini reached out to several tech companies; IBM was the only one to respond positively. After verifying her claims, IBM set to work rectifying the bias in its datasets and retraining its algorithm. Within weeks, it achieved a tenfold reduction in errors when identifying Black women’s faces.
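Purely as an illustration, here is a minimal Python sketch of one crude way to rebalance a skewed dataset: oversampling underrepresented groups until each appears equally often. The function and record format are assumptions for this example; in practice, the stronger remedy is collecting genuinely diverse images, not merely duplicating the few that exist.

```python
import random
from collections import defaultdict

def rebalance(records, group_key):
    """Oversample so every group under `group_key` is equally represented."""
    groups = defaultdict(list)
    for r in records:
        groups[r[group_key]].append(r)

    # Grow every group to the size of the largest one by
    # padding with random duplicates of its own members.
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))

    random.shuffle(balanced)
    return balanced

# Usage with the hypothetical records from the audit sketch above:
# balanced = rebalance(records, group_key="skin_tone")
```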

Yet this is merely the tip of the iceberg. When contemplating the potential blessings or curses of AI, we must delve deeper into our past. The crux is that AI’s performance is only as good as its training data. When newly launched AI programs behave inappropriately, it’s likely because they were trained on the internet, a platform created by humans who don’t always model the best behavior.

This serves as a warning to developers: Ignoring the historical and human contexts that inform our technology can lead to grave errors. Rushing to roll out new technologies without thorough testing and diverse perspectives poses a significant risk to our future.

Wrapping Up

Developing AI cannot be disentangled from the tapestry of human history. The quality of AI programs hinges heavily on the datasets they’re trained on. Given the longstanding misrepresentation and exclusion of Black people from photography and film in the U.S., it’s unsurprising that many datasets are skewed towards white males. Correcting this issue demands revamping the datasets.

However, a larger lesson lurks beneath the surface: Hasty deployment of new AI technologies without adequate testing and diverse inputs is not just irresponsible but potentially dangerous to humanity. By infusing AI development with an understanding of history, diverse perspectives, and a dedication to testing, we can mitigate these risks and foster a more inclusive AI future.