“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” – Ian Malcolm, Jurassic Park

As tech continues to advance, what was once imagined in science fiction and fantasy has become reality. Cars drive themselves, people speak to digital assistants, robot cops exist, and cameras watch you almost everywhere you go. However, as companies continue to invest in innovation, there’s a lingering question about the ethics involved.

For the most part, present conversations around ethics in tech focus on the use of data and artificial intelligence. That’s because neither of those things can be escaped. You are probably a data point in somebody’s research somewhere, and you most likely don’t even know it. Meanwhile, artificial intelligence shapes your daily life, from the ads you see to your credit score.

Perhaps one of the biggest issues with tech today is that people are in a rush to create without fully considering the consequences. Every day, new technologies are proposed or deployed without acknowledging the harm they cause to already vulnerable communities, such as Black people, Muslims, and low-income people.

Although many reference China as the tech dystopia of our time, pointing to its surveillance of Uyghur Muslims, the United States is already a tech hellscape in its own right. Last year, documents obtained by the American Civil Liberties Union revealed that the Boston Police Department used a social media surveillance system called Geofeedia to conduct online surveillance between 2014 and 2015. Among its targets, the BPD monitored basic Arabic words and the hashtags #BlackLivesMatter and #MuslimLivesMatter.

Another example of where innovation and ethics collide is facial recognition, a largely inaccurate technology already used by police departments to “identify” suspects. Often, public defenders are not told that these programs were used to identify their clients. In one Florida case, the sheriff’s office never mentioned its use of facial recognition software in an arrest report, instead claiming that it had identified the man through a manual search. Presently, the Department of Homeland Security’s Immigration and Customs Enforcement uses license plate and driver’s license databases to track down immigrants.

Should Tech Companies Regulate Themselves?

Over the past few years, many tech companies have launched ethics boards to begin discussing the issues posed by tech. However, not all ethics boards are created equal; in fact, a lot of them suck. This year, Google put together, and then quickly dissolved, an ethics board that included the president of the Heritage Foundation, Kay Cole James, whose transphobia and xenophobia are well-documented.

Google’s initiative was part of an industry trend that AI researcher Ben Wagner refers to as “ethics-washing.”

“I think (Google’s decision) reflects a broader public understanding that ethics involves more than just creating an ethics board without an institutional framework to provide accountability. It’s basically an attempt to pretend like you’re doing ethical things and using ethics as a tool to reach an end, like avoiding regulation. It’s a new form of self-regulation without calling it that by name,” Wagner told KQED.

The practice of allowing companies to establish their own ethics boards and regulate themselves has been critiqued before. Kate Crawford, co-founder of NYU’s AI Now Institute, noted that this episode signaled a need to move toward external oversight. After all, it makes no sense to expect that a corporation will regulate itself without bias. Most of the issues caused by tech stem from the rush to generate profit before, or without, considering a project’s social impact.

When creating outside accountability, there needs to be an emphasis on a variety of expertise. Tech impacts every single area of our lives, which means every single area of study needs to be involved, especially the humanities and social sciences. However, beyond relying on “professional” expertise, tech companies need to listen to people who are experts in their own lived experiences, as Joy Buolamwini, founder of the Algorithmic Justice League, told MIT Technology Review:

“As we think about the governance of AI, we must not only seek traditional expertise but also the insights of people who are experts on their own lived experiences. How might we engage marginalized voices in shaping AI? What could participatory AI that centers the views of those who are most at risk for the adverse impacts of AI look like?”

Although tech companies shouldn’t be allowed to regulate themselves, we cannot forget that tech workers are organizing on their own. When Google tried to develop an AI drone project with the Department of Defense, its employees organized to push the company to drop it, declaring that Google shouldn’t be “in the business of war.” Recently, Wayfair workers walked out after learning that the company was selling furniture to border detention facilities. In Minnesota, predominantly East African workers organized a strike on Amazon Prime Day to protest racial and religious discrimination.

Implementing ethics in tech means that business cannot continue as usual. There is no way for big tech as an industry to implement anything approaching ethics under its current structure. The industry will need to see a drastic overhaul, and ethics will have to become more than just an attempt to boost a company’s public image.