Google has canceled its external AI advisory council just one week after announcing it, according to Vox.

Google created its Advanced Technology External Advisory Council (ATEAC) to ensure the company pursued the “responsible” development of AI.

A company spokesperson told Vox:

“It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”


The “current environment” that Google referred to is really one of its own making.

In a blog post, Kent Walker, Google’s senior vice president of global affairs, said the group would consider “some of Google’s most complex challenges” under its AI principles, including facial recognition and fairness in machine learning.

Along with seven others, Google appointed Kay Cole James, president of the Heritage Foundation — a conservative think tank — to the board.
James’ transphobia and xenophobia are well documented; critics quickly surfaced her tweets referring to trans women as “biological males” and backing Trump’s controversial national emergency declaration.
In response to James’ appointment, a group called Googlers Against Transphobia launched a petition signed by over 2,400 Google employees, academics, researchers, and others. The petition read, “In selecting James, Google is making clear that its version of ‘ethics’ values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants.”
Google’s inclusion of drone company CEO Dyan Gibbens also raised concerns. Part of why Google created the council in the first place was the backlash over Project Maven, a drone program it worked on for the Department of Defense. There, too, thousands of employees signed a petition, writing that “Google should not be in the business of war,” and the company ultimately pulled out of the project.
There are still real issues with AI and its development, but they won’t be resolved by letting tech companies regulate themselves. Google’s board showed how willing companies are to paint bigotry as simply a matter of “diverse” opinion, while other ethics boards are often made up only of academics and people in the tech world.
The kind of technology Google’s now-defunct council was supposed to examine affects the world’s most vulnerable people. Whether it’s a Brooklyn landlord planning to install facial recognition in a low-income building or the government using images of abuse victims to test facial recognition software, AI and its impacts reach into many corners of our society.
The people already harmed by AI need to be involved in determining what responsible, ethical development looks like. Until then, companies are simply setting up echo chambers.