On Tuesday, Google announced its Advanced Technology External Advisory Council, a new group meant to guide the company's "responsible" development of AI.

In a blog post, Kent Walker, Google’s senior vice president of global affairs, wrote, “This group will consider some of Google’s most complex challenges that arise under our AI principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work.”

The principles Walker refers to were released last summer, after Google was pressured into dropping a contract with the Department of Defense's Project Maven (now connected to Anduril Industries). The project aimed to deploy computer algorithms in war zones and was met with backlash from Google's own employees.

An external advisory council watching over the company sounds promising, given that Google's AI track record is far from spotless. However, people on Twitter are voicing concern over some of the council's appointees.

Researcher Os Keyes points out that Kay Coles James, president of the Heritage Foundation, a conservative think tank, is on the council. Along with openly tweeting anti-LGBT, anti-immigrant, and other inflammatory remarks, James has reportedly compared LGBT people to addicts, alcoholics, adulterers, and "sinful" people, according to the LGBT media advocacy group GLAAD.

In the replies to Keyes’ tweet, some also expressed concern over Dyan Gibbens, CEO of Trumbull, a drone company “focused on automation, data, and environmental resilience in energy and defense.”

This is troubling because Google already has a fraught history when it comes to drones and defense. The company adopted its AI principles, and now this council, largely because of the fallout from Project Maven, during which its own employees wrote a letter stating, "Google should not be in the business of war," as reported by The New York Times.

One of the biggest problems with AI development today is that it prioritizes those who are already favored in society. From systems that repeatedly fail to recognize dark skin to facial recognition software that misgenders trans people, the harms consistently fall on marginalized groups.

These problems can't be fixed by appointing board members who perpetuate the same issues elsewhere. If Google wants to develop responsible AI, it should reconsider who is qualified to direct that work.