Questions around AI and ethics are becoming more prominent. Now, some governments are developing their own answers.
On Monday, the European Union released a series of guidelines for the development of “trustworthy” artificial intelligence.
In June 2018, the EU appointed a group of independent experts to a high-level group on AI. The EU hopes to build on that group’s work with a three-step plan to address the issue, one part of which includes seven recommendations for fostering “trustworthy AI.”
“The ethical dimension of AI is not a luxury feature or an add-on,” Andrus Ansip, EU vice-president for the digital single market, said in a press release. “It is only with trust that our society can fully benefit from technologies.”
The EU’s seven key recommendations include transparency, human agency and oversight, societal and environmental well-being, and privacy and data governance for citizens.
These guidelines mark one of the first government-led initiatives to address AI ethics. In the United States especially, that’s generally left up to each individual company, which has led to some problems.
Only a week after announcing an external council for “responsible” AI development, Google was forced to disband it due to concerns over some of its board members.
Allowing tech companies to regulate themselves and set their own ethics probably isn’t wise, since they’re the source of much of the trouble with the technology. Google’s AI, for example, has mistaken Black people for gorillas, while Amazon peddles an error-prone facial recognition system to police.
Government intervention is complicated, though. The communities most likely to be harmed by AI can generally pinpoint times when their government either harmed or ignored them. That can be seen in the United States, where surveillance of Black communities is the government’s default setting.
Holding these conversations is a step, but it will be worth watching how the EU actually implements its strategies.
The EU will also launch a large-scale pilot phase to gather feedback from stakeholders and work toward international consensus on human-centric AI.