As the advancement of artificial intelligence poses questions around the potential for bias, increased surveillance, and a myriad of other issues, big tech companies are beginning to take note. Both Google and Microsoft have now warned investors that AI could cause problems.
Last week, Alphabet, Google’s parent company, reported $39.3 billion in revenue for its last quarter, a 22 percent increase from a year earlier. Google’s CEO Sundar Pichai gave some credit to Google’s machine learning technology, saying, “AI is helping us drive our mission forward at a scale we couldn’t imagine.”
However, for the first time, the tech giant is warning investors that the same AI technology it’s pumping millions of dollars into could pose legal and ethical trouble.
Companies are required to file annual reports updating investors on what’s working and what carries risks. In Alphabet’s latest annual report, the company’s “Risk Factors” section noted that products and services using AI or machine learning “can raise new or exacerbate existing ethical, technological, legal, and other challenges.”
The report went on to say AI may “negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.”
Google’s sudden warning echoes one given by Microsoft in its annual report last August. Microsoft wrote that “AI algorithms may be flawed” and that “if we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, or employment, or other social issues, we may experience brand or reputational harm.”
The timing of these new warnings may seem odd, as both companies have worked with AI for years and have faced backlash before. Back in 2015, users noticed Google’s photo service had started mistaking Black people for gorillas. The company’s solution was to stop Google Photos from ever labeling any image as a gorilla, chimpanzee, or monkey, including pictures of actual primates. In 2016, Microsoft launched its Tay chatbot on Twitter. In less than a day, it began tweeting “Hitler was right,” among other things, as reported by Vox.
However, public awareness of AI and its potential consequences has dramatically increased, especially as tech companies have begun offering AI services to government agencies. For example, in June of 2018, employee pressure forced Google to back out of a Pentagon contract applying AI to drone surveillance footage.
About 4,000 Google employees created a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology,” as reported by The New York Times.
Google and Microsoft are not the only tech companies whose AI poses risks. Amazon’s facial recognition software, Rekognition, has famously misidentified Black people at disproportionate rates. And Facebook was fined after the Cambridge Analytica scandal revealed the company had failed to safeguard users’ information.
Yet neither Amazon nor Facebook has moved to formally warn investors of the problems AI may cause. The closest Amazon comes is a section in its annual report filed earlier this year titled “Government Regulation Is Evolving and Unfavorable Changes Could Harm Our Business.”
Amazon wrote, “It is not clear how existing laws governing issues such as property ownership, libel, data protection, and personal privacy apply to the Internet, e-commerce, digital content, web services, and artificial intelligence technologies and services.”
The company’s report sits awkwardly alongside Amazon’s Thursday blog post calling for Congress to develop legislation around facial recognition technology. Amazon’s actions suggest a focus on AI profit over ethical use, especially given that the company only called for legislation after it had already marketed and sold Rekognition to law enforcement.
In addition to warning their investors, Microsoft and Google have taken other steps. Microsoft argued for regulation of facial recognition technology before Amazon’s blog post, and Google has started engaging policymakers and academics on AI governance.
AI has the potential to be enormously useful, but it’s not free from bias and can cause real harm. The tech industry has a social responsibility here, especially when its AI software may exacerbate existing societal problems or introduce entirely new ones.