On Wednesday, researchers from Google, Facebook, Microsoft, and top universities — including Turing Award winner Yoshua Bengio — published an open letter calling on Amazon to stop selling its facial recognition tech to police. Amazon’s Rekognition system is perhaps one of its most infamous tools. In May 2018, documents obtained by the American Civil Liberties Union (ACLU) of Northern California revealed Rekognition was in use by police in Orlando and Oregon. Then in October of that same year, documents obtained by the Project on Government Oversight (POGO) revealed Amazon pitched Rekognition to US Immigration and Customs Enforcement (ICE). This came after studies uncovered issues with Amazon’s program. In July 2018, the ACLU found Rekognition incorrectly matched 28 members of Congress to mugshots — six were members of the Congressional Black Caucus. In the letter, the researchers looked at one particular study that found Rekognition produced significant errors when trying to recognize...
This past Wednesday, Amazon announced in a blog post that it is placing a one-year ban on police use of its facial recognition technology. Part of its statement reads: “We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.” This announcement appears to be a response to the protests and uprisings sparked by recent police killings and brutality. Although the statement postpones Amazon’s endorsement of this technology for law enforcement for a year, it doesn’t address what will happen once the ban expires. On the heels of this announcement, many were outraged at the time limit placed on the ban, expressing that it’s not enough to simply forbid the sale of...
Facial recognition has been quietly unfolding across the United States for years. Now, increased public awareness has turned facial recognition into a hot political issue that may enter the presidential race. Earlier this month, Bernie Sanders became the first 2020 presidential candidate to call for a ban on police use of facial recognition technology. While some of the impacts of facial recognition cannot be reversed, its growing political significance may at least help communities of color escape some of its worst effects. Sanders’ position on facial recognition draws from bans that have occurred this year in San Francisco, Oakland, and Somerville, Massachusetts. In each city, activists focused on pointing out facial recognition’s potential for introducing widespread, mass surveillance of already vulnerable communities, as may already be occurring in Chicago and Detroit. A spokesperson for Sanders’ campaign told Recode: “Police use of facial recognition software is...
In February 2018, Amazon acquired Ring, a smart doorbell company, in a $1 billion deal. Amazon followed that acquisition only three months later by launching a new app called Neighbors through Ring. Describing itself as the “new neighborhood watch,” Neighbors is designed to allow users to receive real-time crime updates from their neighbors. On the surface, it seems like an innocent project. However, Neighbors is a strong example of how surveillance companies like Ring manufacture paranoia to sell back to you. Neighborhood watch relies on the idea that you never know what’s going to happen, so you need to remain prepared. By flooding phones with crime updates and nothing more, apps like Neighbors can create a feeling of unease that often relies less on facts and more on pre-existing distrust of people of color. As reported by Motherboard, videos posted on Neighbors disproportionately show people of color, with descriptions relying on racist language or assumptions. After...
On the surface, Ring doorbells may seem like a practical invention. After all, they allow you to see what’s going on outside your home, and you never know what’s going to happen — but who else is watching? Since acquiring the home security company Ring in 2018, Amazon has quietly extended its efforts to further privatize surveillance. Last year, Amazon managed to do so with its launch of Neighbors by Ring, which describes itself as the “New Neighborhood Watch”. On the app, every single post has to do with crime, and people are able to directly share their Ring footage. The issue with the concept of a neighborhood watch is that it encourages notions of who belongs in a place and who is inherently suspicious. Often, Black people are relegated to the latter category. According to a Motherboard report, racial profiling is prevalent on the Amazon/Ring technology. Using VICE’s offices in Williamsburg, Brooklyn as a home address with a default 5-mile neighborhood radius, Motherboard...
From Snapchat filters to FaceApp’s ability to age you, people love apps that turn them into other things. Recently, researchers capitalized on people’s eagerness to see themselves cast in a new light with AI Portraits. On the site, people uploaded a selfie, which was then turned into a classical portrait — all thanks to artificial intelligence. Although the site is still down after crashing last week due to increased traffic, researchers had intentions beyond just giving you something fun to do. They also wanted to reveal how bias within artificial intelligence functions. Based out of the MIT-IBM Watson AI Lab, researchers trained their model using 45,000 paintings. Although the portraits fed to the model spanned multiple eras, the dataset focused on 15th-century Europe, and that became quite clear through the portraits that the AI churned out.
This article was originally published on 07/15/2019. There isn’t an area of your life that technology doesn’t impact. Although people tend to think of digital technology as something that stands alone, the reality is it didn’t just pop up out of nowhere. Generally, like the internet, tech can be used to give people access to things they wouldn’t otherwise have, like the ability to locate new information through search engines. While doing so, it can also reinforce social inequalities without even trying. Consider, for instance, how Google’s search engine was found to quietly reinforce racism. While revelations around tech and its impacts on our society are a big part of our public conversation now, they aren’t too surprising, since tech is an industry that doesn’t reflect the diversity of the people who use the products companies build. Tech companies first started releasing diversity reports in 2014, and since then they’ve proven what most people already knew: That it’s a white, male...
As awareness around facial recognition continues to grow, a primary concern has been its potential to open up new frameworks for mass surveillance. That concern grew even more pressing as people realized that facial recognition could potentially be used in body cameras, essentially creating roving, real-time surveillance systems on the chests of police. On Thursday, Axon — the company that created the Taser and supplies 47 out of the 69 largest police agencies in the United States with body cameras and software — announced a ban on the use of facial recognition on its devices. Although this can certainly be considered a temporary victory, Axon’s announcement must be carefully analyzed within its social context, the company’s choice of words, and its own history. Axon’s decision comes out of the first report of the AI and Policing Technology Ethics Board, which the company formed in April 2018. The board was developed to lead Axon in ethically developing products and...
When most people think of widespread surveillance, the government immediately comes to mind. It’s a fair association, given programs ranging from the FBI’s Counterintelligence Program (COINTELPRO), which targeted Black activists and even Black-owned bookstores, to the present-day Countering Violent Extremism program targeting Muslim youth. However, as technology advances, private companies are getting their hands deeper and deeper into the business of surveillance. Recently, Amazon patented “Surveillance-as-a-Service,” a technology pitched as a home security system, according to Smart Cities Dive. The patent is for a drone that can “perform a surveillance action at a property of an authorized party.” Its capabilities would include photographs, video, infrared, thermal scanning, night-vision, and audio. The patent claims that areas around the drone would be geo-fenced, so it couldn’t pick up any data. However, Amazon has often been criticized for its home devices gathering information...
Many tech companies — including Microsoft, Google, and Amazon — have produced object recognition algorithms. This form of artificial intelligence is meant to do exactly what it says: recognize objects. It sounds like something that can’t be messed up, but a recent study found that object recognition is worse at identifying items from lower-income countries. The study was conducted by researchers — Terrance DeVries, Ishan Misra, Changhan Wang, and Laurens van der Maaten — from Facebook’s AI Lab. The team focused on analyzing five popular object recognition algorithms: Microsoft Azure, Clarifai, Google Cloud Vision, Amazon’s Rekognition, and IBM Watson. The global dataset included 117 categories focusing on common household items, like shoes and soap. Researchers also made sure to diversify both household incomes and geographic locations. Researchers found that the difference in accuracy was striking. The object recognition algorithms had an error rate roughly 10 percent higher when...
Facial recognition has the potential to introduce continuous, mass surveillance throughout the United States. Vulnerable communities — including Black people, religious minorities, and other communities of color — are especially likely to be harmed by facial recognition’s deployment. Amazon is perhaps one of the most infamous participants in facial recognition software development. But on Wednesday, Amazon shareholders failed to pass two resolutions concerning the company’s facial recognition software, Rekognition. Although the proposals were non-binding — meaning Amazon could have rejected the vote’s results — passing them would have still sent a message. The first proposal was about stopping sales of Rekognition to the government, and the second demanded an independent review of the program’s civil and human rights impacts. Unfortunately, the vote doesn’t come as a huge surprise. As noted by TechCrunch, CEO Jeff Bezos retains 12 percent of the company’s stock. He also has the...
Across the United States, local governments have held discussions about facial recognition. Last week, San Francisco banned government use of the technology, while cities like Oakland, California, and Somerville, Massachusetts, are exploring doing the same. Each of those cities began looking closely at facial recognition due to the danger it poses to Black and brown communities. San Francisco’s own bill stated, “The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring.” It’s important that local communities are starting conversations about facial recognition tech and the harms that come with it, but the conversation also needs to occur at a higher level. After all, the risks that civil rights and privacy advocates highlight around facial recognition include continuous, mass surveillance...
There have been some big moves made lately in the fight against facial recognition technology. Last week, San Francisco banned local government use of the technology completely. Now, the focus has turned to Amazon’s infamous Rekognition program. At the company’s annual meeting on Wednesday, Amazon’s own shareholders will vote on two proposals: one to stop the sale of Rekognition to the government, and the other to require an independent review of its civil and human rights impacts. In preparation for the vote, a coalition of privacy and civil rights advocates has written an open letter to Amazon’s shareholders. On May 22, the American Civil Liberties Union (ACLU) will present the open letter at Amazon’s meeting by invitation of shareholders. The presentation will mark one year since the ACLU first revealed how far Amazon’s Rekognition program had gone. Now, the letter is open for other groups and individual consumers to sign on. Within it, the groups focused on addressing Amazon’s...
We live in a world that is bursting with cameras. There is some comfort in thinking that even if our activities are recorded, nobody watches the videos until long after we’ve gone. But that’s not the case anymore. The application of real-time face surveillance allows authorities to pick you out from a crowd while you’re still in it. It may sound like a distant dystopia, but a report from Georgetown says that for millions of Americans, it may be an imminent reality. The report homes in on two cities in particular: Detroit and Chicago. Within the report, Senior Associate Clare Garvie and Executive Director Laura M. Moy note that Detroit has a million-dollar system that “affords police the ability to scan live video from cameras located at businesses, health clinics, schools, and apartment buildings.” As conversations around facial recognition and surveillance pick up, it is important to remember that Black people and other communities of color are particularly vulnerable....
On Tuesday, San Francisco officially made history as the first city in the United States to ban government use of facial recognition technology. In a reported 8-to-1 vote, the city’s Board of Supervisors passed the Stop Secret Surveillance Ordinance. The new law restricts all city departments from using facial recognition technology and requires board approval to purchase any new surveillance devices. The Stop Secret Surveillance Ordinance expressed concerns around facial recognition’s potential to exacerbate pre-existing social issues, such as anti-Blackness and over-policing of vulnerable communities. The proposal itself noted that the “propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits,” going on to specifically cite concerns around continuous government monitoring. The coalition supporting the ordinance — made up of civil rights, racial justice, LGBTQ rights, homeless, and immigrants’ rights...