A Michigan-based venture capital fund focused on investing in businesses owned by people of color has joined forces with a renowned investment firm, paving the way for expanded growth and a more significant impact for both organizations. According to Crain’s Grand Rapids Business, the New Community Transformation Fund has partnered with Michigan Capital Network, which will manage the fund’s daily operations and offer strategic guidance. Discussions between the two Grand Rapids firms began months ago. Paul D’Amato, CEO and managing director of Michigan Capital Network, said the partnership “made sense” for both organizations from the start of the process. “We share their goal of creating access to capital for people of color because we know that business growth, entrepreneurship and capital investment strengthens our economy and makes our state more attractive to other businesses looking to relocate,” D’Amato said, Crain’s Grand Rapids reported. Through the affiliation, which gives the venture...
Bouncing off the plan to create a modern-day Green Book, the guide African Americans used to find places they could safely stay during the Jim Crow period, ANJEL Tech was born. Created by James A. Samuel, Jr. alongside his wife Evelyn Samuel in October 2020, the powerful body camera is designed to turn one’s smartphone into a lifeline. As a Black woman, when ANJEL Tech was brought to my attention, I was immediately intrigued. Every day when I scroll through social media, it seems another Black woman is missing with no trace of her whereabouts. According to reports, nearly 183,000 Black individuals went missing last year alone, and of those, nearly 100,000 were Black women. These statistics are frightening when you consider that the cries of these individuals are often met with silence and their families are left with unanswered questions. ANJEL Tech puts the power back into the hands of victims and their families by storing live streams of the incident in a...
Facial recognition has been quietly unfolding across the United States for years. Now, increased public awareness has turned facial recognition into a hot political issue that may enter the presidential race. Earlier this month, Bernie Sanders became the first 2020 presidential candidate to call for a ban on police use of facial recognition technology. While some of the impacts of facial recognition cannot be reversed, its growing political significance may at least help communities of color escape some of its worst effects. Sanders’ position on facial recognition draws from bans enacted this year in San Francisco, Oakland, and Somerville, Massachusetts. In each city, activists focused on facial recognition’s potential to introduce widespread, mass surveillance of already vulnerable communities, as may be occurring in Chicago and Detroit. A spokesperson for Sanders’ campaign told Recode: “Police use of facial recognition software is...
In February 2018, Amazon acquired Ring, a smart doorbell company, in a $1 billion deal. Amazon followed up that acquisition only three months later by launching a new app called Neighbors through Ring. Describing itself as the “new neighborhood watch,” Neighbors is designed to allow users to receive real-time crime updates from their neighbors. On the surface, it seems like an innocent project. However, Neighbors is a strong example of how surveillance companies like Ring manufacture paranoia to sell back to you. Neighborhood watch relies on the idea that you never know what’s going to happen, so you need to remain prepared. By flooding phones with crime updates and nothing more, apps like Neighbors can create a feeling of unease that often relies less on facts and more on pre-existing distrust of people of color. As reported by Motherboard, videos posted on Neighbors disproportionately show people of color, with descriptions relying on racist language or assumptions. After...
When it comes to artificial intelligence, developers make a lot of bold claims about what their programs can do. It has quickly become a part of our everyday lives, especially with the rise of the ever-controversial facial recognition technology. There’s even software advertised to read your emotions and predict criminality. What these types of claims show isn’t the endless possibility of AI, but how the technology is used to legitimize pseudosciences, or beliefs that claim to be based in science. Generally, the idea that AI can read your emotions is referred to as “affect recognition.” It builds on the pseudoscience of phrenology, which is an offshoot of physiognomy, or the idea that you can judge character based on someone’s appearance. As noted by Dr. Richard Firth-Godbehere, physiognomy helped to provide the scientific justification for many prejudices. For example, U.S. physician James W. Redfield’s 1852 book, Comparative Physiognomy, compares various groups to...
This article was originally published on 07/25/2019 For many communities of color in the United States, surveillance is not trapped in the past, but a part of everyday life. Over time, early forms of surveillance such as slave ledgers and plantation structures transformed into federally sponsored methods like the infamous Counter Intelligence Program (COINTELPRO). As technology continues to develop, surveillance does too, and although it existed long before computers, the digital age has allowed it to expand rapidly, putting communities of color at increased risk. When San Francisco began considering a ban on facial recognition, the threat of widespread government surveillance continued to pop up as a cause for concern. “While surveillance technology may threaten the privacy of all of us, surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion,...
“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” – Ian Malcolm, Jurassic Park As tech continues to advance, what was once imagined in science fiction and fantasy has become a reality. Cars drive themselves, people are speaking to digital assistants, robot cops exist, and a camera’s eyes are capable of watching you almost everywhere you go. However, as companies continue to invest in innovation, there’s a lingering question about the ethics involved. For the most part, present conversations around ethics in tech focus on the use of data and artificial intelligence. That’s because neither of those things can be escaped. You are probably a data point in somebody’s research somewhere, and you most likely don’t even know it. Meanwhile, artificial intelligence shapes your daily life, from the ads you see to your credit score. Perhaps one of the biggest issues with tech today is that people are in a rush to create without fully...
This article was originally published on 07/03/2019 In June, Somerville, Massachusetts became the second city in the United States to ban facial recognition technology. Originally introduced back in May, the “Face Surveillance Full Ban Ordinance” places a moratorium on government use of facial recognition and other “remote biometric surveillance systems” until the state develops a framework for its use. “[T]he benefits of using facial surveillance, which are few and speculative, are greatly outweighed by its harms, which are substantial,” the bill says. The bill touches on concerns previously cited by advocates, comparing the broad application of face surveillance in public spaces to requiring everyone to carry around and display a photo I.D. at all times. Under the bill, any data collected through facial recognition would be considered “unlawfully obtained.” That means it can’t be used in trials and should be deleted immediately. In addition, if the banned technology is used on...
As awareness around facial recognition continues to grow, a primary concern has been its potential to open up new frameworks for mass surveillance. That concern grew even more pressing as people realized that facial recognition could potentially be used in body cameras, essentially creating roving, real-time surveillance systems on the chests of police. On Thursday, Axon — the company that created the Taser and supplies 47 of the 69 largest police agencies in the United States with body cameras and software — announced a ban on the use of facial recognition on its devices. Although this can certainly be considered a temporary victory, Axon’s announcement must be carefully analyzed within its social context, the words the company used, and its own history. Axon’s decision comes from the first report of an AI and Policing Technology Ethics Board that the company originally formed in April 2018. The board was developed to lead Axon in ethically developing products and...
When most people think of widespread surveillance, the government immediately comes to mind. It’s a fair association given programs ranging from the FBI’s Counter Intelligence Program (COINTELPRO), which targeted Black activists and even Black-owned bookstores, to the present-day Countering Violent Extremism program targeting Muslim youth. However, as technology advances, private companies are getting their hands deeper and deeper into the business of surveillance. Recently, Amazon patented “Surveillance-as-a-Service,” a technology pitched as a home security system, according to Smart Cities Dive. The patent is for a drone that can “perform a surveillance action at a property of an authorized party.” Its capabilities would include photographs, video, infrared, thermal scanning, night vision, and audio. The patent claims that areas around the drone would be geo-fenced, so it couldn’t pick up data beyond those bounds. However, Amazon has often been criticized for its home devices gathering information...
It’s almost impossible to go a day without encountering a single video camera. After all, there are almost 50 million surveillance cameras in the United States alone. However, people usually don’t pay much attention to cameras because the average one isn’t being continuously monitored. Now, a report by the American Civil Liberties Union has found that new technological developments may lead to a system of continuous, mass surveillance. It opens up new questions not only about privacy but also about the act of turning everyday surveillance into a business. The report — “The Dawn of Robot Surveillance” — focused on the $3.2 billion industry known as “video analytics.” According to the report, these systems allow computers not only to record footage but to analyze it in real time. These machines will be able to watch people for “suspicious” behavior, and their deployment is already happening. Real-time face surveillance is currently used in both Chicago and Detroit. Seeing the possible...
United States Customs and Border Protection (CBP) gathers a trove of information on people coming in and out of the country. Recently, CBP has started testing facial recognition at airports. Plus, the Trump administration now requires social media information from visa applicants. All of this information is extremely sensitive, so a leak is the last thing anyone wants. Unfortunately, CBP confirmed that a data breach exposed photos of travelers and license plate images, according to TechCrunch. It seems that a subcontractor is responsible for the leak. According to TechCrunch, CBP said in a statement: “CBP learned that a subcontractor, in violation of CBP policies and without CBP’s authorization or knowledge, had transferred copies of license plate images and traveler images collected by CBP to the subcontractor’s company network.” A spokesperson told the outlet that the breach has impacted “fewer than 100,000 people” through a “few specific lanes at a single...
Facial recognition has been widely criticized for the risks it poses to vulnerable communities. The technology typically reinforces pre-existing social biases, as seen in its inability to read anyone who isn’t a white man. It also poses severe privacy risks. Facial recognition makes it easy for government agencies to develop continuous, mass surveillance of vulnerable communities. With all of these risks, it’s not a technology that most people would want to use on kids. Despite that, Lockport City School District in New York is trying to test a facial and object recognition system called “Aegis.” In September, the district used $1.4 million of the $4.2 million it received in funding through the Smart Schools Bond Act to install the system, the Lockport Journal reported. Superintendent Michelle Bradley announced plans to begin testing the system on June 3. Bradley described the test as an “initial implementation phase.” That means the school wanted to test the system for any necessary...
Facial recognition has the potential to introduce continuous, mass surveillance throughout the United States. Vulnerable communities — including Black people, religious minorities, and other communities of color — are especially likely to be harmed by facial recognition’s deployment. Amazon is perhaps one of the most infamous participants in facial recognition software development. But on Wednesday, Amazon shareholders failed to pass two resolutions concerning the company’s facial recognition software, Rekognition. Although the proposals were non-binding — meaning Amazon could have rejected the vote’s results — passing them would have still sent a message. The first proposal was about stopping sales of Rekognition to the government, and the second demanded an independent review of the program’s civil and human rights impacts. Unfortunately, the vote doesn’t come as a huge surprise. As noted by TechCrunch, CEO Jeff Bezos retains 12 percent of the company’s stock. He also has the...