With conversations around data protection and privacy becoming more frequent, big tech companies have to step up and participate. Now, it seems Google is taking concrete steps toward developing ethical AI.
Recently, Google introduced TensorFlow Privacy, a new tool that makes it easier for developers to improve the privacy of AI models. It’s an addition to TensorFlow, Google’s popular open-source framework for building machine-learning models that work with text, audio, images, and more.
TensorFlow Privacy uses a technique based on the theory of “differential privacy.” Essentially, this approach trains AI models in a way that prevents them from memorizing, and later exposing, the personally identifiable details of any individual in the training data. This is important because nobody wants AI to put all of their business into the world.
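The idea behind differentially private training can be sketched without TensorFlow Privacy itself. The core technique, often called DP-SGD, is to clip each individual example’s gradient so no single person can pull the model too far, then add calibrated noise before averaging. The sketch below is a minimal illustration of that technique in plain NumPy; the function name and parameters are illustrative, not TensorFlow Privacy’s actual API.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step (illustrative DP-SGD sketch).

    Clips each example's gradient to bound any single person's influence,
    then adds Gaussian noise scaled to that bound before averaging.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clip bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Noise is calibrated to the sensitivity, i.e. the clip bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# Example: three per-example gradients for a two-parameter model.
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2]), np.array([-5.0, 12.0])]
update = dp_sgd_step(grads)
```

Because every gradient is clipped and noise is added, the final model update reveals strictly limited information about any one training example, which is what keeps a person’s data from being encoded into the model.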
By developing this tool, Google is actually following the principles for responsible AI development that it outlined in a blog post last year. In the post, Google’s CEO Sundar Pichai wrote, “We will incorporate our privacy principles in the development and use of our AI technologies.”
Differential privacy is already used by tech companies. Google itself incorporated it into Gmail’s Smart Reply, as noted by The Verge. That’s why, when the feature suggests how to complete a sentence, it doesn’t surface anyone’s personal information.
This is also Google’s way of making sure the technology is available for everyone else. Google’s product manager Carey Radebaugh told The Verge, “If we don’t get something like differential privacy into TensorFlow, then we just know it won’t be as easy for teams inside and outside of Google to make use of it.”
There are still some kinks to be worked out with differential privacy, because the added noise can sometimes strip out useful or interesting patterns along with the private details. However, kinks can’t be worked out if nobody ever uses the tool.
Radebaugh told The Verge, “So for us it’s important to get into TensorFlow, to open source it, and to create community around it.”