If you make something, you usually want to show it off, but some creations can be so good that they should be kept to themselves. Falling under that category, OpenAI’s newest text generator is considered too dangerous to release.
OpenAI is a non-profit research center co-founded by Elon Musk and Sam Altman with a mission to “build safe [artificial general intelligence], and ensure AGI’s benefits are as widely and evenly distributed as possible.”
The company developed a new natural language model, GPT-2, trained on 40 gigabytes of internet text to predict the next word in a passage, according to TechCrunch.
Apparently, the system learned to do its job so well that researchers wrote, “Due to our concerns about malicious applications of the technology, we are not releasing the trained model.”
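GPT-2 itself is a huge neural network, but the core idea of next-word prediction can be illustrated with a toy model. The sketch below is a hypothetical, simplified illustration, not OpenAI's code: it builds a simple bigram model from a tiny sample text and predicts the most likely next word.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative corpus; GPT-2 learns from roughly 40 GB of web text instead.
corpus = "recycling is good recycling is common recycling helps the world"
model = train_bigram_model(corpus)
print(predict_next(model, "recycling"))  # -> "is"
```

Applied over and over, next-word prediction lets a model extend a prompt one word at a time; GPT-2 does the same thing, only with a vastly larger model and training corpus, which is how it produces passages like the one below.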
In one example, given the prompt, “Recycling is good for the world…No! You could not be more wrong!!”, GPT-2 continued with:
“Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources.”
According to TechCrunch, for every beneficial application of the system, OpenAI found harmful ones, such as generating fake news, impersonating people, or automating abusive and spam comments on social media.
Although some people were frustrated by OpenAI’s refusal to release the entire system, the company cited its own charter, which predicts that “safety and security concerns will reduce our traditional publishing in the future”.
OpenAI’s policy director, Jack Clark, said the organization’s priority is “not enabling malicious or abusive uses of the technology”, as reported by TechCrunch. Some observers noted that OpenAI was setting a “new bar for ethics in AI” by thinking ahead to possible misuses of its system.
The company will revisit the issue of GPT-2’s release in six months, so it may yet reach a different decision.