Artificial intelligence is becoming increasingly common in people’s day-to-day lives. That also means people are now more aware of what can happen if AI isn’t responsibly created.

For example, AI has the potential to churn out misinformation at scale, as seen when the formerly non-profit OpenAI tested a text generator and deemed it too dangerous to release. That's particularly dangerous when people don't know where to go to fact-check information.

A joint Harvard-MIT program hopes to combat some of AI's issues by working to ensure future AI developments are ethical. Today, the program announced the winners of the AI and the News Open Challenge, who will share $750,000 in total.

The challenge was put on by the Ethics and Governance of AI Initiative. Launched in 2017, it's a "hybrid research effort and philanthropic fund" backed by MIT's Media Lab and Harvard's Berkman Klein Center.

“As researchers and companies continue to advance the technical state of the art, we believe that it is necessary to ensure that AI serves the public good,” the AI initiative shared in a blog post. “This means not only working to address the problems presented by existing AI systems, but articulating what realistic, better alternatives might look like.”

In general, the selected projects look at technology and its role in keeping people informed. Even a glance at a few of the winners makes clear that important work is being done.

For example, the MuckRock Foundation's project Sidekick is a machine-learning tool that will help journalists comb through massive sets of documents. Then there's Legal Robot, a tool that will mass-request government contracts and quickly extract data from them.

Some of the projects, like Tattle, are also tackling misinformation. That tool will specifically address misinformation on WhatsApp and support fact-checkers working in India.

This isn't the first time the initiative has given out grants, but it is the first time it has done so in response to an open call for ideas.

“It’s naive to believe that the big corporate leaders in AI will ensure that these technologies are being leveraged in the public interest,” the initiative’s director, Tim Hwang, said, according to TechCrunch. “Philanthropic funding has an important role to play in filling in the gaps and supporting initiatives that envision the possibilities for AI outside the for-profit context.”