
Mission Statement
The Tarbell Center for AI Journalism supports journalism that helps society navigate the emergence of increasingly advanced AI.
In recent years, the pace of AI progress has been dazzling. Models have gone from barely being able to string a sentence together to writing entire research papers; from struggling to count to outperforming humans on many difficult math and science problems. Each new model release seems to shatter previous performance records: in 2024 alone, AI systems doubled their scores on leading scientific benchmarks.
Such performance leaps are now so routine they almost go unnoticed. But the implications could prove extremely significant. Almost half of all code on GitHub is now written with the help of AI, accelerating the pace of technological progress and AI development itself. Companies are actively replacing human workers with AI agents. And governments are increasingly relying on AI tools to help with decision-making, in both civilian and military capacities.
These developments come with considerable risks — some already materializing, others lurking on the horizon. Women find themselves harassed with deepfake nudes, people of color face discrimination from biased algorithms, and vulnerable individuals regularly fall prey to AI-enabled scammers. As models grow more sophisticated, many experts fear they could aid terrorists in unleashing new pandemics, empower malicious actors to carry out devastating cyberattacks, or enable authoritarian regimes to tighten their grip on society.
And should the developers of this technology achieve their ultimate goal, the implications would be unprecedented. Leading AI companies openly pursue the creation of artificial intelligence that rivals — or surpasses — human capabilities. Such goals may simply be marketing hype. But talk to people at those companies, and it becomes clear that they sincerely believe in this mission. And while they might be delusional, many experts in the field — not just company spokespeople — also believe AI companies are on track to build so-called “artificial general intelligence”, or “AGI”, perhaps within the next few years.
Such technology could completely reshape society. There would be profound benefits, particularly from accelerating scientific and technological progress. But there could be serious risks, too. AGI could upend the economy, leading to mass unemployment and societal unrest. It could permanently alter the global balance of power, essentially creating a new superweapon. And many experts — including Nobel laureate Geoffrey Hinton — fear that uncontrollable human-level AI could result in human extinction.
The future is not written in stone. It will be shaped by the choices we make today — as societies, as governments, and as individuals. Who develops this technology, what problems it’s designed to solve, who benefits from it, and how we mitigate its risks — all these questions hinge on decisions being made right now.
All of us ought to have a say in those decisions: the potential impact of AI is too significant to be left to any single group, no matter how well-intentioned or capable. But given the complexity of the technology, and the pace of progress, it is hard for decision-makers and the public to know what to do.
That is where journalism comes in. For the rest of the world to have a say, we need more information. We need to know more about the potential risks and their likelihood. We need a better understanding of how this technology actually works. And we need more scrutiny of the companies building this technology, the financiers seeking to profit from it, and the governments that are — or, as is more often the case, are not — regulating it.
Journalism, we believe, has a crucial role to play in society’s transition to a world with increasingly advanced AI. Quality journalism can demystify complex technologies, hold powerful entities accountable, and foster public discourse on critical issues. It can ensure that the development of AI doesn't happen in a black box, but in the light of public awareness and debate.
Our mission at Tarbell is to support such journalism with funding and education, much as other philanthropic efforts have supported journalism on other important topics. We hope our work will encourage society to grapple seriously with artificial intelligence, and help spur change that ensures any transition to a world with increasingly advanced AI happens as safely and equitably as possible.
While many of our supporters and funders believe AI is a transformative technology that may pose substantial risks, we are committed to supporting a diversity of viewpoints. Predictions of AI’s future importance have been wrong before, and could be wrong again. Many of the most speculative harms may never materialize. And policies implemented to prevent those harms could backfire by stifling innovation. Everything, in other words, is deeply uncertain. We believe the only way to resolve that uncertainty is through a well-rounded conversation that questions assumptions and considers all sides. Journalism must play a central role in fostering that conversation.
Journalism arguably took too long to treat the climate crisis seriously. That delay contributed to the public and policymakers taking the issue much less seriously than they should have, and squandered valuable time that could have been spent getting ahead of the crisis. We want to make sure the same mistake is not repeated with AI.
There is a significant chance that AI will shape the course of human history — for better or worse. At Tarbell, we're committed to ensuring that if that happens, journalism is there to illuminate the path forward and help steer us toward a better future. The stakes are too high for anything less.