In November 2025, the Competition Commission (the Commission) of South Africa released a report on the Media and Digital Platforms Market in the country. The Commission had launched an inquiry into search engines, social media platforms, news aggregators and video-sharing platforms, examining the role of generative AI, digital advertising technology and the relationships between news media, government and business. The inquiry was driven by concerns that features of the digital platforms that distribute news content and facilitate digital advertising may distort or restrict competition, particularly in relation to advertising revenue, news distribution and the sustainability of news organisations, with implications for news quality and consumer choice. The headline outcome of the Commission’s process was a settlement by which Google agreed to pay the equivalent of USD 41 million to South African media houses to support local media producers, in response to concerns that it profited from local news content without providing fair compensation.
This development underscores the new threats facing press freedom globally. The biggest threats to journalism have shifted from the imprisonment of journalists and the issuance of censorship decrees under authoritarian governments to a more obscure erosion of press freedom, in which the concept of truth itself is under attack. Information manipulation, supercharged by AI, is flooding public discourse with synthetic content designed to confuse and polarise.
Trust in institutions is being actively undermined, including by actors who benefit from the “liar’s dividend,” whereby politicians falsely dismiss unfavourable stories about them as fake news or deepfakes. Meanwhile, the independent media organisations best positioned to counter this tide are themselves fighting for survival, caught between collapsing advertising models, platform dependency and audiences conditioned to expect information for free – the very issues the Commission sought to address in South Africa.
UNESCO's World Trends Report 2022–2025 shows that press freedom has experienced its steepest decline since 2012, a deterioration the report compares to the most unstable periods of the twentieth century, including two world wars and the Cold War. This is a grave danger to democratic societies that we need to confront.
AI as both a risk and a possibility
While AI-generated disinformation is already being deployed at scale to fabricate quotes, clone voices and produce synthetic imagery, AI also offers tools that newsrooms can adopt to improve journalism, such as automated fact-checking, translation at speed and data analysis that would ordinarily take months. However, for AI to serve as an effective tool for independent journalism and press freedom, the development and deployment of the technology requires the input of journalists, human rights advocates and the communities most affected by information disorder. Further, we need a human rights-based approach for the governance of AI.
This is precisely the kind of conversation I was grateful to be part of during the Being Human When Digital Atlantic Institute event in February 2026, where we explored the idea of AI and hope and what it means to preserve human agency and achieve grassroots justice in an increasingly algorithmic world. The session reinforced the idea that collective stewardship is central to preserving accountability in the development of AI.
Why World Press Freedom Day still matters
World Press Freedom Day 2026 is an invitation to journalists, technologists, policymakers and citizens to stop taking press freedom for granted and recognise it as the infrastructure on which every other right depends. The Atlantic Fellows community is uniquely positioned to contribute to this work. Across disciplines and geographies, Fellows are already building bridges between the silos that too often keep communities concerned with journalism, technology and human rights apart. The task now is to move from dialogue to supporting durable systems that protect the information ecosystems on which all of us depend.
About the author
Fola Adeleke is the executive director and co-founder of the Global Center on AI Governance. Fola leads the knowledge hub, the African Observatory on Responsible AI. He holds a doctorate in international investment law and human rights from the University of Witwatersrand and completed his post-doctoral research as a Fulbright Scholar at the Columbia Center on Sustainable Investment at Columbia University and the Human Rights Program at Harvard University.
Fola is also an associate professor at Wits Law School South Africa, an Atlantic Fellow for Social and Economic Equity at the London School of Economics and an assistant professor in commercial law at the Sobey Business School at St Mary's University in Canada. Fola currently serves as a commissioner for the Lancet Commission on AI and HIV.
Being Human When Digital, a discussion at the XR Lab, Rhodes House.
In February 2026, Dr. Adeleke and Kay Firth-Butterfield — author of "Co-existing with AI," CEO of Tech for Good Advisory, and former Head of Artificial Intelligence at the World Economic Forum — explored the theme of AI and hope. Watch a short recording of the event at the Atlantic Institute XR Lab, Rhodes House, Oxford.


