Meedan’s mission will always be to help people make sense of the vast and growing body of knowledge our digital spaces have to offer. As the internet’s platforms, algorithms, and communities have evolved, so too has our approach. And major changes in how communities use available technologies to seek, share, and verify information have a profound effect on how we approach our partnerships, program design and technology.

After nearly seven years of focusing our energies primarily on information exchange and fact-checking in mobile messaging spaces, we now find ourselves confronting the next shift: the rise of generative AI.

Alongside many of our partners, we worry that AI tools like ChatGPT risk stripping agency and attention away from newsrooms and civil society organizations working in the public interest. We see commercial generative AI models steering users toward a centralized, homogenized base of information—one that frequently misrepresents facts, lacks local knowledge, and is governed by corporations driven by profit rather than public accountability.

As an organization dedicated to equitable access to knowledge, how do we respond to this sea change? Today we’re announcing our plan to pursue an alternative path, one that will allow the kinds of groups we partner with to harness the benefits of this new technology, while staying true to their values and missions.

Over the past 18 months, we have scoped and begun building technical infrastructure for an AI-augmented system that will empower newsrooms and civil society organizations to index and organize deep contextual information — drawing from their own reporting and research, social media insights, public records, and permissioned community members. On closed messaging apps, our chatbots will deliver succinct answers to user questions, incorporate community tips and questions, and guide readers to full-length articles from vetted sources.

The broader objective is to build a tool that puts the public interest first, and that does not feed more data to big tech.

Sources matter — and context does too

For some of us, this shift feels as consequential as the rise of social media did in the mid-2000s. We are moving from a world of search engines — where information is indexed and organized by source — to one dominated by chatbots. Search engines may have overwhelmed us with links, but at least those results had identifiable origins. While chatbots “give us an answer,” that answer may not always be correct. And with most commercial chatbots, it is difficult to verify their sources and impossible to audit their systems. The implication is that we should just trust them.

This change will affect human learning and curiosity in ways that we can only begin to imagine. What we know right now is that it puts at grave risk the hyperlink — the fundamental infrastructure of the collaborative and evidentiary internet. Already, the sourceless, link-free, and too often hallucinatory world of the AI-powered internet is profoundly destabilizing the practice of journalism and, more broadly, how we come to understand and interpret the world around us.

We know too that the world’s largest corporate AI systems are trying to address every question and serve a large, general population. This may sound good and even equitable on its face. But that “general population” doesn’t seem to live anywhere in particular. Industry-leading AI chatbots struggle to account for things like public health crises, natural disasters, and violent conflict when answering basic questions about a place or community. They typically fail to consider cultural, political, and linguistic nuances that might affect what types of information would actually be useful to a given user. And if you don’t write your prompts in English, you won’t always get an intelligible answer.

Some AI systems even have a specific political orientation. China-based DeepSeek won't discuss topics that displease the Chinese government, like the 1989 protests in Tiananmen Square. Elon Musk’s Grok AI chatbot, active on X, has repeatedly mentioned a “white genocide” allegedly taking place in South Africa, despite there being no evidence to support this claim.

The overtly political nature of these chatbots’ responses should come as no surprise. Big tech CEOs – or state-aligned actors, as the case may be – have every reason to build information platforms that suit their interests.

For Meedan, all this means that we need to double down on building a competitive alternative for partners that want to embrace the affordances of generative AI while staying true to their commitments to their communities and their own bottom lines.

Our AI solution will serve the public interest

We are building chatbots that will offer readers something radically different from ChatGPT: on-demand information drawn not from the wild web, but from trustworthy reporting, research, and vetted civic data sources curated by an organization that they trust. Chatbot responses will include citations and links, allowing users to follow their own curiosity and learn more about the issues. And none of this source material will be harvested by corporate giants like OpenAI or Google. The goal of this tool will be to serve the public interest, not corporate profit margins.

A key technology enabling this vision is retrieval-augmented generation (RAG). Rather than relying on whatever ChatGPT has to offer, a RAG system first retrieves contextually relevant material from an organization's archives, civic data, and user interactions, then grounds the model's response in that material. A foundation language model underneath provides the chatbots' conversational capabilities.
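To make the retrieve-then-generate flow concrete, here is a minimal sketch of the RAG pattern. Everything in it is illustrative: the toy keyword-overlap retriever stands in for a real vector-embedding search, and the documents, source names, and function names are hypothetical, not Meedan's actual implementation.

```python
# Minimal RAG sketch: retrieve relevant documents, then assemble a
# source-grounded prompt for a foundation language model.
# The keyword-overlap retriever is a stand-in for embedding search.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    return sorted(
        documents,
        key=lambda doc: len(tokenize(query) & tokenize(doc["text"])),
        reverse=True,
    )[:k]

def build_prompt(query, documents):
    """Assemble a prompt that grounds the model in retrieved sources."""
    context = "\n".join(f"[{doc['source']}] {doc['text']}" for doc in documents)
    return (
        "Answer using ONLY the sources below, and cite them.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

# Hypothetical organizational archive.
archive = [
    {"source": "local-report-2024",
     "text": "The clinic on Oak Street offers free flu shots every Saturday."},
    {"source": "city-records",
     "text": "Road repairs on Main Street begin in March."},
]

query = "Where can I get a flu shot?"
top = retrieve(query, archive)
prompt = build_prompt(query, top)
# `prompt` would then be sent to a foundation language model, whose
# answer is returned to the reader with the citations intact.
```

Because the generation step only sees the retrieved passages, the model's answer stays tied to sources the organization has vetted, and the citations survive into the reply.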

Coming soon: Hyper-local information revolutions

If we are successful, our new product will spark hyper-local revolutions in how people get – and share – trustworthy information about the issues that affect them most.

Check, our award-winning open-source software, has given us a ten-year foundation from which to build this new infrastructure. Check allows organizations to gather questions, observations, and information from community members through SMS and messaging apps and analyze that data to reveal trends. And it already enables organizations to structure their data to train chatbots that help scale news distribution and reader engagement. Our new product will do all this and more, incorporating public data and other key resources that we believe can radically change the way such organizations engage with their audiences, and the way people in every community access and exchange information.
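One way to picture the data-structuring step described above: community question-and-answer exchanges gathered through a tipline can be converted into structured records a chatbot can learn from or retrieve against. This sketch is purely illustrative; the field names (`question`, `fact_check`, `source_url`) are hypothetical, not Check's actual schema.

```python
# Hypothetical sketch: turning tipline exchanges into structured
# prompt/completion records. Field names are illustrative only.
import json

tipline_exchanges = [
    {
        "question": "Is the water safe to drink after the flood?",
        "fact_check": "Yes. The municipal utility confirmed that tap water "
                      "passed safety tests.",
        "source_url": "https://example.org/water-report",  # placeholder URL
    },
]

def to_training_examples(exchanges):
    """Yield prompt/completion records, keeping the source link attached."""
    for item in exchanges:
        yield {
            "prompt": item["question"],
            "completion": f"{item['fact_check']} (Source: {item['source_url']})",
        }

# Serialize one record per line (JSONL), a common format for such datasets.
records = [json.dumps(r) for r in to_training_examples(tipline_exchanges)]
```

Keeping the source URL inside each record is the design choice that matters here: however the data is later used, the link back to the original reporting travels with the answer.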

We've been working with our network of partners for almost two decades, which means that the global contexts we'll operate in are those of our partners, team members, and friends. In the coming days, we’ll share more about our initial efforts to pilot this new tool with select partner organizations.

We collaborated with 53 partner organizations worldwide to design and carry out our 2024 elections projects. We extend special gratitude to our lead partners in Brazil, Mexico and Pakistan, whose work we highlight in this essay.

Pacto pela Democracia, INE Mexico, Digital Rights Foundation

The 2024 elections projects featured in here would not have been possible without the generous support of these funders.

Skoll, SIDA, Patrick J. McGovern, SVRI
Tags
Artificial intelligence
Decentralized Internet
Community knowledge


Published on

July 14, 2025