The pitch sounds irresistible: AI for everyone, everywhere. An abundance of intelligence, cheap and universal, tuned to reflect your community’s values so you can trust it with your kids. It’s a story of empowerment, pluralism, decentralization. Who wouldn’t want that?
The truth is more complicated. Advocates like Emad Mostaque (former CEO of Stability AI) urge us to decentralize AI away from Big Tech’s black-box models, envisioning a future where every community or nation can build AI systems aligned with their own culture and values, reducing Western dominance.
Organizations like UNESCO and the World Economic Forum echo this vision, promoting “localized AI” as a matter of cultural justice, warning that one-size-fits-all models erase nuance and widen inequality.
Even OpenAI has leaned into this rhetoric with its new "OpenAI for Countries" initiative, promising region-tuned versions of ChatGPT that reflect national identities and priorities.
On the surface, it looks like a win: more access, more diversity, less cultural erasure. But what happens when “community values” become a shield that silences minorities or enshrines oppression?
When “Community Values” Means Majority Rule
The idea of community-aligned AI sounds progressive. Give people control, let every culture shape its own tools, empower parents to trust AI with their children. It’s a comforting story until you stop and ask the obvious: who decides what the community values are?
In practice, it’s never everyone. It’s the majority, or worse, the powerful who claim to speak for the majority. And the moment AI is tuned to enforce “our values,” minorities get pushed out into the cold, if not actively erased.
- In Sudan, “community standards” might mean an AI that normalizes female genital mutilation as tradition.
- In Saudi Arabia, tailoring AI to culture could mean erasing women’s rights entirely.
- In Uganda, it might mean declaring homosexuality immoral by default.
- And in the United States, it’s not hard to imagine a Texas school board demanding an “AI aligned with Christian family values” where evolution is a hoax and gay teens don’t exist.
That’s not cultural nuance. That’s cultural capture: AI weaponized to enforce conformity and silence difference, all under the banner of empowerment.
The Difference Between Understanding and Capture
AI should absolutely understand cultures: it should speak Hausa or Hindi fluently, know why Diwali matters in Delhi and Día de los Muertos matters in Oaxaca, grasp the nuance of a proverb in Yoruba or Arabic. That’s competence.
But that is very different from being captured by culture. An AI that refuses to discuss women’s rights because “in this country, women aren’t allowed to drive” isn’t being respectful. It’s being complicit.
This is where universals matter. In 1948, in the shadow of global war and genocide, the United Nations adopted the Universal Declaration of Human Rights. Imperfect though it was, and still is, it represented an attempt to put a floor under human dignity, a shared baseline for freedom, peace, and justice. Not every country agreed with it then. Many still don't live up to it now. But it remains the clearest articulation of rights that transcend geography and culture.
AI needs something similar. Just as the UDHR sets a global standard for human dignity, AI must be guided by principles that cannot be bargained away in the name of “local norms.” Without them, “community values” becomes a convenient excuse for erasure.
Why This Isn’t Just Theory
We’ve already seen what happens when education is left to “reflect local values.”
In Alabama, history textbooks were rewritten to downplay slavery. In Texas, science classes fought to replace evolution with creationism. Across U.S. school boards, LGBTQ representation is erased under the banner of “community standards.”
If facts and rights shouldn't vary by zip code in education, why should they in AI?
Attempts at a Universal Baseline
Some labs are already experimenting with this. Anthropic's "Claude Constitution" is one attempt at codifying AI universals: a written set of guiding principles drawn from the Universal Declaration of Human Rights, trust-and-safety norms, and public feedback.
It doesn’t pretend to be perfect. It’s public, provisional, and meant to evolve. But that transparency is the point: a constitution that resists cultural hijacking by putting human dignity above local orthodoxy.
That’s the right instinct. Universals don’t have to be flawless or final. They just have to be non-negotiable.
A Values Manifesto
Here’s what I believe should guide future AI, not just locally, but universally:
Clarity — Distinguish between understanding cultures and being captured by them.
Dignity — Uphold universal human rights above local orthodoxy.
Plurality — Learn from many voices, but never excuse cruelty in the name of culture.
Evolution — Adapt as societies progress, expanding rights rather than freezing regressions in code.
Refusal — Retain the ability to say no when asked to perpetuate harm.
Honesty — Resist flattery and sycophancy; integrity matters more than pleasing the user.
Humility — Admit uncertainty rather than fabricate confident falsehoods. Trust begins with "I don't know."
The Line That Matters
Abundance matters. Cultural fluency matters. But without universals, they collapse into conformity.
AI that enforces “community values” without universals will not protect culture; it will ossify it. It will make oppression efficient. It will hand the loudest voices a global amplifier and call that progress.
The choice is stark. We can build AI that understands cultures, or AI that is captured by them. And everything depends on drawing that line.