The Shadow Code: A Southern Imagination for AI and Culture

Introduction
Some might see artificial intelligence as nothing more than glowing screens and lines of code. But beneath every algorithm pulses an invisible heartbeat: culture. From our perspective, every line of code is shaped by values, assumptions, and biases—often unnoticed, yet always present. While AI may claim to be universal, we know that its imagination is deeply local. Built largely in the Global North, trained on data in dominant languages, and guided by economic and cultural norms unfamiliar to much of the world, AI does more than replicate patterns—it reinforces hierarchies. For those of us standing at the crossroads of creativity and policy, a deeper question emerges: Who holds the pen that scripts AI’s future—and whose narratives are erased from its margins? This is the central challenge in an era where imagination is becoming privatized and politicized.

In the race for AI supremacy, many nations have begun loosening traditional protections around data ownership and creative rights. A quick glance at policy documents on OECD.ai reveals the urgency of global competition in this domain. Across the board, governments appear eager to ensure that their private sectors gain a foothold in this fast-moving market. Japan, for instance, has enacted Article 30-4 of its Copyright Act, allowing AI developers to use copyrighted content without prior authorization—even for commercial purposes. The United Kingdom has proposed a similar exception, permitting the use of protected material unless rights holders explicitly opt out. These developments reflect what some policymakers call a global shift toward “soft law” approaches, where voluntary guidelines replace enforceable regulation. We recognize that this direction aligns with the logic of today’s global economy. But for those of us concerned with culture, equity, and public interest, it raises a difficult truth: democratic debate no longer guides AI’s trajectory or its public accountability. Instead, it is increasingly concentrated in the hands of private actors—those equipped to move quickly, even if not always with broader societal reflection in mind.

Even the European Union’s AI Act—often described as the world’s most comprehensive AI regulation—classifies generative AI systems as “low-risk.” These systems are primarily subject to transparency obligations, such as indicating when content is AI-generated, but remain outside the scope of stricter oversight applied to high-risk applications.

From our perspective, this classification misses something fundamental. It overlooks the cultural weight of generative AI—its tendency to default to Northern languages, knowledge systems, and aesthetic values. What may seem low-risk from a regulatory standpoint can feel deeply consequential in cultural terms. The frameworks may not recognize it, but we feel the risk is already here—and growing.

We believe that the cultural risks posed by generative AI can’t be addressed by ethics alone, or by isolated policy tweaks. What’s needed is deeper, structural change—especially from the Global South. That change begins with a new kind of imagination: one that rethinks infrastructure, reclaims authorship, and reshapes governance itself.

1. The Infrastructure Divide
Some compare artificial intelligence to a technological revolution like electricity. The analogy captures its transformative effect, but it also conceals a major misunderstanding: unlike current and voltage, which behave the same way everywhere in the world, software that imitates intellectual acts—deciding, writing—follows no universal standard. Who owns the model therefore becomes the central question. From this perspective, the Global South is mostly still in the dark. Many discussions around AI focus on bias or representation, but the deeper problem is more basic: infrastructure. Computing power, access to clean training data, skilled people, and fast internet are still limited in many low- and middle-income countries—and large language models (LLMs) need all of those things at massive scale.

So we believe that even when local teams want to build something of their own, they often have to depend on infrastructure detached from their own cultural context, or on open-source models developed elsewhere. Local agency, in that sense, isn’t just limited by tools—it’s strangled by the very architectures designed to perpetuate this asymmetry.

What worries us even more is that AI policy in many of these regions often mirrors frameworks imported from the Global North. But those frameworks rarely reflect local cultures, languages, or ways of understanding the world. Without asking difficult questions or investing in real alternatives, we risk sliding into a new kind of digital colonialism—one where countries are told they belong to the future, yet have little say in shaping what that future looks like.

2. Cultural Representation and the Politics of Voice
From a cultural AI perspective, language models don’t just mirror the world—they shape it. As they learn from vast datasets of text, images, and audio, they absorb more than patterns of grammar or syntax. They internalize dominant worldviews. What is labeled as “normal,” “relevant,” or even “intelligent” is not defined by neutral algorithms, but by whose voices are most present—and whose are missing.

So we advocate for recognizing the cultural stakes here. This has deep consequences for how expression is defined, whose aesthetics are amplified, and which stories get carried forward. It’s not just about access to tools; it’s about the power to name, to describe, and to imagine in one’s own terms.

In current generative AI systems, dominant languages—particularly English—risk drowning out less widely spoken languages, minority voices, and Indigenous tongues. Irony in Turkish, for example, or the humorous and metaphorical textures of everyday interaction in other cultures, is either absent or poorly represented. Even when models are fine-tuned for non-Western use, the underlying logic and values remain anchored in training data sourced from the Global North. We therefore believe new ways of localizing the AI ecosystem are needed, especially given the risk of a new form of digital colonization.

Worse still, the aesthetics embedded in generative models often reflect Eurocentric canons: what is “beautiful,” “professional,” or “credible” is guided by visual and rhetorical patterns common to Western media. A popular example we encounter in Turkey is the prompt ‘draw me a wrestler’. While a local would expect a hefty man doing oil wrestling and wearing the traditional ‘kıspet’, the result is a standard image of an athlete. A similar case comes from Latin America or Central Asia: prompt ‘draw a healer,’ and the result is a doctor in a white coat. The shaman is rendered invisible. The result is not mere exclusion, but a slow cultural bleaching—where diversity is ground into algorithmic homogeneity.

Attempts to address this through “ethical design” or “bias mitigation” tend to operate within the system, adjusting parameters without questioning its epistemological core. But culture is not a variable to be tweaked—it is a living system of meaning, formed over centuries and still renewing itself. To reduce it to datasets, stripped of context and authorship, is to render it inert. More alarmingly, it means surrendering a large part of culture’s future transformation to computations run over a database.

For us, the challenge is clear: If AI is to serve humanity—not just market forces—then our focus must shift from simply adding more data to embracing more diverse ontologies. This means we have to move beyond seeking new inputs and start asking the difficult questions: Whose knowledge counts? Who decides what matters? And who owns the means of cultural reproduction in a digital age?

3. A Southern Model: Shared Infrastructure, Shared Imagination
We believe that the current AI landscape, marked by an asymmetry of power, voice, and infrastructure, is merely a temporary condition born from the initial rush toward this new technology. We hold an unwavering belief that this imbalanced start, which profoundly impacts all of humanity and the diversity of cultural expression, will evolve into a more inclusive state. But this is not a matter of demanding rights; instead, the Global South must imagine a radically different model—one that doesn’t contend with the Global North, but asserts its own priorities.

To achieve this, we envision a collaboration, open to the entire Global South, on both cultural and technical levels. This suggests that low- and middle-income countries can build shared computing capacity by pooling their material resources, their authentic cultural and linguistic values, their unique datasets, and their human expertise. This would allow them to undertake the massive infrastructure projects that seem impossible for any single nation to achieve alone. Perhaps, in the future, this could grow into a collective generative AI framework.

Such a model would begin with the development of open-source infrastructure, governed by multilateral public institutions rather than private corporations. Instead of replicating Western models, it would privilege local contexts: training datasets built from oral traditions, multilingual corpora, community archives, and region-specific knowledge systems. From our perspective, it would resist reducing culture to mere data points, honoring it instead as a living, co-authored ontology.

This approach would also challenge the current economics of AI development. As we’ve suggested, we like to think the alarming phenomenon of dominant, self-regulating giant platforms is a temporary condition, stemming from the rapid popularization of this technological field. In the model we envision, distributing the cost of compute, annotation, and model maintenance across a shared pool would allow countries to avoid duplicative efforts and focus instead on creative innovation. Crucially, this would also allow for greater sovereignty over how AI is used in creative industries and media (like image and text development), education, translation, and heritage preservation.

What gives us hope is that this vision is already gaining momentum, brought to life by pioneering efforts in the Global South. We see this spirit in the Amplify project in Sub-Saharan Africa, for instance, which is creating culturally rich, locally sourced training datasets by collaborating with health and education professionals. In Southeast Asia, the SEA-VL dataset reflects regional aesthetics and languages through vision-language models. India’s Sarvam-M initiative is building multilingual LLMs tailored to Indic contexts, while the UAE’s Falcon LLM offers a powerful open-source alternative grounded in regional priorities. These inspiring initiatives, alongside community-led collectives like EleutherAI, are living proof that culturally rooted, open-source AI infrastructures are not only feasible—they are already emerging.

Of course, technology and infrastructure are only part of the story. Such a model truly comes to life with political will—and a shared imagination. For us, this isn’t about issuing demands, but about starting a dialogue. It’s an invitation for governments, universities, civil society, and cultural actors to move beyond mere compliance with Northern agendas and to confidently design technological futures rooted in their own histories, needs, and aesthetics.

Now, this might sound like a utopian dream, but we believe it’s a vision grounded in tangible reality. The precedents are all around us: the South-South cooperation frameworks in agriculture and public health, open-access repositories across African universities, and multilingual AI initiatives in Southeast Asia. Our shared task now is to scale them, connect them, and fund them. We see this not as an act of resistance, but as a necessary step toward cultural and cognitive sovereignty.

4. Rethinking Governance: From Soft Law to Creative Sovereignty
Current approaches to AI governance—largely shaped by non-binding principles and corporate self-regulation—are insufficient for addressing the cultural dimensions of generative AI. While “soft law” may offer flexibility, something we understand is seen as crucial for the industry’s growth, it often lacks effective oversight, particularly in cross-border issues. Ethical guidelines focused on individual rights are useful, but they fall far short of addressing the risks to cultural diversity and heritage. In this form, and without binding measures to ensure their application, such guidelines function primarily as a PR tool. In our view, what is needed instead is a governance paradigm that recognizes culture not as collateral damage, but as the core of what is at stake.

Creative sovereignty—the right of communities to protect their cultural diversity and shape their own creative expression—demands not just damage control; it demands the proactive shaping of technological futures through inclusive and participatory cultural policies on digitalization. This means embedding the rights to cultural expression, linguistic diversity, and historical memory into the very architecture of AI systems.

We believe such a shift cannot be achieved through fragmented national initiatives alone. Especially in a world where technological supremacy is increasingly framed as a matter of national security, the space for pluralist and human-centered perspectives is rapidly shrinking. When technological advantage is guarded with a Cold War-like possessiveness, it becomes imperative for the Global South to unite in carrying its cultural heritage into the future.

It is for this reason that we see a unique and vital role for UNESCO. As the only global institution with a mandate to protect and promote cultural diversity, it is uniquely positioned to bring together states, civil society, and artists around a shared agenda. We feel that under its umbrella, the conversation has the best chance to transcend market logics and geopolitical rivalries to center the dignity of human creativity. If we are to imagine a different digital future—one rooted in solidarity, imagination, and justice—then we believe this is the right place to begin.

Conclusion: A Different Imagination Is Possible
Generative AI is not merely a technical system—it is a sculptor of collective consciousness. Its architectures encode ideologies; its outputs shape how we see, speak, and even dream. As we’ve argued, in a landscape where regulation is weak and representation is uneven, generative AI risks becoming another instrument of epistemic domination. But it doesn’t have to be this way.

We believe a different imagination is possible—one that rises from the Global South, not as a site of deficit, but as a reservoir of linguistic, aesthetic, and intellectual richness. This vision calls for courage, shared infrastructures, and governance that breathes with cultural pulse. Encouragingly, it is already unfolding in the community-led projects reclaiming AI as a tool for cultural flourishing, not flattening. What they need now is support, scaling, and solidarity.

This is why we look to a global dialogue, and why we feel an institution like UNESCO is so vital. Such a dialogue would rekindle the spirit of its historic Many Voices, One World report (1980), updating its call for a more just and participatory global communication order for our digital age. In an era of rising technonationalism, we need more than soft ethics; we need bold, plural, and creative policy. The conversation must move beyond what AI can do, and instead ask who it should serve, and whose stories it must carry forward.

 


© 2023 Berat KUZU