Anthropic, South Africa and the emerging divide in AI governance
As frontier AI companies debate moral reasoning in their systems, governments across the Global South are still struggling to build credible institutions for AI oversight.

Two recent episodes have highlighted a widening divide in global artificial intelligence governance.
In San Francisco, Anthropic invited Christian theologians, priests and philosophers to discuss how its Claude chatbot should respond to grief, self-harm and questions surrounding its own possible “demise.” The closed-door discussions reflected the company’s growing focus on embedding moral reasoning into frontier AI systems.
In South Africa, meanwhile, the government withdrew its first national AI policy after journalists discovered that several academic citations in the document had been fabricated, apparently through the unverified use of generative AI.
The episodes reveal a growing imbalance in AI governance. While leading technology companies are increasingly shaping the ethical architecture of artificial intelligence, many governments are still struggling to establish the institutional credibility needed to regulate the technology itself.
The contrast also points to a broader geopolitical shift: AI governance is no longer only about regulation or innovation. It is increasingly a question of power, legitimacy and knowledge authority.
Speaking at the Africa Forward Summit in Nairobi on Tuesday, United Nations Secretary-General António Guterres called for “developing African capacity in artificial intelligence so that AI is shaped by African data, African languages, African researchers and African leadership.”
Anthropic’s theological turn
Anthropic has positioned itself as one of the most safety-focused companies developing frontier AI systems. Its Claude models operate under a detailed internal “constitution” designed to guide the chatbot’s behavior around honesty, harm reduction and public safety.
The March meeting in San Francisco pushed those discussions further into questions traditionally associated with philosophy and religion. According to reports, participants discussed how Claude should respond to emotionally charged situations including grief and existential distress.
The company’s interest in embedding ethical behavior into its systems reflects a growing recognition inside the AI industry that large language models increasingly shape not only information flows, but also emotional interaction, reasoning and public discourse.
That shift is beginning to blur the boundary between technical design and normative governance.
Anthropic’s leadership has also faced political pressure over safeguards embedded in its systems that some U.S. defense officials viewed as limiting military applications. The dispute underscored how rapidly AI model design is becoming entangled with state priorities, security debates and geopolitical competition.
The values encoded into frontier models are increasingly geopolitical.
South Africa’s credibility crisis
If Anthropic’s debate centered on moral authority, South Africa’s crisis exposed a more basic challenge: epistemic authority.
The country’s draft national AI policy was withdrawn less than three weeks after publication, after journalists identified fabricated academic citations in the document. Some of the cited journals did not exist; others were real publications that contained no such articles.
Communications and Digital Technologies Minister Solly Malatsi described the lapse as “unacceptable” and acknowledged that generative AI had likely been used without proper verification.
“This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy. As such, I am withdrawing the Draft National Artificial Intelligence Policy,” he said in late April. “The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened.”
The draft policy had proposed establishing a National AI Commission, an Ethics Board and a regulatory authority intended to position South Africa as a continental leader in AI governance.
Instead, the episode quickly became a cautionary example of the risks governments face when institutional verification systems fail.
Writing in The Conversation, University of the Witwatersrand cyber law scholar Nomalanga Mashinini argued that the problem went beyond embarrassment.
“The hallucinated citations reveal two specific failures. Epistemic integrity (the assurance that research has been conducted through reliable, ethical and repeatable methods that any reader could verify) was absent,” she wrote. “So was information integrity (the public’s reasonable expectation that information from an authoritative source can be trusted).”
The incident also highlighted a structural asymmetry increasingly visible across global AI governance.
While major AI laboratories possess vast technical resources, interpretability teams and dedicated safety divisions, many governments, particularly in the Global South, are still building the expertise and institutional capacity needed to evaluate AI systems credibly.
That imbalance is beginning to shape who defines the rules, values and assumptions embedded in emerging AI governance frameworks.
A new science diplomacy challenge
Artificial intelligence governance increasingly resembles earlier global governance disputes over nuclear technology, biotechnology and climate change. It involves standards-setting, strategic competition and transnational coordination.
But AI introduces a more intimate governance challenge: these systems increasingly mediate communication, education, emotional support and access to information. Questions about AI governance therefore extend beyond infrastructure and regulation into culture, ethics and public trust. That challenge is increasingly visible in international diplomacy.
Similar debates emerged at a Commonwealth anti-corruption conference in Cameroon, where officials discussed the use of AI tools in governance and public oversight while also warning about risks surrounding accountability and institutional integrity.
The parallel developments suggest AI governance is evolving unevenly across regions and institutions. In some parts of the world, the debate centers on aligning frontier systems with moral philosophy and human values. In others, the more immediate challenge is establishing reliable mechanisms for verification, transparency and oversight.
Africa has increasingly sought to position itself within those debates rather than outside them. Frameworks such as the African Union’s continental AI strategy and the Smart Africa initiative emphasize local governance capacity, digital sovereignty and regional coordination.
Malatsi said South Africa’s recent experience “proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility. I want to reassure the country that we are treating this matter with the gravity it deserves. There will be consequence management for those responsible for drafting and quality assurance.”