From Classroom Experiments to Systemic Change

The RGAI-FCT project is fundamentally an action research initiative. We are not just building tools; we are generating a rigorous empirical evidence base to protect African educational sovereignty. By capturing the real-time interactions, frustrations, and triumphs of Ugandan educators working with Generative AI, we are translating lived classroom experiences into actionable national policy.

Empirical Evidence

Unmasking the Algorithm

To prove that global AI models carry inherent biases—and that African educators can successfully mitigate them—we must capture the messy, real-world friction between human wisdom and machine logic.

OBS Studio Trace Data

During our Epistemic Auditing sessions, we used screen-recording software (OBS Studio) to capture the exact moments of human-AI interaction. This strictly anonymized "trace data" records our teachers' hesitations, interface engagement, and rapid prompt revisions.

127 Sessions recorded
340+ Hours of trace data

Interaction Logs

We maintain detailed logs mapping the AI's raw, often hallucinated outputs against the culturally engineered prompts our teachers use to fix them.

AI Hallucination (raw output):

"The Kingdom of Buganda was founded in 1856 by King Mutesa I..."

Fact check: the kingdom was actually founded in the 1300s.

After RTCC Prompt:

"The Kingdom of Buganda has existed since the 14th century, with oral traditions tracing its origins..."
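A before/after pair like the one above can be stored as a single interaction-log record. The sketch below is purely illustrative; the field names and schema are our own assumptions, not the project's actual logging format.

```python
# Hypothetical interaction-log record pairing a raw model output with the
# RTCC-revised prompt and its output. Schema is illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class InteractionLogEntry:
    session_id: str
    timestamp: datetime
    raw_prompt: str            # teacher's original prompt
    raw_output: str            # model's unedited response
    hallucination_type: str    # e.g. "historical_dates", "local_names"
    rtcc_prompt: str           # culturally engineered revision
    rtcc_output: str           # model response after the RTCC prompt
    rubric_before: float       # lesson rubric score (out of 4) before revision
    rubric_after: float        # lesson rubric score (out of 4) after revision

entry = InteractionLogEntry(
    session_id="S-042",
    timestamp=datetime(2024, 3, 14, 10, 5),
    raw_prompt="Write a short history of the Kingdom of Buganda.",
    raw_output="The Kingdom of Buganda was founded in 1856 by King Mutesa I...",
    hallucination_type="historical_dates",
    rtcc_prompt=("Using Ugandan oral traditions and the CBC, write a short "
                 "history of the Kingdom of Buganda."),
    rtcc_output="The Kingdom of Buganda has existed since the 14th century...",
    rubric_before=1.2,
    rubric_after=3.5,
)
print(round(entry.rubric_after - entry.rubric_before, 1))  # prints 2.3
```

Keeping both outputs in one record is what makes the logs auditable: the same entry documents the failure, the repair, and the measured gain.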

The Findings: Quantifying Bias Mitigation

Our empirical data clearly demonstrates how teachers practically navigate algorithmic bias. When an AI confidently invents a fake Ugandan historical fact or suggests a lesson requiring high-speed internet, the trace data shows our educators systematically applying the RTCC framework to "debug" the system, forcing the AI to align with local infrastructure and the Competency-Based Curriculum (CBC).

78% Reduction in cultural hallucinations
+2.3 Average rubric score improvement (out of 4)
94% Lessons became locally relevant after prompting
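Headline figures like these can be derived from per-session logs. The sketch below shows one way to aggregate them; the sample data and field layout are invented for illustration and do not reproduce the project's dataset.

```python
# Illustrative aggregation of per-session records into headline metrics.
# Tuple layout (our assumption): (hallucinations_before, hallucinations_after,
# rubric_before, rubric_after, locally_relevant_after_prompting)
sessions = [
    (5, 1, 1.0, 3.4, True),
    (4, 1, 1.5, 3.6, True),
    (6, 2, 1.1, 3.5, False),
]

before = sum(s[0] for s in sessions)
after = sum(s[1] for s in sessions)
reduction_pct = 100 * (before - after) / before

avg_improvement = sum(s[3] - s[2] for s in sessions) / len(sessions)
relevance_pct = 100 * sum(s[4] for s in sessions) / len(sessions)

print(f"Hallucination reduction: {reduction_pct:.0f}%")
print(f"Avg rubric improvement: +{avg_improvement:.1f}")
print(f"Locally relevant: {relevance_pct:.0f}%")
```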

Identity Shift

From Frustration to Empowerment

Technology adoption is deeply emotional, especially when the technology in question consistently ignores or distorts your cultural reality. Our research captured profound qualitative shifts in the professional identities of our educators.

"I asked the AI to create a lesson about local food crops, and it suggested teaching about wheat farming in Canada. It made me feel like my knowledge as a Ugandan teacher didn't matter—like the machine was saying our reality is wrong."
— Sarah N., AEP Teacher (initial frustration phase)
"Now when the AI gets it wrong, I know it's not my failure—it's the machine's design flaw. Learning to prompt engineer feels like taking back control. I'm not just using the tool; I'm teaching it to respect us."
— Sarah N., AEP Teacher (after RTCC training)

The Identity Journey: From Passive Consumer to Technology Agent

Stage 1: Frustration → Stage 4: Empowerment
89% Report increased confidence
76% Now identify as "co-designers"

Policy Advocacy

Shaping Sovereign Governance

The empirical data and speculative designs generated by our teachers do not sit on a shelf; they are actively used to lobby for systemic change at the national level. We translate our research into evidence-based Policy Briefs aimed at the Ministry of Education and Sports (MoES), the National Curriculum Development Centre (NCDC), and international NGO partners.

Our white papers advocate for three non-negotiable pillars of AI integration in African higher education:

1. Polycentric Governance

We reject the top-down imposition of foreign AI platforms. We advocate for polycentric governance models that embed community oversight and prioritize collective well-being (Ubuntu). Decisions about AI adoption must include participatory input from local experts, curriculum developers, and the teachers who actually use the tools.

"Nothing about us without us" must apply to AI governance.

2. Strict "No-Go Zones"

Technology must augment human judgment, not replace it. Our policies clearly define "No-Go Zones" for Generative AI in education. This includes strict bans on using AI for summative national grading (which displaces a teacher's contextual judgment) and the prohibition of tools that extract vulnerable student data for surveillance or corporate profit.

AI should support teachers, not replace them.

3. Protection of Indigenous Knowledge Systems (IKS)

Global AI models routinely enact epistemic erasure by defaulting to Eurocentric norms. We advocate for policies that mandate the protection and integration of African Indigenous Knowledge Systems. Any AI system deployed in Ugandan schools must respect local languages, oral traditions, and contextual pedagogical wisdom, ensuring that technology serves as a tool for cultural amplification rather than digital colonialism.

AI must amplify, not erase, indigenous knowledge.

Trace Data Dashboards

Explore the Anonymized Data

Hallucination frequency by category:

Historical dates: 45%
Geographical features: 32%
Cultural practices: 28%
Local names/figures: 52%

Lesson rubric scores: baseline (avg) 1.2/4 → after RTCC prompt 3.5/4 (+2.3 improvement)
Explore Full Interactive Dashboard

Teacher Reflection Stories

"I used to think AI was too advanced for my classroom. Now I realize the AI was too primitive to understand my classroom. The difference is profound."

— James O., AEP Champion, Kyaka II Refugee Settlement

"The moment I saw the AI hallucinate about our local history, I felt angry. But then I learned to fix it. Now I feel powerful."

— Grace M., Language Teacher, Nakivale Refugee Settlement

"We are not just users. We are the ones teaching the machines about Africa. That's a responsibility I never expected, but now I embrace."

— Peter K., Science Teacher, Rhino Camp

Engage with Our Research

Access our full research archive, policy briefs, and trace data dashboards.