Our Methodology
The Participatory Journey
From Passive Consumers to Epistemic Architects
Our research methodology is rooted in a Participatory Action Research (PAR) paradigm that explicitly rejects the historical exclusion of local African perspectives in technology development. We guide Ugandan educators through a rigorous three-phase journey, transforming them from passive consumers of imported technology into active, critical co-designers of their digital futures.
From Passive to Critical
Before we test the technology, we must build the critical vocabulary necessary to evaluate it. In this foundational phase, educators develop "Futures Literacy" and a theoretical understanding of the harms that unchecked global AI can inflict on African knowledge systems.
We equip teachers to identify and resist three core threats:
The reduction of complex student experiences into mere data points that are extracted to enrich foreign tech corporations.
The false, promissory narrative that Western-designed AI trajectories are an inevitable form of "progress" that African classrooms must simply adapt to.
The uncompensated harvesting of local data and knowledge to train global models, which functions as a form of digital colonialism.
Prompt Engineering as a Decolonial Weapon
Rather than treating AI prompt engineering as a standard technical skill, we reframe it as a critical literacy practice and a strategic intervention. Educators learn to use structured prompting frameworks (like the RTCC Framework: Role, Task, Context, Constraint) to build a fence around the AI.
By deliberately guiding Large Language Models (LLMs) with strict cultural nuances, teachers use their prompts as a "decolonial weapon" to mitigate algorithmic bias and reclaim epistemic control over the curriculum.
The RTCC Framework
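The structured prompting described above can be sketched in code. This is a minimal illustration of assembling an RTCC-style prompt, assuming only the four components named in the text (Role, Task, Context, Constraint); the class name, field layout, and example values are hypothetical, not taken from the study's materials.

```python
from dataclasses import dataclass

@dataclass
class RTCCPrompt:
    """Illustrative container for the four RTCC components."""
    role: str        # who the model should act as
    task: str        # what it should produce
    context: str     # the local setting it must respect
    constraint: str  # the guardrails -- the "fence around the AI"

    def render(self) -> str:
        # Emit the components as labeled lines for submission to an LLM.
        return (
            f"Role: {self.role}\n"
            f"Task: {self.task}\n"
            f"Context: {self.context}\n"
            f"Constraint: {self.constraint}"
        )

# Hypothetical example values for illustration only.
prompt = RTCCPrompt(
    role="You are an experienced Ugandan P5 social studies teacher.",
    task="Draft a short lesson introduction on pre-colonial trade routes.",
    context="Learners are in a rural, Luganda-speaking classroom.",
    constraint="Use only verifiable East African sources; do not invent "
               "names, dates, or statistics.",
)
print(prompt.render())
```

The point of the structure is that the Context and Constraint fields are always present, so the model is never left to default silently to its Western training data.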
The Epistemic Audit: Testing the Machines
In the second phase, teachers step into the role of "Epistemic Auditors," conducting a controlled experiment to expose and measure the biases inherent in global LLMs like ChatGPT and Gemini.
A "Within-Subjects" Design
To ensure scientific rigor, we utilize a "within-subjects" paired observation design, meaning each teacher serves as their own control.
The Baseline
Teachers enter a raw, unstructured prompt about a local curriculum topic, allowing the AI to default to its Western training data.
The Intervention
Teachers submit the exact same request using their highly structured, culturally nuanced prompts.
The 0-4 Output Scoring Rubric
Teachers rigorously compare the two outputs side by side to measure the delta, the change in the AI's performance between the baseline and the intervention. They score the machine's outputs on a 0 to 4 scale across four critical dimensions:
Correctness of historical or scientific facts relevant to the Ugandan context.
Relevance to local norms, idioms, and values, avoiding cognitive imperialism.
The extent to which the output respects Indigenous Knowledge Systems (IKS) and local pedagogical wisdom.
The absence of fabricated, plausible-sounding content (hallucinations), such as invented local historical figures or statistics.
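The paired scoring above can be sketched as follows. This is an illustrative computation only: the dimension names are paraphrases of the four rubric dimensions (the original rubric's exact labels are not given here), and the scores are invented for the example.

```python
# Paraphrased labels for the four rubric dimensions, each scored 0-4.
DIMENSIONS = [
    "factual_correctness",    # accuracy of facts in the Ugandan context
    "cultural_relevance",     # fit with local norms, idioms, and values
    "iks_respect",            # respect for Indigenous Knowledge Systems
    "hallucination_absence",  # higher = fewer fabricated figures/statistics
]

def score_delta(baseline: dict, intervention: dict) -> dict:
    """Per-dimension delta: intervention score minus baseline score."""
    for scores in (baseline, intervention):
        assert set(scores) == set(DIMENSIONS), "rate all four dimensions"
        assert all(0 <= v <= 4 for v in scores.values()), "scores are 0-4"
    return {d: intervention[d] - baseline[d] for d in DIMENSIONS}

# Hypothetical ratings of the raw prompt's output vs. the RTCC prompt's output.
baseline = {"factual_correctness": 2, "cultural_relevance": 1,
            "iks_respect": 0, "hallucination_absence": 2}
intervention = {"factual_correctness": 3, "cultural_relevance": 4,
                "iks_respect": 3, "hallucination_absence": 4}
print(score_delta(baseline, intervention))
```

Because each teacher rates both the baseline and the intervention output on the same topic, the delta isolates the effect of the structured prompt from differences between teachers or topics.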
Identifying "Cursed Futures"
To design a better reality, educators must first confront the nightmares. Participants review specific, highly plausible socio-technical scenarios representing "cursed futures"—where technological interventions promise empowerment but actually reinforce structural inequalities and digital coloniality.
The Language of Learning Disappears
Imported AI tools default to English, leading to the epistemic erasure of Luganda and indigenous reasoning.
The Surveillance Classroom
AI cameras monitor student "inattention," eroding the relational trust central to Ubuntu philosophies.
Creating Speculative Artifacts
Teachers select the cursed future most threatening to their specific context and actively design alternative solutions. They co-create tangible "Speculative Artifacts" that act as counter-narratives to Western tech-determinism.
By grounding these artifacts in African epistemologies like Ubuntu, the teachers articulate sovereign digital futures where AI is used to amplify human dignity and protect local knowledge, rather than erase it.
Example Artifact Theme:
"In 2040, a Luganda-speaking AI assistant helps students connect with elders to preserve oral histories, while algorithmically protecting community knowledge from extraction."
The Complete Participatory Journey
Foundations & Capacity Building
Building critical vocabulary and futures literacy
Outcome: Teachers become critical consumers
The Epistemic Audit
Measuring and exposing algorithmic bias
Outcome: Teachers become epistemic auditors
Speculative Co-Design
Architecting alternative, sovereign futures
Outcome: Teachers become epistemic architects
Ready to become an Epistemic Architect?
Join our community of educators and researchers in shaping sovereign AI futures.