The AI Euphoria in Botswana: Between Buzzword and Readiness – Part I


In 2020, I pivoted from accounting and finance to data science and AI with bright eyes and a hopeful heart. Like many, I was captivated by the promise of Artificial Intelligence—the elegance of algorithms, the predictive power of models, the magic of machine learning. 

It felt like a new frontier, one I was determined to be a part of. Since then, I have immersed myself in technical programs, fellowships, and research communities. Most recently, I have been engaging with AI ethics and accountability initiatives, deepening my understanding of AI’s complexities beyond mere innovation.

This article is the first in a series of reflections that explore the emotional, political, infrastructural, and ethical terrain that underpins AI adoption in Africa, starting with Botswana.

The Hype and the Hunger

AI has arrived in Botswana—at least as a buzzword. There is an undeniable surge of excitement. Public forums brim with phrases like “AI revolution,” “AI for everything,” and “disruption is here.” Young entrepreneurs pitch chatbot solutions. Policymakers envision smart cities. Professionals from every sector—finance, education, agriculture—are beginning to integrate AI tools, primarily conversational AI like ChatGPT, to improve efficiency.

The excitement is understandable. AI promises to do for knowledge work what the industrial revolution did for physical labour. But as I’ve observed the rising wave of AI enthusiasm, I’ve become increasingly uneasy.

A Crisis of Readiness

While Botswana (and much of Southern Africa) is racing to adopt AI, many of its systems are not ready. The backbone of effective AI, data infrastructure, is either underdeveloped, siloed, or poorly maintained. AI thrives on data, and not just any data: contextual, clean, representative, and ethically sourced data. Most public systems in Botswana, however, are still grappling with basic digitisation. Without solid digital foundations, introducing AI is like installing a satellite on a collapsing roof.

Moreover, the enthusiasm seems to outpace understanding. Many are enchanted by the capabilities of large language models (LLMs), but few grasp the limitations, biases, and societal risks that come with deploying these systems unchecked. As Crawford (2021) notes in Atlas of AI, data is never neutral—it reflects histories of exclusion, oppression, and inequality. Deploying AI systems trained on non-African data risks importing and amplifying foreign biases in our most intimate social systems.

The Narrow Scope of Local Innovation

A worrying trend I’ve observed is the narrow band of AI use cases emerging in Botswana. Most local startups focus on chatbots, virtual assistants, and rudimentary NLP-based tools. While these are valid entry points, they represent only a fraction of AI’s potential. There is a striking absence of AI work addressing Botswana’s core developmental challenges—climate modelling, health diagnostics, agricultural forecasting, or intelligent infrastructure planning.

This reflects both a skills gap and a vision gap. AI should not merely be a tool to mimic the West. It must be wielded to solve local problems in culturally relevant ways. As Birhane (2021) emphasises in her critique of universalist AI narratives, Africa needs “relational ethics”—AI development rooted in local values, histories, and community-centred goals.

At various local innovation events and tech community sessions I’ve attended, there has been a notable emphasis on LLMs and chatbot demonstrations. While these capture immediate interest, foundational challenges around data policy, governance, and contextual application often remain unaddressed.

The Danger of Blind Adoption

A month ago, I had a conversation with a passionate entrepreneur who wanted to build an AI company “to insert AI into every business model.” Their conviction was admirable. But as they spoke, I cringed. Not because I doubted their intent, but because I’ve come to understand that AI is not always the answer.

Not every business, especially in the African context, needs AI. Many need better workflows, transparent governance, inclusive leadership, and functioning infrastructure. To inject AI into broken systems without first addressing underlying issues is to apply code over chaos.

Further, as the Centre for the Governance of AI (GovAI) notes in its public reports, unchecked AI deployment can exacerbate existing inequalities, undermine human agency, and introduce new governance challenges. Do we really want AI making decisions in public finance, healthcare, or education systems without robust oversight? Who will audit these systems? Who will be accountable?

Local Data, Global Models: A Mismatch

Another concern I’ve observed is the tendency toward blind adoption of Western-trained models. Many African nations—including Botswana—frequently adopt technologies built in foreign contexts without sufficiently adapting them to their own local realities. The result, in many cases, is a frustrating misalignment: AI systems trained on American, Chinese, or European data may not understand local dialects, cultural cues, or societal norms.

This is not a new concern. Scholars such as Timnit Gebru, Joy Buolamwini, and Abeba Birhane have long warned of AI systems failing marginalised groups due to skewed training data. For Botswana, this problem may be magnified. Without investment in local data collection, curation, and governance protocols, our AI landscape risks becoming a colonial interface—efficient, but alienating.

To move from AI euphoria to ethical and sustainable adoption, Botswana and its peers must:

  • Invest in data infrastructure before AI deployment.
  • Prioritise digital literacy and critical AI education for developers, policymakers, and the public.
  • Adopt AI governance frameworks rooted in local contexts, not borrowed from foreign playbooks.
  • Centre human well-being as the metric for success, not just efficiency or innovation.

In Closing: Sober Enthusiasm

AI holds immense potential—but only if approached with clarity, caution, and commitment to justice. My own journey, from wide-eyed fascination to grounded inquiry, reflects the maturity we need as a continent. The goal is not to reject AI, but to wield it wisely.

Let us not be intoxicated by innovation. Let us build, reflect, and steward.

This is Part I of a continuing series on Africa’s AI readiness. Future articles will explore data governance, AI in education, and the intersection of indigenous knowledge and algorithmic design.

References & Influences
  • Crawford, Kate. Atlas of AI. Yale University Press, 2021.
  • Birhane, Abeba. “Algorithmic injustice: a relational ethics approach.” Patterns, 2021.
  • Gebru, Timnit et al. “Datasheets for Datasets.” Communications of the ACM, 2021.
  • Centre for the Governance of AI. “Mission and Research.” https://www.governance.ai/
  • Buolamwini, Joy & Gebru, Timnit. “Gender Shades.” Conference on Fairness, Accountability and Transparency, 2018.

Author: Munenyashaishe Hove is a data scientist and responsible AI advocate currently deepening her expertise in AI governance and transparency. She engages in research, mentorship, and policy-aligned thought leadership across local and global platforms. munenyashaishe.techlead@outlook.com


