    Industry & Technologies

    AI, Elections, and the Fragility of Rights in the Digital Age

April 25, 2026 · 7 Mins Read

At a recent policy conference held at UM6P by the Policy Center for the New South and the African Union's Department of Political Affairs, Peace and Security, I had the opportunity to join a diverse panel to discuss “Integrity and Governance: AI in the Electoral Cycle.” I shared reflections and insights from the perspective of the Conseil national des droits de l’Homme (Morocco's national human rights institution).

    For us here in Morocco, this discussion could not have been more timely and relevant.  

    As Morocco approaches its next general elections, the intersection of artificial intelligence and democratic processes is no longer a theoretical concern; it is an imminent reality.

    Elections have often been framed as institutional exercises, organized, supervised, and ultimately certified by national authorities. That framing is no longer sufficient. Today, elections are deeply embedded in digital ecosystems shaped by algorithms, platforms, and AI systems. What voters see, share, believe, and ultimately decide is no longer incidental to democratic processes; it is central to them.

Across the world, there is growing evidence that algorithmic systems and the amplification they drive influence political discourse in profound ways. From coordinated networks of automated accounts to AI-driven amplification of divisive content, digital infrastructures are increasingly capable of shaping public opinion at scale. What we are witnessing is not simply interference. It is the gradual transformation of the conditions under which democratic choice is exercised.

    The threat of systemic disinformation

Past electoral cycles have already revealed how automated accounts and coordinated networks may distort online debates. During the Brexit referendum, for instance, over 5% of the accounts that tweeted referendum-related hashtags simply disappeared after the vote. Thousands of automated accounts were later identified as having played a role in amplifying political messaging, raising serious concerns about the authenticity of digital participation.

    More recently, cases such as Slovakia’s election have been described by scholars as a turning point, a preview of how AI can be used to manipulate democratic processes with unprecedented speed and sophistication. Deepfakes, synthetic media, and highly targeted disinformation campaigns are no longer theoretical risks. They are operational tools capable of eroding trust in institutions, candidates, and even in the very idea of truth.

    Fraud or interference in elections is not a mere technical irregularity; it is fundamentally a human rights issue. When disinformation distorts public debate, when AI-driven systems manipulate what citizens see, or when data is exploited to influence voter behavior, the impact goes beyond the integrity of the electoral process. It directly undermines people’s right to freedom of expression, the right to access information, the right to privacy, and ultimately the right of every individual to participate in public affairs. All these rights may be directly affected by how digital technologies are deployed in election cycles.

    As affirmed in Article 21 of the Universal Declaration of Human Rights, genuine and periodic elections are themselves a fundamental right. Any attempt to manipulate or interfere with them is therefore not just an electoral issue; it is an attack on the exercise of basic freedoms and democratic legitimacy itself.

Disinformation, in this context, is not just misleading content; it is a direct threat to the right to freedom of expression, the right to seek information, and even the right to hold opinions, which is absolute under international law. It distorts the information environment in which citizens form informed opinions and make electoral choices.

    Embracing a human rights-based approach to AI governance

    At the same time, poorly designed responses to disinformation, such as overbroad content moderation, restrictions, or flawed automated detection systems, risk infringing on freedom of expression. Studies have shown that efforts to identify bots and harmful content can produce significant errors, sometimes wrongly targeting legitimate political speech that may shock or disturb.

    This tension underscores a critical principle: governance of AI in elections must be grounded in human rights. These rights are not optional, nor are they negotiable. They are binding obligations under international law. States have a duty not only to respect them but also to protect individuals from violations by private actors, including powerful technology companies. These platforms cannot be treated as neutral intermediaries when their systems actively shape public discourse.

    A human rights-based approach to AI requires embedding safeguards at every stage, from design and development to deployment and oversight. It also requires accountability mechanisms that extend beyond national borders. The infrastructure underpinning AI (data centers, algorithms, and corporate headquarters) is globally distributed, while its impacts are deeply local. No single country can effectively and safely regulate this ecosystem alone.

    This is particularly relevant for African countries, as the continent remains structurally behind in AI development resources and infrastructure, while the economic benefits of AI are expected to be overwhelmingly concentrated in the Global North.

    The quest for digital sovereignty in Africa

    Addressing this challenge requires confronting another deeper reality. The digital infrastructures that shape political discourse are not neutral. A small number of global technology companies, largely based outside the African continent, may play a decisive role in structuring what information circulates, how it is amplified, and who ultimately sees it.

For African countries, this raises a fundamental question of “sovereignty.” The African AI strategy (adopted in 2024) leans towards national-level governance and regulation, but from a strategic point of view, fragmented national approaches risk weakening regulatory leverage and reinforcing dependency: Africa may end up with 54 different approaches to AI governance.

    By contrast, a more coherent regional or continental strategy would not only strengthen bargaining power, but also help ensure that AI systems reflect local values and provide real solutions to local problems, and hopefully enable African states to set standards, negotiate with global platforms, and ensure that technological development aligns with local priorities. Such an approach would also contribute to narrowing the global AI divide.

At the same time, in relation to democratic and electoral processes, an important blind spot remains insufficiently explored: the growing influence of encrypted messaging platforms, which have become central channels for political and everyday communication. In these environments, disinformation may not spread the fastest, but it often spreads with the greatest credibility, circulating within trusted networks that are far harder to monitor or challenge.


    Unlike public social media, these spaces are far more difficult to monitor or study, yet they play a central role in how information circulates during elections. Disinformation in these channels is harder to detect, trace, and counter, making it a critical frontier for research and policy.

    Ultimately, the challenge is not simply that technology is influencing elections. It is that it is redefining the conditions under which democratic rights are exercised. AI accelerates the production and spread of information – both true and false – at a pace that institutions struggle to match. It amplifies existing vulnerabilities in political systems while introducing new ones that are not yet fully understood.

    In Morocco, this conversation is no longer distant or abstract. With general elections approaching, the role of AI in shaping the information landscape is likely to be significant. Political actors, supporters, and external stakeholders will have access to tools that can generate persuasive content, micro-target audiences, and potentially manipulate public perception.

In general terms, the use of AI during election processes may not always be visible, but its effects may be deeply felt. The key question is not whether AI will be used during an election, but how it is used, how quickly harmful uses can be detected, and how quickly stakeholders can respond.

    Ensuring that AI deployment does not undermine fundamental rights requires vigilance, preparedness, and a firm commitment to a human rights-based approach. During elections, anticipating and addressing these challenges will be essential to preserving both the integrity of the process and the rights of those it is meant to serve.
