By Emmanuel Tzingakis, Technical Lead, Africa and Venture Markets at Trend Micro
In our fast-evolving cyber risk landscape, it’s easy to be captivated by the headlines—stories of cutting-edge exploits, wild new attack vectors, and AI’s role in shaping malware that once seemed impossible. But while these futuristic threats grab our attention, the real challenge lies in understanding the everyday risks that businesses face and the practical steps to mitigate them.
It’s evident that attackers are becoming increasingly inventive. The surge of AI-driven techniques is reshaping the global threat landscape, and Africa is no exception. The continent faces a growing tide of sophisticated fraud schemes, propelled by advancements in generative AI, deepfakes, and internal vulnerabilities. It’s a clear call for businesses to rethink their strategies and keep pace with attackers whose tools are improving fast.
And the best place to start is by focusing on the real-world risks. During Trend Micro’s recent World Tour in Johannesburg, we unpacked the tangible risks posed by AI advancements and shared actionable insights on how businesses can effectively counter these emerging challenges.
A new wave of AI-powered phishing emerges
We’ve all seen phishing evolve from poorly worded emails riddled with typos to messages that are polished, professional, and even translated flawlessly into multiple languages. But a more recent development is how attackers now leverage AI to scour social media: not just the posts themselves, but the rich ecosystem of interactions around them. Comments and connections are a treasure trove of personalised insights waiting to be mined.
Bad actors are then capitalising on AI’s ability to seamlessly craft hyper-personalised messages with astonishing precision. The tools are readily available; even platforms like ChatGPT can be leveraged to generate phishing emails that feel tailored and authentic. This doesn’t require advanced coding expertise—it’s a straightforward process that puts powerful capabilities into the hands of malicious actors.
The implications are striking. As social engineering pressure through phishing channels continues to intensify, the stakes rise for businesses, demanding a more vigilant and proactive approach to cybersecurity.
Deepfakes are becoming mainstream
Synthetic media has also entered the conversation, and it’s rewriting the rules of social engineering. Deepfakes, once a novelty, are now a mainstream threat. Remarkably, deepfake incidents in Africa increased sevenfold from Q2 to Q4 of 2024 due to advanced AI tools.
With just a few seconds of audio, voice cloning tools can convincingly mimic an executive’s voice, enabling fraudsters to issue urgent fund transfer requests that sound all too real. And it doesn’t stop there. Real-time face swaps on video platforms like WhatsApp mean that even a casual “let’s jump on a quick call” could be a trap. The line between real and fake is blurring fast, and attackers are exploiting that ambiguity with alarming precision.
Recent headline-grabbing incidents—like last year’s Quantum AI investment scam that cost consumers billions—underscore just how high the stakes have become.
AI is exposing deeper gaps in data governance
One of the rising challenges businesses must contend with is the risk of data leakage, especially with tools like AI assistants entering the picture. Imagine an employee—whether inadvertently or with malicious intent—asking for sensitive information such as salary details, acquisition plans, or financial results. If the correct access restrictions are not in place, the AI might serve up restricted data that was never meant for broader access.
What we’re seeing here is a classic case of AI inheriting flawed permissions—folders scattered across an organisation with access settings that are far too broad. Perhaps a folder is mistakenly set to “accessible to everyone” when, in reality, only specific employees should have clearance. AI tools will readily surface information that should remain locked down.
It’s crucial to understand that this issue isn’t solely about AI; it’s a reflection of deeper gaps in data governance and permissions management within organisations.
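To make the point concrete, the sketch below shows one way a permission check can sit between an AI assistant's retrieval step and the model's context window. The Document structure, group names, and helper functions are illustrative assumptions rather than a prescribed implementation; the principle is simply that access is denied by default and enforced before the model ever sees the data.

```python
# Minimal sketch of permission-aware retrieval for an internal AI assistant.
# The Document class, ACL fields, and group names are hypothetical; a real
# deployment would integrate with your identity provider and document store.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str] = field(default_factory=set)  # explicit ACL, never "everyone" by default


def user_can_access(user_groups: set[str], doc: Document) -> bool:
    """Deny by default: the user must share at least one group with the document's ACL."""
    return bool(user_groups & doc.allowed_groups)


def retrieve_for_prompt(candidates: list[Document], user_groups: set[str]) -> list[Document]:
    """Filter retrieved documents *before* they reach the model's context window."""
    return [doc for doc in candidates if user_can_access(user_groups, doc)]


# Example: an employee outside HR asks about salaries; the HR folder is filtered out.
docs = [
    Document("hr-001", "2024 salary bands...", {"hr"}),
    Document("kb-042", "How to submit a travel claim...", {"all-staff"}),
]
visible = retrieve_for_prompt(docs, user_groups={"all-staff", "engineering"})
print([d.doc_id for d in visible])  # ['kb-042']: the salary document is never shown to the model
```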
Open source is an avenue for malicious code
Another emerging concern around AI lies in the potential for the spread of malicious code. Developers crafting AI applications often rely on open-source repositories or widely used models like Meta’s LLaMA. But if these repositories contain buggy or, worse, malicious code, those vulnerabilities can creep into your applications unnoticed. It’s a sobering reminder that even the tools we trust can become conduits for risk if not carefully vetted.
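One practical safeguard, sketched below, is to verify the checksum of any downloaded model or dependency against the hash published by its maintainer before loading it. The file path and hash in the example are placeholders; the pattern itself applies to any artifact pulled from a public repository.

```python
# Minimal sketch: verify a downloaded model artifact against a published
# checksum before loading it. The path and expected hash are placeholders;
# in practice the hash comes from the publisher's release notes or a signed manifest.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model weights don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Raise before the artifact is ever loaded if its hash does not match."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Checksum mismatch for {path.name}: expected {expected_sha256}, got {actual}. "
            "Do not load this file."
        )


# Usage (placeholder values):
# verify_artifact(Path("models/llama-7b.gguf"), expected_sha256="<hash from the publisher>")
```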
Hallucinations can prove catastrophic
Hallucinations are another critical consideration. These occur when AI models, particularly those hastily developed or inadequately vetted, generate information that simply isn’t real. Take, for example, OpenAI’s Whisper model, used for speech recognition and transcription in medical and business settings. When doctors paused during dictation, the software invented additional words seemingly out of thin air. In a medical context, this isn’t just inconvenient—it’s potentially catastrophic. It underscores an urgent need for robust quality assurance processes tailored to AI systems.
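A simple illustration of what that quality assurance can look like is flagging transcript segments that contain text even though the model itself judged the audio to be silence. The sketch below assumes segment fields in the style of the open-source Whisper implementation's output, and the threshold is illustrative; the broader point is that AI output should pass an automated sanity check before a human relies on it.

```python
# Minimal sketch of a post-transcription QA check for hallucinated segments.
# The segment fields (start, end, text, no_speech_prob) follow the style of the
# open-source Whisper implementation's output; the threshold is illustrative
# and would need tuning against your own audio.
def flag_suspect_segments(segments: list[dict], no_speech_threshold: float = 0.6) -> list[dict]:
    """Return segments that contain text even though the model judged the audio as likely silence."""
    suspects = []
    for seg in segments:
        has_text = bool(seg.get("text", "").strip())
        likely_silence = seg.get("no_speech_prob", 0.0) >= no_speech_threshold
        if has_text and likely_silence:
            suspects.append(seg)
    return suspects


# Example with made-up segments: the second one would be routed to a human reviewer.
segments = [
    {"start": 0.0, "end": 4.2, "text": "Patient reports mild headache.", "no_speech_prob": 0.02},
    {"start": 4.2, "end": 9.8, "text": "Administer 500mg immediately.", "no_speech_prob": 0.91},
]
for seg in flag_suspect_segments(segments):
    print(f"Review needed at {seg['start']:.1f}s: {seg['text']!r}")
```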
So, what’s the path forward? It starts with visibility—broad, deep, and continuous. Understanding where and how AI is being used across your organisation is no longer optional; it’s foundational. Monitor AI interactions closely. Are the prompts or responses raising red flags? That insight isn’t just diagnostic—it’s an opportunity to intervene, guide, and improve. At the same time, your application security processes must evolve to reflect the new AI-driven threat landscape. And if you’re training models, the integrity of your data is paramount. Govern it. Protect it. Own it.
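As a rough illustration of that kind of monitoring, the sketch below logs each AI interaction and flags prompts or responses that touch sensitive topics. The keyword patterns and log format are placeholders; in practice this would hook into existing data loss prevention rules and a proper audit pipeline.

```python
# Minimal sketch of logging and flagging AI interactions for review.
# The patterns and record format are illustrative, not a production policy engine.
import re
from datetime import datetime, timezone

RED_FLAG_PATTERNS = [
    r"\bsalar(y|ies)\b",
    r"\bacquisition\b",
    r"\bfinancial results\b",
    r"\bpassword\b",
]


def review_interaction(user: str, prompt: str, response: str) -> dict:
    """Record every interaction and mark those that touch sensitive topics for follow-up."""
    hits = [
        p for p in RED_FLAG_PATTERNS
        if re.search(p, prompt, re.IGNORECASE) or re.search(p, response, re.IGNORECASE)
    ]
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "flagged": bool(hits),
        "matched_patterns": hits,
    }


record = review_interaction("jdoe", "Summarise the acquisition plans for Q3", "...")
if record["flagged"]:
    print("Escalate for review:", record["matched_patterns"])
```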
The good news is that defensive AI is outpacing offensive capabilities, thanks to significant investments in talent, tools, and innovation. Even in areas where attackers are advancing—like vulnerability discovery—defenders are using the same techniques to stay one step ahead. And with the rise of agentic AI, we’re seeing a shift: more power is moving into the hands of those who protect. The future of cybersecurity isn’t just about reacting faster—it’s about anticipating smarter. And that future is already taking shape.