
Amenda Makhetha-Sebake
Artificial intelligence is no longer a future concept; it’s woven into the fabric of life today. And beyond a technological revolution, AI is shaping up to be a test of trust. As it becomes embedded in daily life, what’s key is whether people believe it can be trusted.
Recent global attention on advanced AI models such as Claude Mythos illustrates how quickly the stakes are rising. What makes this moment significant is not just the capability itself, but what it reveals: AI is advancing faster than the frameworks designed to govern it. The question is no longer whether organisations can use data to power intelligent systems. It’s whether they can do so in a way that customers, regulators and markets can trust.
And at the centre of that question is data.
AI systems thrive on vast amounts of personal information to function effectively. This creates a fundamental tension: the more data organisations use, the more powerful and personalised their services become. But this also increases the risk of crossing the invisible line between helpful and intrusive. And careless use of that data risks eroding something far more valuable than efficiency: trust.
This is where many organisations get it wrong. They treat data privacy as a compliance requirement rather than a question of legitimacy. Customers aren’t asking whether your company is compliant. They’re asking whether their data is used fairly, transparently, and respectfully.
Nowhere is this more evident than in financial services. Banks hold some of the most sensitive data in society while relying on rapidly advancing technology to detect and prevent fraud, assess risk, and tailor financial advice. Yet the legitimacy of each of these advances hinges on customers believing their information will be protected, not exploited.
In a digital economy, customers choose institutions they believe will act in their best interests. A single failure in how personal information is handled can erode confidence in digital systems altogether. Therefore, data privacy is a definitive measure of corporate credibility that can no longer be treated as an afterthought or compliance exercise alone. Organisations must embed it into how AI systems are designed and governed.
The principle of privacy by design is becoming increasingly important. It means embedding data protection into systems from the outset rather than retrofitting after deployment. It involves collecting only what’s necessary, being explicit about how data is used, and ensuring individuals retain meaningful control over their personal information.
Transparency is just as critical. As AI systems become more complex, there is a growing risk that decisions are made in ways that are difficult for customers to understand. Organisations have a responsibility to explain, in plain language, how data is used and how outcomes are determined. Without this, even the best innovations can undermine confidence.
Data privacy is also a people issue. Every employee who handles data plays a role in protecting it. Building a culture of privacy, supported by training and clear governance, is essential to ensuring that privacy is consistently upheld.
Our understanding of risk must evolve along with AI. Cyber threats are more sophisticated, and the scale at which data can be processed increases the consequences of failure. Resilience, continuous monitoring, and ethical oversight are now fundamental to operating responsibly and sustainably in an AI-driven world.
Fortunately, opportunity lies within these challenges. Organisations that take data privacy seriously can strengthen their relationships with customers and differentiate themselves. In a crowded digital landscape, trust is a powerful competitive advantage. Customers are more likely to engage with digital services, adopt new technologies and share information when they are confident it will be treated with care.
South Africa stands at a watershed moment with AI. The choices organisations make today about how they collect, use, and protect personal information will determine the success of their products and offerings, as well as the level of trust that underpins them.
Ultimately, the debate about data privacy in the age of AI is about balance. It’s about harnessing innovation without compromising dignity. It’s about using technology to empower people, not expose them. And it is about recognising that progress achieved at the expense of trust is not progress at all.
The organisations that succeed now will be those that understand that in the age of intelligent machines, trust remains profoundly human.
Amenda Makhetha-Sebake – Head of Data Privacy and Protection, FirstRand Group. She writes in her personal capacity.
