
President Ntuli
Imagine handing the keys of your car to an unfamiliar driver. The vehicle may be state-of-the-art, packed with advanced safety features and powered by the latest engine technology. But if you don’t trust the person behind the wheel, you are unlikely to climb in, no matter how impressive the machinery looks.
This is increasingly where South Africa finds itself with artificial intelligence. Technology is advancing at pace, and its potential to accelerate growth and innovation is undeniable. Yet the speed at which AI truly takes hold across our economy will not be decided by infrastructure or computing power alone. It will be determined by people.
Too often, AI adoption is framed as a technical exercise: select the right platform, integrate the data, and switch it on. That view overlooks the tougher, more human work required to ensure success – preparing people, reshaping workflows, and building the trust and safeguards that determine whether AI is embraced or quietly resisted.
The real barrier is trust
We see this trust gap clearly in research conducted both globally and in South Africa. While awareness of AI is high, confidence remains fragile. Many local employees worry about unethical uses such as surveillance, while a significant proportion fear AI may threaten their livelihoods. Globally, fewer than half of employees surveyed by Great Place to Work said they trust their employer to use AI responsibly.
This lack of trust has tangible consequences. In some organisations, employees disengage. In others, they adopt AI inconsistently, using unapproved tools, withholding effort, or actively undermining initiatives. These behaviours stem from uncertainty and a sense that change is happening to people rather than with them.
That is why successful AI adoption must begin not with code, but conversation.
Start with honest dialogue
One of the earliest lessons from HPE’s own AI journey was the importance of talking openly about how AI might change work – not glossing over difficult realities but addressing them directly.
People want honesty. Leaders need to acknowledge that some tasks will change, some roles will evolve, and some jobs may eventually disappear. Avoiding those conversations does not reduce anxiety – it amplifies it. Trust is built when realism is paired with support; when employees can see clearly where new opportunities lie and how they will be helped to adapt.
When organisations explain why AI is being introduced, how success will be measured, and where human judgment will still matter, fear starts to give way to agency.
Training is not optional
Conversation alone, however, is insufficient. Trust grows through competence, and competence requires ongoing investment in skills.
Research from EY shows that inadequate training is one of the biggest barriers to effective use of AI among staff. Learning programmes are often too superficial or too disconnected from employees’ day-to-day work. At the same time, more than half of South African workers worry that their current skills will not remain relevant in an AI-driven future.
This is the paradox at the heart of AI transformation: employees can see productivity gains, but fear being left behind.
The solution lies in deliberate, role-based enablement. Organisations must normalise AI use across everyday workflows, while providing hands-on training, mentorship and safe spaces to experiment. Recognising and rewarding innovation matters too. When people feel capable, confidence follows, and when confidence grows, adoption accelerates.
Why humans must remain in the loop
Trust also depends on assurance that AI will not operate unchecked.
AI excels at pattern recognition and scale, but it lacks context, ethics and accountability. Recent examples in South Africa, such as fabricated legal citations and biased decision making in critical services, highlight the risks of removing human oversight.
Human-in-the-loop models are essential. They ensure AI outputs are tested, questioned and, where necessary, overruled. Organisations that bypass this step may gain short-term speed, but often at the expense of credibility. And once trust is broken, it is far more difficult to rebuild.
Governance as an enabler, not a brake
Effective governance plays a central role in sustaining trust. Yet it is frequently misunderstood as a constraint on innovation. In reality, clear guardrails are what enable organisations to move faster and with greater confidence.
Strong AI governance clarifies how data may be used, where transparency is required, how bias is mitigated, and when human intervention is mandatory. It provides shared principles that teams can apply consistently, rather than reinventing rules for every new use case.
Crucially, governance also signals intent. It tells employees, customers and partners that AI is being deployed responsibly, with an understanding of risk, accountability and impact. That signal is foundational to trust.
What differentiates AI leaders from laggards is not access to technology, but the ability to bring people along. Local organisations that invest in open dialogue, training, and credible governance will unlock AI’s value faster and more sustainably than those that treat adoption as a purely technical rollout.
Which brings us back to that car and its driver. AI may be the engine powering the next phase of growth, but people are still in the driver’s seat. If we want South Africa to move faster, further and more safely with AI, earning human trust must be the priority. Only then will AI take us where we truly want to go.
President Ntuli, Managing Director, HPE South Africa. He writes in his personal capacity.
