Tech Features
ICT CHAMPION AWARDS 2026: FIELD NOTES — FROM HYPE TO HABIT
By Subrato Basu, Global Managing Partner, The Executive Board, with Srijith KN, Senior Editor, Integrator Media.
On 28 January 2026, Integrator Media hosted the 18th edition of the ICT Champion Awards at the Shangri-La Dubai Hotel, bringing together the region’s ICT ecosystem for an evening designed to celebrate milestones, recognise innovation, acknowledge ecosystem leaders, and foster community.
The programme—aligned with INTERSEC 2026—spotlighted organisations making measurable impact across enterprise solutions, critical infrastructure, cybersecurity, and public-sector technology.
By 7pm, the Shangri-La Dubai’s Al Nojoom Ballroom had the feel of a ‘state of the union’ for regional ICT—CXOs, partners, and platform leaders in one room, with AI dominating every board agenda. This wasn’t just an awards evening; it was a moment to take stock: are we still experimenting with AI, or are we ready to operationalise it at scale?
Across conversations at tables and in the corridors, the same theme surfaced: experimentation is easy—operational confidence is the hard part.
Opening keynote: “Is AI ready for us in the UAE—and what next?”
The evening’s tone was set by Mr. Maged Fahmy, Vice President, Ellucian MEA, who opened with a deliberately provocative question: Is AI ready for us in the UAE? What made the question stick wasn’t the technology—it was the implication that leadership models are now the constraint.
His message wasn’t framed as a technology debate—it was framed as a leadership test.
Coming from a leader in enterprise technology for education and public-sector institutions, where trust, governance, and outcomes are non-negotiable, Fahmy’s ‘hype to habit’ message landed with particular weight.
His argument was simple: the UAE is past AI curiosity. The next phase is habit—repeatable, governed AI embedded in day-to-day work. The real question is no longer ‘Can we do a PoC?’ but ‘Can we run this reliably, measure it, and scale it?’
We’re moving from Generative AI (creating content) to Agentic AI (executing work). That shift changes leadership: fewer people doing repeatable steps, more orchestration of workflows across systems—with humans focused on judgement, risk, and exceptions.
For example, an agent can triage a service request, propose the fix, route it for approval, execute the change, and only escalate the ‘weird 3%’ to a human owner.
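To make that pattern concrete, here is a minimal Python sketch of such a triage loop. Everything in it is illustrative: the function names, the stubbed approval gate, and the 0.95 confidence threshold are inventions for this article, not a description of any product discussed at the event.

```python
# Minimal sketch of an agentic triage loop. All names, the stubbed
# approval gate, and the 0.95 threshold are illustrative inventions.
from dataclasses import dataclass

@dataclass
class Ticket:
    id: str
    description: str

def triage(ticket: Ticket) -> dict:
    # Stub classifier: a real agent would call a model here.
    return {"proposed_fix": "reset_sso_token", "confidence": 0.97}

def request_approval(plan: dict) -> bool:
    # Stub gate: a real system would route to an approver or policy engine.
    return True

def execute_change(fix: str) -> None:
    print(f"executing change: {fix}")

def handle(ticket: Ticket, threshold: float = 0.95) -> str:
    plan = triage(ticket)
    if plan["confidence"] < threshold:       # the 'weird 3%' goes to a human owner
        return f"{ticket.id}: escalated to human owner"
    if not request_approval(plan):           # route the proposed fix for approval
        return f"{ticket.id}: held, approval denied"
    execute_change(plan["proposed_fix"])     # act only after approval
    return f"{ticket.id}: resolved end-to-end"

print(handle(Ticket("INC-1042", "user cannot sign in")))
```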
Leadership reality check: are we still leading like it’s 2022?
He also offered a leadership reality check: if your operating rhythm still assumes long cycles, manual coordination, and slow approvals, you’ll struggle in 2026. Strategy can’t be an annual exercise; it must become a live set of decisions, guardrails, and feedback loops.
AI gives the “how”; humans must own the “why”
His framing landed: AI increasingly gives you the how—options, sequencing, automation. But leaders must own the why—purpose, priorities, ethics, and accountability. In an agentic era, that ‘why’ is what keeps speed from becoming risk.
He also anchored AI’s value in a more human currency: time. Yes, AI drives efficiency. But the real prize is what leaders do with the time they get back: better customer interactions, faster decision-making, more innovation, and more space for creative work that machines cannot replicate.
Talent gaps, transformation, and “sovereign AI”
The keynote did not gloss over constraints. Fahmy flagged the talent gap that emerges when adoption rises faster than capability—especially in AI engineering, cybersecurity, governance, and change leadership. His call was practical: the future workforce isn’t only “AI builders,” but AI challengers—people who can validate outputs, pressure-test recommendations, and govern autonomous workflows.
He also introduced the importance of sovereign AI in the GCC context—where nations like the UAE and Saudi Arabia are thinking deeply about data residency, cultural alignment, regulatory control, and strategic autonomy. The point wasn’t simply “host it locally,” but to build AI that is trustworthy in local context: aligned to language, norms, governance expectations, and national priorities.
In practical terms, sovereign AI means keeping sensitive data and model control within national boundaries, enforcing local governance and auditability, and ensuring outputs reflect language, culture, and regulatory expectations.
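One way to picture those controls is as policy-as-data that gates every deployment. The region names, fields, and checks in this sketch are invented for illustration; they do not describe any national framework.

```python
# Illustrative policy-as-data sketch of sovereign-AI controls; the region
# names, fields, and checks are invented, not any national framework.
SOVEREIGN_POLICY = {
    "data_residency": {"allowed_regions": ["uae-central"]},
    "audit": {"log_prompts": True, "log_outputs": True, "retention_days": 365},
    "output_checks": ["arabic_language_support", "regulatory_terms_filter"],
}

def deployment_allowed(region: str, audit_logging_enabled: bool) -> bool:
    """Gate a model deployment against residency and auditability rules."""
    in_boundary = region in SOVEREIGN_POLICY["data_residency"]["allowed_regions"]
    return in_boundary and audit_logging_enabled

print(deployment_allowed("uae-central", audit_logging_enabled=True))  # True
print(deployment_allowed("eu-west", audit_logging_enabled=True))      # False: data leaves the boundary
```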
Strategy ownership, authority, and misinformation
In 2026, he argued, leaders must be explicit about who owns strategy when decisions are increasingly shaped by AI systems. If an agent can recommend, negotiate, or trigger actions at speed, the organisation needs clarity on authority: approval thresholds, auditability, escalation paths, and responsibility when something goes wrong.
He also linked AI strategy directly to misinformation risk—not as a social media issue alone, but as an enterprise challenge: hallucinations, deepfakes, synthetic fraud, manipulated signals, and decision contamination. The answer, he implied, is not fear—it’s governed adoption: controls, verification, identity assurance, and clear human accountability.
He closed with a grounded reminder that landed strongly with the awards theme: the winners in 2026 won’t be defined by the “fastest AI,” but by the clearest purpose—and by the culture they’ve built to sustain transformation.
Panel discussion: “Seamless Intelligence” — when AI becomes invisible (and unavoidable)
The panel discussion, moderated by Srijith KN (Senior Editor, Integrator Media), brought the theme down from keynote altitude into product and platform reality. The session, titled “Seamless Intelligence: How AI and Data Are Powering the Next Generation of Intelligent Experiences,” featured:
- Mr. Rishi Kishor Gupta, Regional Director (Middle East & Africa), Nothing Technology
- Ms. Bushra Nasr, Global Cybersecurity Marketing Manager, Lenovo
- Mr. Nikhil Nair, Head of Sales (Middle East, Turkey & Africa), HTC
- Ms. Aarti Ajay, Regional Lead Partnerships (Ecosystem Strategy & Growth), Intel Corp
One way to read the panel: infrastructure decides what’s possible, security decides what’s safe, and experience decides what gets adopted.
The discussion converged on one powerful idea: in the next phase, the user shouldn’t “see” the intelligence—it should dissolve into the experience. The ambition is not “AI features,” but AI-native interactions that feel natural, predictive, and frictionless across devices and contexts.
Infrastructure: where does intelligence actually run?
From the infrastructure angle, the panel stressed that “AI everywhere” requires deliberate choices about where compute happens—on device, at the edge, or in the cloud—and how workloads move across that spectrum. This included clear emphasis on the hardware stack (CPU/GPU/NPU) and what it takes to scale AI responsibly.
“AI won’t scale on slogans; it scales on architecture—device, edge, and cloud—each with different cost, latency, and security trade-offs.”
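As a rough illustration of those trade-offs, a placement decision can be reduced to a few rules over latency, data sensitivity, and model size. The thresholds below are invented for the sketch, not vendor sizing guidance.

```python
# A toy placement rule for where an AI workload runs; the thresholds are
# invented for illustration, not vendor sizing guidance.
def place_workload(latency_budget_ms: int, data_sensitivity: str, model_size_gb: float) -> str:
    if data_sensitivity == "restricted":
        return "on-device"   # sensitive data stays local (CPU/NPU on the device)
    if latency_budget_ms < 50:
        return "edge"        # tight latency budgets rule out a cloud round trip
    if model_size_gb > 20:
        return "cloud"       # large models need data-center GPUs
    return "edge"            # default: close to the user, cheaper than cloud egress

print(place_workload(latency_budget_ms=30, data_sensitivity="normal", model_size_gb=5))    # edge
print(place_workload(latency_budget_ms=500, data_sensitivity="normal", model_size_gb=70))  # cloud
```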
Trust: security, fear factor, and the “moving data center”
From the trust perspective, the panel highlighted the growing “fear factor” around devices and autonomy: more sensors, more data, more models—more attack surface. A memorable analogy landed well: the modern connected vehicle increasingly behaves like a moving data center, raising the bar on governance, identity, and resilience.
“Every new AI capability is also a new attack surface—security has to be designed in, not bolted on.”
Human experience: AI as an experience, not a tool
On the human side, the conversation explored how AI will increasingly show up as experience—wearables, ambient assistance, multi-sensory support, and interactions that augment how people see, decide, and act. The subtext was clear: if AI is going to become ubiquitous, it must become intuitive—and aligned to what humans actually value.
“AI is becoming an experience, not an app—supporting how we see, decide, and act, often without the user noticing the machinery behind it.”
Consumer reality: “make human life smarter” and “declutter your life”
From the consumer device lens, the message was refreshingly plain: AI should help make human life smarter—not noisier. That includes automation that reduces cognitive load and helps people “declutter” their day-to-day, rather than introducing another layer of complexity.
The moderator wrapped the session with a sober economic note: as the stack expands from devices to cloud subscriptions and services, the cost of modern digital life rises—making it even more important that AI delivers tangible value, not just novelty.
“If AI doesn’t declutter your life, it’s not helping.”
Executive Board Commentary: The real shift is “delegation”—not adoption
If there was one undercurrent in the room, it’s that we’ve moved past the question of whether AI is “interesting.” The real question now is: what can we delegate—safely, repeatedly, and at scale—without degrading trust? That’s why the keynote’s emphasis on moving beyond PoCs into governed, repeatable operating models felt so relevant.
This is the step-change many organisations underestimate: adoption is a technology story; delegation is an operating model story. In an agentic era—where systems don’t just generate answers but initiate actions—the enterprise doesn’t need more demos. It needs a way to decide: what tasks can be automated end-to-end, what must stay human-led, and what requires a hybrid “human-in-the-loop” pattern?
A useful lens: the “Delegation Curve”
Think of your AI journey as a curve with three stages (sketched in code after the list):
- Assist (copilot) – AI helps humans do the work faster (drafting, summarising, analysing).
- Act (agentic) – AI executes steps across workflows (triage → route → approve → action), escalating exceptions.
- Assure (governed autonomy) – AI operates with clear authority limits, auditability, and continuous controls (especially critical in regulated sectors and national infrastructure contexts).
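Read loosely, the three stages differ in how much authority the system holds before a human steps in. The sketch below encodes that idea; the stage names mirror the list above, while the budget gate and audit call are inventions for illustration.

```python
# Hypothetical encoding of the Delegation Curve as an authority gate.
# Stage names mirror the list above; everything else is invented.
import logging
from enum import Enum

class Stage(Enum):
    ASSIST = 1   # copilot: AI drafts, humans act
    ACT = 2      # agentic: AI executes within a delegated budget
    ASSURE = 3   # governed autonomy: same budget, plus an audit trail

def may_execute(stage: Stage, action_cost: float, approval_limit: float) -> bool:
    """Decide whether the system may act without a human in the loop."""
    if stage is Stage.ASSIST:
        return False                            # assist-only: never acts alone
    allowed = action_cost <= approval_limit     # act only within delegated authority
    if stage is Stage.ASSURE:
        # Governed autonomy: every decision leaves an auditable record.
        logging.info("audited decision: cost=%.2f allowed=%s", action_cost, allowed)
    return allowed

print(may_execute(Stage.ASSIST, action_cost=10.0, approval_limit=500.0))  # False
print(may_execute(Stage.ACT, action_cost=200.0, approval_limit=500.0))    # True
```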
Most enterprises are still celebrating Stage 1, experimenting in Stage 2, and under-investing in Stage 3. Yet Stage 3 is where operational confidence is built—and where reputational risk is avoided.
The missing KPI: “Trust latency”
The panel made it clear that infrastructure, security, and experience all shape whether “seamless intelligence” is adopted in the real world.
But the deeper measurement leaders should add is trust latency: how long it takes an organisation to trust an AI outcome enough to act on it without manual re-checking.
In practical terms, the most important AI metrics in 2026 won’t be model accuracy in isolation. They’ll look like the list below, with a toy calculation after it:
- Time-to-trust (how quickly decisions can be taken without repeated human verification)
- Exception rate (the “weird 3%” humans must handle)
- Containment rate (how often an agent resolves end-to-end without escalation)
- Governance velocity (how quickly policy, approvals, and controls keep up with agent speed)
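As a toy example of how such metrics compose, the calculation below derives containment rate, exception rate, and a crude time-to-trust proxy from a handful of invented agent runs; none of the figures come from the event or any real deployment.

```python
# Toy numbers to show how the metrics compose; none of these figures
# come from the event or any real deployment.
runs = [
    {"resolved_end_to_end": True,  "human_recheck": False},
    {"resolved_end_to_end": True,  "human_recheck": True},
    {"resolved_end_to_end": False, "human_recheck": True},   # escalated exception
    {"resolved_end_to_end": True,  "human_recheck": False},
]

containment_rate = sum(r["resolved_end_to_end"] for r in runs) / len(runs)
exception_rate = 1 - containment_rate
# One crude proxy for 'time-to-trust': share of outcomes acted on without re-checking.
acted_without_recheck = sum(not r["human_recheck"] for r in runs) / len(runs)

print(f"containment rate: {containment_rate:.0%}")             # 75%
print(f"exception rate: {exception_rate:.0%}")                 # 25%
print(f"acted without re-check: {acted_without_recheck:.0%}")  # 50%
```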
This is where leadership becomes the constraint—or the advantage.
Sovereign AI isn’t just residency; it’s “accountability at the boundary”
The keynote’s introduction of sovereign AI resonates strongly in the GCC because the stakes aren’t only technical. They are cultural, regulatory, and strategic.
The next phase of sovereign AI will be defined not by where data sits, but by where accountability sits—who can inspect, audit, override, and certify AI behaviour, especially when agents trigger actions across systems.
Sovereign AI done well will become a competitive advantage: it makes cross-sector adoption easier because it offers confidence by design—clear boundaries, policy alignment, and traceability.
The “AI dividend” test: what are you doing with the time you saved?
A subtle but powerful keynote point was that AI’s real asset is time.
The leadership question is what you do with it. In organisations that win, the reclaimed time becomes: better customer experience, sharper decision-making, faster innovation cycles—and more human attention where it matters.
In organisations that struggle, that time gets lost to rework, re-checking, and governance friction—because trust was never engineered into the operating model.
The new perspective to carry forward
At the ICT Champion Awards, the celebration of winners implicitly reinforced the real benchmark for 2026: repeatability. Not “who has the flashiest AI,” but who can run it reliably with trust, governance, and measurable outcomes.
So perhaps the most useful question to take forward is this:
What are the first three workflows in your organisation that you are willing to delegate to agentic AI, end-to-end, under clearly defined authority, auditability, and exception handling?
That’s also what the ICT Champion Awards ultimately celebrated: not technology theatre, but execution maturity. The winners weren’t simply early adopters—they were organisations demonstrating innovation with outcomes, leadership with accountability, and scale with governance. In a year defined by agentic possibilities, the Awards served as a reminder that the real competitive edge is operational confidence—systems that work, controls that hold, and teams that can sustain change. Hype is easy; habit is earned.