Tech Interviews

Red Hat Summit Connect 2025: A discussion with Ed Hoppitt and Adrian Pickering


Exclusive Interview with Ed Hoppitt, EMEA Director – Value Strategy, App and Cloud Platforms at Red Hat & Adrian Pickering, Regional General Manager, MENA & Enterprise Segment Lead for CEMEA at Red Hat  

Having worked in global telecom and advised some of the world’s largest enterprises, how have these experiences shaped your approach to developing IT solutions?
Ed: I believe that designing and running operational IT over many years gives one a deep understanding of what truly matters to a customer, especially those partnering with Red Hat. This background enables me to connect with our customers on a level where they feel understood regarding their pain points. Today’s biggest challenge for enterprise IT is building systems that are predictable, replicable, and standardized—yet able to scale effectively.

When you look at what Red Hat offers and how we help enterprises build these solutions, our focus is rooted in leveraging the open source community. We invest in projects that we know will create tremendous value for our enterprise customers, taking those projects upstream, incorporating them into the Red Hat portfolio, and industrializing them into platforms such as OpenShift. Customers choose platforms like OpenShift because they represent best-of-breed choices, delivering stability, reliability, predictability, and scalability. With my operational IT background, I appreciate just how crucial these outcomes are for every customer I speak with.

This year’s Red Hat Summit focuses on curiosity and on turning acquired knowledge into practical application. Through this, what key message are you hoping to leave with the audience this year?
Adrian: I view curiosity as the foundation for working with our customers to truly understand their vision—where they want to be 18, 24, 30, or even 36 months down the line. It’s about gaining a clear grasp of the business challenges they face or the new markets they wish to serve in the future. We then align our best capabilities to support them along that journey, keeping cost efficiency in mind. This might involve modernizing infrastructure, existing applications, or even building new applications that open doors to entirely new customer segments or solutions.

A great example of this is our work with the Dubai Health Authority, who were on stage at Summit Connect Dubai. When we engaged with them, we took the time to deeply understand the challenges they were trying to address for the citizens and then brought not only our technical products but also our expertise in project management, implementation, training, and knowledge transfer. I’m very proud of our achievements over the years, and I believe that in doing so, we add significant value for our customers.

Ed: To add another perspective, the most compelling conversations I have with customers often begin with discussions that don’t initially center on technology. They start with, “I want to imagine a world where things are different—where you can help me achieve something extraordinary.” For instance, with Red Hat OpenShift AI, we collaborated with the US Department of Veterans Affairs to build a platform that effectively reduced self-harm and suicide rates. By harnessing a platform that could analyze how people called in for assistance—assessing tone and how they described their situations—we helped the teams prioritize who needed immediate care versus who could wait a while for some support.

It’s when someone presents you with such a profound challenge that you really see the immense opportunity we have as an organization. These technology platforms do more than enable business; they help vulnerable people receive the care they need and, ultimately, save lives.
 
You mentioned that for Red Hat it’s relatively easy to work on new technologies because of the robust support provided by partners and customers alike. Can you elaborate on just how important those relationships are for your team?

Adrian: The point is that, while we are proud of the solutions we deliver through Red Hat, many integrated solutions require components from multiple software vendors. Our partners and integrators are essential because they bring together the various components needed to deliver, implement, and support these complex solutions. In many regions, especially where we serve multiple countries, these partners offer additional scale and reach, often accessing markets where Red Hat might not have a direct footprint. This collaboration is a critical part of why we work so closely with our partners.

Ed: Another significant benefit of having partners is that it allows Red Hat to concentrate on what we do best. We aren’t trying to solve every aspect of the enterprise IT supply chain. Instead, we work with best-of-breed partners who focus on their own areas of expertise. This means that Adrian’s teams and others in our region can focus on delivering core value to our customers. As we saw on stage, one of the Middle East’s largest banking groups was well ahead of the curve in its approach to virtualisation and modernisation. These partners enable us to help customers execute at scale and with credibility. My background in operational IT tells me that although the journey is rarely smooth, having a trusted team and partners makes all the difference.

In today’s enterprise technology landscape, where hybrid and multi-cloud environments are the norm, how is Red Hat helping customers unlock the potential of open source technologies?
Ed: For me, the hybrid and multi-cloud narrative is essentially about providing customers with standardization. Some customers might say that they’re on a path toward data center consolidation, or are committed to a single hypervisor, or even a multi-cloud strategy. But once they embrace a hybrid approach, the underlying message is that they require a globally consistent management and operational platform—one that spans multiple cloud providers, private data centers, or even edge environments.

How do we achieve this consistency in an open source manner? When you’re a proprietary company, control is tight. With our strategy, we offer customers open choice—where to run their platform and which workloads to deploy on top of it. In essence, our approach empowers customers by eliminating the risks of siloed, locked-in solutions. This freedom enables businesses to continuously ask, “What should I run, and where and how should I run it?” They consider the portfolio of applications, evaluate whether low-latency edge deployment is needed—as is common for a supermarket loyalty system—or whether a core data center or public cloud deployment makes sense. The operational “how” is addressed by determining whether to run on a container platform, a virtual machine platform, or an alternative setup. Finally, the “why” ties back to ensuring the overall solution aligns with the customer’s cost and business objectives.

Ultimately, our focus is on answering one simple question for the customer: “What should I run, and where, how, and why should I run it?” This encapsulates our commitment to providing both choice and clarity in today’s complex IT environment.


Adrian: I find it quite interesting how that perspective plays out regionally. While we enable customers to run applications on our platforms, major players like Google are also part of the ecosystem. Particularly in Europe, where there is current uncertainty, many governments are questioning whether their sovereign data should reside on a cloud service originating from the U.S. Without diving too deeply into politics, this debate is prompting customers to consider alternative cloud options. For example, when running OpenShift on-premises or on a cloud provided by a specific country, it becomes easier to migrate to a new provider if necessary. This is an evolving discussion, especially in Europe, and it’s something that might expand beyond political cycles in the future.

Ed: Exactly. In Europe, the focus remains on providing choice. With open source technology, we sidestep many political concerns because of the transparency it offers. Customers can inspect the code to see that there are no hidden backdoors or data issues. Consequently, building a sovereign solution using Red Hat technology has gained significant traction. Both governments and organizations are increasingly interested in retaining full control over their data.
 
It seems that customers also desire a degree of freedom with their platforms; they want to ensure that no external party completely controls their systems. How does Red Hat provide this assurance of complete control?

Ed: Customers can deploy our platform in one of two ways. The first is to run it in their own data center, on-premises; in this case, they obtain full access—they have the code, the platform, all the necessary certificates, and they manage it themselves. In contrast, if they decide to run the platform on one of the hyperscalers, while the underlying compute infrastructure is provided by the hyperscaler, the platform—the layer where the data sits and the applications operate—remains in the open source domain. Therefore, even in these cases, customers retain the ability to influence, control, and understand what happens with their data and applications. And when it comes down to it, every country and organization will make its own decisions, but our consistent message remains: our focus is on choice. Whether a company decides to run its workloads privately, on the public cloud, or at the edge, we ensure that they have the consistent tools and platforms to do so efficiently.

Adrian: That’s exactly right. We have long maintained a commitment to enabling customers to choose the open hybrid cloud. Whether a customer opts for a sovereign cloud, a hyperscaler, or their own private cloud, our core mission is to grant them the freedom to choose and to operate in a simple, consistent, and controlled manner.


Where do you see the enterprise technology landscape heading in the next three to five years?
Ed: I believe that over the next three to five years, we will witness an increasingly consolidated effort to eliminate complexity within IT organizations. Over the last decade, IT has excelled in building silos—if anything, it’s been very effective at doing so. However, with the advent of AI, these separate silos of infrastructure and data are becoming even more problematic. When your data resides in multiple unconnected silos, it becomes extremely challenging to aggregate and leverage it for AI-based insights.

In a recent discussion with a financial services industry leader, the focus was increasingly on ensuring access to all their data, democratizing it internally, and enabling AI-driven querying. This represents a paradigm shift, as data today typically lives within isolated applications. In an AI-integrated world, breaking down these silos is critical. I foresee that one of the most significant developments in the near future—driven by AI—will be the democratization of data access across organizations.

Adrian: I concur. From a regional perspective, we might be a couple of years behind more developed markets like Europe or the U.S. For instance, we are still in the earlier stages of transitioning to the cloud. In the UAE and other regions, sovereign cloud providers are just beginning to expand their offerings. Financial institutions, aviation companies, and others are now starting to embrace the cloud more aggressively than they have in the past four or five years.

Ed: Another nuance here involves what we’re exploring with Granite and small language models. Often, to help Adrian’s customers manage support tickets, you don’t need a language model that knows Shakespeare by heart. Large language models typically contain vast amounts of data, much of which isn’t directly relevant to a given enterprise. Our focus has thus shifted to a choice: do we help organizations harness AI by asking questions of data they couldn’t access before, or do we tailor solutions with smaller language models designed to address specific enterprise challenges?
One notable example was how we applied a tailored small language model within Red Hat to support our own teams in resolving support tickets. This initiative not only saved millions of dollars but also significantly enhanced customer experience and sped up response times. Over time, while large language models have captured much of the buzz, I suspect we will see rapid adoption of small, specialized language models tailored for specific functions.


Digital Sovereignty in Practice: What It Means for Enterprises Today


In our conversation with Ismail Ibrahim, General Manager, CEMEA at SUSE, we seek to better understand the concept, his read on the industry, and how enterprises in the UAE and Saudi Arabia can retain control in a rapidly evolving technology landscape.



What does “digital sovereignty” actually mean for an enterprise today, not in theory, but in day-to-day operations?

From an enterprise perspective, digital sovereignty becomes real the moment it changes what you do on a Monday morning. In practice, it means three things become operational requirements, not policy statements.

First, control over data. Not just where data is stored, but where it is processed, who can access it, and how you prove that in an audit. For many organizations in the UAE and Saudi Arabia, that is increasingly tied to sector rules, procurement requirements, and customer expectations.

You need the ability to keep sensitive workloads within national borders when required, but also to enable controlled data flows when innovation demands it. The important point is that sovereignty is not “ringfencing everything”. It is being deliberate about which data, which workloads and which dependencies must remain under your control.

Second, control over operations. Day-to-day, that looks like resilience and predictability: how quickly you can patch, how confidently you can recover, how consistently you can enforce policy across clusters, clouds and edge sites. This is where many enterprises discover that sovereignty is inseparable from operational excellence. If you cannot reliably manage your environments, you do not really control them.

Third, control over technology choices. This is where open source becomes practical, not ideological. When you build on open, enterprise-supported platforms, you are reducing dependency on opaque codebases and constraining the risk of being forced into a single vendor’s roadmap. Sovereignty is “choice by design”, because choice is what allows you to meet local requirements today and change course tomorrow.

That is why at SUSE we often frame sovereignty around pillars like control, choice and resilience, with autonomy as the long-term outcome. For enterprises, those pillars translate into everyday decisions: architecture, procurement, governance, patching, incident response and lifecycle management.

In the next three years, which will hurt enterprises more: security breaches, or being locked into the wrong technology stack?

It is not an either-or, because the two risks are increasingly connected.

A security breach is immediate and visible. It impacts customers, regulators, operations and reputation. But lock-in to the wrong stack can quietly increase breach risk over time, because it limits your ability to respond. If your architecture makes it hard to patch quickly, to segment workloads properly, to implement new controls, or to move sensitive workloads to a compliant environment, you have turned security into a dependency problem.

Over the next three years, I would say the most damaging scenario for many enterprises is not “breach versus lock-in”, but breach plus lock-in, where an organisation is under pressure and finds it cannot adapt fast enough.

This is exactly why sovereignty has moved into the C-suite and boardroom. Leaders are recognizing that digital sovereignty sits alongside cybersecurity and operational resilience as a strategic requirement. You need a risk-based approach to your data, workloads and support model, and you need the flexibility to change course.

Practically, in the UAE and Saudi Arabia, many CIOs are already building mixed environments across on-prem, sovereign cloud, hyperscalers and edge. The goal is not to avoid the cloud. The goal is to avoid a situation where strategic choices are dictated by a single vendor’s constraints. Open, enterprise-grade platforms help you keep the option to move, modernize or localize when needed, without rewriting everything from scratch.

As AI becomes embedded into infrastructure itself, do you believe enterprises are prepared to trust machines with operational decisions, or are we moving faster than governance allows?

In many cases, we are moving faster than governance, but that does not mean enterprises should slow down. It means they should modernize governance at the same pace as adoption.

The key is to separate hype from reality. “Trusting machines” does not mean handing over full autonomy overnight. For most enterprises, AI enters operations in stages.

Stage one is assistive intelligence, where AI helps surface insights, detect anomalies, recommend actions and reduce manual effort. This is where many organizations see quick operational value, especially in areas like observability, incident triage, capacity planning and security monitoring.

Stage two is bounded autonomy, where AI can execute actions within defined guardrails, such as automated scaling, routing, remediation playbooks, or policy-driven security responses. The governance requirement here is clear accountability: what is automated, under what conditions, with what approvals, and what audit trail.

Stage three is agentic operations, where more complex systems handle multi-step tasks across environments. This is the phase where governance must be mature, because the risk is not simply “wrong output”, it is unintended consequences across interconnected systems.

For the UAE and Saudi Arabia, readiness often depends on whether organisations have already done the foundations: standardised platforms, consistent policy enforcement, clean identity and access controls, and modern lifecycle management. If the foundation is fragmented, AI simply accelerates fragmentation.

This is why we are seeing strong interest in approaches that support governance by design, including the ability to run AI solutions in more controlled environments. In many regulated sectors, that includes air-gapped or restricted environments, where organizations want to adopt AI while keeping strict control of data movement and operational boundaries.

My view is that enterprises can absolutely trust AI in operations, but only when they treat trust as an engineering outcome: transparent systems, auditable controls, clear guardrails, and the ability to override. Governance is not a blocker. Governance is what makes adoption sustainable.

By 2030, will enterprises still control their infrastructure choices, or will hyperscalers and AI vendors effectively decide that for them?

Enterprises will control their choices if they design for control now. If they do not, the market will make the decision for them.

By 2030, the default buying motion will push organizations toward managed services, vertically integrated AI stacks, and increasingly opinionated platforms. That can deliver speed, but it can also compress choice, especially if your applications, data pipelines, security controls and operational tooling are tightly coupled to one vendor.

So the question is really about architecture and leverage. Enterprises that prioritise portability, standardization and open platforms will keep leverage. They can choose the right environment for each workload, based on performance, compliance, cost, and risk. Enterprises that ignore portability will find that “choice” exists on paper, but not in practice.

This is where digital sovereignty is often misunderstood. Sovereignty does not mean rejecting global technology. It means retaining the ability to make deliberate decisions about where workloads run and who controls the critical layers. Many leaders now talk about “glocal” strategies: using global innovation while maintaining local control and compliance where it matters.

At SUSE, our positioning has been consistent: open source supports sovereignty because it promotes transparency, portability and freedom from lock-in. That is not a slogan, it is a practical roadmap for keeping infrastructure choices in the hands of enterprises, not vendors.

If you had to offer one piece of advice to CIOs and policymakers in the UAE and Saudi Arabia navigating rapid digital transformation, what would it be?

My one piece of advice is this: treat sovereignty as an enabler of innovation, not a constraint, and build it into your operating model early.

For CIOs, that means starting with a clear map of your critical workloads and dependencies. Decide what must remain under national control, what can run on hyperscalers, what needs sovereign cloud options, and what requires special governance. Then standardize your foundations so you can enforce policy consistently. When sovereignty is engineered into the platform layer, transformation becomes faster, because you are not negotiating compliance from scratch every time you modernize an application.

For policymakers, it means continuing to create frameworks that encourage both innovation and trust. The UAE has taken a pragmatic approach in showing that openness and sovereignty do not have to conflict. When the policy environment supports clear requirements and predictable compliance expectations, enterprises can innovate with confidence.

And for both, there is a shared point: invest in skills and ecosystem capability. Sovereign outcomes are not delivered by policy alone, they are delivered by people, platforms, and partnerships. When you develop local talent, strengthen the partner ecosystem, and support enterprise-grade open source, you build resilience and long-term autonomy without slowing innovation.



Scaling Practical AI for Retail Growth in the GCC

Exclusive interview with Mark Turner, President EMEA, Rezolve Ai

What made Shoptalk Luxe Abu Dhabi a priority platform for Rezolve Ai this year?

For Rezolve Ai, Shoptalk Luxe Abu Dhabi brings together the right audience at the right moment. Luxury retailers in the region are no longer exploring ideas; they are making decisions and investing. It is a practical forum to exchange views with brands that are actively shaping their customer engagement and commerce strategies, and to have grounded conversations about what is working in real retail environments. Abu Dhabi also reflects how influential the region has become in global luxury thinking.

How is AI changing the way luxury retailers think about customer engagement today?

Luxury retailers are becoming far more intentional about how and when they engage customers. AI is helping them move away from broad personalisation toward more contextual, timely interactions that respect the brand experience. The focus is on supporting customers at key moments, whether online or in store, and ensuring engagement feels consistent and considered rather than automated or intrusive.

What distinguishes meaningful AI adoption in retail from short-term experimentation?

Retailers that see lasting value from AI are those that embed it into day-to-day operations rather than treating it as a standalone initiative. Meaningful adoption is driven by clear commercial goals, fast implementation, and solutions that work within existing systems and teams. Short-term experimentation tends to stall when it lacks ownership, scale, or a clear link to performance outcomes.

Why is the Middle East, and the UAE in particular, becoming increasingly important for luxury retail innovation?

The Middle East, and the UAE in particular, has created an environment where luxury retail innovation can move quickly. Consumers are digitally confident, infrastructure is strong, and there is a clear push at a national level to adopt advanced technologies. This combination allows retailers to implement and test new models at scale, which is why the region is increasingly influencing global luxury strategies.

Looking ahead, where do you see AI delivering the most value for luxury brands over the next few years?

The greatest value will come from AI that directly supports growth while reinforcing operational discipline. For luxury brands, that means more relevant engagement that improves conversion and loyalty, alongside better forecasting and inventory decisions that protect margins. The priority will be practical use of AI that enhances the customer experience without compromising brand integrity.



Sennheiser: Beyond Hardware, Toward Seamless Integration


Exclusive Interview with Fadi Costantine, Sales Manager – Business Communication, Middle East at Sennheiser


Sennheiser has leveraged its role in shaping professional audio to build strong hybrid communication products for use across business and education environments. We caught up with Fadi Costantine, Sales Manager – Business Communication, Middle East at Sennheiser, to discuss the brand’s presence at the show, its integrated product ecosystem, and the growing importance of software-driven audio solutions.

What are your most innovative products currently serving the business and education sectors?

Sennheiser operates across several business units, with Business Communication being one of our most important. This unit is entirely dedicated to the installation market, where many of our most dynamic and innovative solutions are positioned.

Professional audio is at the core of Sennheiser’s brand identity. Through our ownership of renowned brands such as Neumann and Merging Technologies, we have established ourselves as a global leader in audio communications. We leverage this expertise to develop advanced meeting and conferencing solutions that enhance business performance.

Crucially, our products are not designed to operate in isolation. They are engineered to work together as a unified ecosystem, enabling seamless communication across devices and platforms. This ecosystem approach allows system integrators and end users to design complete, end-to-end audio solutions tailored to a wide range of applications and project requirements.

Which industry verticals are currently driving demand for these solutions in the region?

While we are active across multiple verticals in the region, we have a clear strategic commitment to deliver innovative, scalable, and future-ready audio solutions tailored specifically for the needs of higher education and the modern corporate environment.

In corporate environments, our microphone solutions are widely deployed in meeting rooms to support modern collaboration and video conferencing scenarios. In the education sector, our technologies are extensively used in lecture halls and hybrid learning environments, including classrooms and auditoriums designed to accommodate both in-person and remote participants.

A strong example is our ceiling microphone solutions. These are frequently used not only in traditional meeting rooms but also in lecture halls for audio capture, video conferencing, and recording. They are also ideal for voice-lift applications, enabling students to hear the lecturer clearly without the need for wearable microphones. This creates a more natural, seamless teaching experience while minimizing complexity for the user.

Software and integration are critical in these environments. How does Sennheiser support this alongside its hardware solutions?

Workflow optimization has always been central to our product strategy and will remain a key focus going forward.

At ISE 2026, Sennheiser will officially launch DeviceHub, a secure, cloud-based platform designed for IT and AV managers as well as system integrators, introducing a new era in AV management. DeviceHub centralizes device visibility and remote management, streamlining workflows across enterprise, education, and corporate settings.

DeviceHub provides real-time insights, simplified setup, and unified control, supporting organizations in creating better spaces for communication, learning, and teamwork. Following a successful private beta, ISE marks the transition to public availability. Visitors can explore DeviceHub’s capabilities and speak directly with product experts about how it can transform their AV and IT operations.


Copyright © 2023 | The Integrator