Tech Interviews
Vertiv Outlines Data Center Evolution and AI Infrastructure Strategy

Exclusive Interview with Peter Lambrecht, Vice President Sales, EMEA, Vertiv

What specific challenges do clients face in powering and cooling AI infrastructure, and how did you address these at GITEX this year?

At GITEX this year, the focus on artificial intelligence (AI) was unmistakable. Every booth showcased AI-driven solutions running on GPU-powered servers from leading companies like NVIDIA. However, for AI applications to function effectively, the right infrastructure is essential to power and cool these GPUs, ensuring smooth and efficient performance. For us, this highlights our expertise in AI infrastructure, designed to support these platforms optimally.

One of our key focus areas is liquid cooling, as traditional air cooling in data centers is no longer sufficient. With rack densities now reaching 50, 60, and even up to 132 kilowatts per rack, air cooling alone cannot handle the thermal load. Liquid cooling has become critical, efficiently drawing heat away and directing it elsewhere. This core technology, developed with our partner NVIDIA, supports the deployment of GPUs worldwide, and together we provide, market, and deliver these advanced solutions to our clients.
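To give a rough sense of why air cooling runs out of headroom at these densities, the heat a rack sheds can be related to coolant flow through the basic heat balance Q = ṁ·cp·ΔT. The sketch below is a back-of-the-envelope estimate under assumed figures (a 132 kW rack, a 10 °C coolant temperature rise, water as the coolant), not a Vertiv design specification:

```python
# Rough estimate of the coolant flow needed to remove a rack's heat load.
# Heat balance: Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)

def coolant_flow_lpm(heat_kw: float, delta_t_c: float = 10.0) -> float:
    """Water flow (litres/minute) needed to absorb heat_kw at a delta_t_c rise."""
    cp_water = 4186.0   # J/(kg*K), specific heat of water
    m_dot = heat_kw * 1000.0 / (cp_water * delta_t_c)  # mass flow, kg/s
    return m_dot * 60.0  # ~1 kg of water per litre -> litres per minute

# A 132 kW rack with a 10 degC coolant rise needs roughly 189 L/min of water.
# Moving the same heat with air (cp ~1005 J/(kg*K), density ~1.2 kg/m^3)
# would take on the order of 11 cubic metres of air per second per rack,
# which is why air cooling alone cannot keep up at these densities.
print(round(coolant_flow_lpm(132.0), 1))
```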

Cooling, however, is only one part of the equation. The shift to AI also requires a comprehensive approach to power management, as AI workloads significantly alter electricity load patterns. Our solutions are designed to meet these power demands, and we have showcased these capabilities at our booth. We’ve been actively engaging with partners and clients to address these challenges as they implement their AI solutions, ensuring that both cooling and power needs are effectively met.

Can you elaborate on your partnership with NVIDIA and how it has evolved over the years?

Our partnership with NVIDIA has grown significantly over the years, and over the past year, it has reached an unprecedented level of collaboration. Both of our CEOs, Jensen Huang of NVIDIA and Giordano Albertazzi of Vertiv, communicate regularly, aligning closely on joint development initiatives. Together, we create environments optimized for NVIDIA’s cutting-edge chips, ensuring they have the necessary infrastructure to operate at peak performance.

In a recent joint announcement, both CEOs unveiled new solutions that integrate NVIDIA GPUs with Vertiv’s advanced infrastructure, designed to maximize efficiency and reliability. This collaboration represents the core strength of our partnership, where we have refined reference designs that allow NVIDIA to deploy their GPUs seamlessly and effectively on a global scale.

What current trends do you see in the data center and critical infrastructure market? With many hyperscalers entering the market, what is your perspective on this development?

The data center market is dominated by hyperscalers, whether through their direct deployments or co-location facilities, the two primary models for quickly scaling data center capacity. These hyperscalers are making substantial investments in infrastructure, fueling competition in this fast-evolving landscape.

The AI race is fully underway, with industry giants all striving toward the same goal. As their infrastructure partner, we are advancing with them, providing the essential support they need to drive this innovation. At the same time, the growth of the cloud sector remains foundational and continues to expand robustly. What we are seeing now is a dual growth trajectory: traditional cloud business growth compounded by the accelerated demand for AI infrastructure.

Trends in data centers reveal a marked increase in power consumption. A few years ago, a five-megawatt data center was considered significant; soon, however, a five-megawatt capacity will fit within a 10×10-meter room as rack density skyrockets, reducing the need for extensive white space but requiring expanded infrastructure areas. Data centers are scaling up to unprecedented sizes, with discussions now involving capacities of 300-400 megawatts, or even gigawatts. Visualizing a gigawatt-sized facility is challenging, yet that is the direction the industry is moving—toward ultra-dense, compacted facilities where every element is intensified, driving an enormous need for power.
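That five-megawatts-in-a-room claim can be checked with a quick back-of-the-envelope calculation. All figures here are illustrative assumptions (132 kW liquid-cooled racks, roughly 2.5 m² of floor per rack including service space), not a reference design:

```python
# Back-of-the-envelope: can 5 MW of IT load fit in a 10 m x 10 m room?
ROOM_M2 = 10 * 10       # 100 m^2 of white space
RACK_KW = 132           # high-density, liquid-cooled rack (assumed)
M2_PER_RACK = 2.5       # footprint incl. aisle/service space (assumed)

racks_needed = -(-5000 // RACK_KW)       # ceil(5000 kW / 132 kW) = 38 racks
floor_used = racks_needed * M2_PER_RACK  # ~95 m^2

print(racks_needed, floor_used, floor_used <= ROOM_M2)
```

Under these assumptions 38 racks and about 95 m² suffice, which is why the white space shrinks even as the supporting power and cooling infrastructure around it has to grow.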

As data centers continue to grow, how do you view the sustainability aspect associated with these large facilities?

Today, nearly everyone has a mobile phone in hand, yet data centers—the backbone of our digital lives—often face criticism despite being indispensable to modern society. Data centers are not disappearing; on the contrary, they are set to expand as the pace of digitalization accelerates globally. Power generation remains, and will continue to be, a critical challenge, particularly in regions where resources are limited.

Currently, we see significant advancements in AI infrastructure across Northern Europe. Countries like Sweden, Finland, and Norway benefit from ample hydropower and renewable energy sources, making them well-suited for sustainable AI development. Meanwhile, the Middle East is experiencing a technology boom, backed by its rich energy resources and favorable conditions for large-scale investment in data centers.

There’s also a rising trend toward on-site power generation, with organizations increasingly considering dedicated power stations and micro-grids that tap into renewable or alternative energy sources. In the U.S., for instance, discussions are underway about small, mobile nuclear reactors to support local power needs. Finding sustainable power solutions has become imperative. A sudden surge in electric vehicle usage, for instance, could stress current power supplies dramatically, underscoring the need for substantial changes in our energy landscape.

What support and services does Vertiv offer to its customers? What types of infrastructure investments is Vertiv making in various parts of the world?

At Vertiv, service is arguably the most critical aspect of what we do. Selling a product is only one phase of its lifecycle; the real value lies in our ability to maintain and support that product for the next 10 to 15 years. The availability of highly skilled labor and expertise is essential, as we know that issues will inevitably arise. This is why resilience is a cornerstone of data centers. A strong service infrastructure is vital for addressing challenges promptly when they occur. Just as a car will eventually break down, data center systems too will face difficulties over time. At Vertiv, we have developed an exceptional service framework to ensure that we are always prepared to support our clients.

My philosophy is simple: we don’t sell a product unless we can guarantee the service to support it. When you sell a solution, you’re essentially selling a potential future problem, so ensuring your service capabilities are in place is vital for sustainable growth. This commitment to service is one of our key differentiators in the market.

We are continuously enhancing our core production facilities and making ongoing investments in engineering, research, and development. We are also evaluating our global footprint to optimize production capabilities and meet the growing demand. We are not just focused on the immediate needs of today; we are preparing for the demands that will arise in the next six months, a year, or even two to three years. In this race for capacity, the winner will be the one best positioned with the most scalable and resilient capacity.

What is the future of cooling systems in relation to AI chips, and where do you see this race heading?

We are not moving away from air cooling entirely; it will always play a role in the equation, and eliminating it completely would be prohibitively costly. The transition to liquid cooling is nonetheless a critical step forward. We are already seeing advancements in the liquids used to cool servers, enabling them to absorb higher levels of heat. However, the primary challenge will be addressing the overall densification of data center systems, as more powerful and compact solutions require innovative approaches to heat management.

Our partnerships with industry leaders like NVIDIA and Intel are essential, as they provide us with invaluable, first-hand insights into the development of cutting-edge chips and GPUs. While cooling and power systems might seem straightforward, AI introduces a new layer of complexity that demands forward-thinking solutions. To meet these challenges, we are making significant investments in research and development to support the AI-driven data centers of today and tomorrow. Our commitment to continuous innovation ensures that we remain at the forefront of these critical advancements.


Digital Sovereignty in Practice: What It Means for Enterprises Today



In our conversation with Ismail Ibrahim, General Manager, CEMEA at SUSE, we seek to understand the concept better, along with his view of the industry and how enterprises in the UAE and Saudi Arabia can retain control in a rapidly evolving technology landscape.



What does “digital sovereignty” actually mean for an enterprise today, not in theory, but in day-to-day operations?

From an enterprise perspective, digital sovereignty becomes real the moment it changes what you do on a Monday morning. In practice, it means three things become operational requirements, not policy statements.

First, control over data. Not just where data is stored, but where it is processed, who can access it, and how you prove that in an audit. For many organizations in the UAE and Saudi Arabia, that is increasingly tied to sector rules, procurement requirements, and customer expectations.

You need the ability to keep sensitive workloads within national borders when required, but also to enable controlled data flows when innovation demands it. The important point is that sovereignty is not “ringfencing everything”. It is being deliberate about which data, which workloads and which dependencies must remain under your control.

Second, control over operations. Day-to-day, that looks like resilience and predictability: how quickly you can patch, how confidently you can recover, how consistently you can enforce policy across clusters, clouds and edge sites. This is where many enterprises discover that sovereignty is inseparable from operational excellence. If you cannot reliably manage your environments, you do not really control them.

Third, control over technology choices. This is where open source becomes practical, not ideological. When you build on open, enterprise-supported platforms, you are reducing dependency on opaque codebases and constraining the risk of being forced into a single vendor’s roadmap. Sovereignty is “choice by design”, because choice is what allows you to meet local requirements today and change course tomorrow.

That is why at SUSE we often frame sovereignty around pillars like control, choice and resilience, with autonomy as the long-term outcome. For enterprises, those pillars translate into everyday decisions: architecture, procurement, governance, patching, incident response and lifecycle management.

In the next three years, which will hurt enterprises more: security breaches, or being locked into the wrong technology stack?

It is not an either-or, because the two risks are increasingly connected.

A security breach is immediate and visible. It impacts customers, regulators, operations and reputation. But lock-in to the wrong stack can quietly increase breach risk over time, because it limits your ability to respond. If your architecture makes it hard to patch quickly, to segment workloads properly, to implement new controls, or to move sensitive workloads to a compliant environment, you have turned security into a dependency problem.

Over the next three years, I would say the most damaging scenario for many enterprises is not “breach versus lock-in”, but breach plus lock-in, where an organization is under pressure and finds it cannot adapt fast enough.

This is exactly why sovereignty has moved into the C-suite and boardroom. Leaders are recognizing that digital sovereignty sits alongside cybersecurity and operational resilience as a strategic requirement. You need a risk-based approach to your data, workloads and support model, and you need the flexibility to change course.

Practically, in the UAE and Saudi Arabia, many CIOs are already building mixed environments across on-prem, sovereign cloud, hyperscalers and edge. The goal is not to avoid the cloud. The goal is to avoid a situation where strategic choices are dictated by a single vendor’s constraints. Open, enterprise-grade platforms help you keep the option to move, modernize or localize when needed, without rewriting everything from scratch.

As AI becomes embedded into infrastructure itself, do you believe enterprises are prepared to trust machines with operational decisions, or are we moving faster than governance allows?

In many cases, we are moving faster than governance, but that does not mean enterprises should slow down. It means they should modernize governance at the same pace as adoption.

The key is to separate hype from reality. “Trusting machines” does not mean handing over full autonomy overnight. For most enterprises, AI enters operations in stages.

Stage one is assistive intelligence, where AI helps surface insights, detect anomalies, recommend actions and reduce manual effort. This is where many organizations see quick operational value, especially in areas like observability, incident triage, capacity planning and security monitoring.

Stage two is bounded autonomy, where AI can execute actions within defined guardrails, such as automated scaling, routing, remediation playbooks, or policy-driven security responses. The governance requirement here is clear accountability: what is automated, under what conditions, with what approvals, and what audit trail.
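Bounded autonomy of the kind described here can be sketched in a few lines of code: an AI-proposed action runs unattended only when it falls inside predefined guardrails, anything larger or more sensitive escalates to a human, and every decision lands in an audit trail. This is a minimal illustration with hypothetical names, not a SUSE product API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Guardrail:
    """Bounds within which an AI-proposed action may run unattended."""
    allowed_actions: set
    max_scale_step: int    # largest change permitted in a single action
    requires_approval: set # actions that always escalate to a human

@dataclass
class Operator:
    guardrail: Guardrail
    audit_log: list = field(default_factory=list)

    def handle(self, action: str, magnitude: int) -> str:
        """Execute, escalate, or reject an AI-proposed action."""
        if action not in self.guardrail.allowed_actions:
            decision = "rejected"
        elif (action in self.guardrail.requires_approval
              or magnitude > self.guardrail.max_scale_step):
            decision = "escalated"  # needs human approval
        else:
            decision = "executed"   # inside the guardrails
        # Accountability: what was automated, how big, when, and the outcome.
        self.audit_log.append({
            "action": action, "magnitude": magnitude,
            "decision": decision,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return decision

ops = Operator(Guardrail(
    allowed_actions={"scale_out", "restart_pod", "rotate_credentials"},
    max_scale_step=5,
    requires_approval={"rotate_credentials"},
))
print(ops.handle("scale_out", 3))           # executed
print(ops.handle("scale_out", 50))          # escalated
print(ops.handle("rotate_credentials", 1))  # escalated
print(ops.handle("delete_volume", 1))       # rejected
```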

Stage three is agentic operations, where more complex systems handle multi-step tasks across environments. This is the phase where governance must be mature, because the risk is not simply “wrong output”, it is unintended consequences across interconnected systems.

For the UAE and Saudi Arabia, readiness often depends on whether organizations have already done the foundations: standardized platforms, consistent policy enforcement, clean identity and access controls, and modern lifecycle management. If the foundation is fragmented, AI simply accelerates fragmentation.

This is why we are seeing strong interest in approaches that support governance by design, including the ability to run AI solutions in more controlled environments. In many regulated sectors, that includes air-gapped or restricted environments, where organizations want to adopt AI while keeping strict control of data movement and operational boundaries.

My view is that enterprises can absolutely trust AI in operations, but only when they treat trust as an engineering outcome: transparent systems, auditable controls, clear guardrails, and the ability to override. Governance is not a blocker. Governance is what makes adoption sustainable.

By 2030, will enterprises still control their infrastructure choices, or will hyperscalers and AI vendors effectively decide that for them?

Enterprises will control their choices if they design for control now. If they do not, the market will make the decision for them.

By 2030, the default buying motion will push organizations toward managed services, vertically integrated AI stacks, and increasingly opinionated platforms. That can deliver speed, but it can also compress choice, especially if your applications, data pipelines, security controls and operational tooling are tightly coupled to one vendor.

So the question is really about architecture and leverage. Enterprises that prioritize portability, standardization and open platforms will keep leverage. They can choose the right environment for each workload, based on performance, compliance, cost, and risk. Enterprises that ignore portability will find that “choice” exists on paper, but not in practice.

This is where digital sovereignty is often misunderstood. Sovereignty does not mean rejecting global technology. It means retaining the ability to make deliberate decisions about where workloads run and who controls the critical layers. Many leaders now talk about “glocal” strategies: using global innovation while maintaining local control and compliance where it matters.

At SUSE, our positioning has been consistent: open source supports sovereignty because it promotes transparency, portability and freedom from lock-in. That is not a slogan, it is a practical roadmap for keeping infrastructure choices in the hands of enterprises, not vendors.

If you had to offer one piece of advice to CIOs and policymakers in the UAE and Saudi Arabia navigating rapid digital transformation, what would it be?

My one piece of advice is this: treat sovereignty as an enabler of innovation, not a constraint, and build it into your operating model early.

For CIOs, that means starting with a clear map of your critical workloads and dependencies. Decide what must remain under national control, what can run on hyperscalers, what needs sovereign cloud options, and what requires special governance. Then standardize your foundations so you can enforce policy consistently. When sovereignty is engineered into the platform layer, transformation becomes faster, because you are not negotiating compliance from scratch every time you modernize an application.
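That mapping exercise can be made concrete as an ordered, auditable rule set: classify each workload by its attributes and derive a hosting tier. A minimal sketch, with hypothetical attribute names and tiers chosen only for illustration:

```python
def placement(workload: dict) -> str:
    """Classify a workload into a hosting tier from its attributes.

    Rules (illustrative, ordered from strictest to most permissive):
      1. Regulated and air-gap required -> on-prem / restricted environment
      2. Data must stay in-country      -> sovereign cloud
      3. Everything else                -> hyperscaler
    """
    if workload.get("regulated") and workload.get("air_gapped"):
        return "on-prem"
    if workload.get("data_residency") == "in-country":
        return "sovereign-cloud"
    return "hyperscaler"

# A tiny inventory to run the rules over.
inventory = [
    {"name": "core-banking", "regulated": True, "air_gapped": True},
    {"name": "citizen-portal", "data_residency": "in-country"},
    {"name": "marketing-site"},
]
for w in inventory:
    print(w["name"], "->", placement(w))
```

Because the rules are explicit code rather than tribal knowledge, the same classification can be re-run and audited whenever a workload's attributes or the regulatory requirements change.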

For policymakers, it means continuing to create frameworks that encourage both innovation and trust. The UAE has taken a pragmatic approach in showing that openness and sovereignty do not have to conflict. When the policy environment supports clear requirements and predictable compliance expectations, enterprises can innovate with confidence.

And for both, there is a shared point: invest in skills and ecosystem capability. Sovereign outcomes are not delivered by policy alone, they are delivered by people, platforms, and partnerships. When you develop local talent, strengthen the partner ecosystem, and support enterprise-grade open source, you build resilience and long-term autonomy without slowing innovation.


Scaling Practical AI for Retail Growth in the GCC


Exclusive interview with Mark Turner, President EMEA, Rezolve Ai

What made Shoptalk Luxe Abu Dhabi a priority platform for Rezolve Ai this year?

For Rezolve Ai, Shoptalk Luxe Abu Dhabi brings together the right audience at the right moment. Luxury retailers in the region are no longer exploring ideas, they are making decisions and investing. It is a practical forum to exchange views with brands that are actively shaping their customer engagement and commerce strategies, and to have grounded conversations about what is working in real retail environments. Abu Dhabi also reflects how influential the region has become in global luxury thinking.

How is AI changing the way luxury retailers think about customer engagement today?

Luxury retailers are becoming far more intentional about how and when they engage customers. AI is helping them move away from broad personalization toward more contextual, timely interactions that respect the brand experience. The focus is on supporting customers at key moments, whether online or in store, and ensuring engagement feels consistent and considered rather than automated or intrusive.

What distinguishes meaningful AI adoption in retail from short-term experimentation?

Retailers that see lasting value from AI are those that embed it into day-to-day operations rather than treating it as a standalone initiative. Meaningful adoption is driven by clear commercial goals, fast implementation, and solutions that work within existing systems and teams. Short-term experimentation tends to stall when it lacks ownership, scale, or a clear link to performance outcomes.

Why is the Middle East, and the UAE in particular, becoming increasingly important for luxury retail innovation?

The Middle East, and the UAE in particular, has created an environment where luxury retail innovation can move quickly. Consumers are digitally confident, infrastructure is strong, and there is a clear push at a national level to adopt advanced technologies. This combination allows retailers to implement and test new models at scale, which is why the region is increasingly influencing global luxury strategies.

Looking ahead, where do you see AI delivering the most value for luxury brands over the next few years?

The greatest value will come from AI that directly supports growth while reinforcing operational discipline. For luxury brands, that means more relevant engagement that improves conversion and loyalty, alongside better forecasting and inventory decisions that protect margins. The priority will be practical use of AI that enhances the customer experience without compromising brand integrity.


Sennheiser: Beyond Hardware, Toward Seamless Integration


Exclusive Interview with Fadi Costantine, Sales Manager – Business Communication, Middle East at Sennheiser


Sennheiser has leveraged its role in shaping professional audio to build strong hybrid communication products for use across business and education environments. We caught up with Fadi Costantine, Sales Manager – Business Communication, Middle East at Sennheiser, to discuss the brand’s presence at the show, its integrated product ecosystem, and the growing importance of software-driven audio solutions.

What are your most innovative products currently serving the business and education sectors?

Sennheiser operates across several business units, with Business Communication being one of our most important. This unit is entirely dedicated to the installation market, where many of our most dynamic and innovative solutions are positioned.

Professional audio is at the core of Sennheiser’s brand identity. Through our ownership of renowned brands such as Neumann and Merging Technologies, we have established ourselves as a global leader in audio communications. We leverage this expertise to develop advanced meeting and conferencing solutions that enhance business performance.

Crucially, our products are not designed to operate in isolation. They are engineered to work together as a unified ecosystem, enabling seamless communication across devices and platforms. This ecosystem approach allows system integrators and end users to design complete, end-to-end audio solutions tailored to a wide range of applications and project requirements.

Which industry verticals are currently driving demand for these solutions in the region?

While we are active across multiple verticals in the region, we have a clear strategic commitment to deliver innovative, scalable, and future-ready audio solutions tailored specifically for the needs of higher education and the modern corporate environment.

In corporate environments, our microphone solutions are widely deployed in meeting rooms to support modern collaboration and video conferencing scenarios. In the education sector, our technologies are extensively used in lecture halls and hybrid learning environments, including classrooms and auditoriums designed to accommodate both in-person and remote participants.

A strong example is our ceiling microphone solutions. These are frequently used not only in traditional meeting rooms but also in lecture halls for audio capture, video conferencing, and recording. They are also ideal for voice-lift applications, enabling students to hear the lecturer clearly without the need for wearable microphones. This creates a more natural, seamless teaching experience while minimizing complexity for the user.


Software and integration are critical in these environments. How does Sennheiser support this alongside its hardware solutions?

Workflow optimization has always been central to our product strategy and will remain a key focus going forward.

At ISE 2026, Sennheiser will officially launch DeviceHub, a secure, cloud-based platform designed for IT and AV managers as well as system integrators, introducing a new era in AV management. DeviceHub centralizes device visibility and remote management, streamlining workflows across enterprise, education, and corporate settings.

DeviceHub provides real-time insights, simplified setup, and unified control, supporting organizations in creating better spaces for communication, learning, and teamwork. Following a successful private beta, ISE marks the transition to public availability. Visitors can explore DeviceHub’s capabilities and speak directly with product experts about how it can transform their AV and IT operations.


Copyright © 2023 | The Integrator