Tech Features

Cybersecurity Investment Market: Here’s to the Resilient Ones

By Anastasia Komissarova – Deputy CEO at Group-IB

I guess we all agree that AI is the hottest segment today – 90% of all discussions in VC and tech are about AI, mega-rounds and extreme multiples. Three to four years ago, the very same discussions were about cybersecurity.

Cyber was, and still is, one of the best-funded tech segments – in 2021 alone, cybersecurity companies raised an unprecedented $21.8 bn (versus just $8.7 bn in 2023). 2021-2022 were the days of crazy valuations and 10x+ EV/Revenue multiples. Basically, it didn’t even matter whether you were profitable or even planning to be profitable – if you were good at marketing, had a nice pitch and could sell fear well, the money was yours.

It all changed in 2023. The rising cost of capital, global instability and new technological shifts cooled investors’ appetite for risk, raising quite existential questions for cybersecurity players.

At the same time, cyber threats continue to grow worldwide. In its latest Hi-Tech Crime Trends 2023/2024 report, Group-IB disclosed 74% year-on-year growth in data leaks, a 70% increase in zero-day exploits offered for sale, and a 30% drop in the average price of corporate access, among other findings.

However, fear is not selling that well anymore – we are starting to see a certain fatigue on the customer side with the constant growth of cybersecurity expenses. New siloed solutions arise every day and CISOs receive hundreds of pitches per month. But is just another EDR a game-changer for the customer? Or are businesses more interested in a holistic proposal covering most of the key attack vectors?

If the latter is true (which it is), it can only mean one thing – limited growth opportunities for mono-product vendors. Limited revenue growth means a faster-rising cash burn rate. Since most such companies got used to readily available external financing, which is now gone, we should be prepared for a new wave of M&A in cybersecurity, or for some companies going out of business.

It’s also true that some five years ago, founders of cybersecurity shops facing doubts about their next-round valuation could turn to another goldmine for tech companies – the initial public offering (IPO). In the past, if you were growing fast and generating some $50 mn in annual recurring revenue (ARR), you were a great IPO target. Most cybersecurity companies that became publicly traded unicorns did so with ARR of around $100 mn. Today that is no longer an option either. With current 4-5x EV/Revenue multiples, you need a solid ARR of at least $250 mn to pull off a moderately successful IPO.
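To make that arithmetic explicit, here is a minimal illustrative sketch in Python. The multiples and ARR figures are the ones quoted above; the round $1 bn mark is simply shorthand for unicorn-level scale, not a formal IPO threshold.

```python
# Illustrative only: enterprise value (EV) implied by ARR and an EV/Revenue multiple.
def implied_ev(arr_mn: float, multiple: float) -> float:
    """Implied enterprise value in $ mn, given ARR in $ mn and an EV/Revenue multiple."""
    return arr_mn * multiple

# 2021-style market: 10x+ multiples meant ~$100 mn ARR already implied ~$1 bn EV.
print(implied_ev(100, 10))   # 1000.0  -> ~$1 bn

# Today's market: at 4-5x, roughly $250 mn ARR is needed to reach comparable territory.
print(implied_ev(250, 4))    # 1000.0  -> ~$1 bn
print(implied_ev(250, 5))    # 1250.0  -> ~$1.25 bn
```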

So, let’s recap:

  • Growing fatigue among executives in the B2B segment limits growth opportunities for niche cybersecurity players.
  • Private capital has become less available, and investors now look not only at how cool the tech is but at how sustainable the business model is.
  • Public markets’ requirements are toughening: only companies that reach significant revenue levels can be viewed as attractive IPO targets.

Some might feel that such shifts limit innovative potential and set higher barriers to entering the market. But I feel quite positive, as it means the game is becoming fairer and more mature. It’s no longer just about those who burn cash on marketing and customer acquisition or spend more time in the Valley, but about those who know how to invest smartly and do more with less. I believe everyone will win from such a shift in investment perspective:

  • Businesses will get better solutions, as vendors will focus more on product quality than on marketing.
  • Start-ups will learn better financial discipline and healthier growth strategies.
  • Scale-ups will focus on building billion-dollar-revenue companies rather than billion-dollar valuations.
  • Investors who choose to back cybersecurity shops matching the new criteria, even now, will generate higher returns as their funds are used more efficiently.

Cybersecurity has already become an integral part of every company’s strategy, and it will retain that place for years to come. The social significance of cybersecurity issues is also growing: from loss of privacy and advanced disinformation to abuse of artificial intelligence. This will keep raising the level of responsibility expected of cybersecurity companies. Tighter financial requirements and more mature spending decisions from customers and investors will lead to more sustainable growth, higher resilience of cybersecurity companies and thus a better chance of minimizing cyber threats in the future.


Cover Story

AI Moves from Experiment to Essential in UAE’s Advertising Landscape

By Srijith KN, Senior Editor, Integrator
From content creation to media buying, artificial intelligence is quietly reshaping how campaigns are built, delivered, and optimised across the GCC.

In the UAE and across the GCC, artificial intelligence has moved well beyond the stage of experimentation. What was once a buzzword discussed in boardrooms is now deeply embedded in the day-to-day execution of advertising. Brands are no longer testing AI—they are relying on it to run campaigns, generate content, and make increasingly precise decisions about audience targeting and timing.

On the creative front, the shift is particularly visible. AI-powered tools are now capable of producing ad copy, visuals, and even short-form video content at a pace that would have been unthinkable just a few years ago. For marketers operating in a market like the UAE, where campaigns often need to speak to audiences in both English and Arabic while also resonating across a diverse mix of nationalities, this level of speed and adaptability is more than a convenience. It is becoming a necessity.

Behind the scenes, machine learning has also transformed how media buying is approached. Traditional methods that relied heavily on instinct or retrospective performance reports are steadily being replaced by systems that analyse audience behaviour in real time. These platforms continuously optimise campaign performance, adjusting budgets and placements based on how users interact with content.
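As a purely illustrative sketch of the kind of feedback loop such systems run, the snippet below reallocates a fixed budget across placements in proportion to observed engagement. The placement names, figures and proportional rule are assumptions for illustration, not any specific platform’s algorithm.

```python
# Illustrative sketch: shift budget toward placements showing stronger engagement.
# Placement names, engagement numbers and the proportional rule are made-up examples.
def reallocate_budget(total_budget: float, engagement: dict[str, float]) -> dict[str, float]:
    """Split total_budget across placements in proportion to recent engagement signals."""
    total = sum(engagement.values())
    return {placement: total_budget * value / total for placement, value in engagement.items()}

# Example: last hour's click-through counts drive the next hour's budget split.
print(reallocate_budget(10_000, {"social": 420.0, "search": 310.0, "display": 70.0}))
```

In practice, real buying platforms layer pacing rules, bid caps and audience constraints on top of a loop like this, but the core idea is the same: spend follows measured behaviour rather than fixed upfront allocations.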

In the UAE’s PR ecosystem, brands are already leveraging platforms such as Meltwater, Brandwatch, and Sprout Social to better understand media performance, audience sentiment, and the broader buying landscape.

A practical example of this shift can be seen in platforms like Skyscanner, where advertising systems respond dynamically to user intent. Instead of targeting broad demographic groups, campaigns are triggered by actual search behaviour and travel patterns, allowing for more relevant and timely engagement.

AI is also influencing emerging advertising formats. Digital billboards, for instance, are becoming more responsive, using live data inputs to tailor content based on factors such as time of day, location, and audience movement. Similarly, augmented reality experiences are beginning to incorporate behavioural insights, offering more contextual and interactive brand engagements.

Looking ahead, the trajectory appears clear. Advertising is moving towards deeper automation, more intelligent recommendations, and tighter integration between creative tools and analytics platforms. The industry is shifting from a model centred on broadcasting messages to one that focuses on responding to audiences in real time, with context and precision.

In this evolving landscape, AI is no longer just an enabler; it is becoming the foundation on which modern advertising is built.

Tech Features

Can Middle East Banks Reclaim Their Digital Leadership in the Age of AI?

By Fernando Castanheira, Chief Technology Officer at Riverbed Technology

Banks have long been the GCC’s digital pioneers. In the UAE, Saudi Arabia and Qatar, financial institutions were among the first to embrace mobile banking apps, roll out contactless payments at scale and introduce AI-powered chatbots to handle customer queries in Arabic and English. More often than not, banks set the pace and other sectors followed.

Given this decades-long precedent, you would expect the same pattern to be playing out with artificial intelligence. After all, AI is already embedded in the daily lives of Gulf consumers. Ride-hailing, e-commerce, government, and a plethora of other services across the region have increasingly integrated AI into their systems to personalise experiences and streamline transactions.

And yet, when we look inside banks themselves, the story is more complicated. According to the latest Riverbed Global Survey, only 40% of organizations in the financial sector consider themselves ready to operationalize AI. Just 12% of AI initiatives are fully deployed enterprise-wide, while 62% remain stuck in pilot or development phases. In a sector known for digital ambition, there is a striking gap between intent and execution.

Stuck in Pilot Purgatory

In most industries, pilots fail because the idea simply does not resonate. Testing reveals a weak product-market fit, limited customer appetite, or unclear commercial value.

That is not what we are seeing in banking AI. Regional banks have successfully piloted AI models that detect fraud in real-time, reduce false positives in anti-money laundering checks, predict liquidity requirements, and power conversational assistants capable of resolving complex service requests. Relationship managers have used AI tools to surface next-best-product recommendations based on behavioral data. And operations teams have leveraged machine learning to optimize payment routing and reduce processing delays.

In controlled environments, these pilots often deliver impressive results. And yet, few ever make it past this stage. The initiative remains confined to a sandbox. Expansion is delayed. Integration becomes “phase two.” Eventually, attention shifts to the next promising experiment. So, if the feature works and the value is clear, what is holding banks back?

AI that Fails to Scale

In my experience working with CIOs across the region, two obstacles repeatedly stand in the way of AI moving from proof of concept to production. The first is operational complexity. Most financial institutions operate in highly fragmented environments. Core banking platforms run alongside decades-old legacy systems, with critical workloads split across on-premise data centers, private clouds, and multiple public cloud providers. Third-party fintech integrations add further layers of interdependency.

Deploying AI into this landscape is not as simple as plugging in a model. AI workloads are data-hungry and latency-sensitive. They require reliable pipelines, consistent telemetry, and predictable performance across every layer of the stack. In a hybrid, multi-cloud architecture, even minor configuration mismatches can trigger cascading issues.

The second obstacle is limited visibility. Without a unified view of applications, infrastructure, networks, and user experience, AI-driven services can behave unpredictably. A model may be performing perfectly, but a network bottleneck slows response times. An upstream data source may degrade in quality, subtly skewing outputs, and an infrastructure change in one environment may impact inference speeds elsewhere.

When visibility is fragmented, issues take longer to diagnose and resolve, and mean time to resolution (MTTR) increases. Operational risk rises, particularly when customer-facing or revenue-critical services are affected. In a heavily regulated market such as the UAE or Saudi Arabia, that risk has compliance implications as well as reputational ones.

Left unaddressed, this kind of friction-laden live digital environment leaves very little room for innovation. AI cannot become the transformational force many claim it to be if it is constantly constrained by hidden friction.

Conquering Complexity

Moving AI smoothly from pilot to production requires banks to create as frictionless an operating environment as possible. One of the most effective starting points is unified observability. By consolidating telemetry from applications, infrastructure, networks and end-user devices into a single, real-time view, banks can eliminate blind spots, and decision-makers can gain clarity over performance, dependencies and risk across the entire digital estate.

With this foundation in place, AIOps capabilities can correlate signals, reduce alert noise and automate root cause analysis. Instead of firefighting incidents after customers notice them, IT teams can proactively identify performance degradation and resolve issues before they impact revenue or service continuity.

Standardising on frameworks such as OpenTelemetry can further simplify instrumentation across heterogeneous environments, ensuring consistent data collection and analysis. At the same time, investing in data quality, governance and compliance processes ensures that AI models are trained and operated within regulatory boundaries.
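As a minimal sketch of what standardising on OpenTelemetry can look like in practice, the Python snippet below instruments a hypothetical fraud-scoring call with a named span and shared service metadata so telemetry from different environments can be correlated. The service name, span name and attributes are illustrative assumptions, not taken from any specific bank or vendor product.

```python
# Minimal OpenTelemetry sketch: consistent tracing for a hypothetical AI service.
# Service name, span name and attributes are illustrative assumptions.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Tag every span with the same resource attributes so telemetry from on-prem,
# private cloud and public cloud deployments can be correlated in one view.
resource = Resource.create({"service.name": "fraud-scoring", "deployment.environment": "pilot"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))  # swap for an OTLP exporter in production
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def score_transaction(transaction_id: str) -> float:
    # Wrap the model call in a span so latency and errors surface in the unified view.
    with tracer.start_as_current_span("score_transaction") as span:
        span.set_attribute("transaction.id", transaction_id)
        risk = 0.12  # placeholder for the actual model inference
        span.set_attribute("risk.score", risk)
        return risk

if __name__ == "__main__":
    score_transaction("txn-0001")
```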

In practical terms, this means rethinking infrastructure as an enabler of AI rather than an afterthought. It may involve accelerating data movement between environments, modernising integration layers or rationalising overlapping monitoring tools. The goal is not perfection, but coherence: a shared, real-time understanding of how systems behave and how AI performs under real-world conditions.

From Optimism to Optimisation

The debate about whether AI belongs in banking is effectively over. Across the Middle East, regulators are publishing AI guidelines, governments are investing heavily in digital transformation, and consumers increasingly expect intelligent, seamless services.

Institutions that continue to treat AI as a series of isolated pilots risk remaining in perpetual experimentation. However, those who address operational complexity head-on will move beyond optimism to optimisation.

Tech Features

Addressing Structural Gaps in Enterprise Backup Strategies

By Owais Mohammed, Regional Lead & Sales Director, WD – Middle East, Africa, Turkey & Indian Subcontinent

Today, organizations across the UAE are reassessing how they back up and recover data in increasingly complex environments. They are managing data across cloud platforms, on-premises infrastructure, edge deployments and, increasingly, AI-driven workloads. As these environments scale, data moves across systems and is reused for analytics, compliance, and performance optimisation. This increases the complexity of backup and retention requirements. When strategies do not keep pace, gaps become visible.

Where backup strategies are falling short

A common challenge is the misalignment between backup design and actual workload distribution. Many backup strategies are built around primary systems, but enterprise data now lives across multiple environments with different access patterns and retention requirements. This creates inconsistencies in backup coverage across cloud services, endpoints, and shared infrastructure.

A common misconception is that platform-level redundancy is sufficient. Cloud and application platforms are designed to provide availability, but they do not replace independent backup layers. When data is modified, deleted, or encrypted within the same environment, recovery depends on whether a separate, unaffected copy exists.

Coverage inconsistencies also become more visible as organizations scale. Backup policies often prioritise transactional systems. Logs, archived records, development environments, and datasets used for analytics or AI workflows may be retained without structured protection. These datasets can become critical during investigations, audits, or system updates.

Recovery planning is where many strategies can break down. Backup processes may be in place, but recovery requirements are not always well defined. This includes defining dependencies, sequencing recovery, and aligning recovery times with business needs.

Why data resilience is now an infrastructure requirement

Enterprise data is now used across a wider range of functions. In analytics and AI-driven environments, data is revisited over time rather than stored and left unused. Historical datasets are essential to maintain performance and consistency. This means reliable backup and access are no longer a secondary consideration, but core infrastructure needs.

Compliance expectations are also evolving. Organizations increasingly need to retain records, demonstrate traceability, and provide access to data in a verifiable format. Backup and retention policies must align with recovery capabilities.

Building a more resilient data strategy

Addressing these gaps requires a structured approach to data resilience.

Infrastructure choices affect how backup strategies can be implemented. These decisions increasingly factor in not only performance and scalability, but also long-term cost efficiency as data environments expand. Many organizations are adopting hybrid models that combine cloud platforms with localized storage systems. This allows different workloads to be supported based on their access patterns and recovery requirements. In scenarios where consistent performance and recovery predictability are required, localized storage can provide additional control.

As environments grow, automation is important in maintaining consistency. Policy-driven automation helps ensure that backup processes are applied consistently, while monitoring tools provide visibility into system performance and potential gaps.

Recovery planning needs to be integrated into these processes. Clear recovery objectives and regular testing are essential for effective backup strategies.

Data prioritization also plays a role in managing scale. Not all data requires the same level of backup. Identifying critical datasets allows organizations to allocate resources effectively.

Managing cost as data volumes scale

Cost considerations play a central role as data volumes scale. In large environments, power consumption, cooling requirements, and infrastructure footprint all contribute to total cost of ownership (TCO).

This is where tiered storage architecture becomes critical. High-performance storage is essential for active workloads such as analytics and real-time processing, while high-capacity, cost-efficient storage supports large datasets, backups, and long-term retention. This helps manage growth and scaling efficiently.
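As a minimal, purely illustrative sketch of how such tiering decisions might be encoded, the snippet below assigns datasets to storage tiers based on access frequency and how quickly they must be restorable. The thresholds, dataset names and tier labels are assumptions for illustration, not vendor recommendations.

```python
# Illustrative sketch: map datasets to storage/backup tiers from access pattern
# and recovery needs. Thresholds and tier names are assumptions, not guidance.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    accesses_per_day: int      # how often the data is read
    recovery_time_hours: int   # how quickly it must be restorable

def assign_tier(ds: Dataset) -> str:
    if ds.accesses_per_day > 100 or ds.recovery_time_hours <= 4:
        return "high-performance"   # active workloads, fast recovery
    if ds.accesses_per_day > 1:
        return "standard"           # warm data, routine backups
    return "high-capacity-archive"  # long-term retention on cost-efficient storage

datasets = [
    Dataset("payments-transactions", accesses_per_day=5000, recovery_time_hours=1),
    Dataset("analytics-training-set", accesses_per_day=20, recovery_time_hours=24),
    Dataset("archived-audit-logs", accesses_per_day=0, recovery_time_hours=72),
]

for ds in datasets:
    print(f"{ds.name}: {assign_tier(ds)}")
```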

Treating all data the same is no longer practical. Infrastructure decisions need to reflect how data is used, how often it is accessed, and how quickly it needs to be recovered.

Backup strategies must align closely with infrastructure design. Data resilience now means ensuring data is accessible and recoverable across systems.

In data-intensive environments, the ability to recover and reuse data is directly tied to operational continuity, system performance, and the ability to scale infrastructure effectively.
