
Tech News

Proofpoint’s 2024 Voice of the CISO Report: More Than Two-Thirds of CISOs in the UAE Feel Prepared for Targeted Cyber Attacks


89% of UAE CISOs are turning to AI-powered technology to protect against human error and block advanced human-centric cyber threats

Proofpoint, a leading cybersecurity and compliance company, today released its annual Voice of the CISO report, which explores the key challenges, expectations, and priorities of chief information security officers (CISOs) worldwide.

The 2024 report draws attention to a notable trend: while fears of cyber attacks remain high, CISOs in the UAE demonstrate increasing confidence in their ability to defend against these threats, reflecting a significant shift in the cybersecurity landscape. Over two-thirds (70%) of surveyed CISOs in the UAE feel at risk of a material cyber attack over the next 12 months, compared to 75% the year before and 44% in 2022. CISOs today clearly remain on high alert, but confidence among them is growing: just 34% feel unprepared to cope with a targeted cyber attack, down markedly from 57% last year and 47% in 2022.

Human error continues to be perceived as the Achilles’ heel of cybersecurity, with more than three-quarters (76%) of CISOs in the UAE identifying it as the most significant vulnerability. In a year of growing insider threats and people-driven data loss, more CISOs in the UAE than ever (83%) see human risk, in particular negligent employees, as a key cybersecurity concern over the next two years. However, there’s growing optimism in the role of AI-powered solutions to mitigate human-centric risks, reflecting a strategic pivot towards technology-driven defenses.

The 2024 Voice of the CISO report examines global third-party survey responses from 1,600 CISOs at organizations of 1,000 employees or more across different industries. During Q1 2024, 100 CISOs were interviewed in each of 16 markets: the U.S., Canada, the UK, France, Germany, Italy, Spain, Sweden, the Netherlands, UAE, KSA, Australia, Japan, Singapore, South Korea, and Brazil.

The report offers a vital perspective on the state of cybersecurity from those at the forefront of protecting people and defending data. It also stresses the importance of maintaining robust cybersecurity measures in the face of economic pressures and the critical role of human factors in organizational cyber readiness. The survey additionally measures changes in alignment between security leaders and their boards of directors, exploring how this relationship impacts security priorities.

“As we navigate through the complexities of today’s cyber threat environment, it’s encouraging to see CISOs in the UAE gaining confidence in their strategies and tools,” commented Emile Abou Saleh, Senior Regional Director, Middle East, Turkey, and Africa at Proofpoint. “However, the ongoing challenges of employee turnover, pressure on resources, and the need for continuous board engagement remind us that vigilance and adaptation are key to our collective cyber resilience.”

Key findings from Proofpoint’s 2024 Voice of the CISO report for the UAE include:

  • Human error still tops cyber vulnerability threats but CISOs in the UAE turn to AI solutions to help. This year, we are seeing an uptick in the number of CISOs in the UAE who view human error as their organization’s biggest cyber vulnerability—76% in this year’s survey vs. 59% in 2023. However, 87% of CISOs believe that employees understand their role in protecting the organization. This confidence is higher than in previous years—56% in 2023 and 51% in 2022. This may be attributed to the 89% of UAE CISOs surveyed looking to deploy AI-powered capabilities to help protect against human error and advanced human-centered cyber threats.
  • CISOs in the UAE continue to fear cyber-attacks but fewer feel unprepared, showing growing confidence in their security measures. In 2024, 70% of CISOs surveyed in the UAE feel at risk of experiencing a material cyber-attack in the next 12 months, compared to 75% in 2023 and 44% in 2022. However, just 34% feel their organization is unprepared to cope with a targeted cyber-attack, compared to 57% in 2023 and 47% in 2022.
  • Generative AI tops CISOs’ security concerns in the UAE. In 2024, 49% of CISOs surveyed in the UAE believe that generative AI poses a security risk to their organization. The top four systems CISOs view as introducing risk to their organizations are: Microsoft 365 (50%), perimeter network devices (45%), Slack/Teams/Zoom/other collaboration tools (43%), and ChatGPT/other genAI (40%).
  • Employee turnover is still a concern, yet CISOs in the UAE trust their defenses. In 2024, 45% of security leaders reported having to deal with a material loss of sensitive data in the past 12 months, and of those, 64% agreed that employees leaving the organization contributed to the loss. Despite those losses, 83% of CISOs believe they have adequate controls to protect their data.
  • The majority of CISOs in the UAE have adopted DLP technology and invested more in security education. In 2024, 51% of CISOs surveyed in the UAE have data loss prevention (DLP) technology in place, compared to just 45% in 2023. More than half (55%) of CISOs surveyed invested in educating employees on data security best practices, up from 41% in 2023.
  • Cloud account compromise and ransomware top CISO concerns in the UAE. The biggest cybersecurity threats perceived by CISOs in 2024 are cloud account compromise (Microsoft 365 or G Suite or other) (44%), ransomware attacks (42%) and malware (42%). These top threats are different from last year in which CISOs perceived distributed email fraud, cloud account compromise (Microsoft 365, G Suite or other), malware and smishing/vishing as the biggest threats.
  • Steady stance on ransom payments with increased reliance on cyber insurance in the UAE. In 2024, 64% (59% in 2023) of CISOs in the UAE believe their organization would pay to restore systems and prevent data release if attacked by ransomware in the next 12 months. 76% of CISOs said they would rely on cyber insurance claims to recover potential losses incurred, compared to 56% in 2023.
  • The Board-CISO relationship has improved significantly in the UAE. In 2024, 80% of CISOs agree their board members see eye-to-eye with them on cybersecurity issues. This is a significant jump from 63% in 2023, and 47% in 2022.
  • Pressures on CISOs in the UAE are unrelenting. In 2024, 69% of CISOs in the UAE admitted to burnout compared to 59% last year, while 87% feel they face excessive expectations, up sharply from 59% last year and 38% in 2022. The sustainability of the ongoing expectations on CISOs continues to be tested—69% are concerned about personal liability (60% in 2023) and 74% (56% in 2023) would not join an organization that does not offer Directors & Officers (D&O) insurance coverage. In addition, 63% of CISOs agreed that the current economic downturn has hampered their ability to make business-critical investments, with 49% of them being asked to cut staff or delay backfills as well as reduce security budgets.

“While the cybersecurity landscape continues to evolve with increasing human-centric threats, the 2024 Voice of the CISO report highlights what appears to be a pivotal shift towards greater resilience, preparedness and confidence among global CISOs,” said Patrick Joyce, global resident CISO at Proofpoint. “This year’s findings underscore a collective move towards strategic defenses, including enhanced education, technological adoption, and an adaptive approach to emerging threats like generative AI.”


Trends in AI Compliance Influencing How GCC Companies Operate


Across the GCC, national growth strategies, including Saudi Arabia’s Vision 2030, the UAE’s National AI Strategy 2031, and Qatar’s national roadmap, place AI at the centre of economic diversification. McKinsey estimates AI adoption at roughly 84% across GCC organisations, with a potential $320 billion economic impact for the Middle East by 2030. As deployment accelerates, regulatory compliance is becoming a defining factor separating ambition from sustainable scale. Shaffra, an AI research and applications company building autonomous AI teams for enterprises and governments, sees six clear shifts reshaping how companies operate.

1. Regulation is accelerating adoption in high-stakes sectors

Government entities, financial services, telecom, aviation, and large semi-government organisations are moving fastest. These sectors operate at scale, face strict efficiency mandates, and function under constant regulatory oversight. Healthcare and energy are advancing more cautiously due to safety and data sensitivity. In many cases, the more regulated the industry, the faster AI deployment progresses. However, rapid scaling also exposes governance weaknesses, particularly where documentation, ownership, and oversight mechanisms are underdeveloped.

2. Compliance is a prerequisite for scale

Over the past year, 88% of Middle East CEOs have reported generative AI uptake. Today, organisations increasingly require audit trails, explainability, clear data lineage and residency controls, defined performance thresholds, and enforceable human oversight mechanisms. With one in four Middle East consumers citing privacy as a primary concern, compliance is no longer being treated as a post-deployment validation exercise; it is a structural requirement for scaling AI responsibly.

3. Sovereign AI and data residency are shaping architecture

AI governance in the GCC is being influenced less by standalone AI laws and more by data protection and cybersecurity frameworks. The UAE’s federal data protection law, Saudi Arabia’s PDPL under SDAIA, and Oman’s PDPL reinforce lawful processing and cross-border controls. In highly regulated sectors such as banking, healthcare, energy, and telecommunications, data residency and local control over models are strategic imperatives. Sovereign AI is evolving from a policy ambition into an operational requirement affecting infrastructure, vendor selection, and system design.

4. Human accountability is being reasserted

When organisations deploy AI without defining who owns the decision, when human escalation is required, and what the system is permitted or restricted from doing, they create either over-reliance or under-utilisation. Without clearly defined ownership and documented review controls, accountability weakens and regulatory exposure increases.

For instance, DIFC reinforces responsible AI use in personal data processing. High-impact decisions involving legal standing, fraud, employment, healthcare guidance, or public sector determinations that affect citizens should involve accountable human oversight, while AI handles speed, consistency, and the automation of repetitive tasks.

5. Governance maturity lags deployment activity

Many organisations are AI-active but still developing governance maturity. Common governance gaps are structural rather than technical. Multiple pilots often run in parallel, tool adoption is fragmented, and accountability is split across IT, legal, risk, and business functions. Growing enterprises often lack a central AI governance owner, a comprehensive use-case inventory, consistent vendor and model risk assessment, and formal escalation protocols. Policies may exist at the board level, yet they are not consistently embedded into day-to-day operations. Addressing this gap requires governance to be built into workflows from the outset.

6. Continuous auditing is a discipline

Studies indicate that a majority of ML models degrade over time, through model drift, hidden bias, or misuse vulnerabilities. Initial audits frequently reveal undocumented use cases, weak access segmentation, insufficient logging, and unclear review protocols. Effective governance requires compliance with international and local data residency rules, structured risk tiering, data lineage validation, access controls, bias testing, performance benchmarking, and defined incident response procedures. High-impact systems warrant quarterly reviews supported by continuous monitoring, while lower-risk applications still require periodic reassessment. Governance is increasingly measured through evidence rather than policy statements. Boards are asking for dashboards, logs, and audit artefacts — not policy PDFs.

Governance is increasingly treated as part of AI infrastructure. Compliance frameworks are evolving into operational architecture embedded within systems, workflows, and accountability models. The organisations that will lead in the GCC are those that design governance at the same time they design capability, ensuring AI scales with discipline rather than risk.


PNY Announces Strategic Partnership with F5 to Accelerate the Adoption of Secure, High-Performance Infrastructure in EMEA


PNY Technologies, a leading distributor of technology solutions and long-standing NVIDIA partner, today announced a partnership with F5, a global leader in delivering and securing applications.

This agreement aims to strengthen access for enterprises across the EMEA region to advanced solutions designed to optimise, secure, and accelerate applications and IT infrastructures.

As AI adoption continues to accelerate, performance, data flow management, and application security are becoming critical priorities. Through this partnership, the F5 Application Delivery and Security Platform (ADSP) will complement PNY’s AI Factory ecosystem by providing advanced capabilities for traffic management, application security, and performance optimisation across on-premises, cloud, and hybrid environments.

PNY will leverage its technical expertise, partner network, and logistics capabilities to facilitate the deployment of F5 ADSP solutions for enterprises, system integrators, and service providers throughout the region.

“Collaboration between PNY, a specialist distributor of NVIDIA AI Factory solutions across the EMEA region, and F5 represents a major step forward for AI-dedicated infrastructure,” said Laurent Chapoulaud, VP Marketing at PNY. “Together, we optimise GPU environments through accelerated data flows and enhanced application security. This synergy between infrastructure and intelligent traffic management enables the deployment of AI architectures that are high-performance, resilient, and scalable.”

“This partnership brings together complementary strengths that directly benefit our partners and customers,” said Nasser El Abdouli, Regional VP EMEA Channel Sales, F5. “PNY’s longstanding partnership with NVIDIA, combined with F5’s growing AI-focused application delivery and security offerings, allows us to help partners capably respond to the rapidly increasing demand for secure and scalable AI infrastructure across EMEA.”

Through this collaboration, PNY and F5 aim to support enterprises in their strategic initiatives related to hybrid multicloud, cybersecurity, and application performance optimisation, while simplifying access to next-generation technologies.


Middle East Conflict Driving a Surge in Scams, Deepfakes, and Government Impersonation


Cybercriminals don’t wait for the dust to settle. As conflict escalates across the Middle East, a parallel threat has emerged targeting ordinary people through their inboxes and social media feeds.

On 4 March, the UAE Ministry of Interior warned the public about fraudulent emails impersonating government emergency services, falsely claiming that residents must complete a mandatory registration form to receive state support or insurance coverage. The emails bore hallmarks of official government communications, making them convincingly deceptive. They are designed to exploit fear, urgency, and the instinct to comply with perceived authority. These messages are already circulating.

Alongside financial scams, verified fact-checkers have identified AI-generated and mislabelled footage circulating online as supposed evidence of attacks in the UAE. This includes video from Bahrain that was picked up by international media outlets and incorrectly broadcast as a Dubai drone strike. Fabricated videos of the Burj Khalifa collapsing, AI-generated missile strike imagery, and decade-old footage repackaged as current events have also circulated widely. In another example, a supposed “before and after” satellite image of Dubai showing smoke rising over the city was mislabelled — the image was actually from Sharjah, the neighbouring emirate. In many cases, the content spread faster than the corrections. Dubai Police have warned that sharing unverified information can carry criminal penalties under UAE law, including fines of no less than AED 200,000. Despite these warnings, the flow of misleading content has not slowed.

KnowBe4 warns that, based on patterns observed during previous conflicts and crises, including the war in Ukraine and the COVID-19 pandemic, the public should also expect charity and donation scams exploiting humanitarian concern, phishing emails disguised as embassy or government alerts, and deepfake imagery engineered to provoke fear or spread disinformation.

Dr. Martin Kraemer, CISO Advisor at KnowBe4 said, “Crises are the most reliable recruitment tool bad actors have. When people are frightened and searching for information, they are not necessarily looking for the truth. They are looking for confirmation of what they already fear. That is exactly what scammers and disinformation actors exploit. What we are seeing right now, fake government emergency emails, mislabelled footage, AI-generated imagery, is not random. It is targeted, and it is designed to exploit the gap between what people feel and what they know. The antidote is not panic. It is discipline: pause, question the source, and go directly to official channels before acting on anything. That’s precisely how governments and organizations are educating people to react in stressful situations.”

What the Public Can Do Right Now

KnowBe4 urges residents, travellers, and anyone following events in the region to apply the following principles:

  • Treat urgency as a warning sign. Any message that pressures you to act quickly (register now, donate immediately, confirm your details before midnight) is likely designed to stop you from thinking clearly.
  • Verify before you share. Before forwarding footage or information, check whether it has been verified by a reputable news outlet or official source. Reverse image searches take seconds and can prevent significant harm.
  • Go directly to official sources. If you receive communications claiming to be from a government ministry, embassy, or emergency service, navigate directly to their official website rather than clicking any link in the message.
  • Question what you see. AI-generated imagery has reached a level of quality where video alone is no longer reliable evidence. Look for verification from multiple credible sources before drawing conclusions.
  • Report suspicious communications. In the UAE, suspected scam emails or messages should be reported to the relevant authorities. Do not engage with the sender.

Copyright © 2023 | The Integrator