Tech Features
In the Crosshairs of APT Groups: A Feline Eight-Step Kill Chain
By Alexander Badaev, Information security threat researcher, Positive Technologies Expert Security Center and Yana Avezova, Senior Research Analyst, Positive Technologies
In cybersecurity, “vulnerability” typically evokes concern. One actively searches for it and patches it up to build robust defenses against potential attacks. Picture a carefully orchestrated robbery, where a group of skilled criminals thoroughly examines a building’s structure, spots vulnerabilities, and crafts a step-by-step plan to breach security and steal valuables. This analogy perfectly describes the modus operandi of cybercriminals, with the “kill chain” acting as their detailed blueprint.
In a recent study, analysts from Positive Technologies gathered information on 16 hacker groups attacking the Middle East, analyzing their techniques and tactics. It is worth noting that most of the threats in Middle Eastern countries come from groups believed to be linked to Iran—groups such as APT35/Charming Kitten or APT34/Helix Kitten. Let’s see how APT groups operate, how they initiate attacks, and how they develop them toward their intended targets.
Step 1: The Genesis of Intrusion (Attack preparation)

It all begins with meticulous planning and reconnaissance. APT groups leave no stone unturned in their quest for vulnerable targets. They compile lists of public systems with known vulnerabilities and gather employee information. For instance, groups like APT35 (aka Charming Kitten), known for mainly targeting Saudi Arabia and Israel, gather information about employees of target organizations, including mobile phone numbers, which they leverage for nefarious purposes such as sending malicious links disguised as legitimate messages. After reconnaissance, they prepare tools for attacks, such as registering fake domains and creating email or social media accounts for spear phishing. For example, APT35 registers accounts on LinkedIn and other social networks to contact victims, persuading them through messages and voice calls to open malicious links.
Step 2: Initial Access: Gaining a Foothold

Once armed with intelligence, cybercriminals proceed to gain initial access to their target’s network. Phishing campaigns, often masquerading as legitimate emails, serve as the primary means of infiltration. An example is the Desert Falcons group, observed spreading their malware through pornographic phishing. Notably, some groups go beyond traditional email phishing, utilizing social networks and messaging platforms to lure unsuspecting victims, as seen with APT35, Bahamut, Dark Caracal, and OilRig. Moreover, techniques like the watering hole method, where attackers compromise trusted websites frequented by their targets, further highlight the sophistication of these operations. Additionally, attackers exploit vulnerabilities in resources accessible on the internet to gain access to internal infrastructure. For example, APT35 and Moses Staff exploited ProxyShell vulnerabilities on Microsoft Exchange servers.
Step 3: Establishing Persistence: The Art of Concealment

Having breached the perimeter, APT groups strive to establish a foothold within the victim’s infrastructure, ensuring prolonged access and control. This involves deploying techniques such as task scheduling, as seen in the campaign against the UAE government by the OilRig group, which created a scheduled task triggering malicious software every five minutes. Additionally, many malicious actors set up malware autostart, like the Bahamut group creating LNK files in the startup folder or Dark Caracal’s Bandook trojan. Some APT groups, such as APT33, Mustang Panda, and Stealth Falcon, establish themselves in victim infrastructures by creating subscriptions to WMI events for event-triggered execution. Furthermore, attackers exploit vulnerabilities in server applications to install malicious components like web shells, which provide a backdoor for remote access and data exfiltration.
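The scheduled-task pattern described above lends itself to a simple defender-side heuristic. The following Python sketch uses hypothetical task data and an illustrative threshold, not any vendor's tooling, to flag tasks that repeat at suspiciously short intervals, like the five-minute OilRig trigger:

```python
from datetime import timedelta

def flag_suspicious_tasks(tasks, max_interval=timedelta(minutes=15)):
    """Return the names of tasks that repeat more often than a
    defender-chosen threshold. Each task is a dict with 'name' and
    'interval' (a timedelta). Very frequent triggers are a common
    persistence tell."""
    return [t["name"] for t in tasks if t["interval"] <= max_interval]

# Hypothetical sample data for illustration only.
tasks = [
    {"name": "WindowsUpdateCheck", "interval": timedelta(hours=24)},
    {"name": "SC_Scheduled_Task", "interval": timedelta(minutes=5)},
]
print(flag_suspicious_tasks(tasks))  # ['SC_Scheduled_Task']
```

Real detection would pull task definitions from the Windows Task Scheduler and correlate them with the binaries they launch; the point here is only that interval-based triage is cheap and effective.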
Step 4: Unraveling the Network: Internal Reconnaissance

After breaking in, APT groups don’t just sit there. They explore the system like a thief casing a house to find valuables and escape routes. This digital reconnaissance involves several steps. First, they perform an inventory check, identifying the computer’s operating system, installed programs, and updates, like figuring out a house’s security measures. For instance, APT35 might use a simple command to see if the computer is a powerful 64-bit system, capable of handling more complex tasks. Second, they map the network layout, akin to identifying valuable items and escape routes. APT groups might use basic tools like “ipconfig” and “arp” (like Mustang Panda) to see how devices are connected and communicate. They also search for user accounts and activity levels, understanding who lives in the house (figuratively) and their routines. Malicious tools, like the Caterpillar web shell used by Volatile Cedar, can list all usernames on the system. Examining running programs is another tactic, like checking for security guards. Built-in commands like “tasklist” (used by APT15 and OilRig) can reveal a list of programs currently running.
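As a rough illustration of this inventory step, a defender can enumerate the same basic host facts an intruder collects. A minimal sketch using only the Python standard library (the 64-bit check mirrors the APT35 behaviour mentioned above):

```python
import platform

def system_inventory():
    """Collect the basic host facts that both intruders and defenders
    enumerate: operating system, release, and CPU architecture."""
    machine = platform.machine()
    return {
        "os": platform.system(),
        "release": platform.release(),
        "machine": machine,
        "is_64bit": machine.endswith("64"),  # e.g. AMD64, x86_64, arm64
    }

print(system_inventory())
```

Knowing what this data looks like from the inside makes it easier to spot when built-in commands such as `ipconfig`, `arp`, or `tasklist` are being run in unusual combinations or by unexpected accounts.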
Finally, APT groups might deploy programs that hunt for secrets hidden within files and folders, like searching for hidden safes or documents. The MuddyWater group, for example, used malware that specifically checked for directories or files containing keywords related to antivirus software. By gathering this comprehensive intel, APT groups can craft targeted attacks, steal sensitive data like financial records or personal information, or exploit vulnerabilities in the system to cause even more damage.
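The keyword-scanning behaviour can be approximated in a few lines. This Python sketch uses an illustrative keyword list, not the actual strings from the MuddyWater malware:

```python
def find_keyword_paths(paths, keywords=("kaspersky", "avast", "defender")):
    """Flag file or directory paths whose names contain keywords of
    interest, mirroring the MuddyWater-style check for antivirus-related
    names. The keyword list here is hypothetical."""
    hits = []
    for path in paths:
        lower = path.lower()
        if any(keyword in lower for keyword in keywords):
            hits.append(path)
    return hits

sample = [
    r"C:\Program Files\Kaspersky Lab\avp.exe",
    r"C:\Users\alice\notes.txt",
    r"C:\ProgramData\Windows Defender\platform",
]
print(find_keyword_paths(sample))  # the two antivirus-related paths
```

In a real intrusion this scan would walk the whole filesystem; defenders can watch for exactly that pattern of broad, keyword-driven directory enumeration.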
Step 5: Harvesting Credentials: Unlocking the Vault

Access to privileged credentials is the holy grail for cyber attackers, granting them unrestricted access to critical systems and data. One common tactic is “credential dumping,” where tools like Mimikatz (used by APT15, APT33, and others) snatch passwords directly from a system’s memory, similar to stealing a key left under a doormat. Keyloggers, used by APT35 and Bahamut for example, act like hidden cameras, silently recording keystrokes to capture usernames and passwords as victims type them in.
These stolen credentials grant access to even more sensitive areas. APT groups also exploit weaknesses in how passwords are stored. For instance, some target the Windows Credential Manager (like stealing a notepad with written down passwords). Brute-force attacks, trying millions of combinations, can crack weak passwords. Even encrypted passwords can be vulnerable if attackers have specialized tools. By employing these tactics, APT groups bypass initial security and access sensitive information or critical systems.
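To see why weak passwords fall quickly, consider a toy brute-force example: it exhaustively guesses a short numeric PIN against its stolen hash. MD5 and the tiny search space are for illustration only; real cracking tools test billions of candidates per second on GPUs:

```python
import hashlib
from itertools import product

def brute_force_md5(target_hash, alphabet="0123456789", max_len=4):
    """Try every candidate string up to max_len characters and compare
    its MD5 digest with the stolen hash. Returns the recovered password
    or None."""
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            guess = "".join(combo)
            if hashlib.md5(guess.encode()).hexdigest() == target_hash:
                return guess
    return None

# A 4-digit "PIN" hashed the way a weak system might store it.
stolen = hashlib.md5(b"7351").hexdigest()
print(brute_force_md5(stolen))  # 7351
```

A ten-character random password over a full alphabet would make the same loop computationally hopeless, which is exactly why password length and salted, slow hashes matter.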
Step 6: Data Extraction: The Quest for Valuable Assets

Once inside, APT groups aren’t shy about snooping around. They leverage stolen credentials to capture screenshots, record audio and video (like hidden cameras and microphones), or directly steal sensitive files and databases. For instance, the Dark Caracal group employed Bandook malware, which can capture video from webcams and audio from microphones. This stolen data becomes their loot.
To ensure a smooth getaway, APT groups often employ encryption and archiving techniques. Imagine them hiding their stolen treasure chests—the Mustang Panda group, for example, encrypted files with RC4 and compressed them with password protection before shipping them out. This makes it difficult for defenders to identify suspicious activity amongst regular network traffic.
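RC4 itself is a short, textbook algorithm, which is partly why malware authors favour it for staging stolen data. A minimal Python implementation, shown only to illustrate the technique; RC4 is cryptographically broken and should never be used in real systems:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 (KSA + PRGA). The cipher is symmetric: the same
    call both encrypts and decrypts."""
    # Key-scheduling algorithm: permute the state array with the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation: XOR each byte with the keystream.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"secretkey", b"quarterly-financials.xlsx")
print(rc4(b"secretkey", ciphertext))  # round-trips to the plaintext
```

Because the output looks like random bytes, encrypted-and-archived exfiltration blends into ordinary traffic, which is why defenders focus on volume, destination, and timing anomalies instead of content.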
Step 7: Communication Channels: Establishing Control

APT groups rely on hidden communication channels with command-and-control (C2) servers to control infected machines and exfiltrate data. They employ various tactics to blend in with regular network traffic. This includes using common protocols (such as IRC, or DNS requests that carry data disguised as ordinary lookups) and encrypting communication for further stealth.
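DNS tunnelling, one of the disguises mentioned above, typically encodes stolen data into query labels. A simplified Python sketch of the encoding side; the domain is a placeholder for illustration, not a real indicator:

```python
import base64

def encode_dns_labels(payload: bytes, domain="example-c2.invalid", max_label=63):
    """Split data into base32 chunks that respect the DNS limit of
    63 bytes per label, yielding query names that can hide in normal
    DNS noise."""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [b32[i:i + max_label] for i in range(0, len(b32), max_label)]
    return [f"{label}.{domain}" for label in labels]

queries = encode_dns_labels(b"user=admin;pass=hunter2")
print(queries)
```

The defender-side signature is the mirror image: unusually long, high-entropy subdomains and abnormal query volumes to a single domain are classic tunnelling tells.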
However, some groups take it a step further. For instance, OilRig used compromised email servers to send control messages hidden within emails and then deleted them, making their C2 channel nearly invisible. These innovative techniques make it difficult for security measures to detect malicious activity, highlighting the importance of staying informed about evolving APT tactics.
Step 8: Covering Tracks: Erasing Digital Footprints

As the operation ends, APT groups meticulously cover their tracks to evade detection and prolong their presence in the compromised environment. Techniques like file obfuscation, masquerading, and indicator removal are employed to erase digital footprints and thwart forensic investigations. For example, the Bahamut group used icons mimicking Microsoft Office files to disguise malware, and the OilRig group used .doc file extensions to make malware appear as office documents. The Moses Staff group named their StrifeWater malware calc.exe to make it look like a legitimate calculator program.
To further bypass defenses, attackers often proxy the execution of malicious commands using files signed with trusted digital certificates. The APT35 group used the rundll32.exe file to execute the MiniDump function from the comsvcs.dll system library when dumping the LSASS process memory. Meanwhile, the Dark Caracal group employed a Microsoft Compiled HTML Help file to download and execute malicious files. Many APT groups also remove signs of their activity by clearing event logs and network connection histories, and changing timestamps. For instance, APT35 deleted mailbox export requests from compromised Microsoft Exchange servers. This meticulous cleaning makes it much more difficult for cybersecurity professionals to conduct post-incident investigations, as attackers often remove their arsenal of software from compromised devices after achieving their goals.
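Extension masquerading of the kind described above can often be caught by comparing a file's claimed extension with its real magic bytes. A simplified Python sketch with an illustrative, deliberately incomplete signature table:

```python
MAGIC = {
    b"MZ": "pe",      # Windows executable
    b"PK": "zip",     # ZIP container (docx/xlsx use this)
    b"%PDF": "pdf",   # PDF document
}

def detect_masquerade(filename: str, header: bytes) -> str:
    """Compare a file's extension with its magic bytes. A '.doc' name
    wrapped around a PE header is the masquerading trick described
    above."""
    for magic, kind in MAGIC.items():
        if header.startswith(magic):
            if kind == "pe" and not filename.lower().endswith((".exe", ".dll")):
                return f"MASQUERADE: {filename} is really a Windows executable"
            return f"ok: {filename} ({kind})"
    return f"unknown: {filename}"

print(detect_masquerade("invoice.doc", b"MZ\x90\x00"))  # flagged
```

Note that this heuristic would not catch the StrifeWater calc.exe case, where the extension is honest but the name is misleading; that requires comparing hashes or signatures against known-good system binaries.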
Conclusion: A Call to Vigilance
In a nutshell, the threat landscape in the Middle East is fraught with peril, as APT groups continue to refine their tactics and techniques to evade detection and wreak havoc on unsuspecting organizations. By understanding the anatomy of cyber intrusions and remaining vigilant against emerging threats, organizations can bolster their defenses and mitigate the risks posed by these sophisticated adversaries. Together, let us remain steadfast in our commitment to safeguarding the digital frontier against cyber threats.
Can Middle East Banks Reclaim Their Digital Leadership in the Age of AI?

Banks have long been the GCC’s digital pioneers. In the UAE, Saudi Arabia and Qatar, financial institutions were among the first to embrace mobile banking apps, roll out contactless payments at scale and introduce AI-powered chatbots to handle customer queries in Arabic and English. More often than not, banks set the pace and other sectors followed.
Given this decades-long precedent, you would expect the same pattern to be playing out with artificial intelligence. After all, AI is already embedded in the daily lives of Gulf consumers. Ride-hailing, e-commerce, government, and a plethora of other services across the region have increasingly integrated AI into their systems to personalise experiences and streamline transactions.
And yet, when we look inside banks themselves, the story is more complicated. According to the latest Riverbed Global Survey, only 40% of organizations in the financial sector consider themselves ready to operationalize AI. Just 12% of AI initiatives are fully deployed enterprise-wide, while 62% remain stuck in pilot or development phases. In a sector known for digital ambition, there is a striking gap between intent and execution.
Stuck in Pilot Purgatory
In most industries, pilots fail because the idea simply does not resonate. Testing reveals a weak product-market fit, limited customer appetite, or unclear commercial value.
That is not what we are seeing in banking AI. Regional banks have successfully piloted AI models that detect fraud in real-time, reduce false positives in anti-money laundering checks, predict liquidity requirements, and power conversational assistants capable of resolving complex service requests. Relationship managers have used AI tools to surface next-best-product recommendations based on behavioral data. And operations teams have leveraged machine learning to optimize payment routing and reduce processing delays.
In controlled environments, these pilots often deliver impressive results. And yet, few ever make it past this stage. The initiative remains confined to a sandbox. Expansion is delayed. Integration becomes “phase two.” Eventually, attention shifts to the next promising experiment. So, if the feature works and the value is clear, what is holding banks back?
AI that Fails to Scale
In my experience working with CIOs across the region, two obstacles repeatedly stand in the way of AI moving from proof of concept to production. The first is operational complexity. Most financial institutions operate in highly fragmented environments. Core banking platforms run alongside decades-old legacy systems, with critical workloads split across on-premise data centers, private clouds, and multiple public cloud providers. Third-party fintech integrations add further layers of interdependency.
Deploying AI into this landscape is not as simple as plugging in a model. AI workloads are data-hungry and latency-sensitive. They require reliable pipelines, consistent telemetry, and predictable performance across every layer of the stack. In a hybrid, multi-cloud architecture, even minor configuration mismatches can trigger cascading issues.
The second obstacle is limited visibility. Without a unified view of applications, infrastructure, networks, and user experience, AI-driven services can behave unpredictably. A model may be performing perfectly while a network bottleneck slows response times; an upstream data source may degrade in quality, subtly skewing outputs; and an infrastructure change in one environment may impact inference speeds elsewhere.
When visibility is fragmented, issues take longer to diagnose and resolve, and mean time to resolution (MTTR) increases. Operational risk rises, particularly when customer-facing or revenue-critical services are affected. In a heavily regulated market such as the UAE or Saudi Arabia, that risk has compliance implications as well as reputational ones.
Left unaddressed, this kind of live digital environment leaves very little room for innovation. AI cannot become the transformational force many claim it to be if it is constantly constrained by hidden friction.
Conquering Complexity
Moving AI smoothly from pilot to production requires banks to create as frictionless an operating environment as possible. One of the most effective starting points is unified observability. By consolidating telemetry from applications, infrastructure, networks and end-user devices into a single, real-time view, banks can eliminate blind spots, and decision-makers can gain clarity over performance, dependencies and risk across the entire digital estate.
With this foundation in place, AIOps capabilities can correlate signals, reduce alert noise and automate root cause analysis. Instead of firefighting incidents after customers notice them, IT teams can proactively identify performance degradation and resolve issues before they impact revenue or service continuity.
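At its core, the alert-noise reduction that AIOps tooling performs starts with fingerprint-based grouping. A toy Python sketch of the idea, using illustrative fields rather than any product's schema:

```python
from collections import defaultdict

def correlate_alerts(alerts):
    """Group raw alerts by a (service, symptom) fingerprint so that
    many duplicate pages collapse into one incident per root symptom.
    A minimal stand-in for what AIOps pipelines do at scale."""
    incidents = defaultdict(list)
    for alert in alerts:
        fingerprint = (alert["service"], alert["symptom"])
        incidents[fingerprint].append(alert)
    return incidents

# Hypothetical alert stream from a bank's payment stack.
alerts = [
    {"service": "payments-api", "symptom": "latency", "host": "node-1"},
    {"service": "payments-api", "symptom": "latency", "host": "node-2"},
    {"service": "fraud-model", "symptom": "timeout", "host": "gpu-3"},
]
incidents = correlate_alerts(alerts)
print(len(alerts), "alerts ->", len(incidents), "incidents")  # 3 alerts -> 2 incidents
```

Production systems add time-window clustering, topology awareness, and learned correlation on top of this, but even simple deduplication cuts the noise that keeps IT teams firefighting.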
Standardising on frameworks such as OpenTelemetry can further simplify instrumentation across heterogeneous environments, ensuring consistent data collection and analysis. At the same time, investing in data quality, governance and compliance processes ensures that AI models are trained and operated within regulatory boundaries.
In practical terms, this means rethinking infrastructure as an enabler of AI rather than an afterthought. It may involve accelerating data movement between environments, modernising integration layers or rationalising overlapping monitoring tools. The goal is not perfection, but coherence: a shared, real-time understanding of how systems behave and how AI performs under real-world conditions.
From Optimism to Optimisation
The debate about whether AI belongs in banking is effectively over. Across the Middle East, regulators are publishing AI guidelines, governments are investing heavily in digital transformation, and consumers increasingly expect intelligent, seamless services.
Institutions that continue to treat AI as a series of isolated pilots risk remaining in perpetual experimentation. However, those who address operational complexity head-on will move beyond optimism to optimisation.
Addressing Structural Gaps in Enterprise Backup Strategies

By Owais Mohammed, Regional Lead & Sales Director, WD – Middle East, Africa, Turkey & Indian Subcontinent
Today, organizations across the UAE are reassessing how they back up and recover data in increasingly complex environments. Organisations are managing data across cloud platforms, on-premises infrastructure, edge deployments, and increasingly, AI-driven workloads. As these environments scale, data moves across systems and is reused for analytics, compliance, and performance optimisation. This increases the complexity of backup and retention requirements. When strategies do not keep pace, gaps become visible.
Where backup strategies are falling short
A common challenge is misalignment between backup design and actual workload distribution. Many backup strategies are built around primary systems. But enterprise data now lives across multiple environments with different access patterns and retention requirements. This creates inconsistencies in backup coverage across cloud services, endpoints, and shared infrastructure.
A common misconception is that platform-level redundancy is sufficient. Cloud and application platforms are designed to provide availability, but they do not replace independent backup layers. When data is modified, deleted, or encrypted within the same environment, recovery depends on whether a separate, unaffected copy exists.
Coverage inconsistencies also become more visible as organizations scale. Backup policies often prioritise transactional systems. Logs, archived records, development environments, and datasets used for analytics or AI workflows may be retained without structured protection. These datasets can become critical during investigations, audits, or system updates.
Recovery planning is where many strategies can break down. Backup processes may be in place, but recovery requirements are not always well defined. This includes defining dependencies, sequencing recovery, and aligning recovery times with business needs.
Why data resilience is now an infrastructure requirement
Enterprise data is now used across a wider range of functions. In analytics and AI-driven environments, data is revisited over time rather than stored and left unused. Historical datasets are essential to maintain performance and consistency. This means reliable backup and access are no longer secondary considerations, but core infrastructure needs.
Compliance expectations are also evolving. Organizations increasingly need to retain records, demonstrate traceability, and provide access to data in a verifiable format. Backup and retention policies must align with recovery capabilities.
Building a more resilient data strategy
Addressing these gaps requires a structured approach to data resilience.
Infrastructure choices affect how backup strategies can be implemented. These decisions increasingly factor in not only performance and scalability, but also long-term cost efficiency as data environments expand. Many organisations are adopting hybrid models that combine cloud platforms with localised storage systems. This allows different workloads to be supported based on their access patterns and recovery requirements. In scenarios where consistent performance and recovery predictability are required, localized storage can provide additional control.
As environments grow, automation is important in maintaining consistency. Policy-driven automation helps ensure that backup processes are applied consistently, while monitoring tools provide visibility into system performance and potential gaps.
Recovery planning needs to be integrated into these processes. Clear recovery objectives and regular testing are essential for effective backup strategies.
Data prioritization also plays a role in managing scale. Not all data requires the same level of backup. Identifying critical datasets allows organizations to allocate resources effectively.
Managing cost as data volumes scale
Cost considerations play a central role as data volumes scale. In large environments, power consumption, cooling requirements, and infrastructure footprint all contribute to total cost of ownership (TCO).
This is where tiered storage architecture becomes critical. High-performance storage is essential for active workloads such as analytics and real-time processing, while high-capacity, cost-efficient storage supports large datasets, backups, and long-term retention. This helps manage growth and scaling efficiently.
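The tiering logic can be expressed as a simple policy function. The Python sketch below uses illustrative thresholds and dataset names; real policies would be tuned to each organisation's recovery-time objectives (RTOs) and measured access patterns:

```python
def assign_tier(dataset):
    """Place a dataset on a storage/backup tier based on its access
    frequency and recovery-time objective. Thresholds are illustrative."""
    if dataset["rto_hours"] <= 1 or dataset["accesses_per_day"] >= 100:
        return "hot"   # high-performance storage, frequent backup
    if dataset["rto_hours"] <= 24:
        return "warm"  # standard storage, daily backup
    return "cold"      # high-capacity archive, periodic backup

# Hypothetical datasets for illustration.
datasets = [
    {"name": "core-ledger", "rto_hours": 1, "accesses_per_day": 5000},
    {"name": "analytics-features", "rto_hours": 12, "accesses_per_day": 40},
    {"name": "audit-archive-2019", "rto_hours": 168, "accesses_per_day": 0},
]
for d in datasets:
    print(d["name"], "->", assign_tier(d))
```

Encoding the policy this way also makes it auditable: the same rules that drive placement can be replayed against an inventory to find datasets sitting on the wrong tier.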
Treating all data the same is no longer practical. Infrastructure decisions need to reflect how data is used, how often it is accessed, and how quickly it needs to be recovered.
Backup strategies must align closely with infrastructure design. Data resilience now means ensuring data is accessible and recoverable across systems.
In data-intensive environments, the ability to recover and reuse data is directly tied to operational continuity, system performance, and the ability to scale infrastructure effectively.
The Convergence of Crisis: How Overlapping Risks Are Redefining Workforce Mobility in the Middle East

By Gillan McNay, Security Director Assistance – Middle East, International SOS
In today’s Middle East operating environment, mobility risk no longer arrives in isolation. Organisations are increasingly navigating multiple, overlapping disruptions that converge to affect how, when, and whether their people can move. Geopolitical tension, aviation restrictions, cyber exposure, misinformation, and workforce anxiety are no longer separate risk categories – they interact, amplify one another, and challenge traditional mobility assumptions.
This convergence is redefining what “safe movement” looks like for organisations with employees traveling, deployed, or working abroad across the region.
From Single Events to Layered Disruption
Historically, mobility planning focused on discrete scenarios: weather events, isolated security incidents, or airline strikes. Today, organisations are far more likely to face layered disruption, where one event triggers a cascade of secondary impacts.
A regional security escalation may coincide with airspace closures. Airspace closures may lead to congestion at land borders. Border congestion increases stress for travelers, which in turn heightens reliance on digital communication channels, precisely when misinformation and cyber activity surge. Each layer compounds the next.
International SOS’ Risk Outlook 2026 highlights this shift clearly: risk is now systemic and interdependent, not episodic. For mobility teams, this means plans designed for one‑dimensional threats will be insufficient.
Mobility Is Now a Strategic Exposure
Movement of people has become a strategic risk vector rather than a logistical one. When employees cannot travel as planned, the impact extends beyond delayed meetings or project timelines. It affects:
- Business continuity
- Leadership visibility
- Employee confidence and wellbeing
- Regulatory and duty‑of‑care obligations
In the Middle East, this is especially pronounced due to the region’s role as a global aviation hub and its highly international workforce. When airspace is disrupted in one country, the effects ripple across neighbouring states almost immediately.
As a result, organisations must treat mobility decisions with the same scrutiny as other strategic risks: cybersecurity, financial exposure, or supply‑chain dependency.
The New Reality: Mobility Under Uncertainty
In recent months, we have seen how quickly mobility conditions can change. Routes that were viable in the morning may be restricted by evening. Neighbouring jurisdictions may adjust entry requirements or limit transit with little notice. Information may circulate rapidly on social media before it can be verified.
The most resilient organisations recognise that movement decisions must be conditions‑based, not schedule‑based. Rather than asking “Can we move people today?”, leaders need to ask:
- What conditions would make movement unsafe tomorrow?
- What alternatives exist if a primary route closes?
- Are we prepared to shift from air to land, or to stabilise in place?
This approach requires planning optionality into every mobility decision.
Overlapping Risks Demand Integrated Decision‑Making
The convergence of crisis exposes one of the most common organisational gaps: mobility decisions are often segmented across functions. Security looks at threat levels, HR considers employee impact, travel teams focus on bookings, and IT monitors communications. In a converging‑risk environment, this fragmentation increases risk.
Mobility decisions must be informed by integrated intelligence, security assessments, aviation updates, border conditions, medical considerations and workforce sentiment. When these views are aligned into a single operating picture, organisations can act faster and with greater confidence.
This integrated approach is increasingly reflected in board‑level discussions, as highlighted in the Risk Outlook 2026, where executive oversight of crisis preparedness and workforce risk continues to rise.
The Human Layer Cannot Be Separated From Mobility
Overlapping crises do not only disrupt routes; they disrupt people. Uncertainty around travel amplifies stress, particularly for expatriates with families, employees traveling alone, or teams operating far from home support networks.
From an assistance perspective, we see that anxiety itself becomes a risk multiplier. Tired, stressed travelers are more likely to make poor decisions, rushing to airports prematurely, acting on unverified information, or attempting unsafe routing alternatives.
Mobility strategies must therefore incorporate psychological safety alongside physical safety. Clear guidance, predictable communication, and reassurance that decisions are being reviewed continuously make a material difference to outcomes.
Why “Move” Is Not Always the Right Answer
One of the most important shifts organisations are making is recognising that relocation or evacuation is not always the safest or most effective response. In converging‑risk scenarios, moving people can expose them to new uncertainties if the destination environment changes.
Stability, supported by shelter‑in‑place guidance, supply planning, and continuous monitoring, can be the safest posture while conditions clarify. Mobility planning should define three distinct postures:
- Stay and stabilise
- Relocate to a regional safe haven
- Evacuate out of the region
Each posture requires different triggers, communications, and support mechanisms. Treating them interchangeably increases risk.
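The trigger logic behind the three postures can be sketched as a conditions-based decision function. The condition names and rules below are illustrative only; real plans define location-specific triggers and escalation paths:

```python
def choose_posture(conditions):
    """Map observed conditions to one of the three mobility postures.
    Inputs and thresholds are hypothetical, not a real playbook."""
    severe = conditions["threat_level"] == "severe"
    if severe and conditions["airspace_open"]:
        return "evacuate out of the region"
    if conditions["threat_level"] in ("high", "severe") and conditions["regional_haven_reachable"]:
        return "relocate to a regional safe haven"
    return "stay and stabilise"

# Airspace closed, but a land route to a regional haven is open:
print(choose_posture({"threat_level": "severe",
                      "airspace_open": False,
                      "regional_haven_reachable": True}))
```

Even a toy rule set like this makes the point that each posture has distinct triggers; conflating them, or deciding by schedule rather than by conditions, is where plans fail.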
Information Discipline Is a Mobility Imperative
Overlapping crises generate noise. For organisations managing mobility, information discipline becomes critical. Decisions based on rumours, unverified social media posts, or outdated aviation updates can lead to unnecessary movement, or unsafe delay.
Effective organisations establish clear information pathways:
- Who validates updates
- Which sources are trusted
- How frequently conditions are reviewed
- When decisions are escalated
This discipline supports faster pivots when conditions change and reduces the emotional load on traveling employees.
Building Adaptive Mobility for the Future
The convergence of crisis in the Middle East is not a temporary phenomenon. Geopolitical volatility, climate stress, digital disruption, and workforce expectations will continue to intersect. Mobility strategies must evolve accordingly.
Resilient organisations are already adapting by:
- Embedding workforce visibility into core systems
- Designing mobility plans with multiple fail‑safe options
- Training leaders to make people‑first decisions under pressure
- Aligning crisis planning with broader enterprise risk management
As the Risk Outlook 2026 underscores, preparedness is no longer about predicting the next event, it’s about building the capacity to adapt when events collide.
A Redefined Measure of Readiness
In this new operating reality, mobility readiness is not measured by the ability to move people quickly, but by the ability to make calm, informed, and proportionate decisions as risks converge.
Organisations that understand this will be better positioned to protect their people, maintain operational stability, and navigate periods of regional tension with confidence rather than urgency. The convergence of crisis is challenging, but with the right structures, discipline, and integration, it is manageable.