Tech Features
Making Sense of Identity Threat Risks
By David Warburton, Director, F5 Labs
The growing maturity of cloud computing, including shifts towards decentralized architectures and APIs, has highlighted the complexity of managing credentials in increasingly interconnected systems. It has also underlined the importance of managing non-human entities like servers, cloud workloads, third-party services, and mobile devices.
F5 Labs’ 2023 Identity Threat Report defines identity as an artifact that an entity (such as a person, a workload, a computer, or an organization) uses to identify itself to a digital system. Examples of digital identities include username/password pairs, other personally identifiable information, and cryptographic artifacts such as digital certificates.
Digital identities cannot stand on their own. They require a system to accept and validate them. In other words, for a digital identity to function there must be at least two parties involved: an entity and an identity provider (IdP), which is responsible for issuing and vetting digital identities. However, not all organizations that provide resources are IdPs; many digital services rely on third-party IdPs such as Google, Facebook, Microsoft, or Apple to vet identities.
Based on our recent analysis, the three most prominent forms of attack in the identity threat arena currently are credential stuffing, phishing, and multi-factor authentication (MFA) bypass.
Credential stuffing
Credential stuffing is an attack on digital identity in which attackers use stolen username/password combinations from one identity provider to attempt to authenticate to other identity providers for malicious purposes, such as fraud.
It is a numbers game that hinges on the fact that people reuse passwords, but the likelihood that any single publicly compromised password will work on another single web property is still small. Making credential stuffing profitable is all about maximizing the number of attempts, which requires automation.
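One common defense against this numbers game is to screen passwords against known breach corpora without ever transmitting the password itself. The following is a minimal sketch, assuming the k-anonymity range model popularized by the Have I Been Pwned Pwned Passwords API, in which only the first five hex characters of the password's SHA-1 digest ever leave the client:

```python
import hashlib

def hibp_range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest into the 5-character prefix that is
    sent to a breach-corpus range API and the suffix that is matched locally.
    Only the prefix leaves the client (k-anonymity model)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password")
# The service would return every breached-hash suffix sharing `prefix`;
# the client checks whether `suffix` appears in that list locally.
```

The service never learns which password was checked: it returns all suffixes sharing the five-character prefix, and the comparison happens on the client side.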
Phishing
Among attack types, phishing is rivaled perhaps only by denial of service (DoS) in how fundamentally it differs from the rest. It is an attack on digital identity, to be sure, but because it usually relies on a social engineering foothold, it is even more difficult to detect or prevent than credential stuffing.
Phishing attacks have two targets: the end user who possesses a digital identity, and the IdP, which the attacker will abuse once they’ve obtained credentials. Depending on the motives of the attacker and the nature of the system and the data it stores, the impact of a successful phishing attack can land primarily on the user (as in the case of bank fraud), solely on the organization (as in the case of compromised employee credentials), or somewhere in the middle.
On the attacker side, phishing can range from simple, hands-off solutions for unskilled actors to custom-built frameworks including infrastructure, hosting, and code. The most hands-off setup is the Phishing-as-a-service (PhaaS) approach in which the threat actor pays to gain access to a management panel containing the stolen credentials they want, and the rest is taken care of by the “vendor.”
Dark web research indicates that the most popular subtype of phishing service is best described as phishing infrastructure development, in which aspiring attackers buy phishing platforms, infrastructure, detection evasion tools, and viable target lists, but run them on their own.
Brokering phishing traffic, or pharming, is the practice of developing infrastructure and lures for the purposes of driving phishing traffic, and then selling that traffic to other threat actors who can capitalize on the reuse of credentials and collect credentials for other purposes.
Finally, the attacker community has a niche for those who exclusively rent out hosting services for phishing.
The most important tactical development in phishing is undoubtedly the rise of reverse-proxy/man-in-the-middle phishing tools (sometimes known as real-time phishing proxies, or RTPPs), the best known of which are Evilginx and Modlishka. This is largely because they grant attackers the ability to capture most multi-factor authentication codes and replay them immediately to the target site, facilitating MFA bypass while also making it less likely that the victim will notice anything is amiss.
Multi-factor authentication (MFA) bypass
Recent years have seen attackers adopt a handful of different approaches to bypassing multi-factor authentication. The differences between these approaches are largely driven by what attackers are trying to accomplish and who they are attacking.
Nowadays, the reverse proxy approach has become the new standard for phishing technology, largely because of its ability to defeat most types of MFA.
MFA bypass tactics include:
- Malware. In mid-2022, F5 malware researchers published an analysis of a new strain of Android malware named MaliBot. While it primarily targeted online banking customers in Spain and Italy when it was first discovered, it had a wide range of capabilities, including the ability to create overlays for web pages to harvest credentials, collect codes from Google’s Authenticator app, capture other MFA codes including SMS single-use codes, and steal cookies.
- Social engineering. There are several variations of social engineering for bypassing MFA. Some target the owner of the identity, and some target telecommunications companies to take control of phone accounts.
- Social engineering for MFA code—Automated. These are attacks in which attackers use “robocallers” to phone the target, impersonating an identity provider and asking the victim for an MFA code or one-time password (OTP).
- Social engineering for MFA code—Human. This is the same as the above approach except that the phone calls come from humans and not an automated system.
- SIM swaps. In this kind of attack, a threat actor obtains a SIM card tied to a mobile account that they want to compromise, allowing them to assume control of the victim’s phone number and collect OTPs sent over SMS. There are several variations of this approach.
So, what does it all mean?
Identity threats are constant and continuous. Whereas a vulnerability represents unexpected and undesirable functionality, attacks on identity represent systems working exactly as designed. They are therefore “unpatchable” not only because we can’t shut users out, but because there isn’t anything technically broken.
This brings us back to the question of what digital identity really is. To go from real, human identity to digital identity, some abstraction is inevitable (by which we mean that none of us is reducible to our username-password pairs). We often teach about this abstraction in security by breaking it down to “something we know, something we have, and something we are.” It is this abstraction between the entity and the digital identity that attackers are exploiting, and this is the fundamental basis of identity risk.
By thinking about digital identities in this way, what we are really saying is that they are a strategic threat on par with, but fundamentally different from, vulnerability management. With nothing to patch, each malicious request needs to be dealt with individually, as it were. If modern vulnerability management is all about prioritization, modern identity risk management is essentially all about the ability to detect bots and differentiate them from real human users. The next logical step is quantifying the error rate of detecting these attacker-controlled bots. This is the basis on which we can begin to manage the risk of the “unpatchables.”
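Quantifying that error rate comes down to two numbers: how often real humans are wrongly flagged as bots, and how often bots slip through as human. A minimal sketch with illustrative counts (the figures are invented for the example, not data from the report):

```python
def detection_error_rates(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Error rates for a bot-detection classifier.

    tp: bots correctly flagged      fp: humans wrongly flagged as bots
    tn: humans correctly passed     fn: bots wrongly passed as human

    Returns (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)  # share of real humans who get challenged or blocked
    fnr = fn / (fn + tp)  # share of automated traffic that evades detection
    return fpr, fnr

# Illustrative: out of 100 bot requests and 100 human requests,
# 5 humans are challenged and 10 bots get through.
fpr, fnr = detection_error_rates(tp=90, fp=5, tn=95, fn=10)
```

The false-positive rate is the friction cost paid by legitimate users, and the false-negative rate is the residual identity risk; managing the “unpatchables” means measuring and trading off both.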
SUPPORTING EMPLOYEES ABROAD OR RELOCATING AMID REGIONAL TENSIONS: A STRATEGIC ADVISORY FOR ORGANISATIONS

By Gillan McNay, Security Director Assistance – Middle East, International SOS
Periods of regional tension place organisations under intense pressure to protect their people while sustaining operations. For UAE‑based companies with employees working from abroad, traveling frequently, or facing potential relocation, uncertainty can escalate quickly. Routes change, borders tighten, information moves faster than it can be verified, and employees look to their organisation for clarity and reassurance. In this environment, support must be strategic, deliberate, and people‑first.
Shift From Reaction to Preparedness
The most resilient organisations are those that move beyond reacting to events and instead operate with a preparedness mindset. This starts with acknowledging that uncertainty is not an exception but a condition organisations must continuously manage. Strategy, therefore, should anticipate disruption and define how the organisation will respond before decisions are forced by urgency.
Preparedness does not mean planning for every possible outcome. It means establishing decision frameworks that allow leaders to act confidently as conditions evolve, whether that results in continued remote work, relocation to a safe haven, or shelter‑in‑place with enhanced support.
Establish Workforce Visibility as a Strategic Capability
Supporting employees abroad begins with accurate, real‑time visibility. Leaders must know where their people are, their travel status, and whether they are working remotely, stationed overseas, or in transit with dependents. Visibility should extend beyond employees to include contractors and accompanying family members where duty‑of‑care obligations apply.
This visibility is strategic because it underpins all subsequent decisions. Without it, organisations risk delayed responses, fragmented communication, and uneven support. With it, they can act proportionately, supporting those most exposed while avoiding unnecessary disruption for others.
Differentiate Between Relocation, Evacuation, and Stability
One of the most common strategic mistakes during regional tensions is treating all movement decisions as evacuations. In reality, organisations need three clearly defined postures:
- Stability: Supporting employees to remain where they are with guidance, wellbeing checks, and secure working arrangements.
- Relocation: Moving employees to a safer location, often within the region, as a preventive measure.
- Evacuation: Executing time‑bound movement out of an area due to elevated risk.
Clear definitions allow leaders to choose the least disruptive option that still protects people. Often, relocation or stability with structured support is safer and more sustainable than rapid evacuation.
Prepare Employees Before Movement Is Required
Relocation becomes significantly smoother when employees are prepared before they are asked to move. Strategy should include guidance on documentation readiness, passport validity, visa requirements for neighbouring countries, preferred relocation countries and expectations around timelines and flexibility.
Employees working abroad need to understand not only what may happen, but how decisions will be made. When organisations explain decision triggers (what would prompt relocation and what would not), employees feel informed rather than anxious. This transparency builds trust and reduces panic-driven movement.
Integrate the Human Dimension into Planning
Strategic support must address the human impact of uncertainty. Employees working from abroad or facing relocation are often balancing professional obligations with family concerns, schooling, medical needs, and other emotional strains. Ignoring these factors weakens any relocation or stability strategy.
Effective organisations integrate wellbeing considerations into operational plans. This includes access to medical advice, continuity of prescriptions, support for family travel, and regular wellbeing check‑ins. Leaders should be attuned to signs of fatigue or anxiety and equip managers with guidance to support teams compassionately and consistently.
Communicate With Discipline and Predictability
In uncertain times, communication is as important as movement planning. Strategy should define how, when, and by whom information is shared. Centralised, fact‑based updates delivered at a predictable cadence reduce speculation and rumor.
Employees should know where official updates will come from and which sources to trust. Communications do not need to be frequent to be effective; they need to be consistent, clear, and grounded in verified information. Saying “there is no update yet” is often more reassuring than silence.
Support Employees Who Must Remain Abroad
Not all employees can or should relocate. Many will continue working from abroad in environments affected by regional tension. Supporting these employees strategically means ensuring they have guidance on local conditions, access to support services, and clearly defined expectations around work, availability, and safety.
Stability should be treated as an active posture, not inaction. Regular check‑ins, updated guidance, and contingency planning signal to employees that their situation is being managed deliberately, not overlooked.
Plan for Relocation as a Managed Process
When relocation is required and viable, it should be executed as a controlled, end‑to‑end process. This includes manifesting all individuals, front‑loading documentation checks, coordinating transport and accommodation, and communicating each step of the journey.
Strategically, leaders must also consider what comes after relocation: access to work, schooling for children, healthcare, and communication continuity. Relocation is not just movement; it is a temporary operating model that must be sustainable.
Learn, Adapt, and Strengthen
Each period of disruption provides insight into what worked and what did not. Strategic organisations capture these lessons and feed them back into planning. This may involve refining decision thresholds, improving data accuracy, or strengthening manager training.
Preparedness evolves as operating environments change, and organisations that invest in continuous improvement are better positioned to protect both their people and their business.
A Strategy Built on Trust and Clarity
Ultimately, supporting employees abroad or relocating amid regional tensions is a test of organisational maturity. Clear visibility, disciplined planning, transparent communication, and genuine care form the foundation of resilience. When organisations operate from these principles, employees feel supported rather than vulnerable, and leaders can make decisions with confidence rather than urgency.
IN THE AGE OF AI, THE BEST HEALTHCARE WILL STILL BE HUMAN

By Dr. Craig Cook, CEO, The Brain & Performance Centre, A DP World Company
Healthcare is entering one of the most transformative periods in its history. Artificial intelligence is accelerating diagnostics, enhancing imaging, and enabling more personalised treatment pathways than ever before. These advancements are no longer theoretical; they are already shaping how care is delivered across leading medical systems.
However, as the industry moves forward at pace, there is a risk of focusing too heavily on what technology can do, and not enough on what individuals actually need.
At its core, healthcare is not a technical transaction. It is a human experience. Within that experience, trust, communication and empathy are not optional; they are fundamental.
Strong human interaction between clinicians and clients remains one of the most important factors in delivering safe and effective care. Technology can identify patterns, process data and support decision-making, but it cannot replace the reassurance an individual feels when they are heard, understood and taken seriously. That interaction often determines whether someone follows through with treatment, shares critical information, or seeks support early rather than late.
From a safety perspective, this is critical. Individuals who feel comfortable with their clinician are far more likely to communicate openly about symptoms, concerns and uncertainties. They ask more questions, clarify instructions, and engage more actively in their own care. This level of engagement reduces the likelihood of miscommunication, improves adherence to treatment plans, and ultimately leads to better outcomes.
In contrast, when the human element is diminished, even the most advanced systems can fall short. An individual may receive accurate data but still leave uncertain about what it means. They may hesitate to disclose something important, or disengage entirely. No algorithm can compensate for that gap.
This is why meaningful communication must remain at the centre of healthcare delivery. It is not simply about explaining a diagnosis. It is about creating an environment where individuals feel safe to speak, where their concerns are acknowledged, and where complex information is translated into something clear and actionable.
As artificial intelligence continues to evolve, the role of the clinician will not diminish; it will become more important. Technology should reduce administrative burden, enhance precision, and create time. That time should be reinvested into the client relationship through greater clarity, deeper understanding and more considered care.
At The Brain & Performance Centre, A DP World Company, this balance is central to how we approach care. Advanced technologies play a critical role in our assessments and programmes, but they are always applied within a human-led framework. Every programme is personalised, every interaction is intentional, and every client journey is built on understanding the individual, not just the data.
The future of healthcare will undoubtedly be shaped by innovation. But its success will not be defined by how advanced the technology becomes. It will be defined by whether we use that technology to strengthen, rather than replace, the human connection at the centre of care. Because ultimately, the most powerful tool in healthcare is not artificial intelligence. It is trust.
6 Trends in AI Compliance Influencing How GCC Companies Operate
Across the GCC, national development agendas increasingly position artificial intelligence as a cornerstone of economic diversification. Saudi Arabia’s Vision 2030, the UAE’s National AI Strategy 2031, and Qatar’s national innovation roadmap all highlight AI as a critical driver of future growth. According to McKinsey, AI adoption has already reached around 84 percent among organisations in the GCC, with the technology projected to generate up to $320 billion in economic value for the Middle East by 2030. As adoption accelerates across industries, regulatory compliance is becoming a key factor that determines whether AI initiatives move beyond ambition to achieve sustainable scale.
Shaffra, an AI research and applications company building autonomous AI teams for enterprises and governments, sees six clear shifts reshaping how companies operate.
1. Regulation is accelerating adoption in high-stakes sectors
Government entities, financial services, telecom, aviation, and large semi-government organisations are moving fastest. These sectors operate at scale, face strict efficiency mandates, and function under constant regulatory oversight. Healthcare and energy are advancing more cautiously due to safety and data sensitivity. In many cases, the more regulated the industry, the faster AI deployment progresses. However, rapid scaling also exposes governance weaknesses, particularly where documentation, ownership, and oversight mechanisms are underdeveloped.
2. Compliance is a prerequisite for scale
Over the past year, 88% of Middle East CEOs have reported generative AI uptake. Today, organisations increasingly require audit trails, explainability, clear data lineage and residency controls, defined performance thresholds, and enforceable human oversight mechanisms. With one in four Middle East consumers citing privacy as a primary concern, compliance can no longer be treated as a post-deployment validation exercise; it is a structural requirement for scaling AI responsibly.
3. Sovereign AI and data residency are shaping architecture
AI governance in the GCC is being influenced less by standalone AI laws and more by data protection and cybersecurity frameworks. The UAE’s federal data protection law, Saudi Arabia’s PDPL under SDAIA, and Oman’s PDPL reinforce lawful processing and cross-border controls. In highly regulated sectors such as banking, healthcare, energy, and telecommunications, data residency and local control over models are strategic imperatives. Sovereign AI is evolving from a policy ambition into an operational requirement affecting infrastructure, vendor selection, and system design.
4. Human accountability is being reasserted
When organisations deploy AI without defining who owns the decision, when human escalation is required, and what the system is permitted or restricted from doing, they create either over-reliance or under-utilisation. Without clearly defined ownership and documented review controls, accountability weakens and regulatory exposure increases.
For instance, the DIFC reinforces responsible AI use in personal data processing. High-impact decisions involving legal standing, fraud, employment, healthcare guidance, or public sector determinations that affect citizens should involve accountable human oversight, while AI handles speed, consistency, and automation of repetitive tasks.
5. Governance maturity slows deployment activity
Many organisations are AI-active but still developing governance maturity. Common governance gaps are structural rather than technical. Multiple pilots often run in parallel, tool adoption is fragmented, and accountability is split across IT, legal, risk, and business functions. Growing enterprises often lack a central AI governance owner, a comprehensive use-case inventory, consistent vendor and model risk assessment, and formal escalation protocols. Policies may exist at the board level, yet they are not consistently embedded into day-to-day operations. Addressing this gap requires governance to be built into workflows from the outset.
6. Continuous auditing is a discipline
Studies indicate that a majority of ML models degrade over time, through model drift, hidden bias, or misuse vulnerabilities. Initial audits frequently reveal undocumented use cases, weak access segmentation, insufficient logging, and unclear review protocols. Effective governance requires compliance with international and local data residency rules, structured risk tiering, data lineage validation, access controls, bias testing, performance benchmarking, and defined incident response procedures. High-impact systems warrant quarterly reviews supported by continuous monitoring, while lower-risk applications still require periodic reassessment. Governance is increasingly measured through evidence rather than policy statements. Boards are asking for dashboards, logs, and audit artefacts, not policy PDFs.
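Drift monitoring of the kind described above is often operationalised with a simple statistic such as the population stability index (PSI), which compares a feature's binned distribution at training time against live traffic. A minimal sketch follows; the 0.1/0.25 thresholds in the comment are common industry rules of thumb, not a regulatory standard:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.

    Both inputs are per-bin proportions summing to 1 (e.g. a model score
    histogram at training time vs. in production). Common rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift
    warranting investigation. Empty bins are skipped to avoid log(0)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

# Identical distributions score 0; a shifted production distribution
# scores higher, flagging the model for review.
baseline = [0.25, 0.25, 0.25, 0.25]
production = [0.10, 0.20, 0.30, 0.40]
drift_score = psi(baseline, production)
```

Running a check like this on a schedule, and logging the result, is exactly the kind of audit artefact (rather than a policy statement) that boards are starting to ask for.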
Governance is being considered as part of AI infrastructure. Compliance frameworks are evolving into operational architecture embedded within systems, workflows, and accountability models. The organisations that will lead in the GCC are those that design governance at the same time they design capability, ensuring AI scales with discipline rather than risk.