Tech Features
WHEN MEDICAL SCANS END UP ONLINE: THE QUIET RISK HOSPITALS CAN FIX FAST

By Osama Alzoubi, Middle East and Africa VP at Phosphorus Cybersecurity
As Saudi Arabia races ahead in digital healthcare transformation, a quieter risk lingers in the background, one that rarely makes headlines. Imaging infrastructure, diagnostic platforms, and hospital information systems are being modernized at speed, improving outcomes, accelerating workflows, and bringing advanced clinical capabilities to more communities. Beneath this progress, however, medical imaging systems can be found – and sometimes accessed – directly from the public internet because of simple configuration errors.
Not a dramatic cyberattack. Not a threat actor breaching a firewall. Just avoidable misconfigurations that leave sensitive patient data reachable by anyone who knows where to look.
These technical oversights, rather than sophisticated attacks, remain one of the most persistent vulnerabilities in Saudi healthcare's digital infrastructure.
The Kingdom’s Personal Data Protection Law (PDPL) establishes strict requirements for handling health data. This legislation, modeled after international standards, mandates enhanced protection for medical information and imposes penalties for unauthorized disclosure. Hospitals must implement organizational and technical measures to prevent data exposure.
Radiology departments increasingly use digital platforms for case discussions and second opinions. Without proper configuration, these systems might allow unintended access to patient records. Teleradiology services, which expanded significantly during the pandemic, require secure transmission protocols to protect data during remote consultations.
When we hear about data breaches, we often imagine skilled hackers penetrating security systems. The reality is often simpler and more preventable. “Exposed” typically means a system is reachable from the public internet due to setup choices, not a sophisticated intrusion.
This happens in real-world healthcare settings for straightforward reasons: rushed deployments to meet clinical deadlines, vendor-supplied default configurations that were never changed, remote support access left open for convenience, and legacy systems that were connected to modern networks without proper security reviews.
The scale is significant. Research has identified over 1.2 million reachable devices and systems globally, including MRI scanners, X-ray systems, and related medical infrastructure. These are not theoretical vulnerabilities. They represent actual systems that can be found and accessed from anywhere with an Internet connection.
What gets exposed is more than images
Medical imaging files are not simply pictures. They carry identifiers and metadata that can connect scans directly to real people. Patient names, dates of birth, identification numbers, and clinical details often travel alongside the diagnostic images themselves.
This matters for several reasons. Beyond the obvious privacy violation, exposed patient imaging data creates risks of identity fraud, potential coercion or blackmail, serious reputational damage to healthcare institutions, and erosion of the trust patients place in their medical providers.
Security monitoring platforms have documented cases where exposed systems allowed direct access to both images and patient data—offering a level of detail that should never be open to anyone outside the clinical team.
Why this keeps repeating worldwide
Hospitals everywhere use similar device types and manage comparable data flows. The result is that the same setup mistakes appear repeatedly across different countries and healthcare systems. What starts as one hospital’s misconfiguration becomes everyone’s common failure mode.
The medical devices themselves often come with similar default settings. Imaging servers, picture archiving systems, and diagnostic viewers are deployed in comparable ways. When basic security steps are skipped during installation, the exposure follows a predictable pattern.
Health sector cybersecurity guidance from international authorities emphasizes the need for repeatable baseline controls precisely because these patterns recur. Reducing exposure requires not innovation, but consistent application of known protective measures.
Healthcare organizations face a common vulnerability pattern. A major healthcare provider addressed similar challenges across hundreds of hospitals, discovering that default passwords, vulnerable firmware, and device misconfigurations created entry points that threatened patient care and hospital operations across more than 500,000 connected medical and operational devices.
The Saudi-specific layer: connectivity at cluster scale
Saudi Arabia’s healthcare transformation includes the expansion of health clusters that connect multiple facilities into integrated networks. This approach improves care coordination and resource sharing, but it also means that one weak link can affect multiple sites.
National interoperability initiatives support the sharing of imaging and diagnostic reports across the healthcare system. The Saudi health ministry has established specifications for imaging data exchange through the national health information exchange platform, enabling providers to access patient scans regardless of where they were originally performed.
This connectivity is essential for modern healthcare delivery. It allows specialists to review scans remotely, supports second opinions, and ensures continuity of care when patients move between facilities. However, it also increases the need for consistent configuration rules and security standards across all connected sites.
When imaging systems within a cluster are not uniformly secured, the exposure risk multiplies. A misconfigured system in one facility can potentially provide access to data from across the entire cluster network.
A practical checklist hospitals can act on
Healthcare institutions can take concrete steps to reduce exposure risk. These are not theoretical recommendations but proven measures that address the most common vulnerabilities.
First, create a complete inventory. Every hospital should maintain a current list of what is connected to its network, including imaging devices, storage servers, viewing stations, web portals, and remote access tools. You cannot protect what you do not know exists.
Second, check external exposure. Verify that nothing sensitive is reachable from the public internet. This requires technical scanning from outside the hospital network to identify systems that respond to external queries. Many organizations discover exposures they did not realize existed; a simple illustration of this kind of check is sketched after this checklist.
Third, restrict remote access properly. Remote connections for maintenance and support should be tightly controlled, require strong authentication methods, and be removed entirely when no longer needed. Convenience should never override security when patient data is involved.
Fourth, implement safe setup procedures. Develop standard build guides for imaging systems, change all default passwords and settings, clearly document who owns each system, and establish responsibility for applying security patches and updates. Industry experience shows that default credentials remain one of the lowest barriers for attackers seeking entry into healthcare networks.
Fifth, conduct continuous checks. Exposure scanning should happen after any network changes, not just once annually. Healthcare networks evolve constantly, and new vulnerabilities can appear whenever systems are added or reconfigured.
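To make the second step concrete, here is a minimal sketch of what an external reachability check can look like, written in Python. It simply attempts TCP connections from outside the network to ports commonly associated with imaging and remote-access services; the addresses and port list are illustrative placeholders, and a real programme would use authorised, dedicated attack-surface scanning rather than an ad hoc script.

    # Minimal sketch: check from OUTSIDE the hospital network whether hosts
    # answer on ports commonly used by imaging and remote-access services.
    # Illustrative only; run solely against addresses you are authorised to test.
    import socket

    # Hypothetical public addresses owned by the organisation (placeholders).
    PUBLIC_ADDRESSES = ["203.0.113.10", "203.0.113.11"]

    # Ports often associated with imaging systems and remote support tools.
    PORTS = {
        104: "DICOM (default)",
        11112: "DICOM (alternate)",
        443: "HTTPS viewer or portal",
        3389: "Remote desktop",
    }

    def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in PUBLIC_ADDRESSES:
        for port, service in PORTS.items():
            if is_reachable(host, port):
                print(f"EXPOSED: {host}:{port} ({service}) answers from the public internet")

A successful connection does not by itself prove patient data is exposed, but a DICOM or remote-desktop service answering from the public internet is almost always a configuration that warrants immediate review.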
These steps align with guidance from international cybersecurity authorities and health sector regulators, which emphasize reducing exposed services and strengthening baseline controls as priority actions for healthcare organizations.
The governance fix: make secure setup part of how clusters run
Individual hospital efforts are necessary but not sufficient. At the cluster level, governance structures must embed security into standard operations.
This begins with cluster-wide minimum standards for imaging systems and remote access. Every facility within a cluster should follow the same baseline security requirements, ensuring consistent protection regardless of which site a patient visits.
Clear ownership must be established for every system. Someone specific should be responsible for applying patches, approving access requests, and regularly checking for exposure. When accountability is diffuse, critical tasks get overlooked.
Procurement processes offer another leverage point. Purchase agreements should require vendors to provide secure default configurations, enable comprehensive logging capabilities, and commit to supported update cycles for the life of the equipment. Security should be a selection criterion, not an afterthought.
These governance approaches reflect sector framework guidance that encourages structured programs and repeatable controls rather than ad hoc responses to individual incidents.
Saudi Arabia has invested heavily in national cybersecurity frameworks and regulatory oversight across critical sectors, including healthcare. The foundation exists. The next step is ensuring those protections extend fully to the expanding ecosystem of IoT and IoMT devices — where simple configuration gaps can undermine otherwise sophisticated digital progress.
Prevent avoidable incidents
The goal is not perfection. Healthcare systems are complex, and some level of risk will always exist. The goal is removing the easiest path for data exposure: systems sitting openly on the public internet waiting to be found.
In connected healthcare, the quickest wins come from two simple principles: visibility and access control. Know what you have connected, and shut the doors that do not need to be open.
For Saudi Arabia’s health clusters, this represents an achievable objective. The infrastructure investments being made across the Kingdom’s healthcare sector create an opportunity to build security into expansion rather than retrofitting it later.
Medical imaging systems serve an essential clinical purpose. They should not also serve as unintended windows into patient data. With practical steps and consistent governance, hospitals can fix this quiet risk before it becomes a public incident.
In digital healthcare, exposure is rarely a mystery. It is usually a configuration. The question is not whether hospitals can fix it, but whether they will do so before patients pay the price.
Cover Story
AI Moves from Experiment to Essential in UAE’s Advertising Landscape

From content creation to media buying, artificial intelligence is quietly reshaping how campaigns are built, delivered, and optimised across the GCC.
In the UAE and across the GCC, artificial intelligence has moved well beyond the stage of experimentation. What was once a buzzword discussed in boardrooms is now deeply embedded in the day-to-day execution of advertising. Brands are no longer testing AI—they are relying on it to run campaigns, generate content, and make increasingly precise decisions about audience targeting and timing.
On the creative front, the shift is particularly visible. AI-powered tools are now capable of producing ad copy, visuals, and even short-form video content at a pace that would have been unthinkable just a few years ago. For marketers operating in a market like the UAE, where campaigns often need to speak to audiences in both English and Arabic while also resonating across a diverse mix of nationalities, this level of speed and adaptability is more than a convenience. It is becoming a necessity.
Behind the scenes, machine learning has also transformed how media buying is approached. Traditional methods that relied heavily on instinct or retrospective performance reports are steadily being replaced by systems that analyse audience behaviour in real time. These platforms continuously optimise campaign performance, adjusting budgets and placements based on how users interact with content.
In the UAE’s PR ecosystem, brands are already leveraging platforms such as Meltwater, Brandwatch, and Sprout Social to better understand media performance, audience sentiment, and the broader buying landscape.

A practical example of this shift can be seen in platforms like Skyscanner, where advertising systems respond dynamically to user intent. Instead of targeting broad demographic groups, campaigns are triggered by actual search behaviour and travel patterns, allowing for more relevant and timely engagement.
AI is also influencing emerging advertising formats. Digital billboards, for instance, are becoming more responsive, using live data inputs to tailor content based on factors such as time of day, location, and audience movement. Similarly, augmented reality experiences are beginning to incorporate behavioural insights, offering more contextual and interactive brand engagements.
Looking ahead, the trajectory appears clear. Advertising is moving towards deeper automation, more intelligent recommendations, and tighter integration between creative tools and analytics platforms. The industry is shifting from a model centred on broadcasting messages to one that focuses on responding to audiences in real time, with context and precision.
In this evolving landscape, AI is no longer just an enabler; it is becoming the foundation on which modern advertising is built.
Tech Features
Can Middle East Banks Reclaim Their Digital Leadership in the Age of AI?

Banks have long been the GCC’s digital pioneers. In the UAE, Saudi Arabia and Qatar, financial institutions were among the first to embrace mobile banking apps, roll out contactless payments at scale and introduce AI-powered chatbots to handle customer queries in Arabic and English. More often than not, banks set the pace and other sectors followed.
Given this decades-long precedent, you would expect the same pattern to be playing out with artificial intelligence. After all, AI is already embedded in the daily lives of Gulf consumers. Ride-hailing, e-commerce, government, and a plethora of other services across the region have increasingly integrated AI into their systems, to effectively personalise experiences and streamline transactions.
And yet, when we look inside banks themselves, the story is more complicated. According to the latest Riverbed Global Survey, only 40% of organizations in the financial sector consider themselves ready to operationalize AI. Just 12% of AI initiatives are fully deployed enterprise-wide, while 62% remain stuck in pilot or development phases. In a sector known for digital ambition, there is a striking gap between intent and execution.
Stuck in Pilot Purgatory
In most industries, pilots fail because the idea simply does not resonate. Testing reveals a weak product-market fit, limited customer appetite, or unclear commercial value.
That is not what we are seeing in banking AI. Regional banks have successfully piloted AI models that detect fraud in real-time, reduce false positives in anti-money laundering checks, predict liquidity requirements, and power conversational assistants capable of resolving complex service requests. Relationship managers have used AI tools to surface next-best-product recommendations based on behavioral data. And operations teams have leveraged machine learning to optimize payment routing and reduce processing delays.
In controlled environments, these pilots often deliver impressive results. And yet, few ever make it past this stage. The initiative remains confined to a sandbox. Expansion is delayed. Integration becomes “phase two.” Eventually, attention shifts to the next promising experiment. So, if the feature works and the value is clear, what is holding banks back?
AI that Fails to Scale
In my experience working with CIOs across the region, two obstacles repeatedly stand in the way of AI moving from proof of concept to production. The first is operational complexity. Most financial institutions operate in highly fragmented environments. Core banking platforms run alongside decades-old legacy systems, with critical workloads split across on-premise data centers, private clouds, and multiple public cloud providers. Third-party fintech integrations add further layers of interdependency.
Deploying AI into this landscape is not as simple as plugging in a model. AI workloads are data-hungry and latency-sensitive. They require reliable pipelines, consistent telemetry, and predictable performance across every layer of the stack. In a hybrid, multi-cloud architecture, even minor configuration mismatches can trigger cascading issues.
The second obstacle is limited visibility. Without a unified view of applications, infrastructure, networks, and user experience, AI-driven services can behave unpredictably. A model may be performing perfectly while a network bottleneck slows response times. An upstream data source may degrade in quality, subtly skewing outputs. An infrastructure change in one environment may affect inference speeds elsewhere.
When visibility is fragmented, issues take longer to diagnose and resolve, and mean time to resolution (MTTR) increases. Operational risk rises, particularly when customer-facing or revenue-critical services are affected. In a heavily regulated market such as the UAE or Saudi Arabia, that risk has compliance implications as well as reputational ones.
Left unaddressed, this kind of operating environment leaves very little room for innovation. AI cannot become the transformational force many claim it to be if it is constantly constrained by hidden friction.
Conquering Complexity
Moving AI smoothly from pilot to production requires banks to create as frictionless an operating environment as possible. One of the most effective starting points is unified observability. By consolidating telemetry from applications, infrastructure, networks and end-user devices into a single, real-time view, banks can eliminate blind spots, and decision-makers can gain clarity over performance, dependencies and risk across the entire digital estate.
With this foundation in place, AIOps capabilities can correlate signals, reduce alert noise and automate root cause analysis. Instead of firefighting incidents after customers notice them, IT teams can proactively identify performance degradation and resolve issues before they impact revenue or service continuity.
Standardising on frameworks such as OpenTelemetry can further simplify instrumentation across heterogeneous environments, ensuring consistent data collection and analysis. At the same time, investing in data quality, governance and compliance processes ensures that AI models are trained and operated within regulatory boundaries.
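As an illustration of what standardised instrumentation can look like, the sketch below uses the OpenTelemetry Python SDK to trace a single inference call. The service name, span attributes, and console exporter are illustrative assumptions, not a reference architecture; in production the same spans would typically be exported to a collector and correlated with network and infrastructure telemetry.

    # Minimal OpenTelemetry tracing sketch (requires the opentelemetry-api
    # and opentelemetry-sdk packages). Names and attributes are illustrative.
    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Identify the service emitting telemetry so its signals can be correlated later.
    resource = Resource.create({"service.name": "fraud-scoring-api"})
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("fraud-scoring")

    def score_transaction(transaction_id: str) -> float:
        # Each inference request becomes a span, so latency and errors are
        # visible alongside infrastructure and network telemetry.
        with tracer.start_as_current_span("score_transaction") as span:
            span.set_attribute("transaction.id", transaction_id)
            risk = 0.12  # placeholder for the actual model call
            span.set_attribute("risk.score", risk)
            return risk

    if __name__ == "__main__":
        score_transaction("txn-001")

Consistent spans of this kind are what allow AIOps tooling to distinguish a slow model from a slow network.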
In practical terms, this means rethinking infrastructure as an enabler of AI rather than an afterthought. It may involve accelerating data movement between environments, modernising integration layers or rationalising overlapping monitoring tools. The goal is not perfection, but coherence: a shared, real-time understanding of how systems behave and how AI performs under real-world conditions.
From Optimism to Optimisation
The debate about whether AI belongs in banking is effectively over. Across the Middle East, regulators are publishing AI guidelines, governments are investing heavily in digital transformation, and consumers increasingly expect intelligent, seamless services.
Institutions that continue to treat AI as a series of isolated pilots risk remaining in perpetual experimentation. However, those who address operational complexity head-on will move beyond optimism to optimisation.
Tech Features
Addressing Structural Gaps in Enterprise Backup Strategies

By Owais Mohammed, Regional Lead & Sales Director, WD – Middle East, Africa, Turkey & Indian Subcontinent
Today, organizations across the UAE are reassessing how they back up and recover data in increasingly complex environments. They are managing data across cloud platforms, on-premises infrastructure, edge deployments, and increasingly, AI-driven workloads. As these environments scale, data moves across systems and is reused for analytics, compliance, and performance optimisation. This increases the complexity of backup and retention requirements. When strategies do not keep pace, gaps become visible.
Where backup strategies are falling short
A common challenge is aligning backup design with actual workload distribution. Many backup strategies are built around primary systems, but enterprise data now lives across multiple environments with different access patterns and retention requirements. This creates inconsistencies in backup coverage across cloud services, endpoints, and shared infrastructure.
A common misconception is that platform-level redundancy is sufficient. Cloud and application platforms are designed to provide availability, but they do not replace independent backup layers. When data is modified, deleted, or encrypted within the same environment, recovery depends on whether a separate, unaffected copy exists.
Coverage inconsistencies also become more visible as organizations scale. Backup policies often prioritise transactional systems. Logs, archived records, development environments, and datasets used for analytics or AI workflows may be retained without structured protection. These datasets can become critical during investigations, audits, or system updates.
Recovery planning is where many strategies can break down. Backup processes may be in place, but recovery requirements are not always well defined. This includes defining dependencies, sequencing recovery, and aligning recovery times with business needs.
Why data resilience is now an infrastructure requirement
Enterprise data is now used across a wider range of functions. In analytics and AI-driven environments, data is revisited over time rather than stored and left unused. Historical datasets are essential to maintain performance and consistency. This means reliable backup and access are no longer secondary considerations, but core infrastructure needs.
Compliance expectations are also evolving. Organizations increasingly need to retain records, demonstrate traceability, and provide access to data in a verifiable format. Backup and retention policies must align with recovery capabilities.
Building a more resilient data strategy
Addressing these gaps requires a structured approach to data resilience.
Infrastructure choices affect how backup strategies can be implemented. These decisions increasingly factor in not only performance and scalability, but also long-term cost efficiency as data environments expand. Many organisations are adopting hybrid models that combine cloud platforms with localised storage systems. This allows different workloads to be supported based on their access patterns and recovery requirements. In scenarios where consistent performance and recovery predictability are required, localized storage can provide additional control.
As environments grow, automation is important in maintaining consistency. Policy-driven automation helps ensure that backup processes are applied consistently, while monitoring tools provide visibility into system performance and potential gaps.
Recovery planning needs to be integrated into these processes. Clear recovery objectives and regular testing are essential for effective backup strategies.
Data prioritization also plays a role in managing scale. Not all data requires the same level of backup. Identifying critical datasets allows organizations to allocate resources effectively.
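One way to make that prioritisation operational is to record recovery objectives for each dataset and derive the backup tier from them. The sketch below is a simplified illustration in Python; the tier names, thresholds, and example datasets are hypothetical rather than drawn from any specific deployment.

    # Illustrative sketch: assign datasets to backup tiers based on
    # recovery objectives. Tiers, thresholds, and datasets are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Dataset:
        name: str
        rpo_hours: int  # maximum tolerable data loss, in hours
        rto_hours: int  # maximum tolerable time to restore, in hours

    def backup_tier(ds: Dataset) -> str:
        """Map recovery objectives to a storage and backup tier."""
        if ds.rpo_hours <= 1 and ds.rto_hours <= 4:
            return "Tier 1: frequent snapshots on high-performance storage"
        if ds.rpo_hours <= 24:
            return "Tier 2: daily backup on standard storage"
        return "Tier 3: weekly backup on high-capacity archive storage"

    datasets = [
        Dataset("core-transactions", rpo_hours=1, rto_hours=2),
        Dataset("analytics-history", rpo_hours=24, rto_hours=48),
        Dataset("dev-test-data", rpo_hours=168, rto_hours=168),
    ]

    for ds in datasets:
        print(f"{ds.name}: {backup_tier(ds)}")

Even a simple mapping like this forces the conversation about which datasets genuinely need fast recovery and which can live on cheaper, higher-capacity tiers.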
Managing cost as data volumes scale
Cost considerations play a central role as data volumes scale. In large environments, power consumption, cooling requirements, and infrastructure footprint all contribute to total cost of ownership (TCO).
This is where tiered storage architecture becomes critical. High-performance storage is essential for active workloads such as analytics and real-time processing, while high-capacity, cost-efficient storage supports large datasets, backups, and long-term retention. This helps manage growth and scaling efficiently.
Treating all data the same is no longer practical. Infrastructure decisions need to reflect how data is used, how often it is accessed, and how quickly it needs to be recovered.
Backup strategies must align closely with infrastructure design. Data resilience now means ensuring data is accessible and recoverable across systems.
In data-intensive environments, the ability to recover and reuse data is directly tied to operational continuity, system performance, and the ability to scale infrastructure effectively.