Tech News
VAST Data Partners with Google Cloud to Enable Enterprise AI at Scale Across Hybrid Cloud Environments
VAST Data, the AI Operating System company, today announced an expanded partnership with Google Cloud, delivering the first fully managed service for the VAST AI Operating System (AI OS) and enabling customers to deploy the AI OS and extend a unified global namespace across hybrid environments. With the VAST DataSpace, enterprises can seamlessly connect clusters running in Google Cloud and in on-premises locations, eliminating complex migrations and making data instantly available wherever AI runs.
Enterprises want to run AI where it performs best, but data rarely lives in one place, and migrating it can take months and cost millions. Fragmented storage and siloed data pipelines make it hard to feed AI accelerators with consistent, high-throughput data access, and every environment change multiplies governance and compliance burdens.
VAST and Google Cloud address this challenge by making data placement a choice rather than a constraint. In a recorded demonstration, VAST showcased the power of the VAST DataSpace to connect clusters across more than 10,000 kilometers, linking one in the United States with another in Japan. This configuration delivered seamless, near real-time access to the same data in both locations while running inference workloads with vLLM, enabling intelligent workload placement so organizations can run AI models on TPUs in the US and GPUs in Japan without duplicating data or managing separate environments.
“Together with Google Cloud, VAST is building a unified data and computing environment that extends to wherever a customer wants to compute and unleashes the potential of AI by unlocking access to all data everywhere,” said Jeff Denworth, Co-Founder at VAST Data. “Delivered as a managed AI Operating System on Google Cloud, customers can go from zero to production in minutes – we’re turning hybrid complexity into a single, intelligent fabric that provides fast access to data, regardless of where it resides to accelerate time to value for agentic AI.”
“Bringing VAST AI Operating System to Google Cloud Marketplace will help customers quickly deploy, manage, and grow the data solution on Google Cloud’s trusted, global infrastructure,” said Nirav Mehta, Vice President, Compute Platform at Google Cloud. “VAST can now securely scale and support customers on their digital transformation journeys.”
Powering Google Cloud TPUs with seamless data access and near-local performance
Recent performance results also show how the VAST AI Operating System connects seamlessly to Google Cloud Tensor Processing Unit (TPU) virtual machines, integrating directly with Google Cloud’s platform for large-scale AI. In testing with Meta’s Llama-3.1-8B-Instruct model, the VAST AI Operating System delivered model load speeds comparable to some of the best options available in the cloud, while maintaining predictable performance during cold starts.
These results confirm that the VAST AI OS is not just a data platform but a performance engine designed to keep accelerators fully utilized and AI pipelines continuously in motion.
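The article does not describe VAST's actual benchmarking harness, but the kind of cold- versus warm-start model-load measurement discussed above can be sketched with a simple throughput timer over a checkpoint directory. This is an illustrative example only; the mount point and checkpoint name below are hypothetical, not VAST or Google Cloud paths.

```python
import os
import time
from pathlib import Path


def read_throughput(ckpt_dir: str, chunk_mb: int = 8) -> tuple[float, float]:
    """Sequentially read every file under ckpt_dir; return (gigabytes, seconds)."""
    chunk = chunk_mb * 1024 * 1024
    total_bytes = 0
    start = time.perf_counter()
    for path in sorted(Path(ckpt_dir).rglob("*")):
        if not path.is_file():
            continue
        with open(path, "rb") as f:
            # Read in fixed-size chunks so memory use stays flat for large weights.
            while block := f.read(chunk):
                total_bytes += len(block)
    elapsed = time.perf_counter() - start
    return total_bytes / 1e9, elapsed


if __name__ == "__main__":
    # Hypothetical NFS mount point for a model checkpoint directory.
    ckpt = "/mnt/models/llama-3.1-8b-instruct"
    if os.path.isdir(ckpt):
        gb, secs = read_throughput(ckpt)
        print(f"read {gb:.2f} GB in {secs:.2f} s ({gb / max(secs, 1e-9):.2f} GB/s)")
```

Running this once against a freshly mounted path and again immediately afterwards gives a rough cold-start versus warm-start (page-cache) comparison.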
“The VAST AI OS is redefining what it means to move fast in AI, delivering model load speeds comparable to cloud-native alternatives while providing the full power of an advanced, enterprise-grade AI platform,” said Subramanian Kartik, Chief Scientist at VAST Data. “This is the kind of acceleration that turns idle accelerators into active intelligence, driving higher efficiency and faster time to insight for every AI workload.”
With VAST on Google Cloud, customers can:
- Deploy AI in Minutes, Not Months: Organizations can run production AI workloads on Google Cloud today against existing on-premises datasets without migration planning, transfer delays, or extended compliance cycles. Using VAST DataSpace and intelligent streaming, they can present a consistent global namespace of data across on-prem and Google Cloud instantly.
- Reduce Data-Movement Costs: Stream only the subsets that models require to avoid full replication and reduce egress – cutting footprint and redirecting budget from data movement to AI innovation with infrastructure that is future-ready for the demanding AI pipelines in genomics, structural biology, and financial services.
- Maximize Google Cloud Innovation with Flexible Data Placement: Choose what to migrate, replicate, or cache to Google Cloud while keeping one namespace and consistent governance by applying unified access controls, audit, and retention policies everywhere to simplify compliance and reduce operational risk. Leverage VAST DataStore and VAST DataBase to unify prep, training, inference, and analytics without rewiring pipelines.
- TPU-Ready Data Path: Feed TPU VMs over validated NFS paths with optimized model loading and metadata-aware I/O, delivering fast, consistent warm-start performance and predictable behavior during cold-starts.
- Build on a Unified Platform: The VAST AI Operating System delivers a DataStore, DataBase, InsightEngine, AgentEngine and DataSpace that scales across on-premises and Google Cloud environments and adapts to changing business needs without architectural rewrites, enabling data scientists to use a variety of access protocols with a single solution.
TRENDS IN AI COMPLIANCE INFLUENCING HOW GCC COMPANIES OPERATE

Across the GCC, national growth strategies, including Saudi Arabia’s Vision 2030, the UAE’s National AI Strategy 2031, and Qatar’s national roadmap, place AI at the centre of economic diversification. McKinsey estimates AI adoption at roughly 84% across GCC organisations, with a potential $320 billion economic impact for the Middle East by 2030. As deployment accelerates, regulatory compliance is a defining factor separating ambition from sustainable scale. Shaffra, an AI research and applications company building autonomous AI teams for enterprises and governments, sees six clear shifts reshaping how companies operate.
1. Regulation is accelerating adoption in high-stakes sectors
Government entities, financial services, telecom, aviation, and large semi-government organisations are moving fastest. These sectors operate at scale, face strict efficiency mandates, and function under constant regulatory oversight. Healthcare and energy are advancing more cautiously due to safety and data sensitivity. In many cases, the more regulated the industry, the faster AI deployment progresses. However, rapid scaling also exposes governance weaknesses, particularly where documentation, ownership, and oversight mechanisms are underdeveloped.
2. Compliance is a prerequisite for scale
Over the past year, 88% of Middle East CEOs have reported generative AI uptake. Today, organisations increasingly require audit trails, explainability, clear data lineage and residency controls, defined performance thresholds, and enforceable human oversight mechanisms. With one in four Middle East consumers citing privacy as a primary concern, compliance is no longer treated as a post-deployment validation exercise; it is a structural requirement for scaling AI responsibly.
3. Sovereign AI and data residency are shaping architecture
AI governance in the GCC is being influenced less by standalone AI laws and more by data protection and cybersecurity frameworks. The UAE’s federal data protection law, Saudi Arabia’s PDPL under SDAIA, and Oman’s PDPL reinforce lawful processing and cross-border controls. In highly regulated sectors such as banking, healthcare, energy, and telecommunications, data residency and local control over models are strategic imperatives. Sovereign AI is evolving from a policy ambition into an operational requirement affecting infrastructure, vendor selection, and system design.
4. Human accountability is being reasserted
When organisations deploy AI without defining who owns the decision, when human escalation is required, and what the system is permitted or restricted from doing, they create either over-reliance or under-utilisation. Without clearly defined ownership and documented review controls, accountability weakens and regulatory exposure increases.
For instance, DIFC regulations reinforce responsible AI use in personal data processing. High-impact decisions, such as those involving legal standing, fraud, employment, healthcare guidance, or public-sector determinations that affect citizens, should involve accountable human oversight, while AI handles speed, consistency, and the automation of repetitive tasks.
5. Governance maturity slows deployment activity
Many organisations are AI-active but still developing governance maturity. Common governance gaps are structural rather than technical. Multiple pilots often run in parallel, tool adoption is fragmented, and accountability is split across IT, legal, risk, and business functions. Growing enterprises often lack a central AI governance owner, a comprehensive use-case inventory, consistent vendor and model risk assessment, and formal escalation protocols. Policies may exist at the board level, yet they are not consistently embedded into day-to-day operations. Addressing this gap requires governance to be built into workflows from the outset.
6. Continuous auditing is a discipline
Studies indicate that a majority of ML models degrade over time through model drift, hidden bias, or misuse vulnerabilities. Initial audits frequently reveal undocumented use cases, weak access segmentation, insufficient logging, and unclear review protocols. Effective governance requires compliance with international and local data residency rules, structured risk tiering, data lineage validation, access controls, bias testing, performance benchmarking, and defined incident response procedures. High-impact systems warrant quarterly reviews supported by continuous monitoring, while lower-risk applications still require periodic reassessment. Governance is increasingly measured through evidence rather than policy statements. Boards are asking for dashboards, logs, and audit artefacts, not policy PDFs.
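As a minimal illustration of the continuous monitoring described above: one common drift check is the population stability index (PSI), which compares a model's recent score distribution against a baseline. The bin count and the "PSI above 0.2" threshold below are widely used conventions, not a regulatory standard, and the sample data is invented for the sketch.

```python
import math


def psi(baseline: list[float], recent: list[float], bins: int = 10) -> float:
    """Population stability index between two score samples (higher = more drift)."""
    lo, hi = min(baseline), max(baseline)
    # Equal-width bin edges derived from the baseline sample.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fracs(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index for v
        # Small floor avoids log(0) when a bucket is empty.
        return [max(c / len(values), 1e-6) for c in counts]

    b, r = bucket_fracs(baseline), bucket_fracs(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))


if __name__ == "__main__":
    baseline = [i / 100 for i in range(100)]          # uniform scores in [0, 1)
    drifted = [min(1.0, v + 0.3) for v in baseline]   # distribution shifted upward
    print(f"PSI vs itself:  {psi(baseline, baseline):.3f}")
    print(f"PSI vs drifted: {psi(baseline, drifted):.3f}")
```

In a monitoring pipeline this check would run on a schedule, with scores above the chosen threshold raising an alert for human review, which is the kind of evidence (logs, dashboards) boards are increasingly asking for.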
Governance is being considered as part of AI infrastructure. Compliance frameworks are evolving into operational architecture embedded within systems, workflows, and accountability models. The organisations that will lead in the GCC are those that design governance at the same time they design capability, ensuring AI scales with discipline rather than risk.
PNY ANNOUNCES STRATEGIC PARTNERSHIP WITH F5 TO ACCELERATE THE ADOPTION OF SECURE, HIGH-PERFORMANCE INFRASTRUCTURE IN EMEA

PNY Technologies, a leading distributor of technology solutions and long-standing NVIDIA partner, today announced a partnership with F5, the global leader in delivering and securing applications.
This agreement aims to strengthen access for enterprises across the EMEA region to advanced solutions designed to optimise, secure, and accelerate applications and IT infrastructures.
As AI adoption continues to accelerate, performance, data flow management, and application security are becoming critical priorities. Through this partnership, the F5 Application Delivery and Security Platform (ADSP) will complement PNY’s AI Factory ecosystem by providing advanced capabilities for traffic management, application security, and performance optimisation across on-premises, cloud, and hybrid environments.
PNY will leverage its technical expertise, partner network, and logistics capabilities to facilitate the deployment of F5 ADSP solutions for enterprises, system integrators, and service providers throughout the region.
“Collaboration between PNY, a specialist distributor of NVIDIA AI Factory solutions across the EMEA region, and F5 represents a major step forward for AI-dedicated infrastructure,” said Laurent Chapoulaud, VP Marketing at PNY. “Together, we optimise GPU environments through accelerated data flows and enhanced application security. This synergy between infrastructure and intelligent traffic management enables the deployment of AI architectures that are high-performance, resilient, and scalable.”
“This partnership brings together complementary strengths that directly benefit our partners and customers,” said Nasser El Abdouli, Regional VP EMEA Channel Sales, F5. “PNY’s longstanding partnership with NVIDIA, combined with F5’s growing AI-focused application delivery and security offerings, allows us to help partners capably respond to the rapidly increasing demand for secure and scalable AI infrastructure across EMEA.”
Through this collaboration, PNY and F5 aim to support enterprises in their strategic initiatives related to hybrid multicloud, cybersecurity, and application performance optimisation, while simplifying access to next-generation technologies.
MIDDLE EAST CONFLICT DRIVING A SURGE IN SCAMS, DEEPFAKES, AND GOVERNMENT IMPERSONATION

Cybercriminals don’t wait for the dust to settle. As conflict escalates across the Middle East, a parallel threat has emerged targeting ordinary people through their inboxes and social media feeds.
On 4 March, the UAE Ministry of Interior warned the public about fraudulent emails impersonating government emergency services, falsely claiming that residents must complete a mandatory registration form to receive state support or insurance coverage. The emails bore hallmarks of official government communications, making them convincingly deceptive. They are designed to exploit fear, urgency, and the instinct to comply with perceived authority. These messages are already circulating.
Alongside financial scams, verified fact-checkers have identified AI-generated and mislabelled footage circulating online as supposed evidence of attacks in the UAE. This includes video from Bahrain that was picked up by international media outlets and incorrectly broadcast as a Dubai drone strike. Fabricated videos of the Burj Khalifa collapsing, AI-generated missile strike imagery, and decade-old footage repackaged as current events have also circulated widely. In another example, a supposed “before and after” satellite image of Dubai showing smoke rising over the city was mislabelled — the image was actually from Sharjah, the neighbouring emirate. In many cases, the content spread faster than the corrections. Dubai Police have warned that sharing unverified information can carry criminal penalties under UAE law, including fines of no less than AED 200,000. Despite these warnings, the flow of misleading content has not slowed.
KnowBe4 warns that, based on patterns observed during previous conflicts and crises, including the war in Ukraine and the COVID-19 pandemic, the public should also expect charity and donation scams exploiting humanitarian concern, phishing emails disguised as embassy or government alerts, and deepfake imagery engineered to provoke fear or spread disinformation.
Dr. Martin Kraemer, CISO Advisor at KnowBe4 said, “Crises are the most reliable recruitment tool bad actors have. When people are frightened and searching for information, they are not necessarily looking for the truth. They are looking for confirmation of what they already fear. That is exactly what scammers and disinformation actors exploit. What we are seeing right now, fake government emergency emails, mislabelled footage, AI-generated imagery, is not random. It is targeted, and it is designed to exploit the gap between what people feel and what they know. The antidote is not panic. It is discipline: pause, question the source, and go directly to official channels before acting on anything. That’s precisely how governments and organizations are educating people to react in stressful situations.”
What the Public Can Do Right Now
KnowBe4 urges residents, travellers, and anyone following events in the region to apply the following principles:
- Treat urgency as a warning sign. Any message that pressures you to act quickly (register now, donate immediately, confirm your details before midnight) is likely designed to stop you thinking clearly.
- Verify before you share. Before forwarding footage or information, check whether it has been verified by a reputable news outlet or official source. Reverse image searches take seconds and can prevent significant harm.
- Go directly to official sources. If you receive communications claiming to be from a government ministry, embassy, or emergency service, navigate directly to their official website rather than clicking any link in the message.
- Question what you see. AI-generated imagery has reached a level of quality where video alone is no longer reliable evidence. Look for verification from multiple credible sources before drawing conclusions.
- Report suspicious communications. In the UAE, suspected scam emails or messages should be reported to the relevant authorities. Do not engage with the sender.