IBM Sets the Course to Build World’s First Large-Scale, Fault-Tolerant Quantum Computer at New IBM Quantum Data Center

IBM unveiled its path to build the world’s first large-scale, fault-tolerant quantum computer, setting the stage for practical and scalable quantum computing.  

Delivered by 2029, IBM Quantum Starling will be built in a new IBM Quantum Data Center in Poughkeepsie, New York, and is expected to perform 20,000 times more operations than today’s quantum computers. Representing the computational state of Starling would require the memory of more than a quindecillion (10^48) of the world’s most powerful supercomputers. With Starling, users will be able to fully explore the complexity of its quantum states, which lie beyond the limited reach of current quantum computers.
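
As a rough plausibility check on that figure: storing the full state vector of n qubits classically requires 2^n complex amplitudes, so the memory requirement doubles with every added qubit. The sketch below is an illustration, not IBM’s calculation; the bytes-per-amplitude and per-machine memory figures are assumptions.

```python
# Back-of-the-envelope check on the "quindecillion supercomputers" figure.
# Assumptions (not from IBM): 16 bytes per complex amplitude, and a
# hypothetical supercomputer with ~10 PB (1e16 bytes) of memory.

BYTES_PER_AMPLITUDE = 16      # one double-precision complex number
MACHINE_MEMORY_BYTES = 1e16   # assumed: ~10 petabytes per supercomputer

def machines_needed(n_qubits: int) -> float:
    """Supercomputers needed to hold the full 2**n_qubits state vector."""
    state_bytes = (2 ** n_qubits) * BYTES_PER_AMPLITUDE
    return state_bytes / MACHINE_MEMORY_BYTES

for n in (100, 150, 200, 210):
    print(f"{n} qubits -> ~{machines_needed(n):.1e} machines")
# Just past 200 qubits the count crosses 1e48 (a quindecillion), in line
# with the scale of Starling's planned 200 logical qubits.
```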

IBM, which already operates a large, global fleet of quantum computers, is releasing a new Quantum Roadmap that outlines its plans to build out a practical, fault-tolerant quantum computer.

“IBM is charting the next frontier in quantum computing,” said Arvind Krishna, Chairman and CEO, IBM. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”

A large-scale, fault-tolerant quantum computer with hundreds or thousands of logical qubits could run hundreds of millions to billions of operations, which could accelerate time and cost efficiencies in fields such as drug development, materials discovery, chemistry, and optimization.

Starling will be able to access the computational power required for these problems by running 100 million quantum operations using 200 logical qubits. It will be the foundation for IBM Quantum Blue Jay, which will be capable of executing 1 billion quantum operations over 2,000 logical qubits.  

A logical qubit is a unit of an error-corrected quantum computer tasked with storing one qubit’s worth of quantum information. It is made from multiple physical qubits working together to store this information and monitor each other for errors.

Like classical computers, quantum computers need to be error corrected to run large workloads without faults. To do so, clusters of physical qubits are used to create a smaller number of logical qubits with lower error rates than the underlying physical qubits. Logical qubit error rates are suppressed exponentially with the size of the cluster, enabling them to run greater numbers of operations.
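
To see the suppression concretely, the toy sketch below computes the logical error rate of a distance-d repetition code under majority-vote decoding. This is a deliberately simplified stand-in for the codes IBM uses, but it shows the key behavior: a failure requires more than half of the physical qubits to flip, so the logical error rate falls off exponentially as the cluster grows.

```python
from math import comb

def logical_error_rate(d: int, p: float) -> float:
    """Failure probability of a distance-d repetition code under
    majority-vote decoding: the logical bit is lost only when more
    than half of the d physical bits flip."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(d // 2 + 1, d + 1))

p = 1e-3  # assumed physical error rate, below the code's threshold
for d in (3, 5, 7, 9):
    print(f"d={d}: logical error rate ~ {logical_error_rate(d, p):.1e}")
# Output drops from ~3.0e-06 (d=3) to ~1.3e-13 (d=9): each step up in
# distance buys two to three more orders of magnitude of suppression.
```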

Creating increasing numbers of logical qubits capable of executing quantum circuits, with as few physical qubits as possible, is critical to quantum computing at scale. Until now, no clear path to building such a fault-tolerant system without unrealistic engineering overhead had been published.

The Path to Large-Scale Fault Tolerance

The success of an efficient fault-tolerant architecture depends on the choice of error-correcting code, and on how the system is designed and built to enable that code to scale.

Alternative and previously gold-standard error-correcting codes present fundamental engineering challenges. To scale, they would require an infeasible number of physical qubits to create enough logical qubits for complex operations, necessitating impractical amounts of infrastructure and control electronics. This makes them unlikely to be implemented beyond small-scale experiments and devices.

A practical, large-scale, fault-tolerant quantum computer requires an architecture that is:

  • Fault-tolerant to suppress enough errors for useful algorithms to succeed.
  • Able to prepare and measure logical qubits through computation.
  • Capable of applying universal instructions to these logical qubits.
  • Able to decode measurements from logical qubits in real time and alter subsequent instructions accordingly.
  • Modular to scale to hundreds or thousands of logical qubits to run more complex algorithms.
  • Efficient enough to execute meaningful algorithms with realistic physical resources, such as energy and infrastructure.

Today, IBM is introducing two new technical papers that detail how it will meet the above criteria to build a large-scale, fault-tolerant architecture.

The first paper unveils how such a system will process instructions and run operations effectively with qLDPC codes. This work builds on a groundbreaking approach to error correction, featured on the cover of Nature, that introduced quantum low-density parity check (qLDPC) codes. This code drastically reduces the number of physical qubits needed for error correction, cutting the required overhead by approximately 90 percent compared with other leading codes. The paper also lays out the resources required to reliably run large-scale quantum programs, demonstrating the efficiency of such an architecture over others.
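
To put a ~90 percent overhead reduction in rough numerical terms, the sketch below compares total physical-qubit counts for Starling’s 200 logical qubits. The per-logical-qubit cost is an assumed round number for illustration, not a figure from the paper.

```python
# Illustrative overhead arithmetic -- the surface-code cost below is an
# assumed order-of-magnitude figure, not a number from IBM's paper.

LOGICAL_QUBITS = 200                 # Starling's stated target
SURFACE_PHYSICAL_PER_LOGICAL = 1000  # assumed cost per logical qubit
QLDPC_OVERHEAD_REDUCTION = 0.90      # ~90% fewer physical qubits

surface_total = LOGICAL_QUBITS * SURFACE_PHYSICAL_PER_LOGICAL
qldpc_total = round(surface_total * (1 - QLDPC_OVERHEAD_REDUCTION))

print(f"surface-code estimate: {surface_total:,} physical qubits")
print(f"qLDPC estimate:        {qldpc_total:,} physical qubits")
# 200,000 vs 20,000 under these assumptions -- the kind of gap that
# separates an impractical machine from a buildable one.
```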

The second paper describes how to efficiently decode the information from the physical qubits, and charts a path to identifying and correcting errors in real time with conventional computing resources.
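
For intuition about what real-time decoding involves, the toy loop below runs error-correction cycles on a 3-qubit repetition code, mapping each measured syndrome to a correction via a lookup table. It is a drastic simplification of the decoder the paper describes, but the shape of the loop (inject noise, measure syndromes, decode, correct) is the same.

```python
import random

# Toy real-time decoding loop for a 3-qubit repetition code (an
# illustration only; IBM's decoder handles far larger qLDPC codes).
# Syndromes: s1 = q0 XOR q1, s2 = q1 XOR q2; the lookup table maps
# each syndrome to the most likely single flipped qubit.
DECODER = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def qec_cycle(qubits: list[int], p: float) -> list[int]:
    """One cycle: inject bit-flip noise, measure syndromes, correct."""
    noisy = [q ^ (random.random() < p) for q in qubits]    # noise
    syndrome = (noisy[0] ^ noisy[1], noisy[1] ^ noisy[2])  # measure
    flip = DECODER[syndrome]                               # decode
    if flip is not None:
        noisy[flip] ^= 1                                   # correct
    return noisy

state = [0, 0, 0]  # encoded logical |0>
for _ in range(1000):
    state = qec_cycle(state, p=0.01)
print("decoded logical value:", int(sum(state) >= 2))  # majority vote
# In hardware the decoder must return each correction within the
# microsecond-scale QEC cycle, hence the need for fast conventional
# computing alongside the quantum processor.
```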

From Roadmap to Reality

The new IBM Quantum Roadmap outlines the key technology milestones that will demonstrate and execute the criteria for fault tolerance. Each new processor in the roadmap addresses specific challenges to build quantum systems that are modular, scalable, and error-corrected:

  • IBM Quantum Loon, expected in 2025, is designed to test architecture components for the qLDPC code, including “C-couplers” that connect qubits over longer distances within the same chip.
  • IBM Quantum Kookaburra, expected in 2026, will be IBM’s first modular processor designed to store and process encoded information. It will combine quantum memory with logic operations — the basic building block for scaling fault-tolerant systems beyond a single chip.
  • IBM Quantum Cockatoo, expected in 2027, will entangle two Kookaburra modules using “L-couplers.” This architecture will link quantum chips together like nodes in a larger system, avoiding the need to build impractically large chips.

Together, these advancements are being designed to culminate in Starling in 2029.

VAST Data Partners with Google Cloud to Enable Enterprise AI at Scale Across Hybrid Cloud Environments

VAST Data, the AI Operating System company, today announced an expanded partnership with Google Cloud that delivers the first fully managed service for the VAST AI Operating System (AI OS), enabling customers to deploy the AI OS and extend a unified global namespace across hybrid environments. Powered by the VAST DataSpace, enterprises can seamlessly connect clusters running in Google Cloud and on-premises locations, eliminating complex migrations and making data instantly available wherever AI runs.

Enterprises want to run AI where it performs best, but data rarely lives in one place, and migrating it can take months and cost millions. Fragmented storage and siloed data pipelines make it hard to feed AI accelerators with consistent, high-throughput access, and every environment change multiplies governance and compliance burdens.

VAST and Google Cloud address this challenge by making data placement a choice rather than a constraint. In a recorded demonstration, VAST showcased the power of the VAST DataSpace to connect clusters across more than 10,000 kilometers, linking one in the United States with another in Japan. The configuration delivered seamless, near real-time access to the same data in both locations while running inference workloads with vLLM, enabling intelligent workload placement so organizations can run AI models on TPUs in the US and GPUs in Japan without duplicating data or managing separate environments.
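
The practical upshot of a unified namespace is that the same model path resolves in every region. Below is a minimal serving-side sketch using vLLM; the mount point and model location are hypothetical illustrations, not details from the announcement.

```python
# Minimal vLLM serving sketch. The /mnt/dataspace mount point and model
# path are hypothetical; the point is that clusters in different regions
# can load from the same namespace without copying the weights first.
from vllm import LLM, SamplingParams

llm = LLM(model="/mnt/dataspace/models/Llama-3.1-8B-Instruct")
params = SamplingParams(max_tokens=64)

outputs = llm.generate(["What is a unified global namespace?"], params)
print(outputs[0].outputs[0].text)
```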

“Together with Google Cloud, VAST is building a unified data and computing environment that extends to wherever a customer wants to compute and unleashes the potential of AI by unlocking access to all data everywhere,” said Jeff Denworth, Co-Founder at VAST Data. “Delivered as a managed AI Operating System on Google Cloud, customers can go from zero to production in minutes – we’re turning hybrid complexity into a single, intelligent fabric that provides fast access to data, regardless of where it resides to accelerate time to value for agentic AI.”

“Bringing VAST AI Operating System to Google Cloud Marketplace will help customers quickly deploy, manage, and grow the data solution on Google Cloud’s trusted, global infrastructure,” said Nirav Mehta, Vice President, Compute Platform at Google Cloud. “VAST can now securely scale and support customers on their digital transformation journeys.”

Powering Google Cloud TPUs with seamless data access and near-local performance

Recent performance results also show how the VAST AI Operating System connects seamlessly to Google Cloud Tensor Processing Unit (TPU) virtual machines, integrating directly with Google Cloud’s platform for large-scale AI. In testing with Meta’s Llama-3.1-8B-Instruct model, the VAST AI Operating System delivered model load speeds comparable to some of the best options available in the cloud, while maintaining predictable performance during cold starts.

These results confirm that the VAST AI OS is not just a data platform but a performance engine designed to keep accelerators fully utilized and AI pipelines continuously in motion.

“The VAST AI OS is redefining what it means to move fast in AI, delivering model load speeds comparable to cloud-native alternatives while providing the full power of an advanced, enterprise-grade AI platform,” said Subramanian Kartik, Chief Scientist at VAST Data. “This is the kind of acceleration that turns idle accelerators into active intelligence, driving higher efficiency and faster time to insight for every AI workload.”

With VAST on Google Cloud, customers can:

  • Deploy AI in Minutes, Not Months: Organizations can run production AI workloads on Google Cloud today against existing on-premises datasets without migration planning, transfer delays, or extended compliance cycles. Using VAST DataSpace and intelligent streaming, they can present a consistent global namespace of data across on-prem and Google Cloud instantly.
  • Reduce Data-Movement Costs: Stream only the subsets that models require to avoid full replication and reduce egress, cutting footprint and redirecting budget from data movement to AI innovation with infrastructure that is future-ready for demanding AI pipelines in genomics, structural biology, and financial services.
  • Maximize Google Cloud Innovation with Flexible Data Placement: Choose what to migrate, replicate, or cache to Google Cloud while keeping one namespace and consistent governance by applying unified access controls, audit, and retention policies everywhere to simplify compliance and reduce operational risk. Leverage VAST DataStore and VAST DataBase to unify prep, training, inference, and analytics without rewiring pipelines.
  • TPU-Ready Data Path: Feed TPU VMs over validated NFS paths with optimized model loading and metadata-aware I/O, delivering fast, consistent warm-start performance and predictable behavior during cold-starts.
  • Build on a Unified Platform: The VAST AI Operating System delivers a DataStore, DataBase, InsightEngine, AgentEngine and DataSpace that scales across on-premises and Google Cloud environments and adapts to changing business needs without architectural rewrites, enabling data scientists to use a variety of access protocols with a single solution.

AUKEY PARTNERS WITH THE BROOKLYN NETS FOR AN ELECTRIFYING NBA SEASON

AUKEY, a leading innovator in cutting-edge tech accessories, is proud to announce a multiyear partnership with the NBA’s Brooklyn Nets, beginning with the 2025-26 NBA season. This collaboration is AUKEY’s first sports partnership, marking an exciting milestone in the company’s expansion and reflecting its ongoing commitment to delivering high-quality, innovative technology experiences to a global audience.

Through this partnership, AUKEY will team up with the Brooklyn Nets to engage fans both on and off the court. Together, they’ve launched a non-commercial, limited-edition wireless power bank, the MagFusion M 5000 Brooklyn Nets Co-Branded Edition, combining the team’s bold identity with cutting-edge wireless charging technology.

Fans can enter AUKEY’s social media giveaways on Instagram and Facebook for a chance to win one, keeping their energy flowing anytime, anywhere while enjoying exciting game moments.

“We’re thrilled to partner with the Brooklyn Nets, a team that embodies creativity, resilience, and the spirit of New York,” said Jackey Li, CEO at AUKEY US. “At AUKEY, we power every moment with strength, endurance, and an unbreakable drive to keep innovating. The Nets share that same unstoppable spirit, and we look forward to sharing that spirit of innovation and energy with basketball fans worldwide.”

AUKEY’s work with the Nets will extend to in-arena activations at Barclays Center during the team’s home games, as well as to the team’s social media channels. This partnership represents a fusion of tech, sport, and culture; together, AUKEY and the Brooklyn Nets aim to unlock more power in every moment, from the court to the community, keeping fans charged for what’s next.

GCC COMPANIES ACHIEVE 30-SECOND PAYROLL PROCESSING WITH 100 PER CENT ACCURACY USING ADVANCED HRMS, REVEALS GREYTHR

Companies across the GCC region have experienced higher workforce management efficiency using advanced AI-powered HRMS, reporting 100 per cent accuracy and stronger compliance with GCC labour regulations, reveals a recent survey conducted by greytHR, the leading full-suite Human Resource Management System (HRMS) platform. Notably, organisations with around 1,000 employees could complete their payroll processing in about 30 seconds using the platform.

The findings point to a sweeping shift within the GCC HR landscape, where organisations are embracing intelligent, automated HR operations amid evolving labour regulations, hybrid work models, and the rise of multi-country workforces. The company’s data shows that 75 per cent of GCC companies are first-time HR automation adopters, while 24 per cent have migrated from legacy systems, highlighting the ongoing regional transition towards fully digitised, compliance-ready HR frameworks.

greytHR is powering this digital shift through its robust cloud-based infrastructure and AI-powered tools, which simplify the entire hire-to-retire employee lifecycle, from recruitment and onboarding to core HR, leave, attendance, payroll, performance, exit and engagement.

Girish Rowjee, Co-founder and CEO of greytHR, said, “At greytHR, we believe that ‘people’ are the primary pillar of any business. A company’s growth relies on the dedication and hard work of its employees. As a result of this belief, we built our HRMS to make employee lifecycle management simpler, more transparent, and more connected within the HR ecosystem. Our goal is to help organisations reinvent how they manage and support their workforce through intelligent, people-focused automation in today’s digital world.”

Through its highly intelligent and unified system, greytHR has been continuously addressing the region’s distinctive challenges and maximising impact through efficient workforce management.

Sayeed Anjum, Co-Founder & CTO, greytHR, said: “As companies expand across borders and hybrid work models become the norm, HR leaders face issues such as manual payroll errors, fragmented systems and limited automation, which can directly impact compliance, employee satisfaction, and productivity. Our platform is tailored to address these pain points and the region’s unique needs by serving as an intelligent, unified system that simplifies all stages of workforce management. This further aligns with our broader vision of creating measurable impact for companies and transforming the regional HR ecosystem through digitisation.”

He further stated: “Currently, IT & ITeS, Business, and Financial Service sectors lead in HRMS adoption, at 19 per cent, 15 per cent and 10.5 per cent respectively, highlighting the vital role of technology-driven and service-oriented businesses in catalysing the ongoing digital HR revolution.”

greytHR offers built-in compliance features tailored to GCC nations, including automated GPSSA deductions, multi-country payroll capabilities, and real-time analytics. Moreover, its intuitive interface and modular architecture make it accessible to businesses of all sizes, from startups to large enterprises.

The company showcased these advanced offerings at the recent HR Summit & Expo 2025, held in Dubai, highlighting its commitment to supporting the region’s evolving workforce needs. As the GCC continues to position itself as a global business hub, greytHR remains steadfast in its efforts to positively shape the future of the regional HR industry.
