Tech News

IBM Sets the Course to Build World’s First Large-Scale, Fault-Tolerant Quantum Computer at New IBM Quantum Data Center


IBM unveiled its path to build the world’s first large-scale, fault-tolerant quantum computer, setting the stage for practical and scalable quantum computing.  

Slated for delivery by 2029, IBM Quantum Starling will be built in a new IBM Quantum Data Center in Poughkeepsie, New York, and is expected to perform 20,000 times more operations than today’s quantum computers. Representing the computational state of Starling would require the memory of more than a quindecillion (10^48) of the world’s most powerful supercomputers. With Starling, users will be able to fully explore the complexity of its quantum states, which lie beyond the limited properties accessible to current quantum computers.
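
As a rough illustration of why states at this scale cannot be held in classical memory, a full state vector of n qubits contains 2^n complex amplitudes. The sketch below is not an IBM calculation; it assumes 16 bytes per amplitude and an exabyte-class (10^18-byte) supercomputer purely to show how quickly the requirement outgrows any realistic fleet of machines.

```python
# Back-of-the-envelope sketch (assumed figures, not IBM's): memory needed to
# store a full n-qubit state vector versus an assumed exabyte-class machine.
BYTES_PER_AMPLITUDE = 16      # one complex128 amplitude
SUPERCOMPUTER_BYTES = 10**18  # assumed ~exabyte of memory per supercomputer

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required to store all 2**n complex amplitudes."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (50, 100, 150, 200):
    machines = state_vector_bytes(n) / SUPERCOMPUTER_BYTES
    print(f"{n:>3} qubits -> ~{machines:.3g} exabyte-class machines")

# Around 200 qubits the count is already on the order of 10**43 machines,
# which is why states of this size cannot be represented classically.
```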

IBM, which already operates a large, global fleet of quantum computers, is releasing a new Quantum Roadmap that outlines its plans to build out a practical, fault-tolerant quantum computer.

“IBM is charting the next frontier in quantum computing,” said Arvind Krishna, Chairman and CEO, IBM. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”

A large-scale, fault-tolerant quantum computer with hundreds or thousands of logical qubits could run hundreds of millions to billions of operations, which could accelerate time and cost efficiencies in fields such as drug development, materials discovery, chemistry, and optimization.

Starling will be able to access the computational power required for these problems by running 100 million quantum operations using 200 logical qubits. It will be the foundation for IBM Quantum Blue Jay, which will be capable of executing 1 billion quantum operations over 2,000 logical qubits.  

A logical qubit is a unit of an error-corrected quantum computer tasked with storing one qubit’s worth of quantum information. It is made from multiple physical qubits working together to store this information and monitor each other for errors.

Like classical computers, quantum computers need to be error corrected to run large workloads without faults. To do so, clusters of physical qubits are used to create a smaller number of logical qubits with lower error rates than the underlying physical qubits. Logical qubit error rates are suppressed exponentially with the size of the cluster, enabling them to run greater numbers of operations.
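
A common way to picture this suppression, independent of any specific IBM code, is the textbook heuristic p_logical ≈ A·(p/p_th)^((d+1)/2) for a code of distance d, where p is the physical error rate and p_th is the code’s threshold. The values below are illustrative assumptions only.

```python
# Illustrative sketch (assumed values, not IBM's numbers): heuristic logical
# error rate for a distance-d code, p_logical ~ A * (p/p_th)**((d+1)//2).
P_PHYSICAL = 1e-3   # assumed physical error rate per operation
P_THRESHOLD = 1e-2  # assumed code threshold
PREFACTOR = 0.1     # assumed constant A

def logical_error_rate(distance: int) -> float:
    """Heuristic logical error rate for a code of the given distance."""
    return PREFACTOR * (P_PHYSICAL / P_THRESHOLD) ** ((distance + 1) // 2)

for d in (3, 7, 11, 15):
    print(f"distance {d:>2}: p_logical ~ {logical_error_rate(d):.1e}")

# Growing the cluster (the code distance) suppresses the logical error rate
# exponentially, which is what allows far longer circuits to run reliably.
```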

Creating increasing numbers of logical qubits capable of executing quantum circuits, with as few physical qubits as possible, is critical to quantum computing at scale. Until now, a clear path to building such a fault-tolerant system without unrealistic engineering overhead had not been published.

The Path to Large-Scale Fault Tolerance

The success of an efficient fault-tolerant architecture depends on the choice of its error-correcting code and on how the system is designed and built to enable that code to scale.

Alternative and previously gold-standard error-correcting codes present fundamental engineering challenges. To scale, they would require an unfeasibly large number of physical qubits to create enough logical qubits for complex operations, necessitating impractical amounts of infrastructure and control electronics. This makes them unlikely to be implemented beyond small-scale experiments and devices.

A practical, large-scale, fault-tolerant quantum computer requires an architecture that is:

  • Fault-tolerant to suppress enough errors for useful algorithms to succeed.
  • Able to prepare and measure logical qubits through computation.
  • Capable of applying universal instructions to these logical qubits.
  • Able to decode measurements from logical qubits in real time and alter subsequent instructions accordingly.
  • Modular to scale to hundreds or thousands of logical qubits to run more complex algorithms.
  • Efficient enough to execute meaningful algorithms with realistic physical resources, such as energy and infrastructure.

Today, IBM is introducing two new technical papers that detail how it will meet the above criteria and build a large-scale, fault-tolerant architecture.

The first paper unveils how such a system will process instructions and run operations effectively with qLDPC codes. This work builds on a groundbreaking approach to error correction, featured on the cover of Nature, that introduced quantum low-density parity check (qLDPC) codes. These codes drastically reduce the number of physical qubits needed for error correction, cutting the required overhead by approximately 90 percent compared to other leading codes. The paper also lays out the resources required to reliably run large-scale quantum programs, demonstrating the efficiency of such an architecture over others.
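
To put the roughly 90 percent overhead reduction in perspective, the sketch below assumes a hypothetical baseline of 1,000 physical qubits per logical qubit for a conventional code; neither that baseline nor the resulting totals are IBM figures. Only the ~90 percent reduction and the 200-logical-qubit target come from the announcement.

```python
# Hypothetical illustration of the ~90% overhead-reduction claim.
LOGICAL_QUBITS = 200                  # Starling's stated logical qubit count
BASELINE_PHYSICAL_PER_LOGICAL = 1000  # assumed conventional-code overhead
QLDPC_REDUCTION = 0.90                # ~90% cut cited for qLDPC codes

baseline_total = LOGICAL_QUBITS * BASELINE_PHYSICAL_PER_LOGICAL
qldpc_total = int(baseline_total * (1 - QLDPC_REDUCTION))

print(f"Conventional code (assumed): {baseline_total:,} physical qubits")
print(f"qLDPC code (~90% lower):     {qldpc_total:,} physical qubits")
```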

The second paper describes how to efficiently decode the information from the physical qubits and charts a path to identifying and correcting errors in real time with conventional computing resources.

From Roadmap to Reality

The new IBM Quantum Roadmap outlines the key technology milestones that will demonstrate and execute the criteria for fault tolerance. Each new processor in the roadmap addresses specific challenges to build quantum systems that are modular, scalable, and error-corrected:

  • IBM Quantum Loon, expected in 2025, is designed to test architecture components for the qLDPC code, including “C-couplers” that connect qubits over longer distances within the same chip.
  • IBM Quantum Kookaburra, expected in 2026, will be IBM’s first modular processor designed to store and process encoded information. It will combine quantum memory with logic operations — the basic building block for scaling fault-tolerant systems beyond a single chip.
  • IBM Quantum Cockatoo, expected in 2027, will entangle two Kookaburra modules using “L-couplers.” This architecture will link quantum chips together like nodes in a larger system, avoiding the need to build impractically large chips.

Together, these advancements are designed to culminate in Starling in 2029.

Tech News

Intel Core Series 3 Extends AI-Ready Performance to Value and Edge Computing Segments


Intel has introduced its latest Intel Core Series 3 mobile processors, aimed at expanding advanced computing capabilities to value buyers, commercial users, and essential edge deployments.

The launch reflects a broader shift in the industry, where performance, efficiency, and AI readiness are no longer confined to premium systems but are increasingly expected across all tiers of computing.

Built on the architectural foundations of Intel’s newer Core platforms and leveraging advanced process technology, the Core Series 3 processors are designed to deliver a balanced combination of performance, battery efficiency, and scalability. The focus is on enabling reliable, everyday computing while supporting emerging workloads, including AI-driven applications.

Driving Value-Oriented Performance

Intel positions Core Series 3 as a significant upgrade path for users operating on older systems. Compared to five-year-old PCs, the new processors deliver up to 47% improvement in single-thread performance and up to 41% gains in multi-thread workloads. GPU-based AI performance also sees notable enhancements, enabling improved responsiveness in modern applications.

This performance uplift is complemented by a strong emphasis on efficiency, with reduced processor power consumption and optimisations aimed at extending battery life for mobile systems.

AI Capability Moves to the Mainstream

One of the key differentiators of the Core Series 3 platform is the introduction of hybrid AI-ready architecture within the value segment. With support for up to 40 platform TOPS, Intel is enabling a new class of systems capable of handling AI workloads at the device level.

The platform also integrates modern connectivity standards, including Thunderbolt 4, Wi-Fi 7, and Bluetooth 6, ensuring compatibility with next-generation peripherals and networks.

Expanding into Essential Edge Deployments

Beyond traditional laptops, Intel is positioning Core Series 3 as a scalable solution for edge computing environments. The processors are designed to support a wide range of applications, including robotics, smart buildings, retail systems, and industrial deployments.

By combining AI acceleration with energy efficiency, the platform aims to deliver the performance required for real-time processing while maintaining operational reliability in diverse environments.

Ecosystem and Availability

Intel expects broad adoption across the ecosystem, with more than 70 designs from OEM partners set to launch across multiple form factors. Consumer and commercial systems powered by Core Series 3 are rolling out through 2026, while edge-focused deployments are expected from Q2 onwards.


Tech News

62% of Saudi Leaders Are Failing to Use Their Data Effectively, New Cloudera Report Finds


Cloudera, the only company bringing AI to data anywhere, today released its latest global survey, The Data Readiness Index: Understanding the Foundations for Successful AI, examining how prepared enterprises are to support AI at scale. Based on responses from more than 300 IT leaders in the EMEA region, including respondents from Saudi Arabia, the report finds that while AI adoption is growing, most organizations still lack the data foundation needed for success.

The findings highlight a sharp contrast in how effectively organizations track their data. Nearly 9 in 10 EMEA IT leaders claim complete visibility into where all their data resides, compared to just 32% of respondents in Saudi Arabia. Furthermore, 62% of Saudi respondents cite data access restrictions as a major roadblock to effective data use.

This gap highlights an emerging ‘AI readiness illusion’: the belief that organizations are prepared to scale AI even as critical data challenges remain unresolved.

“Enterprises aren’t struggling to adopt AI, they’re struggling to operationalize it beyond experiments,” said Sergio Gago, Chief Technology Officer at Cloudera. “AI is only as effective as the data that fuels it. Without seamless access to all their data, organizations limit the accuracy, trust, and business value that AI can deliver. You can’t do AI without data.”

AI Adoption is High, but ROI Remains Elusive

While AI is now deeply embedded across the enterprise, achieving consistent returns on investment remains difficult due to a sharp geographical divide in implementation hurdles. Across EMEA, the struggle is largely centered on the inputs, with data quality issues (18%) and cost overruns (16%) cited as the primary causes of lackluster ROI. However, Saudi Arabia presents a different challenge focused on execution. In the Kingdom, weak integration into workflows is the overwhelming barrier at 29%, nearly doubling the concern over data quality, which sits at 15%.

These regional nuances are further compounded by significant infrastructure limitations. Around 65% of respondents in KSA report that performance constraints have hindered operational initiatives, highlighting the immense difficulty of scaling AI across fragmented environments.

Bridging The Data Gap

At the core of these challenges is a significant disconnect between data optimism and operational reality.

The report highlights that 95% of KSA respondents are highly confident in their data, but only 32% of that data is currently fully governed. While this outpaces the broader EMEA region, where only 26% of data is governed despite 91% confidence, it highlights a critical execution gap that organizations are now racing to fill.

The Kingdom is uniquely positioned to bridge this divide with 100% of Saudi respondents ready to adopt new governance frameworks, and 79% being extremely willing to transform their operations. This regional commitment suggests that Saudi Arabia’s proactive approach will likely outpace its peers in the race toward AI and digital maturity.

Strategic Alignment and the Accountability Gap

While leadership in both the EMEA and KSA regions understands the necessity of data infrastructure, the execution and accountability frameworks are worlds apart. More than 90% of EMEA respondents report a well-defined data strategy tied directly to business objectives, while just over half (53%) of Saudi Arabian respondents report the same level of alignment.

Accountability and internal culture further widen this divide. In EMEA, 69% of leaders hold the CIO or CTO chiefly responsible for data readiness, whereas in Saudi Arabia, only 35% place ultimate responsibility on this role, indicating a more emerging ownership structure.

Beyond accountability and alignment, respondents in Saudi Arabia face a unique internal hurdle: 50% struggle with insufficient data literacy, while nearly a third (32%) cite a lack of executive sponsorship.

Data Readiness Will Define the Next Phase of Enterprise AI

As enterprise AI shifts from experimentation to execution, data readiness is emerging as the defining factor separating leaders from laggards.

Organizations able to fully access and govern all their data, wherever it resides, are far better equipped to deliver trusted, scalable AI. Notably, every respondent in the report indicated their organization is willing to adapt existing frameworks to support true data readiness.

As enterprises confront the limits of the AI readiness illusion, the path forward is clear: unlocking AI’s full value will require more than ambition; it will demand genuine data readiness. Those that close this gap will be best positioned to drive lasting impact and lead the next era of intelligent business.


Tech News

Optro Launches AI-Powered GRC Capabilities for the Modern Enterprise with AI Governance, Cyber Risk, and Continuous Control Monitoring


Optro, the leading AI-powered GRC platform empowering enterprises to transform risk into opportunity, has announced several product capabilities to boost the effectiveness of customers’ risk management programs and enable them to innovate with AI confidently and responsibly. These capabilities follow shortly after the company changed its name to reflect what its AI-powered GRC platform enables: a single, coherent view across infosec, compliance, risk, and audit.

“Cyber risk now moves at machine speed, and legacy GRC tools can no longer keep up,” said Happy Wang, Chief Product and Technology Officer at Optro. “By leveraging AI to predict cyber risk, surface real-time insights, and accelerate mitigation, we help organizations shift from reactive reporting to proactive risk defense—building a true system of action that is ready for the AI era.”

Optro’s latest Risk Intelligence report found that AI governance program maturity is advancing, but unevenly. AI adoption continues to outpace AI governance, with 85 percent of organizations reporting they have integrated AI into their core operations or deployed it across multiple functions, while only a quarter report comprehensive visibility into employee AI use. At the same time, only 34 percent of organizations report their AI governance program is strategic and continuously improving. As these challenges become increasingly prevalent across industries, Optro has released the following product capabilities to help customers turn clarity into action:

  • Unified AI Governance: Serves as the essential orchestration layer for AI governance. By bridging the gap between policies and frameworks, the AI tech stack, and human oversight, this capability enables a unified, automated approach, ensuring that AI risks are visible, compliance is streamlined, and governance policies are enforceable across the entire organization.
  • Cyber Risk: Vulnerability Risk Monitoring: Provides a clear narrative of how a specific vulnerability affects an organization’s security posture and bottom line. This AI-powered functionality enables customers to understand the true business impact of a vulnerability. Included with IT and Cyber Risk Management (formerly IT Risk Management), it’s a paradigm shift in how organizations defend their digital perimeter.
  • Continuous Control Monitoring: With AI-driven recommendations for the controls best suited for automation, and a library of ready-to-use monitor templates, teams can bypass manual setup to start monitoring controls immediately. This capability helps customers reduce manual effort, improve consistency, and gain more timely visibility into control performance. By automating evidence collection and surfacing potential issues earlier, teams can address gaps more efficiently and move toward a more continuous approach to assurance.
