
Tech Features

The Evolution of Data Centre Technologies: From Hardware to Software-Defined Infrastructure


Kayvan Karim, Assistant Professor at the School of Mathematical and Computer Sciences, Heriot-Watt University Dubai

In recent years, the landscape of data centre technologies has transformed significantly, shifting from traditional hardware-based infrastructure to more flexible and efficient software-defined solutions. This change, driven by the increasing demand for scalable, agile, and cost-efficient computing resources in a rapidly digitising world, has brought numerous benefits. In this op-ed, Kayvan Karim traces the journey from hardware-centric data centres and explains how virtualisation, containerisation, and cloud computing have revolutionised data centre design, scalability, and efficiency.

Traditional Data Centres: The Era of Hardware Infrastructure

In the not-so-distant past, data centres were synonymous with massive physical servers, storage, and networking equipment housed in dedicated facilities. Once the gold standard, these environments were burdened with high capital costs, limited scalability, and resource inefficiencies. The exponential growth in data volume and the increasing complexity of business applications have made these traditional architectures obsolete. This stark reality underscores the need to embrace more modern, software-defined solutions.

The Rise of Software-Defined Data Centres (SDDCs)

To address the issues of hardware-centric data centres, the concept of software-defined infrastructure emerged as a game-changer. Software-defined data centres (SDDCs) are a paradigm shift. They decouple the management and control of data centre resources from the underlying hardware, enabling administrators to programmatically provision, manage, and orchestrate assets. This shift toward software-defined solutions has revolutionised how data centres are designed, deployed, and operated, offering unprecedented agility, scalability, and cost-effectiveness. These transformative benefits of SDDCs paint a promising picture for the future of data centre technologies. According to Precedence Research, the global software-defined data centre market size is expected to hit around USD 350.53 billion by 2032, poised to grow at a compound annual growth rate (CAGR) of 22.9% from 2023 to 2032.

Virtualisation: Empowering Data Centre Efficiency

Virtualisation, a key component of contemporary data centre technologies, has proven its worth. Allowing several virtual instances to run on a single physical server significantly improves resource utilisation, and abstracting compute, storage, and networking from the underlying hardware provides greater flexibility and simplifies workload management. VMware estimates that virtualisation can reduce hardware and operational costs by up to 70%, a testament to its effectiveness. Additionally, a survey conducted by Citrix revealed that 74% of companies experienced reduced IT expenditure due to virtualisation, underscoring the importance of virtualisation in enhancing data centre efficiency.
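
The consolidation argument can be made concrete with a few lines of code. The sketch below is a minimal illustration, assuming the libvirt-python bindings are installed and a local QEMU/KVM hypervisor is running: it lists the virtual machines sharing one physical host together with their vCPU and memory allocations. It is not tied to any specific vendor platform cited above.

```python
# Minimal sketch: enumerate the VMs consolidated onto one physical host.
# Assumes libvirt-python is installed and a local QEMU/KVM hypervisor is available.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, _cpu_time = dom.info()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
        print(f"{dom.name():20s} vCPUs={vcpus:2d} memory={mem_kib // 1024} MiB ({running})")
finally:
    conn.close()
```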

Containerisation: Driving Portability and Scalability

Containers have become a popular way to package applications in lightweight, portable environments. Unlike virtual machines, containers share the host operating system kernel, which makes them resource-efficient and fast to deploy. Containerisation technologies such as Docker and Kubernetes have been widely adopted, allowing organisations to build, deploy, and scale applications with unprecedented speed and flexibility. According to Mordor Intelligence, the containerised data centre market is expected to grow at a CAGR of 18.49% to reach USD 33.77 billion by 2029, reflecting how quickly enterprises are adopting containers and the software-defined tooling used to deploy and manage them at scale.
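
To illustrate how lightweight container deployment is in practice, here is a minimal sketch using the Docker SDK for Python: it starts a small web server container and then launches additional replicas of the same image. It assumes the docker package is installed and a local Docker daemon is running; the image name and port mapping are arbitrary examples, and an orchestrator such as Kubernetes would automate this across many hosts.

```python
# Minimal sketch: start and "scale" containers programmatically.
# Assumes the 'docker' Python package is installed and a Docker daemon is running locally.
import docker

client = docker.from_env()

# Run one nginx container, mapping container port 80 to host port 8080.
web = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
print("started", web.short_id)

# Scaling out here is simply launching more instances of the same image.
replicas = [client.containers.run("nginx:alpine", detach=True) for _ in range(2)]
print("replicas:", [c.short_id for c in replicas])

# Clean up the demo containers.
for c in [web, *replicas]:
    c.remove(force=True)
```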

Cloud Computing: The Future of Data Centre Infrastructure

Cloud computing has fundamentally changed how organisations consume and deliver IT services by offering on-demand access to computing resources over the internet, available whenever they are needed. Public, private, and hybrid cloud deployments have become increasingly common, enabling businesses to leverage the scalability, flexibility, and cost advantages that cloud technologies provide. According to Mordor Intelligence, the cloud computing market is expected to reach USD 1.44 trillion by 2029, growing at a CAGR of 16.40% over the forecast period (2024-2029), as firms accelerate their cloud adoption to benefit from agility, elastic scaling, and the cost savings of moving workloads onto these platforms.
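
What "on-demand access" means in code terms is simply that capacity can be requested and released through an API call. The sketch below is offered as a hedged example only: it uses the AWS boto3 SDK to launch and then terminate a small virtual machine. AWS is just one illustrative provider, the AMI ID is a hypothetical placeholder, and valid credentials and a region are assumed to be configured in the environment.

```python
# Minimal sketch: provision a cloud VM on demand through an API, then release it.
# Assumes boto3 is installed and AWS credentials/region are already configured.
import boto3

ec2 = boto3.client("ec2")

AMI_ID = "ami-0123456789abcdef0"  # hypothetical placeholder; substitute a real image ID

resp = ec2.run_instances(ImageId=AMI_ID, InstanceType="t3.micro", MinCount=1, MaxCount=1)
instance_id = resp["Instances"][0]["InstanceId"]
print("launched", instance_id)

# ...run the workload, then release the capacity (and stop paying for it).
ec2.terminate_instances(InstanceIds=[instance_id])
```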

Embracing the Future of Data Centre Technologies

The transition from traditional hardware infrastructure to software-defined solutions signifies a fundamental change in how computing resources are allocated, administered, and optimised. Technologies such as virtualisation, containerisation, and cloud computing have made this shift possible, enabling organisations to build adaptable and efficient data centre infrastructure. The future will likely be characterised by software-defined data centres that bring new prospects for innovation, growth, and competitiveness in the digital age.

The evolution of data centre technologies holds incredible promise for businesses seeking higher levels of efficiency, flexibility, and scalability in an increasingly data-driven world. By taking advantage of recent developments in virtualisation, containerisation, and cloud computing, organisations can prepare their data centre infrastructure for success in the digital age while making it more resilient at a relatively lower cost.


Tech Features

Establishing data sovereignty in a ‘datafied’ world


By: Omar Akar, Regional Vice President for Middle East & Emerging Africa, Pure Storage

Data is the currency of the digital domain, and with every passing day, the world is getting increasingly ‘datafied’. Billions of gigabytes of digital data pertaining to citizens, businesses, governments, and institutions are generated, collected, and processed every day. Understandably, there are concerns about how we can protect personal data, business data, as well as sensitive data that has implications for national security.

Challenges associated with data sovereignty

It is possible that a company based in one country uses cloud infrastructure from a provider abroad, and that the cloud provider also has customers in other countries and regions. If data collection, storage, and processing happen in different countries, the data will be subject to the data sovereignty rules of all those countries. Many of the concerns surrounding data sovereignty relate to ensuring data privacy and ensuring that data stored abroad does not run afoul of the laws of the country where it is held. Many countries have therefore introduced new laws, or modified existing ones, so that data is kept within the boundaries of the country where the individual or entity is based. However, verifying that data indeed exists only at permitted locations can be very difficult.

On the other hand, storing huge amounts of data at only a few locations can increase the risk of data loss and data theft through cyberattacks, which can have huge ramifications on the financial health and reputation of businesses.

Moreover, data sovereignty makes it more complex to share data across international borders. This can increase costs and inefficiencies for any business that operates in multiple countries and needs data to flow between its offices. Such businesses must establish infrastructure in local data centers to comply with data protection regulations in each country, and they must weigh the data sovereignty requirements of each jurisdiction, as well as international data-sharing agreements, whenever they want to share data, all of which can impact business operations.

Ways to ensure data sovereignty and elevate data performance

Although establishing data sovereignty is undoubtedly challenging, there are some best practices and approaches that can help in achieving it and elevating data performance. Organizations should conduct a comprehensive audit of their data, including where it is stored, processed, and shared. This is the first step in identifying potential data sovereignty risks and ensuring compliance with the relevant laws and regulations of the concerned countries. It is also necessary to adopt data protection measures — such as encryption, access controls, and monitoring — to prevent unauthorized access and use of data, whether it is in transit or at rest.
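
As one deliberately simple illustration of protecting data at rest, the sketch below encrypts a record with a symmetric key before it is written to storage, using Python's cryptography library. It is a minimal example under stated assumptions; a real deployment would add key management (for example a KMS or HSM), access controls, and monitoring as described above.

```python
# Minimal sketch: symmetric encryption of a record before it is stored at rest.
# Assumes the 'cryptography' package is installed; key management is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, generate, store, and rotate keys in a KMS/HSM
cipher = Fernet(key)

record = b'{"customer_id": 42, "country": "AE"}'
token = cipher.encrypt(record)   # ciphertext is safe to write to disk or object storage

# Later, an authorised service holding the key can recover the plaintext.
assert cipher.decrypt(token) == record
print("stored ciphertext length:", len(token))
```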

The company’s data protection policy should define protocols for handling and storing data, as well as measures for protecting it. This policy should be regularly reviewed and updated to keep up with changes in data protection laws and regulations. If an organization has a footprint spanning multiple regions, it is a good idea to take the strictest data sovereignty requirements among them and implement them across all regions. Cloud providers can be of assistance in this regard.

Benefits of working with cloud service providers

Most cloud providers have data centers in multiple countries. Organizations should go for a provider whose data residency provisions are aligned with their own data sovereignty requirements. Today, leading cloud providers also offer other features, including data encryption, that can help in achieving data sovereignty. To take it one step further, companies must introduce strict data governance processes in the cloud. This will ensure regulatory compliance, risk assessment, and risk mitigation at all times.

Data sovereignty laws apply not only to data but also to data backups. It is therefore important to understand how your organization backs up information — whether it is done on-premises or using dedicated cloud services or public cloud services. Adopting cloud-ready solutions and leveraging the benefits of all-flash storage is one of the ways to future-proof your organization’s data storage infrastructure. Uncomplicating storage will help in reimagining data experiences and powering the digital future of the business.

Finally, it is important to view data sovereignty holistically, and not as the exclusive responsibility of any one individual or team. The need to comply with data regulations extends across the board, from businesses to suppliers to the end-users. From a business perspective, ensuring data sovereignty calls for robust governance, holistic risk management, and concerted efforts on the part of the IT security, legal department, procurement, risk managers, and auditors — under the guidance and supervision of the company’s Chief Information Officer. It is a good way to build digital trust in today’s business environment.


Tech Features

‘Socially Responsible’ Data Centres Need to be a Cornerstone of the Region’s Digital Economy


By Bjorn Viedge, General Manager at ALEC Data Center Solutions

Across the Middle East, digital agendas have long been seen as the necessary underpinnings of economic growth — a way to detach from historic dependencies on petrochemical trade and move forward as innovators.

Amid a series of economic visions that prioritise skilling, entrepreneurship, and industry disruption, we have seen the rise of the data centre as a fulcrum of progress. According to recent estimates, the Middle East data centre colocation market is expected to grow at a compound annual growth rate (CAGR) of 6.83% from 2022 to 2028. The United Arab Emirates leads its regional peers in this growth and has become one of the largest data centre hubs in the Middle East. Significant investments continue to flow into the country, with expectations of surpassing USD 1 billion by 2028. In April 2022, the UAE Cabinet launched a strategy to bolster the digital economy, aiming for it to contribute 20% to the gross non-oil GDP in the coming years. This initiative included the formation of a council to oversee digital economy progress, serving as a catalyst for accelerated data centre adoption.

Digitisation vs Sustainability

But the UAE is not nurturing technology in isolation. Part of the country’s vision is an embrace of the UN’s 17 Sustainable Development Goals (SDGs), which cover everything from quality of work and social life to preservation of the environment. Research has shown the mounting environmental impact of data centres. Demand for data centre services has driven facilities to become bigger, hotter, and more expensive, and a peer-reviewed study by Swedish researcher Anders Andrae predicts that the ICT industry could use 20% of all electricity and account for up to 5.5% of the world’s carbon emissions by 2025. And in a region that already faces a looming water crisis, Middle East data centre planners should be aware that today’s data centres can consume an Olympic swimming pool’s worth of water every two days.

Traditional building and cooling technologies are having trouble keeping pace with increasing chip densities, so those that build their own data centres should account for this impact when looking to comply with government regulations. And with the government signalling clear intent, data centre owners must be ready to play their part. In the age of ESG, they must be climate conscious, and they must look to the latest technologies to ensure their facilities are adding net value to society.

Many such technologies exist and have proven themselves, but not all are applicable in all geographies. For example, heat recovery may be viable in colder countries, but it is not suitable for the sun-soaked Middle East. However, other efficient means are on hand to make the region’s data centres greener. If planners aim for great design, they must consider not just the exterior — elements such as the location, the resources used, the climate, and the temperature — but also the interior of the facility.

Inner Pieces

Rethinking the design of modern data centres means leaving no component overlooked — from the building itself down to the nuts and bolts of the servers. Indeed, server-cooling technologies are improving all the time and some older ones are making a powerful comeback.

Liquid-immersion cooling, for example, has been around since the 1940s, and with the surging demand for denser computing that we are seeing today, the technology may be the answer to many problems. Modern liquid-immersion cooling submerges hardware in a dielectric (non-electrically conductive) fluid, which is far more effective at conducting, and therefore dissipating, the heat produced by hardware than traditional air-based cooling systems.

Liquid-immersion could represent the future of data centre cooling. Facilities can operate with less physical space compared with traditional air-based solutions, while gaining energy savings of up to 50%. Meanwhile, lower maintenance costs, cheaper builds, and power-usage effectiveness (PUE) scores lower than 1.03 (where 1.0 is the ideal) mean organisations can reduce the time needed to realise a full return on their investment.
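
Power-usage effectiveness is simply the ratio of total facility power to the power delivered to IT equipment, so the impact of a lower PUE can be sanity-checked with a few lines of arithmetic. The figures below are illustrative assumptions, not measurements from any specific facility.

```python
# PUE = total facility power / IT equipment power (1.0 is the theoretical ideal).
# Illustrative numbers only: a 1 MW IT load under two assumed cooling designs.
it_load_kw = 1000.0

for label, pue in [("air-cooled (assumed PUE 1.6)", 1.6),
                   ("liquid immersion (PUE 1.03, as cited above)", 1.03)]:
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw   # power spent on cooling and other non-IT loads
    print(f"{label}: total draw {total_kw:.0f} kW, "
          f"non-IT overhead {overhead_kw:.0f} kW ({overhead_kw / total_kw:.1%} of total)")
```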

Building Blocks

But cooling is not the only path to sustainability. Facility planners must also consider the building process itself. Emerging today, and rapidly gaining acceptance for smaller-scale data centres, is prefabricated construction, also known as the modular data centre approach. Because the prefabricated modules are built primarily offsite in dedicated fabrication facilities, standardised production methodologies can be implemented that improve efficiency, enhance quality, and significantly reduce wastage.

Because prefabricated data centres have been assembled and tested in a controlled factory environment, construction is faster, less error-prone, and less labour-intensive on site. Additionally, modules can be added whenever the demand arises, meaning data centre companies need not build a large facility to accommodate future expansion. Instead, they can build quickly as needed. All of this leads to a cheaper, more efficient, more sustainable project.

Many regional governments, including that of the UAE, are firmly committed to the UN’s SDGs. Middle East authorities, and their counterparts elsewhere in Asia, the Americas, and Europe, are placing greater emphasis on LEED certification and other standards in their regulatory frameworks. Nations everywhere, it seems, have recognised the importance of regulating their way to sustainability. But in playing their part, data centre owners can also take advantage of a lucrative new business model with long-term benefits, from quicker go-to-market (GTM) to reduced operational costs.


Tech Features

In the Crosshairs of APT Groups: A Feline Eight-Step Kill Chain


By Alexander Badaev, Information Security Threat Researcher, Positive Technologies Expert Security Center, and Yana Avezova, Senior Research Analyst, Positive Technologies

In cybersecurity, “vulnerability” typically evokes concern. One actively searches for it and patches it up to build robust defenses against potential attacks. Picture a carefully orchestrated robbery, where a group of skilled criminals thoroughly examines a building’s structure, spots vulnerabilities, and crafts a step-by-step plan to breach security and steal valuables. This analogy perfectly describes the modus operandi of cybercriminals, with the “kill chain” acting as their detailed blueprint.

In a recent study, analysts from Positive Technologies gathered information on 16 hacker groups attacking the Middle East, analyzing their techniques and tactics. It is worth noting that most of the threats in Middle Eastern countries come from groups believed to be linked to Iran, such as APT35/Charming Kitten and APT34/Helix Kitten. Let’s see how APT groups operate, how they initiate attacks, and how they develop them toward their intended targets.

Step 1: The Genesis of Intrusion (Attack preparation)

It all begins with meticulous planning and reconnaissance. APT groups leave no stone unturned in their quest for vulnerable targets. They compile lists of public systems with known vulnerabilities and gather employee information. For instance, groups like APT35 (aka Charming Kitten), known for mainly targeting Saudi Arabia and Israel, gather information about employees of target organizations, including mobile phone numbers, which they leverage for nefarious purposes such as sending malicious links disguised as legitimate messages. After reconnaissance, they prepare tools for the attack, such as registering fake domains and creating email or social media accounts for spear phishing. For example, APT35 registers accounts on LinkedIn and other social networks to contact victims, persuading them through messages and voice calls to open malicious links.

Step 2: Initial Access: Gaining a Foothold

Once armed with intelligence, cybercriminals proceed to gain initial access to their target’s network.  Phishing campaigns, often masquerading as legitimate emails, serve as the primary means of infiltration. An example is the Desert Falcons group, observed spreading their malware through pornographic phishing. Notably, some groups go beyond traditional email phishing, utilizing social networks and messaging platforms to lure unsuspecting victims, as seen with APT35, Bahamut, Dark Caracal, and OilRig. Moreover, techniques like the watering hole method, where attackers compromise trusted websites frequented by their targets, further highlight the sophistication of these operations. Additionally, attackers exploit vulnerabilities in resources accessible on the internet to gain access to internal infrastructure. For example, APT35 and Moses Staff exploited ProxyShell vulnerabilities on Microsoft Exchange servers.

Step 3: Establishing Persistence: The Art of Concealment

Having breached the perimeter, APT groups strive to establish a foothold within the victim’s infrastructure, ensuring prolonged access and control. This involves deploying techniques such as task scheduling, as seen in the campaign against the UAE government by the OilRig group, which created a scheduled task triggering malicious software every five minutes. Additionally, many malicious actors set up malware autostart, like the Bahamut group creating LNK files in the startup folder or Dark Caracal’s Bandook trojan. Some APT groups, such as APT33, Mustang Panda, and Stealth Falcon, establish themselves in victim infrastructures by creating subscriptions to WMI events for event-triggered execution. Furthermore, attackers exploit vulnerabilities in server applications to install malicious components like web shells, which provide a backdoor for remote access and data exfiltration.
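
From a defender’s point of view, persistence of this kind often shows up in the scheduled task inventory. The sketch below is a simple auditing aid for Windows hosts, offered under stated assumptions rather than as a detection product: it shells out to the built-in schtasks utility and flags tasks that repeat on very short intervals, such as the five-minute trigger described above, for analyst review.

```python
# Defensive sketch: flag Windows scheduled tasks that repeat on very short intervals.
# Uses only the built-in 'schtasks' utility; run on a Windows host with admin rights.
# Output column names assume an English-language system.
import csv
import io
import subprocess

out = subprocess.run(
    ["schtasks", "/query", "/fo", "CSV", "/v"],
    capture_output=True, text=True, check=True,
).stdout

for row in csv.DictReader(io.StringIO(out)):
    repeat = row.get("Repeat: Every", "") or ""
    # Flag anything repeating every N minutes for manual review.
    if "Minute" in repeat:
        print(f"Review: {row.get('TaskName')} repeats every {repeat} "
              f"-> runs {row.get('Task To Run')}")
```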

Step 4: Unraveling the Network: Internal Reconnaissance

After breaking in, APT groups don’t just sit there. They explore the system like a thief casing a house to find valuables and escape routes. This digital reconnaissance involves several steps. First, they perform an inventory check, identifying the computer’s operating system, installed programs, and updates, like figuring out a house’s security measures. For instance, APT35 might use a simple command to see if the computer is a powerful 64-bit system, capable of handling more complex tasks. Second, they map the network layout, akin to identifying valuable items and escape routes. APT groups might use basic tools like “ipconfig” and “arp” (like Mustang Panda) to see how devices are connected and communicate. They also search for user accounts and activity levels, understanding who lives in the house (figuratively) and their routines. Malicious tools, like the Caterpillar web shell used by Volatile Cedar, can list all usernames on the system. Examining running programs is another tactic, like checking for security guards. Built-in commands like “tasklist” (used by APT15 and OilRig) can reveal a list of programs currently running.

Finally, APT groups might deploy programs that hunt for secrets hidden within files and folders, like searching for hidden safes or documents. The MuddyWater group, for example, used malware that specifically checked for directories or files containing keywords related to antivirus software. By gathering this comprehensive intel, APT groups can craft targeted attacks, steal sensitive data like financial records or personal information, or exploit vulnerabilities in the system to cause even more damage.
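
Because the commands involved are ordinary built-ins, defenders can cheaply reproduce the same inventory to see exactly what an intruder would learn and to baseline normal output for later comparison. The sketch below simply runs the commands named above on a Windows host and saves their output; it is a baselining aid under stated assumptions, not detection logic.

```python
# Defensive sketch: capture the same host/network inventory an intruder would gather,
# so its normal output can be baselined and diffed later. Windows built-ins only.
import subprocess
from datetime import datetime

COMMANDS = {
    "ipconfig": ["ipconfig", "/all"],   # network configuration
    "arp": ["arp", "-a"],               # neighbouring devices seen on the LAN
    "tasklist": ["tasklist"],           # running processes, including security tools
    "systeminfo": ["systeminfo"],       # OS version, patches, hardware summary
}

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
for name, cmd in COMMANDS.items():
    result = subprocess.run(cmd, capture_output=True, text=True)
    path = f"baseline-{name}-{stamp}.txt"
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(result.stdout)
    print(f"saved {path} ({len(result.stdout)} characters)")
```
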
Step 5: Harvesting Credentials: Unlocking the Vault

Access to privileged credentials is the holy grail for cyber attackers, granting them unrestricted access to critical systems and data. One common tactic is “credential dumping,” where tools like Mimikatz (used by APT15, APT33, and others) snatch passwords directly from a system’s memory, similar to stealing a key left under a doormat. Keyloggers, used by APT35 and Bahamut for example, act like hidden cameras, silently recording keystrokes to capture usernames and passwords as victims type them in.

These stolen credentials grant access to even more sensitive areas. APT groups also exploit weaknesses in how passwords are stored. For instance, some target the Windows Credential Manager (like stealing a notepad with written down passwords). Brute-force attacks, trying millions of combinations, can crack weak passwords. Even encrypted passwords can be vulnerable if attackers have specialized tools. By employing these tactics, APT groups bypass initial security and access sensitive information or critical systems.

Step 6: Data Extraction: The Quest for Valuable Assets

Once inside, APT groups aren’t shy about snooping around. They leverage stolen credentials to capture screenshots, record audio and video (like hidden cameras and microphones), or directly steal sensitive files and databases. For instance, the Dark Caracal group employed Bandook malware, which can capture video from webcams and audio from microphones. This stolen data becomes their loot.

To ensure a smooth getaway, APT groups often employ encryption and archiving techniques. Imagine them hiding their stolen treasure chests—the Mustang Panda group, for example, encrypted files with RC4 and compressed them with password protection before shipping them out. This makes it difficult for defenders to identify suspicious activity amongst regular network traffic.

Step 7: Communication Channels: Establishing Control

APT groups rely on hidden communication channels with command-and-control (C2) servers to control infected machines and exfiltrate data. They employ various tactics to blend in with regular network traffic. This includes using common protocols (like IRC or DNS requests disguised as legitimate web traffic) and encrypting communication for further stealth.

However, some groups take it a step further. For instance, OilRig used compromised email servers to send control messages hidden within emails and then deleted them, making their C2 channel nearly invisible. These innovative techniques make it difficult for security measures to detect malicious activity, highlighting the importance of staying informed about evolving APT tactics.
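
A common defensive counterpart to DNS-based command-and-control is a simple heuristic over resolver logs: tunnelled traffic tends to produce unusually long, high-entropy query names. The sketch below scores query names by label length and Shannon entropy; the thresholds and sample queries are illustrative assumptions and would need tuning against real traffic.

```python
# Defensive sketch: flag DNS query names that look like tunnelling/C2 beacons.
# Thresholds are illustrative assumptions; tune them against your own resolver logs.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_suspicious(qname: str, max_label_len: int = 40, min_entropy: float = 3.5) -> bool:
    labels = qname.rstrip(".").split(".")
    longest = max(labels, key=len)
    return len(longest) > max_label_len and shannon_entropy(longest) > min_entropy

queries = [
    "www.example.com.",
    "a9f3k2mzq8x01bb7c6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1.updates.example.net.",
]
for q in queries:
    print(("SUSPICIOUS " if looks_suspicious(q) else "ok         ") + q)
```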

Step 8: Covering Tracks: Erasing Digital Footprints

As the operation ends, APT groups meticulously cover their tracks to evade detection and prolong their presence in the compromised environment. Techniques like file obfuscation, masquerading, and indicator removal are employed to erase digital footprints and thwart forensic investigations. For example, the Bahamut group used icons mimicking Microsoft Office files to disguise malware, and the OilRig group used .doc file extensions to make malware appear as office documents. The Moses Staff group named their StrifeWater malware calc.exe to make it look like a legitimate calculator program.

To further bypass defenses, attackers often proxy the execution of malicious commands using files signed with trusted digital certificates. The APT35 group used the rundll32.exe file to execute the MiniDump function from the comsvcs.dll system library when dumping the LSASS process memory. Meanwhile, the Dark Caracal group employed a Microsoft Compiled HTML Help file to download and execute malicious files. Many APT groups also remove signs of their activity by clearing event logs and network connection histories, and changing timestamps. For instance, APT35 deleted mailbox export requests from compromised Microsoft Exchange servers. This meticulous cleaning makes it much more difficult for cybersecurity professionals to conduct post-incident investigations, as attackers often remove their arsenal of software from compromised devices after achieving their goals.
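
Log clearing itself leaves a trace: on Windows, clearing the Security log records event ID 1102 (“the audit log was cleared”). The sketch below, a minimal check assuming a Windows host and administrative privileges, uses the built-in wevtutil utility to pull any recent 1102 events so they can be alerted on; forwarding logs off-host before they can be wiped remains the stronger control.

```python
# Defensive sketch: check a Windows host for recent "audit log was cleared" events (ID 1102).
# Uses the built-in 'wevtutil' utility; requires administrative privileges.
import subprocess

result = subprocess.run(
    ["wevtutil", "qe", "Security", "/q:*[System[(EventID=1102)]]",
     "/f:text", "/c:10", "/rd:true"],   # newest 10 matching events, rendered as text
    capture_output=True, text=True,
)

if result.stdout.strip():
    print("Security log clearing detected:\n")
    print(result.stdout)
else:
    print("No recent log-clearing (1102) events found.")
```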

Conclusion: A Call to Vigilance

In a nutshell, the threat landscape in the Middle East is fraught with peril, as APT groups continue to refine their tactics and techniques to evade detection and wreak havoc on unsuspecting organizations. By understanding the anatomy of cyber intrusions and remaining vigilant against emerging threats, organizations can bolster their defenses and mitigate the risks posed by these sophisticated adversaries. Together, let us remain steadfast in our commitment to safeguarding the digital frontier against cyber threats.

