
Tech Features

Enhancing Fuel Efficiency: The Power of Combustion Quality Monitoring


By Rob Mortimer, Managing Director at Fuelre4m

A litre of diesel is the same as any other litre of diesel. Right?

When you fill your tank, the fuel goes into the engine and turns into energy. Doesn’t it?

Before embarking on the Fuelre4m journey, I shared this common belief. It never occurred to me to question whether all the fuel we put into the tank is effectively turning into power. Like many, I simply accepted that refuelling was necessary to ensure there was enough fuel to reach my destination. However, the reality is that every batch of fuel is different. Each grade varies, and every producer adds their unique formula and ingredients. Every time you fill up, you are mixing different batches and grades. “So what?” you might ask. You still need to fill the tank to make your engine run. Yes, but it’s ironic that we spend so much time checking the fuel efficiency of an engine while neglecting the efficiency of the fuel itself.

So, what difference does fuel quality make? To answer that, we must understand that an engine consumes fuel by mass (grams per kilowatt-hour of work delivered, g/kWh), yet we measure a vehicle’s fuel efficiency by volume (kilometres per litre, km/l). To measure fuel efficiency accurately, we need to know how much power is delivered per gram of fuel combusted and how much unburned fuel is ejected through the exhaust. A quick note on the exhaust: every bit of unburned fuel represents lost power, lost money and, worse, emissions. When unburned fuel reaches the exhaust and combusts at high temperatures, it generates heat and emissions without contributing to the engine’s power. In fact, the exhaust is where most harmful emissions occur.
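
To make the mass-versus-volume point concrete, here is a minimal, purely illustrative Python sketch. The density, power and speed figures are assumptions for illustration, not Fuelre4m measurements; it simply shows how a mass-based consumption figure relates to the litres-per-hour and kilometres-per-litre numbers a fleet normally tracks.

```python
# Illustrative only: the density, power and speed figures are assumptions.
DIESEL_DENSITY_G_PER_L = 835.0  # typical diesel density, roughly 0.83-0.85 kg per litre

def litres_per_hour(sfc_g_per_kwh: float, power_kw: float) -> float:
    """Convert mass-based consumption (g/kWh) at a given power output
    into the volume-based flow (litres per hour) a fuel gauge reflects."""
    grams_per_hour = sfc_g_per_kwh * power_kw
    return grams_per_hour / DIESEL_DENSITY_G_PER_L

def km_per_litre(speed_kmh: float, fuel_flow_lph: float) -> float:
    """Distance-per-volume efficiency from road speed and volumetric fuel flow."""
    return speed_kmh / fuel_flow_lph

# Hypothetical heavy-truck cruise point: 210 g/kWh at 150 kW output, 80 km/h.
flow = litres_per_hour(sfc_g_per_kwh=210, power_kw=150)
print(f"Fuel flow: {flow:.1f} l/h")                      # about 37.7 l/h
print(f"Efficiency: {km_per_litre(80, flow):.2f} km/l")  # about 2.1 km/l
```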

Understanding fuel quality is crucial. The better the fuel quality and condition before combustion, the more efficiently the engine can convert fuel into power. Higher combustion rates mean less unburned fuel in the exhaust, resulting in fewer emissions. For example, untreated diesel has a combustion-to-unburned fuel ratio of around 3:1. For every litre of fuel consumed, only 75% combusts in the engine, while 25% is ejected through the exhaust. With Fuelre4m’s Re4mx diesel treatment, this ratio improves dramatically to 93% combustion and only 7% unburned fuel. However, without advanced technology, these insights would remain unknown.
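
As a rough illustration of what that ratio change means commercially, assume the engine must still deliver the same useful energy; the litres required then scale inversely with the fraction of each litre that actually burns. The sketch below uses only the 75% and 93% figures quoted above and a deliberately simplified proportional model; real-world savings depend on duty cycle, engine condition and many other factors.

```python
# Back-of-the-envelope sketch using the combustion figures quoted above.
untreated_combusted = 0.75  # untreated diesel: roughly 75% combusts in the engine
treated_combusted = 0.93    # Re4mx-treated diesel: roughly 93% combusts

# If the engine must deliver the same useful energy, the litres required
# scale inversely with the fraction of each litre that actually burns.
litres_ratio = untreated_combusted / treated_combusted
saving = 1 - litres_ratio
print(f"Litres needed fall to {litres_ratio:.1%} of before, "
      f"a roughly {saving:.0%} reduction in fuel purchased.")
```

That back-of-the-envelope figure of roughly 19% fewer litres is consistent with the 15% to 20% reduction cited later in this article.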

Fuelre4m leverages state-of-the-art engine, fuel, and exhaust monitoring solutions to track and verify fuel combustion rates. Our technology continuously gathers data from the vehicle’s ECU, combining it with other sensors to create a real-time dashboard of fuel consumption. This system connects the engine and exhaust via 4G/5G/Wi-Fi or other wireless solutions to cloud-based servers. We map data, including RPM, speed, engine temperature, exhaust temperature, gear position, injector timing, oil pressure, GPS location, engine load, torque, and more, into Power BI dashboards. This allows us to accurately calculate and present actual fuel consumption and energy output.

Our devices simply connect to the OBD/CanBus ports and start transmitting real-time data, mapped by device ID to the correct position on our client’s dashboard. With the right analysis, we can determine the amount of fuel consumed relative to torque, RPM, and speed.
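
As a minimal sketch of how such telemetry might be turned into dashboard figures, the snippet below groups hypothetical per-second samples by device ID and derives an average efficiency and torque for each vehicle. The field names and the in-memory list are illustrative assumptions, not the actual Fuelre4m device API or data model; a real deployment would read from the OBD/CAN-bus device stream and push the summaries to the cloud dashboards described above.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical telemetry rows; field names are illustrative only.
rows = [
    {"device_id": "TRK-07", "speed_kmh": 78, "fuel_rate_lph": 27.0, "torque_nm": 1450},
    {"device_id": "TRK-07", "speed_kmh": 81, "fuel_rate_lph": 28.2, "torque_nm": 1480},
    {"device_id": "TRK-12", "speed_kmh": 64, "fuel_rate_lph": 31.5, "torque_nm": 1610},
]

# Group samples by device ID so each summary lands on the right vehicle's panel.
by_device = defaultdict(list)
for row in rows:
    by_device[row["device_id"]].append(row)

for device_id, samples in by_device.items():
    avg_kmpl = mean(s["speed_kmh"] / s["fuel_rate_lph"] for s in samples)
    avg_torque = mean(s["torque_nm"] for s in samples)
    print(f"{device_id}: {avg_kmpl:.2f} km/l at ~{avg_torque:.0f} Nm average torque")
```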

Why is this data useful? It aids in preventative maintenance, detecting fuel theft, and monitoring engine cleanliness. It also provides insights into tire performance, air filter clogging, air mix ratio, and fuel filter efficiency. We can even detect when oil is losing its viscosity. However, the most critical data we provide is actual fuel consumption and emissions production.

By analysing and recording the energy delivered by the fuel, we can not only check fuel quality but also accurately calculate and report on CO2 emissions, as well as NO, NO2, NOx, SO, SO2, SOx, and particulates. This data allows our clients to generate auditable Carbon Credit certificates, enabling them to trade carbon savings on international markets or avoid overpaying for carbon credits or taxes.
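
For CO2 specifically, the arithmetic can be sketched very simply, since CO2 output scales with the fuel actually burned. The factor below is a commonly cited approximation of around 2.68 kg of CO2 per litre of diesel; it is an assumption for illustration, not the certified factor used in auditable carbon reporting, and the other gases mentioned above depend on combustion conditions and fuel chemistry, so they are not modelled here.

```python
# Illustrative CO2 estimate from measured fuel consumption.
KG_CO2_PER_LITRE_DIESEL = 2.68  # commonly cited approximation, assumption for illustration

def co2_tonnes(litres_burned: float) -> float:
    """Estimated tailpipe CO2, in tonnes, for a given volume of diesel burned."""
    return litres_burned * KG_CO2_PER_LITRE_DIESEL / 1000.0

# Example: a fleet that burns 40,000 litres in a month.
print(f"{co2_tonnes(40_000):.1f} t CO2")  # about 107.2 tonnes
```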

The most impressive part is that implementing our technology costs nothing upfront. Re4mtech specializes in building technology solutions on a monthly subscription basis, funded by the savings from using Fuelre4m’s Re4mx petrol and diesel conditioners. These conditioners have been proven (using the technology described above) to reduce consumption by 15% to 20%.

In conclusion, leveraging technology to monitor and improve the quality of fuel combustion is not just about enhancing efficiency but also about reducing costs and minimizing environmental impact. As we move towards a more sustainable future, understanding and optimizing fuel quality will play a pivotal role in achieving these goals.

Tech Features

How Cyber Risks Have Become Business Risks


By Alain Sanchez, EMEA CISO, Fortinet.

Cyber risk is business risk. Anything that threatens IT threatens the company. We have become extremely dependent upon our digital assets. As a result, business leaders need to realize the magnitude of the change. The essence of what visionaries have shared with me in the last couple of months shows how much cybersecurity is now a permanent topic of discussion among chief information security officers (CISOs) and their corporate leadership.

Assessing Cyber Risks

Perhaps the most crucial role of the CISO is to rank cyber risks by order of actual impact.

Part of this assessment requires understanding the priorities inside the organization’s value chain and securing them accordingly. The second challenge is to look beyond the organization and see how outside forces may impact it. Among these external forces, we find the compliance framework. These new laws and regulations are necessary, but they also add complexity.

This very duality, good and complex, challenges many IT departments. They must ask themselves: How do we integrate legal considerations into what used to be a pure technological battlefield? The solution is to start from the top. The board of directors should always have this duality in mind. The more directors know about cyber risks and government regulations, the better. Consider the European Union’s Digital Operations Resilience Act (DORA). This legislation is focused on the European banking and financial system.

Mitigate Risks

In the past, resilience was more of a technical concept. It was about bringing back the servers. Today, it is a legal requirement documented by an auditable plan. We have moved from a series of technical steps to a contractual re-establishment of critical services.   

Four types of considerations underpin these plans:

  • Prioritized recovery: A very delicate ranking that can only be established through a regular exchange between the board and the operations team. The board’s sign-off is crucial here. Otherwise, who would ever qualify their own activity as noncritical? However difficult to establish, this ranking is truly a fascinating exercise that brings the CISO and team to the heart of the business.
  • Defending strategies: Assessing the right combination of products, services, staffing, and processes is crucial. Less is more in this matter. After years of accumulation, cyber officers have realized the hard way that a maelstrom of products and vendors was not very efficient. The next era of security will happen via convergence, not addition.
  • Offer options: This is about providing information and an array of solutions in which, ultimately, the board makes the call. It is part of the CISO’s job to offer scenarios as a series of documented steps: investment 1, timeline 1, benefits 1, and risk 1. Then, the CISO can suggest a second and a third sequence of the above. Choosing how to proceed is the board’s job. This way, the CISO becomes an empowered execution lever for a consensual decision instead of being pinpointed as the only one to blame for the results.
  • Executive leadership: The CISO needs to report directly to the CEO, otherwise the job is a “widow maker.” The consequences of unclear or diluted support go beyond the discomfort of the position; the survival of the company is at stake. In 2024 and beyond, submitting cybersecurity to any other consideration than the company strategy is a major governance mistake. Like the Titanic shipbuilders who traded rescue boats for rooms on the sundeck.

Cybersecurity is not only about avoiding icebergs. It is a holistic approach that embraces all the active and passive security dimensions into one integrated platform. Holistic here does not mean monopolistic. Legacy, old-school, best-of-breed, and point solutions are facts of life. However, the number of technologies, vendors, processes, and the magnitude of digital transformations call for simplification. Too often, this maelstrom turns into major incidents that operate as wake-up calls. Then the question is not about the 1 million dollars we did not spend, but about the 100 million dollars we just lost.


Tech Features

Provisioning and Deprovisioning – A Guide to Stronger Identity and Access Management


By: Christopher Hills, Chief Security Strategist, BeyondTrust

Across the Middle East, CIOs and CISOs huddle together to determine ways of making their organizations more secure so that digitalization can align with the vision of business leaders. No enterprise can afford to shut itself off from the digital economy. Whether it operates locally, regionally or globally, a business must build trust. And to do that, it must master the art of identity management. Therefore, it must understand the importance of provisioning and deprovisioning.

Provisioning is the name we give to the granting of privileges. It is a more granular process than onboarding, in which a new user account is created; privileges may be granted to an existing user at any time. We should also remember that not all users are human: beyond employees, contractors, customers, and so on, privileges may be assigned to service accounts, machinery, and other resources. The purpose of provisioning is to grant and maintain the access that people and systems need while upholding security and compliance standards.

To meet modern security standards, however, deprovisioning is just as important. Again, this does not just occur during offboarding. Privileges can be revoked all the time. Not because of a loss of trust in the person or asset that held them, but because it is best practice. Effective provisioning and deprovisioning is the foundation of a robust identity-centric security solution.

Covering the bases

Both are important. Overprovisioning can leave a junior employee or an overlooked service with unnecessary privileges, while under-deprovisioning can lead to a range of invisible issues such as unmonitored or orphaned accounts, or stale privileges. Special care must also be taken when adding accounts to, or removing them from, user groups, which carry a predetermined set of privileges, because these actions amount to provisioning and deprovisioning.

Any active account is a potential entry point, so it should come as no surprise that security best practice lies in minimizing the number of accounts and the access privileges they hold. If an account is no longer needed — an employee has resigned, a project has come to an end, or a range of other scenarios — then it should be disabled, deleted, or its rights downsized. Threat actors rely on organizations not following this simple practice.

Tools and tricks

Robust IAM will also include just-in-time (JIT) provisioning, which goes hand in hand with the principle of least privilege (PoLP): access is granted only when it is needed and revoked promptly once it is no longer required. Regularly reviewing and adjusting access rights is best practice because it prevents unnecessary permissions from being exploited by malicious parties inside or outside the organization. All unused accounts should be placed in a disabled state and removed from all relevant security groups until such time as they can be reviewed and, if appropriate, deleted.
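
To picture how JIT provisioning and timely deprovisioning fit together, here is a minimal Python sketch under the assumption of an in-memory grant store: access is issued for a bounded window and anything past its expiry is revoked automatically. The names and data structures are illustrative; a real IAM or PAM platform would perform these steps against its own directory and audit log.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    account: str
    resource: str
    expires_at: datetime

active_grants: list[Grant] = []  # illustrative stand-in for a real identity store

def provision(account: str, resource: str, minutes: int) -> Grant:
    """Grant access only for as long as the task requires (JIT + PoLP)."""
    grant = Grant(account, resource,
                  datetime.now(timezone.utc) + timedelta(minutes=minutes))
    active_grants.append(grant)
    return grant

def deprovision_expired(now: datetime | None = None) -> list[Grant]:
    """Timely revocation: remove every grant whose window has closed."""
    now = now or datetime.now(timezone.utc)
    expired = [g for g in active_grants if g.expires_at <= now]
    for g in expired:
        active_grants.remove(g)  # in practice: revoke in the IdP and record an audit event
    return expired

# Example: a contractor gets 60 minutes of access to a billing database.
provision("contractor-42", "billing-db", minutes=60)
```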

Identity and access management cannot be effective without the right tools to simplify provisioning and deprovisioning. This is because looking after the end-to-end lifecycle of identities, privileges, and entitlements is a complex task that has grown even more complex since the region’s mass migration to hybrid and multi-cloud environments. Identity management tools can streamline the creation, maintenance, and deletion of human and non-human accounts. Governance management tools enforce policies that limit access based on the assigned privileges. Lifecycle management tools are useful for ensuring (from onboarding to offboarding) that privileges always fit the role of an account owner. Privileged access management (PAM) enforces PoLP and provides a useful integration hub for other tools so that IT and security teams have single-pane control over everything that may impact identity security.

In a modern setting, provisioning and deprovisioning tools must offer automation and user behavior analytics, which means they must incorporate some flavor of AI or machine learning. To be consistent with the implementation of PoLP and other governance policies, variants of AI are necessary to minimize human error. Granting and revoking access rights in a company of even moderate size is a constant process that responds to changes in personnel and circumstances. While some of these situations may be subject to planning, others, such as real-time behavioral anomalies, are not. Threats can arise at a moment’s notice and only AI offers a practical option for timely response.
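
As a toy illustration of the kind of behavioral baseline such analytics rest on (far simpler than the AI and machine learning the tools themselves apply), the sketch below flags an account whose activity drifts well outside its own history. The data and threshold are hypothetical.

```python
from statistics import mean, pstdev

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Return True if today's access count is an outlier versus the account's history."""
    if len(history) < 7:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Example: a service account that normally makes around 20 requests a day.
print(is_anomalous([18, 22, 19, 21, 20, 23, 17, 20], today=240))  # True -> review or disable
```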

Be strong

Having established provisioning and deprovisioning as the keys to strong IAM, enterprises will find they can implement more effective lifecycle management of identities, privileges, and entitlements. As with any new measure, ongoing reviews will uncover any additional requirements, and adjustments can be made to cover new regulations, new assets, or new business models. As the identity landscape fluctuates, so should provisioning and deprovisioning strategies.

Define roles clearly. If an account owner does not need access to a resource, do not grant it (PoLP); and if they do, wherever possible, grant access only for as long as it is required (JIT). Disable and delete accounts where appropriate and monitor access across the entire ecosystem as often as is practical — quarterly or annually.

Following the guidance laid out here will strengthen your identity security posture. The modern threat actor is always on the lookout for gaps in your defenses. Unfortunately, these often take the shape of overprovisioned identities or abandoned accounts that have not been adequately addressed. The good news is that by applying the steps above, you can shore up defenses and protect the enterprise from the worst of the threats beyond its walls.  


Features

Robust patch management. In the fight against ransomware, it’s time to get back to basics


By Saeed Abbasi, Product Manager, Vulnerability Research, Qualys Threat Research Unit (TRU)

In the Arab Gulf region, ransomware has become an epidemic. Since 2019, Saudi Arabia has been a top target for RansomOps gangs, and as of 2023 the GCC remained the most affected territory in the Middle East and Africa, with a 65% increase over 2022 in instances of victims’ information being posted to data-leak sites. According to the Known Exploited Vulnerabilities (KEV) catalog, maintained by the Cybersecurity and Infrastructure Security Agency (CISA) under the U.S. Department of Homeland Security, approximately 20% of the catalog’s 1,117 exploited vulnerabilities are linked to known ransomware campaigns. Attackers have become more relentless and more sophisticated, just as regional security teams have become more overworked and overwhelmed by their new hybrid infrastructures.

In today’s climate, senior executives approach discussions about cyber risk with the expectation of hearing unfavorable news. Indeed, matters have escalated of late with the emergence of human-mimicking AI. We used to take comfort in the fact that at least artificial intelligence could not be creative like people could. But that was before generative AI came along and left us speechless — with delight or dread, depending on our day job. For security professionals, it is the latter because every new technology that arrives will eventually get exploited by threat actors. AI and its generative subspecies can make it easier to find vulnerabilities, which implies there will be a surge in the volume of zero-days. And GenAI can pump out convincing phishing content at a scale unreachable by human criminals.

But in a break with tradition, I offer good news. In the daily struggle with ransomware threats, the answer lies in the daily fundamentals of IT admin. Patch management is the keystone of cyber resilience. As each vulnerability becomes known and fixes are released, that dreaded countdown begins again. Whether threat actors have beaten vendors to the punch by publishing an exploit before the patch was released or not, organizations must be prepared to act strategically when fixes become available. It may be that a patch fixes an error that poses no risk to the enterprise, in which case patching would not have much impact on reducing cyber risk. Hence, organizations need to look at prioritizing patching the assets that cause the most existential risk to the company, maximizing their patch rate (a measure of how effectively vulnerabilities are addressed) and minimizing their mean time to remediation (MTTR) for such “crown jewel” assets.
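
As a simple illustration of those two metrics, the sketch below computes a patch rate and an MTTR from a handful of hypothetical remediation records. A real programme would draw these from its vulnerability management platform and weight them by asset criticality; the CVE identifiers and dates here are placeholders.

```python
from datetime import date
from statistics import mean

# Hypothetical remediation records: detection date and fix date (None = still open).
records = [
    {"cve": "CVE-2024-0001", "detected": date(2024, 3, 1), "fixed": date(2024, 3, 9)},
    {"cve": "CVE-2024-0002", "detected": date(2024, 3, 2), "fixed": date(2024, 4, 1)},
    {"cve": "CVE-2024-0003", "detected": date(2024, 3, 5), "fixed": None},
]

fixed = [r for r in records if r["fixed"] is not None]
patch_rate = len(fixed) / len(records)                              # share of instances remediated
mttr_days = mean((r["fixed"] - r["detected"]).days for r in fixed)  # mean days from detection to fix

print(f"Patch rate: {patch_rate:.0%}, MTTR: {mttr_days:.1f} days")  # 67%, 19.0 days
```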

Windows mean doors

The Qualys Threat Research Unit (TRU) uses these metrics often in anonymized studies of organizations’ cyber-readiness. Our 2023 Qualys TruRisk Research Report found that weaponized vulnerabilities are patched within 30.6 days in 57.7% of cases, whereas attackers typically publish exploits for the same flaws inside just 19.5 days. That 11-day window is where our concerns should be concentrated. It should spur us to revisit patch management and — if we have not already — to integrate it into our cybersecurity strategy so we can start to close our open doors to attackers.

If we imagine a graph of MTTR plotted against patch rate for every vulnerability, we can picture four quadrants, defined by combinations of “high” and “low” for our two metrics. Our sweet spot is the bottom right-hand corner, where patch rate is high and MTTR is low. We could call this quadrant the “Optimal Security Zone”. If a vulnerability is in this zone, we are unfazed by it: it is low-risk because it is patched and resolved quickly. In the top right, the patch rate is still high but incidents take longer to remediate (high MTTR), so we call this the “Vigilant Alert Zone”. While this is a greater source of concern, it is less worrying than a vulnerability falling in the bottom-left quadrant, the “Underestimated Risk Zone”. Here we find overlooked vulnerabilities (low patch rates) but unexpectedly short remediation times; these flaws can quickly become risks if left unaddressed. Finally, we come to our red-flag quadrant, the “Critical Attention Zone” (top left), where vulnerabilities have low patch rates and take a long time to resolve.
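
A small sketch of that triage logic follows, with arbitrary illustrative cut-offs (a 50% patch rate and a 30-day MTTR) standing in for the thresholds an organization would set from its own baselines and asset criticality.

```python
def quadrant(patch_rate: float, mttr_days: float,
             rate_cutoff: float = 0.5, mttr_cutoff: float = 30.0) -> str:
    """Classify a vulnerability into one of the four zones described above."""
    if patch_rate >= rate_cutoff and mttr_days <= mttr_cutoff:
        return "Optimal Security Zone"      # high patch rate, low MTTR
    if patch_rate >= rate_cutoff:
        return "Vigilant Alert Zone"        # high patch rate, slow remediation
    if mttr_days <= mttr_cutoff:
        return "Underestimated Risk Zone"   # overlooked, but fixed fast when tackled
    return "Critical Attention Zone"        # low patch rate and slow remediation

print(quadrant(patch_rate=0.82, mttr_days=12))  # Optimal Security Zone
print(quadrant(patch_rate=0.35, mttr_days=55))  # Critical Attention Zone
```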

Combining metrics like this can give us important crossover information that allows us to triage our patch management effectively. By exploring the critical areas first, we can examine overlooked vulnerabilities and discover either that they pose little threat and are less of a source of concern, or that they could lead to a ransomware incident, in which case they become a top priority on our to-do list. With RansomOps groups now leveraging advanced automation tools, the importance of optimal patch management cannot be overstated. Ensuring that systems are updated and secure is critical to prevent potential vulnerabilities.

Action stations

Starting today, then, GCC organizations should look to their vulnerability management strategy and determine an approach that is able to stand up to armies of threat actors, working as a unified industry, equipped with advanced AI, to disrupt, disable, and damage the region’s innovative spirit. We all need to make sure that our vulnerability gaps are closed and our defenses tightened against these malicious actors. Technical and business stakeholders must collaborate on crafting roadmaps that make sense to their operational uniqueness.

The hope remains that one day, cyber criminals, a persistent threat today, will be effectively countered by innovative security technologies. However, we must confront the fact that attackers are becoming more sophisticated, their campaigns are escalating in scope, and the resources available for cybersecurity defense are often constrained.

The solution does not lie in an unknowable panacea, but in the day-to-day fundamentals — robust patch management that uses the four-quadrant principle and aims for the highest possible patch rate and the shortest possible resolution time. The top practitioners in any field — sports, business, the arts — will always extol the virtues of the fundamentals. If it works for them, then why not for us? So, let’s get back to basics and send the ransomware actor packing.
