
Tech Features

Digitalizing Fuel Efficiency over Engine Efficiency: Integrating Technology to Measure Consumption


By: Rob Mortimer, Director, Fuelre4m

Modern ships are already starting to bristle with technology to measure vessel efficiency, yet one thing stands out above all the results, tech and noise: the efficiency of the fuel itself isn't properly understood or calculated. You'll hear reference to SFOC (Specific Fuel Oil Consumption) whenever fuel consumption is measured, and while the principle is right, the measuring and calculating are far from ideal.

Heavy Fuel Oil has an energy density of between 39MJ/kg and 42MJ/kg when burnt. That's a wide range, and it depends very much on the source and quality of the fuel, and on how it is stored, transferred, settled, heated and purified to remove pollutants, particulates and water, and to reduce the 'drop' size for better atomisation when the fuel is introduced into the engine. Large drops of fuel don't fully combust in the engine; they undergo secondary combustion and turn into heat energy and emissions. Our goal, and what should be the goal of the whole shipping industry, irrespective of fuel, vessel size and function, should be to account for every drop of fuel consumed.
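
As a rough illustration of what that range means in practice, here is a minimal sketch using the 39MJ/kg to 42MJ/kg span quoted above; the tonnage is an example figure only:

```python
# Illustration only: the spread in released energy implied by the
# 39-42 MJ/kg energy density range quoted above for Heavy Fuel Oil.
BUNKER_MASS_KG = 1_000  # one tonne of fuel, example figure

low_mj = 39 * BUNKER_MASS_KG    # 39,000 MJ released per tonne
high_mj = 42 * BUNKER_MASS_KG   # 42,000 MJ released per tonne

spread_pct = (high_mj - low_mj) / low_mj * 100
print(f"Energy per tonne: {low_mj:,} - {high_mj:,} MJ ({spread_pct:.1f}% spread)")
```

That is roughly a 7.7% difference in usable energy from identical tonnage, before anything else in the fuel system is considered.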

The Fuel System Lockdown:

MFM Bunker to Bunker

The first challenge is to know and agree what is being bunkered onto the vessel in the first place. To know the mass of the bunker, we must be using a correctly ranged Mass Flow Meter.
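
A minimal sketch of that reconciliation, assuming readings are available from the Bunker Delivery Note (BDN) and from the MFM; the function name, figures and tolerance are illustrative, not an industry standard:

```python
# Hypothetical sketch: reconciling the mass declared on the Bunker Delivery
# Note (BDN) against the mass recorded by a correctly ranged Mass Flow Meter.
def bunker_discrepancy(bdn_declared_kg: float, mfm_measured_kg: float,
                       tolerance_pct: float = 0.5) -> dict:
    """Return the shortfall (or surplus) between declared and measured mass."""
    diff_kg = bdn_declared_kg - mfm_measured_kg
    diff_pct = diff_kg / bdn_declared_kg * 100
    return {
        "difference_kg": diff_kg,
        "difference_pct": round(diff_pct, 2),
        "within_tolerance": abs(diff_pct) <= tolerance_pct,
    }

print(bunker_discrepancy(bdn_declared_kg=500_000, mfm_measured_kg=497_250))
# {'difference_kg': 2750, 'difference_pct': 0.55, 'within_tolerance': False}
```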

MFM Bunker to Settling Tank

When using Fuelre4m’s Re4mx Fueloil re4mulator, we need to dose the correct amount of product for the weight of fuel that is being treated either in the bunker or in the settling tank.
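
As a sketch of the dosing arithmetic only; the dose rate below is a placeholder parameter, not Fuelre4m's published ratio for Re4mx:

```python
# Illustrative sketch: dosing a fuel treatment by the mass of fuel being
# treated. The dose rate is an assumed placeholder, not a product figure.
def dose_litres(fuel_mass_kg: float, dose_rate_l_per_tonne: float) -> float:
    """Litres of product to dose for a given mass of fuel."""
    return fuel_mass_kg / 1_000 * dose_rate_l_per_tonne

# Example: treating 250 tonnes of fuel in the settling tank at an assumed
# rate of 1 litre per tonne.
print(dose_litres(fuel_mass_kg=250_000, dose_rate_l_per_tonne=1.0))  # 250.0
```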

MFM Settling to Purification

Having a mass flow meter after the settling tank and before purification isn't strictly necessary, but it can be beneficial for understanding the temperature and density of the transferred fuel, as well as what percentage of water and waste material has been lost up to this point.
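
That loss can be expressed as a simple percentage; a minimal sketch, assuming MFM readings are taken either side of the purification stage (figures illustrative):

```python
# Sketch: share of transferred mass removed as water, sludge and other
# waste between the settling tank and purification, from two MFM readings.
def purification_loss_pct(mass_in_kg: float, mass_out_kg: float) -> float:
    """Percentage of transferred mass lost between the two meters."""
    return (mass_in_kg - mass_out_kg) / mass_in_kg * 100

print(f"{purification_loss_pct(mass_in_kg=20_000, mass_out_kg=19_640):.2f}% lost")
# 1.80% lost
```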

MFM Before Mixing Column, Pre Main Engine – Fuel In

This is the last reference check point of the fuel before it is injected into the engine. From this point, what can be reported as accurately as possible is how much fuel, by weight, is now passing through for combustion.

MFM Post Main Engine – Fuel Out

To understand the fuel consumption of the main engine, it’s important to be able to measure as close to the Fuel In and Fuel Out points as possible. Fuel consumption of the Main Engine should be as simple as MFM IN minus MFM OUT.
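
A minimal sketch of that relationship, with illustrative figures:

```python
# Main engine fuel consumption over an interval is simply the mass passing
# the Fuel In MFM minus the mass returning past the Fuel Out MFM.
def main_engine_consumption_kg(mfm_in_kg: float, mfm_out_kg: float) -> float:
    """Mass of fuel actually consumed by the main engine over the interval."""
    return mfm_in_kg - mfm_out_kg

# Example hour: 4,200 kg measured in, 1,150 kg returned unburnt.
print(main_engine_consumption_kg(mfm_in_kg=4_200, mfm_out_kg=1_150))  # 3050
```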

Torque / Shaft Power Meter

So, we've locked down the mass of the fuel flowing into the engine; now how do we measure the power produced? Despite how it sounds, a torque meter does not measure torque. It simply measures time and distance. As the forces acting against the propeller change, the amount of power needed to maintain the same turning speed will also change, and the propeller shaft will 'twist' with torque.

Why is the ranging important? Because the maximum power rating of the engine changes depending on the quality of the fuel and the energy it can release.

If your fuel produces 1kWh for every 160g burnt, 1,000kg of fuel will produce 6,250kWh of power. If your fuel produces 1kWh for every 180g, the same 1,000kg of fuel will produce only around 5,556kWh. If the maximum Fuel In capacity of the engine, from which the power rating is calculated, is 1,000kg, then the maximum power rating of that engine, and with it the SFOC, has now changed.
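
The same arithmetic in a minimal sketch:

```python
# Power obtainable from a fixed mass of fuel at a given specific fuel
# consumption (grams of fuel per kWh produced).
def power_from_fuel_kwh(fuel_mass_kg: float, sfoc_g_per_kwh: float) -> float:
    """kWh obtainable from the given mass of fuel at the given SFOC."""
    return fuel_mass_kg * 1_000 / sfoc_g_per_kwh

print(round(power_from_fuel_kwh(1_000, 160)))  # 6250 kWh
print(round(power_from_fuel_kwh(1_000, 180)))  # 5556 kWh
```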

Power Cards / Power Curves

The taking of indicator cards allows the ship's engineer to receive more information about the combustion process (via the draw or out-of-phase card), measure the cylinder power output of the engine (via the power cards), and check the cleanliness of the scavenging process (via the light spring diagram).

For the purposes of measuring the efficiency of the fuel, the power cards can be used to calculate the energy release of the fuel. This can then be used to build an algorithm to ‘range’ or adjust the power readings from the torque meter to the quality of the fuel.
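
One way such a ranging adjustment might look, purely as a sketch of the idea rather than Fuelre4m's actual algorithm; all figures are illustrative:

```python
# Use cylinder power derived from the power cards to derive a correction
# ("ranging") factor, then apply it to subsequent torque-meter readings so
# the reported power reflects the energy the current fuel actually releases.
def ranging_factor(power_card_kw: float, torque_meter_kw: float) -> float:
    """Ratio between power-card derived power and the torque meter reading."""
    return power_card_kw / torque_meter_kw

def ranged_power_kw(torque_meter_kw: float, factor: float) -> float:
    """Torque-meter reading adjusted to the quality of the fuel being burnt."""
    return torque_meter_kw * factor

factor = ranging_factor(power_card_kw=10_450, torque_meter_kw=10_000)
print(ranged_power_kw(torque_meter_kw=9_600, factor=factor))  # 10032.0
```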

MFM Auxiliary Engines – Fuel In

The auxiliary engines are, strangely, probably the easiest on which to prove fuel efficiency and the efficiency of the fuel. Why? Because they generate electrical power that can easily be measured.

MFM Auxiliary Engines – Fuel Out

A common fuel flow in and fuel flow out MFM will suffice if all of the auxiliary engines are sharing a common fuel flow system.

Auxiliary Engines – Constant Power Meter

Being able to monitor the amount of power produced at a given moment is not enough. Electrical loads can vary, and if the kW reading is taken, or the kWh counter recorded, only once an hour, the load just two seconds later could be completely different. The fuel consumption for 100kWh produced over 3 minutes is vastly different from 100kWh produced over 1 hour.
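
A minimal sketch of why the readings have to be integrated over the same window; the numbers are illustrative:

```python
# Specific fuel consumption of an auxiliary engine is only meaningful when
# fuel mass and generated energy are logged over the same time window.
def sfoc_g_per_kwh(fuel_consumed_kg: float, energy_kwh: float) -> float:
    """Grams of fuel burnt per kWh generated over the same interval."""
    return fuel_consumed_kg * 1_000 / energy_kwh

# The same 100 kWh, produced under very different load profiles, can carry
# very different fuel burn (illustrative figures).
print(sfoc_g_per_kwh(fuel_consumed_kg=19.5, energy_kwh=100))  # 195.0 g/kWh
print(sfoc_g_per_kwh(fuel_consumed_kg=22.5, energy_kwh=100))  # 225.0 g/kWh
```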

Boilers & Cargo Offload Systems

Some vessels use boilers to generate steam power, running off the same fuel as the main engines. It is important to lock down all fuel consumers to understand where the fuel is being consumed.

MFM Boiler – Fuel In

Often fed straight from the settling tank without needing to go through further purification, the boiler directly combusts the fuel to generate steam from water.

To be able to calculate boiler and fuel efficiency, we first need to look at how much fuel, by mass, is being consumed.

Volumetric or MFM – Water In

Fresh water has a very well-known density of 1g per ml, but this is also affected by temperature. The use of a temperature-compensated mass flow meter will improve the accuracy of measuring the water used to produce the required steam.

Recordable Pressure Gauge

The last variable? How much water and fuel is being used to produce the same amount of steam pressure.  
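
Tying those three measurements together, a minimal sketch with illustrative figures: once fuel mass in, feedwater mass in and steam pressure are logged over the same interval, the fuel needed per tonne of steam becomes a directly comparable number.

```python
# Fuel burnt per tonne of steam raised, from fuel and feedwater MFM readings
# logged over the same interval at a recorded, constant steam pressure.
def fuel_per_tonne_steam(fuel_kg: float, feedwater_kg: float) -> float:
    """kg of fuel burnt per tonne of feedwater evaporated into steam."""
    return fuel_kg / (feedwater_kg / 1_000)

# Example hour at a steady 7 bar steam pressure (illustrative figures).
print(round(fuel_per_tonne_steam(fuel_kg=310, feedwater_kg=4_200), 1))  # 73.8
```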

Cover Story

AI Moves from Experiment to Essential in UAE’s Advertising Landscape


By Srijith KN, Senior Editor, Integrator
From content creation to media buying, artificial intelligence is quietly reshaping how campaigns are built, delivered, and optimised across the GCC.

In the UAE and across the GCC, artificial intelligence has moved well beyond the stage of experimentation. What was once a buzzword discussed in boardrooms is now deeply embedded in the day-to-day execution of advertising. Brands are no longer testing AI—they are relying on it to run campaigns, generate content, and make increasingly precise decisions about audience targeting and timing.

On the creative front, the shift is particularly visible. AI-powered tools are now capable of producing ad copy, visuals, and even short-form video content at a pace that would have been unthinkable just a few years ago. For marketers operating in a market like the UAE, where campaigns often need to speak to audiences in both English and Arabic while also resonating across a diverse mix of nationalities, this level of speed and adaptability is more than a convenience. It is becoming a necessity.

Behind the scenes, machine learning has also transformed how media buying is approached. Traditional methods that relied heavily on instinct or retrospective performance reports are steadily being replaced by systems that analyse audience behaviour in real time. These platforms continuously optimise campaign performance, adjusting budgets and placements based on how users interact with content.

In the UAE’s PR ecosystem, brands are already leveraging platforms such as Meltwater, Brandwatch, and Sprout Social to better understand media performance, audience sentiment, and the broader buying landscape.

A practical example of this shift can be seen in platforms like Skyscanner, where advertising systems respond dynamically to user intent. Instead of targeting broad demographic groups, campaigns are triggered by actual search behaviour and travel patterns, allowing for more relevant and timely engagement.

AI is also influencing emerging advertising formats. Digital billboards, for instance, are becoming more responsive, using live data inputs to tailor content based on factors such as time of day, location, and audience movement. Similarly, augmented reality experiences are beginning to incorporate behavioural insights, offering more contextual and interactive brand engagements.

Looking ahead, the trajectory appears clear. Advertising is moving towards deeper automation, more intelligent recommendations, and tighter integration between creative tools and analytics platforms. The industry is shifting from a model centred on broadcasting messages to one that focuses on responding to audiences in real time, with context and precision.

In this evolving landscape, AI is no longer just an enabler; it is becoming the foundation on which modern advertising is built.


Tech Features

Can Middle East Banks Reclaim Their Digital Leadership in the Age of AI?


By Fernando Castanheira, Chief Technology Officer at Riverbed Technology

Banks have long been the GCC’s digital pioneers. In the UAE, Saudi Arabia and Qatar, financial institutions were among the first to embrace mobile banking apps, roll out contactless payments at scale and introduce AI-powered chatbots to handle customer queries in Arabic and English. More often than not, banks set the pace and other sectors followed.

Given this decades-long precedent, you would expect the same pattern to be playing out with artificial intelligence. After all, AI is already embedded in the daily lives of Gulf consumers. Ride-hailing, e-commerce, government, and a plethora of other services across the region have increasingly integrated AI into their systems, to effectively personalise experiences and streamline transactions.

And yet, when we look inside banks themselves, the story is more complicated. According to the latest Riverbed Global Survey, only 40% of organizations in the financial sector consider themselves ready to operationalize AI. Just 12% of AI initiatives are fully deployed enterprise-wide, while 62% remain stuck in pilot or development phases. In a sector known for digital ambition, there is a striking gap between intent and execution.

Stuck in Pilot Purgatory

In most industries, pilots fail because the idea simply does not resonate. Testing reveals a weak product-market fit, limited customer appetite, or unclear commercial value.

That is not what we are seeing in banking AI. Regional banks have successfully piloted AI models that detect fraud in real-time, reduce false positives in anti-money laundering checks, predict liquidity requirements, and power conversational assistants capable of resolving complex service requests. Relationship managers have used AI tools to surface next-best-product recommendations based on behavioral data. And operations teams have leveraged machine learning to optimize payment routing and reduce processing delays.

In controlled environments, these pilots often deliver impressive results. And yet, few ever make it past this stage. The initiative remains confined to a sandbox. Expansion is delayed. Integration becomes “phase two.” Eventually, attention shifts to the next promising experiment. So, if the feature works and the value is clear, what is holding banks back?

AI that Fails to Scale

In my experience working with CIOs across the region, two obstacles repeatedly stand in the way of AI moving from proof of concept to production. The first is operational complexity. Most financial institutions operate in highly fragmented environments. Core banking platforms run alongside decades-old legacy systems, with critical workloads split across on-premise data centers, private clouds, and multiple public cloud providers. Third-party fintech integrations add further layers of interdependency.

Deploying AI into this landscape is not as simple as plugging in a model. AI workloads are data-hungry and latency-sensitive. They require reliable pipelines, consistent telemetry, and predictable performance across every layer of the stack. In a hybrid, multi-cloud architecture, even minor configuration mismatches can trigger cascading issues.

The second obstacle is limited visibility. Without a unified view of applications, infrastructure, networks, and user experience, AI-driven services can behave unpredictably. A model may be performing perfectly, but a network bottleneck may slow response times. An upstream data source may degrade in quality, subtly skewing outputs, and an infrastructure change in one environment may impact inference speeds elsewhere.

When visibility is fragmented, issues take longer to diagnose and resolve, and mean time to resolution increases. Operational risk rises, particularly when customer-facing or revenue-critical services are affected. In a heavily regulated market such as the UAE or Saudi Arabia, that risk has compliance implications as well as reputational ones.

Left unaddressed, this kind of live digital environment leaves very little room for innovation. AI cannot become the transformational force many claim it to be if it is constantly constrained by hidden friction.

Conquering Complexity

Moving AI smoothly from pilot to production requires banks to create as frictionless an operating environment as possible. One of the most effective starting points is unified observability. By consolidating telemetry from applications, infrastructure, networks and end-user devices into a single, real-time view, banks can eliminate blind spots, and decision-makers can gain clarity over performance, dependencies and risk across the entire digital estate.

With this foundation in place, AIOps capabilities can correlate signals, reduce alert noise and automate root cause analysis. Instead of firefighting incidents after customers notice them, IT teams can proactively identify performance degradation and resolve issues before they impact revenue or service continuity.

Standardising on frameworks such as OpenTelemetry can further simplify instrumentation across heterogeneous environments, ensuring consistent data collection and analysis. At the same time, investing in data quality, governance and compliance processes ensures that AI models are trained and operated within regulatory boundaries.
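
As an indication of what that standardisation looks like in practice, here is a minimal sketch using the OpenTelemetry Python SDK; the service name, span names and attributes are illustrative, and a real deployment would export to the bank's observability backend rather than the console:

```python
# Minimal OpenTelemetry tracing setup (Python SDK). Illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "fraud-scoring"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("fraud-scoring")

def score_transaction(txn_id: str) -> float:
    # Each inference call emits a span, so latency and errors from the model,
    # the network and upstream data sources appear in one consistent trace.
    with tracer.start_as_current_span("score_transaction") as span:
        span.set_attribute("txn.id", txn_id)
        risk = 0.12  # placeholder for the model call
        span.set_attribute("risk.score", risk)
        return risk

score_transaction("TXN-1001")
```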

In practical terms, this means rethinking infrastructure as an enabler of AI rather than an afterthought. It may involve accelerating data movement between environments, modernising integration layers or rationalising overlapping monitoring tools. The goal is not perfection, but coherence: a shared, real-time understanding of how systems behave and how AI performs under real-world conditions.

From Optimism to Optimisation

The debate about whether AI belongs in banking is effectively over. Across the Middle East, regulators are publishing AI guidelines, governments are investing heavily in digital transformation, and consumers increasingly expect intelligent, seamless services.

Institutions that continue to treat AI as a series of isolated pilots risk remaining in perpetual experimentation. However, those who address operational complexity head-on will move beyond optimism to optimisation.


Tech Features

Addressing Structural Gaps in Enterprise Backup Strategies


By Owais Mohammed, Regional Lead & Sales Director, WD – Middle East, Africa, Turkey & Indian Subcontinent

Today, organizations across the UAE are reassessing how they back up and recover data in increasingly complex environments. They are managing data across cloud platforms, on-premises infrastructure, edge deployments and, increasingly, AI-driven workloads. As these environments scale, data moves across systems and is reused for analytics, compliance, and performance optimisation. This increases the complexity of backup and retention requirements. When strategies do not keep pace, gaps become visible.

Where backup strategies are falling short

A common challenge is the alignment between backup design and actual workload distribution. Many backup strategies are built around primary systems. But enterprise data now lives across multiple environments with different access patterns and retention requirements. This creates inconsistencies in backup coverage across cloud services, endpoints, and shared infrastructure.

A common misconception is that platform-level redundancy is sufficient. Cloud and application platforms are designed to provide availability, but they do not replace independent backup layers. When data is modified, deleted, or encrypted within the same environment, recovery depends on whether a separate, unaffected copy exists.

Coverage inconsistencies also become more visible as organizations scale. Backup policies often prioritise transactional systems. Logs, archived records, development environments, and datasets used for analytics or AI workflows may be retained without structured protection. These datasets can become critical during investigations, audits, or system updates.

Recovery planning is where many strategies can break down. Backup processes may be in place, but recovery requirements are not always well defined. This includes defining dependencies, sequencing recovery, and aligning recovery times with business needs.

Why data resilience is now an infrastructure requirement

Enterprise data is now used across a wider range of functions. In analytics and AI-driven environments, data is revisited over time rather than stored and left unused. Historical datasets are essential to maintain performance and consistency. This means reliable backup and access are no longer a secondary consideration, but a core infrastructure need.

Compliance expectations are also evolving. Organizations increasingly need to retain records, demonstrate traceability, and provide access to data in a verifiable format. Backup and retention policies must align with recovery capabilities.

Building a more resilient data strategy

Addressing these gaps requires a structured approach to data resilience.

Infrastructure choices affect how backup strategies can be implemented. These decisions increasingly factor in not only performance and scalability, but also long-term cost efficiency as data environments expand. Many organisations are adopting hybrid models that combine cloud platforms with localised storage systems. This allows different workloads to be supported based on their access patterns and recovery requirements. In scenarios where consistent performance and recovery predictability are required, localized storage can provide additional control.

As environments grow, automation is important in maintaining consistency. Policy-driven automation helps ensure that backup processes are applied consistently, while monitoring tools provide visibility into system performance and potential gaps.

Recovery planning needs to be integrated into these processes. Clear recovery objectives and regular testing are essential for effective backup strategies.

Data prioritization also plays a role in managing scale. Not all data requires the same level of backup. Identifying critical datasets allows organizations to allocate resources effectively.
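
One hypothetical way of expressing that prioritization as policy, so automation can apply it consistently; tier names, RPO/RTO targets and dataset classes are illustrative, not a vendor configuration:

```python
# Hypothetical policy table mapping dataset classes to backup tiers.
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    tier: str
    rpo_hours: int        # maximum acceptable data loss
    rto_hours: int        # maximum acceptable time to restore
    retention_days: int   # how long copies are kept

POLICIES = {
    "transactional": BackupPolicy("critical", rpo_hours=1, rto_hours=4, retention_days=2555),
    "analytics": BackupPolicy("standard", rpo_hours=24, rto_hours=24, retention_days=365),
    "dev-test": BackupPolicy("basic", rpo_hours=72, rto_hours=72, retention_days=30),
}

def policy_for(dataset_class: str) -> BackupPolicy:
    """Look up the protection level a dataset class should receive."""
    return POLICIES[dataset_class]

print(policy_for("analytics"))
```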

Managing cost as data volumes scale

Cost considerations play a central role as data volumes scale. In large environments, power consumption, cooling requirements, and infrastructure footprint all contribute to total cost of ownership (TCO).

This is where tiered storage architecture becomes critical. High-performance storage is essential for active workloads such as analytics and real-time processing, while high-capacity, cost-efficient storage supports large datasets, backups, and long-term retention. This helps manage growth and scaling efficiently.

Treating all data the same is no longer practical. Infrastructure decisions need to reflect how data is used, how often it is accessed, and how quickly it needs to be recovered.

Backup strategies must align closely with infrastructure design. Data resilience now means ensuring data is accessible and recoverable across systems.

In data-intensive environments, the ability to recover and reuse data is directly tied to operational continuity, system performance, and the ability to scale infrastructure effectively.
