Tech Features
5 KEY TECHNOLOGY TRENDS AFFECTING THE SECURITY SECTOR IN 2026
By Johan Paulsson, Chief Technology Officer at Axis Communications; Matt Thulin, Director of AI & Analytics Solutions at Axis Communications; and Thomas Ekdahl, Engineering Manager – Technologies at Axis Communications
It comes as something of a surprise that this is the 10th time we’ve looked at the technology trends we think will affect the security sector in the coming year. It feels like only yesterday that we sat down to write the first – a reminder of how quickly time passes, and how fast technological progress continues to move.
Something that’s also become clear is that a completely new set of trends doesn’t appear year-on-year. Rather, we see an evolution of trends and technological developments, and that’s very much the case as we look towards 2026. Innovations that impact our sector arrive regularly: artificial intelligence, advancements in imaging, greater processing capabilities within devices, enhanced communications technologies… these and more have shaped our industry.
Even technologies that still seem some distance away, such as quantum computing, carry near-term implications as organizations prepare for the future. While we focus here on tech trends, it’s worth highlighting a shift that we’ve seen in recent years: the increasing involvement and influence of the IT department in decisions related to security and safety technology. The physical security and IT departments now work in close collaboration, with IT heavily involved in physical security purchasing decisions.
That influence, we feel, is central to the first of our trends for 2026…
1. “Ecosystem-first” becomes an important part of decision making
At a fundamental level, the greater influence of the IT department is changing the perspective on security technology purchasing decisions. Increasingly, the first decision is defined by the solution ecosystem to which the customer wants to commit. We call this an “ecosystem-first” approach, and it shapes almost every subsequent decision. In many ways, it’s analogous to how IT has always worked: decide on an operating system, and then select compatible hardware and software.
The ecosystem-first approach makes a lot of sense. With today’s solutions including a greater variety of devices, sensors, and analytics than ever before, seamless integration, configuration, management, and scalability are essential. In addition, product lifecycle management – including, critically, ongoing software support – becomes more achievable within a single ecosystem.
Committing to a single ecosystem – one offering breadth and depth in hardware and software from the principal vendor alongside a vibrant ecosystem of partners – is the primary decision.
2. The ongoing evolution of hybrid architectures
A hybrid architecture as the preferred choice isn’t new. In fact, it’s something we’ve highlighted in previous technology trends posts. But it continues to evolve. Sometimes evolution can seem quite subtle. In reality, we’re seeing some fundamental shifts.
We’ve always described hybrid as a mix of edge computing within cameras, cloud resources, and on-premise servers. While that’s still the same today, what’s changing is the balance of resources, as capabilities are enhanced and new use cases emerge. Edge and cloud are becoming much more significant, with the need for on-premise server computing resources reduced.
This is largely a result of enhanced computing power and capabilities within both cameras and the cloud. More powerful edge AI-enabled surveillance cameras can, put simply, handle more than ever before. Improved image quality and the ability to analyze scenes more accurately and create valuable metadata have seen cameras take on tasks previously handled on the server.
Similarly, with such a wealth of data being created, cloud-based resources have the analytical power required to surface business intelligence and insights to enhance operational effectiveness.
There can still be legitimate reasons to retain some on-premise resources, such as network video recorders, but the true value is increasingly coming from edge devices and cloud resources. Ultimately, it’s a trend that meets the IT department’s drive for efficiency, the security team’s desire for solution quality and effectiveness, and the data integrity and security needs of both.
But even if hybrid architectures are a trend, we must not forget that the vast majority of solutions are still very much on-premise, and this will remain the case for a long time.
3. The increased importance of edge computing
In many sectors, like the automotive industry, the need and potential for edge computing have only been recognized relatively recently. As regular readers will know, however, the value of increased computing resources within devices at the edge of the network has been a feature of our technology trends predictions for several years. Enhanced capabilities mark the beginning of a new era of edge.
In many ways, the increased importance of edge computing is directly related to the evolution of hybrid architectures described in the previous trend. Where hybrid solutions have included edge, cloud, and server technologies, the full potential of edge AI hasn’t always been realized. With on-premise servers able to support some tasks, there has been less motivation to move these to the edge.
This is already changing and will accelerate over the coming year, in part due to the enhanced AI available at the edge, within devices themselves. The discussion and decisions about where to deploy AI across surveillance solutions – using the strengths of edge AI in devices and the power of cloud-based analytics – have brought focus to the capabilities of cameras and the increasing variety of edge AI-enabled sensors. These bring benefits in both effectiveness and efficiency.
Edge processing generates both business data – actionable insights derived directly from the scene – and metadata, which describes the objects and scenes within it. This information has become the basis for efficient scaling of system functionality, such as smart video searches, and for generating system-wide insights. Edge processing enables a much smoother scaling of system compute performance, as the system performance grows with each added edge device.
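To make this concrete, here is a minimal sketch of how a smart video search might query edge-generated metadata rather than re-processing raw footage. The event fields and function names below are illustrative assumptions, not any vendor’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class MetadataEvent:
    """Hypothetical metadata record produced by an edge AI camera."""
    camera_id: str
    timestamp: float
    object_class: str                               # e.g. "person", "vehicle"
    attributes: dict = field(default_factory=dict)  # e.g. {"color": "red"}

def smart_search(events, object_class, **attribute_filters):
    """Filter lightweight metadata instead of re-analyzing video frames."""
    for e in events:
        if e.object_class != object_class:
            continue
        if all(e.attributes.get(k) == v for k, v in attribute_filters.items()):
            yield e

# Example: find every red vehicle observed across all cameras.
# matches = list(smart_search(all_events, "vehicle", color="red"))
```

Because each camera contributes its own metadata stream, search cost stays roughly proportional to the number of events rather than to hours of raw video – the smoother scaling the trend describes.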
The arguments against moving more to the edge, such as cybersecurity challenges, have diminished. With strong cybersecurity capabilities such as secure boot and a signed OS, edge devices have now become a strong part of the overall system security solution.
4. Mobile surveillance on the rise
Mobile surveillance solutions, like mobile trailers, aren’t new in themselves. But for numerous reasons – commercial and technological – mobile surveillance has already seen significant growth and is set to explode over the next year.
From a technological perspective, improved connectivity has helped unlock the ability to employ more advanced, higher-quality surveillance cameras in mobile solutions. Remote access and edge AI have further enhanced the capabilities of mobile surveillance solutions. This immediately makes them an attractive option in a greater variety of situations, from public safety to construction sites to festivals and sporting events.
Power management within surveillance cameras has also advanced, resulting in lower power utilization without a compromise in quality. This is particularly important where mobile surveillance solutions are making use of battery power and renewable energy. A mobile surveillance solution can also be more straightforward to approve than a permanent installation.
Ultimately, these factors mean that security and safety can be ensured in places where it is difficult or undesirable to place physical security personnel.
5. Technology autonomy: Easier said than done!
This is less a new trend and more a reflection on one of our trends from last year, where we highlighted how companies across many sectors were looking to gain more control over key technologies essential to their products. One example was automotive companies looking to design their own semiconductors to mitigate supply chain disruption.
As many of those organizations are finding, however, extending an organization’s focus from its traditional business (e.g. making cars) to a fundamentally different and potentially highly complex area (e.g. designing semiconductors) is easier said than done. Attempts also highlight how interconnected global supply chains are, and that true autonomy is impossible to achieve.
As Axis has demonstrated for many years, the focus for technological autonomy should be on the areas of a business that make a fundamental difference to the offering. Designing our own system-on-chip (SoC), ARTPEC, which Axis started doing more than 25 years ago, has given us ultimate control over our product functionality.
An example of the benefit of this has been our ability to be the first surveillance equipment vendor to provide AV1 video encoding to our customers and partners, in addition to H.264 and H.265. It also allows us to prepare for future technologies that will bring opportunities and risks, even those that still seem many years in the future.
While we always enjoy putting together our thoughts on the trends that will define the industry over the coming year, our perspective stretches much further into the future. This is what gives us the ability to plan for and develop the innovations that continue to meet the evolving needs of customers, and opportunities to improve safety, security, operational efficiency and business intelligence.
Innovation doesn’t happen in isolation, however. The best ideas emerge through collaboration, by listening to our customers and understanding their challenges, by maintaining close relationships with our partners, and by exploring solutions together. These partnerships are what will continue to drive progress as we move into 2026 and beyond, whichever way the technological winds may blow.
Tech Features
FROM COST EFFICIENCY TO CARBON EFFICIENCY: THE NEW METRIC DRIVING TECH DECISIONS
Ali Muzaffar, Assistant Editor at the School of Mathematical and Computer Sciences, Heriot-Watt University Dubai
In boardrooms across the globe, something big is happening, quietly but decisively. Sustainability has evolved far beyond being a “nice-to-have” addition to an ESG report. It’s now front and centre in business strategy, especially in tech. From green computing and circular data centres to AI that optimises energy use, companies are reshaping their technology roadmaps with sustainability as a core driver and not as an afterthought.
Not long ago, tech strategy was all about speed, uptime, and keeping costs per computation low. That mindset has evolved. Today, leaders are also asking tougher questions: How carbon-intensive is this system? How energy-efficient is it over time? What’s its full lifecycle impact? With climate pressure mounting and energy prices climbing, organisations are increasingly tying digital transformation to their institutional sustainability goals.
At its heart, green computing seeks to maximise computing performance while minimising environmental impact. This includes optimising hardware efficiency, reducing waste, and using smarter algorithms that require less energy.
A wave of recent research shows just how impactful this can be. Studies indicate that emerging green computing technologies can reduce energy consumption by 40–60% compared to traditional approaches. That’s not a marginal improvement; it’s transformational. It means smaller operating costs, longer hardware life, and a lower carbon footprint without sacrificing performance.
Part of this comes from smarter software. Techniques like green coding, which optimise algorithms to minimise redundant operations, have been shown to cut energy use by up to 20% in data processing tasks.
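As a simple illustration of the principle (a generic example, not drawn from the studies cited above), caching intermediate results is a classic green-coding move: it removes redundant computation, and less computation means less energy.

```python
from functools import lru_cache

# Naive recursion recomputes the same subproblems exponentially often.
@lru_cache(maxsize=None)   # cache each result so it is computed only once
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# With the cache, fib(40) takes ~40 computations instead of hundreds of
# millions of recursive calls - the same answer for a fraction of the CPU time.
print(fib(40))  # 102334155
```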
Organisations that adopt green computing strategies aren’t just doing good; they’re driving tangible results. Informed by sustainability principles, energy-efficient hardware and optimisation frameworks can reduce energy bills and maintenance costs at the same time, often with payback periods of three to five years.
Data centres are the backbone of the digital economy. They power software, store vast troves of data, and support the artificial intelligence systems driving innovation. But this backbone comes with a heavy environmental load. Collectively, global data centres consume hundreds of terawatt-hours of electricity each year – about 2% of total global electricity use.
As AI workloads surge and data storage demand explodes, energy consumption is rising sharply. Looking ahead to 2030, the numbers are hard to ignore. Global data centre electricity demand is expected to almost double, reaching levels you’d normally associate with an entire industrialised country. That kind of energy appetite isn’t just a technical issue; it’s a strategic wake-up call for the entire industry.
This surge has forced a fundamental rethink of how data centres are built and run. Enter the idea of the circular data centre. It’s not just about better cooling or switching to renewables. Instead, it looks at the full lifecycle of infrastructure, from construction and daily operations to decommissioning, recycling, and reuse, so waste and inefficiency are designed out from the start.
The most forward-thinking operators are already implementing this approach. Advanced cooling methods, such as liquid cooling and AI-driven thermal management, are revolutionising the industry, reducing cooling energy consumption by up to 40% compared to traditional air-based systems. That’s a big win not only for energy bills, but also for long-term sustainability.
Beyond cooling, operators are turning waste heat into a resource. In Scandinavia, data centres are already repurposing excess thermal output to heat residential buildings, a real-world example of how technology can feed back into the community in a circular way. These strategies are already showing results, with approximately 60% of data centre energy now coming from renewable sources, and many operators are targeting 100% clean power by 2030.
Circular thinking extends to hardware too. Companies are designing servers and components for easier recycling, refurbishing retired equipment, and integrating modularity so that parts can be upgraded without replacing entire systems.
For businesses, circular data centres represent more than environmental responsibility. They can significantly lower capital expenditures over time and reduce regulatory risk as governments tighten emissions requirements.

While AI itself has been criticised for energy use, the technology also offers some of the most effective tools for reducing overall consumption across tech infrastructure.
AI algorithms excel at predictive optimisation: they can analyse real-time sensor data to adjust cooling systems, balance computing loads, and shut down idle resources. Across case studies, such systems have reliably achieved 15–30% energy savings in energy management tasks. In cloud environments, dynamic server allocation and AI-assisted workload management have contributed to energy savings of around 25% compared with conventional operations.
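To illustrate one piece of this idea – a deliberately simplified sketch with assumed thresholds, not a production algorithm – an optimiser can consolidate load onto the busiest machines and mark genuinely idle servers for standby:

```python
def plan_power_state(server_loads: dict[str, float],
                     idle_threshold: float = 0.05,
                     headroom: float = 0.2) -> dict[str, str]:
    """Decide which servers stay on; loads are utilisation fractions (0-1)."""
    capacity_needed = sum(server_loads.values()) * (1 + headroom)
    decisions, active_capacity = {}, 0.0
    # Keep the busiest servers on until demand plus headroom is covered.
    for server, load in sorted(server_loads.items(), key=lambda kv: -kv[1]):
        if active_capacity < capacity_needed or load > idle_threshold:
            decisions[server] = "on"
            active_capacity += 1.0         # each server = one unit of capacity
        else:
            decisions[server] = "standby"  # candidate for power-down
    return decisions

# Example: the nearly idle third server is marked for standby.
print(plan_power_state({"s1": 0.7, "s2": 0.3, "s3": 0.01}))
```

Real systems layer demand forecasting, migration costs, and safety margins on top, but the core idea stands: don’t keep hardware powered for load that isn’t there.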
Tech Features
THE YEAR AI WENT MAINSTREAM
Talal Shaikh, Associate Professor, Heriot-Watt University Dubai
In 2025, artificial intelligence crossed a threshold that had little to do with model size or benchmark scores. This was the year AI stopped feeling like a product and started behaving like infrastructure. It became embedded across work, education, government, media, and daily decision-making. The shift was subtle but decisive. AI moved from something people tried to something they assumed would be there.
From my position at Heriot-Watt University Dubai, what stood out most was not a single breakthrough, but a convergence. Multiple model ecosystems matured at the same time. Autonomy increased. Regulation caught up. Infrastructure scaled. And nations began to treat intelligence itself as a strategic asset.
From one AI story to many
For several years, public attention clustered around a small number of Western firms, most visibly OpenAI and Google. In 2025, that narrative fractured.
Google’s Gemini models became deeply embedded across search, productivity tools, Android, and enterprise workflows. Their strength lay not only in conversation, but in tight coupling with documents, spreadsheets, email, and live information. AI here was designed to live inside existing habits.
At the same time, Grok, developed by xAI, took a different path. With real-time access to public discourse and a deliberately opinionated tone, it reflected a broader shift in design philosophy. AI systems were no longer neutral interfaces. They carried values, styles, and assumptions shaped by their creators. That diversity itself was a sign of maturity.
By the end of 2025, users were no longer asking which model was best. They were choosing systems based on fit, trust, integration, and intent.
The rise of agentic AI
If generative AI defined earlier years, agentic AI defined 2025.
In 2023, most people experienced AI as a chatbot. You asked a question, it replied, and the interaction ended. In 2025, that interaction became continuous. An agent does not simply respond. It reads context, sets sub-goals, uses tools, checks results, and decides what to do next.
A chatbot drafts an email. An agent reads the full thread, looks up past conversations, drafts a response, schedules a meeting, and follows up if no reply arrives. A chatbot explains an error. An agent runs tests, fixes the issue, commits code, and opens a pull request.
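A minimal sketch of that loop – with the model call stubbed out (llm_plan is a placeholder, not a real API) – shows the structural difference: an agent plans, acts through tools, observes results, and repeats.

```python
def llm_plan(context: list, available_tools: list) -> dict:
    """Placeholder for a model call returning the next structured action.
    A real implementation would send `context` to an LLM and parse its reply;
    this stub finishes immediately so the sketch runs as-is."""
    return {"type": "finish", "answer": context[-1]}

def run_agent(goal: str, tools: dict, max_steps: int = 10) -> str:
    context = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # 1. Read the accumulated context and plan the next sub-goal.
        action = llm_plan(context, available_tools=list(tools))
        if action["type"] == "finish":
            return action["answer"]
        # 2. Act: call a tool (search, run tests, send email) instead of replying.
        result = tools[action["tool"]](**action["args"])
        # 3. Check the result and feed it back in for the next decision.
        context.append(f"{action['tool']} -> {result}")
    # Failure containment: a hard step budget keeps a confused agent bounded.
    return "Stopped: step budget exhausted."

print(run_agent("Follow up on the unanswered email thread", tools={}))
```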
This transition from response to agency turned AI from a helpful assistant into an operational participant. It also shifted risk. As systems gained the ability to act, questions of oversight, auditability, and failure containment moved from academic debate into everyday management.
A shift I saw first in the classroom
This change was not abstract for me. I saw it unfold directly in my classrooms.
Only a short time ago, many students dismissed AI-assisted coding with a familiar phrase: “It hallucinates.” They were not wrong. Early tools often produced code that looked correct but failed logically. Students learned quickly that blind trust led to wasted hours.
In 2025, that language faded.
Students now approach AI differently. They no longer ask whether the model is correct. They ask why it produced a solution, where it might fail, and how to constrain it. In one recent lab, a student debugging a robotics control pipeline did not reject the AI output after a failed test. He used it to generate alternative hypotheses, compared execution traces, and isolated the fault faster than traditional trial and error would allow.
At one point, a student stopped and said, “It is not hallucinating anymore. It is reasoning, but only if I reason with it.”
That sentence captures 2025 better than any benchmark.
From skepticism to supervision, in industry
The same shift is visible among our alumni now working in software engineering, fintech, data science, and robotics. Several who once warned juniors not to trust AI code now describe it as a first-pass collaborator. They use it to scaffold architectures, surface edge cases, and speed up documentation, while keeping final judgment firmly human.
The concern is no longer hallucination. It is over-reliance.
AI moved from being treated as an unreliable shortcut to being treated as a junior colleague: fast, useful, and fallible, requiring supervision rather than dismissal.
Sovereign AI: two models of power
One of the clearest signals that AI went mainstream in 2025 was the divergence in how regions approached it.
In much of the West, the year was framed as a corporate contest. Product launches, market share, and valuation battles dominated headlines. Innovation moved fast, driven by competition between private firms.
In the Middle East, and particularly in the UAE, the framing was different. AI was treated as national infrastructure.
The UAE’s investment in sovereign models such as Falcon and Jais reflected a belief that intelligence, like water or electricity, must be secured, governed, and trusted within borders. This was not about isolation. It was about resilience, data sovereignty, and long-term capacity. Dependence without control came to be seen as a strategic risk.
In 2025, this idea matured. Sovereign AI stopped being a slogan and became a planning principle. While the West debated which company would win, the UAE focused on ensuring that the capability itself remained accessible, accountable, and locally anchored.
When culture embraced AI
Another signal of mainstream adoption arrived from outside the technology sector.
The strategic alignment between The Walt Disney Company and OpenAI marked a moment when AI entered the core of global culture. Disney does not adopt technologies lightly. Its value lies in storytelling, world-building, and intellectual property sustained over decades.
This move was not about automating creativity. It was about scale and continuity. Modern story worlds span films, series, games, theme parks, and personalised digital experiences. Managing that complexity increasingly requires intelligent systems that can assist across writing, design, localisation, and audience interaction.
When a company whose primary asset is imagination treats AI as foundational, it signals that intelligent systems are no longer peripheral to creative industries. They are becoming part of how stories are built, maintained, and experienced. In that sense, 2025 marked the moment AI became cultural infrastructure, not just technical tooling.
Work changed quietly
Another sign of mainstreaming was how little drama accompanied adoption. Professionals stopped announcing that they were using AI. They simply expected it.
Developers assumed code assistance and automated testing. Analysts assumed rapid summaries and scenario modeling. Marketers assumed content generation and performance analysis. Students assumed access, but outcomes increasingly depended on how well they could guide, verify, and critique what AI produced.
This created a new divide. Not between technical and non-technical people, but between those who could reason with AI and those who delegated thinking to it.
What this means for universities
For universities, 2025 closed the door on treating AI as optional.
Every discipline now intersects with intelligent systems. Engineers must understand ethics and regulation. Business graduates must understand automation and decision support. Creative fields must grapple with authorship and originality. Researchers must design methods that remain valid when AI is part of the workflow.
At Heriot-Watt University Dubai, this pushes us toward assessment that rewards reasoning over polish, and education that teaches students not just to use AI, but to supervise it.
The real shift
AI went mainstream in 2025, not because it became smarter, but because society reorganised around it. Multiple models coexisted. Agents acted with growing autonomy. Nations planned for sovereignty. Culture adapted. Classrooms recalibrated trust.
The next phase will not be defined by faster models alone. It will be defined by judgment.
That is the quieter, more demanding challenge left to us after the year AI went mainstream.
Tech Features
FROM AI EXPERIMENTS TO EVERYDAY IMPACT: FIXING THE LAST-MILE PROBLEM
By Aashay Tattu, Senior AI Automation Engineer, IT Max Global
Over the last quarter, we’ve heard a version of the same question in nearly every client check-in: “Which AI use cases have actually made it into day-to-day operations?”
We’ve built strong pilots, including copilots in CRM and automations in the contact centre, but the hard part is making them survive change control, monitoring, access rules, and Monday morning volume.
The ‘last mile’ problem: why POCs don’t become products
The pattern is familiar: we pilot something promising, a few teams try it, and then everyone quietly slides back to the old workflow because the pilot never becomes the default.
Example 1:
We recently rolled out a pilot of an AI knowledge bot in Teams for a global client’s support organisation. During the demo, it answered policy questions and ‘how-to’ queries in seconds, pulling from SharePoint and internal wikis. In the first few months of limited production use, some teams adopted it enthusiastically and saw fewer repetitive tickets, but we quickly hit the realities of scale: no clear ownership for keeping content current, inconsistent access permissions across sites, and a compliance team that wanted tighter control over which sources the bot could search. The bot is now a trusted helper for a subset of curated content, yet the dream of a single, always-up-to-date ‘brain’ for the whole organisation remains just out of reach.
Example 2:
For a consumer brand, we built a web-based customer avatar that could greet visitors, answer FAQs, and guide them through product selection. Marketing loved the early prototypes because the avatar matched the brand perfectly and was demonstrated beautifully at the launch event. It now runs live on selected campaign pages and handles simple pre-purchase questions. However, moving it beyond a campaign means connecting to live stock and product data, keeping product answers in sync with the latest fact sheets, and baking consent into the journey (not bolting it on after). For now, the avatar is a real, working touchpoint, but still more of a branded experience than the always-on front line for customer service that the original deck imagined.
This is the ‘last mile’ problem of AI: the hard part isn’t intelligence – it’s operations. Identity and permissions, integration, content ownership, and the discipline to run the thing under a service-level agreement (SLA) are what decide whether a pilot becomes normal work. Real impact only happens when we deliberately weave AI into how we already deliver infrastructure, platforms and business apps.
That means:
- Embed AI where work happens – in ticketing, CRM, or Teams, inside the tools that engineers, agents and salespeople use every day – and not in experimental side portals.
- Govern the sources of truth. Decide which data counts as the source of truth, who maintains it, and how we manage permissions across wikis, CRM and telemetry.
- Operate it like a core platform. It should be subject to the same expectations, such as security review, monitoring, resilience, and SLA, as core platforms.
- Close the loop by defining what engineers, service desk agents or salespeople do with AI outputs, how they override them, and how to capture feedback into our processes.
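As one hedged example of what closing the loop can look like – the field names and simple file-based logging below are illustrative assumptions, not a specific product’s schema – every AI suggestion is recorded alongside the human decision, so overrides become data for fixing content, prompts, and ownership:

```python
import json
import time

def resolve_with_ai(ticket_id: str, ai_suggestion: str, human_decision: str,
                    final_answer: str, reason: str = "",
                    log_path: str = "ai_feedback.jsonl") -> str:
    """Record the AI suggestion and the human decision before resolving."""
    record = {
        "ticket_id": ticket_id,
        "ai_suggestion": ai_suggestion,
        "decision": human_decision,    # "accepted" | "edited" | "overridden"
        "final_answer": final_answer,
        "override_reason": reason,     # drives content and prompt fixes later
        "timestamp": time.time(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return final_answer
```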
This less glamorous work is where the real value lies: turning a great demo into a dependable part of a production service. It becomes a cross-functional effort, not an isolated AI project. That’s the shift we need to make: from “let’s try something cool with AI” to “let’s design and run a better end-to-end service, with AI as one of the components.”
From demos to dependable services
A simple sanity check for any AI idea is: would it survive a Monday morning? This means a full queue, escalations flying, permissions not lining up, and the business demanding an answer now. That’s the gap the stories above keep pointing to. AI usually doesn’t fall over because the model is ‘bad’. It falls over because it never becomes normal work, or in other words, something we can run at 2am, support under an SLA, and stand behind in an audit.
If we want AI work to become dependable (and billable), we should treat it like any other production service from day one: name an owner, lock the sources, define the fallback, and agree how we’ll measure success.
- Start with a real service problem, not a cool feature. Tie it to an SLA, a workflow step, or a customer journey moment.
- Design the last mile early. Where will it live – in ticketing, CRM, Teams, or a portal? What data is it allowed to touch? What’s the fallback when it’s wrong? (A minimal sketch of that fallback follows this list.)
- Make ownership explicit. Who owns the content, the integrations, and the change control after the pilot glow wears off?
- Build it with the people who’ll run it. Managed services, infra/PaaS, CRM/Power Platform, and security in the same conversation early – because production is where all the hidden requirements show up.
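Picking up the fallback question from the list above – again with illustrative names and thresholds – the pattern itself is simple: serve the AI answer only when confidence clears a bar, and otherwise hand off to the existing human queue so the old workflow remains the safety net.

```python
def answer_or_escalate(question: str, ai_answer: str, confidence: float,
                       threshold: float = 0.8) -> dict:
    """Serve the AI answer only when confidence clears the threshold."""
    if confidence >= threshold:
        return {"channel": "ai", "answer": ai_answer}
    # Fallback: route to the human queue with context attached, never guess.
    return {"channel": "human_queue", "answer": None, "question": question}

# Example: a low-confidence answer is escalated rather than served.
print(answer_or_escalate("Can I return a customised order?", "Possibly.", 0.4))
```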
When we do these consistently, AI ideas stop living as side demos and start showing up as quiet improvements inside the services people already rely on – reliable, supportable, and actually used.