
Technology

Safer Internet Day: Snap Inc. emphasizes the need for greater parental control over online teen activities in 2024

Snapchat Family Center

Reiterating its commitment to online safety this International Safer Internet Day, Snap Inc. has released new research showing that in 2023 parents found it harder to keep up with their teens’ online activities, and that parents’ trust in their teens to act responsibly online faltered. The research covered all devices and platforms, not just Snapchat.

The results are part of Snap’s ongoing research into Generation Z’s digital well-being and mark the second reading of its annual Digital Well-Being Index (DWBI), a measure of how teens (aged 13-17) and young adults (aged 18-24) are faring online in six countries: Australia, France, Germany, India, the UK, and the U.S. The latest findings show that globally, parents’ trust in their teens to act responsibly online fell in 2023, with only four in 10 (43%) agreeing with the statement, “I trust my child to act responsibly online and don’t feel the need to actively monitor them.” That is six percentage points down from the 49% recorded in similar research in 2022. In addition, fewer minor-aged teenagers (13-to-17-year-olds) said they were likely to seek help from a parent or trusted adult after experiencing an online risk, a drop of five percentage points to 59% from 64% in 2022. Half of parents (50%) said they were unsure about the best ways to actively monitor their teen’s online activities.

Snapchat has a unique and highly engaged audience in the MENA region, with over 90% of 13-34 year-olds active on the platform in KSA and one in three 18-34 year-olds reached in the UAE. With such an active younger audience, Snap Inc. continues to use these and other research findings to inform its product and feature design and development, including Snapchat’s Family Center. Launched in 2022, Family Center is a suite of parental tools designed to give parents and caregivers insight into who their teens are messaging on Snapchat, while preserving teens’ privacy by not disclosing the actual content of those communications.

Alongside Family Center, Snapchat continues to offer a host of additional safety features to protect young people online. By default, teens have to be mutual friends before they can start communicating with each other, and they aren’t able to have public profiles. Teens only show up as a “suggested friend” or in search results in limited instances, such as when they have friends in common, making it harder for strangers to find their profiles.

Snapchat also offers confidential, quick, and easy-to-use in-app reporting tools, so that Snapchatters can report anything they see that violates its terms. Snapchat’s global Trust and Safety teams work 24/7 to review reports, remove violating content, and take appropriate action.

Designed to be brand safe and to minimize the spread of harmful content, Snapchat also limits opportunities for potentially harmful content to ‘go viral’. All content on Spotlight and Discover is pre-moderated by both humans and automated systems, making for a safer experience.

To help remove accounts that market and promote age-inappropriate content, Snapchat introduced a new Strike System, under which any inappropriate content that is detected proactively or reported is immediately removed.


Tech News

MIDDLE EAST CONFLICT DRIVING A SURGE IN SCAMS, DEEPFAKES, AND GOVERNMENT IMPERSONATION

Cybercriminals don’t wait for the dust to settle. As conflict escalates across the Middle East, a parallel threat has emerged targeting ordinary people through their inboxes and social media feeds.

On 4 March, the UAE Ministry of Interior warned the public about fraudulent emails impersonating government emergency services, falsely claiming that residents must complete a mandatory registration form to receive state support or insurance coverage. The emails bore hallmarks of official government communications, making them convincingly deceptive. They are designed to exploit fear, urgency, and the instinct to comply with perceived authority. These messages are already circulating.

Alongside financial scams, verified fact-checkers have identified AI-generated and mislabelled footage circulating online as supposed evidence of attacks in the UAE. This includes video from Bahrain that was picked up by international media outlets and incorrectly broadcast as a Dubai drone strike. Fabricated videos of the Burj Khalifa collapsing, AI-generated missile strike imagery, and decade-old footage repackaged as current events have also circulated widely. In another example, a supposed “before and after” satellite image of Dubai showing smoke rising over the city was mislabelled — the image was actually from Sharjah, the neighbouring emirate. In many cases, the content spread faster than the corrections. Dubai Police have warned that sharing unverified information can carry criminal penalties under UAE law, including fines of no less than AED 200,000. Despite these warnings, the flow of misleading content has not slowed.

KnowBe4 warns that, based on patterns observed during previous conflicts and crises, including the war in Ukraine and the COVID-19 pandemic, the public should also expect charity and donation scams exploiting humanitarian concern, phishing emails disguised as embassy or government alerts, and deepfake imagery engineered to provoke fear or spread disinformation.

Dr. Martin Kraemer, CISO Advisor at KnowBe4, said: “Crises are the most reliable recruitment tool bad actors have. When people are frightened and searching for information, they are not necessarily looking for the truth. They are looking for confirmation of what they already fear. That is exactly what scammers and disinformation actors exploit. What we are seeing right now, fake government emergency emails, mislabelled footage, AI-generated imagery, is not random. It is targeted, and it is designed to exploit the gap between what people feel and what they know. The antidote is not panic. It is discipline: pause, question the source, and go directly to official channels before acting on anything. That’s precisely how governments and organizations are educating people to react in stressful situations.”

What the Public Can Do Right Now

KnowBe4 urges residents, travellers, and anyone following events in the region to apply the following principles:

  • Treat urgency as a warning sign. Any message that pressures you to act quickly (“register now”, “donate immediately”, “confirm your details before midnight”) is likely designed to stop you thinking clearly.
  • Verify before you share. Before forwarding footage or information, check whether it has been verified by a reputable news outlet or official source. Reverse image searches take seconds and can prevent significant harm.
  • Go directly to official sources. If you receive communications claiming to be from a government ministry, embassy, or emergency service, navigate directly to their official website rather than clicking any link in the message.
  • Question what you see. AI-generated imagery has reached a level of quality where video alone is no longer reliable evidence. Look for verification from multiple credible sources before drawing conclusions.
  • Report suspicious communications. In the UAE, suspected scam emails or messages should be reported to the relevant authorities. Do not engage with the sender.
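The first principle above, treating pressure language as a red flag, can be illustrated with a very naive filter. This is only a sketch of the idea; the phrase list and threshold are assumptions for illustration, not an actual KnowBe4 rule set, and no real filter would rely on keywords alone.

```python
# Illustrative only: a naive urgency-phrase filter. The phrase list and
# threshold are assumptions, not a production scam-detection rule set.
URGENCY_PHRASES = [
    "act now", "register now", "donate immediately",
    "before midnight", "confirm your details", "mandatory registration",
]

def urgency_score(message: str) -> int:
    """Count how many pressure phrases appear in a message."""
    text = message.lower()
    return sum(phrase in text for phrase in URGENCY_PHRASES)

def is_suspicious(message: str, threshold: int = 1) -> bool:
    """Treat any message containing pressure language as worth a second look."""
    return urgency_score(message) >= threshold
```

In practice such heuristics would only be one signal among many (sender verification, link inspection, reporting history), but they capture the point: urgency itself is the tell.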

Tech Features

WHY SECURITY MUST EVOLVE FOR THE HYBRID HUMAN-AI WORKFORCE

By Javvad Malik, Lead CISO Advisor at KnowBe4

There is a specific moment in every security professional’s career when they realise the traditional rulebook hasn’t just been ignored—it’s been torn to pieces. Mine arrived last week while watching a colleague engage in a debate with an AI agent over expense policy, while simultaneously being phished by what was almost certainly another AI posing as IT support.

For decades, the cybersecurity industry has clung to a comfortable, binary premise: humans work inside the walls, threats exist outside, and our job is to keep the two apart. It was a tidy worldview that made for excellent spreadsheets, even if we knew it was fiction.

Then, AI walked into the office without knocking. It’s a reboot of the classic 2010 iPad launch, when executives demanded their shiny new devices be connected to the corporate network, heralding the age of “Bring Your Own Disaster”.

The Multi-Species Workforce

The most uncomfortable truth facing modern organizations is that they no longer employ just humans.

Your current headcount includes Peter from Accounts Payable, his three AI assistants (two sanctioned, one very much ‘shadow’), a recruitment algorithm, and whatever experimental automation Marketing has hooked up to Slack to bypass a slow internal process.

They are all making decisions. And they are all sharing data.

When Peter’s AI hallucinates a rogue clause into a vendor agreement, or a chatbot leaks PII because a prompt-engineer asked nicely, where does the buck stop? Traditional security loves clean lines—User vs. Admin, Internal vs. External. But we are now operating in a world that has gone full analogue. We have created a workforce that is part human and part silicon, yet the risk remains entirely ours to manage.

The Futility of Punitive Security

Historically, we have managed security like a digital Alcatraz. If a user clicks a phishing link, we chastise them. If they use unapproved software, we discipline them.

But punishing people for being human is like shouting at water for being wet. It provides a few seconds of emotional release for the security team, but it doesn’t change the outcome. You cannot discipline your way to a secure culture, and you certainly cannot punish an AI agent into making safer choices.

So, what happens when your workforce is 60% human, 40% AI, and rising?

Navigating the Shadow AI Explosion

Shadow AI isn’t born from malice; it’s born from friction. Employees use unsanctioned tools because the approved versions are often slow, restrictive, and designed by people who think ‘user-friendly’ is a type of malware.

If your IT ticket for an AI request won’t be resolved until Q3 2027 but the free version of ChatGPT is open in a browser tab right now, the choice for a busy employee is a foregone conclusion.

To manage this hybrid reality, we need to view the workforce as a single, unified, complex adaptive system. Here is the framework for securing the blur:

  • Govern the Decision, Not the Entity: We need governance frameworks that apply to the action, regardless of whether the actor is carbon-based or cloud-hosted. If a human isn’t allowed to export customer data to a personal drive, their AI assistant shouldn’t be able to either.
  • Design for Invisible Perimeters: Assume you will never have 100% visibility again. Security must shift toward real-time behavioral monitoring and anomaly detection that tracks patterns across both human and machine activity.
  • Build Intuitive Culture, Not Just Compliance: You teach a child to cross the road by explaining traffic lights, not by screaming at them every time a car passes. The same applies here. You cannot train culture into an AI model, but you can design systems where humans and AI operate within a framework that makes security intuitive.
  • Treat Shadow AI as a Signal: If half your workforce is using unsanctioned AI, that isn’t a compliance failure—it’s a sign your current tools are failing your people.
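The first principle, governing the decision rather than the entity, can be sketched as a policy check that never inspects who (or what) the actor is, only what they are trying to do. The action names and policy table below are hypothetical, invented for illustration; the point is simply that the same rule fires for Peter and for his AI assistant.

```python
# A minimal sketch of "govern the decision, not the entity": the same
# policy check runs whether the actor is a human or an AI agent.
# The action names and policy table are illustrative assumptions.
from dataclasses import dataclass

# Policy keyed by action, not by actor: exporting customer data is only
# permitted to an approved destination, for everyone and everything.
POLICY = {
    "export_customer_data": {"destinations": {"approved_share"}},
}

@dataclass
class Actor:
    name: str
    kind: str  # "human" or "ai_agent" -- deliberately never consulted

def is_allowed(actor: Actor, action: str, destination: str) -> bool:
    """Permit an action only if policy allows it, regardless of actor kind."""
    rule = POLICY.get(action)
    if rule is None:
        return False  # default deny for ungoverned actions
    return destination in rule["destinations"]
```

Note that `actor.kind` exists but is never read by `is_allowed`: that is the design choice the framework argues for. If a human isn’t allowed to export customer data to a personal drive, neither is their assistant.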

The question is no longer if your workforce will become a hybrid of human and machine. It already is.

The real question is whether our security models will evolve to meet this reality, or if we will keep building expensive walls around a perimeter that vanished years ago. The workplace has changed; our job is to design security that works with human nature, rather than against it.


Tech News

ALTERYX ACCELERATES ITS NEXT PHASE OF GROWTH WITH AI-READY DATA AND AUTOMATION AT ENTERPRISE SCALE

Alteryx, a leading AI-ready data and analytics company, has announced its next phase of growth, surpassing $1 billion in ARR and powering more than 380 million automated workflows annually. As enterprises shift from AI experimentation to full-scale execution, demand for trusted automation and AI-ready data has never been higher. With Alteryx One, organizations are operationalizing AI responsibly and accelerating enterprise-scale decision-making.

Enterprises continue to invest heavily in AI, with 89% planning to maintain or increase spending in 2026, as generative and agentic AI technologies promise a transformative impact. Yet trust remains a critical barrier: 28% of organizations report limited or no confidence in the accuracy and quality of their data. In the UAE alone, 94% of data leaders say they lack complete visibility into AI decision-making processes. Reliable data and repeatable workflows have become the foundation for operationalizing AI successfully.

To address these challenges, Alteryx One brings this strategy together in a single platform, trusted by thousands of customers, that connects data, business context, and AI for insights.

Scaling AI and Automation with Alteryx One

McKinsey & Company puts AI adoption at roughly 84% of surveyed organizations in the Middle East. Against this backdrop, data remains the defining factor. According to Alteryx research, nearly half (49%) of leaders cite high-quality, accessible, and well-governed data as the top requirement for AI to reach its full potential. To meet this need, Alteryx One provides a trusted logic layer: a governed, repeatable workflow that captures business logic, preserves lineage, and produces AI-ready outputs.
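The idea of a workflow that captures business logic while preserving lineage can be sketched generically. This is not Alteryx’s API; the step names, record fields, and sample data below are invented to illustrate the pattern of pairing every transformation with an auditable lineage record.

```python
# Generic sketch of a governed, repeatable workflow step that records
# lineage metadata alongside each transformation. Not Alteryx's API;
# step names and fields are illustrative assumptions.
from datetime import datetime, timezone

def run_step(name, func, data, lineage):
    """Apply a transformation and append a lineage record for auditability."""
    result = func(data)
    lineage.append({
        "step": name,
        "rows_in": len(data),
        "rows_out": len(result),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return result

lineage = []
rows = [{"amount": 10}, {"amount": -3}, {"amount": 7}]
rows = run_step("drop_negative", lambda d: [r for r in d if r["amount"] >= 0], rows, lineage)
rows = run_step("double", lambda d: [{"amount": r["amount"] * 2} for r in d], rows, lineage)
```

Because every step appends to the lineage log, the pipeline is repeatable and its outputs can be traced back through each transformation, which is the property the "trusted logic layer" is meant to guarantee.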

Adoption of Alteryx One is accelerating, with thousands of customers upgrading to the new, simplified edition pricing model, making it easier to access advanced AI and automation capabilities. Built-in enterprise security and governance provide the controls organizations need to scale. By seamlessly connecting to enterprise data sources, AI models, and business applications, Alteryx One delivers trusted, governed data wherever it’s needed. 

Andy MacMillan, CEO of Alteryx, said: “When automation becomes agentic, inconsistency is no longer just inefficient. It becomes an enterprise risk. AI requires a governed and repeatable logic layer. Without that foundation, organizations don’t just move faster — they scale risk faster than productivity. Alteryx is purpose-built for this next phase, giving enterprises the control, transparency, and confidence to operationalize AI, and giving lines of business the flexibility they need to adapt and change.”

In 2025, Alteryx also celebrated 10 years of its global Community, which now includes more than 750,000 members worldwide. Community members have shared thousands of peer-driven solutions, workflows, and best practices, helping organizations accelerate onboarding, scale analytics initiatives faster, and maximize the value of Alteryx One.

Automation at Enterprise Scale

The need for reliable, scalable automation has never been more evident. In 2025, Alteryx customers executed more than 380 million automated workflows, up from more than 260 million in 2023, highlighting how organizations are moving beyond experimentation to governed, enterprise-wide automation that operationalizes analytics. 

Alteryx enables organizations to extend automation into new generative AI use cases while maintaining explainable, auditable outputs aligned with enterprise compliance standards. Users can interact with data using natural language, accelerate model development, and embed AI-driven insights directly into trusted workflows — helping organizations scale innovation without sacrificing control.

Business Performance 

In 2025, the company surpassed $1 billion in ARR, signaling strong enterprise adoption and long-term customer commitment. Alteryx was also recognized in G2’s 2026 Best Software Awards for Best Analytics Software Products.

In parallel, Alteryx has expanded its cloud data platform ecosystem, including a deepened partnership with Google Cloud that enables customers to work directly with cloud-scale data and accelerate analytics and AI initiatives in modern cloud environments.

The company also introduced a refreshed brand identity reflecting its evolution into a unified platform for AI-powered analytics and enterprise-scale automation. With Alteryx One at the center, the company is redefining how enterprises scale AI and automation responsibly, providing the trusted foundation needed to drive intelligent outcomes.



Copyright © 2023 | The Integrator