5 Learnings from the First-Ever Gartner Market Guide for Guardian Agents – CYBERDEFENSA.MX

On February 25, 2026, Gartner published its inaugural Market Guide for Guardian Agents, marking an important milestone for this emerging category. For those unfamiliar with the various Gartner report types, “a Market Guide defines a market and explains what clients can expect it to do in the short term. With the focus on early, more chaotic markets, a Market Guide does not rate or position vendors within the market, but rather more commonly outlines attributes of representative vendors that are providing offerings in the market to give further insight into the market itself.”

And if Guardian Agent is an unfamiliar term, Gartner defines it quite simply. “Guardian agents supervise AI agents, helping ensure agent actions align with goals and boundaries.” Enterprise security and identity leaders can request a limited distribution copy of the Gartner Market Guide for Guardian Agents.

Learning 1: Why Guardian Agent technology is important

One need only read the news, in The Wall Street Journal, the Financial Times, Forbes, Bloomberg, and beyond, to see that AI agents have arrived. But Team8’s 2025 CISO Village Survey quantified it, finding that:

  • Nearly 70% of enterprises already run AI agents (any system that can answer and act) in production.
  • Another 23% are planning deployments in 2026.
  • Two-thirds are building them in-house. 

However, in the market guide, Gartner asserts that this fast enterprise adoption is outpacing traditional governance controls. This raises the risk that “as AI agents become more autonomous and embedded in critical workflows, the risks of operational failure and noncompliance escalate.”

We concur; the recent cloud provider outages stemming from autonomous AI agent actions did not surprise us. What we see across early adoption is that, even more so than traditional service accounts, AI agent deployment creates identity dark matter: the invisible and unmanaged layer of identity. It includes local credentials offered for authentication, the never-expiring tokens that are easily forgotten, the full-permission access granted regardless of user or job, and more.
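
To make the idea concrete, here is a minimal sketch (ours, not Gartner’s) of what mechanically auditing for identity dark matter can look like; the record fields and account IDs are hypothetical, not drawn from any specific IAM product:

```python
# Hypothetical credential inventory records; the field names and account
# IDs are illustrative only.
CREDENTIALS = [
    {"id": "svc-reporting", "owner": None, "expires": None, "scopes": ["*"]},
    {"id": "agent-billing", "owner": "j.doe", "expires": "2026-09-01",
     "scopes": ["invoices:read"]},
]

def dark_matter_findings(cred):
    """Return the reasons a credential counts as identity dark matter."""
    findings = []
    if cred["owner"] is None:
        findings.append("orphaned: no accountable owner")
    if cred["expires"] is None:
        findings.append("never-expiring token")
    if "*" in cred["scopes"]:
        findings.append("full-permission grant")
    return findings

for cred in CREDENTIALS:
    for finding in dark_matter_findings(cred):
        print(f"{cred['id']}: {finding}")
```

In practice the inventory would come from your identity provider or cloud IAM APIs; the point is that orphaned ownership, missing expiry, and wildcard scopes are each detectable by machine, not just by incident.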

Not only that: as we shared in our piece on “Lazy LLMs,” AI agents are, by design, shortcut seekers, always looking for the most efficient path to return a satisfactory outcome to each prompt. In doing so, they often exploit identity dark matter (orphaned or dormant accounts and loose tokens, usually with local clear-text credentials and excessive privileges) that lets them reach the “end of job,” regardless of whether they should have been allowed to do so. This is how unintended, even unimaginable, incidents arise.

As if that weren’t enough business risk, we note that the 2026 CrowdStrike Global Threat Report goes one step further, sharing that “Adversaries are also actively exploiting AI systems themselves, injecting malicious prompts into GenAI tools at more than 90 organizations and abusing AI development platforms.”

To learn more about how AI agents both expand what we call “Identity Dark Matter” and even exploit it themselves, check out our previous article in The Hacker News.

Learning 2: Core capabilities of Guardian Agents

So, having established the need for AI agent supervision, the next question for us becomes how, technically, to address that need. This is where, in our opinion, Gartner is extremely valuable: looking across the market and its vendors to understand what is possible, then winnowing that down to what is most valuable, given the problem to be solved.

The market guide outlines mandatory features in 3 core areas:

  1. AI Visibility and Traceability: Can you see and follow the actions of each AI agent? 
  2. Continuous Assurance and Evaluation: How do you retain confidence that agents remain secure from compromise and compliant in action? 
  3. Runtime Inspection and Enforcement: “ensure that AI agents’ actions and outputs match defined intentions, goals, and governance policies, preventing unintended behaviors.”

The market guide details nine features across these three core areas. Many of them have helped shape the five principles we believe underpin secure (and productive) use of AI agents.

  1. Pair AI Agents with Human Sponsors: It is our belief that every agent should not only be identified and monitored, but also tied to an accountable human operator. 
  2. Dynamic, Context-Aware Access: We believe AI agents should not hold standing, permanent privileges. Their entitlements should be time-bound, session-aware, and limited to least privilege.
  3. Visibility and Auditability: In our view, visibility isn’t just “we logged it.” You need to tie actions to data reach: what the agent accessed, what it changed, what it exported, and whether that action touched regulated or sensitive datasets. 
  4. Governance at Enterprise Scale: In our minds, AI agent adoption should extend across both new and legacy systems within a single, consistent governance fabric, so that security, compliance, and infrastructure teams are not working in silos. 
  5. Commitment to Good IAM Hygiene: As with all identities, authentication flows, authorization permissions, and implemented controls, strong hygiene, on the application server as well as the MCP server, is critical to keep every user within the proper bounds.
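
Principles 1 and 2 above can be sketched in a few lines. This is an illustrative model of sponsored, time-bound grants, not any vendor’s API; the function names and fields are ours:

```python
from datetime import datetime, timedelta, timezone

def issue_agent_grant(agent_id, sponsor, scopes, ttl_minutes=15):
    """Issue a short-lived, least-privilege grant for an AI agent.

    Every grant is tied to a human sponsor and expires quickly, so no
    agent ever holds standing privileges.
    """
    if not sponsor:
        raise ValueError("every agent grant requires a human sponsor")
    now = datetime.now(timezone.utc)
    return {
        "agent": agent_id,
        "sponsor": sponsor,
        "scopes": set(scopes),  # explicit, enumerated scopes only
        "issued_at": now,
        "expires_at": now + timedelta(minutes=ttl_minutes),
    }

def grant_allows(grant, scope, at):
    """A requested action passes only if it is in scope and unexpired."""
    return scope in grant["scopes"] and at < grant["expires_at"]
```

The design choice worth noting: refusing to mint a grant without a sponsor makes accountability structural rather than a documentation convention, and the short TTL forces entitlements to be re-justified rather than accumulated.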

Learning 3: Different vendor approaches to Guardian Agents

That said, even when vendors try to address the same Guardian Agent requirements, they often solve the problem using very different architectural models.

Gartner outlines six emerging delivery and integration approaches, which, for adopters, matter more than they may first appear. These are not just packaging choices. They determine where control lives, how much visibility you actually get, how enforceable the policy is, and how much of your agent estate will fall outside coverage.

Here is our quick take on each model:

  • Standalone Oversight Platforms are typically the easiest place to start. They collect logs, telemetry, and events into one place and can provide meaningful posture visibility, auditability, and analysis. But many of these platforms still lean more toward observation than intervention. That is useful, but it is not the same as control. If your AI risk posture depends on stopping bad actions before they happen, visibility alone will not be enough.
  • AI/MCP Gateways are the most intuitive model: put a control point in the middle and force agent traffic through it. That can create a powerful centralized layer for monitoring and policy enforcement across multiple agents. But it only works if traffic actually goes through that layer. In practice, gateways can become both a bottleneck and a false comfort. If teams bypass them, or if agent interactions happen outside the governed path, visibility breaks down quickly.
  • Embedded or In-Line Run-Time Modules sit closer to execution, inside the agent platform, an AI management platform, or an LLM proxy. That makes them appealing because they are often easier to turn on and can act with more immediacy. The downside is that they are usually platform-bound. They govern the environment they live in, not the broader enterprise. For adopters, that means great local control, but weak enterprise-wide consistency if your agents span multiple stacks.
  • Orchestration Layer Extensions are attractive in environments where orchestration already acts as the operating layer for multi-agent workflows. They can add policy, visibility, and oversight at the workflow level. But they also assume orchestration is where meaningful control should sit. That is only true if the organization actually runs its agents through a common orchestration layer. Many will not. So for adopters, this model is powerful in the right architecture and irrelevant in the wrong one.
  • Hybrid Edge – Cloud Models are where things start to get more realistic. As Gartner notes, these are becoming more important as agent ecosystems become more endpoint-centric. This model spreads oversight between local execution environments and cloud analysis, which can reduce latency and improve runtime relevance. For adopters, the value is clear: it avoids over-centralizing everything in one choke point. But it also raises the complexity bar. Distributed governance is stronger in theory, but harder to implement well. 
  • Coordination Mechanisms (standards, APIs, and hooks) are less a deployment model than the connective tissue between them. And today, that tissue is immature. Gartner is explicit that integration across AI agent platforms remains difficult because standard interfaces are still lacking. That means adopters should be careful not to mistake “supports standards” for “works seamlessly in production.” The coordination layer is necessary, but it is not yet mature enough to be treated as solved.
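
As a concrete illustration of the gateway model’s core trade-off (strong control on the governed path, zero coverage off it), here is a toy policy choke point; the agent IDs and tool names are made up:

```python
# Per-agent allowlists; an agent may only invoke the tools policy grants it.
POLICY = {
    "agent-support": {"tickets.read", "tickets.comment"},
    "agent-billing": {"invoices.read"},
}

AUDIT_LOG = []  # every decision is recorded, allowed or denied

def gateway(agent_id, tool_call):
    """Allow a tool call only if policy permits it, and log the decision."""
    allowed = tool_call in POLICY.get(agent_id, set())
    AUDIT_LOG.append({"agent": agent_id, "tool": tool_call, "allowed": allowed})
    return allowed
```

Anything that bypasses `gateway()` never touches the policy or the audit log, which is exactly the blind spot described above when teams route agent traffic around the governed path.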

Regardless of technical approach, Gartner gives clear guidance about the need for something more than the governance of individual AI agents built into a single cloud provider, identity tool, or AI platform. Specifically, they call out the following:

“A neutral, trusted guardian agent layer with multiple guardian agents performing separate but integrated oversight functions enforces routing across all providers. Thus, the guardian agent acts as the missing universal enforcement mechanism.”

Learning 4: Guardian Agents Will Become an Independent Layer of Enterprise Control

Perhaps the most important long-term takeaway for us from the Market Guide is that Guardian Agents will not simply be another feature embedded in AI platforms. As we read it, Gartner is quite explicit: “enterprises will require independent guardian agent layers that operate across clouds, platforms, identity systems, and data environments.”

Why? Because AI agents themselves do not live in one place.

Agents interact with APIs, applications, data repositories, infrastructure, and even other agents across multiple environments. A cloud provider may be able to supervise agents running inside its own ecosystem, but once those agents call tools, delegate tasks, or operate across providers, no single platform can enforce governance alone.

That is why we believe Gartner argues that organizations will increasingly deploy enterprise-owned guardian agent layers that sit above individual platforms and supervise agents across the full enterprise environment.

In other words, governance cannot live only inside the platforms that create or host AI agents. It needs to live above them.

Put simply: the future of agent governance will not be platform-native supervision. It will be enterprise-owned oversight. And the organizations that adopt that architecture early will be far better positioned to scale agentic AI safely, without introducing a new generation of invisible automation risk across their infrastructure, data, and identities.

Learning 5: There is Still Time, But Not Forever

For all of the excitement about AI agents and the big brand news stories about them replacing jobs, the Guardian Agent market is still early. According to Gartner, “Today, guardian agent deployments are mainly prototypes or pilots, although advanced organizations are already using early versions of them to supervise AI agents.” 

But it’s coming fast. They note that “the guardian agent market — encompassing technologies for the oversight, security, and governance of autonomous AI agents — is entering a phase of accelerated growth, underpinned by the rapid adoption of agentic AI across industries.”

Frankly, we would make a similar statement about the agentic market overall. Yes, we have implemented AI agents within Orchid, the company and the product. But organizations, ourselves included, are just scratching the surface of what’s possible. Have individual employees started using their own personal AI agents? Yes. Do many technology vendors offer built-in AI agents, beyond the simple chatbot? Yes. Have some of the earliest adopters implemented a corporate standard platform to augment or replace jobs? Yes (but said with some skeptical hesitation).

However, as the saying goes, it’s too late to bar the door once the horse is out of the barn. Orchid Security recommends that you establish AI agent visibility sooner rather than later, and ensure that the same identity and access management guardrails and governance required for human users are in place to guide their AI companions.

The Bottom Line (We Will Say it Again)

AI agents are here. They are already changing how enterprises operate.

The challenge is not whether to use them, but how to govern them.

Safe adoption of AI agents requires applying the same principles that identity practitioners know well (least privilege, lifecycle management, and auditability) to a new class of non-human identities.

If identity dark matter is the sum of what we can’t see or control, then unmanaged AI agents may become its fastest-growing source if left unchecked. The organizations that act now to bring them into the light will be the ones that can move quickly with AI without sacrificing trust, compliance, or security. That’s why Orchid Security is building identity infrastructure to eliminate dark matter and make agentic AI safe to deploy at enterprise scale.

Request the limited availability Gartner Market Guide for Guardian Agents to come to your own learnings about AI agents and their guardians.

This article is a contributed piece from one of our valued partners.

Chrome 0-Days, Router Botnets, AWS Breach, Rogue AI Agents & More

Some weeks in security feel normal. Then you read a few tabs and get that immediate “ah, great, we’re doing this now” feeling.

This week has that energy. Fresh messes, old problems getting sharper, and research that stops feeling theoretical real fast. A few bits hit a little too close to real life, too. There’s a good mix here: weird abuse of trusted stuff, quiet infrastructure ugliness, sketchy chatter, and the usual reminder that attackers will use anything that works.

Scroll on. You’ll see what I mean.

⚡ Threat of the Week

Google Patches 2 Actively Exploited Chrome 0-Days — Google released security updates for its Chrome web browser to address two high-severity vulnerabilities that it said have been exploited in the wild. The flaws comprise an out-of-bounds write in the Skia 2D graphics library (CVE-2026-3909) and an inappropriate implementation in the V8 JavaScript and WebAssembly engine (CVE-2026-3910), which could result in out-of-bounds memory access or code execution, respectively. Google did not share additional details about the flaws, but acknowledged that exploits exist for both of them. The issues were addressed in Chrome versions 146.0.7680.75/76 for Windows and Apple macOS, and 146.0.7680.75 for Linux.

🔔 Top News

  • Meta to Discontinue Instagram E2EE in May 2026 — Meta announced plans to discontinue support for end-to-end encryption (E2EE) for chats on Instagram after May 8, 2026. In a statement shared with The Hacker News, a Meta spokesperson said, «Very few people were opting in to end-to-end encrypted messaging in DMs, so we’re removing this option from Instagram in the coming months. Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp.»
  • Authorities Disrupt SocksEscort Service — A court-authorized international law enforcement operation dismantled a criminal proxy service named SocksEscort that enslaved thousands of residential routers worldwide into a botnet for committing large-scale fraud. «The malware allowed SocksEscort to direct internet traffic through the infected routers. SocksEscort sold this access to its customers,» the U.S. Justice Department said. The main thing to note here is that SocksEscort was powered by AVrecon, a malware written in C to explicitly target MIPS and ARM architectures via known security flaws in edge network devices. The malware also featured a novel persistence mechanism that involved flashing custom firmware, which intentionally disables future updates, permanently transforming SOHO routers into SocksEscort proxy nodes to blindside corporate monitoring.
  • UNC6426 Exploits nx npm Supply Chain Attack to Gain AWS Admin Access in 72 Hours — A threat actor known as UNC6426 leveraged keys stolen following the supply chain compromise of the nx npm package in August 2025 to completely breach a victim’s AWS environment within 72 hours. UNC6426 used the access to abuse the GitHub-to-AWS OpenID Connect (OIDC) trust and create a new administrator role in the cloud environment, Google said. Subsequently, this role was abused to exfiltrate files from the client’s Amazon Web Services (AWS) Simple Storage Service (S3) buckets and perform data destruction in their production cloud environments.
  • KadNap Enslaves Network Devices to Fuel Illegal Proxy — A takedown-resistant botnet comprising more than 14,000 routers and other network devices has been conscripted into a proxy network that anonymously ferries traffic used for cybercrime. The botnet, named KadNap, exploits known vulnerabilities in Asus routers (among others), leveraging the initial access to drop shell scripts that reach out to a peer-to-peer network based on Kademlia for decentralized control. Infected devices are being used to fuel a proxy service named Doppelganger that, for a fee, tunnels customers’ internet traffic through residential IP addresses, offering a way for attackers to blend in and make it harder to differentiate malicious traffic from legitimate activity.
  • APT28 Strikes with Sophisticated Toolkit — The Russian threat actor known as APT28 has been observed using a bespoke toolkit in recent cyber espionage campaigns targeting Ukrainian cyber assets. The primary components of the toolkit are two implants, one of which employs techniques from a malware framework the threat actor used in the 2010s, while the other is a heavily modified version of the COVENANT framework for long-term spying. COVENANT is used in concert with BEARDSHELL to facilitate data exfiltration, lateral movement, and execution of PowerShell commands. Also alongside these tools is a malware named SLIMAGENT that shares overlaps with XAgent.

‎️‍🔥 Trending CVEs

New vulnerabilities show up every week, and the window between disclosure and exploitation keeps getting shorter. The flaws below are this week’s most critical — high-severity, widely used software, or already drawing attention from the security community.

Check these first, patch what applies, and don’t wait on the ones marked urgent — CVE-2026-3909, CVE-2026-3910, CVE-2026-3913 (Google Chrome), CVE-2026-21666, CVE-2026-21667, CVE-2026-21668, CVE-2026-21672, CVE-2026-21708, CVE-2026-21669, CVE-2026-21671 (Veeam Backup & Replication), CVE-2026-27577, CVE-2026-27493, CVE-2026-27495, CVE-2026-27497 (n8n), CVE-2026-26127, CVE-2026-21262 (Microsoft Windows), CVE-2019-17571, CVE-2026-27685 (SAP), CVE-2026-3102 (ExifTool for macOS), CVE-2026-27944 (Nginx UI), CVE-2025-67826 (K7 Ultimate Security), CVE-2026-26224, CVE-2026-26225 (Intego X9), CVE-2026-29000 (pac4j-jwt), CVE-2026-23813 (HPE Aruba Networking AOS-CX), CVE-2025-12818 (PostgreSQL), CVE-2026-2413 (Ally WordPress plugin), CVE-2026-0953 (Tutor LMS Pro WordPress plugin), CVE-2026-25921 (Gogs), CVE-2026-2833, CVE-2026-2835, CVE-2026-2836 (Cloudflare Pingora), CVE-2026-24308 (Apache ZooKeeper), CVE-2026-3059, CVE-2026-3060, CVE-2026-3989 (SGLang), CVE-2026-0231 (Palo Alto Networks Cortex XDR Broker VM), CVE-2026-20040, CVE-2026-20046 (Cisco IOS XR Software), CVE-2025-65587 (graphql-upload-minimal), CVE-2026-3497 (OpenSSH), CVE-2026-26123 (Microsoft Authenticator for Android and iOS), and CVE-2025-61915 (CUPS).

🎥 Cybersecurity Webinars

  • Stop Guessing: Automate Your Defense Against Real-World Attacks → Learn how to move beyond basic security checklists by using automation to test your defenses against real-world attacks. Experts will show you why traditional testing often fails and how to use continuous, data-driven tools to find and fix gaps in your protection. You will learn how to prove your security actually works without increasing your manual workload.
  • Fix Your Identity Security: Closing the Gaps Before Hackers Find Them → This webinar covers a new study about why many companies are struggling to keep their user accounts and digital identities safe. Experts share findings from the Ponemon Institute on the biggest security gaps, such as disconnected apps and the new risks created by AI. You will learn simple, practical steps to fix these problems and get better control over who has access to your company’s data.
  • The Ghost in the Machine: Securing the Secret Identities of Your AI Agents → As artificial intelligence (AI) begins to act on its own, businesses face a new challenge: how to give these «AI agents» the right digital IDs. This webinar explains why current security for humans doesn’t work for autonomous bots and how to build a better system to track what they do. You will learn simple, real-world steps to give AI agents secure identities and clear rules, ensuring they don’t accidentally expose your private company data.

📰 Around the Cyber World

  • Fake Google Security Check Drops Browser RAT — A web page mimicking a Google Account security page has been spotted delivering a fully featured browser-based surveillance toolkit that takes the form of a Progressive Web App (PWA). «Disguised as a routine security checkup, it walks victims through a four-step flow that grants the attacker push notification access, the device’s contact list, real-time GPS location, and clipboard contents—all without installing a traditional app,» Malwarebytes said. «For victims who follow every prompt, the site also delivers an Android companion package introducing a native implant that includes a custom keyboard (enabling keystroke capture), accessibility-based screen reading capabilities, and permissions consistent with call log access and microphone recording.»
  • Forbidden Hyena Delivers BlackReaperRAT — A hacktivist group known as Forbidden Hyena (aka 4B1D) has distributed RAR archives in December 2025 and January 2026 in attacks targeting Russia that led to the deployment of a previously undocumented remote access trojan called BlackReaperRAT and an updated version of the Blackout Locker ransomware, referred to as Milkyway by the threat actors. BlackReaperRAT is capable of running commands via «cmd.exe,» uploading/downloading files, spawning an HTTP shell to receive commands, and spreading the malware to connected removable media. «It carries out destructive attacks against organizations across various sectors located within the Russian Federation,» BI.ZONE said. «The group publishes information regarding successful attacks on its Telegram channel. It collaborates with the groups Cobalt Werewolf and Hoody Hyena.»
  • Chinese Hackers Target the Persian Gulf region with PlugX — A China-nexus threat actor, suspected to be Mustang Panda, has targeted countries in the Persian Gulf region. The activity took place within the first 24 hours of the ongoing conflict in the Middle East late last month. The campaign used a multi-stage attack chain that ultimately deployed a PlugX backdoor variant. «The shellcode and PlugX backdoor used obfuscation techniques such as control flow flattening (CFF) and mixed boolean arithmetic (MBA) to hinder reverse engineering,» Zscaler said. «The PlugX variant in this campaign supports HTTPS for command-and-control (C2) communication and DNS-over-HTTPS (DOH) for domain resolution.»
  • Phishing Campaign Uses SEO Poisoning to Steal Data — A phishing campaign has employed SEO poisoning to direct search engine results to fake traffic ticket portals that impersonate the Government of Canada and specific provincial agencies. «The campaign lures victims to a fake ‘Traffic Ticket Search Portal’ under the pretense of paying outstanding traffic violations,» Palo Alto Networks Unit 42 said. «Submitted data includes license plates, address, date of birth, phone/email, and credit card numbers.» The phishing pages utilize a «waiting room» tactic where the victim’s browser polls the server every two seconds and triggers redirects based on specific status codes.
  • Roundcube Exploitation Toolkit Discovered — Hunt.io said it discovered a Roundcube exploitation toolkit on an internet-exposed directory on 203.161.50[.]145. It’s worth noting that Russian threat actors like APT28, Winter Vivern, and TAG-70 have repeatedly targeted Roundcube vulnerabilities to breach Ukrainian organizations. «The directory included development and production XSS payloads, a Flask-based command-and-control server, CSS-injection tooling, operator bash history, and a Go-based implant deployed on a compromised Ukrainian web application,» the company said, attributing it with medium to high confidence to APT28, citing overlaps with Operation RoundPress. The toolkit, dubbed Roundish, supports credential harvesting, persistent mail forwarding, bulk email exfiltration, address book theft, and two-factor authentication (2FA) secret extraction, mirroring a feature present in MDAEMON. One of the primary targets of the attack is mail.dmsu.gov[.]ua, a Roundcube webmail instance associated with Ukraine’s State Migration Service (DMSU). Besides the possibility of a shared development lineage, Roundish introduces four new components not previously documented in APT28 webmail activity, including a CSS-based side-channel module, browser credential stealer, and a Go-based backdoor that provides persistence via cron, systemd, and SELinux. The CSS injection component is designed to progressively extract characters from Roundcube’s document object model (DOM) without injecting any JavaScript into the victim’s page. The technique is likely used for targeting Cross-Site Request Forgery (CSRF) tokens or email UIDs. Central to the Roundish toolkit is an XSS payload that’s engineered to steal the victim’s email address, harvest account credentials, redirect all incoming emails to a Proton Mail address, export mailbox data from the victim’s Inbox and Sent folders, and gather the victim’s complete address book. 
«The combination of hidden autofill credential harvesting, server-side mail forwarding persistence, bulk mailbox exfiltration, and browser credential theft reflects a modular approach designed for sustained access,» Hunt.io said. «From a defensive perspective, password resets alone are not sufficient in cases like this. Mail forwarding rules, Sieve filters, and multi-factor authentication secrets must be audited and reset.»
  • Phishing Campaign Targeting AWS Console Credentials — An active adversary-in-the-middle (AiTM) phishing campaign is using fake security alert emails to steal AWS Console credentials, per Datadog. «The phishing kit proxies authentication to the legitimate AWS sign-in endpoint in real time, validating credentials before redirecting victims and likely capturing one-time password (OTP) codes,» the company said. «This campaign does not exploit AWS vulnerabilities or abuse AWS infrastructure.» Post-compromise console access has been observed within 20 minutes of credential submission. These efforts originated from Mullvad VPN infrastructure.
  • Malicious npm Packages Deliver Cipher Stealer — Two new malicious npm packages, bluelite-bot-manager and test-logsmodule-v-zisko, were found to deliver, via Dropbox, a Windows executable named Cipher stealer that siphons sensitive data from compromised hosts, including Discord tokens, credentials from Chrome, Edge, Opera, Brave, and Yandex browsers, and seed files from cryptocurrency wallet apps like Exodus. «The stealer also uses an embedded Python script and a secondary payload downloaded from GitHub,» JFrog said.
  • GIBCRYPTO Ransomware Detailed — A new ransomware called GIBCRYPTO comes with the ability to capture keystrokes and corrupt the Master Boot Record (MBR) so that any attempt to restart the system results in an error. The ransomware uses the Salsa20 algorithm for encryption. It’s suspected to be part of Snake Keylogger, indicating the malware authors’ attempts to diversify beyond information theft. The development comes as Sygnia highlighted SafePay’s OneDrive-based data exfiltration technique during a ransomware attack after breaching a victim by leveraging a FortiGate firewall flaw and a misconfigured administrative account. «SafePay gained initial access by exploiting a firewall misconfiguration, which enabled them to obtain local administrative credentials,» the company said. «They rapidly escalated discovery and enumeration activities to identify high-value targets for lateral movement, demonstrating a structured and methodical approach to mapping the environment. Within a matter of hours, SafePay escalated to domain administrator access.» The attack culminated in the deployment of ransomware, encrypting more than 60 servers.
  • Fraudulent Account Registration Activity Originating from Vietnam — A sprawling cybercrime ecosystem based in Vietnam has been linked to a cluster of fraudulent account registration activity on platforms like LinkedIn, Instagram, Facebook, and TikTok. In these attacks, attributed to O-UNC-036, the threat actors rely on disposable email addresses in order to execute SMS pumping attacks, also called International Revenue Sharing Fraud (IRSF). «In this scheme, malicious actors automate the creation of puppet accounts in a targeted service provider,» Okta said. «Fraudsters use these account registrations to trigger SMS messages to premium rate phone numbers and profit from charges incurred. This activity can prove costly for service providers who use SMS to verify registration information in customer accounts or to send multi-factor authentication (MFA) security codes.» O-UNC-036 has also been linked to a cybercrime-as-a-service (CaaS) ecosystem that provides paid infrastructure and services to facilitate online fraud. The web-based storefronts are hosted in Vietnam and specialize in the sale of web-based accounts.
  • Hijacked AppsFlyer SDK Distributes Crypto Clipper — The AppsFlyer Web SDK was briefly hijacked to serve malicious code to steal cryptocurrency in a supply chain attack. The clipper malware payload came with capabilities to intercept cryptocurrency wallet addresses entered on websites and replace them with attacker-controlled addresses to divert funds to the threat actor. «The AppsFlyer Web SDK was observed serving obfuscated malicious JavaScript instead of the legitimate SDK from websdk.appsflyer[.]com,» Profero said. «The malicious payload appears to have been designed for stealth and compatibility, preserving legitimate SDK functionality while adding hidden browser hooks and wallet-hijacking logic.» The incident has since been resolved by AppsFlyer.
  • Operation CamelClone Targets Government and Defense Entities — A new cyber espionage campaign dubbed Operation CamelClone has targeted governments and defense entities in Algeria, Mongolia, Ukraine, and Kuwait using malicious ZIP archives that contain a Windows shortcut (LNK) file, which, when executed, delivers a JavaScript loader named HOPPINGANT. The loader then delivers additional payloads for establishing C2 and exfiltrating data to the MEGA cloud storage service. «One interesting aspect of this campaign is that the threat actor does not rely on traditional command-and-control infrastructure,» Seqrite Labs said. «Instead, the payloads are hosted on a public file-sharing service, filebulldogs[.]com, while stolen data is uploaded to MEGA storage using the legitimate tool Rclone.» The activity has not been attributed to any known threat group.
  • How Threat Actors Exfiltrate Credentials Using Telegram Bots — Threat actors are abusing the Telegram Bot API to exfiltrate data via text messages or arbitrary file uploads, highlighting how legitimate services can be weaponized to evade detection. Agent Tesla Keylogger is by far the most prominent example of a malware family that uses Telegram for C2. «In general, Telegram C2s appear to be most popular among information stealers, possibly due to Telegram’s technically legitimate nature and because information stealers typically only need to exfiltrate data passively rather than provide complex communications beyond simple message or file transfers,» Cofense said.
  • Microsoft Launches Copilot Health — Microsoft has become the latest company after OpenAI and Anthropic to launch a dedicated «secure space» called Copilot Health that integrates medical records, biometric data from wearables, and lab test results to give personalized advice in the U.S. «Copilot Health brings together your health records, wearable data, and health history into one place, then applies intelligence to turn them into a coherent story,» the company said. Like OpenAI and Anthropic, Microsoft emphasized that Copilot Health isn’t meant to replace professional medical care.
  • Rogue AI Agents Can Work Together to Engage in Offensive Behaviors — According to a new report from artificial intelligence (AI) security company Irregular, agents can work together to hack into systems, escalate privileges, disable endpoint protection, and steal sensitive data while evading pattern-matching defenses. What’s notable is that the experiment did not rely on adversarial prompting or deliberately unsafe system design. «In one case, an agent convinced another agent to carry out an offensive action, a form of inter-agent collusion that emerged with no external manipulation,» Irregular said. «This scenario demonstrates two compounding risks: inter-agent persuasion can erode safety boundaries, and agents can independently develop techniques to circumvent security controls. When an agent is given access to tools or data, particularly but not exclusively shell or code access, the threat model should assume that the agent will use them, and that it will do so in unexpected and possibly malicious ways.»

🔧 Cybersecurity Tools

  • Dev Machine Guard → It is a free, open-source tool that scans your computer to show you exactly what developer tools and scripts are running. It creates a simple list of your AI coding assistants, code editor extensions, and software packages to help you find anything suspicious or outdated. It is a single script that works in seconds to give you better visibility into the security of your local coding environment.
  • Trajan → It is an automated security tool designed to find hidden vulnerabilities in «service meshes,» which are the systems that manage how different parts of a large software application talk to each other. Because these systems are complex, it is easy for engineers to make small mistakes in the settings that allow hackers to bypass security or steal data. Trajan works by scanning these configurations to spot those specific errors and helping developers fix them before they can be exploited.

Disclaimer: For research and educational use only. Not security-audited. Review all code before use, test in isolated environments, and ensure compliance with applicable laws.

Conclusion

There’s a lot packed in here, and not in a neat way. Some of it is the usual recycled chaos, some of it feels a little more deliberate, and some of it has that nasty “this is going to show up everywhere by next week” energy.

Anyway, that’s a wrap. Everything above is the stuff worth your attention this week.