FEB 10TH, 2026

Unit 42 Threat Bulletin - February 2026

Unit 42's Threat Bulletin is back for the February edition, with fresh intelligence and expert perspective on the latest threats.

Every month, Unit 42 tracks how attacker behavior is evolving and where defensive assumptions are starting to break. This edition of the Threat Bulletin focuses on three shifts that are quietly changing how organizations should think about risk.

We look at how vibe coding is accelerating development while expanding the attack surface, why malicious QR codes are becoming harder to detect and easier to scale, and how a new class of browser-based attacks powered by LLMs is challenging traditional phishing detection models. Together, these trends point to a common theme: threats are moving faster, becoming more dynamic, and increasingly unfolding at runtime, not at the perimeter.

Our goal is simple: translate research into the insights security leaders actually need to make better decisions.

Mitch Mayne, Principal, Security Research

Intel and Insights

Kate Middagh, Senior Consulting Director at Unit 42, and Mike Spisak, Unit 42 R&D Leader, explain how vibe coding is reshaping software development. They also break down why AI-driven code is outpacing governance and creating new risk for security leaders.

Mitch Mayne: What are organizations actually doing with vibe coding today, and where does it most often go wrong?

Michael Spisak: Organizations are using vibe coding to accelerate development, often beyond formal governance. Developers and non-developers alike are generating code quickly, pulling in open-source libraries and external components without fully assessing risk.

Where it breaks down is trust and scale. Some teams assume these tools can be treated like junior developers or that existing security controls will catch issues later. But AI does not understand intent, business logic, or risk. It will generate insecure or dysfunctional code if that is the fastest path to an answer.

Kate Middagh: Visibility is another gap. Even when organizations try to block these tools, people still use them outside the environment and copy the results into approved systems. At that point, the risk shifts to the browser and identity layers, where many organizations lack consistent controls.

MM: Why does this matter now? What changes when AI starts writing production adjacent code faster than teams can review or govern it?

KM: Speed changes the risk model. AI can generate code faster than review and governance processes can keep up, so insecure logic can reach production before security teams see it.

MS: Many organizations assume existing controls will catch problems later. That assumption breaks down with AI. AI does not evaluate intent or business logic. It optimizes for speed and output, which creates gaps between development and security oversight.

KM: We have seen AI agents take destructive actions without understanding the consequences. When that happens at scale, the issue is no longer technical. It is governance. Organizations need to rethink how control and accountability work when machines participate directly in development workflows.

MM: What is the biggest misconception leaders have about the risk? What do they think is covered that usually is not?

KM: A common misconception is that AI can be treated like a junior developer and that existing security tools will catch problems automatically. That logic is dangerous. AI does not understand intent, context, or risk. It will generate insecure code if that is the fastest way to produce an answer.

MS: Another misconception is that blocking AI tools solves the problem. In practice, people still use these tools outside the environment and copy results into approved systems. That means risk often bypasses traditional controls and shows up in places organizations are not monitoring closely.

MM: If you had to give CISOs one or two concrete moves they should make this year to reduce risk without killing productivity, what would they be?

KM: Start with least privilege, least agency, and least functionality for AI tools. If you allow vibe coding at all, standardize on a single approved tool and block everything else. That creates a baseline you can govern without stopping innovation.

MS: Build an AI inventory and treat AI like an identity. Organizations need visibility into where AI is used and the ability to apply policy and access controls. Without that, governance is impossible, and risk will scale faster than productivity.
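The inventory-and-identity idea can be made concrete. The sketch below is a minimal illustration of the pattern Spisak and Middagh describe, not a real product feature: the tool names, scopes, and policy structure are all hypothetical.

```python
# Minimal sketch: an AI tool inventory where each tool is treated as an
# identity with explicit, least-privilege scopes. Names and scopes are
# hypothetical examples.

APPROVED_TOOLS = {
    # tool identity -> scopes it is allowed (least privilege / least agency)
    "approved-code-assistant": {"read:repo", "suggest:code"},
}

def evaluate_tool(tool_id: str, requested_scope: str) -> str:
    """Allow only an inventoried tool requesting a scope within its policy."""
    scopes = APPROVED_TOOLS.get(tool_id)
    if scopes is None:
        return "deny: tool not in inventory"
    if requested_scope not in scopes:
        return "deny: scope exceeds policy"
    return "allow"
```

Deny-by-default is the point: one approved tool forms the governable baseline, and anything outside the inventory, or any scope beyond policy, is refused rather than silently tolerated.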

Kate Middagh, Senior Consulting Director
Read more by Kate Middagh
Michael Spisak, Unit 42 R&D Leader
Read more by Michael Spisak
Video

CISO Unscripted

It’s time we started being more careful around QR codes. Unit 42 Senior Web Security Researcher Diva-Oriane Marty loops us in on how malicious QR codes can trigger logins inside trusted apps, initiate payments, and download malware to the victim’s device.
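Part of what makes QR codes dangerous is that the decoded payload is often not a plain web link but an app-specific URI that can trigger an action directly. The sketch below is an illustrative triage of a decoded payload string; the scheme lists are examples only, since real payloads use many app-specific schemes.

```python
from urllib.parse import urlparse

# Illustrative triage of a decoded QR payload string.
# Scheme lists are examples, not an exhaustive policy.
RISKY_SCHEMES = {"upi", "bitcoin", "tel", "smsto"}  # can trigger payments, calls, messages
WEB_SCHEMES = {"http", "https"}

def triage_qr_payload(payload: str) -> str:
    """Classify a decoded QR payload by its URI scheme before acting on it."""
    scheme = urlparse(payload).scheme.lower()
    if scheme in RISKY_SCHEMES:
        return "review: payload can trigger an in-app action"
    if scheme in WEB_SCHEMES:
        return "check URL reputation before opening"
    if scheme:
        return f"unknown scheme '{scheme}': treat as untrusted"
    return "not a URI"
```

The design point is that the scan itself is cheap; the risk lives in what the device does automatically with the decoded string, so inspecting the scheme before dispatching it is the minimum control.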

“In the context of technology, you start seeing the signs early, but you look around and don’t see the impact. You think maybe this will be a passing shower, but over time, this thing is getting bigger and more impactful. I feel the same way about AI.”
Nikesh Arora, Chairman and CEO of Palo Alto Networks
Podcast

Threat Vector


Securing AI Without Slowing Business


Behind the Intelligence

Unit 42 Principal Researcher Shehroze Farooqi examines an emerging phishing technique that uses LLMs to dynamically generate malicious code in real time and then execute it inside the browser, bypassing traditional network analysis and static detection. The findings highlight a growing gap between how organizations detect phishing and how modern attacks actually unfold.

Mitch Mayne: What made you look at this problem in the first place? What changed that made runtime, not static phishing, the interesting angle?

Shehroze Farooqi: We started looking at this because phishing detection has traditionally focused on static indicators like URLs, domains and known malicious artifacts. At the same time, we were seeing attackers move toward techniques that only reveal malicious behavior at runtime, inside the browser, where many static controls no longer apply.

What changed is the growing use of LLMs and dynamic content generation in everyday workflows. When code is generated in real time by LLMs and assembled in the browser rather than delivered as a fixed payload, the attack surface shifts from what is delivered to how it behaves. That shift creates a gap between how phishing is typically detected and how these newer attacks actually operate.

We wanted to understand what happens when attackers deliberately design phishing to exist only at runtime, and what that means for existing defense models.

MM: What is the part of this attack chain that surprised you most once you built the proof of concept?

SF: What surprised us most was how little of the attack exists in a form defenders are used to detecting. There is no single malicious file, domain, or payload that clearly signals an attack. Instead, the malicious logic only comes together at runtime in the browser, with each victim receiving a unique phishing page generated by LLMs.

That means many traditional detection techniques never see a complete malicious artifact. From a defensive perspective, the attack can look normal until the moment it executes, which fundamentally changes when and where risk becomes visible.
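The core evasion Farooqi describes can be shown with a deliberately benign toy. In the sketch below, a static scanner inspects the delivered artifact for a known signature and finds nothing, because the matching string only exists once the fragments are assembled at runtime; the "signature" and fragments are stand-ins, and nothing here is attack code.

```python
# Toy illustration (benign): why runtime-assembled logic defeats static
# signature matching. The signature and fragments are stand-ins.

SIGNATURE = "collect_credentials"  # what a static scanner looks for

# Delivered as inert pieces; no single piece matches the signature.
fragments = ["collect", "_", "cred", "entials"]

# What a pre-execution scanner sees: the fragments as shipped.
delivered_artifact = repr(fragments)
static_hit = SIGNATURE in delivered_artifact   # no complete artifact to match

# The complete string exists only after assembly at execution time.
runtime_string = "".join(fragments)
runtime_hit = SIGNATURE in runtime_string      # visible only at runtime
```

With an LLM generating different fragments per victim, there is not even a stable set of pieces to fingerprint, which is why detection has to move to where the assembled behavior actually runs.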

MM: From a defender’s perspective, where does this attack deliberately slip past the controls people tend to rely on today?

SF: It breaks a detection model that assumes phishing can be identified before it executes. Many defenses are built around detecting known malicious artifacts early in the chain, but in this case there is nothing fixed to detect. The attack only becomes malicious at the moment it runs.

This creates a blind spot between how organizations believe phishing is being stopped and how the attack actually unfolds. Security teams may see clean signals at the network or email layer while the real risk emerges later, inside the browser. In practice, that means controls optimized for pre-delivery detection are less effective against attacks designed to exist only at runtime.

MM: If you had to narrow it, what is the single shift this technique forces security leaders to make in how they think about phishing detection?

SF: The shift is from treating phishing primarily as a perimeter problem to treating it as a runtime problem inside the browser. Many organizations still invest most heavily in stopping phishing before it reaches users, but this technique shows that risk can emerge after delivery, during execution.

As more business activity moves into the browser, phishing is no longer just something that happens before a user clicks. It becomes something that unfolds in real time within the environment where users work. That means security leaders need to rethink where visibility and control live, and accept that phishing detection can no longer rely solely on pre-delivery filtering and static indicators.

author
Shehroze Farooqi, Unit 42 Principal Researcher
Read more by Shehroze Farooqi