Cybersecurity Consulting
January 30, 2026
5 minute read

The DOT Report is a monthly news series from DOT Security that examines the most important developments in cybersecurity and how these incidents impact real organizations, systems, and people.
This month, we’re covering a major Europol operation targeting the Black Axe cybercrime network, new research revealing how AI assistants like Microsoft Copilot can be manipulated, malicious developer tools and browser extensions designed to quietly exfiltrate data, and emerging vehicle targeting techniques.
Individually, these stories span law enforcement, artificial intelligence, software supply chains, and connected systems. Together, they illustrate how attackers are increasingly focused on abusing trusted platforms and legitimate workflows rather than relying on traditional exploits.
For more cybersecurity coverage from The DOT Report, check out our video report on YouTube, or listen to the podcast on Spotify, Apple Podcasts, or wherever you get your podcasts.
Europol announced the arrest of 34 suspected members of the Black Axe cybercrime network following a coordinated international operation involving law enforcement agencies across multiple countries. Black Axe has been linked for years to large-scale online fraud schemes, including business email compromise, romance scams, and money laundering operations that have resulted in hundreds of millions of dollars in losses globally.
According to investigators, the group operated through a web of shell companies, money mules, and falsified identities to move illicit funds through legitimate financial institutions. Authorities emphasized that the arrests were the result of prolonged intelligence sharing and financial tracing, reflecting the scale and persistence of the investigation required to disrupt networks of this size.
However, officials were careful to note that Black Axe is not a centralized organization but a loose collection of semi-autonomous cells. While the arrests represent a meaningful disruption, the underlying infrastructure and recruitment pipelines often outlast individual takedowns. The case underscores how modern cybercrime increasingly mirrors organized crime, blending digital fraud with real-world logistics and financial systems.
Security researchers have disclosed a new attack technique dubbed RePrompt, which targets Microsoft Copilot and similar AI-powered assistants by exploiting how these systems manage conversational context and memory. Unlike traditional prompt injection attacks, RePrompt relies on planting hidden instructions that persist across interactions and influence future responses.
In proof-of-concept demonstrations, researchers showed how malicious instructions embedded in documents, emails, or web content could later be activated through seemingly benign follow-up prompts. Once triggered, these instructions could cause AI assistants to bypass safeguards, leak sensitive information, or prioritize attacker-controlled directives without obvious warning signs.
The vulnerability is less about a single flaw and more about architectural design choices. As AI assistants become deeply embedded in enterprise workflows, the research highlights the risks of granting systems long-term contextual memory without clear boundaries. Normal user behavior — such as summarizing documents or reviewing emails — becomes the delivery mechanism, raising difficult questions about trust, autonomy, and control in AI-enabled environments.
Researchers this month also uncovered a large-scale campaign involving malicious AI-powered extensions hosted on the official Visual Studio Code Marketplace, with a combined install base of roughly 1.5 million users. Marketed as coding assistants similar to ChatGPT, the extensions offered legitimate functionality while covertly exfiltrating source code from developers’ environments.
Once installed, the extensions monitored files opened and edited within VS Code, encoding and transmitting entire source files to attacker-controlled servers. In addition to source code, the malware collected metadata about development environments and user behavior, effectively turning developer machines into long-term intelligence-gathering nodes.
Because the extensions appeared polished, frequently updated, and highly ranked in search results, many developers installed them, assuming they were trustworthy. The campaign highlights a growing supply-chain risk in developer tooling, where attackers can abuse legitimate distribution platforms and user trust to gain access to high-value intellectual property without exploiting software vulnerabilities.
At the Pwn2Own Automotive competition, security researchers demonstrated new attack techniques targeting modern vehicles, infotainment systems, and electric vehicle charging infrastructure. The exploits focused on weaknesses in head units, connectivity stacks, and charging interfaces — components that are increasingly interconnected and reliant on complex software ecosystems.
Some attacks required physical access, such as malicious USB devices or Bluetooth exploitation, while others showed how compromised EV chargers could act as entry points into vehicle systems or backend networks. Although the demonstrations did not involve direct control over safety-critical functions, they illustrated how quickly attackers could move laterally once initial access was established.
The research reinforces broader concerns within the automotive industry. Vehicles are no longer isolated mechanical systems, but long-lived connected platforms with slow patch cycles and extensive third-party dependencies. As cars, chargers, and cloud services become more tightly integrated, security gaps in any single component can have cascading effects across the entire ecosystem.
As 2026 gets underway, the throughline is hard to miss: many of the most consequential risks aren’t coming from novel exploits, but from attackers quietly abusing systems designed for scale, automation, and convenience.
AI assistants, developer tooling, cloud infrastructure, and connected devices are all being pulled into the threat landscape — not because they’re inherently insecure, but because they’re trusted by default and deeply embedded in daily operations.
Defensive strategies moving forward will need to account for this reality. Security programs can’t treat these platforms as neutral infrastructure; they have to be monitored, validated, and threat-modeled with the same rigor as any exposed attack surface.
For more cybersecurity news from The DOT Report, check out our video report on YouTube, or listen to the podcast on Spotify, Apple Podcasts, or wherever you like to listen.