Phishing trends: AI-phishing, Qrishing, and voice attacks


by Kymatio

NIS2 raises the bar. Learn how attackers use AI, QR codes and synthetic voices—and how to counter them with continuous training, clear AI policies, and verified processes.


2026 marks a turning point for cybersecurity leaders in regulated environments. The entry into force of the NIS2 Directive requires regulated organizations to demonstrate a proactive, structured and continuous approach to digital threats. It is not enough to have a firewall or provide annual training: it is now necessary to prove effective measures to reduce human risk.

This regulatory pressure comes at the worst possible time. The professionalization of cybercriminals, mass access to generative artificial intelligence tools, and the use of hyper-personalized social engineering have made AI phishing difficult to distinguish from legitimate internal communication. What used to betray itself through spelling errors and suspicious domains is now an AI phishing attack with emails indistinguishable from the originals: signed by real managers, sent at exactly the right moment, and loaded with credible internal data.

First line of defense: cybersecurity and the human factor

Advanced phishing in 2026 isn't just limited to email. It has evolved on multiple fronts, and one of the most worrying is qrishing, a technique that exploits QR codes as a way to enter corporate events, offices and even internal printed materials. Many organizations still do not have clear protocols against these threats or detection systems on physical endpoints.

The reality is uncomfortable: while organizations strengthen their firewalls, attackers study human behavior, manipulate digital trust and detect gaps in seemingly secure processes. The gap is not technical; it is cultural, structural and often invisible in audits.

A faster evolution than defenses

Today, the speed of innovation of attackers exceeds the ability of most organizations to react. In particular:

  • AI makes it possible to automate highly targeted deception campaigns.
  • Malicious QR codes infiltrate physical media without raising suspicion.
  • Synthetic voices are used to manipulate employees from calls that appear legitimate.

All this in an environment where many organizations still have no clear policies or defined processes governing the internal use of generative AI tools, nor specific training against the new impersonation techniques.

This article discusses the top AI phishing and qrishing trends in 2026, to help you anticipate risks, understand the most commonly used techniques, and make decisions that strengthen your organization's human resilience.

It is part of our editorial series on comprehensive digital human risk management, which also addresses adaptive awareness, advanced simulations, and industry best practices. You can delve into these aspects in our complete guide to attack simulation and Security Awareness 2026.

For more regulatory context, also consult ENISA 's 2024 report, which details the requirements of NIS2 and emerging risks for critical infrastructure.

The rise of AI phishing: how AI amplifies attacks

Phishing has always evolved, but in 2026 it has taken a qualitative leap. Generative AI has multiplied attackers' capacity to launch precise, effective AI phishing campaigns, designing persuasive, automated messages adapted to the internal context of each company. AI phishing is no longer an emerging risk but a common technique in targeted, cost-effective and expanding campaigns.

Below, we explore how this technology is being used, why phishing with artificial intelligence overwhelms conventional defense frameworks and forces us to rethink detection mechanisms, and the real consequences it is having for regulated companies in Europe.

1. Emails that look like they're written by your team

AI phishing campaigns are no longer crude or easily detectable. Thanks to large language models (LLMs), attackers generate emails that are practically indistinguishable from internal ones: with a corporate tone, adequate context, and no errors.

These messages can:

  • Refer to actual projects or specific employees.
  • Accurately imitate the way a manager writes.
  • Use the organization's internal language.

Automation also allows thousands of versions to be sent adapted by position, language or department, multiplying click-through rates.

A recent study by Microsoft Security confirms the systematic use of LLMs in campaigns targeting finance and administration personnel. 
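The same templating mechanic can be turned around for defense: security teams use it to generate department-specific lures for authorized phishing simulations. The sketch below is a minimal illustration with hypothetical names and tracking URLs, not a description of any vendor's platform:

```python
from string import Template

# Hypothetical lure template for an *authorized* phishing simulation.
# Attackers automate the same idea at scale; awareness teams can reuse
# it to train each department with context it will actually recognize.
LURE = Template(
    "Hi $name, the $dept quarterly review is due today. "
    "Please confirm your figures via the portal: $link"
)

def build_variants(recipients, tracking_base):
    """Render one personalized message per recipient, with a unique
    tracking link so the awareness team can measure click rates."""
    variants = []
    for r in recipients:
        link = f"{tracking_base}?u={r['id']}"
        variants.append(LURE.substitute(name=r["name"], dept=r["dept"], link=link))
    return variants

recipients = [
    {"id": "u1", "name": "Ana", "dept": "Finance"},
    {"id": "u2", "name": "Luis", "dept": "HR"},
]
msgs = build_variants(recipients, "https://sim.example.internal/t")
print(msgs[0])
```

A few lines of templating are all it takes to produce thousands of contextual variants, which is precisely why generic, one-size-fits-all awareness emails no longer reflect what employees actually receive.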

2. Social engineering fueled by leaked data

The extreme personalization of these attacks is supported by information obtained through:

  • Previous breaches (such as leaks of emails, credentials, or employee lists).
  • Massive scraping of professional profiles and social networks.
  • Analysis of publicly available organizational structures.

This allows the message to be adapted to the context of the recipient and to exploit their routines, work relationships or known projects. In regulated environments, where internal trust is critical, this level of realism makes employees an easy target.

3. Case Studies: AI-Targeted Fraud

In 2026, attacks have already been documented in which phishing with artificial intelligence was used to impersonate managers, obtain privileged access, and order fraudulent transfers.

Recent examples include:

  • A digitally signed email that mimicked the exact wording of the CFO at several branches of an insurance company.
  • Campaigns targeting critical infrastructure providers with fake urgent audit orders.

In many cases, the teams executed the order without additional prior validation, due to the apparent legitimacy of the message.

To prepare for this changing scenario, it is key to analyze how the TTPs (Tactics, Techniques, and Procedures) associated with AI phishing evolve. In our series of incident analyses, we delve into the latest response and recovery lessons in the aftermath of these attacks. You can check it out in our piece on first forensic lessons after recent AI phishing campaigns.

Qrishing in 2026: from QR to silent attack

The use of QR codes has grown in all corporate environments: from access control at events to internal surveys or connecting to Wi-Fi networks. This mass adoption has quietly opened a door to new threats. Qrishing (a form of phishing based on QR codes) has established itself in 2026 as one of the most effective techniques and among the least detected by traditional defenses.

The challenge isn't just technical: many organizations have yet to implement clear policies to manage these vectors, leaving both employees and visitors exposed.

How does a qrishing attack work?

Qrishing takes advantage of the implicit trust in a printed QR code. Unlike classic email phishing, these attacks are deployed in physical spaces: visitor badges, brochures, temporary posters or even promotional mugs.

By scanning the QR, the victim is redirected to:

  • A website that imitates a legitimate one (login portal, room reservation, internal app).
  • An external domain that downloads malware directly to the device.
  • An authentication gateway that steals corporate login credentials.

All this without the need for the attacker to be present.
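Part of this risk can be caught before the user ever opens the link. The sketch below, a minimal illustration with hypothetical shortener and allowlist entries, shows how a mobile security layer might triage a URL decoded from a QR code:

```python
from urllib.parse import urlparse

# Illustrative lists only: a real deployment would maintain these centrally.
KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}
CORPORATE_ALLOWLIST = {"intranet.example.com", "booking.example.com"}

def triage_qr_url(url: str) -> str:
    """Classify a URL decoded from a QR code before the user opens it."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        return "block"            # plain HTTP or odd schemes (javascript:, etc.)
    if host in KNOWN_SHORTENERS:
        return "warn"             # destination hidden behind a shortener
    if host not in CORPORATE_ALLOWLIST:
        return "warn"             # unknown domain: ask the user to verify
    return "allow"

print(triage_qr_url("https://intranet.example.com/rooms"))  # allow
print(triage_qr_url("http://bit.ly/x7Q"))                   # block
```

Even this coarse triage defeats the most common qrishing pattern, because the attack depends on the victim never seeing the real destination.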

Common techniques and vectors used

Attackers exploit multiple weaknesses:

  • Shortened or hidden URLs that prevent previewing the destination.
  • Stealth redirects that lead to malicious sites after a neutral first load.
  • Fake forms embedded in landing pages identical to official ones.
  • QR stickers placed over legitimate signage in offices or trade shows.

In many cases, not even MDM (Mobile Device Management) controls or EDR (Endpoint Detection and Response) solutions detect initial access, as the user is acting from an unmanaged personal device.

Real cases and common mistakes

In several European organizations, qrishing attacks have been detected aimed at:

  • Visitors to corporate events, through manipulated welcome kits.
  • Employees in shared offices, where QR codes for room reservations were replaced by malicious ones.
  • Customers scanning codes on software product packaging.

In all cases, the organizations exercised no control over the physical media that displayed the codes and provided no signage for verifying their legitimacy, which facilitated the success of these qrishing campaigns against employees and visitors.

If your organization uses QR codes on a regular basis, it's crucial to reevaluate current procedures. In our specialized guide, we detail how to address this threat in a structured way, including physical, digital, and awareness measures. We recommend consulting our recommendations to detect and prevent this type of attack in real environments.

CISA has also recently published a guide with good practices against qrishing in the corporate environment.

Vishing and deepfakes: when the CEO's voice plays against you

By 2026, voice cloning technologies are no longer reserved for laboratories or state actors. With accessible tools and minimal training data, attackers can replicate a manager's voice from public snippets, recorded presentations, or previous calls. The result: urgent, credible and precisely orchestrated orders, typical of highly sophisticated AI vishing offensives, that trigger payments, extract credentials or unlock access.

Voice-based authentication, which has long been an informal shortcut of trust, has become a critical attack vector.

What is synthetic voice fraud?

Vishing ("voice phishing") is based on deceiving a person through a phone call. Voice deepfakes amplify this technique by allowing attackers to simulate:

  • The exact tone and timbre of a CEO, CFO or area manager.
  • Sentences with real linguistic patterns, taken from previous interventions.
  • Urgent scenarios ("we have an audit now", "I need the transfer now").

A study published in Nature Machine Intelligence documents how AI-generated voices can fool even biometric authentication systems.

Real cases: the voice that unlocked millionaire payments

Some European organizations have suffered attacks where:

  • A finance employee received a call from the alleged CEO requesting an urgent payment for "audit purposes."
  • The attackers combined synthetic voice with background noises (office, traffic, echo) to simulate contextual credibility.
  • The team verbally validated the order without additional checks.

In most cases, two-person verification (for example, confirming the order with another manager) was not applied, due to time pressure or the perceived authority of the alleged sender.

Can we still trust the voice?

Not anymore. Above a certain level of risk, voice verification alone is unacceptable. Basic recommendations:

  • Set internal keywords or validation methods on sensitive calls.
  • Train teams to detect signs of voice manipulation or manufactured urgency.
  • Review authorization processes that rely solely on verbal interactions.
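The two-person rule can also be enforced in the payment workflow itself rather than left to judgment under pressure. A minimal sketch, with hypothetical role names, of a "four-eyes" release check:

```python
# Minimal sketch of two-person ("four-eyes") release for payment orders.
# Role names are hypothetical; a real system would tie approvals to the
# company's identity provider and payment platform.
class PaymentOrder:
    REQUIRED_APPROVALS = 2

    def __init__(self, amount, beneficiary):
        self.amount = amount
        self.beneficiary = beneficiary
        self.approvals = {}          # approver id -> role

    def approve(self, approver_id, role):
        self.approvals[approver_id] = role

    def can_release(self):
        """Release only with two distinct approvers in two distinct roles,
        so a single spoofed voice call can never authorize a transfer."""
        distinct_roles = set(self.approvals.values())
        return (len(self.approvals) >= self.REQUIRED_APPROVALS
                and len(distinct_roles) >= 2)

order = PaymentOrder(250_000, "Vendor GmbH")
order.approve("ana", "finance_manager")
print(order.can_release())            # False: one approver is not enough
order.approve("luis", "controller")
print(order.can_release())            # True: two people, two roles
```

The point of encoding the control is that no amount of urgency or perceived authority on a call can bypass it.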

In our series on voice-based threats, we look at these scenarios in more detail. You can expand on the piece on emerging risks related to synthetic voice fraud and spoofing in internal calls.

Main statistics and most affected sectors in 2026

The 2026 figures confirm what many cybersecurity teams already suspected: AI-powered phishing and variants such as qrishing are having an uneven but sustained impact on critical sectors. With NIS2 demanding greater traceability and incident response, understanding patterns by industry is imperative for prioritizing resources and bolstering defenses.

Below, we review the most relevant data collected in the most recent sector and cyber intelligence reports.

Most attacked sectors: banking, health and energy lead the ranking

According to Verizon's Data Breach Investigations Report (DBIR) 2026, these are the sectors most affected by AI phishing attempts and variants such as qrishing:

  • Banking and insurance → 38% of the campaigns detected.
  • Health and pharma → 21%, with a focus on administrative staff.
  • Energy and critical infrastructure → 17%, especially in industrial environments.
  • Public administration → 12%, with an upturn in hiring processes and scholarships.
  • Legal and consulting → 8%, taking advantage of the sensitivity of documents exchanged.

These figures confirm the increasingly vertical orientation of advanced phishing, adapting messages and tactics to the language and operations of each sector.

Attempts suffered and detection time

A cross-sectional analysis in European organisations shows that:

  • 7 out of 10 regulated companies have identified at least one attempt at qrishing or AI-powered phishing in the last 12 months.
  • It takes an average of 28 days for organizations to detect a successful attack, especially when it occurs from unmanaged devices.
  • In 63% of the documented cases, the input vector was an action initiated by an employee with access to sensitive data.

Operational conclusion: prioritize according to sectoral risk

To reduce exposure, it is key to:

  • Map attack patterns specific to your industry.
  • Assess the degree of maturity against threats such as AI phishing and qrishing.
  • Strengthen the layers of human verification and early detection.

We have compiled a detailed summary by industry in the benchmark sector analysis for this year. You can see this in our report on the most exposed sectors and the most effective responses applied in 2026.

Recommended defenses: from the endpoint to corporate culture

In the face of rapidly and accurately evolving threats such as AI phishing or qrishing, organizations cannot limit themselves to technical solutions. The NIS2 Directive makes it clear that protection must be proactive, demonstrable and aimed at reducing human risk. Below, we present a comprehensive approach that combines technology, processes and corporate culture.

Ongoing training and human risk assessment

The most sophisticated attacks target people, not machines. Therefore, the first line of defense continues to be human behavior.

Recommendations:

  • Conduct periodic digital human risk assessments segmented by profile and role.
  • Replace generic training with adaptive training based on real contexts and recent cases.
  • Align awareness programs with emerging techniques such as AI phishing, vishing, or qrishing attacks.

This evaluation must be integrated into a continuous system, not as a one-off action. Awareness is not an end; it is a process of sustained improvement.

Signaling and detection from the first point of contact

Many attacks are triggered by scanning a code or opening an email. Applying early detection controls drastically reduces the risk surface.

Checklist:

  • Standardize internal QR code signage with validated corporate design.
  • Ensure that all QR accesses are monitored and audited.
  • Develop internal protocols for verifying requests by voice, especially in sensitive roles.
  • Strengthen email gateway tools with generative AI phishing attempt detection modules.
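One way to make internal QR signage verifiable, sketched below under the assumption of a shared secret between the QR generator and an internal gateway, is to embed an HMAC token in every corporate QR target so that a sticker pasted over legitimate signage fails validation:

```python
import hashlib
import hmac
from urllib.parse import parse_qs, urlparse

# Hypothetical shared secret held by the QR generator and the gateway.
SECRET = b"rotate-me-regularly"

def sign_url(path: str) -> str:
    """Issue an internal QR target whose token proves it was generated
    by the organization's own tooling."""
    tag = hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()[:16]
    return f"https://qr.example.internal{path}?sig={tag}"

def verify_url(url: str) -> bool:
    """Gateway-side check: recompute the tag for the path and compare."""
    parsed = urlparse(url)
    sig = parse_qs(parsed.query).get("sig", [""])[0]
    expected = hmac.new(SECRET, parsed.path.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)

good = sign_url("/rooms/3F")
print(verify_url(good))                                                  # True
print(verify_url("https://qr.example.internal/rooms/3F?sig=deadbeef"))   # False
```

This does not stop a victim from scanning an attacker's code that points elsewhere, but it lets monitored internal endpoints reject tampered or replayed signage automatically, satisfying the monitoring and auditing points above.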

You can find more practical details in our recommendations to prevent QR attacks in corporate spaces or business events.

Clear policies on the use of generative AI

The massive entry of AI tools into the workplace is not only an opportunity, it's also a potential source of data leakage, reverse social engineering, and inadvertent exposure.

Key steps:

  • Establish a clear and accessible corporate policy on the use of generative AI in professional environments.
  • Define which tools are approved, what types of prompts are allowed, and what data cannot be shared.
  • Control access and activity logs using endpoint or browser security solutions.
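The "what data cannot be shared" rule can be enforced mechanically before a prompt leaves the corporate network. A minimal sketch with illustrative patterns (a real policy engine would cover far more categories):

```python
import re

# Illustrative patterns for data that must never leave the organization
# in a prompt to an external generative AI tool.
BLOCKED_PATTERNS = {
    "credential": re.compile(r"(?i)\b(password|api[_-]?key)\s*[:=]\s*\S+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str):
    """Return the policy categories a prompt violates; an empty list
    means the prompt may be forwarded to the approved AI tool."""
    return [name for name, rx in BLOCKED_PATTERNS.items() if rx.search(prompt)]

print(check_prompt("Summarize our Q3 roadmap"))          # []
print(check_prompt("Draft a mail: password: Hunter2!"))  # ['credential']
```

Wired into a browser extension or proxy, a check like this turns the written policy into an enforced control, which is exactly the kind of demonstrable measure NIS2 asks for.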

In our specialized guide we address this point with compliance criteria aligned with NIS2. You can consult it in the document on responsible and safe use of generative AI in corporate environments.

As a framework, we recommend reviewing the controls proposed in ISO/IEC 27002, which include guidelines on digital asset management, access control, and security in the use of emerging technologies.

Takeaways and next steps for CISOs and management

The sophistication of AI phishing and the proliferation of new techniques such as qrishing and synthetic voice fraud (vishing and deepfakes) have reconfigured priorities in corporate cybersecurity. Threats no longer rely on technical exploits; they now target human behavior, organizational culture, and a lack of up-to-date protocols. And with NIS2 in place, ignoring this evolution is no longer an option.

What can your organization do today?

Prioritize actionable and measurable actions. Some immediate steps:

  • Audit your current verification, access, and signage processes.
  • Assess the actual level of exposure to human risk in each area.
  • Update your trainings to include AI phishing, qrishing, and vishing.
  • Define and implement a clear policy on the internal use of generative AI.
  • Synchronize efforts between cybersecurity, HR, and compliance around shared goals.

Quick self-diagnosis checklist

Does your organization...

  • Identify emerging threats such as qrishing and vishing?
  • Enforce controls over voice-based authentication and QR codes?
  • Have an internal protocol for suspicious emails or calls?
  • Periodically assess human risk with up-to-date data and criteria?
  • Have a defined internal policy for the use of AI in work environments?

Frequently asked questions

What is AI phishing and why is it so dangerous?

It is AI-powered phishing, capable of generating credible messages at scale and misleading even trained employees.

How can I detect a qrishing attack in my company?

Always verify the URL after scanning a QR, avoid printing codes without context, and use secure signage.

Which sectors are most vulnerable to AI phishing?

Banking, health, energy and public administration lead the ranking in 2026 according to cybersecurity reports.

What steps should I take to comply with NIS2 in the face of these threats?

Implement ongoing training, endpoint detection, and generative AI policies in critical environments.

Can the CEO's voice be supplanted with AI?

Yes, voice deepfakes have already been used to deceive employees and cause fraudulent transfers.

Towards a real human risk management strategy

Abandoning reactive logic means redesigning the foundations of prevention. We recommend:

  • Incorporate specific audits such as those we analyze in the report on tactics and traceability in AI-assisted phishing attacks.
  • Review blind spots in identity verification processes, especially in calls or remote actions. We detail this in our piece on voice fraud targeting executives and finance departments.
  • Align the compliance strategy with the requirements of the NIS2 Directive, which focuses on demonstrable and prevention-oriented measures.

Investing in human risk management is essential to anticipate AI phishing, prevent qrishing campaigns, and comply with NIS2. It's a competitive advantage in an environment where organizational resilience starts with people.
