Supercharged phishing and the North Korea Watcher

2025-05-05 · Junotane · 8 min read

Artificial Intelligence (AI) has changed the rules of phishing. It no longer relies on clumsy English or poorly spoofed addresses. Today, it’s powered by large language models (LLMs), social graph mining, and contextual mimicry. For the Korea watcher community—a small, digitally active, and often isolated group—this evolution has made phishing far more dangerous.

North Korea has consistently used phishing to target researchers, analysts, and academics focused on the regime—individuals who pose an indirect threat by shaping policy, public opinion, or sanctions enforcement. These attacks date back over a decade, with early campaigns impersonating fellow scholars, journalists, or conference organizers to trick targets into opening malware-laden attachments or submitting credentials. The intent was clear: infiltrate devices, access private research, and monitor the intellectual networks scrutinizing the DPRK.

Notable campaigns have spoofed think tanks, universities, individual researchers, and UN-affiliated entities, exploiting the small, highly-networked nature of the Korea watcher community. In some cases, researchers found themselves victims of long-term surveillance, with compromised accounts used to phish others in their circle—turning trust into a weapon. As the threat evolved, phishing became more precise, more patient, and more psychologically manipulative—making even seasoned analysts susceptible.

North Korea’s broader cyber operations, already known for their ingenuity and persistence, are rapidly adopting AI. This is not just speculation; there is growing evidence that offensive cyber units tied to the Reconnaissance General Bureau (RGB) and Lazarus Group have begun integrating generative AI into reconnaissance, social engineering, and payload delivery. Although these capabilities have yet to surface in phishing campaigns targeting researchers, analysts, and academics, it is abundantly clear that they will.

The next generation of North Korea’s phishing attacks targeting researchers, analysts, and academics will be faster, more credible, and tailored with unnerving precision.

What should you look out for? Well, the truth is, you’ll never spot it. “Detection” is a thing of the past.

AI can perfectly mimic the language and personality of a target’s network. Traditional phishing relied on generic appeals: a fake Dropbox link, a fake LinkedIn invitation, a vague request to open a “conference document.” It worked sometimes, but most researchers grew adept at spotting such clumsy attempts. That changed with the introduction of AI models capable of analyzing and replicating language patterns.

If a North Korea watcher maintains a blog, tweets regularly, or publishes academic work, an attacker can train an AI on that data to produce phishing emails that sound just like a close colleague or respected figure in the field. The AI knows whether you use British or American spelling, whether you start emails with “Hi” or “Dear,” and how you sign off—“Best,” “Cheers,” or “Yours in solidarity.” These subtle markers of trust are now easily cloned.
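How little data this requires is easy to demonstrate. The toy Python sketch below (the regexes and the sample text are my own, purely illustrative) extracts exactly those markers from a public writing sample:

```python
import re
from collections import Counter

# Toy illustration: the stylistic "trust markers" described above are trivial
# to extract from any public corpus (blog posts, tweets, published papers).
GREETINGS = re.compile(r"^(Hi|Hello|Dear|Greetings)\b", re.MULTILINE)
SIGNOFFS = re.compile(r"^(Best|Cheers|Regards|Yours in solidarity)\b", re.MULTILINE)
BRITISH = re.compile(r"\b\w+(?:ise|our|yse)\b")  # crude UK-spelling heuristic

def style_profile(corpus: list[str]) -> dict:
    """Count greeting, sign-off, and spelling habits in a writing sample."""
    text = "\n".join(corpus)
    return {
        "greetings": Counter(GREETINGS.findall(text)),
        "signoffs": Counter(SIGNOFFS.findall(text)),
        "uk_spellings": len(BRITISH.findall(text)),
    }

sample = ["Hi team,\nWe organise the analysis by theme.\nCheers,\nJ."]
print(style_profile(sample))
# {'greetings': Counter({'Hi': 1}), 'signoffs': Counter({'Cheers': 1}), 'uk_spellings': 1}
```

A profile this shallow already captures most of what a reader unconsciously checks; an LLM conditioned on the same corpus captures far more.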

A fake email offering collaboration on a defector interview, a panel discussion, or leaked satellite imagery can now include specific references to past publications or shared contacts. And it doesn’t require a human operative. AI does the imitation, instantly and repeatedly, until it gets through.

AI automates reconnaissance and social graph mapping. Phishing has always required reconnaissance—knowing who the target is, who they trust, and how they communicate. In the past, this was labor-intensive. Human hackers had to comb through online footprints, identify targets, and guess at relationships. Today, AI automates that process.

Natural language processing and machine learning tools can scrape Twitter threads, LinkedIn connections, conference rosters, and Substack comment sections. They build detailed social graphs: who retweets whom, who cites whom, who co-authors with whom. The goal isn’t just to target the watcher, but to impersonate someone within their trusted circle.

Once inside a network through its weakest link, the operation can move steadily closer to the target. With each hop, the AI accumulates knowledge until it can even anticipate the target’s likely responses.
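To make that concrete, here is a minimal sketch using the networkx graph library, with invented account names standing in for real people, of how a handful of public interactions collapse into a map of whom to impersonate:

```python
import networkx as nx

# Invented, purely illustrative public signals: retweets, citations, co-authorship.
interactions = [
    ("analyst_A", "analyst_B", "retweets"),
    ("analyst_B", "prof_C", "cites"),
    ("prof_C", "analyst_D", "coauthors"),
    ("analyst_D", "target", "coauthors"),
]

G = nx.DiGraph()
for src, dst, kind in interactions:
    G.add_edge(src, dst, relation=kind)

# The attacker's question reduces to: what is the shortest trusted path
# from the weakest defended account to the target?
print(" -> ".join(nx.shortest_path(G, "analyst_A", "target")))
# analyst_A -> analyst_B -> prof_C -> analyst_D -> target
```

Everything fed into that graph is public, and nothing about building it requires a human in the loop.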

This matters because the Korea watcher community is small. Researchers, journalists, and analysts are clustered in a handful of institutions, think tanks, and online communities. Many use real names; others use pseudonyms that are still easily linked to professional identities. And at least one of them will be in South Korea, where cybersecurity culture hardly exists.

Despite South Korea’s cybersecurity dialogue with partners, the average South Korean researcher, analyst, or academic, and their institution, has no clue about cybersecurity (the only saving grace is the cultural preference for messaging apps and group chats over email). Once an attacker has the graph, AI can generate personalized emails for each target—crafted in the voice of someone they know, referencing recent news, events, or conversations.

AI enables real-time engagement and multi-step interaction. AI doesn’t stop at the first email. Unlike traditional phishing, which is a one-shot attack, AI enables dynamic engagement. If a target replies to a message—perhaps asking a follow-up question or requesting clarification—an LLM can generate plausible responses on the fly. It can even simulate a short email chain.

This matters because sophisticated watchers are cautious. They probe. They ask, “What’s the source of this file?” or “Can we talk via Signal instead?” AI systems can be trained to keep the ruse going long enough to build trust. In some cases, the phishing payload (a link or file) doesn’t even come in the first message—it comes in the third or fourth, when the target has let their guard down.

These conversational phishing attempts are far more effective. They erode suspicion through familiarity and delay. They mirror how real colleagues communicate—especially in a community where remote collaboration is the norm.

AI enables scalability without sacrificing specificity. In the past, phishing campaigns had to choose: cast a wide net with generic messages, or spend time tailoring attacks for high-value targets. AI eliminates that trade-off. It allows attackers to generate hundreds of highly personalized messages in minutes.

This scalability is particularly effective for a state actor like North Korea. With limited human resources but high motivation, AI lets them launch precision campaigns that once required a team of fluent English speakers and cyber operatives. Now the model handles the language, context, timing, and even formatting. Only the final payload—malicious links or documents—requires technical expertise, and even that is being automated.

Moreover, AI can adjust its tactics based on response rates. If one kind of message fails, it tries another. If one target is resistant, it pivots to someone else in the network. It’s phishing as a feedback loop—adaptive, iterative, and relentless.

AI obfuscates attribution and reduces detection. Finally, AI helps attackers hide. Traditional phishing often raised flags: recycled messages, known domain names, or spelling errors that triggered spam filters. AI-generated content is more diverse, harder to fingerprint, and less likely to match known patterns in cybersecurity databases.

Moreover, AI can help attackers randomize headers, email structure, and metadata. Even sophisticated phishing detection systems struggle with polymorphic content—messages that change slightly with each iteration. For institutions trying to protect North Korea watchers—especially universities, think tanks, and media outlets—this makes defense exponentially harder.
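The evasion is easy to picture. In the sketch below (placeholder text, not real phishing content), three near-identical messages produce three unrelated fingerprints, so an exact-match blocklist built from one reported sample misses every other variant:

```python
import hashlib

base = "Dear colleague, please find the draft agenda attached."
variants = [
    base,
    base.replace("Dear", "Hi"),
    base.replace("attached", "enclosed"),
]

# Signature-based filters match exact fingerprints; a one-word change
# produces a completely unrelated hash.
for text in variants:
    print(hashlib.sha256(text.encode()).hexdigest()[:16], "<-", text)
```

Fuzzy hashing and ML classifiers narrow the gap, but an LLM that rewrites every message from scratch leaves even less to match on.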

What can you do? Not much! Email is an inherently insecure platform. It is now even more insecure. The best you can do is change your practices.

  1. Use a Virtual Machine (VM) or isolated environment for opening content. AI-generated phishing often arrives in the form of well-crafted documents or realistic-looking collaboration invites. Open any unverified attachments or links in a virtual machine that can be wiped or reset. Treat every document as potentially weaponized—even if it looks like a real panel invite or policy brief.

  2. Compartmentalize communication identities. Do not use a single email address for everything. Use separate, compartmentalized emails for publishing, conference submissions, social networking, and direct peer communications. Ensure addresses are not representative of your identity: for example, if your name is John Smith, forget the john.smith@johnsmith.com account.

    By doing so, if an AI phisher gains access to one address, they won’t have full visibility into your professional network. Disposable addresses for public contact info (like bios or academic sites) reduce long-term exposure.

  3. Never trust a message based on familiarity alone. AI phishing excels at reproducing someone’s tone, humor, and habits. A message “from a colleague” referencing your recent publication or joking about a shared experience can easily be fabricated. Verify any unexpected message—especially those with attachments or meeting requests—through a separate, encrypted channel such as Signal or a call.

  4. Develop community verification norms. In tight-knit expert communities, introduce norms for verification: watermarked drafts, agreed-upon phrases, or alternate accounts for sensitive communication. Collective awareness can help detect phishing campaigns before they spread across networks.
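One concrete form such a norm can take is a short pre-shared authentication code. A minimal sketch, assuming two colleagues have exchanged a secret key out of band (the key and message below are placeholders):

```python
import hashlib
import hmac

# Hypothetical pre-shared secret, exchanged once in person or over Signal,
# never over email.
SHARED_KEY = b"exchanged-in-person-not-by-email"

def tag(message: str) -> str:
    """Short authentication code appended to a sensitive message."""
    return hmac.new(SHARED_KEY, message.encode(), hashlib.sha256).hexdigest()[:12]

def verify(message: str, received_tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(tag(message), received_tag)

msg = "Draft attached. Please review before Friday."
code = tag(msg)
print(code, verify(msg, code))  # prints the 12-character code and True
```

No AI, however well it imitates someone’s prose, can produce a valid tag without the key. The weakness, of course, is the same as for the rest of this list: it only works if the community actually adopts it.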

In the end, even this may not be enough. Previous North Korean campaigns against researchers, analysts, and academics more than likely left compromised machines behind for ongoing monitoring. Every time you send an email to someone in Seoul, someone in Pyongyang may also be the ultimate recipient!

AI has made phishing personal, scalable, adaptive, and nearly invisible. For a community already vulnerable to surveillance and manipulation, this is not just a technical challenge—it’s a paradigm shift. North Korea’s embrace of AI-driven phishing isn’t some futuristic scenario. It’s happening now, and researchers, analysts, and academics, as well as NGOs, diplomats, and government workers, will likely soon be in the crosshairs.