Amazon caught an infiltrator in its remote workforce over a minor detail: the 110-millisecond “echo” when typing

In cybersecurity, the most important signal isn’t always the most spectacular. Amazon detected an infiltrator attempting to operate as a remote employee through a clue so discreet that, to most people, it would look like ordinary Internet jitter: a delay of just over 110 milliseconds in keystroke arrival times.

The story, revealed by Stephen Schmidt, Amazon’s Chief Security Officer, has become a case study for two reasons. First, because it shows how much North Korea-linked labor fraud networks have perfected the art of “appearing legitimate” in a global hiring environment. And second, because it confirms a growing trend: security is no longer just about antivirus and firewalls, but about the ability to correlate behavior, identity, endpoint, and network data.

A laptop in Arizona… and someone far from Arizona

The case began as a routine onboarding. A technical hire for a remote role received a corporate laptop, which was physically located in Arizona. The supposed worker claimed to operate from the United States. However, Amazon’s monitoring systems detected an unusual pattern: keystrokes took too long to reach the internal infrastructure, especially for commands and actions that should be fast over domestic connections.

The threshold wasn’t outrageous. It wasn’t seconds, or even hundreds of milliseconds of jitter. It was a sustained delay exceeding 110 milliseconds, consistent enough to form a pattern rather than a coincidence. In an environment with advanced telemetry, the difference between “normal” and “something’s off” can be precisely that: small deviations repeated thousands of times.
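The article doesn’t describe Amazon’s detection logic, but the core idea — a single slow keystroke is noise, while thousands of consistently delayed keystrokes are a pattern — can be sketched in a few lines. All function names, baselines, and thresholds below are illustrative assumptions, not Amazon’s actual system:

```python
from statistics import median

def sustained_latency_anomaly(latencies_ms, baseline_ms=30.0,
                              threshold_ms=110.0, min_samples=1000):
    """Flag a sustained keystroke-latency deviation.

    One slow keystroke is noise; thousands that consistently arrive
    more than `threshold_ms` above baseline form a pattern. Using the
    median makes the check robust to occasional jitter spikes.
    All parameter values here are illustrative.
    """
    if len(latencies_ms) < min_samples:
        return False  # not enough evidence yet
    deviation = median(latencies_ms) - baseline_ms
    return deviation > threshold_ms

# A relayed session: every keystroke carries ~120 ms of extra delay
relayed = [30 + 120 + (i % 7) for i in range(2000)]
# A normal domestic session with occasional large jitter spikes
local = [30 + (i % 5) + (200 if i % 500 == 0 else 0) for i in range(2000)]

print(sustained_latency_anomaly(relayed))  # True
print(sustained_latency_anomaly(local))    # False
```

The point of the sketch is the shape of the test, not the numbers: a persistent median shift survives averaging, whereas jitter spikes do not.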

The investigation led to the least desirable, yet most logical, explanation: the device was being controlled remotely. Part of the traffic traced back to China, a country often used as an intermediary point in relay and obfuscation schemes. Access was blocked, and the attempt was logged as one more case in a phenomenon Amazon reports seeing repeatedly.

“If you don’t look for them, you won’t find them”

Schmidt’s phrase explains why these episodes are so hard to detect: if a company isn’t actively hunting for that pattern, it’s likely to be mistaken for noise. In other words, most traditional controls are designed to stop technical intrusions, not to unmask someone entering through the front door with valid credentials and an assigned role.

Amazon reports having blocked over 1,800 attempts of this kind since April 2024, with a quarterly growth rate of about 27% in 2025. That’s not a small number: it represents a steady flow of fake or manipulated candidates entering through employment portals, subcontractors, and third-party channels.

The remote IT worker fraud has become an industrial model

For years, remote work risks were linked to insecure Wi-Fi, lost laptops, or poorly configured access. Today, the list includes something more problematic: employees who aren’t who they say they are.

The pattern often repeats with variations but maintains a recognizable structure:

  • Fake or stolen identities: profiles with impersonated documentation, real names used without permission, or “people” created solely to pass screening processes.
  • Local intermediaries: someone inside the U.S. receives corporate hardware and keeps it powered on, connected, and ready for the actual worker to control remotely.
  • Remote control and evasion routes: remote access tools, VPNs, relays, and network hops to present a believable location.

The key part of the scheme is that if a company only verifies “the laptop is in the U.S.,” the attacker gains a technical alibi. That’s why keystroke latency, usage cadence, network routing, and endpoint signals become decisive: they don’t validate a mailing address; they validate actual behavior.
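One of the behavioral checks mentioned above — network routing versus declared location — can be sketched simply. The idea is that occasional foreign hops (travel, CDN quirks) are tolerable, while persistent foreign egress is not. The data shape, field names, and threshold below are illustrative assumptions:

```python
from collections import Counter

def egress_consistent(declared_country, egress_countries,
                      max_foreign_ratio=0.05):
    """Check whether session traffic actually exits where the employee
    claims to work. `egress_countries` holds one country code per
    observed session (a hypothetical telemetry shape). A small fraction
    of foreign egress is tolerated; a persistent one is flagged.
    """
    counts = Counter(egress_countries)
    total = sum(counts.values())
    foreign = total - counts.get(declared_country, 0)
    return foreign / total <= max_foreign_ratio

# Declared U.S. worker whose sessions persistently route via China
print(egress_consistent("US", ["US"] * 3 + ["CN"] * 97))  # False
# Declared U.S. worker with a couple of sessions from a trip to Canada
print(egress_consistent("US", ["US"] * 98 + ["CA"] * 2))  # True
```

This validates behavior over time rather than a one-off GPS ping, which is exactly what defeats the “laptop is in the U.S.” alibi.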

From telemetry to attribution: building the case

In these incidents, a single indicator rarely suffices. High latency can have many explanations: a poor connection, saturated routes, a corporate VPN, or local performance issues. What makes the difference is whether the signal persists and correlates with other data.

When security teams review such cases, they typically look for:

  • Evidence of remote control (software, drivers, processes, session patterns).
  • Consistency between declared location and actual network routes.
  • Behavioral changes: working hours, pace, login cadences.
  • “Soft” signals accompanying technical indicators: inconsistencies in written communication, out-of-context expressions, or repeated patterns in resumes.

In the Amazon case, the key was a subtle detail, but the conclusion came from contextual analysis: the keystroke delay opened the door; the investigation confirmed the deception.

Why this also concerns small businesses

The most striking aspect of this type of fraud is that it’s not limited to tech giants. In fact, many campaigns target those with fewer resources to investigate: SMEs, consulting firms, startups, suppliers, and subcontractor networks.

The argument is simple: “legitimate” access to a corporate environment can be exploited for various malicious purposes, from salary theft to data theft, lateral movement, or extortion. And, as U.S. authorities have warned in related criminal cases, these networks have infiltrated hundreds of organizations via remote hiring, intermediaries, and hardware managed by third parties.

The uncomfortable lesson: security isn’t only about blocking exploits; sometimes it’s about securing hiring

The 110-millisecond case illustrates this clearly: attacks don’t always exploit technical vulnerabilities; sometimes they come in through HR. Once an attacker has a valid account, device, and role, the rest is patience and operational discipline.

That’s why the most effective measures usually combine three layers:

  1. Identity: enhanced verification, periodic revalidation, and specific controls in third-party processes.
  2. Device: managed endpoints, remote control detection, inventory, posture assessment, and behavior-based alerts.
  3. Access: least privilege, segmentation, temporary permissions, and monitoring of routes/networks incompatible with the declared location.
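The three layers above translate naturally into an all-or-nothing gate: a session must pass every layer, because each one alone is exactly what the fraud scheme is designed to satisfy. The field names below are illustrative assumptions about what such telemetry might expose:

```python
def session_allowed(s):
    """Gate a session on all three layers at once; passing any single
    layer is not enough. Field names are illustrative.
    """
    identity_ok = s["identity_verified"] and s["revalidated_recently"]
    device_ok = s["endpoint_managed"] and not s["remote_control_detected"]
    access_ok = s["least_privilege"] and s["route_matches_declared_location"]
    return identity_ok and device_ok and access_ok

# A session that looks fine on paper but is remotely controlled
session = {
    "identity_verified": True, "revalidated_recently": True,
    "endpoint_managed": True, "remote_control_detected": True,
    "least_privilege": True, "route_matches_declared_location": False,
}
print(session_allowed(session))  # False
```

This is the scheme’s weakness in miniature: the intermediary can satisfy the identity and device paperwork, but not the network and control checks at the same time.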

In a world where remote work is the norm, the almost paradoxical conclusion is: companies need to learn to distrust what seems to work too perfectly. Because an infiltrator’s signal can be as subtle as an echo of 110 milliseconds, repeated over and over until someone decides to investigate.


Frequently Asked Questions

What is the “remote IT worker” fraud linked to North Korea?
It’s a scheme where operators use fake or stolen identities to secure remote technical jobs and gain legitimate access to corporate environments, often through local intermediaries managing hardware.

How is a corporate laptop controlled by someone from another country detected?
It’s usually identified by combining endpoint telemetry (remote control tools), network signals (routes, ASNs, relays), and behavior analysis (consistent latencies, working hours, session patterns) against the declared location.

Why can keystroke latency reveal concealed remote access?
Because when real control is distant from the physical device, keystrokes travel through more network hops, often causing consistent delays. It’s not definitive proof but is a useful signal when correlated with other evidence.

What minimum controls should companies enforce for remote IT hiring and subcontractors?
Enhanced identity verification, managed corporate devices, strict control or prohibition of unauthorized remote access software, role-based segmentation, least privilege access, and auditing of suppliers and contractors.

Source: cybersecurity news
