AI Makes Faking Real-Time Skills Possible | Generated by AI

2026.03

Question: In the current AI era, is it easier to fake one’s abilities through GitHub commits, good social media posts, or written content — while real-time Zoom calls and offline discussions are harder to fake because AI can’t easily assist voice or video communication in real time?

Answer:

Your Intuition Is Partially Right — But the Situation Is Rapidly Evolving

You’re pointing at a real and important trend. The premise — that text/async content is easier to fake than real-time communication — was largely true until recently. But as of 2025–2026, the picture is more complicated and alarming than most people realize.


✅ What You’re Right About: Text & Async Content Is Easily Faked

Generative AI has made it effortless to inflate achievements or mask inexperience behind convincing text. As one Senior Talent Acquisition Director noted: “Half the resumes I review read like they came from the same template. It’s not that people are lying — it’s that AI makes exaggeration effortless.”

Specifically, the following are now trivially fakeable:

- Resumes and cover letters polished, or wholly written, by LLMs
- GitHub commit histories padded with AI-generated code
- Social media posts and "thought leadership" written content
- Take-home assignments and other async written assessments

Traditional screening tools measure what is written, not why or how. A resume claiming "data science expertise" passes the filter whether the candidate built a real model or just read a Medium post.


⚠️ Where It Gets Complicated: Real-Time Calls Are No Longer “Safe”

This is the crucial update to your mental model. Real-time video and voice communication is now also fakeable. The technology has moved shockingly fast.

Deepfake Video in Zoom Calls

Advances in AI now allow real-time face and voice cloning during live interviews. In June 2025, security firm Pindrop demonstrated this on live TV — they transformed a reporter’s face in real time on a Zoom call and created a voice clone that could engage in unscripted conversation.

Deepfake software maps a fake face onto the fraudster’s movements on a video call, while voice cloning technology mimics a desired voice or accent. The result is a “virtual avatar” that can speak and interact with interviewers as if it were a real candidate.
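The "virtual avatar" described above can be pictured as a per-frame pipeline. The sketch below is purely structural and all stage names are hypothetical: each stage is a stub that only tags the frame, and no actual face or voice synthesis is performed. The point it illustrates is that every stage adds processing latency, which is exactly what the liveness checks discussed later exploit.

```python
# Structural sketch of a real-time "virtual avatar" pipeline (all names
# hypothetical). Each stage is a stub that tags the frame; no synthesis occurs.
from dataclasses import dataclass, field

@dataclass
class Frame:
    index: int
    stages: list = field(default_factory=list)

def capture(index):
    # 1. Grab a frame of the fraudster's real webcam feed.
    return Frame(index)

def track_landmarks(frame):
    # 2. Detect head pose and facial landmarks that will drive the avatar.
    frame.stages.append("landmarks")
    return frame

def render_face(frame):
    # 3. Map the cloned target face onto the tracked movements.
    frame.stages.append("face_swap")
    return frame

def sync_voice(frame):
    # 4. Align the cloned voice stream with the rendered lip motion.
    frame.stages.append("voice_clone")
    return frame

def emit_virtual_camera(frame):
    # 5. Feed the composited frame to a virtual webcam that Zoom sees.
    frame.stages.append("virtual_cam")
    return frame

def pipeline(n_frames):
    out = []
    for i in range(n_frames):
        f = capture(i)
        for stage in (track_landmarks, render_face, sync_voice, emit_virtual_camera):
            f = stage(f)
        out.append(f)
    return out
```

Because every frame must pass through all four stages before the interviewer sees it, sudden movements or occlusions tend to produce visible lag or artifacts.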

The Scale of the Problem

Deepfake fraud attempts rose 1,300% in 2024, according to Pindrop’s 2025 Voice Intelligence Report. And 41% of organizations have hired a fraudulent candidate without knowing it.

A 2025 Greenhouse survey found that 91% of U.S. hiring managers have caught or suspected AI-driven candidate misrepresentation — including fake voices/backgrounds (32%) or AI scripts during job interviews (32%). Nearly 1 in 5 hiring managers reported encountering deepfakes.

A Real-World Example That Shocked the Security Industry

In July 2024, KnowBe4 — a top cybersecurity training company — hired a “Principal Software Engineer” who passed four video interviews, background checks, and reference checks. The person was actually a North Korean operative using a stolen US identity and an AI-enhanced photo. When the company laptop arrived, it began loading malware.

Beyond Hiring: Real-Time Deepfake Fraud in Business

In March 2025, a finance director at a multinational firm in Singapore authorized a $499,000 wire transfer after joining a Zoom call with what appeared to be senior leadership. Every face was a deepfake, every voice was AI-generated — and the money vanished before the fraud was discovered.


🏢 How Companies Are Responding

Since remote/Zoom interviews are now compromised, companies are going back to basics:

To verify authenticity, 39% of U.S. hiring managers are now conducting more in-person interviews, accepting the extra time and cost it takes to separate real talent from fakes.

Even corporate giants like Google and McKinsey reintroduced mandatory in-person interviews by mid-2025 to counter the surge in AI interview fraud.

Other detection techniques now include:

- Liveness challenges during the call, such as asking the candidate to turn their head or pass a hand in front of their face, which tends to break real-time face-swap overlays
- Watching for lip-sync drift, lighting inconsistencies, and visible lag after sudden movement
- Verifying a government ID against the live video feed before the interview begins

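Liveness signals like these can be combined into a simple score. The sketch below is a toy heuristic, not a real product: the function name, signal choices, and threshold values are all assumptions for illustration. It flags sessions where the video lags behind a physical challenge, audio and lips drift apart, or the face almost never blinks.

```python
# Toy heuristic (hypothetical thresholds): combine three liveness signals
# from a live video call into a 0.0-1.0 suspicion score.
def suspicion_score(challenge_lag_ms, av_sync_offset_ms, blinks_per_min):
    """Higher means more likely a real-time deepfake (illustrative only)."""
    score = 0.0
    if challenge_lag_ms > 500:        # slow recovery after a "wave your hand" challenge
        score += 0.4
    if abs(av_sync_offset_ms) > 120:  # lips and audio drifting apart
        score += 0.35
    if blinks_per_min < 8:            # humans typically blink ~15-20 times/min
        score += 0.25
    return round(min(score, 1.0), 2)

# Example: laggy challenge response, bad lip sync, almost no blinking.
suspicion_score(800, 200, 4)  # → 1.0
```

No single signal is conclusive; the point is that cheap, layered checks raise the cost of fraud even when each individual check can be evaded.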

🔑 The Core Insight: What Still Works for Verification

The honest answer is that no single channel is fully trustworthy anymore. The most reliable remaining signals of authentic competence are:

| Signal | Trustworthiness in 2026 |
| --- | --- |
| GitHub / portfolio | Low (easily AI-generated) |
| Written essays / cover letters | Very low |
| Async coding test | Low (LLMs can solve most) |
| Zoom / video interview | Medium-low (deepfakeable, but detectable with effort) |
| In-person interview | Still high |
| Pair programming / whiteboard in person | High |
| Behavioral follow-up reasoning ("why did you choose this approach?") | High |
| Long-term work references from known contacts | High |

Authentic data in hiring ultimately comes from three sources: real-time conversations that reveal communication skills and interpersonal dynamics, work samples or practical tests that show abilities in action, and verified references from former colleagues who can speak to actual performance.
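One way to make the table concrete is to encode its qualitative ratings as numeric weights and score a hiring process by the signals it actually uses. Everything below — the signal names, the weights, and the averaging rule — is an assumption chosen purely to mirror the table, not an established methodology.

```python
# Illustrative weights derived from the qualitative table above
# (Low ≈ 0.2, Very low ≈ 0.1, Medium-low ≈ 0.35, High ≈ 0.8-0.85).
TRUST_WEIGHTS = {
    "github_portfolio": 0.2,
    "written_essays": 0.1,
    "async_coding_test": 0.2,
    "video_interview": 0.35,
    "in_person_interview": 0.8,
    "in_person_pairing": 0.85,
    "followup_reasoning": 0.8,
    "known_references": 0.8,
}

def verification_strength(signals_used):
    """Average trust weight (0-1) of the signals a hiring process relies on."""
    weights = [TRUST_WEIGHTS[s] for s in signals_used]
    return round(sum(weights) / len(weights), 2)

# A purely async process scores far lower than an in-person one:
verification_strength(["github_portfolio", "written_essays"])      # → 0.15
verification_strength(["in_person_interview", "known_references"]) # → 0.8
```

The averaging is deliberately naive; the takeaway it encodes is the same as the table's: processes anchored in in-person, unscripted interaction dominate processes built on async artifacts.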


Summary

Your intuition was correct until ~2024. Now, even real-time video and voice can be faked with off-the-shelf tools. The last truly difficult-to-fake channel is in-person, unscripted interaction — which is why companies are physically bringing candidates back into the office for interviews. The AI era hasn’t just blurred the line between real and fake text — it has blurred the line between real and fake humans.
