
Beating the synthetic applicant tax in your hiring process

Yufan Zheng
Founder · ex-ByteDance · MSc Peking University

You open your inbox on a typical morning. 400 applications for a mid-level ops role.

You click the first one. The cover letter is a flawless, four-paragraph essay about how the candidate is "passionate about driving operational excellence" and "eager to align with your strategic objectives."

You click the next. Same structure. Same buzzwords.

No typos. No personality. Just a wall of ChatGPT.

You used to filter candidates by who took the time to write a thoughtful letter. That filter is dead. The friction of applying has dropped to zero, and your hiring process is buckling under the weight of it.

The synthetic applicant tax

The synthetic applicant tax is the hidden cost, paid in the hours your team spends reading, screening, and interviewing candidates who used AI to sound competent on paper but cannot do the actual job.

It is a structural shift in hiring. A few years ago, writing a tailored cover letter took a candidate 45 minutes. That friction acted as a natural filter. It proved they wanted the job enough to put the work in, and it proved they could string a coherent sentence together.

That barrier is gone. Candidates now use tools to mass-apply to hundreds of jobs with a single click. The BBC tracked a massive surge in this behaviour, noting that employers are drowning in applications filled with generic phrases like "optimising my skillset" [source](https://www.bbc.co.uk/news/business-68212345).

This hits SMEs the hardest. If you run a 50-person business, you don't have an enterprise talent acquisition team. You don't have layers of HR analysts filtering the noise. The ops manager, the sales director, or the founder ends up reading these applications late at night.

You end up interviewing people who look perfect on paper. Then you get them on a Teams call, and they cannot explain basic concepts they claimed to have mastered. They freeze when asked for a specific example of the "strategic optimisation" they wrote about.

The tax isn't just the wasted 30 minutes on the interview. It's the context switching. It's the frustration of preparing for a call that goes nowhere. It's the weeks lost in the hiring cycle while the actual work piles up and your existing team burns out covering the gap.

Why the obvious fix fails

The obvious fix is buying an off-the-shelf "AI resume screener" or plugging Zapier into OpenAI to score candidates automatically.

Most SMEs try this first. They set up a basic Zapier flow that takes the incoming PDF CV, sends it to ChatGPT, and asks for a score out of 10 based on the job description. Or they pay a monthly subscription for a SaaS tool that promises to detect AI-generated text.

Neither works. And the mechanism behind the failure is deeply frustrating.

First, AI detectors are fundamentally broken. They flag non-native English speakers' writing as AI-generated because of its predictable sentence structures, and they completely miss text from a heavily prompted Claude or ChatGPT. Relying on them means you reject good candidates and pass the sophisticated fakers.

Second, using a basic LLM prompt to score an AI-generated CV is a trap. LLMs are inherent sycophants. They are designed to predict the most likely next word, not to critically evaluate truth. If your job spec asks for "leadership and proactive problem solving", the candidate's AI will generate a CV claiming exactly that.

When your Zapier flow asks ChatGPT to evaluate that CV, the model sees a perfect string match. Zapier's basic OpenAI integration offers neither the full context nor the strict schema enforcement needed to evaluate a CV against a nuanced job spec, so the model defaults to lazy evaluation. It sees a match, outputs a positive score, and moves on.

It gives the candidate a 9/10. You are using an AI to read an AI, and they are just high-fiving each other in the cloud. They bypass your filter entirely.
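To make the failure mechanical rather than abstract, here is roughly what that Zapier-style screener boils down to if you write it as code. This is a minimal sketch assuming the OpenAI Node SDK; the model string and inputs are placeholders, and the open-ended prompt is the whole problem.

```typescript
// A sketch of the naive screener: one open-ended "score this" prompt.
// Model string and inputs are illustrative placeholders.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function naiveScore(jobSpec: string, cvText: string): Promise<string> {
  const res = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        // An open-ended scoring prompt rewards keyword overlap. An AI-written
        // CV that mirrors the spec gets a 9/10 almost every time.
        content: `Job spec:\n${jobSpec}\n\nCV:\n${cvText}\n\nScore this candidate out of 10 and explain why.`,
      },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```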

The pattern I keep seeing is companies building these automated screening flows, only to realise their interview pipeline is still full of candidates who can't do the job. They just have a faster way of moving bad candidates to the interview stage.

You cannot filter synthetic applications by looking at the syntax or matching keywords. The text is too good. You have to change what you are measuring. Not how they write a letter, but how they think through a problem.

The approach that actually works

The fix is an automated hiring pipeline in n8n that forces candidates to solve real scenarios, with Claude grading specific outcomes against a strict JSON schema.

Ditch the cover letter entirely. Delete the upload field from your careers page. Replace it with a hard, specific, asynchronous skills test routed through an automation platform.

You need a system that forces the candidate to demonstrate actual thought, using tools they would use on the job.

Here is what that looks like operationally.

A candidate applies via a simple Typeform. No cover letter, just their name, email, and a LinkedIn URL.

When they hit submit, n8n catches the webhook. The automation immediately emails the candidate a specific scenario based on the role.

If you are hiring an accounts assistant, the email says: "Attached is a messy PDF invoice from a supplier, and a screenshot of a Xero reconciliation screen. The numbers don't match. Reply to this email explaining exactly what you would click to fix it."
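Sketched as plain code, that first leg of the workflow is simple. In n8n it is a Webhook trigger, a Switch on the role, and an email node; the `sendEmail` stub and the scenario wiring below are illustrative assumptions, not n8n's API.

```typescript
// The Typeform webhook payload: no cover letter, just the basics.
type Application = { name: string; email: string; linkedin: string; role: string };

// One hard, specific scenario per role. Add an entry per open role.
const scenarios: Record<string, string> = {
  "accounts-assistant":
    "Attached is a messy PDF invoice from a supplier, and a screenshot of a " +
    "Xero reconciliation screen. The numbers don't match. Reply to this email " +
    "explaining exactly what you would click to fix it.",
};

// Stand-in for n8n's email node or your SMTP client.
async function sendEmail(to: string, subject: string, body: string): Promise<void> {
  console.log(`-> ${to}: ${subject}\n${body}`);
}

// What the webhook trigger does when a submission lands.
export async function handleApplication(app: Application): Promise<void> {
  const scenario = scenarios[app.role];
  if (!scenario) throw new Error(`No scenario configured for role: ${app.role}`);
  await sendEmail(app.email, "Next step: a 15-minute scenario", scenario);
}
```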

This is where the magic happens. n8n waits for the email reply. When it lands in a dedicated Outlook inbox, n8n triggers a Claude API call.

You don't ask Claude "is this a good answer?" That invites the same sycophantic behaviour as the Zapier CV screener. Instead, you use a strict JSON schema to ask extraction questions. "Did the candidate identify the £40 discrepancy? (Boolean)." "Did they mention voiding the line item? (Boolean)." "Is the tone professional? (Boolean)."
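Here is a minimal sketch of that grading call using the Anthropic TypeScript SDK. The tool name, model string, and field names are my assumptions; the pattern that matters is forcing tool use, so Claude has to answer the boolean extraction questions instead of free-writing an opinion.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Extraction questions, not "is this a good answer?". Every field is a
// checkable fact about the reply.
const gradeTool = {
  name: "grade_reply",
  description: "Record what the candidate's reply actually contains.",
  input_schema: {
    type: "object" as const,
    properties: {
      found_discrepancy: {
        type: "boolean",
        description: "Did the candidate identify the £40 discrepancy?",
      },
      mentioned_voiding: {
        type: "boolean",
        description: "Did they mention voiding the line item?",
      },
      professional_tone: {
        type: "boolean",
        description: "Is the tone professional?",
      },
    },
    required: ["found_discrepancy", "mentioned_voiding", "professional_tone"],
  },
};

export type Grade = {
  found_discrepancy: boolean;
  mentioned_voiding: boolean;
  professional_tone: boolean;
};

export async function gradeReply(replyText: string): Promise<Grade> {
  const msg = await anthropic.messages.create({
    model: "claude-sonnet-4-5", // pin whichever model you standardise on
    max_tokens: 1024,
    tools: [gradeTool],
    // Forcing the tool means the response is always schema-shaped JSON.
    tool_choice: { type: "tool", name: "grade_reply" },
    messages: [
      { role: "user", content: `Candidate reply to the Xero scenario:\n\n${replyText}` },
    ],
  });
  const block = msg.content.find((b) => b.type === "tool_use");
  if (!block || block.type !== "tool_use") throw new Error("No structured grade returned");
  return block.input as Grade;
}
```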

If the JSON returns true for the critical steps, n8n pushes the candidate into the 'Interview' column in Pipedrive. It then sends a Slack alert to the hiring manager with a summary of the candidate's approach. If they fail, n8n queues a polite rejection email for three days later.
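The routing itself is trivial once those booleans exist. In this sketch the Pipedrive, Slack, and delayed-rejection helpers are stand-ins for n8n's built-in Pipedrive, Slack, and Wait nodes:

```typescript
// Same shape the grading schema returns.
type Grade = {
  found_discrepancy: boolean;
  mentioned_voiding: boolean;
  professional_tone: boolean;
};

// Stand-ins for n8n's Pipedrive, Slack, and Wait + email nodes.
async function moveToPipedriveStage(email: string, stage: string): Promise<void> {
  console.log(`Pipedrive: ${email} -> ${stage}`);
}
async function postToSlack(channel: string, text: string): Promise<void> {
  console.log(`Slack ${channel}: ${text}`);
}
async function queueRejection(email: string, opts: { delayDays: number }): Promise<void> {
  console.log(`Rejection for ${email} queued for ${opts.delayDays} days' time`);
}

export async function routeCandidate(email: string, grade: Grade): Promise<void> {
  // Only the critical steps gate the interview; adjust to your own schema.
  if (grade.found_discrepancy && grade.mentioned_voiding) {
    await moveToPipedriveStage(email, "Interview");
    await postToSlack("#hiring", `${email} passed the Xero scenario.`);
  } else {
    await queueRejection(email, { delayDays: 3 });
  }
}
```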

This build takes roughly 1 to 2 weeks to ship, and costs £4k-£8k depending on your existing integrations. The API calls cost pennies.

The beauty of this is how it breaks the AI advantage. If a candidate copies your messy Xero scenario into ChatGPT, the model will spit out a generic textbook guide to reconciliation. It won't know your specific edge case because it doesn't have the context of your messy internal ledger.

The candidate replies with a generic essay about accounting principles. Claude parses the reply, sees they missed the specific £40 discrepancy, and fails them.

You stop measuring their ability to prompt an LLM, and start measuring their ability to solve your actual business problems. The synthetic applicant tax disappears. You only talk to people who can do the work.

Where this breaks down

This system is highly effective, but it is not a universal fix. You need to know exactly where it fails before you commit to building it.

First, it breaks down on senior strategic hires. If you are recruiting a CFO or an MD, you do not send them an automated Typeform and a test via a webhook. They will simply walk away.

This approach is built for high-volume, mid-level roles where the applicant pool is flooded with noise. Think sales reps, ops managers, and junior analysts.

Second, it fails if your own internal processes are undocumented. You cannot build a strict JSON grading schema for Claude if your own team doesn't agree on the right answer. If your invoices come in as scanned TIFFs from a legacy accounting system and nobody knows the exact steps to clear them, you can't test a candidate on it.

The grading error rate jumps from around 1% to roughly 12% because the AI is grading against a moving target.

Finally, you have to watch out for the friction threshold. If you make the asynchronous test too long, good candidates will drop out. The test should take no more than 15 minutes to complete. It is a filter, not a free consulting project.

If you respect their time, the good ones will appreciate a process that actually tests their skills rather than their ability to write a generic letter.

What to do this week

You cannot fight AI with AI detectors. You have to change the game entirely. Here is how to start adapting your hiring ops right now.

  1. Kill the cover letter field. Open your HubSpot form, Workable, or whatever you use to collect applications. Delete the cover letter upload option. It is giving you zero signal and costing you hours of reading time.
  2. Write one friction question. Look at the role you are hiring for. Write a single, highly specific question based on a real problem that happened last month. "A customer on Shopify wants a refund for a damaged item, but they threw away the packaging. How do you reply?"
  3. Test the manual version. Before you build the n8n automation, run it manually. Email the friction question to the next 10 applicants. Watch what comes back. You will instantly spot the ones who actually know the job versus the ones who pasted it into ChatGPT.
  4. Map the grading logic. Write down the three specific things a correct answer must include. Once you have those rules on paper, you are ready to automate the screening with a strict JSON schema, as in the sketch below. Stop reading essays. Start testing skills.
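To make step 4 concrete, here is how three rules might map to a grading schema for the refund question in step 2. The criteria themselves are illustrative assumptions; swap in whatever is actually correct for your shop.

```typescript
// Three checkable rules for the Shopify refund question, expressed as the
// strict schema Claude grades against. Criteria are illustrative.
const refundGradeSchema = {
  type: "object" as const,
  properties: {
    asked_for_photo: {
      type: "boolean",
      description: "Did they ask for a photo of the damaged item instead of the packaging?",
    },
    checked_policy_first: {
      type: "boolean",
      description: "Did they avoid promising a refund before checking the returns policy?",
    },
    professional_tone: {
      type: "boolean",
      description: "Would you let this reply go to a real customer as written?",
    },
  },
  required: ["asked_for_photo", "checked_policy_first", "professional_tone"],
};
```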
