How to Beat the 360Brew Algorithm and Avoid the AI Homogenisation Tax

You open LinkedIn right now, and the feed feels like a ghost town of forced enthusiasm. Every second post is a bulleted list about leadership, generated by an LLM, liked by bots, and ignored by actual humans. You check your own company page. Reach is down 47% since last year. You wonder if your marketing manager forgot how to write, or if the algorithm just hates you.
It's the latter, but not for the reasons you think. In late 2024, LinkedIn ripped out its old signal-counting algorithm and deployed 360Brew, a 150-billion-parameter AI model. It doesn't count hashtags. It reads your text. It reads your profile. And it actively hunts for generic, low-effort AI content to suppress.
The AI homogenisation tax
The AI homogenisation tax is the silent 30% to 50% reach penalty LinkedIn's 360Brew algorithm applies to posts that lack unique semantic depth and read like generic LLM output.
This isn't a conspiracy theory. It's a mechanical reality of how LinkedIn evaluates content in 2026. The old algorithm used thousands of fragmented models to count likes, comments, and dwell time. If you posted a generic listicle and got ten friends to comment in the first hour, you went viral.
That era is dead. 360Brew is a unified decoder-only transformer model trained on LinkedIn's proprietary professional data. It doesn't just look at engagement velocity. It performs zero-shot reasoning on your post to understand its meaning, context, and expertise level.
When your ops manager uses ChatGPT to draft a post about the future of logistics, the output is statistically average text. 360Brew recognises that predictable token distribution. It flags the post as low-effort. It then checks your profile. If your headline says Sales Director but you are suddenly posting generic AI thoughts about supply chain logistics, the model sees a coherence mismatch.
LinkedIn shows your post to a tiny test audience. It fails the semantic depth check. It dies. This affects every SME owner trying to scale their personal brand by delegating content creation to a junior marketer armed with a £25/month ChatGPT subscription.
The more you try to automate your thought leadership with generic prompts, the more you pay the AI homogenisation tax. You are optimising for volume in a system that now strictly penalises it. End of.
Why the obvious fix fails
Fully automated AI posting pipelines fail because 360Brew actively penalises the predictable token distribution and lack of semantic novelty that LLMs generate.
Most SMEs try to fix the problem by wiring up Zapier flows to auto-draft posts from industry RSS feeds, or they buy an off-the-shelf SaaS tool that promises viral hooks.
Here is what actually happens. You set up a Zapier automation that watches a specific industry blog. When a new article drops, Zapier sends the URL to ChatGPT with a prompt asking for an engaging LinkedIn post. Zapier pushes the output straight to your HubSpot or Buffer queue.
It looks like you are publishing consistently. But mechanically, you are feeding poison to the 360Brew algorithm.
In my experience reviewing dozens of SME content pipelines, this fully automated approach fails because it fundamentally misunderstands what LinkedIn now rewards. 360Brew doesn't want volume. It wants knowledge density and profile-content coherence.
When ChatGPT summarises an article, it strips out the contrarian edge. It removes the messy, hard-won experience. It outputs a sanitised, bulleted summary that looks exactly like the 4,000 other AI-generated posts published that hour. 360Brew's transformer architecture easily detects this lack of semantic novelty. It knows you are just repeating the internet back to itself.
Worse, SMEs often try to game the system using AI comment pods to boost early engagement. Under the old rules, a flurry of comments in the first 60 minutes guaranteed reach. Now, 360Brew detects auto-generated comments and penalises the author. The algorithm actually rewards delayed engagement. A thoughtful comment left 48 hours after publishing is worth 4 to 6 times more than a generic reply left in the first ten minutes.
A £25/month ChatGPT subscription can't replace a £35k salary, and it certainly can't replace the actual expertise sitting in your head. When you outsource your perspective to an LLM, you strip out the exact signals the algorithm is built to reward.
The approach that actually works

Make orchestrates a Claude API call with a strict JSON schema to extract raw data from supplier PDFs, pushing facts to Notion for human review.
To survive 360Brew, you must extract proprietary data from your internal operations and use AI strictly for formatting, not creation. You build a system that captures your actual operational exhaust.
Here is a concrete build for a B2B logistics firm. Instead of asking Claude to write about supply chain trends, we tap into reality. Every week, the operations director receives a PDF report from their main freight supplier, detailing port delays and container pricing.
We set up an automation using Make. The operations director forwards that supplier PDF to a dedicated Gmail address. A Make webhook catches the email, extracts the PDF attachment, and sends it to the Claude API.
Pay attention to this part. We don't ask Claude to write a LinkedIn post. We use a strict JSON schema. The system prompt instructs Claude to extract exactly three things: the biggest price jump this week, the specific port causing the most delays, and one data point that contradicts mainstream news.
The Claude API returns a clean JSON payload. Make parses that JSON and pushes it into a Notion database as a draft.
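A minimal Python sketch of that schema-enforcement step, as it might run inside the Make scenario before anything lands in Notion. The field names (`biggest_price_jump` and friends) and the system-prompt wording are illustrative assumptions, not the actual scenario config:

```python
import json

# Hypothetical field names -- an assumption for illustration,
# not taken from a real Make scenario or Claude prompt.
REQUIRED_KEYS = {"biggest_price_jump", "worst_delay_port", "contrarian_data_point"}

SYSTEM_PROMPT = (
    "Extract exactly three facts from the attached freight report. "
    "Respond with JSON only, using the keys: " + ", ".join(sorted(REQUIRED_KEYS))
)

def parse_extraction(raw: str) -> dict:
    """Enforce the strict schema on Claude's reply before the payload
    is pushed into the Notion drafts database."""
    payload = json.loads(raw)  # raises ValueError if the model got chatty
    missing = REQUIRED_KEYS - payload.keys()
    if missing:
        raise ValueError(f"Model omitted required fields: {sorted(missing)}")
    return payload

# A well-formed reply parses cleanly; prose-wrapped output fails fast.
sample = (
    '{"biggest_price_jump": "+18% on 40ft containers", '
    '"worst_delay_port": "Southampton", '
    '"contrarian_data_point": "Felixstowe dwell times fell 2 days"}'
)
facts = parse_extraction(sample)
```

The point of failing fast is that a malformed reply stops the pipeline at the extraction step, rather than quietly pushing half a payload into the founder's review queue.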
Now, the human steps in. The founder opens Notion, sees the extracted hard data, and adds a single sentence of raw perspective: "Everyone is complaining about Felixstowe, but our data shows Southampton is quietly adding 4 days to turnaround times. Plan your Q3 stock accordingly."
This takes two minutes of human time. You copy the text and post it natively on LinkedIn.
Because the post contains specific, proprietary data and a clear, contrarian stance, it has immense knowledge density. 360Brew reads it, recognises the semantic depth, and matches it perfectly to the founder's Logistics Expert profile headline. Readers save the post because it contains actionable data. Under 360Brew, saves are the new north-star metric, worth five times more than a like.
Building this exact pipeline takes about 2-3 weeks of build time and costs roughly £4k-£8k, depending on how messy your source data is.
The main failure mode here is OCR failure. If your supplier sends scanned TIFFs instead of native PDFs, the text extraction breaks, and Claude hallucinates numbers to fill the JSON schema. You catch this by adding a mandatory human-review step in Notion, never auto-publishing directly to LinkedIn. Once you let the machine publish without a human checking the numbers, you risk torching your reputation.
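One cheap guard you can bolt onto the review step: cross-check that every number in the extracted payload actually appears in the source text, and flag near-empty extractions from scanned TIFFs. The 200-character floor and the regex check below are illustrative assumptions, not tuned production values, and they supplement the human review rather than replace it:

```python
import re

def looks_unreliable(pdf_text: str, payload: dict) -> bool:
    """Flag a draft for extra scrutiny during the Notion review step.
    Thresholds here are illustrative assumptions, not tuned values."""
    # Scanned TIFFs usually yield little or no extractable text.
    if len(pdf_text.strip()) < 200:
        return True
    # Every number Claude claims to have extracted should literally
    # appear in the source text; if not, suspect a hallucination.
    for value in payload.values():
        for num in re.findall(r"\d+(?:\.\d+)?", str(value)):
            if num not in pdf_text:
                return True
    return False

# Hypothetical example data, for illustration only.
report = (
    "Weekly freight summary: 40ft container rates rose 18 percent week on "
    "week. Southampton turnaround times increased by 4 days, while "
    "Felixstowe dwell times were broadly flat across the same period. "
    "Carriers expect congestion to persist into Q3 planning cycles."
)
good = {"worst_delay_port": "Southampton, adding 4 days"}
bad = {"worst_delay_port": "Southampton, adding 9 days"}
```

A flagged draft still goes to Notion; it just arrives with a warning attached, so the founder knows to check the numbers against the original PDF before posting.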
Where this breaks down
This data-extraction approach fails completely if your business lacks a specialised niche or proprietary data sources. 360Brew strictly enforces topic consistency. The algorithm expects you to pick two or three core topics aligned with your profile and stick to them for at least 90 days.
If your Notion pipeline feeds you a brilliant insight about commercial real estate on Tuesday, and a sharp take on software pricing on Thursday, the algorithm gets confused. It sees a coherence mismatch and throttles your reach, no matter how good the individual posts are. The machine wants a clear, predictable lane.
You need to check your own operational exhaust before committing to this build. If your daily inputs are just generic industry newsletters, piping them through Claude will only give you a slightly better version of the AI homogenisation tax.
You must have access to proprietary data. Raw client transcripts, messy supplier invoices, or internal Slack debates. If you don't have unique inputs, no amount of API orchestration will save your reach. The machine can only parse the reality you feed it. If your reality is generic, your output will be too. You cannot fake semantic depth.
Three questions to sit with
Auditing your LinkedIn strategy requires testing your automated pipelines against the new semantic requirements of 360Brew. The platform now demands actual expertise, and it has the technical infrastructure to enforce that demand. Before you spend another pound on a social media scheduling tool, or sign up for another AI copywriting subscription that promises viral hooks, test your current setup against reality. Stop trying to outsmart a 150-billion-parameter model with a generic prompt.
- If I stripped my name and company logo off my last five LinkedIn posts, could my competitors have published the exact same text without changing a single word?
- Does my LinkedIn profile headline explicitly match the specific technical topics my automated content pipeline is currently producing?
- Am I feeding my AI tools raw, proprietary business data to extract insights, or am I just asking an LLM to summarise public articles anyone can read?
Get our UK AI insights.
Practical reads on AI for UK businesses — teardowns, how-to guides, regulatory news. Unsubscribe anytime.