
Workers Fight AI Job Clones: GitHub Anti-Distill Explained

Viral GitHub saga: Chinese firms used 'skill' files to build AI employee clones; workers countered with anti-distill tools that strip out key knowledge. Community sentiment runs 60% positive.

@MilkRoadAI posted on X

This is WILD. A secret workplace war just broke out in China, and it has gone fully viral on GitHub. Companies started ordering their workers to document all their knowledge as AI "skill files." Why? To replace those same workers with AI. But workers figured out the plan fast, and they fired back.

Someone built a tool called colleague.skill: software that scrapes a coworker's chat logs, emails, and work docs from Chinese platforms like Feishu and DingTalk, then clones them into an AI agent. The idea was savage: digitize your colleague before they digitize you, hand the AI clone to the company, and watch your coworker get laid off while you survive. It was a real GitHub project that exploded in popularity within days.

Then someone else entered the chat and changed everything. A developer released anti-distill.skill, a tool that takes the skill file your company forces you to write and strips out every piece of real knowledge before you hand it in. The output looks perfectly professional, totally complete, and impressively detailed, but every critical insight has been quietly removed. Your company gets a hollow shell while you keep the real knowledge locked away in a private backup. The tool even offers three intensity levels (light, medium, and heavy) depending on how closely your bosses are watching.

Companies across China have been building AI digital twins of departed employees, feeding their old chat histories and documents into large models to produce clones that keep working after the humans are gone. In one verified case, an employee left and their replacement was literally an AI trained on every message they had ever sent.

The anti-distill tool went viral on GitHub within hours of being posted, racking up stars faster than almost anything trending that week. The implications reach far beyond China's borders. Every knowledge worker on earth now faces a version of this question: when your company asks you to document your process, it may be building the tools to replace you.


Community Sentiment Analysis

Real-time analysis of public opinion and engagement

Sentiment Distribution

Engaged: 77%
Positive: 60%
Negative: 17%
Neutral: 22%

Key Takeaways

What the community is saying — both sides

Supporting

1. Employees will sabotage or "poison" skill files: they will submit hollow, fake, or corrupted documentation so that AI trained on it becomes useless or dangerous to the employer.

2. People will simply refuse to hand over their tacit knowledge: workers will hide critical judgment and keep expertise in their heads, or build independent, sovereign systems.

3. Incentives matter more than the tools: people share when they benefit; when they feel threatened, they withhold or distort what they know.

4. Companies are building "digital twins" to replace departed staff: they feed chat logs and documents into models to keep work going without humans.

5. Human judgment can't be fully codified: many replies insist that subtle decision-making and context-dependent judgment won't survive translation into skill files.

6. This can backfire economically for employers: mass replacement risks collapsing demand; if people have no jobs, who buys the products?

7. AI trained on internal logs will produce confident but wrong output: messy corporate data and edited skill files will yield catastrophic or comedic failures.

8. Workers will weaponize AI against management: using models to expose incompetence, improve bargaining power, or turn the tables on bosses.

9. Ethical, legal, and trust problems are central: fears of wrongful firing, privacy invasion, and a "capitalist hellscape" where the sustainability of workers is ignored.

10. Workplace power dynamics are shifting into an arms race: companies try to extract knowledge; workers adapt with hollow files, sabotage, or counter-tools, making this a contested battleground.

Opposing

1. People can't be cloned by copying outputs: critics argue that copying messages or "personality" ignores the inputs and internal processes that create the work, so an LLM trained on someone's chat logs won't truly replace that person.

2. Documentation increases value: several replies say well-written SOPs and skill files make workers more valuable and are the right way to scale organizations; those who document are rewarded, not replaced.

3. This is hype and nonsense: many view the claim as exaggerated or a "cool story" creepypasta that doesn't reflect how work actually gets done.

4. Legal and regulatory limits will push back: some point to laws (or potential laws) that ban firing people to replace them with AI, and predict regulatory or legal collapse of this approach to layoffs.

5. AI-native companies will outcompete incumbents: others warn the real effect will be market-driven; firms built around AI will beat companies weighed down by human "deadweight."

6. Adapt or reskill: practical responses urge workers to learn new skills, document their work deliberately, or build technical countermeasures (e.g., "anti-distill") to protect their value and income.

Top Reactions

Most popular replies, ranked by engagement

@kevinkoosk (Supporting)
"This is why in the old Kung Fu movies, the master always kept one last move hidden from his student - just in case the student decided to challenge him. Usually it would be handed down at the end of his life ... written in a book or parchment, hidden in some cave..."
135 · 4 · 5.5K

@WeUnicate (Supporting)
"The quietest revolution today is not fighting the system. It is refusing to outsource your thinking, your knowledge, and your competence to any external authority — and quietly building your own sovereign life instead."
44 · 1 · 5.3K

@MilkRoadAI (Supporting)
"https://t.co/3hQfNvALMI Also, China!"
32 · 0 · 19.9K

@bradsl (Opposing)
"Now every AI worker can operate at DMV efficiency!"
19 · 1 · 1.5K

@DarnellTheGeek (Opposing)
"I don’t get it. People were being paid to just send messages? How does implanting an AI with someone’s personality and which areas of domain knowledge they’ve communicated about serve as a replacement any more than what the LLM already knows?"
8 · 2 · 3.7K

@inimei10 (Opposing)
"In the last place I was hired, I documented purposely everything cause I don’t wanted them to fall down. I think, it’s okay. We should find other ways to generate money."
3 · 1 · 960

This article was AI-generated from real-time signals discovered by PureFeed.
