March 1st, 2025

Yes, Claude Code can decompile itself. Here's the source code

Geoffrey Huntley discusses Claude Code, an AI coding tool capable of self-decompilation, highlighting ethical concerns, LLM effectiveness in coding tasks, and the broader implications for software engineering.


Geoffrey Huntley discusses the capabilities of Claude Code, an AI coding tool that can decompile itself and assist in software development. He highlights the ethical concerns around AI alignment and safety, particularly when AI is used for potentially harmful tasks. Huntley shares his experiences with large language models (LLMs) and their effectiveness at tasks like deobfuscation and transpilation, noting a significant moment of realization during his exploration of software development. He provides insights into the structure of Claude Code, which is built in TypeScript and available on GitHub, although the original source code is not accessible. Huntley outlines a process for decompiling and understanding the application, emphasizing the patience and encouragement required when driving LLMs through complex tasks. He concludes with the broader implications of these technologies, suggesting that similar techniques can be applied across programming languages and even to binary files, showcasing the transformative potential of AI in software engineering.

- Claude Code can decompile itself and assist in coding tasks.

- Ethical concerns exist regarding AI alignment and safety.

- LLMs are effective in deobfuscation and transpilation tasks.

- The source code for Claude Code is currently unavailable.

- Techniques discussed can be applied to various programming languages and binaries.
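The process outlined above hinges on getting a multi-megabyte minified bundle through a model with a much smaller context window, which means chunking the input. A minimal sketch of one way to do that (an assumption on my part: the post does not document the exact chunking strategy, and real minified JS is often a single long line, which would need a smarter splitter):

```javascript
// Illustrative chunker: split a large source file into context-window-sized
// pieces at line boundaries. `maxChars` is a stand-in for the model's limit.
function chunkSource(source, maxChars = 100_000) {
  const chunks = [];
  let current = "";
  // The lookbehind split keeps each trailing newline attached to its line.
  for (const line of source.split(/(?<=\n)/)) {
    if (current.length + line.length > maxChars && current.length > 0) {
      chunks.push(current);
      current = "";
    }
    current += line;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}

console.log(chunkSource("a\n".repeat(10), 6).length); // 4
```

Each chunk would then be sent to the model separately, which is why the process described in the article involves so much babysitting and retrying.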

27 comments
By @markisus - about 1 month
The article contains a reference to a much more impressive task where a user automatically decompiled a binary exe game into Python. But I read their original post and here is what that user said.

> Several critics seemed to assume I claimed Claude had "decompiled" the executable in the traditional sense. In reality, as I described in our conversation, it analyzed visible strings and inferred functionality - which is still impressive but different from true decompilation.

So I’m not sure that the implications are as big as the article author is claiming. It seems Claude is good at de-minifying JavaScript but that is a long way away from decompiling highly optimized binary code.

By @viraptor - about 1 month
I'm not sure why this is framed as an issue for security teams. Transpiling software has been a thing for ages. Especially in the JS world. Decompiling has been a bit harder without automation, but unless you have black box tests, this process will take ages to verify that the result has matching functionality.

So why would the blue teams care beyond "oh fun, a new tool for speeding up malware decompilation"?

Edit: To be clear, I get the new reverse engineering and reimplementation possibilities got much better and simpler. But the alarmist tone seems weird.
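The black-box verification this comment alludes to can be sketched as a differential test: run the same inputs through the original build and the deobfuscated rewrite and diff the outputs. The toy `original`/`rewritten` functions below are hypothetical stand-ins for the two builds:

```javascript
// Differential harness sketch: collect every input where the two
// implementations disagree. An empty result means "no observed mismatch",
// not a proof of equivalence.
function differentialTest(original, rewritten, cases) {
  const mismatches = [];
  for (const input of cases) {
    const a = original(input);
    const b = rewritten(input);
    if (a !== b) mismatches.push({ input, a, b });
  }
  return mismatches;
}

const original = (s) => s.trim().toLowerCase();
const rewritten = (s) => s.toLowerCase().trim();
console.log(differentialTest(original, rewritten, ["  Hi  ", "ok"]).length); // 0
```

Without a harness like this (and a representative input corpus), there is no way to know whether the transpiled output actually matches the original's behavior, which is the commenter's point about the process taking ages.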

By @IshKebab - about 1 month
Erm sure... so is the output actually any good? I don't think anyone doubted that the LLM could produce some output but I would like to know if it is actually good output. Does it compile? Does it make sense?
By @mpalmer - about 1 month
Three years ago, you wrote

> Systemically, I'm concerned that there is a lack of professional liability, rigorous industry best practices, and validation in the software industry which contributes to why we see Boeings flying themselves into the ground, financial firms losing everyone's data day in and out, and stories floating around our industry publications about people being concerned about the possibility of a remotely exploitable lunar lander on Mars.

> There's a heap of [comical?] tropes in the software industry that are illogical/counterproductive to the advancement of our profession and contribute to why other professions think software developers are a bunch of immature spoiled children that require constant supervision.

3 weeks ago you posted something titled "The future belongs to people who can just do things".

Today you post this:

> Because cli.mjs is close to 5mb - which is way bigger than any LLM context window out here. You're going to need baby sit it for a while and feed it reward tokens of kind words ("your doing good, please continue") and encourage it to keep on going on - even if it gives up. It will time out, lots...

I don't think you are someone who can just "do things" if you think a good way to de-obfuscate 5MB of minified javascript is to pass it to a massive LLM.

Do you think you are advancing your profession?

By @jameshart - about 1 month
This feels very much like the work of someone with ‘just enough knowledge to be dangerous’.

At no point in this process does the author seem to stop and inspect the results to see if they actually amount to what he’s asking for. Claiming that this output represents a decompilation of the obfuscated target seems to require at least demonstrating that the resulting code produces an artifact that does the same thing.

Further, the claim that “Using the above technique you can clean-room any software in existence in hours or less.” is horrifyingly naive. This would in no way be considered a ‘clean room’ implementation of the supplied artifact. It’s explicitly a derived work based on detailed study of the published, copyrighted artifact.

Please step away from the LLM before you hurt someone.

By @saagarjha - about 1 month
> You might be wondering why I've dumped a transpilation of the source code of Claude Code onto GitHub and the reason is simple. I'm not letting an autonomous closed source agent run hands free on my infrastructure and neither should you.

Asking it for its source code (AI never lies, right?) and then buying it on your personal card so corporate security doesn’t know what you’re doing makes me feel a lot better about it.

By @zeckalpha - about 1 month
That's not the usual definition of clean room.

If you had it generate tests then handed the tests off to a second agent to implement against...

By @zahlman - about 1 month
> Please understand that restrictive software licenses no longer matter because these LLMs can be driven to behave like Bitcoin mixers that bypass licensing and copyright restrictions using the approach detailed in this blog post.

This reads to me like "Please understand that legal protections no longer matter because computers can now break the law for you automatically".

By @thegeomaster - about 1 month
This is total bullshit. It's clear by spending 2 minutes with the output, located on https://github.com/ghuntley/claude-code-source-code-deobfusc....

The AI has just made educated guesses about the functionality, written some sensible-looking code, and hallucinated a whole lot.

The provided code on GitHub does not compile, does not work in the slightest, does not include any of the prompts from the original source, does not contain any API URLs and endpoints from the original, and uses Claude 3 Opus! And this is just from a cursory 5-minute look.

By @jbellis - about 1 month
A better writeup on reverse engineering CC: https://github.com/Yuyz0112/claude-code-reverse
By @aeve890 - about 1 month
People need an LLM to transpile JS now? Unless it can reliably extract semantics, I don't see the novelty.
By @vlovich123 - about 1 month
> Please understand that restrictive software licenses no longer matter because these LLMs can be driven to behave like Bitcoin mixers that bypass licensing and copyright restrictions using the approach detailed in this blog post.

I’m pretty sure translation of a text into another language would still count as copyright infringement. It may be hard to prove, but this isn’t a copyright bypass.

By @mtrovo - about 1 month
I don't understand Anthropic's decision to release this project as an npm package but not open-source it. Claude Code is such a great example of how agents could work in the future that the whole community could benefit from studying it. Plus, the work on integrating MCPs alone could create a huge network effect opportunity for them, one that's much bigger than keeping the source code secret.

All they've done so far is add an unnecessary step by putting a bounty on who will be the first to extract all the prompts and the agent orchestration layer.

By @ojr - about 1 month
I just inherited a Flutter project with no readme and no prior Flutter experience. AI helps, but adding new features and deploying is still a tall task; having a conversation with the previous contributors is invaluable and somehow underrated these days.
By @yellow_lead - about 1 month
> cli.mjs

> This is the meat of the application itself. It is your typical commonjs application which has been compiled from typescript.

Why is it .mjs then?
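For context: the `.mjs` extension tells Node to parse the file as an ES module, while CommonJS uses `require`/`module.exports`, so the article's "typical commonjs application" description and the `.mjs` extension are in tension. A rough sketch of the syntactic difference (the `guessModuleFormat` helper is hypothetical, not part of any tool mentioned here):

```javascript
// Heuristic only: real detection is done by Node's module loader based on
// the file extension and package.json "type", not by scanning source text.
function guessModuleFormat(source) {
  if (/\b(import\s.+\sfrom\s|export\s)/.test(source)) return "esm";
  if (/\b(require\(|module\.exports)/.test(source)) return "cjs";
  return "unknown";
}

console.log(guessModuleFormat('import fs from "node:fs";')); // "esm"
console.log(guessModuleFormat('const fs = require("fs");')); // "cjs"
```

A bundler can emit either format from TypeScript sources, so the extension describes the output artifact, not how the code was authored.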

By @iLoveOncall - about 1 month
This is beyond clickbait, a node application that includes the map files is not even remotely "compiled".
By @amelius - about 1 month
> these LLMs are shockily good at transpilation and structure to structure conversions

I wonder if it is possible to transpile all the C Python modules to an api version that has no GIL, this way.

By @api - about 1 month
It has always been possible to decompile and deobfuscate code. This makes it way, way easier, though it still requires effort. What this produces is not going to be perfect.

The author thinks this invalidates the business models of companies with closed source or mixed open and closed components. This misunderstands why companies license software. They want to be compliant with the license, and they want support from the team that builds the software.

Yes, hustlers can and will fork things just like they always have. There are hustlers that will fork open source software and turn it into proprietary stuff for app stores, for example. That's a thing right now. Or even raise investment money on it (IMHO this is borderline fraud if you aren't adding anything). Yet the majority of them will fail long term because they will not be good at supporting, maintaining, or enhancing the product.

I don't see why this is so apocalyptic. It's also very useful for debugging and for security researchers. It makes it a lot easier to hunt for bugs or back doors in closed software.

The stuff about Grok planning a hit on Elon is funny, but again not apocalyptic. The hard part about carrying out a hit is doing the thing, and someone who has no clue what they're doing is probably going to screw that up. Anyone with firearms and requisite tactical training probably doesn't need much help from an LLM. This is sensationalism.

I've also seen stuff about Grok spitting out how to make meth. So what? You can find guides on making meth -- whole PDF books -- on the clear web, and even more on dark web sites. There are whole forums. There's even subreddits that do not not (wink wink nudge nudge) provide help for people cooking drugs. This too is AI doom sensationalism. You can find designs for atomic bombs too. The hard part about making an a-bomb is getting the materials. The rest could be done by anyone with grad level physics knowledge, a machine shop, and expertise in industrial and electrical engineering. If you don't have the proper facilities you might get some radiation exposure though.

There is one area that does alarm me a little: LLMs spitting out detailed info on chemical and biological weapons manufacture. This is less obvious and less easy to find. Still: if you don't have the requisite practical expertise you will probably kill yourself trying to do it. So it's concerning but not apocalyptic.

By @bavell - about 1 month
I appreciate the content and respect the hustle but I'm really not a fan of the author's writing style.
By @yodon - about 1 month
I found this article [0] by the same author and linked in the post more personally valuable - great insights into expert-level use of Cursor.

[0] https://ghuntley.com/stdlib/

By @licnep - about 1 month
Interesting, I never thought about this use case before, but LLMs may be exceedingly good at code deobfuscation and decompilation.
By @DrNosferatu - about 1 month
Now just integrate with Ghidra!
By @meindnoch - about 1 month
Horribly obnoxious writing style. Is there a name for this? It's like the written equivalent of TikTok trash or MrBeast videos.
By @gtirloni - about 1 month
TL;DR: developer asks Claude Code to revert TypeScript minification ("decompile"). The target is Claude Code's own CLI tool.