Self-hosting a Copilot replacement: my personal experience
The author shares their experience self-hosting a GitHub Copilot replacement using local Large Language Models (LLMs). Results varied, with none matching Copilot's speed and accuracy. Despite challenges, the author plans to continue using Copilot.
The article discusses the author's experience self-hosting a GitHub Copilot replacement using local Large Language Models (LLMs). The author, a software developer, explores this alternative to external services such as Copilot and ChatGPT, and is careful to frame AI tools as assistants rather than replacements for human understanding and decision-making. The experiment involved running LLMs locally on a MacBook Pro and testing various models and VSCode extensions. Results varied with the model used, and none matched the speed and accuracy of GitHub Copilot. The author concludes that while a personal, private code assistant is appealing, matching Copilot's performance is challenging, though they expect models and extensions to improve over time. For now, the author plans to keep using GitHub Copilot, and the article invites suggestions of better models or extensions to test.
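For readers who want to try a similar setup, here is a minimal sketch of the kind of local inference round trip that VSCode extensions of this sort typically wrap. It assumes an Ollama server running on its default port with a code model already pulled; the model name and prompt are illustrative assumptions, since this summary does not name the exact models or extensions the author tested.

```python
import json
import urllib.request

# Minimal sketch: ask a locally hosted LLM for a code completion.
# Assumes an Ollama server on its default port (http://localhost:11434)
# with a code model pulled beforehand, e.g. `ollama pull codellama`.
# The model name is an illustrative assumption, not from the article.
OLLAMA_URL = "http://localhost:11434/api/generate"

def complete(prompt: str, model: str = "codellama") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(complete("# Python function that reverses a string\ndef "))
</code>
```

The speed the author compares against Copilot is largely the latency of round trips like this one, which on laptop hardware depends heavily on model size and quantization.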
Related
To the people not having good success in this thread, I would suggest trying again.
After trying multiple open models, switching back to GPT-4o and seeing the speed and quality of its output was illuminating.
"While the idea of having a personal and private instance of a code assistant is interesting (and can also be the only available option in certain environments), the reality is that achieving the same level of performance as GitHub Copilot is quite challenging.".
But considering the pace at which AI and its ecosystem advance, things might change soon.
I say that because we don't need datacenter speeds for a single user, but there is no avoiding the memory requirements (see the back-of-envelope sketch below).
I don't think it will happen. The market is too niche. People are happy to fork over $5/mo.
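To put rough numbers on the memory point made above (an editorial back-of-envelope sketch, not a figure from the article or the thread): the weights of an N-billion-parameter model occupy roughly N × 10⁹ × bits-per-parameter / 8 bytes, which is why quantization is what makes single-user local hosting feasible at all.

```python
# Back-of-envelope RAM estimate for holding model weights in memory.
# Rule of thumb only: real usage adds KV cache and runtime overhead.
def weights_gb(params_billions: float, bits_per_param: float) -> float:
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

for params in (7, 13, 34, 70):
    fp16 = weights_gb(params, 16)   # unquantized half precision
    q4 = weights_gb(params, 4.5)    # ~4-bit quantization (e.g. Q4_K_M)
    print(f"{params:>3}B params: ~{fp16:5.1f} GB fp16, ~{q4:5.1f} GB 4-bit")
```

On a typical 16 GB MacBook Pro, this arithmetic limits comfortable local hosting to roughly 7B-13B class models, which may partly explain the quality gap against Copilot that the article reports.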