Microsoft-owned GitHub announced on Wednesday a free version of its popular Copilot code completion/AI pair programming tool, which will also now ship by
Microsoft-owned GitHub announced on Wednesday a free version of its popular Copilot code completion/AI pair programming tool, which will also now ship by default with Microsoft’s popular VS Code editor. Until now, most developers had to pay a monthly fee, starting at $10 per month, with only verified students, teachers, and open source maintainers getting free access.
GitHub also announced that it now has 150 million developers on its platform, up from 100 million in early 2023.
“My first project [at GitHub] in 2018 was free private repositories, which we launched very early in 2019,” GitHub CEO Thomas Dohmke told me in an exclusive interview ahead of Wednesday’s announcement. “Then we had kind of a v2 with free private organizations in 2020. We have free [GitHub] Actions entitlements. I think at my first Universe [conference] as CEO, we announced free Codespaces. And so it felt natural, at some point, to get to the point where we also have a completely free Copilot, not just one that is for students and open source maintainers.”
GitHub announced a free version of its Copilot code completion tool, previously available at no cost only to students, teachers, and open-source maintainers. The free plan, limited to 2,000 code completions per month, aims to bring Copilot to more developers worldwide. GitHub also announced that it has reached 150 million developers on its platform.
My question is: why give it away for free? Has their product matured enough to win the AI developer space? Are we reaching the point where you could self-host an AI code assistant as good as Copilot? Or are projects such as johnny.ai (renamed, I'm not going to advertise it) challenging Microsoft's market share in the AI developer space?
My only guess is that Microsoft wants developers to get used to its tools and become further ingrained in its ecosystem. At best, once you're used to that ecosystem, you'll stick with it out of familiarity. At worst, they can use your input (prompts, refactors, etc.) to further the development of Copilot.
To me this smells of typical subsidizing of a product to capture market share then lock in that market share. Anything I'm missing?
Edit: johnny.ai seems to be a domain offered for resale by GoDaddy. I didn't mean to link them, but I'll leave it here. Don't give GoDaddy money; they are a terrible domain name registrar.
To me this smells of typical subsidizing of a product to capture market share then lock in that market share. Anything I'm missing?
That's exactly it.
From their email:
What you get:
2,000 code suggestions a month: Get context-aware suggestions tailored to your VS Code workspace and GitHub projects.
50 Copilot Chat messages a month: Use Copilot Chat in VS Code and on GitHub to ask questions and refactor, debug, document, and explain code.
Choose your AI model: You can select between Anthropic’s Claude 3.5 Sonnet or OpenAI’s GPT 4o.
Render edits across multiple files: Use Copilot Edits to make changes to multiple files you’re working with.
Access the Copilot Extensions ecosystem: Use third-party agents to conduct web searches via Perplexity, access information from Stack Overflow, and more.
So it's just a rate-limited tier meant to get you signed up and then cut you off right when you've gotten used to it. I get access through work, and well, it just sucks.
It's a free sample, which is a very common marketing technique. The free tier only gives you 2,000 code completions a month, so if you end up using it a lot you'll need to switch to a paid tier. Nothing particularly nefarious there.
I mean, ChatGPT isn't sustainable right now either; OpenAI is losing money on it.
Large corps and VC-funded startups will happily burn money to capture a critical mass of users: they frontload cost to capture market share. It's similar to Alexa devices, which are sold dirt cheap to get you into the ecosystem. Rappi has done this in Latin America, Uber did it for a time, etc.
Run Copilot’s proprietary model locally? You’re dreaming. But you can do this with Ollama, and nobody is forcing you to use Copilot. There are many local models that work pretty well.
I used Ollama locally and it worked decently well. Code suggestions were fast and relatively accurate (as far as an LLM goes). The real issue was the battery hit: oh man, it HALVED my battery life, which is already short enough without running an inference server locally.
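For anyone who wants to try this, Ollama exposes a simple HTTP API on localhost (port 11434 by default). Here's a minimal sketch of requesting a completion from a locally pulled model; the model name is just an example, so substitute whatever you've actually pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(prompt, model="codellama:7b"):
    """Build the JSON payload that Ollama's /api/generate endpoint expects."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON response instead of a stream
    }


def complete(prompt, model="codellama:7b"):
    """Send the prompt to the local Ollama server and return the completion text."""
    payload = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server with the model pulled):
#   print(complete("Write a Python function that reverses a string."))
```

No account, no rate limit, no telemetry beyond what you opt into; the trade-off is exactly the local compute (and battery) cost described above.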
As I like to test things before saying something critical about them, I rushed to my GitHub account to test this "Copilot" from GitHub (it's a weird name, considering that Copilot is also the name of Bing's AI; both Bing and Copilot are Microsoft products, so unsurprisingly there's zero creativity coming from them).
So far:
It's nothing new: it's just OpenAI's GPT-4o under the hood (something I already use through OpenAI's website, so thanks for the nothing burger, GitHub)
It's GPT-4o with supposedly some integration with GitHub APIs...
... except that it has no GitHub Gists integration (I use Gists more than I use repos)
... and it fails to retrieve the list of all my repos so far (something I managed to do manually through my browser, by hitting an endpoint of GitHub's API that requires no token and using DevTools to map and format the JSON array into a string list)
The paid version seems to let you pick another LLM: Google Gemini, Anthropic Claude, OpenAI o1 (also known as "Strawberry", the model that can't count how many R's are in its own codename)... and that's it. Also nothing new, even if you dare to pay for it.
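For what it's worth, the manual repo-listing workaround described above can be scripted instead of done through DevTools. A sketch against GitHub's public, unauthenticated `/users/{user}/repos` endpoint (real, but rate-limited to 60 requests per hour without a token; pagination beyond `per_page=100` is left out for brevity):

```python
import json
import urllib.request


def repos_url(user, per_page=100):
    """Build the unauthenticated GitHub REST endpoint for a user's public repos."""
    return f"https://api.github.com/users/{user}/repos?per_page={per_page}"


def repo_names(repos_json):
    """Turn the JSON array the endpoint returns into a plain list of repo names."""
    return [repo["full_name"] for repo in json.loads(repos_json)]


def fetch_repo_names(user):
    """Fetch and list a user's public repos (no token needed, 60 req/hour limit)."""
    req = urllib.request.Request(
        repos_url(user), headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return repo_names(resp.read())


# Usage (needs network access):
#   print(fetch_repo_names("octocat"))
```

Ironic that a few lines of stdlib Python do what the integrated assistant reportedly couldn't.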
Summary: a "nothing burger". It perfectly describes this... "tool"?