GitHub Copilot Workspace: Technical Preview

https://github.blog/2024-04-29-github-copilot-workspace/

308 points by davidbarker on 2024-04-29 | 323 comments

Automated Summary

GitHub has introduced GitHub Copilot Workspace, a developer environment that lets developers go from idea to code to software in natural language. This Copilot-native developer environment is designed to enhance developer creativity, empower systems thinking, and lower the barrier to entry for building software. It starts with the task and builds a full plan based on a deep understanding of the codebase and issue, offering a step-by-step plan, test code, and an editable, streamlined list in natural language. Users can run code directly in Copilot Workspace, share workspaces with their team via a link, and use it from any device. GitHub Copilot Workspace aims to make it easier to get started, learn, and execute, with the goal of enabling a world with one billion developers by lowering the barrier to entry for building software.

Comments

singluere on 2024-04-29

While I've not used this product, I've created a somewhat similar setup using open source LLMs that runs locally. After having used it for about three months, I can say that debugging LLM prompts was far more annoying than debugging code. Ultimately, I ended up abandoning my setup in favor of writing code the good old-fashioned way. YMMV

candiddevmike on 2024-04-29

I had ChatGPT output an algorithm implementation in Go (Shamir Secret Sharing) that I didn't want to figure out. It kinda worked, but every time I pointed out a problem with the code it seemed like more bugs were added (and I ended up hating the "Good catch!" text responses...)

Eventually, figuring out why it didn't work meant I had to read the algorithm spec and basically write the code from scratch, throwing away all of the ChatGPT work. Definitely took more time than doing it the "hard way".

fragmede on 2024-04-29

The skill in using an LLM currently is in getting to where you want to be, rather than wasting time convincing the LLM to spit out exactly what you want. That means flipping between having Aider write the code and editing the code yourself, when it's clear the LLM doesn't get it, or you get it better than it does.

kfajdsl on 2024-04-29

This is the key thing that I feel most people who dislike using LLMs for development miss. You need to be able to quickly tell if the model is just going to keep spinning on something stupid, and just do it yourself in those scenarios. If you're decent at this then there can only really be a net benefit.

esperent on 2024-04-30

The bits GPT4 always gets wrong - and as you say, more and more wrong the further I try to work with it to fix the mistakes - are exactly the bits I want it to do for me. Tedious nested loops that I need to calculate on paper in particular.

What it's good for is high level overview and structuring of simple apps, which saves me a lot of googling, reviewing prior work, and some initial typing.

After my last attempts to work with it, I've decided that until there's another large improvement in the models (GPT5 or similar), I won't try to use it beyond this initial structure creation phase.

The issue is that for complex apps that already have a structure in place - especially if it's not a great structure and I don't have the rights or time to do a refactoring - the AI can't really do anything to help. So in this case, for new, simple, or test projects it'll seem like an amazing tool and then in the real world it's pretty much useless or even just wastes time, except for brainstorming entirely new features that can be reasoned about in isolation, in which case it's useful again.

A counterpoint is that code should always be written in a modular way so that each piece can be reasoned about in isolation. Which doesn't often happen in large apps that I've worked on, unfortunately. Unless I'm the one who writes them from scratch.

KeplerBoy on 2024-04-30

Copilot is a decent autocomplete saving you half a line here and there, that's about it.

Sammi on 2024-04-30

I can regularly get it to autocomplete big chunks of code that are good. But specifically only when it's completely mind-numbingly boring, repetitive and derivative code. Good for starting out a new view or controller that is very similar to something that already exists in the codebase. Anything remotely novel and it's useless.

WorldMaker on 2024-04-30

I have strange documentation habits, and sometimes when you document everything in code comments up front, Copilot does seem to synthesize most of the "bones" that you need from your documentation. It often needs a thorough code review, but it's not unlike sending a requirements document to a very junior developer who sometimes surprises you, and getting back a PR that almost works but needs a fine-tooth comb. A few times I've "finished my PR review" with "Not bad, Junior, B+".

I know a lot of us generally don't write comments until "last" so will never see this side of Copilot, but it is interesting to try if you haven't.
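
To make that concrete, here's roughly what a "comments first" stub looks like before Copilot fills in the body. This is a sketch in Go; the function and the rules are invented for illustration:

  package pricing

  import (
      "fmt"
      "math"
  )

  // ApplyDiscount returns the order total after applying a percentage discount.
  // Rules (this comment block is what Copilot works from):
  //   - percent must be between 0 and 100; anything else is an error
  //   - the result is rounded to two decimal places
  func ApplyDiscount(total, percent float64) (float64, error) {
      // Copilot typically proposes a body much like this from the comments
      // above; it still needs the fine-tooth-comb review.
      if percent < 0 || percent > 100 {
          return 0, fmt.Errorf("percent out of range: %v", percent)
      }
      discounted := total * (1 - percent/100)
      return math.Round(discounted*100) / 100, nil
  }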

Sammi on 2024-05-01

Yes I use this trick regularly too.

torginus on 2024-04-29

An alternative to this workflow that I find myself returning to is the good ol' nicking code from Stack Overflow or GitHub.

ChatGPT works really well because the stuff you are looking for is already written somewhere, and it solves the needle-in-the-haystack problem of finding it very well.

But I often find it tends to output code that doesn't work but eerily looks like it should, whereas GitHub stuff usually needs a bit more wrangling but tends to work.

briHass on 2024-04-29

The big benefit to me with SO is that for a question with multiple answers, the top-upvoted answer likely works, since those votes are probably from people who tried it. I also like the 'well, actually' responses and follow-ups, because people point out performance issues or edge cases I may or may not care about.

I only find current LLMs to be useful for code that I could easily write, but I am too lazy to do so. The kind of boilerplate that can be verified quickly by eye.

sroussey on 2024-04-29

Once it writes the code, take that into a new session to fix a bug. Repeat with new sessions. Don’t let it read the buggy code, it will just get worse.

neom on 2024-04-30

Yah this works for me and I'm not a SWE. I use it to make marketing websites. Sometimes it will do something perfectly but mess up one part; if I keep getting it to fix that one part in the same session, it's almost certainly never going to work (I burnt a week this way). However, if I take it into a brand-new GPT session and say "here is this webpage I wrote, but I made a mistake and the dropdown box should be on the left not the right", it can almost always fix it. Again, I'm not really a SWE so I'm not sure what is going on here, but if you click the dropdown on that "Analyzing" thing that shows up, in the same session it seems to try to re-work the code from memory, while in a new session it seems to be using a different method to re-work the code.

obmelvin on 2024-04-29

Interesting - I almost always iterate on code in the same session. I will try doing it with history off and frequently re-starting the session. I naively assumed the extra context would help, but I can see how it's also just noise when there are 5 versions of the same code in the context.

mvdtnz on 2024-04-29

How do you not let it read the buggy code but also take it into a new session?

goriv on 2024-04-29

I assume just copy and paste it

no_carrier on 2024-04-30

I'm just as confused... If you're copy/pasting the code into a new session, isn't that reading the code?

SushiHippie on 2024-04-30

The way I understand it:

First Variant:

1. User: Asks coding question

2. AI: Outputs half-functioning code

3. User: Asks to fix specific things

4. AI: Creates buggy code

5. User: asks again to fix things

6. AI: Writes even more buggy code

Proposed second variant with copying code:

Everything stays the same until step 4, but instead of asking it to fix the code again, you copy the code into another session. That way you repeat step 3 without the LLM "seeing" the buggy code it previously generated in step 4.

neom on 2024-04-30

I dunno how you SWEs are doing it, but I have ChatGPT output files (zipped, if there are multiple) rather than code snippets (unless I want a code snippet), and then I re-upload those files to a new session using the attach thinger. Also, in my experience just building marketing websites, I don't do step 3; I just do steps 1 and 2 over and over in new sessions. It's longer because you have to figure out a flow through a bunch of work sessions, but it's faster because it makes wwwwaaaaayyyyyyy fewer mistakes. (You're basically just shaking off any additional context the GPT has about what you are doing when you put it in a brand-new session, so it can be more focused on the task, I guess?)

H12 on 2024-04-30

The only time I've had success with using AI to drive development work is for "writer's block" situations where I'm staring at an empty file or using a language/tool with which I'm out of practice or simply don't have enough experience.

In these situations, giving me something that doesn't work (even if I wind up being forced to rewrite it) is actually kinda helpful. The faster I get my hands dirty and start actually trying to build the thing, the faster I usually get it done.

The alternative is historically trying to read the docs or man pages and getting overwhelmed and discouraged if they wind up being hard to grok.

015a on 2024-04-30

I've literally never seen an LLM respond negatively to being told "hold on that's not right"; they always say "Oh, you're right!" even if you aren't right.

GPT-4 today: "Hey are you sure that's the right package to import?" "Oh, sorry, you're right, it's this other package" (hallucinates the most incorrect response only a computer could imagine for ten paragraphs).

I've seen junior engineers lose half a day traveling alongside GPT's madness before an adult is brought in to question an original assumption, or incorrect fork in the road, or whathaveyou.

fwip on 2024-04-29

One thing that's helped me a little bit is to open up the spec as context, and then ask the LLM to generate tests based on that spec.
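
For example (a sketch; the spec and function here are invented), the kind of table-driven test you'd hope to get back from a spec like "usernames are 3-16 lowercase letters or digits" might look like:

  package user

  import "testing"

  // ValidUsername is the hypothetical function under test; the spec handed to
  // the LLM said: "3-16 characters, lowercase letters and digits only".
  func ValidUsername(s string) bool {
      if len(s) < 3 || len(s) > 16 {
          return false
      }
      for _, r := range s {
          if !(r >= 'a' && r <= 'z' || r >= '0' && r <= '9') {
              return false
          }
      }
      return true
  }

  // TestValidUsername is the kind of table the LLM can fill in from the spec;
  // each case is cheap to eyeball against the spec even if you distrust the model.
  func TestValidUsername(t *testing.T) {
      cases := []struct {
          name  string
          input string
          want  bool
      }{
          {"minimum length", "abc", true},
          {"maximum length", "abcdefgh12345678", true},
          {"too short", "ab", false},
          {"too long", "abcdefgh123456789", false},
          {"uppercase rejected", "Abc", false},
          {"symbols rejected", "ab_c", false},
      }
      for _, c := range cases {
          if got := ValidUsername(c.input); got != c.want {
              t.Errorf("%s: ValidUsername(%q) = %v, want %v", c.name, c.input, got, c.want)
          }
      }
  }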

bastardoperator on 2024-04-29

I just asked it the same question and this was the answer it gave me:

go get go.dedis.ch/kyber/v3

LOL...

AlexCoventry on 2024-04-29

That's pointing to a fairly solid implementation, though (I've used it.) I would trust it way before I'd trust a de novo implementation from ChatGPT. The idea of people using cryptographic implementations written by current AI services is a bit terrifying.
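
For what it's worth, splitting and recovering a secret with that library instead of LLM-generated field arithmetic is only a few lines. A sketch, assuming the kyber v3 share API (check the package docs for the exact signatures):

  package main

  import (
      "fmt"

      "go.dedis.ch/kyber/v3/group/edwards25519"
      "go.dedis.ch/kyber/v3/share"
  )

  func main() {
      suite := edwards25519.NewBlakeSHA256Ed25519()
      n, t := 5, 3 // 5 shares, any 3 reconstruct

      // A random secret scalar; a real secret would be mapped into the field first.
      secret := suite.Scalar().Pick(suite.RandomStream())

      // Split: random polynomial of degree t-1 with the secret as its constant term.
      poly := share.NewPriPoly(suite, t, secret, suite.RandomStream())
      shares := poly.Shares(n)

      // Recover from any t shares via Lagrange interpolation.
      recovered, err := share.RecoverSecret(suite, shares[:t], t, n)
      if err != nil {
          panic(err)
      }
      fmt.Println("recovered == secret:", recovered.Equal(secret))
  }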

ajbt200128 on 2024-04-29

> Shamir Secret Sharing

> ChatGPT

please don't roll your own crypto, and PLEASE don't roll your own crypto from a LLM. They're useful for other kinds of programs, but crypto libraries need to be to spec, and heavily used and reviewed to not be actively harmful. Not sure ChatGPT can write constant time code :)

halfmatthalfcat on 2024-04-30

People always say this but how else are you going to learn? I doubt many of us who are "rolling our own crypto" are actually deploying it into mission critical contexts anyway.

Arainach on 2024-04-30

Asking an LLM to do something for you doesn't involve any learning at all.

halfmatthalfcat on 2024-04-30

I’m not talking about the LLM case, just the mantra of “don’t roll your own crypto” constantly. Comes off as unnecessarily gatekeepy.

rsynnott on 2024-04-30

I mean, by that, people don't generally mean, literally, "never write your own crypto". They just mean "on no account _use_ self-written crypto for anything".

whatever1 on 2024-04-29

That is my struggle as well. I need to keep pointing out issues with the LLM output, until after multiple iterations it may reach the correct answer. At that point I don't feel I gained anything productivity-wise.

Maybe the whole point of coding with llms in 2024 is for us to train their models.

freedomben on 2024-04-29

Indeed, and the more niche the use case, the worse it gets.

lostintangent on 2024-04-29

I can definitely echo the challenges of debugging non-trivial LLM apps, and making sure you have the right evals to validate progress. I spent many hours optimizing Copilot Workspace, and there is definitely both an art and a science to it :)

That said, I'm optimistic that tool builders can take on a lot of that responsibility, and create abstractions that allow developers to focus solely on their code, and the problem at hand.

singluere on 2024-04-29

For sure! As a user, I would love to have some sort of debugger-like behavior for debugging the LLM's output generation. Maybe some ability for the LLM to keep running some tests until they pass? That sort of stuff would make me want to try this :)
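
Something like that is buildable today with a fairly dumb loop. A sketch of the shape of it in Go, where askModel is a made-up stand-in for whatever LLM API you'd actually call:

  package main

  import (
      "fmt"
      "os"
      "os/exec"
  )

  // askModel is a made-up stand-in for an LLM API call: given the current code
  // and the failing test output, it returns a revised version of the code.
  func askModel(code, testOutput string) string {
      // ...call whatever model you use here...
      return code
  }

  func main() {
      raw, err := os.ReadFile("generated.go")
      if err != nil {
          panic(err)
      }
      current := string(raw)

      for attempt := 0; attempt < 5; attempt++ {
          // Write the candidate and run the test suite against it.
          if err := os.WriteFile("generated.go", []byte(current), 0o644); err != nil {
              panic(err)
          }
          out, err := exec.Command("go", "test", "./...").CombinedOutput()
          if err == nil {
              fmt.Println("tests pass after", attempt, "retries")
              return
          }
          // Feed the failure back and ask for another attempt.
          current = askModel(current, string(out))
      }
      fmt.Println("giving up; still failing")
  }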

slavoglinsky on 2024-04-30

See the Langtail app (I am not the maker).

idan on 2024-04-29

I'm sure we'll share some of the strategies we used here in upcoming talks. It's, uh, "nontrivial". And it's not just "what text do you stick in the prompt".

lenerdenator on 2024-04-30

Honestly, I've found using GH CoPilot chat to be the real value add. It's amazing for rubber ducking.

That being said, my employer pays for it. I am still on the fence about which LLM to subscribe to with my own money.

BoorishBears on 2024-04-29

Creating this using Open Source LLMs would be like saying you tried A5 Wagyu by going to Burger King, respectfully.

I think benchmarks are severely overselling what open source models are capable of compared to closed source models.

Zambyte on 2024-04-29

I really don't think they're being over sold that much. I'm running llama 3 8b on my machine, and it feels a lot like running claude 3 haiku with a much lower context window. Quality wise it is surprisingly nice.

BoorishBears on 2024-04-30

Llama 3 just came out so they couldn't have used it, and Claude Haiku is the smallest cheapest closed source model out there from what I've seen.

Github is likely using a GPT-4 class model which is two (massive) steps up in capabilities in Anthropic's offerings alone

Zambyte on 2024-04-30

Yeah I just mentioned Llama to point out that the open weight models have been really catching up.

Microsoft is almost certainly using GPT-4 given their relationship with ClosedAI, but I would definitely not put GPT-4 (nor Turbo) "two massive steps up" from Claude 3 Opus. I have access to both through Kagi, and I have found myself favoring the responses of Claude to the point where I almost never use GPT(TM) anymore.

BoorishBears on 2024-05-01

You're misreading in multiple ways, maybe in a rush to dunk on "Closed AI".

GitHub Copilot is not the same as Copilot Chat, which uses GPT-4; there's still some uncertainty about whether Copilot completions use GPT-4 as outsiders know it (and IIRC they've specifically said it doesn't at some point).

I also said Haiku is two massive steps behind Anthropic's offerings... which are Sonnet and Opus.

Anthropic isn't any more open than OpenAI, and I personally don't attribute any sort of virtue to any major corporation, so I'll take what works best

Zambyte on 2024-05-01

I... don't think I misread you? Maybe you didn't mean what you wrote, but what you said was:

> Github is likely using a GPT-4 class model which is two (massive) steps up in capabilities in Anthropic's offerings alone

Comparing GPT-4 to Anthropics offerings, which, as you say, includes Sonnet and Opus.

> Anthropic isn't any more open than OpenAI, [...] so I'll take what works best

I understand that, and same here. I don't prefer Claude for any reason other than the quality of its output. I just think OpenAI's name is goofy given how they actually behave, so I prefer the more accurate derivative of their name :)

Regarding what model Copilot Completions is using - point taken, I have no comment on that. My original comment in this thread was only meant to point out that open weight models are getting a lot better. Not saying they're using them.

BoorishBears on 2024-05-01

I used "in Anthropic's capabilities" intentionally: it's two steps up in what they can do from Claude Haiku

paradite on 2024-04-29

Locally running LLMs in Apr 2024 are nowhere close to GPT-4 in terms of coding capabilities.

specproc on 2024-04-29

Depends what for. I find AI tools best for boilerplate or as a substitute for Stack Overflow. For complex logic, even GPT-4 ends up sending me down the garden path more often than not.

I got Llama 3 8B downloaded over the weekend and it's alright. I haven't plugged it into VSCode yet, but I could see it (or code-specific derivatives) handling those first two use cases fine. I'd say close enough to be useful.

arvinsim on 2024-04-30

Agreed. You can even get specialized LLMs like DeepSeek Coder for better results.

015a on 2024-04-29

And GPT-4 is nowhere close to the human brain in terms of coding capabilities, and model advancements appear to be hitting an asymptote. So...

throwaway4aday on 2024-04-29

I don't see a flattening. I see a lot of other groups catching up to OpenAI and some even slightly surpassing them like Claude 3 Opus. I'm very interested in how Llama 3 400B turns out but my conservative prediction (backed by Meta's early evaluations) is that it will be at least as good as GPT 4. It's been a little over a year since GPT 4 was released to the public and in that time Meta and Anthropic seem to have caught up and Google would have too if they spent less time tying themselves up in knots. So OpenAI has a 1 year lead though they seem to have spent some of that time on making inference less expensive which is not a terrible choice. If they release 4.5 or 5 and it flops or isn't much better then maybe you are right but it's very premature to call the race now, maybe 2 years from now with little progress from anyone.

015a on 2024-04-30

I shouldn't have used the word asymptote; I should have said logarithmic. I don't doubt a best-case situation where we get a GPT-5, GPT-6, GPT-7, etc; each is more capable than the last; just that there will be more months between each, it'll be more expensive to train each, and the gain of function between each will be smaller than the previous.

Let me phrase this another way: Llama 3 400B releases and it has GPT-5 level performance. Obviously, we have not seen GPT-5, so we don't have a sense of what that level of performance looks like. It might be that OpenAI simply has a one year lead, but it might also be that all these frontier model developers are stuck in the same capability swamp, and we simply don't have the compute, virgin tokens, economic incentives, algorithms, etc. to push through it (yet). So, Meta pulls ahead, but we're talking about feet, not miles.

mountainriver on 2024-04-30

Maybe they know something about GPT5

studenthrow3831 on 2024-04-29

Student here: I legitimately cannot understand how senior developers can dismiss these LLM tools when they've gone from barely stringing together a TODO app to structuring and executing large-scale changes in entire repositories in 3 years. I'm not a singulatarian, but this seems like a brutal S-curve we're heading into. I also have a hard time believing that there is enough software need to make such an extreme productivity multiplier not be catastrophic to labor demand.

Are there any arguments that could seriously motivate me to continue with this career outside of just blind hope that it will be okay? I'm not a total doomer, currently 'hopium' works and I'm making progress, but I wish my hopes could at least be founded.

sixhobbits on 2024-04-29

Accountants thought spreadsheets would kill their profession, instead demand for them exploded.

Compilers made it much easier to code compared to writing everything in Assembly. Python made it much easier to code than writing C. Both increased the demand for coders.

Code is a liability, not an asset. The fact that less technical people and people who are not trained engineers can now make useful apps by generating millions of lines of code is also only going to increase the need for professional software engineers.

If you're doing an HTML or even React boot camp, I think you'd be right to be a bit concerned about your future.

If you're studying algorithms and data structures and engineering best practices, I doubt you have anything to worry about.

lelandbatey on 2024-04-29

I've seen it already. A small business owner (one-man show) friend of mine with zero developer experience was able to solve his problem (very custom business-specific data -> calendar management) in a rough way using ChatGPT. But it got past about 300 lines and really started to get bad. He'd put dozens of weekend hours into getting it where it was by using ChatGPT over and over, but eventually it stopped being able to make the highly specific changes he needed. He came to me for some help and I was able to work through it a bit as a friend, but the code quality was bad and I had to say "to really do this, I'd need to consult, and there's probably a better person to hire for that."

He's muddling along, but now that he's getting value out of it he's looking for low-cost devs to contract with on it. And I suspect that kind of story will continue quite a bit as the tech matures.

imiric on 2024-04-29

> And I suspect that kind of story will continue quite a bit as the tech matures.

Don't you think that this tech can only get better? And that there will come a time in the very near future when the programming capabilities of AI improve substantially over what they are now? After all, AI writing 300 line programs was unheard of a mere 2 years ago.

This is what I think GP is ignoring. Spreadsheets couldn't do every task an accountant can do, so they augmented their capabilities. Compilers don't have the capability to write code from scratch, and Python doesn't write itself either.

But AI will continually improve, and spread to more areas that software engineers were trained on. At first this will seem empowering, as it aids us in writing small chunks of code, or code that can be easily generated like tests, which it already does. Then this will expand to writing even more code, improving accuracy, debugging, refactoring, reasoning, and in general being a better programming assistant for business owners like your friend than any human would be.

The concerning thing is that this isn't happening on timescales of decades, but years and months. Unlike GP, I don't think software engineers will exist as they do today in a decade or two. Everyone will need to become a machine learning engineer and work directly on training and tweaking the work of AI; then, once AI can improve itself, it will become self-sufficient, and humans will only program as a hobby. Humans will likely be forbidden from writing mission-critical software in the health, government and transport industries. Hardware engineers might be safe for a while after that, but not for long either.

lispisok on 2024-04-29

You are extrapolating from when we saw huge improvements 1-2 years ago. Performance improvements have flatlined. Current AI predictions reminds me of self-driving car hype from the mid 2010s

imiric on 2024-04-29

> Performance improvements have flatlined.

Multimodality, MoE, RAG, open source models, and robotics have all seen massive improvements in the past year alone. OpenAI's Sora is a multi-generational leap over anything we've seen before (not released yet, granted, but it's a real product). This is hardly flatlining.

I'm not even in the AI field, but I'm sure someone can provide more examples.

> Current AI predictions reminds me of self-driving car hype from the mid 2010s

Ironically, Waymo's self-driving taxis were launched in several cities in 2023. Does this count?

I can see AI skepticism is as strong as ever, even amidst clear breakthroughs.

lispisok on 2024-04-29

You make claims of massive improvements, but as an end user I have not experienced them. With the amount of fake and cherrypicked demos in the AI space I don't believe anything until I experience it myself.

>Ironically, Waymo's self-driving taxis were launched in several cities in 2023. Does this count?

No because usage is limited to a tiny fraction of drive-able space. More cherrypicking.

imiric on 2024-04-30

Just because you haven't used text generation with practically unlimited context windows, insight extraction from personal data, massively improved text-to-image, image-to-image and video generation tools, and ridden in an autonomous vehicle, doesn't mean that the field has stagnated.

You're purposefully ignoring progress, and gating it behind some arbitrary ideals. That doesn't make your claims true.

sussmannbaka on 2024-04-30

No. The progress is not being ignored. Normal people just have a hard time getting excited for something that is not useful yet. What you are doing here is the equivalent of popular science articles about exciting new battery tech - as long as it doesn’t improve my battery life, I don’t care. I will care once it hits the shelves and is useful to me, I do not care about your list of acronyms.

imiric on 2024-04-30

I was arguing against the claim that progress has flatlined, and when I gave concrete examples of recent developments that millions of people are using today, you've now shifted the goalpost to "normal" people being excited about it.

But sure, please tell me more about how AI is a fad.

sussmannbaka on 2024-05-01

You are seeing enemies where there are none. I am merely commenting on AI evangelists insisting I have to be psyched about every paper and poc long before it ever turns into a useable product that impacts my life. I don’t care about the internals of your field, nobody does. Achieve results and I will gladly use them.

hanniabu on 2024-04-30

We've entered the acceleration age

layer8 on 2024-04-29

> But AI will continually improve

There is a bit of a fallacy in here. We don’t know how far it will improve, and in what ways. Progress isn’t continuous and linear, it comes more in sudden jumps and phases, and often plateaus for quite a while.

imiric on 2024-04-29

Fair enough. But it's just as much of a fallacy to assert that it won't improve, no?

The rate of improvement in the last 5 years hasn't stopped, and in fact has accelerated in the last two. There is some concern that it's slowing down as of 2024, but there is such a historically high amount of interest, research, development and investment pouring into the field that it's more reasonable to expect further breakthroughs than not.

If nothing else, we haven't exhausted the improvements from just throwing more compute at existing approaches, so even if the field remains frozen, we are likely to see a few more generational leaps still.

srcreigh on 2024-04-29

AI can’t write its own prompts. 10k people using the same prompt who actually need 5000 different things.

No improvements to AI will let it read vague speakers’ minds. No improvement to AI will let it get answers it needs if people don’t know how to answer the necessary questions.

Information has to come from somewhere to differentiate 1 prompt into 5000 different responses. If it’s not coming from the people using the AI, where else can it possibly come from?

If people using the tool don’t know how to be specific enough to get what they want, the tool won’t replace people.

s/the tool/spreadsheets

s/the tool/databases

s/the tool/React

s/the tool/low code

s/the tool/LLMs

imiric on 2024-04-29

> AI can’t write its own prompts.

What makes you say that? One model can write the prompts of another, and we have seen approaches combining multiple models, and models that can evaluate the result of a prompt and retry with a different one.

> No improvements to AI will let it read vague speakers’ minds. No improvement to AI will let it get answers it needs if people don’t know how to answer the necessary questions.

No, but it can certainly produce output until the human decides it's acceptable. Humans don't need to give precise guidance, or answer technical questions. They just need to judge the output.

I do agree that humans currently still need to be in the loop as a primary data source, and validators of the output. But there's no theoretical reason AI, or a combination of AIs, couldn't do this in the future. Especially once we move from text as the primary I/O mechanism.

heavyset_go on 2024-04-29

I agree with your point, just want to point out that models have been trained on AI generated prompts as synthetic data.

faeriechangling on 2024-04-30

I predict the rate of progress in LLMs will diminish over time, whereas the difficulty of an LLM writing an accurate computer program will go up exponentially with complexity and size. Is an LLM ever going to be able to do what, say, Linus Torvalds did? Heck, I've seen much less sophisticated software projects than that which are hard to imagine an LLM doing.

On the lower end, while Joe Average is going to be able to solve a lot of problems with an LLM, I expect more bugs will exist than ever before because more software will be written, and that might end up not being all that terrible for software developers.

throwaway4aday on 2024-04-29

I don't know about the rest of the developers in the world but my dream come true would be a computer that can write all the code for me. I have piles of notebooks and files absolutely stuffed with ideas I'd like to try out but being a single, measly human programmer I can only work on one at a time and it takes a long time to see each through. If I could get a prototype in 30 seconds that I could play with and then have the machine iterate on it if it showed promise I could ship a dozen projects a month. It's like Frank Zappa said "So many books, so little time."

gls2ro on 2024-04-30

If that turns out to be the case, then in a finite and small amount of time all your ideas will already have a wide range of implementations/variations, because everybody will do the same as you.

It's like how LLMs are on the way to taking over (or destroying) content on the web, and will take over posts on social media, letting anyone create anything so fast that the incentive to put manual labor into a piece of content becomes irrelevant in some ways. You work for days to write a blog post and publish it, and in the same time thousands of blog posts are published along with yours, fighting for the attention of the same audience, who might just stop reading completely because there is so much similar content.

throwaway4aday on 2024-04-30

Honestly, I don't care if other people are creating similar things to what I am. Actually, I prefer if there are more people working on the same things, because it means there are other people I can talk to about those things and collaborate with. Even if I don't want to work with others on a project, I'm not discouraged by other implementations existing; there's always something that I would want to do differently from what's out there. That's the whole point of building my own things, after all; if I were happy with using whatever bog-standard app I could find on the web, then why would I need to build it? It isn't just about making it my own either, it's also about the fun of diving into the guts of a system and seeing how things work. Having a machine capable of producing that code gives me the fantastic ability to choose only the parts I'm interested in for a deep dive, while skipping the boring stuff I've done 1000x before, and I don't have to type the code if I don't want to; I can just talk to the computer about the implementation details and explore various options with it. That in itself is worth the time spent on it, but the awesome side effect is you get a new toy to play with too.

rurp on 2024-04-29

This makes sense to me. When updating anything beyond a small project, keeping things reliable and maintainable for the future is 10x more important than just solving the immediate bug or feature. Short term hacks add up and become overwhelming at some point, even if each individual change seems manageable at the time.

I used to work as a solo contractor on small/early projects. The most common job opportunity I encountered was someone who had hired the cheapest offshore devs they could find, seen good early progress with demos and POCs, but over time things kept slowing down and eventually went off the rails. The codebases were invariably a mess of hacks and spaghetti code.

I think the best historical comp to LLMs is offshore outsourcing, except without the side effect of lifting millions of people out of poverty in the third world.

faeriechangling on 2024-04-30

A large amount of the problems in the world don't require computer programs over a few hundred lines long to solve, so LLMs will still see use by DIY types.

People may underestimate how difficult it is for an LLM to write a long or complex computer program, though. It makes sense that LLMs do very well at pumping out boilerplate, leetcode answers, and trivial programs, but it doesn't necessarily follow that they would be that good at writing complex, sophisticated, unique custom software. It may in fact be much further away from doing that than a lot of people anticipate, in a "self-driving is just around the corner" kind of way.

homarp on 2024-04-29

> as the tech matures.

Then it means you can use the matured tech and build in one day a superb service. And improve it the next day.

api on 2024-04-29

This is what I've been predicting for over a year: AI-assisted programming will increase demand for programmers.

It may well change how they do their work though, just like spreadsheets did for accountants and compilers did for the earliest generation of hand-code-in-ASM developers. I can imagine a future where we do most of our coding at an even higher level than today and only dive down into the minutiae when the AI isn't good enough or we need to fix/optimize something. The same is true for ASM today-- people rarely touch it unless they need to debug a compiler or (more often) to write something extremely optimized or using some CPU-specific feature.

Programming may become more about higher level reasoning than coding lower level algorithms, unless you're doing something really hard or demanding.

atq2119 on 2024-04-30

A crucial difference to the other examples is this. Compilers and spreadsheets are deterministic and repeatable, and are designed to solve a very specific task correctly.

LLMs, certainly in their current form, aren't.

This doesn't necessarily contradict what you and GP are writing, but it does give a flavor to it that I expect to be important.

idan on 2024-04-29

Exactly this.

When was the last time you wrote assembler?

skydhash on 2024-04-29

People also forget that coding is formal logic that describes algorithms to a computer, which is just a machine. And because it's formal, it's rigid and not prone to manipulation. Instead of using LLMs you'd be better off studying a book and adding some snippets to your editor. What I like about Laravel is their extensive use of generators. They know that part of the code will be boilerplate. The nice thing about Common Lisp is that you can make the language itself generate the boilerplate.
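
The same idea exists in Go with go generate; a tiny sketch, assuming the standard stringer tool from golang.org/x/tools is installed:

  package task

  //go:generate stringer -type=Status

  // Status is an enum whose String() method is pure boilerplate, so a
  // generator writes it instead of a human (or an LLM).
  type Status int

  const (
      Pending Status = iota
      Running
      Done
  )

Running go generate ./... then emits status_string.go with the String() method, so the boilerplate never has to be typed by hand.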

Phurist on 2024-04-29

You start by talking about apples and finish talking about cars.

hackermatic on 2024-04-29

Could you explain what you mean by that idiom?

WorldMaker on 2024-04-30

A way that I like to describe something like this is that code is long form poetry with two intended audiences: your most literally minded friends (machines), and those friends that need and want a full story told with a recognizable beginning/middle/end (your fellow developers, yourself in the future). LLMs and boilerplate generators (and linters and so many other tools) are great about the mechanics of the poetry forms (keeping you to the right "meter", checking your "rhymes" for you, formatting your whitespace around the poem, naming conventions) but for the most part they can't tell your poem's story for you. That's the abstract thing of allegory and metaphor and simile (data structures and the semantics behind names and the significance of architecture structures, etc) that is likely going to remain the unique skill set of good programmers.

faeriechangling on 2024-04-30

Hard to know without the benefit of hindsight if a productivity improvement is:

1: An ATM machine - which made banks more profitable, so banks opened up more of them and drew people into the bank with the machines, then sold them insurance and investments.

2: Online banking - which simply obsoleted the need to go to the bank at all.

My inclination that LLMs are the former, not the latter. I think the process of coding is an impediment to software development being financially viable, not job security.

mlhpdx on 2024-04-29

Well said. Assistive technology is great when it helps developers write the "right" code, and tremendously destructive to company value otherwise.

thallavajhula on 2024-04-29

This is such a well written, thoughtful, and succinct comment. It is people like you and input like this that make HN such a wonderful place. Had OP (of the comment you responded to) posted this on Twitter or Reddit, they would probably have been flooded with FUD-filled nonsense.

This is what the newcomers need. I've been saying something similar to new Software Engineers over the past couple of years and could never put something in a way you did.

Every single sentence is so insightful and to the point. I love it. Thank you so much for this.

sqeaky on 2024-04-29

I strongly agree with your points and sentiment, as long as the state of machine intelligence remains non-general.

Currently, LLMs summarize, earlier systems classified, and a new system might do some other narrow piece of intelligence. If a system is created that thinks and understands and is creative with philosophy and ideas, that is going to be different. I don't know if that is tomorrow or 100 years from now, but that is going to be very different.

packetlost on 2024-04-29

The hardest part of software development is not writing code, full stop. It never has been and it never will be. The hard part is designing, understanding, verifying, and repairing complex systems. LLMs do not do this, even a little bit.

studenthrow3831 on 2024-04-29

I guess my worst fear is not "no more jobs because AI can code" but "no more junior jobs because AI can code under the supervision of a senior". SWE jobs will exist, but only seniors will have them and juniors are never hired. Maybe the occasional "apprentice" will be brought on, but in nowhere near the same amount.

Where my blind hope lies more specifically is in networking into one of those "apprentice" roles, or maybe a third tech Cambrian explosion enabled by AI allows me to find work in a new startup. I don't want to give up just yet.

SCUSKU on 2024-04-29

If you're a junior looking for a job, it's always a tough time. Getting your first gig is insanely brutal (college career fairs help a lot). That said, I wouldn't give up and blame AI for "taking our jerbs". I would say the current macroeconomic conditions with higher interest rates have reduced the amount of developer headcount companies can support. AKA companies are risk averse right now, and juniors are a risk (albeit a relatively low cost).

If I were in your shoes, I would just stop consuming the doom and gloom AI content, and go heads down and learn to build things that others will find useful. Most importantly, you should be having fun. If you do that you'll learn how to learn, have fun, build a portfolio, and generally just be setting yourself up to succeed.

heavyset_go on 2024-04-29

> I guess my worst fear is not "no more jobs because AI can code" but "no more junior jobs because AI can code under the supervision of a senior"

You're posting under a thread where many seniors are discussing how they don't want this because it doesn't work.

You cannot make a model understand anything. You can help a person understand something. You can accomplish that with a simple conversation with a junior engineer.

I will never make GPT-4 or whatever understand what I want. It will always respond with a simulacrum that looks and sounds like it gets what I'm saying, but it fundamentally doesn't, and when you're trying to get work done, that can range from being annoying to being a liability.

crooked-v on 2024-04-29

On that note, for anyone who hasn't run into the "looks like it understands, but doesn't" issue, here's a simple test case to try it out: Ask ChatGPT to tell you the heights of two celebrities, and ask it which would be taller if they were side-by-side. Regenerate the response a few times and you'll get responses where it clearly "knows" the heights, but also obviously doesn't understand how a taller-shorter comparison works.

mnk47 on 2024-04-30

>You're posting under a thread where many seniors are discussing how they don't want this because it doesn't work.

Many artists and illustrators thought AI art would never threaten their livelihood because it did not understand form, it completely messed up perspective, it could never draw hands, etc. Look at the state of their industry now. It still doesn't "understand" hands but it can sure as hell draw them. We're even getting video generation that understands object permanence, something that didn't seem possible just over a year ago when the best we got were terrible low quality noisy GIFs with wild inconsistencies.

Many translators thought AI would never replace them, and then Duolingo fired their entire translation team.

I'm sure that GP isn't worried about being replaced by GPT-4. They're worried about having to compete with a potentially much better GPT-5 or 6 by the time they graduate.

JTyQZSnP3cQGa8B on 2024-04-29

IMHO juniors who rely on AI to write code will never learn. You need to make mistakes to learn, and AI never makes mistakes even when it’s wrong.

As a senior, I write code 25% of the time, and it’s always to understand the intent of what I should fix or develop. This is something that AI will not be able to do for a long time since it cannot speak and understand what customers want.

The other 75% of my time is spent refactoring this "intent" or making sure that the business is running, and I'm accountable for it. AI won't be accountable for anything, again for a long time.

I’m scared for juniors that don’t want to learn, but I work with juniors who outsmart me with their knowledge and curiosity.

packetlost on 2024-04-29

Unless there's some major breakthrough and AIs are able to gain judgement and reasoning capabilities, I don't think they'll be taking junior jobs any time soon.

layer8 on 2024-04-29

Seniors don’t grow on trees, and they all were juniors at some point. And juniors won’t become seniors by only typing AI chat prompts. I wouldn’t fear.

0x457 on 2024-04-29

That requires seniors to adopt these AI tools. So far, I only see juniors going hard on these tools.

erksa on 2024-04-29

You can not have seniors without juniors.

Interacting with computers (and therefore creating software) will probably soon detach itself from the idea of single chars and the traditional QWERTY keyboard.

Computing is entering a fascinating phase, I'd stick around for it.

vineyardlabs on 2024-04-29

A tough pill to swallow that I think a lot of students and very junior engineers fail to realize is that bringing on a new grad and/or someone very junior is quite often a drain on productivity for a good while. Maybe ~6 months for the "average" new grad. Maybe AI exacerbates that timeline somewhat, but engineering teams have always hired new grads with the implicit notion that they're hiring to have a productive team member 6 months to a year down the line, not day one.

sqeaky on 2024-04-29

Might I suggest Andrei Alexandrescu's CppCon 2023 Closing Keynote talk: https://www.youtube.com/watch?v=J48YTbdJNNc

The C++ standard library has a number of generic algorithms that are very efficient, like decades of careful revision from the C++ community efficient. With the help of ChatGPT Andrei makes major improvements to a few of them. At least right now these machines have a truly impressive ability to summarize large amounts of data but not creativity or judgement. He digs into how he did it, and what he thinks will happen.

He isn't fearmongering he is just one coder producing results. He does lay out some concerns, but at least for the moment the industry needs junior devs.

idan on 2024-04-29

How many students / people early in career would benefit from having something to help them explore ideas?

How many don't have the advantages I had, of a four-year university, with professors and TAs and peers to help me stumble through something tricky?

How many have questions they feel embarrassed to ask their peers and mentors because they might make them look stupid?

Don't give up. This is a generational opportunity to lift up new developers. It's not perfect (nothing is). But if we sweat hard enough to make it good, then it is our chance to make a dent in the "why are there not more ______ people in tech" problem.

karmajunkie on 2024-04-30

I dropped out of school and went into startups with my first full-time gig in March, 2000. Managed to make that one last a few years, but whoooo boy that was a tough time to be a junior-to-mid developer looking for a job. I even went back to school with plans to go to medical school (yet I'm still a developer 20 years later.)

Being a junior is rough, landing those first few gigs, no doubt about it. It didn't get any better with the advent of code schools, which pretty much saturated the entry level market. But, if you stick it out long enough and keep working on learning, you'll acquire enough skills or network to land that first gig and build from there.

I wouldn't freak out about AIs—they're not going to take all the jobs. They're a tool (and a good one, sometimes.) Learn to use it that way. Learning a good tool can easily accelerate your personal development. Use it to understand by asking it to summarize unfamiliar code, to point you in the right direction when you're writing your own code, but don't have it write code you don't understand (and probably can't, because it doesn't work as written.)

Give it a few years, things will generally work out. Make a plan to be resilient in the meantime and keep learning and you'll be fine.

screye on 2024-04-30

LLMs remove the most annoying bits, making it possible for 1 person to do 3 people's work. LLMs are also good at fixing minor bugs. So version upgrades, minor maintenance and finding the right API handshakes will soon be doable by big LLMs without user supervision. Lastly, LLMs are accelerating existing tailwinds towards software commoditization. If an LLM can create a good-enough website on a low-code platform, how likely are you to hire a front-end engineer for the last 10% of excellence?

Think about how many jobs are 'build a website', 'build an app' or 'manage this integration' style roles. They are all at risk of being replaced.

> hardest part of software development is not writing code

I agree, but you have to write a lot of code before you become good enough to think that clearly. If juniors don't get the opportunity to work their way up to senior, then they might just never pick up the right skills. What's more likely is that CS education will undergo drastic changes, and a master's/specialization might become more of a degree requirement. But those already on the market are in for a big shock.

sqeaky on 2024-04-29

I like ThePrimeagen's examples for the simple stuff. On stream he fires up an editor and Copilot, then writes the function signature for quicksort. Copilot gets it wrong: it creates a sort function, but one worse than quicksort, though not as bad as bubble sort.
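
For reference, what that signature should produce is only a handful of lines: a bog-standard in-place quicksort (Go here purely for illustration):

  package sorting

  // QuickSort sorts xs in place: partition around a pivot, then recurse on each
  // side. Average O(n log n); the naive last-element pivot degrades to O(n^2)
  // on already-sorted input.
  func QuickSort(xs []int) {
      if len(xs) < 2 {
          return
      }
      pivot := xs[len(xs)-1]
      i := 0
      for j := 0; j < len(xs)-1; j++ {
          if xs[j] < pivot {
              xs[i], xs[j] = xs[j], xs[i]
              i++
          }
      }
      xs[i], xs[len(xs)-1] = xs[len(xs)-1], xs[i]
      QuickSort(xs[:i])
      QuickSort(xs[i+1:])
  }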

These LLMs will get better. But today they are just summarizing. They screw up fairly simple tasks in fairly obvious ways right now. We don't know if they will do better tomorrow or in 20 years. I would wager it will be just a few years, but we have code that needs to be written today.

LLMs are great for students because they are often motivated and lacking broad experience, and a summarizer will take such a person very far.

jazzyjackson on 2024-04-29

can't wait until they're good enough to screw up complex tasks in subtle ways after undercutting the pay of junior developers such that no one is studying how to program computers anymore

throwup238 on 2024-04-29

In the future there won't be a Spacer Guild but a Greybeard Guild that mutates greybeard developers until they're immortal and forces them to sit at a terminal manipulating ancient Cobol, Javascript, and Go for all eternity, maintaining the software underpinnings of our civilization.

The electrons must flow.

idan on 2024-04-29

Absolutely. Copilot Workspace might not seem like it, but it's very much our first step towards tools to aid in comprehension and navigation of a codebase. I think a lot of folks have conflated "generative AI" with "writes code" when reading and understanding is a much larger part of the job

cedws on 2024-04-29

Less time spent writing code is more time you can spend thinking about those hard parts, no?

packetlost on 2024-04-29

Yes? But it's commonly understood that reading code is harder than writing code. So why force yourself into a reading-mostly position when you don't have to? You're more likely to get it wrong.

There are other ways to decrease typing time.

theshrike79 on 2024-04-29

It's not harder unless you write hard to read code.

> “Indeed, the ratio of time spent reading versus writing is well over 10 to 1. We are constantly reading old code as part of the effort to write new code. ...[Therefore,] making it easy to read makes it easier to write.” - Robert C. Martin in Clean Code

LLMs make exceptionally clean code in my opinion. They don't try to be fancy or "elegant", they just spit out basic statements that sometimes (or most of the time) do what you need.

Then you _read_ what it suggests; with a skilled eye you can pretty much glance at it, see if it looks good, and test it.

cedws on 2024-04-29

>it's commonly understood that reading code is harder than writing code

I don't know about that. Maybe for kernel code or a codec. But I think most people could read (and understand) a 100 line class for a CRUD backend faster than they could write one.

skydhash on 2024-04-29

  php artisan make:controller
There are code generators. Even when dealing with another language's libraries, I mostly copy-paste previous implementations, and editing with Vim motions makes it faster.

jondwillis on 2024-04-29

Luckily, we have perfectly behaving, feature-rich software for every complex system already. /s

bossyTeacher on 2024-04-29

I was looking for something like this. The only thing that might change is some of your toolset, but LLMs won't change the nature of the job (which is what people seem to be thinking about).

kylestlb on 2024-04-29

or getting engineers to communicate properly with each other :)

jonahx on 2024-04-29

Serious answer to a legitimate question:

1. Good senior developers are taking the tools seriously, and at least experimenting with them to see what's up. Don't listen to people dismissing them outright. Skepticism and caution are warranted, but dismissal is foolish.

2. I'd summarize the current state of affairs as having access to an amazing assistant that is essentially a much better and faster version of google and StackOverflow combined, which can also often write code for well specified problems. From what I have seen the current capabilities are very far from "specify high-level business requirements, get full, production app". So while your concern is rational, let's not exaggerate where we actually are.

3. These things make logical errors all the time, and (not an expert) my understanding is that we don't, at present, have a clear path to solving this problem. My guess is that until this is solved almost completely human programmers will remain valuable.

Will that problem get solved in the next 5 years, or 10, or 20? That's the million dollar question, and the career bet you'll be making. Nobody can answer with certainty. My best guess is that it's still a good career bet, especially if you are willing to adapt as your career progresses. But adapting has always been required. The true doom scenario of business people firing all or most of the programmers and using the AI directly is (imo) unlikely to come to pass in the next decade, and perhaps much longer.

packetlost on 2024-04-29

I think latency is the biggest reason I killed my Copilot sub after the first month. It was fine at doing busy-work, with maybe a ~40% success rate for very, very standard stuff, which is a net win of like... 3-5%. If it was local and nearly instant, I'd never turn it off. Bonus points if I could restrict the output to finishing the expression and nothing more. The success rate beyond finishing the first line drops dramatically.

freedomben on 2024-04-29

Interesting, latency has always been great for me. If it's going to work, it usually has suggestions within a second or two. I use the neovim plugin though so not on the typical VS-code based path.

packetlost on 2024-04-30

I also used the neovim plugin. I'm on a fiber connection in the Midwest, so that is likely a factor. Latency was on the order of 2-5s consistently, which is way more than enough to interrupt my flow.

freedomben on 2024-04-30

Interesting indeed! Do you normally experience high latencies >1s from your connection or is copilot more of an outlier? I have noticed that when I travel to the midwest I will get latencies around 70 to 90 ms rather than my current 30 ms, but It's not something I really notice too much, though that tends to be in major cities.

packetlost on 2024-04-30

Most people in the midwest likely have cable or DSL internet which adds 30-50ms~ of latency out of the gate. My high speed business fiber connection to my apartment gets 28ms to github.com and 36ms to api.github.com, so it's probably not that, though it depends on where the Copilot datacenters are.

incorrecthorse on 2024-04-29

> they've gone from barely stringing together a TODO app to structuring and executing large-scale changes in entire repositories in 3 years.

No they didn't. They're still at the step of barely stringing together a TODO app, and mostly because it's as simple as copying the gazillionth TODO app from GitHub.

coffeebeqn on 2024-04-30

I’ve used copilot recently in my work codebase and it absolutely has no idea what’s going on in the codebase. At best it’ll look at the currently open file. Half the time it can’t seem to comprehend even the current file fully. I’d be happy if it was better but it’s simply not.

I did use ChatGPT, most recently today, to build me a GitHub Actions YAML file based on my spec, and it saved me days of work. Not perfect, but close enough that I can fill in some details and be done. So sometimes it's a good tool. It's also an excellent rubber duck, often better than most of my coworkers. I don't really know how to extrapolate what it'll be in the future. I would guess we hit some kind of a limit that will be tricky to get past, because nothing scales forever.

amiantos on 2024-04-29

I've been working on the same product for 6 years now, on a team of people who have been working on it for 8 years. It is ridiculous how hard it is for us to design a new feature for the product that doesn't end up requiring the entirety of the deep knowledge we have about how existing features in the product interact with each other. That's not even getting into the complex interplay between the client and the backend API, and differences in how features are described and behave between the client and server. It's, in some ways, a relatively straightforward product. But the devil is in the details, and I find it hard to believe LLMs will be able to build and possess that kind of knowledge. The human brain is hard to beat deep down, and I think it betrays a pessimism about our overall capability when people think LLMs can really replace all the things we do.

a_t48 on 2024-04-29

For adding and refactoring it can be a great tool. For greenfield development, it's more tricky - yesterday I sat down and started writing something new, with no context to give Copilot. Mid sourcefile, I paused to think about what I wanted to write - it spit out three dozen lines of code that I then had to evaluate for correctness and just ended up throwing away. I could have probably helped the process by writing docs first, but I'm a code first, docs second kind of guy. Totally sold on LLM written unit tests though, they are a drag to write and I do save time not writing them by hand.

It's going to be a bit before LLMs can make an app or library that meets all requirements, is scalable, is secure, handles dependencies correctly, etc, etc. Having an LLM generate a project and having a human check it over and push it in the right direction is not going to be cheaper than just having a senior engineer write it in the first place, for a while. (I could be off-base here - LLMs are getting better and better)

I'm not worried about being replaced; my bigger worry is the bottom end falling out of the engineering market in the meantime. I'm worried about students learning to program now becoming completely dependent on LLMs, never learning how to build things without them, and not knowing the context behind what the LLM is putting out - there's definitely a local maximum there. A whole new "expert beginner" trap.

idan on 2024-04-29

So, part of the trickiness here is that there's a few different moving pieces that have to cooperate for success to happen.

There needs to be a great UX to elicit context from the human. For anything larger than trivial tasks, expecting the AI to read our minds is not a fruitful strategy.

Then there needs to be steerability — it's not just enough to get the human to cough up context; you have to get the human to correct the model's understanding of the current state and the job to be done. How do you do that in a way that feels natural?

Finally, all this needs to be defensive against model misses — what happens when the suggestion is wrong? Sure, in the future the models will be better and correct more often. But right now, we need to design for fallibility, and make it cheap to ignore the suggestion when it's wrong.

All of those together add up to a complex challenge that has nothing to do with the prompting, the backend, the model, etc. Figuring out a good UX is EXACTLY how we make it a useful tool — because in our experience, the better a job we do at capturing context and making it steerable, the more it integrates the thinking you stopped to do, thinking that a rigorous UX should have prompted in the first place.

a_t48 on 2024-04-29

Yeah to be clear I think Copilot Workspace is a great start. I wonder if the future is multi-modal though. Ignoring how obnoxious it would be to anyone near me, I could foresee narrating my stream of thoughts to the mic while using the keyboard to actually write code. It would still depend on me being able to accurately describe what I want, but it might free me from having to context switch to writing docs to hint the LLM.

idan on 2024-04-29

I mean we explored that a little with Copilot Voice :D https://githubnext.com/projects/copilot-voice/

But yeah, the important part is capturing your intent, regardless of modality. We're very excited about vision, in particular. Say you paste a screenshot or a sketch into your issue...

heavyset_go on 2024-04-29

> Totally sold on LLM written unit tests though, they are a drag to write and I do save time not writing them by hand.

This is where I've landed, but I'm also skeptical of totally relying on them for this.

In my personal experience, it's worked out, but I can also see this resulting in tests that look correct but aren't, especially when the tests require problem domain knowledge.

Bad tests could introduce bugs and waste time in a roundabout way that's similar to just using LLMs for the code itself.
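
(For illustration, a minimal hypothetical sketch of a test that "looks correct but isn't": assume the domain rule is that invoice totals round half up, while Python's built-in round() uses banker's rounding; the function and test names below are invented for this example.)

    # Hypothetical: the business rule is "round invoice totals half up",
    # but the implementation and the generated test both quietly assume
    # Python's default banker's rounding instead.
    def round_invoice_total(amount: float) -> int:
        return round(amount)  # banker's rounding: round(2.5) == 2

    def test_round_invoice_total():
        # Reads as plausible and passes under pytest, but it locks in
        # the wrong domain behaviour: the business expects 2.5 -> 3.
        assert round_invoice_total(2.5) == 2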

sensanaty on 2024-04-29

I don't even trust AI for tests, except for generating test cases, but even then it usually does something idiotic and I have to think up a bunch of other test cases anyways

idle_zealot on 2024-04-29

In my experience these tools continue to be really bad at actually translating business needs, that is, specific logic for handling/processing information, specific desired behaviors. They're good at retrieving general patterns and templates, and at transforming existing code in simple ways. My theory is that translating the idea of how some software is supposed to function into working code requires a robust world model and a capacity for deep thought and planning. Certainly we will create AI capable of these things eventually, but when that happens "did I make a mistake in studying computer science?" will not be your greatest concern.

unregistereddev on 2024-04-29

Staff engineer here. First started writing code roughly 24 years ago, been doing it professionally in one form or another for about 18 years. Many years ago I was asked how I planned to compete - because all IT was being offshored to other countries who worked cheaper and had more training in math than I did.

I've been asked whether no-code platforms would make us obsolete. I've wondered if quantum computing would make everything we know become obsolete. Now people are wondering whether LLM tools will make us obsolete.

All these things make us more productive. Right now I'm excited by AI tools that are integrated into my IDE and offer to finish my thoughts with a stroke of the 'Tab' key. I'm also very underwhelmed by the AI tools that try to implement the entire project. You seem to be talking about the latter. For the type of coding exercises we do for fun (test drive an implementation of Conway's Game of Life), LLMs are good at them and are going to get better. For the type of coding exercises we do for pay (build a CRUD API), LLMs are mediocre at them. They can give you a starting point, but you're going to do a lot of fiddling to get the schema and business logic right. For the type of coding exercises we do for a lot of pay (build something to solve a new problem in a new way), LLMs are pretty terrible. Without an existing body of work to draw from, they produce code that is either very wrong or subtly flawed in ways that are difficult to detect unless you are an expert in the field.

Right now, they are best used as a productivity enhancer that's autocomplete on steroids. Down the road they'll continue to offer better productivity improvements. But it's very unlikely they will ever (or at least in our lifetimes) entirely replace a smart developer who is an expert in their field. Companies know that the only way to create experts is to maintain a talent pipeline and keep training junior developers in hope that they become experts.

Software development has continued to grow faster than we can find talent. There's currently no indication of LLMs closing that gap.

qwertox on 2024-04-29

Don't forget that this is marketing.

"You'll be the best cook if you buy the Mega Master Automated Kitchen Appliance (with two knives included)"

That line is marketed at me, who does not know how to cook; they're telling me I'll be almost a chef.

You'll hear Jensen say that coding is now an obsolete skill, because he's marketing the capabilities of his products to shareholders, to the press.

It might well be that in 10 years these LLMs are capable of doing really serious stuff, but if you're studying CS now, this would mean for you that in 10 years you'll be able to use these tools much better than someone who will just play with it. You'll really be able to make them work for you.

Yeul on 2024-04-29

All the famous chefs didn't become famous from their cooking. They became famous because of their charisma. Jamie Oliver looked really good on camera.

AI will never be able to bullshit the way humans can.

fragmede on 2024-04-30

LLMs bullshit, or hallucinate, or lie, or confabulate all day long.

999900000999 on 2024-04-29

Simple.

It's the 90%, 10% theory.

LLMs will do the 90% that is easy; the final 10% they'll get wrong and will insist on their solutions being correct.

If anything this is horrible for junior level developers. A senior dev now has a restless junior developer at their whim.

As far as your own career, I'd argue to finish your degree, but be aware things are about to get really rough. Companies don't like headcount. Even if it's not true today, in the future AI + 1 senior engineer will be faster than 4 juniors + 1 senior.

michaeljx on 2024-04-29

That's because we've been here before. Be it the ERPs of the 90s-00s, the low/no-code tools of the 2010s, the SaaS and the chatbots of 2015. There was always hype about automating the job. At the end of the day, most of a programmer's job is understanding the business domain, its differences and edge cases, and translating those into code. An LLM can do the latter part, the same way a compiler can turn high-level Java into assembly.

idan on 2024-04-29

I mean, I don't disagree!

The leading factor in whether these tools successfully get you to (or near) the goal is how clearly you articulate the domain and the job to be done.

Ergo, it's pretty important to craft experiences that make their core mechanic about that. And that's how Copilot Workspace was designed. The LLM generating the code is in some ways the least interesting part of CW. The effort to understand how the code works, which files must be touched, how to make coordinated changes across the codebase — that's the real challenge tackled here.

michaeljx on 2024-04-29

But there is so much context that the LLM has no access to. Implicit assumptions in the system, undocumented workflows, hard edge cases, acceptable bugs and workarounds, Peter principle boundaries, etc... All these trade-offs need someone that understands the entire business domain, the imperfect users, the system' implementation and invariants, the company's politics and so much more. I have never encountered a single programmer, no matter intelligence and seniority, that could be onboarded on a project simply by looking at the code.

no_wizard on 2024-04-29

A simple allegory to this: Cloud Computing.

Cloud computing boomed, and is by some measure continuing to do so, the last ~15 years, from AWS to Firebase to VPS providers like Linode.

The promise, in part, was that it would replace the need for certain roles, namely system administrators and - depending on what technologies you adopted - you could replace good chunks of backend engineers.

Yet, what happened was roles shifted. System Administration became DevOps, and backend engineers learned to leverage the tools to move faster but provide value elsewhere - namely in designing systems that are stable and well interconnected between different systems, and developing efficient schema representations of data models, among other things.

The reality today is I can buy an entire backend; I can even buy a backend that will automatically stand up API endpoints in GraphQL or REST (or both!). Even though this is true, the demand for backend engineers hasn't shrunk dramatically (if anything, it has seemingly increased).

Technologies enable things in unforeseen ways all the time. Whether LLMs will displace a lot of tech workers will be up for debate, and the reality is - for some at least - they will. But overall, if we take the closest situations from the past, they will increase the demand for software engineers over time, as LLMs paired with humans have thus far shown to work best that way, and I foresee that continuing to be the case, much like accountants + Excel is better than accountants - Excel.

lispisok on 2024-04-29

>they've gone from barely stringing together a TODO app to structuring and executing large-scale changes in entire repositories in 3 years

First, they are definitely not currently as capable as you say. Second, there is a misconception that the rise of LLMs has been exponential, but the curve is really logistic and we've hit the flat tail hard imo. Where is GPT-5? All the coding AI tools I've tried, like Copilot, either haven't gotten better since release or have seemingly gotten worse as they try to fine-tune them. Third, there is a ton more to being a software engineer than writing ReactTodoAppMVCDemo, which many responses have been talking about.

simonw on 2024-04-29

These tools make people who know software engineering massively more productive.

Given the choice between an LLM-assisted non-engineer and an LLM-assisted experienced software engineer, I know who I would want to work with - even if the non-engineer was significantly cheaper.

idan on 2024-04-29

The opposite: we see these tools as mechsuits to help developers, and particularly newer developers, do things that they would otherwise struggle to do.

Power tools did not result in fewer buildings built. I mean I guess some early skyscrapers did not benefit from modern power tools. But I don't think any construction company today is like "nah we'll just use regular saws thanks".

The allergy to hype is real; I don't think this or any tool is a magic wand that lets you sit back and just click "implement". But the right UX can help you move along the thought process, reach solutions you might not have gotten to on your own (or reach them faster), and iterate.

015a on 2024-04-29

I haven't seen any evidence that these systems are capable of structuring and executing large-scale changes in entire repositories; but given you're still a student, your definition of large might be different.

The middle of an S-curve looks like an asymptote, which is where we're at right now. There's no guarantee that we'll see the same kind of exponential growth we saw over the past three years again. In fact, there's a ton of reason to believe that we won't: models are becoming exponentially more expensive to train; the internet has been functionally depleted of virgin training tokens; and chinks in the armor of AI's capabilities are starting to dampen desire for investment in the space.

Everyone says "this is the worst they'll be", stated as a fact. Imagine it's 2011 and you're running Windows 7. You state: "This is the worst Windows will ever be". Software is pretty unpredictable. It does not only get better. In fact, software (which absolutely includes AI models) has this really strange behavior of fighting for its life to get worse and worse unless an extreme amount of craft, effort, and money is put into grabbing the reins and pulling it back from the brink, day in, day out. Most companies barely manage to keep the quality at a constant level, let alone increase it.

And that's traditional software. We don't have any capability to truly judge the quality of AI models. We basically just give each new one the SAT and see the score go up. We can't say for certain that they're actually getting better at the full scope of everything people use them for; a feat we can barely accomplish for any traditionally observable software system. One thing we can observe about AI systems very consistently, however, is their cost: And you can bet that decision makers at Microsoft, Anthropic, Meta, whoever, obsess about that just as much if not more than capability.

krainboltgreene on 2024-04-29

> Student here: I legitimately cannot understand how senior developers can dismiss these LLM tools

Because we've seen similar hype before and we know what impactful change looks like, even if we don't like the impact (See: Kubernetes, React, MongoDB).

> executing large-scale changes in entire repositories in 3 years

Is this actually happening? I haven't seen any evidence of that.

Vuizur on 2024-04-29

>executing large-scale changes in entire repositories in 3 years

You can look at SWE-Agent; it solved 12 percent of the GitHub issues in their test dataset. It probably depends on your definition of large-scale.

This will get much better, it is a new problem with lots of unexplored details, and we will likely get GPT-5 this year, which is supposed to be a similar jump in performance as from 3.5 to 4 according to Altman.

krainboltgreene on 2024-04-29

This is a laughable definition of large-scale. It's also a misrepresentation of that situation: it was 12% of issues in a dataset drawn from the repositories of the top 5000 PyPI packages. Further, "solves" is an incredibly generous definition, so I'm assuming you didn't read the source or any of the attempts to use this service. Here's one where it deletes half the code and replaces network handling with a comment to handle network handling: https://github.com/TBD54566975/tbdex-example-android/pull/14...

"this will get much better" is the statement I've been hearing for the past year and a half. I heard it 2 years ago about the metaverse. I heard it 3 years ago about DAOs. I heard it 5 years about block chains...

What I do see is a lot more lies. Turns out things are zooming along at the speed of light if you only read headlines from sponsored posts.

rsynnott on 2024-04-30

> Here's one where it deletes half the code and replaces network handling with a comment to handle network handling

... Wait, that's not one that they considered a _success_, is it? Like, one of the 12%?

krainboltgreene on 2024-04-30

We unfortunately have no idea what they consider a success! That's just one of the most recent ones by some random user who wanted to use the program in the real world.

jdlyga on 2024-04-29

Nobody really dismisses LLMs as not being useful. I've been a developer for 15 years and LLMs help a ton with coding, system design, etc. My main piece of advice for students is to make sure that your heart is in the right place. Tech isn't always an easy or secure field. You have to love it.

huygens6363 on 2024-04-29

It is hard to find direct comparisons as the tech is truly novel, but I have heard people say we don’t need to learn math because your calculator can do “university level math”. I don’t know how close that argument is to yours, but there is some overlap.

Your calculator can indeed do fancy math, but you will not be able to do anything with it because you do not understand it.

This is like fancying yourself an engineer because you constructed an IKEA cupboard or an automotive expert because you watched Youtube.

Anything an amateur can come up with is blown to pieces by an actual expert in a fraction of the time and will be of considerably higher quality. The market will go through a period of adjustment as indeed the easy jobs will be automated, but that makes the hard jobs even harder, not easier.

Once you automate the easy stuff, the hard stuff remains.

Basically:

Expert + AI > Amateur + AI

bmitc on 2024-04-29

Because they don't work? I've been harsh on these LLM models because every time I have interacted with them, they've been a giant waste of time. I've spent hours with them, and it just goes nowhere, and there's a huge amount of noise and misinformation.

I recently had another round where I tried to put aside my existing thoughts and perhaps biases and ran a trial of Copilot for a couple of days, using it all day for my tasks. Nearly every single piece of code it gave me was broken, and I was using Python. I was trying to use it for a popular Python library whose documentation was a bit terse. It was producing code drawn from various versions of the library's API, and nothing it gave me actually ran. We ended up just going in circles, where it had no idea what to do. I was asking something as simple as "here's a YAML file, write me Python code to read it in" (of course in more detail and simple steps). It couldn't do it. I eventually gave up and just read the documentation and used StackOverflow.
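
(For reference, the baseline task being described is roughly the sketch below; it assumes the PyYAML package and a placeholder "config.yaml" filename, so treat the names as illustrative rather than the exact code that was asked for.)

    # Sketch of "read in a YAML file" in Python, assuming PyYAML is
    # installed (pip install pyyaml); the filename is a placeholder.
    import yaml

    def load_config(path):
        """Parse a YAML file into plain Python dicts/lists/scalars."""
        with open(path, "r", encoding="utf-8") as f:
            return yaml.safe_load(f)

    if __name__ == "__main__":
        print(load_config("config.yaml"))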

About the only thing I have been able to use it for so far with relatively consistent success is to write boilerplate code. But even then, it feels like I'm using more time than just doing it myself.

And that happens a lot with this stuff. I initially got very excited about Copilot because I thought, shit I was wrong about all this, this is useful. But after that wore off, I saw it for what it is. It's just throwing a bunch of statistically correlated things at me. It doesn't understand anything, and because of that, it gets in the way.

coolgoose on 2024-04-29

Because the problem is not necessarily coding.

90% of the market is just doing CRUD apps, and every year there's a new magical website that will make all websites buildable with a WYSIWYG drag-and-drop editor.

The problem is even defining the correct requirements from the start and iterating them.

My concern is not the death of the market, but more the amount of not-good-but-workable code that's going to make juniors' learning path a lot harder.

As others said, I do think this will help productivity by removing the "let's please update the readme, changelog, architecture diagram, etc." part of the codebase, and maybe in some cases actually remove the need to generate boilerplate code altogether (why bother when it can be generated on the fly when needed, for example).

thaumaturgy on 2024-04-29

> Are there any arguments that could seriously motivate me to continue with this career outside of just blind hope that it will be okay?

FWIW, as an oldish, so far everything that has been significantly impacted by deep learning has undergone a lot of change, but hasn't been destroyed. Chess and Go are a couple of easy examples; the introduction of powerful machine learning there has certainly changed the play, but younger players that have embraced it are doing some really amazing things.

I would guess that a lot of the same will happen in software. A lot of the scut work will evaporate, sure, but younger devs will be able to work on much more interesting stuff at a much faster pace.

That said, I would only recommend computing as a career to youth that are already super passionate about it. There are some pretty significant cultural, institutional, and systemic problems in tech right now that are making it a miserable experience for a lot of people. Getting ahead in the industry (where that means "getting more money and more impressive job titles") requires constantly jumping on to the latest trends, networking constantly for new opportunities, and jumping to new companies (and new processes / tech stacks) every 18 months or so. Companies are still aggressively culling staff, only to hire cheaper replacements, and expectations for productivity are driving some developers into really unhealthy habits.

The happiest people seem to be those that are bringing practical development skills into other industries.

blueboo on 2024-04-29

First of all, yes, this is a provocative prompt that bears engagement. You're right to be concerned.

I share your frustration with the reticence of seasoned engineers to engage with these tools.

However, "structuring and executing large-scale changes in entire repositories" is not a capability that is routinely proven out, even with SOTA models in hellaciously wasteful agentic workflows. I only offer a modest moderation. They'll get there, some time between next week and 2030.

Consider: Some of the most effective engineers of today cut their teeth writing assembly, fighting through strange dialects of C or otherwise throwing themselves against what are now incontestably obsolete technologies, but in doing so honed their engineering skills to a much higher degree than their comrades who glided in on Java's wing.

Observe that months of hand-sculpted assembly has turned into a single Python call. AI is yet another tier of abstraction.

Another lens is application -- AI for X domain, for X group, for X age, for X culture. Lots to do there.

Finally, there's empowerment. If this technology is so powerful, do you concede that power to others? Or are you going to be a part of the group that ensures it benefits all?

FYI, OpenAI published a labor market study suggesting professions that are more or less exposed to AI. Take a look.

EarChromeLocal on 2024-05-02

Not enough context to tell if YOU should worry.

Anyways, in general, I wouldn't worry because if we get to a point where software can replace human software engineers then almost everyone else will be without a job soon after (think bug-free software being produced exponentially for every market niche).

It seems to me we never make less of something when we make it more efficiently. The opposite seems true.

Sure, if one clung to writing "code" on binary punch cards instead of adopting assembly, then that person would have become redundant after a while. Today some people still write assembly, but the vast majority uses higher-level languages. LLMs will probably be the next step up the abstraction ladder. If you think of yourself as a <insert programming language> programmer then, yeah, you should worry. Current programming languages will be obsolete in my opinion. Letting LLMs write code and then reading/changing it is a very short-term (and doomed) trend. You don't read the compiler-generated assembly of your <insert programming language> programs, do you? Almost no one cares how a piece of software works unless it's slow and/or needs to be modified.

Software programming will move to a much higher level of abstraction (think modules, integrations), so much so that everyone could do it, the same way everyone could be a plumber but almost no one is. A plumber is paid to suffer under the sink, getting covered in dirty water and/or crap, something you don't want to deal with. Sure, a business owner could get an LLM to write their brand-new idea of the day, but they won't. They will pay someone else to do it, and you'll be that someone, because they'd rather handle other business stuff or enjoy the money they're making.

On top of all that, consider that the amount of new things people could do will also increase exponentially. If we lived lives like our ancestors we could do nothing all day (and probably we do, to their eyes), but we don't; in fact, we get busier and busier.

wrl on 2024-04-30

after the first two or three times i got asked to code-review something that another developer "didn't know how to write so just asked copilot/chatgpt/etc and it produced this, could you tell me if it's right?" i got pretty tired of it. obviously it's useless to ask questions about how the code was written because the person asking for the code review didn't actually write it and they don't have any answers about why it was written how it was.

especially on the back of the xz supply chain attack and, y'know, literally any security vulnerability that slipped through code review, i refuse to have unaccountable, unreviewed code in projects i work on.

somewhat recently, there was the case with air canada's LLM-based support bot making a false statement and then a judge forcing air canada to honour it. i think we're setting the stage for something like that happening with LLM-written code – it's going to be great for a while, everyone's going to be more productive, and then we'll all collectively find out that copilot spat out a heartbleed-level flaw in some common piece of software.

wilsonnb3 on 2024-04-29

> I also have a hard time believing that there is enough software need to make such an extreme productivity multiplier not be catastrophic to labor demand.

Every single time a change like this happens, it turns out that there is in fact that much demand for software.

The distance between where we are now and the punch card days is greater than where we are now and the post-LLM days and yet we have more software developers than ever. This pattern will hold and you would need much stronger evidence than “LLMs seem like an effective productivity multiplier” for me to start to doubt it.

Also don't forget that 80% of software development isn't writing code. Someone is still gonna have to convert what the business wants into instructions for the LLM so it can generate Java code, so the JVM can generate byte code, so the runtime can generate assembly code, so the processor can actually do something.

And lastly, there are a lot of industries that won't touch LLMs for security reasons for a long time, and even more that are just still writing Java 8 or COBOL and have no intention of trying out fancy new tools any time soon.

So yeah, don’t be too down in the dumps about the future of software development.

michaelmior on 2024-04-29

> Someone is still gonna have to convert what the business wants into instructions for the LLM

It seems like what GitHub is aiming for is a future where "what the business wants" can just be expressed in natural language, the same way you might explain to a human developer what you want to build. I would agree that right now LLMs generally don't do well with very high-level instructions, but I'm sure that will improve over time.

As for the security concerns, I think that's a fair point. However, as LLMs become more efficient, they become easier to deploy on-prem, which mitigates one significant class of concerns. You could also reasonably make the argument that LLMs are more likely to write insecure code. I think that's true with respect to a senior dev, but I'm not so sure when compared with junior folks.

piva00 on 2024-04-29

> It seems like what GitHub is aiming for is a future where "what the business wants" can just be expressed in natural language, the same way you might explain to a human developer what you want to build.

We've been there before with 4GLs in many forms; they all failed for the same reason: it requires reasoning to understand the business needs and translate them into a model made in code.

LLMs might be closer to that than other iterations of technology attempting the same, but they still fail at reasoning, they still fail to understand imprecise prompts, and correcting them is spotty as complexity grows.

There's a gap that LLMs can fill but that won't be a silver bullet. To me LLMs have been extremely useful to retrieve knowledge I already had (syntax from programming languages I stopped using a while ago; techniques, patterns, algorithms, etc. that I forgot details about) but every single time I attempted to use one to translate thoughts into code it failed miserably.

It does provide a lot in terms of railroading knowledge into topics I know little about: I can prompt one to give me a roadmap of what I might need to learn on a given topic (like DSP), but I have to double-check the information against sources of truth (books, the internet). Same for code examples of a given technique; it can be a good starting point to flesh out the map of knowledge I'm missing.

Any other case I tried to use it professionally, it breaks down spectacularly at some point. A friend who is a PM and quite interested in all the GenAI-related stuff has been trying to hone prompts that could generate him some barebones application, to explore how it could be used to enhance his skills; it's been 6 months and the furthest he has gotten is two views of the app and saving some data through Core Data on iOS, something that could've been done in an afternoon by a mid-level developer.

michaelmior on 2024-04-29

I agree that we're far off from such a future, but it does seem plausible. Although I wouldn't be surprised to find that when and if we get there, that the underlying technology looks very different from the LLMs of today.

> something that could've been done in an afternoon by a mid-level developer

I think that's pretty powerful in itself (the 6 months to get there notwithstanding). I expect to see such use cases become much more accessible in the near future. Being able to prototype something with limited knowledge can be incredibly useful.

I briefly did some iOS development at a startup I worked at. I started with literally zero knowledge of the platform and what I came up with barely worked, but it was sufficient for a proof of concept. Eventually, most of what I wrote was thrown out when we got an experienced iOS dev involved. I can imagine a future where I would have been completely removed from the picture and the business folks just built the prototype on their own. Failing that, I would have at least been able to cobble something together much more quickly.

wilsonnb3 on 2024-04-29

> It seems like what GitHub is aiming for is a future where "what the business wants" can just be expressed in natural language, the same way you might explain to a human developer what you want to build.

I do agree that this is their goal but I expect that expressing what you want the computer to do in natural language is still going to be done by programmers.

Similar to how COBOL is closer to natural language than assembly and as such more people can write COBOL programs, but you still need the same skills to phrase what you need in a way the compiler (or in the future, the LLM) can understand, the ability to debug it when something goes wrong, etc.

“Before LLM, chop wood, carry water. After LLM, chop wood, carry water.”

As for the security stuff, on premise or trusted cloud deployments will definitely solve a lot of the security issues but I think it will be a long time before conservative businesses embrace them. For people in college now, most of them who end up working at non-tech companies won’t be using LLM’s regularly yet.

skydhash on 2024-04-29

SQL and Python are arguably the languages closest to English, and even then getting someone to understand recursion is difficult. How do you specify that some values should be long-lived? How do you specify exponential retries? Legalese tries to be as specific as possible without being formal, and even then you need a judge on a case. Maybe when everyone has today's datacenter compute power in their laptop.
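
(To make the point concrete, here is a minimal sketch of "exponential retries" expressed as code; the function name, delays, and attempt count are illustrative only, not anything specified in the thread.)

    # "Exponential retries" pinned down precisely: base delay, growth
    # factor, and attempt count are explicit in a way that plain English
    # rarely is. Names and numbers here are illustrative.
    import time

    def call_with_retries(fn, max_attempts=5, base_delay=0.5):
        for attempt in range(max_attempts):
            try:
                return fn()
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of attempts, surface the last error
                time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, 4s, ...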

michaelmior on 2024-04-29

> arguably the languages closest to English

Yes, but they're not English. All the concerns that you mention are ones that I think LLM development tools are aiming to eliminate from explicit consideration. Ideally, a user of such a tool shouldn't even have to have ever heard of recursion. I think we're a long way off from that future, but it does feel possible.

troupo on 2024-04-29

Have you ever actually tried getting proper, non-contradictory requirements in plain natural language from anyone?

Good luck

michaelmior on 2024-04-30

This is absolutely a skill in itself. It could well be the case that such a plain expression of requirements in natural language is a valuable skill that enables use of such tools in the future.

wiredfool on 2024-04-29

Maybe it would help me, not sure. I haven't been impressed with what I've seen when team members have "run stuff through chatgpt". (I'm senior, been doing this professionally for 25 years. Made my share of mistakes, supported my code for a decade or two.)

My main issue at the moment with Junior devs is getting stuck in the weeds, chasing what they think is a syntax error, but not seeing (or hearing) that what they have is a lack of understanding. Some of that is experience, some of that is probably not being able to read the code and internalize what it all means, or make good test cases to exercise it.

If you can't produce the code, and have a sketchy grasp of reasoning it out, debugging it is going to be a step too far. And the AIs are (hopefully) going to be giving you things that look right, but there will be subtle bugs. This puts it in the dangerous quadrant.

StefanWestfal on 2024-04-29

My two cents: I worked in a different engineering field before transitioning to Software Engineering because "coding" was and is what we need to solve problems, and I got the hang of it. A few years in, I spend little of my day actually writing code but more time in meetings, consoles, documentation, logs, etc. Large language models (LLMs) help when writing code, but it's mostly about understanding the problem, domain, and your tools. When going back to my old area, I am excited about what a single person can do now and what will come, but I am also hitting walls fast. LLMs are great when you know what you are doing, but can be a trap if you don't and get worse and worse the more novel and niche you go.

cush on 2024-04-29

The market still desperately needs engineers. We’re still at a point in supply/demand where experienced engineers are making 2-3x national median salaries. It’s tougher for juniors to land the most lucrative positions, but there are still tons of jobs out there. The more money you accumulate early in your career, the more time that money has to grow. Interest rates are high, so it’s a great time to be saving money.

Also, the skills you learn as an engineer are highly transferable, as you learn problem solving skills and executive function - many top CEOs have engineering backgrounds. So if you do need to pivot later in your career, you’ll be set up for success

skydhash on 2024-04-29

Software Engineering is not a subset of computer science; they just intersect. And as a software engineer, your job can be summarized as gathering requirements and designing a solution, implementing and verifying said solution, and maintaining the solution in the face of changes. And the only thing AI does now is generate code snippets. In The Mythical Man-Month, Brooks recommends spending 1/3 of the schedule on planning, 1/6 on coding, and 1/2 on testing components and systems (a quarter each). And LLMs can't do the coding right. What LLMs add, you still have to review and refactor, and it would have been faster to just do it yourself.

Menu_Overview on 2024-04-29

> and it would have been faster to just do it.

False. Obviously this depends on the work, but an LLM is going to get you 80-90% of the way there. It can get you 100% of the way there, but I wouldn't trust it, and you still need to proofread.

In the best of times, it is about as good as a junior engineer. If you approach it like you're pair programming with a junior dev that costs <$20/mo then you're approaching it correctly.

troupo on 2024-04-29

> Obviously this depends on the work, but an LLM is going to get you 80-90% of the way there.

No. No it can't.

However amazing they are (and they are unbelievably amazing), they are trained on existing data sets. Ask for anything that doesn't exist on StackOverflow, or anything written in a language slightly more "esoteric" than JavaScript, and LLMs start vividly hallucinating non-existent libraries, functions, method calls, and patterns.

And even for "non-esoteric" languages they will wildly hallucinate at every turn apart from some heavily trodden paths.

fragmede on 2024-04-29

Yes it can. When the project is yet another javascript CRUD app, 80% isn't brand new, never existed before code, but almost-boilerplate that does exist on StackOverflow, on a heavily trodden path where the LLM will get you 80% of the way there.

troupo on 2024-04-30

You've literally repeated what I said

fragmede on 2024-04-30

but with yes instead of no

troupo on 2024-04-30

Nothing changed in your description compared to what I wrote. It still remains "for a well-trodden path in a well-known language with SO-level solutions it will help you, for anything else, good luck"

fragmede on 2024-04-30

I'm not contradicting what you're saying, no. I'm emphasizing that the well-trodden path is the majority of the work out there, as opposed to being flippant about "anything else". If I'm reading you wrong, apologies.

shepherdjerred on 2024-04-29

AI _will_ take jobs. It's a matter of when and not if. The real question is will that occur in the next 10/50/100 years.

It might not happen in your lifetime, but as you've noted the rate of progress is stunning. It's possible that the latest boom will lead to a stall, but of course nobody knows.

IMO it's way too hard to predict what the consequences will be. Ultimately the best thing you can do is continue with your degree and consider what skills you have that an AI couldn't easily replicate, e.g. no matter how good AI gets, robotics still has a ways to go before an AI could replace cooks, nurses, etc.

kristiandupont on 2024-04-29

I've been writing software professionally for 25 years and I am absolutely not dismissing them, on the contrary.

We are currently in a window where LLMs are helpful but nothing more, making them a great tool. I suspect that will last for a good while and probably turn me into more of a "conductor" over time -- instructing my IDE something like "let's replace this pattern with this other one", and having it create a PR for me that changes many files in one go. But I see absolutely no reason why the evolution shouldn't continue to the point where I just need to tell it what I want from a user perspective.

trashface on 2024-04-29

Either way you're going to want to have a backup career plan. By 40 if not earlier you could be forced out of tech by ageism or something else. Unless you transition into management, but even then. I don't think middle management is going to be immune to AI-enabled employment destruction. So you basically should plan to make most of your money from software in the first decade or two. Live cheap and save/invest it.

Best to plan and train early because it's super hard to switch careers mid-life. Trust me, I'm failing at it right now.

brailsafe on 2024-04-29

Well, I've been out of work for now over a year, and it's the third time in 10 years. People will say that advances in tooling create more work, but ultimately more work is created when there's more money flowing around, and when that money is invested in tooling to eke out productivity gains, which will continue. But will it outpace how much we're padding out the bottom of the funnel? Will so much more money start flowing around, for good reason, that it matches how many people can do the work?

It's also worth considering that if you finished school prior to 2020 and started trying to tackle the brutal fight upstream that software development already was, why the hell would it be worth it? For... the passion? For... the interest in technical stuff? Quite frankly, in a tech career, you need to get quite lucky with timing, skill, perception of your own abilities and how they relate to what you're paid to do, and if you have the ability to be passably productive at it, it's at least worth considering other paths. It may end up comfy, or it may end up extremely volatile, where you're employed for a bit and then laid off, and then employed, and laid off, and in-between you end up wondering what you've done for anyone, because the product of your labor is usually at-best ephemeral, or at-worst destructive to both the general population and your mind and body; waking up and going straight over to your computer to crank out digital widgets for 8 hours might seem lovely, but if it's not, it's isolating and sad.

Also worth considering the tax changes in the U.S that have uniquely made it more difficult to amortize the cost of software development, but I don't claim to understand all that yet as a non-US person.

darepublic on 2024-04-29

Ever been frustrated with a piece of software? Or wished for some software to exist to solve a problem? Well, just point these nifty tools at it and watch the solutions magically materialize. That will comfort you somewhat after being bummed out over obsolescence. But if you find you can't materialize the desired software in quick time, then... I guess at least for now humanity is still required.

wilg on 2024-04-29

Noah Smith has an economic argument that even if AI is better than humans at literally everything, we'll still have full employment and high wages: https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-...

HideousKojima on 2024-04-29

Writing code is one of the less important parts of your job as a developer. Understanding what the client actually wants and turning that into code is where the real difficulty lies for most software development. Also, I doubt we'll be able to overcome the "subtly wrong/insecure" issues with a lot of LLM generated code

JTyQZSnP3cQGa8B on 2024-04-29

> Understanding what the client actually wants

This is what AI bros don’t understand since they seem to spend their days writing CRUD backends for REST APIs.

You need to understand a lot of stuff before coding anything:

- client: what do you want?
- product owner: what does the client really want?
- me: what do they fucking want and how will I do it?
- QA: how will I test this cleanly so that they don't bother me all day long?
- manager: when do you want it?
- boss: how much are you willing to spend for this?

We usually say that nerds are shy and introverted, but we are central to the development of a product, and I don’t think an AI can change this.

nojvek on 2024-04-29

I have yet to see an LLM build and debug something complex. Barely stringing TODO apps, sure.

Still need a competent human to oversee. Hallucinations are a serious problem. Without symbolic reasoning, LLMs quickly start to fall apart due to context limits and not being able to know what exactly is wrong and needs to be changed.

rco8786 on 2024-04-29

We’re not dismissing them! They’re just not that good at helping us with our actual work.

I have Copilot on and it’s…fine. A marginal productivity improvement for specific tasks. It’s also great at variable names, which is probably the main reason I leave it on.

But it’s not replacing anyone’s job (at least not yet).

epolanski on 2024-04-29

Copilot is great at boilerplate and as a super autocomplete.

Useful when needing to recall some api without having to open the browser and google too.

But honestly writing code is nowhere near the hard part of the job, so there's 0 reasons to fear LLMs.

troupo on 2024-04-29

No engineer worth their salt (whether junior, mid or senior) should be concerned. And this is a good illustration why: https://news.ycombinator.com/item?id=40200415

mike_hearn on 2024-04-29

Whoa, don't quit your course because of a product announcement! That'd be overreacting by a lot. Please consider these points instead!

Firstly, it's not true that LLMs can structure and execute large scale changes in entire repositories. If you find one that can do that please let me know, because we're all waiting. If you're thinking of the Devin demo, it turned out on close inspection to be not entirely what it seemed [1]. I've used Claude 3 Opus and GPT-4 with https://aider.chat and as far as I know that's about as good as it gets right now. The potential is obvious but even quite simple refactorings or changes still routinely fox it.

Now, I've done some research into making better coding AIs, and it's the case that there's a lot of low hanging fruit. We will probably see big improvements ... some day. But today the big AI labs have their attention elsewhere, and a lot of ideas are only executable by them right now, so I am not expecting any sudden breakthroughs in core capabilities until they finish up their current priorities which seem to be more generally applicable stuff than coding (business AI use cases, video, multi-modal, lowering the cost, local execution etc). Either that or we get to the point where open source GPT-4+ quality models can be run quite cheaply.

Secondly, do not underestimate the demand for software. For as long as I've been alive, the demand for software has radically outstripped supply. GitHub claims there are now more than 100 million developers in the world. I don't know if that's true, because it surely captures a lot of people who are not really professional developers, but even so it's a lot of people. And yet every project has an endless backlog, and every piece of software is full of horrible hacks that exist only to kludge around the high cost of development. Even if someone does manage to make LLMs that can independently tackle big changes to a repository, it's going to require a very clear and precise set of instructions, which means it'll probably be additive. In other words the main thing it'd be applied to is reducing the giant backlog of tickets nobody wants to do themselves and nobody will ever get to because they're just not quite important enough to put skilled devs on. Example: any codebase that's in maintenance mode but still needs dependency updates.

But then start to imagine all the software we'd really like to have yet nobody can afford to write. An obvious one here is fast and native UI. Go look at the story that was on HN a day or two ago about why every app seems so inefficient these days. The consensus reason is that nobody can afford to spend money optimizing anything, so we get an endless stream of Electron apps that abuse React and consume half a gig of RAM to do things that Word 95 could do in 10MB. Well, porting a web app to native UI for Mac or Windows or Linux seems like the kind of thing LLMs will be good at. Mechanical abstractions didn't work well for this, but if you can just blast your way through porting and re-porting code without those abstractions, maybe you can get acceptably good results. Actually I already experimented with porting JavaFX FXML files to Compose Multiplatform, and GPT-4 could do a decent job of simple files. That was over a year ago and before multimodal models let it see.

There are cases where better tech does wipe out or fundamentally change jobs, but it's not always the case. Programmer productivity has improved enormously over time, but without reducing employment. Often what we see when supply increases is that demand just goes up a lot. That's Jevons paradox. In future, even if we optimistically assume all the problems with coding LLMs get fixed, I think there will still be a lot of demand for programmers, but the nature of the job may change somewhat to have more emphasis on understanding new tech, imagining what's possible, working out what the product should do, and covering for the AI when it can't do what's needed. And sometimes just doing it yourself is going to be faster than trying to explain what you want and checking the results, especially when doing exploratory work.

So, chin up!

[1] https://news.ycombinator.com/item?id=40010488

nyarlathotep_ on 2024-04-29

> Secondly, do not underestimate the demand for software. For as long as I've been alive, the demand for software has radically outstripped supply. GitHub claims there are now more than 100 million developers in the world.

And yet jobs are more difficult to come by than any time in recent history (regardless of skill or experience; excepting perhaps "muh AI" related roles), a seemingly universally expressed sentiment around these parts.

sensanaty on 2024-04-29

People usually mean FAANG jobs with the absurdly overinflated FAANG-level pay when they talk about jobs being hard to come by, to be fair.

greatwhitenorth on 2024-04-29

Give an example of "structuring and executing large-scale changes in entire repositories". Let's see the complexity of the repository along with what it structured and executed.

chasd00 on 2024-04-29

it's a new tool that really works well in some ways and falls on its face in others. My advice, learn the tool and how/when to use it and become an expert. You'll be in a better place than many "seniors" and your peers by having a large toolset and knowing each one very well. Also, be careful believing the hype. Some specific cases can make for incredible videos and those are going to be everywhere. Other use cases really show the weaknesses but those demos will be much harder to find.

Reki on 2024-04-29

They're asymptotic to human performance.

guluarte on 2024-04-29

LLMs are autocomplete with context; for simple tasks they do OK, but for complex tasks they get lost and produce crap.

ThrowawayTestr on 2024-04-29

AI can't prompt itself. The machines will always need human operators.

mistrial9 on 2024-04-29

this is completely wrong.. the entire LLM system was bootstrapped by "self-supervised learning" .. where data sets are divided and then training proceeds on parts.. it is literally self-training

thehoneybadger on 2024-04-29

Having written and sold machine learning software for over 15 years, you are definitely over-reacting. There is about a 10 year pattern. Every 10 years AI gets drummed up as the next big thing. It is all marketing hype. It always fails.

This happened in the 70s, 80s, 90s, 00s, 10s, and now the 20s. Without fail. It is a hyped up trend.

Only be concerned when someone is presenting a real breakthrough in the science (not the commercial aspect). A real breakthrough in the science will not have any immediate economic impact.

Convolutional neural networks are absolutely not revolutionary over the prior state of the art. These are incremental gains in model accuracy at the cost of massive data structures. There is no real leap up here in the ability for a machine to reason.

ChatGPT is just snake oil. Calm down. It will come and go.

bossyTeacher on 2024-04-29

Programming languages are a formal way to deliver instructions to a machine. Software is a set of instructions whether you use Python or some visual language. LLMs are just another way of generating those instructions. You still need someone who knows how those instructions work together logically and how to build them using best practices. You can't escape that no matter what you use (Python, some no-code tool, or LLMs).

So in that sense, the role is not going away anytime soon. The only thing that could change is how we make software (but even that is unlikely to change much anytime soon).

smrtinsert on 2024-04-29

If you really think it's over, pivot into business. As someone who is all in on AI assistance, there's simply too much curation for me to do to think it's replacing people any time soon, and that's just with coding, which is maybe 20% of my job these days. I will say Copilot/ChatGPT helped reduce that from 30% to 20% pretty quickly when I first started using them. Now I largely take suggestions instead of actually typing it out. When I do have to type it out, it's always nice to have years of experience.

Said it before will say it again, it's a multiplier, that's it.

DontchaKnowit on 2024-04-29

Dude once you work in industry you will realize that LLMs aint coming for your job any time soon. The job, for many software engineers, is primarily soliciting and then translating domain-expert requirements into technical requirements. The coding is just wrapping things up.

LLMs are useless for this.

nforgerit on 2024-04-29

Don't fret. What people call AI these days is just a gigantic economically unsound bullshit generator (some underlying ideas might be valuable though) that passed very stupid tests. It is brutally marketed like blockchain & crypto by some sociopaths from Silicon Valley, their mini-mes and middle management hell which needs to double down on bad investments.

The bigger problem I see is the economical situation.

zackmorris on 2024-04-29

After programming for 35 years since I was 12 and learning everything from the transistor level up through highly abstracted functional programming, I'm a total doomer. I put the odds of programming being solved by AI by 2030 at > 50%, and by 2040 at > 95%. It's over.

Programming (automating labor) is the hardest job there is IMHO, kind of by definition, just like AGI is the last problem in computer science. You noticed the 3-year pace of exponential growth, and now that will compound, so we'll see exponential-on-exponential growth. AIs will soon be designing their own hardware and playgrounds to evolve themselves perhaps 1 million times faster than organic evolution. Lots has been written about this by Ray Kurzweil and others.

The problem with this is that humans can't adapt that fast. We have thought leaders and billionaires in total detail of the situation. Basically that late stage capitalism becomes inevitable once the pace of innovation creates a barrier to entry that humans can't compete with. The endgame will be one trillionaire or AI owning the world, with all of humanity forced to perform meaningless work, because the entity in charge will lack all accountability. We're already seeing that now with FAANG corporations that are effectively metastasized AIs using humans as robots. They've already taken over most governments through regulatory capture.

My personal experience in this was that I performed a lifetime of hard work over about a 25 year period, participating in multiple agencies and startups, but never getting a win. So I've spent my life living in poverty. It doesn't matter how much I know or how much experience I have when my mind has done maybe 200 years of problem solving at a 10x accelerated rate working in tech - see the 40 years of work in 4 years quote by Paul Graham. I'm burned out basically at a level of no return now. I'm not sure that I will be able to adapt to delegating my labor to younger workers like yourself and AI without compromising my integrity by basing my survival on the continued exploitation of others.

I'd highly recommend that any young people reading this NOT drink the kool-aid. Nobody knows what's going to happen, and if they say they do then they are lying. Including me. Think of tech as one of the tools in your arsenal that you can use to survive the coming tech dystopia. Only work with and for people that you can see yourself being someday. Don't lose your years like I did, building out someone else's dream. Because the odds of failure, which were once 90% in the first year, are perhaps 99% today. That's why nobody successful pursues meaningful work now. The successful people prostitute themselves as influencers.

I'm still hopeful that we can all survive this by coming together and organizing. Loosely that will look like capturing the means of production in a distributed way at the local level in ways that can't be taken by concentrated wealth. Not socialism or communism, but a post-scarcity distribution network where automation provides most necessities for free. So permaculture, renewable energy, basically solarpunk. I think of this as a resource economy that self-evidently provides more than an endlessly devaluing and inflating money supply.

But hey, what do I know. I'm the case story of what not to do. You can use my story to hopefully avoid my fate. Good luck! <3

zackmorris on 2024-05-01

Hey I was feeling particularly dour when I wrote this, but not everything is doom and gloom. AI/AGI will be able to solve any/all problems sometime between 2030 and 2040. I take the alarmist position because I've been hit with bad news basically every day since the Dot Bomb and 9/11, and don't feel that I'm living my best life. But we can manifest a brighter tomorrow if we choose to:

'ChatGPT for CRISPR' creates new gene-editing tools:

https://www.nature.com/articles/d41586-024-01243-w

https://news.ycombinator.com/item?id=40205961

The intersection of AI with biology will enable us to free ourselves of the struggles of the human condition. Some (like me) are concerned about that, but others will run with it and potentially deliver heaven on Earth.

The way I see it all going is that a vanishingly small number of people, roughly 1 in 10,000 (the number of hackers/makers in society) will work in obscurity to solve the hard problems and get real work done on a shoestring budget. But we'll only hear about the thought leaders and billionaires who do little more than decide where the resources flow.

So the most effective place to apply our motivation, passion and expertise will be in severing the hold that capital has over innovation. Loosely that looks like UBI and the resource economy I mentioned, which I just learned has the name Universal Basic Services (UBS), and intentionally avoids complications from the money side being manipulated by moneyed interests:

https://en.wikipedia.org/wiki/Universal_basic_services

The idea is that by providing room and board akin to an academic setting, people will be free to apply their talents to their calling and work at an exponentially faster rate to get us to a tech utopia like Star Trek, instead of being stuck in the path we're on now towards a neofeudalist tech dystopia.

Sorry if I discouraged anyone. I truly believe that there is still hope!

PoignardAzur on 2024-04-29

I think Microsoft is going the wrong direction with Copilot (though it's a reasonable direction given their incentives).

Right now Copilot is terrible at large changes to complex codebases; the larger and more complex, the worse. But it's great at suggesting very short snippets that guess exactly what you were in the middle of writing and write it for you.

I wish Copilot focused more on the user experience at the small scale: faster and smaller completions, using the semantic info provided by the IDE, new affordances besides "press tab to complete" (I'd love a way for Copilot to tell me "your cursor should jump to this line next"), etc. This whole focus on making the AI do the entire end-to-end process for you seems like a dead end to me.

bloomfieldj on 2024-04-29

I find Cursor’s Copilot++ is miles ahead of GitHub’s in terms of speed and autocomplete helpfulness. They’re also working on the “your cursor should jump to this line next" feature, but I haven’t relied on many of its suggestions yet. It’s available in their vscode fork, but doesn’t seem to be in their docs yet.

PoignardAzur on 2024-04-30

Had a glance. Copilot++ looks intriguing, though the landing page is terrible (I can't even see what the demos are trying to show). I might give it a try.

lenerdenator on 2024-04-30

> But it's great at suggesting very short snippets that guess exactly what you were in the middle of writing and write it for you.

I found it reasonably good when I described what code should do through comments and let it generate based on that, evaluated the output, moved on to the next piece of logic, and repeated.

Basically, chaining the snippets together. Then again I'm doing Python back-end web development, so not something terrifically hard.
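
Roughly, the pattern looks like this (a made-up Python illustration of the workflow, not actual Copilot output):

  # Given a list of orders, return total revenue per customer as {customer_id: total}.
  # Orders are dicts with "customer_id" and "amount"; skip non-positive amounts (refunds).
  def revenue_per_customer(orders):
      totals = {}
      for order in orders:
          if order["amount"] <= 0:
              continue  # ignore refunds/zero-value orders, per the comment above
          cid = order["customer_id"]
          totals[cid] = totals.get(cid, 0) + order["amount"]
      return totals

  # ...review the generated body, then write the next comment and repeat.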

vizzier on 2024-04-29

proper insertion points in completions akin to older Visual Studio templates would probably be ideal, but you can use ctrl + right arrow in vscode to accept completions one word at a time, rather than a whole block with tab.

Roonerelli on 2024-05-01

Jetbrains are doing something like this (in the .net space) https://blog.jetbrains.com/dotnet/2024/04/30/jet-brains-ai-a...

konschubert on 2024-04-29

Totally.

msoad on 2024-04-29

This should've been done in VSCode. Most developers don't want to write code in a browser. Once a VSCode extension is built, they could generalize it to other editors.

I'm sure they are thinking of VSCode integration and I am hoping for that to be available soon.

lispisok on 2024-04-29

I wonder if AI tools are going to kill editor diversity. With every one of these tools, VS Code is the first-class citizen and other editors are an afterthought. Sure, people can write their own Emacs package, but that's only if the tool developers enable it, and the experience is usually not as good as the official VS Code version. I can also see a future where not using VS Code is a signal that you are less reliant on AI tools to code.

padthai on 2024-04-29

A thousand small cuts: almost everything comes to VSCode first, before other editors, and even mainstream things are underrepresented in most other editors (Jupyter, commercial integrations, etc…).

Although that was true of Atom too, wasn't it?

freedomben on 2024-04-29

This is a great fear of mine as a (neo)vim user. I've repeatedly encountered situations already where it was assumed everyone was using VS code, and I only see that getting worse as more and more integrations get built into IDEs.

acheong08 on 2024-04-29

There are lots of hackers bringing VsCode-only extensions to Neovim. I brought Copilot Chat to neovim back when it first became available to me and the community is still alive and well, though with a different maintainer. Tbh AI tools aren’t that useful, and those that are will get ported. Currently trying to bring Cursor to Neovim but that has proven much more difficult

freedomben on 2024-04-30

I don't know that I can say thank you enough for your work! Thank you!!!

Do you know why Github didn't make an official version? I'm definitely going to give yours a try, and I don't really care whether it's "official" or not; I'm curious because official neovim support would be a useful signal of their priorities/intentions.

acheong08 on 2024-04-30

Link btw since it’s on a org: https://github.com/CopilotC-Nvim/CopilotChat.nvim

Re: official version

There was some discussion but the answer is that it’s just too difficult. Our version was reverse-engineered from a bunch of MITM and guesswork. The VSCode implementation has a “@workspace” command which involves complex tree-sitter integration, sending off all your code to be vectorized, and uses RAG to get the most relevant snippets. We obviously weren’t able to implement all features. They can get away with this in JavaScript because you can get just about anything via npm. In Lua, most things have to be from scratch. I had to spend a few hours just getting tiktoken working in Lua and even then it requires manual installation. The package management system with Lazy.nvim is very lacking.
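
For anyone unfamiliar with the retrieval step described above, here is a minimal sketch of the general RAG idea (not GitHub's actual implementation; Python for brevity, with a hypothetical embed() helper standing in for whatever embedding model or API you use):

  import numpy as np

  def top_k_snippets(question, snippets, embed, k=3):
      # embed() is an assumed helper that returns a 1-D numpy vector for a string,
      # e.g. backed by a local embedding model or an embeddings API.
      q = embed(question)
      scored = []
      for snippet in snippets:
          v = embed(snippet)
          # cosine similarity between the question and each code chunk
          score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
          scored.append((score, snippet))
      scored.sort(key=lambda pair: pair[0], reverse=True)
      return [snippet for _, snippet in scored[:k]]

  # The top-k chunks then get pasted into the chat prompt as workspace context.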

freedomben on 2024-04-30

Thank you! I'll be trying this out in the next few days as soon as I get some time. I much appreciate your effort. I know you don't do it for the money, but I'll throw some sponsor bucks your way. I'd love to buy you a few beers (or coffees or whatever your love is) :-)

hanniabu on 2024-04-30

I'm still chugging along on sublime text 3

hackermatic on 2024-04-29

We used to say the same about Eclipse!

idan on 2024-04-29

We are indeed thinking about how to do that in a way that feels natural in VS Code. :)

Terretta on 2024-04-29

But, VSCode is in a browser?

https://vscode.dev/

pluc on 2024-04-29

It's very different to already have the code and suggest things than it is to first obtain the code and then suggest things, without introducing a noticeable delay in processing that code.

sdesol on 2024-04-29

Interesting. I don't think the AI code generation will live up to developer expectations, but I do see the value in "project management for developers". The value I see with workspace is not the code generation, but the ability to help developers organize their thoughts.

lostintangent on 2024-04-29

One of our main goals with Copilot Workspace is to help offer a "thought partner"/rubber duck for developers, and potentially even other members of a software team (as you said, PMs, designers, etc.). And code generation is obviously just one means of helping you think through a problem, along with a fleshed-out spec (what are you trying to accomplish?) and a plan for how to approach the problem (how might you actually accomplish it?).

And so while we want to help generate code for tasks (e.g. from issue->PR), we also find that it's just super helpful to take an idea and make it more tangible/concrete. And then use that Workspace session to drive a conversation amongst the team, or spark the implementation. Especially since that might only take a couple clicks.

Within the GitHub Next team, I'll often file issues on one of the team's project repos, and then pause for a moment, before realizing I'm actually curious how it might be accomplished. So I'll open it in CW, iterate a bit on the plan, and then either 1) realize it's simple enough to just fix it, or 2) understand more about my intentions and use a shared session to drive a discussion with the team. But in either case, it's pretty nice to give my curiosity the space to progress forward, and also, capitalize on serendipitous learning opportunities.

So while AI-powered code generation is clearly compelling, I agree with you that there are other, more broadly interesting benefits to the idea->code environment that CW is trying to explore. We have a LOT of work to do, but I'm excited about the potential :)

alfalfasprout on 2024-04-29

The real problem is that a large part of the enormous hype train is investors thinking they can replace talented engineers w/ AI. That's fundamentally doomed to failure.

What AI can really do well is take an already competent engineer and suddenly get rid of a lot of the annoying tedium they had to deal with. Whether it's writing boilerplate, doing basic project management, organizing brain dumps/brainstorming, etc.

sdesol on 2024-04-29

I do think Workspace by GitHub is a smart move. What is clear as day is that we don't have enough data to model how developers think and iterate to solve problems. With Workspace, I see it as a way to capture structured data on how developers think.

This is certainly a long game though. I think GitHub with MS money can continue to lose money on Copilot for the next 5 years to gather data. For other VC ventures, I don't think they can wait that long.

bossyTeacher on 2024-04-29

Pretty much this. I worry that we might face an AI winter soon, as investors in 2029 suddenly realise that LLMs are not going to replace anyone anytime soon and that they won't be able to obtain the fantasized cash from replacing millions of workers with LLMs.

Looking at the sad fate of DeepMind (R.I.P.), I feel that the short-termism generated by LLMs is going to be really painful.

nprateem on 2024-04-29

Yes. And they've also introduced a new kind of tedium in going round in circles while it doesn't understand you, then you give up and go back to SO.

dgb23 on 2024-04-29

It’s pretty good at completing relatively simple code. If you already know exactly what you’re writing and have a consistent and simple coding style, it will typically recognize what you are doing and complete whole blocks.

It takes an experienced eye to fix the code after, but overall it makes you a bit faster at those kinds of tasks.

But if you’re in thinking and exploration mode then turn it off. It’s a massive distraction then.

godzillabrennus on 2024-04-29

These AI coding assistants already do what you are describing. At some point in the future, AI should be capable enough to do the job of a competent developer. This is not a short-term milestone, IMHO.

idan on 2024-04-29

Bingo.

odiroot on 2024-04-29

Maybe AI can at least help with the deluge of Dependabot PRs ;)

goncalo-r on 2024-04-29

Imagine if AI could update my Jira ticket status... I would pay big money for that.

steve_adams_86 on 2024-04-29

Watching the examples and other people's demos, I get the sense that this product completely ignores what makes good software. It has no "big picture" plan or contextual awareness required to make those nuanced yet critical decisions in the far ends of applications. Those cases where you badly need domain expertise, understanding of how the user interfaces with the product, awareness of existing technical debt, and so on.

The scope it can assist in is too limited. It is exactly the kind of thing I'd use to generate self-destructing spaghetti. The thing is, people with less experience will eat this up because it can do some tasks with ease which might not be familiar to them yet. Over time, the code they produce will be a mine field of incongruent patterns and solutions.

It appears to support the hypothesis that these tools may actually create more work, but I worry that the work will be much like that created by outsourced developers shipping duct-taped nonsense back to North America throughout the last 20 years or so.

mlsu on 2024-04-29

And, on top of that, the first few fixes will actually work! They will look good to PMs and to the business! Look at the velocity!

The velocity is good enough to scale back the size of the team, it's good enough to mandate a pace that is only possible with AI doing automated submission and review of PRs, it's good enough to not have any kind of formal design work. Just ship.

After several months of this, any additional bug fix by the AI is just adding to the existing morass of tech debt created by the AI. Slowly but surely, the AI will stop being able to submit a PR that doesn't result in a regression. The people who actually care about software quality aren't on new projects; those are run by juniors and copilot. Nope, they're going to be stuck on maintenance of this garbage.

And meanwhile, that new team working on new feature X is moving so quickly... better give them a raise!

The people who suffer here are, of course, the non-marginal user, who will have to contend with every new feature breaking every other existing feature that they like in the software.

idan on 2024-04-30

That hasn't been our experience using it in-house! It's not perfect, but not every software engineering task is some galaxy-brain architectural shit. Sometimes, you have to lay down some bricks. And sometimes, to lay down bricks, you need to touch more than one tiny patch of code.

Having something round up the likely areas of the codebase that needs touching feels magical. It doesn't always succeed! But it feels pretty magical to get that boost when you're new to some part of a codebase (which, real talk, code I wrote > 1 month ago, I must page back into memory).

Making it easy for me to progressively add context for the model is an accurate analogue for how I think as a developer when tackling a task. I have to build a mental model of how things work. And then a plan for how I'm going to change it.

Maybe for the kinds of tasks you usually tackle, it won't have value. But the amount of context it's attempting to bring to bear on whatever task you give it is categorically more — and better — than any other tool I've seen. I have seen (and been the author of) spaghetti. Could I make CW generate spaghetti? Surely. That's why it's a tool for developers, not a substitute for developers.

thomasfromcdnjs on 2024-04-30

Part of building great products is allowing your engineers to focus on things that really matter. But sometimes you need to change copy or add small features, and from my experience with some of these tools, it is incredible. A PM can just create a repo issue, and the bots will go away with high accuracy and quality (if your codebase is set up well), and submit a PR.

This level of velocity for teams cannot be overstated.

garbanz0 on 2024-04-30

A lot of software doesn't need to be good. I could see using this to prototype internal tools fast.

Also, it doesn't need to be good yet - whoever has the best tooling infrastructure when the models get better will win.

Cilvic on 2024-04-29

I'm not sure why they insist on Codespaces instead of running this inside VS Code?

The setup/workflow looks pretty similar to aider [1], which I've been using and liking. Aider takes smaller steps than Plandex, but Plandex went into some kind of loop a couple of times, so I stopped using it for now.

[1] https://github.com/paul-gauthier/aider

lostintangent on 2024-04-29

We definitely intend to explore a VS Code extension in the not too distant future. And we decided to build a web client + integrated cloud terminal, simply because that allowed us to create an experience that is one-click away from an issue, and could be accessed from anywhere and any device (i.e. we’ve deeply optimized CW for mobile, and I do quite a bit of code thinking in that modality).

In the meantime, when you open a Codespace from Copilot Workspace, you could open that Codespace in VS Code desktop. And use that as a companion editor to the web client (since we bi-directionally sync file changes between them). But totally agreed that a more integrated VS Code experience will be compelling!

Cilvic on 2024-04-30

> simply because that allowed us to create an experience that is one-click away from an issue,

That's indeed pretty cool and as you said it's not an either-or. Thanks for providing more background.

danenania on 2024-04-29

> plandex went into some kind of loops a couple of times so I stoped using it for now.

Hey, Plandex creator here. I just pushed a release today that includes fixes for exactly this kind of problem - https://github.com/plandex-ai/plandex/releases/tag/cli%2Fv0.... -- Plandex now has a much better 'working memory' that helps it not to go into loops, repeat steps it's already done, or give up too early.

I'd love to hear whether it's working better for you now.

Cilvic on 2024-04-30

Actually I got the newsletter, will definitely try it again!

danenania on 2024-04-29

For anyone who might be interested in an open source, terminal-based approach to using AI on larger tasks and real-world projects, I'm building Plandex: https://github.com/plandex-ai/plandex

I've tried to create a very tight feedback loop between the developer and the LLM. I wanted something that feels similar to git.

Apart from the planning and code generation itself, Plandex is also strongly focused on version control. Every interaction with the model is versioned so that it's easy to try out different strategies and backtrack/revise when needed.

I just released a new update (literally a few minutes ago) that includes some major improvements to reliability, as well as support for all kinds of models (previously it had been OpenAI-only). It's been fun trying out Claude Opus and Mixtral especially. I'd love to hear people's thoughts!

Bnjoroge on 2024-04-29

Just played with it. Pretty solid! Question for me: how do you validate that the new changes will work without any issues after applying them? The biggest issue I have with most code generators is the feedback loop from suggested code -> testing it out -> failing, doing it again. It would be great if this was more seamless. Additionally, it would be helpful to control the code generation process in real time.

danenania on 2024-04-29

Thanks for trying it! So far, the testing and validation stage has been left to the developer. While this could change in the future, my experience has been the models aren't quite good enough yet to make an auto-test/auto-fix loop like you've described the default behavior. You end up with a lot of tire-spinning where you're burning a lot of tokens to fix issues that a human can resolve trivially.

I think it's better for now to use LLMs to generate the bulk of a task, then have the developer clean up and integrate rather than trying to get the LLM to do 100%.

That said, you can accomplish a workflow like this with Plandex already by piping output into context. It would look something like:

  plandex new
  plandex load relevant_context.ts some_more_context.ts
  plandex tell 'some kind of complex task'
  # ...Plandex does its thing, but doesn't get it 100% right
  npm test | plandex load
  plandex tell 'please fix the problems causing the failed tests'
As the models improve, I'm definitely interested in baking this in to make it more automated.

cfcfcf on 2024-04-29

Out of interest, what kind of cost ranges are you seeing users spend on the OpenAI API using Plandex? (if only anecdotally)

danenania on 2024-04-29

There's a pretty huge range. It's a function of how much you load into context and how long the task is. So if you dump an entire directory with 100k tokens into context and then proceed to do a task that requires 20 steps, that will cost you a lot. Maybe >$10. But a small task where you're just making a few changes and only have a few small files in context (say 5k tokens) won't cost much at all, maybe like $0.10.
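
As a rough back-of-envelope (the price assumptions here are mine, roughly GPT-4-Turbo-era rates of ~$10 per million input tokens, ignoring output tokens; this is not how Plandex actually bills):

  INPUT_COST_PER_TOKEN = 10 / 1_000_000  # ~$10 per 1M input tokens (assumption, not Plandex pricing)

  def rough_context_cost(context_tokens, steps):
      # If the full context is re-sent on every step, input cost grows linearly with steps.
      return context_tokens * steps * INPUT_COST_PER_TOKEN

  print(rough_context_cost(100_000, 20))  # big context, 20 steps -> ~$20
  print(rough_context_cost(5_000, 3))     # small context, few steps -> ~$0.15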

I haven't done too much digging into exactly how much people who are using Plandex Cloud are spending, but I'd say the range is quite wide even among people who are using the tool frequently. Some are doing small tasks here and there and not spending much--maybe they're on track for $5-10 per month, while I'd guess some other heavy users are on track to spend hundreds per month.

rahulpandita on 2024-04-29

GitHub Next dev here: we have a terminal feature that allows you to connect to a sandbox in the cloud to validate these changes before pushing them out to your repo.

colemannerd on 2024-04-29

I wish there were fixes to devcontainers before adding Copilot. I really want declarative, repeatable builds that are easily used in both Codespaces AND Actions. I know all of the functionality is theoretically there in devcontainer.json, but it is so manual to configure and confusing that anytime I've done it, I use it for 2 weeks and then just go back to developing locally because I don't have time to keep that up. ESPECIALLY if you're deploying to the AWS cloud and also want to use alternative package managers like poetry, uv, yarn, jsr, etc.

lostintangent on 2024-04-29

If folks want to see what Copilot Workspace looks like, here’s a sample session where I addressed a feature request in an OSS VS Code extension I maintain: https://copilot-workspace.githubnext.com/lostintangent/gitdo....

You can see the originating issue and the resulting PR from there. And note that while the initial spec/plan/code was mostly good, I iterated on a couple parts of the plan, and then made a minor tweak to the code manually (everything in CW is editable). Which is a key part of our goal with CW: to help bootstrap you with a task (or think out loud with AI), and then provide the iteration primitives to explore further.

idan on 2024-04-29

Hi All! GitHub Next here, happy to answer questions about Copilot Workspace and Next in general <3

Yenrabbit on 2024-04-29

Looks extremely cool. Have you shared / do you plan to share examples of this in practice? It would be great to read some case studies or even just browse some actual PRs rather than trying to judge from the catchy video :)

idan on 2024-04-29

GitHub Stars have had access to this since last week, and a few of them have made in-use videos:

- https://www.youtube.com/watch?v=FARf9emEPjI by Dev Leonardo

- https://www.youtube.com/watch?v=XItuTFn4PWU by Ahmad Awais

And keep an eye on https://x.com/githubnext, we'll be sharing / linking to more in-action things.

Any PR created with Workspace will have a link to a readonly copy of the workspace so you can see how it happened. We expect those to start circulating as people get access!

JTyQZSnP3cQGa8B on 2024-04-29

Can I install it locally without this tool having an internet connection?

idan on 2024-04-29

No, it's a webapp, like github.com.

esafak on 2024-04-29

I don't use Copilot in particular but in the software space, everything is constantly changing, and the models are trained infrequently so they are never familiar with the things I ask about. Reality is outrunning the models for now but that may change.

throwup238 on 2024-04-29

There's a Black Mirror-esque movie plot idea in there! Everyone starts using bionic implants to filter out offensive parts of reality like people they don't like and stuff, until one day all the filters start falling out of sync with reality...

...and then aliens invade.

jl6 on 2024-04-29

They Live... kinda?

azhenley on 2024-04-29

I just wrote up my experiences of using it for a week: https://austinhenley.com/blog/copilotworkspace.html

TL;DR: Copilot Workspace is a great concept. But the UX and framing are entirely wrong and set the wrong expectations for users. The current prototype is very slow (5+ minutes for one-line code changes) and the generated code is often buggy or has nothing to do with the specification. It doesn’t help me understand the code. The generated plan is usually good. ChatGPT does a much better job in my head-to-head comparisons (assuming I already know exactly what code is relevant). I'm still optimistic about where it can go from here.

I recommend everyone sign up to try it and give the team feedback.

rgbrenner on 2024-04-29

Thanks for writing that up. It's what I suspected from my use of CoPilot.

I love Copilot as an autocomplete tool... but it frequently gets things wrong, and using the chat feature to ask it to complete some task usually just generates code that breaks things. So until that improves, I'm skeptical a workspace tool would work.

Workspace seems like an awesome idea though.. once the tech is further along.

davidbarker on 2024-04-29

Have you tried Cursor? (https://cursor.sh)

It's a fork of VS Code with some AI features sprinkled in. It writes around 80% of my code, these days.

It also has a few useful features:

- a chat interface where you can @-mention files, folders, and even documentation

- if you edit a line of code, it suggests edits around that line that are useful (e.g. you change a variable name and it will suggest updating the other uses, which you accept just by pressing Tab)

- as you're writing/editing code, it will suggest where your cursor might go next — press Tab and your cursor jumps there

cedws on 2024-04-29

Looks interesting, but I don't really want my code to go via some unknown company. As far as I can tell in "Privacy Mode" code still goes via their servers, they just promise not to store anything (with the caveat that OpenAI retain stuff for 30d).

davidbarker on 2024-04-29

They give you the option to use your own OpenAI/Anthropic/Azure API keys, but in all honesty, I don't know if they still gather information about your code even using your own API keys.

You could use something like Little Snitch (on Mac) to check if it makes any calls to their servers.

They also allow you to override the URL for the OpenAI models, so although I haven't tried, perhaps you can use local models on your own machine.

cedws on 2024-04-29

Unfortunately, it looks like code still goes via their servers:

https://cursor.sh/privacy

> Even if you use your API key, your requests will still go through our backend!

> That's where we do our final prompt building.

davidbarker on 2024-04-29

Ah, that's unfortunate.

elwell on 2024-04-29

> and is expressly designed to deliver–not replace–developer creativity

I'm not fooled by such corp speak.

strix_varius on 2024-04-29

If they could, they would!

ilaksh on 2024-04-29

I think these types of tools will become really effective when they stop trying to target every single programming language and platform ever created and focus on one specific version of an integrated platform.

Most businesses don't actually need software to be in whatever flavor of the month the dev team happens to be interested in. They just want to be able to customize their software.

By restricting the scope somewhat, you make it much more feasible to make sure the model has the training and/or knowledge ready for retrieval to fulfill tasks using a specific stack.

So I see this type of thing as quickly evolving into a tool for non-developers. And within a few years, these tools will cut into software engineering jobs. It will become part of the evolution of no-code and low-code.

jstummbillig on 2024-04-29

As long as it does the language that my business currently uses, I am all for that!

Jokes aside, I am not sure I buy the premise. I have read somewhere (citation required) that LLMs get better at other things after having learned how to code. And maybe they also get better at coding in one language after having learned a bunch.

cryptoz on 2024-04-29

This is quite similar to the idea and prototype I applied to YC with just last week. Uh-oh! Haha. I did call out GitHub as a primary competitor, so I did see this coming - and I guess on second look I'm still quite differentiated. All these new tools will overlap some, but each will also have its own pros/cons, since there are so many different ways to approach building an AI code tool!

Just yesterday I put up a cheap $5 Linode running a prototype, but it's not ready for signups yet.

https://codeplusequalsai.com/

I think I have some good ideas about how to get the LLM to modify code - specifically working with ASTs.
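
For example, here's a tiny sketch of that kind of AST-level edit using Python's stdlib ast module (the rename itself is just a toy illustration):

  import ast

  class RenameFunction(ast.NodeTransformer):
      # Toy transform: rename a function definition (call sites would need a similar visit_Call pass).
      def __init__(self, old, new):
          self.old, self.new = old, new

      def visit_FunctionDef(self, node):
          if node.name == self.old:
              node.name = self.new
          self.generic_visit(node)
          return node

  source = "def fetch_user(uid):\n    return db.get(uid)\n"
  tree = RenameFunction("fetch_user", "get_user").visit(ast.parse(source))
  print(ast.unparse(tree))  # prints the modified source (requires Python 3.9+)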

I wonder how GitHub prompts the LLM to get usable code modifications out of it. How often does it encounter errors?

sqeaky on 2024-04-29

Good luck! I hope to see many competitors in this space, because hegemony here would inhibit the software industry as a whole. Figure out your differentiators and figure out what Copilot sucks at. Succeeding here can be done.

Bjorkbat on 2024-04-29

You can see my earlier comment somewhere here, but I feel obliged to remind everyone that the claim that Github Copilot made developers "55% more productive" came from a study where they asked 100 developers to implement an HTTP server in Javascript, split the group up roughly 50/50, and gave one group Github Copilot. The Copilot group did it in an hour and 11 minutes, whereas the control group got it done in 2 hours and 41 minutes.

https://github.blog/2022-09-07-research-quantifying-github-c...

That's where the 55% number is coming from. It's coming from this experiment, and only this experiment.

So yeah, if you're wondering why you aren't somehow 50% more productive when using Github Copilot, it's probably because you're not implementing a simple task that's been done to death in countless tutorials.

mucle6 on 2024-04-29

Do you not have a good experience with github copilot? I recently started using it and it will write functions that are 90% right based on comments, and I just modify what I want.

I've felt at least a 2x speedup, maybe even 4x. That said, I'm working on a new project where I'm writing a lot of code. For making small changes I could see how it's much less valuable.

chasd00 on 2024-04-29

"I just modify what I want."

I think that's key: you use the LLM to get something started and then change/fix/enhance to get what you want. That works for me too, but the folks that want to prompt an LLM from nothing to a finished application are in for a rough ride.

echelon on 2024-04-29

I'm a fan of AI-empowered coding, but I'm not a fan of a single SaaS vendor dominating the entire software lifecycle. That's an incredibly dangerous proposition.

I'd love to see anyone but Microsoft push ahead, and ideally, lots of players.

daviding on 2024-04-29

One side effect of these tools that I don't think I like is the inevitable push for developers to just use the most popular languages. JavaScript and Python are the main languages LLMs are trained on, so it just self-reinforces that it is best to use those for everything. There is something a bit sad about that. I guess the dream is that they are just intermediary data anyway, like a chatty bytecode layer.

sqs on 2024-04-29

Cool idea! Agree on the end vision. The alpha is in figuring out how to get there and making it work really well along the way.

Also, why isn’t anybody connecting the Copilot Workspace announcement to Devin? Biggest company in the world announces a release of a competitor to the most widely seen dev product announcement of the year? Only saw one incidental mention of the connection.

Protostome on 2024-04-29

Curious to see how it goes. For 20 years I have been working with Vim and recently added the Copilot plugin, which is awesome. I can't, however, see myself using a different editor/IDE than good ol' vim. It's not that I haven't tried, but all those IDEs tend to be bloated and consume tons of memory.

idan on 2024-04-29

When video terminals first came out, everyone started out using line editors even though line editors make no sense when you can display an arbitrary buffer. It took a while until editors changed to be "screen native". But they did change, meaningfully.

When GUIs first came out, editors were just "terminal editors in a window". Took a while for the modern concept of an IDE to happen, with hovers, red squigglies, sidebars, jump to definition. All of that was possible on the first day of the GUI editor! But it took a while to figure out what everyone wanted it to be.

I think we're at a similar inflection point. Yeah, everyone today (myself included) is comfortable in the environment we know. VS Code is lovely. And AI (plus realtime multiplayer) is not a display technology. But I think it's a material technology shift in the same vein as those two moments in history. I would not bet that the next thirty years are going to continue to look like today's VS Code. I can't say what it WILL look like — we have to keep prototyping to find out.

Protostome on 2024-04-30

I mostly agree with all of your points. But IMHO, if (and that's a big if nowadays, with modern web and mobile dev) text editing is the core task of the app you're using as your IDE, then all the graphics are just redundant. All the highlighting, hovers, etc. can be done via much simpler graphics that are memory- and CPU-efficient. As a dev, I would like IDE developers to invest in core functionality rather than a fancy GUI.

Take an Electron IDE, for example. It embeds the Chrome runtime, which is a total waste given that I just want to edit some text files.

7thpower on 2024-04-29

I use Copilot in VS Code for work and it is great for autocomplete, but the chat experience is very poor compared to tools like Cursor. The inline edits are even worse.

Long story short, I don't have a lot of confidence in the product right now.

That being said, I am very optimistic on the product long term and I generally like the vision.

l5870uoo9y on 2024-04-29

I cancelled my subscription mainly because it made me neglect overall structure and instead autocomplete quick inline and repetitive solutions. Haven’t missed it.

ianbutler on 2024-05-01

A little late to the party here, but we (https://www.bismuthos.com) offer something already to scratch this itch. We provide a workspace to build Python backends. Chat on the left, code and visual editors on the right. However, we also handle deployments, data storage (we have a blob store), serving (we built a home grown function runtime) and logging. The experience is tightly integrated with the copilot and the idea is to get ideas off the ground as quickly as possible. We're early days and you can try it for free right now.

idan on 2024-04-29

Hello! GitHub Next here, happy to answer questions and unpack how we think about AI tools for developers (spoiler: it's less about codegen and more about helping with the rest of the dev cycle — building an understanding of how the system works, clearly specifying how it should change, etc)

roboyoshi on 2024-04-29

Interesting stuff! I've been trying ollama/gpt + continue.dev and copilot in VSCode for a bit now and the chat-style assistant is a huge plus. Especially in DevOps (my main work) the codegen is rather unhelpful, but the ability to let LLMs explain some sections and help in rubber-ducking is huge.

I see that a good output requires a good input (prompt). How does copilot workspace determine a good input for the prompt? I see that in the github repo there is already a bunch of "Tips and Tricks" to get better results. What is your experience so far? Should we change our way of creating issues (user-stories / bug-reports, change-requests) to a format that is better understood by AI/Copilot? (half-joking, half-serious).

idan on 2024-04-29

Well, that's basically the heart of Copilot Workspace! The whole UX is structured to make it easy for the human to steer.

- Alter the "current" bullet points in the spec to correct the AI's understanding of the system today

- Alter the "proposed" bullet points in the spec to correct the AI's understanding of what SHOULD be

- Alter files/bullets in the plan in order to correct the AI's understanding of how to go from current to proposed.

That said, I think there's definitely a future where we might want to explore how we nudge humans into better issue-writing habits! A well-specified issue is as important to other humans as it is to AI. And "well-specified" is not about "more", it's about clarity. Having the right level of detail, clear articulation of what success means, etc.

rjindael on 2024-04-30

If it is more about code planning, how much different is it than simply telling ChatGPT the overall structure of your code and asking it to give you a rudimentary plan on what to do next? Would it be able to actually execute steps of the plan by generating code, then creating PRs for it? I feel like this is a great tool for our team since my understanding of the announcement is that it is more or less functionally equivalent to hiring another programmer on your team (or, if not that, at the very least having a really useful assistant.) Kudos to the GitHub team and I have immensely enjoyed using Copilot thus far to increase my productivity :-)

ahnix on 2024-04-29

[dead]

happypumpkin on 2024-04-29

> At last count, Copilot had over 1.8 million paying individual and 50,000 enterprise customers.

> Copilot loses an average of $20 a month per user, according to a Wall Street Journal report, with some customers costing GitHub as much as $80 a month.

Presumably at some point they need to actually make money on this? That is a $432 million / yr loss just on individual users.
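
(For what it's worth, that figure is just the quoted numbers multiplied out; a quick sanity check:)

  individual_users = 1_800_000
  reported_loss_per_user_per_month = 20  # dollars, per the WSJ figure quoted above
  print(individual_users * reported_loss_per_user_per_month * 12)  # 432,000,000 per year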

Yenrabbit on 2024-04-29

Despite being widely repeated, they've publicly denied this claim a few times (e.g. https://x.com/natfriedman/status/1712140497127342404, https://x.com/simonw/status/1712165081327480892)

rgs224 on 2024-04-29

Was able to get around the waitlist: copilot-workspace.githubnext.com/<owner>/<repo>?task=<description>

throwup238 on 2024-04-29

Great job! Worked for me to create the plan, and it's now slowly generating the file changes for the plan.

jckwind on 2024-04-30

i think its patched :/

neuralnerd on 2024-05-01

Yup. Got patched :/

sqs on 2024-04-29

You can try out GitHub Copilot Workspace now in read-only mode:

https://copilot-workspace.githubnext.com/AnandChowdhary/anan...

nextworddev on 2024-04-29

RIP Devin

nojvek on 2024-04-29

I use GitHub Copilot, but 55% more productive is a bullshit number. Perhaps 1%, maybe. Most of Copilot's suggestions are either simple pattern matches or subtle hallucinations where I have to catch and fix silly bugs.

Github Chat is not very useful at understanding what the code is doing. Tried it once or twice and gave up.

The hype will help with Microsoft Stock though. Seems like bean counter management is taking over.

Bjorkbat on 2024-04-29

I was actually about to make a comment on that. They got that number from some dumb "experiment" they did where they told two groups of developers to implement an HTTP server in Javascript. The group with Copilot got the job done 55% faster (https://github.blog/2022-09-07-research-quantifying-github-c...)

So yeah, they did an experiment with <100 developers asking them to implement something that only took the control group 3 hours to finish from scratch, and from this we got the "55% more productive" statistic.

chasd00 on 2024-04-29

That's really disappointing and a little bit scary. That seems like a task tailor-made for an LLM: it's been done a million times before, follows an established protocol, and the implementations are all basically the same. Granted, a lot of code is like that, but a more realistic task may have been something like "create an online ordering system for a cake company that sells different kinds of cakes, has a sale every so often, and uses coupons printed in the local paper". That task is more ambiguous and better reflects what software devs are tasked with day to day, IMO.

rany_ on 2024-04-30

Is this sort of like a response to Devin? I'm having a hard time understanding what this is.

burntcaramel on 2024-04-29

Pretty sure I wouldn’t want to work on a team using this.

“Who wrote this code, it has a serious flaw?”

“The customer needed that feature quickly so we used Copilot. No sorry, I don’t know how it works, I trust that it does. I glanced over the code and it looked about right.”

“Did you read the checklist that it generated? The flaw stems from an assumption it made.”

“Oh no, I didn’t see that. To be honest I don’t read the checklists it makes.”

mucle6 on 2024-04-29

I wonder if we can use "copilot-ability" as a proxy for code base complexity.

In other words, if copilot can't help you write code, then could that mean your code base is too complicated for a beginner to understand and modify?

layer8 on 2024-04-29

That assumes that it would be reasonable to expect a beginner to understand the code base of any nontrivial application or system. I don’t think that’s a reasonable expectation, if only because there are countless requirements, assumptions, experiences made, priorities and design decisions that have to be known to understand the code.

It’s also a question of economy. You can always try to make a code base easier to understand and to document anything and everything, but it comes at a cost. It doesn’t help if a code base is in principle understandable by a beginner if they have to spend months reading documentation first, and someone had to spend months if not years to write it.

chasd00 on 2024-04-29

I hope not, but I bet you're right. I'm sure there are lots of people working on LLMs to generate a "cognitive complexity" score like some of the static code analyzers we have. It will be so rife with false positives as to be effectively worthless; however, the score is an easy thing to build a metric, report, and policy on... just like with the analyzers today.

kylecarbs on 2024-04-29

I was able to try it (without being approved) here: https://copilot-workspace.githubnext.com/

asadm on 2024-04-29

Hmm, doesn't work anymore

TriNetra on 2024-04-30

Can we use GPT to convert, say, Xamarin Android app code into React Native? Has anyone tried a similar conversion successfully?

mlhpdx on 2024-04-29

I’m letting it sink in that GitHub thinks it would be a win if 10%+ of the world population is involved in creating software.

I have no hope of hiring an electrician in that world.

idan on 2024-04-29

You have succeeded at hiring an electrician in THIS world? What's their number? Do they actually show up when they say they will?!?!?!!1!one

iknownthing on 2024-04-29

I noticed there were a lot of YC companies trying to do similar things. I'm not sure how they think they have a shot against Copilot.

jaredcwhite on 2024-04-29

So GitHub has outright declared war on competence in programming. Great, just great. The code examples in this blog post are horrible…I'd be embarrassed to share any such demonstration of generated code offering these solutions. Reject that PR in production, for the love of god!

Funny that I already started migrating my personal projects off of GitHub and onto Codeberg over the weekend. I'll be rapidly accelerating that process after today. Copilot has gone from vaguely irritating (and of course I refuse to use it on ethical grounds) to an overall direction for the company I *intensely* disagree with.

Lucasoato on 2024-04-29

Am I the only one disappointed by GitHub Copilot quality lately? I get much better responses by copying and pasting into ChatGPT than from the suggestions in the VSCode plugin.

smrtinsert on 2024-04-29

I'm pretty sure they toggle between GPT-3 and GPT-4 depending on how much traffic they're getting, in order to make sure they can keep up with all the suggestions.

frontalier on 2024-04-29

the machines were not supposed to get tired

hackermatic on 2024-04-29

People have been anecdotally reporting and investigating similar problems since at least last year[0], and it's entirely possible that changes to improve one aspect of a model could make it much worse at other aspects without careful regression testing and gradual rollout. I think models intended to solve every problem make it very hard to guarantee they can solve any particular problem reliably over time.

Imagine if a million developers simultaneously got much worse at their jobs!

[0] https://arstechnica.com/information-technology/2023/07/is-ch...

mistrial9 on 2024-04-29

Github copilot trained on GPL-licensed code?

Brad Smith you continue to outdo yourself

naikrovek on 2024-04-29

Do you see that stated somewhere or are you just trying to stir people up?

mistrial9 on 2024-04-29

can we review all current lawsuits and their discovery status? pointers welcome

benzguo on 2024-04-30

it's so over for Devin

LeicaLatte on 2024-05-04

I am still waiting on copilot 2. Instead we got another product, another idiom.

vegancap on 2024-04-29

Jesus christ, I don't know whether to be excited or terrified by some of these new AI releases...

joandaniel497 on 2024-05-01

[dead]