This is a sign that the user hasn't taken the time to set up their tools. You should be able to type log and have it tab complete because your editor should be aware of the context you're in. You don't need a fuzzy problem solver to solve non-fuzzy problems.
Then it's a bad case of using the LLM hammer and treating everything as a nail. If you're truly using transformer inference to auto-fill variables when your LSP could do it with orders of magnitude less power and a 100% success rate (it has parsed the source tree and knows exactly which variables exist), I'd argue the LSP is the better tool.
Of course LLMs can do a lot more than variable autocomplete. But all of the examples given are things that are removing cognitive overhead that probably won't exist after a little practice doing it yourself.
This. Set up your dev env and pay attention to details and get it right. Introducing probabilistic codegen before doing that is asking for trouble before you even really get started accruing tech debt.
You say "probabilistic" as if it's some kind of gotcha. The binary rigidity is merely an illusion that computers put up; at every layer, probabilistic events are going on.
- Your hot path functions get optimized, probabilistically
- Your requests to a webserver are probabilistic, and most systems have retries built in.
- Heck, 1s and 0s operate in a range, with error bars built in. It isn't really 5V = 1 and 0V = 0.
Just because YOU don't deal with probabilistic events while programming in Rust or Python doesn't mean it is inherently bad. Embrace it.
We're comparing this to an LSP or Intellisense-style system; how exactly are those probabilistic? Maybe they crash or leak memory every once in a while, but that's true of any software, including an inference engine. I'm much more worried that I can't guarantee that if I type half of a variable name, it'll know exactly what I'm trying to type. It would be like preparing to delete a line in vim and having it predict you want to delete the next three. Even if it's right 90% of the time, you have to verify its output. That's nothing like a compiler or spurious network errors, etc. (which still exist even with another layer of LLM on top).
> Just because YOU dont deal with probabilistic events while programming in ...
Runtime events such as those you enumerate are unrelated to the "probabilistic codegen" the GP references: "codegen" is short for "code generation" and in this context refers to an implementation-time activity.
The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux. Those only exist in real-time operating systems.
> The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux. Those only exist in real-time operating systems.
Again, the post to which you originally replied was about code generation when authoring solution source code.
This has nothing to do with Linux, Linux process scheduling, RTOS[0], or any other runtime concern, be it operating system or otherwise.
Apples and oranges.
It's frankly nonsense to tell me to "embrace it" as a strawman rebuttal about a broader concept I never said was inherently bad, or even avoidable. I was talking specifically about non-deterministic code generation during the implementation/authoring phase.
> This. Set up your dev env and pay attention to details and get it right. Introducing function declarations before knowing what assembly instructions you need to generate is asking for trouble before you even really get started accruing tech debt.
Old heads cling to their tools and yell at kids walking on lawns, completely unaware that the world already changed right under their noses.
We know the "world has changed": that's why we're yelling. The Luddites yelled when factories started churning out cheap fabric that'd barely last 10 years, turning what was once a purchase into a subscription. The villagers of Capel Celyn yelled when their homes were flooded to provide a reservoir for the Liverpool Corporation – a reservoir used for drinking water, in which human corpses lie.
This change is good for some people, but it isn't good for us – and I suspect the problems we're raising the alarm about also affect you.
Honestly, I've used a fully set up Neovim for the past few years, and I recently tried Zed and its "edit prediction," which predicts what you're going to modify next. I was surprised by how nice that felt — instead of remembering the correct keys to surround a word or line with quotes, I could just type either quotation mark, and the edit prediction would instantly suggest that I could press Tab to jump to the location for the other quote and add it. And not only for surrounding quotes, it worked with everything similar with the same keys and workflow.
I still prefer my Neovim, but it really made me realize how much cognitive load all the keyboard shortcuts and other features add, even if they feel like muscle memory at this point.
> Then it's a real bad case of using the LLM hammer thinking everything is a nail. If you're truly using transformer inference to auto fill variables when your LSP could do that with orders of magnitude less power usage, 100% success rate (given it's parsed the source tree and knows exactly what variables exist, etc), I'd argue that that tool is better.
I think you're clinging to low-level thinking, whereas today you have tools at your disposal that let you easily focus on higher-level details while eliminating repetitive work, say, the shotgun surgery of adding individual log statements to a chain of function calls.
> Of course LLMs can do a lot more than variable autocomplete.
Yes, they can.
Managing log calls is just one of many applications. LLMs are a tool you can use in many, many contexts, and they're faster and more efficient than LSPs at accomplishing higher-level tasks such as "add logs to this method/these methods in this class/module". Why would anyone avoid using something that is just there?
I have seen people suggesting that it's OK that our codebase doesn't support deterministically auto-adding the import statement of a newly-referenced class "because AI can predict it".
I mean, sure, yes, it can. But drastically less efficiently, and with the possibility of errors. Where the problem is easily soluble, why not pick the solution that's just... right?
Some tools are better than others at specific things. AI as commonly understood today is better at fuzzy problems than many other tools. In the case of programming, where you can tab-complete your way through symbols, you'll benefit greatly from tools that can precisely parse the AST and understand schemas. There is no guesswork when you can be exact. Using AI assistants for simple tab completion only opens the door to a class of mistakes we've been able to avoid for years.
I don't want to wade into the debate here, but by "their tools" GP probably meant their existing tools (i.e. before adding a new tool), and by "a fuzzy problem solver" was referring to an "AI model".
Tab complete what? No LSP will complete context-appropriate message and parameters without writing any of it. "What did the user want to log here" is inherently a fuzzy problem.
Most of the time, yes. Since you refer to this as a chatbot, I'm guessing you don't have much experience with things like cursor completion; give it a go before getting too negative.
> This is a sign that the user hasn't taken the time to set up their tools.
You are commenting on a blog post about how a user set up his tools. It's just that it's not your tool being showcased.
> You should be able to type log and have it tab complete because your editor should be aware of the context you're in.
...or, hear me out, you don't have to. Think about it: if you have a tool where you type "add logs" and it's aware of best practices, context, and your own internal usage... I mean, why are you bothering with typing "log" at all?
To be perfectly fair, saying "it's aware of the best practices, context and internal usage" is very misleading. It's aware of none of those (as it is not "aware" of anything), and that is perfectly clear when it produces nonsensical results. Often the results are fine, but I see nonsensical results often enough in my more LLM-dependent coworkers' PRs.
I'm not saying not to use them, but putting it like that is dishonest and doesn't represent the actual reality. It doesn't serve anyone but the vendors to shill for LLMs.
This isn't a language thing, it's a project thing. Language things I can do fluently (like the example of a for loop in the OP comment... lol). But I work on so many different projects that it's impossible to keep this kind of dependency context fresh in my head. And I think that's fine? I'm more than happy to delegate that kind of stuff.
I find there is a limit to the number of programming languages I can stay actively proficient in at any given time.
I am using a much wider range of languages now that I have LLM assistance, because I am no longer incentivized to stick to a small number that are warm in my mental cache.
Really? At some point syntax became kind of a vague sense of color on top of the data flow, which is ultimately the same in any language. I don't even recall what it means to be proficient in one language versus another—surely most career programmers can ramp up on any given syntax or runtime in a relatively short period of time. The hard part is laying out the data flow.
Granted, AI can definitely ease that ramp-up, though arguably at the cost of stretching out how long it takes to become truly proficient.
You still get to be proficient in an environment. I've got around 10 projects open in different Cursor windows. Logging in each of them is one or more of: logger.info, log.info, echo, eventLog.WriteEntry, console.log, syslog, printf, active_span&.set_tag, puts, rollbar.info, ... (and more)
It's not the ramp-up time. There's no problem with learning yet another one. The problem is remembering them all as you switch between projects. Most of the time the LLM will know exactly what to use, how, and what data I want to log, which takes way less time than me rediscovering how a specific project I haven't seen in weeks does things.
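To make that concrete: even inside one language, the convention is per-project. A minimal Python sketch using only the stdlib (the project names are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)

# Project A binds the module logger to `logger`...
logger = logging.getLogger("project_a")
# ...while Project B binds it to `log`. Same stdlib, different
# muscle memory when you hop between the two codebases.
log = logging.getLogger("project_b")

logger.info("user %s logged in", "alice")  # Project A style
log.info("user %s logged in", "alice")     # Project B style
```

Multiply that by ten projects and a handful of languages and the "just remember it" advice stops scaling.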
Yeah - a lot of these complaints feel like what I heard very early in my career about how you shouldn't learn Python. The C learning I did was still useful, but I appreciate not artisanally crafting all my memory management by hand so that I can ship something I've created faster.
I definitely would never use Python in production. But it remains an amazing tool for prototyping and writing quick dirty scripts.
I can reasonably expect Python to be installed on every Linux system, the debugging experience is amazing (e.g. runtime evaluation of anything), there's a vast amount of libraries and bindings available, the amount of documentation is huge and it's probably the language LLMs know best.
If there were two languages I would suggest anyone to start with, it would be C and Python. One gives a comprehensive overview of low-level stuff, the other gives you actual power to do stuff. From there on you can get fancy and upgrade to more advanced languages.
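On the debugging point above, here's a minimal sketch of what "runtime evaluation of anything" buys you with just the stdlib (`mystery` is an invented example):

```python
# The stdlib debugger lets you pause a running program and evaluate
# arbitrary expressions against live state, with no extra tooling.
def mystery(x):
    y = x * 2
    # breakpoint()  # uncomment to drop into pdb here and inspect `y`
    return y + 1

print(mystery(20))  # prints 41
```

With the `breakpoint()` line enabled, you get an interactive prompt where `p y`, or any Python expression, is evaluated against the live frame.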
Oh, 100%. There's definitely a trade-off, and it's important to know it. But there was definitely a cultural disdain and judgment of "why are you using Python to create a simple side project - C is superior".
It's still important to know both, and especially when I began working on aspects like multithreading I found my basis in C helped me learn far more easily, but I'm definitely more supportive of the ship-it mindset.
It's better to have a bad side project online than none; you learn far more creating things than never making things, and if you need LLMs and Python to do that, fine!
I think it depends on how you approach these tools. Personally, I still focus on learning general, repeatable concepts from LLMs, as I'm an idiot who needs different terms repeated 50 times in similar ways to understand them properly!
Programmers love to pretend their crap needs to handle FAANG loads to give them an excuse to overengineer :)
In many many cases Python will be perfectly fine until you hit absolutely massive loads, and even then you can optimise the hot path with something else while keeping the rest as is.
If you remove all the tiny details that you detest because you think you should better spend your time on the “important stuff”, be careful; you may wake up one day and not care enough about anything because you have been discarding stuff bit by bit.
I like building stuff - I mean like construction, renovations. I like figuring out how I need to frame something, what order, what lengths and angles to cut. Obviously I like making something useful, but the mechanics are fun too.
I agree. This is like asking a carpenter "do you like sawing a plank of wood or do you like designing and creating a finished piece of work?" The answer is "both"—the low-level details may look tedious to those who don't have a true love for the work, but it's just a context switch to another deeply enjoyable aspect of the work as a whole.
> Is that the part of programming that you enjoy? Remembering logger vs logging?
No, but I genuinely like writing informative logs. I have been in production support roles, and boy does the lack of good logging (or barely any logs at all!) suck. I prefer print-style debugging and want my colleagues on the support side to have the same level of convenience.
Not to mention the advantages of being able to search through past logs for troubleshooting and analysis.
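For what it's worth, here's the kind of thing I mean by an informative log: one line carrying every identifier support would need to isolate the event. A Python sketch (the payment function and field names are made up for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def charge_card(order_id: str, user_id: str, amount_cents: int) -> None:
    # Hypothetical payment step: the message carries the IDs needed
    # to grep aggregated logs and correlate this event with others.
    logger.info(
        "charging card: order_id=%s user_id=%s amount_cents=%d",
        order_id, user_id, amount_cents,
    )

charge_card("ord-42", "usr-7", 1999)
```

A bare "charging card" with no identifiers is what turns a five-minute support ticket into an afternoon.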
Enjoying any part of this seems a little odd to me. The enjoyable part is using the thing you built. Regardless, programming (remembering logger vs logging, syntax, debugging) is certainly the easier end of things.
> Is that the part of programming that you enjoy? Remembering logger vs logging?
If a person cannot remember what to use in order to define their desired solution logic ("how do I make a log statement again?"), then they are unqualified to implement it.
> But in the end, focus on the parts you love.
Speaking only for myself, I love working with people who understand what they are doing when they do it.
It's not unreasonable to briefly forget details like that, especially when you're dealing with a multi-language codebase where "how do I make a log statement?" requires a different pattern in each one.
> It's not unreasonable to briefly forget details like that, especially when you're dealing with a multi-language codebase where "how do I make a log statement?" requires a different pattern in each one.
You make my point for me.
When I wrote:
... I love working with people who understand what they
are doing when they do it.
This is not a judgement about coworker ability, skill, or integrity. It is instead a desire to work with people who ensure they have a reasonable understanding of what they are about to introduce into a system. This includes coworkers who reach out to team members in order to achieve said understanding.
I actually take pride in the logs I write because I write good ones with exactly the necessary context to efficiently isolate and solve problems. I derive a little bit of satisfaction from closing bugs faster than my colleagues who write poor logs.
Is that the part of programming that you enjoy? Remembering logger vs logging?
For me, I enjoyed the technical challenges, the design, solving customer problems, all of that.
But in the end, focus on the parts you love.