It makes me feel very secure in my job that so many engineers ITT are downplaying the ability and productivity of AI coding tools. You can pry Cursor out of my cold, dead hands. If you aren't seeing a 10x boost, you either haven't tried it lately or don't have the experience to prompt well.
What it excels at:
- Boilerplate code that's been written 1000x, which saps your time and enthusiasm for the meaty problems beyond it.
- Complex DSA work. It has been demonstrated millions of times in training material.
- Simple and tedious tasks like making dummy data for tests and struct literals (a quick sketch of this follows the list).
- Tightly scoped refactors.
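To make the dummy-data point concrete, here's a minimal Go sketch of the kind of struct-literal fixtures that are trivial but tedious to write by hand. The User/Order types are hypothetical, just stand-ins for whatever your domain model looks like:

```go
package fixtures

import "time"

// User and Order are hypothetical domain types for illustration.
type User struct {
	ID        int
	Name      string
	Email     string
	CreatedAt time.Time
}

type Order struct {
	ID     int
	UserID int
	Total  float64
}

// TestUsers and TestOrders are the sort of struct-literal fixture
// data an AI assistant will happily generate dozens of rows of.
var TestUsers = []User{
	{ID: 1, Name: "Alice Example", Email: "alice@example.com",
		CreatedAt: time.Date(2024, 1, 2, 0, 0, 0, 0, time.UTC)},
	{ID: 2, Name: "Bob Example", Email: "bob@example.com",
		CreatedAt: time.Date(2024, 2, 3, 0, 0, 0, 0, time.UTC)},
}

var TestOrders = []Order{
	{ID: 100, UserID: 1, Total: 42.50},
	{ID: 101, UserID: 2, Total: 9.99},
}
```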
Where does it falter?
- Mapping your product/business to the code or abstractions needed. I think this is where junior devs struggle to leverage it.
- Doing large-scale multi-file refactors without proper specifics, guidance, and context. It also can't write a huge project from scratch. Humans are still needed to fit all the pieces together or provide guidance. I think this gap closes soon.
Code quality simply isn't a problem IME. If it didn't one-shot your dream abstraction, you probably weren't specific enough in the prompt. Most human-written code is also junk, so pointing out minor gaffes isn't really a dunk on AI. It's still a massive productivity booster if wielded by even a half-competent engineer.
The things you mentioned it does well on are things that help you avoid tedium, but I don't think that's what's most important to businesses. The things you mentioned it does poorly at are the things that matter most.
To pile on: if a large part of our job is purely mechanical, then there is a bigger problem with our engineering processes and AI can't fix that.
> if a large part of our job is purely mechanical, then there is a bigger problem with our engineering processes and AI can't fix that.
It is! And AI is fixing precisely that. What businesses actually care about (well, 99% of them where code is written) is shipping fast and solving the immediate problem, NOT code quality and craft. That goes against what I want to believe as an engineer. Most problems are not new, they are not hard, they are not sensitive. You do need to start with a good understanding of the business need, but it's not that the AI can't code to that. I will often stub out an abstraction, explain inputs/outputs in detail, provide sample data, etc. That's all it takes. There are frighteningly few showstopper problems with AI coding at this point, and it's moving quickly.
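A minimal sketch of what that stub-and-prompt workflow can look like, using a hypothetical invoice-parsing abstraction in Go (all names here are made up for illustration, not from any real codebase):

```go
package billing

// ParsedInvoice is the output shape I want back. Spelling out the
// types, plus a sample input, is usually all the context the model needs.
type ParsedInvoice struct {
	VendorName string
	TotalCents int64
	LineItems  []LineItem
}

type LineItem struct {
	Description string
	AmountCents int64
}

// InvoiceParser is the abstraction I stub out before prompting.
// Input: raw invoice text, e.g. "ACME Corp\nWidgets x3 $42.00\nTotal: $42.00"
// Output: a ParsedInvoice, or an error if no total line is found.
type InvoiceParser interface {
	Parse(raw string) (ParsedInvoice, error)
}
```

Hand the model that stub, a couple of sample inputs, and the edge cases you care about, and it will usually fill in a workable implementation.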
We're not at the point where AI turns non-engineers into capable engineers, but if you're an engineer not using AI extensively, you're being lapped.
I don't think AI is really fixing business problems, though. I think it's only fixing developer problems. And unfortunately nobody really cares about that except for developers.
I just find it sad that instead of focusing on improving how we build things and reducing the need for so much mindless, tedious, repetitious, mechanical work, we're content to just build bad things faster with AI and call it a win.
> I just find it sad that instead of focusing on improving how we build things and reducing the need for so much mindless, tedious, repetitious, mechanical work, we're content to just build bad things faster with AI and call it a win.
The AI is doing precisely that: reducing the mindless, tedious, repetitious, mechanical work. And what "vibe coding" wants you to embrace is treating high-level code as if it were compiled assembly: an implementation detail you never want to look at or care about if you can help it.
Yes, in some sense AI isn't fixing anything, because all that "mindless, tedious, repetitious, mechanical" code still exists; it's just autogenerated. I too wish we could've first eliminated the need for it entirely. But we didn't, because most programmers and the industry at large still don't understand where the problem is in the first place. They can't see that we long ago reached the Pareto frontier of our programming languages, that we're being limited by the default paradigm of working directly on a plaintext codebase that's the single source of truth.
So yeah, in this sense, LLMs aren't fixing anything - they're just an abstraction layer on top of our exhausted coding paradigm.
> shipping fast and solving the immediate problem, NOT code quality and craft
This is also what puts many companies out of business and creates huge security issues. If AI is not fixing this but making it worse, then it's not improving software engineering.
> This is also what puts many companies out of business
Those companies you mention just overdid it. Like with everything else on the market, there's a limit to how much value/quality you can optimize away before the end result stops being fit for purpose. But the existence of this limit doesn't stop companies from racing to the very edge of it.
> and creates huge security issues.
Security is mostly a solved problem.
Yes, it truly is - at least from the business point of view.
Nobody except attackers and infosec people cares about the mathematical and technical details, or whether your stack or coding practice is secure enough. Not the customers, as they neither understand any of this, nor could do anything about it even if they did. Not the companies, since they manage it at a higher level of abstraction. Whatever holes and vulnerabilities AI coding introduces, the industry will account for them. Some headlines will be made, some stocks will move, and nothing will change.
FWIW, I don't like either of these things. I'm an engineer in my heart, so it pains me to be constantly reminded that our work is merely a means to an end, and matters only to the extent it can't be substituted by some alternative.
If you make the user "go to the AI" and activate it somehow, you did it wrong.
A good AI/LLM experience is one which meets you where you are already working. GitHub Copilot is FAR from the best LLM at coding, but it is amazingly useful in the way it integrates with my workflow.
On the other hand, Apple's Image Playground, Genmoji, and Writing Tools all make you visit the AI feature to use it. Same problem with ChatGPT: it's absolutely painful to visit a web UI and copy/paste information in and out of the thing to get anything done.
I actually do get a little value from Apple's notification group summaries. AI/LLMs need to be blended into existing places. Bring the AI to the user, don't make the user go to the AI.
To put it another way: AI is not a feature. It can make a feature better. Somehow everyone wants it to be a standalone feature. It is not.
What's so good about fresh air? Like, I don't want stinky, stuffy air, but as someone with central HVAC I've had no issues with my indoor air. Are we trying to get outdoor smells? Or is it something else?
High CO2 levels impair cognition, and stale air accumulates pathogens, not just smells. The V in HVAC stands for ventilation, so you're already getting fresh air; that's probably why you have no complaints. If you live in an airtight apartment with no forced circulation, where CO2 levels spike fast enough to require airing out the room several times a day, it's a different story.
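For a rough sense of how fast that spike happens, here's a back-of-the-envelope sketch. The numbers are ballpark assumptions (one resting adult exhaling ~20 L of CO2 per hour in a sealed ~30 m³ room), not measurements:

```go
package main

import "fmt"

func main() {
	const (
		roomVolumeL   = 30_000.0 // ~30 m³ room, assumed sealed (no ventilation)
		co2ExhaledLph = 20.0     // ~20 L CO2/hour per resting adult (ballpark)
		outdoorPPM    = 420.0    // typical outdoor baseline
	)
	// Each hour, the exhaled CO2 adds co2ExhaledLph/roomVolumeL of
	// concentration, i.e. roughly 667 ppm per hour in this scenario.
	risePerHourPPM := co2ExhaledLph / roomVolumeL * 1_000_000

	for h := 1; h <= 4; h++ {
		fmt.Printf("after %d h: ~%.0f ppm\n", h, outdoorPPM+float64(h)*risePerHourPPM)
	}
}
```

With those assumptions you blow past the ~1000 ppm level often cited for noticeable staleness within the first hour, which is why small sealed rooms need airing out several times a day.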
In the winter it's cold outside and opening the window cools down the room -> no ventilation most of the time.
In the summer it's not a problem for me, since I leave my windows partially open all the time, but in the winter, especially when working from home, this would be quite neat. Also, I live in a small town in Germany, so the air quality here is good compared to what many of the city folks here deal with.
Why does SOTA matter here? OpenAI has so far failed to make something useful and mass-market. After all these years of pushing SOTA, it's still a website with a text box. Siri in its current form is more useful to me than ChatGPT 4. You simply don't need SOTA models for a lot of valuable features; you need context, annotation APIs, and the right platform and integrations to assist users where they already are (on their phones, in cars, in their IDE, etc.). Microsoft's GitHub Copilot is a good example of getting it right.
Apple relegated ChatGPT to a third-tier AI capability for "google answers" and "write me a poem" party tricks, and rightfully put it behind a privacy disclaimer. It looks barely better than the various ChatGPT shortcuts so many people have cobbled together. That part of the announcement stuck out like a sore thumb and looked to me like a huge L for OpenAI. The "partnership" was a nothingburger, like someone at Apple agreed to it early on but realized late in the project that it wasn't needed at all. So much for SOTA.
I mostly agree with you. OpenAI is reportedly still doing great in terms of revenue, but Apple's implementation is magical if it performs as shown in the keynote. In my opinion, it's the best implementation of LLM/AI in a consumer device. It's amusing to think back to all the buzz around Humane's AI pin.
> It's amusing to think back to all the buzz around Humane's AI pin
It certainly is. And the Rabbit R1. How in the world did people who supposedly know anything about AI think they could make that work as a standalone device detached from rich context? The sad thing is, rich context may not be possible outside of gatekept OS-level integration, but I still think they're idiots for trying.
Gatekept OS-level integration is one thing; companies that won't push data to it are another. I fully expect Facebook/Messenger not to support intents/Siri/Shortcuts/etc.
Same for Discord and any other social media platform right now.
Thank god. I think 1pw has been mostly good, but it has frustrating quirks... like requiring me to input the master password in the iOS app/OSX app/browser extension (on the same device), as if each of these apps has no way of communicating with the others.
I constantly have issues with it not engaging on a form, so I have to manually switch to 1pw, though it has gotten a bit better over the years.
I hate to see a company/product get sherlocked, but I don't feel like password security is something we should need a subscription for.
The AI/cartoony person being sent as a birthday wish was super cringey, like something my boomer father would send me. I'm a fan of Genmoji; that looks fun. Less a fan of generated clip art and "images for the sake of having an image here", and way, way less into this "here, I made a cornball image of you from other images of you that I have" feature. It's as lame as Animoji but as creepy as deepfakes.
LOL, you haven't been in group chats with idiot drunk friends, apparently. Shit like that kills. I had a friend who hates iPhones; I sent a dozen Bing AI images of him as a cartoon doing... things... to the phone... The entire chat was dying for days.
I was surprised by how little they are leaning on OpenAI. Most of the impressive integrations that actually look useful are on-device or in their private cloud. OpenAI's ChatGPT was relegated to a corner of Siri for answering "google queries", and only if you grant it permission. This seems like an L for OpenAI, not being a bigger part of the architecture (and I'm glad).
Agreed. The rumors beforehand made it sound like Apple and OpenAI would practically be merging. This felt like a fig leaf so Apple could say you can access SOTA models from your iPhone. But for me personally, the deep integration with the ecosystem + semantic index is way, way more interesting.
The OpenAI/ChatGPT part of this looks pretty useless, similar to what some shortcuts like "hey data" already do. I was shocked, and relieved, that Apple isn't relying on their APIs more. Seems like a big L for OpenAI.
Pretty interesting to see the difference in the grain. I do notice the softness of the wall studs when I hang anything in my newer place. Still, "they don't build 'em like they used to" is a bit... wrong. NIMBYs, or "old heads" as we call 'em, badmouth new builds for their quality. I've lived in extremely well-built 18th-century homes, and now one that is less than a decade old, and I greatly, greatly prefer the latter.
Newer features I appreciate are engineered joists (stronger, with less floor creaking and noise transmission), doors and double-pane windows that seal out air and noise, good insulation, central HVAC, PEX plumbing, neutral wiring, and the fact that if I need to fix or replace anything, the part can be easily ordered if not found at a hardware store.
Yeah, maybe some of these materials are not as "sturdy" to the touch, and maybe they have a shorter lifespan, but I am positive they work better and are cheaper/easier to maintain.
> Yeah, maybe some of these materials are not as "sturdy" to the touch, and maybe they have a shorter lifespan, but I am positive they work better and are cheaper/easier to maintain.
They are cheaper, they are easier to replace, and at the moment (wood nerd hat on) we genuinely do not know if they have a shorter lifespan. But it's very likely that they do not, in part because of the construction around them. These concerns are not, in my experience, something that you really hear people who spend much time around structural work bring up. It's mostly a "back in the day" thing.
For furniture, on the other hand, things are different. But dimensional lumber as practiced today is a modern miracle.
I'm definitely no wood nerd, but yeah, I wouldn't intuit that a softer, new-growth wood would just disintegrate in 100 years' time, or much more, for that matter.
As for the material around it, which you allude to: we now have engineered, weather-resistant sheathing, house wrap, and vinyl or metal siding, or some such very weather-proof stuff.
Most of the issues I have had with "old growth" wood in my 17th-century homes were just due to water penetrating where it should not, rotting that wood, and then causing me a headache trying to replace it with a similar material.
That’s wildly different from my experience. Never lost the pieces or had to “track” them and the charge lasts for longer than I can comfortably listen to something without a short break.
Being tethered to a phone or computer with a wire that pulls, catches, and makes rubbing noise through your ears is truly an awful relic of the past.