The mandate/goal went pretty far up the chain, too. Windows got moved from being under Azure to under "CoreAI" in the org structure. Incentive structures usually reflect org structure. In this case the fingers can point pretty far up on why incentives shifted the way that they did.
> People outsourcing thinking and entire skillset to it - they usually have very little clue in the topic, are interested only in results, and are not interested in knowing more about the topic or honing their skills in the topic
And this may be fine in certain cases.
I'm learning German and my listening comprehension is marginal. I took a practice test and one of the exercises was listening to 15-30 seconds of audio followed by questions. I did terribly, but it seemed like a good way to practice. I used Claude Code to create a small app to generate short audio (via ElevenLabs) dialogs and set of questions. I ran the results by my German teacher and he was impressed.
I'm aware of the limitations: Sometimes the audio isn't great (it tends to mess up phone numbers), it can only be a small part of my work learning German, etc.
The key part: I could have coded it, but I have other more important projects. I don't care that I didn't learn about the code. What I care about is that I'm improving my German.
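To make the workflow concrete, here's a hypothetical sketch of the kind of tool described: send a short German dialog to the ElevenLabs text-to-speech endpoint, save the audio, and pair it with comprehension questions. The voice ID, the dialog, and the questions are all made-up placeholders, and this is not the commenter's actual app.

```python
# Sketch: generate a listening-comprehension exercise (dialog audio + questions).
# Assumes the ElevenLabs v1 text-to-speech REST endpoint; voice ID and content
# below are illustrative placeholders, not real values.
import json
import os
import urllib.request

ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_exercise(dialog_lines, questions):
    """Bundle a short dialog and its follow-up questions into one record."""
    return {
        "text": "\n".join(dialog_lines),   # text to be synthesized
        "questions": questions,            # asked after listening
    }

def synthesize(exercise, voice_id, api_key, out_path="exercise.mp3"):
    """POST the dialog text to ElevenLabs and save the returned audio bytes."""
    req = urllib.request.Request(
        ELEVENLABS_TTS_URL.format(voice_id=voice_id),
        data=json.dumps({"text": exercise["text"]}).encode("utf-8"),
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())

exercise = build_exercise(
    ["A: Wann fährt der Zug nach Berlin?", "B: Um halb neun, Gleis drei."],
    ["Wohin fährt der Zug?", "Um wie viel Uhr fährt er ab?"],
)
# Only hit the network if a key is actually configured.
if os.environ.get("ELEVENLABS_API_KEY"):
    synthesize(exercise, "placeholder-voice-id", os.environ["ELEVENLABS_API_KEY"])
```

The point isn't the specifics; it's that 15-30 second exercises like this can be generated on demand instead of being limited to whatever a practice book ships with.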
Seems like you are part of the first group then, not the second. The fact that you are interested in learning and are using it as a tool disqualifies you from being someone who has little clue and just wants to get something out (i.e. just spit out code).
As I reread the original post, I'm actually not sure which group I fall into. I think there's a bunch of overlap depending on perspective/how you read it:
> Group 1: intern/boring task executor
Yup, that makes sense I'm in group 1.
> Group 2: "outsourcing thinking and entire skillset to it - they usually have very little clue in the topic, are interested only in results"
Also me (in this case), as I'm outsourcing the software development part and just want the final app.
Soo... I've probably thought too much about the originally proposed groups. I'm not sure they are as clear as the original suggests.
False dichotomy is one of the original sins. The two groups as advertised aren't all that's out there. Most people are interested in results. How we get those results is part of the journey of getting results, and sometimes it's about the journey, not the destination. I care very much about the results of my biopsy or my flight, but I don't know much about how we get there; I want to know if I have cancer, and that my plane didn't crash. I hope that doesn't put me on the B ark that gets sent into the sun.
I'd say you're still in group 1. Your main goal is not the app but learning German. Therefore creating the app using AI is only a means to an end, a tool, and spending time coding it yourself is not important in this context.
The AI usage was not about learning German but about creating an app. That would be group 2. He may use the tool he made to learn German, but using that tool isn't using AI.
They could admittedly be more defined, but I think the original commenter missed a key word. It really boils down to whether or not you are offloading your critical thinking.
The word "thinking" can be a bit nebulous in these conversations, and critical thinking perhaps even more ambiguously defined, so before we discuss that, we need to define it. I go with the Merriam-Webster definition: the act or practice of thinking critically (as by applying reason and questioning assumptions) in order to solve problems, evaluate information, discern biases, etc.
LLMs seem to be able to mimic this, particularly to those who have no clue what it means when we call an LLM a "stochastic parrot" or some equally esoteric term. At first I was baffled that anyone really thought that LLMs could somehow apply reason or discern their own biases, but I had to take a step back and look at how that public perception was shaped to see what these people were seeing. LLMs, generative AI, ML, etc. are all extremely complex things. Couple that with the pervasive notion that thinking is hard and you have a massive pool of consumers who are only too happy to offload some of that thinking onto something they may not fully understand but were promised would do what they wanted, which is make their daily lives a bit easier.
We always get snagged by things that promise us convenience or offer to help us do less work. It's pretty human to desire both of those things, but it's proving to be an Achilles' heel for many. How we characterize AI determines our expectations of it; so do you think of it as a bag of tools you can use to complete tasks? Or is it the whole factory assembly line where you can push a few buttons and a pseudo-finished product comes out the other side?
More recently, I'm using Claude Code to handle my inventory management by having it act as an analyst while coding its own tools to access my Amazon Seller accounts and retrieve the necessary info: https://theautomatedoperator.substack.com/p/trading-my-vibe-...
They only run locally on my machine, and they use properly scoped API credentials. Is there some theoretical risk that someone could get their hands on my Gemini API key? Probably, but it'd be very tough and not a particularly compelling prize, so I'm not altogether too concerned here.
On the verification front, a few examples:
1. I built an app that generates listing images and whitebox photos for my products. Results there are verifiable for obvious reasons.
2. I use Claude Code to do inventory management - it has a bunch of scripts to pull the relevant data from Amazon then a set of instructions on how to project future sales and determine when I should reorder. It prints the data that it pulls from Amazon to the terminal, so that's verifiable. In terms of following the instructions on coming up with reorder dates, if it's way off, I'm going to know because I'm very familiar with the brands that I own. This is pretty standard manager/subordinate stuff - I put some trust in Claude to get it right, but I have enough context to know if the results are clearly bad. And if they're only off by a little, then the result is I incur some small financial penalty (either I reorder too late and temporarily stock out or I reorder too early and pay extra storage fees). But that's fine - I'm choosing to make that tradeoff as one always does when one hands off work.
3. I gave Claude Code a QuickBooks API key and use it to do my books. This one gets people horrified, but again, I have enough context to know if anything's clearly wrong, and if things are only slightly off then I will potentially pay a little too much in taxes. (Though to be fair it's also possible it screws up the other way, I underpay in taxes and in that case the likeliest outcome is I just saved money because audits are so rare.)
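The reorder logic in point 2 above could be sketched roughly like this (a minimal illustration, not the author's actual scripts; all the numbers are made up): project a stock-out date from average daily sales, then back off by the supplier lead time.

```python
# Sketch of a simple reorder-date calculation: given units on hand, recent
# daily sales, and a lead time, estimate when to place the next order.
# Illustrative only; real inventory projections would handle seasonality,
# in-transit stock, etc.
from datetime import date, timedelta

def reorder_date(units_on_hand, recent_daily_sales, lead_time_days, today):
    """Project stock-out from average daily sales, then subtract lead time."""
    avg_daily = sum(recent_daily_sales) / len(recent_daily_sales)
    days_of_cover = units_on_hand / avg_daily            # days until stock-out
    stockout = today + timedelta(days=int(days_of_cover))
    return stockout - timedelta(days=lead_time_days)     # order by this date

# e.g. 300 units on hand, ~10 units/day, 14-day lead time:
when = reorder_date(300, [9, 11, 10, 12, 8], 14, date(2025, 1, 1))
# 30 days of cover -> stock-out Jan 31 -> reorder by Jan 17
```

Being off by a few days in either direction maps exactly to the tradeoff described: a small stock-out or a bit of extra storage cost.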
Not every tool can have a "security risk". I feel that this stems from people who see every application as a product and products must be an online web app available to the world.
Let's say I have a 5 person company and I vibe-engineer an application to manage shifts and equipment. I "verify" it by seeing with my own eyes that everyone has the tools they need and every shift is covered.
Before I either used an expensive SaaS piece of crap for it or did it with Excel. I didn't "verify" the Excel either and couldn't control when the SaaS provider updated their end, sometimes breaking features, sometimes adding or changing them.
I just saw an R5 on the street in the bright green. Super cool looking car. There are a whole bunch of promising small EVs coming out in the EU. Hyundai Inster, VW ID.1, Kia EV2, etc.
Took one for a test drive - it was fun. The only downside is that compared to some other compact/city EVs the legroom in the back is REALLY bad (and I'm not exactly tall).
The legroom in my son's VW e-Up! is markedly better, despite it being a smaller car.
Is there any indication that they're going to "defeat common sense" again? They're cancelling products, making marginal improvements to old models, alienating their customers, etc.
Tesla as a car company seems dead-set on a continuous downward spiral.
Maybe the switch to robots will pay off and you'll be right. Somehow, I'm skeptical.
> Is there any indication that they're going to "defeat common sense" again?
If you equate Elon with Tesla, then there are plenty: SpaceX dominates near-Earth-orbit payload launches. A private company competing against and replacing NASA would have been a laughingstock idea 30 years ago. xAI made competitive SOTA models despite a very, very late start.
Of course Elon isn't Tesla. I think the biggest risk of Tesla now is the investors realizing he's more into AI and politics and will siphon resources from Tesla to his other companies.
Except SpaceX "competing and replacing NASA" is ... also a meme.
SpaceX is essentially the same kind of commercial provider as always, except that they didn't rest on the laurels of 1960s ICBM work, and among other things built their own additional infrastructure.
... But remember they were explicitly early financed to do that by DoD and NASA.
Small odd thing, but that's the first tracking warning modal I've seen that says they don't actually use tracking. And I can decline the no tracking? Kinda funny.
> the time zones are killer, and this can't be ignored
100% agree, especially when there is minimal overlap during normal office hours. I was managing a dev team in India from the US and it was a real challenge. The company ended up moving the team to the US, relocating most of my team. Despite all the people being the same, management became much easier.
Since then I've done US and EU, and EU and IN, and those have all worked fine because we had sufficient overlap during business hours.
He didn't need 8 hours, but zero didn't work. The US and India are about 12 hours apart (there are 4 time zones in the US, plus daylight saving time, and India is offset half an hour, but it rounds out to 12 hours for discussion).
> If you needed 8 hour overlap you were micromanaging?
...ok. I didn't need 8 hours of overlap.
As I mentioned in my first comment, I've also now done US/EU and EU/IN. Both of which have only partial overlap and things have gone well.
With US West Coast and India, I was often doing meetings at 7AM and my devs were doing meetings at 9 or 10PM. That was challenging, irrespective of any cultural differences.
I doubt LLM-generated software is going to replace more traditional software any time soon, especially when accuracy is pretty important (such as accounting). One thing I learned from years as a PM in a very data-centric organization is understanding data, how it is generated/stored/cut/etc. is very important to getting accurate results.
Where I could see some really interesting results is the marriage of the two. For example, you have a solid data structure that an LLM can generate infinite custom views from.
i think the same, i think backend where data is more prominent is not going anywhere soon. llms produce very bad data structures.
but from good apis, good data, good interface they can generate quite nice frontends.
i guess, frontend as job is going to have a hard time.
also, writing code is not the cognitive load, it's always reading code. and llms just increase that. so i mostly try to avoid using them.
but i do like researching with them. context free. like googles ai mode, etc. not from my code editor cause then they get biased and suggest stupid sh8t all the time.
With the current tech, I agree this will still be pretty niche. I'm vibe-coding my own iOS apps, and it still needs a decent understanding of the tech and a willingness to put up with a lot of rough edges.
However, with a proper framework (e.g., a very opinionated design system, the ability to choose from some pre-designed structures/flows, etc.) I could very much see ad hoc creation of software becoming more widespread.
> It will be interesting to see if Apple/Android provide a platform for vibe-apps.
It would be interesting, particularly for Apple, as this would cannibalize fees charged on the App Store. I imagine they could charge for use of the vibe-coding platform, but Apple hasn't been great at figuring out LLMs.
It would be cool if a 3rd party app platform could provide this functionality, but as I noted in another comment, I cannot even install my own vibe-coded apps on my own iPhone. (Without the 100 USD a year developer tax.) So I'm not sure how the architecture would work on iOS.