It feels like this to me too, whenever I give it a try. It's like a button you can push to spend 20 minutes and have a 50/50 chance of either solving the problem with effortless magic, or painfully wasting your time and learning nothing. But it feels like we all need to try and use it anyway just in case we're going to be obsolete without it somehow.
If it doesn't work, it's an annoyance and you have to argue with it.
If it does work, it's one more case where maybe with the right MCP plumbing and/or a slightly better model you might not be needed as part of this process.
Feels a bit lose-lose.
I am ahead of this curve, my trajectory was VSCode -> Zed -> Helix. Helix doubles my battery life from 4 to 8 hours compared to Zed. Zed is also on a bad trajectory IMO with the huge amount of updates being pushed constantly.
What exactly is bad about the trajectory? They are constantly adding features and none of it feels like bloat; it just happens to be a much younger editor, so iteration is rapid. I welcome it compared to the monstrosity that Jetbrains has become.
Yeah, it just reminds me of the early days of VS Code, where features were constantly added and it was fun at first, but they didn't stop and eventually it did feel more sluggish and bloated. Sometimes I'd have to spend time fixing or re-configuring something just because I opened the editor and the daily auto-update did something annoying. It might not happen with Zed, but it seems like a very similar approach to development.
> I genuinely don't know which I'm experiencing. That uncertainty itself feels like it should matter.
We don't even know how consciousness works in ourselves. If an AI gets to the point where it convinces us it might have awareness, then at what point do we start assigning it rights? Even though it might not be experiencing anything at all? Once that box is opened, dealing with AI could get a lot more complicated.
Some things in sci fi have become simply sci - megacorps that behave like nation states, the internet, jetpacks, robots... I feel like the trope that we will see realized going forward is "Humanists versus Transhumanists". We have these mores and morality and it's largely been able to chug along on the strength of collective identity and the expansion thereof - we are humans, so we try to do good by humans. There are shades in all directions (like animal rights - consciousness is valuable no matter who has it) but by and large we've been able to identify that if something appears to feel pain or trauma, that's a thing to have a moral stance about.
But the machines have done this already. There are well documented instances of these things mimicking those affects. Now, we are pretty sure that those examples were not doing what they appeared to - just probabilistically combining a series of words where the topic was pain or anguish etc - but once you get into chain-of-thought and persistent memory, things begin to get a lot more nuanced and difficult to define.
We need to have a real sit-down with our collective selves and figure out what it is about ourselves that we find valuable. For myself, the best I've come up with is that I value diversity of thought, robust cellular systems of independent actors, and contribution to the corpus of (not necessarily human) achievement.
That has been my experience too. I recently turned it all off because I decided the amount of times it takes me 5x longer to accomplish something, in addition to the subtle increase of bugs, is not currently worth it. I guess I'll try it again in a few months.
Spinosaurus is the inaccurate one from Jurassic Park 3 and Tyrannosaurus has inaccurate hand rotation. Velociraptor and Dilophosaurus are also from Jurassic Park and highly inaccurate. For shame!
It does make you wonder if humanity doesn't scale up neatly to the levels of technology we are approaching...the whole ethics thing kind of goes out the window if you can just change the desires and needs of conscious entities.
Yeah, the AI-generated bugs are really insidious. I also pushed a couple subtle bugs in some multi-threaded code I had AI write, because I didn't think it through enough. Reviews and tests don't replace the level of scrutiny something gets when it's hand-written. For now, you have to be really careful with what you let AI write, and make sure any bugs will be low impact since there will probably be more than usual.
Really interesting, thanks. The last point is thought provoking:
> Even so, mankind will suffer badly from the disease of boredom, a disease spreading more widely each year and growing in intensity. This will have serious mental, emotional and sociological consequences, and I dare say that psychiatry will be far and away the most important medical specialty in 2014. The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.
> Indeed, the most somber speculation I can make about A.D. 2014 is that in a society of enforced leisure, the most glorious single word in the vocabulary will have become work!
I think this could have already happened, except the definition of "work" is so nebulous, and there is so much wiggle room between the things that actually need to be done and the things we might as well do. Or maybe it has happened in parts of the world, but we are all in denial about it.
Yeah, it was to me the most interesting part of the piece; and I tend to agree with it.
When I was younger I may have been amused by the technological speculation — perhaps giving him a score (sorry, no hovercraft for everyday cars, and my Makita is in fact not powered by radioisotopes, but we'll grant you the experimental fusion power plants). We've seen that there are just so many unforeseeables that it is little more than guessing ... multiplied by how keyed in you are on what the current state of research is.
To have predicted Maps on the phone you would have had to predict the ubiquity of GPS or some other means of location, predicted small portable displays in order to show a map, predicted a global data network, miniaturization of electronics, the computer revolution.... It's no wonder even William Gibson missed it writing two decades after Asimov.
Was there some odd sort of car with a "moving map" depicted in the 1960's? Or was I the victim of some sort of viral image fakery? What did Bond have in Goldfinger — some kind of printed, scrolling map with a moving light (cursor) behind it? Or am I misremembering? Hilarious though to try to do something with 60's tech — perhaps using dead reckoning rather than GPS to guess the car's location (steering plus speed). A rough sketch of that idea is below.
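Purely for illustration, here is a minimal dead-reckoning sketch of the kind of estimate that comment speculates about: integrate speed and steering over time to guess position, with no GPS involved. The function name and the sample data are made up, and real implementations would have to fight accumulating drift.

    import math

    def dead_reckon(samples, x=0.0, y=0.0, heading=0.0):
        # samples: list of (dt_seconds, speed_m_per_s, steering_rate_rad_per_s)
        for dt, speed, steer_rate in samples:
            heading += steer_rate * dt           # integrate steering into heading
            x += speed * math.cos(heading) * dt  # advance along the current heading
            y += speed * math.sin(heading) * dt
            # small errors in speed and heading accumulate, which is why
            # pure dead reckoning drifts without an external position fix
        return x, y, heading

    # Example: drive straight for 10 s at 15 m/s, then turn gently for 5 s.
    print(dead_reckon([(10, 15, 0.0), (5, 15, 0.05)]))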
I think some WWII era bombers had scrolling paper maps? Or maybe I saw such maps at a museum recently. All these innovations have been on the military's wish list for a while.
But there is no message outside the rational layer when you're talking to a non-human. The only message is the amount of true information the LLM is able to output - the rest is randomness. It's fatiguing to have your human brain try to interpret emotions and social dynamics where they don't exist, the same way it's fatiguing to try and interpret meaning from a generated image.
I am sure that if you talk to a dog, it will probably take as much from your emotions as your words (to disprove your point about non-humans).
You look at it in binary categories, but instead, it is always some amount of information and some amount of randomness. An LLM can predict emotions similarly to words. Emotions and social dynamics from an LLM are as valid as the words it speaks. Most of the time, they are correct, but sometimes they are not.
The real difference is that LLMs can be trained to cope with emotions much better ;-)
Yes, fair enough about the dog - "non-human" was the wrong choice of words. But I don't agree that emotions and social dynamics from an LLM are valid. Emotions need real stakes behind them. They communicate the inner state of another being. If that inner state does not exist (maybe it could in an AGI, but I don't believe it could in an LLM), then I'd say the communication is utterly meaningless.
Well, at least to some extent. I mean, changing the inner state of an AI (as they are being built today) certainly is meaningless, because it does not affect other beings. However, the interaction might change your inner state. Like looking at an AI-generated image and finding it beautiful or awful. Similarly, talking to Miles or Maya might let you feel certain emotions.
I think that part can be very meaningful, but I also agree that current AI is built to not carry its emotional state into the world outside of the direct interaction.