
On this topic I think it’s pretty off base to call HN a “well insulated bubble” - AI skepticism and outright hate are pretty common here, and AI-negative comments often get a lot of support. This thread itself offers plenty of examples.

Surely we all know that when we post or upvote comments like this, we are being incredibly disingenuous.


The database is being reverse engineered and published anyway, per the article.


I think Archive is just rehydrating shortened links in webpages that have been archived. I doubt they’re discovering previously unknown URLs.


No, they really are trying to enumerate all 230 billion possible shortlinks; that’s why they need so many people to help crawl everything.


Got a source? I don’t see details one way or another.


From the article:

> there are about 230 billion* links that need visiting

> * Thanks to arkiver on the Archive Team IRC for correcting this number.

Also, when running the Warrior project, you could see it iterating through the range. I don't have any logs handy since the project is finished, but they looked a bit like:

  https://goo.gl/gEdpoS: 404 Not Found
  https://goo.gl/gEdpoT: 404 Not Found
  https://goo.gl/gEdpoU: 302 Found -> https://...
  https://goo.gl/gEdpoV: 404 Not Found
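Mechanically, each probe is just a request to a candidate short code without following the redirect; whether it comes back 404 or 302 (and where the Location header points) is the data being saved. A rough sketch of that check, assuming the third-party requests package and reusing the code range from the logs above purely as an illustration (this is my own sketch, not the actual Warrior client):

  import string
  import requests

  ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase  # 62-character code space

  def probe(code):
      url = "https://goo.gl/" + code
      # Don't follow the redirect; the 302 Location header is what gets recorded.
      resp = requests.head(url, allow_redirects=False, timeout=10)
      line = "%s: %s %s" % (url, resp.status_code, resp.reason)
      if "Location" in resp.headers:
          line += " -> " + resp.headers["Location"]
      return line

  # Walk a tiny slice of the keyspace by bumping the final character,
  # mirroring the ordering visible in the log lines above.
  for ch in "STUV":
      print(probe("gEdpo" + ch))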


They are useful for putting URLs in print materials like books. They're also useful for sharing very long links in IRC and some other text-based chat apps (many Google Maps links would span multiple IRC lines if not shortened, for example). And they're good for making more easily scannable QR codes.
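On the QR point specifically, the win is that fewer characters let the code fit in a lower QR version, i.e. a coarser grid with bigger modules that scans more easily. A quick way to see that (a sketch assuming the third-party qrcode package; the long URL is a made-up stand-in for an unshortened maps link):

  import qrcode

  def qr_version(url):
      # Returns the QR "version" (grid size) needed to encode the URL.
      qr = qrcode.QRCode()
      qr.add_data(url)
      qr.make(fit=True)  # pick the smallest version that fits the data
      return qr.version

  print(qr_version("https://goo.gl/gEdpoU"))                       # small version, coarse grid
  print(qr_version("https://maps.example.com/dir/?" + "x" * 250))  # much higher version, denser grid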


Not when everyone logs in as ec2-user.


That's horrific!


So not being able to download any of the games you have purchased ever again?


The article says you can still play those; it seems you just can't re-download them on the same system.


ah, very reasonable...


The major difference is that in the type of reading Joel Spolsky is talking about, you are coming in not knowing the code's intent. It was written by one or more other people at some point in the past, likely with many iterative changes over a period of time. Figuring out the intent in this case is 90%+ of the work. With LLM-generated code, you know the intent. You just told the assistant exactly what your intent was. It's much, much easier to read code that you already know the intent of.


It doesn’t really matter what this or that person said six months ago or what they are saying today. This morning I used Cursor to write something in under an hour that previously would have taken me a couple of days. That is what matters to me. I gain nothing from posting about my experience here. I’ve got nothing to sell and nothing to prove.

You write like this is some grand debate you are engaging in and trying to win. But to people on what you see as the other side, there is no debate. The debate is over.

You drag your feet at your own peril.


The thing about claims like “An LLM did something for me in an hour that would take me days” is that people conveniently leave out their own skill level.

I’ve definitely seen humans do stuff in an hour that takes others days to do. In fact, I see it all the time. And I know people who have the skills to do things very quickly but choose not to, because they’d rather procrastinate than get pressured into picking up even more work.

And some people waste even more time writing stuff from scratch when libraries exist for whatever they’re trying to do, which could get them up and running quickly.

So really I don’t think these bold claims of LLMs being so much faster than humans hit as hard as some people think they do.

And here’s the thing: unless you’re using the time you save to fill yourself up with even more work, you’re not really making productivity gains, you’re just using an LLM to acquire more free time on the company dime.


Again, implicit in this comment is the belief that I am out to or need to convince you of something. You would be the only person who would benefit from that. I don’t gain anything from it. All I get out of this is having insulting comments about my “skill level” posted by someone who knows nothing about me.


You don’t know the harm you’re inflicting. Some manager will read your comment and conclude that anyone who isn’t reducing tasks that previously took hours or days into a brief 1 hour LLM session is underperforming.

In reality, there is a limit to how quickly tasks can be done. Around here, PRs usually contain changes that most people could just type out in under 30 minutes if they knew exactly what to type. However, getting to the point where you know exactly what you need to type takes days or even weeks, often collaborating across many teams, thinking deeply about potential long-term impacts down the road, and balancing company ROI and roadmap objectives, perhaps even running experiments.

You cannot just throw LLMs at those problems and have them wrapped up in an hour. If that’s what you’re doing, you’re not working on big problems; you’re doing basic refactors and small features that don’t require high-level skills, where the bottleneck is mostly how fast you can type.


>And some people waste even more time writing stuff from scratch when libraries exist for whatever they’re trying to do

That's an argument for LLMs.

>you’re just using an LLM to acquire more free time on the company dime.

This is a bad thing?


> you’re just using an LLM to acquire more free time on the company dime

You might as well do that since any productivity gains will go to your employer, not you.


That's not how FAANG compensation works.


FAANG engineers are still working class.


It's hard to be working class on like 4x the median income and stock compensation.

You can't own your own SV home but you can become a slumlord somewhere else remotely.


Do you just mean within Amazon? Because outside of Amazon, there was major resistance to AWS/cloud computing in general from older devs highly invested in the status quo. I have spent a significant amount of effort in my career fighting for cloud adoption.


Nobody going to the Sphere is expecting the 1939 theater experience. And nobody has truly had that experience for 85 years in any case. This movie is 10 years from being in the public domain, and would have been decades ago if not for the lobbying of moneyed interests. Perhaps it's time to stop clutching pearls and, if you don't like what they are doing at the Sphere, just don't go see it.


I didn’t say they shouldn’t be allowed to do it, and the only thing I said about “the theater experience” was sneering at the idea. I am talking about the film itself, and saying that these edits to the film are tasteless trash. Claiming I’m “clutching at pearls” is a bad faith insult - my point is this shit fucking sucks!

David Lynch used ML to remaster Inland Empire, which was shot on a digital camcorder and was simply too dark and blurry. This was an excellent use of the technology. Blowing up The Wizard of Oz for the sake of tech bros and tourists is a terrible use of the technology.


> as if Google thinks direction and editing are technological limitations

“Google” isn’t making the artistic decisions here - there’s a full production staff from the studio doing that. Google is making what they ask for.

>I am … saying that these edits to the film are tasteless trash

Making such a claim with zero knowledge or experience is a pretty bold move - how are you so confident here?

While I didn’t get to see the private preview shown at the Sphere this week, I’ve spoken to about a dozen people who did, and they were all very positive about it.

