
The more you have to rerun your actions to debug them, the more money Microsoft makes. They aren’t incentivized to save you time.

Completely bonkers that people, companies and organizations just swallow this, bait and all.

Free hosting, CI minutes, and an ecosystem.

Commit-and-push to test small incremental changes, self-hosted runners whose time still counts towards CI minutes, and an ecosystem hellbent on presenting security holes as new features. I'm a bit unimpressed :)

The average person already automates a lot of things in their day-to-day life. They spend far less time doing the dishes, laundry, and cleaning because parts of those tasks have been mechanized and automated. I think LLMs probably automate the wrong thing for the average person (i.e., I still have to load the laundry machine and fold the laundry after), but automation has saved the average person a lot of time.

For example, my friend doesn't know programming, but his job involves some tedious spreadsheet operations. He was able to use an LLM to generate a Python script to automate part of this work, saving about 30 minutes a day. He didn't review the code at all, but he did review the output in the spreadsheet, and that's all that matters.

His workplace has no one with programming skills, so this is automation that would never have happened otherwise. Of course it's not exactly replacing a human or anything. I suppose he could have hired someone to write the script, but he never really thought to do that.
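For a sense of scale, the whole thing can be a page of code. A minimal sketch of that kind of script, assuming the openpyxl library and a made-up file name and column layout (the real script obviously depends on the actual spreadsheet):

    # Hypothetical example: total a "Quantity" column per "Region" and write the
    # result to a new sheet. File name and column positions are illustrative only.
    from collections import defaultdict
    from openpyxl import load_workbook

    wb = load_workbook("daily_report.xlsx")
    ws = wb.active

    totals = defaultdict(float)
    for row in ws.iter_rows(min_row=2, values_only=True):  # skip the header row
        region, quantity = row[0], row[1]
        if region is not None and quantity is not None:
            totals[region] += float(quantity)

    summary = wb.create_sheet("Summary")
    summary.append(["Region", "Total Quantity"])
    for region, total in sorted(totals.items()):
        summary.append([region, total])

    wb.save("daily_report_with_summary.xlsx")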


What sorts of things will the average, non-technical person think of automating on a computer that are actually quality-of-life-improving?

A work colleague had a tedious operation involving manually joining a bunch of video segments together in a predictable pattern. It took them a full working day.

They used "just" ChatGPT on the web to write an automation. Now the same process takes ~5 minutes of work: select the correct video segments, click one button to run the script.

The actual processing still takes time, but they don't need to stand there watching it progress so they can start the second job.

And this was a 100% non-technical marketing person with no programming skills past Excel formulas.
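For the curious, the core of a script like that can be very small. A rough sketch, assuming ffmpeg is installed and the segments share a codec so they can be concatenated without re-encoding; the paths and naming pattern are invented for illustration:

    # Hypothetical example: join video segments with ffmpeg's concat demuxer.
    # Assumes ffmpeg is on PATH and all segments use the same codec/container.
    import subprocess
    import tempfile
    from pathlib import Path

    def join_segments(segments, output_path):
        # The concat demuxer reads a text file listing the inputs in order.
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            for seg in segments:
                f.write(f"file '{Path(seg).resolve()}'\n")
            list_file = f.name

        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_file, "-c", "copy", str(output_path)],
            check=True,
        )

    if __name__ == "__main__":
        # Segments follow a predictable pattern: intro, numbered parts, outro.
        parts = ["intro.mp4", "part_1.mp4", "part_2.mp4", "part_3.mp4", "outro.mp4"]
        join_segments(parts, "combined.mp4")

The "-c copy" part is what keeps it quick: the segments are stitched together without re-encoding, so the remaining wait is mostly I/O, which is exactly the part nobody has to stand there watching.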


My favorite anecdotal story here is that a couple of years ago I was attending a training session at a fire station and the fire chief happened to mention that he had spent the past two days manually migrating contact details from one CRM to another.

I do not want the chief of a fire station losing two days of work to something that could be scripted!
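Even a crude version of that migration is a small script. A sketch, assuming both CRMs can export and import CSV; the column names are invented, since the real mapping depends on the two products involved:

    # Hypothetical example: remap a contact export from one CRM's CSV columns
    # to another CRM's import format. Column names are invented for illustration.
    import csv

    FIELD_MAP = {
        "Full Name": "name",
        "E-mail":    "email",
        "Phone #":   "phone",
        "Company":   "organization",
    }

    with open("old_crm_export.csv", newline="", encoding="utf-8") as src, \
         open("new_crm_import.csv", "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(FIELD_MAP.values()))
        writer.writeheader()
        for row in reader:
            writer.writerow({new: (row.get(old) or "").strip()
                             for old, new in FIELD_MAP.items()})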


I don't want my doctor to vibe-script some conversion only to realize weeks or months later that it made a subtle error in my prescription. I want both of them to have enough funds to hire someone to do it properly. But wanting is not enough, unfortunately...

Humans make subtle errors all the time too though. AI results still need to be checked over for anything important, but it's on a vector toward being much more reliable than a human for any kind of repetitive task.

Currently, if you ask an LLM to do something small and self-contained, like solving leetcode problems or implementing specific algorithms, it will have a much lower rate of mistakes in the actual code than an experienced human engineer. The things it does badly are more about architecture, organization, style, and taste.


But with a software bug, the error rapidly becomes widespread and systematic, whereas human errors often are not. Getting a couple of prescriptions wrong because the doc worked a 12+ hour shift is different from systematically getting a significant number of prescriptions wrong until someone double-checks the results.

An error in a massive hand-crafted Excel sheet also becomes systematic and widespread.

Because Excel has no way of doing unit tests or any kind of significant validation. Big BIG things have gone to shit because of Excel.

Things that would never have happened if the same thing were a vibe-coded Python script and a CSV.


I agree with the Excel thing, but not with thinking it can't happen with vibe-coded Python.

I think handling sensitive data should be done by professionals. A lawyer handles contracts, a doctor handles health issues, and a programmer handles data manipulation through programs. This doesn't remove the risk of errors completely, but it reduces it significantly.

In my home, it's me who's impacted if I screw up a fix in my plumbing, but I won't try to do it at work or in my child's school.

I don't care if my doctor vibe codes an app to manipulate their holiday pictures; I care if they do it to manipulate my health or personal data.


Of course issues CAN happen with Python, but at least with Python we have tools to check for the issues.

A bunch of your personal data is most likely going through some Excel sheet made 15 years ago by a now-retired office worker somewhere. Nobody understands how the sheet works, but it works, so they keep using it :) A replacement system (a massive SaaS application) has been "coming soon" for 8 years and has cost millions, but it still doesn't work as well as the Excel sheet.
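As a concrete example of the kind of check that's trivial in Python and essentially absent from a hand-maintained Excel sheet, here's a sketch of a sanity-check pass over a script's CSV output; the column names and the dose range are invented for illustration:

    # Hypothetical example: sanity-check a CSV before anyone acts on it.
    # Column names ("patient_id", "dose_mg") and the dose limits are made up.
    import csv

    def validate(path):
        errors = []
        with open(path, newline="", encoding="utf-8") as f:
            for i, row in enumerate(csv.DictReader(f), start=2):  # row 1 is the header
                if not (row.get("patient_id") or "").strip():
                    errors.append(f"row {i}: missing patient_id")
                try:
                    dose = float(row["dose_mg"])
                except (KeyError, ValueError):
                    errors.append(f"row {i}: dose_mg missing or not a number")
                    continue
                if not 0 < dose <= 1000:
                    errors.append(f"row {i}: dose_mg {dose} outside plausible range")
        return errors

    if __name__ == "__main__":
        problems = validate("output.csv")
        for p in problems:
            print(p)
        raise SystemExit(1 if problems else 0)

The same checks could live in a test suite and run on every change to the script, which is exactly the loop Excel doesn't give you.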


Shouldn’t the AI technology that Microsoft is spending billions on make this trivial?

AI is also likely not following the terms of the license, e.g., BSD requires including attribution.


These basically seem like numbers of last resort, for after you've profiled and ruled out all of the usual culprits (big disk reads, network latency, polynomial or exponential time algorithms, wasteful overbuilt data structures, etc.) and need to optimize at the level of individual operations.


I wouldn’t expect them to accept photocopies of a passport


I would hope that they have access to a tool to look up the passport by number and confirm that the details match the copy and the photo appears to look like the person.


They do, but it can and will be ignored, based on events to date. The goal is to create ambiguity, producing a power imbalance that allows working outside of the legal framework to accomplish target outcomes. It turns an objective boolean evaluation ("is_citizen") into a subjective one ("is_preferred_and_compliant").


You might even hope that such a system would be able to work off their name and some other memorable, identifiable information like address, origin country, or date of birth, and would display their papers with photo identification available, but alas...

The goal isn't to be reasonable or helpful.


We have machines that only do some parts of these tasks.


At least many of the comments here still seem to be human-written, and so are much more interesting to read than the increasing number of AI-written articles that get linked.


You're absolutely right.

Although to be fair, the fact that the comment section of HN is often more interesting than TFA is something that long predates LLMs.


You can do this on almost every online service by trying to create an account with an email that already has an account.

