Hacker News | srj's comments

It was the same at Google. If I'm remembering right, we couldn't export any vector-type data (raster only) and the tiles themselves had to be served out of South Korea.


I've seen zero job loss from AI but substantial job loss to off-shoring. What you said about over-hiring I think is also true, but if you look at headcount numbers they have dropped only marginally. The geographic distribution of that headcount, however, has shifted in a big way to India, Latin America, and Eastern Europe. The reason is obvious: people in those countries are paid far less (often a third or a quarter of what their US counterparts make).

It seems this rarely gets discussed in the media though. As you said, AI gets more readership attention. I also get the impression people feel there's something culturally offensive about discussing off-shoring.


Reading the text it feels like a giveaway to an "AI safety" industry who will be paid well to certify compliance.


The Big Four audit firms all have a leech class of boomer audit partners who won't let the advisory arm separate and want money. This is a great new income stream. Figure Deloitte in particular will make out like bandits on this.


There's definitely a cottage industry forming around "AI safety compliance".


The commenter’s profile indicates they work for a major AI development company, where being against AI regulation aligns nicely with one’s paycheck. See also the scare quotes around "AI safety".

We all have heard the dogma: regulation kills innovation. As if unbridled innovation is all that people and society care about.

I wonder if the commenter above has ever worked in an industry where a safety culture matters. Once you have, you see the world a little bit differently.

Large chunks of Silicon Valley have little idea about safety: not in automotive, not in medicine, not for people, and certainly not for longer-term risks.

So forgive us for not trusting AI labs to have good incentives.

When regulation isn’t as efficient as we’d like, that is a problem. But you have to factor in what happens if we don’t have any regulation. Also, don’t forget to call out every instance of insider corruption, kickback deals, and industry collusion.


I welcome any substantive commentary and disagreement, as always.

I’m happy to stray outside the herd. HN needs more clearly articulated disagreement regarding AI regulation. I made my comment in response to what seemed like a simplistic, ideology-driven claim. Few wise hackers would make an analogous claim about a system they actually worked on. Thinking carefully about tech but phoning it in for other topics is a double standard.

Bare downvotes don't indicate (much less explain) one's rationale. I can’t tell if I (1) struck a nerve (an emotional response), (2) conjured a contentious philosophy (i.e., we have a difference in values, preferences, or priorities), (3) made a logical error, (4) broke some norm or expectation, or (5) something else. Such a conflation of downvotes pushes me away from HN for meaningful discussion.

I’ve lived/worked on both coasts, Austin, and more, and worked at many places (startups, academic projects, research labs, gov't, not-for-profits, big tech) and I don’t consider myself defined by any one place or culture. But for the context of AI regulation, I have more fundamental priorities than anything close to "technical innovation at all costs".

P.S. (1) If a downvote here is merely an expression of “I’m a techno-libertarian” or "how dare you read someone's HN profile page and state the obvious?" or any such shallow disagreement, then IMO that’s counterproductive. If you want to express your viewpoint, make it persuasive rather than vaguely dismissive with an anonymous, unexplained downvote. (2) Some people do the thing where they guess at why someone else downvoted. That’s often speculation.


FWIW I didn't downvote you. I don't work on AI personally, and while I have no way of proving it to you I certainly am not trying to shill for my employer.

My skepticism of AI safety is just skepticism of AI generally. These are amazing things, but I don't believe the technology is even a road to AGI. There's a reason it can give a chess move when prompted and explain all the rules and notation, but can't actually play chess: it's not in the training data. My issue is simply that I think the hype and anxiety are unnecessary. Now, this is most definitely just my opinion and has nothing to do with the company I work for, which I'd bet would disagree with me on all of this anyway. If I did believe this was a road to AGI, I actually would be in favor of AI safety regulation.


> My skepticism of AI safety is just because of skepticism of AI generally. These are amazing things, but I don't believe the technology is even a road to AGI.

Thanks for your response. I'm curious how to state your claim in a way that you would feel is accurate. Would you say "LLMs are not a road to AGI"?

I put ~zero weight on what an arbitrary person believes until they clarify their ideas, show me their model, and give me a prediction. So:

- Clarify: What exactly do you mean by "a road to"? Does this mean you are saying any future technology that uses LLMs (for training? for inference? something else) won't assist the development of AGI?

- Model: On what model(s) of how the world works do you make your claims?

- Prediction: If you are right, when will we know and what will we observe?


Yes I'm talking about LLMs in particular. I'm in the stochastic parrot camp. Though I could be convinced humans are no more than stochastic parrots, in which case it does have a path for development of AGI.

If I'm right, the breakthroughs will plateau even while applications of the technology continue to advance for the next several years.


Here is my take. When people use the stochastic parrots phrase, very often they use it as an explanation of what is happening. But in many cases, I don't think they appreciate: (1) good explanations must be testable models; (2) different explanations exist at different levels of abstraction; (3) having one useful level of explanation does not mean that other levels of explanation are not also accurate or useful.

Sure, next-word prediction is indeed the base training objective for LLMs. That doesn't prevent the resulting models from demonstrating behavior that corresponds with measurable levels of intelligence, such as problem-solving in particular domains! Nor does it prevent fine-tuning from modifying an LLM's behavior considerably.

One might say, e.g., "LLMs only learn to predict the next word." The word only is misleading. Yes, models learn to predict the next word, and they build a lot of internal structures to help them do that. These structures enable capabilities much greater than merely parroting text. This is a narrow claim, but it is enough to do serious damage to the casual wielder of the "stochastic parrots" phrase. (To be clear, I'm not making any claims about consciousness or human-anchored notions of intelligence.)
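To make the contrast concrete, here's a toy illustration (entirely my own construction, not from any paper): a literal stochastic parrot, i.e. a bigram model that does nothing but sample the next token from observed counts. Real LLMs share this training objective but learn far richer internal structure on top of it; this sketch just shows what "pure next-token prediction" looks like mechanically.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count next-token frequencies for each token in a whitespace-split corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    tokens = corpus.split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n, rng):
    """Emit tokens one at a time: each step is literally next-token prediction."""
    out = [start]
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:  # dead end: token never appeared with a successor
            break
        tokens, weights = zip(*nxt.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat the cat ran")
print(generate(model, "the", 5, random.Random(0)))
```

This model can only ever remix token pairs it has seen; the interesting empirical question is how much of LLM behavior is explained at this level versus the level of the learned internal structures.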


> I'm in the stochastic parrot camp.

If you wouldn't mind doing me a favor?... For a few minutes, can we avoid this phrase? I don't know what people really mean by it.* Can you translate your view, in plain English, into sentences of the following form:

1. "Based on my understanding of LLMs, X1 and X2 are impossible."

2. "This leads to predictions such as P1 and P2."

3. "If I observed X3 or X4, it would challenge my current beliefs of LLMs."

* I've read the original stochastic parrots paper. In my view, the paper does not match how many people talk about it. Quite the opposite. It is likely many people name drop it but haven't read it carefully. I may have some misinterpretations, sure, but at least I'm actively seeking to question them.


> There's a reason it can give a chess move when prompted and explain all the rules and notation, but can't actually play chess: it's not in the training data.

I don't understand how you can claim an LLM can't play chess. Just as one example, see: https://dynomight.net/chess/


As an Austinite I'm nervous about these things. My son and his classmates play along the street, and I'm 90% sure I saw one of these driving by our house last week, presumably for testing. The street's legal speed limit is higher than most people actually drive, because there's a lot of activity and no sidewalks, which I'm about to argue for changing. Normal people will slow down when they see kids around, but autonomous cars still drive at their normal speed.


>> There are no out of work olive farmers in the US.

I'm not sure this is true. I buy olive oil specifically from California. It's niche but could be larger if they weren't competing with lower overseas labor costs.


Not 50 times larger, which is what it would need to be to supply current domestic consumption. California produced only 1.94 million gallons of olive oil in 2023; that same year, the US used ~98.5 million gallons.

Even if we could snap our fingers and create the orchards out of thin air there's not enough land and water to grow 50x our current production. Then where's the worker population coming from? They're also trying to drive overall immigration to essentially zero.
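The ~50x figure follows directly from the numbers cited; a quick sanity check (using the 2023 figures from the comment above):

```python
# 2023 figures cited above, in gallons of olive oil
ca_production = 1.94e6   # California production
us_consumption = 98.5e6  # US consumption

ratio = us_consumption / ca_production
print(f"US consumption is roughly {ratio:.0f}x California production")
```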


Don't olive trees take decades to reach maturity?


It takes time to ramp up olive oil production, so it’s way more cost-effective to just import olive oil from countries with established crops.


It's surprising to me that the prosecutor is allowed to essentially insinuate crimes to influence the jury, without the need to prove them. That seems to undermine the process because it creates a "there's smoke so there must be fire" mentality for the jury.


There was plenty of evidence that he ordered the hits, and the defense had the opportunity to address the evidence in court. The chat logs go far beyond "insinuation".

It's ridiculous that people are pretending there is any doubt about his guilt because they like crypto and/or drugs.


So why not properly charge him then?

Do you not think the optics are a bit weird when you sentence someone to life for something relatively small, and the real reason is another crime you’re very sure he committed but never bothered to charge him with?


Prosecutors often choose not to pursue additional charges against someone already serving a life sentence. This approach helps avoid wasting court time and resources on cases that are unlikely to change the individual’s circumstances or contribute meaningfully to justice (none of the murders for hire resulted in victims).

I actually wonder if those charges may still be on the table now that a pardon has been granted.

https://en.wikipedia.org/wiki/Prosecutorial_discretion


AFAIK they were dismissed with prejudice, so can't be brought again.


If I understand correctly, only one of the "murder-for-hire" allegations was dismissed with prejudice[0]. However, he was suspected of orchestrating a total of six "murder-for-hire" plots.

[0] https://freeross.org/false-allegations/


Comically (horrifically sadly?) they were dismissed that way because he was already in prison for life with no possibility of getting out, so the court did not want to waste time on it.

And here we are


Being a drug kingpin is not considered "something relatively small" under US law, as you can see from the sentencing. Being the leader of a large drug operation and ordering hits to protect your business would be considered worse than trying to take out a hit for whatever "personal reasons".

Obviously the hits are a lot messier to prosecute as well with the misconduct of the FBI agents, maybe you could hammer that enough to confuse a jury. But people are commenting like the evidence outright didn't exist - I can only think they have either heard it told second-hand, or are employing motivated reasoning.


Of course it is. Throwing in potential evidence of unrelated crimes to sway other people's (specifically the jury's) opinion about the defendant, without formally charging him, is exactly what the word "insinuation" means[0]:

the action of suggesting, without being direct, that something unpleasant is true

[0]: https://dictionary.cambridge.org/dictionary/english/insinuat...


> There was plenty of evidence that he ordered the hits, and the defense had the opportunity to address the evidence in court

Clearly not that much evidence if the state didn't bother to prosecute those charges. And why would they? The judge sentenced him as though he had been found guilty of them.


Coincidentally, on the same day, SCOTUS confirmed in its Andrew v. White ruling [1] that admitting prejudicial evidence violates due process rights under the 14th Amendment.

1. https://www.supremecourt.gov/opinions/24pdf/23-6573_m647.pdf


It's a gross miscarriage of justice.

The gov should have to prove you committed a crime before that information is admissible at sentencing.


It's a next-token predictor capable of some impressive stuff, but there's no intelligence behind it. You will never find a novel idea from an LLM.

Like so many successful applications of computers, it's a new way of taking a monotonous task and grinding through it quickly. In this way I think it's different from some previous fads (e.g. blockchain), and there is real utility.

I agree though that it's exhausting to read people anthropomorphize and hype it up.


> I think people seriously overestimate the difference engineer quality makes. Most products can be built with mediocre talent. I'm sorry, that's the truth. We all love to have strong opinions on who we should hire and I say "almost anyone, just throw meat at the problem". Most problems are solved with time and not cleverness.

I'm surprised that your experience here is so different from mine. The best engineers I've worked with are capable of things that average to below-average ones likely could never have achieved, even with an order of magnitude more time.

I don't think it comes down to cleverness so much as inventiveness. There are dots that great engineers can connect that often nobody else could spot. They also need less process, and a large number of people with all of the coordination overhead does not scale linearly.


ATF approvals for individuals are much faster now. Last week I got a Form 4 approval in just 2-3 days.


I think the animosity is some indication that people feel justice was not served.

