justinbaker84's comments

I was thinking the same thing.


Microsoft is saying they have a 27% stake in the company after this deal closes.


I don't understand why more people don't talk about how fast the models are. I see so much obsession with benchmark scores, but response speed is very important for day-to-day use.

I agree that the models from OpenAI and Google respond much more slowly than the models from Anthropic. That makes a lot of them impractical for me.


If the prompt runs twice as fast but takes an extra correction, it’s a worse outcome. I’d take 5-minute responses that are final.


I don’t agree that speed by itself is a big factor. It may matter to a certain audience, but I’d rather wait for a correct output than go through too many turns with a faster model.


Well, it depends on what you do. If a model can produce a PR that is ready to merge (and another can't), waiting 5 minutes is fine.


I feel like if I just do a better job of providing context and breaking complex tasks into a series of simple tasks then most of the models are good enough for me to code.


I am a professional developer, so I don't care about the costs. I would be willing to pay more for 4.5 Haiku than for 4.5 Sonnet because the speed is so valuable.

I spend way too much time waiting for the cutting-edge models to return a response. 73% on SWE-bench is plenty good enough for me.


How do you review code when the LLM can produce so much so fast?


Just read it when it is done writing it.


with an LLM


I am very excited about this. I am a freelance developer and getting responses 3x faster is totally worth the slightly reduced capability.

I expect I will be a lot more productive using this instead of Claude 4.5, which has been my daily driver LLM since it came out.


The people who succeed the most with fraud are those who tell lies that people want to believe. A LOT of people wanted to believe that there could be a second electric car company and that they could get rich off it. That is why the fraud worked so well.


AI is the same. I am pretty sure that at any company you have executives saying things about AI that not only aren't true but can never be true. However, this is the story that people are willing to believe.

Also, just generally, the question is wrong. Perpetrating a massive fraud is very time-consuming and, ultimately, requires a level of self-deception that most people don't have the energy for. Milton, SBF, etc. did the things they did because they wanted to believe they were someone other than who they were. There is nothing wrong with knowing who you are and just being that person. To say this another way: Milton was clearly unwell, and he is now unwell with more money than he can actually use. Being unwell is not an example for anyone, particularly when you trade it for something with extremely limited marginal value.


> The people who succeed the most with fraud are those who tell lies that people want to believe.

Jason Zweig:

    There are three ways to make a living:
        1) Lie to people who want to be lied to, and you’ll get rich.
        2) Tell the truth to those who want the truth, and you’ll make a living.
        3) Tell the truth to those who want to be lied to, and you’ll go broke.
* https://jasonzweig.com/three-ways-to-get-paid/


A difference in skill level is not a difference in morality. Nobody is out there only scamming people out of thousands because they have a moral objection to taking millions.


I predict a false flag cyber attack on the US that is supposed to whip us into a frenzy for war in Iran.


Can confirm. One of my best friends is a senior engineer at Thomson Reuters, and they are focused on that.


Have they published anything yet? I would love to read it.


It is disappointing to see how frequently VCs invest hundreds of millions of dollars in fraudulent companies. This is very different from investing in legitimate companies that don't work out.

It often seems like you are more likely to raise VC by being a fraud than by being a responsible person who wants to do something positive in this world.


> It often seems like you are more likely to raise VC by being a fraud than by being a responsible person who wants to do something positive in this world.

You can't get a 10x-100x return and an exit from a responsible person who wants to do something positive in this world.

