
I think your math is wrong. Most modern cars emit up to about 150g of CO2 per km; there are other emissions too, but in much smaller quantities.


I think the units there are off; a Camry hybrid is about 100g of direct CO2 per km. One widely repeated calculation puts total direct + indirect emissions for a grocery bag at 200g. So 1 km driven vs. 1 bag is a similar magnitude of emissions.
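
A quick napkin-math sketch of that comparison, using only the rough figures quoted above (the per-bag number varies a lot between studies):

    # Napkin math, assuming the figures quoted above
    car_g_co2_per_km = 100.0   # Camry hybrid, direct tailpipe CO2
    bag_g_co2_total = 200.0    # widely repeated direct + indirect figure per bag

    km_per_bag = bag_g_co2_total / car_g_co2_per_km
    print(f"One bag is roughly the CO2 of driving {km_per_bag:.1f} km")
    # -> One bag is roughly the CO2 of driving 2.0 km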


Please be careful of such "metrics/statistics." Their very nature means they're politically and financially incentivized to lean towards a higher or lower number than "the other guy." And, of course, a big number is scarier in a vacuum. What if a paper bag is 250g of emissions?

The poster child for me for this is low-GWP refrigerants. Sounds good, right? Well, think about how CO2 that has been captured, filtered, and compressed compares. I'll leave everybody to argue with themselves on this. Does CO2 vs. R-whatever use more energy? Less? Does it somehow justify the emissions and pollution of manufacture?

My conclusion is... I don't know.


We have enough data to estimate the reasonable range of possibilities and exclude the upthread assertion that a ten-minute car ride produces emissions similar to 10k plastic bags. A degree of uncertainty need not make us helpless in the face of loud ignorance; that's how we end up giving equal weight in the media to the common consensus of professionals in a field and to political operatives with fringe beliefs but no evidence.


Sorry, I screwed up and misread what you wrote: primarily, a simple "we can do way better than 30mpg." And there's not a lot in the way of wiggle room to debate with any integrity the amount of CO2 that burning a set quantity of gas produces. A couple of percentage points for NOx and friends, and that's it.

I am confused why everybody mentions emissions, though. In a discussion on paper/plastic/reusable bags, in response to a call for napkin math on a claim of "10,000 bags from the fuel needed to get to the store" (essentially the argument made), CO2 isn't relevant; what matters is just the mass of the gas used to get to the store.

I'm not pleased with how this turned out. To be told I'm wrong? That's fine, it's the internet. I'm disappointed and alarmed by how badly wrong the suggested corrections are... it's deeply frustrating for me as well.


That's comically wrong. Human resting metabolism is on the order of 20 grams of CO2 per hour.

See: https://www.sciencedirect.com/science/article/pii/S036013232...

As for a kilo of gas per 10 miles: https://en.m.wikipedia.org/wiki/Gasoline gives a density of 0.71-0.77 g/mL, and a standard conversion table (https://www.engineeringtoolbox.com/volume-units-converter-d_...) gives 3.785 L per gallon. And finally, since we're comparing burning gas for a car vs. using it in plastic, the figure of merit is petroleum usage, not greenhouse gas emission. Technically, plastic and gasoline aren't going to be 1:1, but that's not napkin math anymore unless you're a petroleum engineer/chemist.
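
A minimal sketch of that arithmetic, assuming roughly 30 mpg (the fuel economy is my assumption; the density and liters-per-gallon figures are the ones cited above):

    # Mass of gasoline burned per 10 miles, assuming ~30 mpg
    density_g_per_ml = 0.74      # mid-range of the 0.71-0.77 g/mL cited above
    liters_per_gallon = 3.785
    mpg = 30.0                   # assumed fuel economy

    gallons = 10.0 / mpg
    grams = gallons * liters_per_gallon * 1000 * density_g_per_ml
    print(f"{grams / 1000:.2f} kg of gasoline per 10 miles")
    # -> about 0.93 kg, i.e. on the order of a kilo of gas per 10 miles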


Also, most of that weight is oxygen. The mass of carbon from the gasoline, in an apples-to-apples comparison to plastic, would be much lower.
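
To put a rough number on that, it's just molar-mass arithmetic (C = 12 g/mol, O = 16 g/mol):

    # Fraction of CO2 mass that is carbon
    carbon_fraction = 12.0 / (12.0 + 2 * 16.0)
    print(f"{carbon_fraction:.0%} of CO2 mass is carbon")  # -> 27%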

It doesn't really make sense to be comparing plastic waste to CO2 emissions though. These aren't fungible.


I did a little test that I like to do with new models: "I have rectangular space of dimensions 30x30x90mm. Would 36x14x60mm battery fit in it, show in drawing proof". GPT-5 failed spectacularly.
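
For reference, the axis-aligned part of that check is only a few lines; whether it could fit tilted or diagonally is the harder geometry the drawing is supposed to settle:

    from itertools import permutations

    space = (30, 30, 90)     # mm
    battery = (36, 14, 60)   # mm

    # Does any axis-aligned orientation of the battery fit in the space?
    fits = any(all(b <= s for b, s in zip(perm, space))
               for perm in permutations(battery))
    print(fits)
    # -> False: 36 mm and 60 mm both exceed 30 mm, and only one of them
    #    can be placed along the 90 mm axis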


I tried it again today out of curiosity. OpenAI said there was some routing bug on launch and requests were going to the cheaper model.

Today it seems pretty good. Not perfect, but not a spectacular failure.

https://chatgpt.com/s/t_68966fcf457c8191811968b9a6a2e81e


This was a fun prompt. I learned things from the models. Gemini 2.5 was way better than GPT-5 here, even though it was quite incomplete in the first response.


I'm wondering if this could turn into some kind of prompt tuning tool, e.g. to detect weak or undesired relationships, "blur" in embeddings, etc.


Having written quite a few OpenAPI specs, I don't agree with you. JSON is hard to read, and YAML has its own quirks, especially when you try to split it into parts. Amazon also tries to invent its own language for describing APIs, so I guess they are not happy with OpenAPI either. Anyway, without the ability to generate code from the spec, there is not much use for it. Codegen tools like NSwag, OpenAPI Generator, and others produce terrible code, at least for Java/C#/TypeScript (there are 4.1k open issues for OpenAPI Generator). Using custom generators for codegen makes the problem less painful, but that is an additional burden. I'm looking for a better alternative, so it would be very interesting to see what they will do with code generation.


Codegen is coming online as we speak. We do codegen from TypeSpec in Azure across multiple languages, and the results are pretty great. We're moving that over to the TypeSpec project so everyone can generate code. Obviously my opinion is biased, but I think the results are significantly better than what you find elsewhere in the ecosystem.


Tried a couple of queries that I've used lately at work, and the results were meh; it seems getPayload was tokenized into get and Payload, which turned up a lot of unrelated results from sites that have nothing to do with programming. In code search, in my opinion, there needs to be a subtle distinction between when to do an exact match and when not to, and even whether to keep syntax symbols, so that I could search for call usage, call usage with a specific generic parameter, not the definition, etc.
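
As a rough illustration of what I mean by identifier-aware tokenization (a hypothetical sketch, not how any particular search engine actually indexes things): keep the exact identifier as a token and add the camelCase pieces alongside it, instead of only splitting.

    import re

    def tokenize_identifier(ident: str) -> list[str]:
        # Hypothetical sketch: index the exact identifier plus its camelCase
        # parts, so "getPayload" can still match exact-identifier queries.
        parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", ident)
        return [ident] + [p.lower() for p in parts]

    print(tokenize_identifier("getPayload"))
    # -> ['getPayload', 'get', 'payload']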


Thanks. You're right, we gotta improve the tokenization and handle special characters better. Hoping to ship that by the end of next week.

