Yes, cement absorbs CO2 as it sets; there are reams of "green cement" startups built on that premise, like CarbonBuilt. This paper presents new estimates of how much is actually taken up and which factors matter, but the abstract doesn't mention whether there's any actionable information. Yawn.
The #1 problem I have typing on my iPhone is that I hit letter keys (mostly 'n') instead of the space bar and the phone just doesn't anticipate this as a possible typo and doesn't offer the right corrections. (I have AutoCorrect off.) It doesn't seem able to learn that this is a common typo, either.
Hah! I have exactly the opposite problem: I hit the space bar instead of N, and the iPhone doesn't understand this as a possible typo, so all the suggestions and auto-corrects are wrong.
Claude Code usage probably isn't counted as "chatbot" use. Also, I think you're overestimating how many people program vs. how many people are using AI chatbots as the new websearch. Orders of magnitude more of the latter.
I was just reading about an app in the iOS App Store called Seeing AI that "narrates the world around you". (All disclaimers apply, this is exactly all I know about it.)
> So unlike a normal e-bike, when its battery dies it turns into a stationary bike.
Maybe you pedal the generator on the kickstand for a minute to give it enough charge to operate the electronics, and then away you go working hard like on any other e-bike that's out of charge? I don't see why it couldn't move.
Instagram tip: if you tap the "Instagram" wordmark at the top (in the mobile app), you can select "Following" and get a feed of only posts from accounts you follow, with no suggested posts and no reels.
I end up going through that feed in a few minutes and it insulates me from the endless scrolling.
Facebook mobile tip: if you click on the burger menu and select "Feeds" you will be taken to a page with a list of different feeds at the top. If you then select the "Friends" tab you will see only posts from your friends. Doesn't get rid of ads, unfortunately, but it does get rid of all the crap from recommended pages, etc...
You can't, and I've watched as they've added/removed UI to indicate that you can even press it. I'm glad the feature is there, but it's clear Meta doesn't want you finding it.
You're saying roughly "you can't trust the first answer from an LLM but if you run it through enough times, the results will converge on something good". This, plus all the hoo-hah about prompt engineering, seem like clear signals that the "AI" in LLMs is not actually very intelligent (yet). It confirms the criticism.
Not exactly. Let's say, you-the-human are trying to fix a crash in the program knowing just the source location. You would look at the code and start hypothesizing:
* Maybe, it's because this pointer is garbage.
* Maybe, it's because that function doesn't work as the name suggests.
* HANG ON! This code doesn't check the input size, that's very fishy. It's probably the cause.
So, once you get that "hang on" moment, here comes the boring part of setting breakpoints, verifying values, rechecking observations, and finally fixing the thing.
LLMs won't get the "hang on" part right, but once you point it right in their face, they will cut through the boring routine like there's no tomorrow. And you can also spin up 3 instances to investigate 3 hypotheses and hand you some readings on a silver platter. But you-the-human need to be calling the shots.
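Very roughly, "spin up 3 instances" could look like the sketch below. This is just an illustration, not anyone's actual setup; ask_llm() is a stub standing in for whatever model API you use.

```python
# Toy sketch of "spin up N instances to investigate N hypotheses".
# ask_llm() is a stand-in stub; wire it to whatever model API you actually use.
from concurrent.futures import ThreadPoolExecutor

def ask_llm(prompt: str) -> str:
    return "(stub) model output would go here"  # replace with a real API call

hypotheses = [
    "This pointer is garbage at the crash site.",
    "That function doesn't do what its name suggests.",
    "The input size is never checked before the copy.",
]

def investigate(hypothesis: str) -> str:
    # One instance per hypothesis, each with a narrow, single-purpose prompt.
    return ask_llm(
        "Given the crash at the known source location, investigate exactly one hypothesis:\n"
        f"{hypothesis}\n"
        "List evidence for and against, plus the breakpoints and values to check."
    )

with ThreadPoolExecutor(max_workers=len(hypotheses)) as pool:
    for hyp, finding in zip(hypotheses, pool.map(investigate, hypotheses)):
        print(f"--- {hyp}\n{finding}\n")
```

The human still wrote the hypotheses and still decides which finding to act on; the instances only do the legwork.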
You can make a better tool by training the service (some of which involves training the model, some of which involves iterating on the prompts behind the scenes) to get a lot of the iteration out of the way. Instead of users having to fill in a detailed prompt, we now have "reasoning" models which, as their first step, dump out a bunch of probably-relevant background info to try to push the next tokens in the right direction. A logical next step, if enough people run into the OP's issue here, is to have it run that "criticize this and adjust" loop internally.
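A toy sketch of what running that loop internally might look like, again assuming a hypothetical ask_llm() wrapper rather than any specific vendor API:

```python
# Toy sketch of running the "criticize this and adjust" loop on the service side
# instead of making the user do it by hand. ask_llm() is a stand-in stub.
def ask_llm(prompt: str) -> str:
    return "(stub) model output would go here"  # replace with a real API call

def answer_with_self_critique(question: str, rounds: int = 2) -> str:
    draft = ask_llm(question)
    for _ in range(rounds):
        critique = ask_llm(
            f"Question: {question}\n\nDraft answer:\n{draft}\n\n"
            "List concrete errors, omissions, or unsupported claims in the draft."
        )
        draft = ask_llm(
            f"Question: {question}\n\nDraft answer:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the answer, fixing only the issues the critique raises."
        )
    return draft  # the user only ever sees this final pass
```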
But it all makes it very hard to tell how much of the underlying "intelligence" is improving vs how much of the human scaffolding around it is improving.
Yeah, given the stochastic nature of LLM outputs, this approach and the whole field of prompt engineering feel like a classic case of cargo cult science.