Despite the headline, this is not a toothbrush. This is a "toothbrush-shaped ultrasound transducer". Mind you, I don't know why this wouldn't "increase dentist margin". This is an analysis tool that makes dentistry easier (just like dental X-rays).
I should redo my CPR training then. I learned it back in high school in NJ. But I should also remember to read the instructions, though I'm sure when seconds count you don't.
Modern AEDs have voice guidance telling the person what to do. So you can follow the instructions as you do it.
Also, you should call the emergency number in your region and (at least in Australia) they'll transfer you to someone who can coach you through using the defib and performing CPR until professional help arrives.
Don't let that stop anyone from keeping their CPR training up to date, though. The more practice you have, the better equipped you'll be if you ever need it.
I see AEDs at work. If I have a heart attack, I have no confidence in my team being able to use it. I've seen how they handle requirements and documentation in stories.
No, because they’re not exposed to users as features that they interact with. From a user perspective, they don’t ever see if it was made using those tools. They see the AI front and center.
They see AI features front and center. They don't see the backend (which may or may not be running Llama). Same goes for React, although with React the framework renders what the end user directly interacts with. So, by that logic, React is more of a product than Llama.
If you are a user, what feature is React giving you? Pre-React and post-React, Facebook is largely the same to a user.
Pre-GraphQL and post-GraphQL, Facebook is also the same.
To the user, neither of those are expressed as features.
Llama / Meta AI is a feature that changes how a user interacts with the system. When it was added, it was noticeable. If it were taken away, it would be noticeable. Nobody was boycotting React.
That is the difference between an implementation detail and a product/feature.
I'm not sure I understand this analogy. If Meta changes what underlying model they use, it'll still be called Meta AI. Having the criteria be "what new features does it offer to the users of the site" means that things like databases, servers, etc. are not products. This is objectively false.
He wasn't having trouble with his WiFi since he was connected directly to the device. Although it might have helped to specify an Ethernet link. Still, chatbots are generally a terrible user experience.
Totally agree with you on chatbots, but you just committed the same "sin" as him! To most people, and probably to that chatbot, WiFi == Internet == Ethernet. Actually, I doubt specifying Ethernet would have helped him at all. When it asked him if he was having trouble with his WiFi, he should have just said yes, and then "spammed 0" if that didn't help. Or maybe he should have just immediately started spamming 0. Now that I think about it, for anything more technical than unplugging the device, you should probably start by asking for a human.
Superconductors do not generate heat for a constant DC current. Computers are very, very AC, and you do get heat production anytime the current changes.
It is, IMO, a bit dubious whether or not anything is truly flowing in a superconductor at constant current. Electrons don't have identity, so the 'constant flow of electrons' can be rephrased as 'the physical system isn't changing'... and the degree to which you can tell that there are electrons moving about is also the degree to which the superconductor isn't truly zero-resistance.
You can tell there's a magnetic field, certainly. My argument is essentially one of nomenclature; I don't feel a constant electron-field should count as 'flowing'.
Of course it isn't actually constant -- there are multiple electrons, and you can tell that the electron field is quantized. But the degree to which that is visible is the exact degree to which the superconductor nevertheless doesn't superconduct!
Their customers aren’t going to build their own RAG and agent frameworks, vector DBs, data ingest pipelines, finetunes, high scale inference serving solutions, etc, etc.
Right, but they can just use Llama/Mistral for free, instead of their inferior models, which I'm sure take quite a bit of resources to train in the first place.
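To make the "build your own RAG" point concrete: even the simplest retrieval step means embedding documents, scoring similarity, and picking the top matches before anything reaches the model. A toy sketch of just that step, using hand-made vectors in place of a real embedding model and vector DB (all names and numbers here are illustrative, not any vendor's API):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    # index: list of (text, vector) pairs; return the top-k texts
    # ranked by similarity to the query vector.
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy "document store" with made-up 3-dimensional embeddings.
index = [
    ("billing FAQ", [0.9, 0.1, 0.0]),
    ("shipping policy", [0.1, 0.8, 0.2]),
    ("returns policy", [0.2, 0.7, 0.3]),
]

print(retrieve([0.85, 0.2, 0.0], index, k=1))  # → ['billing FAQ']
```

In a real system each of these stubs becomes its own engineering problem (embedding models, index sharding, ingest pipelines, serving at scale), which is exactly why most customers pay someone else to do it.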