I've triggered similar conversation-level safety blocks on a personal Claude account by using an instance of Deepseek to feed in Claude output and then create instructions that would be copied back over to Claude (there wasn't any real utility to this; it was just an experiment), which sounds kind of similar to this. I couldn't understand what the heuristic was trying to guard against, but I think it's related to concerns about prompt injection and users impersonating Claude responses. I'm also surprised the same safeguards would exist in either the API or the coding subscription.
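Roughly, the loop looked like the sketch below, assuming the official anthropic and openai Python SDKs. In reality I was pasting between two chat windows by hand, and the model names here are just illustrative:

    import anthropic
    import openai

    claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    deepseek = openai.OpenAI(base_url="https://api.deepseek.com", api_key="...")

    message = "Start an open-ended conversation."
    for _ in range(5):
        # Get Claude's reply to the current instructions.
        claude_reply = claude.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=512,
            messages=[{"role": "user", "content": message}],
        ).content[0].text

        # Have DeepSeek turn Claude's output into new instructions for Claude.
        message = deepseek.chat.completions.create(
            model="deepseek-chat",
            messages=[{
                "role": "user",
                "content": "Write new instructions for an assistant based on "
                           "this response:\n" + claude_reply,
            }],
        ).choices[0].message.content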
I look at products like Hershey's chocolate or Reese's more like their own category of processed food, kind of like Spam. They have a close, but not exact, resemblance to "normal" chocolate or peanut butter, but they're also sort of an acquired taste, and I think their customers would be upset if Reese's Peanut Butter Cups suddenly tasted like the Trader Joe's versions (with real peanut butter instead of a mysterious chalky peanut-flavored substance), or if Hershey's stopped using the butyric acid process that makes them taste like vomit to non-Americans.
I haven't actually had that much luck with having them output boring API boilerplate in large Java projects. Like "I need to create a new BarOperation that has to go in a different set of classes and files and API prefixes than all the FooOperations and I don't feel like copy pasting all the yaml and Java classes" but the AI has problems following this. Maybe they work better in small projects.
I actually like LLMs better for creative thinking because they work like a very powerful search engine that can combine unrelated results and pull in adjacent material I would never personally think of.
> Like "I need to create a new BarOperation that has to go in a different set of classes and files and API prefixes than all the FooOperations and I don't feel like copy pasting all the yaml and Java classes" but the AI has problems following this.
One thing I suspect is that leadership at tech companies, people who would previously have been working from direct experience with technical processes even if they no longer work directly on their own codebases, is pretty clueless about AI coding because it's so new. All they have to go on is what they read, or sales pitches, or their experience dabbling with Cursor to build simple Python utilities (which AI tools work pretty well for most of the time), and they don't see what it can and can't do on a real codebase.
These are people who are shaken by the stock market. I'd be looking at reducing your exposure from index funds, or, if you were stupid enough to invest in tech stocks directly, cashing out now.
I recently ran across this toaster-in-dishwasher article [1] again and was disappointed that the LLMs I have access to couldn't replicate the "hairdryer-in-aquarium" breakthrough (or the toaster-in-dishwasher scenario, although I haven't explored that as much), which has made me a bit skeptical of the ability of LLMs to do novel research. Maybe the new OpenAI research AI is smart enough to figure it out?
This looks like my experiments to get R1 to write fiction, and I think it’s worse than what you get from OpenAI. For instance, it’s using very colorful language to describe a place that’s both a remote fishing village on the edge of a cliff hours before dawn and a bustling wharf with chattering laborers and large ships anchored in the distance. It also starts by saying the protagonist wakes up with his mouth tasting like blood, that he was screaming, and that his throat is hoarse from holding back from screaming. It’s very colorful, but it’s very confusing to read.
I suspect you can update the prompt to make the setting more consistent, but it will still throw in a lot of inappropriate detail. I’m only nitpicking because my initial reaction was that it’s very vivid but feels difficult to understand and I wanted to explain why.
I agree that it felt hard to read. It also doesn't make sense that they're fishing in a storm. But from a prose perspective I don't think it's cringe, which is an improvement from my expectation. I'd share some of the writings I think are terrible but I don't like to pick on people.
Somebody (probably a programmer or engineer) took the time to create all of that rad 3D word art, the multicolored pie chart, and the mountain logo; it's not hard to imagine they'd also throw in an eye-catching fake nuclear warhead for fun.
The barcode scanner drops the likelihood of typos in part and serial numbers.
I've heard it said that an airplane is ten thousand parts flying in formation: keeping track of maintenance and replacement parts is important in many (safety-critical) industries, and so having a scanner that one doesn't have to remember, lug around, and fumble with [0] could be useful.
Government procurement contracts for these probably wanted it since they need a way to inventory assets and this gives an all-in-one solution. Military and law enforcement are probably the main purchasers of these.
15 years ago, if you were a delivery company you might use something like a Panasonic Toughbook CF-U1 in each van. Rugged, built-in GPS and 3G, and it runs a full copy of Windows XP. You want a dashboard-mounted docking station? How about a docking station designed for 100,000 connect-disconnect cycles?
The barcode scanner, believe it or not, was useful to scan barcodes.
Not for scanning barcodes; I've seen them in action, they're noticeably slower and the image processing seems more involved than whatever magic is in barcode scanners.
The barcode scanning in the Libib app (an app to keep track of books) seems quite fast; it is quite addictive to just zap the barcodes and see the book data added to the catalog.
It seems comparable to the laser-based scanners used at library checkouts, for example.
To a degree, but dedicated barcode scanners are still a fair bit faster. So anything dealing with high volumes of barcodes, like warehousing inventory or store stock management, benefits from a dedicated scanner.
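Part of why zapping book barcodes works so reliably, incidentally, is that EAN-13 (the barcode on a book's back cover, which encodes its ISBN-13) carries a check digit, so a misread usually fails validation instead of silently returning the wrong book. A minimal sketch of the check:

    def ean13_is_valid(code: str) -> bool:
        """Validate an EAN-13 barcode, e.g. an ISBN-13 like 9780306406157."""
        if len(code) != 13 or not code.isdigit():
            return False
        digits = [int(c) for c in code]
        # Weights alternate 1, 3, 1, 3, ... over the first twelve digits.
        total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
        return (10 - total % 10) % 10 == digits[12]

    print(ean13_is_valid("9780306406157"))  # True
    print(ean13_is_valid("9780306406158"))  # False: one digit off, check fails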
I work on Public Safety applications. Tracking the chain of custody for items collected by officers is very important, and many vendors have support for this. Officers can print labels from their car and begin maintaining a custody log from out in the field. Officers can scan labels with barcode scanners or phone cameras. They also often use barcode scanners to scan drivers licenses, which can have 1D or 2D information encoded in them.
One possible use case would be for police officers to scan driver licenses when pulling people over. When I worked for a city IT department, they had to have separate barcode readers installed in the cars for that, so I imagine it'd be nice to have it integrated into the laptop.
Probably a fair number of document, media, or small part inventory tasks where the laptop would be on a cart. Sometimes it's easier to bring the barcode to the scanner than to bring the scanner to the barcode.
It's for public safety / law enforcement use cases. ID cards like driver's licenses have a barcode on them; officers can read the barcode in their police software to look you up and get your warrants, driving record, etc., and digitally ticket you.
For Public Safety usage, almost all IDs now have at least a 1D barcode, and most have a 2D barcode. When the usage environment permits their use, barcode readers permit quick and accurate data entry versus manual keyboard entry.
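For a sense of what the 2D data entry involves: the PDF417 barcode on US licenses follows the AAMVA standard, essentially a list of records keyed by three-letter element IDs. A heavily simplified sketch; the element IDs are real AAMVA fields, but the sample payload is invented, and a real payload has a header and subfile offsets that this ignores:

    # A few real AAMVA element IDs, mapped to friendly names.
    FIELDS = {
        "DCS": "family_name",
        "DAC": "first_name",
        "DBB": "date_of_birth",
        "DAQ": "license_number",
    }

    def parse_aamva(payload: str) -> dict:
        """Map three-letter AAMVA element IDs to named fields."""
        record = {}
        for line in payload.splitlines():
            key, value = line[:3], line[3:]
            if key in FIELDS:
                record[FIELDS[key]] = value
        return record

    # Invented sample payload; a real one is longer and binary-prefixed.
    print(parse_aamva("DCSDOE\nDACJANE\nDBB19900101\nDAQD1234567"))
    # {'family_name': 'DOE', 'first_name': 'JANE',
    #  'date_of_birth': '19900101', 'license_number': 'D1234567'}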
This study tested sun sign (which is basically birth month, I think) against personality tests for predicting life outcomes and found that sun sign did very poorly compared to the personality tests. I'd have thought there would be a small chance of birth month predicting some things, and that adding in other astrological facts (the position of Jupiter or whatever) would make things worse, but both methods appear to be equally bad.
If I wanted to see the heat index at 3PM in Dark Sky, I could just tap the "feels like" button under the hourly forecast (pictured further down in the linked blog post) and look at what it says at 3PM.
I just tried in Apple Weather, and the process was:
1. Tap on the hourly forecast, or the day, to go into the graph screen
2. Tap on the dropdown icon
3. Tap "feels like"
4. Either drag your finger along the graph until the time indicator at the top shows you're close to 3PM and read the temperature there, or try to read it directly off the graph, though the axes aren't labeled clearly enough to make that feasible.
Why would you need degree-perfect precision in a subjective measurement? Eyeball it. It’ll feel like around 90ish. Or it’ll feel like around 85ish. There’s no reason for an indication that it’ll feel like 87.500.
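For what it's worth, the "feels like" value for heat is typically the NWS Rothfusz regression, a polynomial fit of temperature and relative humidity that is itself only accurate to within a degree or so Fahrenheit, which is another reason decimal precision would be meaningless:

    def heat_index_f(temp_f: float, rel_humidity: float) -> float:
        """NWS Rothfusz heat index regression (temp in F, RH in percent)."""
        t, rh = temp_f, rel_humidity
        return (-42.379 + 2.04901523 * t + 10.14333127 * rh
                - 0.22475541 * t * rh - 6.83783e-3 * t * t
                - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
                + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)

    # 92F at 60% humidity "feels like" about 105F; the fit's own error
    # is around +/-1.3F, so anything past the ones digit is noise.
    print(round(heat_index_f(92, 60)))  # 105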