In this variant of the Monty Hall problem, since all the doors are transparent, you can clearly see what is behind each door. This changes the nature of the problem entirely. The element of uncertainty, which is present in the original Monty Hall problem, is no longer a factor.
In this scenario, if you pick door No. 1 and see a car behind it, you should stick with your choice, as there is no advantage to switching. If you pick door No. 1 and see a goat behind it, you should switch to door No. 2, as you can clearly see the car behind it.
Since you can see what's behind the doors, the probability of winning the car is no longer based on conditional probability, and the original Monty Hall paradox does not apply. Instead, your decision is simply based on your observation of what's behind each door.
Somehow, this would be one of the most impressive things I've read about GPT-4. It's really difficult to argue that it doesn't have a well-founded understanding of the question, assuming, of course, that this wasn't actually in its training set.
And I see someone DID ask GPT-3.5-based ChatGPT the same question at least a month ago [1], so OpenAI certainly has it on record. That's long enough ago that it could well have been used to fine-tune GPT-4.
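For anyone sanity-checking the contrast with the original, opaque-door version: the 2/3 advantage of switching there falls out of a quick Monte Carlo run. A throwaway sketch (mine, not GPT-4's):

```cpp
#include <cstdio>
#include <random>

// Classic (opaque-door) Monty Hall: the host always opens a goat door you
// didn't pick; count how often staying vs. switching wins the car.
int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> door(0, 2);
    const int trials = 1000000;
    int stay = 0, swap = 0;
    for (int i = 0; i < trials; ++i) {
        int car  = door(rng);
        int pick = door(rng);
        if (pick == car) ++stay;   // staying wins only if the first guess was right
        else             ++swap;   // otherwise the host's reveal leaves the car for the switch
    }
    std::printf("stay:   %.3f\n", double(stay) / trials);  // ~0.333
    std::printf("switch: %.3f\n", double(swap) / trials);  // ~0.667
}
```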
Curious that they translated it to German based on my phone settings for a product that only supports US banks? (I don’t mind that it is US banks, just… why did they pay a human to translate it?)
In order to reach that end you need to break character and choose to stop making paperclips. As a human player you always have this choice; you can stop playing anytime. The paperclip AI will always choose to make more paperclips.
I suspect I misread your comment in that case, so I apologize. Though if everyone else did as well, perhaps the comment was ambiguous?
> or misremembering the end of the game.
Exactly, there is a point that is pretty clearly "the end of the game". The fact that one can continue playing after that point doesn't make it less of an ending.
My comments are not ambiguously worded. They are made concise so my point can't be missed, yet it still is because readers are mistaking conciseness for lack of understanding.
The end of the game is something a human player reaches and is satisfied with; a paperclip-producing AI would not choose a path that results in no more paperclips being made.
If many people misinterpret a piece of writing or miss its point then that seems like it is empirically "ambiguously worded", regardless of how clear it seemed to you.
(You are of course free to think and write how you please, but attributing all comprehension errors to readers may limit the reach of your writing.)
You can convert the entire universe into paperclips and reach the end credits in a few hours. The start of the game can be sped up by setting your keyboard autorepeat to maximum and holding down Enter to press the buttons.
Can. In order to reach that end you need to break character and choose to stop making paperclips. The paperclip AI is always presented with the choice to make more paperclips or not. Which would it choose?
I replayed it recently. Fairly sure that I just had to choose to not come to terms with my enemies and then to continue turning things into paperclips.
Why are you so confident in this hypothesis? Did you create the game?
It is not at all clear that every hypothetical AGI would do as you say. It’s fiction. Anything can happen.
In fact, this AGI almost definitely wouldn’t accept the simulation offer. Otherwise our protagonist would have been making simulations and resetting them instead of doing the hard work of turning the actual universe into paperclips.
First time through, I chose to stop, declining the offer to continue - which was in character: not giving in to anything that meant no longer making everything into paperclips. Then I learned how done “done” is.
iirc, it's complicated. Some instructions don't reduce the frequency; some reduce it a little; some reduce it a lot.
I'm not sure AVX2 is as ubiquitous as the README says: "We assume AVX2 support which is available in all recent mainstream x86 processors produced by AMD and Intel."
I guess "mainstream" is somewhat subjective, but some recent Chromebooks have Celeron processors with no AVX2:
It doesn't seem that laughable to me to want faster JSON parsing on a Chromebook, given how heavily JSON is used to communicate between webservers and client-side Javascript.
"Faster" meaning faster than Chromebooks do now; 2.2 GB/s may simply be unachievable hardware-wise with these cheap processors. They're kinda slow, so any speed increase would be welcome.
AVX2 also incurs some pretty large penalties for switching between SSE and AVX2. Depending on the amount of time taken in the library between calls, it could be problematic.
This looks mostly applicable to server scenarios where the runtime environment is highly controlled.
There is no real penalty for switching between SSE and AVX2, unless you do it wrong. What are you referring to specifically?
Are you talking about state transition penalties that can occur if you forget a vzeroupper? That's the only thing I'm aware of which kind of matches that.
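For anyone following along, the "doing it wrong" case is leaving the upper halves of the YMM registers dirty after VEX-encoded 256-bit code and then executing legacy SSE, which some microarchitectures punish with a state transition. Compilers emit vzeroupper automatically when they generate the AVX code themselves, so you mostly only hit it with hand-written intrinsics or asm. A toy illustration (not simdjson code):

```cpp
#include <immintrin.h>
#include <cstdio>

// Horizontal sum of 8 floats using 256-bit AVX, then return to code that
// may have been compiled as legacy SSE.
float sum8(const float* p) {
    __m256 v  = _mm256_loadu_ps(p);            // VEX-encoded 256-bit load
    __m128 lo = _mm256_castps256_ps128(v);     // lower 128 bits
    __m128 hi = _mm256_extractf128_ps(v, 1);   // upper 128 bits
    __m128 s  = _mm_add_ps(lo, hi);
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    float r = _mm_cvtss_f32(s);
    _mm256_zeroupper();  // clear the dirty upper YMM state before any legacy-SSE
                         // code runs, avoiding the transition penalty
    return r;
}

int main() {
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    std::printf("%.1f\n", sum8(data));  // 36.0
    return 0;
}
```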
For simple scenarios DI is obviously overkill, but as soon as you're dealing with nested dependencies, you'd end up with one of the following:
* monstrously big constructors (for carrying transitive dependencies)
* lots of @VisibleForTesting code to handle manually injecting various dependencies only for the sake of testing (poor man's DI and generally bad practice)
* a lot of factories (service locator or poor man's DI, essentially)
* code that's hard to unit test due to dependencies being hardcoded.
In other words, you'll either reinvent DI poorly, or give up on testability.
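To make that concrete, the thing every one of those options is approximating is constructor injection over an interface. A hand-rolled sketch; the names (PaymentGateway, BillingService) are made up for illustration:

```cpp
#include <cstdio>
#include <memory>
#include <string>

// The dependency is expressed as an interface...
struct PaymentGateway {
    virtual ~PaymentGateway() = default;
    virtual bool charge(const std::string& account, int cents) = 0;
};

// ...and handed in through the constructor, so a test can pass a fake
// instead of the real, network-backed implementation.
class BillingService {
public:
    explicit BillingService(std::shared_ptr<PaymentGateway> gw) : gw_(std::move(gw)) {}
    bool billMonthly(const std::string& account) { return gw_->charge(account, 999); }
private:
    std::shared_ptr<PaymentGateway> gw_;
};

// Test double: no @VisibleForTesting hooks or hardcoded wiring required.
struct FakeGateway : PaymentGateway {
    bool charge(const std::string&, int) override { return true; }
};

int main() {
    BillingService svc(std::make_shared<FakeGateway>());
    std::printf("%s\n", svc.billMonthly("acct-1") ? "charged" : "declined");
    return 0;
}
```

The "monstrously big constructors" case is exactly what a container automates: once BillingService is itself a dependency of something else, and so on a few levels deep, the framework wires that graph instead of you threading it through by hand.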
The problem is not the testing. The problem is when you have classes that depend on other classes that depend on other classes that depend on specific configuration.
I just read http://unix.derkeiler.com/Newsgroups/comp.unix.solaris/2008-... and it seems that they did not solve any problem at all; they simply don't allow overcommit. But they don't do anything else to compensate for the disadvantages this implies (the approach has advantages too, but not only advantages).
You can get that behavior on Linux by tuning it (e.g. the vm.overcommit_memory sysctl) if you like it. On a Unix system, I don't think this is the most desirable behavior in the general case...
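For concreteness, the difference is visible from a trivial program; what the code below does depends entirely on the kernel's overcommit policy, so treat it as an illustration rather than a definitive test:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Request far more memory than this program will ever touch.  Under an
// overcommitting kernel the request is typically granted and pages are only
// backed when first written (risking an OOM kill later); under strict
// accounting (no overcommit, as described for Solaris above) the allocation
// is refused up front instead.
int main() {
    const std::size_t request = 64ULL * 1024 * 1024 * 1024;  // 64 GiB
    void* p = std::malloc(request);
    if (p == nullptr) {
        std::puts("refused up front (strict accounting, or simply too big)");
        return 1;
    }
    std::puts("granted; touching only the first 1 MiB");
    std::memset(p, 0, 1024 * 1024);  // only this region gets physical pages
    std::free(p);
    return 0;
}
```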
I can't remember but I did sit in on a presentation about 15 years ago where they explained it. I lent the notes to a senior developer and never got them back.
It doesn't have an OOM killer. Even more remarkably, a call to allocate memory can't fail, but it may not return either. When Solaris (well, SmartOS in my case) runs completely out of memory, all hell breaks loose.