Hacker News | imrehg's comments

Reminds me of PoC||GTFO.

Loads of fun in both. And it does indeed sometimes make me question whether I know anything about programming (in a good way :)


Yes, the site seems to save the checked cocktails into Local Storage (you can click some and verify in your browser's inspection tools, e.g. in Firefox: Inspect > Storage > Local Storage, where there's a key named "cocktail-tracker").

I've checked, and closing/reopening works (locally only, of course; no incognito tabs, etc...)
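
For what it's worth, the pattern is trivial; here is a minimal sketch (not the site's actual code, just an assumption of how it might work — only the "cocktail-tracker" key name comes from what's visible in the storage inspector):

    // Persist the set of checked cocktails in localStorage so it survives
    // closing/reopening the tab, but stays local to this browser profile.
    const STORAGE_KEY = "cocktail-tracker";

    function loadChecked(): Set<string> {
      const raw = localStorage.getItem(STORAGE_KEY);
      return new Set(raw ? (JSON.parse(raw) as string[]) : []);
    }

    function toggleCocktail(name: string): void {
      const checked = loadChecked();
      checked.has(name) ? checked.delete(name) : checked.add(name);
      localStorage.setItem(STORAGE_KEY, JSON.stringify([...checked]));
    }

    // e.g. toggleCocktail("Negroni"); loadChecked().has("Negroni") -> true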


The news TV channels in Taiwan that I usually watch very often use videos from Threads for local news reporting (stuff sent in by the public). X-originated clips have pretty much disappeared from that same use. This doesn't account for millions of users (not by a long shot), but it's definitely a noticeable shift.


I still have my Thinkpad X201 from 2011 running, and I use it as my personal machine (I have an M1 MacBook from/for work).

Of course, I had to replace the hard drive once or twice and the whole motherboard once[0]. And even though it's 64-bit, the CPU arch (Westmere) lacks some instructions, which makes some things non-functional (MongoDB and some Steam games don't start), and I had to limit the CPU frequency so it doesn't go into thermal shutdown. Nonetheless it's still a joy to use, and I boot it up with pleasure every time...

I keep wondering when I'll pull the trigger on a Framework, though at least I don't feel too much pressure just yet. :)

[0]: https://gergely.imreh.net/blog/2022/07/an-open-heart-motherb...


This feels like an edge case.

The "reasonable limit" is likely set based on experimentation, and thus on how much people post on average and the load it generates (so the real number is unlikely to be exactly "2000", IMHO).

If you follow a lot of people, how likely is it that their posting pattern is so different from the average? The more people you follow, the less likely that is.

So while you can end up in such a situation in theory, it would have to be a very unusual (and rare) case.
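
(A toy simulation of that intuition, with completely made-up numbers: draw per-account daily post counts from a skewed distribution and check how often the average across N followed accounts ends up far above the overall mean.)

    // Made-up posting distribution: most accounts post a little, a few post a lot.
    function samplePostsPerDay(): number {
      return Math.random() < 0.9
        ? Math.floor(Math.random() * 5)      // quiet account: 0-4 posts/day
        : Math.floor(Math.random() * 100);   // prolific account: 0-99 posts/day
    }

    // Probability that the average across n followed accounts exceeds `threshold`.
    function probAverageExceeds(n: number, threshold: number, trials = 10_000): number {
      let hits = 0;
      for (let t = 0; t < trials; t++) {
        let total = 0;
        for (let i = 0; i < n; i++) total += samplePostsPerDay();
        if (total / n > threshold) hits++;
      }
      return hits / trials;
    }

    // The overall mean here is ~7 posts/day; the chance that a feed of n accounts
    // averages more than double that shrinks rapidly as n grows.
    for (const n of [10, 100, 1000]) {
      console.log(`following ${n}: P(avg > 15/day) ≈ ${probAverageExceeds(n, 15)}`);
    }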


Where's that Alain Bertaud quote from? I'd be interested in listening/reading more about his stuff.


I do not know where the quote is from, but he wrote a book: Order Without Design.


I found Notion's URL schema interesting as well. They have to contend with renames of pages, reorganisation of the hierarchy and all that. So they have something like:

    notion.so/:account/Current-Name-of-Page-:pageid 
where the name changes if the page is renamed, but the redirect works, as the page ID is unchanged. In fact, one can just use

    notion.so/:account/:pageid 
and get redirected to the right page, or even

    notion.so/:account/Anything-else-:pageid
works too...

This is very handy in my use cases, where various Notion data is extracted into another tool, reassembled, and then needs a link back to the original page. I don't need to worry about the page's name, how that name gets converted into the URL, or any race conditions...

The page hierarchy then lives only in the navigation, not in the URL, so moved pages continue to work too (even if this makes it look like a flatter hierarchy than it really is).

I'm sure there are plenty of drawbacks, but I've found it an interesting, pragmatic solution.
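
(For illustration, a rough sketch of how such a resolver might work; this is an assumption, not Notion's actual code, and the 32-hex-character page ID format and the lookup table are just hypothetical stand-ins:)

    // Resolve any ".../<anything>-<pageid>" slug purely from its trailing ID.
    const PAGES: Record<string, string> = {
      // hypothetical page ID -> current page title
      "be2839f0e1454aaaa111222233334444": "Current-Name-of-Page",
    };

    function canonicalPath(account: string, slug: string): string | null {
      // Dashes in the slug (and in the ID itself) don't matter; only the
      // trailing 32 hex characters identify the page.
      const match = slug.replace(/-/g, "").match(/[0-9a-f]{32}$/);
      if (!match) return null;
      const title = PAGES[match[0]];
      if (!title) return null;
      // If the incoming slug's words differ, the server can redirect here.
      return `/${account}/${title}-${match[0]}`;
    }

    // All of these resolve to the same page:
    canonicalPath("myaccount", "Current-Name-of-Page-be2839f0e1454aaaa111222233334444");
    canonicalPath("myaccount", "Anything-else-be2839f0e1454aaaa111222233334444");
    canonicalPath("myaccount", "be2839f0e1454aaaa111222233334444");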


I've noticed that Confluence, Reddit, and a good number of news sites do the same thing. Usually the title segment is entirely ignored, meaning you can prank someone by changing the title part to something shocking and it still redirects to the usual page, because the server only cares about the ID bit.

The fact that so many sites do this (including "normie" news sites) shows that site designers clearly believe users want and expect "informational"/"denormalized" URLs, rather than /?id=123.


It’s an SEO thing.

The better way to implement this is to serve a 301 redirect when the words in the URL don't match the expected ones; that avoids the trickery and also removes the risk of the same page being accidentally indexed as duplicate content.
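
(A minimal sketch of that approach, using Express purely for illustration; the route shape and the ARTICLES lookup table are hypothetical:)

    import express from "express";

    const app = express();

    // Hypothetical lookup: article ID -> current canonical slug.
    const ARTICLES: Record<number, string> = { 123: "how-urls-should-work" };

    app.get("/posts/:id/:slug", (req, res) => {
      const id = Number(req.params.id);
      const canonical = ARTICLES[id];
      if (!canonical) return res.status(404).send("Not found");
      if (req.params.slug !== canonical) {
        // Wrong or outdated words in the URL: permanently redirect to the
        // canonical address instead of serving duplicate content.
        return res.redirect(301, `/posts/${id}/${canonical}`);
      }
      res.send(`Rendering article ${id}`);
    });

    app.listen(3000);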


There is (was?) also a terrible flaw with this, in that any site hosted on Notion (even one using its own domain) would show you any public page given a valid page ID.

So, if Paul Graham hosted his site via Notion (he doesn't), I could link someone to `https://paulgraham.com/Why-I-Hate-Hackernews-be2839f0-e145-4...` and it would show my (fake) page on PG's domain.


I don't understand why the :pageid needs to be prefixed with anything here.


So you can tell what the URL might point to by looking at it. That’s one of the important things mentioned in the article linked on this HN post: URLs are used by both computers and people.


It's for humans, I would expect, so you know what the page refers to without opening it.

Kinda smart.

Also, taking it from the end means you only need to parse the string as an offset from the end. That can make load balancing much faster, in theory.


It's also for crawlers. When doing technical SEO, a human-readable slug in the URL is low-hanging fruit that is often overlooked. This, along with a <title> structure of `CURRENT_CONTENT_TITLE | WEBSITE_NAME`, is quite trivial to implement and provides a significant uplift in both SEO and UX.


It’s for SEO. Perhaps it’s a historical concern and no longer relevant, but the URL is/was used by Google to understand what the page is about.


I believe Amazon does something similar for product pages. The part after `/dp/` is the product ID that really matters.


Why end with the id instead of doing it like SO, as per the example in the article?


Besides all the other advice about using the password manager as a 2FA store as well, on the stand-alone side there is Aegis. I have good experience with it, and it allows better interoperability than Authy as well.


Very interesting, especially when compared to shogi (Japanese chess), where captured pieces can be dropped back in anywhere on the board. So for shogi players this "ideal square" calculation can be even more natural and more flexible: besides "getting existing pieces from A to B", the "drop on B" option is a lot simpler. No wonder that piece exchanges (so there is something in hand to drop) are a basic feature of the gameplay.

(Source: being a fan of shogi but very very very early in my learning journey, so experts would likely describe this differently.)


That extra dimension makes Shogi such a brain burner. It also forces something of a permanent middlegame.


There is a chess variant like this - Crazy house.


> There is a chess variant like this - Crazy house.

Interesting, I've always heard it referred to as "Bughouse" or "Siamese chess".

https://en.wikipedia.org/wiki/Bughouse_chess


Bughouse is the 4 player version played on 2 boards. A lot easier to play with conventional physical boards.


Bughouse is the 4 player, 2 boards variant. You capture your opponent's pieces and give them to your team mate who can then drop them on their board. That way there is no problem with the piece being the wrong color to drop.

Shogi solves that by having the pieces be flat with the two colors on different sides, afaik.

Crazyhouse is online only, so it has no problem switching the color of the captured piece.


the two-sidedness of Shogi pieces is to deal with promotions, not control.

the pieces have a pointy end, which indicates control; you just rotate the piece so it's pointing the correct way when you drop it. when promoted pieces are captured, they also do lose their promoted status.

you can indeed DIY an OTB crazyhouse set with either flippable or directional pieces (here, you can use a marker to mark 180-rotational symmetric chess pieces so they have a "front") though, or even just use a Shogi set to do so with a piece mapping (ignoring the promoted side).


It does seem like it'd be simple to have a second chess set to play Crazyhouse in person, considering you need two sets to play bughouse anyway, just without the 4 players.


Looking at the two specs, it's interesting to see how Frontier (the first, running AMD CPUs) has much better power efficiency than Aurora (the second, running Intel): 18.89 kW/PFLOPS vs 38.24 kW/PFLOPS respectively... Good advertisement for AMD? :)
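
(Back-of-the-envelope check; the Rmax and power figures below are my rough recollection of the Top500 list at the time, not something taken from the article, so treat them as approximate:)

    // kW per PFLOPS, lower is better.
    const systems: Record<string, { pflops: number; kw: number }> = {
      "Frontier (AMD)":  { pflops: 1206, kw: 22786 },
      "Aurora (Intel)":  { pflops: 1012, kw: 38698 },
    };

    for (const [name, { pflops, kw }] of Object.entries(systems)) {
      console.log(`${name}: ${(kw / pflops).toFixed(2)} kW/PFLOPS`);
    }
    // -> roughly 18.89 vs 38.24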


These days this is true from top to bottom: desktop, servers, ... Even in gaming, the 7800X3D is cheaper than the 14700K, it is also more performant, and yet it uses roughly 20% less power at idle, and the gap only grows at full load.

AMD's current architecture is very power-efficient, and Intel has more or less relied on pushing extra watts through the chip to catch up in performance.


Is there any good estimate of how much of AMD’s power efficiency advantage can be attributed to TSMC’s process vs Intel’s? I know in GPUs AMD doesn’t enjoy the same advantage vs nVidia since they’re both manufactured by TSMC, and with nVidia actually being on a smaller node, iirc.


The 7800X3D maxes out around 80 watts (it has to be gentle on the V-Cache), the 14900K can go up to 300 W out of the box (though Intel is issuing a new BIOS to limit that), and they trade blows in gaming.

I would say that's a bit more than process efficiency?

https://youtu.be/2MvvCr-thM8?t=423


Oh, certainly there are significant architectural advantages, especially for the vcache SKUs in gaming. It would just be interesting to see how much TSMC is still (or maybe further) ahead of Intel. Intel was so used to having the process advantage vs AMD that their architecture could afford to be less efficient. But now that they're the ones behind in both process and arch, they're really hurting, especially on mobile now that AMD is making inroads and Snapdragon X is about to get a serious launch in a week. I'm typing this on a ThinkPad 13s with a Snapdragon 8cx CPU running Windows, and it's a pretty usable device that lasts much longer on a smaller battery than my comparable Intel laptop. It seems to particularly use much less power on standby, although it can't seem to wake up from hibernation reliably.


Aurora has 21K Xeons and 64K Intel X(e) GPUs which provide most of the compute power. The GPUs are made by TSMC.

https://en.wikipedia.org/wiki/Intel_Xe


I was under the impression that AMD desktops/home servers generally don't go below 15-20 W, while Intel can get down to 4-6 W idle for the full system. Has that changed? AMD seems to generally be the better perf/$, but I thought power usage at idle was their big drawback for desktops/low-usage servers.

IIRC the numbers I've read are that (at least desktop) Intel CPUs should be using something like 0.2 W package power at idle if the OS is correctly configured, regardless of whether it's a performance (K) or "efficiency" (T) model. Most of the power usage comes from the rest of the system.


https://en.wikipedia.org/wiki/Cool%27n%27Quiet

They both have similar frequency and voltage scaling algorithms at this point. You will probably not see 0.2W idle though, both probably idle around 10W on desktop and 5W on laptop. But Intel is getting much more aggressive with "turbo boost" to try to hide their IPC/process deficit vs. AMD/TSMC, to the point that a 14900k will use 120W+ to match the performance of a 7800x3d at 60W.


As far as I can gather, that's not the case. These guys[0] have been crowdsourcing information about power efficiency for a while now, and the big takeaways right now seem to be that

* Intel is the best for idle (there are several people with systems that run at less than 5 W for the full system, using modified old business mini-PCs off eBay). Allegedly someone has a 9500T at less than 2 W full-system power.

* It doesn't matter which Intel processor you use; all of them for many years will get down to 1 W or less for the CPU at idle. A 14900K will idle just as well as an 8100T, which will be much better than a Ryzen 7950X.

* AMD pretty much never gets below 10 W with any of the Ryzen chiplet CPUs. Only their mobile processors can do it, but they don't sell them retail and they're usually (always?) soldered.

* Every component except the CPU is more important. Your motherboard and PCIe devices need to support power management. You need an efficient PSU (which has nothing to do with the 80-plus rating, since that doesn't consider power draw at idle). One bad PCIe device like an SSD or a NIC can draw tens of watts if it breaks sleep states. Unfortunately, this information seems to be almost entirely undocumented beyond these crowdsourcers.

For a usually idle home-server, Intel seems to be better for power usage, which is unfortunate because AMD tends to have more IO and supports ECC.

[0] https://www.hardwareluxx.de/community/threads/die-sparsamste...


Also the delta between theoretical performance and benchmarked performance is much smaller for Frontier (AMD) than for Aurora (Intel).

That being said, note that the software is also different on the two computers.


Wouldn't be surprised if it's the same thing: more watts used, more heat, more throttling.

