Hacker News | araes's comments

What CSS needs (opinion, caveat, really far away in advanced features) is:

- A way to get values from sliders (type="range") (and style and modify them easily). Really, a way to get "value" from any <input> element.

- A way to have radio / checkboxes or labels inside labels work for multiple choices (or something other than making enormous arrays of radio selections and CSS choices)

- A way to have indexed arrays of numbers / choices (other than using a crazy complicated animation timing trick)

- String concatenation that's not so incredibly finicky and difficult to implement correctly

- Some way to retain the final "state" of a condition without resorting to playing an animation in "forwards" mode. Also, related, some way to not require the hidden checkbox hack everywhere.

- (Advanced, lower priority) A way to force var() calculation and caching other than making a zillion @property statements. "This expensive trig calculation is used a hundred times later on, better make it a separate ... nvm, it gets pasted into every other calculation on the entire page ... " ¯\_(-_')_/¯

- If() and Function() ... wait, /inappropriate_swear, we're actually getting these after a quarter century.
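For context, the "checkbox hack" mentioned above looks roughly like this (a minimal sketch; the ids and markup are made up for illustration): a hidden checkbox stores one bit of state, and :checked plus a sibling combinator reads it back without any JS.

```html
<!-- Hidden checkbox stores the "open" bit; no JS involved.
     Ids here are hypothetical, purely for illustration. -->
<style>
  #menu-toggle, #menu { display: none; }
  #menu-toggle:checked ~ #menu { display: block; }
</style>
<input type="checkbox" id="menu-toggle">
<label for="menu-toggle">Menu</label>
<nav id="menu">...</nav>
```

Clicking the label toggles the checkbox, and the ~ combinator re-styles the sibling nav; the state-retention and "hidden checkbox everywhere" complaints above are about exactly this pattern.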


It looks like HTML may be getting the ability to manipulate the class/attributes of elements in response to clicks (declaratively, without JS). Which would solve a lot of the interactivity issues (checkbox hack, etc).

See: https://developer.mozilla.org/en-US/docs/Web/API/Invoker_Com...


Thanks. Saw this, thought it looked cool. Just could not find what it can do, other than "popover" / "popovertarget", without using JS.

The examples provided for custom commands all seem to use JS anyway. Ex:

  <button commandfor="the-image" command="--rotate-landscape">Rotate</button>

  <script>
    const image = document.getElementById("the-image");
    image.addEventListener("command", (event) => {
      if (event.command === "--rotate-landscape") image.style.rotate = "-90deg";
    });
  </script>
Agree it would be cool; it just doesn't seem to do a lot. It's a generalized "popovertarget", except that basically means "popover" plus a modal <dialog> in most of the examples, and the modal dialogs are almost always "popover" anyway. And you can already make one "popover" appear over another with multiple onscreen. Maybe there's something worth a closer look. [1]

[1] https://developer.chrome.com/blog/command-and-commandfor

Edit: Here's an example of the overlapping popover functionality. https://araesmojo-eng.github.io/

Main Page -> General Tests Menu -> Open Popover 1 | Open Popover 2

Should both open popovers simultaneously on top of each other while dismissing neither, and also change the state of elements on the page at the same time.


This kind of article and the people who write this type of description are a lot of what causes America to be this way.

"Why are you unhappy?" "Cause you're dumb and you should be happy, cause the stock market has lots of money, and we can buy lots of material stuff" Almost every argument in the entire article is BS.

"wealth isn’t confined to only the top percent" "Americans are materially wealthy" Almost every argument is money, money, money, money, money...

Wealth is completely confined to the top 1%. The entire government shut down for a month, and the part that actually got attention? ... "there's no food stamps". 40 million people won't have Supplemental Nutrition Assistance Program benefits.

Standard response from IRS filings: 50% of America only needs to file for the rebate, because they barely (or don't even) make more than the standard deduction. (Last column, 51-100: 76,794,954 returns, averaging $19,936.70.) The next 25% average less than the often-cited "enough to get by without much difficulty" number of $80k-$100k (for singles), and are way below the $200k cited for a family of four. (Notably, $200k seems a bit excessive; maybe $150k.) [1][2][3][4]

75% of America is beneath "getting by comfortably in America." Generalized results from MIT for CA, TX, FL, and NY have breakdowns by category as examples (covering ~127 million of the 340 million Americans). [5][6][7][8]

In the most recent tax filing season data available (2024), there were tax returns of:

                                        Top 1%       Top 5%      Top 10%       Top 25%       Top 50%   Bottom 50%  All Taxpayers
  Number of Returns                  1,535,899    7,679,495   15,358,991    38,397,477    76,794,954   76,794,954    153,589,908
  Average Income Taxes Paid           $653,730     $187,468     $108,251       $50,963       $27,891         $667        $14,279 
  Adjusted Gross Income (Millions)  $3,872,395   $6,182,180   $7,745,525   $10,613,602   $13,191,209   $1,531,038    $14,722,247
If we then break those into the actual groups, with numbers per group, we find their average per-capita income:

                                             1          2-5         6-10        11-25        26-50       51-100
  Number of Returns                  1,535,899    6,143,596    7,679,496   23,038,486   38,397,477   76,794,954
  Income Taxes Paid (Millions)      $1,004,063     $435,594     $222,966     $294,234     $185,068      $51,225 
  Adjusted Gross Income (Millions)  $3,872,395   $2,309,785   $1,563,345   $2,868,077   $2,577,607   $1,531,038 
  Average Tax Rate                       25.9%        18.9%        14.3%        10.3%         7.2%         3.3%
  Average Per Capita Income      $2,521,256.28  $375,966.29  $203,573.91  $124,490.69   $67,129.59   $19,936.70
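The per-capita and tax-rate rows follow directly from the AGI and return-count rows; a quick sanity check on the top-1% column (a sketch, using the table's figures):

```javascript
// Recompute two cells of the table above from the raw columns.
// AGI and taxes-paid figures are in millions of dollars, as in the table.
const returnsTop1 = 1535899;   // number of returns, top 1%
const agiTop1M = 3872395;      // adjusted gross income, $ millions
const taxTop1M = 1004063;      // income taxes paid, $ millions

// Average per-capita income: total AGI (in dollars) / number of returns.
const perCapita = (agiTop1M * 1e6) / returnsTop1;  // ≈ $2,521,256.28

// Average tax rate: taxes paid / AGI.
const avgRate = (taxTop1M / agiTop1M) * 100;       // ≈ 25.9%
```

The same two lines reproduce every other column in the table.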
[1] https://www.cnbc.com/2025/06/07/salary-a-single-adult-needs-...

[2] https://fortune.com/2025/06/09/sinks-earnings-family-by-stat...

[3] https://www.manchestertimes.com/news/national/how-much-do-yo...

[4] https://livingwage.mit.edu/

[5][6][7][8] (CA) https://livingwage.mit.edu/states/06, (TX) https://livingwage.mit.edu/states/48, (FL) https://livingwage.mit.edu/states/12, (NY) https://livingwage.mit.edu/states/36

TLDR, I agree with you. Found myself less happy after reading their spew article. Lots of other data.


Was really hoping it was an article about making electronics out of fried bread products. "With electrodes wired to our margarine covered breadboard we were able to accomplish ... "

> played a few DM-Morpheus bot matches

This part is one of the features that doesn't seem to get mentioned enough. Unreal Tournament was one of the first franchises to have bot behavior that was well thought out, relatively realistic and challenging, and most importantly worked for every single one of the game types included (including multi-player team objectives).

In addition to just being able to play single player in what would have normally required a functioning online environment and WWW connection for other franchises, you could also fill out large teams with bot players if you could not find enough actual humans to log in and join your game.

Gametypes like Team Deathmatch and Assault were then viable for singles or small groups even when you didn't have a great (or completely lacked a) WWW connection.


Somebody had PC Part Picker [1] on a different thread. Found it kind of helpful. 3x, 3.5x, and 4x price multiples are relatively common across most memory types. DDR4-3200 2x32GB is one of the few that appears to have flattened slightly at 2.5x, although most DDR4-3200 is not as severe as other DDR4 or DDR5. None of the other lines appear to have flattened significantly yet.

[1] https://pcpartpicker.com/trends/price/memory/


General question to people who might actually know.

Is there anywhere that does anything like Backblaze's Hard Drive Failure Rates [1] for GPU Failure Rates in environments like data centers, high-performance computing, super-computers, mainframes?

[1] https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-...

The best that came back on a search was a semi-modern article from 2023 [2] that appears to be a one-off, and mostly relates to consumer-facing GPU purchases rather than bulk data-center, constant-usage conditions. It's just difficult to believe some of these hardware depreciation numbers, since there appears to be so little info other than guesstimates.

[2] https://www.tweaktown.com/news/93052/heres-look-at-gpu-failu...

Found an arXiv paper on continued checking, from UIUC (Urbana, IL), about a 1,056-GPU A100 and H100 system. [3] However, the paper is primarily about memory issues and per-job downtime that causes task failures and work loss. GPU resilience is discussed; it's just mostly from the perspective of short-term robustness in the face of propagating memory-corruption issues and error correction, rather than multi-year, 100%-usage GPU burnout rates.

[3] https://arxiv.org/html/2503.11901v3

Any info on the longer term burnout / failure rates for GPUs similar to Backblaze?

Edit: This article [4] claims a 0.1-2% failure rate per year (0.8% estimated), with no real info about where the data came from (it cites "industry reports and data center statistics"), and then claims GPUs often last 3-5 years on average.

[4] https://cyfuture.cloud/kb/gpu/what-is-the-failure-rate-of-th...
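For comparison, Backblaze reports drive reliability as an annualized failure rate (AFR): failures normalized by device-days. A minimal sketch of what the article's 0.8% figure would mean for a fleet (the fleet numbers here are hypothetical, purely for illustration):

```javascript
// Backblaze-style annualized failure rate (AFR):
//   AFR = failures / (deviceDays / 365) * 100
// Fleet numbers below are hypothetical, purely for illustration.
const gpus = 1000;
const daysObserved = 365;
const failures = 8;

const deviceDays = gpus * daysObserved;
const afr = (failures / (deviceDays / 365)) * 100;  // percent per device-year
// With these numbers, afr = 0.8, matching the article's point estimate.
```

A GPU equivalent of the Backblaze tables would only need per-model failure counts and device-days, which is what makes the absence of published data frustrating.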


To the original authors: the demo really needs a default tileset, with maybe a link to the files. And a progress/upload bar for when you're trying to upload a .zip. Tried it, and it appeared to stall out for minutes on upload with no response.

> spending time to carefully and deeply review a paper because they care and they feel on principal that’s the right thing to do

Generally agree, although there are several parts to that issue.

One of the first was covered by a paper back in 2023 that speaks to the issue of maximum-extraction mode. [1] Fairness, honesty, and loyalty are usually rewarded with exploitation. If you spend time to carefully and deeply review papers, that ironically marks you as someone who can be exploited. You're implicitly marked as someone who will make personal sacrifices for the academic community, and even more awful behavior gets piled on top of you. Unless they're caught doing something especially egregious, the people who don't bother get promoted, spend less time on reviews, and collect further rewards.

[1] https://www.sciencedirect.com/science/article/abs/pii/S00221...

The academic community has talked about this for years: editors and reviewers who don't get paid, or get minimal payment, and sacrifice large amounts of personal time effectively volunteering, while authors pay thousands of dollars for each paper submitted and journals charge tens of thousands for each subscription. It's been talked about for decades, and yet in all that time very little has actually changed.

Another part, on top of the "deeply reviewing papers" issue, is that the sheer volume has massively increased (an issue in a bunch of industries; the sci-fi magazine Clarkesworld had to close submissions for a while in 2023 for similar reasons [2]). In the land of "type a sentence, and get a free academic paper," the extremely prolific are pouring out a paper a month, sometimes more. In areas like clinical medicine, hyper-prolific publishing has hit rates of 70+ papers a year [3], roughly 1.5 papers a week. Every few days somebody cranks out yet another paper that needs to be reviewed. In the article linked, one author submitted 140 articles to a single journal alone: almost three times a week, all year long, there's a paper claiming research worthy of publishing that you need to review.

[2] https://neil-clarke.com/how-ai-submissions-have-changed-our-...

[3] https://www.sciencedirect.com/science/article/pii/S175115772...

One that I have less direct, citeable proof for, yet am rather suspicious of, is that theft has also dramatically increased with a huge surge in invasive monitoring and snooping. If my TV changes what I'm watching, and what's recommended, because I typed a text message to somebody, it seems likely that a lot of academia is also dealing with massive intellectual theft issues. This then heavily prioritizes pouring out material as quickly as possible, with as little effort as possible, to get the equivalent of first post and maximal posts, before it can be scraped, exfiltrated, and published by somebody else.

Finally, a lot of the reward and incentive has become metric chasing. Publish or Perish [4] and the Replication Crisis [5] are relatively well known ideas. Citation is a proxy for the impact of a paper, tenure and advancement are heavily tied to quantity of publications and citations, and researchers would prefer to be cited more. And weirdly, even when the work is junk and does not replicate, in a theme with the above, it has been suggested that nonreplicable publications are cited more than replicable ones. [6] In the linked paper, the view is that when "interesting" findings are published, they get more views, more media, more citations, and lower review standards get applied. Afterward, there's very little social punishment for proving the results false and nonreplicable (or reward for those illustrating the lack of reproducibility). Notably, the paper actually got a counterpoint stating that, in psychology at least, lack of replication eventually predicts citation decline [7] (cited by 10), while the original got its authors ~250 citations and a bunch of media mentions.

[4] https://en.wikipedia.org/wiki/Publish_or_perish

[5] https://en.wikipedia.org/wiki/Replication_crisis

[6] https://www.science.org/doi/10.1126/sciadv.abd1705

[7] https://www.pnas.org/doi/10.1073/pnas.2304862120


Thought the comment was somewhat helpful. It sparked considering the various anti-patterns in automobile design, and searches came back with several others that I had vaguely thought about but never really identified very clearly:

  - Inaccessible Components (Poor Physical Layout): One of the main ones you're talking about.  Take out the engine to repair a light on the dashboard.

  - Integrated, Non-Modular Systems: Minor damage or failure ruins an entire assembly.  You dinged the bumper, replace the entire front.

  - Lack of Standardization: Even from year-to-year, designs change and mechanics have to learn yet another system.

  - Forced Replacement over Repair: Object is "black box", thou shalt replace, not repair.

  - Dead/Onion/Boat Anchor Components: No longer used, maybe need it, build stuff on top of it, layer after layer, "can we even remove it"?

  - Spaghetti Wiring/Code and System Coupling: Single modules that route all over the car, another "can we even remove it"?

  - Proprietary Diagnostics and Restricted Data Access: Don't have the special tools, you can't repair, or even find out what's wrong.

CGMthrowaway has an interesting comment on the other thread about this subject: that it's likely not solar radiation, but a "failing solid state relay or contactor on the shared avionics power bus". [1] Related to the previous 2008 incident on Qantas 72, which had similar characteristics.

[1] https://news.ycombinator.com/item?id=46083560

  > On the Qantas 72 flight (2008), the ATSB report showed the same power spike that upset the ADIRU also left tidy 1-word corruptions in the flight data recorder. Those aligned with the clock cycle, shared the same amplitude and were confined to single ARINC words. That is pretty much exactly the signature of a failing solid state relay or contactor on the shared avionics power bus (upstream of both FDR and fly by wire).

  > Radiation-driven bit flips would be Poisson distributed in time and energy. So that is one way to find out
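The distinction in that quote can be sketched numerically (purely illustrative, not the ATSB's actual analysis): Poisson arrivals have exponential inter-arrival times, whose standard deviation equals their mean, while clock-aligned relay faults sit on a fixed grid with essentially zero spread.

```javascript
// Toy illustration of the quoted distinction (not the ATSB analysis):
// radiation-driven upsets form a Poisson process, so gaps between events
// are exponentially distributed and their coefficient of variation ~ 1.
// Clock-aligned relay faults land on multiples of a fixed period instead.
// Simple deterministic LCG, since the JS stdlib has no seeded RNG.
function makeLcg(seed) {
  let s = seed >>> 0;
  return () => {
    s = (1664525 * s + 1013904223) >>> 0;
    return s / 4294967296;
  };
}

const rand = makeLcg(1);
const rate = 0.5;  // hypothetical upsets per hour
// Inverse-transform sampling: exponential gaps from uniform draws.
const gaps = Array.from({ length: 100000 }, () => -Math.log(1 - rand()) / rate);

const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
const variance = gaps.reduce((a, g) => a + (g - mean) ** 2, 0) / gaps.length;
const cv = Math.sqrt(variance) / mean;  // ~1.0 for a Poisson process
```

A CV near 1 is the Poisson signature; corruptions locked to a bus clock would instead cluster at integer multiples of the clock period.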
