NobodyNada's comments

> When a question gets closed before an answer comes in, the OP has nine days to fix it before it gets deleted automatically by the system.

One of the bigger problems with the site's moderation systems was that 1) this system was incredibly opaque and unintuitive to new users, 2) the reopen queue was almost useless, leading to a very small percentage of closed questions ever getting reopened, and 3) even if a question did get reopened, it would be buried thousands of posts down the front page and answerers would likely never see it.

There were many plans and proposals to overhaul this system -- a better "on hold" UI that would walk users through the process of revising their question, and a revamp of the review queues aimed at making them effective at pushing content towards reopening. These efforts got as far as the "triage" queue, which did little to help new users on its own: the several other review queues that were planned downstream of it were scrapped as SE abruptly stopped working on improvements to the site.

Management should have been aggressively chasing metrics like "percentage of closed questions that get reopened" and "number of new users whose first question is well-received and answered". But it wasn't a priority for them, and the outcome is unsurprising.


Yes.

The "on hold" change got reversed because new users apparently just found it confusing.

Other attempts to communicate have not worked because the company and the community are separate entities (and the company has more recently shown itself to be downright hostile to the community). We cannot communicate this system better because even moderators do not have access to update the documentation. The best we can really do is write posts on the meta site and hope people find them, and operate the "customer service desk" there where people get the bad news.

But a lot of the time people really just don't read anyway. Especially when they get question-banned; they are sent messages that include links explaining the situation, and they ask on the meta site about things that are clearly explained in those links. (And they sometimes come up with strange theories about it that are directly contradicted by the information given to them. E.g. just the other day we had https://meta.stackoverflow.com/questions/437859.)


Shog9 was probably the best person on staff in terms of awareness of the moderation problems and ability to come up with solutions.

Unfortunately, the company abruptly stopped investing in the Q&A platform in ~2015 or so and shifted their development effort into monetization attempts like Jobs, Teams, Docs, Teams (again), etc. -- right around the time the moderation system started to run into serious scaling problems. There were plans, created by Shog and the rest of the community team, for sweeping overhauls to the moderation systems attempting to fix the problems, but they got shelved as the Q&A site was put in maintenance mode.

It's definitely true that staff is to blame for the site's problems, but not Shog or any of the employees whose usernames you'd recognize as people who actually spent time in the community. Blame the managers who weren't users of the site, decided it wasn't important to the business, and ignored the problems.


Blame the managers who weren't users of the site, decided it wasn't important to the business, and ignored the problems.

This always cracks me up. I've seen it so many times, and so many books cover this...

Classic statement is "never take your eye off the ball".

Sure, you need to plan ahead. You need to move down a path. But take your eye off of today, and you won't get to tomorrow.

Maybe they'll SCO it, and spend the next 10 years suing everyone and their LLM dog.

You know, I wonder how the board and execs made out suing Linux related... things. End users were threatened too, compelled to pay...

SO could be spun off into a neat tiger, nipping at everyone's toes.


But was “today” that profitable? Stack Overflow always struck me as a great public good and a poor way to make money. If the current business makes very little money, it may not be worth the work.

His tone was extremely passive-aggressive and rude. I don’t think he made the site better - he contributed to the downfall.

Can you provide an example? The only rude Shog9 posts I can think of were aimed at people abusing the system: known, persistent troublemakers, or overzealous curators exhibiting the kinds of behaviours that people in this thread would criticise, probably far more rudely than Shog ever did.

This sounds plausible - I grew up in the Midwestern US, and thus "vaguely passive-aggressive" is pretty much my native language. The hardest part of the job for me was remembering to communicate in an overtly aggressive manner when necessary, developing a habit of drawing a sharp line between "this is a debate" and "this is how it is."

Sometimes I put that line in the wrong place.

That said... I can't take credit for any major change in direction (or lack thereof) at SO. To the extent that SO succeeded, it did so because it collectively followed through on its mission while that was still something folks valued; to the extent that it has declined, it is because that mission is no longer valued. Plenty of other spaces with very different people, policies, general vibes... have followed the same trajectory, both before SO and especially over the past few years.

With the benefit of hindsight, probably the only thing SO could have done that would have made a significant difference would have been to turn their Chat service into a hosted product in the manner of Discord - if that had happened in, say, 2012 there's a chance the Q&A portion of SO would have long ago become auxiliary, and better able to weather being weaned from Google's feeding.

But even that is hardly assured. History is littered with the stories of ideas that were almost at the right place and time, but not quite. SO's Q&A was the best at what it set out to do for a very long time; surviving to the end of a market may have been the best it could have done.


I always found these discussions around the tone of SO moderation so funny—as a German, I really felt right at home there. No cuddling! No useless flattery! Just facts and suggestions for improvement if necessary, as it should be. Loved it at the time.

Similar idea: https://monster6502.com/

But note:

> Does it run at the full speed of an original 6502 chip?

> No; it's relatively slow. The MOnSter 6502 runs at about 1/20th the speed of the original, thanks to the much larger capacitance of the design. The maximum reliable clock rate is around 50 kHz. The primary limit to the clock speed is the gate capacitance of the MOSFETs that we are using, which is much larger than the capacitance of the MOSFETs on an original 6502 die.

So if you built a SID using the same techniques and components, you couldn't run it in real time without the pitch being way too low (at 1/20 speed, everything would sound log2(20) ≈ 4.3 octaves flat) or without modifying the design. I'm not sure how hard this would be to avoid with better-spec'd components, but intuitively it makes sense for a much larger circuit to run much slower.


LLMs also generally don't put spaces around em dashes — but a lot of human writers do.


I think you're thinking of British-style "en dashes" – which are often used for something that could have been set off by brackets, but do have a space on either side – rather than "em" dashes. They can also be used in a similar place to a colon – that is, to separate two parts of a single sentence.

British users regularly use that sort of construct with "-" hyphens, simply because they're pretty much the same and a whole lot easier to type on a keyboard.


That's exactly right. There's a really good article about it here: https://www.pagetable.com/?p=39


Let's throw this into godbolt: https://clang.godbolt.org/z/qW3qx13qT
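
For reference, here's the source being compiled (reconstructed to match the assembly below; the godbolt link has the original):

  bool is_divisible_by_6(int x) {
      return x % 2 == 0 && x % 3 == 0;
  }

  bool is_divisible_by_6_optimal(int x) {
      return x % 6 == 0;
  }

Clang produces: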

    is_divisible_by_6(int):
        test    dil, 1
        jne     .LBB0_1
        imul    eax, edi, -1431655765
        add     eax, 715827882
        cmp     eax, 1431655765
        setb    al
        ret
    .LBB0_1:
        xor     eax, eax
        ret

    is_divisible_by_6_optimal(int):
        imul    eax, edi, -1431655765
        add     eax, 715827882
        ror     eax
        cmp     eax, 715827883
        setb    al
        ret
By themselves, the mod 6 and mod 3 operations are almost identical -- in both cases the compiler used the reciprocal trick to transform the modulo into an imul+add+cmp, the only practical difference being that the %6 version needs one extra rotate (the ror).
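
The trick is easy to play with in isolation: multiply by the divisor's inverse mod 2^32, and multiples of the divisor land on the smallest values. A minimal sketch of the unsigned case, using uint32_t (the signed versions above add a constant first to handle negative inputs):

  bool is_divisible_by_3(uint32_t x) {
      // 0xAAAAAAAB * 3 == 1 (mod 2^32); as a signed int it's
      // -1431655765, the imul constant above. Multiplying by it maps
      // multiples of 3 onto 0..0x55555555 and everything else higher.
      return x * 0xAAAAAAABu <= 0x55555555u;
  }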

But note the branch in the first function! The original code uses the && operator, which is short-circuiting -- so from the compiler's perspective, perhaps the programmer expects that x % 2 == 0 will usually be false, and so we can skip the expensive % 3 most of the time. The "suboptimal" version is potentially quite a bit faster in the best case, but also potentially quite a bit slower in the worst case (since that branch could be mispredicted). There's not really a way for the compiler to know which version is "better" without more context, so deferring to "what the programmer wrote" makes sense.

That being said, I don't know that this is really a case of "the compiler knows best" rather than just not having that kind of optimization implemented. If we write 'x % 6 && x % 3', the compiler pointlessly generates both operations. And GCC generates branchless code for 'is_divisible_by_6', which is just worse than 'is_divisible_by_6_optimal' in all cases.


I also tried this:

  bool is_divisible_by_15(int x) {
      return x % 3 == 0 && x % 5 == 0;
  }

  bool is_divisible_by_15_optimal(int x) {
      return x % 15 == 0;
  }
is_divisible_by_15 still has a branch, while is_divisible_by_15_optimal does not:

  is_divisible_by_15(int):
        imul    eax, edi, -1431655765
        add     eax, 715827882
        cmp     eax, 1431655764
        jbe     .LBB0_2
        xor     eax, eax
        ret
  .LBB0_2:
        imul    eax, edi, -858993459
        add     eax, 429496729
        cmp     eax, 858993459
        setb    al
        ret

  is_divisible_by_15_optimal(int):
        imul    eax, edi, -286331153
        add     eax, 143165576
        cmp     eax, 286331153
        setb    al
        ret


Humans have not left Earth's gravity well. We've built probes that have, but humans have only gotten as far as orbit.


Did you forget about the Moon landings?


That's pretty close to escaping the Earth's gravity well, but not quite out, since the Moon is definitely still orbiting the Earth.


That is fantastic, I love it!

If I may submit an extremely pedantic music nerd bug report: at 46s in the video demo (https://www.youtube.com/watch?v=qboig3a0YS0&t=46s), the display should read Bb instead of A#, as the key of C minor is written with flats :)

(The precise rule is that a diatonic scale must use each letter name for exactly one note, e.g. you can't have both G and G# in the same key, and you can't skip B. This has many important properties that make music easier to read and reason about, such as allowing written music to specify "all the E's, A's, and B's are flat" once at the start of the piece instead of having to clutter the page with redundant sharps or flats everywhere.)
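
The rule is mechanical enough to write down. A hypothetical sketch (all names are mine): walk the seven letters in order from the tonic, and compute each accidental as the difference between the degree's pitch and the letter's natural pitch.

  #include <stdio.h>

  // Semitone positions of the naturals C D E F G A B.
  static const int natural[7] = {0, 2, 4, 5, 7, 9, 11};

  // Spell a seven-note scale using each letter exactly once.
  // tonic_letter: 0..6 = C..B; tonic_acc: -1 flat, 0 natural, +1 sharp;
  // steps: intervals in semitones, e.g. {2,1,2,2,1,2,2} for natural minor.
  void spell(int tonic_letter, int tonic_acc, const int steps[7]) {
      int pitch = (natural[tonic_letter] + tonic_acc + 12) % 12;
      for (int i = 0; i < 7; i++) {
          int letter = (tonic_letter + i) % 7;
          // Accidental = pitch minus the letter's natural pitch,
          // wrapped into -5..6 so the nearest spelling wins.
          int acc = ((pitch - natural[letter]) % 12 + 12) % 12;
          if (acc > 6) acc -= 12;
          putchar("CDEFGAB"[letter]);
          for (int j = 0; j < acc; j++) putchar('#');
          for (int j = 0; j > acc; j--) putchar('b');
          putchar(' ');
          pitch = (pitch + steps[i]) % 12;
      }
      putchar('\n');
  }

For C minor it prints "C D Eb F G Ab Bb" -- the Bb falls out of the letter arithmetic with no special-casing, and double sharps/flats appear automatically in keys that need them.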


If only the users contributing chord charts to sites like Ultimate Guitar understood this; the number of times I've seen this wrong is astounding. For example, a progression like I-iii-IV in the key of E major will be written as E-Abmin-A but ought to be E-G#min-A for the reason you stated: practically it's confusing, theoretically it's wrong, and there's simply no upside at all.

Using exclusively sharps (or flats, but that's not so common) is for piano technicians, frequency-to-note calculators, and similar utilitarian situations that aren't in a diatonic context.

Aside: this is also an easy way to explain double sharps and double flats. If you stumble upon one, and decide to see what would happen if you eliminate it in favor of an enharmonic equivalent (i.e., a natural), you'd end up with a scale that uses some letter twice and also skips a letter. The double sharp/flat achieves the use of each letter exactly once. A bit cumbersome on most instruments (keyed instruments especially), but it does make for easier sight reading (vocals especially) when stepwise movement uses each line/space of the staff, rather than skipping.


The device is fantastic indeed!

Regarding flats and sharps: one could ignore the Pythagorean stuff and go full well-tempered dodecaphonic, thinking purely in terms of semitones in the intervals. This toy sort of nudges towards this. It would be fun to add 12 small LEDs along the faders, and show the number of semitones with them, relative to the previous fader's position.

On one hand, the fact that the same sound can be named A# and Bb may be puzzling for a kid (they could differ on a violin, I suppose); OTOH if the kid later learns formal music notation, this becomes helpful, so your comment holds.


> On one hand, the fact that the same sound can be named A# and Bb may be puzzling for a kid

I think that, given the toy is (currently) diatonic, and doesn't really have any ability to visualize the chromatic scale (like a piano keyboard does), using the formally correct note names is more intuitive. That way, only the accidentals change when you change modes ("when I change it to C minor, the B becomes a Bb"). This naturally teaches a simple and correct mental model: "the slider chooses a letter and pushing the orange knob makes letters flat or sharp".

If you only ever use sharps instead of sticking to the correct notation, then the notes change inconsistently between different keys ("changing from C major to C minor turns the B into an A#, but changing from C# major to C# minor changes the F into an E"). This is incomprehensible unless you've already memorized the piano keyboard layout.

The OP's choice of restricting to the diatonic scale seems sensible to me -- it helps the kid learn the vocabulary of Western music (if that's your goal!) and it benefits the parents as well by making it hard to create something that sounds bad.


Why stop at 12 semitones? That's so square and limiting, man...

https://en.wikipedia.org/wiki/31_equal_temperament


I'm pretty sure that NobodyNada knows this, but for pedants out there using Bb instead of A# is specifically a classical European music notation thing.

There's nothing wrong with using A#, and plenty of other notations do. For a modern, hacker-y example, tracker notation only uses sharps.


If no references are involved, writing unsafe Rust is significantly easier than writing correct C, because the semantics are much clearer and easier to find in the documentation, and there are no insane things like type-based aliasing rules.
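
(To make "insane" concrete, here's the classic C strict-aliasing trap -- function names are mine:)

  #include <stdint.h>
  #include <string.h>

  // UB under C's type-based aliasing rules: a uint32_t lvalue may not
  // read an object whose declared type is float, even though size and
  // layout match on every mainstream platform.
  uint32_t float_bits_ub(float f) {
      return *(uint32_t *)&f;
  }

  // The sanctioned version: memcpy is exempt from the aliasing rules,
  // and compilers optimize it down to a single register move.
  uint32_t float_bits_ok(float f) {
      uint32_t u;
      memcpy(&u, &f, sizeof u);
      return u;
  }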

If references are involved, Rust becomes harder, because the precise semantics are not decided or documented. The semantics aren't complicated; they're along the lines of "while a reference is live, you can't perform a conflicting access from a pointer or reference not derived from that reference". But there aren't good resources for learning this or clarifying the precise details. This area is an active work-in-progress; there is a subteam of the Rust project led by Ralf Jung (https://www.ralfj.de/blog/) working on fully and clearly defining the language's operational semantics, and they are doing an excellent job of it.

When it comes to Zig, the precise rules and semantics of the memory model are much less clear than C's. There's essentially no documentation, and if you search GitHub issues, a lot of it is undecided and not actively being worked on. This is completely understandable given Zig's stage in development, but for me "how easy it is to write UB-free code" boils down to "how easy is it to understand the rules and apply them correctly", and so to me Zig is very hard to write correctly if you can't even figure out what "correct" is.

Once Zig and Rust both have their memory models fleshed out, I hope Zig lands somewhere comparable to where Rust-without-references is today, and I hope that Rust-with-references ends up being only a little bit harder (and still easier than C).


Every reverse engineer learns very quickly that "add [rax], al" has the machine code representation "00 00".
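
It falls straight out of the encoding tables -- the first zero byte is the opcode and the second is a ModRM byte with all three fields zero:

  00   opcode: ADD r/m8, r8
  00   ModRM:  mod=00 (indirect, no displacement), reg=000 (AL),
               r/m=000 ([RAX] in 64-bit mode)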


And this is mainly why 0x90 (NOP) is used as padding when the start of a function is aligned to a cache boundary. If you put zeros, you get a distracting barrage of "add [rax], al" in the disassembly listing in front of nearly every function.


Or "add [bx+si], al" for those from an earlier era.

