Hacker News | Eridrus's comments

Federal law makes it illegal to retaliate against people for forming a union. Companies still do, and there's no law saying they have to keep hiring in that unit, but if they fire everyone they will probably lose the resulting lawsuit.

This does not currently show up on Google Trends, so unless you're trying to go full conspiracy mode here I don't see why we should trust this random screenshot to be accurate: https://trends.google.com/trends/explore?date=now%207-d&geo=...


If you change the region, it does show up.

https://trends.google.com/trends/explore?date=now%207-d&geo=...


Are you in a different timezone?

I am in EST and do not see any traffic before 7pm Nov 26 EST.

I can confirm I am seeing data in EST time because the latest data available is from 20 mins ago in EST.


I think these graphs are just too unreliable. If I set it to Past 90 Days, there's a blip on Sept 6, 2025 with a value of 28. The next data point isn't until Nov 27, 2025 and it goes up to 100. If I then set it to Past 12 Months, the only data is in Nov and there is nothing for Sept. Then, even more interestingly, if I change it to "News Search" (instead of Web Search) and set it to Past 12 Months, the biggest spike is actually May 18-24, 2025 where the value is 100 and the Nov blip is only 9. https://trends.google.com/trends/explore?geo=CA&gprop=news&q...

EDIT - that was set to Canada. If I set it to Worldwide, the values obviously change.


Ok, but this was an RCT, so enrollment was randomized after people self selected into this experiment.


It also had an obvious and unhelpful result. Of course kids who spend all day learning will know that stuff better than kids who don't. What really matters is long term life outcomes.

Rudolf Steiner would say all that early learning is harmful and they should have been playing and imagining spiritual things.


"What really matters is long term life outcomes."

What would those be and how do we measure them?

There are studies that show Montessori students tend to have better executive function, better working memory, and no significant difference in creativity. I'm not aware of any that look at lifetime income or anything like that.


I'm not exactly sure, but measuring performance on education tests as a child is just a proxy for the whole point of education and raising children and it could even be backwards.


At my company, we do interviews remotely and record them and then sometimes review them and give feedback.


Maybe I am just bad at interviewing people, but I have tried giving the experiential interviews Casey describes, and I find it quite hard to get signal out of them.

You run into questions of how well a candidate remembers a project, which may not be perfect. You may end up drilling into a project that is trivial. The candidate may simply parrot things that someone else on the team came up with. And when candidates say things, you really have no way to understand if what they're saying is true, particularly when internal systems are involved.

I have found system design interviews specifically much, much better at getting signal. I pick a real problem we had, start people with a simplified architecture diagram of our actual system, and ask them how they would solve it for us. I am explicitly not looking for people to over-design it. I give people the advice at the start of every skills interview to treat this as a real work problem at my startup, not a hypothetical exercise.

I have had a lot more luck identifying the boundaries of people's knowledge/abilities in this setting than when asking people about their projects.

And while everyone interviewing hates this fact, false positives are very expensive. They can be particularly painful when the gap is "this person is not a terrible programmer, just more junior than we wanted": now you have to either fire someone who would be fine in another role (if you had the headcount for it) or live with a misshapen team.


    I have found system design interviews specifically 
    much much better at getting signal. I have picked a 
    real problem we had and start people with a simplified 
    architecture diagram of our actual system
To me, this heavily biases towards engineers that have already built or at least designed a similar system to the one you're presenting them.

I believe this will tend to be true even in the ideal case, which is when the interviewer is focused on "is the candidate asking great questions and modifying their proposed design accordingly" rather than "is the candidate coming up with the 'right' solution."

Because, if the candidate has already built/designed a similar system, they will naturally be able to ask better and more focused questions about the design goals and constraints.

     I have tried giving the experiential interviews Casey 
     describes, but I find it quite hard to get signal out of them.

     [...] when candidates say things, you really have no way to 
     understand if what they're saying is true, particularly when 
     internal systems are involved.
Okay, here's where I think I would tend to disagree in a very specific way. First, I would point to the linked interview where Casey conducts a brief mock drilldown interview with the host.

https://youtu.be/gZ2V5VtwrCw?t=2068

The host discussed work he did for Amazon on an event ticket ordering system. Now, I don't think you need to know much about the ticket sales business to think about the challenges here!

The most obvious challenge to me is some sort of distributed lock to ensure the system doesn't sell the same seat to multiple people. Another obvious challenge would be handling very bursty traffic patterns, i.e. an onrush of people and bots when popular event tickets go on sale. Another challenge, brought up by the host, that I wouldn't have thought of is that ticket sales need to avoid "fragmentation" as much as possible. Most people don't want single seats, so you want to sell tickets in a fashion that doesn't leave scattered, "orphaned" single seats that nobody wants to buy.
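To make the double-sell and fragmentation points concrete, here is a toy single-process Python sketch (all names hypothetical, and a `threading.Lock` standing in for what would really be a distributed lock or a database transaction): seats are held atomically, and the allocator prefers blocks that don't strand an orphaned single seat.

```python
import threading

class SeatMap:
    """Toy single-row model of two ticket-sale challenges:
    atomic holds (no double-sell) and fragmentation-aware allocation.
    A real system would use a distributed lock or DB transaction."""

    def __init__(self, row_size):
        self.lock = threading.Lock()   # stand-in for a distributed lock
        self.free = [True] * row_size  # True = seat still available

    def hold(self, start, count):
        """Atomically hold `count` adjacent seats, or fail."""
        with self.lock:
            block = self.free[start:start + count]
            if len(block) < count or not all(block):
                return False           # someone else got here first
            for i in range(start, start + count):
                self.free[i] = False
            return True

    def best_block(self, count):
        """Return a start index for `count` seats, preferring runs
        whose leftover is not exactly one orphaned seat."""
        candidates = []
        i, n = 0, len(self.free)
        while i < n:
            if not self.free[i]:
                i += 1
                continue
            j = i
            while j < n and self.free[j]:
                j += 1                 # scan one contiguous free run
            run = j - i
            if run >= count:
                leftover = run - count
                # a leftover of exactly 1 strands a single seat; penalize it
                candidates.append((1 if leftover == 1 else 0, leftover, i))
            i = j
        return min(candidates)[2] if candidates else None
```

The interesting bit is `best_block`: given a 3-seat run and a 2-seat run, a request for 2 seats takes the exact-fit run, because carving 2 out of 3 would leave a single seat nobody wants.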

Those are interesting challenges, and if the interviewer is an experienced developer, I don't think a candidate could really bullshit through them to a significant degree.

    The candidate may simply parrot things that someone else on 
    the team came up with.
I personally wouldn't care if the ideas originated with the candidate vs. another member of the team. I'd be looking for how well the candidate understood the problem and the solution they implemented, the other solutions considered, the tradeoffs, etc.

    You may end up drilling into a project that is trivial
This feels trivially avoided. If I was the candidate and the interviewer picked a boring project when I had juicier ones, I would just say so. And if I was the interviewer and a candidate told me that, I'd happily pick one of the juicy ones. The point isn't to rigidly lock into a particular project.

    You run into questions of how well a candidate remembers 
    a project, which may not be perfect
This would be expected. Again, I'd be looking big picture here. Approaches considered, tradeoffs made, the "unknown unknowns" that popped up during implementation and how those were handled, etc.


> To me, this heavily biases towards engineers that have already built or at least designed a similar system to the one you're presenting them.

Yes, this is not an IQ test; we are trying to see how people react to problems in our domain, not measure some generalized form of reasoning. The advantage of picking a problem as close to our real problems as possible is that I don't have to worry how they generalize from the interview to work.

In general, my experience with system design interviews is that people make bad designs, and when you drill down on them they give bad rationales. Similar to coding screens, people just out themselves as not very good at their jobs regularly.

> Those are interesting challenges, and if the interviewer is an experienced developer, I don't think a candidate could really bullshit through them to a significant degree.

It's not really about "bullshit" per se, but about whether their understanding of their context is correct or not. They can tell you fully reasonable sounding things that are just wrong. In a mock interview, you can see if they ask good questions about their context.

> I personally wouldn't care if the ideas originated with the candidate vs. another member of the team. I'd be looking for how well the candidate understood the problem and the solution they implemented, the other solutions considered, the tradeoffs, etc.

I totally disagree with this. It is very different to be able to remember the design doc for the project and parrot the things that were talked about vs actually writing it.

If I want to hire someone who can design things well from scratch and I get someone who makes bad decisions unless someone is supervising them, I will be very disappointed.

In general, I have given both interviews to the same candidate: after they say a bunch of reasonable things about their existing work, when I ask them to design something I quickly find that they are less impressive than they seemed. Again, maybe I'm bad at giving experiential interviews, but being hard to administer is a point against them.

My experience of hiring is also that I am generally not looking to take chances, unwinding bad hires is super painful.


This was a major issue, but it wasn't a total failure of the region.

Our stuff is all in us-east-1, ops was a total shitshow today (mostly because many 3rd party services besides aws were down/slow), but our prod service was largely "ok", a total of <5% of customers were significantly impacted because existing instances got to keep running.

I think we got a bit lucky, but no actual SLAs were violated. I tagged the postmortem as Low impact despite the stress this caused internally.

We definitely learnt something here about both our software and our 3rd party dependencies.


I dunno man, what part of the AWS experience leaves you thinking the software is amazing?

It's good enough, but there's no real evidence it's the best, simply the largest.


There is a distinction between usability and reliability lol. If AWS reliability trends down then it's an industry wide problem.


Yeah, it's nonsense.

I think the core problem is that innovators typically only capture low single digit percent of the value they generate for society.

Bell Labs existed in an anomalous environment where their monopoly allowed them to capture more of the value of R&D, so they invested more into it.

This is the typical argument for public subsidy of R&D across both public and private settings because this low capture rate means that it is underprovisioned for society's benefit.


Something I haven't seen mentioned in this thread or TFA is just how high corporate taxes were (and even personal investment taxes) in the 50s and 60s, and this influenced spending on R&D immensely because that investment wasn't considered taxable income. Tax rates were over 50% for much of the era of Bell Labs and Xerox PARC.


Despite people mocking the parallel, I think it's actually pretty apt. RHNA is basically a socialist planning exercise because people were unwilling to stomach a market economy for housing construction and demanded state control through zoning.

It's better than the alternative of letting local governments do what they want, but it very much is a socialist planning exercise.


The note about economists and data science in the article felt weird, because data science as a title was invented to get non-CS PhDs to do analyst work because they wanted smarter people doing it.

The point of hiring an economics PhD in industry is largely not because they learnt something but because it's a strong and expensive signal.

