> What does Altman bring to the table exactly. What is going to be lost if he leaves.
If Altman did literally nothing else for Microsoft, except instantly bring over 700 of the top AI researchers in the world, he would still be one of the most valuable people they could ever hire.
It's probably helpful to first get a sense of how much work it is to gather customer feedback. As an example, Qualtrics was getting single-digit percent response rates on a simple NPS survey (https://delighted.com/blog/average-survey-response-rate). If you're asking people to give you more detailed feedback, I'd imagine the hit rate is even lower. Another data point: people who gather feedback professionally often offer customers $100 for an hour of their time.
New grad talent is a huge pipeline for Google or any other FAANG, and they'll recruit at every large, well-rated engineering program they can find, the large majority of which aren't MIT. They also pull from tons of engineering programs beyond those, though there they may do all their recruiting through the colleges' online resources, since they have a limited number of recruiters who can physically travel.
Online recruiting isn't the same as in-person recruiting though. Note that Google (just using them as a stand-in) recruits in-person at MIT but doesn't at other places where they "recruit online" from.
I'm so glad Internet Explorer/Edge/Trident is Officially Over. If you told me in 2010 that in 10 years the worst browser I'd have to support was Safari/Webkit... I'd be overjoyed.
One thing (only thing?) I honestly miss about IE5.5-8 is how amenable the engine was to polyfilling. It wasn't fast, but you could do almost anything with the right polyfill technique.
No sessionStorage? Use window.name. No (then-) modern CSS? Use CSS3PIE [0]. IE doesn't support the transform CSS property? Use an *.htc behavior to convert the transform to a matrix filter.
It was madness, and it was beautiful in a Cthulhu kind of way.
For us, we started off with a world where services communicate with each other only via RabbitMQ, so everything is fully async. Theoretically, each service should be able to be down for however long it likes with no impact on anyone else; then it comes back up, starts processing messages off its queue, and no one is the wiser.
Our data is mostly append-only, or if it's being changed, there is a theoretical final "correct" version of it that we should converge to. So to "get" data, you subscribe to messages about some state of things, and then each service is responsible for managing its own copy in its own db. This worked well enough until it didn't, and we had to start doing true-ups from time to time to keep things in sync, which was annoying but not particularly problematic, since we design under the assumption that everything is async and convergent.
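A minimal sketch of that convergence idea in Python, with a plain in-memory deque standing in for RabbitMQ and a dict standing in for each service's db (all names here are invented for illustration): messages that arrive while a consumer is "down" simply wait on its queue, and when it drains them, last-write-wins converges it to the final state.

```python
from collections import deque

# Stand-in for a RabbitMQ queue: messages wait here while the consumer is down.
queue = deque()

def publish(msg):
    queue.append(msg)

class ConsumerService:
    """Keeps its own copy of the data in its own 'db' (a dict here)."""
    def __init__(self):
        self.db = {}

    def drain(self):
        # On coming back up, process everything that accumulated while down.
        while queue:
            record = queue.popleft()
            # Last write wins: the stream converges on the final "correct" version.
            self.db[record["id"]] = record["value"]

svc = ConsumerService()                 # service starts out "down"
publish({"id": 1, "value": "draft"})
publish({"id": 1, "value": "final"})    # a later correction to the same record
publish({"id": 2, "value": "ok"})

svc.drain()                             # service comes back up, no one the wiser
print(svc.db)                           # {1: 'final', 2: 'ok'}
```

Real brokers add acks, redelivery, and ordering caveats that this toy version ignores, but the shape is the same: downtime just means a longer queue to drain.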
The optimization (or compromise) we decided on was that all of our services use the same db cluster, so if the db cluster goes down, everything is down anyway. Therefore, since we can assume the db is always up even when a service is down, we consider it an acceptable constraint to provide a readonly view into other services' dbs. Any writes are still sent async via MQ. This eliminates our sync-drift problem while still allowing for performant joins, which HTTP APIs are bad at and which our system uses a lot.
So then, back to your original question: the way this contract can break is via schema changes. Since we use Postgres, we created database views that we expose for reading, and view updates are constrained to always be backwards compatible from a schema perspective. So our migration path is:
- service A has some table of data that it would like to share
- write a migration to expose a view for service A
- write an update for service B to depend upon that view
- service B now needs some more data in that view
- write a db migration for service A that adds that missing data, but keeping the view fully backwards compatible
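The steps above can be sketched end to end with sqlite standing in for Postgres (table, view, and column names are all invented here; in Postgres you would use CREATE OR REPLACE VIEW, which additionally enforces that existing columns keep their names and types):

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Service A's private table (implementation detail, never read directly).
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, status TEXT)")
db.execute("INSERT INTO orders VALUES (1, 9.99, 'paid')")

# Migration on service A: expose a view for others to read.
db.execute("CREATE VIEW orders_v1 AS SELECT id, total FROM orders")

# Service B depends on the view, never on the table itself.
assert db.execute("SELECT id, total FROM orders_v1").fetchall() == [(1, 9.99)]

# Service B now needs the status too: service A migrates the view,
# adding a column while keeping every existing column's name and type intact.
db.execute("DROP VIEW orders_v1")
db.execute("CREATE VIEW orders_v1 AS SELECT id, total, status FROM orders")

# Old queries from service B keep working unchanged...
assert db.execute("SELECT id, total FROM orders_v1").fetchall() == [(1, 9.99)]
# ...and service B can now be updated to use the new column.
assert db.execute("SELECT status FROM orders_v1").fetchone() == ("paid",)
```

The key property is that the two deploys are decoupled: service A's view migration ships first, and service B upgrades whenever it is ready.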
> So then, back to your original question: the way this contract can break is via schema changes. Since we use Postgres, we created database views that we expose for reading, and view updates are constrained to always be backwards compatible from a schema perspective. So our migration path is:
> - service A has some table of data that it would like to share
>
> - write a migration to expose a view for service A
>
> - write an update for service B to depend upon that view
>
> - service B now needs some more data in that view
>
> - write a db migration for service A that adds that missing data, but keeping the view fully backwards compatible
I don't think I understand. Do you need to update (and deploy) service B every time you perform a view update (from service A), even though it's backwards compatible?
If service B needs some new data from the view that isn't being provided, then you first run the migration on service A to update that view and add a column. Then you can update service B to use that column.
If you don't need the new column, then you don't need to do anything on service B, because you know that existing columns on the view won't get removed and their types won't change. You only need to change service B when you want to take advantage of additions to the view.
This only works if you make backwards-compatible changes all the time. Sometimes you do want to make incompatible changes to your implementation. Database tables are an implementation detail, not an API, yet exposing them through views effectively turns them into one.
But hey, every team and company has to find their strategy to do things. If this works for you, that's great!
I would never claim that our setup uses microservices. Probably just more plainly named "services".
And yes, that is correct, we agree that once we expose a view, we won't remove columns or change column types. Theoretically, we could effectively deprecate a column by having it just return an empty value. Our use cases are such that changes to such views happen at an acceptable rate, and backwards-incompatible changes also happen at an acceptable rate.
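That deprecation trick is itself just another backwards-compatible view change. A sqlite sketch (names invented; sqlite standing in for Postgres): the column stays in place, with its name intact, but stops carrying real data.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, nickname TEXT)")
db.execute("INSERT INTO users VALUES (1, 'ada')")
db.execute("CREATE VIEW users_v1 AS SELECT id, nickname FROM users")

# Deprecate `nickname` without breaking readers: keep the column in the
# view's schema, but have it return an empty value from now on.
db.execute("DROP VIEW users_v1")
db.execute("CREATE VIEW users_v1 AS SELECT id, '' AS nickname FROM users")

# Existing consumers still get the same shape of row back.
assert db.execute("SELECT id, nickname FROM users_v1").fetchall() == [(1, "")]
```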
Our views are also often joins across multiple tables, or computed values, so even when they're quite close to the underlying tables, they're intended as an abstraction on top of them. The views are designed first from the perspective of: what form of data do other services need?
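A small sketch of that design-from-the-consumer's-perspective idea (sqlite again standing in for Postgres; all names invented): the view joins and aggregates rather than mirroring any one table.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
db.execute("INSERT INTO customers VALUES (1, 'acme')")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, 1, 10.0), (2, 1, 5.0)])

# The view answers "what form of data do other services need?"
# rather than exposing either table directly: a join plus a computed value.
db.execute("""
    CREATE VIEW customer_totals_v1 AS
    SELECT c.id, c.name, SUM(o.total) AS lifetime_total
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id, c.name
""")

assert db.execute(
    "SELECT name, lifetime_total FROM customer_totals_v1"
).fetchall() == [("acme", 15.0)]
```

Because consumers only see `customer_totals_v1`, the underlying tables can be refactored freely as long as the view's columns keep their names and types.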
Yeah, this. It's even quoted in the article. Picking the terminology from mafia movies is more cringeworthy than picking it from analogies to government, of course...
The idea that Page needed to step up as "wartime consigliere/CEO" in 2011, because Google was staring down an existential threat if it didn't do things drastically differently to stave off social media, is also quite funny in hindsight.
It's also funny because Google hasn't changed one bit since then. Stevey's Google platform rant is every bit as relevant today as it was in 2011 when it was published.
People ask why they don't hear about it anymore -- oh, it's because we listened and banned CFCs and fixed the hole.