
I think that having an in-house system was partly driven by the need to fully utilize Google's proprietary/confidential computational infrastructure. My understanding is that every big company's infrastructure is different, and squeezing the most out of your CPUs means there isn't a one-size-fits-all model.

The only way you could move toward a single model is if all the big companies open-sourced their data center designs, as Facebook did, so you'd at least have a good idea of what infrastructure you'd need to run on.

DISCLAIMER: I work as an intern at Google in engtools (the group responsible for the system described in the linked article), but I have no idea about the design decisions that went into the build system.


