> The goal of the Restaurant Edge Compute platform was to create a robust platform in each restaurant where our DevOps Product teams could deploy and manage applications to help Operators and Team Members keep pace with ever-growing business, whether in the kitchen, the supply chain, or in directly serving customers.
> (Previous article) Our hypothesis: By making smarter kitchen equipment we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business.
I must admit, from an outsider's perspective, it really sounds like a bunch of buzzwords justifying a solution in search of a problem. Their example of forecasting waffle fries reminds me of a failed startup that forecasted how many checkout lines to open via computer vision (which I can't find on Google). In the end, it turned out it was a lot easier for a human manager to simply open a new line when required, and the computer vision's forecasts weren't accurate enough to be useful. I wonder what CFA's success criteria and metrics are for this project.
Tech-wise, wouldn't it be a lot simpler to run a single node with a single application that gets updated via something like RAUC? Especially with a small team (which they emphasized), it seems to me like adding a Kubernetes cluster at the edge adds complication without much benefit, other than "redundancy" (how redundant is a single rack on the same power source anyway?). Also, how would they get an important security update to the host if it becomes necessary?
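To make that concrete, here's a rough sketch of the single-node flow I have in mind, in Python shelling out to the rauc CLI. The bundle URL, file paths, and reboot step are all hypothetical; this isn't anything CFA has described.

```python
import subprocess
import urllib.request

# Hypothetical single-node update flow: fetch a signed A/B bundle, let RAUC
# install it into the inactive slot, then reboot into it. RAUC verifies the
# bundle signature before writing, and the bootloader falls back to the old
# slot if the new one fails to come up, so even a host security patch is
# just another full-image update rather than a special case.
BUNDLE_URL = "https://updates.example.com/restaurant-os-latest.raucb"  # made-up URL
BUNDLE_PATH = "/tmp/restaurant-os-latest.raucb"

def update_host() -> None:
    urllib.request.urlretrieve(BUNDLE_URL, BUNDLE_PATH)
    # "rauc install" checks the signature and writes the inactive slot.
    subprocess.run(["rauc", "install", BUNDLE_PATH], check=True)
    # Reboot into the freshly written slot; a failed boot reverts automatically.
    subprocess.run(["systemctl", "reboot"], check=True)

if __name__ == "__main__":
    update_host()
```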
It's a lot of nitpicks, but the project overall is very cool. Sounds like they solved a lot of hard tech problems and executed well on the ops.
Fundamentally, what they've deployed here is something a lot of organisations struggle with or don't even recognize as worth having: a reliable edge you can trust sufficiently to form the functional core of a site.
For a restaurant chain this is something worth putting the development effort into, because once you've figured it out and run it for a few years to demonstrate its reliability, you can pitch the shift from a network-optional edge at each site to a network-dependent site with intelligent components hanging off it and depending on it. That's a pathway to a major competitive advantage in the medium term, one your competitors won't be able to put into place overnight once they realize you've left them behind.
You can't get there with the amount of effort often put into untrusted edge sites like this, a.k.a. a PC in a cupboard. You also can't get there with cloud when the weakest element in the chain is unreliable site connectivity.
They could have done it in a lot of different ways, but going with cheap commodity hardware and avoiding expensive cluster license nonsense (vSphere, etc.) were smart choices. Spend that money on a competent centralized tech team rather than vendor shininess, and you can do a hell of a lot more (and often move faster, to boot).
> Their examples of forecasting waffle fries reminds me of a failed startup that forecasted how many checkout lines to open via computer vision (which I can't find on Google).
CFA leads the industry in revenue per site, and I think more accurate forecasting is a significant factor contributing to that. Their sites aren't larger or better located than their competitors'. In fact, they're often right next to their competitors in a similar footprint. Since I have a CFA nearby that I drive by multiple times a day, I've seen firsthand that they always have substantially more cars in the drive-thru line and parking lot than their competitors in the same strip mall parking lot.
Customers will see the overflowing CFA drive-thru line yet still choose to pull in, because they've learned that CFA's throughput is dramatically faster than their competitors'. In my experience I'd guess about 2x-3x faster, which is incredible when you think about it. They achieve that by getting a lot of things right, but it seems obvious that keeping the order backlog moving as fast as possible through more accurate load prediction would be a key factor.
All I know is that I can be car #105 in line at Chick-fil-A and I'll get my food faster than being car #2 in line at McDonald's. Whatever they're doing is working.
Yeah, in my experience they are by far both the fastest and most consistent fast food restaurant. I've read they are also the highest revenue-per-site chain.
I agree that K8s is adding unnecessary complexity here. But the edge computing idea might actually apply in a couple ways.
Mostly though, any data they collect will be very valuable, as forecasting is a core component of fast food logistics. Fast food lives and dies on efficiency.
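To put something concrete behind "forecasting": even a toy exponentially weighted average of demand per time slot (nothing CFA has published; the numbers and slot scheme below are invented) is enough to decide how many waffle fries to drop for the next window.

```python
# Toy demand forecast: exponentially weighted moving average per 15-minute slot.
ALPHA = 0.3  # weight given to the most recent observation

forecast: dict[str, float] = {}

def record_sales(slot: str, units_sold: int) -> None:
    """Blend the latest observation into the running forecast for that slot."""
    if slot not in forecast:
        forecast[slot] = float(units_sold)  # cold start: seed with first observation
    else:
        forecast[slot] = ALPHA * units_sold + (1 - ALPHA) * forecast[slot]

def units_to_prep(slot: str, safety_margin: float = 1.1) -> int:
    """How many units to start cooking ahead of the upcoming slot."""
    return round(forecast.get(slot, 0.0) * safety_margin)

# Example: the Friday 12:00-12:15 slot over four weeks of (invented) sales.
for week_sales in [120, 135, 128, 150]:
    record_sales("fri-12:00", week_sales)

print(units_to_prep("fri-12:00"))  # 146 for this made-up data
```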
Instead of building the entire stack (OS + K8s) themselves, why not use Azure IoT Edge or AWS Greengrass for fleet management? These services seem to have solved a lot of the problems (low footprint, redundancy, cloud management) already.
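For comparison, the fleet-management part with something like Greengrass v2 ends up being roughly one deployment call per thing group. This is a hypothetical sketch with boto3; the ARN and component names are invented, and it glosses over building and registering the components themselves.

```python
import boto3

# Hypothetical fleet rollout with AWS IoT Greengrass v2: one deployment targeting
# a thing group pushes the listed components to every restaurant core device
# registered in that group. ARN and component names are made up for illustration.
ggv2 = boto3.client("greengrassv2", region_name="us-east-1")

response = ggv2.create_deployment(
    targetArn="arn:aws:iot:us-east-1:123456789012:thinggroup/restaurants",
    deploymentName="kitchen-apps-rollout",
    components={
        "com.example.KitchenForecaster": {"componentVersion": "1.2.0"},
        "com.example.FryerTelemetry": {"componentVersion": "0.9.1"},
    },
)
print(response["deploymentId"])
```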
Picking standard open source starting places seems like a more than obvious move.
Saying you want to invest in ongoing, intense, data-driven store innovation, then building the whole thing atop a platform that you cannot rely on (it may get discontinued, the price may become huge, it may become a barrier to technical innovation) and that you don't control, seems like an obviously bad move.
Finding smart people, rolling up your sleeves, & recognizing this as a core competency, an enabler, a driver of your business, & not outsourcing the problems, is the right move. If future teams do a better job building edge Kubernetes, there should also be good portability.
The lock-in isn't a negative, it's just a cost. If you didn't build it yourself, that was a time and expertise savings. If they go away, you either just switch to the other vendor, or you have to pay for the time and expertise now, which you would have been paying anyway if you hadn't used the vendored solution to begin with.