> Imagine if the compiler could work out that one program has a ratio of processing of 2:1 then it could scale out automatically.
Don't think that's possible, as there are rarely programs where the compute-to-IO ratio is constant. Even the simplest web API will have some calls that take more CPU and some calls that just wait on the network somewhere.
It would be nice to have tooling to figure out, for scaling purposes, whether an app is "just" idling or waiting on the network; we already have that for disk in the form of IO wait.
Then again, Go's solution of "make threads so light you can just run 100,000 of them to fill the CPU, even if each of them spends a lot of time waiting on the network" works well enough. There is also, of course, the async/event-driven approach, but that generally leads to code with worse readability (at best similar) and more annoying debugging.
I am currently investigating coroutine transpilation with async/await, similar to the Protothreads implementation that uses switch statements. I am trying to do it all in one loop rather than with re-entrant functions as in Protothreads.
Essentially I break the code up around the async/await points and put the different parts into a switch. I *think* it's possible to handle nested while loops inside these coroutines.
The problem I don't know how to handle is when coroutine A calls coroutine B and B calls A. I get the growing-state problem I'm trying to avoid; I want a fixed-size coroutine list.
I think some microservices take more CPU than others, so you can scale one microservice out more than another based on CPU usage.
Another of my benchmarks is multithreaded message generation, sending messages between threads in a thread-safe way. I can get 20-100 million messages sent per second when sending in batches.