There are open source versions of everything done within a GCP API call, but it takes multiple machines and a lot of data to build an NLP model that's as fast and accurate as GCP's, and cloud computing is relatively new compared to OCR.
There are? Can you give a list of pointers, or at least what to look for?
I was looking for an OCR that can read license plates while the car is moving, for a hobby project. The image quality is less than perfect, the lighting is never very good, and since the camera is mounted on my side window, all plates have a perspective transformation applied (e.g., the top line and baseline are essentially never parallel).
Tesseract fails miserably. Trying to help it, I have not found a good open source project that would consistently equalize color pictures to black-and-white - sometimes there's shadow on the plates that foils all simple attempts.
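(For reference, the kind of preprocessing I mean is a locally adaptive threshold: compare each pixel against the mean of its own neighbourhood rather than one global cutoff, so a shadow across half the plate doesn't wipe out the characters under it. A minimal sketch in plain TypeScript over canvas pixel data; the function names, window size and offset are illustrative, not from any particular library:)

    // Locally adaptive mean thresholding over a grayscale buffer.
    // Each pixel is compared to the mean of its own neighbourhood, so uneven
    // lighting and partial shadow are handled better than by one global cutoff.
    function adaptiveThreshold(
      gray: Uint8ClampedArray,  // one byte per pixel, row-major
      width: number,
      height: number,
      blockSize = 15,           // odd window size, in pixels
      offset = 10               // how much darker than the local mean counts as "ink"
    ): Uint8ClampedArray {
      const out = new Uint8ClampedArray(gray.length);
      const r = Math.floor(blockSize / 2);
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          // Mean of the surrounding window, clamped at the image edges.
          let sum = 0, count = 0;
          for (let dy = -r; dy <= r; dy++) {
            const yy = y + dy;
            if (yy < 0 || yy >= height) continue;
            for (let dx = -r; dx <= r; dx++) {
              const xx = x + dx;
              if (xx < 0 || xx >= width) continue;
              sum += gray[yy * width + xx];
              count++;
            }
          }
          // Dark relative to its own neighbourhood -> foreground (0), else background (255).
          out[y * width + x] = gray[y * width + x] < sum / count - offset ? 0 : 255;
        }
      }
      return out;
    }

    // Producing the grayscale buffer from canvas RGBA ImageData:
    function toGray(img: ImageData): Uint8ClampedArray {
      const gray = new Uint8ClampedArray(img.width * img.height);
      for (let i = 0; i < gray.length; i++) {
        const o = i * 4;
        gray[i] = 0.299 * img.data[o] + 0.587 * img.data[o + 1] + 0.114 * img.data[o + 2];
      }
      return gray;
    }

(OpenCV's adaptiveThreshold does the same thing much faster, but even this naive version shows the idea.)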
And yet, GCV needs no parameters, and seems to do this perfectly on the images I've tried.
So, assuming I'm willing to put in the time - how do I build my own GCV, even if it's just for the hobby use case of reading license plates (and, as the next stage, reading house numbers, which GCV does reasonably well, although it is a much, much harder problem)?
Training the model would be computationally intensive, but deploying it with TensorFlow.js and predicting a single datapoint in the browser shouldn't be nearly as much, right?
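Something like this is what I have in mind for the browser side; a sketch only, with the model URL, input size and output meaning all as placeholders:

    // Single-datapoint inference in the browser with TensorFlow.js.
    // Assumes a model trained elsewhere and exported in the TF.js layers format;
    // the URL and the 64x128 input size are placeholders.
    import * as tf from '@tensorflow/tfjs';

    async function scorePlateCrop(img: HTMLImageElement): Promise<Float32Array> {
      // In a real app you'd load the model once and reuse it across predictions.
      const model = await tf.loadLayersModel('https://example.com/plate-model/model.json');

      const logits = tf.tidy(() => {
        const input = tf.browser.fromPixels(img)   // HWC uint8 tensor from the <img>
          .resizeBilinear([64, 128])                // whatever size the model was trained on
          .toFloat()
          .div(255)
          .expandDims(0);                           // add batch dimension: [1, 64, 128, 3]
        return model.predict(input) as tf.Tensor;
      });

      const scores = (await logits.data()) as Float32Array;
      logits.dispose();
      return scores;
    }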
There are ML models that are so computationally intensive that they can't reasonably run on the edge. AI accelerator chips obviously help move the line, but AI accelerators benefit the cloud, too. Furthermore, models can be tens to hundreds of megabytes in size. Okay for the cloud, not okay for wasm running in the browser.
The embedded assumption is that current demand is sustainable, rather than simply the working down of a multi-year backlog of orders from fanatics.
Embedded inside that assumption are a lot of further assumptions about ramping production with consistent quality and finish, and about a lack of EV competition.
I personally disagree that Tesla can maintain production and quality at these levels without needing more financing, but the verdict is definitely not in yet.
Have the people who complain about JavaScript page load performance read about netflix.com? Netflix uses JS to render their landing page server-side; no client JS is necessary. How can a page load faster than raw HTML?
You do realise that Netflix doesn't need to depend on or load adverts and mass-tracking scripts? And since the company itself is tech-based, its sales team doesn't get the mighty power to force the devs to insert 40 tracking/advert scripts and annoying popups. P.S. Netflix makes ~17 AJAX calls immediately after loading to hydrate its SSR'd page ;)
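To make the pattern concrete, it's roughly this shape (a sketch only, not Netflix's actual code; the markup and endpoint names are invented): the server sends fully rendered HTML that paints immediately, and a small script fetches the personalised bits afterwards.

    // Sketch of server-rendered HTML plus post-load "hydration" fetches.
    import express from 'express';

    const app = express();

    app.get('/', (_req, res) => {
      // Fully rendered markup: the browser can paint this with zero client JS.
      res.send(`<!doctype html>
    <html>
      <body>
        <h1>Watch anywhere. Cancel anytime.</h1>
        <div id="rows">Loading rows...</div>
        <script>
          // The "~17 AJAX calls" part: extra data arrives after first paint.
          fetch('/api/billboard')
            .then(r => r.json())
            .then(rows => {
              document.getElementById('rows').textContent = rows.join(' | ');
            });
        </script>
      </body>
    </html>`);
    });

    app.get('/api/billboard', (_req, res) => {
      res.json(['Trending Now', 'Top Picks', 'New Releases']);
    });

    app.listen(3000);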
I'm not making that argument, though that is often the reaction I get.
My argument is that you should understand when you're choosing to compromise the user experience, why, and how compromised it is. For example, you should know roughly how much worse the user's battery life is because of your decision. That makes the decision an informed one, based on your goals, budget, time to market, and so on.
To claim that a decision born of necessity comes with no compromises is a delusion. That delusion might not kill you, but it points toward muddled thinking. That's dangerous at the best of times, doubly so for a startup.
This is such an interesting counterpoint. So really there needs to be a balance in tool choice: an optimum between deliverable UX and enough DX that devs can stay sane enough to get the product built in the first place, so the UX can exist at all.
Surely we're not going to start arguing that the ultimate-UX apps are the ones hand-written in assembly (at the most absurd end of the argument ;-) )
Similar, but harder to use safely and correctly in class methods with the new class syntax. There's more surface area for typos and other mismatch errors, since you have to type the method names twice. See: https://mobx.js.org/best/decorators.html
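For anyone not familiar with it, the non-decorator form that page describes looks roughly like this (decorate from MobX 4/5, which is what the linked page covers; makeObservable in MobX 6 has the same shape). The "type the name twice" problem is in the second half:

    import { observable, action, decorate } from 'mobx';

    // With decorators, the annotation sits on the member itself:
    //
    //   class Timer {
    //     @observable secondsPassed = 0;
    //     @action increase() { this.secondsPassed += 1; }
    //   }

    // Without decorators, every member name is repeated in a separate map, which
    // has to be kept in sync with the class body by hand (in plain JS a typo like
    // "increse" just silently leaves the method unwrapped).
    class Timer {
      secondsPassed = 0;
      increase() {
        this.secondsPassed += 1;
      }
    }

    decorate(Timer, {
      secondsPassed: observable,
      increase: action, // the second time the name is typed
    });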