"Docker+Wasm" is just a shorthand for the Technical Preview build, which allows you to build both traditional container apps, as well as Wasm apps. Behind the scenes, we try to let Wasm apps be developed largely without interference from any container technology — just giving you a good local environment you can use to code against. That said, if you want, we do offer the ability to run Wasm apps within a Docker Compose application. We do also offer the possibility to package Wasm apps within an OCI image, with an embedded Wasm runtime (WasmEdge) so you can a) easily share these via an image registry like Docker Hub, AWS ECR, etc. and b) easily run this anywhere you’d run a container. That said it’s not mandatory, and if you want the benefits of (a) without the benefits of (b) you can easily unpack the image to just get the Wasm payload and run that however you want. We dove into the details of the approach at Kubecon today, and the video should be coming out shortly.
I'm still confused. This is the big thing I'm not really getting:
> which allows you to build both traditional container apps, as well as Wasm apps
I can already do that. Using Rust for the sake of example: `cargo build` can give me a WASM binary, and `docker build` can give me a container. Is Docker+WASM going to replace `cargo build`? Or is it going to wrap the WASM binary produced by cargo in another layer of abstraction? If the latter, how is this new layer of abstraction different from just using one of the WasmEdge docker containers [0]?
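For concreteness, the workflow I already have looks like this (assuming the wasm32-wasi target is installed):

```bash
# Today's Docker-free workflow:
rustup target add wasm32-wasi
cargo build --target wasm32-wasi --release
# -> target/wasm32-wasi/release/myapp.wasm (name is illustrative)
```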
I'm not trying to be combative, I'm just sincerely confused at what problem this technical preview is intended to solve.
> I'm just sincerely confused at what problem this technical preview is intended to solve.
This is all good feedback, and we’ll definitely try to explain the added value better in the future. The main advantages we see in this technical preview are:
1. Easy, reproducible dev environment to quickly & reliably develop cloud/edge apps that target Wasm, or code frontend apps that target a Wasm backend (for example, as part of a microservice architecture). This is particularly helpful if you build apps that have a mix of Wasm & container components[0] (see the sketch after this list)
2. Easy way to share & deploy Wasm artifacts, using trusted infra like Docker Hub, but also Dockerfiles and Docker Compose
3. Transparent, reliable way to deploy Wasm applications to existing container-based infrastructure such as k8s (via OCI images) — but these apps can also be “unpacked” to run natively on edge infrastructure
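To illustrate point #1: a Compose file that mixes a Wasm service with an ordinary container might look roughly like this. (A sketch, not the exact demo file; names, images, and ports are illustrative, while the `runtime` and `platform` values are what the preview uses.)

```yaml
# Illustrative sketch: one Wasm service next to a regular Linux container.
services:
  server:
    image: example/wasm-server          # placeholder Wasm image
    platform: wasi/wasm32               # marks the artifact as Wasm
    runtime: io.containerd.wasmedge.v1  # hand execution to the WasmEdge shim
    ports:
      - "8080:8080"
  db:
    image: mariadb:10.9                 # an ordinary Linux container
    environment:
      MARIADB_ROOT_PASSWORD: example
```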
> is it going to wrap the WASM binary produced by cargo in another layer of abstraction? If the latter, how is this new layer of abstraction different from just using one of the WasmEdge docker containers [0]?
Our approach is close to this. First, it was built with the WasmEdge folks, so you’re correct to detect the similarity. Second, it does wrap the resulting artifacts into an OCI image, because we believe that can generate a lot of advantages (points #2 and #3 above) BUT you can also easily unpack the Wasm payload from the image at deploy time/runtime if you’d rather deploy your app on Wasm-native infrastructure (as opposed to container-native infra).
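As a sketch of that unpack path (the image name and module path are hypothetical, and depending on the build you may also need to pass the Wasm runtime flag to `docker create`):

```bash
# Create a stopped container from the image, copy the .wasm module out,
# then run it directly on a Wasm-native runtime, no container involved.
docker create --name unpack --platform=wasi/wasm32 example/wasm-server
docker cp unpack:/app.wasm ./app.wasm
docker rm unpack
wasmedge ./app.wasm
```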
I think you are falling into a “deeply-technical explainer” trap. You’ve been so deeply engrossed in the use-cases of this novel technical combo that they are all assumed understood from your POV.
And, this tech is so general that there’s a second trap of “What can it do? It can do anything! Sure, but what can it do??” Again, because in your head “anything” is pre-supposed to a narrow set of goals that this tech fulfills very nicely. It cannot, for example, take my dog for a walk. So, it can’t do “anything” ;p It’s hard to get out of that head-space because you’ve been so deep in it for so, so long.
But, I and 85% of people here have no idea what narrow set of goals Docker+Wasm fulfills. Something about apps and security something something. Mostly for servers probably.
Some awesome-fit exemplar use cases would help a lot.
What's confusing to me is the "main advantages" Tim describes are the same as Docker, so I'm left wondering what's different about it from Docker? The only thing I can parse out of it that's different is 'apps can also be “unpacked” to run natively on edge infrastructure', but I'm not entirely sure what that means.
My best guess is that Tim is trying to say, "now can run your Docker apps on AWS Lambda"?
AWS announced support for containers on Lambda last year[0]
> the "main advantages" [redacted] describes are the same as Docker
Yup, that’s it! If you value Docker to build container apps, we think this will help you build Wasm apps in the same way, and the only container-centric abstraction this Technical Preview uses (packaging artifacts as OCI images) can be bypassed, if you prefer to deploy your artifacts as native Wasm binaries. The latter can be helpful if you are trying to get the full speed & efficiency benefits of a Wasm-native deployment, as the shim in our OCI package introduces a small performance penalty.
"Docker images can now be deployed directly to the WASM runtime! This means your AWS Lambdas, Cloudflare Workers, etc. will boot faster and cost less..."
When you rehash what Docker already does, it's watering down the messaging. Even adding "AWS announced support for containers on Lambda last year" in the last reply made the voice in my head ask again, "What's different about it? How is it better?"
That’s not quite right. You can’t take an existing container app and just "export" it as Wasm. (Technically you might, but it would require a pretty big re-architecture and re-write, as Wasm doesn’t support garbage collection or multithreading at the moment. It also requires you use a language that can be compiled to Wasm, which can be limiting. Due to this, Wasm — at this stage — is probably best fitted to functions rather than full apps, although that is changing quickly.)
What you can do, however, is build apps for Wasm (or apps that combine Wasm and containers) with the same ease you currently enjoy when building pure container apps, i.e. see my comment above [0]
I think what's missing is why you'd want to do that.
If I have a rust app on a scratch image, why would I want to turn it into a wasm container?
My assumption is because wasm can run on multiple platforms (x86 and arm) so one image supports both, is that correct? Are there other reasons not as obvious?
Assuming you had code that somehow could be packaged either as a Linux container or as a Wasm binary, then the advantages of the latter would be that, yes, Wasm supports multiple CPU architectures out of the box; it also consumes fewer resources (memory, etc.), will usually have faster start times, and the Wasm security sandboxing is stronger.
Really? I would not have expected that. Is that just under the assumption that most apps have an underlying OS (like alpine) and aren't on a scratch container?
What are the limitations of this? OK, for example, can you deploy postgres in the browser with this? Can you have a full OS container running in the browser, say an ubuntu shell frontend with some javascript and an ubuntu container running in the browser?
The first impression from 'Docker + WASM' surely sounds like that is what it is, but after about 30 mins, I am not so sure that is actually the case.
I have some feedback, though more general: can you please stop pushing your flavor of Docker Dev Environments that only works with your proprietary application and get behind the https://containers.dev standard AKA Dev Containers?
You are going to lose this battle because your Dev Environments don't have a lot of buy-in, and you're not standardized or open-specced. I'd love to see Docker take a more OSS-friendly approach.
So if I understand you correctly, you want Docker to be a trusted host for wasm binaries as well as container images? Where the tooling you've made is intended to aid users along this path.
It sounds like you're anticipating a new market segment for artifact deployment and want to be its primary service provider.
This seems like a good way to muddle the remaining value prop that docker has. I have zero idea why I'd want wasm via docker tooling vs what exists, especially as people move more and more to not-docker for building and running their containers. I think I see what someone is trying to do, but I don't know any dev looking for this or having a problem solved by it.
> I don't know any dev looking for this or having a problem solved by it.
Judging by the reception at KubeCon & elsewhere today, we think at least some folks are excited by it. But it’s still early, and who knows, you may be right in the end. We launched this as a technical preview to test a hypothesis and learn from it, and so far the interactions from this HN thread alone have been greatly helpful.
I tried to answer this above[0]. Instead of trying to explain it again, I’d encourage you to give it a try[1], and if after going through the 5-minute tutorial you still don’t get the point then a) maybe we messed up (and I’ll be sorry for having wasted your time!) or b) maybe it’s not for you (and I’ll also be sorry I wasted your time). It took me a while to wrap my head around this Docker+Wasm thing too when I first heard about it internally — then again it took me months to wrap my head around my first demo of Docker, so maybe I’m just dense!
It might be easier if you simplify it to, "We're trying not to be just the app you choose to run containers with. You can now use non-container runtimes like WASM."
So docker+WASM does not build a Linux container image with the WASM application inside; rather, it builds a WASM application packaged as a docker image and started with `docker run`, instead of (say) an ELF or Windows or Mac binary started with the OS exec, whatever it is?
If this is the case I'd leave docker out of the name of the technology. I can imagine the confusion. At least one customer of mine doesn't fully get the difference between a docker image and a VM yet, after years of using docker in production.
> So docker+WASM does not build a Linux container image with the WASM application inside but it builds a WASM application packaged as a docker image
With this preview, we are leveraging the OCI specification that defines how to build an image. Linux/Windows containers are the most common use of this specification, but many other types of artifacts exist (OPA policies, Helm charts, etc.).
When using any of these other artifact types, tooling has to know how to use that specific artifact type and run it (e.g., extract the image and run it as a container, or extract the Helm chart and deploy it). In our Wasm use case, we are doing the same thing: package and ship a Wasm module, and we'll extract it and run it on the new Wasm runtime. That runtime then "converts" the Wasm module into native machine code for the OS you're running on.
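One way to see this, assuming an image built with the preview (name hypothetical): the image's platform metadata identifies the artifact as Wasm rather than a Linux container.

```bash
# Wasm images carry wasi/wasm32 in their platform fields
# instead of, say, linux/amd64 (image name is a placeholder).
docker image inspect example/wasm-server \
  --format '{{.Os}}/{{.Architecture}}'
# should print: wasi/wasm32
```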
> If this is the case I'd leave docker outside the name of the technology. I imagine the confusion.
That's great feedback! While most know us as "the container company", our mission doesn't even talk about containers. We want to help all developers succeed by reducing app complexity. We can certainly do more to help educate folks between the different types of workloads you might be running. We're still very early on in this process, so stay tuned (and keep the feedback coming)!
'Docker + WASM' definitely gave me the impression that docker is now leveraging wasm to be able to run your normal container workloads directly in the browser.
Like if you have any docker container, you can now take that, with some modification, and run it directly in the browser. Reading further, I think this is totally not what it actually is.
You could say 'Docker for WASM applications' or 'Docker to deploy WASM apps'; that would make the relationship clearer, I think.
That’s our experience as well… This Technical Preview is an early down payment, and we’re definitely looking for feedback on how one could make the Wasm development experience better!
Great question! In the demo app we showed yesterday (source here - https://github.com/second-state/microservice-rust-mysql), the Dockerfile is leveraging a multi-stage build where the first stage builds the Wasm module and the second stage extracts the Wasm module into a "FROM scratch" image.
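Stripped down, the shape of that Dockerfile is roughly the following. (A simplified sketch of the repo linked above; crate and file names are illustrative.)

```dockerfile
# Stage 1: compile the Rust source to a wasm32-wasi module.
FROM rust:1.64 AS build
WORKDIR /src
COPY . .
RUN rustup target add wasm32-wasi && \
    cargo build --target wasm32-wasi --release

# Stage 2: ship only the Wasm module, no OS layers at all.
FROM scratch
COPY --from=build /src/target/wasm32-wasi/release/server.wasm /server.wasm
ENTRYPOINT [ "/server.wasm" ]
```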
In Compose, the "server" service is running using the image that is produced by that build. It's then handed off to the new Wasm runtime, where the module is extracted and executed. Hope that helps! Feel free to follow up with more questions!
Long live Wasm indeed! Obviously we feel a bit differently about Docker and k8s being in the past — Docker is used by 68% of professional developers according to the latest SO survey[0], and k8s is still growing in popularity at 28%. But obviously the technology landscape changes rapidly, and maybe one day (we hope) Wasm will be at 28%, 68% or higher. We’re frankly just excited about the possibilities, and wanted to help along the way :)
Hi Tim. The stats logic would probably apply as well for gas car makers vs electric ones, or any field for that matter that might be at risk of disruption in a perceivable timeframe (but is not disrupted just yet).
Don’t get me wrong though, I have tons of admiration for Docker (in fact, Solomon Hykes is an investor in Wasmer) and the great ergonomics you introduced along the way to help developers and reach the current status quo. Without you guys we would probably have reached the cloud advancements much later in time. However, we paid for those advances with an order of magnitude more complexity in other layers (with the likes of cloud providers profiting from it).
But now I sincerely believe we need more powerful abstractions for the edge, serverless and Web 3.
In any case, I’m incredibly excited that you are researching more into WebAssembly. That’s great for the ecosystem and also will help to bring more devs onboard. Thanks for all the work!
Broadly, we agree. The goal of this Technical Preview is not to encourage Wasm to be mediated by containers in production, but rather to enable people to locally build & package Wasm apps easily. In production, that could look like a number of different scenarios, from "bare metal" edge (just running in a Wasm VM), to running your Wasm workloads in a nomad/k8s cluster if that’s what you need (e.g. if you want hybrid container/wasm orchestration).
Hey Tim, I appreciate your willingness to engage. What I'm getting at is a bit more pointed. In terms of composability, Docker and WASM are opposites: Docker requires a stand-alone runtime, and an orchestration layer, etc., whereas a WASM module & runtime can be embedded directly into my application, into a browser, or into (comparatively) simple, time-tested tools like Apache[0].
So, to your point, the opportunity here is to provide WASM-native alternatives to existing container-specific technologies, that are more composable by design, because they don't have to deal with the complexity of trying to ape an extra operating system just to run some software.
For example, I'd love to see container orchestration be supplanted by something like a la carte Erlang-style service discovery—that's a primitive that could easily be composed with other primitives, and wouldn't result in the combinatorial explosion of nouns we see in systems like k8s / Swarm / others.
No disagreement here! Just to contextualize: Docker — the company as it exists today — is singularly focused on the development experience, i.e. the inner loop of code/test/build, not the outer loop of deploying to production (which is largely controlled by cloud platforms and k8s at this point). I know that separation can seem arbitrary, considering containers are largely successful because they help bridge the two, but that’s just the reality of what we’re focused on in our daily jobs @ Docker.
Within that framework, we see Wasm as extremely compatible with our goals of improving the local development experience, and yes, giving people alternatives to container-centric approaches. I’m personally inclined to agree with the points you are making about opportunities in orchestration, but we’re starting today by just trying to give people a solid toolset that lets you iterate on your Wasm apps locally and easily export the resulting artifacts, so you can deploy them as you see fit. In the process we try to be careful about shedding any container-centric assumptions, while porting over some of the wins of the Docker tooling that we think translate well to Wasm (easy local dev environment, standard artifacts, broad platform compatibility across Windows/Linux/M1, etc.). We will happily work with anyone interested in improving the production/deployment landscape for Wasm; in fact, I would say the main reason that drove us to launch this technical preview today was to attract feedback on how the Wasm community (ourselves included) could best deliver an alternative path to production for applications going forward.
Hope this makes sense and that I understood your point accurately!
> Hope this makes sense and that I understood your point accurately!
I'm happy to hear all of that and, yes, I believe you did.
> Docker [...] is singularly focused on the development experience, i.e. the inner loop of code/test/build, not the outer loop of deploying to production (which is largely controlled by cloud platforms and k8s at this point).
I would push back on this a bit: although it may not be a focus, and that's totally fair, deployment is undeniably part of the developer experience, particularly within the context that containers were born into (i.e. DevOps values of continuous deployment & no silos). I certainly wouldn't presume to tell you what to do, but I would suggest that Docker is uniquely situated to address developer experience from end to end.
> We will happily work with anyone interested in working with us to improve the production/deployment landscape for Wasm
If you gave us a path out of containers for the end-to-end developer experience, I believe many of us would be eternally grateful. Let me know how I can help.
The amazing react-native-camera plugin! [0] I’m still getting a few camera-related crashes on Android right now, but overall I would say it makes things pretty smooth!
Great question — I did not, because I had unfortunately spent all of my data on that last training run, and I did not have an untainted dataset left to measure the impact of quantization on. (Just poor planning on my part really.)
It’s also my understanding at the moment that quantization does not help with inference speed or memory usage, which were my chief concerns. I was comfortable with the binary size (<20MB) that was being shipped and did not feel the need to save a few more MBs there. I was more worried about accuracy, and did not want to ship a quantized version of my network without being able to assess the impact.
Finally, it now seems that quantization may be best applied at training time rather than at shipping time, according to a recent paper by the University of Iowa & Snapchat [0], so I would probably want to bake that earlier into my design phase next time around.
Thanks! Haven't seen that paper, I'll check it out. I think quantization only helps with inference speed if the network is running on CPU, with negligible gains on GPU (Tensorflow only supported CPU on mobile last I looked, which was a while ago). However, your app is already super fast, so I don't think anyone would notice if it was marginally faster at this point!
While we’re here and chatting about this, I should say most of the credit for this app should really go towards the following people:
Mike Judge, Alec Berg, Clay Tarver, and all the awesome writers that actually came up with the concept: Meghan Pleticha (who wrote the episode), Adam Countee, Carrie Kemper, Dan O’Keefe (of Festivus fame), Chris Provenzano (who wrote the amazing “Hooli-con” episode this season), Graham Wagner, Shawn Boxee, Rachele Lynn & Andrew Law…
Todd Silverstein, Jonathan Dotan, Amy Solomon, Jim Klever-Weis and our awesome Transmedia Producer Lisa Schomas for shepherding it through and making it real!
Our kick-ass production designers Dorothy Street & Rich Toyon.
Meaghan, Dana, David, Jay, Jonathan and the entire crew at HBO that worked hard to get the app published (yay! we did it!)
My takeaway is that local development has a huge developer experience advantage when you are going through your initial network design / data wrangling phase. You can iterate quickly on labeling images, develop using all your favorite tools/IDEs, and dealing with the lack of official eGPU support is bearable. Efficiency-wise it’s not bad. As far as I could tell the bottleneck ended up being on the GPU, even on a 2016 MacBook Pro with Thunderbolt 2 and tons of data augmentation done on CPU. It’s also a very lengthy phase, so it helps that it’s a lot cheaper than cloud.
When you get into the final, long training runs, I would say the developer experience advantages start to come down, and not having to deal with the freezes/crashes or other eGPU disadvantages (like keeping your laptop powered on in one place for an 80-hour run) makes moving to the cloud (or a dedicated machine) become very appealing indeed. You will also sometimes be able to parallelize your training in such a way that the cloud will be more time-efficient (if still not quite money-efficient). For Cloud, I had my best experience using Paperspace [0]. I’m very interested to give Google Cloud’s Machine Learning API a try.
If you’re pressed for money, you can’t do better than buying a top of the line GPU once every year or every other year, and putting it in an eGPU enclosure.
If you want the absolute best experience, I’d build a local desktop machine with 2–4 GPUs (so you can do multiple training runs in parallel while you design, or do a faster, parallelized run when you are finalizing).
Cloud does not quite totally make sense to me until the costs come down, unless you are 1) pressed for time and 2) will not be doing more than 1 machine learning training in your lifetime. Building your own local cluster becomes cost-efficient after 2 or 3 AI projects per year, I’d say.
I have used the AWS machine learning API and would recommend it. The time savings using that vs running it on my hacked together ubuntu-chromebook-mashup is worth more than what I had to pay.
I have also used Paperspace. My only issue was that whatever they use for streaming the virtual desktop to the browser didn't work over a sub-4MB/s network connection.
Yes, that’s what you see in the picture, although as completely personal advice, I would stop short of recommending it. For one, there are arguably better cases out there now, and you can sometimes build your own eGPU rig for less. Finally, the Mac software integration (with any eGPU) is very hacky at the moment despite the community’s best efforts, and I had to deal with a lot of kernel panics and graphics crashes, so overall I’m not sure I would recommend others attempt the same setup.
Well for a while I was lulled into complacency because the retrained networks would indicate 98%+ accuracy, but really that was just an artifact of my 49:1 nothotdog:hotdog image imbalance. When I started weighting proportionately, a lot of networks were measurably lower, although it’s obviously possible to get Inception or VGG back to a “true” 98% accuracy given enough training time.
That would have beaten what I ended up shipping, but the problem of course was the size of those networks. So really, if we’re comparing apples to apples, I’ll say none of the “small”, mobile-friendly neural nets (e.g. SqueezeNet, MobileNet) I tried to retrain did anywhere near as well as my DeepDog network trained from scratch. The training runs were really erratic and never really approached an upper bound asymptotically as they should have. I think this has to do with the fact that these very small networks contain data about a lot of ImageNet classes, and it’s very hard to tune what they should retain vs. what they should forget, so picking your learning rate (and possibly adjusting it on the fly) ends up being very critical. It’s like doing neurosurgery on a mouse vs. a human I guess — the brain is much smaller, but the blade stays the same size :-/
Ha, you have no idea how hard Chicago hotdogs made my life! There was a joke in the show about Dinesh having to stare at a lot of “adult” imagery for days on end to tune his AI, but my Waterloo was Chicago hotdogs — the stupid pickles end up hiding the sausage more often than not, which makes it hard to differentiate them from, say, a sandwich.
Less interesting than you'd expect, as it was for a rapid mobile app prototyping class.
We had a telesync'd demo that let you play along with a Jeopardy episode by yelling answers at your phone. The app knew the timing markers for when the question was asked + when a contestant answered, so would only give you credit if you beat the contestant with the correct answer.
Our model user was "people who yell answers at the screen when Jeopardy is on."
Still think it would have made a decent companion app to the show though...
Trebek's elocution is just something you pick up on after rewatching an episode enough times. He has really interesting ways of emphasizing things, but they seem normal if you're just listening to them once through.