Getting Started with Firecracker on Raspberry Pi (dev.l1x.be)
85 points by sairamkunala on Nov 23, 2020 | 25 comments


Fun article!

I really don't like that diagram. The folks doing serverless work at University of Wisconsin-Madison generally do a good job, but they really whiffed on that one.

I don't think there's a reasonable sense in which KVM/QEMU moves more functionality to the guest kernel than KVM/Firecracker does. Both depend on the host kernel for VM setup (KVM), low-level virtualization operations (KVM), and doing the actual I/O below their paravirtualized device drivers. On the other end, Linux containers don't have a guest kernel (unless you use some kind of library OS). So those boxes should collapse together. If you look at it one way, gVisor is a guest kernel, which depends on the host kernel in fundamentally similar ways to what Firecracker or QEMU do.
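To make the shared dependency concrete: the first thing any KVM-based VMM does, whether it's QEMU or Firecracker, is ask the host kernel (via /dev/kvm) to create the VM. A rough Go sketch of just that step (Linux-only; the ioctl numbers come from <linux/kvm.h>, and everything else a real VMM does is omitted):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // ioctl request numbers from <linux/kvm.h>: _IO(0xAE, nr).
    const (
        kvmGetAPIVersion = 0xAE00
        kvmCreateVM      = 0xAE01
    )

    func kvmIoctl(fd, req, arg uintptr) (uintptr, error) {
        r, _, errno := syscall.Syscall(syscall.SYS_IOCTL, fd, req, arg)
        if errno != 0 {
            return 0, errno
        }
        return r, nil
    }

    func main() {
        // Ask the *host* kernel to create a VM; QEMU and Firecracker
        // both start exactly here.
        kvm, err := os.OpenFile("/dev/kvm", os.O_RDWR, 0)
        if err != nil {
            panic(err)
        }
        defer kvm.Close()

        ver, err := kvmIoctl(kvm.Fd(), kvmGetAPIVersion, 0)
        if err != nil {
            panic(err)
        }
        fmt.Println("KVM API version:", ver) // 12 on any modern kernel

        vmFd, err := kvmIoctl(kvm.Fd(), kvmCreateVM, 0)
        if err != nil {
            panic(err)
        }
        fmt.Println("created VM, fd:", vmFd)
        // vCPUs, guest memory, and IO are all set up through further
        // ioctls against fds the host kernel hands back.
    }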


> moves more functionality to the guest kernel

I took the diagram to just mean the whole guest, not just the kernel. In the QEMU case, the guest has to interact with the BIOS and other emulated peripherals. Firecracker skips the whole BIOS layer, so it does operate at a different layer.


Unrelated to the content: I dunno what the author/site admin has done, but that was one of the fastest-loading web pages I've ever seen on my phone.


Mostly these: Hugo / Tachyons / AWS S3 / CloudFront / compression.

The biggest impact comes from having no JS and not shipping a giant amount of HTML/CSS. That keeps the pages very small. The syntax highlighting is rendered at compile time.
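For the curious, Hugo's built-in highlighter (Chroma) is what does this at build time. A minimal sketch of the same idea in Go, where the lexer/formatter/style names ("go", "html", "monokai") are just example choices rather than the site's actual config:

    package main

    import (
        "os"

        "github.com/alecthomas/chroma/quick"
    )

    func main() {
        src := `fmt.Println("hello")`
        // Highlight once, at build time, and ship the resulting static
        // HTML instead of sending a JS highlighter to every visitor.
        if err := quick.Highlight(os.Stdout, src, "go", "html", "monokai"); err != nil {
            panic(err)
        }
    }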


There's no JavaScript and only a small number of CSS dependencies. Uncached, it's a 239 kB transfer; with caching, only a 3.1 kB transfer.

Interestingly, HN's front page is an order of magnitude smaller in uncached transfer size. It does include some JS, but its lower perceived snappiness seems to come down to longer round-trip times (for me, anyway: ~60 ms for HN vs ~10 ms for dev.l1x.be).


You might find this interesting: https://news.ycombinator.com/item?id=25151773


To the haters: low-barrier-to-entry tinkering is the sort of thing the Raspberry Pi is built for. As a semi-practical example use case, my Raspberry Pi 3B runs Shairport and Pi-hole in separate Docker containers, which has been a breeze to configure and has no noticeable slowness. Just being able to try new containers and services without having to worry about messing up your root environment (which would take re-imaging the SD card, among other things) makes it worth doing for my use. I think if you're comparing it to a production workload, you're in the wrong ballpark.


What haters?


This OTHER Firecracker is mostly obsolete now, but is still being sold. One could easily interface it to an RPi as well. https://www.x10.com/cm17a.html


I could see this being useful on a Pi 4 Compute Module (maybe), but all my work with Pis is on the Zeros. There just doesn't seem to be enough spare processing headroom to justify the simplification of deploying services instead of an imaged OS.


For KVM virtualization (and Firecracker in particular), a 64-bit (aarch64) kernel is needed. So the Zero will not work anyway.


Nice. Any reason this wouldn't work on 64-bit Raspbian?


None.


The Pi is not exactly a speed demon. I do not see much point in running containers on it. I'd rather assemble a cluster and treat each Pi as a container.


Containerisation makes it a lot easier to manage a fleet of Raspberry Pis being used as appliances. Rather than having to manage in-place updates, including potentially updates across multiple versions, or rolling back to previous releases, you simply push a new container image and then ask the supervisor layer to switch to it.


I run a couple of my apps on Raspberry Pi. As soon as I release a new version, they're automatically pulled from a common source and restarted. I do not see any difficulty here at all, as I do not have to lift a finger once I've pushed the new code. And I can revert to the old version just as easily. I've been doing these things (not on Raspberry Pi, of course) long before CI/CD became a buzzword. One of my products ran in thousands of commercial locations across North America starting in 2001, and all updates were completely automatic. Fifteen years of runtime, after which it was retired.


There's nothing stopping you from doing your development, testing, and deployment against the base OS on your Pis.

Even beyond pure learning opportunities, though, there are good reasons to containerize applications even on a system as low-powered as a Raspberry Pi.

For example: if you have a k8s, Docker Swarm, Nomad (disclaimer: I work on the Nomad team @ HashiCorp), etc. orchestrator in place, you can keep workloads up and running even as you take down individual machines. (This can help a lot for OS upgrades, replacing failing hardware, disk upgrades, etc. on a running cluster.)

Likewise, you can develop + test your containers on a more performant or convenient machine (like a PineBook Pro, or VMC on a Chromebook) and deploy to the Pi(s) without having to manually map your FS layout, dependencies, etc. to Raspbian or another Pi-specific distro.

It's not something you _need_ to do, necessarily, but neither does that mean there isn't value, esp. for folks already comfortable with a container-based deployment workflow.


>"...It's not something you _need_ to do, necessarily..."

I completely agree with you that there is value in containers, but everything has a price, and so far I have yet to find a project at a scale that requires all this overhead.

I had no problems building, working with, and deploying containers on Azure when a client required it. Since the client is king, I do as instructed. For my own business, or when I am the one making deployment decisions for a client, I get away without containers.


> I work on the Nomad team @ HashiCorp

I've been setting up Nomad on my Pi cluster. I really like it. But the documentation really isn't up to speed, and there aren't many avenues to get support. Sometimes I regret not doing K8s instead.


I guess then you agree that doing exactly what you just described is a good idea. You just prefer it your way vs. using what is described in the article.


Thank you for explaining the use case to me.


This is exactly the reason we would like to use FC on the RPi (and similar devices).


The Pi 4B is actually pretty powerful and Firecracker is pretty lean. The combination of the two makes it super slick to run FC microVMs on an RPi 4. A ton of possibilities open up once you can run a fully virtualized yet also containerized kernel + app. Once I figure out how to use an arbitrary combination of kernel + rootfs I will post an update about that. We would like to use it primarily for deploying apps securely on a device that lives in an untrusted environment.
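For anyone curious what pairing a kernel + rootfs looks like: Firecracker is configured over its HTTP API on a Unix socket before the microVM boots. A minimal sketch in Go; the socket path and image paths here are placeholders, substitute your own:

    package main

    import (
        "bytes"
        "context"
        "fmt"
        "net"
        "net/http"
    )

    // Placeholder socket path; Firecracker creates it when started with --api-sock.
    const sock = "/tmp/firecracker.socket"

    // put sends one JSON config call to the Firecracker API over its Unix socket.
    func put(c *http.Client, path, body string) error {
        req, err := http.NewRequest(http.MethodPut, "http://localhost"+path, bytes.NewBufferString(body))
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", "application/json")
        resp, err := c.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 300 {
            return fmt.Errorf("%s: %s", path, resp.Status)
        }
        return nil
    }

    func main() {
        // HTTP client that dials the Unix socket instead of TCP.
        client := &http.Client{Transport: &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", sock)
            },
        }}

        // Kernel and rootfs paths are placeholders for whatever images you build.
        calls := []struct{ path, body string }{
            {"/boot-source", `{"kernel_image_path":"/srv/vmlinux","boot_args":"console=ttyS0 reboot=k panic=1"}`},
            {"/drives/rootfs", `{"drive_id":"rootfs","path_on_host":"/srv/rootfs.ext4","is_root_device":true,"is_read_only":false}`},
            {"/machine-config", `{"vcpu_count":1,"mem_size_mib":256}`},
            {"/actions", `{"action_type":"InstanceStart"}`},
        }
        for _, call := range calls {
            if err := put(client, call.path, call.body); err != nil {
                panic(err)
            }
        }
    }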


Speed demonicity and use of containers are mostly orthogonal things.

But Firecracker is VMs, not containers.


A lot of tasks don't need speed; as long as the work is done within a few hours, all is well.



