
I think it's going to start becoming reasonable to package up applications in VMs and distribute those VMs as "appliances" to run, instead of installing software directly in the OS. I think this is going to start happening regularly in the consumer space sooner rather than later (and already has in some cases, like with XP Mode). This is pretty much the modus operandi in the service space today.

There are lots of really good reasons to do this (sandboxing, ease of installation, compatibility, snapshots/state saving, etc.), and VM tech at the consumer level is good enough for most applications. Doing so also lets you distribute the same application for different host architectures relatively easily (swap the virtualization core out for an emulation core).

VM technology basically will allow consumer software vendors to start treating your computer like a set-spec videogame console instead of worrying about millions or billions of possible complications from how your computer is set up. Once VMs in the consumer space get good enough to really run high-end games, imagine everybody just writes to some Valve defined Linux spec that just happens to match some Steam Box, but you can install the VM for that game on your Mac or Windows or whatever and get to gaming.

If this happens, VMs will chew through RAM faster than just about anything out there.

So instead of installing and running Adobe Suite, you start up the Adobe Suite VM and boom, 8GB of your RAM vaporizes. Fire up your web browser VM and boom, there goes another 4GB. Your e-mail client annihilates 4GB more and now we've eaten up 16GB of RAM to run a handful of applications. Open up an MS-Office component and there goes another 8-16GB. Run a non-virtualized legacy app? Why, those all just get sandboxed into an automatic "old shit" VM so the viruses stay out.
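The arithmetic above is easy to sanity-check. A minimal sketch, where the VM names and per-VM reservations are the comment's own illustrative estimates, not real measurements:

```python
# Tally the hypothetical per-VM RAM reservations described above.
# These sizes are illustrative guesses from the comment, not measurements.
vm_ram_gb = {
    "Adobe Suite": 8,
    "Web browser": 4,
    "E-mail client": 4,
}

total = sum(vm_ram_gb.values())
print(f"RAM reserved by app VMs: {total} GB")  # RAM reserved by app VMs: 16 GB
```

Add the MS-Office VM's low-end estimate and you're already past 24GB for four applications.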

This isn't inconceivable, and I wouldn't be at all surprised if it were already on the drawing board somewhere.



Containerization could offer close to the same level of isolation as VMs without the insane memory bloat. Plus, VMs might be able to share common memory pages if it becomes necessary.
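The page-sharing point is grounded in real mechanisms: Linux's kernel same-page merging (KSM), for example, lets a hypervisor deduplicate identical guest memory pages across VMs. A toy Python model of the idea (the 4 KiB page size is standard, but the two synthetic "VM images" are assumptions for illustration):

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size

def dedupable_pages(memory: bytes) -> tuple[int, int]:
    """Count total vs. unique 4 KiB pages, as a toy model of
    kernel same-page merging (KSM) across VM memory."""
    pages = [memory[i:i + PAGE_SIZE] for i in range(0, len(memory), PAGE_SIZE)]
    unique = {hashlib.sha256(p).digest() for p in pages}
    return len(pages), len(unique)

# Two "VMs" that booted the same guest OS image share most of their pages.
guest_os = bytes(PAGE_SIZE) * 100          # 100 identical (zeroed) pages
vm1 = guest_os + b"\x01" * PAGE_SIZE       # plus one unique page each
vm2 = guest_os + b"\x02" * PAGE_SIZE

total, unique = dedupable_pages(vm1 + vm2)
print(total, unique)  # 202 3 -- 202 pages resident, only 3 distinct
```

In the real thing the kernel compares pages byte-for-byte (after a hash-style prefilter) and remaps duplicates copy-on-write, so two VMs running the same guest OS pay for most of it only once.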


Funny, this occurred to me more than ten years ago (as a result of seeing Knoppix, actually), but it still hasn't come to pass. Given the increasing importance of mobile, I doubt many users will sacrifice battery life or laptop comfort for the dubious benefit of having their applications partitioned into VMs.

Using VMs for apps does make sense for some pro apps, especially those with idiotic system requirements and/or copy protection. And obviously for testing.


I can see it being spun pretty hard as an anti-virus initiative at some point, or as a "guarantee your software investment into the future" kind of thing.

Consumers don't really care that it makes things easier for app developers, or about most of the other benefits, but consumers can be scared into all kinds of weirdness.

Bonus for PC makers, it would give a huge boost to kick off the upgrade cycle again for home PCs. More cores, more RAM, more disk space needed for all these dozens of VMs (each with their own multi-GB OS and software install).

Heck, I know of at least half a dozen people who do a variant of this right now in order to run a single Windows only application on their Macs.


If this gets popular, I can see them stripping the OS and other cruft down so that the application almost runs on bare (virtual) metal. A complete desktop OS with user software and drivers for all the unused hardware sounds unlikely.


This is basically what OSv is. It's a stripped-down virtualization environment meant to run only a single application on bare (virtual) metal.


Proof-of-concept viruses are already out for this architecture, so it just becomes a bigger management headache.


The primary reason you are correct about this assumption is that the going trend is to package up applications and run them as SaaS services. Those 'appliances' you are talking about will be web applications running on a more highly decentralized hosting model, occasionally hosted on the user's computer and more frequently on a neighborhood-wide deployment. This newer model of hosting will likely resemble true cloud computing more than what we consider it today: 8-9 data centers running 80% of the public cloud in an offering called AWS.


>I think it's going to start becoming reasonable to package up applications in VMs and distributing the VMs "appliances" to run instead of installing software directly in the OS.

Full VMs for regular desktop apps? I don't think so. We already have sandboxes.

And in any case, this won't happen within the relevant lifespan of this processor (say, 5 years), so it can't be an issue that necessitates more than 64GB.


Isn't the OP referring to what happens after the 'lifespan' of this processor?

I'm still happily running a Mac Pro I maxed out in 2008 and expect a couple more years out of it at least.

It would be nice if this kind of machine could last a similar 6-8 years instead of entering (and I think that was the OP's point) 'engineered' obsolescence in 4-5 years.


>Isn't the OP referring to what happens after the 'lifespan' of this processor?

No, he's referring to what will happen in "2 years". I quote: "But in two years, swapping in 128gb would be the no-brainer upgrade to this thing. That this is being ruled out ahead of time is not a good thing".

And there's just no way that normal users will run their desktop apps in VMs in two years -- which is what was said as a justification for needing > 64gb.


I feel though consumer hardware is going to be a lot more standard from now on. The wild west hardware age may be coming to an end, so VMs-everywhere would be trying to solve a problem of the past rather than to be a solution for the future.


This was what was revolutionary about Quake III, no? Its game code ran inside id Software's own VM (QVM)...


He is talking about using VMs for real architectures (OS + apps).


Um, no.



