The Xeon isn't overclockable, which is a big part of the niche this processor sits in.
If you read my post again, I'm not saying that 64gb is too little right now. It's probably the right match for the processor for most workloads, today. 32gb would seem weak with 8c/16t (I have that much in my 4770 system), and 128gb could be excessive.
But in two years, swapping in 128gb would be the no-brainer upgrade to this thing. That this is being ruled out ahead of time is not a good thing.
(Barring an Intel microcode revision, as is being speculated by the sibling commenters. But I'm not holding my breath, as Intel Ark is pretty definitive.)
Idk... I'm struggling to see why an average user in the overclocking/high-end PC market would run into the 64 gig limit, assuming the high-end market has a relatively short part lifetime. I mean, if you're in it for the video editing then the sky is the limit, but for an average user? A user could RAM-cache 4 hard drives with a 4gb buffer each, power up the entire Adobe suite including Illustrator and Photoshop, start a browser session with 100 tabs and 10 video streams, run a torrent client, email client, backup client, VPN, a couple of modest FTP and web servers, a transcoding media-streaming server AND Crysis 3, and still likely have 10-20 gigs to play with. I think if you need much more than that running concurrently, you should probably be starting to think about server hardware.
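To put rough numbers on that (a quick sketch; every per-app figure below is my own guess, not a measurement):

    # Back-of-the-envelope RAM budget for the workload described above.
    # All figures are assumptions for illustration, not measurements.
    workload_gb = {
        "RAM cache (4 drives x 4 GB)": 16,
        "Adobe suite (Illustrator + Photoshop)": 8,
        "browser (100 tabs, 10 video streams)": 12,
        "torrent + email + backup + VPN clients": 2,
        "modest FTP and web servers": 1,
        "transcoding media server": 4,
        "Crysis 3": 6,
        "OS + misc overhead": 4,
    }
    total = sum(workload_gb.values())
    print(f"estimated total: {total} GB")            # ~53 GB under these guesses
    print(f"headroom under 64 GB: {64 - total} GB")  # ~11 GB left over

Even with generous guesses it lands in that "still have 10-20 gigs to play with" range.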
If you think 64gb will be an easy limit for an average user to hit in the near future I would love to hear your envisioned use case.
I think it's going to start becoming reasonable to package up applications in VMs and distribute them as VM "appliances" to run, instead of installing software directly in the OS. I think this is going to start happening regularly in the consumer space sooner rather than later (and already has in some cases, like with XP Mode). This is pretty much the modus operandi in the service space today.
There are lots of really good reasons to do this (sandboxing, ease of installation, compatibility, snapshots/state saving, etc.), and VM tech at the consumer level is good enough for most applications. Doing so also lets you distribute the same application for different host architectures relatively easily (swap the virtualization core for an emulation core).
VM technology basically will allow consumer software vendors to start treating your computer like a set-spec videogame console instead of worrying about millions or billions of possible complications from how your computer is set up. Once VMs in the consumer space get good enough to really run high-end games, imagine everybody just writes to some Valve defined Linux spec that just happens to match some Steam Box, but you can install the VM for that game on your Mac or Windows or whatever and get to gaming.
If this happens, VMs will chew through RAM faster than just about anything out there.
So instead of installing and running Adobe Suite, you start up the Adobe Suite VM and boom, 8GB of your RAM vaporizes. Fire up your web browser VM and boom, there goes another 4GB. Your e-mail client annihilates 4GB more, and now we've eaten up 16GB of RAM to run a handful of applications. Open up an MS-Office component and there goes another 8-16GB. Run a non-virtualized legacy app? Why, those all just get sandboxed into an automatic "old shit" VM to keep the virii out.
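Just to sketch how an "appliance launcher" along these lines could chew through RAM (nothing any vendor actually ships; the image names, sizes, and invocation are purely illustrative):

    import subprocess

    # Hypothetical appliance launcher: each packaged app boots in its own
    # KVM guest with a fixed memory size, so host RAM goes in big chunks.
    APPLIANCES = {
        "adobe-suite.qcow2": "8G",
        "web-browser.qcow2": "4G",
        "email-client.qcow2": "4G",
    }

    def launch(image: str, memory: str) -> subprocess.Popen:
        return subprocess.Popen([
            "qemu-system-x86_64",
            "-enable-kvm",      # hardware-assisted virtualization
            "-m", memory,       # guest RAM size; the guest faults most of it in over time
            "-smp", "2",
            "-drive", f"file={image},format=qcow2",
            "-display", "none",
        ])

    vms = [launch(img, mem) for img, mem in APPLIANCES.items()]
    # Three appliances, and 16GB of host RAM is already spoken for.

And each guest also carries its own OS image on disk, on top of the application itself.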
This isn't inconceivable, and I wouldn't be at all surprised if it were already on the drawing boards somewhere.
Containerization could offer close to the same level of isolation as VMs without the insane memory bloat. Plus, VMs might be able to share common memory pages if it becomes necessary.
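On the shared-pages point: Linux can already deduplicate identical memory across KVM guests via kernel samepage merging (KSM). A rough sketch of checking how much it is saving, assuming a Linux host with KSM switched on (echo 1 > /sys/kernel/mm/ksm/run):

    # Read KSM counters from sysfs to estimate memory reclaimed by page sharing.
    # Assumes a Linux host with KSM enabled and mergeable guest memory.
    KSM_DIR = "/sys/kernel/mm/ksm"
    PAGE_SIZE = 4096  # bytes; typical x86 page size

    def read_stat(name: str) -> int:
        with open(f"{KSM_DIR}/{name}") as f:
            return int(f.read())

    pages_shared = read_stat("pages_shared")    # distinct pages kept as the shared copy
    pages_sharing = read_stat("pages_sharing")  # kernel docs: roughly "how much is saved"
    saved_mib = pages_sharing * PAGE_SIZE / 2**20
    print(f"~{saved_mib:.0f} MiB reclaimed across {pages_shared} shared pages")

So a dozen appliance VMs all running the same base OS wouldn't necessarily cost a dozen full copies of that OS in RAM.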
Funny, this occurred to me more than ten years ago (as a result of seeing Knoppix, actually), but it still hasn't come to pass. Given the increasing importance of mobile, I doubt many users will sacrifice battery life or laptop comfort for the dubious benefit of having their applications partitioned into VMs.
Using VMs for apps does make sense for some pro apps, especially those with idiotic system requirements and/or copy protection. And obviously for testing.
I can see it being spun pretty hard as an anti-virus initiative at some point, or a "guarantee your software investment into the future" kind of thing.
Consumers don't really care that it makes things easier for app developers, or about most of the other benefits, but consumers can be scared into all kinds of weirdness.
Bonus for PC makers, it would give a huge boost to kick off the upgrade cycle again for home PCs. More cores, more RAM, more disk space needed for all these dozens of VMs (each with their own multi-GB OS and software install).
Heck, I know of at least half a dozen people who do a variant of this right now in order to run a single Windows only application on their Macs.
If this gets popular, I can see them stripping the OS and other cruft down so that the application almost runs on bare (virtual) metal. Shipping a complete desktop OS, with user software and drivers for all the hardware the appliance never uses, sounds unlikely.
The primary reason you are correct about this assumption is that the going trend is to package up applications and run them as SaaS. Those 'appliances' you are talking about will be web applications running on a more decentralized hosting model, occasionally hosted on the user's computer and more frequently on a neighborhood-wide deployment. This newer model of hosting will likely resemble true cloud compute more than what we consider cloud today: 8-9 data centers running 80% of the public cloud in an offering called AWS.
>I think it's going to start becoming reasonable to package up applications in VMs and distribute them as VM "appliances" to run, instead of installing software directly in the OS.
Full VMs for regular desktop apps? I don't think so. We already have sandboxes.
And in any case, this won't happen within the lifespan of this processor's relevance (say, 5 years), so it can't be an issue that necessitates more than 64gb.
Isn't the OP referring to what happens after the 'lifespan' of this processor?
I'm still happily running a Mac Pro I maxed out in 2008 and expect a couple more years out of it at the least.
It would be nice if this kind of machine could last a similar 6-8 years instead of entering (and I think that was the OP's point) 'engineered' obsolescence in 4-5 years.
>Isn't the OP referring to what happens after the 'lifespan' of this processor?
No, he's referring to what will happen in "2 years". I quote: "But in two years, swapping in 128gb would be the no-brainer upgrade to this thing. That this is being ruled out ahead of time is not a good thing".
And there's just no way that normal users will run their desktop apps in VMs in two years -- which is what was said as a justification for needing > 64gb.
I feel, though, that consumer hardware is going to be a lot more standard from now on. The wild-west hardware age may be coming to an end, so VMs-everywhere would be trying to solve a problem of the past rather than being a solution for the future.
Your use case would not use more than 64gb of memory, no; but it would also run on the CPU side just fine with a $339 4c/8t 4770k. A user with that workload wouldn't need 128gb, but they'd also not need an 8 core CPU.
Put it this way: a $120, dual-core Core i3-2100 released in 2011 supports 32gb of RAM. But a $1000 eight-core processor, released more than three years later for nearly ten times the price, supports just twice as much memory.
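Making the ratio explicit (figures as quoted above):

    # Price vs. addressable memory, using the numbers from the comparison above.
    i3_2100 = {"price_usd": 120,  "max_ram_gb": 32}  # 2011, dual core
    new_8c  = {"price_usd": 1000, "max_ram_gb": 64}  # 2014, eight cores
    print(new_8c["price_usd"] / i3_2100["price_usd"])    # ~8.3x the price
    print(new_8c["max_ram_gb"] / i3_2100["max_ram_gb"])  # 2.0x the supported RAM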
I believe this is imbalanced. And expecting to be able to upgrade tomorrow a workstation that was purchased today for likely well over $2000 is not unreasonable.
Adobe products (looking at you After Effects) consume way more memory than you are giving them credit for. 64GB is not enough memory to support the computation this part is capable of. Next year, this chip will support at least 128GB.
Many people today still use 8GB; pro users might use up to 32GB, but it's not as if RAM usage has been rising rapidly in the last 4-5 years. In 2009, 8GB was pretty standard for high-end desktops, like 16GB is today. This is a desktop CPU, mind you; workstation and server CPUs obviously support much more RAM.