Nvidia's quad-core Kal-El demos next-gen mobile graphics, blow minds (w/ video) (engadget.com)
118 points by evangineer on May 30, 2011 | 49 comments


That means the simulations we're watching require a full quartet of processing cores on top of the 12-core GPU NVIDIA has in Kal-El. Mind-boggling stuff.

Mind boggling indeed, but totally absent in the text and demo was any mention of power usage. Four cores running at 75% utilization plus 12 GPU cores is going to suck down a tablet battery in a hurry.


Not (much) more than a Tegra 2 running at 75% utilization. Each Tegra 3 core should run at 50-60% of the power consumption of a Tegra 2 core, if history is any guide. When chip makers work on a new ARM generation, they try to increase performance while keeping the overall power consumption of the chip the same. You won't see an ARM chip with a 1W TDP one year and 1.5-2W the next, when the new generation arrives. That's why we now see dual-core phones with the same or even better battery life than previous single-core phones.

Even so, they manage to improve performance by about 2.5x every 12 months, or about 4x every 18 months. That's twice as fast as Moore's Law for x86 chips. That's why ARM chips keep impressing every year with how fast they progress.
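
A quick back-of-the-envelope check of those rates (the 2.5x/12-month figure is the claim above, not a measurement; this just compounds it):

    import math

    arm_per_year = 2.5                    # claimed ARM perf gain per 12 months
    arm_per_18mo = arm_per_year ** 1.5    # compounded over 18 months -> ~3.95x
    moore_per_18mo = 2.0                  # classic Moore's-law doubling every ~18 months

    print("ARM, 18 months:      %.2fx (%.1f doublings)"
          % (arm_per_18mo, math.log(arm_per_18mo, 2)))
    print("Moore's law, 18 mo.: %.2fx (%.1f doublings)"
          % (moore_per_18mo, math.log(moore_per_18mo, 2)))
    # roughly 2 doublings vs 1 doubling per 18 months, hence "twice as fast"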


Historically that's partly a matter of moving to new process nodes, which let you do more with less power. Unfortunately Kal-El will be running on the same 40nm process that current Tegra processors use. To the extent that the device can quickly wake up, get stuff done more quickly than a Tegra 2, then go back to sleep, it might be more power efficient than the last generation, but not with something like this demo.

EDIT: Remember, the previous generation had two A9 cores, and this one has four A9 cores. My vague impression is that the A9s in Tegra 2 really are more power efficient than the A8s in the original Tegra, but we won't see a transition like that in this generation.


Even worse for Tegra 3, it looks like it's going to start shipping around the same time as Qualcomm's Krait, which is on a 28 nm process. Nvidia needs to ride herd on its supply chain; they keep missing their ideal launch window by a few months.


Qualcomm won't have quad-core Krait chips until a year later (mid 2012), so it won't come close to competing with Tegra 3 on performance. By mid 2012, Tegra 4 will be out at 28nm as well.

But yes, I agree it would've been nice if they could've moved to 28 nm now, too. Still, I think Qualcomm will remain behind on performance over the next few years. Their custom Scorpion cores are weaker than Cortex A9 now, and I assume Krait will also be weaker than the Cortex A15 that Nvidia, Apple and others will use. That's probably why they could get to 28nm so fast, too.


Qualcomm has been right in the thick of it over the past 18 months. They're leapfrogged at the moment, but they also just bumped their GPU and are about to start shipping 1.5 GHz chips that appear more than competitive:

http://www.anandtech.com/show/4243/dual-core-snapdragon-gpu-...

Yes, that's a higher clock speed than the competition, but they've also announced higher clock speeds for their quad cores next year.


Sorry... must... resist... can't!

Moore's law has nothing to do with speed, only transistor count in the same space. While the speed increases are impressive, they're still picking off the low-hanging fruit, as people haven't fully figured out how to optimize for this space yet. Kal-El will reportedly be using 40nm transistors, compared to the 22nm process Intel is able to do.

Which means there's more room for the amazing in the arena. :)


Intel won't have an Atom at 22nm until late 2013. By then Tegra 5 should be out, with who knows how many Cortex A15 cores, at 28 nm. In 2014, ARM chip makers will move to 20 nm.

Intel's advantage is only about one process node generation, and right now not even that for Atom; it's only for their highest-end chips. The current Atom is at 45 nm, and it will move to 32nm next year, when ARM chips will be at 28nm. But this doesn't even matter anyway: even if Intel were two process nodes ahead, ARM would still be more efficient.


Intel will stop making bricks (metaphorically), skip 28nm, and go directly to 20nm for Atom. It is their only hope of getting into mobile, and even then it will be a weak value proposition, as the entire mobile market will be on the ARM ISA.

Even though the whole dev chain is slightly more flexible about an ISA change now, Intel will still be slightly x86ed out of the market, only this time they will be on the other side of the fence.

Intel won't run on phones, but if they play their cards right they might run on tablets and netbooks.

It really does suck to be the 900 Lb gorilla.


Actually, Intel is moving Atom to 14nm (!) in 2014: http://www.geek.com/articles/chips/intel-14nm-atom-chip-in-2...

ARM chips are only (notably) more efficient when they run slower. If you compare like with like, it's not as simple. E.g., 2 GHz ARM A9s are roughly as powerful as a 1.8 GHz Atom, and have similar power budgets (leaving aside Intel's pathetic chipset & graphics support for now).


Intel is still in the mobile game? Sorry this is kind of news to me, I haven't been keeping up.

I wonder if they would use Android or MeeGo...


Meego was originally a joint effort between Intel and Nokia.


And since Nokia's bailed it wouldn't really shock the world if Intel eventually gave up on Meego too.


There was a fair amount of community involvement and some other companies, but Intel and Nokia were the major players. Also, Nokia is still supposed to release a Meego phone; they said they will be using Windows Phone to replace Symbian in low- and middle-tier devices, and they haven't really commented on the upper tier. To be honest, I probably won't be buying a Meego phone from Nokia if they ship one. My N900 is fun, but I feel it doesn't get enough attention from them.

Obviously, the biggest factor for Meego's success is whether it can get any real attention from the media and consumers.


According to a salesman at the Nvidia booth, when asked "How about battery life?":

Battery life should increase. Because we have 4 cores, we can run them on lower voltages. So overall, our plans are, and we expect to have better battery life.

http://www.youtube.com/watch?v=qhDmQCOyXrU

The power consumption and battery life should be akin to the Tegra 2.


If you're doing one core worth of work on four cores and it's well-parallelized then you can run the cores slower. But in this demo they're running all the cores pretty busy, which will use more power.
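
A toy sketch of that trade-off, assuming the usual dynamic-power rule of thumb (P roughly proportional to C * V^2 * f); the clocks and voltages below are made up for illustration, not Tegra specs:

    def dynamic_power(cores, freq_ghz, voltage, capacitance=1.0):
        # relative dynamic power: P ~ cores * C * V^2 * f
        return cores * capacitance * voltage ** 2 * freq_ghz

    one_fast_core   = dynamic_power(1, 1.0,  1.1)   # one core at full clock
    four_slow_cores = dynamic_power(4, 0.25, 0.85)  # same work spread over 4 slower, lower-voltage cores
    four_busy_cores = dynamic_power(4, 1.0,  1.1)   # the demo case: all 4 cores busy at full clock

    print("1 fast core:             %.2f" % one_fast_core)    # ~1.21
    print("4 slow cores:            %.2f" % four_slow_cores)  # ~0.72, parallelism plus DVFS can win
    print("4 busy cores, full tilt: %.2f" % four_busy_cores)  # ~4.84, more work simply costs more power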


"Mind boggling indeed, but totally absent in the text and demo was any mention of power usage."

That is because NVIDIA is really bad with power usage. Like Intel, their graphics chips are optimized for getting faster and faster, not for using less power.

It is not so easy for them to change that. Just as ARM was created from scratch for mobile devices, so was OpenVR (the graphics accelerator behind the iPad and iPhone).

OpenVR started on the desktop, developing low-power graphics cards. I used one of them, but NOBODY CARED back in the day. People do not care at all about an Nvidia card consuming 200 watts, because power from the grid is cheap.

I guess they will make mobile gaming devices with huge batteries, and people who want high-quality games will buy them.


I think you mean PowerVR


In case anyone is wondering, Kal-El is Superman's Kryptonian name.


My concern with this is heat. At 75% utilization, most mobile devices turn into briquettes; they're impossible to hold. I wonder how NVIDIA plans to address that.


I think the problem space is different.

Power-hungry processes and mobility are usually mutually exclusive, and IMHO they should be. The win in this case is not mobility, but portability.

I envision that we will see Crysis on mobiles in a couple of years, but that will be in conjunction with a dock, which will not only connect the mobile to an HD display but also keep feeding it power and optionally cool it. When removed from the dock, the game would ideally switch to a low-poly mode, where it no longer exercises the GPU+CPU to the max, thereby saving power.


It's a feature in colder climates.


Why do you think they use multiple cores instead of a single fast one?


I really hope that sometime soon someone will introduce next-gen quad-core batteries to provide enough energy to power mobile beasts like this one.


Exactly. Almost everything has gotten better by at least a factor of 2, while batteries haven't seen any major improvements.


Batteries have seen continuous improvement for ages.


As a matter of fact, those improvements still lag quite far behind the progress portable devices have made in power consumption.


Funnily enough, my "quad" Intel Core i7 (two cores, each with hyperthreading) couldn't play the 720p Flash video on YouTube smoothly...

Impressive stuff! The thing I paid most attention to was that they had figured out at least some way to use those ARM cores. Getting four cores running at 75% (no idea whether they were throttled at the time) means there's some serious work being done. The usual mobile graphics benchmarks don't stress the CPU side as much as the GPU side, so Tegra 3 wouldn't gain as much by running existing demos.


That's funny - my Core 2 Duo MBP played it smooth as butter.

And you don't get to claim extra cores because of hyperthreading - it's just not the same thing at all.


it can, you are doing something wrong


Yeah, using Flash to play the video.


I bet Google will come up with Chrome for phones and let Nvidia fully run on its new OS. I can foresee Android killing more than 2 cores out of Kal-El's 4...


I was disappointed when Google made the same mistake it made with the Google TV set-top boxes and put Atom chips inside the first Chromebooks, which led to expensive units.

I hope we see Tegra 3 Chromebooks as soon as they can be shipped in products (August). That should make Chromebooks significantly cheaper, with much better battery life (and thinner, too).


What are you talking about? Atom isn't any more expensive than Tegra2/3, nor is the physical size any bigger.


Why did Apple break up with Nvidia..? Now they have to face the competition.


Physical quality control issues. There was a big problem a while back with Nvidia integrated graphics failing pretty often, apparently due to bad potting material in the chip modules that didn't have the same thermal expansion properties as the solder Nvidia had switched to. A large number of MacBooks had to be replaced, costing a lot of money and consumer confidence. Google "bumpgate" for more information.

And while Nvidia's Kal-El is really nifty, I'd rather have an OMAP 5 in a mobile device. Four A9s at 40nm will probably drain too much power to get any sort of battery life while playing that demo; 28nm should be much better.


What do you mean? They still use Nvidia chips in the MacBook and MacBook Air.


He means on the iPhone and iPad. Nvidia acquired PortalPlayer, which made the chips used in iPods, but Apple decided not to use them anymore and developed its own chip. Meanwhile, PortalPlayer's chips eventually became Tegra.

Also, due to a lawsuit with Intel that was recently settled, Nvidia was unable to provide Nehalem chipsets to Apple, meaning Apple ended up shipping Nvidia's Core 2 Duo chipset; but that won't last forever, and Apple will eventually need a different partner (ATI). Nvidia announced earlier this year that they are developing their own ARM-based CPUs for desktops/notebooks; who knows, maybe that will end up in Apple's notebook line.


> and developed their own chip.

Apple did not "develop their own chip" in any meaningful sense, especially not back in 2007 when Nvidia acquired PortalPlayer. The iPhone used an off-the-shelf Samsung part (the S5L8900), and PortalPlayer was in the business of audio SoCs for PMPs. The first-generation Tegra was more than two years away at the release of the original iPhone.

Even if Apple had wanted to contract PortalPlayer for iPhone, they could not have.

Since then, they've mostly kept to off-the-shelf blocks; even the A4 only has minor restructuring of the blocks at the SoC level, and the GPU is a standard SGX block.


If Nvidia is extracting a premium over other ARM vendors it's unlikely any kind of mobile deal with Apple would happen. Not sure they'd be crazy about rebranding their mobile flagship with an Apple logo either.


> If Nvidia is extracting a premium over other ARM vendors it's unlikely any kind of mobile deal with Apple would happen.

Not really.

> Not sure they'd be crazy about rebranding their mobile flagship with an Apple logo either.

Why would they have to rebrand anything? The iPhone and 3G SoCs were not branded, and the 3GS only had minor branding (an Apple logo on the top 20%). The A4 was the first iPhone SoC with truly significant rebranding.


That's the problem for nVidia as a supplier for Apple: iOS hardware is Apple branded, and most people do not know what is inside. nVidia wants to build their own brand, but Apple has no interest in that.


Or Sandy bridge and a compatible graphics card.


Sandy Bridge is good compared to AMD chips, but it's overrated compared to ARM chips, which are still far ahead in power consumption. Besides, Intel won't even have a Sandy Bridge-class Atom until late 2013.


That's the bet Nvidia made when they lost the chipset business. I'd say they are more interested in their own CPU, especially now that Win8 is compatible with it.


Sure, as you can see they are already quite fast :) It's nice to see a new competitor in the CPU business.


For those wondering whether more cores mean more power consumption: http://superuser.com/questions/163567/why-does-the-heat-prod...


Does it run Crysis?


That name bugs me. Made worse by the fact that I just re-read Crisis on Infinite Earths, so my mind is full of DC goodness.

Be sure not to use this under a red sun.



