Intel Arc A580 could be the next great affordable GPU (techradar.com)
160 points by mikece on Aug 5, 2023 | 220 comments


The revolution here is not that it is as good as the competition, but that it might have a much better price, leading to a much better price/performance ratio.

If Intel is patient enough to compete in lower-grade GPUs for a few years without killing its GPU line, then we might see them competing at the top in the future. And the top is where the big money is. They'll also need a CUDA-compatible API to stand a chance, but they have time.

The only thing that saved Intel until now is that Nvidia can't legally produce x86 CPUs, although they tried.

The more competition, the better.


Apart from the software stack, Nvidia has invested a lot in the networking infrastructure for high bandwidth between GPUs in a cluster. I think their acquisition of Mellanox might have helped here.

So competing on server-grade GPUs might be more difficult for Intel.


The same Intel that has been closely involved with IEEE 802.3 development for decades and has had cutting-edge NICs for datacenters ready to go in advance of new clause ratification? I'm sure they'll be fine.


Intel is nowhere in networking. IPU is dead last, they killed Fulcrum and Barefoot was a failure.

I would love to see Intel get somewhere but they are dead last among the major players.


You would be surprised.


Intel Gaudi2 is actually really good, better than A100s and cheaper (including interconnect, 2.15 Tb/s)

CUDA isn't that important any more.

The weird one is Aurora hitting 2 exaflops using... Intel Max?


> They'll also need a CUDA compatible API to stand a chance, but they have time.

Arc already has an extremely good value proposition: hardware AV1 encoding - so you can keep your existing GPU (let's face it: it's probably still more than fine) and augment it with a cheap Arc.


My A770 16GB, originally intended for OpenVINO and oneAPI, was repurposed into a workstation dedicated to AV1/VP9 encoding.

Slow developments are fine with me; it's a beautiful card.


> They'll also need a CUDA compatible API to stand a chance, but they have time.

I guess CUDA compatible would be fine, but for myself, I'd buy their cards for compute workloads as long as they have some reasonable API, regardless of whether it's CUDA or not. If they have something usable based on OpenCL, SYCL, whatever, they would have a shot at getting my money. Ideally that would also include providing some support to make sure there are usable backends for PyTorch, TensorFlow, etc.

Ah well. Let's see if they can figure this out. Between Intel and AMD, you'd think somebody could step forward and give NVIDIA some serious competition.


Intel already has a working GPGPU stack, using oneAPI/SYCL.

They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and TensorFlow using their custom extensions https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch which are actively developed and just recently brought up-to-date with upstream releases.
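
For a rough idea of what that stack looks like in practice, here's a minimal SYCL vector-add sketch (not an official Intel sample; it assumes a recent oneAPI DPC++ compiler and a Level Zero or OpenCL runtime for the Arc card):

  // vector_add.cpp -- build with: icpx -fsycl vector_add.cpp
  #include <sycl/sycl.hpp>
  #include <iostream>
  #include <vector>

  int main() {
      // The default selector picks the GPU when a Level Zero/OpenCL runtime is present.
      sycl::queue q{sycl::default_selector_v};
      std::cout << "Running on: "
                << q.get_device().get_info<sycl::info::device::name>() << "\n";

      const size_t n = 1 << 20;
      std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

      {   // Buffers hand the data to the runtime for the scope below.
          sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
          sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
          sycl::buffer<float> bc(c.data(), sycl::range<1>(n));
          q.submit([&](sycl::handler& h) {
              sycl::accessor A(ba, h, sycl::read_only);
              sycl::accessor B(bb, h, sycl::read_only);
              sycl::accessor C(bc, h, sycl::write_only, sycl::no_init);
              h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
          });
      }   // Leaving the scope copies the results back into c.

      std::cout << "c[0] = " << c[0] << "\n";  // expect 3
  }

The same source targets Intel GPUs or the CPU depending on which device the queue selects, which is basically the pitch of SYCL.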


> And the top is where the big money is.

But the bottom is where volume and profits are. If the PC market has shown us anything, it’s that only producing for the top of the market gets you outcompeted.


I guess the top in this case might be the datacenter and stuff like the A100/H100, which have totally obscene margins compared to any consumer-grade GPU.


Sure. But it’s a lot easier to pay back your investment in product development if you can amortise it over 50,000,000 GPUs instead of 50,000 GPUs. Same for ironing out the bugs in your software.

Getting paid a small yearly salary for every one of your products may look really nice but the size of the market makes it a precarious business.


I wouldn't call that the top, that's an entirely different market.


Datacenter is considered a separate segment from gaming GPUs. So is automotive or AI.


Nvidia never needs to make a GPU again.


No, the profits are not in the volume. See the profitability vs. volume of iOS vs. Android. Ultimately, Nvidia charges a lot for mid-range cards because they don't have any competition. If Nvidia did have competition, they could lower the price of their midrange cards to stifle Intel while still profiting on the high end.


Midrange cards subsidize the higher end cards because they are often chips that were meant to be a tier or two higher but didn't pass QC at those performance levels. If you take a price hit on that, your entire fab/process is less profitable overall because you're not able to recover as much revenue from less than ideal yield.

Intel competing in the midrange is far from a nothingburger.


Apple produced for the top and IBM for the masses. Look where both companies are today.


Apple certainly didn't produce for the top. The Mac Pro always was a niche product for media people.


They're not talking about today's apple, but the apple that sold a $10,000 Lisa.


It most certainly was priced for the top.


No, it was priced like a niche product for media people.


We obviously remember this time differently. Most people I knew had PCs, the few that had Macs had only one thing in common: Enough money to afford one.

The only other place where I saw a considerable number of Mac deployments was university. So portraying it as a niche product for media people has some truth in it but is in my opinion far from the whole truth.


IBM made very few products for the masses. Their platform took off because of clones like Compaq, HP, and Dell.


I advise every IBM naysayer to check their earnings, and all the beloved FOSS products that depend on IBM paychecks.


Did you forget a "/s" tag? Because that pretty much sounds like ten years of AMD history!


Intel can afford to throw much more money at the problem. AMD can't even get their drivers straight for years now, apart from the gaming use case. I really see a chance for Intel.


Unless things have improved recently, Intel's graphics drivers are not that great on Windows.

I really wish someone would give Nvidia some effective competition, but I just don’t see that being Intel.


Regarding Arc, I agree, the drivers were a nightmare at release, but I'm fairly confident that was just a very rushed release because arc was already way behind schedule, and things will get better. The old UHD i915 drivers were (are) pretty solid, let's hope that's where Arc is aimed at, plus maybe all those gaming-focused bells and whistles as an optional feature.


Not sure if this has ever improved, but back in the day Intel drivers were known to lie about their OpenGL capabilities, reporting features as supported when they were actually emulated in software.


If they only say their driver supports the feature, that technically is not lying, is it?

And what do you expect the driver to say instead?

Also, for graphics cards, I expect every feature requires some work from the driver. It’s not as if OpenGL is a hardware interface.


It is, because the drivers are only supposed to report what is actually supported by the hardware, via the capability queries.

That's a critical feature for game engines to decide which features to use; otherwise, instead of 60 FPS, you might get single-digit FPS.


So, educate me.

https://www.saschawillems.de/creations/opengl-hardware-capab... claims:

“The “OpenGL hardware capability viewer” (short “glCapsViewer) a multi-platform client-side application that reads out all important hardware capabilities of the current OpenGL-implementation present on your system”

Reading its source (https://github.com/SaschaWillems/glCapsViewer/blob/85ee6ab68...), I get the impression it does that by calling either of:

  - glGetIntegerv
  - glGetInteger64v
  - glGetIntegeri_v
  - glGetProgramivARB
  - glGetFloatv
  - glGetString
I think https://registry.khronos.org/OpenGL-Refpages/gl4/html/glGet.... documents all those functions.

I don’t see any occurrence of ‘hardware’ on that page. Looking at the data collected by that tool (example: http://opengl.gpuinfo.org/displayreport.php?id=10016), and checking some of the capabilities listed, I can find all I looked at on that page (https://registry.khronos.org/OpenGL-Refpages/gl4/html/glGet....), so I don’t think that tool uses other methods to obtain information (I didn’t read its full source code, though)
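
For reference, this is roughly what such a query looks like from application code (a sketch; it assumes a GL context has already been created elsewhere, e.g. via GLFW or SDL):

  #include <stdio.h>
  #include <GL/gl.h>  // glGet* declarations; context creation is not shown here

  void print_gl_caps(void) {
      // Strings identifying the implementation -- this is the driver talking,
      // not a statement about what is done in hardware.
      printf("vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
      printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));
      printf("version:  %s\n", (const char *)glGetString(GL_VERSION));

      // Implementation limits, again as reported by the driver.
      GLint max_tex = 0;
      glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_tex);
      printf("max texture size: %d\n", max_tex);
  }

Nothing in these calls distinguishes "done in hardware" from "emulated by the driver".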

Also:

1) https://registry.khronos.org/OpenGL/specs/gl/glspec46.core.p... doesn’t mention ‘capabilities’, doesn’t mention ‘hardware’ much, and even says

“While an implementation of OpenGL may be hardware dependent, the Specification is independent of any specific hardware on which it is implemented. We are concerned with the state of graphics hardware only when it corresponds precisely to GL state.”

2) https://community.khronos.org/t/determining-hardware-opengl-... asks “I am trying to find a way for my program to determine (during run time) whether the graphics card supports OpenGL hardware acceleration, and how much RAM is installed on the graphics card”

and

https://community.khronos.org/t/determining-hardware-opengl-... replies: “Sorry, there’s no way in OpenGL to do what you want. You’ll just have to trust the driver to do the right thing.”

Of course, that last reply may be incorrect, but from what I found, I think it is correct.

So, what’s going on? Is the spec unclear about the glGet functions? Am I overlooking a call that _does_ return hardware info? As I said: educate me.

(And of course, “implemented in hardware” does not guarantee anything about performance.)


That is the thing: Intel drivers did not do the right thing, unlike AMD, Nvidia, and all the now-gone 3D vendors.


Their drivers have improved enough that for the old OpenGL/DX9 games they are emulating, they're able to just brute-force their way to acceptable performance, since those games are usually really old.


I've had ATI/AMD cards for more years than not since 2004. And in my personal experience, I have maybe once had a driver issue on an X800. I know sometimes they struggle with new cards, but is it really still a thing?


Graphics drivers are some of the most complex pieces of software running on any PC, with lines of code easily running into the millions. Yes, driver issues are most definitely 'a thing' for all GPU manufacturers.


I don't think they meant drivers with respect to gaming, but I could have misunderstood. I think it is more in the context of GPGPU.


No, even gaming: I was constantly bombarded with driver crashes that quit the game with unsaved progress.


Why can't nvidia legally produce x86 chips?


Because you need a license and very few companies have one.


IP laws strike again. This is like saying someone can't make Lego compatible plastic bricks, or a Torx wrench.

Seems unreasonable to me. Even without legal protection, there is some degree of protection for the original creator of something as complicated as a CPU.


Can Intel refuse to sell such license to nVidia or ask for much higher fees than other companies? If nVidia bought one of those companies, would that allow them to produce x86 chips at will?


> Can Intel refuse to sell such license to nVidia or ask for much higher fees than other companies?

Yes. There's no requirement to sell licenses to patents in general. Sometimes (not always) if a patent is included by a government in a standard that government will require the owning company to license it on "Fair, Reasonable, And Non-Discriminatory" terms (FRAND). Most patents, including the x86, x86_64, and various extension patents are not required to be licensed under FRAND terms.

AMD and Intel have a mutual licensing agreement. When Intel patents a new extension, they license it to AMD. Likewise, when AMD patents a new extension, they license it to Intel. That's why Intel CPUs use the AMD x86_64 instruction set, AMD CPUs use the Intel AVX-512 vector instructions, etc.


AMD only got a license for the 286 because IBM wanted a second-source supplier for their IBM PCs, and things got murky when the i386 came around and AMD needed to compete.

There was some copying, and I think they settled with AMD licensing the Intel FSB for $$$ (they switched to the DEC Alpha bus protocol when the AMD Athlon 64-bit era came around).

With AMD64, they licensed it back to Intel when it became clear that Itanium was dead because it was so slow, and Microsoft forced Intel's hand by releasing a Windows Server version for it.

The market is big enough for both.


> clear

Yep, but they're both willing to use the patents to keep other entrants out.


As far as I can remember, they would have to buy AMD, VIA or IBM to get an x86 licence. That's just the beginning though, as many later patents are also important, but those would likely also already be licensed by these companies (like AMD64, for one). Whether they could even use the license after a takeover... I don't know.


If AMD got taken over, then they would automatically lose their x86 license, as I believe it is a condition in their cross-licensing agreement. This is why I don't expect AMD to get bought in the near future.


That's correct, yes: if AMD is bought then the cross-licensing deals terminate in both directions - i.e. Intel also loses access to the AMD patents.

It's a poison pill which means Intel themselves would be the only serious buyer.


Wouldn’t that mean Intel would really have to negotiate a new deal with whoever buys AMD since otherwise they couldn’t make any new cpus?


Technically AMD could still make ARM[1] or RISC-V based CPUs under a new owner.

[1] https://www.trustedreviews.com/news/amd-reveals-its-ready-to...


> It's a poison pill which means Intel themselves would be the only serious buyer.

Which market regulators would never allow. This is even bigger in terms of monopoly than Nvidia trying to buy arm.


the license doesn't transfer if the company is bought.


Because you need a license to do so and Intel and AMD are loathe to give a license to would-be competitors.

I believe that the person you're replying to is referring to an attempt to produce x86 compatible processors that Nvidia undertook when they bought the license that Via owned.

IIRC that didn't go so well for Nvidia because the license didn't include new parts of the instruction set like the 64 bit instructions that AMD made in the mid 2000s and AMD wouldn't license them.


I do believe they've tried to circumvent the IP by implementing a translation layer like in Transmeta Crusoe, but Intel still threatened them with lawsuits.


Nvidia can't do a translation layer. Back around 2010 Intel and Nvidia were suing each other, and the settlement agreement specifically said Nvidia wouldn't produce anything like that.


This is a red herring; Amazon and Apple are happier on ARM. Nvidia may have been blocked from buying ARM outright, but they're still free to make ARM chips.


Nvidia has produced ARM CPUs, e.g. for the Nintendo Switch.


That is what I am saying; Nvidia doesn't need or want an x86 license. Their best move is to dedicate all possible manufacturing to AI, but a fallback strategy might be to outcompete Intel in CPUs. (They tried to buy Arm a year or so ago but it was stopped by the FTC.)


The problem with ARM is that it is lacking in software support and optimizations compared to x86; it's only recently that there's proof of viability of ARM CPUs outside mobile, through Apple, and they also have the advantage of full-stack integration and over a decade of experience building the best-performing ARM CPUs.

So there's a reason why they'd prefer x86. Hopefully they manage to release an ARM competitor for the desktop market, though with the AI boom, it seems more likely that it will be a server chip focused on supporting their GPU lineup.


I think the tipping point was a few years ago. There are simply more phones/tablets/TVs/cars than there have ever been desktop PCs. They tend to be more latency sensitive and resource constrained. Therefore it is unsurprising that more optimization work has been invested in ARM than x86.

Gravitons certainly work out better for every program I have tried on AWS (obviously this is mainly because AWS set the price to ensure they do).

I mean, there are complex x86 programs that run just as fast, for longer on battery, on an M1 than on a comparable Dell/Lenovo.


And Jetson


I hope Intel keeps throwing more VRAM at their top cards than the competition. With the way current AI works that VRAM alone could make them attractive to try to develop for in the consumer/hobbyist space.


> Nvidia can't legally produce x86 CPUs, although they tried

What did they try? And when?


What became Denver CPUs were originally targeting x86.


I have an A770 as my daily driver and it’s a great card, especially at the price. 16GB of VRAM, an AV1 encoder, can play most modern games with decent settings, good Blender integration, and the oneAPI libraries for compute are slowly maturing. If you don’t need CUDA or bleeding edge performance, it’s a great mid-level card at a price that’s hard to beat.


>and the oneAPI libraries for compute are slowly maturing

I'll say I had a miserable time trying to set up oneAPI for VS Code on Windows. Their website made it seem like it was possible, there are even plugins, but I never got it working. Eventually I just gave up and downloaded Visual Studio.

Getting oneAPI to work on Arch Linux, where it's not even officially supported, was also tricky but considerably easier.

Haven't been able to successfully get PyTorch or TensorFlow code to utilize the GPU, but did get some DPC++ working with oneAPI. All in all it's not a terrible experience, but it could be a lot better.


I got it installed on Ubuntu and managed to get oneAPI working in Blender, PyTorch and a Stable Diffusion playground. I’d say if this is your primary interest, wait another generation. I’m such a fan of open-source vendor drivers that I had to try it already.


I'm pretty happy with Arch so I don't think I'll switch.

As far as Stable Diffusion goes, I read a guide on setting it up using WSL and it never worked. No telling what's wrong, because there's just very little info about these GPUs out there.


I have an Nvidia card and I never managed to get any AI stuff working on my base system. The only thing that's worked out of the box is Docker images with CUDA support.


When did you try PyTorch and TensorFlow? I think they were updated pretty recently to the latest upstream releases, which is necessary to work with Python 3.11, the version that Arch is currently shipping.


I had never priced it and expected something reasonable and it is $426 on Amazon…

I don’t mean to start a holy war but this is why people buy consoles. I feel like the entire PC video card universe is so disconnected from reality. A PS5 or Xbox Series X is $499 and is a fully functioning device.


In the console ecosystem, you pay more for games, subscriptions, fees and such, depending on your habits.

And you don't get a fully functioning work PC out of it... Which is stupid, as both consoles would be great PCs, especially the XSX with bigger RAM ICs. They are kind of like high power Mac M1s.


I don’t know how it is in the US, but where I live no employer allows you to use your private pc for work.

I agree with your first point. However a plus on the console side is that I can actually still own my games instead of licensing them (if you buy physical).


> However a plus on the console side is that I can actually still own my games instead of licensing them (if you buy physical).

LOL, LMAO even.

Sweet summer child, wait until you actually find out.


Really helpful comment.

Mind explaining what you mean (preferably without being insufferable)?


Increasingly, the situation for console games is the same as for PC games. So you're also only licensing the game, need to be online to at least authenticate or even always to play, etc.

There probably still are games where you can just insert the disc into an offline console and start playing, but the trend goes in the other direction.

EDIT: I'm obviously talking about modern consoles. Get a used PS2 and some games and play to your heart's content!


I can see the trend you’re talking about, but the vast majority of games on consoles I can buy, put into my console and play without being online. Some games are obviously always online, some are just straight broken without patches, but the majority are just ok without internet. So I find that, currently, my point stands.


Absolutely fair.

As long as there are physical releases in the first place, that option will probably never vanish completely. Personally, I'm more "concerned" about the indie/AA scene - when there's no physical release in the first place, I can't even visit the shop down the road to buy it. But that's independent of the PC/console divide, so I'm just rambling now.


Absolutely. But I do find that this has actually gotten somewhat better in recent years: distributors such as Limited Run Games and the like (even when I loathe them sometimes for their FOMO tactics) have often made possible small runs of physical editions of games that would likely not have seen a release otherwise.


I mean you still can buy discs. You can buy it play the game and resell it on eBay if you don’t like it or are just done with it. If it’s a new game you can basically sell it for close to what you paid for it. I recently got out of a Diablo IV purchase I realized I didn’t like with minimal damage that way.

My brothers’ steam library makes me sad when I see how much money is wasted there in games they don’t play.



Well, yeah. A console is basically a discrete GPU with a few CPU cores bolted onto the GPU core, and an SSD added.

A discrete GPU involves paying for all the memory costs, all of the cooling costs, all of the video outputs and BOM costs, all of the testing and validation costs, and then just doesn’t do the last little bit that makes it a fully functional PC. Instead you are expected to buy a second set of memory, cooling, fans, etc. and ship them all individually, with a total packaged shipping size of a small pallet, compared to a console shipped in something the size of a breadbox. It’s literally the most expensive way to build a computer, with the most redundancy in system design and the most waste in shipment and validation.

So it’s not surprising that console costs and midrange dGPU costs are convergent. They are 90% of the way to a console, just missing the last few bits!

(But thats what PC gamers get from clinging to an outdated 1980 standard for computer design, and the form factors it provides for expansion cards. Just ask people to buy a new backwards compatible power cable string for their gpu, and provide them with a free adapter, and then watch the tantrums flow. The religious reverence for the ATX and pcie add-in-card form-factors is absurd and people get what they deserve when the pc designs that shake out of it 40 years later are incredibly poorly-fit to the real-world needs. Everything has changed, gpus dominate the system now, we deliver up to 400W through an intricate network of 75w/150w “aux” connector, and we still design cards and cases and motherboards like it’s 1980 and a gpu and cpu can both be passively cooled…)

You can build a Steam Console much cheaper, but it will also involve some sacrifices the PC community hates, like soldered (GDDR6/7) memory and no CPU/GPU upgradeability. But it’ll be 1/4 of the price, so it’ll be worth it even if you have to replace the whole unit to upgrade. That's why consoles are built that way, and not as an ATX PC.


An A770 should be ~$370, and you can buy A750s for ~$250, new of course. $426 sounds like someone's trying to fleece you or there just isn't enough floating around to sell anymore as Intel gears up for the next lineup.


https://www.newegg.com/amp/intel-21p01j00ba/p/N82E1681488300...

The next google result was that one on Newegg. I saw it was out of stock and said I’ve seen all this before and gave up.


> I don’t mean to start a holy war but this is why people buy consoles [...] A PS5 or Xbox Series X is $499 and is a fully functioning device.

How much do the console games cost? PC and console gamers tend to spend more on games than hardware, and PC games offer more bang for the buck when amortized over playable time. Additionally, consoles are not backwards compatible with games for earlier platforms, so one has to re-buy versions of games they already own.

Consoles are appliances, and I understand the appeal of their simplicity. However, until consoles offer the flexibility of playing a game I bought 10 years ago[1] or the latest and greatest AAA title, I can't abandon PC gaming.

1. Or a 10-year old title on sale for $3.99


> this is why people buy consoles.

Gaming consoles also seem to have fewer cheaters.

A console also compartmentalizes video gaming away from your compute/Internet stuff. (Both technically/security/privacy-wise, and in terms of distraction.)

(Source: Uses only a PS4 Pro for gaming, even though I have a PC with an RTX 3090 and gobs of RAM, sitting usually idle.)


I come from the other end. I can never bring myself to console games again. I am too old/busy to ever want to get good at gaming again. I play a bit now and may not touch it again for a week later. So the muscle memory etc. don't stick and I don't want that to frustrate me.

So I just play single player and use tools to give myself bullet time in every game, edit resources to escape grind. This is only possible on a PC. For content tourist style gaming, consoles are a no-go.

I do like the compartmentalization aspect of consoles though. Sooner or later Steam games will start being a security hazard. I don't want to do banking on a computer with games installed.


If it's just sitting there, why not have it earn money for you? (vast.ai/etc)


Where do they pay enough to rent out my GPU that it covers the electricity and the wear?


I don't understand what you're asking. If they don't pay enough to cover depreciation and the cost of electricity because you live in a high cost of electricity location, just say that.


The sole fact of not being able to use a keyboard and mouse (and to actually run software on it) is worth my PC's money, and it's not much more expensive than a console.


Many xbox games these days support KBM.

I play Modern Warfare II almost exclusively with a mouse and keeb.


They are too expensive but it's slightly better if you consider it as a marginal cost on top of a PC you already have. Upgradability is the whole point of PCs really. You can keep a system going for many years by taking advantage of this. And a console isn't really a "fully functional system" when compared to a PC.


You know that most consoles are sold below cost, with the expectation that the difference will be made up in the premiums on games, right?


Not necessarily premiums. The manufacturers make their money back just by taking a percentage of all the (digital) sales. That's why even the Steam Deck can be sold so cheap, without a premium on games. They can make up the difference with just more sales because you now have a device you'll use more.


> A PS5 or Xbox Series X is $499 and is a fully functioning device.

They can play games and nothing else. So completely worthless comparison.


Not really true. They can also double as media centers and as Blu-ray players.

They’re not very good at the former, but given the apps available for both, they’re at least serviceable.


Console and PC are very different gaming experiences though. One is no substitute for the other.


> They can play games and nothing else. So completely worthless comparison.

Except they cost less than a high end video card (not to mention the rest of the PC) and your games are guaranteed to work.

> Console and PC are very different gaming experiences though. One is no substitute for the other.

That's absolutely correct, but innovation in PC games is on the indie side and those aren't so GPU hungry. Most AAA titles are the same on consoles and PC these days so might as well get them for the console.


> and your games are guaranteed to work

PC games "not working" hasn't been an issue since maybe the early 2000s.

Everything has been standardized for decades. Unless you've got really old hardware that doesn't support Direct3D 12, stuff just works.


So if you want to play both AAA and indie you buy both a PC and a console? How is that supposed to be cheaper than just adding a graphic card to your PC?


The indies I like don't need a serious graphic card. A good bunch of them work just fine on a 2018 i3 mac mini with the integrated graphics decelerator...

Don't you need the 1000+ EUR cards to play AAAs at the same performance as a console? That is, twice the price of a console only for the video card.

Edit: and I need the PCs for work anyway.


No, even considering the optimization involved, consoles are about the same performance as a RX 6700 non-XT. They lean heavily on upscalers, and the upscalers are usually inferior quality so a 3060 ti / 3070 is hitting in the same general ballpark of performance too.

Xbox series X also only has 10GB of vram and series S is 8GB, which people usually don’t realize. Microsoft used a fast partition/slow partition strategy (like GTX 970) so in practice the slow segment is your system memory and you can’t really cross over between them because it kills performance.

You can get 3060 Ti for $275 now or 6700XT for $330. NVIDIA has DLSS which is generally higher quality for a given level of upscaling (FSR2 quality is closer to DLSS balanced/performance level), which offsets the raw performance difference a bit. Or AMD has more raw raster and VRAM. But that's kinda your ballpark price comparison, not a 4090. The consoles aren't 4090 either, they're rendering games in 720p or 640p and upscaling.

But it gets into this weird space where people refuse to turn down a single setting or use even the highest-quality upscalers on PC, but PC is too expensive, so they'll buy a console where the settings are pre-turned-down for them and they'll be upscaled silently from even lower resolutions with even worse-quality upscalers, with no choice in the matter. Consoles are like the Apple products of the world, they take away the choices and that makes people happier because having too much choice is burdensome.


> But it gets into this weird space where people refuse to turn down a single setting or use even the highest-quality upscalers on PC, but PC is too expensive, so they'll buy a console where the settings are pre-turned-down for them and they'll be upscaled silently from even lower resolutions with even worse-quality upscalers, with no choice in the matter. Consoles are like the Apple products of the world, they take away the choices and that makes people happier because having too much choice is burdensome.

Yeah, i'd rather play the -ing game instead of counting the fps?


> Yeah, i'd rather play the -ing game instead of counting the fps?

yes, but, you can do that on PC too - just punch in medium settings and turn on DLSS Quality mode and away you go. You can get a $300 GPU that does the same thing as the console, you don't need to spend $700+ on a GPU to get console tier graphics.

The problem is that people insist on making comparisons with the PC builds at max settings, native-resolution/no upscaling, while they don't have a problem with doing those things on the consoles. And when you insist on maxing out a bunch of exponentially-more-expensive settings, you need a 4090 to keep up, and gosh, that makes PC building so much more expensive than just buying a console!

but again, the console is running 640p-960p internal resolution and upscaling it to 4K, which is like DLSS Performance or Ultra Performance mode. And if you enable those settings on PC, you can get the same thing for a pretty reasonable price. Not quite as good, but you're getting a full PC out of the deal, not a gaming appliance.

It's always been about consoles having an Apple-style model where they lock you into a couple reasonably-optimized presets, while PC gamers hyperventilate if you take a single setting off Ultra or benchmark with DLSS turned on. And obviously in that case you're going to need a lot more horsepower than consoles offer. Which is more expensive.

Also, GeForce Experience has a settings auto-optimizer which does this with one click, or you can use settings from the PCMR Wiki or DigitalFoundry etc. It does tend to target lower framerates than I'd prefer (as a 144 hz-haver) but there's a slider and you just move it a couple notches to the left.


> The indies I like don't need a serious graphic card. A good bunch of them work just fine on a 2018 i3 mac mini with the integrated graphics decelerator...

Yes, but your console don't run them so you need a (low end) PC + a console if you want to play both.

> Don't you need the 1000+ EUR cards to play AAAs at the same performance as a console?

No you don't. If you want to pay 1000+ that's because you ~~like to waste money~~ want to play at 4K res 144fps with the highest possible setting in the next 5 years at least. You can play AAA titles with the same kind of settings you have on console on a 300-500 euro graphic card. So for the price of the console you get an equivalent upgrade for your PC and don't have to pay a premium for your games and can play all your games on the same device.


Aren't console graphics usually not as good as the PC versions? Even if they have the same resolution, I was under the impression that effects and whatnot were lower. If that's in fact the case, then a 1000 EUR card wouldn't give you the same experience as a console. Hell, my mid-range AMD I bought new for 300 EUR has better graphics than my sister's PS4 pro.


Do you really mean better graphics instead of higher numbers?

Is your mid range AMD card the same generation as the PS4 Pro or you should compare with a PS5?

Are you comparing the same title?


I actually mean better graphics, yes. I'm not sure what you mean by "numbers" (fps?), but I'm talking draw distance, shadows, grass details, etc. I was comparing Red Dead Redemption 2. We don't have other games in common.

My AMD card is a 5600 XT, bought in 2020 IIRC, right before prices exploded due to mining. I don't think the PS5 was out at the time. Anyway, I've never seen one, so I'd be hard-pressed to make any comparison with it.

I've also not tested this in person, but I seem to remember watching a recording of someone playing GTA V on a PS4, and the graphics didn't look as good as on my PC. But then, I don't know how the compression and whatnot affected the quality.


You may be right for Rockstar games. Last one I played on PC was San Andreas. I mean in theory as a game developer you can add larger textures and whatnot in a PC game. If you want to spend money on it.

My impression generally is that, especially for the PS5 generation, there is a negligible difference unless you want those 180 fps and 8k and 16x fake frames or whatever DLSS is.


I'm not a hardcore gamer, nor do I follow these things too closely so I may be off here, but if I compare the specs of a 5600XT and the PS4's GPU, the former seems quite a bit faster. On the other hand, the PS4 has unified memory, whereas my PC is running PCIe 3 and DDR3 (quad channel, but still).

> My impression generally is that, especially for the PS5 generation, there is a negligible difference unless you want those 180 fps and 8k and 16x fake frames or whatever DLSS is.

As someone not particularly interested in the field (I own a PC because I need to do actual PC stuff, the "gaming" GPU I bought to kill time during covid lockdowns), my impression is that when a new generation console comes out, it's quite competitive with non-absurd PC builds. But PC GPUs tend to get noticeably better during the lifetime of the console. The PS4 came out 6 - 7 years before AMD released my GPU.


It's not what the GPUs can do. It's what the game devs find easy to support.


For 1080p and 1440p certainly not. Something like an RX 6750 XT will carry you all the way.

(The PS5 GPU was comparable to an RX 5700 XT / RTX 2070 Super at the time, although it's apples and oranges as the PS5 is an integrated system target whereas the PC is an open platform)


Aren't consoles sold at a loss?

Maybe that's what hackers should target then.


> 16gb of vram

Having only 8GB of VRAM seems to be the biggest crippling factor for this A580. I was looking at A770s on eBay, and on a good day you can get them for quite a good price.


Yeah, for gaming anyway. AAA games are console-first these days, and are finally targeting current-gen (PS5/Series X), which has 16GB (though GPU/CPU shared), so 8GB will become a problem.


You can buy it new for around $300. If you understand the limitations it’s amazing value iyam


Only the 8GB model. Hardware Unboxed's latest GPU market update shows the 16GB model is selling for $50 above MSRP at $400 and the 8GB model for $280. You definitely shouldn't buy it at $400, since the 6700 XT handily beats it and costs $330.

https://www.youtube.com/watch?v=fISiTHe89eA

The RX 7600 costs $270, has 8GB of VRAM and AV1 encoding, and outperforms the 8GB a770. So I really don't see an objective reason to buy the a770 aside from supporting a 3rd player in the gpu market (which is a good reason IMO).


What's the state of Linux drivers for it?


Last I looked, not great. They work, but with worse performance than under Windows, relative to AMD. https://m.youtube.com/watch?v=nkSkt7JqR9U was my source. They did get some performance improvements afterwards, but not enough to make up the difference, I'd estimate.


That video is for 6.2. 6.4 drivers are much better. And in general, since Intel don't have proprietary drivers for Linux, they'll very soon have far better driver support than both Nvidia and AMD on Linux.


Why? The AMD drivers for Linux are also free, aren't they?

I doubt that the 6.4 kernel improves performance enough to make up the difference to AMD cards in their category, but granted, that was the estimation I mentioned. A newer source would be nice!


Mostly, but they've still got proprietary "pro" drivers for certain feature sets. I certainly trust Intel more to work with the open source community on driver support for their hardware, based on the history. Intel iGPUs had open source Linux drivers from the beginning. AMD and ATI before them ignored us for decades, and they're still not fully there. I applaud them for finally having made some progress, but it's been slow, and they still have a ways to go. Intel are poised to overtake them quite quickly in this respect (Linux driver support) despite the added work of having started completely from scratch.

Of course they're still behind on hardware performance, but that's to be expected from a first gen product. I definitely think they could make a mark on the mid/high(but not top/enthusiast) tier market within a couple gens, especially when price vs Nvidia is taken into account.


Intel's iGPU driver might have been free for longer, but it also wasn't very good. If you tried to run games with it that wasn't a great experience, even with the i5-5676C and its relatively strong Iris Pro graphics. And that wasn't only the hardware being weak relative to modern dedicated gpus, it was also related to driver issues and thus games not running that should have, or not running without crashes. Plus no support for Variable refresh rate. And which driver to pick also needed to be figured out (iris vs i965), that was just confusing. From what I read, game developers hated the driver for its issues.

Sure, better than Nvidia, as it mostly just worked (when it worked) and nothing needed to be compiled.

On the other hand, AMD switched from fglrx to support radeon and amdgpu when exactly? According to the gentoo wiki that was 2016. And radeon was usable before that, iirc (2014 I wrote in my blog about radeon being better than fglrx to play Witcher 2). That's also not yesterday, and works much better in practice for a long while now.

I'm also rather optimistic about how well Intel is likely to support Arc on Linux, but that they will give us better driver support than AMD I wouldn't be certain at all. AMD does a very good job there.


Intel and nVidia/ATI came from very different situations though.

The fear of nVidia and ATI was always that their "tricks" would be adopted by the $other_party. Imagine your driver has 5 tricks to make the performance 5% better in total: this is a huge competitive advantage. Now the other company reads those tricks in your source and can (legally) adopt them for their own drivers (adopting a programming construct or some trick isn't a copyright violation).

While this fear isn't completely realistic because most of the time performance isn't really determined by these sort of factors, it's also not completely UNrealistic because it certainly could be, at least in some cases! In a world where everyone is closed source and you're locked in a bitter rivalry with performance differences often being fairly small, it just makes complete sense to keep stuff closed.

Contrast this with Intel which just made integrated graphics: for much of its history performance wasn't a huge concern, and there wasn't really much direct competition either. There was never any reason to not open things.


You have to choose between HuC firmware loading (required for media hardware-acceleration; old i915 driver only) and VM_BIND support (required by various games; new Xe driver only):

https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/234


I haven't benchmarked my A750, but it works like a charm out of the box.


Mmm these Intel cards are a nice unknown.

Who should one get a video card from these days if the main goal is good Steam/Proton performance on Linux?

ATM I have a PS5 and this Linux box with a Ryzen G series (and a Mac Mini, but that's off topic). If I want to use it to play games through Proton, what vendor should I get a video card from for the least hassle? Note: least hassle, not best fps.

Although come to think of it, I haven't even attempted to try and see what I can get out of the integrated graphics... console backlog is pretty big so the Linux box is only used for work and doesn't even have a monitor connected :)


I'd wager you'd do best with an AMD card, as Steam ships their own hardware with Linux + AMD CPU/GPU, so I'm guessing that's the vendor with the best support in their desktop software as well.


AMD is the best supported under Linux


And yet I've had 4 years of so many random crashes with my 5700XT that I'm not sure I'd buy AMD again on the GC side…


Sounds like a hardware defect. 2 years of zero crashes with my 6600XT.


+1 for AMD on Linux

"Nvidia - fuck you!" - Linus Torvalds

https://www.youtube.com/watch?v=_36yNWw_07g


I don't want to be pessimistic, but that is still a lot of "if" and "could". I think they are still at least 2 years away from seriously competing against Nvidia.

And this is 6 years after Raja took over their GPU team lead (and has now left).


Has Nvidia assembled one of the greatest engineering teams in human history, or is everyone else just not interested in competing?

I don't get it. Intel have decades of relevant expertise, and have been printing money for years due to AMD's effective absence from the top-class CPU market between 2005 and 2015.

And yet years and years and years go by, and Nvidia is now the world's fifth most valuable company, far surpassing all other chip manufacturers (except Apple), and nothing is happening on the market.

How the fuck is this even possible?


CUDA. Jensen Huang saw the high-performance computing market so many years ahead that it is now built on Nvidia's stuff. It is a software victory.


Nvidia managed to hit the right spot to win the market - developers. I wrote shaders for depth detection before CUDA, and it was already great back then, but was very graphics centric. Then CUDA came and pushed everything even further forward. I completely agree - it was a software victory.


Yes, Khronos, Intel and AMD managed to make a mess out of OpenCL.

Which is one of the reasons why Apple stopped caring about OpenCL; apparently they have come to regret handing it to Khronos, given how it was managed afterwards.


Intel has a lot of hidden unemployment, internal politics and people who honestly don’t really like working.

Intel hires about half of the graduates from my uni, everyone who likes coding fled to startups or super specific teams.

There are strong engineers in Intel for sure but it has a pretty awful culture.

Nvidia’s isn’t great, but it’s heaps better in terms of actually taking results into account.

My personal experience is only with the local (Israel) branches of both.


Nvidia does have, at the very least, the best and greatest PhDs of computer graphics, and I think has been recently attracting the ML/DL folks. And of the three companies, it is the one that is still run by the original founder. Founder vision + best research team can't be a recipe for disaster.

Note: I dislike much of what Nvidia does (proprietary drivers, insane prices, ghetto anti-competition tricks, etc). But you gotta give it to them as far as innovation and quality of engineering goes.


> Has Nvidia assembled one of the greatest engineering teams in human history, or is everyone else just not interested in competing?

> I don't get it. Intel have decades of relevant expertise, and have been printing money for years due to AMD's effective absence from the top-class CPU market between 2005 and 2015.

No one gets it. From an armchair-expert point of view, anything Nvidia can do, anyone else can do; they only need money, and they (Intel, Qualcomm, FAANG) have it.

Probably indeed no one was interested in competing; it didn't seem to be a good deal until the past few years.


Consumer GPUs are not as large a market as you give credit for. Most GPUs are going to datacenters and consoles and devices like tablets and phones. The fact that Intel is willing to spend trillions to get a foot in the door is astonishing, and AMD is willing to let nVidia focus on ML and shoot themselves in the foot with the gamer market to hold on consoles and maybe get a leg up on desktops.


Even in 2022, gaming was still the largest revenue generating sector for Nvidia...


Pretty close in 2022, but third quarter 2023 gaming revenue decreased 51% from the previous year and data center revenue increased 31%, so it may be helpful to use up-to-date figures. I couldn't find profit breakdowns by sector, but I would be surprised if gaming even came close to datacenter profit-wise.

* https://nvidianews.nvidia.com/news/nvidia-announces-financia...


To be fair, though, pretty much the entire (vocal) enthusiast PC gamer niche has universally panned the recent releases from Nvidia and is hoping and waiting for the next 5000 generation. If Nvidia had released a "good" and competitively priced 4060/70/ti segment, their numbers might have looked different.

Anecdata (n=1): I'm currently running a 3070 but increased my screen size lately, so I'd like to upgrade and hand down my current card to my wife. After checking the current market, I essentially have two choices: either spend ~300€ more than I should have to on a 4070ti/4080 or bury the chance of properly working with AI locally and go with AMD (which might increase my power costs immensely as well).

So I just do neither and wait, and I'm surely not alone in this position.


It seems dedicating their chips to AI and datacenter products are going to be the focus for a while. Not only is the profit better but the distribution is easier, they don't have to deal with third-party card manufacturers, and they don't have to deal with the notoriously picky gamer market. If they can sell every chip they produce in a datacenter card, why would they care about gamers?


No wonder considering the prices of the 4000 series. They look as if they didn’t want it to sell well.


Trillions?


Unspecified currency


Excuse a bit of hyperbole.


Intel has a habit of claiming victory before releasing and then furiously fixing bugs post release to achieve near parity in general and small victories in limited scenarios. Their bug fixing velocity is impressive though. There is such a vibrant community of independent reviewers now that they don't get away with it like they used to.


Agreed. Its transistor and power budget is huge for the performance it gives.

It's just not a good architecture. The Arc 770 has ~22 billion transistors to compete with a 3060 with ~13 billion transistors. Power usage is much higher too.

Gamers will accept this since we don't have our PCs on all day. But they're years off an architecture that can be competitive in a data centre scenario.


I do not think it needs to trade blows on performance; just something in the market nipping at the low-to-mid tier is an improvement over the current conditions. We are currently in a market with 1.5 suppliers.


After all the driver updates, the A770/A750 already seem to be quite competitive in that segment, even if the launch was quite rough.


I think Intel has occupied the low-end laptop market. Nowadays nobody wants to build a laptop on ARM (except Apple). Chromebooks still have a large market, and they do not need any fancy GPU.


a low end laptop is just a windows equivalent of a chromebook. the hardware may even mostly match. it can run windows, and it can run about one thing at a time. with a junk gpu. like a chromebook.

there are still plenty of arm models available. many of which are really just tablet hardware in a clamshell. which, so are many of the intel models if you look closely.

it isn’t so much intel occupying the low end laptop space, as commodity tablet hardware.


An 8GB card is not "coming out swinging". My 1070 I got on the cheap in 2016 has 8GB. Give me some fucking VRAM already.


The 1070 hit a great sweet spot yet it seems crazy that 7 years later it's still a pretty respectable graphics card.


Mine appears to be finally giving up the ghost, starting to get weird artifacts on my screen.

It’s a testament to how good of a card it was though - up until LLM’s hit us, I never felt the urge to look for something else. It would drive my 49in monitor and run anything I cared to play at good settings.


The A770 is available in a 16 GB variant, making it the cheapest card on the market to have that much VRAM. Nvidia's closest competitor, the 3060, only has 12 GB and to find more you need to go up to a 3090/4080. On the AMD side the 6800 also has 16 GB, but it's a fair bit more expensive than an A770.


Why can't we have DIMM slots on video cards?


Modern VRAM (GDDR) runs at very high frequencies and very low latency. This makes the wiring between the RAM chips and the GPU very tricky (the traces need to be short and the same length), so slots aren't an option. There were actually VRAM slots on some 90s GPUs.


VRAM runs at high frequencies, delivers insane throughput, but the latency is not great. GPUs don’t need low memory latency. They have very high degree of parallelism. GPU cores switch to other threads instead of waiting for data from memory.

Here’s an interesting system with AMD Zen2 CPU and GDDR6 memory, salvaged from XBox: https://www.tomshardware.com/news/4800s-xbox-chip-shows-us-w... As you see, high latency of GDDR6 memory ruins the performance of CPU-running code.


Not enough bandwidth.

My current GPU has 484 GB/second memory bandwidth. It would require 7 channels of DDR5-8400 memory (the fastest one currently defined by these specs), and GPUs aren’t yet large enough to fit 7 slots of SO-DIMM.
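
Back-of-the-envelope version of that comparison (a sketch; it assumes 64-bit DDR5 channels and ignores protocol overhead):

  #include <cstdio>

  int main() {
      const double gpu_bw_gbs     = 484.0;   // quoted GPU memory bandwidth, GB/s
      const double ddr5_mts       = 8400.0;  // DDR5-8400 transfer rate, MT/s
      const double bytes_per_xfer = 8.0;     // one 64-bit channel moves 8 bytes per transfer
      const double chan_bw_gbs = ddr5_mts * bytes_per_xfer / 1000.0;    // ~67.2 GB/s per channel
      std::printf("channels needed: %.1f\n", gpu_bw_gbs / chan_bw_gbs); // ~7.2
  }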


If you look at a modern card you can see how the RAM is laid out - seemingly randomly, all around the actual GPU core - to get it as close and consistent as possible.

Some chips are moving towards having the RAM in the same package as the actual GPU, as one integrated chip.


Because that reduces bandwidth and occupies more space


I'd still just buy the AMD gpu.

Mature drivers, and better tested alongside AMD cpus.

Also, I never liked Intel[0].

0. https://www.youtube.com/watch?v=osSMJRyxG0k


In 2023, 8GB of RAM is simply insufficient for anything above entry-level.

Already Ratchet & Clank in FHD shows a >50% performance advantage when you compare the 16GB 4060 Ti against the 8GB version; comparing P1 (1% low) fps instead of the average, it rises to +70%.

This is the biggest outlier, but many other games suffer badly with just 8GB [0]. It will only get worse going forward, so don’t buy an 8GB card if you don’t have to; it will age badly.

[0] https://www.pcgameshardware.de/Geforce-RTX-4060-Ti-16GB-Graf...


> 8GB of RAM is simply insufficient for anything above entry-level.

I don't disagree, but I also think the A580 mentioned, at its price point of 150-200 dollars, is firmly entry-level. The low-end dedicated GPU market is basically gone, with integrated GPUs being good enough nowadays.


Oh I misread that - you're right.


Can't read that because I don't understand the popup that covers the article and I'm not clicking on something I can't read :)

However, isn't this specific game you mentioned just a case of a crap port from consoles?

Personally I wouldn't try any traditional console franchise on a PC. They're just not designed for it, even if you plug in a controller.


This is just the usual cookie banner. This is a direct URL to the image: https://www.pcgameshardware.de/Geforce-RTX-4060-Ti-16GB-Graf...

Point being that a lot of current-gen AAA games already benefit from more RAM, and this is not going to get better. Consider that frame generation and raytracing also consume lots of VRAM, and FG will be essential to keep 4060Ti level silicon competitive in the future.

Here is a YouTuber making the same case, that no, it's not just that one badly optimized game: https://www.youtube.com/watch?v=_-j1vdMV1Cc

Does this affect all games? No. But this won't help you if it affects a game you'd like to play, now or in the future.


> This is just the usual cookie banner.

I tried passing the link through google translate and it skipped the banners unfortunately. I have a feeling it wanted me to make an account though. I uninstalled Chrome on this laptop so I can't use the in-browser translate option.

> Point being that a lot of current-gen AAA games already benefit from more RAM, and this is not going to get better.

Point taken. Wasn't aware of that because I get the few AAA games I'm interested in on console anyway. Most are series that started on consoles so the company is likely to be more experienced on that platform.

Edit: and to be a bit snarky, I could argue that ports from console that don't take into account the amount of VRAM on common PC video cards are a case of crap console port.


It seems increasingly to be a more general issue that if you don’t have more than 8 GB of VRAM, you just don’t get console-level (or above) image quality. Current-gen consoles have 16 GB of unified RAM, and end up effectively assigning more than half of that to the GPU side of things (I know, it’s a bit more complex than a simple split between “CPU” and “GPU” pools in reality). Which can be a problem, at least if you want to lord it over console-owning plebs with your several-times-more-expensive battlestation. Although you might get “better framerate, worse textures” type trade-offs.


Intel GPUs are entry level. IMO 8 GB is fair for $250 or less.


Prospective owners who use Linux should know that they will have to decide which driver to run: i915 with hardware-acceleration support (more specifically, HuC), or Xe with VM_BIND support (highly relevant for gaming):

https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/234


I got the A750 for the $200 price (which means about $235 in Sweden). The most amazing part was how well it works in Linux and how much speed you get for the money. The AMD 7600 is more efficient, but definitely not as much juice for the money.


I still think Intel has a real opportunity to be the go-to second card for encoding/streaming and therefore get in to a lot of households.

If they could throw some forwards compatibility in (maybe an FPGA?) so that users could do future real-time encoding of h266 or av3? That would be an easy sell.


It's worth keeping in mind that the absolute market leader in consumer/household graphics isn't Nvidia, let alone AMD: It's Intel.

Every single personal computer with an Intel CPU since 2010 has an Intel GPU inside.


https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video

"Intel Quick Sync Video is Intel's brand for its dedicated video encoding and decoding hardware core. Quick Sync was introduced with the Sandy Bridge CPU microarchitecture on 9 January 2011 and has been found on the die of Intel CPUs ever since."


For a Plex or Jellyfin server, Intel graphics cards are supposedly great, as you get multiple 4K encode/decode streams and better Linux compatibility for cheap.


They need to get the Pro A60 out ASAP. I am so tired of getting dicked around on Quadros for work. I need lots of 4K screen outputs but modest GPU resources, and that card just doesn't exist at Nvidia or AMD anymore since the P600/T600 cards were discontinued and not replaced with newer but still inexpensive versions.


> AMD have been slacking on low-end graphics card releases this gen

That's blatant misinformation. This A580 seems on par with the RX 7600 at the same price, AMD has lower TDP, and it was released in May 2023, yet the journalist dares to say AMD has been slacking off?! What is Intel doing, releasing the same class of GPU as the competition a year later?!


Releasing a competitive GPU as a first-gen product and only being a year late is impressive as hell.

Let me go through why it's hard to release GPUs at the right time as someone who's done this at a different company. Development probably started 3-5 years ago, before COVID. They would have estimated what AMD/Nvidia's performance would be that far out and what TSMC's process would look like, then sketched out something that might meet a reasonable performance/cost target, with random guesses at a bunch of important and completely unknown factors like driver quality.

Then COVID hit and the entire manufacturing world descended into utter chaos for the better part of 2 years. This is probably a large factor behind why Intel didn't manage their initial 2020/2021 estimates, with another part being that management at any company has no idea how to estimate silicon timelines.

On top of that, the 10, 20, and 30 series all had completely different performance increases, probably generating a few revisions of that performance target on their own. Every time they revise the design to keep it competitive though, firmware is screaming at them about stability, and software is screaming at them about timelines, and leadership is screaming about costs.

But somehow despite all of that, they manage a relatively timely release and it's competitive in the low end after a few months of driver improvements. Unfortunately there's another problem. These things are fabbed at TSMC, not Intel. Intel doesn't have that many wafers at TSMC compared to Nvidia/AMD and meaningfully increasing that is expensive. So instead they make the rational decision to release the higher-margin stuff first and let the drivers bake some more. Everything else can get price-adjusted (and minimally design-adjusted) before it releases to make it competitive, which is why the article doesn't have actual pricing.


Yeah, if I look up Intel's GPUs in the open-source Blender benchmark for 3D performance [1], the leading Intel Arc A770 GPU is slower than ... the NVidia GeForce RTX 2070?! That card came out in 2019 [2], so Intel's latest GPUs are slower than NVidia chips that are two generations and four years old.

  - [1] https://opendata.blender.org/benchmarks/query/?compute_type=OPTIX&compute_type=CUDA&compute_type=HIP&compute_type=METAL&compute_type=ONEAPI&group_by=device_name&blender_version=3.6.0
  - [2] https://en.wikipedia.org/wiki/GeForce_20_series


The Arc A770 came out in the time of the 30x0 series, so it would be more fair to say that it is slower than a higher end card one generation older than it. That isn't an uncommon occurrence. For instance, the 2070s are often faster than the 3060s, and the A770 was supposed to be roughly aiming for 3060 performance.

Overall, I'd call it close enough in performance that I'd be interested in giving it a try. On the work side, I worry more about being bitten by the lack of CUDA (although I like the look of SYCL, which Intel/Khronos push as an alternative), and on the gaming side, I worry more about whether DX10/DX11 titles will ever really get sorted out. Still, when I get a desktop that supports Resizable BAR, I will likely put an Intel GPU in that machine to give it a try.


How does oneAPI/SYCL compare to CUDA? We certainly need an alternative to OpenCL, but every day, I can't help but notice the widening gulf between CUDA and any other GPGPU API out there.


Worse: it only does C++, while CUDA is polyglot, and the tooling isn't at the level of something like Nvidia's Nsight.


Is there any potential on the horizon for an Intel card to work with CUDA?


Bet on WebGPU. Intel and Google are the two main drafters of the WebGPU spec. I can tell you that the card will definitely support WebGPU well, and WebGPU is not just for the web. It will run on any device with a very small-footprint runtime library and does not need a browser.


WebGPU isn't polyglot like CUDA, so it's already out of the question, to say nothing of the missing tooling on the level of Nsight.


What do you mean by polyglot? Things like CUDA.jl?


Being able to write C, C++, Fortran, Haskell, Java, C#, F#, Futhark, Julia code to run on the GPU, or any other language with a compiler toolchain able to target PTX bytecode.

And the related graphical tooling for debugging such workflows when stuff goes wrong on the GPU.


Any CUDA alternative has to be on the same level, supporting polyglot programming, IDE and graphical debugging, and libraries.

Alternatives based on C or shading languages have already lost before the game has even started.


You keep posting this but the problem with AMD isn't some luxury problem like that. The problem is that the driver crashes.


Drivers not crashing is basic stuff; fixing them alone isn't going to change anything unless ROCm can actually provide similar tooling.


“drivers not crashing is basic stuff” sounds like an argument for not bothering with anything that would rely on the cards, like tooling, because what’s the point if the drivers are just going to crash.

it would be like saying it isn't enough for your car to run to win a race. while that may be true, if you enter a non-running car into a race, the car is still your biggest hindrance to success.


After a decade of failing, AMD and Intel don't have anyone to blame but themselves.

No proper drivers, OpenCL tooling always stuck in a C mindset until it was too late for anyone to care about SPIR, no support for consumer cards, hardly any tooling, no library ecosystem, ...


What's so important about polyglot support? Isn't CUDA code similar to C?


Yes, if you are stuck on a version older than CUDA 3.0.

CUDA supports C, C++20 (minus modules currently), and Fortran, and has Python bindings. Additionally, plenty of alternative toolchains for PTX bytecode exist, some of them with NVidia's official backing, for Haskell, .NET, Java, Futhark, Julia, and whoever else wants to target PTX.
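
As a concrete taste of the "anything that can target PTX" point from the Python side, here is a hedged sketch using Numba's CUDA backend, which lowers a restricted subset of Python through NVVM to PTX; the kernel and sizes are purely illustrative:

  import numpy as np
  from numba import cuda

  @cuda.jit                              # compiled via NVVM down to PTX, not interpreted
  def saxpy(out, x, y, a):
      i = cuda.grid(1)                   # global thread index
      if i < out.size:                   # guard the trailing partial block
          out[i] = a * x[i] + y[i]

  n = 1 << 20
  x = np.random.rand(n).astype(np.float32)
  y = np.random.rand(n).astype(np.float32)
  out = np.empty_like(x)

  threads = 256
  blocks = (n + threads - 1) // threads
  saxpy[blocks, threads](out, x, y, np.float32(2.0))  # Numba copies host arrays to/from the GPU

CUDA.jl in Julia and Accelerate in Haskell sit on the same idea: the shared contract is the PTX target, not the C++ front end.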


OpenVINO is super easy in PyTorch and a few other places.

And there are other specific efforts like Embree (which now supports Intel GPU rendering), and Vulkan ML efforts like Apache TVM and MLIR-based projects.
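
To illustrate the PyTorch route: with recent OpenVINO releases you can (as far as I understand the current API) convert a torch module directly and compile it for the Intel GPU plugin. The toy model, shapes, and the "GPU" device string below are placeholders, so treat this as a sketch rather than gospel:

  import torch
  import openvino as ov   # assumes an OpenVINO 2023.x-or-later release

  # Toy stand-in for a real network
  model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU()).eval()
  example = torch.randn(1, 128)

  ov_model = ov.convert_model(model, example_input=example)   # trace and convert the torch module
  compiled = ov.Core().compile_model(ov_model, "GPU")          # "GPU" selects the Intel GPU plugin

  result = compiled(example.numpy())[compiled.output(0)]       # run inference, grab the first output
  print(result.shape)                                          # (1, 64)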


Not any time soon. CUDA is very Nvidia specific.


When people say CUDA is Nvidia specific, I don't think they realize _how_ specific. It's not just a matter of having the API implementation.

CUDA does things like depend on specific scheduler behavior for GPU threads in order to guarantee forward progress, allowing more efficient single-pass computation routines. Or allowing CUDA kernels to launch additional kernels or allocate GPU memory from within the kernel, without host communication. Or checking the GPU architecture and performing specific micro-optimizations designed around that hardware's internal design.

GPUs are nothing like CPUs, in that the x86 architecture is fairly stable. There's not a _ton_ of difference between different CPUs. Sure, performance characteristics and cache sizes might be different, but generally they have the same instruction set. Each GPU generation is basically a completely different architecture, to say nothing of the differences between GPU companies. That's (partly) why Vulkan is so complicated - it tries to support the lowest-spec 2013 mobile GPU and the highest-spec 2023 desktop GPU in one single API.


This is the result of computationally intensive GPU tasks not being as simply defined as CPU workloads. It was always thus for the various accelerators and HPC designs. They must make design choices that favor a relatively narrow range of computational patterns, whereas the CPU will happily chew through a wider range of instruction sequences with the same performance.

The corresponding GPU software complexity is something that feels underappreciated. Yet if we are going to see the massive adoption of GPUs that the market projects, with multiple providers of hardware, we'll need some serious advances on the software side.


I think we need more stuff like Futhark or StarLisp.

Being able to describe GPU computations in a declarative way, with a driver similar to an SQL query optimizer, distributing the load.
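
Futhark and StarLisp aside, the closest mainstream thing today to "describe the computation, let an optimizer figure out the schedule" is arguably the XLA route. A hedged Python/JAX sketch (the function and shapes are just illustrative): jit hands the whole traced graph to XLA, which fuses and schedules it for the GPU.

  import jax
  import jax.numpy as jnp

  @jax.jit                                      # trace once; XLA fuses and schedules the whole graph
  def weighted_softmax(x, w):
      z = x * w                                 # elementwise weighting, broadcast over rows
      z = z - z.max(axis=-1, keepdims=True)     # numerically stable softmax
      e = jnp.exp(z)
      return e / e.sum(axis=-1, keepdims=True)

  x = jnp.ones((4096, 1024))
  w = jnp.linspace(0.0, 1.0, 1024)
  print(weighted_softmax(x, w).shape)           # (4096, 1024)

It stops short of the SQL-optimizer analogy in one respect: distributing the load across devices still needs explicit hints (pmap or sharding annotations) rather than being inferred.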



I don’t understand how Intel can afford to sell these so cheaply given they are fabbed at TSMC.


NVDA also fabs at TSMC and has 60% gross margins, so there's a lot of room to undercut on price while still being profitable.

The real question is why doesn't TSMC charge more? How can a customer using the most cutting edge fab get 60% margins? (Similar for other customers).

Clearly there's a lot of pricing power on the fab side that's not being asserted.


Samsung Foundry exists, and Nvidia used them for the 30 series iirc. TSMC just doesn't have that much pricing power as a result. Their best processes are obviously good, but if they try to gouge too much customers probably just take the hit for a year. If they have to they can always sacrifice size/power for performance on an older, cheaper process.


Have a look at Nvidia's financial statements to get an idea why ;)


Intel is dumping overstocked inventory.


So

> a TBP of 175W.

The RTX 3060 was 170 W, the RX 7600 is 165 W. So it's on par efficiency-wise as well, it would seem.


There are no facts here, just rumors.


How open source is it?

Does it have closed-source firmware blobs?


As far as I can tell, at least the Linux KMD (kernel-mode driver) is open source^:

https://www.intel.com/content/www/us/en/docs/graphics-for-li...

https://wiki.archlinux.org/title/Intel_graphics

^ The repo ships with dual licenses and exceptions; I have not read the details.


I thought Intel was giving up on GPUs already...


If only it supported Windows XP!


i find it hilarious how companies try to make their chips aesthetically pleasing, as if i stare at the chip and not the monitor.

it's already a highly demanded product; you don't need this kind of marketing to boost sales.



