Ryzen 4000 Review (pcworld.com)
298 points by caseyf7 on March 30, 2020 | hide | past | favorite | 206 comments


An interesting proxy for CPU evolution is seeing the recommendations shift in communities like reddit.com/r/buildapc. Last year many medium- to high-end builds were choosing Intel CPUs, but now most are recommending AMD. The last bastion will be professional gamers and streamers when they upgrade their 9900KS-level CPUs - likely when the new 3080 GPUs come out.

Good on AMD for pushing the competition forward by significant amounts.


I built a new workstation in December, and I like my computers to be as quiet as possible, so I looked for a huge cooler and a top-quality fan.

Reading reviews of coolers and fans was just eye-opening with regards to AMD's recent success. Most reviews proudly mentioned some Ryzen product, and barely any mentioned Intel at all (and most coolers are compatible with both).

I realize that the build-your-own-PC enthusiast represents only a minor fraction of the overall market, but seeing AMD's reputation go from "the budget CPU" to being the new Coca-Cola, with Intel now the Pepsi, all in the span of what, 2 years? It was amazing.

The server and laptop markets have other dynamics (a few large buyers, instead of many individuals) and I'm not sure AMD can compete with Intel on these dynamics, but I really wish them success.


One anecdotal data point: we're building new bare metal clusters, about 500 new quad-chassis servers. I did the performance testing, and on performance/price from Dell we went with AMD Epyc CPUs, whereas at my job only 2 years ago doing a similar build it never even came up; Intel was the unquestioned default.

Also personally built a gaming PC this year, picked a Ryzen just for fun.


I built a gaming PC last fall, also Ryzen, but I picked it specifically because I know AMD likes to keep a CPU socket around a good long time. So really, my main goal was to buy a Socket AM4 motherboard, and then stuff a cheap CPU and some RAM into it.

Right now, it's low-end gaming, I got a 3200G because the integrated graphics are adequate for the games I play. The PCIe slot sits empty, waiting.

In some years when AM5 comes out and AM4 chips get cheap, I'll look at the last/best of the breed and pick a new chip. It probably won't have integrated graphics, so at that time, I'll add a discrete GPU. I plan to have this motherboard and RAM for probably 15-20 years.

(Just like its predecessor, an AthlonX2 4850e, in an AM2+ board from 2008. Which I recently dropped a $30 Phenom into, and it's a relevant machine again, my workbench workhorse.)


AMD has committed to supporting AM4 through 2020, but you still have a fantastic upgrade path to the 3950X 16C32T CPU and plenty of RAM.


A matching socket is not enough though, the motherboard has to support it too. If it's an older one it might not get a BIOS update for the newest generation, and/or the VRMs on cheaper boards might not be able to handle a 16-core chip.


Just an anecdotal observation: I bought a micro ATX MSI motherboard (a relatively niche product) together with my Ryzen 1700X way back when, and they provide support for the latest Ryzen CPUs via BIOS updates. I'd be surprised if other, more popular boards did not get updates.

So I can run the latest 3000 series with my current first-gen AM4 board, which is great. As you point out though, the 16- and 32-core chips might be too power hungry for some boards.


Is there any confirmation whether this means Ryzen 2 only or also Ryzen 3? Lots of enthusiasts expect it to include the upcoming Ryzen 3, but the wording is very vague.


Most of what I've seen seems to indicate that Ryzen 3 will be AM4, with Ryzen 4 moving on. I'd be surprised not to see a shift to DDR5 at that time, plus USB3 + Thunderbolt in that generation, in a couple of years.

I would love to see an AM4 3600G and 3700G though, basically a 3600 or 3700 with the addition of a GPU chiplet. There are plenty of people out there that could use more CPU but don't need a discrete GPU for work.


Oh man, this exactly. I'd happily add more heatsink and bump my APU TDP by 20-30 watts, rather than get similar performance by stuffing a whole mountain more hardware in a PCIe slot with its own TDP somewhere around 60-100 watts because of all the bus transceivers and other overhead.

As I understand it, they aim the high-TDP parts at the enthusiast/gamer/workstation market where a discrete GPU is just a given, so there'll probably never be a 3700G. Darn!


I bet a lot of ITX builders would be ecstatic if that happens.


Absolutely...


Zen 3 == AM4, Zen 4 == AM5 (next)


Did a Ryzen 3700X build last year, though I went with a 2080. Looking forward to being able to add an AMD GPU -- not for any reason other than I believe Nvidia has been as stagnant as Intel. From the 1080 to the 2080 was what, 30 months (March 2016 to September 2018)? And in all that time we got a nominal boost in performance, DLSS to make your games blurry and RTX to slow them down. That's unbelievable.

AMD is finally showing life across their entire product catalog, and I couldn't be happier.


Given my own pain with an RX 5700XT, I'd suggest waiting a few months after big navi release to upgrade.

For Linux, it's been more than a pain; you pretty much have to install with a 5.3-kernel base OS to begin with for it to work from the start. I saw a lot of just black screens on distros with older kernels. When 19.10 Ubuntu came out, it mostly just worked, then after an update, blank screens... same with 18.04 and the supported drivers.

On Windows, a number of issues; reset, more issues.

Since around the 5.4 kernel, and now on 5.6 rc, it's been much better, and the recent driver update for Windows has stabilized. Just the same, I would have been better off with Nvidia for the past 5 months or so.

I was planning on waiting until January to build, but was having issues with my old system (i7 4790K). So I built out with an X570 motherboard and a 3600, swapped out for a 3950X in late December. Had a few issues with motherboard drivers as well (Intel BT and wireless, very bleeding edge).

At this point I'm very happy though. It just took a while to stabilize, and I wouldn't go bleeding edge AMD again, but would wait 3-6 months for any new chipset or GPU.


New RX 5000-series XT owner here. On Linux Mint 19, I was forced to install kernel 5.5 from mainline Ubuntu deb packages, plus Mesa drivers and libvulkan from the unstable padoka PPA, but since then it has been a smooth lockdown experience.

I'm not using latest Ryzen though so I wouldn't know what bleeding edge AMD on the CPU side feels like.


On the chipset side, it has mostly just worked; afaik there've been enhancements that improve scaling/timing etc., but I haven't noticed issues there.

Aside: RGB support for Linux sucks.


Did a Ryzen build with a 5700 as well, tried to make it work but essentially just said "screw it" and installed the latest Fedora; ain't nobody got time to screw with graphics drivers. Worked fine. Fedora likes new stuff -- sometimes too new, which broke my Nvidia card in the past -- but the AMD drivers on board work fine.


I run a Sapphire Pulse RX 5700 XT on Windows 10. My only issue was an old driver causing crashes in a very specific, very niche game, but that was fixed weeks before I actually went to play the game, and I wouldn't have known about it had I kept my driver up to date; i.e., it was an issue for a very short time and I'm impressed by AMD's responsiveness to a small indie.

There seems to be large variance in the different 5700XTs. Some of the third party ones don't have adequate cooling and quite literally cannot meet their intended specifications, like the ASUS TUF line.

I have no idea how well it works on Linux


I built a gaming PC this last week, and also went with Ryzen.

Although I have to say - the last gaming PC I made was ~13 years ago, and this made me hesitant to move to AMD. During the time I was 'gaming' (mid 2000s to mid 2010s) all the hype was Intel.


Early to mid 2000s, Athlon 64 and the Digital tech (as in DEC Alpha's EV6 bus) was where it was at, until Intel got back in front with Core 2. AMD got to define the 64-bit instruction set for x86 with Athlon 64. 2007 or so (13 years ago) was after the decisive swing back to Intel. I got a Q6600 in 2007 that I was using (in its final form in a NAS in the attic) until late last year.


I brushed off an old AMD motherboard. And did a quick CPU scout on Ebay to see what I'd need to pay for the greatest CPU of yesteryear. And for about a tenner, I could get AMD's best offering. But alas, it burnt many watts, and had barely any more grunt than my old Core2Duo in my aging laptop - so I doubt it is actually worth the bother.

I have hardware that is over a decade old, 'young' people scoff at anything over a few years old! But there's a sweet spot to be had where you can buy older, rather than ancient or new and save yourself a fortune.

That said, my latest console is a Gamecube.


Good ole Q6600. Moved to that from a Pentium D. Didn't keep my dorm room as warm as the Pent-D, but ran everything 100% better...


Yeah, I got it when the Core 2 Duo was released, so apparently in 2006 :)


AMD is going very strong in the server market (especially in the DC market); all major server vendors now have comprehensive AMD offerings. The advantage AMD nowadays has over Intel is so big that choosing AMD is a no-brainer for the majority (I would even say most) of DC applications.


Yeah, for all that AMD has edged ahead in desktop, their server offerings completely blow Intel out of the water. Which is largely Intel's doing, because they pushed their profit margins up to spectacularly high levels while they had the best chips.


Marketing works -- and AMD has been marketing hard.

But they also have reasonably competitive offerings at decent prices. Intel, as near as I can tell, still has faster chips, but AMD has closed the gap enough and isn't struggling with internal issues like Intel.

Disclaimer: I built an AMD Ryzen box a couple of months ago, running Linux on it and gaming -- works like a charm.


My understanding is that Intel has like one chip that is within a few percentage points of being better than AMD's fastest in very specific single-threaded workloads, with fewer cores, quadruple the price, and more feature discrimination.


When I built my first computer the conventional wisdom was that you wanted to use an AMD Athlon 64 processor and an Nvidia graphics card. And there we are again.


Indeed, the last time when the answer was 'definitely AMD' (for home use at least) was when the Athlon 64 crushed the Pentium 4.


Out of 1,206 CPUs tracked on UserBenchmark, 4 of the top 5 are Zen 2 architecture, and they collectively amount to a 10.22% market share out of all CPUs, with 931k samples.

https://cpu.userbenchmark.com


UserBenchmark is pretty notorious for trying to keep Intel at the top and doesn't shy away from dirty tricks to do that, such as redefining CPU scoring to be based on single-thread performance as soon as Intel is losing the crown. I'd suggest avoiding them in favor of a more reliable benchmark.


Replying to myself -- case in point: the page you linked doesn't have a single sorting method that makes any kind of practical sense. It's carefully curated to have an Intel product at top in every category.

I'd go as far as to declare UserBenchmark actively harmful if you are trying to figure out benchmarking results. You're likely better off just throwing dice!


Try the "Mkt. share %" column.


I've been patiently waiting for them to do the same thing in the high-end/upper-midrange GPU market for a few years now. Unfortunately Nvidia basically has free rein right now to charge whatever prices they want for the RTX 2070 on up.


Intel is sponsoring lots of pro gamers and streamers; I think many will keep going Intel as they believe AMD doesn't make good gaming chips.


You still can't beat Intel at the high end, both in terms of heat and performance. If you have a task that doesn't parallelize well, you can't beat Intel.

Games are traditionally single threaded and hard to parallelize.

However: you don't need to be the best, you just need to be at the "agreed upon" high end for game requirements, which is the xbox and ps4 (soon to be next gen).

And so, if you're trying to build a "generation proof" gaming PC in a standard sized case, the 3800X is currently your best bet for a new build (or waiting for Zen 3). If you're trying to do SFF, Intel may be a better choice.

Regardless: AMD's competition has dropped the prices in the "enthusiast" segment. My next build will be AMD, simply because I don't need the best of the best, but rather, good enough.


> Games are traditionally single threaded and hard to parallelize.

On my 3900X I've seen even single player action titles utilize 16+ threads with overall CPU utilization reaching peaks of 80-90 %.

Yes, it may be hard to parallelize game engines, but all major ones heavily use multiple cores. A bunch of years ago (4-5) people would recommend an i3-K dual core for gaming, because you could overclock it to 5+ GHz on air. That was sufficient for many titles back then. (And these i3s would sometimes give you higher FPS compared to the more expensive quad-core options.) But a CPU like that is far too weak for newer titles.

> You still can't beat intel at the high end. Both in terms of heat and performance. If you have a task that doesn't parallelize well, you can't beat intel.

In terms of heat, that is, performance per watt, Intel's high end is quite a bit worse than AMD's. It is true however that you can get slightly higher single-threaded performance out of Intel's highest-end CPUs, but not because they are more efficient (AMD is better both perf/W- and IPC-wise), but just by raw, unadulterated clock speed, pushing them to 5+ GHz. And that causes them to burn a lot of power.

Zen 2 has consistently ~10-15 % higher performance clock-for-clock compared to Intel's best while achieving approximately 60-80 % higher perf/W.

This LTT review captures the massive advantage Zen 2 has in efficiency: https://www.youtube.com/watch?v=ZYqG31V4qtA


The efficiency at load is great; the power usage at idle could use some improvement (I feel there's a missing update). My previous system (6700K boosting to 4.6 GHz) used about 50 watts at idle (from the wall). After I switched motherboard and CPU (a 3950X, the rest is the same, plus a new Windows install) the power usage at idle is ~90 watts. The power management is too eager to use all the cores; I'm pretty sure there would be zero discernible difference if they shut down one of the CCDs (chiplets) when utilisation is low. I've played around with the power settings and ~90 W is about the best it gets.

Under load the system is brilliant (it's 4x faster with the same power usage, makes using my laptop painfully slow in comparison). The power management on these CPUs is incredibly advanced (each core runs at different speeds & voltages), I'm just surprised that it doesn't do better at idle.


The 3900X and 3950X sadly still have issues with idle power, the lower end chips do much better. A 3400G for example self-reports less than 10 W at idle, entire system is around 20 W out of the power socket. The uncore of my 3900X consumes more than that in idle according to itself...

The usual desktop (3500-3800 series) chips perform similarly well, it's just the two-chiplet AM4 processors that have been somewhat plagued by problems like this.


I was running into issues where just having Steam open, not doing anything at all, caused an R5 3600 to go from 2 watts to 13 watts of power usage. Start closing background processes; there might be one doing weird stuff keeping cores awake.


>On my 3900X I've seen even single player action titles utilize 16+ threads with overall CPU utilization reaching peaks of 80-90 %.

Which ones? Genuinely curious. I have two Intel E5 eight-cores in an old workstation and rarely ever see a game using more than 2-4 threads.

I guess I'm not spending my time on the latest and greatest AAA games as soon as they come out, but I've spent quite a few (shameful) hours gaming on fairly recent titles.


Doom Eternal doesn't even have a main thread and uses a dependency graph of jobs, so it should parallelize reeeeally well across multiple cores.


The Division (a game I worked on) is quite well optimised for multi core.

In fact it’s one of the reasons we made our own game engine.


Rise and Shadow of the Tomb Raider are very CPU intensive and seem to use as many cores as you can feed them.


That's a bit outdated. Yes, the fastest CPU for games is still an Intel CPU, the i9-9900K. But the distance is not big. The 3950X is very close, and the more reasonable AMD CPUs - the Ryzen 7 3700X and Ryzen 5 3600 - are also close. I collect benchmarks; comparison here [0]. When the next Ryzen generation arrives on the desktop, even that small top spot for Intel will likely end.

It's not only that the turbo boost of those Ryzen cpus is high and at the same clock Intel does not have a lead anymore. It's also that recent games are already pretty parallelized. There is no need to wait for the next console generation to see that effect.

Btw, the 3800X is not a good pick. It is just a minimally higher clocked 3700X. If you want 8 cores, get the 3700X, the higher price of the 3800X is just a waste of your money. But for gaming you want to stick to a Ryzen 5 3600.

[0]: https://www.pc-kombo.com/us/benchmark/games/cpu/compare?ids[...


> Btw, the 3800X is not a good pick.

That really depends on the pricing situation, very much like with the 3600x.

Over the months I've seen the price difference between a 3800X and 3700X drop as low as 20€, the same with the 3600 and the 3600X.

That's because the 3600 and 3700x are recommended, and sold so often that retailers sometimes end up sitting on large stocks of 3600x/3800x too long and clear them out at discounts.

Imho even for gaming the 3800X might be a pretty good long-term choice. The higher clocks are particularly relevant, yet you still get the same core count as the 3700X without taking the clock hit.

But if it's all about the budget and price/performance then it's really difficult to beat the 3600.


There is no big clock hit. The turbo clock is 4.4 GHz vs 4.5 GHz; that's nothing. In practice the difference is mostly 1 FPS. The base clock difference looks bigger, 300 MHz, but Ryzen does not stay at the base clock like old Intel CPUs did, it always turbos.

Now, if it's the same price there is no reason not to get the 3800X over the 3700X :)


The 3700X is a 65W part, so it's quieter for those who don't like listening to fans spin loudly.


Higher base clock might mean you can overclock it higher. Of course, most people don't overclock their CPUs, even if they are unlocked.


I don't see why the base clock would be the factor for that. Ryzen CPUs overclock on their own; it should be the turbo clock and PBO that show how well they overclock. I just linked in another comment to an article that explains that while the 3800X does overclock better, it's a minimal difference, right in line with their turbo clock specs.


Presumably it basically is the same CPU, potentially just better-binned ones being labeled as the 3800X?


Exactly. https://www.techspot.com/review/1899-ryzen-3800x-vs-3700x-di... explains more details:

> The top 20% of all 3800X processors tested passed their 4.3 GHz AVX2 stress test, whereas the top 21% of all 3700X processors were only stable at 4.15 GHz. Also, all 3800X processors passed the test at 4.2 GHz, while 3700X processors were only good at 4.05 GHz, meaning the 3800X has about 150 MHz more headroom when it comes to overclocking.

That's a minimal difference of course, but right at the clock wall of Ryzen, so AMD has to do binning here to enable the 3800X.


I think average frame rate (which I assume this is) is much less useful than minimum, 0.1%, and 1% frame times. As a human, I'm more sensitive to the instantaneous gameplay experience, not the average. It would be very interesting to see graphs that show these more telling numbers.


Yes, these are average FPS. Minimum is a useless metric that is usually caused by benchmark issues, but 0.1% and 1% would be interesting. However, they are usually correlated with average frame times; it's too rare that in specific games one processor has good average FPS but sporadic dips to justify the enormous effort of collecting that data as well. When that happens, it is because the architecture does not work well for gaming (Threadripper 1000 and 2000, fixed with the 3000 series) or there are not enough cores, something the site addresses in the build guide. When that happens the average FPS is also not great when looking at enough games, which the benchmark does for almost every modern processor.

It would not make a difference when judging Intel vs AMD or Ryzen 3000 vs Intel 9000 series.

The source articles for the benchmarks are linked though, often enough they also have those 1% or 0.1% benchmarks.


IIRC the 1% and 0.1% frame times tell you how the frame rate varies, i.e. whether most of the time it is running at 60 FPS and every so often tanks to 1 FPS. So some average FPS numbers might look okay while the frame rate is tanking, like, every minute.
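To make that concrete, here's a minimal sketch (plain Python, invented frame times; note that capture tools define the "lows" slightly differently, some use the 99th/99.9th-percentile frame time rather than averaging the slowest frames):

    # Derive average FPS and 1% / 0.1% "lows" from per-frame times (ms).
    # frame_times_ms is made-up data standing in for a real benchmark capture.
    frame_times_ms = [16.7] * 980 + [33.3] * 15 + [100.0] * 5  # mostly 60 FPS, a few spikes

    def fps(ms):
        return 1000.0 / ms

    avg_fps = fps(sum(frame_times_ms) / len(frame_times_ms))

    # "1% low" here = average FPS over the slowest 1% of frames.
    slowest_first = sorted(frame_times_ms, reverse=True)

    def low_fps(fraction):
        n = max(1, int(len(slowest_first) * fraction))
        return fps(sum(slowest_first[:n]) / n)

    print(f"avg {avg_fps:.1f} FPS, 1% low {low_fps(0.01):.1f}, 0.1% low {low_fps(0.001):.1f}")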


It's close any way you cut it. There are a billion benchmarks out there with the data you want.


> You still can't beat intel at the high end. Both in terms of heat and performance.

This sentiment is almost a year out of date. Since Zen 2 launched, Intel has a measurable disadvantage in IPC. Intel's single-thread performance is only partially won back with higher clock speeds at the expense of far greater power usage.


>Games are traditionally single threaded and hard to parallelize.

ECS (entity component systems), and being able to safely run "systems" within your game concurrently, is rapidly becoming a thing, as "lists of objects" do not scale.
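For anyone unfamiliar with the term: an ECS stores components in flat arrays and runs "systems" over them, and systems that touch disjoint components can safely run on separate threads. A toy sketch of the idea (plain Python, not any particular engine's API; Python threads won't actually run these in parallel because of the GIL, the point is the data layout and the disjoint-access rule):

    import threading

    # Components as flat arrays indexed by entity id (struct-of-arrays layout).
    N = 10_000
    positions  = [[0.0, 0.0] for _ in range(N)]
    velocities = [[1.0, 0.5] for _ in range(N)]
    healths    = [100] * N

    def movement_system(dt):
        # Reads velocities, writes positions.
        for pos, vel in zip(positions, velocities):
            pos[0] += vel[0] * dt
            pos[1] += vel[1] * dt

    def regen_system():
        # Touches only healths, so it cannot conflict with movement_system.
        for i, hp in enumerate(healths):
            healths[i] = min(100, hp + 1)

    # Systems with disjoint component access can be scheduled concurrently.
    workers = [threading.Thread(target=movement_system, args=(1 / 60,)),
               threading.Thread(target=regen_system)]
    for w in workers: w.start()
    for w in workers: w.join()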

I just checked Doom Eternal: it's running 81 threads. (Of course, not all of them are always active or doing things concurrently. It's just a datapoint.)

Intel basically only wins, by a non-significant margin, at video encoding nowadays, thanks to AVX-512, which AMD does not implement. AMD wins or is comparable at a lower TDP on everything else.


I read a Twitter thread the other day about id Tech 7 (the new engine underpinning Doom Eternal). Apparently there is no “main thread”, everything is scheduled to a set of jobs that can be executed concurrently.

https://twitter.com/axelgneiting/status/1241487918046347264?...

I've been wondering if this will become the de facto standard for multi-threaded/multi-core systems going forward.
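Roughly, "no main thread" means the frame is described as a graph of jobs with dependencies, and a pool of workers runs whatever becomes ready. A toy sketch of that scheduling idea (plain Python; this is not id's actual scheduler, and the job names are made up):

    from concurrent.futures import ThreadPoolExecutor
    import threading

    class Job:
        def __init__(self, name, fn, deps=()):
            self.name, self.fn = name, fn
            self.dependents = []        # jobs that wait on this one
            self.remaining = len(deps)  # unfinished dependencies
            for d in deps:
                d.dependents.append(self)

    def run_graph(jobs, workers=4):
        pool = ThreadPoolExecutor(max_workers=workers)
        lock, done = threading.Lock(), threading.Event()
        pending = [len(jobs)]

        def run(job):
            job.fn()
            ready = []
            with lock:
                pending[0] -= 1
                for d in job.dependents:
                    d.remaining -= 1
                    if d.remaining == 0:
                        ready.append(d)     # all dependencies satisfied
                if pending[0] == 0:
                    done.set()
            for d in ready:
                pool.submit(run, d)

        for j in jobs:
            if j.remaining == 0:            # roots: nothing to wait for
                pool.submit(run, j)
        done.wait()
        pool.shutdown()

    # Hypothetical frame: animation and physics both feed rendering.
    anim    = Job("anim",    lambda: print("animate"))
    physics = Job("physics", lambda: print("simulate"))
    render  = Job("render",  lambda: print("render"), deps=(anim, physics))
    run_graph([anim, physics, render])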


The EA/DICE Frostbite engine has worked like this for many years now.


Learning about ECS and how to use them made gamedev so much easier for me. Although now I'm hamstrung by only being able to work on games in languages/frameworks with a solid ECS library (does a good, community-agreed-upon one exist for Rust yet?).

ECS, the observer pattern, and behavior trees are probably the three main things that answered all the "how on earth do they build something this complex" questions for me.


> (does a good/community agreed upon one exist for rust yet?).

I just ran across this: https://csherratt.github.io/blog/posts/specs-and-legion/


> does a good/community agreed upon one exist for rust yet?

Specs? At least that's what Amethyst uses.


I would add state machines to any list of amazing techniques


Definitely. Behaviour trees are kind of a special case/evolution of regular ol state machines, but I find both useful in different situations.


I was considering learning them soon for enemy ai. Are they useful in other situations too?


Basic state machines are useful for controlling flow between different menus/game screens - this "current mode" state exists across iterations of the game loop so you need some kind of data based state machine.

e.g. in, say, a JRPG you have the "new game/continue" menu -> world map -> battles and character menu and back again.

Edit: oh, I misunderstood you - you were asking about behaviour trees. I haven't used them for anything other than controlling game unit behaviour (although not necessarily just enemy AI - one other example is using them to combine a bunch of simple actions into one thing that the user can tell a unit to do - so an attack move in an RTS is actually "loop (is enemy nearby? -> no -> move towards location -> yes -> attack (which is probably its own behaviour tree))").
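That attack-move description maps almost one-to-one onto a tiny behaviour tree. A hypothetical sketch (the node types are the standard ones, but the unit fields like enemy_nearby are invented for illustration):

    # Minimal behaviour tree: nodes return "success", "failure" or "running".
    class Sequence:
        def __init__(self, *children): self.children = children
        def tick(self, unit):
            for c in self.children:
                status = c.tick(unit)
                if status != "success":
                    return status            # bail on failure / running
            return "success"

    class Selector:
        def __init__(self, *children): self.children = children
        def tick(self, unit):
            for c in self.children:
                status = c.tick(unit)
                if status != "failure":
                    return status            # first child that doesn't fail wins
            return "failure"

    class Condition:
        def __init__(self, fn): self.fn = fn
        def tick(self, unit): return "success" if self.fn(unit) else "failure"

    class Action:
        def __init__(self, fn): self.fn = fn
        def tick(self, unit): return self.fn(unit)

    # "Attack move": if an enemy is nearby, attack it; otherwise keep moving.
    attack_move = Selector(
        Sequence(Condition(lambda u: u["enemy_nearby"]),
                 Action(lambda u: "success" if u["enemy_dead"] else "running")),
        Action(lambda u: "running"),         # move towards target location
    )

    unit = {"enemy_nearby": False, "enemy_dead": False}
    print(attack_move.tick(unit))            # "running": no enemy yet, keep moving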


From what I have seen, single-threaded tasks are slightly better on Intel, but even in this linked article the AMD chips are winning slightly in single-threaded. And in multi-threaded the AMD CPUs absolutely blow away Intel by a huge margin at a lower price.


> Games are traditionally single threaded and hard to parallelize.

That's some dinosaur games from the past. Modern games are using all available CPU cores with something like Vulkan and saturate the GPU fully. So Intel CPUs are left in the dust given all the cores you get from high-end Ryzens.


Yeah, this was true for basically everything like, 12? years ago. But it has been steadily getting less and less true as time goes on. Doesn't stop it being repeated constantly though.

There should be a formalized law for this - anything that was once widely known as true (because it was) will continue to be treated as gospel far after it ceases to be the case.


In technology, it seems to be the case that once some fact is old enough to be known as gospel, the world will have moved on enough that it's not true anymore.

And when challenged by the new norm, the old guard repeat their gospel louder and in greater numbers! So if you hear some technological rule of thumb repeated often enough, you should doubt it even more than if you hadn't heard it at all.


>Games are traditionally single threaded and hard to parallelize.

It is not that gamers and prosumers don't know.

But they are willing to trade ~10% of single-thread performance for double the cores at a similar or lower price point.

Sounds like a very reasonable decision to me.


Also, games are hardly limited by the CPU at all; a 10% loss in single-threaded performance won't be noticed at all, but more cores will make a huge difference for other tasks.


Tell that to Cities Skylines or any game based on Unity


Unity is transitioning to DOTS, a vastly more performant and parallelisable architecture. Obviously this isn't relevant to legacy titles, but "games are mostly single-threaded" is increasingly a thing of the past.

https://unity.com/dots


And Unity is pretty much dead last in that spot too. Remember that both the PS4 and Xbox One use (formerly) high-core-count AMD CPUs. Anything that runs on them can handle multi-threaded processors pretty well.


I haven't played in a while but I was happily playing it on an i7 4770k which is way under what both Intel and AMD sell now.


It doesn't appear to scale past a certain size of city very well. Lots of people get a best-case baseline FPS of around 50. Great game for the most part though.


Add Ashes of the Singularity to that list too; it's a great worst case for benchmarking a gaming rig's CPU bottleneck.


> Games are traditionally single threaded and hard to parallelize.

Citation needed. Games are about the least "Traditional" software engineering culture on the planet. We've had multicore CPUs for the last two gens of consoles (and I've seen multi-core programming on the Saturn and N64). On PC we've had mainstream multicore since 2009-ish. That's about when I left games. But given the need to win big on consoles, I can't imagine anyone has a main loop anymore that does everything on one thread.


What N64 had multicore?

>> traditionally

> anymore

> does everything on one thread

Multithreading is useful for concurrent programming separately from its value in parallel programming.


The N64 had two MIPS processors.


It did not.


Yeah, it really did. The RSP was just another MIPS processor that couldn't access main memory (like the SPEs on the PS3). Usually developers treated it like a graphics card, feeding it display lists that were interpreted and rendered by Nintendo's own microcode. But if you got hold of the RSP Dev Kit, then you could write your own microcode. And if you were really ambitious, you could write an MPEG player that used both the RSP and the main CPU to deliver FMV off a cartridge. That was Todd [0]. That's why N64 emulators couldn't run Resident Evil 2 for the longest time: they had native implementations of the RSP-as-a-graphics-card, and just drew the display lists. But RE2 didn't use the normal microcode, and it couldn't be played until someone wrote an emulator for the RSP.

[0] https://ultra64.ca/files/other/Game-Developer-Magazine/GDM_S...



Most CPU reviews are showing up to twice the heat or more for around 5% better performance on single-threaded loads.

Most games are lightly threaded, with newer games being much better optimized for 6+ cores.

Even for gaming, only the 9900K/KS are in consideration for a good choice, and if you're doing anything besides gaming or QuickSync loads, odds are you're better off with AMD. The 3900X is less expensive and does as well or better in most workloads, and even in gaming, most wouldn't be able to tell the difference.

As to best of the best for higher end, the Threadripper has no peer.


I wonder what games really stress the CPU these days, and I wish to get some examples beyond Dwarf Fortress of games that actually stutter without a top-end system. I've got a midrange one built around an old Core i5 and a modern Radeon 570, and even physics-bound games like BeamNG and games with lots of agents like Arma 3 run just fine.


Battlefield V likes as much CPU power as it can get. But it also likes GPU...

There is a free-to-play game, Armored Warfare (CryEngine?), which I genuinely used to test the stability of my CPU. 8 threads fully loaded and probably with some form of AVX.

Reviews of the upcoming Mount & Blade also note how it loads 16+ threads (mostly for NPCs on the battlefield).

Arma 3 was notorious for being bound by single-thread performance and low-latency memory.


https://www.youtube.com/watch?v=Gw_RH0RQ2oA

On an old midrange i5 6500, BFV does a solid 30+ FPS on ultra and CPU load doesn't top out, so I find the claim that it needs a top-of-the-line CPU very dubious. I'll try Armored Warfare since it's free.


So I just tested out Armored Warfare, and at maximum quality I get 60 FPS with 70% CPU utilization on an i5 6500 with the RX 570 GPU.


If you are interested in the CPU being utilised, you should lower the graphics settings from ultra. On ultra the GPU is the main bottleneck. Try the same on low settings.


There is some difference. The most marked difference is between dual core, and quad core. There are quite a few youtube videos out there.

https://www.youtube.com/watch?v=1eMh-SZQuWM


Yeah, so it just does 80-90 FPS? I'm not talking about "any difference", I'm talking about games right now that require a high-end CPU, since everyone upthread was claiming gamers need a top-end CPU, and I can't think of a game that stutters with a midrange one.

Also, the FPS reduction seems more in line with the low-end Intels having fewer PCIe lanes, not the game being CPU capped.


Unless you're a competitive gamer, and young with very good vision, keeping above 100fps at 1440p or better, you're very unlikely to notice the difference.

Very few people genuinely notice the difference. I ran a 3600 while waiting for a 3950X, and if I had it to do over, I would probably have just gone 3900X to start with... For the most part, I don't notice the difference; only after I'm running a couple of databases and a few active services do I even come close to seeing a difference in general use.

I know GPU and monitor options are similar considerations, and the CPU/GPU combination in terms of balance is important. Unless you're going RTX 2080 or higher, it's much less of a difference in general use anyway.


The new Ryzens compile GCC things faster, even on one thread, and you get more cores per dollar all the way up.


I would think that as of now, the main lead that Intel has is with AVX-512 computations.


The LTT review [0] also had good things to say about the battery life. This CPU seems very power-efficient when compared to a 9980HK.

[0] https://www.youtube.com/watch?v=ZYqG31V4qtA


FWIW, loaded battery life is intrinsically linked to the thermal / cooling aspect of CPU performance discussed in the article. Mechanically, CPUs are just simple resistors converting stored battery energy into heat.

Idle battery power is also interesting, and even the older Zen 1 had pretty good reduction in power consumption at idle (C1).


Intel is still best on idle power use because of the on-package regulator, and it still has better power gating.


Intel dropped FIVR for the Skylake generation[1], though it's back on Ice Lake[2].

[1]: https://en.wikipedia.org/wiki/Skylake_(microarchitecture)#Fe... [2]: Used to work there, played a minor part in enabling IDC to integrate the FIVR IP.


Sure, but how long does a laptop actually stay on idle and not a sleep state or low power state (doing something)?


The natural question now is how much the i9-10980HK differs from the 9980HK that the 4800HS seems to trounce handily. Judging by the fact that it's still on some iteration of 14nm(++++++), it looks to be more of the same strategy of aggressive turbo to hopefully compensate for lackluster IPC.

Hope to see more high performance AMD laptop design wins as a result!


> 14nm(++++++)

What do you mean by this?


Intel has been repeatedly tuning their 14nm process and adding a plus at the end each time. It's starting to get a bit ridiculous.


For reference, Intel's first 14nm mobile processors were introduced from late 2014 through mid 2015. They've done one major microarchitecture update since then, and the rest of their microarchitecture and fab refinements in the intervening years have been minor updates as they're in a holding pattern waiting for their fabs to deliver a 10nm process that is viable for mass production.

The first 10nm process was such an abject failure that Intel now prefers to pretend it didn't happen. The second 10nm process produced laptop processors that began shipping last fall, but are still mixed in with 14nm processors that are branded as part of the same generation.


Apparently a big chunk of those top-bin 10nm chips ended up going to Apple for the new 2020 Air.


Sure, Intel sold some to Apple, but I’ve been seeing 1/3 to 1/2 of the laptops Costco carries with Ice Lake since before thanksgiving. Warehouse stores, not even premium channels.

They’re not in nearly the short supply that some people like to imply.

AMD is quite late to market with their mobile 7nm products. Their desktop products are far more timely; if their mobile parts had been launched with the desktop ones, they would have beaten Intel. Instead they are basically 6 months late to market.


Late to market, but better performance.


Yes, incrementally better, not drastically, as compared to the 14nm parts.

That’s how generational leaps work. Competitors one-up each other, just like Intel one-upped the 1000 series with Coffee Lake. Time to market lost is time you’re giving your competitor to get their next generation ready.

Intel is now only 6 months away from their next release, Tiger Lake. We are already starting to see a few benchmarks leak out.

I don’t get why people think it’s such a big ask to not release half their lineup 9 months late. For that matter, where are the socketed APUs?


It's probably the same answer to both. Demand is still high but TSMC has finite 7nm fab capacity and the APUs have lower margins. What's the hurry to release lower margin APUs if the fabs are still busy making higher margin CPUs?


It's also a reference to Intel's failure to make 10nm work (their equivalent to AMD's 7nm) and just endlessly flogging slightly updated 14nm.


It's very promising. Can't wait to see more laptop offerings with this chip. The current Asus Zephyrus G14 seems too gamer-y in terms of design.

I'd buy the same hardware in a more mature and grown-up chassis like current Dell XPS 13 in a heartbeat.


I want this in an XPS 13 too. I just bought an XPS 13 last year, running Ubuntu, and it's very nice, aside from the whine when it's plugged in.


I'd settle for an XPS 13 with 32GB of RAM :( 16GB is the current limit.


Not with this year's model (XPS 13 9300) - 32 GB is now available.


The $199 Motile M141 laptop - up to 32GB, no coil whine, and it has a matte screen.


That looks really impressive - anyone know if they offer something similar in the UK? I know Walmart owns some brands here that it might be under (though I think they sold some?).


I would love to have one of these for a week or two to try it out. It's so cheap it might be worth picking up on the off chance I love it.


Here's a pretty comprehensive review I wrote on it a few months back: https://randomfoo.net/2019/12/25/motile-m142-cheapo-linux-la...

It's surprisingly good as a cheap laptop, but as a daily driver, I'd just wait a month or so and get a 4000U thin and light at this point.


thank you! I'll give it a read.


Very interesting! It has an AMD Ryzen 3 so maybe we can expect an update with an AMD 4000.


Yup, I have it, it's great!


Been eyeing that for a while but got a bit distracted by this beast if you're into linux: https://system76.com/laptops/lemur


These reviews look great for the 35/45w TDP chips. I can’t wait to see what the 15w (Ryzen 7 4800u) chips that would go in an XPS13 look like when they are released


Similar thoughts here... and the single memory slot (upgrading means single-channel mode and only 24GB) is a non-starter. My work laptop has 16GB and I'm running out regularly, but the performance drop for single-channel mode is unacceptable imho.

Would like to see this with dual-slot memory access and up to a 32GB model at the top end; upgradable to 64GB would be nicer still. Dropping the GPU in favor of just the APU would be an option for me as well; I don't play games on the work computer, and don't use a GPU in a meaningful way.


If there was an HP Zbook or Dell XPS system with Ryzen and AMD graphics I'd seriously consider it.

Sigh...


Here's a list of most of the Ryzen laptops coming out soon. There are a range of U/H processors, a couple with AMD dGPUs: https://digistatement.com/list-of-ryzen-4000-laptops-with-re...

Not listed on that page but also coming out: Acer Swift 3, Acer Aspire 5, HP ProBook x360, Lenovo Thinkpad T14, X14 (using Ryzen Pro APUs)


Most of them don't look like what I'm looking for, but thanks.


I'm excitedly awaiting the Ryzen 4000 ThinkPad announcements; hopefully I can upgrade my aging P51. First time I'm excited for a tech launch in a long time.


They've been announced, although the details aren't 100%: https://www.ultrabookreview.com/35805-lenovo-thinkpad-t14-t1...


Looks like all the new Ryzen ThinkPads announced so far will use the lower-power 15W models. I would love for the 470p to make a return - a "T14p" with the 4900HS would be pretty sweet.


I'm still waiting for the 3000-models to become affordable :s


The now-released 4000 series of mobile CPUs is not the successor to the 3000 series of desktop CPUs. The 4000 series mobile parts use the same architecture as the 3000 series desktop parts.


Okay. I'm referring to the mobile 3000 series available in the Thinkpad T495.


I'm not following the CPU industry closely. What has AMD done right to achieve its recent success (compared with Intel)?


Intel made missteps with their 10nm process and with manufacturing chips in house. Yields for their new process are too low for retail demand. Additionally, clock speeds are higher with the more mature 14nm process of Kaby Lake, which is itself a more refined version of Skylake. With low yields, manufacturing is too expensive, and performance isn't much improved, leaving Intel little reason to bring these new chips to market. As a result, they've been refining and refreshing chips since Skylake. Coffee Lake introduced a bump in core count as a response to AMD's release of their Zen microarchitecture.

AMD overcame these challenges in several ways. They're fabless, choosing to partner with semiconductor manufacturers like TSMC rather than manufacturing chips themselves. TSMC's current 7nm process node is one generation ahead of Intel's 14nm node, roughly on par with Intel's 10nm node. A smaller process means more efficient, and often more performant, chips.

AMD also created a new microarchitecture (Zen) using a "chiplet" design and an interconnect (Infinity Fabric), rather than the older monolithic die used by Intel. This increases yields and simplifies manufacturing, while making chips modular. Chiplets containing a number of cores can be combined into a single package to create various SKUs, targeting different market segments.


Oh that's interesting, I had always seen the chiplet thing as purely for yield reasons, I never thought about the "one wafer, tons of SKUs" aspect.

It also, I believe, lets them use a different process node for the I/O parts of the assembly, which may be deliberate -- not only do I/O transceivers not benefit from the smaller silicon, it may actually hurt them when you look at the electrical needs of driving larger buses, surviving potential ESD events, etc. (That's purely speculation on my part, and I'd love to hear from someone with more clue..)


"one wafer, tons of SKUs" has been how the semiconductor industry has managed to keep their products evolving and so cheap for decades. Intel chips are largely all the same per generation and product line (low core count i3/5/7, high core count i7/i9, and Xeon). Whether it's an i3 or an i7 depends on how many manufacturing errors were on the surface of their monolithic chip - any core that doesn't pass the tests is disabled by activating some fuses. The chips with the fewest errors contain the most working cores and so on. The same holds for flash/SSD memory and RAM, where frequency/stability are used as the measure instead of errors on the surface of the chip.

The key is that AMD's chiplets are about 80 mm^2 in size whereas a recent i7 is something like 130 mm^2. It doesn't sound that much smaller but when it comes to economies of scale, it makes a huge difference when, for example, a giant scratch takes out half a dozen chiplets instead of two or three full Intel processors. AMD trades performance - since the IO die that holds the chiplets is much bigger than an Intel CPU (300+ mm^2) the communication between chiplets is higher latency - for much better unit economics.

On the opposite end of the spectrum you have chips like IBM z mainframe CPUs, which are as big as 600 mm^2, many of which are just thrown out instead of getting binned, so their yield is tiny and each chip costs tens or hundreds of thousands of dollars.
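A back-of-the-envelope illustration of why die size dominates yield: with a simple Poisson defect model (the defect density below is an assumed figure for illustration, not AMD's or Intel's real numbers), the fraction of defect-free dies falls off exponentially with area, so small chiplets survive a given defect rate far better than one large die:

    import math

    # Poisson yield model: P(no defect on die) = exp(-defect_density * area).
    defects_per_mm2 = 0.002   # hypothetical defect density

    def yield_fraction(area_mm2):
        return math.exp(-defects_per_mm2 * area_mm2)

    dies = [("80mm2 chiplet (Zen 2 CCD-ish)", 80),
            ("130mm2 monolithic desktop die", 130),
            ("600mm2 mainframe-class die", 600)]

    for name, area in dies:
        print(f"{name}: ~{yield_fraction(area):.0%} of dies defect-free")
    # A defective chiplet wastes ~80 mm^2 of wafer; a defective 600 mm^2 die
    # wastes 600 mm^2, unless it can be salvaged by disabling cores (binning).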


Great summary.


Intel's fabs have stagnated, so AMD now has access to fabs as good as or better than Intel's. In addition, AMD has been iterating on a new microarchitecture for the last few years that is actually competitive with Intel's. From 2011-2015 AMD's Bulldozer-based microarchitectures were not competitive, and coupled with being way behind on process nodes, that made for a completely uncompetitive high-end product line.


Excellent engineering management that figured out a good architecture to re-use a few designs across a number of market segments. Years of previous investment in automated layout that let them iterate more rapidly and migrate from fab to fab as needed. Serious problems with an over-ambitious node on the Intel side, which resulted in flailing by Intel management.


They started from a new clean-sheet design. You can read about it here: https://en.wikipedia.org/wiki/Zen_(microarchitecture)


> Not following closely to the CPU industry. What right steps have AMD taken to achieve the recent success (compared with Intel)?

For decades, Intel has benefited from some of the most advanced microchip fabrication facilities in the world.

This had three benefits for Intel:

1) CPUs manufactured on cutting-edge fabs can do more work with less power

2) CPUs manufactured on cutting-edge fabs can get more work done without overheating

3) When you have the best CPUs in the world, you can sell them at huge margins. Basically people are willing to pay twice as much for that last 5-10% in performance.

And then it all came crashing down for Intel. Suddenly, AMD had swapped positions with Intel, offering CPUs that are more advanced and manufactured on a more advanced process.

Even worse for Intel, CPUs take YEARS to get to market. The last time that AMD was beating Intel was after 1995, when AMD acquired a chip maker named "NexGen": https://www.latimes.com/archives/la-xpm-1995-10-21-fi-59417-...


> The last time that AMD was beating Intel was after 1995, when AMD acquired a chip maker named "NexGen":

Er...

https://en.wikipedia.org/wiki/Athlon_64


AMD was beating Intel from the Pentium 4 era until Conroe (1st gen Core), early this century.


Hiring Jim Keller.


The Zen architecture team lead was Mike Clark, and the engineering lead was Suzanne Plummer.

I am very sure that they didn't hire a man like Jim Keller "just" to do engineering work. At Intel, he is more of a project manager with extraordinary authority, and I believe that has been the case for him for the last 10 or so years.

He has shipped countless triple-A hit products, probably more than any other man in the industry. That experience is his real value.


Anyone got a list of laptops with this in? I'm due an upgrade and would like a 4K-ish screen. I don't really care about gaming at all, would just like everything to be very fast.

Battery life on these parts really shocked me. At least as good as Intel for far better performance!


Once again I see "Ryzen 4000 Review" and once again I'm disappointed to find out that it's actually a review of the 4000 series laptop processors. Can we change the title to reflect that? They're not the same - laptop 4000 is the same architecture as the desktop 3000 cpus (that came out last year) afaik (https://en.wikipedia.org/wiki/Zen_2).


You're not going to see Zen 3 until Q4 at the earliest.


Come on Apple. Time to move to AMD!


If you believe the rumors, it sounds like they're moving to their own CPU designs. Maybe the Mac Pro would stay x86_64 though?


Apple's most likely going to move to custom ARM processors for a lot of their laptops.


Catalina dropping 32-bit support screams to me that the next macOS will support ARM with a Rosetta-like amd64 translation layer.


Maybe not Rosetta-like as Apple now controls the chip design? Perhaps Nvidia Denver style.


I didn't know about that. Just read the Wikipedia article: Nvidia wanted to support x86 but Intel wouldn't let them use their patents. I would be SUPER surprised if that wasn't the same with Apple.


Apple is ruthless with their suppliers. I imagine Intel is selling chips for AMD-level pricing to Apple.


Which wouldn't necessarily be a bad deal. I mean, disregarding the current situation (covid), I'm pretty sure that Apple can argue from a guaranteed number of sales pov.


> Which wouldn't necessarily be a bad deal.

Maybe for them. But maybe not for consumers.


Defo talking about the company p.o.v ;-)


I don't get why AMD used 3000 naming for desktop Zen 2 CPUs and 4000 for mobile CPUs.

It is a bit misleading. When I researched some laptops and saw they had a Ryzen 7 3700U, I wrongly presumed they were using Zen 2 CPUs, but I found out that the 3000 series mobile CPUs are using Zen+.


Although it seems confusing, it actually makes sense from a business perspective. The mobile chips are always designed AFTER the general CPU design, with laptop graphics, different thermals and a tighter market (read: an older, more mature process). That's why the technology is a few months behind server/desktop. But it's still the "newest" chip AMD can produce. Therefore it's better to draw a definitive line for the customer - the 4000 series is simply the most up-to-date chip in any market and lineup - to avoid confusion.


That said, it would be nice to see a generation of fractional numbers to just re-align... 4x50H/U for the refresh, then next generation, just wait.

It's far more about marketing... 4000 is higher than 3000, but you'd have to know there are no 4000 desktop CPUs when shopping, so it still creates confusion.


I'm really excited at the prospect of a fast, multi-core chip that does not throttle on a laptop. The G14 that everyone seems to be doing the initial review with shows really nice single- and multi-core performance with a small thermal signature. The gaming 'workout' is not terribly dissimilar to what I'd hope to see unplugged doing development.

The i9 I've got in my work laptop just cannot handle the watts it tries to consume. I've held off updating my personal laptop waiting for something that is not an 8lb+ brick to allow it to run with a load. Should be a promising Spring/Summer.


Why use a laptop if your work appears to befit a desktop? A lot more power and RAM expansion for less cash.

> small thermal signature

When I were a lad, we used to call that 'warm' :)


There are lots of people who would need a desktop for their computing needs, but need to be mobile. These are the typical customers of MacBook Pros and other high-end laptops. My workplace falls under this category, so everyone has laptops. Which is a blessing in these times, as everyone just packed their laptop and continued to work from home.

Even in the times before the pandemic, it was very convenient to not be tied to your office just because of your computer. So a laptop brings a lot of value over a desktop machine. And with chips like the Ryzen 4000, there are more and more jobs which can be done on laptops.


Someone sold them a bill of goods about 'hotel' type desks. These things are a three foot cattle stall, not even a wall around it. Nothing permanent about the work space.

I can get a 32GB Mac just by ordering it at work. A desktop... well, that would require a dozen VPs to sign off on it. Argh.


Usually because desktops can't be moved easily.


I hadn't thought of it that way, but aren't there small PCs that have been designed to be portable? Guy I worked with had one and it was tiny. Get one a little less tiny and you have ventilation + expansion (not as much as a desktop but still better than a laptop). Just have a monitor at work & at home, carry the pooter around.


It can be done, but most people who don't work at just one desk prefer the laptop, as you can quickly move it to a meeting room etc. When you move the small PC, you not only need to carry a keyboard and mouse and have a screen, it also means restarting the machine. A laptop keeps running and has almost everything in one package.


Fair point.


I have been holding out for almost a year already to replace my T440s with a new Ryzen-powered Thinkpad. Hope I won't have to wait for long now.


Big fan of the ThinkPad. My 2017 MBP died (again) a few weeks ago and I can't get it repaired due to travel restrictions. So I took out my old T420 and loaded Linux. Forced to use it for a few days, I have to say I like it a LOT. It's bulkier and the screen is nowhere near as good. But it has 16GB RAM, the keyboard is amazing, it feels solid and I have USB ports. I still occasionally use OSX on a 2010 Mac mini but recognise I'm not missing the Apple ecosystem at all. Quite a liberating feeling, as I thought I was fully tied to it.


Be careful with the E series.

They don't officially support Linux anymore, and I had to wait 12 months for a BIOS update to solve all the issues with my E585 :(


I've been holding out on replacing my XPS 13 until there's a ThinkPad T series with a Zen 2 processor. I recently bought my wife one and I love the ergonomics of the 14" way better than the 13".

Probably my favorite laptop I've ever tried.


Which T model did you buy her?


T480. It was on sale for $750 at the time.


My favorite review so far has to be: https://www.youtube.com/watch?v=_aLH0Q6CZF4&t=0s "Now Intel is in (even more) Trouble - AMD RYZEN 4000 Mobile Review (ASUS ROG Zephyrus G14)"

by der8auer


I liked how he showed the improvements from replacing the default TIM with liquid metal. I wonder if you could get the same improvement from using Thermal Grizzly. I know most people won't want to replace the TIM every month for liquid metal, or be comfortable rubbing nail polish on the circuits.


I'd imagine GamersNexus on YT would be all over that one, as they are sponsored by them. However, Steve tends to be down the list when it comes to getting new toys to play with.

Yes, replacing the LM every few months would be a serious level of dedication IMHO, and I'm not aware of anybody who would go to that effort - however it was good to see what was possible.

Still, it's nice to see that they did use thermal paste and not pads in production, though we don't know what paste they used. If that was known then some crude guesstimations could be extrapolated.

They did put some attention into that cooling system, and it was very interesting in the review how the fan blades vary in length in an effort to reduce fan noise - I'd never considered that or been aware of it before, and it made sense as well as being a kinda neat attention-to-detail touch.

Me personally, I'd not want to do any of that until the warranty expired, to eke out more life in that puppy. Though, as I'm rockin' a Core2Duo, you kinda feel my budget is never going to see this. Still nice to experience it virtually.

Yes, the whole nail polish thing is a kinda neat hack way of doing things, and I'm not aware of any downsides with that beyond warranty - though at least you wouldn't have to replace it periodically, compared to LM.


I was looking forward to other laptops in the 4000 series, but I guess with the current world situation, those will probably be pushed back to fall releases. I'd like to see one of these things with Ryzen 4000 + amdgpu as a Linux laptop.


Agreed.. would love to see some Ryzen 4000 options from System 76 and Dell.


I still have not heard an official statement on what the Thunderbolt 3 story is for these CPUs. Could we get it? Will we get it?

Without this I simply can't buy 'em, as we've invested in TB3 docks for the past two years and are pretty happy with them.


Intel is finally starting to certify Thunderbolt on Ryzen motherboards (ASRock had one certified in February), so it's not impossible.


The new Thinkpad T14 is supposed to have a TB3 on both the Intel and AMD versions, at least.


That has not been confirmed. There is nothing official on the T14 on the Lenovo site yet except for the press release.

https://www.lenovo.com/us/en/commercial-notebook/thinkpad/th... (and yes that's the right url, try replacing "thinkpad-t14-series" with "x" you will be redirected there)


I'm going off of the datasheet reproduced here:

https://www.notebookcheck.net/Lenovo-ThinkPad-T14-Ryzen-4000...


Everybody does that and it's getting really boring fixing this. When the PSREF is out, then we can talk; until then, that's just a heap of typos. They already published a correction over RJ45 and blah blah. I need to type this up four times a week on various social media. People, get a grip: there is ONE certified AMD motherboard with TB3, an extremely expensive Threadripper one. It is beyond unlikely that it will happen.


It's possible, but less likely until USB4 (IIRC), where TB becomes an optional part of the spec without the need for certification from Intel... Probably next year we'll see a lot more options in the space.


One (biased?) metric of performance and efficiency of a modern CPU is the RandomX algorithm used in mining cryptocurrencies like Monero. It is designed to use all the CPU subsystems, including the memory controller. The top CPUs by far for RandomX are the AMD Ryzen 3000 series, both in terms of performance and efficiency.

https://monerobenchmarks.info/


I would really like to see laptop reviews sort by price first, then by logical maximums within a given narrow price range.

It would make sense to group by price, then by key secondary features like: weight, screen quality, raw performance (mixed current-year games score, disk-bound, CPU-bound, GPU-bound) when plugged in, and maximum battery time (default out-of-box experience).
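For what it's worth, that grouping is just a multi-key sort. A tiny sketch with invented data, bucketing by price and then ranking within each bucket by the secondary criteria:

    # Hypothetical review data; fields and numbers are made up for illustration.
    laptops = [
        {"name": "A", "price": 950,  "weight_kg": 1.6, "perf": 7.5, "battery_h": 9},
        {"name": "B", "price": 1050, "weight_kg": 2.3, "perf": 8.9, "battery_h": 6},
        {"name": "C", "price": 980,  "weight_kg": 1.3, "perf": 6.8, "battery_h": 12},
    ]

    def price_bucket(laptop, width=250):
        return laptop["price"] // width        # e.g. $750-999, $1000-1249, ...

    # Price bucket first, then secondary criteria within a bucket
    # (lighter, faster, longer battery ranks higher).
    ranked = sorted(laptops, key=lambda l: (price_bucket(l),
                                            l["weight_kg"],
                                            -l["perf"],
                                            -l["battery_h"]))
    for l in ranked:
        print(l["name"], l["price"], l["weight_kg"], l["perf"], l["battery_h"])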


I'm not saying that you're wrong, but e.g. in my case the priorities are different (screen quality, keyboard, battery, touchpad/clickpad/trackpoint, weight, etc., and CPU & GPU performance would be last, but potentially weighted against watt consumption) => can become complicated => I don't think that a "universal" method of getting the "ultimate" sorted list is feasible => I'd definitely love a site that would allow me to set my own priorities and get as a result the "perfect" laptop/notebook that best matches those expectations/priorities :)

Some retailers have such sites, but maybe not reviewer sites? (I do use https://www.notebookcheck.net whenever I want to buy something, but their selection criteria are limited.)

In my case price has never been a direct primary selection criteria - I always bought the gadget that met all my minimum requirements but which didn't exceed the budget that I had available.


For me it's about what works for the least money, and 'high price' is usually the main consideration, since I __rarely use__ the laptop in question. My use case for a laptop is a couple times a year when I'm at a tech or similar convention for a weekend, or very very light use when visiting a relative's house for an hour or two.

I agree that the screen and keyboard quality are also factors that veto a laptop; but those are much more subjective and difficult to actually rank. I think that might go in to an 'editors choice' within a given price point. Then you'd compare the 2-4 that fit within your budget needs.


Even then... keyboard and touchpad are still really subjective, and to some extent the screen is as well.


Cool to see a mobile Ryzen chip performing well, but there was nothing mentioned about thermals in all their tests. That’s more concerning to me in a lighter laptop than the “muscle-books”


This is a dishonest review. The numbers are fine but the article surrounding them is dishonest. It keeps repeating claims about weight, but compares 2070 (115W) and 2080 (150W) GPU laptops to a 2060 Max-Q (65W) laptop. Mind you, there are no other 2060 Max-Q laptops yet, but there are 2060 laptops; those would've been much closer.


The article almost never stops mentioning that it's competing quite well against laptops that are 50% or 100% heavier. I don't think it is dishonest. They include some similar size laptops in the graphs, but the results aren't as interesting because it is such a one sided result. The article is trying to sell you on a laptop that performs like a bulky heavy gaming laptop but travels like a mid range spreadsheet pusher.


It's competing well on CPU benchmarks. The moment it benches a game https://images.idgesg.net/images/article/2020/03/asus_rog_ze... the picture becomes more nuanced.


That's comparing an RTX 2060 to a 2080 in the Intel laptops. Now, if you care about gaming it's fair to point out there aren't any RTX 2080 laptops with AMD being reviewed, but it's not an indication of the performance it could have with a full-spec GPU. Hopefully we should see one in the next few weeks.


You were saying "The article is trying to sell you on a laptop that performs like a bulky heavy gaming laptop but travels like a mid range spreadsheet pusher," but I was saying it does not perform like a bulky heavy gaming laptop, because it has an inferior GPU compared to those.

I am sure once it's on a more equal footing there will be some small gains, but nothing earth-shattering, alas.


I think people are more excited about the possibilities of the CPU than this specific laptop, but fair point.


I'm glad to see AMD working on its single-core IPC performance. The only CPU-bound workload I have as a user is game emulators, for which multi-core performance is nearly irrelevant.


PC World is still a going concern?


PC Magazine employed John Dvorak up until 2018!

https://news.ycombinator.com/item?id=18044106


How about that.



