Hacker News
AMD unveils world's most powerful desktop CPUs (zdnet.com)
116 points by kmod on Nov 11, 2019 | 70 comments



> AMD has released a salvo by unveiled what are the world's most powerful desktop processors

Is it too much to ask that tech journalists proofread their articles before hitting publish?

> AMD unveils world's most powerful desktop CPUs

This article contains zero benchmarks, nothing beyond core count and TDP, to substantiate the claim that these are the "world's most powerful desktop CPUs".

Can we please stop giving ZDNet impressions for this tripe?

The Anandtech article from 4 days ago on the Threadripper announcement at least included AMD's slide deck. [1]

[1] https://www.anandtech.com/show/15062/amds-2019-fall-update


Discussion of the Anandtech article: https://news.ycombinator.com/item?id=21473235


The article reads like it's been back and forth between several languages in Google Translate.


> Is it too much to ask that tech journalists proofread their articles before hitting publish?

Yes.


It's amazing how AMD, which once seemed a company on their knees, now seems to be kicking Intel's butt, and I find myself worrying about Intel now. Why do they seem to be failing on so many projects? Itanium, Larrabee, XScale, the Modem Division? Why is one of the world's largest and most successful CPU companies now noncompetitive on pretty much everything except their mainstay x86 processors, where they seem to be stalling?

Is there something wrong with Intel management/culture that's allowing other firms to run rings around them?


With respect to their core CPU business, they had been executing a monopoly playbook since the start of the decade - 4-core desktops and 5% improvements on the same architecture, every generation - which made them organizationally unfit to stay competitive. When the Ryzen chips first launched they quickly moved to punch up the core counts to keep pace, but they were still waiting on their 10nm process to come through, so they ran out of breathing room really quickly. For the past year it's just been "run hotter and engage in FUD", and now that AMD is on a 7nm process there's really no comparison to be made.

Could they have seen it coming? Absolutely, and they could have had a new architecture ready earlier or adjusted their process plans to de-risk, but again, multiple years of uncontested dominance in their main business tends to make things that aren't next quarter top-line a lower priority. And I think that really is at the heart of the issue with their projects failing so frequently. Intel's pursued anti-competitive practices for decades, and when it works, they coast along without urgency, letting marketing call the shots, and when it doesn't they use their economic moat to rush a solution.

But this time, with everything running late, Intel is going to face several quarters ahead without having a real response on the CPU front. They are not going anywhere, though. Their foundry channel is needed to meet global demand for new chips - a good deal will still find buyers. They still have plenty of existing agreements to work with, and can negotiate their way through a lean patch. And they have their new GPU stuff coming soon which should hopefully give them a piece of a different market. There will be plenty of stress to re-org along the way, though, so Intel could come out of this looking very different.


Given Andy Grove's supposed "only the paranoid survive" mantra, one would have expected them not to be complacent, especially with the Asian silicon fabs eroding what was once their core competitive advantage.


It's like Intel executives wanted to give AMD a second chance.


Their pipeline has stalled but they’ve got good stuff in the L2 cache.


> It's amazing how AMD, which once seemed a company on their knees, now seems to be kicking Intel's butt

That's how competition works. You are constantly making decisions, each of which could bury you, even if you were a total winner only yesterday.

Intel bet on its own fabs and a monolithic die design. AMD outsourced fabrication and focused on a chiplet design. And AMD gained the benefit.

Though Intel still has some advantages, and will most likely move to a chiplet design soon, so we will yet see very tight competition.


According to Charlie, yes, Intel does have serious management problems right now.

https://semiaccurate.com/2018/06/29/intels-firing-of-ceo-bri...


Intel's inertia ran out, and AMD surpassed them because they're actively innovating.

This is what happens when an incumbent company delays new products because of mismanagement or to maximize profits. (See also, Apple)

A CPU isn't like a database; it's easy for customers to switch to a competitor.


> which once seemed a company on their knees

Weren't AMD CPUs a good alternative to Intel for most consumers? If I remember correctly, Athlon, Phenom, and FX were pretty good for the price. Or are you talking about earlier times?

edit: "good alternative" perf/price wise, yes intel used to have the advantage in raw performance but their pricing made AMD cpus look very interesting for most consumers.


Phenom got soundly beaten by the first few Core i generations (Nehalem, Sandy Bridge) in terms of both peak performance and power efficiency. Bulldozer (the basis for AMD's FX and "A" series CPUs) was mostly a misstep, doomed to feed the very low end of the market. The only truly successful devices AMD powered in that period were the Xbox One and PS4. (Incidentally, those consoles used "Jaguar" cores, which came from AMD's low-power Bobcat line rather than Bulldozer and didn't use the shared execution units (CMT) that were Bulldozer's headline feature, a feature that turned out not to work as well as intended.)

So between about 2010 and the introduction of Ryzen in 2017, there really was no alternative to Intel if you were building a PC that was either fast or energy efficient. In terms of performance, the 1000 & 2000 series Ryzens roughly attained parity with Intel, while the 3000 series now pulls ahead in many multithreaded and some singlethreaded disciplines, and definitely has better performance-per-watt under load.

Ryzen still draws significantly more power than Intel's CPUs when idle though - the PCIe 4.0 transition has counteracted the improvements from the 7nm shrink in this regard, and because the APU variants lag the GPU-less SKUs by almost a year in process and microarchitecture, you also have to budget power for a discrete GPU, which adds a few idle watts.

(The most power-efficient-at-idle Ryzen build I'm aware of is the ASRock DeskMini A300, which uses no chipset, combined with a low-end APU - this draws about 7 W when idle. In terms of performance and expandability, this system isn't any better than a NUC, which draws somewhat less power when idle. For some reason you can't buy a full-size A300/X300 motherboard, even though that would actually be quite an interesting setup.)


About the ASRock mini, have the BIOS issues been solved?

https://www.newegg.com/Product/SingleProductReview?item=N82E...


That link doesn't work for me (perhaps geo-IP gated?) - what issue are we talking about?

The highly reputable c't Magazine used it in one of their recommended PC builds recently. They tend to be very thorough and fussy about issues, so I suspect they didn't run into whatever issue you're talking about. I believe they did mention that you need to watch out for BIOS support for the 3000 series CPUs (and ask the seller to update it if necessary and you don't have an older AM4 APU handy), as with any motherboard that predates the release of the CPU you're trying to use.


I'd say AMD had better products than Intel during the NetBurst era (2000 - 2006)

Then Intel Core came out in 2006 and it was game over for AMD until Zen was introduced in 2017.

Between 2006 and 2016 AMD looked half-dead most of the time.


Until the latest set of Ryzen improvements, Intel made pretty much the only desktop CPU you'd use, for the best part of a decade.


I don't think the word "failing" is relevant here. The reality is that AMD is still well behind Intel in many, many things.

Projects fail. It's expected. The only way to have no failing projects is not to create experimental projects.

But AMD is definitely catching up and that's a good thing.


Please promote the guy who came up with the name "threadripper".


It's a shame they didn't call the line EPYC THREADRIPPER though.


Sounds like something a true-blue Aussie bloke would come up with.


I only give negative feedback


Sounds like something a romance novelist would come up with.


280 Watts TDP! Wow, I think that's a new record for desktop CPUs, although GPUs had reached that a while ago. I wonder how well air coolers work on them, or do they need water cooling for maximum performance? Impressive nonetheless.


I think TDP is becoming less and less of a comparable figure. All designs could use far more power in specific worst-case corner cases (e.g. a very specific instruction stream that causes every functional unit to be in use every cycle, and all data paths to toggle logic state every cycle).

Yet in the typical use case (desktop PC, most cores idle most of the time), power consumption will be far lower.

What really matters is the graph of "how much computation can I get out of this thing in 10ms bursts, 1 second bursts, and 10 minute bursts?", with a specific cooling setup.
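To put a number on that, here's a minimal toy sketch of the burst-vs-sustained trade-off; the function and every figure in it are hypothetical, not taken from any datasheet:

    # Toy model (my own simplification, not any vendor's spec): a chip boosting
    # above what the cooler can dissipate in steady state can only keep it up
    # until the heatsink's thermal mass is "used up".
    def burst_seconds(p_boost_w, p_sustain_w, heatsink_j_per_k, headroom_k):
        """Seconds of full boost before the heatsink hits its temperature limit."""
        surplus_w = p_boost_w - p_sustain_w       # power being stored as heat
        if surplus_w <= 0:
            return float("inf")                   # cooler keeps up indefinitely
        return heatsink_j_per_k * headroom_k / surplus_w

    # Hypothetical numbers: 250 W boost, 180 W sustainable by the cooler,
    # ~300 J/K of heatsink metal, 40 K between idle and throttle temperature.
    print(f"{burst_seconds(250, 180, 300, 40):.0f} s of full boost")  # ~171 s

Same chip, same TDP on the box, very different sustained throughput depending on the cooler.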


It's correct that TDP is getting more relative¹, but it's not correct that we're talking about corner cases. Anandtech locked an i9-9900K to 95W, and the loss was considerable (between 8 and 28%)².

¹: it needs to be noted that in recent years, Intel's TDP has become "much more relative" than AMD's
²: https://www.anandtech.com/show/13591/the-intel-core-i9-9900k...


Intel's TDP has always had the same meaning: "If your cooler can dissipate TDP, the CPU will run all-core loads at base clock speed, or single-core loads at boost clock speed." - Pretty sure that statement has always been true, within +- 100 MHz depending on the specific workload. The distance between base and boost clocks has just increased over the years.


Considering Intel has already ditched barely-meaningful TDP for completely meaningless SDP (the "dynamic contrast" of processor wattage), I'd say it already isn't a comparable figure.


I'm not sure whether people use $2k CPUs in "most of the time all cores are idle" scenarios a lot.


Even if they don't, a 10-15% performance gain for double the price seems questionable. If you include the specialized motherboard, chillers, and high-speed RAM, even more so. Maybe it would work in some very specific scenarios, like a VR prototype or a flight sim, but for the majority of users it doesn't make economic sense.


Yes, they need water cooling.

The marketing communication is amusing: "Optimized for water cooling": https://images.anandtech.com/doci/15062/AMD%20Fall%20Desktop...

Threadrippers are HEDT parts, at least; I think this TDP is not unusual there. Paradoxically, AMD's marketing is likely more reliable when it specifies that the i9-9920X's wall power consumption is up to ~300W (https://images.anandtech.com/doci/15062/AMD%20Fall%20Desktop...) than Intel's own spec is (https://www.intel.com.au/content/www/au/en/products/processo...).


> Yes, they need water cooling.

That's inaccurate. Air cooling solutions are readily available for Threadripper.

Water cooling systems simply move heat to a more convenient place to air cool it away (thus the radiators and fans at the destination). The only limiter on air cooling (and not water) is weight/size on the CPU socket (or amount of air moving over the fins of the heatsinks).

Some examples of retail air coolers that can cool it out of the box already:

https://www.amd.com/en/thermal-solutions-threadripper

https://noctua.at/en/amd_ryzen_threadripper_tdp_guidelines

https://www.coolermaster.com/catalog/coolers/cpu-air-coolers...

Now the hypothetical ceiling on water cooling is undeniably higher. You could literally use a jet engine as a fan to cool the water, or send it through liquid nitrogen. But we aren't there yet with 280 Watts. At the current wattage air cooling remains viable, with some trade-offs (e.g. huge air coolers, requiring huge cases, plus strong in-case airflow).

You definitely don't "need water cooling," yet. At least not for Threadripper.
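As a rough sanity check on "viable at 280 W", a back-of-the-envelope thermal-resistance estimate; both resistance values below are my assumed ballpark figures for a big tower cooler on the large Threadripper heat spreader, not measured specs:

    # Die temperature ≈ ambient + power * total thermal resistance.
    # Both resistance values are assumptions, not manufacturer data.
    tdp_w = 280
    ambient_c = 25
    r_die_to_heatsink = 0.08   # K/W: die, solder, IHS, paste (assumed)
    r_heatsink_to_air = 0.15   # K/W: large tower cooler, good airflow (assumed)

    die_temp_c = ambient_c + tdp_w * (r_die_to_heatsink + r_heatsink_to_air)
    print(f"Estimated die temperature: {die_temp_c:.0f} °C")   # ~89 °C

Roughly 89 °C with those assumed numbers: hot, but under a typical mid-90s throttle point, which is consistent with "viable, with big coolers and strong case airflow".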


> That's inaccurate and doesn't actually make sense.

You want a steep temperature gradient at the limited chip surface to maximize heat flux. Pumping a liquid with high heat capacity to it steepens the gradient where you need it. Cooling the liquid on the other hand can happen over a much larger surface area with shallower gradients and thus lower heat flux.

Note that while the heat pipes in air coolers do use convection, they do so passively; liquid cooling solutions are active.


You've not really added anything to the discussion here, but seem to be using verbose language in an attempt to convey expertise. For example this:

> You want a steep temperature gradient at the limited chip surface to maximize heat flux.

Is true with both and is really just a convoluted way of saying "if you don't convect heat fast enough, it won't work." Yeah, undeniably true...

> Pumping a liquid with high heat capacity to it steepens the gradient where you need it.

Where you need it is at the surface of the CPU package. Both solutions provide adequate heat capacity at that location; that's why they fundamentally work. One uses heat pipes and fins directly above, with fans/air carrying the heat away; the other moves the heat and does the same elsewhere.

> Cooling the liquid on the other hand can happen over a much larger surface area with shallower gradients and thus lower heat flux. Note that while the heat pipes in air coolers do use convection, they do so passively; liquid cooling solutions are active.

This is a basic explanation of how water cooling works; it doesn't really add any specific insight into the problem scope.


You updated your post since I made my comment. I have no issues with the new version.


I did tone down the language immediately after posting it. But you also altered yours after posting it (the third paragraph was a later addition).


That's like saying that heat sinks 'don't make sense'

Water cooling systems allow heat to be moved away from the chip's surface more efficiently than pointing a fan at it can. Once moved away, a larger radiating surface can be air-cooled.


This is entirely correct. With the space you can make available in Desktop PC cases it's certainly possible to cool a 300W chip with a tall heatsink covered in heatpipes with a sensible fan. And the heatsinks used are still relatively cheap compared to what can be done if enough money is thrown at it. If you don't have enough space for a large enough heatsink, that's when you start needing far greater airflow than what's sensible in non-industrial environments (mostly in terms of noise). Just look at what some of these big Cisco boxes do.

I'm disappointed AMD have chosen to confuse people with their marketing material. I guess these are expensive high-end CPUs aimed at the type of enthusiasts who see the idea of requiring water-cooling as a positive.


Not sure about Gen 3 TR on air; I think they say an AIO is recommended.


Threadripper has a lot more surface area to exchange heat; the Ryzen heat-spreader interface is about half the size of a Threadripper's, so you'd need some serious cooling to push this kind of wattage through that.


"Recommended". So at least they're being honest, unlike Intel and their stated TDP.

Laptop manufacturers have always used the cheapest possible cooling, but since Sandy Bridge, it has become painfully obvious the TDP is severely understated.

Thermal throttling has become acceptable, and while it's impressive that Intel CPUs last years at ~90 degC, I'd rather they be honest with their actual requirements to get the full performance out of a given chip.

Many laptops go over 90 degrees even at the guaranteed base speeds (no TurboBoost), in everyday use, not even multihour rendering or anything.


It's finally happened: I don't foresee a time when I need more CPU power in my desktop. I could use more memory, faster (and larger) SSDs, faster GPUs with more TCs (although I'm actually mostly good on that too), and more bulk data storage on platter disks, but honestly I'm good on CPU for all my needs.

Here's why: most of the cores on my Ryzen 7 are idle most of the time. There are intense periods where I need to compile stuff and they max out, but these are relatively short. Cup-of-coffee short. Any gaming activity seems to use no more than 8 cores max. Should I ever need more CPU, I don't need an order of magnitude more, I need orders of magnitude more. Those are easily procured through AWS or Azure or some other cloud provider. Same with GPUs. I might make a training run for a NN on my desktop to hypothesis-test something that I would then run for longer hours on cloud hardware. It's never going to be cost-efficient for me to scale up with any sort of ownership of the hardware.

Of course I'm going to buy the new monster at some point, but it will be due to hardware failure, not due to me needing the new shiny.


> Any gaming activity seems to use no more than 8 cores max

Large numbers of cores have never been useful for gaming, but overall thread performance has. Right now the major bottleneck for high end gaming is GPUs because display pixel counts have gone through the roof with high frame rate VR and 5k monitors, but keep in mind that the current most powerful graphics card for games (Nvidia 2080 Ti) is over a year old, still costs over $1000, and is not about to be replaced yet because, just like Intel, NVidia hasn't had competition at the top in a long time. If AMD can bring this success to the graphics space, maybe that will change too.


AMD's Navi looks to be competing quite well with NVIDIA in the mid-range/enthusiast space, and shortly in the mid-to-low-end segment with the imminent RX 5500.

I suspect it won't be long until the empire strikes back in this case though: NVIDIA also uses TSMC for fabbing its GPUs, so they too will soon be shipping 7nm GPUs, with all the intrinsic advantages of that node. Of course, pricing of those new products will hopefully be strongly influenced by AMD's resurgence. (And, supposedly, by Intel's re-entry into the GPU market, but I remain skeptical that will make much of a dent for a few years yet.)


Give it a year and developers will make sure your software slows down enough to make more CPU power seem like a worthwhile investment.


> Any gaming activity seems to use no more than 8 cores max.

The main reason for this is most likely that all current-gen game consoles are using 8 CPU cores, so this is basically the "baseline" for games right now. If the next generation of game consoles increases CPU core count, the core usage for games running on PC will most likely also go up accordingly.


Depends. I was talking to a mate who does Java development yesterday about the rebuild times in Eclipse - costing an hour or two a week per developer.


"64k ought to be enough for anyone"


You are just being silly. I didn't even say that computing power was enough for me, just that I didn't need more on my desk.


I’ve been reading that many of the Ryzen and Threadripper CPUs support ECC RAM, but motherboard support is shaky at best. Worse is that some motherboards will boot with ECC but don’t actually use it.

Does anyone have any concrete experience?

I was hoping to build a workstation with ECC memory, but it appears only the EPYC CPUs have certified support for ECC.

The Xeon W appears to be the most cost-effective option for guaranteed ECC memory.


I have an ASRock X399 Professional and a TR-1950X.

64GB DDR4-2666 ECC UDIMM runs perfectly on it, and the kernel reports ECC support.

Day-one memory support was pathetically bad with Threadripper, but it has gotten better. It improved even more with Threadripper 2000.

If Ryzen 3000 is any guide, the gaping memory compatibility gulf between Intel and AMD will probably be closed to a slight gap by the time TR 3000 releases.

As far as I'm aware, every X399 motherboard vendor supports and warranties ECC on the platform, but only if you use memory on their QVL at its rated speeds.

sTRX4 will likely be no different.

As far as cost-effectiveness, a W-3175X ($2,978.80) and Dominus Extreme ($1,868.74) is indeed $552.45 less expensive than a Supermicro MBD-H11SSL-NC with EPYC 7702P ($5399.99), but the 7702P is about 37% faster multi-threaded, and about 13% faster single-threaded.
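Spelling that out with the prices quoted above (the performance ratios are the figures I cited, not fresh benchmarks):

    # Platform cost and rough value-per-dollar, using the prices quoted above.
    intel_platform = 2978.80 + 1868.74    # W-3175X + Dominus Extreme
    epyc_platform = 5399.99               # H11SSL-NC bundled with EPYC 7702P

    print(f"EPYC platform premium: ${epyc_platform - intel_platform:.2f}")    # $552.45

    # Multi-threaded perf normalised to the Intel setup (37% faster, per above).
    intel_perf, epyc_perf = 1.00, 1.37
    print(f"Intel: {intel_perf / intel_platform * 1000:.3f} perf per $1000")  # ~0.206
    print(f"EPYC:  {epyc_perf / epyc_platform * 1000:.3f} perf per $1000")    # ~0.254

So the EPYC box costs more up front but delivers more multi-threaded work per dollar.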


It seems they have left the xx90X model number unused for now but the 32-core has been accounted for...


Not a huge secret that they have a 64-core Threadripper 3990x ready to go.

But there's also no hurry to release it, as even the 24-core wipes the floor with anything Intel has to offer.


On the other hand, I am beginning to miss the times when things could be done on a single core CPU...


I wonder how many problems can make full use of those cores given only four RAM channels? Our robotic grasping problems certainly could but I know we're a bit of an outlier in that way.


> wipes the floor with anything Intel has to offer

* in synthetic benchmarks that use every last core

edit: my bad, I mistook this for a technical audience


What is ironic is that after Intel lost some benchmarks, it decided to update the benchmark suite it had been using to "real world" tests. What I understand from this is that "real world" means the benchmarks where Intel is in front or not far behind.


This IS a workstation segment, which likes cores.

But a Zen 2 core at 4.5 GHz is also faster than a Skylake core at 4.5 GHz (about the peak boost in Intel HEDT), even in single-threaded tasks.


Single-core CPU performance is at a peak. There aren't any revolutionary increases expected there anytime soon.

Multicore performance gets more and more important with time.


> Single-core CPU performance is at a peak. There aren't any revolutionary increases expected there anytime soon.

To be fair, Ice Lake/Sunny Cove seems to introduce a significant IPC improvement. It's currently stuck in low-power dual core CPUs due to Intel's 10nm yield woes. (allegedly quad cores have launched too, but I haven't seen any reviews of devices based on those?) But presumably it will eventually percolate up the product stack and give desktop processors a solid boost too.

So that particular fight doesn't look anywhere near over just yet. (Zen+ to Zen2 was also a decent IPC bump, so no reason to believe AMD have hit a wall in that regard either - I'm sure they're scrambling to make another leap in Zen3/4 that compares favourably with the Skylake->Sunny Cove one to preempt Intel retaking the crown.)


> Multicore performance gets more and more important with time.

No? Above 2-4 cores it's just marketing hype, like it was ten years ago already, and today if you need parallel processing you have vastly more powerful resources to tap.

Sure, this AMD CPU might be faster in 3D rendering or video encoding or other tasks that use 4+ cores, and that might be useful if you truly can't move them onto a GPU, but the truth is that at the top end, many-core architectures are getting squeezed out by streaming processors, and the workloads that have a hard dependency on a many-core, single-node, general-purpose processor are rarer by the day.


Threadripper is proven to do extremely well in parallel compilation benchmarks -- the Linux kernel build process being the chief example. I've seen the numbers on at least 4 sites (Phoronix included).

It's not only about 3D. We're talking everyday programmer productivity, partially achieved by less waiting around when compiling your projects.
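If you'd rather measure this on your own projects than trust review sites, here's a quick-and-dirty sketch that times a Makefile-based build (e.g. a configured kernel tree) at a few job counts -- it assumes you run it from the project directory and that full clean rebuilds are acceptable; the job counts are just examples:

    # Quick-and-dirty build-scaling test for a Makefile-based project
    # (e.g. a configured Linux kernel tree). Run from the project directory.
    import os
    import subprocess
    import time

    for jobs in (1, 4, os.cpu_count()):
        subprocess.run(["make", "clean"], check=True, capture_output=True)
        start = time.perf_counter()
        subprocess.run(["make", f"-j{jobs}"], check=True, capture_output=True)
        print(f"make -j{jobs}: {time.perf_counter() - start:.1f} s")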


> everyday programmer productivity

> the Linux kernel build process


Yeah, what is this BS Linux kernel build process that is a known metric everyone can test against? I compile projects dozens of times a day at my work and never once have I compiled the Linux kernel...


It's an extreme case to demonstrate if a CPU's multicore performance is clearly better than previous generations.

For what it's worth, I migrated from a consumer-grade i7 CPU to a workstation-grade Xeon and the differences in compile speeds in my everyday projects (less than 400 files) are still significant and very noticeably in favour of the Xeon.

Linux kernel compilation is just a benchmark that multiplies such differences between CPUs. Thus the buyers can -- hopefully -- make a more informed decision.


Weird, I have a 9000+ file Maven monster and IntelliJ's incremental build means I never have to wait for compilation during normal operations, and this is with a meager i7-8550U.


Well, that's an incremental build. I am talking about occasionally changing dependencies in a project written in a dynamic-but-compiled language, which leads to full rebuilds. The difference was big.

It also has tooling for "watching" test directories and automatically re-running tests when certain files are changed -- which utilises incremental compilation under the hood, and there, indeed, there is no visible compilation performance difference.


Yeah, I know it is, and said pretty much what you said...


Seems I didn't catch the sarcasm, sorry. :D



