I'm still baffled by that showing years later. Over-engineered, over-cooled chips reaching absurd speed records have been a staple since as far back as I can remember, like the Pentium 2 or before. Why anyone at Intel thought they should hide the sauce, or got pissed when fans got to it, is beyond my comprehension.
Didn't they use a chiller?
Which, taken as the usual world record and cool feat of CPU speed, would have been perfectly fine and impressive. But they were getting scared of AMD, so for some weird reason some idiot on their PR team insisted on pretending, from top to bottom, that this was a normal chip running under normal conditions, and when Asus (I think?) let fans see how it was achieved, Intel gave them less than 30 minutes to give the chip back.
This was not a great era for Intel ...
The 9GHz clock was not achieved through any normal cooling or by the efficiency of the chip.
These overclocking records have been around for decades, but they're in no way, shape, or form representative of the average user, or even the top 1% of users.
It's impressive purely because it was possible with an off the shelf chip.
AMD has raised their market share over the last 4-5 years from about 8% to 31% of x86 sales. Intel also saw 5 straight years of market share declines against AMD in the server space - which is by far the most lucrative.
And yes, they're also worried about ARM and Nvidia.
ARM is another threat on the horizon on that front, but it's nothing compared to the beating AMD has been giving them since the Ryzen showed up.
However, I still own a substantial amount of Intel stock because it will be one of the most valuable companies in the world if China takes military action against Taiwan. If not, then I still believe Intel will succeed with their IFS strategy, because the world is about to enter an era where compute will never be enough and designers want a second supplier next to TSMC.
The team was able to achieve a very impressive 9043.92GHz (a literal joke, since visible light is ~480 THz)
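The unit mix-up is easy to sanity-check with a quick back-of-envelope (the ~480 THz figure for visible light is from the comment above; the record itself was reported in MHz elsewhere in the thread):

```python
# The article says "9043.92GHz"; the record figure is 9043.92 MHz.
record_mhz = 9043.92
record_ghz = record_mhz / 1000           # ~9.04 GHz, the plausible reading

visible_light_thz = 480                  # roughly the low (red) end of visible light
visible_light_ghz = visible_light_thz * 1000

# Taken literally, "9043.92 GHz" would be ~9 THz -- far infrared,
# still thousands of times below visible light, but absurd for a CPU clock.
literal_thz = 9043.92 / 1000

ratio = visible_light_ghz / record_ghz   # how far the real clock is from light
print(f"record: {record_ghz:.2f} GHz; visible light is ~{ratio:,.0f}x higher")
```

Even read charitably as MHz, the clock is still some fifty thousand times below optical frequencies, which is the commenter's point.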
What does this mean? Only those with Anglo-Saxon heritage make for good writers?
A bit of a stretch though.
Aliens.
Looks like the usual cost cutting where you replace employees with contractors.
Also, this entire article is based on an Asus advertisement video on YouTube. I'm sure they wouldn't put their best writers on that kind of content.
With better cooling I'd be surprised if I couldn't sustain 5400 on all 12 cores.
The overclocking team from Asus has achieved a new CPU frequency world record with Intel's brand-new Raptor Lake Refresh Core i9-14900KF. The team was able to achieve a very impressive 9043.92GHz on a single P-core with liquid helium, breaking the previous world record by 35.1MHz.
Perhaps the editor thought that "almost 9.1GHz" sounds better than "over 9GHz". I disagree with both - the best would be "over 9000 MHz" [0]. Boom, honest, direct reporting.
It’s really not hard to do, it takes more energy to think up the clickbait than to just tell the facts.
But everything's so slow and path dependent.
I wonder how much you could do with a single rack if you got really serious about it. Cooling, power, networking etc.
A super long pipeline allows higher clock rates, but it takes a giant dirt nap when branch prediction fails or when you have a cache miss. You end up with massive latencies in these cases.
Further, generally all else being equal a lower clock rate allows you to be more energy efficient.
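A rough sketch of why deep pipelines pay more per mispredict, even at higher clocks. The stage counts and clock speeds below are illustrative assumptions (a NetBurst-style deep design vs. a shallower core), not figures from the thread:

```python
def mispredict_penalty_ns(pipeline_stages, clock_ghz):
    """A pipeline flush costs roughly one pipeline's worth of cycles;
    wall-clock penalty in ns = stages / (cycles per ns)."""
    return pipeline_stages / clock_ghz

# Hypothetical comparison: deep pipeline at a higher clock vs. a shallow one.
deep = mispredict_penalty_ns(31, 3.8)     # deep design, faster clock
shallow = mispredict_penalty_ns(14, 3.0)  # shallow design, slower clock

print(f"deep: {deep:.1f} ns/mispredict, shallow: {shallow:.1f} ns/mispredict")
```

The takeaway: the clock-speed advantage doesn't make up for the longer flush, so the deep design loses more real time on every misprediction.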
Agree that it had tons of problems. But branch prediction has gotten better, compilers have gotten better, etc. Maybe they could be handled now!
I’m kidding in the sense that I don’t think a single core could be designed to usefully use 1000W. I get why things happened the way they did. But I do still think single-threaded performance is much more interesting than multi-core, so I wish we could see how those designs would have evolved.
This? You push a button and a number comes up.
Hardware overclocking requires a decent amount of knowledge, most of it obtainable only through months of trial and error, and a lot of tuning to push dozens of often conflicting parameters just right. Extreme overclocking requires that much more. If what they're doing is simply "pushing a button", then programming and system administration can be reduced to that too, along with many other things.
You simply can't get full performance from modern systems without some amount of overclocking, and things like PBO and XMP/EXPO profiles are far from what your hardware can achieve because they have to be very conservative, or many systems won't run without additional manual tuning, which most consumers won't do.
(Except for closed systems like Apple's where your hardware doesn't belong to you and you can't change anything anyway.)
So one immediate thing overclockers provide is general guidance on what you can expect to achieve and which knobs to turn which way, so you can get close to maximum performance from your hardware without spending months on it like they did. I heavily rely on such information. My system would be at least 25% slower if not for these "button pushers".
Then you look further and see that it’s done with coffee machines, motors (of all sorts), and just about any other device you can find, make or name.
Wanting a fast/strong/powerful/quick X is a fairly common thing for many of us.
The bigger local issue you run into with liquid helium and liquid nitrogen is having it evaporate in an enclosed space. You can "easily" create an environment leading to inert gas suffocation (a real hazard in some industrial settings). In reality, any simple case like this is unlikely to involve enough N2 or He and is unlikely to be sufficiently enclosed, but in principle it would be possible - maybe if you were in a basement and spilled an inexplicably large thermos of the stuff.
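A back-of-envelope on the displacement risk supports the "inexplicably large thermos" framing. All numbers here are my own assumptions, not from the comment: a ~700:1 liquid-to-gas expansion ratio for nitrogen, a sealed 30 m³ basement, perfect mixing, and ~18% O2 as a rough threshold where effects can begin:

```python
room_m3 = 30.0      # assumed small, unventilated basement
expansion = 700     # approx. liters of gas per liter of liquid N2
o2_normal = 0.209   # normal atmospheric oxygen fraction
o2_danger = 0.18    # rough threshold for the onset of impairment

def o2_after_spill(liters_liquid):
    """Oxygen fraction if boiled-off N2 mixes evenly with room air:
    o2 = o2_normal * room / (room + added gas)."""
    gas_m3 = liters_liquid * expansion / 1000.0
    return o2_normal * room_m3 / (room_m3 + gas_m3)

# Solve o2_normal * room / (room + gas) = o2_danger for the gas volume:
# gas = room * (o2_normal / o2_danger - 1)
gas_needed_m3 = room_m3 * (o2_normal / o2_danger - 1)
liters_needed = gas_needed_m3 * 1000 / expansion
print(f"~{liters_needed:.1f} L of liquid N2 to reach {o2_danger:.0%} O2")
```

Under these assumptions it takes several liters of liquid nitrogen at once, i.e. a large dewar rather than a thermos. The even-mixing assumption is also optimistic: cold nitrogen pools low before mixing, which is exactly why basements and pits are the classic hazard scenario.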