The legendary programmer behind Doom and Quake just placed NVIDIA in a delicate situation. John Carmack publicly revealed that the DGX Spark — the manufacturer’s desktop supercomputer — is not delivering the performance promised at launch.

The equipment, which began to reach the first buyers after months of delay, was expected to revolutionize AI development with its power concentrated in a desktop format. In practice, the story is different.

Numbers that don’t add up

Carmack got straight to the point in his X post. According to his measurements, the DGX Spark consumes just 100W — less than half of the 240W specified by NVIDIA. The practical result? Approximately half of the advertised performance.

“If we consider that 1 petaflop sparse FP4 is equivalent to 125 teraflops dense BF16, we are only seeing about 50% of what was promised,” detailed the programmer. “And even operating at this reduced level, the equipment heats up too much and experiences spontaneous restarts during prolonged use.”

The situation raises an uncomfortable question: did NVIDIA deliberately lower specifications before commercial release?

Other developers confirm the issue

Carmack is not alone. The main developer of Apple’s MLX framework reported similar results: 60 teraflops in BF16 operations, well below expectations for a device in this category.

To date, NVIDIA has not officially commented on the allegations. The silence contrasts with the aggressive marketing campaign that marked the DGX Spark announcement earlier in the year.

The technical (and controversial) explanation

Experts point to a possible root of the problem: the performance metric announced by NVIDIA — up to 1000 TOPS in FP4 calculations — assumes structured sparsity, a technique that skips operations on zero values in neural networks.

In theory, this doubles processing speed. In practice, it requires specific optimizations that are not always applicable. In conventional operations, actual performance drops to approximately half the peak value—exactly what Carmack and other developers are observing.
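Carmack’s back-of-the-envelope conversion can be reproduced in a few lines. Note the scaling factors below are assumptions commonly applied to NVIDIA’s recent architectures (structured sparsity doubles throughput; each halving of operand width doubles throughput, so FP4 to BF16 costs a 4x factor), not figures taken from NVIDIA’s spec sheet:

```python
# Rough conversion from the headline marketing figure to a dense BF16 estimate.
# Assumption: 2x from structured sparsity, 4x from the FP4 (4-bit) vs
# BF16 (16-bit) throughput ratio.
SPARSITY_FACTOR = 2
PRECISION_FACTOR = 4

headline_sparse_fp4_tops = 1000  # the advertised "1 petaflop" figure

dense_bf16_tflops = headline_sparse_fp4_tops / SPARSITY_FACTOR / PRECISION_FACTOR
print(dense_bf16_tflops)  # 125.0 — Carmack's equivalence

measured_bf16_tflops = 60  # figure reported by the MLX developer
print(round(measured_bf16_tflops / dense_bf16_tflops, 2))  # 0.48 — about half
```

Under those assumptions, the ~60 TFLOPS measured in BF16 lands at roughly 48% of the 125 TFLOPS implied by the headline number, which matches the “about 50%” Carmack describes.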

The problem is that this caveat did not appear prominently in the launch material. For developers who invested in the equipment expecting the advertised performance, the frustration is understandable.

What this means for the AI market

The DGX Spark case exposes a growing tension in the industry: the gap between aggressive marketing and actual delivery of AI products. With the hardware arms race accelerating, manufacturers face pressure to announce increasingly impressive numbers — even if they come with asterisks in small print.

For John Carmack, a veteran known for not mincing words when it comes to technical efficiency, the episode represents yet another example of inflated promises that damage the industry’s credibility.

NVIDIA, the absolute leader in GPUs for AI, now needs to explain whether the DGX Spark was deliberately limited or whether the performance and thermal stability issues are flaws that will be fixed via firmware.

Meanwhile, developers who received their units await answers — and will probably think twice before blindly trusting the company’s upcoming technical specifications.


Source: https://www.hardware.com.br/noticias/john-carmack-nvidia-dgx-spark-desempenho-metade-prometido/
