Sort of. A counterpoint I read on this topic suggested the benchmark was a bit rigged: the comparison was to an emulation of D-Wave's own annealing process on a single-CPU machine. That's already an unfair representation of simulated annealing's performance on classical hardware. I'd instead compare the problem's solution against optimized algorithms on NUMA machines, GPU clusters, and FPGA clusters costing whatever a D-Wave costs. That's really all one knows about it past their claims anyway: input X dollars, get Y performance and Z energy use. ;)
Note: One could also compare to the performance of a single machine with CPUs, GPUs, FPGAs, and/or ASICs to be more fair. It just needs to be an optimal, classical implementation on hardware designed for it vs. optimal use of the D-Wave.
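For reference, the classical baseline in question is plain simulated annealing, which is trivial to run on any commodity CPU. Here's a minimal sketch: the objective (a tiny QUBO, the problem class D-Wave's hardware targets), the `Q` matrix, and the cooling schedule are all arbitrary illustrative choices, not anything from D-Wave's benchmarks.

```python
import math
import random

def simulated_anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=20000, seed=0):
    """Classical simulated annealing: accept a worse move with
    probability exp(-delta/T), cooling T geometrically each step."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        delta = cy - c
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy 3-variable QUBO: minimize x^T Q x over binary vectors x.
Q = [[-1, 2, 0],
     [2, -1, 2],
     [0, 2, -1]]

def qubo_cost(x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def flip_one(x, rng):
    # Neighbor move: flip a single randomly chosen bit.
    y = list(x)
    i = rng.randrange(len(y))
    y[i] = 1 - y[i]
    return y

best, best_c = simulated_anneal(qubo_cost, flip_one, [0, 0, 0])
print(best, best_c)  # finds the minimum [1, 0, 1] with cost -2
```

The point stands either way: a fair benchmark would pit a tuned, parallel version of something like this (or a better classical heuristic) on equivalently priced hardware against the D-Wave, not a single-threaded emulation of the D-Wave's own process.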