They are specifically targeting CFD applications, and so the cited FLOP/s count might be misleading (though I don't know enough about optical computing to say for sure). Here is why I think this is so.
If this tool can perform very fast Fourier transforms, then it will benefit many simulation codes that rely on an approximation technique known as spectral approximation. The catch is that the huge FLOP/s count will only be realized at "high orders," i.e. when these spectral methods are pushed to their limits. At that point much existing research breaks down and the methods can simply become unusable, e.g. by yielding a 'theoretically' invertible matrix whose condition number is numerically infinite; no amount of Fourier transforms can dig you out of that hole. This problem is not fully understood in the spectral-method literature, and it rears its head in many different ways depending on the problem context.
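To make the conditioning point concrete, here's a minimal sketch assuming NumPy. The monomial Vandermonde matrix on equispaced nodes is just a toy stand-in for the operators that show up in naive high-order discretizations, not anything specific to this device:

```python
# Toy illustration (assumes NumPy): condition number of a monomial Vandermonde
# matrix on equispaced nodes. It is invertible "on paper" for every n, but the
# condition number blows up so fast that working with it in double precision
# is hopeless at high order.
import numpy as np

for n in (8, 16, 32, 64):
    x = np.linspace(-1.0, 1.0, n)        # n interpolation nodes
    V = np.vander(x, increasing=True)    # columns 1, x, x**2, ..., x**(n-1)
    print(n, np.linalg.cond(V))          # grows roughly exponentially with n
```

By n = 64 the condition number is past what double precision can represent meaningfully, which is the "numerically infinite" situation described above.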
GPUs have a similar problem: their ability to churn out dense linear algebra at scale is so good that the vast majority of algorithms end up with a memory bottleneck, not a FLOP/s bottleneck. As a result, people rarely see the insane promised 100X speedup of their simulation codes, because those codes are rarely one massive dense linear algebra operation such as a matrix-matrix multiply, but rather a combination of small dense linear algebra ops plus memory-bound operations like gathers and scatters. Ironically, spectral methods work very well on the GPU at high orders, precisely because they replace memory-bound operations with dense linear algebra, which the GPU eats for breakfast. But again: high orders are still an unsolved area of simulation mathematics; a lot of things happen that we don't have answers for, and they frequently prevent the method from being applicable. A back-of-the-envelope illustration of the memory-bound issue follows below.
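One rough way to see it is arithmetic intensity, i.e. FLOPs per byte of memory traffic. This is a sketch under crude assumptions (8-byte doubles, no cache reuse), not a model of any particular GPU:

```python
# Back-of-the-envelope sketch: arithmetic intensity (FLOPs per byte moved),
# assuming 8-byte doubles and ignoring cache reuse. High intensity keeps the
# FLOP units busy; low intensity leaves the chip waiting on memory.
def matmul_intensity(n):
    flops = 2 * n ** 3             # ~n^3 multiply-adds for C = A @ B
    bytes_moved = 3 * n ** 2 * 8   # read A and B, write C
    return flops / bytes_moved

def gather_axpy_intensity(n):
    flops = 2 * n                  # one multiply-add per element: y += a * x[idx]
    bytes_moved = 3 * n * 8        # gather x[idx], read y, write y (index traffic ignored)
    return flops / bytes_moved

print(matmul_intensity(4096))        # ~341 FLOPs/byte  -> compute bound
print(gather_axpy_intensity(4096))   # ~0.08 FLOPs/byte -> memory bound
```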
As someone who uses spectral methods every day, I would love to see this optical device work, if only because it would push research hard on resolving some of these problems. I suspect its initial use will be limited to specialized problems, however.
Bosons have a wonderful advantage over fermions. Whereas electrons cannot occupy the same space at the same time, photons will happily overlap. So, speaking in broad strokes here, optical computers can certainly have a higher computational/informational density. However, Optalysys's marketing is disingenuous when they use phrases like "computing at the speed of light." Present day electron based CPUs already compute at the speed of light!
In other words, an optical computer cannot move data from one point to another any faster than a traditional computer. The latency is the same no matter what you're computing with.
However, the ability of optical computers to achieve a higher computing density (computations per area of space) may negate that issue. Whatever the optical equivalent of RAM would be, it could in theory be smaller, and thus help to relieve some of the latency issue.
Another interesting thing about optical computers is that they can perform convolutions for "free". Present-day computers are abysmal at convolutions by comparison. This would be helpful for image processing, graphics rendering, and neural networks.
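The "free" part comes from the convolution theorem, which is exactly what an optical Fourier stage would exploit. A small sketch with NumPy (my own illustration, nothing to do with Optalysys's hardware):

```python
# Sketch of the convolution theorem (assumes NumPy): an FFT, a pointwise
# multiply, and an inverse FFT reproduce the direct O(n^2) convolution.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)
kernel = rng.standard_normal(256)

direct = np.convolve(signal, kernel)          # full linear convolution, length 511
n = len(direct)
via_fft = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

print(np.allclose(direct, via_fft))           # True, up to floating-point rounding
```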
> Present day electron based CPUs already compute at the speed of light!
No, the signal speed is still below c, which is why traces on high-frequency routes on PCBs must be length-matched, sometimes down to tenths of a millimeter.
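For a rough sense of scale, assuming an FR-4-like board where the effective dielectric constant is around 4 (so signals travel at roughly half the vacuum speed of light):

```python
# Rough, assumed numbers: FR-4-like dielectric, signal at about half of c.
c = 3.0e8                    # m/s in vacuum
v = c / 4 ** 0.5             # ~1.5e8 m/s on the board
skew_per_mm = 1e-3 / v       # seconds of skew per millimetre of length mismatch
print(skew_per_mm * 1e12)    # ~6.7 ps/mm, which matters at multi-GHz signalling
```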
The title of this article is: "By 2020, you could have an exascale speed-of-light optical computer on your desk"
We might be able to rent one and access it remotely. Right now it's still in the theory phase; I'm not sure the author of this article realizes that 2020 is less than 6 years away.