This is awe-inspiring and almost scary; it's pretty much beyond my understanding how much data these systems are meant to process.
What is also beyond me is how someone at Nvidia thinks that the label sequence "1.00E+2; 1.00E+3; 1.00E+4; 1.00E+5; 1.00E+6" for the vertical axis in "Figure 1" is more readable than "100; 1,000; 10,000; 100,000; 1,000,000" would have been. The latter is even five characters shorter in total. Or, if exponential notation is important for the Big Serious Computing People, then perhaps they could have dropped the ".00" part from each value? Or, if I'm allowed to dream, gone with actual exponential notation?
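For what it's worth, all three of those alternatives are easy to produce; here is a minimal matplotlib sketch (my own illustration, not Nvidia's plotting code, with placeholder data) showing a log-scale axis labeled with plain numbers, a bare-exponent form, and true 10^n notation:

```python
# Minimal sketch of the three label styles discussed above on a log-scale
# axis: plain numbers, exponent form without ".00", and 10^n notation.
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter, LogFormatterSciNotation

y = [1e2, 1e3, 1e4, 1e5, 1e6]   # placeholder data spanning the axis range
x = range(len(y))

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax in axes:
    ax.plot(x, y, marker="o")
    ax.set_yscale("log")

# Option 1: plain numbers with thousands separators ("100; 1,000; ...")
axes[0].yaxis.set_major_formatter(FuncFormatter(lambda v, pos: f"{v:,.0f}"))
axes[0].set_title("plain numbers")

# Option 2: exponent form with the redundant ".00" dropped ("1E+03; ...")
axes[1].yaxis.set_major_formatter(FuncFormatter(lambda v, pos: f"{v:.0E}"))
axes[1].set_title("bare exponent")

# Option 3: proper scientific notation rendered as powers of ten
axes[2].yaxis.set_major_formatter(LogFormatterSciNotation())
axes[2].set_title(r"$10^n$ notation")

plt.tight_layout()
plt.show()
```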
It's scientific notation, and the axis is a log scale. That lets you see they gained more than two orders of magnitude in a single generation.
There is a semantic difference: 1,000,000 means exactly 1,000,000, whereas 1.00E+6 stands for anything from 1,000,000 up to (but not including) 1,010,000. The decimal places after the 1 in 1.00E+6 specify the precision of the measurement.
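To make the precision point concrete, here's a small Python check (my own example, not from the article): distinct underlying values collapse onto the same three-significant-figure label, so 1.00E+6 names a range rather than an exact number.

```python
# Distinct values share the same three-significant-figure label,
# while the comma-separated form spells out the exact number.
for v in (1_000_000, 1_003_000, 1_234_567):
    print(f"{v:>9,} -> {v:.2E}")
# 1,000,000 -> 1.00E+06
# 1,003,000 -> 1.00E+06
# 1,234,567 -> 1.23E+06
```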
I don't think they specify any precision. It's just a way to write very large/small numbers approximately. (Though these numbers here aren't really considered large.)