What you are describing is usually called automatic tone mapping. This is basically noise reduction and possibly color normalization applied after brightening a dark image. Showing the black image as the starting point is silly, because JPEG compression will make a mess of the remaining information. What they should show instead is the raw image brightened by a straight multiplier, i.e. the noisy version you would get from trying to increase brightness in a trivial way.
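As a rough illustration of the "straight multiplier" point, here is a minimal sketch using a simulated dark frame (all values and the noise model are made up for the example): multiplying the raw values makes the scene visible, but the noise floor is scaled up by exactly the same factor.

```python
import numpy as np

# Hypothetical sketch: brightening a dark sensor readout with a straight
# multiplier. The signal scales up, but so does the noise.
rng = np.random.default_rng(0)

true_scene = np.full((64, 64), 0.02)            # very dark scene (linear units)
read_noise = rng.normal(0.0, 0.005, (64, 64))   # simulated sensor read noise
raw = true_scene + read_noise                   # simulated raw capture

gain = 30.0                                     # straight multiplier
brightened = np.clip(raw * gain, 0.0, 1.0)      # brighter, but much noisier

# The noise standard deviation is multiplied right along with the signal.
print(round(raw.std(), 4), round(brightened.std(), 4))
```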
The image on GitHub is a JPEG made from a RAW file. Since the RAW file has more dynamic range and contains a lot more information than the JPEG, you can take that photo into an editor and crank up the brightness. You will get a noisy image, but it will be a lot brighter and will probably resemble the high-ISO image in the middle. Then, in the editor, you can apply a de-noiser to get results similar to the last one.
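The editor workflow described above (brighten, then denoise) can be sketched like this; the 3x3 mean filter below is just a crude stand-in for a real editor's de-noise tool, and the image data is simulated:

```python
import numpy as np

# Sketch of "crank up brightness, then denoise" on a simulated dark frame.
rng = np.random.default_rng(1)
scene = np.full((32, 32), 0.03)                  # dark scene (linear units)
raw = scene + rng.normal(0.0, 0.01, scene.shape)

brightened = np.clip(raw * 20.0, 0.0, 1.0)       # visible but noisy

def mean_filter(img: np.ndarray) -> np.ndarray:
    """3x3 box blur with edge padding - a stand-in denoiser."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

denoised = mean_filter(brightened)
# The denoised frame is smoother (lower variance) at similar brightness.
print(round(brightened.std(), 4), round(denoised.std(), 4))
```

A real editor would use something far better than a box blur (e.g. edge-aware or wavelet denoising), which is exactly where a learned model has room to outperform.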
So presumably this neural net more or less does it for you.
The PNG is there just to show the results produced by the CNN; if you watch the linked video, they do exactly what you are suggesting and then compare both results.