Regenerating the data is, in a sense, cheating: it is as if you were given twice the amount of data.
In a normal situation, you obtain a fixed list of input/output pairs (say, images as input and a digit as output, for learning handwritten digits). You split it into training data (which actually improves the net) and testing data (to detect overfitting), and you don't get more data than that.
Here, you can generate more data for free, because we have the function we want to approximate. Having more data will often lead to better accuracy and faster convergence.
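A minimal sketch of the difference, assuming the target is a known function such as `np.sin` (a hypothetical stand-in for whatever function is being approximated): instead of a fixed dataset, a fresh batch can be sampled on every epoch.

```python
import numpy as np

# Hypothetical target function we want the network to approximate.
def target(x):
    return np.sin(x)

rng = np.random.default_rng(0)

# Usual supervised setting: one fixed dataset, split once and never extended.
fixed_x = rng.uniform(-np.pi, np.pi, size=1000)
fixed_y = target(fixed_x)
train_x, test_x = fixed_x[:800], fixed_x[800:]
train_y, test_y = fixed_y[:800], fixed_y[800:]

# When the target function itself is available, a fresh batch of labeled
# examples can be drawn at any time -- effectively unlimited training data.
def fresh_batch(n=1000):
    x = rng.uniform(-np.pi, np.pi, size=n)
    return x, target(x)

x, y = fresh_batch()
print(x.shape, y.shape)  # (1000,) (1000,)
```

Each call to `fresh_batch` plays the role of "regenerating the data": no sample is ever reused, so the usual train/test split loses much of its purpose.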