In general, you are right. But in this particular case, char-CNN is the standard used by many people, Stanford included. It is not sophisticated at all.
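To back up the "not sophisticated" claim: the core of a char-CNN is just character embeddings, a 1-D convolution, and a max-pool, expressible in a few lines. This is a toy numpy sketch with a made-up SMILES alphabet and random weights, not the MoleculeNet or Stanford setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy character vocabulary for SMILES strings (illustrative only)
vocab = {c: i for i, c in enumerate("CNO()=#123")}
emb_dim, n_filters, width = 8, 4, 3

emb = rng.standard_normal((len(vocab), emb_dim))          # char embeddings
filters = rng.standard_normal((n_filters, width, emb_dim))  # conv kernels

def char_cnn_features(smiles):
    """Embed characters, slide each 1-D filter over the sequence,
    then max-pool over positions to get a fixed-size feature vector."""
    x = emb[[vocab[c] for c in smiles]]        # (seq_len, emb_dim)
    seq_len = x.shape[0]
    conv = np.array([
        [np.sum(x[t:t + width] * f) for t in range(seq_len - width + 1)]
        for f in filters
    ])                                         # (n_filters, n_positions)
    return conv.max(axis=1)                    # max-pool over time

feats = char_cnn_features("CC(=O)N")           # acetamide-like toy input
print(feats.shape)                             # one value per filter
```

A real char-CNN adds a nonlinearity, several filter widths, and a classifier head on top, but nothing architecturally exotic.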
To me, this omission is negligence. It is selective laziness.
The author (me) does not have 20 PhD students, postdocs, and startup founders at his disposal to do the job for Stanford. With my limited resources, it is more cost-effective to do my job against Stanford ;)
This professor made an argument in his paper; I gave a refutation. If you agree with him that char-CNN was a sophisticated model in early 2017, then you are not well informed about the state of deep learning at the time.
The MoleculeNet co-author made another argument in the comments. My refutation is that you cannot claim a lack of time after 8×20 man-months have passed.
Look at the long list of exotic models they did try: http://moleculenet.ai/models