Peer review doesn't tell you whether the data is valid. They published their methodology, and anyone is free to repeat the study.
Peer review just checks for obvious errors in study design, asks for more info if needed, and decides whether the paper is a good fit for the journal.
Watson and Crick's paper describing the structure of DNA wasn't peer reviewed. If you think they're wrong, try it for yourself and publish the results.
When a few separate groups all get the same result, you can be confident about the claims made. Until then, it's just kind of interesting to think about, which is fine.
> A.J.K.P. and S.W.C. are co-founders and co-directors of Circadian Health Innovations PTY LTD
I do agree that this paper alone should not be used to help sell a product. But it looks like this paper just confirms previous findings using more rigorous methodology (see background):
"Light at night causes circadian disruption, (21–23) and is therefore a potential determinant of cardiovascular disease risk. Higher risks for coronary artery disease (24) and stroke (25) have been observed in people living in urban environments with brighter outdoor night light, as measured by satellite. Brighter night light has been cross-sectionally related to atherosclerosis, (26,27) obesity, hypertension, and diabetes (28) in small but well-characterized cohorts, using bedroom (26,27) and wrist-worn (28) light sensors. Moreover, experimental exposure to night light elevates heart rate and alters sympathovagal balance. (29) However, current evidence linking night light with cardiovascular risk is mostly within small cohorts, or relies on geospatial-level measurements of outdoor lighting, rather than measures of personal light exposure. (30,31)"
> Peer review doesn't tell you if the data is valid or not.
Sure, but nobody claimed that.
> Watson and Crick's paper describing the structure of DNA wasn't peer reviewed.
I'd point out that outliers exist, but that was before peer review became so popular.
Right now there's a good correlation between competency and peer review.
> if you think they're wrong, try it for yourself and publish the results.
Watson and Crick or the article?
For a balanced discussion of the article, it's reasonable to point out a lack of peer review to give context to what stage this is at. If "try it yourself" is the bar then I guess nobody comments? That doesn't seem like a good way to learn anything.
>> Watson and Crick's paper describing the structure of DNA wasn't peer reviewed.
> I'd point out that outliers exist but that was before peer review became so popular
What outlier? I just picked a famous example; there are almost infinitely many examples to choose from...
>> if you think they're wrong, try it for yourself and publish the results.
> Watson and Crick or the article?
yes
> For a balanced discussion of the article, it's reasonable to point out a lack of peer review to give context to what stage this is at.
The first thing the pre-print says, in bold at the top of the page, is that this is a non-peer-reviewed article and shouldn't be used for clinical practice. So commenting "it's not peer reviewed" doesn't add anything.
> If "try it yourself" is the bar then I guess nobody comments? That doesn't seem like a good way to learn anything.
"Try it yourself" is the bar for determining the validity of the results. A comment section is not going to be able to determine the validity. My whole point is that it's worth discussing the article without waiting for a final peer-reviewed version of it. If you disagree with the results, you can point out a perceived flaw in the study, or find papers that contradict the results, so we can discuss something concrete.
> what outlier? I just picked a famous example, there are almost infinitely many examples to choose from...
And there are even more examples showing that peer review is a strong signal.
> yes
If you were including the former, you were making a very rude argument by implying that the position of anyone who values peer review is invalidated by that one example.
> My whole point is that it's worth discussing the article without waiting for a final peer-reviewed version of it.
Telling people to shut up about peer review is bad for discussion.
I replied to the comment "Preprint not peer reviewed.", which added nothing and arguably shut down discussion.
My whole point is that it's ok to find research interesting and discuss it even though it's not peer reviewed yet.
> If you were including the former, you were making a very rude argument by implying that anyone that values peer review is rendered invalid by that example
No, I'm pointing out that not being peer reviewed is not automatically disqualifying, and that the real way to prove or disprove science is through replication attempts, not peer review.
> And there's even more almost infinitely many examples that say peer review is a strong signal.
So you say, but consider that every paper in the ongoing replication crisis was peer reviewed. I know several peer-reviewed papers with inaccurate results, and having been on both sides of the peer review process, I can tell you it's pretty flawed: very few scientists are willing to invest a lot of their own time doing meticulous, unpaid review of other people's work. Meanwhile, science progressed fine before peer review became standard in the 1970s.
> Telling people to shut up about peer review is bad for discussion.
I'll keep that in mind for the future, but it doesn't apply to anything I said. Maybe you should take a few minutes to read what I actually wrote instead of reacting emotionally.
This creates a perfect setup for manipulation: the high barriers to entry for the proper equipment, organization, and funding needed to produce quality replications mean that if someone posts fake content that mimics scientific paper formatting and includes all the right academic signals, most people will accept it as legitimate without question.
It's the opposite. No doctor or insurance company is going to change their policy based on a single result; you wait until a few separate groups have replicated the results. In the scientific community there is a reward (being cited) for pointing out that someone else's finding was wrong. The more important the initial finding, the more citations (and attention) you will get if you're the first one to correct the record. There's also a reward for building on an initial finding and going further, but you can only do that if the original finding stands. So one way or another, the truth will come out.
Publishing a pre-print is only valuable if your results hold up long term. It just establishes that you did it first.