Hacker News
Progress in Biology Is Slow – Here's How We Can Speed It Up (adamashwal.com)
40 points by ashwal on Sept 30, 2020 | 22 comments


> If we care about blood pressure, for example, why have we not given every drug, at every dosage, every regimen, and in every combination to a mouse and actually seen what happens?

One of the answers is hidden behind this question: Ethics.

We could make a ton of progress if we started growing humans for lab testing. Then we could run massively parallel tests and get data much quicker. I mean, think about how much we could learn from the brain if we had a large lab full of humans whose brains we could arbitrarily poke and prod at. Direct access would make us so much faster. But... is this something we will ever want to do?

Ethics force us to attack problems of (human) biology indirectly.

If we're talking about biology in general, though, I think we have made enormous progress in the last 10-20 years. In fact, I don't think we are in any way prepared for how accessible DIY biology is becoming. You can engineer viruses in your basement now. Once gene synthesis can be done in a garage, anyone could engineer anything they wanted (anthrax, ebola, whatever). Progress in biology is accelerating.

My guess is that in the future "learning how to live forever" will be seen the way we now see alchemy's goal of turning lead into gold. In theory, you could slam a bunch of subatomic particles together to do it, but that's not really worth doing. We've found other ways to get what we want from nature (shininess, gold color, conductivity, etc.) without using gold. The "living forever" argument also runs very quickly into philosophy (ship of Theseus) and away from biology.

I think it's much more interesting to consider the biological factories that we're building. Directed evolution, CRISPR, Bill Gates, gene drives, etc. We're making real headway into playing god.


> If we're talking about biology in general, though, I think we have made enormous progress in the last 10-20 years.

And, funnily enough, this corresponds with PCR becoming commonplace.

I would argue that the lack of progress in biology was almost solely due to the fact that before PCR biology was effectively "alchemy" and that after PCR biology became "science". PCR and sequencing blew away entire subfields of biology as being testably untrue.

I still remember high school biology and feeling that whole tranches of it were complete bullshit. It wasn't until I had a molecular biology course (fairly new in 1986!) that I went "Oh, okay, biology can have a solid scientific basis and actually make sense."


> PCR and sequencing blew away entire subfields of biology as being testably untrue.

As a non-biologist I would love to hear of some examples of this


As a PhD biologist I'd love to hear some examples of this.

PCR is a great technique, and it's absolutely been transformative for research. But it's just one tool out of a catalogue of many others. It can't answer all questions. Genes and gene expression are just one part of a much bigger picture, and there are many other techniques which are just as relevant for exploring biological systems.

You rarely see PCR used in a research project in isolation. It's just one tool amongst many others, and for the most part you can't just look at gene expression and assume it has any functional effect. You've also got to demonstrate the downstream effect through other techniques, such as Western blotting, ELISA, LCMS and others, and even then you have to show that the expression products themselves also have an observable functional effect.


I'm not sure what you're talking about. PCR is a great technique, and sequencing has revolutionized biology, but it didn't go and blow away subfields of biology as being testably untrue.


Cellular immunology--a big debate in the 1970s was whether cells primarily recognize self or non-self. Sequencing completely smashed the non-self people.

The Tree of Life--entire species had to get transferred around due to sequencing.

There are other areas of biology as well that simply got nuked once you could sequence things accurately.

We forget about them because it's been about 30 years (one tenure length) since PCR and sequencing became commonplace.


While the power of PCR isn't in question, I think you're overstating the case a bit. It didn't "nuke" whole areas of biology, but it did add clarity and it did answer questions which previously were hard or impossible to answer using other techniques.

I think you're being somewhat unfair about taxonomy. Before whole-organism sequencing, classifying based upon detailed physical characteristics was the only option available, and for the most part that classification did match the genetic data. Sequencing did correct some mistakes, of course, but I think it does a disservice to all the people who did highly-detailed and rigorous work in that field to claim it was "nuked" when that simply is not the case. PCR-based sequencing did not invalidate most of our existing knowledge, but it did make it a quantitative science rather than qualitative.

Consider also that PCR brought with it a number of problems of its own. Classification based upon sequencing of individual genes also led to mistakes because it didn't account for xenologues or independent evolution of the same mutation. These were later corrected with the advent of whole-genome sequencing and comparison of multiple genes. It also led to a lot of published research demonstrating that the presence of certain genes or expression products correlated with certain outcomes. And a huge amount of that was completely false, because they failed to investigate the downstream effect of these differences. In too many cases, there wasn't any functional effect, or the wrong conclusion was drawn because of experimental error and bad statistics. The bar for publishing in a good journal is now much higher, but it's still all too easy to misinterpret genetic data.


You're definitely overstating the case. I don't forget about this stuff, since I did my undergrad, grad, PhD and postdoctoral work in this field.


There's an interesting article on the topic published in 2012 [0, 1, 2]. One aspect the paper discusses is the different causes of the current situation, one of them being the ‘basic research–brute force’ bias.

> The ‘basic research–brute force’ bias is the tendency to overestimate the ability of advances in basic research (particularly in molecular biology) and brute force screening methods (embodied in the first few steps of the standard discovery and preclinical research process) to increase the probability that a molecule will be safe and effective in clinical trials

[0] https://www.nature.com/articles/nrd3681

[1] https://blogs.sciencemag.org/pipeline/archives/2012/03/08/er...

[2] https://en.wikipedia.org/wiki/Eroom%27s_law
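
For context, the Eroom's law observation in [0][2] is that the number of new drugs approved per billion dollars of inflation-adjusted R&D spending has halved roughly every nine years since 1950. A quick sketch of what that decline looks like; the 1950 starting value is a made-up round number, not data from the paper:

    # Rough illustration of Eroom's law: drug approvals per billion (inflation-
    # adjusted) R&D dollars halving roughly every 9 years since 1950.
    # The 1950 starting value is hypothetical, chosen only for illustration.
    HALVING_TIME_YEARS = 9
    drugs_per_billion_1950 = 50.0  # hypothetical starting point

    for year in range(1950, 2021, 10):
        halvings = (year - 1950) / HALVING_TIME_YEARS
        value = drugs_per_billion_1950 * 0.5 ** halvings
        print(f"{year}: ~{value:.2f} drugs per $1B R&D (illustrative)")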


There are 3 major flaws in this thesis:

1. The number of synthesizable, drug-like molecules is enormous

2. To have a sufficiently powered experiment, you need to run it multiple times

3. Drugs don't work in a vacuum; they also interact with the specific genetic/epigenetic/proteomic/microbiomic makeup of the organism. You can't really control for all of that.
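
To put rough numbers on points 1 and 2, here's a back-of-the-envelope sketch; every input number is hypothetical, and it ignores multiple-comparison corrections, which only make things worse:

    from math import comb, ceil
    from scipy.stats import norm

    # Point 1: combinatorial explosion (all numbers hypothetical).
    n_drugs, n_doses = 2_000, 5             # a small drug library, a few dose levels
    arms = comb(n_drugs, 2) * n_doses**2    # every 2-drug combination at every dose pair
    print(f"2-drug/dose combinations: {arms:,}")   # ~50 million arms

    # Point 2: sample size per arm, standard normal-approximation formula
    # for a two-sided, two-sample comparison at a standardized effect size.
    alpha, power, effect_size = 0.05, 0.8, 0.5     # "medium" effect, conventional thresholds
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_per_group = ceil(2 * ((z_a + z_b) / effect_size) ** 2)
    print(f"mice per group: {n_per_group}")        # ~63 per arm, plus controls

    print(f"total mice: {arms * 2 * n_per_group:,}")  # billions, before any replication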


There's so much more missing that makes me think this is an engineer writing about how biological research should be in theory, without the understanding and experience of the complicated nature of biological research. Suppose we want to give "every drug, at every dosage, every regimen, and in every combination to a mouse". That's a combinatorially large search space, and one reason drug companies focus on high-throughput screening of drugs before applying them to mice. Then even for the promising ones you have other effects that will skew your results, like handler effects (e.g. https://www.nature.com/news/male-researchers-stress-out-rode...). If we want to apply the drugs to that many mice, are we only gonna use female mice because males tend to fight and kill each other when placed in the same cage? Then what do we do about the sex skew? Then there's the fact that 96 million years separate humans from mice, which is relatively short from a genetic perspective but not so short from an epigenetic perspective. How well will results translate?

This is just off the top of my head, and I'm not a biologist. I'm sure actual experienced biological researchers can come up with a whole bunch of other issues. Because more often than not biology is a whole lot messier than we can imagine.


Agreed, I don't know much about biology but I can see many limitations with this brute-force approach.

We live in a finite world, the infinite monkeys metaphor has no value.


The possible chemical space is incredibly vast. Even a large collection of potentially interesting molecules, at hundreds of thousands or millions of compounds, isn't even scratching the surface.

Going directly to mice is certainly problematic in ethical terms: you're going to kill many millions of mice for something that has very low odds of succeeding.

Biological systems are noisy; the chance that one of your hits is just doing something funny, but not useful, is very high. Going directly to mice only makes this worse, as a mouse is a much more complex system than an assay that simply measures binding to a protein.
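
Just to put the "not even scratching the surface" point in numbers (the screening rate is a deliberately generous made-up figure):

    # How little of the estimated ~10^60 druglike chemical space even a heroic
    # screening effort covers. The screening rate below is hypothetical.
    CHEMICAL_SPACE = 1e60          # commonly cited upper estimate of druglike molecules
    screened_per_year = 1e9        # hypothetical: a billion compounds assayed per year

    years_to_enumerate = CHEMICAL_SPACE / screened_per_year
    print(f"{years_to_enumerate:.0e} years to enumerate the space")            # ~1e51 years
    print(f"fraction covered in a century: {screened_per_year * 100 / CHEMICAL_SPACE:.0e}")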


It's kind of impressive how, even though chemical space is vast, humans discovered that salicylic acid from willow bark is a successful painkiller without ever running clinical trials or searching 10^60 space.

Or how many of the chemicals we have were originally derived from plants. I wonder how they're finding these molecules without searching 10^60 space?


I'm a layperson, but there seem to be a few problems with this:

1. Mouse models are not perfect simulations of human biology

2. Cost as has been mentioned multiple times

3. Patient response to treatments depend not just on drug formulation, but also disease state and progression

4. Even mice likely have population dependent responses to drugs

Given the above, a "combinatorial explosion" of drug cocktails tested on mice would likely only tell you what's safe for a given strain of mice, not what's safe for humans, much less what's effective. Factor in disease state, dosage, mouse model impedance, and the numerous other little things that go into using drugs to treat illnesses, and the "grad student brute force" approach begins to seem a lot more intelligent in comparison. Especially since some of those grad students are already using AI to reduce the search space of interesting drugs.

EDIT - my wife (PharmD) adds that if you're targeting diseases with this approach and not just general safety, then a lot of diseases have no known cause, but they do have treatments. That means there's no good way to simulate this in mice because the cause is unknown. Diseases are being discovered every day for which there are no known causes. Furthermore mouse models require manual labor. No way to scale that up.


Not to mention that one of the biggest bottlenecks in drug development is the preclinical (e.g. mice) to clinical (humans) transition.


A few comments:

* We already have good PK/PD models, PBTK, human physiology models based on differential equations, and toxicity prediction servers. If they were integrated into a smart and easy-to-use software suite, they would do a lot to reduce the cost of clinical trials. We could even incrementally improve this with toxicity prediction servers that would be aware of physiology [0]. (A minimal sketch of the PK part is at the end of this comment.)

* You can certainly try to make a good model of mouse physiology, as Dr. Guyton did for humans 50 years ago. But how can you answer a medical question with this model? Let's say you want to create a drug for type II diabetes. How can any kind of biology model help you in this task? You would need to run some kind of genetic algorithm on this model, so it would be extremely slow and costly.

* Instead of scaling with mice, you can use organoids; they will be much more realistic, and you will not have the problem of drugs working in mouse models but not in humans. Which raises this question: to design organoids you need a very good understanding of their biology, but that is exactly what you want to discover!

[0] https://pubmed.ncbi.nlm.nih.gov/28522333/
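
To give a concrete flavor of the PK side mentioned in the first bullet, here's a minimal one-compartment pharmacokinetic model with first-order absorption and elimination, integrated with SciPy. All rate constants and the dose are hypothetical; real PBPK models chain many such compartments with physiological parameters.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal one-compartment PK model with first-order absorption and elimination.
    # All parameter values below are hypothetical and purely illustrative.
    ka, ke, Vd = 1.0, 0.2, 40.0    # absorption rate (1/h), elimination rate (1/h), volume (L)
    dose = 500.0                   # oral dose (mg)

    def pk(t, y):
        gut, central = y
        return [-ka * gut,                      # drug leaving the gut
                ka * gut - ke * central]        # entering and leaving the central compartment

    sol = solve_ivp(pk, (0, 24), [dose, 0.0], dense_output=True)
    t = np.linspace(0, 24, 7)
    conc = sol.sol(t)[1] / Vd                   # plasma concentration (mg/L)
    for ti, ci in zip(t, conc):
        print(f"t={ti:4.1f} h  C={ci:5.2f} mg/L")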


Well-written post, but the suggested solution of high-throughput screening (HTS) on mice is nonsensical. First of all, we’ve cured cancer in mice many times but it’s obviously still unsolved in humans. Secondly, you’re constraining yourself to synthesized chemical space, which as others have mentioned is a grain of sand (charitably 10^10 in human history) compared to a planet’s worth (10^60 by some estimates) of possible druglike molecules.

Target ID is hard, but we also have known biologically valid targets that are simply undruggable so far (KRAS is the most obvious example). Solving computational chemistry problems is a much more tractable, high-leverage endeavor. And we can in fact use that progress to accelerate biological research itself — for example, quickly developing bioavailable tool compounds via virtual screen to test biological hypotheses in mice, a rational “pharmacological knockout” approach.

We can reduce the biological space in a smart way; we don’t have to brute-force this.
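
One crude, well-known example of pruning chemical space before any expensive work is a Lipinski rule-of-five prefilter. That's not what the parent is proposing (proper virtual screening is far more sophisticated), but it shows the flavor of cheap filters. A sketch assuming RDKit is installed; the SMILES strings are just familiar molecules used as placeholders:

    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    # Crude Lipinski rule-of-five prefilter: a cheap way to discard molecules
    # unlikely to be orally bioavailable before spending compute or wet-lab time.
    def passes_rule_of_five(smiles: str) -> bool:
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return False
        return (Descriptors.MolWt(mol) <= 500
                and Descriptors.MolLogP(mol) <= 5
                and Lipinski.NumHDonors(mol) <= 5
                and Lipinski.NumHAcceptors(mol) <= 10)

    candidates = {
        "aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O",
        "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
    }
    for name, smi in candidates.items():
        print(name, passes_rule_of_five(smi))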


The notion that computational irreducibility is the key principle underpinning difficulty in life (and probably social) sciences is extremely insightful, and had me optimistic for the second half.

But the author's suggestion to address this problem is just... to chase the combinatorial explosion harder? That's a pretty underwhelming solution. And impossible cost scaling aside, another key part of the problem is that we don't even know how to enumerate what the relevant parameters are. What about the effects of a mouse's environment on natural immune response and drug efficacy, for example? Such a highly roboticized environment would presumably be quite unpleasant for a mouse, and adverse effects seem well within the realm of possibility.


I would think that we have to consider the entire parameter space of a reductionist view of biology as a space of models, then search for the smaller subspace of models with fewer, stiffer parameters. http://www.lassp.cornell.edu/sethna/Sloppy/WhatAreSloppyMode...
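
To make the "sloppy model" idea concrete: if you compute the Fisher information (sensitivity) matrix of a typical multi-parameter fit, its eigenvalues tend to span many orders of magnitude, so only a few stiff parameter combinations are actually constrained by data. A toy sketch with a two-exponential model; the parameter values are arbitrary:

    import numpy as np

    # Toy "sloppy" model: y(t) = A1*exp(-k1*t) + A2*exp(-k2*t)
    def model(theta, t):
        A1, k1, A2, k2 = theta
        return A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t)

    theta0 = np.array([1.0, 1.0, 1.0, 2.0])   # hypothetical parameter values
    t = np.linspace(0, 5, 50)

    # Numerical Jacobian of model outputs with respect to parameters
    eps = 1e-6
    J = np.zeros((len(t), len(theta0)))
    for i in range(len(theta0)):
        d = np.zeros_like(theta0)
        d[i] = eps
        J[:, i] = (model(theta0 + d, t) - model(theta0 - d, t)) / (2 * eps)

    # Eigenvalues of the (Gauss-Newton) Fisher information matrix J^T J
    eigvals = np.linalg.eigvalsh(J.T @ J)
    print(eigvals)   # typically spread over several orders of magnitude:
                     # large eigenvalues = stiff directions the data constrain well,
                     # tiny eigenvalues = sloppy directions that barely affect the fit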


For anyone interested in biology's slow advance, I'd highly recommend checking out the 2003 study that coined the term "Synthetic Biology"[0].

TL;DR is that we've known for many years now that dropping costs and scaling collaboration is how to make biotech better. This time span is also a testament to how hard it actually is.

As someone who works in synthetic biology on tools, I think a major problem is that the incentives are wrong for making the field better (they're stuck in a local maximum). The incentives of pharma + selling to academics don't really select for lower prices or increased collaboration.

[0] https://dspace.mit.edu/handle/1721.1/38455


No, no, no.

This is just an argument for putting more effort into a failed strategy. The reason we haven't made much progress towards extending healthy human life isn't that the disease state is complex; it's that the primary strategy adopted by the research community is to reverse engineer the disease state, and then work backwards towards its cause.

Typical project: pick away at a small chunk of the altered metabolism of [age-related disease of choice]. Find a proximate cause of pathology that has some small contribution to the whole - an altered gene expression level, say, something really, really far removed from root causes. Find a small molecule that adjusts expression. Publish. Patent. Tech transfer finds someone willing to tinker with that family of small molecules to have a shot at achieving a small alteration in the disease state. Goes into trials, fails at phase II or phase III.

This happens constantly. It is the bulk of all medical research for age-related disease. It is pointless. May as well not happen. Applying computational prowess to this process won't make it any better. You'll just have a lot more low-yield approaches that still do nothing more than tinker with proximate causes in late-stage disease, and will do next to nothing for patients. (With the occasional success like statins, which produce the amazing-for-this-strategy result of a 22% reduction in mortality. You still die, just slightly less often).
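
To unpack what a 22% relative reduction means in absolute terms (the baseline risk here is a hypothetical round number, not trial data):

    # Convert a relative risk reduction into absolute terms.
    # The baseline mortality below is a hypothetical round number.
    baseline_risk = 0.10            # hypothetical: 10% chance of dying over the period
    relative_risk_reduction = 0.22

    treated_risk = baseline_risk * (1 - relative_risk_reduction)    # 7.8%
    absolute_reduction = baseline_risk - treated_risk               # 2.2 percentage points
    nnt = 1 / absolute_reduction                                    # ~45 treated per death averted
    print(f"absolute risk reduction: {absolute_reduction:.1%}, NNT ~ {nnt:.0f}")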

The only practical way forward for age-related disease is to entirely reject this approach to medicine in favor of a much, much better one.

1) Infer the root causes of aging and best points of intervention (already done, several times over).

2) Fix one of those causes, in isolation.

3) Observe the results.

Steps 1-3 have been achieved for removal of senescent cells. The results in animal studies are absolutely amazing, robust, night and day better than anything else anyone has done for the treatment of aging and age-related disease. Reversal of scores of diseases and measures of aging, every lab can do it, replicated many, many times via numerous different approaches.

Everyone is now backfilling their models of age-related disease, their understanding of disease etiology, to add senescent cells. Because they are clearly an important cause.

Once Unity Biotechnology has stopped being silly about their subpar approach to senescent cell clearance, and the rest of the dozen or so companies have started their trials, we should expect those human trials to follow the same sort of pattern.

This is the way to make progress. Infer root causes, target root causes, figure out which work by trying them. Backfill your understanding of age-related disease based on new data.



