The Deployment Fidelity Problem (tylercipriani.com)
7 points by thcipriani on Aug 31, 2021 | 5 comments


I disagree with

> In the book, however, the authors advocate for tracking both change lead time and deployment frequency. But I don’t get the point of measuring both: they’re two ways of looking at the same thing. If you have a shorter lead time, your deployment frequency is higher. If your deployment frequency is higher, you have a shorter change lead time—potato/potato.

They are usually correlated, but this isn't a requirement. The assumption is that you have essentially one release in flight at a time (or one per environment in a series of environments). That is, a particular release goes from A -> B -> C -> D [0]. If you have only a single pipe, you can only increase throughput (frequency) by decreasing latency (lead time). But this isn't the only model. In situations where you want high throughput but have unavoidable validation requirements, something like A -> B -> (C1, C2, ... Cn) -> D may be preferable: a round-robin of validation environments before pushing globally, so that you can validate multiple versions in parallel. This allows you to release daily while still validating every build for 3 days before pushing to production, for example, which the naive approach suggests shouldn't be feasible.

[0]: Pipelining in general is also a way to increase throughput without decreasing latency.
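A minimal sketch of the round-robin model (in Python; the three environments and 3-day soak mirror the example above, while the variable names and week-long window are just illustrative):

    # One build cut per day; each soaks 3 days in a round-robin
    # validation environment (C1..C3) before going to prod.
    VALIDATION_DAYS = 3      # required soak per build (the lead-time floor)
    VALIDATION_ENVS = 3      # parallel C environments

    for day in range(7):                   # a week of daily builds
        env = day % VALIDATION_ENVS        # round-robin assignment
        ship_day = day + VALIDATION_DAYS   # prod deploy once the soak ends
        print(f"build {day}: validates in C{env + 1}, ships on day {ship_day}")

From day 3 onward prod receives one deploy per day (daily frequency) while every build still spends 3 days in validation (3-day lead time), so the two metrics move independently.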


In your example, is A->B->C->D a series of staged release environments in the pipeline? Something like a graduated rollout? If that's the case, then I'd say that frequency depends on how you define deployment.

If I've got your model wrong, I'd like to hear more about it.


> In your example, is A->B->C->D a series of staged release environments in the pipeline?

Yes.

> Something like a graduated rollout? If that's the case, then I'd say that frequency depends on how you define deployment.

I don't see why. If you imagine that you have only two stages, "staging" and "prod", and a requirement that things spend 24 hours pre-prod before being deployed, you can deploy each day to staging, and the next day to prod.

If you then add additional stages, each build can spend 8 hours on each stage, with a deploy to prod every 8 hours: a higher frequency. But any given build is still pre-prod for 24 hours, so frequency has tripled without a change in latency.
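To make the arithmetic concrete, here's a quick Python sketch using only the numbers above:

    # Total pre-prod soak is fixed at 24h; splitting it across more
    # stages raises deploy frequency without touching latency.
    for stages, hours_per_stage in [(1, 24), (3, 8)]:
        soak = stages * hours_per_stage          # hours any build spends pre-prod
        deploys_per_day = 24 / hours_per_stage   # one build exits per stage slot
        print(f"{stages} stage(s) x {hours_per_stage}h: "
              f"soak = {soak}h, prod deploys/day = {deploys_per_day:g}")

Both configurations hold latency at 24 hours pre-prod, but the three-stage pipeline deploys to prod three times as often.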


> If you then add additional stages, each build can spend 8 hours on each stage, with a deploy to prod every 8 hours: a higher frequency. But any given build is still pre-prod for 24 hours, so frequency has tripled without a change in latency.

Got it, thanks for the clarification. I'd interpreted your argument as: each stage is a deployment, therefore deployment frequency goes up (which also, of course, would mean lead time goes down -- so I was confused).

I like that this scheme could maintain continuous integration by having the "B" staging include the patch under test in "C" -- clever solution!


There are a lot of metrics you could collect about deployment. Most reduce to deployment fidelity.



