Yes. How do you deal with hill climbing in product management? How do you know that a 10% improvement in outcomes is only a local peak, and that a different approach could have given you 50% or 100%? More experimentation, more 'how might we'? Google is known for trying different approaches (many shades of blue for a link), but somehow none of their experiments indicated that fresh content is important? Or is it just a matter of product management 'playing it safe' at Google, where they know a 10% outcome improvement is good enough to keep their jobs?
My guess is limited time frames within some standard A/B testing protocol. Give people very similar content over a 1-2 week period, and they'll watch more of it. Give people very similar content over a 6-month period, and they'll get bored and leave. If your testing protocol doesn't look for long-term effects, you'll never see that longer-term effect in any of your tests.
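A toy Monte Carlo sketch of that argument (every number here is invented for illustration, not measured data): a variant serving very similar content wins a short A/B window on watch time, but loses once boredom-driven churn has time to show up.

```python
# Toy simulation: short test windows can pick the variant that loses long-term.
# All watch-time and churn numbers are made-up assumptions, purely illustrative.
import random

random.seed(0)

def minutes_watched(days, daily_minutes, honeymoon_days, churn_per_day):
    """Simulate one user: full engagement during a 'honeymoon' period, then a
    daily chance of leaving for good once the content starts feeling repetitive."""
    total = 0
    for day in range(days):
        if day >= honeymoon_days and random.random() < churn_per_day:
            break  # bored, gone for the rest of the experiment
        total += daily_minutes
    return total

def average(days, daily_minutes, honeymoon_days, churn_per_day, users=10_000):
    return sum(minutes_watched(days, daily_minutes, honeymoon_days, churn_per_day)
               for _ in range(users)) / users

# Variant A: very similar content -> more minutes/day, but users tire of it quickly.
# Variant B: fresher mix -> slightly fewer minutes/day, but boredom builds far slower.
for label, horizon in [("2-week window", 14), ("6-month window", 180)]:
    a = average(horizon, daily_minutes=60, honeymoon_days=21, churn_per_day=0.03)
    b = average(horizon, daily_minutes=55, honeymoon_days=21, churn_per_day=0.005)
    winner = "A (similar) wins" if a > b else "B (fresh) wins"
    print(f"{label}: A={a:.0f} min/user, B={b:.0f} min/user -> {winner}")
```

In the 2-week window neither variant has churned yet, so the higher daily watch time of the "similar content" variant looks like a clean win; only the 6-month window is long enough for the retention difference to dominate.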