There are technical problems with progressive enhancement. The latency between page load and the enhancement being applied is a fairly obvious one; another is that providing full HTML fallbacks alongside progressive enhancement can be very complicated and time consuming. It's very easy to see cases where time to ship wins out over semantic correctness.
Hopefully HTML5 pushState will alleviate a lot of the complications with providing full fallbacks.
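For what it's worth, the pattern pushState enables can be sketched roughly like this; `history.pushState` is the real History API call, but `navigate`, `supportsPushState`, and the render callback are hypothetical names for illustration:

```javascript
// Feature-detect the History API on a given window-like object.
function supportsPushState(win) {
  return !!(win && win.history && typeof win.history.pushState === "function");
}

// Navigate to a URL that the server can also render as full HTML.
// Capable browsers update the address bar and render client-side;
// everything else falls back to a normal navigation to the same URL.
function navigate(win, url, renderClientSide) {
  if (supportsPushState(win)) {
    win.history.pushState({ url: url }, "", url);
    renderClientSide(url);
    return "pushState";
  }
  win.location.href = url; // full page load; server returns complete HTML
  return "fallback";
}
```

Because both branches end up at the same real URL, the server only needs one set of full-HTML pages to cover the fallback case.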
The elephant in the room that gets missed is that many people rail about technical correctness without taking the target market into account. Personally, I build web applications that replace desktop applications. Progressive enhancement would kill the simplicity of our programming model and literally triple the work effort, because we would be in the hybrid world of half server side and half client side.
As it sits, we use a full client-side UI library and push all UI concerns into the UI; the clients communicate with the server via REST to get data, and the client is responsible for rendering that data. We research our numbers extensively, and the cost analysis of changing our development model to support browsers that cannot run a full JavaScript client supports the conclusion (at least for us) that a progressive enhancement development model is not cost effective.
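To make that model concrete, here's a minimal sketch of what I mean; the endpoint and field names are made up for illustration, and real code would escape the data before inserting it into the page:

```javascript
// Pure client-side rendering: JSON from a REST endpoint in, markup out.
// All UI concerns live in the client; the server only serves data.
function renderUserList(users) {
  var items = users.map(function (u) {
    return "<li>" + u.name + " (" + u.email + ")</li>";
  });
  return "<ul>" + items.join("") + "</ul>";
}

// In the browser this would be wired up roughly like:
//   fetch("/api/users")
//     .then(function (res) { return res.json(); })
//     .then(function (users) {
//       document.getElementById("app").innerHTML = renderUserList(users);
//     });
```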
The money recouped through the shortened development cycle of Ajax clients outweighs the amount we would receive chasing some fraction of < 2% of potential clients. If we get to the point that we need to chase that 2% for growth, we are a) doing great, and b) would be better served segmenting that traffic to a completely independent, server-framework-based UI that is built for those clients.
It is my belief that mixing the two development models gives you the worst of both worlds when it comes to features and maintainability.
The latency between page load and progressive enhancement being applied shows up primarily if you load your JavaScript at the bottom of the page. Now, I know that this is what Yahoo!'s performance rules suggest; I was on the team that made those recommendations. They made sense between 2006 and 2009, but browsers have improved significantly since then and that rule isn't necessarily true any more. You can load scripts in the head of your document without any of the blocking issues we saw in the past. You can even download multiple scripts in parallel to reduce latency. See my <a href="http://calendar.perfplanet.com/2010/thoughts-on-performance/">article on performance</a> on the 2010 performance advent calendar.
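The parallel-download technique boils down to injecting script elements, which download without blocking rendering; `loadScript` is just an illustrative name here, not a specific library's API:

```javascript
// Dynamically created script elements download in parallel and do not
// block rendering; this is the core of most non-blocking script loaders.
function loadScript(doc, src, callback) {
  var script = doc.createElement("script");
  script.src = src;
  script.async = true; // execute as soon as downloaded, in any order
  if (callback) script.onload = callback;
  doc.getElementsByTagName("head")[0].appendChild(script);
  return script;
}

// In a page: loadScript(document, "/js/app.js", function () { /* boot */ });
```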
IE6 is turning into a C-grade browser this quarter (http://yuiblog.com/blog/2010/11/03/gbs-update-2010q4/), so you can probably just not serve JavaScript to it (using server-side detection). IE7 still doesn't do parallel script downloads, so you might need bottom loading for it, but it's best to first find out what your users actually use before optimizing for them. You can get really sophisticated and change your page structure based on user agent and the user's network characteristics, but that requires far more development effort.
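Server-side detection can be as simple as a user-agent check. The grading itself comes from YUI's Graded Browser Support; the regex below is only an illustration of catching IE6 and older, and the handler names are hypothetical:

```javascript
// Rough server-side check: IE6 and older identify as "MSIE 1." through
// "MSIE 6." in the User-Agent header. Real-world UA sniffing needs more care.
function isGradeC(userAgent) {
  return /MSIE [1-6]\./.test(userAgent || "");
}

// e.g. in a Node-style request handler (renderPageWithoutScripts is made up):
//   if (isGradeC(req.headers["user-agent"])) {
//     renderPageWithoutScripts(res); // full HTML, no script tags
//   }
```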