That seems highly inefficient, though. Now the request has to go up to the switching office and back down to another customer's home, instead of just up to the switching office. Why not stick the cache at the switching office itself and eliminate one of those hops, cutting out some latency too? Disk and even memory are cheap... hell, as Patrick's preso pointed out, one of the main reasons for this project is that bandwidth is not improving nearly as fast as CPU, disk, and memory are.
It does cost more time, yes. But the speed-of-light delays back and forth over your local network are still far shorter than the time it takes to reach anywhere interesting - like a data center. So it is still a win for the consumer. (Albeit less of one than having dedicated caching equipment at the switching office.)
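As a back-of-the-envelope sketch of that claim: every distance and the per-trip forwarding overhead below are assumed numbers I picked for scale, not anything from the preso.

    # Rough round-trip-time comparison for the three cache placements.
    # Every number below is an illustrative assumption, not a measurement.
    C_FIBER_KM_PER_MS = 200  # light in fiber covers roughly 200 km per ms

    def rtt_ms(distance_km, overhead_ms=0.5):
        """Round trip: out and back at fiber speed, plus an assumed
        fixed per-trip equipment/forwarding delay."""
        return 2 * distance_km / C_FIBER_KM_PER_MS + overhead_ms

    HOME_TO_OFFICE_KM = 2           # assumed last-mile loop length
    OFFICE_TO_DATACENTER_KM = 1000  # assumed distance to a data center

    # Cache in a neighbor's home: up to the office, down to the neighbor,
    # and the same path back -- two local round trips.
    neighbor_ms = 2 * rtt_ms(HOME_TO_OFFICE_KM)
    # Cache at the switching office: one local round trip.
    office_ms = rtt_ms(HOME_TO_OFFICE_KM)
    # No cache: all the way to the data center and back.
    datacenter_ms = rtt_ms(HOME_TO_OFFICE_KM + OFFICE_TO_DATACENTER_KM)

    print(f"neighbor cache: {neighbor_ms:.2f} ms")    # ~1.04 ms
    print(f"office cache:   {office_ms:.2f} ms")      # ~0.52 ms
    print(f"data center:    {datacenter_ms:.2f} ms")  # ~10.52 ms

Even with made-up numbers the shape holds: the detour through a neighbor's home roughly doubles the office-cache round trip, but both are an order of magnitude faster than going all the way to a data center.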