> HN now gets over 120k unique ips on a weekday, and serves over 1.3 million page views.
Still off just one server? What are its specs?
How up to date is the news.arc file in the Arc distribution?
I'm considering using the simpler, more functional single-server, all-in-memory, filesystem-as-a-database approach in an upcoming project. I'd really love to see more details from PG and others who have had success with that simple, back-to-basics approach.
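For anyone weighing the same approach, here's a minimal sketch of the pattern as I understand it. HN itself is written in Arc and its on-disk layout isn't public, so this is Python and the directory name, field handling, and helpers are my own assumptions, not anything from news.arc:

```python
import json
import os

DATA_DIR = "items"  # hypothetical directory; HN's real on-disk layout isn't public

# The whole working set lives in an ordinary dict; the filesystem is just
# the durable copy that every update is written through to.
items = {}

def load_item(item_id):
    """Read one item from its flat file into the in-memory table."""
    with open(os.path.join(DATA_DIR, str(item_id))) as f:
        item = json.load(f)
    items[item_id] = item
    return item

def save_item(item_id, item):
    """Write-through: update memory, then persist to a flat file on disk."""
    items[item_id] = item
    with open(os.path.join(DATA_DIR, str(item_id)), "w") as f:
        json.dump(item, f)

def get_item(item_id):
    """Serve from memory; fall back to disk for anything not cached."""
    if item_id not in items:
        return load_item(item_id)
    return items[item_id]
```

The appeal is that "queries" are just dict lookups, and durability is one flat file per item with the OS page cache doing most of the work.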
Thanks for the info! I hope you don't mind me asking a few more questions...
> The currently available news.arc is quite out of date.
Any chance of getting an updated version?
> Operating out of memory works very well for an application like this, where most requests are for recent stuff.
Do you ever release cached resources? Or does the process simply die with an out-of-memory error, only to be restarted with empty caches?
If the former, what is the cache invalidation strategy?
If the latter, how often does the process die? And what are the implications for availability and performance, particularly around process initialization and warming the caches?
We randomly throw older items out of memory. If they're needed again they'll get reloaded from disk, but it's unlikely they'll be needed soon because older stuff is mostly visited by crawlers or random Google traffic.
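A rough sketch of that eviction policy, under some stated assumptions (item ids grow over time so the smallest ids are the oldest; the function name and thresholds are made up, and the real code is Arc, not Python):

```python
import random

def evict_old_items(items, keep_newest=50000, evict_fraction=0.1):
    """Randomly drop a fraction of the older cached items.

    Assumes ids are assigned in increasing order, so the smallest ids are
    the oldest items. Evicted entries are simply deleted from the in-memory
    dict; get_item() reloads them from disk if a crawler or a stray Google
    visitor asks for one again.
    """
    older = sorted(items)[:-keep_newest]  # everything except the newest N
    for item_id in random.sample(older, int(len(older) * evict_fraction)):
        del items[item_id]
```

Random eviction among the older items means there's no need to track access times at all, which fits the observation that old pages are hit mostly by crawlers and stray search traffic anyway.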