Totally unrelated to the article, but I've got to put in a good word for Linode. I've never had such good service from a hosting provider, especially for a (currently) tiny account.
I use Linode for my primary site and prgmr as the backup. Both are excellent. Prgmr is not for newbies: you submit a public key with your order and get back a login to a Xen console. My hosts there seem to be fine, no problems yet.
This is not accurate. There are many VPSes that are cheaper, many of which give you much more capacity for the money. Some are fly-by-night scams. Others have several years of stability with good support. Check forums and VPS review sites to find the best match.
That said, I have nothing against prgmr or Linode and would use them if I needed a super-reliable VPS. (My main VPS is with Linode; my secondary is with Kerplunc Hosting.)
That's simply false. RAM is the constraint of interest for plenty of single-server websites that are running MySQL as a backend. You want a bunch of memory for the database, plus enough left over to accommodate your web app. That often means wanting a 2-4 GB machine.
Linode works out to something like $55 per GB of RAM per month, while prgmr.com is about $14.
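(If I remember Linode's pricing right, that figure comes from the ~$20/month 360MB plan: $20 / 0.36 GB ≈ $55 per GB of RAM per month.)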
I remember a post on here saying Hacker News was running on a new 8GB server or something. There's no reason it couldn't easily run on a 200MB server.
(Related: Is HN really sluggish for everyone else as well? Like 3-4 seconds per page load)
If you really think you could run HN on 200MB of RAM, write a blog post about how to do it and submit it here. I'm sure the community would like to know how. I think your ability to write applications using minimal amounts of RAM is out of sync with the median RAM use here.
With a stripped-down/custom distro and FastCGI Perl I could make that happen. You could go crazy and use an epoll Perl httpd too. The backend would be NoSQL with a small in-memory cache. You'd need a decent chunk of bandwidth, since gzip may push the CPU over the edge, and you'd hope for fast disks. How many hits per second does HN do at peak, anyway?
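To make the epoll idea concrete, here's a rough sketch (in Python rather than Perl, and purely illustrative -- this is not HN's actual stack) of a single-process epoll httpd serving tiny pages out of an in-memory cache:

    # Hypothetical sketch: a single-process epoll HTTP server with a small
    # in-memory page cache. Linux-only (select.epoll); not HN's real code.
    import select
    import socket

    CACHE = {"/": b"<html><body>front page, served from RAM</body></html>"}

    def response_for(path):
        body = CACHE.get(path)
        status = b"200 OK" if body is not None else b"404 Not Found"
        if body is None:
            body = b"not found"
        head = (b"HTTP/1.0 %s\r\nContent-Type: text/html\r\n"
                b"Content-Length: %d\r\n\r\n" % (status, len(body)))
        return head + body

    def serve(port=8080):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(128)
        srv.setblocking(False)

        ep = select.epoll()
        ep.register(srv.fileno(), select.EPOLLIN)
        conns = {}  # fd -> accepted socket

        while True:
            for fd, _event in ep.poll(1):
                if fd == srv.fileno():
                    conn, _addr = srv.accept()            # new client
                    conns[conn.fileno()] = conn
                    ep.register(conn.fileno(), select.EPOLLIN)
                else:
                    conn = conns.pop(fd)
                    ep.unregister(fd)
                    req = conn.recv(4096)                 # enough for a tiny GET
                    if req:
                        parts = req.split()
                        path = parts[1].decode("ascii", "replace") if len(parts) > 1 else "/"
                        conn.sendall(response_for(path))
                    conn.close()

    if __name__ == "__main__":
        serve()

A FastCGI Perl setup in front of a real store would obviously look different, but the memory budget is basically one process plus whatever you cache either way.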
Haven't had it for long enough to judge, but the bandwidth seems spotty (sometimes 500k, sometimes 2Mbit, etc.) and SSH latency seems high, maybe due to CPU load on my slice. But other than that, good so far.
I used to run a create-your-own-quiz Django-powered site with 6k uniques/day (over the last few days), with no cache, and it barely touched my Linode 360 VPS. You can see some nice graphs here: http://slig.imgur.com/linode
Looking at those graphs, the I/O rate is pretty low and the CPU is almost idle. Checking top, RAM was sufficient, as there was almost no swap use.
I also use chartbeat.com to see how users are interacting with the site in real time. They also provide two useful metrics: user load time and server load time. Looking at those, I can see that the user experience is pretty good and that the server is up and responding very quickly.
I run a Django-powered site on a 96MB slice on an overloaded Xen host built from ancient P4 Xeon servers (no VT instructions). I use nginx and SQLite to cope with my tiny amount of available memory. That said, the site sees little traffic (<500 hits a day, peaking at ~60 hits in an hour), but I've never run into issues.
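For anyone curious, the SQLite half of a setup like that is only a few lines of Django config (modern settings syntax, paths assumed); the memory win is mostly that there's no separate database daemon to feed:

    # Hypothetical settings.py fragment for a low-memory Django deployment.
    # SQLite runs in-process, so there's no MySQL/PostgreSQL server eating RAM.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": "/srv/mysite/db.sqlite3",  # assumed path
        }
    }

The other half is keeping the number of application worker processes behind nginx small, since each one costs a fixed chunk of RAM.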
You could go crazy and use a uClibc buildroot: build it on your desktop and transfer an image to your slice. I use Slackware, but that's not exactly tiny...
I've been running http://wasitup.com on Linode for the last few months and I've been pleased with their stability, except for a 9-minute network outage earlier today (to their credit, they were able to resolve it quickly while communicating the status to us end users). For my opinion on the performance characteristics you'd have to read my article on the matter: http://journal.uggedal.com/vps-performance-comparison
You can blame the ever-so-unreliable ISP Nac Net; we use them in our office, which cut out as well.
"
Good Afternoon,
At approx 12:10pm, we lost communications with some of our equipment in our Newark, NJ location. Upon investigation, it was discovered that equipment located in the cross connect area had lost power. Customers may have noticed degraded internet service for about 2-5 minutes while routers propagated through alternative providers. As of 12:23pm, power has been restored in the Newark, NJ site. We are still investigating the cause of the problem."
Thanks for the clarification. I've been thinking about running http://wasitup.com from 3 different Linode data centers for a while to remedy problems like this (the service currently runs on 3 different racks in the NJ data center). I would first have to find a way to send Tokyo Tyrant data encrypted/authenticated over the WAN.
An easy (if slow) way to do so is to tunnel it through ssh. If things are too slow you can use the "blowfish" algorithm for encryption through the "-c blowfish" option.
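For example (host names made up, and assuming Tokyo Tyrant is listening on its default port, 1978), a local forward would look something like:

    ssh -c blowfish -N -f -L 1978:localhost:1978 user@other-node.example.com

Then the local Tokyo Tyrant client just talks to localhost:1978 and the traffic crosses the WAN inside the ssh tunnel.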
Totally unrelated, but I'm very happy with wasitup. I'd completely forgotten I had subscribed until I got a bunch of emails about a sick server. It really helps.
On the downside, I found a couple of the emails in Yahoo's bulk folder.
If you go by monthly cost per GB of memory or especially storage, Linode is one of the most expensive options out there (~$60/GB RAM, $1.25/GB storage).
The outage was upstream. The downtime was just the time it took the Internet to correct itself by routing around the problem -- if you have access to BGP data, you can see the rash of rerouting that fixed it.
"The servers used in both attacks employ the HomeLinux DynamicDNS provider, and both are currently pointing to IP addresses owned by Linode, a US-based company that offers Virtual Private Server hosting. The IP addresses in question are within the same subnet, and they are six IP addresses apart from each other,"
One article I saw implicated a Rackspace server on the attackers' side of the event. I presume that meant Slicehost.
If I were breaking into computers "off the clock", I'd probably look to just-a-CC#, no-questions-asked hosting providers (probably overseas) as my staging ground. This is something new: commodity virtualized VPS systems like Slicehost are an awfully convenient way to launder attacks.
It's only been in the last couple years that VM slices have been so quick and easy to buy.
Right, that's why cloud services like EC2 and Rackspace Cloud Servers are being used for stuff like spam. A spammer can just buy a temporary instance for a few cents and then take it down. That's also why many cloud IPs are being added to some spam blacklists, unfortunately.
At the end of last year we had someone attacking a client's network using commodity server instances.
If you can figure out a cut-out way to pay for the server time, then there isn't much anyone can do to track it without getting on the ground and forcing local police forces into at least trying to make some headway.
So in effect it is kind of the same as it used to be (overseas, no-questions providers), except that the overseas bit is now the payment rather than the servers. (And I guess they rely on the fact that intrusion is hard to detect, unlike, say, spam, coupled with the sheer number of people buying instances daily now.)
(Our case led to Eastern Europe, so it is unrelated, but the principle is similar.)
Except that to pay for one of these, you still pay via credit card, PayPal, etc., which links to real, identifiable info. I was wondering why, if one of the attackers' instances was discovered by Google, they didn't just hand it over to the authorities and have them get a subpoena for the account info. Or maybe they did.
Not that it matters (I'm sure if they're actually paying for VMs they're using stolen cards), but you can trade cash for anonymous credit card numbers in a number of places. Simplest example: Google "Vanilla VISA".