That threw me for a bit of a loop as well. This means the responsibility for ACL whitelisting at the edge moves from the actual edge to the security groups on the actual servers responding to requests, right?
That's doable and all, but I kind of didn't hate the old paradigm of having an extra layer there.
One way to think of NLB is that it's an Elastic IP address that happens to go to multiple instances or containers, instead of just one. Everything else stays the same.
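Since the NLB just forwards packets with the client source IP intact (for instance targets, at least), the whitelist would now live on the targets' security groups. A rough sketch of that with boto3; the group ID, port, and CIDR below are placeholders:

    # Whitelist an office CIDR directly on the instances' security group,
    # since the NLB itself has no security group to attach rules to.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24",
                          "Description": "office whitelist"}],
        }],
    )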
Yeah, it's easy to use it like that for now. I hope they update it later on, though. Seems like a missing feature in their otherwise nice firewall rules setup.
Here is an interesting study regarding energy consumption:
http://www.cell.com/current-biology/abstract/S0960-9822(15)0...
TL;DR: days with a high level of exercise don't correlate linearly with a high level of energy usage. The body can compensate for the exercise by spending less energy somewhere else (the brain?).
It does, until it doesn't. Let's say you only want to send the bad students to the principal. Then you want to model pop quizzes: students who score 1/5 on three or more per month go to the principal. Or you call their parents, whatever.
After a while you realize you're doing all this stuff to 'bad' students. This is where you really start to want some way to organize it, because all the code dealing with them is sprinkled liberally through the codebase as conditionals inside your procedures.
You'll start pulling logic out and things will start breaking. What you thought was less code turns out to be crazy to manage.
The Gilded Rose kata illustrates the dangers of procedural code excellently.
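For example, here's a sketch of what gathering those rules into one place might look like; the Student class, thresholds, and actions are all made up for illustration:

    # One home for the "bad student" rules instead of conditionals
    # sprinkled through the codebase.
    from dataclasses import dataclass, field

    @dataclass
    class Student:
        name: str
        grade: float = 3.0                                # out of 5
        quiz_scores: list = field(default_factory=list)   # each out of 5

        def is_struggling(self) -> bool:
            failing_grade = self.grade < 2.0
            failed_quizzes = sum(1 for s in self.quiz_scores if s <= 1) >= 3
            return failing_grade or failed_quizzes

    def send_to_principal(student: Student) -> None:
        print("%s -> principal's office" % student.name)

    def end_of_month_review(students: list) -> None:
        for student in students:
            if student.is_struggling():
                send_to_principal(student)   # or call their parents, whatever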
Do the easy stuff first:
1. Enable caching & get the HTTP headers right, with ETags and such (see the first sketch after this list).
2. Get the DB schema right with proper indexes & tune database settings (second sketch below).
3. Buy better hardware where there are bottlenecks.
4. Fix the software ;)
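On 1, here's a minimal sketch of a conditional GET with an ETag, using Flask purely as an example framework; the route and payload are invented:

    # Serve a response with an ETag and honor If-None-Match, so repeat
    # visitors get a cheap 304 instead of the full body.
    import hashlib
    from flask import Flask, request, Response

    app = Flask(__name__)

    def generate_report() -> bytes:
        return b"<html>...expensive render stands in here...</html>"

    @app.route("/report")
    def report():
        body = generate_report()
        etag = '"%s"' % hashlib.sha1(body).hexdigest()
        if request.headers.get("If-None-Match") == etag:
            return Response(status=304)   # client's copy is still good
        resp = Response(body, mimetype="text/html")
        resp.headers["ETag"] = etag
        resp.headers["Cache-Control"] = "max-age=60"
        return resp

The Cache-Control header lets clients skip the request entirely for a minute; the ETag saves you the body bytes when they do revalidate.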
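On 2, a toy demonstration of what a proper index buys you, using the stdlib sqlite3 module; the table and data are made up:

    # Compare the query plan before and after adding an index.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    con.executemany("INSERT INTO users (email) VALUES (?)",
                    [("user%d@example.com" % i,) for i in range(10000)])

    query = "SELECT id FROM users WHERE email = ?"
    args = ("user42@example.com",)

    # Without an index: a full table scan.
    print(con.execute("EXPLAIN QUERY PLAN " + query, args).fetchall())

    con.execute("CREATE INDEX idx_users_email ON users (email)")

    # With the index: a direct lookup.
    print(con.execute("EXPLAIN QUERY PLAN " + query, args).fetchall())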
A point on 4: find a very lightweight profiler you can run on production servers or, in very extreme circumstances, write your own. This will be a life-saver.
The bottlenecks are very rarely where you think they are. Even when they are, there's typically a lot of low-hanging fruit that you don't know about until you profile.
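Here's a sketch of such a sampling profiler, stdlib-only and Unix-only; the interval and output format are arbitrary choices:

    # On every CPU-time tick, record which Python function was executing;
    # the hottest functions accumulate the most samples.
    import atexit
    import collections
    import signal

    samples = collections.Counter()

    def _sample(signum, frame):
        code = frame.f_code
        samples["%s (%s:%d)" % (code.co_name, code.co_filename,
                                frame.f_lineno)] += 1

    def start(interval=0.005):
        signal.signal(signal.SIGPROF, _sample)
        signal.setitimer(signal.ITIMER_PROF, interval, interval)
        atexit.register(report)

    def report():
        signal.setitimer(signal.ITIMER_PROF, 0, 0)   # stop sampling
        for where, n in samples.most_common(10):
            print("%6d  %s" % (n, where))

    def busy():
        total = 0
        for i in range(5_000_000):   # toy workload to sample
            total += i * i
        return total

    if __name__ == "__main__":
        start()
        busy()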
Also, you should have some sort of monitoring software running, aggregating stuff like server load, connection stats, number of DB requests, memcached hits/misses, etc. I find Cacti quite good for this.
The idea that everyone needs to scale horizontally has become much less true than it was. Only Very Large sites have to go very horizontal at all these days.
Given 8 x 3GHz cores / 32GB / 5TB machines for cheap, you're packing as much power as 15+ servers of a few years ago. Even quite large web sites can be run off a few beefy (but cheap!) servers. That means money saved on network infrastructure, power, space, management, etc.
Probably 80% of the Alexa top 1000 could each be run off fewer than 20 modern beefy machines.
sqrt(h^2 + h^2) * 2 is the max length if your box has zero width, where h is the hallway width. The max possible length shrinks as the box's width w grows: at the critical 45-degree angle, a width of w costs w/(sin t * cos t) = 2w of usable length, so the max becomes sqrt(h^2 + h^2) * 2 - 2w.
This assumes the hallway widths are symmetrical and that your object can't be tilted to reduce its effective length. For example, an object of little height in a hallway with a high ceiling can be tilted, with the same Pythagorean math as above applied vertically. But if the box has any substantial height you won't get any help from a tilt. A sofa will tilt; a dishwasher won't.
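A quick numeric check of those numbers, assuming a right-angle corner, both corridors of width h, and no tilting (the function name is mine):

    # At angle t, the usable length around the corner is
    # h/sin(t) + h/cos(t) - w/(sin(t)*cos(t)); the object must fit at
    # every angle during the turn, so take the minimum over t.
    import math

    def max_length(h, w=0.0, steps=100_000):
        best = float("inf")
        for i in range(1, steps):
            t = (math.pi / 2) * i / steps
            s, c = math.sin(t), math.cos(t)
            best = min(best, h / s + h / c - w / (s * c))
        return best

    print(max_length(1.0))        # ~2.828 = 2*sqrt(2)*h  (zero width)
    print(max_length(1.0, 0.1))   # ~2.628 = 2*sqrt(2)*h - 2*w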