You might be wondering "why Cape Town?" when it is right at the bottom of Africa, the Southern African business hub is around Johannesburg, 1262 km / 784 miles away, and the next area that needs it is Lagos, Nigeria (4760 km / 2958 miles away).
However, I think there are some upsides to Cape Town:
1) AWS already has a team and infrastructure in Cape Town. It is a bit of a tech hub as well.
2) Submarine cables land in Cape Town - they cannot land in Johannesburg, which is far inland. By length of internet cable, Cape Town is actually closer than it seems to other parts of Africa (a): a lot of the connectivity runs around the coast, not over land.
There's probably a shorter ping time between Lagos and Cape Town than there is between Lagos and Johannesburg, despite the map distances being the other way around.
3) The further north you go from Cape Town, the closer you are to data centres in Europe. Cape Town is closest to exactly those parts of Africa that are worst served from Europe.
I have no inside knowledge as to why AWS chose Cape Town, but those seem like likely factors.
a) https://www.networkplatforms.co.za/information/our-network
1) Cape Town is not the southernmost point of Africa, but it is very far south, and it does contain the Cape of Good Hope, the southwestern-most point. There are no population centres further south (Port Elizabeth is at about the same latitude).
2) "...the next area that needs it is Lagos" ... and other central African cities such as Kinshasa and Accra. It might make sense if the second African AWS region is located hereabouts.
3) "... to data centres in Europe" ... and Middle-East, of course.
I fell asleep reading this article and had a sort of half dream about a post-apocalyptic game in which you, the amnesiac protagonist (whose initials are “AZ”), gain consciousness aboard the MV Dirona, and to discover yourself must awaken an ancient demon, a task that actually involves finding and activating long-lost AWS data centers, solving progressively harder puzzles using only in-game clues, a voice conferencing service over which a mysterious and cantankerous but also very tired-sounding voice feeds you tidbits of cryptic information in a mocking and self-righteous tone, and the gradually collected pieces of a 20,000 page wiki printout titled “Cold Start of AWS Region (confidential)” that is only about 80% accurate for each site.
Along the way you discover you are very good at playing yourself at board games, which is a key mechanic, and your rating improves with each region successfully started.
Final boss is us-east-1 of course, after which the identity of the “voice”, and the protagonist, are revealed.
Sort of like Myst, with an inverted Adam and Eve thing going on, and a lot more JSON.
One thing to keep in mind is that each Availability Zone operates on independent power infrastructure!
South African data center providers seem to have nailed reliable generator power - "...their systems are designed to operate continuously and effectively, regardless of whether they receive power from the national grid." [0] I don't want to imagine the cost.
On another note, load shedding (i.e. scheduled blackouts) has fortunately been implemented pretty uniformly, with areas following different published schedules - there's even an app for that! [1]
A common misconception, but no, AWS AZs are not DCs. A single AZ may be composed of multiple data centers[1], and a region may incorporate facilities that do not serve a public AZ[2], or that supply other capabilities[3].
[1] They'd be necessarily close together due to speed-of-light constraints.
[2] You may infer this from S3's triple-zone replication, which is still somehow magically fulfilled in regions that only have two public AZs.
Do you happen to know how physically far apart AZs in the same region can be? EBS volumes sticking to a single AZ hints that they may not be that close, but I'm also very surprised to discover that 1 AZ != 1 DC.
Which puts it at 200 km in a millisecond (5 microseconds is 1/200 of a millisecond). Like I said, "modern communications is at a fraction of lightspeed", although I didn't want to guess what fraction. But light in vacuum covers about 300 km in a millisecond, so 200 / 300 = 2/3 is a reasonable fraction; seems legit.
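For concreteness, here is the back-of-the-envelope arithmetic in Python (the 1.47 refractive index is a typical figure for optical fiber, assumed here rather than a quoted spec):

    # Rough numbers behind the "fraction of lightspeed" estimate above.
    C_KM_PER_MS = 299_792.458 / 1000  # light in vacuum: ~300 km per millisecond
    FIBER_INDEX = 1.47                # typical refractive index of fiber (assumed)

    fiber_km_per_ms = C_KM_PER_MS / FIBER_INDEX
    print(f"~{fiber_km_per_ms:.0f} km/ms, or {1000 / fiber_km_per_ms:.1f} us per km")
    # -> ~204 km/ms, or 4.9 us per km: the "5 microseconds per km" rule of thumb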
Quite. South Africa has a string of rolling blackouts though... some lasting as long as 8 hours at a time during bad spells. So you'll be running off generators as the primary source (with no backup) far more than you'd normally expect, even if you have multiple data centres.
Easiest way around that is to locate it in an area exempt from the rolling blackouts as I said.
There hasn't been an 8 hour blackout in Cape Town due to load shedding. The longest was 4 hours, I believe. There is a plan to get this sorted out in a year and a half; in the meantime DCs will just have to rely on their power redundancy (which has to be in place anyway).
Every time a new AWS region is made public, it highlights the disparity in service availability across regions, as well as in the availability of information about those services. Many regions are "discovered" because the ip-ranges.json file is updated long before the press release, but it can be weeks to months before key information needed to spin up infrastructure appears in documentation - for example, the ELB hosted zone identifier, which at the time of writing is not documented.
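For what it's worth, a minimal sketch in Python of how those "discoveries" happen, using AWS's public feed at https://ip-ranges.amazonaws.com/ip-ranges.json:

    # List the region names currently present in AWS's published IP ranges.
    # New regions tend to appear here before any announcement.
    import json
    import urllib.request

    URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
    data = json.load(urllib.request.urlopen(URL))

    regions = sorted({prefix["region"] for prefix in data["prefixes"]})
    print(regions)  # af-south-1 appeared here well before the press release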
To extend what I'm trying to say: customers shouldn't have to do this, and updating this data (which ideally should be in a machine-readable format, much like ip-ranges.json) is just another step. I would like to hope that AWS already has playbooks for taking a region out of closed beta and making it GA. If the listing of af-south-1 is already present in other sections of documentation, this may already be the case.
Show me that magic unicorn company that spins up data centres like that with 0 documentation bugs and I'll join you in criticising them. As it stands, with limited resources, I am pretty impressed with the speed at which AWS keeps adding capacity around the world.
Screw-ups will happen no matter how much wishful thinking and effort is spent preventing them. If you assume perfection in things then you will be sorely disappointed by most everything.
The AWS SDKs I've dissected contain the information needed to talk to each (service, region) tuple as machine-readable data. They seem to converge at boto:
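For example, a minimal sketch with botocore, which ships that data inside the package (botocore/data/endpoints.json) and exposes it through the session API:

    # Ask botocore which regions expose a given service, straight from the
    # endpoint data bundled with the SDK.
    import botocore.session

    session = botocore.session.get_session()
    print(session.get_available_regions("ec2"))  # every public EC2 region the SDK knows about
    print(session.get_available_partitions())    # e.g. ['aws', 'aws-cn', 'aws-us-gov']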
This is incredibly useful, thanks! Whilst it has some region information and endpoints, boto lacks other useful information - availability zone counts, hosted zone IDs for services like S3, etc. This data publicly lives in a variety of tables across their documentation, and is painful to scrape.
As a followup, I've found that the Terraform AWS provider needs information that boto doesn't - specifically, region-specific details which can only be found in the documentation. They have a checklist for what to do when a new region gets announced:
I too am disappointed that AWS doesn't publish this information in a convenient way. On the other hand, well… source code _is_ a machine readable data format, and Go ASTs aren't that scary:
Terraform follows AWS changes better than CloudFormation, so tracking Terraform might be a reasonable solution. One could even build a process to automatically retrieve the Terraform AWS provider source code, extract the necessary identifiers, and update the relevant data file living in an internal repository - something like the sketch below. Don't ask me how I know :-/
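A minimal sketch of that pipeline in Python; the provider file path below is an assumption (the hosted zone map has moved around between provider versions), and a regex stands in for proper Go AST parsing:

    # Scrape region -> ELB hosted zone ID pairs out of the Terraform AWS
    # provider's Go source. The path is a guess; locate the real file first.
    import re
    import urllib.request

    URL = ("https://raw.githubusercontent.com/hashicorp/terraform-provider-aws/"
           "main/internal/service/elb/hosted_zone_id_data_source.go")
    source = urllib.request.urlopen(URL).read().decode()

    # Go map entries look roughly like:  "af-south-1": "Z...",
    pairs = re.findall(r'"([a-z]{2}(?:-[a-z]+)+-\d)":\s*"(Z[A-Z0-9]+)"', source)
    for region, zone_id in sorted(pairs):
        print(region, zone_id)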
I know Epic Games uses AWS for their Fortnite matchmaking servers. And an African region has been a long-time player request. Maybe those wishes will be fulfilled soon.
>Fortnite, one of the world’s most popular video games, runs nearly entirely on AWS, including its worldwide game-server fleet, backend services, databases, websites, and analytics pipeline and processing systems.
Granted, this is from 2018, but Chris is talking about how much they're doing on AWS, how they like AWS' elasticity for their game servers because their load is pretty dynamic, and how they're running games in 26 availability zones.
My South African cousins have desperately wanted an African Fortnite server. When playing on EU servers their latency was around 200ms on a good day iirc.
Oh, right. That's not a matchmaking server then, that's a dedicated multiplayer game server. Matchmaking is, to use Wikipedia's definition, the process of connecting players together for online play sessions.
In a lot of games you cannot see a server list anymore; you can only use matchmaking (usually casual / ranked), and the matchmaking handles spinning up servers depending on the load.
They're still not the same service. Matchmaking is about grouping people together for the game, presumably dependent on their geographical location (for latency) and their skill-level (to balance the game), etc. Dedicated game servers, on the other hand, host the actual multiplayer session.
It's similar to the distinction Netflix uses between their control plane (hosted on AWS) and their content-delivery (hosted on their 'Open Connect' CDN).
Oh, I agree on the terminology, and I think it's a shame things are how they are (no more active communities on games like we saw with 2000s games). I was just pointing out that it is so common these days that people mean "game servers" when they say "matchmaking".
You're right that P2P is more common on game consoles (almost if not all Nintendo multiplayer games use P2P - Mario Kart, Splatoon, Animal Crossing, etc.), which in my opinion was 'okay' for a free service but is a joke now for a paid one.
I might have expressed myself badly in my previous comment, I simply wanted to say that I've seen a lot of people saying "matchmaking" for "game servers" these days because it is very common and dedicated servers are no longer the norm (sadly).
I don't think they are playing the game of "who attacks a market first". I bet some Azure clients are happy to move to AWS if they were forced onto Azure because it was the only option there. I bet it goes the other way around too.
Perhaps they are basing it on this [1] ultimatum supposedly issued by GCP’s top brass to best the top 2 cloud providers by a certain date. It was widely discussed on HN some months ago.
Yeah, that conversation was pretty silly and more HN echo chamber, because folks are mad that Google Reader was shut down. Below is what I commented a few months ago re: this topic.
> Google is currently building a massive $500 million datacenter outside of Reno as we speak, and has 10+ billion invested in their datacenter cloud offering buildout this year alone.
The AWS EC2 service was invented in Cape Town. AWS already has a lab and infrastructure there, so it's easier for them to scale up infrastructure when the people, skills, and local ecosystem are already present.
Azure bandwidth out of their Africa datacenters is $0.181 per GB, while AWS is $0.154 per GB (roughly $27 less per TB). It will be interesting to see if this forces Azure prices down.
I wonder if this will improve international connectivity times once Cape Town's CloudFront is warmed up, since the undersea cables wouldn't need to carry as many CDNed assets given the new region's presence.
Does anyone know how this will affect the water issues that Cape Town has been having? As far as I know, Cape Town's 'Day Zero' is still something people are cognizant of.
Not really, actually. The drought ended. It goes in fairly regular cycles if you look at the historical dam water levels. Currently we're out of the bad times, but efforts need to be scaled up to be ready for next time. Politically speaking, this won't happen though.
In terms of climate, CPT is basically a more windy Los Angeles/SF.