
"No guys stop complaining that google doesn't give support to kubernetes, we are not Google, we are the Cloud Native Computing Foundation." -- Some kubernetes developer.

The Kubernetes release blog, brought to you by -- Aparna Sinha, Product Manager, Google.

Seriously though, those new features are really neat and seem to compete with the ease of setup of Docker 1.12 (though Swarm was a bit of a mess when I last tried to make something useful of it, on Docker 1.12.0). I hope I can test them at work soon.

For now I'm having a great time with rancher.io and cattle.



Sorry, can you say more about your issue? We've always said we contribute (a ton!) to Kubernetes, but it's definitely not a commercial product, so we don't offer support for on-premises/non-Google Cloud deployments. That said (!!), if paid support is what you're after, please use one of the MANY organizations who DO offer support (http://kubernetes.io/community/ - about halfway down the page) and are huge contributors to the community.

Disclosure: I work at Google on Kubernetes.


The problem is, if the supporting organization is not the same as the developing organization, there's no obligation for the developing organization to respond to feedback from the supporting organization -- the developers are free to accept or reject feedback as they deem fit.

There could even be a conflict of interest: the Kubernetes developers are primarily sponsored by Google, which has an interest in promoting its own cloud offerings over those of their competitors. (Note the relative difficulty of setting up K8S on EC2; even if you can set it up, K8S assumes the ability to create a network topology that's unique to GCE; otherwise you have to use overlay network hacks.)


I don't think there is a distinction between a supporting and a developing organization at this point in the Kube ecosystem. At Red Hat we offer an enterprise version of Kube (OpenShift v3), and at the same time we contribute a ton of stuff back upstream. Personally, it has been a great experience working with engineers from Google, from other companies, and even with individual contributors, and I am pretty sure most if not all of my coworkers feel the same way.


Have you ever made a contribution that conflicted with Google's goals? If so, how was it handled?


To be honest, I've been contributing to Kubernetes longer than anyone who doesn't work at Google, and it hasn't ever been an issue for me. The Googlers I have worked with have been sticklers for doing the right thing for the people we expected to need Kubernetes.

The Google team's goals (seen from the outside) have been to build a phenomenal system for running applications that is stable and reliable, makes app authors' and maintainers' lives easier, and will succeed as an open source ecosystem. Where we have disagreed, it's more in the details of what to prioritize in the short term to succeed broadly (features, scale, ease of install, ease of running existing Docker images, etc.). We haven't always picked right - but it's not for lack of trying.

EDIT: And I have certainly "forced" things that I felt certain audiences needed into Kube by convincing other contributors over their initial objections. I can't think of a place where a reasoned argument has not carried the day, ever.


As someone who has been on the receiving end of Clayton's arguments (and a sometimes-winner :), I vouch. What he says above is really and truly the highest compliment one could pay.


A few items:

1) The people on that page all contribute mightily to the project in various ways - we wouldn't be where we are without them. We take feedback from everyone (we have over 15 special interest groups, many led by non-core team members).

2) K8s developers are most definitely not primarily sponsored by Google - more than 60% of K8s devs are NOT Googlers.

3) Overlay networks are definitely not a hack - many partners set them up with great benefit. The fact is networking is hard (tm), and unless you're just looking for a flat network, you're going to have to use SOMETHING.

Disclosure: I work at Google on Kubernetes


Where are you getting the 60% number from?

From an analysis of all commits in the k8s (main) repo, this is the data I am getting about which domains (and which users under those domains) author and commit the most.

  -----------------------
  Top 20 author (domains)
  -----------------------
  
  google.com => 16825
  gmail.com => 8220
  redhat.com => 4051
  fathomdb.com => 501
  bedafamily.com => 420
  coreos.com => 398
  huawei.com => 352
  raintown.org => 269
  zte.com.cn => 183
  mesosphere.io => 172
  zju.edu.cn => 140
  apache.org => 126
  mirantis.com => 72
  hotmail.co.uk => 67
  amadeus.com => 67
  163.com => 64
  us.ibm.com => 64
  tmrts.com => 44
  box.com => 43
  canonical.com => 42
  
  --------------------------
  Top 20 committer (domains)
  --------------------------
  
  google.com => 16655
  gmail.com => 7130
  redhat.com => 4065
  fathomdb.com => 493
  bedafamily.com => 419
  coreos.com => 388
  huawei.com => 348
  raintown.org => 268
  zte.com.cn => 180
  mesosphere.io => 174
  zju.edu.cn => 131
  apache.org => 121
  amadeus.com => 66
  163.com => 65
  us.ibm.com => 64
  hotmail.co.uk => 63
  mirantis.com => 63
  ebay.com => 53
  box.com => 43
  tmrts.com => 42

Btw, you (Google?) should really invest in something like http://stackalytics.com/ if the community wants to have good transparency around this type of data.

Crappy script to generate that data @ https://gist.github.com/harlowja/aca0b3c7d94c78014798fd9eb88...
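
For anyone who doesn't want to dig through the gist: the core of the analysis is just a tally of the email domains git records for each commit. A minimal sketch of the idea (illustrative only, not the gist's actual code; run it from inside a clone of the main k8s repo):

  import subprocess
  from collections import Counter

  def domain_counts(fmt):
      # fmt is a git log format specifier: %ae = author email, %ce = committer email.
      out = subprocess.check_output(["git", "log", "--format=" + fmt],
                                    universal_newlines=True)
      counts = Counter()
      for email in out.splitlines():
          if "@" in email:
              # Treat everything after the last '@' as the domain.
              counts[email.rsplit("@", 1)[1].lower()] += 1
      return counts

  for label, fmt in (("author", "%ae"), ("committer", "%ce")):
      print("Top 20 %s (domains)" % label)
      for domain, n in domain_counts(fmt).most_common(20):
          print("  %s => %s" % (domain, n))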


There's no question Google has the most code checked in, but this can be a faulty metric (generated code, rebases, etc. can mess up authorship).

We think the more important metric is the number of unique contributors, where we (Google) are <50%.

For Stackalytics - http://stackalytics.com/?project_type=kubernetes-group&metri...

Looks like my 60% number is out of date - we (Google) are now up to 44%. I'll have to figure out why.
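
If anyone wants to sanity-check the unique-contributor split themselves, one rough approach (a sketch only, not how we measure it; note that gmail.com and similar addresses hide employer affiliation, which skews any per-company number) is to count distinct author emails per domain from the same git history:

  import subprocess
  from collections import defaultdict

  # Distinct author emails across the whole history of the repo.
  emails = set(subprocess.check_output(
      ["git", "log", "--format=%ae"], universal_newlines=True
  ).splitlines())

  by_domain = defaultdict(set)
  for email in emails:
      if "@" in email:
          by_domain[email.rsplit("@", 1)[1].lower()].add(email)

  total = sum(len(v) for v in by_domain.values())
  google = len(by_domain["google.com"])
  print("google.com: %d of %d unique authors (%.1f%%)"
        % (google, total, 100.0 * google / total))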

Disclosure: I work at Google on Kubernetes


In my experience, the question of who the contributors are and how much they may contribute is less important than the question of who has control of the project.

If you're claiming that Google has delegated authority over Kubernetes to the open-source community, and therefore is not in a position to place its needs over those of the community, please say so explicitly here.

Why quibble over statistics when we can get an official statement?


We, Google, have contributed 100% of Kubernetes to the Cloud Native Computing Foundation and, therefore, are not in a position to place our needs over those of the community.[1]

That is not to say we (Google) do not continue to be deeply invested in its success (it's the core of our Google Container Engine), and, further, human beings who happen to be employed at Google _are_ core contributors to the project - but Google as an organization does not control the project.

[1] https://www.linuxfoundation.org/news-media/announcements/201...

Disclosure: I work at Google on Kubernetes


> K8s developers are most definitely not primarily sponsored by Google - more than 60% of K8s devs are NOT Googlers.

The key metric, in my view, is how many of them can approve PRs into master. How many of them are not Googlers?

Moreover, what would happen if a PR arrived that might make K8S incompatible with GCE, but be otherwise better for everyone else? I am certain it would be summarily rejected by Google.

> Overlay networks are definitely not a hack - many partners set them up with great benefit. The fact is networking is hard (tm), and unless you're just looking for a flat network, then you're going to have to use SOMETHING.

I respectfully disagree. The use of multiple IP addresses per node is a hard requirement of K8S, even though it is arguably unnecessary in environments where servers can bind to arbitrary ports.

By doing so, K8S made an explicit tradeoff that was better suited to Google's cloud offering than others'. I'm not saying it was the wrong decision; I'm simply saying that it forces complexity on those who operate in single-IP-per-host environments. (In case you think I'm unfairly laying blame, I point the finger equally at AWS and other cloud providers who refuse to make it simple to allocate a useful number of IP addresses per instance.)

I maintain my characterization that overlay networks are a hack: they make tracing more complex (tcpdump doesn't natively understand VXLAN or decapsulate its frames); they are compatible with few (if any) NetFlow analyzers (which orgs often use for IDS and other purposes); and they add overhead to packet processing, particularly in virtualized environments that don't support VXLAN hardware offloading.


I originally objected to IP per pod, given that it was so early in the wide-scale deployment of SDN on metal (at Red Hat, a year before Kube 1.0, we were already worried about how in the world we could support IP per pod as well as it works on GCE). In light of everything, I think it was the right decision. While VXLAN is cumbersome, IP per entity allows very powerful integration into existing networks, fits well with almost all tools in the space, and makes apps much easier to run.

I do still want to see more solid work done on programmable BGP (Calico), programmable iptables (Contiv), and even programmable routers (it's not that hard to program routers on the fly today; it's just incredibly specific to each technology).

I also look forward to being able to exploit tools like ECMP more effectively within the cluster to do L3 load balancing and DSR - much of that would be a lot harder without being able to rely on endpoints everywhere.


I can't edit my previous comment, but let me just drop this here - Canonical JUST released a new commercially supported distro of Kubernetes.

https://insights.ubuntu.com/2016/09/27/canonical-expands-ent...

Disclosure: I work at Google on Kubernetes.



