REST API Notes - 2017/12/05

There were two major events in the past week that filled my browser tabs. The first was the 2017 AWS re:Invent conference that took place in Las Vegas, Nevada. The second was the publication of ThoughtWorks' Technology Radar. Each had noteworthy, API-related items to discuss.


Before I get into the significance of last week's Amazon announcements, I need to level-set on some terminology and context. If you're an old hand at this, skip to the 'analysis' section.


Kubernetes is an open source system for orchestrating containers. It was originally developed at Google and helps automate container deployment, scaling, and management. API developers, particularly microservice creators, found Kubernetes support for "Pods" particularly useful.

A Pod is the smallest and simplest unit in the Kubernetes model that one can create or deploy. A pod encapsulates:

  • one or many application containers (often Docker, but other container runtimes are supported)
  • storage resources
  • a unique network IP
  • options that govern how the container(s) should run

The most common Kubernetes use case is to run a single container within a Pod. However, Pods running multiple containers, themselves sharing resources, are more interesting. In this case, one container is primarily responsible for performing the service, while separate "sidecar" containers perform necessary utility work. These may be things like client-side service discovery, security, stat tracking and logging, distributed gateways like Google Cloud Endpoints, circuit breakers, etc.
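As a sketch, a Pod manifest pairing a primary service container with a logging sidecar might look like the following (the names, images, and ports here are hypothetical, not from any real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders-service            # hypothetical microservice name
spec:
  containers:
  - name: orders                  # primary container: the business logic
    image: example/orders:1.0     # hypothetical image
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/orders
  - name: log-forwarder           # sidecar: ships logs to an aggregator
    image: example/log-forwarder:1.0
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
      readOnly: true
  volumes:
  - name: shared-logs             # Pod-level volume shared by both containers
    emptyDir: {}
```

Both containers share the Pod's network IP and the `shared-logs` volume, which is what lets the sidecar do its utility work without the business-logic container knowing anything about it.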

To be clear, sidecars are not about deploying multiple microservices at once. Nothing prevents one from deploying a pod with several containers, each containing its own microservice. However, simultaneous deployment smacks of a "distributed monolith" and should be avoided. Microservice business logic should be cleanly encapsulated in one container, while the common, repeatable "plumbing" necessary for normal business operation (the sidecars) exists behind clean, inter-pod interfaces. These services-and-sidecars form the basis of "service meshes", which I covered a few newsletters back.

Using Kubernetes on Google's cloud service was easy. It also promised no vendor lock-in: it ran on AWS, Azure, and Google Cloud Platform. Google open-sourced the technology in an effort to catch up to AWS's cloud infrastructure lead. After all, if you make the infrastructure irrelevant, switching to take advantage of Google's lead in other areas, like machine learning, becomes that much more plausible.

While Kubernetes ran on AWS infrastructure, Amazon had yet to offer a managed service to get people up and running (Kubernetes can be complex). AWS had created its own AWS-only container service, Amazon Elastic Container Service. However, as the name makes clear, it only ran on Amazon.


At re:Invent, Amazon announced two Kubernetes-related services: Amazon Elastic Container Service for Kubernetes (Amazon EKS) and AWS Fargate. EKS is the managed Kubernetes service that Amazon had been missing. Even with EKS, however, developers still need to maintain the infrastructure where the containers actually run. That is where Fargate comes in. Similar to Lambda, Fargate turns the management of the compute infrastructure into a service as well. With Fargate, a complex microservice (and associated sidecars) gets set up and processes requests with even less knowledge of the underlying metal required.

Ben Thompson writes the Stratechery email newsletter, an ongoing analysis of technology and media business strategy. In his November 30th post (subscription required), he discussed AWS's execution advantage in relation to Kubernetes. He states:

"This is a strategy we’ve seen before: embrace a standard, extend it with your own services, and, well, there’s no need to extinguish Kubernetes, just make it ever harder to move away (which effectively extinguishes the underlying premise of containers, which is portability)."

Are Amazon's Kubernetes offerings better than Google's for deploying microservices? Probably not, but they don't have to be. They only have to be good enough to reduce the allure of switching. For now, I anticipate we'll continue to see plenty of service mesh creation, tutorials, and experimentation on AWS.


Kubernetes and service meshes made an appearance in Volume 17 of the ThoughtWorks Radar. The Radar is a report published approximately every six months that examines emerging trends in the software industry. The infographic is split into techniques, platforms, tools, and languages/frameworks, with each listed item given an "adopt", "trial", "assess", or "hold" rating.

Service meshes are listed under techniques, and rated as an "assess":

"As large organizations transition to more autonomous teams owning and operating their own microservices, how can they ensure the necessary consistency and compatibility between those services without relying on a centralized hosting infrastructure? To work together efficiently, even autonomous microservices need to align with some organizational standards. A service mesh offers consistent discovery, security, tracing, monitoring and failure handling without the need for a shared asset such as an API gateway or ESB. A typical implementation involves lightweight reverse proxy processes deployed alongside each service process, perhaps in a separate container. These proxies communicate with service registries, identity providers, log aggregators, and so on. Service interoperability and observability are gained through a shared implementation of this proxy but not a shared runtime instance. We’ve advocated for a decentralized approach to microservice management for some time and are happy to see this consistent pattern emerge.

For API creators, the report has additional insights beyond Kubernetes and service meshes. It also includes a HOLD on "Overambitious API Gateways", while rating the Kong API Gateway as an "Assess":

"We remain concerned about business logic and process orchestration implemented in middleware, especially where it requires expert skills and tooling while creating single points of scaling and control. Vendors in the highly competitive API gateway market are continuing this trend by adding features through which they attempt to differentiate their products. This results in overambitious API gateway products whose functionality — on top of what is essentially a reverse proxy — encourages designs that continue to be difficult to test and deploy. API gateways do provide utility in dealing with some specific concerns — such as authentication and rate limiting — but any domain smarts should live in applications or services."

One thing of note about ThoughtWorks' interest in Kong? Its upcoming sidecar configuration. For more API-related thought (and so much more), give the report a look-see yourself.



I thought I was going to have the analysis of Facebook shuttering the API it provided for research all to myself (who else in this community reads Adweek?). Then Kin Lane went and published a great pro-and-con writeup that beat me by a day.

The salacious story is that Facebook is shuttering the API independent researchers used to detect Russian (dis)information campaigns during the last presidential election. There are lots of big ramifications there that are outside the scope of this update. The smaller story that shouldn't be buried is that, yet again, businesses built on a single company's API are in a highly risky position. For a company like Spredfast, that dependency is less an enrichment of a business model and more like an umbilical cord.


There are a few meetups happening in December, including the DC API User Group that I'll be attending tonight. Head over to for more great gatherings like conferences, meetups, and/or hackathons happening near you. If you don't see an event that should be there, let me know; either respond to this directly or send me an email at ''.

Til next time,


@libel_vox and
