An Interview with Heptio, the Kubernetes Pioneers

I recently spent some time chatting with Craig McLuckie, CEO of the leading Kubernetes solutions provider Heptio. Heptio's products and services, aimed at both developers and system administrators, simplify and scale the Kubernetes ecosystem.

Petros Koutoupis: For all our readers who have yet to hear of the remarkable things Heptio is doing in this space, please start by telling us, who is Craig McLuckie?

Craig McLuckie: I am the CEO and founder of Heptio. My co-founder, Joe Beda, and I were two of the three creators of Kubernetes and previously started Google Compute Engine, Google's traditional infrastructure-as-a-service product. I also started the Cloud Native Computing Foundation (CNCF), of which I am a board member.

PK: Why did you start Heptio? What services does Heptio provide?

CL: Since we announced Kubernetes in June 2014, it has garnered a lot of attention from enterprises looking to develop a strategy for running their business applications efficiently in a multi-cloud world.

Perhaps the most interesting trend we saw that motivated us to start Heptio was that enterprises were looking at open-source technology adoption as the best way to create a common platform spanning on-premises, private cloud, public cloud and edge deployments without fear of vendor lock-in. Kubernetes and the cloud native technology suite represented an incredible opportunity to create a powerful "utility computing platform" spanning every cloud provider and hosting option, while also radically improving developer productivity and resource efficiency.

In order to get the most out of Kubernetes and the broader array of cloud native technologies, we believed a company needed to exist that was committed to helping organizations get closer to the vibrant Kubernetes ecosystem. Heptio offers both consultative services and a commercial subscription product that delivers the deep support and the advanced operational tooling needed to stitch upstream Kubernetes into modern enterprise IT environments.

PK: What makes Heptio relevant in the Container space?

CL: First, Joe and I are building our company around the philosophy of keeping open source open. We pride ourselves on "strong opinions, loosely held" and act as trusted advisors to organizations looking to embrace cloud native technologies. There are a tremendous number of different configuration options, deployment models and hosting options for Kubernetes and related cloud native technologies. We take a principled, technology-first perspective in working with our customers. We also recognize the importance of things like cloud provider-hosted Kubernetes services in a technical program and work to make sure that our solutions don't preclude customers from using the many excellent hosted Kubernetes services that are emerging where it makes sense to use them.

Next, it's important to realize that cloud native technologies are still young and evolving. As businesses start to adopt them, they inevitably encounter operational gaps. We enjoy helping organizations not only adopt Kubernetes, but also address key operational gaps in Kubernetes and container deployments with upstream-friendly open-source projects that we contribute back to the broader community. Each of our open-source projects addresses an important issue with Kubernetes, and many have been widely adopted as de facto standards in the ecosystem:

  • Heptio Sonobuoy to ensure proper configuration of Kubernetes clusters.
  • Heptio Ark for disaster recovery and cluster migration.
  • Heptio Contour as a more modern ingress framework for Kubernetes.
  • Heptio Gimbal for managing multi-cluster ingress across OpenStack and Kubernetes.
  • Ksonnet for describing applications to be deployed to a Kubernetes cluster.
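As a sketch of how one of these projects fits into a cluster, Contour watches ordinary Kubernetes Ingress resources and programs Envoy to route the traffic they describe. A minimal manifest might look like the following (the app name, hostname and service here are hypothetical placeholders):

```yaml
# A plain Kubernetes Ingress object; once Contour is deployed in the
# cluster, it picks up resources like this and configures Envoy to
# route matching requests to the named Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app            # placeholder name
spec:
  rules:
    - host: www.example.com   # placeholder hostname
      http:
        paths:
          - backend:
              serviceName: my-app   # placeholder Service
              servicePort: 80
```

Because Contour consumes the standard Ingress API, existing manifests like this one work without modification when switching ingress controllers.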

PK: Lately, I've been seeing Heptio making headlines—more specifically, relating to the recent public release of an open-source load balancer for both Kubernetes and OpenStack. What exactly is this new and exciting technology?

CL: A pretty common starting point for organizations on their Kubernetes journey is running stateless, web-serving workloads. Obviously Kubernetes can do a lot more than that these days, but a lot of companies start their journey here. Ironically, one of the most important things for scaling these workloads to support internet-scale and IoT scenarios is an efficient, cloud-native-friendly load balancer. While there were a lot of options in the market, few could address the operational needs of Actapio, Yahoo Japan's subsidiary, which was grappling with this problem at the time. Additionally, they needed the solution to work not only with Kubernetes, but also on some of their legacy investments, including OpenStack, to scale effortlessly and affordably. And, they needed the solution to be 100% open source and Kubernetes-upstream-friendly.

We designed Gimbal around two great technologies that we were already familiar with: Envoy, a mature proxy technology developed by Lyft, and Kubernetes itself. By relying on a lot of the fundamental characteristics of Kubernetes as a hosting and operating environment, we radically simplified the process of creating a scale-out load-balancing framework that could span legacy and multiple cloud native environments.

PK: And why is this a game-changer?

CL: By routing traffic in hybrid environments, Gimbal removes obstacles for companies looking to adopt Kubernetes. This cloud native solution is tailored to the highly dynamic nature of Kubernetes workloads, something existing, expensive solutions were not designed to handle.

Gimbal helps companies deal with scale—not only around the scale of the workloads themselves, but with the number of environments and teams that are interacting with those environments. It is a modern software load-balancing solution that is cloud native, but legacy-friendly, and it can be deployed on traditional infrastructure on-premises, at the network edge, or in cloud environments.

This approach is not only cost effective, but it also creates strong operating efficiencies for companies. Linking the management of the ingress control plane to the workload control plane simplifies configuration, makes it easier to implement policy and removes toil from IT operating practices.

PK: Where should readers go to learn more?

CL: Our blog goes into more details on the technology, and readers can download Gimbal from GitHub to try it out.

PK: The future of the data center and the services provided seem to be looking more cloud native. Already in this space, what will the future of Heptio look like as it addresses this larger demand?

CL: For now, we are focused on helping organizations get the most out of Kubernetes and a small set of additional open-source technologies (many of which we developed) that are necessary to operate and integrate Kubernetes into existing IT systems. You can expect us to continue to build open-source projects that fill gaps in integrating Kubernetes and to open Kubernetes up to running new workloads in the future.


Heptio prides itself on not only leveraging open-source solutions but also contributing back to the very communities relying on these wonderful technologies. To learn more, visit Heptio's website. Also, be sure to catch Joe Beda's TGI Kubernetes series on Heptio's official YouTube channel. And finally, don't miss Joe Beda's ebook Becoming a Cloud Native Organization.

Petros Koutoupis, LJ Editor at Large, is currently a senior performance software engineer at Cray for its Lustre High Performance File System division. He is also the creator and maintainer of the RapidDisk Project. Petros has worked in the data storage industry for well over a decade and has helped pioneer the many technologies unleashed in the wild today.
