HTTP/2 and gRPC - The Next Generation of Microservices Interactions

Modern digital businesses are delivered by real-time interactions among hundreds or even thousands of services. When you order a Lyft or stream a Netflix movie, several services interact and coordinate with each other to fulfill your request. Given how central these interactions are, their performance, reliability, and health become critical for every digital business.

Unsurprisingly, significant work is underway to improve inter-service communication mechanisms and, more broadly, the entire communication fabric. HTTP/2 and gRPC are defining the next generation of highly efficient inter-service communication. Istio (with Envoy) and Linkerd promise to overhaul and establish a robust fabric for service discovery, routing, failure handling, and more.

In this post we will summarize the key advancements in HTTP/2, share an overview of gRPC, and then describe Netsil's approach to monitoring the health and performance of HTTP/2- and gRPC-based interactions.

HTTP/2 Overview

HTTP/2 delivers significant enhancements, primarily focused on improving the utilization of underlying TCP connections. Three major enhancements stand out:

  1. Binary Framing and Compression: As opposed to the newline-delimited plain-text HTTP/1.x protocol, HTTP/2 employs binary encoding for frames. The binary encoding is more compact, more efficient to process, and easier to implement correctly. The structure of a binary-encoded frame is described in detail in the HTTP/2 specification.

    In addition to binary encoding, HTTP/2 employs header compression to reduce the footprint of HTTP headers, which can grow to kilobytes (think cookies) and are often repeated across requests and responses. HTTP/2 uses a static Huffman code to compress literal values. In addition, the client and server each maintain a table of frequently seen header fields and their compressed values; when these fields repeat, the sender simply includes a reference to the previously indexed values.

    HTTP2 Header Compression (credit: High Performance Browser Networking, O’Reilly)
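Every HTTP/2 frame starts with the same fixed 9-byte binary header defined in the specification: a 24-bit payload length, an 8-bit frame type, an 8-bit flags field, and a reserved bit followed by a 31-bit stream identifier. A minimal sketch of packing and parsing that header in Python (the type-name table below lists only a subset of the frame types):

```python
import struct

# Subset of frame types from the HTTP/2 specification
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x8: "WINDOW_UPDATE"}

def encode_frame_header(length, ftype, flags, stream_id):
    """Pack the 9-byte HTTP/2 frame header: 24-bit length, 8-bit type,
    8-bit flags, then a reserved bit plus a 31-bit stream identifier."""
    return (struct.pack(">I", length)[1:]          # low 3 bytes = 24-bit length
            + struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF))

def decode_frame_header(header):
    """Unpack a 9-byte frame header into (length, type name, flags, stream id)."""
    length = int.from_bytes(header[0:3], "big")
    ftype, flags = header[3], header[4]
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # mask reserved bit
    return length, FRAME_TYPES.get(ftype, hex(ftype)), flags, stream_id
```

Because the header is fixed-size binary rather than delimiter-scanned text, a parser reads exactly 9 bytes, then exactly `length` payload bytes; for example, `decode_frame_header(encode_frame_header(16, 0x1, 0x4, 1))` recovers a 16-byte HEADERS frame on stream 1.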

  2. Multiplexing: HTTP was initially a single request-and-response flow: the client had to wait for the response before issuing the next request. HTTP/1.1 introduced pipelining, where the client could send multiple requests without waiting for responses. However, the server was still required to send responses in the order the requests arrived. So HTTP/1.1 remained a FIFO queue and suffered from requests getting blocked behind a high-latency request at the front of the queue (referred to as head-of-line blocking).

    HTTP/2 introduces fully asynchronous multiplexing of requests through the concept of streams. Clients and servers can both initiate multiple streams on a single underlying TCP connection. Yes, even the server can initiate a stream to transfer data it anticipates the client will need. For example, when a client requests a web page, in addition to sending the HTML content, the server can initiate a separate stream to transfer images or videos that it knows will be required to render the full page. The figure below shows multiple streams, 0 to 4, communicating on a single TCP connection.

    1. Stream 0 is reserved for communicating connection control frames.
    2. Streams 1 and 3 (odd-numbered) are initiated by the client.
    3. Stream 2 (even-numbered) is initiated by the server.
    4. TCP packets are illustrated as carrying the content of multiple streams. Packet 1, for example, transfers SETTINGS for Stream 0, HEADERS and DATA for Stream 1, and DATA for Stream 3.

    While streams are mostly independent, there are provisions to establish priority and dependencies across streams as well.

    HTTP2 Multiplexing Leveraging Streams
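The interleaving shown in the figure can be sketched in a few lines. This toy scheduler chops each stream's payload into fixed-size DATA chunks and emits them round-robin onto one "connection," so a large response on one stream no longer blocks the others (the stream IDs and chunk size are illustrative, not from the spec):

```python
from itertools import zip_longest

def multiplex(streams, chunk_size=4):
    """Interleave DATA chunks from several streams onto one connection,
    round-robin. `streams` maps stream id -> payload bytes; returns a
    list of (stream_id, chunk) frames in wire order."""
    chunked = {
        sid: [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        for sid, data in streams.items()
    }
    frames = []
    for round_ in zip_longest(*chunked.values()):
        for sid, chunk in zip(chunked.keys(), round_):
            if chunk is not None:        # this stream already finished
                frames.append((sid, chunk))
    return frames
```

Within each stream the chunks stay in order, so the receiver can reassemble every stream independently; e.g. `multiplex({1: b"abcdefgh", 3: b"xy"})` yields the frame sequence `(1, b"abcd")`, `(3, b"xy")`, `(1, b"efgh")` -- a short response on stream 3 completes without waiting behind stream 1.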


  3. Flow Control: A successful implementation of multiplexing requires flow control to avoid contention for underlying TCP resources and destructive behavior across streams. Rather than enforce a particular flow-control algorithm, HTTP/2 provides the building blocks for clients and servers to implement flow control suited to their specific situations.

    Application-layer flow control allows the browser to fetch only a part of a particular resource, put the fetch on hold by reducing the stream flow-control window down to zero, and then resume it later—e.g., fetch a preview or first scan of an image, display it and allow other high-priority fetches to proceed, then resume the fetch once more critical resources have finished loading. (More details on flow control are available in the HTTP/2 spec and in O'Reilly's High Performance Browser Networking.)
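The mechanism behind "reducing the stream flow-control window down to zero" is simple bookkeeping: each stream has a send window (65,535 bytes initially, per the HTTP/2 specification), DATA frames consume it, and the peer's WINDOW_UPDATE frames replenish it. A minimal sketch of the sender's side:

```python
class StreamFlowControl:
    """Track the HTTP/2 send window for one stream. Sending DATA consumes
    the window; the peer's WINDOW_UPDATE frames replenish it. At zero the
    sender must pause -- which is exactly how a receiver puts a fetch on
    hold and resumes it later."""

    DEFAULT_WINDOW = 65535  # initial window size from the HTTP/2 spec

    def __init__(self):
        self.window = self.DEFAULT_WINDOW

    def can_send(self, nbytes):
        return nbytes <= self.window

    def send_data(self, nbytes):
        if not self.can_send(nbytes):
            raise RuntimeError("window exhausted; wait for WINDOW_UPDATE")
        self.window -= nbytes

    def window_update(self, increment):
        # Receiver grants more credit, resuming a paused stream.
        self.window += increment
```

A receiver that wants only an image preview simply stops sending WINDOW_UPDATE frames once the window drains, and grants more credit later to resume the transfer.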

gRPC Overview

gRPC is rapidly gaining adoption as the next generation of inter-service communication, particularly in microservices architectures. gRPC leverages HTTP/2 underneath and as such benefits from many of the HTTP/2 efficiencies described above. The practical benefits of gRPC have been captured elegantly in this blog post. Specifically, the following attraction points are highlighted for why gRPC:

  1. The ability to auto-generate and publish SDKs, as opposed to publishing API definitions for services.
  2. Server-side streaming, leveraged from the underlying HTTP/2.
  3. Efficiency gains during serialization and deserialization by using protocol buffers instead of JSON.
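To make the third point concrete: protocol buffers encode fields as a compact tag/value binary stream rather than repeating field names as text. The sketch below implements just the varint wire format from the protobuf encoding documentation; encoding an integer field with value 150 takes 3 bytes, versus roughly 10 bytes for the JSON `{"a": 150}`:

```python
def encode_varint(n):
    """Protocol buffers base-128 varint: 7 bits per byte, least-significant
    group first, high bit set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_int_field(field_number, value):
    """A protobuf field is a key varint then a value varint. The key is
    (field_number << 3) | wire_type, and wire type 0 means varint."""
    return encode_varint((field_number << 3) | 0) + encode_varint(value)
```

`encode_int_field(1, 150)` produces `b"\x08\x96\x01"` (the worked example from the protobuf encoding docs); there are no field-name strings, quotes, or braces to parse on the receiving side.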

When you are building many services and establishing interactions among them using gRPC, it becomes critical to monitor the golden signals, i.e., latency, throughput, and error rates, for the gRPC calls. At Netsil, we perform a deep analysis of both HTTP/2 and gRPC interactions. As a result, you get complete visibility into the health of these critical service interactions, as well as a clear understanding of the dependencies among services.

Monitoring HTTP/2 and gRPC Interactions

The health of gRPC and HTTP/2 interactions can be defined by the golden signals of latency, throughput, and error rates. You can easily monitor the health of gRPC and HTTP/2 interactions without any code or container changes by using the Netsil Application Operations Center (AOC). All you need to do is download the Netsil collector and install one collector per host. The collector can be installed as a Docker container, a Kubernetes DaemonSet pod, or a regular process.

The Netsil collectors will automatically start analyzing the gRPC and HTTP/2 interactions. They will generate detailed metrics for latency and throughput, along with key attributes such as the gRPC service method name, gRPC status message, status code, etc. Leveraging these metrics and attributes, you can set up alerts to monitor the health of your service interactions. The dashboard below captures the latency and throughput of gRPC calls, grouped by method type and service method name.

gRPC Dashboard in the Netsil AOC
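The golden signals behind a dashboard like this reduce to simple aggregations over per-call samples. This is a simplified offline sketch (the AOC derives these from wire traffic in real time; the naive index-based p99 and the sample data are illustrative assumptions):

```python
import statistics

def golden_signals(calls, window_seconds):
    """Compute golden signals from (latency_ms, grpc_status) samples
    observed in a time window; gRPC status 0 means OK."""
    latencies = sorted(latency for latency, _ in calls)
    errors = sum(1 for _, status in calls if status != 0)
    p99_index = max(0, int(len(latencies) * 0.99) - 1)  # naive percentile
    return {
        "p50_ms": statistics.median(latencies),
        "p99_ms": latencies[p99_index],
        "throughput_rps": len(calls) / window_seconds,
        "error_rate": errors / len(calls),
    }
```

For example, 98 fast OK calls plus one DEADLINE_EXCEEDED and one UNAVAILABLE over a 10-second window yields a 2% error rate and 10 requests/second, with the two slow failures surfacing in the p99 latency rather than the median.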

From the perspective of monitoring gRPC health, the following mapping of gRPC constructs to HTTP/2 headers is of significance (the full mapping is defined in the gRPC-over-HTTP/2 protocol specification).

gRPC Request:

  • Method → ":method POST"
  • Scheme → ":scheme" ("http" / "https")
  • Path → ":path" "/" Service-Name "/" {method name}

gRPC Response:

  • HTTP-Status → “:status 200”
  • Status → “grpc-status” 1*DIGIT ; 0-9
  • Status-Message → “grpc-message” Percent-Encoded
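Given that mapping, extracting monitoring attributes from a captured call is mostly header parsing: the service and method come from `:path`, and the outcome comes from the `grpc-status`/`grpc-message` trailers (the status table below is a subset of the full gRPC code list, and the `routeguide` names in the example are illustrative):

```python
from urllib.parse import unquote

# Subset of the gRPC status-code table; 0 is the only success code.
GRPC_STATUS = {0: "OK", 1: "CANCELLED", 4: "DEADLINE_EXCEEDED",
               5: "NOT_FOUND", 14: "UNAVAILABLE"}

def parse_grpc_call(request_headers, response_trailers):
    """Derive monitoring attributes from raw HTTP/2 headers of one call:
    service/method from ':path', outcome from grpc-status/grpc-message."""
    _, service, method = request_headers[":path"].split("/")
    code = int(response_trailers.get("grpc-status", "0"))
    return {
        "service": service,
        "method": method,
        "grpc_status": GRPC_STATUS.get(code, str(code)),
        "ok": code == 0,
        # grpc-message is percent-encoded on the wire
        "message": unquote(response_trailers.get("grpc-message", "")),
    }
```

Note that because `grpc-status` travels in trailers, the HTTP `:status` can be 200 even when the call failed, so error-rate monitoring for gRPC must read `grpc-status` rather than the HTTP status code.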

These attributes allow you to further analyze the interactions and build granular alerts and dashboards. For example, the chart below alerts on gRPC requests resulting in errors.

Monitoring gRPC Errors in the Netsil AOC


Service interactions are critical to your digital business. HTTP/2 and gRPC deliver significant performance and reliability improvements. As you adopt these newer communication mechanisms, you can leverage the Netsil AOC to monitor and alert on the health of gRPC and HTTP/2 interactions.

You can get started free with the AOC today and gain complete visibility into the health of all your service interactions.



Copyright © 2015 - 2017 Netsil Inc. All Rights Reserved.
