In this blog, I will explain the concept of a Service Mesh, why your cloud native applications need one, and the reasons behind its popularity and rapid adoption within the community.
Microservices have taken the software industry by storm, and rightly so. Transitioning from a monolith to a microservices architecture enables you to deploy your application more frequently, independently and reliably.
However, not everything is rosy in a microservices architecture: it has to deal with the same problems encountered when designing distributed systems.
On this note, why not recap the Eight Fallacies of Distributed Computing —
- The Network is Reliable
- Latency is Zero
- Bandwidth is Infinite
- The Network is Secure
- Topology does not Change
- There is one Administrator
- Transport Cost is Zero
- The Network is Homogenous
A microservices architecture introduces a dependency on the network, and with it all the reliability questions that come along. As the number of services increases, you have to deal with the interactions between them, monitor overall system health, be fault tolerant, have logging and telemetry in place, handle multiple points of failure, and more. Each service needs these common capabilities in place so that service-to-service communication is smooth and reliable. That sounds like a lot of effort when you are dealing with dozens of microservices, doesn't it?
What is a Service Mesh?
A Service Mesh can be defined as an infrastructure layer that handles inter-service communication in a microservices architecture. A Service Mesh reduces the complexity associated with microservices and provides a lot of functionality, such as:
- Load Balancing
- Service Discovery
- Health Check
- Traffic Management & Routing
- Circuit Breaking and Failover Policy
- Metrics & Telemetry
- Fault Injection
Why is Service Mesh necessary?
In a microservices architecture, handling service-to-service communication is challenging, and most of the time we depend on third-party libraries or components to provide functionality like Service Discovery, Load Balancing, Circuit Breaking, Metrics, Telemetry and more. Companies like Netflix came up with their own libraries, such as Hystrix for circuit breaking, Eureka for service discovery and Ribbon for load balancing, which are popular and widely used by organizations.
However, these components need to be configured inside your application code, and the implementation varies with the language you are using. Any time these external components are upgraded, you need to update your application, verify it and deploy the changes. Your application code also becomes a mixture of business functionality and this additional plumbing. Needless to say, this tight coupling increases the overall application complexity, since developers now also need to understand how these components are configured in order to troubleshoot issues.
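To make that coupling concrete, here is a minimal sketch (with made-up names and thresholds) of the kind of circuit-breaker logic that otherwise ends up hand-rolled inside every service, in every language:

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker sketch: after a number of consecutive
    failures the circuit 'opens' and calls fail fast until a cooldown
    period has passed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        # If the circuit is open, fail fast until the cooldown expires.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown over: allow a trial call ("half-open" state).
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Every service needs some variant of this (plus retries, discovery, metrics and so on); a service mesh moves exactly this kind of plumbing out of the application and into the proxy.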
This is where a Service Mesh comes to the rescue. It decouples this complexity from your application and puts it in a service proxy, which handles it for you. These service proxies can provide functionality like traffic management, circuit breaking, service discovery, authentication, monitoring, security and much more. From the application's standpoint, all it contains is the implementation of its business functionality.
Say your microservices architecture has five services talking to each other. Instead of building the common functionality (configuration, routing, telemetry, logging, circuit breaking and so on) into every microservice, it makes more sense to abstract it into a separate component, called the 'service proxy' here.
With the introduction of a Service Mesh, there is a clear segregation of responsibilities. This makes developers' lives easier: if there is an issue, they can quickly identify whether the root cause is application or network related.
How is Service Mesh implemented?
To implement a service mesh, you deploy a proxy alongside each of your services. This is also known as the Sidecar Pattern.
The sidecars abstract this complexity away from the application and handle functionality like Service Discovery, Traffic Management, Load Balancing, Circuit Breaking and more.
Envoy, from Lyft, is the most popular open source proxy designed for cloud native applications. Envoy runs alongside every service and provides the necessary features in a platform-agnostic manner. All traffic to your service flows through the Envoy proxy.
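On Kubernetes, the Sidecar Pattern looks roughly like the sketch below: a second container in the same Pod as the application. The image names, ports and ConfigMap here are purely illustrative (in practice, a mesh like Istio injects and configures this container for you):

```yaml
# Sketch of a Pod running an application container together with an
# Envoy sidecar; service traffic is routed through the proxy.
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: orders                  # the application itself
      image: example/orders:1.0     # hypothetical application image
      ports:
        - containerPort: 8080
    - name: envoy-sidecar           # the service proxy
      image: envoyproxy/envoy:v1.27.0
      args: ["-c", "/etc/envoy/envoy.yaml"]
      volumeMounts:
        - name: envoy-config
          mountPath: /etc/envoy
  volumes:
    - name: envoy-config
      configMap:
        name: orders-envoy-config   # hypothetical ConfigMap holding the Envoy config
```

Because the sidecar shares the Pod's network namespace with the application, it can intercept traffic without the application knowing the proxy is there.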
What is Istio?
Istio is an open platform to connect, manage and secure microservices. It is very popular in the Kubernetes community and is being widely adopted.
Istio adds capabilities to your microservices architecture such as intelligent routing, load balancing, service discovery, policy enforcement, in-depth telemetry, circuit breaking, retries, logging, monitoring and more.
Istio is one of the best implementations of a Service Mesh at this point. It enables you to deploy microservices without in-depth knowledge of the underlying infrastructure.
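As a taste of what this looks like in practice, here is a sketch of an Istio DestinationRule that applies circuit breaking to a hypothetical `reviews` service, entirely outside the application code (the service name and thresholds are made up for illustration):

```yaml
# Sketch: eject a backend from the load-balancing pool after repeated
# 5xx errors, Istio's form of circuit breaking.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews                  # hypothetical service name
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100   # cap queued requests
    outlierDetection:
      consecutive5xxErrors: 5    # trip after 5 consecutive 5xx responses
      interval: 10s              # how often hosts are scanned
      baseEjectionTime: 30s      # how long a tripped host stays ejected
```

Changing these thresholds is a configuration update applied to the mesh; no service needs to be rebuilt or redeployed.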
As more and more organizations break their monoliths into microservices, they reach a point where managing the growing number of services becomes a burden. A Service Mesh comes to the rescue in such scenarios, abstracting away these complexities without requiring any changes to the application.
In case you have any comments or questions about this blog, please feel free to leave a comment and I would be happy to discuss.
In my upcoming articles, I will talk more about Istio and its capabilities in detail.