Microservices communicate heavily over the network. As the number of services in your architecture grows, so does the risk posed by an unreliable network. Handling service-to-service communication within a microservices architecture is challenging, which is why the recommended approach has been to build services with smart endpoints and dumb pipes.
The first fallacy in the well-known list of the ‘Eight Fallacies of Distributed Computing‘ is that the ‘Network is reliable’.
Service calls made over the network might fail. There can be congestion in the network, or a power failure impacting your systems. The request might reach the destination service, but the response might never make it back to the caller. Data might get corrupted or lost during transmission over the wire. While architecting distributed cloud applications, you should assume that these types of network failures will happen and design your applications for resiliency.
To handle this scenario, you should implement automatic retries in your code when such a network error occurs. Say one of your services is unable to establish a connection because of a network issue; retry logic can automatically re-establish the connection.
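A minimal retry sketch in Python, assuming a flaky operation that raises `ConnectionError` on transient network failures. The function name, parameters, and backoff values are illustrative, not from any specific library; exponential backoff with jitter avoids hammering a struggling service.

```python
import random
import time

def with_retries(operation, max_attempts=4, base_delay=0.5):
    """Call operation(), retrying transient network errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # back off: base_delay, 2x, 4x, ... plus jitter to avoid thundering herds
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

A caller would wrap the network call, e.g. `with_retries(lambda: client.get(url))`, so transient failures are absorbed while persistent ones still surface as exceptions.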
Implementing message queues can also help in decoupling dependencies. Based on your business use case, you can introduce queues to make your services more resilient: messages in the queue won’t be lost until they are processed. If your use case allows ‘Fire and Forget’ style requests rather than synchronous Request/Response, queues are a good way to reduce tight coupling between components and keep the system reliable when the network misbehaves or downstream systems are down.
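To make the fire-and-forget pattern concrete, here is a minimal in-process sketch using Python's standard `queue` and `threading` modules. A real deployment would use a durable broker (e.g. RabbitMQ or SQS); this toy version only illustrates the decoupling: the producer enqueues and moves on, and messages wait in the queue until the worker processes them.

```python
import queue
import threading

def start_worker(q, results):
    """Start a background consumer that processes messages until a None sentinel."""
    def worker():
        while True:
            msg = q.get()          # blocks until a message is available
            if msg is None:        # sentinel: shut the worker down
                break
            results.append(f"processed {msg}")
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

q = queue.Queue()
results = []
t = start_worker(q, results)
for i in range(3):
    q.put(i)      # fire and forget: the producer does not wait for a response
q.put(None)       # tell the worker to finish
t.join()
```

Because the producer never blocks on the consumer, a slow or temporarily unavailable worker does not stall the caller, which is exactly the coupling the queue removes.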
Smart Endpoints and Dumb Pipes has been one of the guiding design principles for microservices over the last decade.
The responsibility of the network is simply to transfer messages from source to destination. The responsibility of the microservices is to handle business logic, transformations, validations, and message processing.
But with the rise of microservices architecture, containers, DevOps, and Kubernetes, how do we manage cross-cutting functionality like traffic management, routing, telemetry, and policy enforcement?
A good answer is to make the pipes smarter. A Service Mesh rises to this challenge by making the network itself much smarter. Instead of building functionality like load balancing, service discovery, circuit breaking, authentication, security, and routing into each individual microservice, a Service Mesh pushes these capabilities down to the network/infrastructure layer.
The evolution of Service Mesh architecture has been a game changer. It moves this complexity out of your application and into a service proxy, which handles it for you. These service proxies can provide capabilities such as traffic management, circuit breaking, service discovery, authentication, monitoring, security, and much more.
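To illustrate what the mesh offloads, here is a minimal in-process circuit breaker sketch in Python. This is exactly the kind of logic a service proxy handles for you at the infrastructure layer; the class and thresholds here are illustrative, not taken from any mesh implementation. After repeated failures the circuit opens and calls fail fast, then after a cooldown it allows a trial call (half-open).

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: open after repeated failures, fail fast while open,
    and permit a single trial call after a cooldown (half-open state)."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

With a mesh like Istio, an equivalent policy is declared as configuration on the proxy rather than written into each service, so every microservice gets this behavior without code changes.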
Istio is one of the best implementations of a Service Mesh at this point. It enables you to deploy microservices without in-depth knowledge of the underlying infrastructure, and it solves complex requirements without requiring changes to your microservices’ application code. Istio reduces the complexity of running a distributed microservices architecture. It is platform independent and language agnostic, so it does not matter which language your containerized application is written in.
Istio leverages Envoy’s many built-in features, such as service discovery, load balancing, circuit breakers, fault injection, observability, and metrics. Envoy is deployed as a sidecar alongside the relevant service in the same Kubernetes pod. The sidecar proxy model also allows you to add Istio capabilities to an existing deployment without rearchitecting or rewriting code.