This repo contains a step-by-step demo of building a simple e-commerce microservices app from scratch,
based on Spring Boot (Spring Cloud, Spring Cloud Gateway, Spring Data JPA, Spring Web, ...), deployed on Kubernetes and on AWS using EKS.
The content explains core concepts of the microservices architecture such as : service discovery, load balancing, distributed tracing, messaging & the Advanced Message Queuing Protocol, ...
The app is composed of 5 microservices [customer - product - order - payment - notification] that communicate with each other using REST APIs (with OpenFeign) and a messaging system (RabbitMQ).
Throughout the demonstration and documentation, we will cover a number of concepts of the microservices architecture :
- Service Discovery
- Traffic management : Spring Cloud Gateway - Zipkin
- Resiliency : Load balancing strategy
- Observability : Spring Boot Actuator - Prometheus - Grafana - Tracing
- Architecture diagram :
- Deployment diagram :
- 0. Setup parent Module
- 1. Setup first microservice
- 2. Postgres & PGAdmin on docker
- 3. Create new microservices
- 4. Migrate from ModelMapper to MapStruct
- 5. Communication between microservices using RestTemplate
- 6. Eureka Service Discovery
- 7. Migrate from RestTemplate to OpenFeign
- 8. Load Balancing concept
- 9. Spring Cloud Gateway
- 10. Distributed Tracing Sleuth & Zipkin
- 11. Notification with RabbitMQ
- 12. Feign clients module
- 13. Containerizing microservices using JIB maven plugin
- 14. Deploy microservices to local Kubernetes
- 15. Configure AWS EKS cluster
- 16. Create databases on AWS RDS
- 17. Monitoring microservices with Prometheus and Grafana
- 18. Deploy microservices to EKS cluster using GitHub Actions
- 19. Deployment workflow diagram
mvn archetype:generate -DgroupId=dev.nano -DartifactId=ecomicroservices -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.4 -DinteractiveMode=false
In order to create microservice instances on top of the parent module, we need to generate a submodule for each microservice and then add the necessary dependencies.
- Add maven dependencies for customer submodule in pom.xml :
- Spring data JPA
- Spring Web
- PostgreSQL JDBC Driver
- Hibernate Validator
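The dependency section of the customer submodule's pom.xml might look like the following sketch (versions are assumed to be managed by the Spring Boot parent; the exact coordinates in the repo may differ):

```xml
<dependencies>
    <!-- Spring Data JPA -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <!-- Spring Web -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- PostgreSQL JDBC Driver -->
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <scope>runtime</scope>
    </dependency>
    <!-- Hibernate Validator -->
    <dependency>
        <groupId>org.hibernate.validator</groupId>
        <artifactId>hibernate-validator</artifactId>
    </dependency>
</dependencies>
```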
- Add classes [controller - service - repository] and resources.
- Add customized banner in resource folder.
- Configure the database connection in the `application.yml` file.
- Configure the docker-compose file by adding the Postgres and PGAdmin (GUI) images.
- Connect to the DB using PGAdmin.
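A minimal docker-compose sketch for this step might look as follows (image tags, credentials, and port mappings are illustrative assumptions, not the repo's exact file):

```yaml
services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_USER: miliariadnane   # assumed user, matching the psql commands used later
      POSTGRES_PASSWORD: password    # placeholder, change locally
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com   # placeholder login for the GUI
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
volumes:
  postgres-data:
```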
- Create new submodules for each remaining microservice (same steps as the customer microservice) : product, order, payment and notification.
In the beginning, I used ModelMapper to map entities to DTOs and vice versa, but I found that MapStruct is more efficient and faster than ModelMapper. That's why I decided to move to MapStruct.
MapStruct is a library that maps objects between two classes, same as ModelMapper. MapStruct is more flexible and faster than ModelMapper, and it is easier to use.
- Some resources explain why MapStruct is more performant than ModelMapper :
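To illustrate the performance difference, here is a sketch with a hypothetical Customer entity and CustomerDTO (the real classes in the repo may differ). With MapStruct the mapper is just an annotated interface, and the generated implementation is plain field-by-field copying, roughly equivalent to the hand-written class below:

```java
// Hypothetical entity and DTO, for illustration only.
class Customer {
    String name;
    String email;
    Customer(String name, String email) { this.name = name; this.email = email; }
}

class CustomerDTO {
    String name;
    String email;
}

// With MapStruct you would only declare the interface:
//
//   @Mapper
//   public interface CustomerMapper {
//       CustomerDTO toDto(Customer customer);
//   }
//
// The annotation processor then generates code at build time that is
// essentially the class below. That is why MapStruct is faster than
// ModelMapper: there is no reflection at runtime, just direct assignments.
class CustomerMapperImpl {
    CustomerDTO toDto(Customer customer) {
        if (customer == null) {
            return null;
        }
        CustomerDTO dto = new CustomerDTO();
        dto.name = customer.name;
        dto.email = customer.email;
        return dto;
    }
}
```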
After creating all the microservices, they need to exchange HTTP requests via RestTemplate
to get data from one another.
- Create a configuration file for the RestTemplate bean
- Inject the RestTemplate into the service class using dependency injection
- Call the other microservice using RestTemplate
// Example in the customer microservice
OrdersResponse ordersResponse = restTemplate.postForObject(
        "http://localhost:8003/api/v1/orders",
        customerOrdersRequest,
        OrdersResponse.class
);
Service discovery is a mechanism for detecting the devices & services running on the network, in order to connect to them.
- So in the end, Service Discovery knows all the client applications running on each IP address & port.
- Steps :
- Microservices register with the Eureka server.
- A client looks up a service through the Eureka server.
- The Eureka server returns the service information.
Medium Photo By Isuru Jayakantha
- Add a new module for the Eureka server (server side)
- Add the Eureka client dependency in `pom.xml` for all microservices :
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
- Add the configuration in application.yml file
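The client-side configuration in application.yml is roughly the following sketch (the defaultZone URL assumes the Eureka server runs locally on its default port 8761):

```yaml
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka   # where the Eureka server listens
    register-with-eureka: true    # publish this instance to the registry
    fetch-registry: true          # pull the registry to discover other services
```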
- Enable Eureka in the microservices using the @EnableEurekaClient annotation in the Application class.
- You can check the Eureka dashboard with the current instances registered with Eureka at : http://localhost:8761
- Now we can replace the URL of the service (in the RestTemplate call) with its name (the name in the application.yml file) :
- To do so, we add a new annotation to the RestTemplate bean : @LoadBalanced
- This annotation load-balances the calls to the service, because it lets us use the name of the service instead of the host and port that pin a single instance.
OpenFeign is a client-side library that simplifies calling RESTful web services. OpenFeign gives us the flexibility to use APIs from different microservices by simply creating interfaces (CustomerOrdersFeignClient & CustomerProductFeignClient) whose methods mirror the controller endpoints.
- Add maven dependency on the parent module (pom.xml) :
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
- Enable OpenFeign on a microservice using the @EnableFeignClients annotation in the Application class.
- Create interfaces for the clients (CustomerOrdersFeignClient & CustomerProductFeignClient)
- Replace RestTemplate with the OpenFeign clients in the service class.
- Use case : Customer and Payment communication via Eureka Server
- _Load balancing_ is the process of distributing network traffic across multiple servers, to ensure no single server bears too much demand.
- There is a variety of load balancing methods (which decide which microservice instance receives the request), using different algorithms best suited to particular situations:
- Round Robin Method — requests are distributed across the group of servers sequentially.
- Least Connection Method — directs traffic to the server with the fewest active connections.
- Least Response Time Method — sends requests to the server selected by a formula that combines the fastest response time & fewest active connections.
- IP Hash — the IP address of the client determines which server receives the request.
- etc.
- In the LB we can manage a lot of things, like : authentication, high availability, logging, caching, TLS, etc.
- But in production we don't want to manage all of this ourselves; we can rely on providers (AWS, Azure, Google, etc.).
- There's another concept in LB called Health Check, which is used to check the availability of a service to respond to requests.
- Some resources explain LB :
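To make the first strategy concrete, here is a minimal round-robin sketch in plain Java. This is not Spring's actual implementation (Spring Cloud LoadBalancer does this for you behind @LoadBalanced); it only shows the idea of cycling through instances:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin chooser: each call returns the next instance
// in the list, wrapping around at the end.
class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    String choose() {
        // floorMod keeps the index non-negative even after int overflow
        int index = Math.floorMod(position.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```

Requests are spread evenly: with two instances, calls alternate between them.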
Spring Cloud Gateway is a service that routes requests to different microservices.
It helps with traffic management (redirecting traffic), load balancing, circuit breaking, etc.
- Locally we will adopt this API Gateway, even though it requires configuring a bunch of stuff. In production we would rather use providers (AWS, Azure, Google, etc.) and their load balancing features.
- Create new module for Spring Cloud Gateway.
- Add maven dependency for Spring Cloud Gateway in
pom.xml
:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
- Set up the routes in the `application.yml` file.
- So at this step, instead of sending requests to the customer service directly at "http://localhost:**`8001`**/api/v1/customers",
we will send requests to the port of the api-gateway : "http://localhost:**`8004`**/api/v1/customers".
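A sketch of that route configuration (the service names and the gateway port follow the examples above; the exact predicates in the repo may differ):

```yaml
server:
  port: 8004          # the api-gateway port used in the example above
spring:
  cloud:
    gateway:
      routes:
        - id: customer
          uri: lb://CUSTOMER        # lb:// resolves the name via Eureka, load-balanced
          predicates:
            - Path=/api/v1/customers/**
        - id: order
          uri: lb://ORDER
          predicates:
            - Path=/api/v1/orders/**
```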
Sleuth & Zipkin make up a distributed tracing stack that allows you to trace the flow of requests and responses between microservices.
- Furthermore, distributed tracing with Zipkin helps us debug our microservices (when there is latency or there are errors).
- Add the Sleuth and Zipkin dependencies in the `pom.xml` of all microservices :
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
- Add config to application yml files (Example in product microservice) :
#Spring
spring:
application:
name: product
zipkin:
base-url: http://localhost:9411
- Screenshots of Zipkin UI in our app :
- A message broker is a service that allows you to send and receive messages or notifications.
- The purpose is to make the communication between microservices asynchronous.
- In case of an outage of the notification microservice, we can keep sending requests asynchronously to the message broker
until the notification microservice is up again.
- Another use case is creating many instances of the notification microservice to handle the load.
But in this case we don't know how many instances we need, so we will have **_scaling issues_**.
Solution :
- The solution in both cases is to add a message broker to the application: in case of an outage of the notification microservice, requests are sent asynchronously to the message broker
and held until the notification microservice is up again.
- AMQP stands for : Advanced Message Queuing Protocol.
It is a messaging protocol that allows client apps to communicate with each other through messaging middleware brokers (RabbitMQ, Kafka, etc.).
- The Product / Customer & Orders microservices are the producers of messages in our app.
- The Notification microservice is the consumer of messages in our app.
- Our queue lives in the RabbitMQ broker.
- Illustration :
Note : We can create multiple queues in RabbitMQ.
- Add a new module called "amqp", which contains the RabbitMQ configuration and the RabbitMQ producer.
- RabbitMQConfig :
- Sets up the exchange, the queue and the routing key.
- Binds the queue to the exchange, in order to route messages to a specific queue.
- RabbitMQProducer :
- Contains the publish method that sends messages to the queue.
- Add the maven dependency for RabbitMQ in `pom.xml` :
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
- Consume messages from the queue in the notification microservice, and store the info in the DB (example with customer notifications).
@Slf4j
@Component
@RequiredArgsConstructor
public class NotificationConsumer {

    private final NotificationService notificationService;

    @RabbitListener(queues = "${rabbitmq.queue.notification}")
    public void consumer(CustomerNotificationRequest customerNotificationRequest) {
        log.info("Consumed {} from queue", customerNotificationRequest);
        notificationService.sendNotification(customerNotificationRequest);
    }
}
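The `${rabbitmq.queue.notification}` placeholder in the listener resolves from configuration. A matching application.yml sketch could look like this (only the queue property name comes from the listener; the exchange and routing-key names are assumptions):

```yaml
rabbitmq:
  exchanges:
    internal: internal.exchange        # assumed exchange name
  queue:
    notification: notification.queue   # referenced by the @RabbitListener above
  routing-keys:
    internal-notification: internal.notification.routing-key
```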
- Add separate module for Feign clients.
- Whenever a microservice needs to call an API of another microservice, we add the feign-clients module to it as a dependency.
- For example, in customer microservice, we will add feign clients to pom.xml in order to use product and order APIs.
Jib
is a Maven plugin that builds a Docker image for your app.
- Go check my blog post about Jib for more details about this Google plugin.
- Add the Jib plugin in the `pom.xml` shared by all the microservices, using a `profiles` tag :
<profiles>
<profile>
<id>jib-build-push-docker-image</id>
<activation>
<activeByDefault>false</activeByDefault>
</activation>
<build>
<plugins>
<plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>jib-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</profile>
</profiles>
- We should also configure **docker-jib** in the parent module by specifying the Java base image and its version, as well as the goal of the build (check the pom.xml of the parent module).
- Create a new docker config file for the application that matches the images of each microservice.
- Update docker-compose.yml file by adding the images, ports and the relationships between them.
- Build the new images using the following commands :
docker login # (if you are not logged in)
!! Note !! : Concerning the packaging of the microservices to JARs, we first need to run a Maven install from the root, to be able to use the modules as dependencies in other modules.
mvn clean package -P build-docker-image
- Run the following command :
docker-compose up -d
- Create databases via PGAdmin
- Kubernetes is a container orchestration tool that allows you to create a cluster of containers and deploy them locally or on different cloud providers.
- K8S manages and deploys applications.
- It gives us the possibility to scale up and down according to the needs of our application.
- The K8S cluster is divided into two types of nodes : the master node and the worker nodes.
The master node is responsible for managing the cluster (the brain of the cluster), where all the decisions are made.
The worker nodes are responsible for running the applications.
- To deploy our microservices to Kubernetes, we need to create a Deployment and a Service for each microservice.
- A K8S Deployment is a manifest that facilitates deploying the app in pods (in K8S we talk about pods, not containers).
- A Deployment lets us create ReplicaSets (instances of the pod) and configure them, whereas a bare Pod gives us just one instance.
- A Service manifest in K8S defines how we can access the application.
- For testing purposes, we can use port forwarding to access the application.
- There are different types of services :
- ClusterIP
- LoadBalancer
- NodePort
- ExternalName
- Check k8S directory -> minikube -> services.
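As an illustration, a Deployment plus a ClusterIP Service for the customer microservice might look like this sketch (image name, replica count, and ports are assumptions; check the repo's manifests for the real values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
spec:
  replicas: 2                      # the ReplicaSet keeps two pods running
  selector:
    matchLabels:
      app: customer
  template:
    metadata:
      labels:
        app: customer
    spec:
      containers:
        - name: customer
          image: miliariadnane/customer:latest   # hypothetical image name
          ports:
            - containerPort: 8001
---
apiVersion: v1
kind: Service
metadata:
  name: customer
spec:
  type: ClusterIP                  # reachable only from inside the cluster
  selector:
    app: customer                  # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 8001
```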
Note: We should never deploy our DB on K8S, because if the node dies you will lose the whole data of the application. Instead, we can use managed services like RDS on AWS.
- Deploy Postgres to K8S :
- Create a config-map that contains the credentials of the database.
- Create a service manifest in order to define how we can access the database (ClusterIP).
- Create a volume using "PersistentVolume" and "PersistentVolumeClaim".
- For Postgres we will use a statefulset instead of a deployment, because stateful sets support long-lived apps such as databases (data survives a pod restart).
-
Run the following commands :
minikube start
cd k8s/miniKube
kubectl apply -f bootstrap/postgres
- Shell into the postgres pod in order to create the DBs (all of this just for local testing purposes) :
kubectl get pods (retrieve the "postgres" pod name)
kubectl exec -it postgres-0 -- psql -U miliariadnane
CREATE DATABASE customer; CREATE DATABASE "order"; CREATE DATABASE product; CREATE DATABASE notification;
("order" must be quoted because it is a reserved word in SQL.)
- Same thing for RabbitMQ and Zipkin after creating the manifests :
kubectl apply -f bootstrap/rabbitmq
kubectl apply -f bootstrap/zipkin
- Kubernetes provides us with a load balancer that manages the requests to the microservices,
so the best scenario is to use a managed LB from cloud providers like AWS, GCP, etc. instead of configuring it ourselves.
- Kubernetes provides us a service discovery mechanism for free without any configuration.
- Some resources here explain how k8s give us Service Discovery dynamically :
- Kubernetes Ingress is an API object that provides routing rules to manage external users access to the services in a Kubernetes cluster.
- Ingress can provide load balancing, SSL termination and name-based virtual hosting.
- We add an Ingress to our Kubernetes minikube cluster to access our services from outside the cluster, by specifying a customized path and host name for each microservice, and also to manage load balancing.
To read more about ingress, you can check k8s documentation : Ingress k8s doc
- Create a new cluster (master node) :
- Create an AWS account (free tier for testing purposes).
- Create a VPC.
- Create an IAM (Identity and Access Management) role with a Security Group.
- When we say "IAM role", it means creating an AWS user. And "Security Group" means a list of permissions that the user can access.
- Create a cluster of node groups (worker nodes) :
- Choose a name for the cluster, the k8s version, the region and the VPC.
- Set security groups for the cluster.
- Create worker nodes & connect them to the cluster :
- We will create the worker nodes using "EC2 instances".
- Create the worker nodes as a group of nodes.
- Choose the cluster they will attach to.
- Define security groups, instance type, resources.
- Define the max & min number of nodes (autoscaling).
3. Setup EKS (solution 2) : using eksctl
- Check this link
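As a sketch, eksctl can create the whole cluster from a single config file. The cluster name and region below follow the `update-kubeconfig` example later in this doc; the instance type and node counts are illustrative assumptions:

```yaml
# cluster.yaml — applied with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-microservices
  region: ca-central-1
nodeGroups:
  - name: workers
    instanceType: t3.medium   # assumed instance type
    desiredCapacity: 2
    minSize: 1                # autoscaling bounds
    maxSize: 3
```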
Monitoring
is a concept that allows us to supervise the performance of our application using metrics like HTTP traces, health checks, CPU, memory, etc.
- All of this information can be exposed and collected by Prometheus and Grafana using micrometer.
- Micrometer is a library that provides a metrics facade: it collects metrics from JVM-based applications and services, then exposes this data to external applications.
- Add "micrometer" and "prometheus" to the dependencies of microservices :
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
<version>1.9.2</version>
</dependency>
- Add endpoints configuration to the application.yml :
management:
endpoints:
web:
exposure:
include: "*"
endpoint:
health:
show-details: always
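On the Prometheus side, a scrape job pointing at the actuator endpoint could look like this sketch (the target host and port are assumptions for a locally running service):

```yaml
scrape_configs:
  - job_name: "customer-service"
    metrics_path: "/actuator/prometheus"   # exposed by the micrometer prometheus registry
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8001"]        # assumed local port of the service
```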
- Add the Prometheus and Grafana docker images to docker-compose.
- Monitoring diagram :
- Login to Docker Hub
- Maven clean package & push the custom images to Docker Hub
- Update the image versions in the deployment/service manifests and commit the changes
- Configure AWS credentials to access the EKS cluster
- Update kube config
- Deploy the services to the EKS cluster (apply the manifests)
- Add the DB username & password as environment variables :
env:
  - name: SPRING_PROFILES_ACTIVE
    value: eks
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: dbsecret
        key: db_username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: dbsecret
        key: db_password
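The steps above can be sketched as a GitHub Actions workflow. The secret names, the manifests path, and the Java version are assumptions; the Maven profile id and cluster coordinates reuse the values shown elsewhere in this doc:

```yaml
name: deploy-to-eks
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: "17"            # assumed JDK version
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # assumed secret names
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push images with Jib
        run: mvn clean package -P build-docker-image
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ca-central-1
      - name: Update kube config
        run: aws eks update-kubeconfig --name demo-microservices --region ca-central-1
      - name: Apply manifests
        run: kubectl apply -f k8s/eks   # assumed manifests directory
```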
- Install kubectl : link
- Install aws cli : link
- Configure the credentials file on the local machine : link
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
- Install eksctl :
- Add IAM permissions : the IAM security principal that you're using must have permissions to work with Amazon EKS IAM roles and service-linked roles, and a VPC and related resources.
- Verify the kubectl configuration :
kubectl cluster-info
- Update the kube config (it lives in ~/.kube/config) by running the following command :
Usage : aws eks update-kubeconfig --name [name-of-eks-cluster] --region [aws-region]
aws eks update-kubeconfig --name demo-microservices --region ca-central-1
- Check the nodes status :
kubectl get nodes
- Add the secrets :
kubectl create secret generic dbsecret --from-literal=db_username=your-db-username --from-literal=db_password=your-db-password
- db_username & db_password are the environment variables used in our eks-application.yml files.