Welcome to our DevSecOps journey! 🔥
This project demonstrates how to build a DevSecOps pipeline that automates the deployment of our Cloud Native application onto a Kubernetes cluster ☸️ hosted on AWS EC2 instances.
It's a dynamic e-commerce web application developed with:
🔥 Firebase - handling backend services like authentication and the database
🛠️ Tools check:
- Jenkins for CI/CD automation
- SonarQube for static code analysis
- OWASP Dependency-Check and Trivy for security scanning
- Docker for containerization
- Kubernetes for deployment
We provisioned 5 EC2 instances (a CLI sketch for launching them follows the list):
- Master (t2.medium): acts as the control node in the Kubernetes cluster.
- Node1 and Node2 (t2.medium): worker nodes for the Kubernetes cluster.
- Jenkins Instance (t2.large): hosts Jenkins, Docker, and Trivy.
- Sonar Instance (t2.medium): hosts SonarQube.
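If you prefer the AWS CLI over the console, launching one of these instances looks roughly like this; the AMI, key pair, subnet, and security group IDs are placeholders, and you would repeat it with the right instance type and Name tag for each machine:

```bash
# Placeholders only: substitute your own Ubuntu AMI, key pair, subnet, and security group.
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.medium \
  --count 1 \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxxxxxxxxxxx \
  --subnet-id subnet-xxxxxxxxxxxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-master}]'
```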
To configure the Jenkins and SonarQube instances, connect to them via SSH or directly through the AWS Console. Make sure to allow inbound traffic on the necessary ports (e.g., 8080 for Jenkins, 9000 for SonarQube) in the security group settings so the services are reachable externally (a CLI example follows).
(For the installations themselves, just follow the instructions in the official docs.)
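For reference, opening those two ports from the CLI would look something like this (the security group ID is a placeholder; doing it from the console works just as well):

```bash
# Jenkins UI
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8080 --cidr 0.0.0.0/0
# SonarQube UI
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 9000 --cidr 0.0.0.0/0
```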
- Architecture: 1 control plane node + 2 worker nodes
- Environment: AWS EC2 instances (t2.medium) with 10 GB EBS Volume
- Container Runtime: containerd
- Network Plugin: Calico
**Go create 3 EC2 instances of type t2.medium:**
- 1 control plane node
- 2 worker nodes
Security Group Configuration (a CLI sketch follows the list):
- Ingress: SSH + all traffic from the security group
- Egress: Allow all traffic
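A CLI sketch of those two ingress rules, assuming all cluster nodes share a single security group (the group ID and workstation IP below are placeholders):

```bash
# SSH from your own machine only
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 203.0.113.10/32
# All traffic between members of the same security group (lets the nodes talk to each other)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol -1 --source-group sg-xxxxxxxx
```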
Run these steps on all nodes as the root/sudo user:
sudo su
# On control plane
hostnamectl set-hostname controlplane
# On worker 1
hostnamectl set-hostname worker1
# On worker 2
hostnamectl set-hostname worker2
# Check that MAC addresses and product_uuid are unique across the nodes
ip link
cat /sys/class/dmi/id/product_uuid
# Disable swap (required for kubelet to work)
swapoff -a
# Enable IPv4 forwarding
echo "net.ipv4.ip_forward = 1" | tee /etc/sysctl.d/k8s.conf
sysctl --system
sysctl net.ipv4.ip_forward # Verify
Now install containerd as the container runtime (Docker itself also runs on top of containerd):
apt update && apt upgrade -y
apt-get install -y containerd
ctr --version # Verify installation
Manually install the CNI plugins:
mkdir -p /opt/cni/bin
wget -q https://github.com/containernetworking/plugins/releases/download/v1.7.1/cni-plugins-linux-amd64-v1.7.1.tgz
tar Cxzf /opt/cni/bin cni-plugins-linux-amd64-v1.7.1.tgz
Generate the default containerd config:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
Verify CRI is enabled:
head /etc/containerd/config.toml
# disabled_plugins list should be empty
Configure the systemd cgroup driver:
# use vi or nano it's your choice mate
# Update in [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]:
# SystemdCgroup = true
vi /etc/containerd/config.toml
# Update in [plugins."io.containerd.grpc.v1.cri"]:
# sandbox_image = "registry.k8s.io/pause:3.10"
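If you'd rather script those two edits than open the file by hand, a sed sketch like this should work against the freshly generated default config (double-check the values afterwards):

```bash
# Switch runc to the systemd cgroup driver and pin the pause image kubeadm expects.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml   # confirm both changes
```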
systemctl restart containerd
Install kubeadm, kubelet, and kubectl:
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p -m 755 /etc/apt/keyrings   # make sure the keyrings directory exists (needed on older releases)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm
apt-get install -y kubectl
apt-mark hold kubelet kubeadm kubectl   # optional: keep apt from auto-upgrading cluster components
On the control plane node:
⚠️ Note: Save the join command that will be displayed in the output!
# For the CIDR, check your AWS VPC settings: if the VPC is a /16 that doesn't clash with 192.168.0.0/16, the command below is fine and you're good to go. If not, just don't specify the network; the flag is optional, but a good idea when it fits.
kubeadm init --pod-network-cidr=192.168.0.0/16
⚠️ Note: Save all of the output; it contains the commands you'll need next.
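If you ever lose that output, the join command can be regenerated on the control plane at any time:

```bash
kubeadm token create --print-join-command
```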
On each worker node, run the join command from the previous step:
# This is just an example; use the join command printed on your own master (control plane) node
kubeadm join 172.31.25.150:6443 --token 2i8vrs.wsshnhe5zf87rhhu --discovery-token-ca-cert-hash sha256:eacbaf01cc58203f3ddd69061db2ef8e64f450748aef5620ec04308eac44bd77
There are a lot of networking plugins; I chose Calico here because it's simple.
On the control plane:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml
watch kubectl get pods -n calico-system
Log out / exit from the root user and switch back to the ubuntu user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# If the copy above gives you trouble, copy the file contents manually:
sudo cat /etc/kubernetes/admin.conf   # copy the output
nano config2                          # paste it here, save and close
sudo mv config2 $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
# Congrats!!
Troubleshooting tips (see the quick checks after this list):
- Ensure security groups allow traffic between nodes
- Check the pod network CIDR if using a network plugin other than Calico
- Verify the container runtime is properly configured
- Make sure ALL ICMP traffic is allowed in the security groups (connectivity between nodes: this lets the workers join and communicate with each other)
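A few quick checks that usually narrow down join or networking issues; the IP below is a placeholder for your control plane's private address:

```bash
# On the control plane: nodes should be Ready and all system pods Running.
kubectl get nodes -o wide
kubectl get pods -A

# On a worker that won't join: is the API server reachable, and is kubelet alive?
curl -k https://172.31.25.150:6443/healthz   # replace with your control plane's private IP
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 20
```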
Install the following Jenkins plugins:
- SonarQube Scanner
- NodeJS Plugin
- SSH Agent
- Docker plugins (Docker, Docker Commons, Docker Pipeline, Docker API, ...)
- OWASP Dependency-Check
We will also need to add nodejs20 and jdk21 in the global tool configuration.
Simply add them using the available installation options.
Go to Manage Jenkins → Credentials → Global → Add Credentials, and add:
- 🧪 SonarQube Token
  - Type: Secret Text
  - ID: Sonar-token
- 🐳 DockerHub Credentials
  - Type: Username and Password (or Secret Text with a token)
  - ID: docker
Alright people, let's move on to the initial version of our Jenkinsfile.
Here we execute all stages up to pushing the image to DockerHub (remember the workflow; try to identify and match each stage 🕵️‍♀️).
Don't forget to change the image to your own dockerhub-user/image-name:latest.
pipeline {
agent any
environment {
SCANNER_HOME = tool 'sonar-scanner'
}
tools {
jdk 'jdk21'
nodejs 'nodejs20'
}
stages {
stage('Clean Workspace') {
steps {
cleanWs()
}
}
stage('Check Node Version') {
steps {
sh 'node --version'
sh 'npm --version'
}
}
stage('Checkout Code') {
steps {
checkout scmGit(
branches: [[name: '*/master']],
extensions: [],
userRemoteConfigs: [[
credentialsId: 'github-token',
url: 'https://github.com/HafssaRaoui/e-commerce-app.git'
]]
)
}
}
stage('SonarQube Analysis') {
steps {
withSonarQubeEnv('sonar-server') {
sh '''$SCANNER_HOME/bin/sonar-scanner \
-Dsonar.projectName=e-commerce-website \
-Dsonar.projectKey=e-commerce-website \
-Dsonar.sources=src \
-Dsonar.javascript.lcov.reportPaths=coverage/lcov.info'''
}
script {
echo "SonarQube analysis complete. Check results at: http://51.44.85.43:9000/dashboard?id=e-commerce-website"
}
}
}
stage('Install Dependencies') {
steps {
script {
sh 'npm install -g @angular/cli --force' // pin a specific @angular/cli version to match your project if needed
sh 'npm install'
sh 'ng version'
}
}
}
stage('Build Angular App') {
steps {
script {
sh 'ng build e-commerce --output-path=front --configuration=production'
sh 'ls -l front'
sh 'ls -l front/browser'
sh 'mv front/browser/index.csr.html front/browser/index.html'
}
}
}
stage('OWASP Dependency Check') {
steps {
dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
}
}
stage('Scanning With Trivy') {
steps {
sh "trivy fs . > trivyfs.txt"
}
}
stage('Docker Build') {
steps {
script {
withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
sh "docker build -t hafssa260/ecom-app:latest ."
}
}
}
}
stage('TRIVY Image Scan') {
steps {
sh "trivy image hafssa260/ecom-app:latest > trivyimage.txt"
}
}
stage('Docker Push') {
steps {
script {
withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
sh "docker push scan image"
}
}
}
}
}
}
If the build passes on the first try (which is definitely not the usual case), then congrats 🎉
1️⃣ Head over to check out the Sonar analysis:

2️⃣ You should be able to have a look at the Dependency-Check results as well:

3️⃣ Go check that the image has been pushed, and test it too:

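One quick way to test the pushed image locally, assuming it serves on port 80 as the Kubernetes manifest below expects (the host port 8081 is arbitrary):

```bash
docker pull hafssa260/ecom-app:latest
docker run -d --name ecom-test -p 8081:80 hafssa260/ecom-app:latest
curl -I http://localhost:8081   # expect an HTTP 200
docker rm -f ecom-test          # clean up
```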
We will deploy our ecom-app on Kubernetes by defining the Deployment and then exposing it using a LoadBalancer Service.
Check out the .yaml file.
Add the following stage to the previous pipeline.
stage('Deploy App on k8s') {
steps {
writeFile file: 'Hafsapp.yaml', text: '''
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecom-app
  labels:
    app: ecom-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ecom-app
  template:
    metadata:
      labels:
        app: ecom-app
    spec:
      containers:
      - name: ecom-app
        image: hafssa260/ecom-app:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ecom-app
spec:
  selector:
    app: ecom-app
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
'''
sshagent(['kube']) {
sh "scp -o StrictHostKeyChecking=no Hafsapp.yaml [email protected]:/home/ubuntu"
script {
def applyStatus = sh(
script: "ssh [email protected] 'kubectl apply -f /home/ubuntu/Hafsapp.yaml'",
returnStatus: true
)
if (applyStatus != 0) {
error("Deployment failed!")
}
}
}
}
}
- We're basically copying our YAML file to the Kubernetes master node: 13.38.72.213
- We're also using SSH credentials to connect to the Kubernetes server
- Once connected, the script applies the deployment with:
kubectl apply -f /home/ubuntu/Hafsapp.yaml
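To confirm the rollout from the control plane node, something like this should do (resource names come from the manifest above):

```bash
kubectl rollout status deployment/ecom-app
kubectl get pods -l app=ecom-app -o wide
kubectl get svc ecom-app
```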
So far we have done some really good work 💪
Trigger a build and check whether the deployment passes; if it does, the website should be accessible on both worker nodes.
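Note that on a self-managed cluster (no cloud controller manager) the LoadBalancer service's external IP usually stays pending, so the app is reached through the NodePort that Kubernetes allocated; a quick check, with a worker's public IP as a placeholder:

```bash
# Find the NodePort assigned to the service and hit it on a worker's public IP.
NODE_PORT=$(kubectl get svc ecom-app -o jsonpath='{.spec.ports[0].nodePort}')
echo "NodePort: $NODE_PORT"
curl -I http://<worker-public-ip>:$NODE_PORT   # remember to open this port in the nodes' security group
```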
As a final enhancement, we will integrate a load balancer.
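One possible way to do that is a classic ELB that forwards port 80 to the NodePort from the previous step; everything below (subnet, security group, instance IDs, and the NodePort value) is a placeholder, and the same setup can be done from the console:

```bash
aws elb create-load-balancer --load-balancer-name ecom-lb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=30080" \
  --subnets subnet-xxxxxxxx --security-groups sg-xxxxxxxx
aws elb register-instances-with-load-balancer --load-balancer-name ecom-lb \
  --instances i-xxxxxxxxxxxxxxxxx i-yyyyyyyyyyyyyyyyy
```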