Continuous Deployment in GitHub Actions

--

Image by Chris Pagan

ุจูุณู’ู…ู ุงู„ู„ูŽู‘ู‡ู ุงู„ุฑูŽู‘ุญู’ู…ูŽู†ู ุงู„ุฑูŽู‘ุญููŠู’ู…ู

After writing the CI script, it is time to write the CD script. Before that, though, we have to set up repository secrets, which we will later use to push the Docker image artifacts to GitHub Container Registry and to let the deployment job authenticate to the Kubernetes cluster.

First, open the repository settings:

Then go to the Secrets menu.

Once the secrets have been added, we can move on to writing the CD script.
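If you prefer the command line, the GitHub CLI can add the same secrets. This is only a minimal sketch: the secret names GCP_CREDENTIALS and GKE_PROJECT match the ones referenced by the workflow below, while gcp-sa-key.json and my-gcp-project-id are placeholder values.

# Store the GCP service-account key as a repository secret
gh secret set GCP_CREDENTIALS < gcp-sa-key.json
# Store the GCP project id used by the workflow
gh secret set GKE_PROJECT --body "my-gcp-project-id"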

Alright, let's get straight into the CD script.

name: CD to Kubernetes Cluster
on:
  workflow_dispatch:
jobs:
  Deployment:
    env:
      PROJECT_ID: ${{ secrets.GKE_PROJECT }}
      GKE_CLUSTER: my-first-cluster-1
      GKE_ZONE: us-central1-c
      DEPLOYMENT_NAME: simple-go-api
    name: Deploy To Kubernetes
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v0.4.0
        with:
          credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
      - name: Set up GKE credentials
        uses: google-github-actions/get-gke-credentials@v0.4.0
        with:
          cluster_name: ${{ env.GKE_CLUSTER }}
          location: ${{ env.GKE_ZONE }}
      - name: Deploy
        run: |-
          cd kubernetes/
          kubectl delete all --all -n experiment
          kubectl apply -f namespace.yaml
          kubectl apply -f deployment.yaml
          kubectl apply -f autoscale.yaml
          kubectl apply -f ingress.yaml
          kubectl get all -n experiment
          kubectl get services -o wide -n experiment

Deploy to Kubernetes Cluster
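Because the workflow is triggered by workflow_dispatch, it only runs when we start it manually, either from the Actions tab or, as a quick alternative, from the GitHub CLI (the workflow name below matches the name: field above):

# Trigger the CD workflow by hand and follow the run
gh workflow run "CD to Kubernetes Cluster"
gh run watch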

In this last step, we deploy our Docker image to the Kubernetes cluster we created earlier.

File namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: backend
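Once the workflow has applied this manifest, the namespace can be double-checked from any machine whose kubeconfig points at the same cluster:

# Confirm the namespace exists
kubectl get namespace backend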

File deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-go-api
  namespace: backend
spec:
  selector:
    matchLabels:
      run: simple-go-api
  template:
    metadata:
      labels:
        run: simple-go-api
    spec:
      securityContext:
        runAsUser: 10001
      containers:
        - name: simple-go-api
          image: ghcr.io/mrofisr/simple:latest
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: 500m
            requests:
              cpu: 200m
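After the Deploy step has run, it is worth confirming that the Deployment actually rolled out. A small sketch, assuming kubectl is pointed at the cluster; the label run=simple-go-api comes from the selector above:

# Wait until the rollout finishes
kubectl -n backend rollout status deployment/simple-go-api
# List the pods created by this Deployment
kubectl -n backend get pods -l run=simple-go-api -o wide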

File autoscale.yaml

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: simple-go-api
  namespace: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: simple-go-api
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
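The same autoscaler can also be created imperatively, which is handy for quick experiments. Note that the HPA only works when metrics-server is installed in the cluster; the command below is a rough equivalent of the manifest above:

# Imperative equivalent of autoscale.yaml (needs metrics-server)
kubectl -n backend autoscale deployment simple-go-api --cpu-percent=50 --min=1 --max=5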

Once the whole process succeeds, the output will look like this:

Testing Horizontal Pod Autoscaling

As the final step, we test the HPA we just created; here I use Apache Bench to generate the load.

Testing:

mrofisr@mrofisr ~ % ab -n 10000 -c 1000 http://34.124.208.75:31000/quotes
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 34.124.208.75 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Server Software:
Server Hostname: 34.124.208.75
Server Port: 31000
Document Path: /quotes
Document Length: 224 bytes
Concurrency Level: 1000
Time taken for tests: 60.828 seconds
Complete requests: 4845
Failed requests: 4795
(Connect: 0, Receive: 0, Length: 4795, Exceptions: 0)
Total transferred: 1819471 bytes
HTML transferred: 1291366 bytes
Requests per second: 79.65 [#/sec] (mean)
Time per request: 12554.697 [ms] (mean)
Time per request: 12.555 [ms] (mean, across all concurrent requests)
Transfer rate: 29.21 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 28 404 468.1 288 3500
Processing: 305 10650 2602.1 11615 14534
Waiting: 305 10649 2601.9 11615 14534
Total: 352 11054 2581.8 11930 14895
Percentage of the requests served within a certain time (ms)
50% 11929
66% 12199
75% 12457
80% 12610
90% 12817
95% 13069
98% 13320
99% 14064
100% 14895 (longest request)
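While ab is hammering the endpoint, it helps to watch the HPA and the pods from a second terminal so you can see the scale-out happen (again assuming kubectl points at the same cluster):

# Watch the HPA status and replica count during the load test
kubectl -n backend get hpa simple-go-api --watch
# In another terminal: watch new pods being added
kubectl -n backend get pods -l run=simple-go-api --watch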

Before:

After:

After CPU usage returns to normal:

Conclusion

In this case I set up a rule so that when CPU usage exceeds 50%, Kubernetes scales the workload out to four pods, keeping the application running smoothly.
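For context, this behaviour follows the standard HPA scaling rule:

desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)

So if, for example, a single pod were observed at around 200% of its CPU request against the 50% target, the HPA would ask for ceil(1 * 200 / 50) = 4 replicas. The 200% figure is only illustrative, not a measurement from the test above.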

About me

I'm Muhammad Abdur Rofi Maulidin, an aspiring DevOps engineer with a keen interest in Cloud Computing, CNCF technologies, and Automation. I am also the author of Jawara Cloud, a platform dedicated to sharing my experiences and insights in DevOps and Cloud Computing.

Feel free to get in touch with me:

Free Palestine 🇵🇸
