
Build a Full CI/CD Pipeline from Scratch: A Step-by-Step Practical Guide

Introduction

A CI/CD pipeline is not just automation. It is a commitment to quality, speed, and repeatability—the foundation that lets teams ship confidently and frequently.

Continuous Integration and Continuous Deployment represent one of the most significant shifts in how software is built and delivered. Before CI/CD, releasing software was a manual, error-prone process: developers would work in isolation for weeks or months, merge their changes in a stressful integration phase, manually build and test, and then deploy to production through a series of handoffs and manual verification steps. The result was slow release cycles, integration nightmares, and deployments that felt like high-risk events requiring all-hands-on-deck coordination.

CI/CD inverts this model. Continuous integration means that every code change is automatically built, tested, and integrated into the mainline multiple times per day. Continuous deployment means that every change that passes the automated quality gates can be deployed to production automatically or with a single approval. The benefits are transformative: faster feedback loops, higher quality through automated testing, reduced integration risk, and the ability to ship new features and fixes to users in hours or days rather than weeks or months.

This tutorial walks through building a complete CI/CD pipeline from scratch. It covers what CI/CD actually means in practice, how to choose the right tools for your stack, step-by-step implementation with real configuration examples for multiple platforms, and best practices for making your pipeline reliable, secure, and maintainable as your team and codebase grow. By the end, you will have a working reference implementation you can adapt to your own projects.

What CI/CD Actually Means

CI/CD are two related but distinct practices that work together to enable rapid, reliable software delivery. Understanding each component is essential before building the pipeline.

Pipeline stages: 1. Source (code change) → 2. Build (compile & package) → 3. Test (unit & integration) → 4. Deploy (staging → production)

Continuous Integration: Keep the Mainline Healthy

Continuous integration is the practice of merging code changes into a shared mainline (typically the main or master branch) frequently—at least daily, often multiple times per day—and automatically verifying that each merge does not break the build or tests. The core principle is that integration should be a non-event, not a multi-day merge crisis.

The CI process typically includes checking out the code from version control, installing dependencies, compiling or building the application, running unit tests and integration tests, running static analysis and linting, and reporting success or failure. If any step fails, the pipeline stops, and the developer is notified immediately. The goal is fast feedback: developers learn within minutes if their change broke something while the context is still fresh.

  • Fast feedback: CI runs should complete in under ten minutes. Any longer and developers will start merging without waiting for CI, defeating the purpose.
  • Fail fast: Stop at the first failure. Do not waste time running subsequent steps if the build is already broken.
  • Visible results: CI status should be visible in pull requests, commit history, and team dashboards. Broken builds should be high-priority fixes.

Continuous Deployment: Automate the Path to Production

Continuous deployment is the practice of automatically deploying every change that passes CI to production without manual intervention. Continuous Delivery is a slightly less aggressive variant: every change that passes CI is ready to deploy to production, but the final deployment requires a manual approval or trigger. Most organizations start with Continuous Delivery and evolve toward Continuous Deployment as confidence in their testing and rollback capabilities grows.

The CD process typically includes packaging the application into a deployable artifact (Docker image, JAR, ZIP), uploading the artifact to a registry or artifact repository, deploying to a staging or pre-production environment, running smoke tests or integration tests in the staging environment, deploying to production (automatically or with approval), and verifying deployment success with health checks and monitoring.

  • Immutable artifacts: Build once, deploy everywhere. The same Docker image or artifact that was tested in staging is what gets deployed to production. Never rebuild for different environments.
  • Staged rollout: Deploy to staging or a canary subset first, verify, then roll out to full production. This limits the blast radius if something goes wrong.
  • Rollback as a first-class operation: Deploying should be easy, but rolling back should be even easier. One command or one button should revert to the previous version.
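On Kubernetes, for instance, rollback can be exactly that kind of one-command operation. A sketch (the deployment name `myapp` and namespace match the walkthrough later in this tutorial; the revision number is illustrative):

```shell
# Roll back to the previous ReplicaSet revision -- one command:
kubectl rollout undo deployment/myapp -n production

# Or inspect revision history and pin a specific known-good revision:
kubectl rollout history deployment/myapp -n production
kubectl rollout undo deployment/myapp -n production --to-revision=3
```

Because the previous ReplicaSet still references the previous immutable image, rolling back does not require a rebuild.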

Choosing Your CI/CD Tools

The CI/CD tool landscape is rich. The right choice depends on your source control system, your team’s expertise, and whether you prefer SaaS convenience or self-hosted control.

Tool              Hosting        Best for                     Complexity
GitHub Actions    SaaS           GitHub repos, quick start    Low
GitLab CI/CD      SaaS/Self      GitLab repos, Docker-native  Low-Medium
Jenkins           Self-hosted    Full control, plugins        High
CircleCI          SaaS           Fast builds, containers      Low
Azure Pipelines   SaaS           Microsoft stack, Azure       Medium
AWS CodePipeline  SaaS           AWS-native deployments       Medium

For this tutorial, we will demonstrate with GitHub Actions (SaaS, widely accessible) and provide equivalent examples for GitLab CI/CD. The patterns are transferable to any CI/CD platform.

Key Decision Factors

  • Source control integration: If you use GitHub, GitHub Actions is a natural fit. If you use GitLab, GitLab CI/CD is tightly integrated. If you use Bitbucket, consider Bitbucket Pipelines or Jenkins.
  • Runner control: SaaS platforms provide hosted runners (convenient but sometimes slower). Self-hosted runners give you full control over environment, performance, and cost but require maintenance.
  • Cost: SaaS platforms charge per build minute (expensive for large projects). Self-hosted is infrastructure cost only, but you pay in operational overhead.
  • Ecosystem and plugins: Jenkins has thousands of plugins but high complexity. GitHub Actions has a large marketplace. GitLab CI/CD has a Docker-first design.

Building the Pipeline: End-to-End Walkthrough

The following walkthrough builds a complete CI/CD pipeline for a Node.js web application. The same principles apply to other stacks (Python, Java, Go); adapt the build and test commands as needed.

Step 1: Repository Setup

Start with a clean repository structure. The CI/CD configuration lives in the repository alongside the code, ensuring that pipeline changes are versioned and reviewed like any other code change.

project/
├── src/              # application source code
├── tests/            # unit and integration tests
├── Dockerfile        # container definition
├── package.json      # dependencies and scripts
├── .github/
│   └── workflows/
│       └── ci-cd.yml # GitHub Actions pipeline
├── .gitignore
└── README.md
  • Initialize the repository and commit the application code.
  • Add test scripts to package.json (npm test, npm run lint).
  • Create a Dockerfile for containerized deployment.
  • Add CI/CD pipeline configuration file.
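A minimal multi-stage Dockerfile for a Node.js service might look like the following sketch. The `npm run build` step, the `dist/` output directory, and the `dist/server.js` entry point are assumptions; adapt them to your project's actual build output and start command.

```dockerfile
# Build stage: install all dependencies and compile/bundle the app
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production dependencies only, smaller image
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

The multi-stage split keeps build tooling out of the image that actually ships, which shrinks the artifact and its attack surface.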

Step 2: Continuous Integration Configuration

The CI stage runs on every push and pull request. It installs dependencies, runs linting and tests, and reports results. This is the quality gate that prevents broken code from reaching the mainline.

# .github/workflows/ci-cd.yml
name: CI/CD Pipeline
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
env:
  NODE_VERSION: '20'
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  test:
    name: Test & Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run linter
        run: npm run lint
      - name: Run unit tests
        run: npm test -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage/lcov.info

This CI configuration triggers on pushes to main and develop branches and on pull requests to main. It checks out the code, sets up the Node.js environment with dependency caching, installs dependencies, runs the linter, executes tests with coverage reporting, and uploads coverage data to Codecov for tracking over time.
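For comparison, the same CI stage can be expressed in GitLab CI/CD. The following `.gitlab-ci.yml` is a sketch: the cache layout and the Cobertura coverage report path assume a Jest setup configured to emit `coverage/cobertura-coverage.xml`.

```yaml
# .gitlab-ci.yml -- CI stage equivalent to the GitHub Actions test job
stages:
  - test

test:
  stage: test
  image: node:20
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - .npm/
  script:
    - npm ci --cache .npm --prefer-offline
    - npm run lint
    - npm test -- --coverage
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "develop"
```

The structure maps directly: `rules` replaces the `on:` triggers, the `image` keyword replaces the setup-node step, and `artifacts:reports` replaces the Codecov upload.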

Step 3: Build and Push Docker Image

After tests pass, build a Docker image and push it to a container registry. This image is the immutable artifact that will be deployed to staging and production.

  build:
    name: Build & Push Image
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix={{branch}}-,format=long
            type=ref,event=branch
            type=semver,pattern={{version}}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:buildcache
          cache-to: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:buildcache,mode=max

This build job runs only on pushes to main after tests pass. It logs into the container registry using GitHub’s built-in authentication, extracts metadata to tag the image with the branch name and commit SHA, builds the Docker image with layer caching to speed up subsequent builds, and pushes the image to the registry with appropriate tags.

Step 4: Deploy to Staging

Deploy the newly built image to a staging environment for integration testing and manual verification before promoting to production.

  deploy-staging:
    name: Deploy to Staging
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: staging
      url: https://staging.example.com
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Configure kubectl
        run: |
          echo "${{ secrets.KUBECONFIG_STAGING }}" | base64 -d > kubeconfig
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"
      - name: Deploy to Kubernetes
        run: |
          IMAGE_TAG=${{ github.sha }}
          kubectl set image deployment/myapp \
            myapp=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:main-${IMAGE_TAG} \
            -n staging
      - name: Wait for rollout
        run: |
          kubectl rollout status deployment/myapp -n staging --timeout=5m
      - name: Run smoke tests
        run: |
          curl -f https://staging.example.com/health || exit 1

The staging deployment job configures kubectl with staging cluster credentials, updates the Kubernetes deployment to use the newly built image, waits for the rollout to complete successfully, and runs smoke tests to verify basic functionality. If any step fails, the pipeline stops and production deployment is blocked.
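A single curl immediately after a rollout can be flaky while pods warm up. A small retry helper makes the smoke test more forgiving; this is a sketch, and the health-check URL is a placeholder:

```shell
# retry <attempts> <command...>: rerun a command until it succeeds
# or the attempt budget is exhausted; returns the final status.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage in the smoke-test step (placeholder URL):
#   retry 10 curl -fsS https://staging.example.com/health
```

With ten attempts and one-second pauses, the service gets roughly ten seconds to become healthy before the pipeline declares the deployment failed.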

Step 5: Deploy to Production

After the staging deployment succeeds, deploy to production. This step typically requires manual approval to give teams control over when production changes occur.

  deploy-production:
    name: Deploy to Production
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://example.com
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Configure kubectl
        run: |
          echo "${{ secrets.KUBECONFIG_PRODUCTION }}" | base64 -d > kubeconfig
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"
      - name: Deploy to Kubernetes
        run: |
          IMAGE_TAG=${{ github.sha }}
          kubectl set image deployment/myapp \
            myapp=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:main-${IMAGE_TAG} \
            -n production
      - name: Wait for rollout
        run: |
          kubectl rollout status deployment/myapp -n production --timeout=10m
      - name: Notify deployment
        if: always()
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          text: 'Production deployment completed'
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

The production deployment is nearly identical to staging but targets the production Kubernetes cluster. GitHub Actions environments support required reviewers: you can configure the production environment to require manual approval from designated team members before the deployment proceeds. The pipeline includes a Slack notification step to alert the team when production deployments are complete.

CASE STUDY · Amazon

CI/CD at scale: a deployment every 11.6 seconds across thousands of services

Amazon’s software deployment infrastructure is one of the most mature examples of CI/CD at scale. In public talks and published papers, Amazon engineers describe a deployment system where code changes flow from commit to production in minutes across thousands of microservices, with a production deployment occurring every 11.6 seconds on average.

The architecture relies on automated testing at multiple levels (unit, integration, canary, and production monitoring); progressive rollout strategies where new versions are deployed to a small percentage of hosts first and automatically rolled back if metrics degrade; comprehensive observability with per-deployment dashboards tracking latency, error rates, and business metrics; and a culture where developers own their services end-to-end, including deployment and on-call responsibilities.

Amazon’s key insight is that deployment risk is inversely proportional to deployment frequency: small, frequent deployments are individually low-risk and easy to troubleshoot, while large, infrequent deployments are high-risk and difficult to debug. The CI/CD infrastructure enables this high-frequency deployment model by making the deployment process fast, safe, and observable.

Making Your Pipeline Reliable and Maintainable

A CI/CD pipeline is infrastructure that the entire team depends on. The following practices ensure it remains reliable, secure, and maintainable as the codebase and team grow.

Security and Secrets Management

Never commit secrets (API keys, database passwords, SSH keys) to the repository. Use your CI/CD platform’s secrets management: GitHub Secrets, GitLab CI/CD Variables, Jenkins Credentials, or an external secrets manager like AWS Secrets Manager or HashiCorp Vault. Inject secrets as environment variables at runtime and rotate them regularly.
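In GitHub Actions, for example, a secret is referenced from the platform's secrets store and injected as an environment variable scoped to a single step. The step and variable names here are illustrative:

```yaml
      - name: Run database migrations
        run: npm run migrate
        env:
          # Pulled from GitHub Secrets at runtime; never stored in the repo.
          # Scoping to one step limits exposure to the command that needs it.
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
```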

  • Principle of least privilege: Grant the CI/CD pipeline only the permissions it needs. Do not use admin credentials or overly broad IAM policies.
  • Audit secret access: log when secrets are accessed and by which pipeline runs. Rotate secrets immediately if a pipeline is compromised.
  • Separate staging and production secrets. Use different credentials for staging and production environments. A compromised staging pipeline should not have production access.

Fast Feedback and Fail Fast

Pipeline speed directly affects developer productivity. Target under ten minutes for the CI stage. Run fast tests (unit tests, linting) first and expensive tests (integration tests, E2E tests) later. Use caching aggressively: dependency caching, Docker layer caching, and incremental builds all reduce pipeline duration.

Fail at the first error. Do not run deployment steps if tests failed. Do not waste time on subsequent steps if an earlier step is broken. Surface failures prominently: failed pipelines should block pull request merges and trigger notifications to the responsible developer.

Pipeline as Code and Versioning

Store pipeline configuration in the repository alongside the code. This ensures pipeline changes are reviewed, tested, and versioned like any other code. If a pipeline change breaks CI, you can revert it through version control just like reverting a code change.

Observability and Debugging

Pipelines fail. When they do, debugging should be straightforward. Ensure pipeline logs are detailed and accessible. Include timestamps, command outputs, and error messages. Integrate with your monitoring and alerting systems: notify teams when builds fail, when deployments succeed or fail, and when pipeline duration exceeds thresholds. Track CI/CD metrics: build success rate, deployment frequency, mean time to recovery, and change failure rate.
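Tracking those metrics can start very small. A hypothetical helper that derives change failure rate and mean time to recovery from a deployment log might look like this (the record shape is an assumption, not a real API):

```javascript
// Hypothetical deploy-log entries: { failed: boolean, minutesToRecover?: number }
function pipelineMetrics(deploys) {
  const total = deploys.length;
  const failures = deploys.filter((d) => d.failed);
  const recovered = failures.filter((d) => typeof d.minutesToRecover === "number");
  return {
    deployments: total,
    // Share of deployments that caused a failure in production
    changeFailureRate: total === 0 ? 0 : failures.length / total,
    // Average minutes from failure to recovery, over recovered incidents
    meanTimeToRecoveryMin:
      recovered.length === 0
        ? 0
        : recovered.reduce((sum, d) => sum + d.minutesToRecover, 0) /
          recovered.length,
  };
}
```

Feeding this from your CI platform's API on a schedule gives a simple dashboard of how the pipeline is trending over time.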

Progressive Rollout and Rollback

Deploy new versions progressively: canary deployments to a small percentage of traffic first, then gradual rollout to full production. Monitor key metrics during rollout and automatically roll back if error rates increase or latency degrades. Make rollback a one-command operation that reverts to the previous Docker image or artifact version.
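On Kubernetes, a first step in that direction is a conservative rolling-update strategy plus a readiness probe, so a bad version never takes healthy capacity offline. A trimmed sketch (probe path, port, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # bring up one new pod at a time
      maxUnavailable: 0    # never reduce healthy capacity during rollout
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: ghcr.io/example/myapp:main-abc123
          readinessProbe:   # gate traffic on the health endpoint
            httpGet:
              path: /health
              port: 3000
```

With `maxUnavailable: 0`, a version whose readiness probe never passes simply stalls the rollout instead of degrading the service, and `kubectl rollout undo` reverts it.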

ESSENTIAL CHECKLIST

✓  Tests run on every pull request and block merge if they fail.

✓  Secrets are stored in the secrets manager and never committed to the repository.

✓  Build artifacts are immutable: the same artifact is deployed to staging and production.

✓  Deployment to production requires manual approval or comprehensive automated checks.

✓  Rollback is a single command or button that reverts to the previous version.

✓  Pipeline failures trigger immediate notifications to responsible developers.

✓  CI/CD metrics (build time, success rate, deployment frequency) are tracked and visible.

CASE STUDY · Etsy

Continuous deployment culture: from weekly releases to 50+ deploys per day

Etsy is a canonical example of a company transforming its deployment culture through CI/CD. In widely cited engineering blog posts from the early 2010s, Etsy described transitioning from a painful weekly deploy process involving hours of coordination and frequent rollbacks to a system where engineers deployed their own code to production more than fifty times per day.

The key enablers were comprehensive automated testing (unit, integration, and end-to-end tests running on every commit), feature flags allowing code to be deployed dark and enabled selectively, real-time monitoring with deployment markers showing exactly when changes went live and their impact on site metrics, a culture where everyone deployed their own changes (no separate release engineering team), and investment in making deployment a low-friction, low-risk operation.

Etsy’s leadership argued that frequent, small deployments were safer than infrequent, large deployments because they were easier to test, easier to troubleshoot, and easier to roll back. The high deployment frequency became a cultural norm and a competitive advantage, enabling rapid response to customer needs and business opportunities.

Bottom Line

Building a CI/CD pipeline from scratch requires understanding the core principles of continuous integration and continuous deployment, choosing appropriate tooling for your stack and team, implementing each stage methodically with proper testing and security practices, and establishing operational discipline around pipeline maintenance, monitoring, and improvement.

The pipeline demonstrated in this tutorial covers the complete flow: source code changes trigger CI (lint, test, build); successful CI runs produce an immutable Docker image; the image deploys to staging for verification; and after staging succeeds, the same image deploys to production with appropriate approvals and safety checks. This pattern is foundational and scales from small teams to large enterprises.

The value of CI/CD is not merely technical. It is organizational: faster feedback enables faster iteration, automated quality gates maintain code health without manual oversight, repeatable deployments reduce errors and stress, and the ability to ship confidently and frequently transforms how product teams work.

Teams with mature CI/CD ship features in days instead of months, fix bugs in hours instead of weeks, and spend less time on process and coordination and more time on building valuable software.

Start simple: implement CI first (automated testing on every commit), then add basic CD (automated deployment to staging), and gradually increase automation and sophistication as confidence grows. The journey from manual deployments to full continuous deployment is incremental.

Each step delivers value. Each step builds confidence. The goal is not perfection on day one; it is continuous improvement toward a delivery system that enables the team to ship better software faster.

Frequently Asked Questions About CI/CD Pipelines

1. What is the difference between CI and CD?

Continuous Integration (CI) automates code integration, testing, and validation. Continuous Deployment (CD) automates releasing tested code to production environments.

2. How long should a CI pipeline take?

Ideally under 10 minutes. Fast feedback ensures developers fix issues while context is fresh.

3. What is the best CI/CD tool in 2025?

Popular choices include GitHub Actions, GitLab CI/CD, Jenkins, Azure Pipelines, and AWS CodePipeline. The best tool depends on your cloud provider, team size, and ecosystem.

4. What is a Kubernetes deployment pipeline?

A Kubernetes deployment pipeline automates building Docker images, pushing them to a registry, and updating Kubernetes deployments across staging and production environments.

5. Is CI/CD only for large teams?

No. Even small teams benefit from CI/CD by reducing manual errors and enabling faster releases.

Ankit Patel
Ankit Patel is the visionary CEO at Wappnet, passionately steering the company towards new frontiers in artificial intelligence and technology innovation. With a dynamic background in transformative leadership and strategic foresight, Ankit champions the integration of AI-driven solutions that revolutionize business processes and catalyze growth.
