Security as a gate at the end of the pipeline is security theater. I’ve believed this for years, but it took watching a real incident unfold to make me truly militant about it. If your security checks only run after code is merged, packaged, and ready to ship, you’re not doing security — you’re doing compliance paperwork.

I’ve spent the last few years building DevSecOps pipelines that weave security into every stage of the software delivery lifecycle. Not bolted on. Not an afterthought. Baked in from the first keystroke. In this post, I’ll walk through exactly how I structure these pipelines, the tools I use, and the lessons I’ve learned the hard way.


The Wake-Up Call: A Leaked AWS Key

Let me tell you about the incident that changed how I think about pipeline security forever.

A developer on my team was working late on a Friday — already a red flag. They were debugging a Lambda function and hardcoded an AWS access key into the source to test locally. Happens more than anyone wants to admit. They committed it, pushed to a public GitHub repo, and went to bed.

By Saturday morning, someone had scraped the key. By the time I got the alert, there were over 200 EC2 instances spinning up across multiple regions mining cryptocurrency. The bill was climbing fast. We revoked the key, killed the instances, and spent the weekend doing forensics.

The kicker? We had security scanning. It ran in our staging environment. After the code had already been public for hours. That’s when I decided: security scanning that doesn’t run before code leaves the developer’s machine is barely better than no scanning at all.

That Monday, I started rebuilding our entire pipeline. Here’s what I built.


The DevSecOps Pipeline Model

I think of a DevSecOps pipeline as having six security integration points. Miss any one of them and you’ve got a gap:

  1. Pre-commit — Catch secrets and basic issues before code enters version control
  2. Commit/PR — SAST, SCA, and linting on every pull request
  3. Build — Container scanning and dependency verification
  4. Test — DAST and API security testing against running environments
  5. Deploy — IaC scanning and policy-as-code checks
  6. Runtime — Continuous monitoring and anomaly detection

Most teams I’ve worked with cover maybe two of these. Usually commit and deploy. That’s not enough.


Stage 1: Pre-Commit — Stop Problems at the Source

This is the cheapest place to catch issues. A secret detected before it’s committed costs nothing. A secret detected after it’s been pushed to a public repo costs you a weekend and a five-figure AWS bill. Ask me how I know.

I use pre-commit hooks with tools like gitleaks and detect-secrets. Here’s a minimal setup:

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks

  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']

The important thing here is that this runs locally, before the commit even happens. The developer gets instant feedback. No waiting for CI. No public exposure window.
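To wire this up, each developer installs the hooks once per clone and generates the baseline that detect-secrets diffs against. A minimal setup, assuming the Python tooling is available:

pip install pre-commit detect-secrets   # one-time tool install
detect-secrets scan > .secrets.baseline  # snapshot existing findings to triage
pre-commit install                       # register the hooks in .git/hooks
pre-commit run --all-files               # optional: sweep the whole tree once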

I’ll be honest — some developers push back on pre-commit hooks. They say it slows them down. My response: getting paged at 3am because someone pushed a database password to GitHub slows you down a lot more.


Stage 2: Commit and PR — The First Real Gate

Once code hits a pull request, that’s where I run the heavy hitters: Static Application Security Testing (SAST) and Software Composition Analysis (SCA). These are non-negotiable in every CI/CD pipeline I build.

Here’s a GitHub Actions workflow that runs both:

name: Security Scan on PR
on:
  pull_request:
    branches: [main]

jobs:
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Semgrep SAST
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/secrets
            p/owasp-top-ten

  sca:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Trivy SCA
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

A few opinions here. First, I use Semgrep for SAST because the rules are readable and writable. If I can’t understand what a security tool is flagging, I can’t triage it effectively. Second, I set exit-code: '1' on Trivy — meaning the pipeline fails on critical or high vulnerabilities. No exceptions. If you don’t fail the build, developers learn to ignore the warnings. I’ve seen it happen on every single team that treats security findings as “informational.”
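To make that concrete, here's the sort of house rule I mean. A minimal sketch (the rule id, file path, and message are mine) that fails the build when someone disables TLS verification in Python:

# semgrep-rules/no-verify-false.yaml (hypothetical local rules file)
rules:
  - id: no-disabled-tls-verification
    languages: [python]
    severity: ERROR
    message: TLS certificate verification is disabled; remove verify=False
    pattern: requests.$FUNC(..., verify=False, ...)

Point Semgrep's config at this file alongside the registry rulesets and it runs on every PR like any other policy.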

SCA is particularly important because most of your code isn’t your code. It’s dependencies. And dependencies have dependencies. A vulnerability three levels deep in your dependency tree is still your problem when it gets exploited.
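A cheap habit that helps here: run the same scan locally before pushing, so the CI gate is a confirmation rather than a surprise. With Trivy installed:

trivy fs --severity CRITICAL,HIGH --exit-code 1 .  # same policy as the CI job, from the repo root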


Stage 3: Build — Container and Artifact Scanning

If you’re building containers — and most of us are — you need to scan them. Not just the application code, but the base image, the installed packages, everything. I’ve written more about this in my container security scanning guide.

Here’s how I integrate container scanning into the build stage:

  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build container image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Scan container image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
          format: 'sarif'
          output: 'trivy-results.sarif'

      - name: Upload scan results
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-results.sarif'

I upload results in SARIF format so they show up directly in GitHub’s Security tab. This matters because developers actually look at GitHub’s UI. They don’t look at CI logs unless something’s broken.

One thing I’ve learned: always pin your base images to a specific digest, not just a tag. python:3.12-slim can change underneath you. python:3.12-slim@sha256:abc123... can’t. This makes your builds reproducible and your security scans meaningful.


Stage 4: Test — DAST and API Security

Static analysis catches a lot, but it can’t catch everything. You need to test running applications too. This is where Dynamic Application Security Testing (DAST) comes in. I run DAST against ephemeral environments spun up during CI.

  dast:
    runs-on: ubuntu-latest
    needs: [build-and-scan]
    steps:
      - name: Deploy to ephemeral environment
        run: |
          # Deploy the built container to a test environment
          docker compose -f docker-compose.test.yml up -d
          # Wait until the app actually answers rather than sleeping a fixed interval
          for i in $(seq 1 30); do
            curl -fsS http://localhost:8080/ && break
            sleep 2
          done

      - name: Run ZAP DAST scan
        uses: zaproxy/action-baseline@master
        with:
          target: 'http://localhost:8080'
          rules_file_name: '.zap/rules.tsv'
          fail_action: 'true'

      - name: Tear down environment
        if: always()
        run: docker compose -f docker-compose.test.yml down
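The rules_file_name above points at a tab-separated file that tells ZAP which alerts to ignore, warn on, or fail the scan on. Something like this; the rule selections are illustrative, not a recommendation:

# .zap/rules.tsv  (columns: rule id, action, note)
10035	WARN	(Strict-Transport-Security header not set)
10202	IGNORE	(Anti-CSRF tokens; covered by contract tests instead)
40012	FAIL	(Reflected cross-site scripting)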

DAST is slower than SAST. That’s fine. I don’t run it on every commit — just on PRs to main and on nightly builds. The goal isn’t to catch every issue instantly; it’s to catch the issues that static analysis misses before they reach production.

For API security specifically, I also run contract tests that validate authentication, authorization, input validation, and rate limiting. These aren’t traditional security scans, but they catch security-relevant bugs that DAST tools often miss.


Stage 5: Deploy — Infrastructure as Code Scanning

Your application code can be bulletproof, but if your infrastructure is misconfigured, none of it matters. An S3 bucket with public access, a security group open to the world, an IAM role with *:* permissions — these are the things that keep me up at night.

I scan every Terraform plan and CloudFormation template before it touches AWS:

  iac-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Checkov IaC scan
        uses: bridgecrewio/checkov-action@master
        with:
          directory: ./infrastructure
          framework: terraform
          soft_fail: false
          output_format: sarif

      - name: Run tfsec
        uses: aquasecurity/tfsec-action@v1.0.0
        with:
          working_directory: ./infrastructure
          soft_fail: false

I run both Checkov and tfsec because they catch different things. Checkov is better at compliance frameworks (CIS, SOC2). tfsec is better at catching AWS-specific misconfigurations. Belt and suspenders.

Policy-as-code is the other piece here. I use Open Policy Agent (OPA) to enforce organizational policies. Things like “no public S3 buckets, ever” or “all RDS instances must have encryption enabled.” These aren’t suggestions — they’re hard gates that block deployment.
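Wiring OPA into the pipeline can be as simple as evaluating the rendered Terraform plan with conftest. A sketch, assuming conftest is installed on the runner and the Rego policies live in a policy/ directory (both are my conventions, not requirements):

  policy-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Render the Terraform plan as JSON
        run: |
          cd infrastructure
          terraform init -backend=false
          terraform plan -out=tfplan
          terraform show -json tfplan > tfplan.json

      - name: Evaluate Rego policies against the plan
        run: conftest test infrastructure/tfplan.json --policy policy/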


Stage 6: Runtime — Continuous Monitoring

The pipeline doesn’t end at deployment. Production is where the real threats live. I won’t go deep on runtime security here — that’s a whole separate topic — but the pipeline should feed into runtime monitoring.

At minimum, I set up:

  • Secrets rotation and monitoring — Automated rotation for every credential, with alerts on any manual access
  • Dependency monitoring — Automated PRs when new CVEs are published for your dependencies (Dependabot, Renovate; a minimal config is sketched after this list)
  • Container runtime security — Falco or similar tools watching for anomalous behavior in running containers
  • Audit logging — CloudTrail, VPC Flow Logs, and application-level audit trails feeding into a SIEM
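For the dependency-monitoring item, Dependabot is a one-file change. A minimal sketch; the ecosystem and cadence are assumptions you'd adjust for your stack:

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "daily"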

The pipeline creates the artifacts. Runtime monitoring makes sure those artifacts behave as expected once they’re live.


Putting It All Together

Here’s what the complete pipeline looks like as a single GitHub Actions workflow:

name: DevSecOps Pipeline
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  secrets-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # the action needs this to report findings

  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: semgrep/semgrep-action@v1
        with:
          config: p/security-audit p/owasp-top-ten

  sca:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

  build-and-scan:
    needs: [secrets-scan, sast, sca]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:${{ github.sha }} .
      - uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

  iac-scan:
    needs: [secrets-scan]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bridgecrewio/checkov-action@master
        with:
          directory: ./infrastructure
          soft_fail: false

  deploy:
    needs: [build-and-scan, iac-scan]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Deploy to production"

The key design decision: secrets-scan, sast, and sca run in parallel. They don’t depend on each other, so there’s no reason to run them sequentially. build-and-scan only runs after all three pass. deploy only runs after everything passes and only on the main branch.


The Cultural Shift

Tools are the easy part. The hard part is culture.

I’ve rolled out DevSecOps pipelines to teams that hated it. Developers complained that builds were slower. Product managers complained that features were delayed. Leadership asked why we were “blocking releases for theoretical vulnerabilities.”

Here’s what I tell them: the average cost of a data breach is in the millions. The cost of a developer waiting an extra three minutes for a security scan is approximately zero. The math isn’t complicated.

But you can’t just mandate it. You have to make it easy. That means:

  • Fast feedback loops — If your security scans take 30 minutes, developers will find ways around them. Keep scans under 5 minutes where possible.
  • Clear, actionable findings — “Potential SQL injection on line 47 of user_controller.py” is useful. “Security issue detected” is not.
  • No false positive fatigue — Tune your tools aggressively. A tool that cries wolf gets ignored. I’d rather miss a low-severity finding than train developers to click “dismiss” without reading.
  • Shared ownership — Security isn’t the security team’s job. It’s everyone’s job. Developers should be able to read, understand, and fix security findings without filing a ticket.

What I’d Do Differently

If I were starting from scratch today, I’d change a few things about how I built my first DevSecOps pipeline.

I’d start with secrets detection and nothing else. Get that one thing working perfectly before adding SAST, SCA, and the rest. The leaked key incident I mentioned earlier? That alone justified the entire program. Start with the highest-impact, lowest-effort win.

I’d invest more in developer education upfront. The best security tool is a developer who understands the OWASP Top 10. No scanner will ever be as effective as a developer who knows not to write the vulnerability in the first place.

And I’d set up a security champions program from day one. One developer per team who gets extra security training and acts as the first point of contact for security questions. This scales way better than a centralized security team trying to review every PR.


Wrapping Up

DevSecOps isn’t a product you buy. It’s a practice you build. It’s messy, it’s iterative, and it’s never finished. But every security check you add to your pipeline is one less vulnerability that makes it to production.

Start small. Add secrets detection today. Add SAST next week. Add container scanning next month. Build the muscle. Make security a habit, not an event.

If you want to go deeper, check out my DevSecOps implementation guide for a more comprehensive walkthrough, or my post on building CI/CD pipelines with GitHub Actions if you’re setting up the pipeline infrastructure from scratch.

The pipeline I described here isn’t theoretical. It’s running in production, catching real vulnerabilities, every single day. And it all started because someone pushed an AWS key to a public repo on a Friday night.

Don’t wait for your own wake-up call.