When I joined Locus as CISO, the AppSec posture was typical for a fast-moving AI startup: no SAST, no SCA, no container scanning, no secrets detection. The engineering team was shipping multiple times a day with zero security gates. The vulnerability backlog was a fiction because nobody was counting.
I had two constraints: a $1M total security budget (not just AppSec — everything), and a five-person team. Enterprise AppSec platforms were out of the question. A single Snyk or Checkmarx license would consume a third of the budget before we'd secured anything else.
So we built it with open source. The result: 96% reduction in late-stage vulnerabilities within 8 months, zero enterprise licensing spend on AppSec.
The Pipeline
Every PR triggers a parallel security scan across five dimensions. The scan completes in under 3 minutes. Developers get results in the PR itself, not in a separate dashboard they'll never check.
1. SAST — Semgrep
Semgrep runs custom rules tailored to our codebase. The key insight: generic rule sets produce 90% noise. We wrote ~40 custom rules targeting our specific patterns: our API framework, our auth middleware, our data access layer. These rules catch real bugs because they understand our code, not generic Python.
```yaml
# Example: Detect raw SQL in our ORM layer
rules:
  - id: raw-sql-in-orm
    pattern: db.execute($QUERY)
    message: Use parameterized queries via ORM
    severity: ERROR
    metadata:
      category: security
      cwe: CWE-89
```
False positive rate with custom rules: under 5%. With the default ruleset, it was over 60%.
2. SCA — Trivy + OSV-Scanner
Trivy scans dependencies in both application code and container images. OSV-Scanner provides a second opinion against the OSV database. We run both because no single SCA tool has complete vulnerability coverage.
The critical addition: reachability analysis. A vulnerable dependency that's never called is noise. We built a lightweight reachability check using call graph analysis that filters out ~40% of SCA findings that would otherwise require manual triage. This is what Verida (our open-source project) automates.
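The filtering step itself is simple once you have a call graph. Here's a minimal sketch, assuming findings have already been mapped to the vulnerable function they pull in (the call-graph extraction is the hard part and is omitted; the function and field names are illustrative, not Verida's actual API):

```python
from collections import deque

def reachable_functions(call_graph: dict, entry_points: list) -> set:
    """BFS over the call graph: every function reachable from the entry points."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def filter_sca_findings(findings: list, call_graph: dict, entry_points: list) -> list:
    """Keep only findings whose vulnerable function sits on a live code path."""
    live = reachable_functions(call_graph, entry_points)
    return [f for f in findings if f["vulnerable_function"] in live]

# Toy example: requests.get is on a live path, yaml.load is only in dead code.
graph = {
    "main": ["fetch_data"],
    "fetch_data": ["requests.get"],
    "legacy_import": ["yaml.load"],  # nothing calls legacy_import
}
findings = [
    {"cve": "CVE-2024-0001", "vulnerable_function": "requests.get"},
    {"cve": "CVE-2024-0002", "vulnerable_function": "yaml.load"},
]
print(filter_sca_findings(findings, graph, ["main"]))
# → [{'cve': 'CVE-2024-0001', 'vulnerable_function': 'requests.get'}]
```

Everything filtered out this way stays in the findings database as "unreachable" rather than being discarded, so a later code change that makes the path live can resurface it.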
3. Container Scanning — Trivy
Every Docker image is scanned before it reaches the registry. We enforce a policy: no critical or high CVEs in base images. If the scan fails, the image doesn't push. This sounds aggressive, but in practice it means we update base images proactively rather than reactively.
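In CI, that gate is a single step placed before the push. A hedged sketch of what the GitHub Actions fragment might look like (image name is a placeholder; `--exit-code 1` tells Trivy to fail the job when matching CVEs are found):

```yaml
# Illustrative CI fragment — the push step only runs if the scan step passes
- name: Scan image for critical/high CVEs
  run: trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE_TAG"

- name: Push image
  run: docker push "$IMAGE_TAG"
```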
4. Secrets Detection — Gitleaks
Gitleaks runs as a pre-commit hook and in CI. The pre-commit hook catches secrets before they enter the repository. The CI check catches anything the hook missed (developers can skip hooks locally).
We also run periodic full-history scans to find secrets committed before Gitleaks was deployed. Found 23 live credentials in the first scan. All rotated within 48 hours.
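Wiring the hook up is a few lines of pre-commit config (pin `rev` to whichever Gitleaks release you actually use); the CI and full-history checks use the `gitleaks` CLI directly, which scans the repository's entire git history by default:

```yaml
# .pre-commit-config.yaml — catches secrets before they ever reach a commit
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks
```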
5. DAST — OWASP ZAP
ZAP runs nightly against our staging environment with authenticated scans. We maintain a custom scan policy that focuses on our API surface rather than running the full default scan (which takes 6 hours and produces mostly noise for an API-first platform).
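An illustrative nightly invocation using ZAP's packaged API scan (the target URL and config filename are placeholders, and the authentication setup — session scripts, context files — is not shown):

```shell
docker run --rm -v "$(pwd)":/zap/wrk ghcr.io/zaproxy/zaproxy:stable \
  zap-api-scan.py -t https://staging.example.com/openapi.json -f openapi \
  -c api-scan.conf   # custom rule thresholds live in this config file
```

Driving the scan from the OpenAPI spec is what keeps it focused on the API surface instead of spidering the whole application.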
The Secret Weapon: AI-Assisted Triage
Open-source tools find vulnerabilities. They also find an overwhelming number of false positives. The difference between a successful open-source AppSec program and an abandoned one is triage velocity.
We built an AI triage layer using Claude that automatically:
- Correlates findings across tools (the same issue flagged by Semgrep and ZAP gets deduplicated)
- Assesses exploitability based on the specific code context
- Suggests remediation with code snippets tailored to our codebase
- Auto-closes findings that match known false positive patterns
This reduced manual triage time by ~70%. A human still reviews every finding classified as high or critical, but the AI handles the long tail of low/medium findings that would otherwise pile up and get ignored.
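The LLM handles exploitability assessment and remediation suggestions, but the correlation and auto-close steps are deterministic. A minimal sketch of those two, assuming findings from every tool are normalized into dicts with `tool`, `path`, `cwe`, and `severity` fields (the schema and false-positive patterns here are illustrative, not our production ones):

```python
from collections import defaultdict

# Illustrative known-false-positive patterns: (path prefix, CWE) pairs to auto-close.
KNOWN_FP = {("tests/", "CWE-798")}  # e.g. hard-coded creds in test fixtures

def correlate(findings: list) -> list:
    """Merge findings from different tools that point at the same (file, CWE) pair."""
    merged = defaultdict(lambda: {"tools": set()})
    for f in findings:
        entry = merged[(f["path"], f["cwe"])]
        # Last writer wins on severity; a real merger would keep the maximum.
        entry.update(path=f["path"], cwe=f["cwe"], severity=f["severity"])
        entry["tools"].add(f["tool"])
    return list(merged.values())

def auto_close(finding: dict) -> bool:
    """True if the finding matches a known false-positive pattern."""
    return any(finding["path"].startswith(prefix) and finding["cwe"] == cwe
               for prefix, cwe in KNOWN_FP)
```

A finding flagged by two tools is a stronger signal than either alone, so the merged `tools` set also feeds into prioritization.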
The Integration Architecture
The pipeline runs in GitHub Actions. Results flow into a central findings database (PostgreSQL). A lightweight dashboard (Grafana) shows trends. But the primary interface is the PR itself:
- Blocking findings (critical/high with confirmed reachability) fail the PR check
- Advisory findings (medium/low or unreachable) appear as PR comments
- Informational findings go to the dashboard only
Developers never leave their workflow. They don't need a security dashboard account. They don't need to learn a new tool. The security feedback appears exactly where they're already working.
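The routing rule above fits in one small function; a sketch of the decision table (names are illustrative):

```python
def route_finding(severity: str, reachable: bool) -> str:
    """Decide where a finding surfaces: fail the PR check, comment, or dashboard only."""
    if severity in ("critical", "high"):
        # Only confirmed-reachable critical/high findings block the merge.
        return "block" if reachable else "advisory"
    if severity in ("medium", "low"):
        return "advisory"   # posted as a PR comment, never blocking
    return "dashboard"      # informational: trend data only
```

Keeping this logic in one place made it easy to tune the blocking threshold during the advisory-mode rollout without touching any scanner configuration.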
The Numbers
| Metric | Before | After (8 months) |
|---|---|---|
| Late-stage vulnerabilities (found in staging/prod) | ~120/quarter | 5/quarter |
| Mean time to remediation | 45 days | 3 days |
| False positive rate | N/A (no scanning) | <5% (with custom rules) |
| AppSec tooling cost | $0 (no tooling) | $0 (open source) |
| Developer friction complaints | 0 (no gates) | ~2/month (acceptable) |
What I'd Do Differently
Start with custom rules from day one. We wasted six weeks running default rulesets, drowning in false positives, and losing developer trust. Custom rules should be the first investment, not an optimization.
Don't block PRs immediately. We started in advisory mode for the first month. Developers saw the findings without being blocked. This built trust before we turned on enforcement. Going straight to blocking would have caused a revolt.
Invest in reachability analysis early. The single biggest noise reducer in SCA is knowing whether the vulnerable code path is actually reachable. This alone cut our SCA noise by 40%.
The best AppSec program isn't the one with the most expensive tools. It's the one developers actually use. Open-source tools with custom rules, integrated into the developer workflow, with AI-assisted triage — that's the stack that scales.
The entire pipeline configuration is being open-sourced as part of the Verida project. If you're building AppSec on a budget, check it out on GitHub.