The Onboarding Playbook That Cut Time‑to‑First‑PR from 9 Days to 2

Stop hazing your new hires with bespoke snowflakes. Ship a paved road that gets devs to their first meaningful PR this week, not next sprint.

Onboarding isn’t a wiki—it's a product. Ship a paved road and your new hires will ship code.

The week-one faceplant you’ve seen before

Day 1: the laptop arrives. Day 2: the bespoke shell script from 2017 fails on M‑series Macs. Day 3: security still hasn’t granted AWS access. Day 4: Docker finally runs, and the dev burns an afternoon fighting a mystery node-gyp compile. Day 5: it finally compiles… but test data lives in an S3 bucket they can’t read. Sound familiar? I’ve watched too many teams mistake folklore for process.

I’ve also seen the fix. At GitPlumbers, we’ve cut onboarding from weeks to days by shipping paved-road defaults, not bespoke snowflakes. No silver bullets—just boring, well-oiled paths that work on every laptop and in Codespaces.

Define the metric that matters: time to first meaningful PR

If you don’t measure it, you’ll ship vibes, not results. Optimize for TTFPR (Time to First Meaningful PR): a non-trivial change merged to a production-bound branch with tests passing.

  • Baseline targets I hold teams to:

    • < 2 hours to compile and run the service locally.

    • Day 1: repo checked out, single command spin-up, tests run.

    • Day 2–3: first meaningful PR merged.

  • Anti-goals: counting wiki pages viewed, training modules completed, or Slack emojis reacted.

Track TTFPR alongside DORA and SRE metrics: MTTR, change failure rate, and incident participation. When TTFPR drops, hiring ROI shows up in actual velocity, not just headcount math.
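
Measuring it doesn’t require a dashboard project. A rough sketch with the gh CLI (repo, handle, and start date are illustrative; the date math assumes GNU date):

```bash
#!/usr/bin/env bash
# Rough TTFPR probe: days from start date to first merged PR.
HIRE="jlee"          # illustrative GitHub handle
START="2024-05-06"   # first day, ISO format

# Earliest merged PR authored by the hire.
FIRST_MERGED=$(gh pr list --repo acme/service-api \
  --author "$HIRE" --state merged \
  --json mergedAt --jq '[.[].mergedAt] | sort | first')

DAYS=$(( ( $(date -d "$FIRST_MERGED" +%s) - $(date -d "$START" +%s) ) / 86400 ))
echo "TTFPR for ${HIRE}: ${DAYS} days"
```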

Ship a paved road, not a scavenger hunt

Pick boring, proven defaults and make deviation a conscious choice. The paved road starts with three things:

  1. Devcontainers (or Codespaces) for toolchains.

  2. A single Makefile as the UX for local builds, tests, and data.

  3. Repo templates that stamp out services with the same skeleton.

Here’s a minimal devcontainer that works on laptops and GitHub Codespaces:


```json
{
  "name": "service-api",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/node:1": { "version": "20" },
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "postCreateCommand": "make dev-setup",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-azuretools.vscode-docker",
        "hashicorp.terraform",
        "esbenp.prettier-vscode"
      ]
    }
  }
}
```
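
You don’t need VS Code to verify the config: the devcontainers CLI builds and exercises the same container headlessly, which is worth running in CI so the paved road can’t silently rot.

```bash
# Build the devcontainer and run the test suite inside it.
npm install -g @devcontainers/cli
devcontainer up --workspace-folder .
devcontainer exec --workspace-folder . make test
```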

And the single UX every dev learns once:


```make
SHELL := /bin/bash
APP ?= service-api
ENV ?= dev

default: dev

.PHONY: dev dev-setup test run db seed clean
dev: dev-setup db run

dev-setup:
	# Install pinned tool versions
	asdf install || true
	# Install npm deps, Go modules, or whatever your stack needs
	npm ci || true
	# Pre-commit hooks
	pre-commit install || true

test:
	npm test

run:
	docker compose up --build $(APP)

db:
	docker compose up -d postgres && ./scripts/wait-for-db.sh

seed:
	node scripts/seed.js

clean:
	docker compose down -v
```
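
The db target leans on scripts/wait-for-db.sh. A minimal sketch, assuming the compose service is named postgres:

```bash
#!/usr/bin/env bash
# scripts/wait-for-db.sh -- block until Postgres accepts connections.
set -euo pipefail

for i in $(seq 1 30); do
  if docker compose exec -T postgres pg_isready -U postgres >/dev/null 2>&1; then
    echo "postgres is ready"
    exit 0
  fi
  echo "waiting for postgres (${i}/30)..."
  sleep 2
done

echo "postgres never became ready" >&2
exit 1
```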

Pin toolchains with asdf or mise and commit them:


```
# .tool-versions
nodejs 20.11.1
terraform 1.8.5
golang 1.22.2
```

Repo templates: use GitHub Template Repos or Backstage Software Templates to enforce the same Makefile, devcontainer, lint/test config, CI, and observability boilerplate. New service -> click -> same paved road.
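
With GitHub Template Repos that’s one command; the repo names here are illustrative:

```bash
# Stamp a new service from the org template, then hit the paved road.
gh repo create acme/payments-api \
  --template acme/service-template --private --clone
cd payments-api && make dev
```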

If your “quickstart” is longer than the Makefile, you’ve already lost.

Provision sandboxes with GitOps, not tickets

The fastest way to kill momentum is waiting on infra. Give every dev a button that creates a namespace, app install, and test data via GitOps. We’ve had great mileage with Terraform for cloud primitives and ArgoCD or Flux for app deploys.

Terraform declaring per-dev k8s namespaces and quotas:


```hcl
# terraform/k8s/namespaces.tf
provider "kubernetes" {}

variable "dev_user" { type = string }

resource "kubernetes_namespace" "dev" {
  metadata {
    name   = "dev-${var.dev_user}"
    labels = { owner = var.dev_user }
  }
}

resource "kubernetes_resource_quota" "dev_quota" {
  metadata {
    name      = "rq-${var.dev_user}"
    namespace = kubernetes_namespace.dev.metadata[0].name
  }
  spec {
    hard = {
      "limits.cpu"       = "2"
      "limits.memory"    = "8Gi"
      "requests.storage" = "20Gi"
    }
  }
}
```

Then GitOps the app into that namespace:


```yaml
# apps/dev-jlee.yaml (ArgoCD)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: service-api-dev-jlee
spec:
  project: default
  destination:
    namespace: dev-jlee
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/acme/service-api
    path: deploy/helm
    targetRevision: main
    helm:
      values: |
        image:
          tag: dev-jlee-{{ sha }}
  syncPolicy:
    automated: { prune: true, selfHeal: true }
```

Wrap it with one command new hires can run after SSO:


```bash
make sandbox DEV_USER=jlee
```

Behind that target you can call terraform apply and commit an ArgoCD Application. No Jira tickets. No waiting on “the k8s guy.”
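
Concretely, the target can shell out to a small script. A sketch, where scripts/sandbox.sh and apps/dev-template.yaml are hypothetical names:

```bash
#!/usr/bin/env bash
# scripts/sandbox.sh -- what `make sandbox DEV_USER=jlee` drives end to end.
set -euo pipefail
DEV_USER="${1:?usage: sandbox.sh <dev_user>}"

# Cloud primitives: namespace + quota via the Terraform above.
terraform -chdir=terraform/k8s apply -auto-approve -var "dev_user=${DEV_USER}"

# App deploy: commit a per-dev ArgoCD Application and let GitOps converge it.
# (apps/dev-template.yaml is a placeholder-driven copy of the manifest above.)
sed "s/DEV_USER/${DEV_USER}/g" apps/dev-template.yaml > "apps/dev-${DEV_USER}.yaml"
git add "apps/dev-${DEV_USER}.yaml"
git commit -m "sandbox: create environment for ${DEV_USER}"
git push
```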

Secrets, access, and the 30‑minute wall

Most week-one pain is auth. Trade static keys and snowflake IAM for SSO + OIDC, then keep secrets out of laptops.

  • SSO/OIDC: Okta or Azure AD federated into AWS for humans; GitHub/GitLab OIDC for CI. No long-lived keys.

  • Session tools: `aws sso login`, `aws-vault`, or `gcloud auth login`.

  • Secrets: `sops` + KMS for repo-encrypted config; mount at runtime via Secrets Store CSI or doppler in dev.

AWS IAM trust for GitHub OIDC (no static keys):


```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:sub": "repo:acme/service-api:ref:refs/heads/main"
        }
      }
    }
  ]
}
```

Now onboarding steps are predictable:

  1. `brew install awscli aws-vault gnupg sops` (or it’s baked into the devcontainer).

  2. `aws sso login --profile acme-dev`.

  3. `make dev` and you’re running locally with secrets injected at runtime, never sitting on disk in plaintext.

If you must support local-only dev, standardize on docker compose and secrets.env generated from sops with a single command. No hand-edited .env lore.
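
That single command stays small with sops. Assuming an encrypted dotenv file, secrets.enc.env, is committed and a KMS key gates who can decrypt:

```bash
# Materialize dev secrets and feed them to compose; nothing hand-edited.
sops --decrypt secrets.enc.env > secrets.env
docker compose --env-file secrets.env up -d
```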

Docs that run themselves

Wikis rot. Keep docs next to code and executable:

  • README.md shows the make targets first, then links deeper.

  • docs/adr/* for architecture decisions—short, date-stamped.

  • scripts/* referenced by the Makefile; don’t paste bash into Confluence.

  • A thin developer portal like Backstage can aggregate Golden Paths later, but don’t start there. Start with a working repo template.

A living onboarding README template:


```markdown
# Getting Started

1. Install tooling: `make dev-setup`
2. Run locally: `make run`
3. Run tests: `make test`
4. Create sandbox: `make sandbox DEV_USER=$USER`

## Troubleshooting

- Docker memory >= 6GB
- `aws sso login --profile acme-dev` if auth fails

## Links

- ADRs
- Runbook (Grafana, alerts)
- SLO dashboard
```

If your “docs” can’t be copy/pasted into a terminal, they will drift.

Case study: fintech before/after

A mid-market fintech we worked with (SOC 2, heavy AWS) had a heroic-but-fragile onboarding. Tooling lived in bespoke bash, environments were shared, and AWS access took days. Their numbers before:

  • TTFPR: 9 days median.

  • First day to green tests: 6–8 hours.

  • New hire ticket volume: 7 per hire (access, env fixes).

What we shipped in 4 weeks:

  • Devcontainers + pinned toolchains; GitHub Codespaces optional.

  • Single Makefile across 18 repos; same targets and behavior.

  • Terraform module for per-dev k8s namespaces; ArgoCD Application per dev.

  • Okta -> AWS SSO with OIDC; killed long-lived keys and ~/.aws/credentials.

  • Secrets with sops + KMS; no plaintext .env in Slack.

  • Repo templates for services (HTTP API, batch worker) with observability baked in (OpenTelemetry, Prometheus metrics, basic SLOs).

After:

  • TTFPR dropped to 2 days (p95 = 4 days).

  • Day 1 local run in 45 minutes median.

  • New-hire tickets: <1 average. Security loved that long-lived AWS keys were gone.

  • Engineering managers stopped burning Fridays unblocking laptops and started reviewing code.

Total cost: ~4 engineer-weeks + some IAM work. Payback: the next 6 hires.

Measure and iterate like SREs

Treat onboarding as a product with SLOs:

  • SLO: 95% of new hires reach first PR in ≤ 3 business days.

  • SLI: time from first commit to merged PR with passing CI on a trunk-bound branch.

  • Error budget: 5% can exceed; do a retro when burned.

Instrumentation ideas:

  • A lightweight wizard (onboard.sh or a CLI) that logs anonymized timing to a metrics sink (sketched after this list).

  • Add an “Onboarding” label to issues; track per-hire ticket count.

  • Monthly flywheel: pick top 2 blockers, fix them, update the template and Makefile.
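
The wizard can be embarrassingly simple. A sketch; the metrics endpoint is an assumption, so point it at whatever sink you already run:

```bash
#!/usr/bin/env bash
# onboard.sh -- run each paved-road step and log anonymized timings.
set -euo pipefail

step() {
  local name="$1"; shift
  local start status
  start=$(date +%s)
  "$@" && status=ok || status=fail
  # https://metrics.internal/onboarding is a stand-in for your metrics sink.
  curl -fsS -X POST https://metrics.internal/onboarding \
    -d "{\"step\":\"${name}\",\"seconds\":$(( $(date +%s) - start )),\"status\":\"${status}\"}" \
    >/dev/null || true
}

step dev-setup make dev-setup
step db        make db
step test      make test
```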

Don’t forget the human glue: pair every new hire with a buddy; schedule 2 hours of “walking skeleton” time in week one where they ship a trivial change through the full pipeline (local -> PR -> CI -> canary) so the dopamine hits early.

The cost/benefit trade‑offs (and where to say no)

  • Devcontainers vs. BYO laptop setup: devcontainers win on consistency and support fewer permutations. Downside: Docker-in-Docker perf. Mitigate by caching deps and using Codespaces for heavy stacks.

  • ArgoCD/Flux vs. custom scripts: GitOps adds YAML and another control plane, but you pay once and stop re-solving drift, audit, and rollbacks. If you’re < 5 engineers, start with Terraform + kubectl apply in CI, graduate later.

  • Backstage day 1? No. Land the Makefile/devcontainer first. Backstage helps at scale, but it’s not a first-30-days move.

  • Bespoke glue: resist writing a magical onboarding script that only one staff engineer understands. Prefer named tools with docs and communities. Killing cleverness is how you scale.

  • AI-generated quickstarts: great for scaffolding, dangerous for security and drift. Run GitPlumbers-style “vibe code cleanup” before you put it into templates; add lint and policy-as-code to catch hallucinated flags and nonsense configs.

What to do in the next 30 days

  • Week 1: choose defaults (devcontainer, Makefile targets, secrets approach).

  • Week 2: ship repo templates and the first service on the paved road.

  • Week 3: wire SSO + OIDC, kill static keys, provision per-dev namespaces.

  • Week 4: measure TTFPR on two new hires; fix the top two blockers and repeat.

If you want a second pair of hands, GitPlumbers has run this playbook dozens of times. We’ll keep you honest, keep it boring, and get it done without inventing a new platform just to onboard people.


Key takeaways

  • Measure onboarding by time to first meaningful PR, not how many docs someone read.
  • Pick paved-road defaults: devcontainers + one Makefile + GitOps sandboxes + SSO access.
  • Automate everything a new hire needs to type in their first hour—no bespoke scripts.
  • Use GitOps for ephemeral environments so onboarding never files a ticket.
  • Keep docs executable and versioned with code; Backstage can help but start simple.
  • Instrument onboarding like an SRE: set SLOs, track TTFPR, and fix top failure modes monthly.

Implementation checklist

  • Define TTFPR (time to first meaningful PR) and set a target (<= 3 days).
  • Create a single `Makefile` with `make dev`, `make test`, `make db`, `make run`.
  • Adopt devcontainers or Codespaces with pre-baked toolchains.
  • Provision per-dev sandboxes via Terraform + ArgoCD; no tickets.
  • Use SSO (Okta/AAD) with OIDC for AWS/GCP access; remove static keys.
  • Store secrets with `sops` + KMS and load via `doppler`/`chamber`/`secrets-store-csi`.
  • Move setup into repo templates; generate services from Golden Path templates.
  • Instrument onboarding with a dashboard: time spent, blockers, buddy feedback.

Questions we hear from teams

What’s a realistic TTFPR target for an established team?
Aim for 2–3 business days for a meaningful PR. If you’re at > 1 week, you’re paying onboarding tax every hiring cycle. Start by getting Day-1 setup under 2 hours and remove all tickets from the critical path.
Do we need Backstage to create Golden Paths?
No. Start with repo templates plus a devcontainer and Makefile. Backstage helps at scale, but it’s an aggregator—not a prerequisite.
Should we standardize on Codespaces for everyone?
Not necessarily. Offer it as a paved-road default for heavy stacks or contractors, but keep local dev via devcontainers working. Choice, not chaos.
How do we keep secrets safe during onboarding?
Use SSO/OIDC to avoid long-lived cloud keys. Store configuration secrets with sops + KMS and inject at runtime (Secrets Store CSI, doppler). Never send .env files over Slack or email.
What about AI-generated quickstarts—time saver or landmine?
Both. Use AI to draft scaffolds, then run a code rescue pass: add policy-as-code, linting, and security scanning. Don’t let hallucinated flags or insecure defaults leak into your templates.

Ready to modernize your codebase?

Let GitPlumbers help you transform AI-generated chaos into clean, scalable applications.

Talk to GitPlumbers about your paved-road onboarding, or see our platform templates and checklists.
