Internal Developer Portals That Actually Ship: Paved Roads, Not Pet Projects

Stop building bespoke portals nobody uses. Ship a paved road with self‑service that cuts lead time and keeps you out of plugin hell.

A portal is successful the day it becomes the fastest, safest way to ship—not the day it reaches plugin parity with your vendors.

The $1M Portal Nobody Used

I watched a unicorn spend a year “building the developer experience.” They rolled out a Backstage fork with 40 plugins, a custom service graph, three ways to deploy, and no SSO on day one. Adoption was 8%. Most teams bookmarked their old runbooks and ignored the shiny portal.

What finally moved the needle wasn’t another widget. It was a paved road: one opinionated way to create a service, one way to provision infra, one way to ship to prod. Lead time dropped from days to hours, and the portal became the front door instead of a museum.

If your portal isn’t the fastest path to value, devs will route around it.

What an Internal Developer Portal Is (and Isn’t)

An IDP is the self‑service front end for your platform. It’s not a bespoke control plane. Done right, it:

  • Provides a service catalog with ownership, oncall, SLOs, and runbooks.
  • Offers self‑service workflows: create a service, request infra, rotate secrets.
  • Enforces paved‑road defaults: CI/CD, observability, cost tags, security baselines.
  • Writes everything as code via PRs—no snowflakes.

What it isn’t:

  • A CMDB replacement. Treat catalog data as derived from repos, runtime, and IaC.
  • A plugin playground. Plugins without adoption are liabilities.
  • A second orchestrator. Delegate to GitHub Actions, ArgoCD, Terraform, Vault.

The Opinionated Stack That Actually Ships

I’ve seen this combo work at mid‑size and enterprise scale without a seven‑figure burn:

  • Identity: Okta or Azure AD SSO into everything.
  • Source/CI: GitHub + GitHub Actions with OIDC to cloud (no long‑lived keys).
  • Portal: Backstage (catalog + Scaffolder only at first).
  • Infra: Terraform modules (or Terraform Cloud) with GitOps-style promotion.
  • Deploy: ArgoCD + Helm to Kubernetes. Reconcile, don’t push.
  • Secrets: HashiCorp Vault or cloud secret manager.
  • Observability: Prometheus/Grafana + Loki + Tempo or vendor equivalent.

Minimal app-config.yaml for Backstage to keep you out of yak‑shaving:

# backstage/app-config.yaml
app:
  title: Company Portal
integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}
auth:
  providers:
    github:
      development:
        clientId: ${GITHUB_CLIENT_ID}
        clientSecret: ${GITHUB_CLIENT_SECRET}
catalog:
  locations:
    - type: url
      target: https://github.com/acme/catalog/blob/main/catalog-info.yaml
      rules:
        - allow: [Component, System, API]
scaffolder:
  defaultAuthor:
    name: platform-bot
    email: platform@acme.com
kubernetes:
  serviceLocatorMethod:
    type: multiTenant

Keep it boring. Ship the catalog and Scaffolder first; resist the urge to wire in every vendor dashboard day one.

Before/After: Creating a Service the Old Way vs the Portal

Before (real numbers from a payments client, 40 teams):

  • 17 manual steps across four docs; 2–3 days elapsed.
  • 4 different CI patterns; infra tickets sat in a queue for a week.
  • New services missing alerts, SLOs, or cost tags ~60% of the time.

After (same org, 90 days later):

  • One Scaffolder template; 45 minutes to first passing CI, same day deploy.
  • Infra via Terraform PR auto‑generated; platform team only reviews guardrails.
  • Every service ships with SLOs, alerts, and cost center tags.

Template excerpt that does the real work:

# templates/create-service/template.yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: golden-path-http-service
  title: "Golden Path: HTTP Service (Node or Go)"
  description: Creates a repo, CI/CD, Helm chart, ArgoCD App, SLOs, and Terraform infra stub.
spec:
  owner: platform
  parameters:
    - title: Service Basics
      required: [name, language, domain, costCenter]
      properties:
        name:
          type: string
          description: service repo name
        language:
          type: string
          enum: [node, go]
        domain:
          type: string
          enum: [payments, risk, core]
        costCenter:
          type: string
  steps:
    - id: fetch
      name: Fetch Skeleton
      action: fetch:template
      input:
        url: ./skeletons/${{ parameters.language }}
        values:
          name: ${{ parameters.name }}
          domain: ${{ parameters.domain }}
    - id: publish
      name: Create GitHub Repo
      action: publish:github
      input:
        repoUrl: github.com?owner=acme&repo=${{ parameters.name }}
        defaultBranch: main
        repoVisibility: private
    - id: ci
      name: Add CI
      action: github:actions:dispatch
      input:
        workflowId: bootstrap-ci.yml
        repoUrl: github.com?owner=acme&repo=${{ parameters.name }}
        branchOrTagName: main
    - id: argocd
      name: Register with ArgoCD
      # fs:write is not a core Scaffolder action; assumes a community module
      # (e.g. roadiehq:utils:fs:write) or an equivalent custom action
      action: fs:write
      input:
        files:
          - path: argocd/app.yaml
            contents: |
              apiVersion: argoproj.io/v1alpha1
              kind: Application
              metadata:
                name: ${{ parameters.name }}
              spec:
                project: default
                source:
                  repoURL: https://github.com/acme/${{ parameters.name }}.git
                  path: deploy/helm
                  targetRevision: main
                destination:
                  server: https://kubernetes.default.svc
                  namespace: ${{ parameters.domain }}
    - id: terraform
      name: Infra Module Stub
      action: fs:write
      input:
        files:
          - path: infra/main.tf
            contents: |
              module "service" {
                source      = "git::https://github.com/acme/tf-mod-service.git"
                name        = "${{ parameters.name }}"
                cost_center = "${{ parameters.costCenter }}"
                env         = var.env
              }
    - id: register
      name: Register in Catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml

And the baked-in CI that works out of the gate:

# .github/workflows/ci.yml
name: ci
on:
  push:
    branches: [main]
  pull_request: {}
permissions:
  id-token: write
  contents: write   # create-pull-request pushes a branch
  packages: write   # push images to GHCR
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20' }
      - run: npm ci && npm test
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: ${{ github.event_name == 'push' }}  # build-only on PRs
          tags: ghcr.io/acme/${{ github.event.repository.name }}:${{ github.sha }}
      - name: Update Helm values (image tag)
        if: github.event_name == 'push'
        run: |
          yq -i '.image.tag = "${{ github.sha }}"' deploy/helm/values.yaml
      - name: Create deploy PR
        if: github.event_name == 'push'
        uses: peter-evans/create-pull-request@v6
        with:
          title: "chore: deploy ${{ github.sha }}"
          commit-message: "chore: deploy ${{ github.sha }}"

No tickets. No copy/paste. Every change auditable.
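One detail worth calling out: without automated sync, a merged deploy PR still waits on someone clicking Sync in Argo CD. A hedged addition to the Application spec the template writes (field names per Argo CD's Application CRD):

```yaml
# append under the Application's spec: to make "reconcile, don't push" real
syncPolicy:
  automated:
    prune: true      # remove resources deleted from Git
    selfHeal: true   # revert manual cluster drift back to the Git state
  syncOptions:
    - CreateNamespace=true
```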

Governance Baked Into the Path (Without a Police State)

Scorecards beat policy docs. Make the golden path pass/fail obvious in the portal.

  • Require oncall, SLO, runbook, and costCenter in catalog-info.yaml.
  • Auto‑generate alert routes and dashboards per service.
  • Use OPA or scorecards to block deploys missing baselines.

Example catalog-info.yaml your template writes on day one:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api
  tags: [http, node]
  annotations:
    github.com/project-slug: acme/payments-api
    argocd/app-name: payments-api
    cost-center: FIN-123
    pagerduty/service-id: P12345
spec:
  type: service
  owner: payments-team
  system: payments
  lifecycle: production

Simple OPA policy to enforce annotations before deploy:

package acme.guardrails

required := {"cost-center", "pagerduty/service-id", "github.com/project-slug"}

violation[msg] {
  some k
  required[k]
  not input.metadata.annotations[k]
  msg := sprintf("missing required annotation: %s", [k])
}

Run that in CI or as an ArgoCD admission webhook to keep you honest.
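For the CI route, one way to wire it up is conftest, which evaluates Rego against YAML files; a sketch as an Actions job (the policy directory and workflow name are assumptions, `--namespace` selects the acme.guardrails package):

```yaml
# .github/workflows/guardrails.yml - fail PRs missing required annotations
name: guardrails
on: pull_request
jobs:
  policy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check catalog annotations with conftest
        run: |
          docker run --rm -v "$PWD":/project -w /project \
            openpolicyagent/conftest test catalog-info.yaml \
            --policy policy/ --namespace acme.guardrails
```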

What to Build First (and What to Skip)

Build first:

  1. Create Service template (HTTP API + background worker variants).
  2. Request Managed DB (RDS/Cloud SQL) via Terraform module with sane defaults.
  3. Rotate Secret workflow that writes to Vault and refreshes app pods.
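The rotate-secret flow can stay click-driven yet auditable by dispatching a workflow instead of handing out Vault access. A sketch, assuming Vault and cluster auth are already configured on the runner (the secret path and deployment name are illustrative; `vault kv put` and `kubectl rollout restart` are the real CLIs):

```yaml
# .github/workflows/rotate-secret.yml - triggered from the portal
name: rotate-secret
on:
  workflow_dispatch:
    inputs:
      service:
        required: true
jobs:
  rotate:
    runs-on: ubuntu-latest
    steps:
      - name: Write new secret to Vault
        run: |
          NEW_SECRET=$(openssl rand -base64 32)
          vault kv put "secret/${{ inputs.service }}/api-key" value="$NEW_SECRET"
      - name: Restart pods to pick up the new value
        run: kubectl rollout restart "deployment/${{ inputs.service }}"
```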

Skip (for now):

  • Fancy service graph visualizations no one uses during incidents.
  • Five deployment options. Pick one. Reduce cognitive load.
  • Vendor plugin zoo. Start with links to existing dashboards.
  • Custom runtime schedulers. You don’t need to outsmart Kubernetes.

A thin Terraform module example for DB self‑service:

# modules/db/main.tf
resource "aws_db_instance" "this" {
  engine                  = "postgres"
  instance_class          = var.instance_class
  allocated_storage       = var.allocated_storage
  db_name                 = var.name
  username                = var.username
  password                = var.password
  backup_retention_period = 7
  tags = {
    CostCenter = var.cost_center
    Service    = var.service
    Owner      = var.owner
  }
}

Expose it via a portal workflow that opens a Terraform PR. No tickets, no snowflakes.
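The "opens a Terraform PR" part maps cleanly onto Scaffolder steps; a sketch, assuming a shared infrastructure repo and using Backstage's GitHub module action for PRs (repo names and the skeleton path are assumptions):

```yaml
# templates/request-db/template.yaml (excerpt) - render the module call,
# then open a PR against the infra repo instead of provisioning directly
steps:
  - id: tfvars
    name: Render DB module call
    action: fetch:template
    input:
      url: ./skeleton   # assumed to contain infra/db.tf templated on these values
      values:
        name: ${{ parameters.name }}
        instance_class: db.t4g.small
  - id: pr
    name: Open Terraform PR
    action: publish:github:pull-request
    input:
      repoUrl: github.com?owner=acme&repo=infrastructure
      branchName: db-${{ parameters.name }}
      title: "feat: database for ${{ parameters.name }}"
      description: Requested via the portal; platform reviews guardrails only.
```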

Measure Impact Like a Product Team

If you can’t show the business impact, the portal budget will evaporate next planning cycle. Track:

  • DORA metrics: lead time, change failure rate, MTTR.
  • Time to first PR for new service creation.
  • Adoption: percent of services created via the portal; percent on paved road.
  • Guardrail coverage: services with SLOs, alerts, cost tags.
  • Infra wait time: time from request to provision (target: hours, not weeks).

Example lightweight tracking with Backstage actions emitting events:

# emit an event when a template completes
curl -X POST https://events.acme.internal/portal \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "event": "scaffold.completed",
    "service": "payments-api",
    "duration_ms": 2700000,
    "template": "golden-path-http-service"
  }'

Roll that into a weekly dashboard and show deltas. At one fintech, paved‑road adoption hit 82% in two quarters and cut lead time from 3.1 days to 4.6 hours. CFO noticed the drop in infra tickets and approved more headcount. Funny how that works.

Common Failure Modes (And How to Avoid Them)

  • Bespoke plugin trap: Every “just a small plugin” becomes a maintenance tax. Counter: establish a plugin freeze; new features must displace an old one.
  • Half‑baked auth: If SSO isn’t wired on day one, you lose the launch. Counter: ship with Okta/AAD SSO and GitHub auth working, period.
  • Two ways to deploy: You think you’re flexible; you’re fragmenting. Counter: one deploy path, with progressive delivery (canaries) if needed.
  • Treating the portal like a project, not a product: No SLOs, no roadmap. Counter: assign a PM, publish an SLA, run quarterly adoption reviews.
  • No exit ramps: Teams with valid exceptions get blocked. Counter: publish a documented escape hatch with risk sign‑off and a path back to the road.
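On "one deploy path with progressive delivery": Argo Rollouts layers canaries onto the same GitOps flow without adding a second path. A minimal strategy sketch (weights and pause durations are illustrative):

```yaml
# Rollout canary strategy (Argo Rollouts CRD) - swaps in for the Deployment
# while the image bump still arrives via the same Git-driven deploy PR
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: payments-api
spec:
  strategy:
    canary:
      steps:
        - setWeight: 10            # shift 10% of traffic to the new version
        - pause: {duration: 5m}    # watch SLOs before continuing
        - setWeight: 50
        - pause: {duration: 5m}
```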

Remember: the goal isn’t a beautiful portal. It’s safer, faster shipping.

Where GitPlumbers Fits

We parachute into platforms that stalled at “Backstage installed, now what?” We cut scope, wire SSO, ship two golden paths, and measure. We’ve done this at fintech, healthcare, and gaming—under enterprise controls without losing speed.

If you need someone who’s been on the receiving end of pager duty and procurement, not just demos, we should talk.


Key takeaways

  • Favor an opinionated paved road over bespoke plugins. Ship three workflows first: create service, provision infra, wire CI/CD.
  • Treat the portal as a product. Own SLOs, a roadmap, and success metrics (lead time, MTTR, adoption).
  • Use common rails (GitHub/Auth, Backstage, Terraform, ArgoCD, Vault) and lean integrations. Avoid rewriting control planes.
  • Bake governance into the templates: tags, SLOs, alerts, cost centers, security baselines.
  • Make self‑service idempotent and auditable—every click results in Git commits and PRs, not snowflake drift.
  • Measure impact with before/after numbers and kill features users don’t adopt within one quarter.

Implementation checklist

  • Pick a single identity provider and SSO everything (Okta/AAD/GitHub).
  • Stand up a minimal Backstage with service catalog + Scaffolder only.
  • Publish 1–2 golden-path templates (HTTP API + worker) with batteries included.
  • Codify CI/CD with GitHub Actions and GitOps deploys via ArgoCD.
  • Expose 2–3 infra modules via self‑service (DB, queue, bucket) using Terraform.
  • Bake in scorecards (SLOs, alerts, oncall, cost tags) from day one.
  • Instrument end‑to‑end: time‑to‑first‑PR, lead time, change failure rate.
  • Staff the platform with product skills; say no to plugin sprawl.

Questions we hear from teams

Why Backstage and not build our own?
Because you don’t get points for building a UI framework. Backstage gives you a well‑trodden scaffolder and catalog. Keep it minimal, resist plugin sprawl, and invest where it matters: your templates and guardrails.
How do we handle teams that need exceptions?
Publish an escape hatch with risk sign‑off and time‑boxed exceptions. Require an issue with owner, rationale, and a plan to rejoin the paved road. Make it easier to return than to stay bespoke.
What about non‑Kubernetes shops?
Swap ArgoCD/Helm for your deploy primitive (e.g., ECS with GitHub Actions, Spinnaker, Octopus). The pattern stays: PR‑driven infra, a single deployment path, and templates that encode guardrails.
Won’t this slow down innovation?
The opposite. Constraints reduce cognitive load. Teams move faster when the happy path is one click. Innovation happens at the edge of the paved road, not in reinventing CI or IAM every sprint.
How do we keep the portal from becoming a ticket machine?
Make every workflow idempotent and PR‑based. If a click can’t produce a commit, it’s not self‑service. Kill any flow that needs manual hand‑offs.

Ready to modernize your codebase?

Let GitPlumbers help you transform AI-generated chaos into clean, scalable applications.

Talk to an engineer about your portal roadmap
See how we ship paved roads in 90 days
