The Twelve-Factor App
Session 1.10 · ~5 min read
Origin and Purpose
The Twelve-Factor App methodology was written by Adam Wiggins, co-founder of Heroku, and published in 2011. It emerged from observing hundreds of applications deployed on Heroku's platform and distilling the patterns that separated applications that scaled cleanly from those that broke under pressure.
The methodology is not tied to any language, framework, or cloud provider. It describes twelve principles for building software-as-a-service applications that are portable, resilient, and deployable on modern cloud platforms. Fifteen years later, these principles remain the baseline for cloud-native application design.
The original document lives at 12factor.net and is worth reading in full. This session summarizes each factor, explains why it matters, and identifies the most common way teams violate it.
The Twelve Factors
| # | Factor | Principle | Common Violation |
|---|---|---|---|
| 1 | Codebase | One codebase tracked in version control, many deploys | Separate repos for staging and production with copy-pasted code |
| 2 | Dependencies | Explicitly declare and isolate dependencies | Relying on system-level packages that are not in the dependency manifest |
| 3 | Config | Store configuration in the environment | Hardcoding database URLs, API keys, or feature flags in source code |
| 4 | Backing Services | Treat backing services as attached resources | Assuming the database is on localhost and will always be there |
| 5 | Build, Release, Run | Strictly separate build and run stages | SSHing into production to edit code or apply patches directly |
| 6 | Processes | Execute the app as one or more stateless processes | Storing user sessions in local memory instead of an external store |
| 7 | Port Binding | Export services via port binding | Requiring an external web server (Apache, IIS) to be pre-installed |
| 8 | Concurrency | Scale out via the process model | Running everything in a single monolithic process with threads only |
| 9 | Disposability | Maximize robustness with fast startup and graceful shutdown | Processes that take minutes to start or lose in-flight work on shutdown |
| 10 | Dev/Prod Parity | Keep development, staging, and production as similar as possible | Using SQLite in development but PostgreSQL in production |
| 11 | Logs | Treat logs as event streams | Writing logs to local files on disk instead of stdout |
| 12 | Admin Processes | Run admin/management tasks as one-off processes | Running database migrations by manually connecting to production |
Deep Dive: The Factors That Trip People Up
Factor 3: Config
Configuration is everything that varies between deploys: database credentials, API keys, feature flags, third-party service URLs. The twelve-factor app stores these in environment variables, not in code.
This sounds obvious, but violations are everywhere. A config.py file with DATABASE_URL = "postgres://prod-server:5432/mydb" committed to the repo. A .env file checked into version control. An application that reads from a YAML file baked into the Docker image.
The test is simple: could you open-source the codebase right now without exposing any credentials or environment-specific values? If not, your config is not properly externalized.
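The pattern can be sketched in a few lines. This is an illustrative sketch, not a prescribed implementation: the variable names (DATABASE_URL, LOG_LEVEL, FEATURE_NEW_CHECKOUT) are placeholders for whatever your application actually needs.

```python
import os

def load_config():
    """Read deploy-specific settings from the environment.

    Variable names here are illustrative placeholders.
    """
    try:
        return {
            # Required: fail fast at startup if missing, rather than
            # silently falling back to a baked-in default
            "database_url": os.environ["DATABASE_URL"],
            # Optional, non-secret settings can have safe defaults
            "log_level": os.environ.get("LOG_LEVEL", "INFO"),
            "feature_new_checkout": os.environ.get(
                "FEATURE_NEW_CHECKOUT", "false"
            ).lower() == "true",
        }
    except KeyError as missing:
        raise RuntimeError(
            f"Missing required environment variable: {missing}"
        ) from missing
```

Note the asymmetry: secrets and connection strings are required and fail loudly when absent, while harmless tunables get defaults. Either way, nothing environment-specific lives in the source tree.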
Factor 6: Processes
Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a backing service (database, cache, object store). This means no sticky sessions, no in-memory caches that cannot be lost, and no local file storage that other processes need to read.
This factor is what makes horizontal scaling possible. If each process is stateless, you can add or remove instances at will. A load balancer can send any request to any instance. If an instance crashes, no data is lost because there was no data on that instance to begin with.
The most common violation is storing session data in process memory. It works fine with a single server. The moment you add a second server behind a load balancer, users lose their sessions when requests hit a different instance.
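The fix is to move session state into a shared backing service. A minimal sketch of the idea, using a plain dict as a stand-in for an external store such as Redis (in a real deployment, every instance would connect to the same external service):

```python
import json
import uuid

class SessionStore:
    """Session storage backed by a shared service.

    The dict backend is a stand-in for an external store; the point
    is that state lives outside any single app process.
    """

    def __init__(self, backend=None):
        self.backend = backend if backend is not None else {}

    def create(self, data):
        # Serialize on write, as a real network-attached store would require
        session_id = str(uuid.uuid4())
        self.backend[session_id] = json.dumps(data)
        return session_id

    def load(self, session_id):
        raw = self.backend.get(session_id)
        return json.loads(raw) if raw is not None else None

# Two app "instances" sharing one backing store: a request created on
# one instance is visible to the other, so a load balancer can route
# any request anywhere.
shared = {}
instance_a = SessionStore(backend=shared)
instance_b = SessionStore(backend=shared)
session_id = instance_a.create({"user": "alice"})
restored = instance_b.load(session_id)
```

Because neither instance holds anything the other cannot see, either one can crash or be replaced without losing a session.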
Factor 9: Disposability
Processes should start fast and shut down gracefully. Fast startup means new instances can be spun up quickly in response to load. Graceful shutdown means the process finishes in-flight requests, releases resources, and exits cleanly when it receives a SIGTERM signal.
This factor matters because cloud platforms routinely start and stop instances. Auto-scaling groups add and remove instances based on load. Kubernetes reschedules pods across nodes. Spot instances can be terminated with as little as 30 seconds' notice. If your process takes five minutes to start or drops connections on shutdown, these operations cause user-facing errors.
Factor 10: Dev/Prod Parity
The gap between development and production environments should be as small as possible. This means the same operating system, the same database engine (not just the same type), the same message queue, and the same cache. Docker and containerization have made this dramatically easier. You define your stack once in a Dockerfile and docker-compose.yml, and every developer runs the same environment.
The classic violation is using in-memory substitutes during development. H2 instead of PostgreSQL. A local directory instead of S3. A synchronous function call instead of a message queue. These substitutions hide bugs that only appear in production, where the real services behave differently.
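A docker-compose file is the usual way to get the real services into development. This is an illustrative sketch: the service names, image tag, port, and credentials are all placeholders for your own stack.

```yaml
# docker-compose.yml -- illustrative sketch; names, tags, and
# credentials are placeholders
services:
  app:
    build: .
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16   # same engine and major version as production
    volumes:
      - dbdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app

volumes:
  dbdata:
```

Pinning the same major version as production matters: "PostgreSQL everywhere" still hides bugs if development runs a different major release than production.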
How the Factors Connect
The twelve factors are not independent. They reinforce each other. Stateless processes (Factor 6) only work if config is externalized (Factor 3) and backing services are treated as attached resources (Factor 4). Fast startup (Factor 9) requires that dependencies are explicitly declared and isolated (Factor 2) so that the environment can be set up predictably. Dev/prod parity (Factor 10) is easier when config lives in environment variables (Factor 3) rather than in environment-specific files.
When teams violate one factor, the violations tend to cascade. If config is hardcoded, dev/prod parity breaks. If processes are stateful, horizontal scaling fails. If the build and run stages are not separated, you end up patching production directly, which violates disposability because you cannot recreate the environment from scratch.
Twelve-Factor in 2026
The original methodology was written before Docker (2013), Kubernetes (2014), and the serverless movement (2015+). Some factors, like port binding, feel obvious now because modern frameworks default to self-contained HTTP servers. Others, like treating logs as event streams, are baked into platform expectations (CloudWatch, Datadog, and ELK all assume log streams, not log files).
The methodology was open-sourced to evolve with the community. New considerations, such as health check endpoints, circuit breakers, and observability, extend the original twelve factors but do not replace them. The foundation remains solid.
Further Reading
- Adam Wiggins, The Twelve-Factor App. The original reference. Read each factor page; they are short and precise.
- Twelve-Factor App Methodology, Wikipedia. Background, history, and adoption context.
- IBM Developer, "Creating Cloud-Native Applications: 12-Factor Applications". Practical application of all twelve factors in a Java context.
- Pradeep Loganathan, "12 Factor App: The Complete Guide to Building Cloud-Native Applications". Comprehensive walkthrough with modern examples.
Assignment
Pick any application you have built or worked on. It can be a side project, a work codebase, or even a tutorial project. Score it on each of the twelve factors using this scale:
- 0 = Factor is violated (e.g., config is hardcoded, logs go to local files)
- 1 = Partially followed (e.g., most config is externalized but some secrets are in code)
- 2 = Fully followed
Create a table with columns: Factor, Score (0-2), Evidence (one sentence explaining your score).
- What is your total score out of 24?
- Which factor has the lowest score? What would it take to fix it?
- Which factor was the hardest to evaluate? Why?
Most applications score between 10 and 16 on their first assessment. A perfect 24 is rare. The point is not to achieve perfection but to identify where the gaps are and what risks they create.