
Docker Env Vars: My Container Refused to Start
r5yn1r4143
2w ago
Okay, so picture this: it was late, the kind of late where your brain feels like it’s running on fumes and instant coffee. I was super excited to finally get this new microservice deployment rolling on Docker. Everything looked perfect. I’d spun up the container, hit docker ps, and… nothing. Just a ghost container, a fleeting moment of existence before it decided to check out. My terminal proudly displayed: Exited (1) 2 seconds ago.
My first thought was, "Uh oh. What did I break this time?" This wasn't some complex Kubernetes cluster failing, just a single Docker container. How hard could it be? Famous last words, right? This seemingly simple "Container just won't start" problem turned into a deep dive into something so fundamental, yet so easy to overlook: environment variables.
TL;DR: The Sneaky Culprit
If your Docker container is exiting immediately with a generic error like Exited (1), and you've checked the obvious stuff (image build, dependencies), seriously consider your environment variables. A typo, a missing value, or an unexpected format can be the silent killer. Use docker logs <container_id> to get actual error messages from your application.
The "Hello, World!" That Wasn't
My microservice was a simple Node.js app. It needed a database connection string, an API key, and a port number. Standard stuff. I’d defined them in my docker-compose.yml file like this:
```yaml
version: '3.8'
services:
  my-app:
    image: my-custom-node-app:latest
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: "postgres://user:password@db:5432/mydatabase"
      API_KEY: "supersecretapikey123"
      PORT: 3000
```
I ran docker-compose up -d. The little whale icon appeared, did a little dance, and then promptly disappeared. Back to docker ps -a to see my sad, exited container. I tried docker logs <container_id> (where <container_id> is the ID of my failed container).
The output was… cryptic. Something like:
```
Error: Missing required configuration value: DATABASE_URL
```
Or sometimes, depending on how the app was written, it might have been even less helpful:
```
internal/modules/run_main.js:XX:XX
    throw err;
    ^

Error: Cannot find module './config'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:XX:XX)
    ...
```
This second one was the real kicker. It wasn't even getting to the point of checking for variables; it was crashing earlier because its configuration loading mechanism failed. Why? Because it expected a specific structure for the config, which it didn't get due to the environment variable issue.
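For context, a fail-fast config loader along these lines (a hypothetical sketch, not my actual code, with made-up names like `requireEnv` and `loadConfig`) produces exactly the first kind of error:

```javascript
// Hypothetical sketch of a fail-fast config loader.
// Reads required values from process.env and throws immediately if any
// are missing, so the container exits with a clear message in the logs.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required configuration value: ${name}`);
  }
  return value;
}

function loadConfig() {
  return {
    databaseUrl: requireEnv('DATABASE_URL'),
    apiKey: requireEnv('API_KEY'),
    port: requireEnv('PORT'),
  };
}
```

Failing loudly like this is the good case: you get a named variable in `docker logs`. The second trace is what happens when the config loading itself blows up before any such check runs.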
Why This Happens (Beyond the Obvious)
It's not just about having the environment variables; it’s about how they are presented to the container.
- Typos in variable names. DATABASE_URL versus, say, DATABSE_URL. Or maybe DB_URL. Even a single character difference will make your app think the variable simply doesn't exist.
- Strings versus numbers. My PORT variable was set to 3000 (a number). Node.js reads environment variables as strings, so inside my app, process.env.PORT was actually "3000". While this often works, some libraries and frameworks are strict and expect a number. If my app tried parseInt(process.env.PORT, 10) and it failed, or if it expected a number for a port binding and got a string, that could cause issues. In my case the app was fine with the string, but it's a common pitfall.
- Quoting and special characters. In docker-compose.yml, YAML handles some of this, but the variable as received by the container still needs to be correct. For example, if I had a secret that looked like mysecret=value with spaces!, and I didn't quote it correctly in the format the application expected, it could break.
- Injection timing. docker-compose starts the container and injects the environment variables at runtime. If your ENTRYPOINT or CMD script in your Dockerfile depends on a variable being set before the script even runs its main logic, a delay or an error in variable injection could cause a premature exit.
- External config files. Some setups mount configuration files (like .env files) into the container. If the path to the .env file is wrong, or if the .env file itself has syntax errors (e.g., unquoted values with spaces), your app might fail to load its config.

Debugging Nightmares and How to Win
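The string-versus-number pitfall is easy to guard against explicitly. A small sketch (hypothetical helper, not from my app):

```javascript
// Hypothetical guard against the string/number pitfall.
// process.env values are always strings; coerce and validate before use.
function parsePort(raw, fallback = 3000) {
  if (raw === undefined) return fallback; // variable not set at all
  const port = Number.parseInt(raw, 10);
  if (Number.isNaN(port) || port < 1 || port > 65535) {
    throw new Error(`Invalid PORT value: "${raw}"`);
  }
  return port;
}

const port = parsePort(process.env.PORT); // always a number from here on
```

Validating once at startup means the rest of the app never has to wonder which type it's holding.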
So, how do you tackle this beast? It’s all about systematic debugging.
Step 1: Get the Real Error Message
Forget docker ps. Your first port of call is always docker logs.
```bash
docker logs <your_container_id_or_name>
```
This will dump the standard output and standard error streams from your container. If your application logs its startup errors, this is where you'll find them. If it’s a generic Exited (1), maybe your application isn't logging errors properly. That’s a separate problem for another day (and another Oops IT article!).
Step 2: Inspect the Container's Environment
Docker provides a handy way to see exactly what environment variables a running container has. While mine was exiting immediately, I could temporarily modify my docker-compose.yml to keep it alive longer or inspect a similar container that did manage to start.
```yaml
version: '3.8'
services:
  my-app:
    image: my-custom-node-app:latest
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: "postgres://user:password@db:5432/mydatabase"
      API_KEY: "supersecretapikey123"
      PORT: 3000
    # TEMPORARY FOR DEBUGGING: Keep container running indefinitely
    command: tail -f /dev/null
```

Once the container stays up, docker exec <container_id> env prints every environment variable exactly as the running process sees them.
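You can also inspect from inside the application itself. A startup snippet like this (hypothetical helper, deliberately masking values so secrets don't end up in docker logs) reports what the process actually received:

```javascript
// Hypothetical startup debug helper: report which expected environment
// variables are present, masking the values so secrets stay out of logs.
function reportEnv(expected) {
  return expected.map((name) => {
    const value = process.env[name];
    const status =
      value === undefined ? 'MISSING' : `set (${value.length} chars)`;
    return `${name}: ${status}`;
  });
}

console.log(reportEnv(['DATABASE_URL', 'API_KEY', 'PORT']).join('\n'));
```

A "MISSING" line here, printed before anything else at startup, would have saved me a lot of squinting at stack traces.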