Fix PM2 Showing “Online” But Nothing Is Listening (Nginx 502 Error)

A production Node.js application suddenly started returning 502 Bad Gateway errors while PM2 continued showing the services as “online.” Here’s how we traced the real cause and restored the server safely.

PM2 Showed “Online” — But the Website Was Down

One of my clients contacted me about a production outage on a custom Node.js application running behind Nginx. The main website was returning a 502 Bad Gateway error, and at the same time the browser occasionally showed SSL-related warnings.

The server was running Nginx as the public-facing reverse proxy, Apache alongside it, and the Node.js applications under PM2.

At first glance, this looked like a normal reverse proxy issue under our Web Server Errors category. But after checking the services, it became clear the problem was deeper.

The interesting part was this:

PM2 claimed the frontend application was “online,” yet nothing was actually listening on the expected frontend port.

That misleading PM2 state was the key to the entire outage.

Problem Summary

The client reported a 502 Bad Gateway error on the main website, along with occasional SSL-related warnings in the browser.

The application stack looked like this:

    Internet → Nginx → Node.js applications managed by PM2

Before touching anything, I wanted to verify whether the problem was actually Nginx or if the upstream applications themselves had failed.
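For context, a minimal, hypothetical Nginx configuration for this kind of stack looks like the following. The ports 5000 (backend) and 5173 (frontend) are the ones this server used; your proxy_pass targets and locations may differ:

```nginx
# Sketch of a typical reverse-proxy setup: Nginx terminates TLS and
# forwards traffic to the Node.js processes PM2 is supposed to keep alive.
# ssl_certificate / ssl_certificate_key directives omitted for brevity.
server {
    listen 443 ssl;
    server_name yourdomain.com;

    # Frontend application
    location / {
        proxy_pass http://127.0.0.1:5173;
        proxy_set_header Host $host;
    }

    # Backend API
    location /api/ {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }
}
```

If nothing is listening on those ports, every proxy_pass attempt fails with a connection refused error and Nginx answers 502 Bad Gateway, which is exactly what the client was seeing.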

Verify Nginx and Apache Status

The first thing I checked was whether the web servers themselves were healthy.

    systemctl status nginx --no-pager -l

Nginx was running normally.

Then I checked Apache:

    systemctl status apache2 --no-pager -l

Apache was also healthy.

Since both web servers were operational, the next logical step was checking the backend applications behind Nginx.

Check PM2 Application Status

Next, I checked PM2:

    pm2 list

Output looked similar to this:

    ┌────┬───────────┬───────────┐
    │ id │ name      │ status    │
    ├────┼───────────┼───────────┤
    │ 0  │ client    │ online    │
    │ 1  │ api       │ errored   │
    └────┴───────────┴───────────┘

At first glance, only the api process looked broken: the client process still showed online.

But PM2 status alone is not enough.

A process can appear “online” while the actual application behind it is dead.

So instead of trusting PM2 blindly, I checked the listening ports directly.

Verify Listening Ports

This was the most important troubleshooting step.

    ss -lntp | grep -E ':5000|:5173'

Nothing was listening.

That immediately confirmed that both services were down: nothing was bound to the backend port 5000, and nothing was bound to the frontend port 5173, even though PM2 reported the frontend as online.

This is why Nginx was returning 502 Bad Gateway.

At this stage, we knew the problem was not Nginx itself. The upstream Node.js services were failing.
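When pm2 list disagrees with what the ports show, it helps to inspect what PM2 is actually executing for each process. Two standard PM2 commands do this (the process name api matches the listing above):

```shell
# Show the script path, working directory (cwd), interpreter and restart
# count PM2 has recorded for a process. A wrong "cwd" or stale script
# path here is a common cause of apps that fail while looking "online".
pm2 describe api

# The full process table as JSON, useful for scripting or close reading:
pm2 jlist
```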

Check Nginx Error Logs

To confirm the upstream failure, I checked the Nginx error log:

    tail -n 50 /var/log/nginx/error.log

The logs showed:

    connect() failed (111: Connection refused)
    no live upstreams

That confirmed Nginx was healthy but could not reach the backend services.

Because the API process was marked as errored in PM2, the next step was checking the application logs.

Inspect PM2 Logs

I checked the PM2 logs:

    pm2 logs --lines 100

The backend was failing with:

    Error: Configuration property "mongoURI" is not defined

There was also a warning similar to:

    WARNING: No configurations found in configuration directory

Initially, this looked like a missing or deleted configuration file, as if the application's config directory or environment had been lost.

But before changing anything, I wanted to verify whether the application itself could still run manually.

Start the Backend Manually

I moved into the backend application directory and started the app directly:

    cd /root/appname/server
    node server.js

The application started immediately:

    Server up and running on port 5000 !
    MongoDB successfully connected
			

This completely changed the diagnosis.

The application itself was healthy.

The real issue was PM2.

More specifically, the saved PM2 process definition was starting the backend from the wrong working directory, so the application's configuration files could never be found.
Because manual startup worked, the next logical step was rebuilding the PM2 process properly.
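The failure mode is worth a quick illustration. Config loaders that read a relative config/ directory (node-config, which produces the "No configurations found" warning above, behaves this way) resolve it against the process's current working directory, not the script's location. The same app therefore works or fails depending on where it was started from. A generic sketch using a throwaway directory:

```shell
# Simulate an app whose config lives in ./config relative to its own dir.
mkdir -p /tmp/cwd-demo/app/config
echo '{"mongoURI":"mongodb://localhost/app"}' > /tmp/cwd-demo/app/config/default.json

cd /tmp/cwd-demo          # wrong cwd: there is no ./config here
ls config 2>/dev/null || echo "no configurations found"

cd /tmp/cwd-demo/app      # correct cwd: ./config resolves as expected
ls config/default.json
```

This is exactly why a PM2 process must be started from, or explicitly configured with, the application's own directory.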

Rebuild the Broken PM2 Backend Process

First, I removed the broken PM2 entry:

    pm2 delete api

Then recreated it from the correct working directory:

    cd /root/appname/server
    pm2 start server.js --name api

Then saved the PM2 state:

    pm2 save

After that, port 5000 finally appeared:

    ss -lntp | grep 5000

At this point, the backend API was restored.

But the website was still returning a 502 error because the frontend service was also broken.

Fix the Frontend PM2 Process

The frontend process was even more misleading.

PM2 showed it as online.

But nothing was actually listening on port 5173, so Nginx still had no live upstream for the website.
I manually tested the frontend first:

    cd /root/appname/client
    npm run start

Once the frontend successfully started manually, I repaired the PM2 process the same way:

    pm2 delete client

Then:

    pm2 start npm --name client -- start

Finally:

    pm2 save

Now port 5173 finally appeared:

    ss -lntp | grep 5173

At this point, Nginx upstreams recovered and the website came back online.

Verify the Website and HTTPS

Once both Node.js services were restored, I verified the site:

    curl -Ik https://yourdomain.com

Response:

    HTTP/2 200

The site was now loading correctly over HTTPS, with both Node.js services reachable behind Nginx.

But there was still one important validation left.

Verify Everything Survives Reboot

Many PM2 issues return after reboot because the saved PM2 state is still broken.

To prevent that, I verified PM2 startup persistence:

    pm2 startup
    pm2 save

Then rebooted the server.

After reboot:

    pm2 list

Both applications returned automatically.

Then I verified:

    ss -lntp | grep -E '5000|5173'

And finally:

    curl -Ik https://yourdomain.com

Everything survived reboot correctly.
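The reboot checks above can be wrapped into a small smoke-check script for future incidents. A sketch; ports 5000 and 5173 and yourdomain.com are this stack's values, swap in your own:

```shell
#!/usr/bin/env bash
# Post-reboot smoke check: listening ports first, then an HTTPS probe.
for port in 5000 5173; do
  if ss -lnt "( sport = :$port )" | grep -q LISTEN; then
    echo "port $port: listening"
  else
    echo "port $port: NOT listening"
  fi
done

# -I: headers only, -k: tolerate certificate issues, --max-time: fail fast
curl -Iks --max-time 10 https://yourdomain.com | head -n 1
```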

Common Mistakes and Edge Cases

A few pitfalls stood out in this incident:

Trusting pm2 list blindly. A process can show online while nothing is listening on its port; always verify with ss -lntp.

Starting apps from the wrong directory. Config loaders such as node-config resolve their config directory against the current working directory, not the script location.

Forgetting pm2 save after a fix. Without it, the broken process definitions come back on the next reboot.

Restarting Nginx to fix a 502. If the upstream process is dead, restarting the proxy changes nothing.

Need Help Fixing Your VPS?

If you’re stuck with server issues and need a reliable fix, I troubleshoot real VPS problems daily — from Nginx errors and SMTP failures to DNS and performance issues.

Instead of guessing, get a proven fix based on real experience.

Conclusion

In this case, the 502 Bad Gateway error was not caused by Nginx itself.

The real issue was corrupted or incorrect PM2 process definitions after restart.

The frontend process appeared “online” even though nothing was actually listening on the required port.

By verifying the listening ports directly, rebuilding the broken PM2 process definitions from the correct working directories, and saving the corrected state, we restored the production environment safely without unnecessary package upgrades or risky system-wide changes.

This type of troubleshooting is common in custom Node.js stacks running behind Nginx, especially on VPS environments.

If you’re dealing with similar PM2, Node.js, Nginx, or production server issues, feel free to reach out through my VPS troubleshooting service.

Tharindu

Hey!! I'm Tharindu. I'm from Sri Lanka. I'm a part time freelancer and this is my blog where I write about everything I think might be useful to readers. If you read a tutorial here and want to hire me, contact me here.
