A legacy system does not usually fail because it is written in an old language. It fails because the world around it keeps changing.
In this case, the failure was boring and brutal: booking emails were no longer sent. No obvious error surfaced for users. The flow looked fine. But nothing left the building, which meant the one thing that mattered (confirmation and coordination by email) simply did not happen.
This is what maintenance looks like in 2026. Not “fix the code”. Fix the seams where your system touches other systems that do not care about your feelings.
The shape of the system
The application is custom PHP, built years ago, and it is still around because it does its job.
It also happens to be in better shape than the usual mental image of “legacy PHP”. It uses Symfony, the codebase is structured reasonably cleanly, and it already runs in Docker on a VM. That matters because it changes the work from archaeology to engineering. You are still constrained, but you are not operating in pure chaos.
The constraints were also clear: this is a small system. The goal is to keep it working, keep changes reversible, and keep costs low. This is not a rewrite project and it does not need to become one.
What actually broke
The root cause was not a sudden bug in Symfony. It was an external deprecation.
The system relied on the hosting provider’s SMTP server to send booking emails. That SMTP path was on its way out, already soft deprecated, and heading toward being disabled entirely. Even before the final shutdown, deliverability and reliability degraded enough that the booking flow became unreliable.
This is the trap with “it works”. It works until someone else changes the rules, and email providers change rules constantly.
So the task was straightforward on paper: move email delivery to something stable.
Email delivery in 2026 means auth churn
The practical solution was to migrate the outbound mail flow to Microsoft 365. SMTP submission is still supported there, and for a small system it is a perfectly reasonable way to ship email without building new infrastructure.
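Since the codebase already uses Symfony, the host switch itself can be pure configuration if the Mailer component is in play (an assumption on my part; the values below are placeholders, not the real project's). Note that this DSN form uses basic auth, which is exactly the short-lived path discussed next; the point here is only that the transport is configuration, not code:

```ini
# .env.local — placeholder values
# %40 is the URL-encoded @ in the username
MAILER_DSN=smtp://bookings%40example.com:PASSWORD@smtp.office365.com:587
```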
But the real work was not “change SMTP host”.
The real work was authentication.
Basic authentication is being phased out across many ecosystems, and Microsoft in particular has been tightening the screws. The result is that “just use username and password” becomes a short-lived solution, and the stable path runs through OAuth.
That is where legacy maintenance gets annoying. Email is not the business logic, but now you have to integrate token handling cleanly into an old codebase, often through a mail library that was not designed for this exact situation.
In my case the mail library integration was awkward enough that it forced real implementation work: updating how the transport is configured, adding token fetch and refresh, testing edge cases, and making sure the change does not silently break again.
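The codebase is PHP, but the shape of the token handling translates to any language. A minimal sketch of the fetch-and-refresh part in Python, with the actual identity-provider request abstracted away (how you obtain a token is provider-specific; the caching and early-refresh logic is the part that prevents silent breakage):

```python
import time
from typing import Callable, Tuple


class TokenCache:
    """Caches an OAuth access token and refreshes it shortly before expiry.

    fetch() is whatever call obtains a fresh token — e.g. a client-credentials
    request to the identity provider. It returns (access_token, expires_in_seconds).
    """

    def __init__(self, fetch: Callable[[], Tuple[str, int]], margin: int = 60):
        self._fetch = fetch
        self._margin = margin      # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh when no token exists or we are inside the safety margin.
        if self._token is None or time.time() >= self._expires_at - self._margin:
            token, expires_in = self._fetch()
            self._token = token
            self._expires_at = time.time() + expires_in
        return self._token
```

The SMTP transport then presents the cached token (via the XOAUTH2 SASL mechanism) instead of a password, and the cache absorbs token expiry without the booking flow ever noticing.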
The real unlock: make it runnable locally
The most important step was not OAuth. It was making the system testable without touching production.
Before this, the application effectively lived on the server. That works until it does not. When something breaks under time pressure, “SSH into prod and poke around” becomes stressful and slow, and you end up debugging blind.
So I pulled the system into a repo and made the environment repeatable.
Concretely:
- Secrets and credentials moved to environment variables.
- I refactored the production Docker Compose setup.
- I added a development overlay plus an env file so the full stack runs locally with reasonable parity.
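As an illustration of the overlay idea (service names and ports are placeholders, not the real project's), a dev override file might look like this. Docker Compose merges `docker-compose.override.yml` into the base file automatically on `docker compose up`, so production config stays untouched:

```yaml
# docker-compose.override.yml — development overlay, placeholder values
services:
  app:
    env_file: .env.dev          # local credentials, never committed
    volumes:
      - ./src:/var/www/html     # live-edit code without rebuilding the image
    ports:
      - "8080:80"
  db:
    ports:
      - "5432:5432"             # expose the database for local inspection
```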
That changed everything. Suddenly I could iterate safely. I could test the mail integration without guessing. I could version changes. And future updates will be faster because the system is now something you can run, not something you pray at.
If you do only one thing when you inherit a small legacy system, do this. Make it runnable locally. Everything else becomes less dramatic.
Minimal monitoring, maximum sanity
Once email delivery was fixed, the next obvious risk was the same kind of silent failure happening again.
I did not build an observability platform. I added the smallest amount of monitoring that catches the important classes of failure.
Three health endpoints:
- `/health` confirms the app responds.
- `/health/db` confirms the database connection works.
- `/health/errors` scans the application logs and fails if errors appear.
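The log-scanning check is the least obvious of the three, so here is the idea in a language-neutral Python sketch (the real endpoint is PHP; the log path and error marker below are hypothetical). The boolean result is what drives the HTTP status — 200 when healthy, 500 otherwise — so an external monitor can alert on it:

```python
from pathlib import Path


def check_log_errors(log_path: str, marker: str = "app.ERROR") -> tuple:
    """Return (healthy, detail). Unhealthy if any log line contains the marker."""
    path = Path(log_path)
    if not path.exists():
        return True, "no log file yet"
    bad = [line for line in path.read_text().splitlines() if marker in line]
    if bad:
        return False, f"{len(bad)} error line(s), e.g. {bad[0][:120]}"
    return True, "ok"
```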
Then I wired them into UptimeRobot. It checks hourly and emails me if something goes wrong.
Is hourly perfect? No. Is it enough for a small system that runs a handful of events? Yes.
The goal is not to catch every edge case. The goal is to stop flying blind.
Practical lessons from boring maintenance
This is the part people skip because it is not glamorous, but it is the actual craft:
- Treat legacy as constraints, not shame. The goal is not to modernize for status points. The goal is to keep a useful system reliable.
- Fix the seam first. External dependencies fail before your domain logic does. SMTP, auth, APIs, certificates, hosting. Start there.
- Make it runnable locally. Pull it into a repo, move secrets into env vars, and get dev parity. It reduces stress and risk immediately.
- Keep changes reversible. Isolate the mail integration behind a small boundary. Make it easy to swap transports again later.
- Add minimal observability early. A few health checks beat guessing and log-diving under pressure.
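The “small boundary” around mail in the list above can be as little as one interface. A sketch in Python (the project is PHP, and these names are illustrative, not its actual API): booking code depends on the interface, so swapping providers later — or stubbing delivery in tests — touches one class:

```python
from typing import Protocol


class MailTransport(Protocol):
    """The seam: booking code depends on this, not on a concrete provider."""
    def send(self, to: str, subject: str, body: str) -> None: ...


class SmtpTransport:
    """Production implementation (delivery details omitted in this sketch)."""
    def __init__(self, host: str, get_token):
        self._host = host
        self._get_token = get_token   # OAuth token supplier, itself swappable

    def send(self, to: str, subject: str, body: str) -> None:
        ...  # real SMTP + XOAUTH2 handshake would live here


class LoggingTransport:
    """Dev/test stand-in: records messages instead of sending them."""
    def __init__(self):
        self.sent = []

    def send(self, to: str, subject: str, body: str) -> None:
        self.sent.append((to, subject, body))
```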
The end state is not exciting. That is the point.
Booking emails are sent again. Auth changes are handled. The app can be run locally. Monitoring exists. The system is boring.
And boring is exactly what you want when the job is maintenance.