Writing is the tedious part.
Not because it is deep and soulful, but because it is slow, linear, and unforgiving. You can build a system in bursts. You can refactor your way out of a mess. Writing a technical thesis forces you to commit: definitions, structure, claims, limits.
The only way I got through it without turning it into a personal grievance was to treat writing as a phase, not a constant background task.
The phases that actually happened
My thesis did not progress as “write chapter 1, then chapter 2.” It progressed like an engineering project that eventually had to become a document:
- MVP first (before Christmas). A minimal version of Vigil that ran end to end, even if it was ugly.
- Framework implementation + lean case studies (January). The Python package, spec format, and execution engine became real, with just enough studies to stress the design.
- First full draft (after January). The moment the repo had to become an argument someone else could follow.
- Expanded case studies (February). Once the backbone was stable, I widened the empirical surface and tightened what the studies actually demonstrate.
- Final rewrite (now). When the system stopped moving, I rewrote everything in my own voice.
This sequencing was not elegant, but it was honest. The thesis became readable only after the system became real.
Why I delayed “serious writing”
Early writing is tempting because it feels like progress. But in technical work, early prose becomes a liability: if the architecture is still moving, the text either turns vague or quietly becomes wrong.
So I mostly delayed polished writing. I wrote just enough to keep the narrative from drifting, but I let the repository carry the truth. When the design changed, I changed code and specs first. The prose could catch up later.
Concretely, that meant my “writing” for a long time was scattered and functional: comments in the LaTeX source, inline TODOs where a section was missing, and short notes about what a chapter should eventually argue. I did not maintain one sacred TODO list. Once I switched into writing mode, the thesis itself became the task tracker.
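A minimal sketch of what that inline scaffolding can look like in LaTeX. The macro name and the example text are illustrative, not the ones actually used in the thesis:

```latex
% Illustrative TODO macro: renders a visible red placeholder in the draft PDF,
% so missing sections are impossible to overlook during later passes.
\usepackage{xcolor}
\newcommand{\todonote}[1]{\textcolor{red}{\textbf{TODO:} #1}}

% Used inline where a section is still missing:
\section{Execution Model}
\todonote{Explain how declared variation maps onto backend runs.}
```

The point of making TODOs render in the output, rather than hiding them in comments, is that the compiled draft itself shows how much is left: a later pass is literally deleting red text by replacing it with real text.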
The forcing function: supervisor meetings
Biweekly meetings with my supervisors were the external clock. Even when nothing was finished, I had to show what changed since last time, what I learned from running the studies, and what the thesis was actually claiming.
That feedback loop mattered less for “motivation” and more for calibration. It kept me from drifting into a framework that was conceptually impressive but practically untestable, and it forced me to keep narrowing until what I wrote could be defended.
ChatGPT as a drafting tool
While the thesis was still taking shape, I used ChatGPT to generate first drafts and explore narrative options.
That was useful for two very practical reasons:
- Speed. Getting a rough paragraph down is faster than staring at an empty page.
- Iteration. I could try different explanations and section orders quickly, then keep what felt accurate and throw away the rest.
This was not “polish my final prose.” It was front-loading the messy part: turning half-formed ideas into something concrete enough to edit. Once the structure stabilized, I rewrote the text myself. The tool helped me move faster at the stage where the thesis was still becoming a thesis.
What “writing” looked like in practice
Once I had a first full draft, the work became less mystical and more mechanical:
- GitHub as the source of truth, Overleaf as the editor. The thesis lived in a GitHub repo via Overleaf sync, so text, figures, and code evolved together instead of drifting into separate worlds.
- Specs and runs as the evidence. The code, studies, specs, and the LaTeX thesis all lived in the same repo. That made it much harder to “handwave” something in prose that the code could not back up.
- Inline TODOs as scaffolding. When a chapter needed a missing explanation, I left it as an explicit hole and moved on. Later passes were mostly deleting TODOs by replacing them with real text.
- Rewrite after stabilization. A lot of early text was serviceable but not mine. Once the model and terminology stopped changing, rewriting became straightforward: translate the stable design into clear, consistent language.
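Concretely, “everything in one repo” can look something like this. The directory names below are hypothetical, not the actual layout:

```
vigil-thesis/
├── vigil/      # Python package: spec format and execution engine
├── studies/    # case study specs and run outputs
└── thesis/     # LaTeX source, synced with Overleaf via GitHub
```

The design choice is simply co-location: when a claim in `thesis/` has to point at an artifact in `studies/`, the gap between prose and evidence stays visible in every commit.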
This is also where the “boring” decisions mattered. Keeping the spec small. Keeping the check taxonomy usable. Keeping the case studies representative rather than decorative. The thesis reads better when the system underneath it is not trying to be clever.
Case studies: exploratory, not confirmatory
One of the most important reframings was what the case studies are allowed to claim.
They are not a universal proof. They are pressure tests. Their job is to expose where the abstractions break, where the framework is clunky, and where a “nice model” stops matching real pipelines. That is why I kept them heterogeneous: different backends, different variation surfaces, different kinds of outputs.
The case studies can show that Vigil runs across different execution styles, that declared variation produces inspectable differences, and that the checks are usable in practice. They cannot prove a global statement about all evolving systems. And they do not need to. A thesis is defensible when it is clear about what it does and what it does not claim.
What made it defensible
For me, defensibility came down to a few repeatable habits:
- Narrow claims until they are true.
- Keep terminology consistent with what the framework actually supports.
- Use runnable artifacts and study results as evidence, not vibes.
- Cut anything that cannot survive a skeptical reader.
Writing is still tedious. But delaying it until the story was real, and treating case studies as pressure tests instead of proof, made it possible to finish something that holds up.