Curious by default.
Shipping code since 19.
I'm Philip. 22, based in Denmark. I've been writing code since I was 19, shipping production work the whole time, and I'm at my best when the problem in front of me is something nobody's solved yet.

How I got here.
I started in 3D, not in code. Blender first, then playing around in Unreal Engine 5, picking up art-side tooling, peeking into rendering pipelines and shader graphs, whatever the next rabbit hole was. Somewhere in there I got obsessed with the question every 3D artist eventually asks: how do you actually build a real-time experience, not just a still render? That question kept pulling me deeper, and it's still the loop I'm running today.
Along the way I noticed the gap that still bugs me today: artists fight their tools more than they should. The same five clicks, a hundred times a day. Blender lets you script it away with Python — so I did. One pie menu became another, and another, and now I've got eight addons live on SuperHive that solve real workflow problems for 3D artists. Not the kind of addons that demo well — the kind that make a Friday afternoon end an hour earlier.
The web came later, but it stuck. Once I'd shipped a site end-to-end — Postgres schema, API, frontend, deploy, monitoring — I realised the full-stack loop was the same loop as the artist loop: see a problem, build the thing that makes it go away. Just at a different scale.
Problem-solving is the through-line. I don't care whether the problem is a slow SQL query, a janky cloner setup in Geometry Nodes, or a UI that makes the user click three times to do one thing. I care that it's solved cleanly, shipped, and the next person who picks it up doesn't curse me.
AI is the future —
and the human role just got sharper.
The next era of software engineering is about architecting LLMs to do the work while you orchestrate and direct. Boilerplate, scaffolding, glue code, unit tests, the dozens of repetitive files behind every feature — all of it compresses toward zero. What's left is the part that was always the interesting part anyway: deciding what to build, how the pieces fit together, where the data flows, and where the risk lives.
So the ground shifts, but the job sharpens. Architecture, stack choice, and security stop being things you get to after shipping the feature — they become the feature itself. When models can type a thousand lines faster than any human, the scarce resource is judgement: knowing which abstraction will still make sense in six months, which dependency will bite you at scale, which boundary the data has just crossed without the right guard rails in place.
Security is where this matters most — and it's also where AI is at its most dangerous if you don't watch it. Models will happily write code that does what you asked, and miss the input validation that should have lived next to it. They'll generate an API endpoint without rate-limiting, store a secret in plaintext because you didn't tell them not to, fan out a query that opens up a textbook injection vector. The mistakes don't look like mistakes — they look like working code. The vulnerability is the absence of something: the missing check, the unlocked door, the trusted assumption that should have been validated at the boundary. AI is brilliant at writing what's there; it's terrible at noticing what should be there but isn't.
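A minimal sketch of that "absence" failure mode, using Python's built-in sqlite3 and hypothetical table and function names. Both functions below return the right rows for normal input and look like working code; the only difference is the parameterization the first one is missing.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Looks correct for normal input. The vulnerability is what's absent:
    # the input is interpolated straight into the SQL string.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Same query, but the driver handles the input at the boundary.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"          # textbook injection input
print(len(find_user_unsafe(conn, payload)))  # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))    # 0 -- no user has that name
```

Nothing in the unsafe version fails a test on happy-path input, which is exactly why reviewing AI output means checking for the guard that should be there but isn't, not just reading the code that is.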
That reframes the human role entirely. Engineers stop being typists and become architects, security guards, and reviewers. Architects, because models need a clean blueprint to build against — vague intent produces vague code. Security guards, because somebody has to stand between the model's enthusiasm and the production database. And reviewers, because every AI-written feature still needs a human who actually understands what it does, where the edges are, and what was quietly skipped along the way. The model writes; the human verifies, hardens, and owns the result.
That's the work I want to do. Agent orchestration, system design, and the quiet discipline of keeping a stack honest while it grows under AI's velocity. Models will write the code. Humans still decide what's worth writing — and what needs to be triple-checked before it ever runs in production.
Four principles, non-negotiable.
Not slogans — these decide what I build next and when I stop.
- A half-written plan is not a product. I favour cutting scope and shipping a real thing over perfecting a spec nobody uses.
- I can touch anything between a Postgres index and a pixel. Full-stack means full responsibility — auth, DB, API, UI, deploy, monitoring.
- Postgres beats whatever dropped last week. I reach for proven tools unless there's a real reason to deviate.
- Every site I ship targets Lighthouse 95+ across the board and works from a 390 px iPhone up. Accessibility is a shipping requirement.
Where my time goes.
- Full-stack web apps — Next.js, TypeScript, Postgres, edge runtimes
- AI product engineering — LLM agents, RAG pipelines, inference glue
- Creative tools — Blender addons, WebGPU rendering, 3D-for-the-web