The pragmatic architecture of my production projects

Reading some discussions between developers or software architects, one might think that the smallest web application today requires a distributed infrastructure, a Kubernetes cluster, and several specialized cloud services.

Yet many web services (including those that receive several thousand visitors per day) can work perfectly well with a much simpler architecture.

Here is a hands-on account of the infrastructure I use for my production projects, some of which exceed 5,000 daily visitors, and which I also applied for years during a cybersecurity competition with more than 250 on-site participants.

Context

For a long time, I hosted all my projects from home. My websites, services, Git forge, and even my continuous integration infrastructure ran on small ARM machines (mainly Pine64 boards).

Contrary to what I hear everywhere, it worked very well.

The main reason I stopped is not a problem of computing power or reliability, but simply my shift to a more nomadic lifestyle. A stable, permanent residential connection becomes hard to guarantee when you travel often.

I therefore moved the public-facing part of my services to OVH, trying to keep the same philosophy: simplicity and pragmatism.

A simple observation

In many modern web projects, the architecture looks something like this:

user → CDN → frontend → API → database

Every request triggers server-side computation, even when the content changes very rarely.

In my case, several of my sites share an important characteristic: the main content changes once a day.

Take the daily game Yakazu as an example: for the vast majority of visitors, the only thing that matters is the grid of the day, which changes just once a day.

Under these conditions, generating the page dynamically on every request does not make much sense.

Chosen architecture

The solution is therefore very simple: the site is entirely static.

Every day, a scheduled job triggers the CI pipeline, which:

  1. generates the new grid;
  2. rebuilds the site;
  3. publishes the static files.
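
The three steps above can be sketched as a single script run by the scheduled job. This is a hedged sketch, not the real project's code: the grid format, the build command, and the deploy target are all assumptions.

```javascript
// daily-build.js — sketch of the scheduled pipeline (file names and deploy target are illustrative)
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";

// Step 1: derive a deterministic seed from today's date, so re-running
// the job produces the same grid for the same day.
function dailySeed(date) {
  const day = date.toISOString().slice(0, 10); // "YYYY-MM-DD", UTC
  let seed = 0;
  for (const ch of day) seed = (seed * 31 + ch.charCodeAt(0)) >>> 0;
  return seed;
}

// Steps 1 to 3 end to end; the grid generator and the deploy command
// are placeholders for whatever the project actually uses.
function runPipeline() {
  writeFileSync("static/grid.json", JSON.stringify({ seed: dailySeed(new Date()) })); // 1. generate
  execSync("npm run build", { stdio: "inherit" });                                    // 2. rebuild
  execSync("rsync -r build/ ovh-host:www/", { stdio: "inherit" });                    // 3. publish
}
```

A date-derived seed has a useful side effect: if a step fails, the job can simply be re-run and will produce the same grid.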

These files are then served directly by the static hosting included with the domain name at OVH.

Yes, the free 100 MB plan.

This gives a very simple architecture:

CI → site generation → static files → OVH

All traffic is absorbed by the hosting provider’s infrastructure, which is exactly what it is designed for.

Frontend

The site is built with SvelteKit.

This choice might seem surprising, since I don’t use the framework’s server-side capabilities at all. But it provides:

  • easy static-site generation (@sveltejs/adapter-static);
  • clean routing;
  • well-structured code;
  • Progressive Web App support.

In other words: a modern framework, but without depending on a permanent backend.
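
Concretely, switching SvelteKit to fully static output is a few lines of configuration. A minimal setup, assuming the default build directory:

```javascript
// svelte.config.js — minimal static-site setup with @sveltejs/adapter-static
import adapter from '@sveltejs/adapter-static';

export default {
  kit: {
    // Emit plain HTML/CSS/JS into `build/`; no server runtime is produced.
    adapter: adapter({
      pages: 'build',
      assets: 'build',
      fallback: undefined // every route is prerendered, no SPA fallback page
    })
  }
};
```

The adapter also expects `export const prerender = true;` in the root `+layout.js`, so that every route is rendered to a static page at build time.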

And when a backend is needed?

Some features still require a bit of server-side logic.

For instance, the regularly held championships need to verify entries and record results and times.

In that case, I spin up a small separate backend server, which only receives a very limited number of requests.

The architecture then becomes:

static site → CORS → small backend

This backend is deliberately minimal and only handles actions that cannot be performed on the client side.

The key point is that the vast majority of traffic never touches it.
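
Such a backend can be sketched in plain Node in a few lines. The `/api/score` endpoint and the allowed origin below are hypothetical; the point is the shape: one allowed origin, a CORS preflight handler, and a single route.

```javascript
// minimal-backend.js — sketch of a small CORS-enabled backend (endpoint and origin are illustrative)
import http from "node:http";

const ALLOWED_ORIGIN = "https://example.com"; // the static site's origin (assumption)

// Build the CORS headers the static site needs in order to call this backend.
function corsHeaders(origin) {
  if (origin !== ALLOWED_ORIGIN) return {}; // any other origin gets no CORS grant
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
}

const server = http.createServer((req, res) => {
  const headers = corsHeaders(req.headers.origin);
  if (req.method === "OPTIONS") { // CORS preflight from the browser
    res.writeHead(204, headers);
    return res.end();
  }
  if (req.method === "POST" && req.url === "/api/score") {
    // record the result/time here; body parsing omitted for brevity
    res.writeHead(200, { ...headers, "Content-Type": "application/json" });
    return res.end(JSON.stringify({ ok: true }));
  }
  res.writeHead(404, headers);
  res.end();
});
// server.listen(8080);
```

Because the static site lives on a different origin than this server, the browser enforces CORS: only the handful of requests that genuinely need the backend ever reach it.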

Results

With this architecture:

  • almost all traffic is served as static files;
  • there is virtually no server-side computation;
  • the failure surface is very small.

The issues I encounter are almost always related to the generation pipeline:

  • an outdated Docker image;
  • a CI trigger that fails;
  • something missing in the data generation (the absence of a game grid, for example).

But these problems are easy to detect and usually fixed within minutes.

Why keep it simple?

Looking back, I realize this approach follows a few simple principles:

  1. pre-compute what can be pre-computed;
  2. serve files rather than computation;
  3. reserve the backend for cases that truly need it.

These principles often make it possible to handle significant load with very modest resources.

For more than ten years, I organized a cybersecurity challenge that eventually hosted around 300 simultaneous participants, with very basic machines.

The secret was not the hardware’s power, but the system’s design. I invite you to watch my presentation of this infrastructure, with the trial and error, the mistakes, and the constraints that shaped it.

What about user accounts and payments?

A “login” to a user account does not necessarily require systematic validation by a backend (and the notion of a user account is not always meaningful anyway).

A single API request can handle authentication (verifying a payment, an email address, and so on); the response is then stored in the browser and reused by the interface, right away or even later. That way, there is no need to make another API request for information you already have.

Not only does this reduce server load, it also speeds up page loading for the user, who sees their data without a network call — and therefore even offline. Everyone wins.
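
A minimal sketch of this pattern, with a hypothetical `fetchAuth` call and a storage key of my own choosing; in the browser, `localStorage` would play the role of `storage`:

```javascript
// auth-cache.js — call the auth/payment check once, then reuse the stored response
const KEY = "auth-response"; // storage key (illustrative)

// `storage` is anything with getItem/setItem, i.e. localStorage in the browser.
// `fetchAuth` is the single API request that verifies the payment or email.
async function getAuth(storage, fetchAuth) {
  const cached = storage.getItem(KEY);
  if (cached !== null) return JSON.parse(cached); // no network call, works offline
  const fresh = await fetchAuth();                // the one and only API request
  storage.setItem(KEY, JSON.stringify(fresh));
  return fresh;
}
```

Every later page load reads the stored response instead of hitting the backend again, which is exactly what makes the data available offline.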

Conclusion

Modern infrastructure offers many very powerful tools. But it is sometimes worth remembering that many problems can be solved in a much simpler way.

Before adding a new technological layer, it is often worth asking a very simple question:

does this content really need to be generated on every request?

In many cases, the answer is no.