In the web application world, you now use application containers, storage containers, and some load balancer solution in front of them.
For new projects, yes. This was just some old-school PHP that ran on my old server account, written years ago.
But (using the more traditional approach with a server), why not take inspiration from the Amiga and just tar data out to a RAM disk during boot of the VM (or the container, or whatever)?
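Roughly something like this, I imagine. A minimal sketch of the snapshot-and-restore trick; all the paths here are invented for the demo, and `/dev/shm` is just a convenient tmpfs mount that exists on typical Linux systems:

```shell
set -e
# Sketch: keep a snapshot tarball on persistent disk, unpack it into a
# RAM-backed directory at "boot". Paths are illustrative, not real.
SNAP=/tmp/appdata.tar.gz
RAMDIR=/dev/shm/appdata-demo

# Stand-in for the real persisted data: build a fake snapshot to restore.
mkdir -p /tmp/appdata-src
echo "hello" > /tmp/appdata-src/greeting.txt
tar -czf "$SNAP" -C /tmp/appdata-src .

# "Boot": unpack the snapshot into the RAM-backed directory.
mkdir -p "$RAMDIR"
tar -xzf "$SNAP" -C "$RAMDIR"
cat "$RAMDIR/greeting.txt"
```

You would run the restore step once at startup and re-tar the RAM directory back out periodically (or at shutdown) if the data ever changed.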
For the VM, a RAM disk offers nothing. The filesystem area where the data are persisted is hot and cached already. The latency is all in the HTTP requests to this site. Even the processing of the content is fast.
For containers, their inherent volatility precludes this without a persistence layer: users would be logged out all over the place, without warning, whenever a container is torn down.
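For what it's worth, the usual fix for the logged-out problem is to move PHP sessions out of the container into a shared store, so any instance can pick them up. A sketch, assuming the phpredis extension is installed and a Redis host is reachable (the hostname here is made up):

```ini
; php.ini fragment (sketch): store sessions in Redis so they survive
; container teardown. Host and port are assumptions for illustration.
session.save_handler = redis
session.save_path = "tcp://redis.internal:6379"
```

That said, as the rest of the thread notes, retrofitting this onto the old app is part of the rewrite-sized job.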
Not exactly huge data, and it doesn't even need to survive reboots. It does sound like something that could be dropped into a PaaS somewhere, or a simple vhost at some VPS provider, VMware or whatever.
As noted above, the size of the data is not the issue. However, reimplementing it all to be containerised and use a storage bucket would be a task big enough to warrant a total rewrite anyway.