6. Deploying a web app
This is a pretty general guide to putting 'something' on the web that you can access with your browser.
Usually, your app will end up either at <username>.tardis.ac, or <username>.tardis.ac/some/path.
For now, you'll need to ask an admin if you want to use a different domain, but we're working on changing that.
Static Sites
If your content is going to be served the same to everyone, it's a *static web page*. This includes JS apps that, once loaded, make requests to somewhere else.
Deploying these is super simple - you can build them with Gitlab (if you need to) and then deploy them on our pages instance. This page details how.
Any other applications
If you have some application you need to run that generates responses dynamically, then things get more complicated, and you have a lot more choices.
Getting something running
Firstly, consider what you need to run alongside your actual app. If you need some kind of database (which is likely), you can use our managed MySQL server. See the tardis console to set one up.
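As a rough example, once you have the connection details from the console you can check the database is reachable from the sandbox VM with the mysql client (the hostname, user, and database name below are placeholders - use whatever the console gives you):
$ mysql -h <database host> -u <username> -p <database name>   # prompts for your password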
Now you need to actually deploy your code. To start with, get it running on the sandbox VM in some way. Get your code onto the VM, and also install any extra languages or applications you need.
The rest of the setup largely depends on your language and framework. For Python, you'll probably want to set up a virtual environment, or for Node you might just need to run npm install.
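For example, a rough sketch for a Python app (the repository URL and file names are placeholders - adapt to however your project is set up):
$ git clone https://your-git-host/your-user/my-app.git   # or copy your code over with scp/rsync
$ cd my-app
$ python3 -m venv .venv                                   # create a virtual environment
$ . .venv/bin/activate
$ pip install -r requirements.txt                         # install dependencies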
Eventually, you should get to a point where you can run ./my-app or some other command, and it will start listening on a port of your choosing. Please set this to a random port >3000 rather than a recognisable one like 8080.
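How you pick the port depends on your app - often it's an environment variable or a command-line flag. A hypothetical example (the variable and flag names are just illustrations, check your framework's docs):
$ PORT=14301 ./my-app          # many frameworks read a PORT environment variable
$ ./my-app --port 14301        # others take a flag instead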
At this point, you should test everything is working like you expect by making an SSH tunnel. From your local machine, run ssh <username>@tardisproject.uk -L 4242:localhost:4242. This will forward port 4242 on your machine to port 4242 on the sandbox VM, meaning you can now test your app by loading up localhost:4242 in a browser.
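Putting that together, a quick way to check from your local machine:
$ ssh <username>@tardisproject.uk -L 4242:localhost:4242   # keep this running to hold the tunnel open
# then, in another local terminal (or a browser):
$ curl http://localhost:4242/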
Exposing it to the internet
Now you've gotten everything running within the network, it's time to put it on the internet! Before you do this, make an effort to check for any security holes or things that should only be exposed for development.
Now, head to the Tardis console and hit Endpoints. An endpoint is just a forwarding from our publicly-accessible server to an application inside the network.
You can create a new endpoint with your domain and whatever prefix you'd like. The target IP address will be 192.168.0.130 (the internal IP of the sandbox VM), and the target port will be whatever you set your application up to listen on.
If you want to use a custom domain, let an admin know and they'll set it up for you.
Endpoint options
There are a couple of advanced options. In most cases you won't need to touch them, but if you have problems you should take a look.
In most cases, you won't need to use HTTPS from the proxy to the target. You can test this by running curl localhost:<port> on the sandbox VM - if it succeeds then you don't need it.
Whether you need to strip the prefix mostly depends on how configurable your application is.
Say your application's prefix is /todos. If a user visits /todos/create, then with prefix stripping your application will just see /create, and without it, it will see /todos/create.
Many applications or frameworks will let you specify this prefix upfront and strip it on their side. This is useful because whether or not the prefix is stripped, your application still needs to change its links to accommodate it - a link to /create will fail, and needs to be a link to /todos/create instead.
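To make the difference concrete, here's roughly what the two requests look like with prefix stripping enabled (domain and port are placeholders):
# what a user requests through the proxy:
$ curl https://<username>.tardis.ac/todos/create
# what your application actually receives, tested directly on the sandbox VM:
$ curl http://localhost:<port>/create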
Endpoint changes take about 5 minutes to apply, so come back after that time and check your URL to hopefully see your application!
If it's not working, try running curl -v http://<target ip>:<target port>/, and check that your application is running. If you still can't get it working, contact an admin for help.
Keeping it running
By now you probably have your command running in a terminal, and another browser with it open over the internet. This is good, but at some point you'll probably want to log off and keep your application up.
There are a lot of ways to do this, in ascending order of complexity:
- Open a tmux session and run the command
- Make a user systemd unit, and ask an admin to enable lingering
- Use docker, and possibly some orchestrator
We'll skim over each of these, but feel free to ask an admin for help.
Tmux
This is the easiest way by far. tmux's main job is to let you have multiple terminals open at once, but it also lets you detach from these terminals and keep them running.
From the sandbox VM, run tmux. You'll see your screen get cleared and a bar appear at the bottom. We won't cover all of the things you can do, but to keep your service running:
- Start it running in this new session just like before (you'll need to re-set any environment variables)
- Hit Ctrl+b, then d. You'll go back to your original terminal, and see a message like [detached (from session 0)]
Your app is now running in the background - feel free to log out and it will stay running. If you want to get back to it to see logs or stop it, run tmux attach whenever you want.
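As a quick sketch, assuming your app starts with ./my-app:
$ tmux new -s my-app       # start a named session (the name is optional)
$ ./my-app                 # inside the session, start your app as before
# press Ctrl+b then d to detach
$ tmux attach -t my-app    # later, re-attach to check logs or stop it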
Systemd
Systemd is a general service manager, meaning you write down how you want to run your application and it ensures that it stays running.
Unlike tmux, it can do things like wait for other services to come up first, or restart your service if it crashes. Unfortunately, you do have to ask an admin to enable 'lingering' for this to work, otherwise your services will get stopped once you log out.
User systemd units (a service is one type of unit) go in the .local/share/systemd/user directory in your homedir (create it if it doesn't exist). Here's an example named fossil.service:
[Unit]
Description=Fossil user server
After=network-online.target

[Service]
WorkingDirectory=/home/tcmal/museum/
ExecStart=/usr/bin/fossil server -P 14301 --baseurl https://tcmal.tardis.ac/fossil --https --repolist .

[Install]
WantedBy=default.target
You can find more info on systemd services here.
Now, you can start your service using systemd:
$ systemctl --user daemon-reload                    # Pick up the changes
$ systemctl --user enable --now your-app.service    # Start the service, and auto-start it when needed
Here are some basic commands for managing your service:
$ journalctl --user -efu your-app.service           # Get the Entire logs and Follow them
$ systemctl --user restart your-app.service         # Restart it, for example when the code has been updated
$ systemctl --user disable --now your-app.service   # Stop and no longer auto-start
Docker / Orchestrators
Docker is a tool for packaging up applications with all their dependencies, so they can be run exactly the same on your server as on your local device. If you want to learn more about it, see the first part of this page.
You can use rootless docker on the sandbox VM, although you'll need an admin to enable lingering for you just like with systemd. Just detach from the docker container once you run it, and it will stay running. You can also use docker compose or any other tools you like for making docker easier.
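For example, a minimal sketch assuming you've already written a Dockerfile (the image name and port are placeholders):
$ docker build -t my-app .                                                       # build the image on the sandbox VM
$ docker run -d --restart unless-stopped -p 14301:14301 --name my-app my-app    # run detached so it stays up after you log out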
You can build a docker image for your app right on the sandbox VM, or you can build it somewhere else (including from CI) and push it to the Gitlab container registry, then pull it to the sandbox VM.
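A rough sketch of that flow (the registry address is a placeholder - check your project's container registry page on Gitlab for the real one):
# wherever you build the image:
$ docker login <gitlab registry>
$ docker tag my-app <gitlab registry>/<username>/my-app:latest
$ docker push <gitlab registry>/<username>/my-app:latest
# then, on the sandbox VM:
$ docker pull <gitlab registry>/<username>/my-app:latest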
Lastly, you can use our Kubernetes cluster to deploy dockerised applications onto. This is great for if you want to automate updates or have a lot of interconnected applications to deploy, but can have a steep learning curve.