More self-hosting

*docker ps in a terminal*

In my previous post, I talked about self-hosting my personal AI chat interface with local LLMs. That was a few months back, and in the past week or so, I've been self-hosting quite a few new things (including this site!), so I thought I'd write up how I set those up as well! This time, I learned how to use Docker Compose (which definitely took me too long). You can find some of my compose files here: https://github.com/sadeshmukh/selfhosted.
I got the idea to start self-hosting a couple of new services upon seeing this Hacker News post about Hoarder, a bookmarking app with automatic AI tagging. It sounded pretty awesome to me (especially since it had native Ollama integration), so I ran their installation script, and everything worked perfectly. The app started, the login worked, and I could bookmark anything!
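Under the hood, their installer just brings up a small Compose stack. From memory, it looks roughly like the sketch below - treat the image names, tags, and environment variables as assumptions and grab the real compose file from their repo:

```yaml
# Rough sketch of a Hoarder-style stack: the app, a headless Chrome
# instance for fetching pages, and Meilisearch for full-text search.
# Image names/tags are from memory - use the official compose file.
services:
  web:
    image: ghcr.io/hoarder-app/hoarder:release
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      DATA_DIR: /data
    volumes:
      - hoarder-data:/data
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
  meilisearch:
    image: getmeili/meilisearch:v1.6
    restart: unless-stopped
    volumes:
      - meili-data:/meili_data
volumes:
  hoarder-data:
  meili-data:
```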
Alright, so now I needed to tunnel it. I thought it would be easy: after all, it was just a matter of running `cloudflared tunnel login`, writing a config file, then `cloudflared tunnel run`.
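For context, that config file is mostly a list of ingress rules mapping public hostnames to local services. A sketch (the tunnel ID, paths, and hostnames here are placeholders):

```yaml
# ~/.cloudflared/config.yml - one tunnel serving multiple hostnames.
# Each hostname also needs a DNS route, e.g.:
#   cloudflared tunnel route dns <tunnel-name> hoarder.sahil.ink
tunnel: <tunnel-id>
credentials-file: /home/me/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: hoarder.sahil.ink
    service: http://localhost:3000
  - hostname: blog.sahil.ink
    service: http://localhost:2368
  # cloudflared requires a catch-all rule at the end
  - service: http_status:404
```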
The issue was authentication - though there was probably some way to fix this, I couldn't figure out how to authenticate cloudflared to multiple hostnames at the same time. After an hour of messing around, I found a much simpler solution buried in Cloudflare's settings. In their normal dashboard, there's a link to a second dashboard for Cloudflare One (or Cloudflare Zero Trust, or something like that) where you can manage network-related things - most importantly, tunnels!
By creating a tunnel through Zero Trust, I was able to manage it remotely through their dashboard. This way, I didn't have to mess with config files or manually restart/reauth whenever an issue arose. It also let me run cloudflared with just a token, meaning I didn't have to keep any persistent data or configs on my computer. For good measure, I stuffed it inside a Docker container using `network_mode: "host"`. I still haven't figured out `host.docker.internal`, so that's something I should probably sort out soon (it hasn't worked on WSL, but has on my Mac).
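Concretely, the whole thing boils down to a couple of lines of Compose (the token is whatever the Zero Trust dashboard hands you - I keep it in a `.env` file):

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    # Host networking so the tunnel can reach services published
    # on localhost, without any host.docker.internal tricks.
    network_mode: "host"
    # Token-based tunnels keep all routing config in the dashboard.
    command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}
```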
Network mode "host"
Ideally, you shouldn't run Docker containers on the host network - or at least, that's what I thought. However, after failing to get `--add-host=host.docker.internal:host-gateway` working, I just tossed it on there. I'm guessing there are ways to have two containers in the same virtual network point at each other, but after spending an hour failing to figure it out, the solution ended up being to publish the ports on localhost and run cloudflared on the host network.
Networks make no sense.
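For the record, that flag does have a Compose equivalent - in theory, something like this lets a container on the normal bridge network reach services on the host (it just never cooperated on my WSL setup):

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}
    extra_hosts:
      # Compose equivalent of --add-host; maps host.docker.internal
      # to the host's gateway IP (Docker 20.10+ on Linux).
      - "host.docker.internal:host-gateway"
    # The tunnel's ingress would then point at
    # http://host.docker.internal:<port> instead of localhost.
```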
I did skip over one thing in the beginning - Hoarder was running through a Docker Compose file. Now, to start it up from scratch, I would have to remember to separately start the cloudflared Docker container as well. For later services, I ended up including `cloudflared` directly in their compose files.
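In practice, that means each service's compose file carries its own tunnel alongside it - roughly like this (the app image and port are placeholders):

```yaml
services:
  app:
    image: some/app:latest  # placeholder for whatever the service is
    ports:
      - "127.0.0.1:8080:8080"  # published on localhost for the tunnel
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    network_mode: "host"
    command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}
    # Since both services share a compose file, the tunnel could
    # presumably also target http://app:8080 over the compose network
    # instead of using host mode - I haven't gotten that working yet.
```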
Of course, I couldn't stop at Hoarder. I needed to do more. I'd always wanted a blog, so I looked around at a few options. Ghost (which this site is hosted on) had a nice, modern feel, as well as a great UI for configuring it. So, of course, I got it, then "signed up" - only to realize I didn't actually have any emailing mechanism set up. Ghost needs an SMTP server to handle the actual sending, and I couldn't be bothered to make another Google account just for this blog's email - instead, I spent another day figuring out how to send emails from @sahil.ink.
Ghost recommended using Mailgun, so after signing up, mistyping an email, signing up again, changing DNS records, breaking email forwarding, changing DNS records again, and finally setting up the credentials, I actually had email working. Fantastic!
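For anyone trying the same thing, the Ghost side of this boils down to a handful of environment variables (the double underscores are how Ghost's Docker image nests config keys). The credentials and addresses below are placeholders:

```yaml
services:
  ghost:
    image: ghost:5
    environment:
      url: https://sahil.ink
      # SMTP settings for sending via Mailgun
      mail__transport: SMTP
      mail__options__host: smtp.mailgun.org
      mail__options__port: 587
      mail__options__auth__user: postmaster@mail.sahil.ink  # placeholder
      mail__options__auth__pass: ${MAILGUN_SMTP_PASSWORD}
      mail__from: "'Sahil' <noreply@mail.sahil.ink>"  # placeholder
```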
Earlier, I had no need to send email, only to receive it. I receive email by forwarding through Cloudflare Email Routing, which was a nice side benefit since Cloudflare already manages my DNS. However, in the process of adding an MX record for mail.sahil.ink, I also added one for sahil.ink itself - which conflicted with the MX records Email Routing manages and broke my forwarding. This caused me to drop a few emails, and I still have to reach out to make sure I haven't missed anything super important. Lesson learned!
Phew. That was pretty nice, wasn't it?
After setting up a blog, I wanted to share this - and the first thought in my mind was Twitter. But then, I remembered the Fediverse - why not host my own Mastodon instance?
This was unnecessarily painful. I failed to spot the Docker Compose file in their repo, and instead searched their Docker Hub page, all to no avail. After around three hours, I gave up and wrote my own compose file. It's probably atrocious and misses a few best practices, but you can check it out here.
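To give a sense of the moving parts, here's a trimmed sketch of what a Mastodon compose file needs - not my actual file (that's in the repo), and the image tags and commands are from memory, so double-check against upstream:

```yaml
# Minimal sketch of Mastodon's moving parts: Postgres, Redis,
# the Rails web app, the streaming API, and a Sidekiq worker.
services:
  db:
    image: postgres:14-alpine
    volumes:
      - pg-data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
  web:
    image: ghcr.io/mastodon/mastodon:v4.2
    command: bundle exec puma -C config/puma.rb
    env_file: .env.production
    ports:
      - "127.0.0.1:3000:3000"
    depends_on: [db, redis]
  streaming:
    image: ghcr.io/mastodon/mastodon:v4.2
    command: node ./streaming
    env_file: .env.production
    ports:
      - "127.0.0.1:4000:4000"
    depends_on: [db, redis]
  sidekiq:
    image: ghcr.io/mastodon/mastodon:v4.2
    command: bundle exec sidekiq
    env_file: .env.production
    depends_on: [db, redis]
volumes:
  pg-data:
```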
And after all that, I had a working Mastodon! You can follow me there @[email protected].
I set up a couple extra things:
- Homarr: server management
- ArchiveBox: webpage archiving
- Jellyfin: media server (maybe adding the Servarr suite in the future?)
- MeTube: GUI for `yt-dlp` (sketch below)
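Most of these are single-service compose files. MeTube, for instance, is about as small as they get - this matches my memory of their README, but the image name and port are worth double-checking:

```yaml
services:
  metube:
    image: ghcr.io/alexta69/metube:latest
    restart: unless-stopped
    ports:
      - "8081:8081"
    volumes:
      - ./downloads:/downloads  # yt-dlp drops finished downloads here
```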
All in all, though it took me around a week, setting up these services was super worthwhile, and I've definitely learned a lot. If you want to check out any of the compose files, you can view them here.
Thanks for reading! As always, you can contact me at [email protected].