[{"content":"It has been about six months since I started running services on my VPS. Time for an honest look at what worked, what did not, and what I would do differently.\nWhat works well Static sites are effortless. Hugo plus Nginx has been zero-maintenance. I deploy with a script, and it just works. No crashes, no updates to worry about, no dependencies to manage. This has been the most clearly worthwhile thing I self-host.\nUptime Kuma runs itself. I set it up once, and it has been quietly monitoring everything since. The only time I touch it is to add a new monitor. It has caught two outages that I would not have noticed otherwise.\nDocker simplifies everything. Pinned image versions, isolated environments, easy rollbacks. I resisted containers for a while, thinking they were overkill for a single server. They are not. The overhead is minimal, and the operational benefits are real.\nWhat did not work I over-engineered the backup system. My initial backup setup involved multiple scripts, rotating snapshots, and offsite sync. It was fragile and I never fully trusted it. I replaced it with a single rsync cron job to an offsite location. Simpler and more reliable.\nTrying too many services at once. In the first month, I installed half a dozen things to \u0026ldquo;try them out.\u0026rdquo; Most sat unused, consuming resources and needing updates. Now I only install something when I have an actual, immediate need for it.\nWhat I would do differently Start with monitoring from day one. I set up Uptime Kuma a few weeks in. I should have done it first. Knowing your server is up is more important than whatever service you are deploying on it.\nAutomate TLS renewal verification. Let\u0026rsquo;s Encrypt renewal worked fine automatically, but I did not have monitoring on the certificate expiry dates. A silent renewal failure would have meant a surprise expired certificate. Now Uptime Kuma checks certificate expiry for every HTTPS endpoint.\nKeep a simple log of changes. 
I started doing this halfway through and wished I had done it from the start. A text file with dated entries like \u0026ldquo;installed Forgejo\u0026rdquo; or \u0026ldquo;updated Nginx config for gzip\u0026rdquo; makes troubleshooting much easier when something breaks weeks later.\nThe numbers Current state of the server after six months:\n5 services running (Nginx, Uptime Kuma, Forgejo, LibreSpeed, WireGuard) ~450 MB RAM used at idle 99.7% uptime over the period (the 0.3% was a provider-side network issue) 2 unplanned outages, both resolved within an hour It is not a lot, but it runs quietly and does what I need. That feels like the right place to be.\n","permalink":"https://packetlog.org/posts/lessons-six-months-self-hosting/","summary":"\u003cp\u003eIt has been about six months since I started running services on my VPS. Time for an honest look at what worked, what did not, and what I would do differently.\u003c/p\u003e\n\u003ch2 id=\"what-works-well\"\u003eWhat works well\u003c/h2\u003e\n\u003cp\u003e\u003cstrong\u003eStatic sites are effortless.\u003c/strong\u003e Hugo plus Nginx has been zero-maintenance. I deploy with a script, and it just works. No crashes, no updates to worry about, no dependencies to manage. This has been the most clearly worthwhile thing I self-host.\u003c/p\u003e","title":"Lessons from six months of self-hosting"},{"content":"I recently ran my site through securityheaders.com and realized I was missing a few useful headers. 
Here is the set I ended up adding to my Nginx config.\nThe headers add_header X-Content-Type-Options \u0026#34;nosniff\u0026#34; always; add_header X-Frame-Options \u0026#34;SAMEORIGIN\u0026#34; always; add_header Referrer-Policy \u0026#34;strict-origin-when-cross-origin\u0026#34; always; add_header Permissions-Policy \u0026#34;camera=(), microphone=(), geolocation=()\u0026#34; always; add_header Content-Security-Policy \u0026#34;default-src \u0026#39;self\u0026#39;; style-src \u0026#39;self\u0026#39; \u0026#39;unsafe-inline\u0026#39;; img-src \u0026#39;self\u0026#39;; font-src \u0026#39;self\u0026#39;;\u0026#34; always; What each one does:\nX-Content-Type-Options prevents browsers from guessing the MIME type of a response. Without this, a browser might interpret a text file as HTML and execute scripts in it. X-Frame-Options prevents your site from being embedded in an iframe on another domain. This mitigates clickjacking attacks. Referrer-Policy controls how much referrer information is sent when navigating away from your site. strict-origin-when-cross-origin is a sensible default. Permissions-Policy explicitly disables browser features your site does not use. A static blog has no need for camera or microphone access. Content-Security-Policy restricts where resources can be loaded from. The policy above only allows resources from the same origin, which is exactly right for a self-contained static site with no external dependencies. The always keyword Note the always parameter on each header. Without it, Nginx only adds headers to successful responses (2xx and 3xx). With always, headers are included on error pages too. This matters because error pages are also potential attack vectors.\nPlacement These go in the server block of your Nginx site configuration. 
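As a sketch, the placement looks like this (the domain, root, and location block are illustrative, not taken from my actual config; quotes are escaped the same way as elsewhere in this file):
```nginx
server {
    listen 443 ssl;
    server_name example.org;   # illustrative
    root /var/www/example;     # illustrative

    # Headers declared here apply to every response from this server block
    add_header X-Content-Type-Options \u0026#34;nosniff\u0026#34; always;
    add_header X-Frame-Options \u0026#34;SAMEORIGIN\u0026#34; always;
    add_header Referrer-Policy \u0026#34;strict-origin-when-cross-origin\u0026#34; always;
    add_header Permissions-Policy \u0026#34;camera=(), microphone=(), geolocation=()\u0026#34; always;
    add_header Content-Security-Policy \u0026#34;default-src \u0026#39;self\u0026#39;; style-src \u0026#39;self\u0026#39; \u0026#39;unsafe-inline\u0026#39;; img-src \u0026#39;self\u0026#39;; font-src \u0026#39;self\u0026#39;;\u0026#34; always;

    location / {
        try_files $uri $uri/ =404;
    }
}
```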
If you add them in a location block instead, be aware of how Nginx handles inheritance: a location block inherits add_header directives from the enclosing server block only if it defines none of its own. As soon as a location block sets any header, you must repeat all of these headers there.\nVerifying After reloading Nginx (nginx -t \u0026amp;\u0026amp; systemctl reload nginx), you can check the headers with curl:\ncurl -sI https://packetlog.org | grep -iE \u0026#34;x-content|x-frame|referrer|permissions|content-security\u0026#34; These headers are low-effort and cost nothing in terms of performance. There is no reason not to set them.\n","permalink":"https://packetlog.org/posts/http-security-headers/","summary":"\u003cp\u003eI recently ran my site through \u003ca href=\"https://securityheaders.com/\"\u003esecurityheaders.com\u003c/a\u003e and realized I was missing a few useful headers. Here is the set I ended up adding to my Nginx config.\u003c/p\u003e\n\u003ch2 id=\"the-headers\"\u003eThe headers\u003c/h2\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;\"\u003e\u003ccode class=\"language-nginx\" data-lang=\"nginx\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e\u003cspan style=\"color:#66d9ef\"\u003eadd_header\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003eX-Content-Type-Options\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#34;nosniff\u0026#34;\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003ealways\u003c/span\u003e;\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e\u003cspan style=\"color:#66d9ef\"\u003eadd_header\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003eX-Frame-Options\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#34;SAMEORIGIN\u0026#34;\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003ealways\u003c/span\u003e;\n\u003c/span\u003e\u003c/span\u003e\u003cspan 
style=\"display:flex;\"\u003e\u003cspan\u003e\u003cspan style=\"color:#66d9ef\"\u003eadd_header\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003eReferrer-Policy\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#34;strict-origin-when-cross-origin\u0026#34;\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003ealways\u003c/span\u003e;\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e\u003cspan style=\"color:#66d9ef\"\u003eadd_header\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003ePermissions-Policy\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#34;camera=(),\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003emicrophone=(),\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003egeolocation=()\u0026#34;\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003ealways\u003c/span\u003e;\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e\u003cspan style=\"color:#66d9ef\"\u003eadd_header\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003eContent-Security-Policy\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#34;default-src\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#39;self\u0026#39;\u003c/span\u003e; \u003cspan style=\"color:#66d9ef\"\u003estyle-src\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#39;self\u0026#39;\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#39;unsafe-inline\u0026#39;\u003c/span\u003e; \u003cspan style=\"color:#66d9ef\"\u003eimg-src\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#39;self\u0026#39;\u003c/span\u003e; \u003cspan style=\"color:#66d9ef\"\u003efont-src\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#39;self\u0026#39;\u003c/span\u003e;\u003cspan style=\"color:#66d9ef\"\u003e\u0026#34;\u003c/span\u003e \u003cspan 
style=\"color:#e6db74\"\u003ealways\u003c/span\u003e;\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eWhat each one does:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eX-Content-Type-Options\u003c/strong\u003e prevents browsers from guessing the MIME type of a response. Without this, a browser might interpret a text file as HTML and execute scripts in it.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eX-Frame-Options\u003c/strong\u003e prevents your site from being embedded in an iframe on another domain. This mitigates clickjacking attacks.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eReferrer-Policy\u003c/strong\u003e controls how much referrer information is sent when navigating away from your site. \u003ccode\u003estrict-origin-when-cross-origin\u003c/code\u003e is a sensible default.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePermissions-Policy\u003c/strong\u003e explicitly disables browser features your site does not use. A static blog has no need for camera or microphone access.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eContent-Security-Policy\u003c/strong\u003e restricts where resources can be loaded from. The policy above only allows resources from the same origin, which is exactly right for a self-contained static site with no external dependencies.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"the-always-keyword\"\u003eThe \u003ccode\u003ealways\u003c/code\u003e keyword\u003c/h2\u003e\n\u003cp\u003eNote the \u003ccode\u003ealways\u003c/code\u003e parameter on each header. Without it, Nginx only adds headers to successful responses (2xx and 3xx). With \u003ccode\u003ealways\u003c/code\u003e, headers are included on error pages too. This matters because error pages are also potential attack vectors.\u003c/p\u003e","title":"Quick note on HTTP security headers"},{"content":"I recently converted all images on this site from PNG and JPEG to WebP. 
The results were better than I expected, so here are some quick notes.\nWhy WebP WebP produces smaller files than PNG and JPEG at comparable visual quality. Browser support is effectively universal now — every modern browser has supported it for years. There is no reason to keep serving older formats for new content.\nConverting with cwebp The cwebp command-line tool (part of the libwebp package) handles conversion:\n# Convert a single PNG cwebp -q 80 input.png -o output.webp # Batch convert all PNGs in a directory for f in *.png; do cwebp -q 80 \u0026#34;$f\u0026#34; -o \u0026#34;${f%.png}.webp\u0026#34;; done Quality 80 is a good starting point. For screenshots and diagrams, you can often go lower (65–75) without visible degradation. For photographs, 80–85 tends to preserve detail well.\nActual savings On this site, the conversion reduced total image weight by about 60%. A few specific examples:\nImage PNG WebP (q80) Reduction Screenshot 245 KB 82 KB 67% Diagram 118 KB 41 KB 65% Photo 390 KB 175 KB 55% These are meaningful savings, especially for visitors on slower connections.\nIn Hugo For a Hugo site, I keep WebP files in the static/images/ directory and reference them directly in posts:\n![Description of image](/images/example.webp) No special Hugo configuration needed. If you want to get more advanced, Hugo\u0026rsquo;s image processing pipeline can convert images at build time, but for a small site with a handful of images, manual conversion is simpler.\nOne caveat WebP\u0026rsquo;s lossy compression is not always the best choice. For images with very fine text or pixel-precise detail, PNG remains better. I keep a few images as PNG where lossless reproduction matters. For everything else, WebP is the default.\n","permalink":"https://packetlog.org/posts/optimizing-images-webp/","summary":"\u003cp\u003eI recently converted all images on this site from PNG and JPEG to WebP. 
The results were better than I expected, so here are some quick notes.\u003c/p\u003e\n\u003ch2 id=\"why-webp\"\u003eWhy WebP\u003c/h2\u003e\n\u003cp\u003eWebP produces smaller files than PNG and JPEG at comparable visual quality. Browser support is effectively universal now — every modern browser has supported it for years. There is no reason to keep serving older formats for new content.\u003c/p\u003e\n\u003ch2 id=\"converting-with-cwebp\"\u003eConverting with cwebp\u003c/h2\u003e\n\u003cp\u003eThe \u003ccode\u003ecwebp\u003c/code\u003e command-line tool (part of the \u003ca href=\"https://developers.google.com/speed/webp/docs/cwebp\"\u003elibwebp\u003c/a\u003e package) handles conversion:\u003c/p\u003e","title":"Optimizing images for the web with WebP"},{"content":"Knowing when your server is down before your users do is the bare minimum of responsible self-hosting. I have tried several monitoring approaches and settled on a setup that is simple, free, and has been reliable for months.\nUptime Kuma Uptime Kuma is the core of my monitoring stack. It is a self-hosted monitoring tool that checks HTTP endpoints, TCP ports, DNS records, and ping targets at configurable intervals.\nInstallation is one Docker command:\ndocker run -d --restart=always -p 3001:3001 \\ -v uptime-kuma:/app/data \\ --name uptime-kuma louislam/uptime-kuma:1 After setup, the web dashboard shows the status of all monitored services at a glance.\nWhat I monitor My current setup tracks:\nHTTPS endpoints for all sites hosted on the server, checking every 60 seconds TCP port checks for SSH (port 22) and other services Certificate expiry — Uptime Kuma warns when TLS certificates are approaching expiration DNS resolution — verifying that DNS records resolve correctly Ping to the server itself as a basic connectivity check Each monitor can have its own check interval. 
I use 60 seconds for critical services and 300 seconds for things that are less urgent.\nNotifications Uptime Kuma supports many notification channels. I use two:\nEmail via an external SMTP provider for critical alerts Webhook to a simple script that logs events for later review The notification setup is straightforward through the web interface. You can configure different notification channels per monitor, so you are not overwhelmed with alerts for non-critical services.\nExternal monitoring There is an inherent problem with self-hosted monitoring: if the server goes down, the monitoring goes down with it. For this reason, I also use a free external service to monitor the server from outside.\nSeveral free options exist:\nHetrix Tools — free tier includes 15 monitors with 1-minute intervals from multiple locations Uptime Robot — free tier with 5-minute intervals for up to 50 monitors Cronitor — free tier for basic uptime checks I run one of these as a complement to Uptime Kuma. The external service watches the server itself. Uptime Kuma watches everything running on it. Between the two, coverage is solid.\nSimple health checks For services that do not expose an HTTP endpoint, I use a small health check script that runs via cron:\n#!/bin/bash # Check if critical services are running services=(\u0026#34;nginx\u0026#34; \u0026#34;docker\u0026#34;) for svc in \u0026#34;${services[@]}\u0026#34;; do if ! systemctl is-active --quiet \u0026#34;$svc\u0026#34;; then echo \u0026#34;$svc is not running\u0026#34; | mail -s \u0026#34;Service alert: $svc\u0026#34; admin@packetlog.org fi done This runs every five minutes and sends an email if a systemd service stops. It catches cases that network-level monitoring might miss — like a service crashing but the server staying up.\nTracking response time Beyond simple up/down status, Uptime Kuma records response times for each check. Over time, this data reveals patterns. 
I have noticed my server\u0026rsquo;s response time creeping up on a few occasions, which turned out to be a log file filling the disk and an unoptimized cron job running during peak hours.\nThe response time graphs are useful for spotting these slow degradations that would not trigger a down alert but still affect performance.\nStatus pages Uptime Kuma can generate a public status page showing the current state of your services. I do not use this publicly, but it is a nice feature if you run services for others and want to provide transparency about uptime.\nWhat I have learned After running this setup for several months:\n60-second check intervals are a good default. Faster than that is unnecessary for most self-hosted services, and slower means you might miss brief outages. Alert fatigue is real. Start with notifications only for genuinely critical services. I initially set up alerts for everything and quickly started ignoring them. External monitoring is not optional. You need at least one check from outside your infrastructure. The free tiers of external services are sufficient for this. Log your incidents. When something goes down, write a brief note about what happened and what you did to fix it. This pays off when the same issue recurs months later. The total resource cost of Uptime Kuma on my VPS is about 80 MB of RAM. For the peace of mind it provides, that is a reasonable trade.\n","permalink":"https://packetlog.org/posts/monitoring-server-uptime/","summary":"\u003cp\u003eKnowing when your server is down before your users do is the bare minimum of responsible self-hosting. I have tried several monitoring approaches and settled on a setup that is simple, free, and has been reliable for months.\u003c/p\u003e\n\u003ch2 id=\"uptime-kuma\"\u003eUptime Kuma\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/louislam/uptime-kuma\"\u003eUptime Kuma\u003c/a\u003e is the core of my monitoring stack. 
It is a self-hosted monitoring tool that checks HTTP endpoints, TCP ports, DNS records, and ping targets at configurable intervals.\u003c/p\u003e\n\u003cp\u003eInstallation is one Docker command:\u003c/p\u003e","title":"Monitoring server uptime with free tools"},{"content":"I use a VPN tunnel to administer my VPS and to connect back to my home network when I am away. Over the past year I have used both OpenVPN and WireGuard for this purpose. Here is how they compare in practice.\nThe setup My use case is straightforward: I need a secure tunnel from my laptop to my VPS for SSH, web dashboards, and occasionally reaching services on my home LAN. Nothing exotic — just point-to-point connectivity between machines I control.\nI ran OpenVPN first because I was already familiar with it. Later I set up WireGuard on the same server and ran both in parallel for a few weeks before switching fully to WireGuard.\nConfiguration complexity OpenVPN configuration is verbose. A typical server config file is 30–50 lines, and you need a PKI infrastructure for certificates. Tools like easy-rsa simplify the certificate management, but it is still multiple steps: generate a CA, create server and client certificates, configure TLS parameters, set up the tunnel interface.\nWireGuard is dramatically simpler. A server config is about 10 lines. There is no CA — you generate a key pair on each peer and exchange public keys. Adding a new peer is three lines in the config and a reload. The entire setup from scratch takes about five minutes.\nHere is a minimal WireGuard server config for reference:\n[Interface] Address = 10.0.0.1/24 ListenPort = 51820 PrivateKey = \u0026lt;server-private-key\u0026gt; [Peer] PublicKey = \u0026lt;client-public-key\u0026gt; AllowedIPs = 10.0.0.2/32 The client side is equally short. Compare this to an OpenVPN config with its certificate paths, cipher negotiations, and TLS auth keys.\nPerformance This is where WireGuard stands out most clearly. 
On my VPS (1 vCPU, 2 GB RAM), I consistently measured 15–25% higher throughput with WireGuard compared to OpenVPN over the same connection.\nLatency overhead was also noticeably lower. With OpenVPN (UDP mode), the tunnel added about 3–5 ms of latency. With WireGuard, the overhead was typically under 1 ms. For interactive SSH sessions, this makes a perceptible difference.\nThe performance gap makes sense architecturally. WireGuard runs in the kernel and uses modern cryptography (ChaCha20, Curve25519) with a fixed cipher suite. OpenVPN runs in userspace and supports a wide range of cipher combinations, which adds overhead.\nResource usage WireGuard is lighter on resources. On my server, the wg0 interface uses negligible RAM and no persistent process — it is a kernel module. OpenVPN runs as a userspace daemon that typically uses 10–20 MB of RAM.\nOn a small VPS where every megabyte counts, this matters. Not enormously, but it adds up if you are running other services.\nStability and reconnection Both have been reliable for me, but WireGuard handles network changes more gracefully. When my laptop switches from WiFi to a mobile hotspot, WireGuard reconnects transparently — there is no handshake negotiation, just a seamless transition. OpenVPN sometimes needs a manual reconnect or a timeout before re-establishing the tunnel.\nWireGuard also handles the \u0026ldquo;roaming\u0026rdquo; case well. Since it is based on UDP and uses the concept of cryptokey routing, the server automatically updates the endpoint when a client\u0026rsquo;s IP address changes.\nWhat OpenVPN still does better OpenVPN has a more mature ecosystem. It supports TCP mode, which can be useful if you are on a network that blocks UDP. WireGuard is UDP-only by design.\nOpenVPN also has more granular access control through its certificate infrastructure. You can revoke individual client certificates without touching the server config. 
With WireGuard, you need to remove the peer from the config and reload.\nIf you need to push routes, DNS settings, or other options to clients dynamically, OpenVPN has built-in support for that. WireGuard keeps this out of scope by design — you handle routing on each peer yourself.\nWhere I landed I switched to WireGuard for my daily use and have not looked back. The simplicity of the configuration, the lower latency for SSH sessions to my VPS, and the seamless reconnection when switching networks all make it a better fit for my workflow.\nI keep the OpenVPN configuration around as a fallback for situations where I might be on a network that restricts UDP traffic. But in practice, I have not needed it in months.\nFor a straightforward secure tunnel to your own server, WireGuard is the clear choice. The smaller codebase (around 4,000 lines vs. hundreds of thousands for OpenVPN) also means less surface area to audit, which is a nice bonus.\n","permalink":"https://packetlog.org/posts/wireguard-vs-openvpn/","summary":"\u003cp\u003eI use a VPN tunnel to administer my VPS and to connect back to my home network when I am away. Over the past year I have used both OpenVPN and WireGuard for this purpose. Here is how they compare in practice.\u003c/p\u003e\n\u003ch2 id=\"the-setup\"\u003eThe setup\u003c/h2\u003e\n\u003cp\u003eMy use case is straightforward: I need a secure tunnel from my laptop to my VPS for SSH, web dashboards, and occasionally reaching services on my home LAN. Nothing exotic — just point-to-point connectivity between machines I control.\u003c/p\u003e","title":"WireGuard vs OpenVPN: my experience"},{"content":"Running a speed test from your own machine tells you about one path. To get a broader picture of your server\u0026rsquo;s connectivity, you need to test from multiple locations. Here are the approaches I use.\nping.pe ping.pe is one of the simplest tools for this. 
Enter your server\u0026rsquo;s IP or hostname and it runs ping, MTR, and port checks from dozens of locations simultaneously. The results show up in a live grid.\nWhat I find most useful is the MTR view. It runs a full MTR from each location to your server, so you can see the complete path and identify which routes have issues. If your server is reachable from Frankfurt and London but has high latency from Singapore, the MTR will show you where the path diverges and where the delay is introduced.\nThe ping results give a quick overview of baseline latency from each region.\nFor a server in Amsterdam, I typically see:\nWestern Europe: 5–15 ms Eastern Europe: 20–40 ms US East Coast: 75–90 ms US West Coast: 140–160 ms Asia Pacific: 200–280 ms These numbers are useful as a baseline. If latency from a region suddenly increases, you have a reference point to compare against.\nSpeedtest Tracker Speedtest Tracker is a self-hosted application that runs periodic speed tests and stores the results. It uses the Ookla speedtest CLI under the hood, testing from your server to the nearest Ookla servers.\nThis tests the opposite direction from a self-hosted LibreSpeed instance. LibreSpeed measures from a client to your server. Speedtest Tracker measures from your server to external test endpoints. Both are useful.\nInstallation with Docker:\ndocker run -d \\ --name speedtest-tracker \\ -p 8080:80 \\ -v /opt/speedtest-tracker:/config \\ -e PUID=1000 \\ -e PGID=1000 \\ -e DB_CONNECTION=sqlite \\ ghcr.io/alexjustesen/speedtest-tracker:latest The web interface shows historical graphs of download speed, upload speed, and latency. 
You can configure test intervals and select specific Ookla servers to test against.\ncurl timing from remote hosts If you have access to machines in different locations (other VPS instances, cloud shells, etc.), a simple curl timing test gives you real-world download measurements:\ncurl -o /dev/null -s -w \u0026#34;connect: %{time_connect}s\\ntls: %{time_appconnect}s\\nttfb: %{time_starttransfer}s\\ntotal: %{time_total}s\\nspeed: %{speed_download} bytes/s\\n\u0026#34; https://packetlog.org/test-file To make this more meaningful, create a test file of known size:\ndd if=/dev/urandom of=/var/www/html/test-100m.bin bs=1M count=100 Then download it from your remote locations. This gives you the actual throughput a real client would experience, including TLS overhead.\nCloud shell trick Most cloud providers offer free browser-based shells. Google Cloud Shell, AWS CloudShell, and Oracle Cloud Shell all provide terminal access from their respective data center locations. These are useful for quick ad-hoc tests without maintaining your own machines everywhere.\n# From Google Cloud Shell (US) mtr -r -c 20 packetlog.org # Or a download test curl -o /dev/null -s -w \u0026#34;%{speed_download}\\n\u0026#34; https://packetlog.org/test-100m.bin What the results tell you The numbers themselves are less important than the patterns. Things I look for:\nConsistent performance within a region. If most European locations show similar latency but one is an outlier, there might be a routing issue with that specific network path.\nAsymmetric speeds. Upload and download speeds between your server and a test point should be roughly similar (within the limits of the path). If download is fast but upload is slow, or vice versa, it could indicate congestion on one direction of a link.\nTime-of-day variation. Network performance can vary significantly by time of day due to usage patterns. 
Running tests at different hours helps distinguish between a persistent problem and peak-hour congestion.\nDegradation over time. If your speed test results gradually worsen, it might indicate your hosting provider is overcommitting resources, or there has been a routing change that affected your server\u0026rsquo;s connectivity.\nMy setup I run a LibreSpeed instance for on-demand testing from my browser and Speedtest Tracker for automated periodic measurements. For geographic coverage, I use ping.pe and occasional manual tests from cloud shells. The combination covers most troubleshooting scenarios without being overly complicated.\nThe goal is not to obsess over numbers but to have enough data to identify problems quickly when they occur and to verify that your hosting provider is delivering what they promised.\n","permalink":"https://packetlog.org/posts/speed-testing-multiple-locations/","summary":"\u003cp\u003eRunning a speed test from your own machine tells you about one path. To get a broader picture of your server\u0026rsquo;s connectivity, you need to test from multiple locations. Here are the approaches I use.\u003c/p\u003e\n\u003ch2 id=\"pingpe\"\u003eping.pe\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"https://ping.pe/\"\u003eping.pe\u003c/a\u003e is one of the simplest tools for this. Enter your server\u0026rsquo;s IP or hostname and it runs ping, MTR, and port checks from dozens of locations simultaneously. The results show up in a live grid.\u003c/p\u003e","title":"Speed testing your server from multiple locations"},{"content":"I wanted an independent way to test the connection speed to my server without relying on third-party services. LibreSpeed is an open-source speed test that you can host yourself. 
There is also a Go backend called speedtest-go that is self-contained and easy to deploy.\nThis post covers setting up speedtest-go with a systemd service and an Nginx reverse proxy.\nWhy self-host a speed test Public speed test services measure the connection between you and their servers, which is useful but does not tell you much about the connection to your specific server. A self-hosted instance measures exactly what you care about: the bandwidth and latency between a client and your VPS.\nIt is also useful for verifying that your hosting provider delivers the bandwidth they promise.\nInstalling speedtest-go Download the latest release from the GitHub releases page. Pick the binary for your architecture:\ncd /tmp wget https://github.com/librespeed/speedtest-go/releases/download/v1.1.5/speedtest-go_1.1.5_linux_amd64.tar.gz tar xzf speedtest-go_1.1.5_linux_amd64.tar.gz Create a directory for the application and move the files:\nsudo mkdir -p /opt/speedtest sudo mv speedtest-go /opt/speedtest/ sudo mv assets /opt/speedtest/ Configuration Create a settings file at /opt/speedtest/settings.toml:\n# Bind address and port bind_address = \u0026#34;127.0.0.1\u0026#34; listen_port = 8989 # Server location info (shown in the UI) server_lat = 52.3676 server_lng = 4.9041 server_name = \u0026#34;Amsterdam\u0026#34; # Paths assets_path = \u0026#34;/opt/speedtest/assets\u0026#34; # Database for storing results (optional) database_type = \u0026#34;bolt\u0026#34; database_file = \u0026#34;/opt/speedtest/speedtest.db\u0026#34; # Statistics password (change this) statistics_password = \u0026#34;change-me-to-something-random\u0026#34; # TLS (terminated by Nginx, so left off here) enable_tls = false The bind_address is set to 127.0.0.1 because Nginx will handle external connections. There is no need to expose the Go server directly.\nTest that it starts correctly:\ncd /opt/speedtest ./speedtest-go You should see output indicating the server is listening on port 8989. 
Stop it with Ctrl+C.\nCreating a systemd service Create a dedicated user to run the service:\nsudo useradd -r -s /usr/sbin/nologin speedtest sudo chown -R speedtest:speedtest /opt/speedtest Create the service file at /etc/systemd/system/speedtest.service:\n[Unit] Description=LibreSpeed speedtest-go After=network.target [Service] Type=simple User=speedtest Group=speedtest WorkingDirectory=/opt/speedtest ExecStart=/opt/speedtest/speedtest-go Restart=on-failure RestartSec=5 # Hardening NoNewPrivileges=true ProtectSystem=strict ProtectHome=true ReadWritePaths=/opt/speedtest PrivateTmp=true [Install] WantedBy=multi-user.target Enable and start the service:\nsudo systemctl daemon-reload sudo systemctl enable speedtest sudo systemctl start speedtest sudo systemctl status speedtest Check that it is running:\ncurl -s http://127.0.0.1:8989 | head -20 You should see HTML output from the speed test interface.\nNginx reverse proxy Add a server block for the speed test. If you want to serve it on a subdomain like speed.packetlog.org, create /etc/nginx/sites-available/speedtest:\nserver { listen 80; listen [::]:80; server_name speed.packetlog.org; location / { proxy_pass http://127.0.0.1:8989; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; # Speed test needs larger body sizes for upload tests client_max_body_size 100M; } } Enable the site and reload Nginx:\nsudo ln -s /etc/nginx/sites-available/speedtest /etc/nginx/sites-enabled/ sudo nginx -t sudo systemctl reload nginx If you are using Let\u0026rsquo;s Encrypt, get a certificate for the subdomain:\nsudo certbot --nginx -d speed.packetlog.org The web interface Once everything is running, visit https://speed.packetlog.org in a browser. 
The default LibreSpeed interface measures:\nDownload speed — server to client bandwidth Upload speed — client to server bandwidth Ping — round-trip latency Jitter — variation in latency Results are stored in the database if you configured one. You can view historical results at the /stats endpoint (protected by the statistics_password from the config).\nResource usage speedtest-go is lightweight at idle — about 10 MB of RAM. During an active test it uses more, but this is temporary and proportional to the configured test size. On a small VPS, it coexists comfortably with other services.\nCPU usage spikes briefly during tests because the server is generating and consuming data at line speed. On a 1 vCPU machine, expect the CPU to be fully utilized during an active speed test. This is normal and lasts only a few seconds.\nVerifying the setup After everything is running, it is worth doing a few sanity checks.\nFirst, make sure the service survives a reboot:\nsudo reboot # After reboot: systemctl status speedtest Check that the Nginx proxy is working correctly by looking at the response headers:\ncurl -I https://speed.packetlog.org You should see a 200 status code and headers from Nginx (not directly from the Go server). If you get a 502 Bad Gateway, the speedtest-go process is not running or is listening on a different port than Nginx expects.\nRun a test from your browser and then verify the result was stored:\ncurl -s http://127.0.0.1:8989/stats | head If you configured the bolt database, you should see JSON output with your test result.\nFirewall considerations Since speedtest-go binds to 127.0.0.1, it is not directly accessible from outside. All traffic goes through Nginx. 
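A quick way to confirm the bind address is to list the listening sockets (this assumes the default port 8989 from the configuration above):

```shell
# speedtest-go should appear bound to 127.0.0.1 only.
# A line showing 0.0.0.0:8989 or [::]:8989 would mean the Go server
# is exposed directly instead of sitting behind Nginx.
sudo ss -tlnp | grep 8989
```

If the output shows anything other than 127.0.0.1:8989, revisit the bind_address in settings.toml and restart the service.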
Make sure your firewall allows HTTP (80) and HTTPS (443) but does not expose port 8989. Check the current rules:\nsudo ufw status If you are using ufw, the default rules for Nginx should be sufficient:\nsudo ufw allow \u0026#39;Nginx Full\u0026#39; Restricting access If you do not want the speed test to be publicly accessible, you can add basic authentication in Nginx:\nsudo apt install apache2-utils sudo htpasswd -c /etc/nginx/.htpasswd speedtest Then add to the Nginx location block:\nauth_basic \u0026#34;Speed Test\u0026#34;; auth_basic_user_file /etc/nginx/.htpasswd; Alternatively, you can restrict by IP if you only need access from specific locations.\nAutomating tests For ongoing monitoring, you can run speed tests from the command line. The speed test API accepts requests that can be scripted:\n# Quick download test using curl curl -o /dev/null -s -w \u0026#34;%{speed_download}\u0026#34; https://speed.packetlog.org/backend/garbage?ckSize=100 # The result is in bytes per second I cover more structured testing from remote locations in a separate post.\nWrapping up The whole setup takes about 15 minutes. speedtest-go is a well-maintained project with a clean interface and minimal dependencies. Having your own speed test endpoint is useful for troubleshooting, verifying hosting provider performance, and establishing baseline measurements for your server\u0026rsquo;s connectivity.\n","permalink":"https://packetlog.org/posts/librespeed-self-hosted/","summary":"\u003cp\u003eI wanted an independent way to test the connection speed to my server without relying on third-party services. \u003ca href=\"https://github.com/librespeed/speedtest\"\u003eLibreSpeed\u003c/a\u003e is an open-source speed test that you can host yourself. 
There is also a Go backend called \u003ca href=\"https://github.com/librespeed/speedtest-go\"\u003espeedtest-go\u003c/a\u003e that is self-contained and easy to deploy.\u003c/p\u003e\n\u003cp\u003eThis post covers setting up speedtest-go with a systemd service and an Nginx reverse proxy.\u003c/p\u003e\n\u003ch2 id=\"why-self-host-a-speed-test\"\u003eWhy self-host a speed test\u003c/h2\u003e\n\u003cp\u003ePublic speed test services measure the connection between you and their servers, which is useful but does not tell you much about the connection to your specific server. A self-hosted instance measures exactly what you care about: the bandwidth and latency between a client and your VPS.\u003c/p\u003e","title":"Setting up a self-hosted speed test with LibreSpeed"},{"content":"I have been running various services on my VPS for a while now. Some have been worth the effort, others were not. Here is where I have landed.\nWorth self-hosting Static sites and personal projects. This is the easiest category. A static site generator plus Nginx is trivial to maintain. No database, no runtime dependencies, minimal attack surface. Updates are a git pull and a rebuild. I cannot think of a reason to use a managed service for this.\nMonitoring. I run a lightweight monitoring stack to keep tabs on my server. Uptime Kuma is a good example — a single Docker container that monitors HTTP endpoints, pings, and ports. It sends alerts via email or webhook when something goes down. Setup takes five minutes, and it has been completely reliable.\nDNS (authoritative). Running your own authoritative DNS with something like Knot DNS or NSD is straightforward once configured. Zone file updates are predictable, and you have full control over TTLs and record types. I would not recommend running a recursive resolver though — that is a different level of complexity and exposure.\nGit hosting. Gitea or Forgejo run well on modest hardware and provide a GitHub-like interface. 
Useful for private projects and mirrors. A Forgejo instance with SQLite uses about 100 MB of RAM.\nSpeed testing. Self-hosted speed test tools like LibreSpeed give you an independent way to measure connectivity to your server. I wrote more about this in a later post.\nNot worth self-hosting (for me) Email. The perennial answer. Running a mail server is not technically difficult — Postfix and Dovecot are well-documented. The problem is deliverability. Major providers are aggressive about filtering mail from small, unknown servers. You will spend more time managing SPF, DKIM, DMARC, reverse DNS, IP reputation, and delisting requests than you will spend actually using email. I use a paid email provider and do not miss running my own.\nDatabase-backed web applications. Content management systems, project management tools, anything with a database and user accounts. The maintenance burden is real: backups, updates, security patches, dependency management. For personal use, managed services or SaaS tools are almost always less hassle.\nSearch engines. I experimented with self-hosted search (SearXNG). It works, but the instance needs regular attention as upstream sources change their APIs and rate limits. The maintenance-to-value ratio was not there for me.\nThe decision framework When I evaluate whether to self-host something, I think about:\nMaintenance overhead. How often does it need attention? Static sites need almost none. Email needs constant vigilance. Failure impact. If the service goes down at 2 AM, does it matter? Monitoring and DNS matter. A personal Gitea instance can wait until morning. Data sensitivity. Hosting your own means you control the data. For some use cases this matters. Resource usage. On a small VPS, every service competes for RAM and CPU. A service that uses 500 MB at idle is expensive when you only have 2 GB total. The practical middle ground I have settled on a small set of services that run quietly and need minimal attention. 
The server runs Nginx for static sites, Uptime Kuma for monitoring, Forgejo for git, and a speed test instance. Total idle RAM usage is under 500 MB. Everything is backed up daily with a simple rsync script.\nOne thing I have learned is that Docker makes management significantly easier for most of these services. Each service gets its own container with pinned versions, isolated dependencies, and a simple update path. A docker-compose.yml file documents the entire stack in one place. Rollbacks are just a matter of changing a tag.\nThe key realization is that self-hosting is not all or nothing. Pick the services where the trade-off makes sense and use managed solutions for the rest.\n","permalink":"https://packetlog.org/posts/self-hosting-2025/","summary":"\u003cp\u003eI have been running various services on my VPS for a while now. Some have been worth the effort, others were not. Here is where I have landed.\u003c/p\u003e\n\u003ch2 id=\"worth-self-hosting\"\u003eWorth self-hosting\u003c/h2\u003e\n\u003cp\u003e\u003cstrong\u003eStatic sites and personal projects.\u003c/strong\u003e This is the easiest category. A static site generator plus Nginx is trivial to maintain. No database, no runtime dependencies, minimal attack surface. Updates are a \u003ccode\u003egit pull\u003c/code\u003e and a rebuild. I cannot think of a reason to use a managed service for this.\u003c/p\u003e","title":"Self-hosting in 2025: what's worth it"},{"content":"Serving a static site is fast by default, but proper caching headers make a noticeable difference for repeat visitors. These are the headers I configure and why.\nCache-Control This is the primary header that controls caching behavior. For static assets like CSS, JS, images, and fonts:\nlocation ~* \\.(css|js|png|jpg|jpeg|webp|svg|woff|woff2|ico)$ { add_header Cache-Control \u0026#34;public, max-age=2592000, immutable\u0026#34;; } max-age=2592000 is 30 days. 
The immutable directive tells the browser not to revalidate the resource even on a normal reload. This works well when your build tool hashes filenames — if the content changes, the filename changes, so the old cached version is never requested again.\nFor HTML pages, shorter caching makes more sense:\nlocation ~* \\.html$ { add_header Cache-Control \u0026#34;public, max-age=3600\u0026#34;; } One hour. This means content updates appear within an hour for returning visitors, while still reducing unnecessary requests.\nETag and Last-Modified Nginx sends ETag and Last-Modified headers by default for static files. These enable conditional requests: the browser sends If-None-Match or If-Modified-Since, and the server responds with 304 Not Modified if nothing changed. No body is transferred.\nThis is useful for resources where you set a short max-age. After the cache expires, the browser revalidates instead of downloading the full resource again.\nI leave Nginx\u0026rsquo;s default ETag behavior enabled. There is no reason to disable it unless you are running multiple backend servers that generate inconsistent ETags.\nThe Expires header Expires is the older HTTP/1.0 way of setting cache duration. It takes an absolute date:\nExpires: Thu, 15 Jan 2026 00:00:00 GMT If you set Cache-Control: max-age, it takes precedence over Expires in modern browsers. I do not bother setting Expires separately since Nginx\u0026rsquo;s expires directive sets both:\nexpires 30d; This adds both the Expires header and Cache-Control: max-age=2592000.\nWhat I actually use For this site, the configuration is straightforward. Hugo generates hashed asset filenames in production, so assets can be cached aggressively. HTML files get a shorter cache time. The relevant Nginx snippet is in my earlier post on Nginx configuration.\nThe key insight is that caching is not just about performance. It reduces server load, lowers bandwidth usage, and makes the site feel faster for returning visitors. 
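To confirm what is actually being sent, a quick header check works well. This is a sketch; the paths are illustrative, so substitute files that exist on your site:

```shell
# Inspect the caching headers served for an asset and for an HTML page.
# The two paths below are examples, not real files on this site.
curl -sI https://packetlog.org/css/main.css | grep -iE 'cache-control|etag|expires'
curl -sI https://packetlog.org/index.html | grep -iE 'cache-control|etag|expires'
```

With the configuration above, the asset should show the 30-day max-age with immutable, and the HTML page the one-hour max-age.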
For a static site on a small VPS, it is one of the simplest optimizations you can make.\n","permalink":"https://packetlog.org/posts/caching-static-sites/","summary":"\u003cp\u003eServing a static site is fast by default, but proper caching headers make a noticeable difference for repeat visitors. These are the headers I configure and why.\u003c/p\u003e\n\u003ch2 id=\"cache-control\"\u003eCache-Control\u003c/h2\u003e\n\u003cp\u003eThis is the primary header that controls caching behavior. For static assets like CSS, JS, images, and fonts:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;\"\u003e\u003ccode class=\"language-nginx\" data-lang=\"nginx\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e\u003cspan style=\"color:#66d9ef\"\u003elocation\u003c/span\u003e ~\u003cspan style=\"color:#e6db74\"\u003e*\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\\.(css|js|png|jpg|jpeg|webp|svg|woff|woff2|ico)\u003c/span\u003e$ {\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e    \u003cspan style=\"color:#f92672\"\u003eadd_header\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003eCache-Control\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003e\u0026#34;public,\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003emax-age=2592000,\u003c/span\u003e \u003cspan style=\"color:#e6db74\"\u003eimmutable\u0026#34;\u003c/span\u003e;\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e}\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003e\u003ccode\u003emax-age=2592000\u003c/code\u003e is 30 days. The \u003ccode\u003eimmutable\u003c/code\u003e directive tells the browser not to revalidate the resource even on a normal reload. 
This works well when your build tool hashes filenames — if the content changes, the filename changes, so the old cached version is never requested again.\u003c/p\u003e","title":"Caching strategies for static sites"},{"content":"I spent the last few weeks reading about BGP after running into a routing issue that I could not explain with traceroute alone. These are rough notes, not a tutorial.\nThe basics BGP (Border Gateway Protocol) is how autonomous systems on the internet exchange routing information. An autonomous system (AS) is a network or group of networks operated by a single organization, identified by an AS number (ASN). For example, Cloudflare is AS13335, Hetzner is AS24940.\nWhen your packets travel from your home to a server in Amsterdam, they cross multiple autonomous systems. BGP is what lets those networks agree on the path.\nHow routes propagate Each AS announces the IP prefixes it owns to its neighbors. Those neighbors pass the announcements along, prepending their own ASN to the path. The result is an AS path — a chain of AS numbers that describes the route.\n203.0.113.0/24 via AS64500 AS64501 AS64502 This means: to reach 203.0.113.0/24, go through AS64502, then AS64501, then AS64500. Shorter paths are generally preferred, but it is more nuanced than that.\nBGP communities Communities are tags attached to route announcements. They let operators signal intent to their peers. For example, a provider might define a community that means \u0026ldquo;do not export this route to other peers\u0026rdquo; or \u0026ldquo;lower the local preference for this route.\u0026rdquo;\nCommunities are typically written as ASN:value, like 64500:100. Large communities use a three-part format: ASN:function:parameter.\nI found the RIPE NCC BGP community guides useful for understanding how different networks use these.\nInteresting observations A few things that surprised me:\nRouting is not symmetric. 
The path from A to B can be completely different from the path from B to A. Each AS makes independent routing decisions.\nAS path length is not the only metric. Local preference, multi-exit discriminator (MED), and operator policy all influence route selection. A longer AS path can win if the operator configured it that way.\nRoute convergence takes time. When a link goes down, BGP routers withdraw the affected routes and announce alternatives. This process can take seconds to minutes. During convergence, packets can be dropped or take suboptimal paths.\nUseful tools For looking up ASN information and BGP routes, I found these helpful:\nwhois -h whois.radb.net AS64500 — query the RADB for an AS\u0026rsquo;s routing policy bgp.tools — clean interface for AS and prefix lookups PeeringDB — information about where networks peer RIPE Stat — routing history and visibility data I do not operate a network that runs BGP, but understanding the basics has helped me make sense of traceroute output and routing anomalies. When you see a strange path or unexpected latency, knowing that BGP policy decisions are behind it makes the situation less mysterious.\n","permalink":"https://packetlog.org/posts/bgp-routing/","summary":"\u003cp\u003eI spent the last few weeks reading about BGP after running into a routing issue that I could not explain with traceroute alone. These are rough notes, not a tutorial.\u003c/p\u003e\n\u003ch2 id=\"the-basics\"\u003eThe basics\u003c/h2\u003e\n\u003cp\u003eBGP (Border Gateway Protocol) is how autonomous systems on the internet exchange routing information. An autonomous system (AS) is a network or group of networks operated by a single organization, identified by an AS number (ASN). For example, Cloudflare is AS13335, Hetzner is AS24940.\u003c/p\u003e","title":"What I learned about BGP routing"},{"content":"When something is slow or unreachable on the network, the first instinct is to blame the server. But the problem is often somewhere in between. 
These are the tools I reach for when I need to figure out where packets are getting lost or delayed.\nTraceroute basics Traceroute shows the path packets take from your machine to a destination. It works by sending packets with increasing TTL (Time to Live) values. Each router along the path decrements the TTL by one, and when it reaches zero, that router sends back an ICMP Time Exceeded message. This reveals each hop.\ntraceroute packetlog.org On Linux, traceroute sends UDP packets by default. On Windows, tracert uses ICMP. This matters because some routers treat ICMP and UDP differently. You can force ICMP on Linux:\ntraceroute -I packetlog.org Or use TCP, which is useful when ICMP is filtered:\ntraceroute -T -p 443 packetlog.org A typical output looks like this:\n1 gateway (192.168.1.1) 1.234 ms 1.112 ms 1.098 ms 2 10.10.0.1 (10.10.0.1) 3.456 ms 3.321 ms 3.298 ms 3 core-router.isp.net (203.0.113.1) 8.765 ms 8.654 ms 8.601 ms 4 * * * 5 peer-link.transit.net (198.51.100.5) 15.432 ms 15.321 ms 15.287 ms 6 edge.datacenter.net (198.51.100.20) 18.123 ms 18.045 ms 17.998 ms 7 packetlog.org (203.0.113.10) 18.234 ms 18.156 ms 18.089 ms Each line is a hop. The three values are round-trip times for three probes. Asterisks (* * *) mean the router did not respond, which is common and not necessarily a problem. Many routers are configured to drop ICMP or rate-limit responses.\nReading traceroute output A few patterns I have learned to look for:\nSudden latency jump. If latency goes from 10 ms to 80 ms at a specific hop and stays high for the rest of the path, the link between those two hops is likely the bottleneck. Possibly an intercontinental link, which is expected.\nLatency spike at one hop, then back to normal. This usually means that specific router is slow at generating ICMP responses but is forwarding traffic just fine. The router\u0026rsquo;s control plane is separate from its forwarding plane. 
Do not assume this hop is the problem.\nPacket loss at intermediate hops only. If you see loss at hop 5 but hop 6 and the destination are fine, the router at hop 5 is probably rate-limiting ICMP. Not a real issue.\nPacket loss at the final hop. This is more concerning. It could indicate the destination server is overloaded or there is congestion on the last-mile link.\nMTR: traceroute with statistics MTR combines traceroute and ping into a single tool. It continuously sends probes and builds a statistical picture over time, which makes it far more useful than a single traceroute snapshot.\nmtr packetlog.org The interactive display updates in real time. For a report you can share or save:\nmtr -r -c 100 packetlog.org This sends 100 probes and outputs a summary. A typical report:\nHOST: myserver Loss% Snt Last Avg Best Wrst StDev 1.|-- gateway 0.0% 100 0.5 0.6 0.4 1.2 0.1 2.|-- isp-core.example.net 0.0% 100 3.2 3.4 2.9 5.1 0.3 3.|-- transit-peer.example.n 0.0% 100 12.1 12.3 11.8 14.2 0.4 4.|-- ??? 100.0 100 0.0 0.0 0.0 0.0 0.0 5.|-- dc-edge.example.net 0.0% 100 18.4 18.6 18.1 20.3 0.3 6.|-- packetlog.org 0.0% 100 18.5 18.7 18.2 20.5 0.3 The columns that matter most:\nLoss% — Percentage of probes that got no response. Look at the final destination, not intermediate hops. Avg — Average round-trip time. Compare this across hops to find where latency is introduced. StDev — Standard deviation. High StDev means inconsistent latency, which can indicate congestion or route flapping. Installing MTR On Debian and Ubuntu:\nsudo apt install mtr-tiny The mtr-tiny package is the ncurses version without a GTK dependency. On RHEL-based systems:\nsudo dnf install mtr MTR needs root or appropriate capabilities to send raw packets. You can either run it with sudo or set the capability:\nsudo setcap cap_net_raw+ep /usr/bin/mtr-packet Looking glasses A looking glass is a web-based tool hosted by a network operator that lets you run traceroute, ping, or BGP queries from their network. 
This is invaluable when you want to test connectivity from a location other than your own.\nSome looking glasses I find useful:\nNLNOG Looking Glass — Run from the NLNOG ring nodes distributed worldwide. Hurricane Electric Looking Glass — BGP route info and traceroute from HE\u0026rsquo;s extensive network. Lumen / Level3 Looking Glass — Useful for testing from a major Tier 1 network. RIPE RIS Looking Glass — More focused on BGP and route visibility. These are particularly useful when a user reports they cannot reach your server. You can run a traceroute from a network close to theirs and see where the path breaks.\nCombining the tools My typical workflow when investigating a connectivity issue:\nRun mtr -r -c 50 destination from my server to the reported problem area. This gives the forward path. Ask the affected user (or use a looking glass near them) to run a traceroute back to my server. The return path can be completely different. Check if the issue is at a peering point between two networks. This is common and usually resolves on its own as traffic engineering adjusts. If the problem is clearly within a transit provider\u0026rsquo;s network, there is not much you can do except wait or contact them. If the problem is on the last hop before your server, check your server\u0026rsquo;s network interface, firewall rules, and load.\nUseful flags A few mtr options I use regularly:\n# Use TCP instead of ICMP (useful when ICMP is filtered) mtr -T -P 443 packetlog.org # Use UDP mtr -u packetlog.org # Set packet size (helps detect MTU issues) mtr -s 1400 packetlog.org # Show IP addresses instead of hostnames (faster) mtr -n packetlog.org For traceroute, the Paris traceroute variant is worth knowing about. Classic traceroute can show false paths because load balancers distribute probes across different routes. 
Paris traceroute keeps flow identifiers consistent:\nsudo apt install paris-traceroute paris-traceroute packetlog.org When to worry Not every anomaly in a traceroute is a problem. Networks are complex, and routers make independent decisions about ICMP handling. Focus on what matters: can the destination be reached, what is the latency to the destination, and is there packet loss at the destination.\nIf everything looks fine in the traceroute but the application is still slow, the issue is likely above the network layer. Check DNS resolution time, TLS handshake overhead, or server-side processing. Tools like curl -w with timing variables are helpful there:\ncurl -o /dev/null -s -w \u0026#34;dns: %{time_namelookup}s\\nconnect: %{time_connect}s\\ntls: %{time_appconnect}s\\ntotal: %{time_total}s\\n\u0026#34; https://packetlog.org Network diagnostics are about narrowing down where the problem is, not necessarily fixing it. Half the time, the answer is \u0026ldquo;it is not your network\u0026rdquo; — which is still useful information.\n","permalink":"https://packetlog.org/posts/network-diagnostics-traceroute-mtr/","summary":"\u003cp\u003eWhen something is slow or unreachable on the network, the first instinct is to blame the server. But the problem is often somewhere in between. These are the tools I reach for when I need to figure out where packets are getting lost or delayed.\u003c/p\u003e\n\u003ch2 id=\"traceroute-basics\"\u003eTraceroute basics\u003c/h2\u003e\n\u003cp\u003eTraceroute shows the path packets take from your machine to a destination. It works by sending packets with increasing TTL (Time to Live) values. Each router along the path decrements the TTL by one, and when it reaches zero, that router sends back an ICMP Time Exceeded message. This reveals each hop.\u003c/p\u003e","title":"Network diagnostics: traceroute, MTR, and looking glasses"},{"content":"I recently moved a domain to a new server and had to wait for the change to take effect everywhere. 
The process is commonly called \u0026ldquo;DNS propagation,\u0026rdquo; but that term is a bit misleading. Here is what actually happens.\nDNS is not a broadcast system When people say \u0026ldquo;DNS is propagating,\u0026rdquo; it sounds like your new record is being pushed out to servers around the world. That is not how it works.\nDNS is a pull-based system. Resolvers fetch records when they need them, cache the results, and serve the cached version until it expires. When you change a DNS record, the old cached copies do not get invalidated. They simply expire over time based on the TTL (Time to Live) value.\nSo \u0026ldquo;DNS propagation\u0026rdquo; really means \u0026ldquo;waiting for caches to expire.\u0026rdquo;\nHow a DNS query works When you type example.com into a browser, the resolution process goes through several steps.\nYour device first checks its own local cache. If it has a recent answer, it uses that.\nIf not, it asks a recursive resolver. This is usually operated by your ISP or a public service like 1.1.1.1 or 8.8.8.8. The recursive resolver does the actual work of finding the answer.\nThe recursive resolver starts at the root. It asks a root name server for com., then asks the .com TLD server for example.com, and finally asks the authoritative name server for example.com for the specific record. Each answer comes with a TTL, and the resolver caches every response.\nClient -\u0026gt; Recursive resolver -\u0026gt; Root NS -\u0026gt; TLD NS -\u0026gt; Authoritative NS On the next query for the same record, the resolver serves it from cache without going through the whole chain again.\nTTL controls the timing Every DNS record has a TTL value, expressed in seconds. A TTL of 3600 means resolvers will cache that record for one hour before checking again.\nWhen you update a record at your DNS provider, the authoritative name server starts returning the new value immediately. 
But resolvers that already have the old value cached will keep serving it until the TTL expires.\nThis is why changes are not instant. If your TTL was 86400 (24 hours), it could take up to 24 hours for all caches to expire and start returning the new value.\nChecking the current state The dig command is essential for debugging DNS. It lets you query specific servers and see exactly what they return.\nQuery your authoritative name server directly:\ndig @ns1.yourprovider.com packetlog.org A This bypasses all caches and shows you what the authoritative server is returning right now.\nQuery a public resolver to see the cached value:\ndig @1.1.1.1 packetlog.org A dig @8.8.8.8 packetlog.org A If the authoritative server returns the new IP but the public resolver still returns the old one, you are seeing a cached answer. The ANSWER SECTION in the output includes the remaining TTL:\npacketlog.org. 1742 IN A 203.0.113.10 The 1742 means this cached answer expires in 1742 seconds.\nCommon record types A few record types you will work with most often:\nA record. Maps a domain to an IPv4 address. This is the most common record type.\ndig packetlog.org A +short AAAA record. Maps a domain to an IPv6 address.\ndig packetlog.org AAAA +short CNAME record. An alias that points one domain to another. The resolver follows the chain to get the final IP.\ndig www.packetlog.org CNAME +short MX record. Specifies mail servers for the domain.\ndig packetlog.org MX +short TXT record. Holds arbitrary text. Used for SPF, DKIM, domain verification, and other purposes.\ndig packetlog.org TXT +short Reducing propagation time If you know you are going to change a record, lower the TTL in advance. For example, if your current TTL is 86400 seconds, change it to 300 (5 minutes) a day before the migration. Then when you update the record, caches will expire within 5 minutes.\nAfter the migration is complete and verified, you can raise the TTL back to a longer value. 
Longer TTLs are better for performance because they reduce the number of queries hitting your authoritative server.\nA reasonable default TTL for most records is 3600 (1 hour). For records that rarely change, 86400 (24 hours) is fine.\nNegative caching DNS also caches negative results. If a resolver looks up a record that does not exist, it caches the \u0026ldquo;NXDOMAIN\u0026rdquo; response. The duration of this negative cache is controlled by the SOA record\u0026rsquo;s minimum TTL field.\nThis matters if you are adding a new record. Even if the authoritative server now has the record, resolvers that recently cached a negative result will not check again until the negative cache expires.\nTroubleshooting A few situations I have run into:\nChanges seem instant from some locations but not others. This is normal. Different resolvers cached the old record at different times, so their caches expire at different times.\nChanges are not visible after the TTL should have expired. Some resolvers do not strictly honor TTL. A few ISP resolvers are known to enforce a minimum cache time regardless of the TTL you set. There is not much you can do about this except wait.\nOld and new values alternate. If your DNS provider uses multiple name servers and you only updated one, or if the update is still being synchronized between them, you may see inconsistent results. Most providers handle this automatically, but it can take a few minutes.\nFor real-time checks across many locations, whatsmydns.net shows the DNS response from resolvers around the world.\nKey takeaways DNS propagation is not a push. It is caches expiring. The TTL value on your records controls how long that takes. Lower TTL before changes, use dig to verify, and be patient. 
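The verification step can be scripted as a small watch loop. This is a sketch: the new IP is the placeholder address used in this post's examples, and the choice of resolver is arbitrary.

```shell
#!/bin/sh
# Poll a public resolver once a minute until it returns the new address.
# NEW_IP is a placeholder; use the actual IP you migrated to.
NEW_IP="203.0.113.10"
while [ "$(dig +short @1.1.1.1 packetlog.org A | head -n1)" != "$NEW_IP" ]; do
  echo "still seeing the old record, waiting..."
  sleep 60
done
echo "1.1.1.1 now returns $NEW_IP"
```

Other resolvers may lag behind; running the same check against 8.8.8.8 gives a second data point.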
For most changes with a reasonable TTL, everything settles within an hour.\n","permalink":"https://packetlog.org/posts/dns-propagation/","summary":"\u003cp\u003eI recently moved a domain to a new server and had to wait for the change to take effect everywhere. The process is commonly called \u0026ldquo;DNS propagation,\u0026rdquo; but that term is a bit misleading. Here is what actually happens.\u003c/p\u003e\n\u003ch2 id=\"dns-is-not-a-broadcast-system\"\u003eDNS is not a broadcast system\u003c/h2\u003e\n\u003cp\u003eWhen people say \u0026ldquo;DNS is propagating,\u0026rdquo; it sounds like your new record is being pushed out to servers around the world. That is not how it works.\u003c/p\u003e","title":"How DNS propagation actually works"},{"content":"After setting up my VPS, security was the next priority. A server on the public internet gets probed constantly. Within hours of going live, the auth log fills up with failed SSH login attempts from all over the world.\nThis post covers the measures I took. None of this is novel, but having it written down in one place is useful.\nSSH hardening SSH is the primary way you access a Linux server, which makes it the primary target for attackers. The default configuration is functional but permissive.\nDisable root login Root login over SSH should be off. You should connect as a regular user and use sudo when needed.\nEdit /etc/ssh/sshd_config:\nPermitRootLogin no Disable password authentication Key-based authentication is both more convenient and more secure than passwords. Once your public key is set up, disable password login entirely:\nPasswordAuthentication no PubkeyAuthentication yes Make sure your key is working before you apply this. Otherwise you will lock yourself out.\nChange the default port (optional) Moving SSH to a non-standard port reduces noise in the logs. It is not real security, but it cuts automated scan attempts significantly:\nPort 2222 Pick any unused port above 1024. 
Remember to update your firewall rules to allow the new port before restarting SSH.\nOther useful settings A few more options worth setting:\nMaxAuthTries 3 LoginGraceTime 30 ClientAliveInterval 300 ClientAliveCountMax 2 MaxAuthTries 3 limits authentication attempts per connection. LoginGraceTime 30 closes the connection if authentication is not completed in 30 seconds. ClientAliveInterval and ClientAliveCountMax disconnect idle sessions after about 10 minutes. After making changes:\nsshd -t # test the configuration systemctl restart sshd Always test the configuration before restarting. A syntax error in sshd_config can lock you out.\nFail2ban Fail2ban monitors log files and bans IP addresses that show malicious behavior. It is particularly useful for SSH, where brute-force attempts are constant.\nInstallation apt install fail2ban Configuration Fail2ban uses jail configurations. Create a local override file so your changes survive package updates:\ncp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local Edit /etc/fail2ban/jail.local. The key settings for SSH:\n[sshd] enabled = true port = ssh filter = sshd logpath = /var/log/auth.log maxretry = 3 bantime = 3600 findtime = 600 This bans an IP for one hour after 3 failed login attempts within 10 minutes. You can increase bantime for more aggressive banning.\nIf you changed the SSH port, update the port value:\nport = 2222 Start and enable the service:\nsystemctl enable fail2ban systemctl start fail2ban Checking status fail2ban-client status sshd This shows the number of currently banned IPs and total bans since the service started. On a public server, you will see bans accumulating within minutes.\nTo unban an IP manually:\nfail2ban-client set sshd unbanip 203.0.113.50 UFW firewall UFW (Uncomplicated Firewall) is a frontend for iptables that makes basic firewall management straightforward.\nInstallation and setup UFW is often pre-installed on Debian/Ubuntu. 
If not:\napt install ufw Start by setting the default policies:\nufw default deny incoming ufw default allow outgoing This blocks all incoming connections except those you explicitly allow. Outgoing traffic is unrestricted.\nAllowing services Allow the services you need:\nufw allow 22/tcp # SSH (or your custom port) ufw allow 80/tcp # HTTP ufw allow 443/tcp # HTTPS If you moved SSH to a different port:\nufw allow 2222/tcp Enabling the firewall ufw enable UFW will warn you that this may disrupt existing SSH connections. Make sure you have allowed SSH before enabling.\nCheck the current rules:\nufw status verbose Rate limiting UFW has a built-in rate limiting feature for SSH:\nufw limit 22/tcp This allows a maximum of 6 connections per 30 seconds from a single IP. It is a simple complement to fail2ban.\nUnattended upgrades Security updates should be applied promptly. On a personal server that you might not check every day, automatic security updates make sense.\nInstallation apt install unattended-upgrades apt-listchanges Configuration Enable automatic updates:\ndpkg-reconfigure -plow unattended-upgrades This creates /etc/apt/apt.conf.d/20auto-upgrades with:\nAPT::Periodic::Update-Package-Lists \u0026#34;1\u0026#34;; APT::Periodic::Unattended-Upgrade \u0026#34;1\u0026#34;; The default configuration in /etc/apt/apt.conf.d/50unattended-upgrades is already set to install security updates. You can verify:\ngrep -A 5 \u0026#34;Allowed-Origins\u0026#34; /etc/apt/apt.conf.d/50unattended-upgrades It should include lines for security updates like:\n\u0026#34;${distro_id}:${distro_codename}-security\u0026#34;; Optional: email notifications If you want to receive email when updates are applied, set the mail address in 50unattended-upgrades:\nUnattended-Upgrade::Mail \u0026#34;martin@packetlog.org\u0026#34;; You will need a working mail setup on the server for this.\nOptional: automatic reboot Some kernel updates require a reboot. 
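On Debian-based systems you can tell whether a reboot is pending: the update machinery drops a flag file when an installed package needs one. A quick check, assuming the stock flag-file paths:

```shell
# unattended-upgrades (via update-notifier hooks) creates this flag
# file when an installed update needs a reboot to take effect.
if [ -f /var/run/reboot-required ]; then
  echo reboot required by:
  cat /var/run/reboot-required.pkgs 2>/dev/null
else
  echo no reboot pending
fi
```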
You can configure unattended-upgrades to reboot automatically at a specific time:\nUnattended-Upgrade::Automatic-Reboot \u0026#34;true\u0026#34;; Unattended-Upgrade::Automatic-Reboot-Time \u0026#34;04:00\u0026#34;; I leave this off and reboot manually when needed. I prefer to know when my server restarts.\nTesting Run a dry-run to verify the configuration:\nunattended-upgrades --dry-run --debug Additional measures A few more things I do that are quick and worthwhile.\nRemove unnecessary packages A fresh Debian install is already minimal, but check for anything you do not need:\napt list --installed | wc -l The fewer packages installed, the smaller the attack surface.\nCheck listening ports Periodically verify that only expected services are listening:\nss -tlnp This shows all TCP ports in listening state along with the process using them. If you see something unexpected, investigate.\nSet up log monitoring Review logs periodically. The important ones:\njournalctl -u sshd --since \u0026#34;1 hour ago\u0026#34; tail -100 /var/log/auth.log Failed login attempts, unusual process activity, and unexpected network connections are all worth investigating.\nKeep backups This is not a security hardening step per se, but it is the most important thing you can do. If something goes wrong, whether from an attack or a misconfiguration, backups let you recover.\nI keep automated daily backups of the server\u0026rsquo;s data to an external storage location. A simple cron job with rsync or rclone works well for this.\nSummary The measures here are the baseline. 
They will not stop a determined, targeted attacker, but they handle the vast majority of real-world threats to a personal server: automated scans, brute-force attempts, and unpatched vulnerabilities.\nThe key points:\nSSH: key-only auth, no root login, rate limiting Fail2ban: automatic banning of repeat offenders UFW: deny by default, allow only what is needed Unattended upgrades: automatic security patches All of this takes about 30 minutes to set up on a fresh server. The Debian security documentation and the fail2ban wiki are good references for going deeper.\n","permalink":"https://packetlog.org/posts/linux-security-basics/","summary":"\u003cp\u003eAfter setting up my VPS, security was the next priority. A server on the public internet gets probed constantly. Within hours of going live, the auth log fills up with failed SSH login attempts from all over the world.\u003c/p\u003e\n\u003cp\u003eThis post covers the measures I took. None of this is novel, but having it written down in one place is useful.\u003c/p\u003e\n\u003ch2 id=\"ssh-hardening\"\u003eSSH hardening\u003c/h2\u003e\n\u003cp\u003eSSH is the primary way you access a Linux server, which makes it the primary target for attackers. The default configuration is functional but permissive.\u003c/p\u003e","title":"Linux security basics for a personal server"},{"content":"I have been running Nginx on a 1 vCPU / 2 GB RAM VPS for a while now. These are my notes on configuration choices that make sense at this scale.\nThe defaults are mostly fine Nginx is efficient out of the box. For a small site serving static files, you can run the default configuration and it will handle far more traffic than your server will ever see. But there are a few things worth adjusting.\nWorker processes The worker_processes directive controls how many worker processes Nginx spawns. The common advice is to set it to the number of CPU cores:\nworker_processes auto; The auto value does exactly that. 
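A quick way to see the effect is to compare the core count with the number of worker processes Nginx actually spawned (assuming Nginx is running under its usual process title):

```shell
# Cores the kernel reports:
nproc

# Nginx worker processes currently running (prints 0 if none):
pgrep -c -f 'nginx: worker process' || true
```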
On a 1 vCPU machine, this gives you one worker, which is appropriate. There is no benefit to running multiple workers on a single core.\nEach worker can handle thousands of concurrent connections, so one worker is not a bottleneck for a small site.\nWorker connections events { worker_connections 512; } The default is often 768 or 1024. For a small personal server, 512 is more than enough. Each connection uses a small amount of memory, so there is no harm in keeping this reasonable.\nThe maximum number of simultaneous connections your server can handle is worker_processes * worker_connections. With 1 worker and 512 connections, that is 512 concurrent connections. For a personal blog, you will never come close.\nGzip compression Enabling gzip reduces the size of text-based responses significantly:\ngzip on; gzip_vary on; gzip_proxied any; gzip_comp_level 4; gzip_min_length 256; gzip_types text/plain text/css text/javascript application/javascript application/json application/xml image/svg+xml; A few notes on these settings:\ngzip_comp_level 4 is a good balance between compression ratio and CPU usage. Level 6 or higher uses noticeably more CPU with diminishing returns. gzip_min_length 256 avoids compressing tiny responses where the overhead is not worth it. gzip_vary on ensures caches handle compressed and uncompressed versions correctly. Static file caching For a static site, telling browsers to cache assets saves bandwidth and makes repeat visits faster:\nlocation ~* \\.(css|js|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ { expires 30d; add_header Cache-Control \u0026#34;public, no-transform\u0026#34;; } HTML files should not be cached as aggressively, since you want updates to appear immediately:\nlocation ~* \\.html$ { expires 1h; add_header Cache-Control \u0026#34;public, no-transform\u0026#34;; } Logging The default access log writes every request to disk. On a small VPS with an SSD, this is fine for low traffic. 
But if you want to reduce disk writes, you can buffer the logs:\naccess_log /var/log/nginx/access.log combined buffer=16k flush=5m; This buffers log entries and writes them in batches. The flush=5m ensures logs are written at least every five minutes even if the buffer is not full.\nFor a personal site with minimal traffic, I actually keep the default logging. The disk I/O is negligible. But if you are running something busier, buffered logging helps.\nSecurity headers A few headers that are worth adding to every response:\nadd_header X-Content-Type-Options \u0026#34;nosniff\u0026#34; always; add_header X-Frame-Options \u0026#34;SAMEORIGIN\u0026#34; always; add_header Referrer-Policy \u0026#34;strict-origin-when-cross-origin\u0026#34; always; These are low-effort, high-value. They prevent content type sniffing, clickjacking, and excessive referrer information leakage.\nDisabling server tokens By default Nginx includes its version number in error pages and the Server response header. Disabling this reveals less information:\nserver_tokens off; This goes in the http block of your main Nginx configuration.\nPutting it together Here is a condensed version of my Nginx configuration for reference:\nuser www-data; worker_processes auto; pid /run/nginx.pid; events { worker_connections 512; } http { include /etc/nginx/mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; server_tokens off; keepalive_timeout 65; gzip on; gzip_vary on; gzip_comp_level 4; gzip_min_length 256; gzip_types text/plain text/css text/javascript application/javascript application/json application/xml image/svg+xml; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } The site-specific server block lives in /etc/nginx/sites-available/ and is symlinked to sites-enabled/.\nMonitoring I keep an eye on Nginx with a few simple commands:\n# Check active connections curl -s 
http://127.0.0.1/nginx_status # Watch the access log tail -f /var/log/nginx/access.log # Test configuration before reloading nginx -t \u0026amp;\u0026amp; systemctl reload nginx The nginx_status endpoint requires the stub_status module, which is included in most Nginx packages. Add it to a server block restricted to localhost:\nlocation /nginx_status { stub_status; allow 127.0.0.1; deny all; } Resource usage On my server, Nginx uses about 3 MB of RAM for the master process and 5 MB per worker. With one worker, that is roughly 8 MB total. CPU usage for serving static files is effectively zero under normal load.\nFor a small VPS, Nginx is hard to beat. It does its job quietly and stays out of the way. The Nginx documentation is thorough if you need to dig deeper into any of these settings.\n","permalink":"https://packetlog.org/posts/nginx-small-vps/","summary":"\u003cp\u003eI have been running Nginx on a 1 vCPU / 2 GB RAM VPS for a while now. These are my notes on configuration choices that make sense at this scale.\u003c/p\u003e\n\u003ch2 id=\"the-defaults-are-mostly-fine\"\u003eThe defaults are mostly fine\u003c/h2\u003e\n\u003cp\u003eNginx is efficient out of the box. For a small site serving static files, you can run the default configuration and it will handle far more traffic than your server will ever see. But there are a few things worth adjusting.\u003c/p\u003e","title":"Notes on running Nginx on a small VPS"},{"content":"When I first set up HTTPS on my server, I realized I did not fully understand what was happening behind the scenes. I knew I needed a certificate, and I knew Let\u0026rsquo;s Encrypt was free, but the details were fuzzy. So I dug into it.\nWhat TLS actually does TLS (Transport Layer Security) provides three things for a connection between a client and a server:\nEncryption. The data in transit cannot be read by anyone observing the network. Authentication. The client can verify it is talking to the intended server, not an impostor. 
Integrity. The data cannot be modified in transit without detection. When your browser connects to a site over HTTPS, a TLS handshake happens before any HTTP data is exchanged. During this handshake, the server presents its certificate.\nHow certificates work A TLS certificate is a file that binds a public key to a domain name. It is signed by a Certificate Authority (CA), which is an organization that browsers trust.\nThe chain of trust looks like this:\nYour server has a certificate for packetlog.org, signed by Let\u0026rsquo;s Encrypt. Let\u0026rsquo;s Encrypt\u0026rsquo;s intermediate certificate is signed by ISRG Root X1. ISRG Root X1 is in your browser\u0026rsquo;s trust store. When a browser connects, it walks this chain from your certificate up to a root it trusts. If the chain is valid and the domain matches, the connection proceeds.\nThe certificate contains:\nThe domain name (or names) it is valid for The public key The validity period (usually 90 days for Let\u0026rsquo;s Encrypt) The issuer\u0026rsquo;s signature Let\u0026rsquo;s Encrypt and ACME Let\u0026rsquo;s Encrypt is a free, automated CA. It uses the ACME protocol (Automatic Certificate Management Environment) to verify that you control a domain before issuing a certificate.\nThe most common verification method is the HTTP-01 challenge:\nYou request a certificate for packetlog.org. Let\u0026rsquo;s Encrypt gives you a token. You place that token at http://packetlog.org/.well-known/acme-challenge/\u0026lt;token\u0026gt;. Let\u0026rsquo;s Encrypt fetches that URL. If it gets the expected response, it issues the certificate. There is also the DNS-01 challenge, which requires creating a TXT record. This is useful for wildcard certificates or situations where port 80 is not available.\nInstalling Certbot Certbot is the standard client for Let\u0026rsquo;s Encrypt. 
On Debian/Ubuntu:\napt install certbot python3-certbot-nginx The python3-certbot-nginx plugin allows Certbot to automatically configure Nginx.\nGetting your first certificate Before running Certbot, make sure:\nYour domain\u0026rsquo;s DNS A record points to your server\u0026rsquo;s IP. Nginx is running and serving your domain on port 80. Port 80 is open in your firewall. A basic Nginx config to start with:\nserver { listen 80; server_name packetlog.org; root /var/www/packetlog.org; } Then run Certbot:\ncertbot --nginx -d packetlog.org Certbot will:\nVerify domain ownership via the HTTP-01 challenge. Obtain the certificate. Modify your Nginx configuration to enable HTTPS. Set up a redirect from HTTP to HTTPS. After completion, your Nginx config will have new blocks for port 443 with the certificate paths filled in.\nWhat Certbot adds to Nginx After running Certbot, the Nginx configuration gains something like this:\nserver { listen 443 ssl; server_name packetlog.org; root /var/www/packetlog.org; ssl_certificate /etc/letsencrypt/live/packetlog.org/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/packetlog.org/privkey.pem; include /etc/letsencrypt/options-ssl-nginx.conf; ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; } server { listen 80; server_name packetlog.org; return 301 https://$server_name$request_uri; } The fullchain.pem file contains your certificate plus the intermediate certificate. The privkey.pem is your private key.\nAutomatic renewal Let\u0026rsquo;s Encrypt certificates expire after 90 days. Certbot installs a systemd timer (or cron job, depending on your system) that attempts renewal twice a day:\nsystemctl list-timers | grep certbot You can test the renewal process without actually renewing:\ncertbot renew --dry-run If this succeeds, your renewal is properly configured and will happen automatically.\nChecking your certificate After setup, verify that everything is working. 
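On the server itself, openssl can answer the most useful question directly: does the certificate expire soon? The -checkend flag exits non-zero if the certificate lapses within the given number of seconds (the path below is Certbot's default live directory):

```shell
# 2592000 seconds = 30 days.
cert=/etc/letsencrypt/live/packetlog.org/fullchain.pem
if openssl x509 -in $cert -noout -checkend 2592000; then
  echo certificate is good for at least another 30 days
else
  echo certificate expires within 30 days -- check renewal
fi
```

This is the same kind of check a monitoring tool performs, and it is easy to drop into a cron job.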
From any machine:\nopenssl s_client -connect packetlog.org:443 -servername packetlog.org \u0026lt; /dev/null 2\u0026gt;/dev/null | openssl x509 -noout -dates This shows the validity dates of the certificate. You can also check the full chain:\nopenssl s_client -connect packetlog.org:443 -servername packetlog.org -showcerts \u0026lt; /dev/null Or use an online tool like the SSL Labs test for a thorough analysis.\nCommon issues A few things I ran into or have seen others encounter:\nPort 80 blocked. The HTTP-01 challenge requires port 80 to be reachable. If your firewall blocks it, Certbot will fail with a connection timeout. Make sure UFW or iptables allows traffic on port 80.\nDNS not propagated. If you just set up your DNS record, it may take some time to propagate. Certbot will fail if the domain does not resolve to your server. Use dig to verify:\ndig +short packetlog.org Wrong Nginx server block. Certbot needs to find a server_name directive matching your domain. If you have multiple server blocks, make sure the right one is active.\nRate limits. Let\u0026rsquo;s Encrypt has rate limits: 50 certificates per registered domain per week. For normal use this is not an issue, but if you are testing repeatedly, you might hit it. Use the staging environment for testing:\ncertbot --nginx -d packetlog.org --staging Staging certificates are not trusted by browsers, but the process is identical and the rate limits are much higher.\nSecurity considerations A few things worth doing after the basic setup:\nRedirect all HTTP to HTTPS. Certbot does this by default, but verify it. There should be no way to access your site over plain HTTP.\nCheck your TLS configuration. The defaults from Certbot\u0026rsquo;s options-ssl-nginx.conf are generally good. They disable old protocols (TLS 1.0, 1.1) and weak ciphers.\nSet up HSTS if you are committed to HTTPS. 
This tells browsers to always use HTTPS for your domain:\nadd_header Strict-Transport-Security \u0026#34;max-age=63072000; includeSubDomains\u0026#34; always; Start with a shorter max-age while testing, then increase it once you are confident everything works.\nFurther reading Let\u0026rsquo;s Encrypt documentation Certbot documentation Mozilla SSL Configuration Generator How TLS works (Cloudflare) ","permalink":"https://packetlog.org/posts/tls-certificates-lets-encrypt/","summary":"\u003cp\u003eWhen I first set up HTTPS on my server, I realized I did not fully understand what was happening behind the scenes. I knew I needed a certificate, and I knew Let\u0026rsquo;s Encrypt was free, but the details were fuzzy. So I dug into it.\u003c/p\u003e\n\u003ch2 id=\"what-tls-actually-does\"\u003eWhat TLS actually does\u003c/h2\u003e\n\u003cp\u003eTLS (Transport Layer Security) provides three things for a connection between a client and a server:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eEncryption.\u003c/strong\u003e The data in transit cannot be read by anyone observing the network.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eAuthentication.\u003c/strong\u003e The client can verify it is talking to the intended server, not an impostor.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eIntegrity.\u003c/strong\u003e The data cannot be modified in transit without detection.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eWhen your browser connects to a site over HTTPS, a TLS handshake happens before any HTTP data is exchanged. During this handshake, the server presents its certificate.\u003c/p\u003e","title":"Understanding TLS certificates and Let's Encrypt"},{"content":"When I decided to rent a VPS for personal projects, I spent some time thinking about location. I ended up choosing a data center in Amsterdam. 
Here is how I arrived at that decision and what the setup looks like.\nWhy Amsterdam A few factors made Amsterdam a good fit.\nNetwork connectivity. Amsterdam is one of the most interconnected cities in Europe. AMS-IX is one of the largest internet exchange points in the world. In practice this means good peering, low latency to most of Western Europe, and solid routing to the rest of the world.\nInfrastructure maturity. The Netherlands has a long history of hosting. The data center ecosystem is well-established, the power grid is reliable, and the legal framework around data hosting is clear and predictable.\nGeographic convenience. For my use case, having the server in Western Europe made sense. Latency to where I usually connect from is consistently under 20 ms.\nPicking a provider I looked at several providers and compared them on a few criteria: price, network quality, support responsiveness, and whether they offer unmanaged plans. I prefer unmanaged because I want full control over the OS and do not need a control panel.\nThe specifics of which provider I chose matter less than the criteria. What I looked for:\nKVM virtualization (not OpenVZ) for full kernel access At least 1 GB of RAM SSD storage Unmetered bandwidth or a generous monthly allowance IPv4 and IPv6 support A clean IP reputation I ended up with a small plan: 1 vCPU, 2 GB RAM, 40 GB SSD, running Debian 12. The monthly cost is in the range of 5 to 10 euros.\nInitial setup After provisioning, the first SSH connection:\nssh root@203.0.113.10 The first thing I do on a fresh server is update everything:\napt update \u0026amp;\u0026amp; apt upgrade -y Then create a non-root user:\nadduser deploy usermod -aG sudo deploy Copy the SSH key to the new user:\nmkdir -p /home/deploy/.ssh cp /root/.ssh/authorized_keys /home/deploy/.ssh/ chown -R deploy:deploy /home/deploy/.ssh chmod 700 /home/deploy/.ssh chmod 600 /home/deploy/.ssh/authorized_keys At this point I switch to the new user and disable root login. 
But that is a topic for a separate post about security hardening.\nChecking the network A few quick checks I run on a new VPS to verify things are working as expected:\n# Check the public IP curl -4 ifconfig.me # Test DNS resolution dig +short example.com # Check available bandwidth roughly apt install -y iperf3 I also check the route to a few well-known endpoints to get a sense of how the network is peered:\ntraceroute -n 1.1.1.1 On this server, the route to Cloudflare\u0026rsquo;s DNS is three hops. That tells me the data center has good peering.\nResource baseline After the initial setup and before installing anything else, I note the resource baseline:\nfree -h df -h On a fresh Debian 12 install with 2 GB of RAM, roughly 100 MB is used after boot. The base system uses about 1.5 GB of disk space. That leaves plenty of room for services.\nWhat I run on it Right now the server handles a few things:\nThis blog (static files served by Nginx) A couple of small personal tools Automated backups to an external storage endpoint Nothing resource-intensive. The server sits at about 5% CPU and 300 MB of RAM usage on an average day. For a small personal server, that is more than enough headroom.\nWas it worth it Compared to shared hosting or a managed platform, a VPS gives you full control at the cost of doing your own maintenance. For me that is a good trade. I learn more about how things work, I have root access when I need it, and the monthly cost is predictable.\nAmsterdam specifically has been a good choice. The network quality is excellent, and I have not had any downtime issues in the time I have been running this server.\n","permalink":"https://packetlog.org/posts/vps-setup-amsterdam/","summary":"\u003cp\u003eWhen I decided to rent a VPS for personal projects, I spent some time thinking about location. I ended up choosing a data center in Amsterdam. 
Here is how I arrived at that decision and what the setup looks like.\u003c/p\u003e\n\u003ch2 id=\"why-amsterdam\"\u003eWhy Amsterdam\u003c/h2\u003e\n\u003cp\u003eA few factors made Amsterdam a good fit.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eNetwork connectivity.\u003c/strong\u003e Amsterdam is one of the most interconnected cities in Europe. AMS-IX is one of the largest internet exchange points in the world. In practice this means good peering, low latency to most of Western Europe, and solid routing to the rest of the world.\u003c/p\u003e","title":"My VPS setup in Amsterdam: why I chose it"},{"content":"I wanted a blog on my VPS that would cost almost nothing in resources. Static site generators were the obvious choice, and after looking at a few options I went with Hugo.\nWhy Hugo The main appeal is simplicity. Hugo is a single binary. There is no runtime, no dependency tree, no Node modules folder growing to 800 MB. You write Markdown, run one command, and get a folder of HTML files ready to serve.\nBuild times are measured in milliseconds. For a small site like this one, the entire build takes under 100 ms. That matters less for a blog with ten posts, but it means the tooling never gets in the way.\nChoosing a theme I picked PaperMod. It is minimal, fast, and does not pull in external resources. No Google Fonts requests, no analytics scripts, no CDN dependencies. The HTML it produces is clean and light.\nInstalling it as a Git submodule keeps things manageable:\ngit submodule add --depth=1 https://github.com/adityatelange/hugo-PaperMod.git themes/PaperMod The theme configuration lives in hugo.yaml. The defaults are reasonable, and you only need to override what you actually want to change.\nProject structure The directory layout is straightforward:\nsite/ ├── content/ │ └── posts/ ├── static/ ├── themes/ │ └── PaperMod/ └── hugo.yaml Posts go in content/posts/ as Markdown files with YAML front matter. Static assets go in static/. 
That is the whole structure.\nWriting a post Each post is a Markdown file with a small header:\n--- title: \u0026#34;Your post title\u0026#34; date: 2025-09-07 tags: [\u0026#34;example\u0026#34;] draft: false --- The rest is standard Markdown. Hugo supports code blocks with syntax highlighting out of the box, which is useful for a technical blog. You can also add a description field for SEO and a tags list to organize posts by topic.\nLocal development Hugo has a built-in development server with live reload:\nhugo server -D The -D flag includes draft posts. The server watches for file changes and rebuilds automatically. It runs at http://localhost:1313/ by default. Every time you save a Markdown file, the browser refreshes within milliseconds.\nThis tight feedback loop is one of the things that makes Hugo pleasant to use. You write, save, and immediately see the result.\nBuilding and deploying Building the site produces a public/ directory with plain HTML, CSS, and a small amount of JavaScript from the theme:\nhugo build For deployment, I copy the public/ folder to the web root on the server. A simple rsync does the job:\nrsync -avz --delete public/ user@yourserver:/var/www/packetlog.org/ You could automate this with a Git hook or a small shell script, but for a site that updates a few times a month, running it manually is fine.\nServing with Nginx On the VPS side, Nginx serves the static files. The configuration is minimal:\nserver { listen 80; server_name packetlog.org; root /var/www/packetlog.org; index index.html; location / { try_files $uri $uri/ =404; } } Static files are cheap to serve. Nginx handles this with barely any memory or CPU usage. On a small VPS with 1 GB of RAM, the overhead is negligible.\nWhat I skipped I deliberately left out a few things:\nComments. I do not need them. If someone wants to respond, they can write their own post or send an email. Analytics. I do not want to track visitors or load third-party scripts. Custom fonts. 
The system font stack works fine and loads instantly. The result is a site that loads fast, costs nothing extra to run, and is easy to maintain. Hugo\u0026rsquo;s documentation covers everything else you might need.\nThoughts so far This setup took about an hour from start to finish. Most of that time was spent reading the PaperMod documentation and tweaking the hugo.yaml configuration. The actual deployment was a few minutes.\nFor a personal blog on a VPS, this is exactly the level of complexity I wanted. No database, no server-side rendering, no build pipelines. Just Markdown files and a static site generator.\n","permalink":"https://packetlog.org/posts/hugo-site-for-vps/","summary":"\u003cp\u003eI wanted a blog on my VPS that would cost almost nothing in resources. Static site generators were the obvious choice, and after looking at a few options I went with \u003ca href=\"https://gohugo.io/\"\u003eHugo\u003c/a\u003e.\u003c/p\u003e\n\u003ch2 id=\"why-hugo\"\u003eWhy Hugo\u003c/h2\u003e\n\u003cp\u003eThe main appeal is simplicity. Hugo is a single binary. There is no runtime, no dependency tree, no Node modules folder growing to 800 MB. You write Markdown, run one command, and get a folder of HTML files ready to serve.\u003c/p\u003e","title":"A minimal Hugo site for your VPS"},{"content":"I work with infrastructure and networking, mostly Linux servers, containers, and anything that involves moving packets from point A to point B. By day I deal with distributed systems; in my spare time I run a small VPS in Amsterdam where I experiment with self-hosted services and test configurations I would not risk in production.\nThis site is a collection of notes from those experiments. If something took me more than an hour to figure out, I write it down here so I do not have to figure it out again. Hopefully some of it is useful to others as well.\nYou can check the server performance at speed.packetlog.org. 
The source for most of my projects is on GitHub.\nIf you want to reach me, send an email to martin@packetlog.org.\n","permalink":"https://packetlog.org/about/","summary":"About this site and its author","title":"About"}]