Description
Hello,
I have found a significant bug in the caddy-maxmind-geolocation module. When a request is made to a site that uses the module, it triggers a fatal `tlsv1 alert internal error` on the client side, terminating the connection during the TLS handshake.
Crucially, this happens without causing a panic or any fatal error message in the Caddy logs, which makes it very difficult to debug.
Context
- Caddy Version & Modules: v2.10.0
admin.api.crowdsec, caddy.listeners.layer4, caddy.logging.encoders.formatted, caddy.logging.encoders.transform, crowdsec, dns.providers.cloudflare, dynamic_dns, dynamic_dns.ip_sources.interface, dynamic_dns.ip_sources.simple_http, dynamic_dns.ip_sources.static, dynamic_dns.ip_sources.upnp, http.handlers.appsec, http.handlers.crowdsec, http.matchers.maxmind_geolocation, layer4, layer4.handlers.echo, layer4.handlers.proxy, layer4.handlers.proxy_protocol, layer4.handlers.socks5, layer4.handlers.subroute, layer4.handlers.tee, layer4.handlers.throttle, layer4.handlers.tls, layer4.matchers.clock, layer4.matchers.crowdsec, layer4.matchers.dns, layer4.matchers.http, layer4.matchers.local_ip, layer4.matchers.not, layer4.matchers.openvpn, layer4.matchers.postgres, layer4.matchers.proxy_protocol, layer4.matchers.quic, layer4.matchers.rdp, layer4.matchers.regexp, layer4.matchers.remote_ip, layer4.matchers.remote_ip_list, layer4.matchers.socks4, layer4.matchers.socks5, layer4.matchers.ssh, layer4.matchers.tls, layer4.matchers.winbox, layer4.matchers.wireguard, layer4.matchers.xmpp, layer4.proxy.selection_policies.first, layer4.proxy.selection_policies.ip_hash, layer4.proxy.selection_policies.least_conn, layer4.proxy.selection_policies.random, layer4.proxy.selection_policies.random_choose, layer4.proxy.selection_policies.round_robin, tls.handshake_match.alpn
- Operating System: Debian 12 VM with 2 GB of memory on Proxmox
The Bug
Any request to a site using the `maxmind_geolocation` matcher results in a hard TLS failure. The client receives `curl: (35) LibreSSL/3.3.6: error:1404B438:SSL routines:ST_CONNECT:tlsv1 alert internal error`. The Caddy logs remain clean, showing no errors at the time of the request.
Further investigation shows that the module's placeholders are also not working, indicating a fundamental failure to provide data to the Caddy process.
Steps to Reproduce
- Use the following minimal Caddyfile:

  ```
  test.example.com {
      tls your.email@example.com
      # This test block adds a debug header for every request
      header X-Debug-Country-Code "{geoip.country.iso_code}"
      reverse_proxy 127.0.0.1:8080
  }
  ```

- Start Caddy. The service starts successfully.

- Make a single request from an external IP address:

  ```
  curl -4 -v https://test.example.com
  ```
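For completeness, the matcher itself is attached via a named matcher block elsewhere in the site config. A sketch of the typical wiring, per the module's README as I understand it (the `db_path` and country codes here are illustrative placeholders, not my exact values):

```
@geofilter {
    maxmind_geolocation {
        db_path "/etc/caddy/GeoLite2-City.mmdb"
        allow_countries CA US
    }
}
```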
Expected Behavior
A successful HTTP 200 response with a header like X-Debug-Country-Code: CA.
Actual Behavior
- The `curl` command fails with:

  ```
  curl: (35) LibreSSL/3.3.6: error:1404B438:SSL routines:ST_CONNECT:tlsv1 alert internal error
  ```

- If the test is changed to a simple `respond` instead of a `reverse_proxy`, the connection succeeds, but the placeholder is not replaced:

  ```
  < x-debug-country-code: {geoip.country.iso_code}
  ```

- The Caddy journal shows no panic or fatal error at the time of the failure.
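For reference, the `respond` variant mentioned above looks roughly like this (a sketch; the rest of the site block matches the Caddyfile from the reproduction steps):

```
test.example.com {
    tls your.email@example.com
    header X-Debug-Country-Code "{geoip.country.iso_code}"
    respond "OK" 200
}
```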
What I've Already Ruled Out
- The database file is correct. The `mmdblookup` tool successfully queries the exact same file and returns the correct country code (CA) for my test IP:

  ```
  $ mmdblookup --file /etc/caddy/GeoLite2-City.mmdb --ip 37.120.205.17 country iso_code
  "CA" <utf8_string>
  ```

- File permissions are correct. The Caddy user owns the database file.

- This is not a caching issue. The Caddy service was fully restarted (`systemctl stop caddy && systemctl start caddy`) before every test.
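In case it helps with triage: Caddy's `debug` global option raises the log level, which should make handshake-time errors more likely to appear in the journal. A minimal sketch of the global options block that could be added at the top of the Caddyfile:

```
{
    debug
}
```

I can re-run the failing request with this enabled and attach the resulting logs if that would be useful.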
This strongly suggests a bug within the module's code that causes it to fail in a way that corrupts the TLS handshake state, without crashing the entire application.
Let me know if I'm missing something.
Thank you for your time and for your work on this module.