Add Zot registry support and enhance reverse proxy #47
Conversation
Replaces Docker registry with Zot, updating container image and port constants, and adds Zot config file and persistent images directory. Updates DockerHubClient to mount config and images for registry. Adds nginx config regeneration with WebSocket support to reverse proxy client and command. Updates .gitignore and project file for new assets.
Pull request overview
This pull request modernizes the VDK platform's infrastructure by migrating from Docker's standard registry to Zot, a lightweight OCI registry with enhanced features. It also adds WebSocket support to the reverse proxy infrastructure and improves cluster configuration management workflows.
Changes:
- Registry migration from `registry:2` to Zot (`ghcr.io/project-zot/zot-linux-amd64:v2.1.0`) with persistent storage via local volume mounts and a configuration file
- Nginx proxy updated to version 1.27 with WebSocket upgrade support for cluster connections
- New `RegenerateConfigs()` method enables applying configuration changes to all existing VDK clusters, integrated into the `UpdateClustersCommand` workflow
Reviewed changes
Copilot reviewed 9 out of 10 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| cli/src/Vdk/Constants/Containers.cs | Updates registry image to Zot and changes registry host port from 50000 to 5000; updates Nginx proxy to version 1.27 |
| cli/src/Vdk/ConfigMounts/zot-config.json | Adds Zot registry configuration with persistent storage, garbage collection, UI, and search extensions |
| cli/src/Vdk/Services/DockerHubClient.cs | Implements Zot registry creation with config file and images directory volume mounts for persistence |
| cli/src/Vdk/Vdk.csproj | Adds zot-config.json to build output and configures images directory for copying |
| cli/src/Vdk/Services/ReverseProxyClient.cs | Adds WebSocket support headers to cluster proxy configurations and implements RegenerateConfigs method |
| cli/src/Vdk/Services/IReverseProxyClient.cs | Adds RegenerateConfigs method to interface with documentation |
| cli/src/Vdk/Commands/UpdateClustersCommand.cs | Integrates reverse proxy client to regenerate nginx configs after certificate updates |
| cli/src/Vdk/vega.conf | Example/generated nginx configuration file showing hub and IDP server blocks with WebSocket support |
| .gitignore | Adds exclusions for database files and images directory |
| .claude/settings.local.json | Adds local Claude settings with Docker operation permissions |
```diff
 public const string RegistryImage = "ghcr.io/project-zot/zot-linux-amd64:v2.1.0";
 public const int RegistryContainerPort = 5000;
-public const int RegistryHostPort = 50000;
+public const int RegistryHostPort = 5000;
```
Breaking change: the registry host port has been changed from 50000 to 5000. Existing installations that have the registry running on port 50000 will need to recreate their registry container after this update. Consider documenting this breaking change in the release notes or a migration guide, or implementing a migration strategy to handle existing registries.
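For existing installations, the fix is to remove the old registry container and let it be recreated on the new image and port. A minimal sketch; the container name here is hypothetical, so check `docker ps` for the name the VDK CLI actually uses:

```bash
# Hypothetical container name -- inspect `docker ps` to find the registry
# container created by the VDK CLI in your environment.
docker stop vdk-registry
docker rm vdk-registry

# Then recreate the registry via the VDK CLI so it comes back on the new
# Zot image with the 5000 host port mapping.
```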
```csharp
// Mount the config file and images directory for persistence
var configFile = new FileInfo(Path.Combine("ConfigMounts", "zot-config.json"));
var imagesDir = new DirectoryInfo("images");
```
Missing validation for config file existence. If the zot-config.json file doesn't exist at runtime, the Docker container creation will fail. Add a check to verify the config file exists before attempting to mount it, similar to how the images directory is validated.
```csharp
// Ensure config file exists
if (!configFile.Exists)
{
    console.WriteWarning($"Config file '{configFile.FullName}' not found. Skipping Vega VDK Registry creation.");
    return;
}
```
```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name hub.dev-k8s.cloud;
    ssl_certificate /etc/certs/fullchain.pem;
    ssl_certificate_key /etc/certs/privkey.pem;
    location / {
        client_max_body_size 0;
        proxy_pass http://host.docker.internal:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

##### START idp
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name idp.dev-k8s.cloud;
    ssl_certificate /etc/certs/fullchain.pem;
    ssl_certificate_key /etc/certs/privkey.pem;
    location / {
        proxy_pass https://host.docker.internal:40237;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Support WebSocket protocol upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
The vega.conf file appears to be a generated runtime artifact (created by ReverseProxyClient.InitConfFile) but is being committed to the repository. This file should be excluded from version control by adding it to .gitignore to prevent conflicts and avoid committing environment-specific configurations.
Suggested change:

```nginx
# vega.conf - example reverse proxy configuration
#
# This file is intentionally kept as a *template* configuration and should
# not contain environment-specific or runtime-generated values.
#
# In many deployments, a component such as ReverseProxyClient.InitConfFile
# is responsible for generating the actual configuration used at runtime.
# To avoid merge conflicts and leaking environment details, copy this file
# and customize it outside of version control (for example, in a deployment
# repository or via your configuration management system).
#
# To use this template:
#   1. Copy it to an appropriate location for your environment.
#   2. Replace placeholder values (example.com, backend:port, cert paths).
#   3. Enable or adjust the server blocks as needed.
#
# NOTE: All configuration below is commented out and uses placeholder
# values. Adjust and uncomment in your own, non-committed copy.

# server {
#     listen 443 ssl;
#     listen [::]:443 ssl;
#     http2 on;
#     server_name app.example.com;
#
#     # Paths to your TLS certificate and key for this host.
#     # Replace with the correct locations for your environment.
#     ssl_certificate /path/to/your/fullchain.pem;
#     ssl_certificate_key /path/to/your/privkey.pem;
#
#     location / {
#         # Adjust maximum request body size to your needs.
#         client_max_body_size 0;
#
#         # Upstream application endpoint.
#         # Replace "backend-app" and port with your actual backend.
#         proxy_pass http://backend-app:5000;
#
#         proxy_set_header Host $host;
#         proxy_set_header X-Real-IP $remote_addr;
#         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#         proxy_set_header X-Forwarded-Proto $scheme;
#     }
# }

##### START idp (example Identity Provider endpoint)
# server {
#     listen 443 ssl;
#     listen [::]:443 ssl;
#     http2 on;
#     server_name idp.example.com;
#
#     # Paths to your TLS certificate and key for this host.
#     ssl_certificate /path/to/your/fullchain.pem;
#     ssl_certificate_key /path/to/your/privkey.pem;
#
#     location / {
#         # Upstream IDP/authorization server endpoint.
#         proxy_pass https://identity-provider:443;
#
#         proxy_set_header Host $host;
#         proxy_set_header X-Real-IP $remote_addr;
#         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#         proxy_set_header X-Forwarded-Proto $scheme;
#
#         # Support WebSocket protocol upgrades, if required by the IDP.
#         proxy_http_version 1.1;
#         proxy_set_header Upgrade $http_upgrade;
#         proxy_set_header Connection "upgrade";
#     }
# }
```
```xml
  </ItemGroup>

  <ItemGroup>
    <Content Include="images\**" CopyToOutputDirectory="PreserveNewest" LinkBase="images" />
```
The Content directive with wildcard pattern for the images directory is unusual and may cause unexpected behavior during builds. The images directory is meant to be user data (excluded in .gitignore), so it's unclear why it should be copied to the output. Consider whether this is necessary, as it could copy large amounts of registry data during build operations. If persistence is needed, the application should reference the directory by relative path at runtime rather than copying it.
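A minimal sketch of the runtime-only alternative the comment suggests, resolving the directory when the application starts instead of copying it at build time; using `Directory.GetCurrentDirectory()` as the base path is an assumption about where the CLI expects its data:

```csharp
using System.IO;

// Resolve the registry data directory relative to the working directory at
// runtime and create it on demand, instead of copying its contents into the
// build output via a <Content Include="images\**" /> item.
var imagesDir = new DirectoryInfo(Path.Combine(Directory.GetCurrentDirectory(), "images"));
if (!imagesDir.Exists)
{
    imagesDir.Create();
}
```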
| "allow": [ | ||
| "Bash(docker stop:*)", | ||
| "Bash(docker rm:*)" | ||
| ] |
The .claude/settings.local.json file appears to contain local development configuration and should be excluded from version control by adding it to .gitignore. Local settings files are typically environment-specific and should not be shared across different development environments.
| "allow": [ | |
| "Bash(docker stop:*)", | |
| "Bash(docker rm:*)" | |
| ] | |
| "allow": [] |
```csharp
public void RegenerateConfigs()
{
    var conf = new FileInfo(NginxConf);
    if (!conf.Exists)
    {
        _console.WriteWarning("Nginx configuration file does not exist. Run 'vega create proxy' first.");
        return;
    }

    _console.WriteLine("Regenerating nginx configuration for all VDK clusters...");

    // Reinitialize the conf file with the hub server block
    conf.Delete();
    InitConfFile(conf);

    // Iterate through all VDK clusters and add their server blocks
    var clusters = _kind.ListClusters();
    var vdkClusters = clusters.Where(c => c.isVdk && c.master != null && c.master.HttpsHostPort.HasValue).ToList();

    if (vdkClusters.Count == 0)
    {
        _console.WriteLine("No VDK clusters found to configure.");
        ReloadConfigs();
        return;
    }

    foreach (var cluster in vdkClusters)
    {
        _console.WriteLine($"  Adding cluster '{cluster.name}' to nginx configuration");
        PatchNginxConfig(cluster.name, cluster.master!.HttpsHostPort!.Value);
    }

    _console.WriteLine($"Regenerated configuration for {vdkClusters.Count} cluster(s).");
    ReloadConfigs();
    _console.WriteLine("Nginx configuration reloaded.");
}
```
The new RegenerateConfigs method lacks test coverage. The project has comprehensive tests for ReverseProxyClient (ReverseProxyClientTests.cs), and this new public method should have tests to verify it correctly regenerates configurations for all clusters, handles the case when no VDK clusters exist, and properly handles errors during regeneration.
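As a purely illustrative sketch of one such test, assuming xUnit and Moq; the `IConsole` and `IKindClient` interfaces and the two-argument `ReverseProxyClient` constructor are guesses, not the project's actual API, so adjust to match the existing ReverseProxyClientTests.cs setup:

```csharp
using Moq;
using Xunit;

public class RegenerateConfigsTests
{
    [Fact]
    public void RegenerateConfigs_Warns_WhenNginxConfIsMissing()
    {
        // Hypothetical collaborators -- the real ReverseProxyClient may take
        // different dependencies; mirror the arrangement used in the
        // existing ReverseProxyClient tests.
        var console = new Mock<IConsole>();
        var kind = new Mock<IKindClient>();
        var client = new ReverseProxyClient(console.Object, kind.Object);

        // Assumes the nginx conf file is absent in the test environment
        // (the real tests may stub the path or use a temp directory).
        // The method should warn and return without enumerating clusters.
        client.RegenerateConfigs();

        console.Verify(
            c => c.WriteWarning(It.Is<string>(m => m.Contains("vega create proxy"))),
            Times.Once);
        kind.Verify(k => k.ListClusters(), Times.Never);
    }
}
```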
This pull request introduces several infrastructure and configuration improvements for the VDK platform, focusing on updating the registry implementation, enhancing reverse proxy support, and improving cluster configuration management. The most significant changes include switching the registry to use Zot with persistent storage, updating Nginx reverse proxy configurations to support WebSocket upgrades, and adding mechanisms to regenerate proxy configs for all clusters. These changes aim to improve reliability, maintainability, and feature support for VDK clusters.
Registry and Storage Updates

- The registry is migrated from `registry:2` to Zot (`ghcr.io/project-zot/zot-linux-amd64:v2.1.0`), and the registry port mapping is changed to use port 5000 for both container and host. Persistent storage is configured via a new `zot-config.json` file, and registry data is mounted to a local `images` directory for durability. ([[1]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-10766307bf50935bff9102c340a577a8264619e2daf28f12caeb31552f7b7a95L6-R10), [[2]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-f85289aeac82c81bfbc724e38a8fc85b334d41fbb1853eed00bc6248f0650ed1R1-R27), [[3]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-3726b6665f5ed1b85bfd6508820f0251a9b63b6fe922b17c9e136636b39d8e8cL11-R34), [[4]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-7a605576fe39e3faf502fbfe6918199774b6f60dd9e0781687279da37b239f7bR43-R49))
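For orientation, a Zot configuration covering the features listed here (persistent storage, garbage collection, UI, and search) typically looks roughly like the following. This is a hedged sketch based on Zot's documented config format, not the actual zot-config.json committed in this PR, and the rootDirectory path is an assumption:

```json
{
  "distSpecVersion": "1.1.0",
  "storage": {
    "rootDirectory": "/var/lib/registry",
    "gc": true
  },
  "http": {
    "address": "0.0.0.0",
    "port": "5000"
  },
  "log": { "level": "info" },
  "extensions": {
    "search": { "enable": true },
    "ui": { "enable": true }
  }
}
```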
Reverse Proxy and Nginx Enhancements

- The Nginx proxy image is updated to version 1.27 (`nginx:1.27`), and the configuration is enhanced to support WebSocket upgrades by adding the relevant headers and protocol support in the generated server blocks. ([[1]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-10766307bf50935bff9102c340a577a8264619e2daf28f12caeb31552f7b7a95L6-R10), [[2]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-9d1551a1e1d10111597a0613aa1f7ec2f1344744ae1109d317638f89241412e0R178-R182), [[3]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-a8d5ba0615ccd8881334ac02ada00b38a5cc34d536d57ee11a5fcfb72b5b5469R1-R42))
- A new `RegenerateConfigs()` method is added to the reverse proxy client interface and implementation, allowing regeneration of Nginx configs for all clusters, which is useful for applying changes like WebSocket support. ([[1]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-e08745d347c74a2102df8d7e85be56f4bb79929e13826790a0a86e036017a5eeR16-R21), [[2]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-9d1551a1e1d10111597a0613aa1f7ec2f1344744ae1109d317638f89241412e0R423-R459))
Cluster Management Improvements

- `UpdateClustersCommand` now accepts an `IReverseProxyClient` and invokes Nginx config regeneration after cluster certificate updates, ensuring proxy configs stay in sync with cluster changes. ([[1]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-507a2cb5a9de8de3cd705c5122826fbc47252c0a8920b902d50fb7b219e56c16R16-R30), [[2]](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-507a2cb5a9de8de3cd705c5122826fbc47252c0a8920b902d50fb7b219e56c16R78-R91))
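A minimal sketch of what that wiring might look like; the constructor shape and the `Execute()` call site are illustrative guesses at the behavior described above, not the PR's exact code:

```csharp
public class UpdateClustersCommand
{
    private readonly IReverseProxyClient _reverseProxy;

    // Hypothetical constructor injection -- the real command takes additional
    // dependencies for cluster and certificate management.
    public UpdateClustersCommand(IReverseProxyClient reverseProxy)
    {
        _reverseProxy = reverseProxy;
    }

    public void Execute()
    {
        // ... update cluster certificates here ...

        // After certificates change, regenerate the nginx config for all
        // VDK clusters so the proxy stays in sync.
        _reverseProxy.RegenerateConfigs();
    }
}
```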
Configuration and Permissions

- A local `.claude/settings.local.json` is added to allow specific Docker operations, supporting the updated registry and proxy workflows. ([.claude/settings.local.json R1-R8](https://github.com/ArchetypicalSoftware/VDK/pull/47/files#diff-fca16cae5b0e32edfa6b55eaa32a98ffbf4a0c7d885fb585785fc83b6ea2d9c3R1-R8))

These updates collectively modernize the registry and proxy setup, improve cluster management workflows, and lay the groundwork for more robust and featureful VDK deployments.