The Relay Proxy is a lightweight Go application that runs within your infrastructure and handles all streaming connections to the Harness platform. On startup it fetches all of your flag, target, and target group data, caches it, fetches updates as they occur, and serves it to your connected downstream SDKs.
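As a rough sketch of how a downstream SDK consumes the proxy, a server-side SDK can be pointed at it by overriding the config and events URLs it would otherwise use to reach Harness SaaS. The example below uses the Go server SDK; the proxy address, SDK key, flag name, and target are placeholder values, so check your SDK's documentation for the exact option names.

```go
package main

import (
	"log"

	harness "github.com/harness/ff-golang-server-sdk/client"
	"github.com/harness/ff-golang-server-sdk/evaluation"
)

func main() {
	// Point the SDK at the Relay Proxy instead of the Harness SaaS URLs.
	// "http://localhost:7000" and the SDK key are placeholder values.
	client, err := harness.NewCfClient(
		"your-server-sdk-key",
		harness.WithURL("http://localhost:7000"),
		harness.WithEventsURL("http://localhost:7000"),
	)
	if err != nil {
		log.Fatalf("could not initialise the SDK against the Relay Proxy: %v", err)
	}
	defer client.Close()

	// Evaluate a flag through the proxy for a placeholder target.
	target := evaluation.Target{Identifier: "example-target", Name: "Example Target"}
	enabled, err := client.BoolVariation("my-boolean-flag", &target, false)
	if err != nil {
		log.Printf("evaluation failed, using default: %v", err)
	}
	log.Printf("my-boolean-flag = %v", enabled)
}
```

If the proxy has TLS enabled (see TLS below), the same options take an `https://` address instead.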
To learn more about deploying the Relay Proxy, see Deploy the Relay Proxy.
To read more about the use cases and advantages of the Relay Proxy, see Why use the Relay Proxy?
You can also read about the use cases, architecture, and more in our blog post.
To view the available configuration options, see Configuration.
To learn how to securely connect SDKs to the Relay Proxy with HTTPS enabled, see TLS.
By default, the Relay Proxy V2 runs with a Redis cache, which supports password authentication and mTLS (mutual TLS) authentication.
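The proxy's own Redis connection is driven by its configuration options (see Configuration), but as an illustration of what password plus mTLS authentication to Redis involves, here is a minimal Go sketch using the go-redis client; the address, password, and certificate paths are assumptions. The Redis mTLS and Redis Auth examples listed below show the full proxy-side setup.

```go
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"

	"github.com/redis/go-redis/v9"
)

func main() {
	// Assumed certificate paths for illustration only.
	caCert, err := os.ReadFile("certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caCert)

	// Client certificate and key presented to Redis for mTLS.
	clientCert, err := tls.LoadX509KeyPair("certs/client.crt", "certs/client.key")
	if err != nil {
		log.Fatal(err)
	}

	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",          // assumed Redis address
		Password: "your-redis-password",     // password authentication
		TLSConfig: &tls.Config{
			RootCAs:      pool,                           // trust the Redis server's CA
			Certificates: []tls.Certificate{clientCert},  // client cert for mTLS
			MinVersion:   tls.VersionTLS12,
		},
	})

	if err := rdb.Ping(context.Background()).Err(); err != nil {
		log.Fatalf("redis ping failed: %v", err)
	}
	log.Println("connected to Redis with password + mTLS")
}
```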
Ready-to-run examples demonstrating various deployment scenarios:
- Redis mTLS - Redis with mutual TLS authentication (recommended for production)
- Redis Auth - Redis with password authentication
- HA Mode with Monitoring - High availability setup with Prometheus & Grafana
- Redis Cluster HA - Redis cluster with monitoring
- Load Balancing - Nginx load balancing across multiple proxies
- Reverse Proxy - Nginx reverse proxy setup
- TLS Reverse Proxy - HTTPS-enabled reverse proxy
For info on horizontally scaling Relay Proxies, along with a working example, see Load Balancing.
If you'd like to build and run the Relay Proxy on Windows, see Windows.
For info on the external Harness SaaS endpoints the Relay Proxy communicates with, see Outbound Endpoints. For info on the Relay Proxy endpoints your SDKs will connect to, see Inbound Endpoints.
For help debugging your Relay Proxy installation, see Debugging.
See the contribution guide.