Manage Eclipse P2 Proxy
The Eclipse P2 proxy is an internal nginx reverse proxy that caches Eclipse P2 content to improve the stability of Jenkins builds.
At a high level:
- Service name: ep-p2-proxy-service
- Service type: ClusterIP (internal only)
- Cache storage: persistent volume mounted at /cache (nginx cache path: /cache/nginx)
- Upstream sources: download.eclipse.org and archive.eclipse.org
- Intercepts and proxies redirects to archive.eclipse.org
- Default time-to-live (TTL): 36500 days; released Eclipse files are never updated.
For feature overview and enablement guidance, see Eclipse P2 Caching Proxy.
Confirm the Proxy Is in Use
Use build logs and generated Maven settings to confirm repository traffic is routed through the proxy.
Expected indicators:
- Jenkins logs include Loading repository ... from mirror ... http://ep-p2-proxy-service/...
- Build-agent settings.xml contains Eclipse P2 proxy URLs that use http://ep-p2-proxy-service
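As a quick check, you can scan a generated settings.xml for proxy mirror URLs. This is a minimal sketch: check_proxy_mirrors is a hypothetical helper, and the default settings.xml path is an assumption about the build-agent layout.

```shell
#!/bin/sh
# Print mirror URLs in a Maven settings.xml that route through the P2
# proxy. check_proxy_mirrors is a hypothetical helper; the settings.xml
# path in the usage example is an assumption about the agent layout.
check_proxy_mirrors() {
  grep -o 'http://ep-p2-proxy-service[^<" ]*' "$1" \
    || echo "no proxy mirror URLs found in $1"
}
# Usage on an agent:
#   check_proxy_mirrors "$HOME/.m2/settings.xml"
```

If the helper prints nothing but the fallback message, the build is bypassing the proxy and fetching directly from Eclipse.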
Confirm Cache Hits and Misses
The nginx access log includes a cache= status for each request:
- cache=HIT: served from cache, no upstream fetch
- cache=MISS: not in cache, fetched from upstream
- cache=EXPIRED: stale cache entry was refreshed from upstream
Example command:
POD=$(kubectl get pod -l app=ep-p2-proxy -o jsonpath='{.items[0].metadata.name}')
kubectl logs -f "$POD"
Look for lines similar to:
10.100.167.47 [09/Mar/2026:15:05:12 +0000] "GET /rt/rap/4.4/compositeContent.jar HTTP/1.1" 200 435 cache=HIT upstream_time=-
When cache=HIT and upstream_time=-, the request was served from cache without an external fetch.
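To gauge the overall hit ratio rather than reading individual lines, you can tally the cache= statuses in the access log. A sketch; summarize_cache is a hypothetical helper, not part of the deployment.

```shell
#!/bin/sh
# Count cache=HIT / cache=MISS / cache=EXPIRED occurrences in nginx
# access-log lines read from stdin, most frequent first.
# summarize_cache is a hypothetical helper name.
summarize_cache() {
  grep -o 'cache=[A-Z]*' | sort | uniq -c | sort -rn
}
# Usage:
#   kubectl logs "$POD" | summarize_cache
```

A warm cache should show cache=HIT dominating; a high cache=MISS count right after a clear is expected and should taper off.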
Inspect Cache Size and File Count
Use the following commands to inspect cache growth on the proxy volume:
POD=$(kubectl get pod -l app=ep-p2-proxy -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- sh -c "find /cache/nginx -type f | wc -l; du -sh /cache/nginx/"
This helps you verify:
- Cache warm-up progress
- Relative cache stability across repeated builds
- Storage usage trends over time
Clear the Cache
Clear the nginx cache when build failures point to a corrupt or stale cached artifact, or after Eclipse publishes updated content such as a plugin update or security fix. Clearing forces builds to fetch fresh content instead of serving the older cached copy.
Clearing the cache deletes all cached files under /cache/nginx inside the pod. The next build that requests each artifact fetches it from download.eclipse.org and repopulates the cache.
POD=$(kubectl get pod -l app=ep-p2-proxy -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- sh -c "rm -rf /cache/nginx/*"
kubectl rollout restart deployment/ep-p2-proxy-deployment
kubectl rollout status deployment/ep-p2-proxy-deployment
The restart ensures that Nginx rebuilds its in-memory cache metadata after the files are removed.
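Before restarting, you may want to confirm that the clear actually emptied the directory. A sketch; report_clear is a hypothetical helper fed by the same find | wc -l command shown in the inspection section above.

```shell
#!/bin/sh
# Report whether the cache directory is empty, given a file count on
# stdin. report_clear is a hypothetical helper name.
report_clear() {
  read -r count
  if [ "$count" -eq 0 ]; then
    echo "cache cleared"
  else
    echo "cache still holds $count files"
  fi
}
# Usage:
#   kubectl exec "$POD" -- sh -c "find /cache/nginx -type f | wc -l" | report_clear
```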
Recognize and Recover from Volume Capacity Issues
The Nginx cache is configured with max_size=2g, which causes Nginx to evict the least-recently-used entries when the cache directory approaches that limit. However, the persistent volume claim ep-p2-proxy-pvc also holds temporary files and Nginx logs, so total volume usage can exceed 2 GiB. The persistent volume claim defaults to 3 GiB.
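For reference, the eviction behavior described above comes from nginx's proxy_cache_path directive. The fragment below is a representative sketch: only the cache path, max_size, and 36500-day TTL match this document; the keys_zone name, levels, and other parameters are assumptions.

```
# Hypothetical nginx fragment; path, max_size, and TTL per this
# document, keys_zone name and levels are assumptions.
proxy_cache_path /cache/nginx levels=1:2 keys_zone=p2cache:10m max_size=2g;
proxy_cache_valid 200 36500d;
```

When the cached data under /cache/nginx approaches max_size, nginx's cache manager removes the least recently used entries, which is why total volume usage (cache plus logs and temporary files) matters more than cache size alone.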
Signs that the volume is full or nearly full:
- Build failures when fetching content not already in the cache, even though the proxy pod is running and healthy
- Nginx error logs showing write failures (visible via kubectl logs)
- Pod restart loops (liveness probe failures caused by nginx being unable to write)
Check current volume usage:
POD=$(kubectl get pod -l app=ep-p2-proxy -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- df -h /cache
kubectl exec "$POD" -- du -sh /cache/nginx/ /var/log/nginx/
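To turn the df output into a quick pass/fail check, you can parse the use percentage. A sketch: check_usage and the default 80% threshold are illustrative choices, not part of the deployment; note it expects POSIX df -P output rather than df -h.

```shell
#!/bin/sh
# Warn when /cache usage crosses a threshold. Expects POSIX "df -P"
# output on stdin; check_usage and the default 80% threshold are
# illustrative assumptions, not part of the actual deployment.
check_usage() {
  threshold=${1:-80}
  pct=$(awk 'NR==2 { gsub("%", "", $5); print $5 }')
  if [ "$pct" -ge "$threshold" ]; then
    echo "WARNING: /cache at ${pct}% (threshold ${threshold}%)"
  else
    echo "OK: /cache at ${pct}%"
  fi
}
# Usage:
#   kubectl exec "$POD" -- df -P /cache | check_usage
```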
Remediation options:
- Clear the cache: Frees space immediately. See Clear the Cache above. The cache repopulates on the next build.
- Increase the persistent volume claim size: Set a larger value for TF_VAR_p2_proxy_pvc_size in docker-compose.override.yml and re-run the bootstrap container. This resizes the underlying Amazon Elastic Block Store (EBS) volume. You may also want to increase the Nginx max_size value; see the configuration guidance in Eclipse P2 Caching Proxy.
Verify Upstream Reachability
The Nginx configuration uses proxy_cache_use_stale. This serves cached content when download.eclipse.org is unreachable. It protects builds from transient upstream outages, but it can also hide connectivity problems while requested artifacts are already cached. A build that requests an uncached artifact can then fail with a timeout.
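The stale-serving behavior corresponds to nginx's proxy_cache_use_stale directive. A representative fragment follows; only the directive itself is confirmed by this document, and the specific error conditions listed are an assumption.

```
# Hypothetical fragment: serve stale cache entries when the upstream
# errors out or times out. The exact condition list is an assumption.
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
```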
To verify that the proxy can reach the upstream from inside the pod:
POD=$(kubectl get pod -l app=ep-p2-proxy -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- sh -c "wget -q --spider https://download.eclipse.org/ && echo 'upstream reachable' || echo 'upstream unreachable'"
You can also spot upstream connectivity problems in the access log. If recent log output shows only cache=HIT entries, that can be normal for a warm cache. If you expect new or uncached artifacts to be requested, such as after clearing the cache, check the logs while builds are running. If you still see no cache=MISS entries, check whether the upstream is reachable.
Troubleshooting Tips
If proxy behavior changed after configuration updates, restart the deployment:
kubectl rollout restart deployment/ep-p2-proxy-deployment
kubectl rollout status deployment/ep-p2-proxy-deployment
Verify the proxy pod and service are healthy:
kubectl get pods -l app=ep-p2-proxy
kubectl get service ep-p2-proxy-service
For broader build architecture context, see Build Infrastructure.