BeyondTrust Kubernetes Authenticator
Independent Kubernetes authenticator client that runs as a sidecar/init container to handle BeyondTrust PasswordSafe authentication automatically for containerized applications.
Binary download
To access the binaries for OpenShift, see Kubernetes client.
You must have an active Docker account to access the binaries.
Features
- OAuth authentication mode
- Sidecar and init container patterns
- Automatic token refresh
- Health check endpoints
- OpenShift certification readiness
Prerequisites
- Kubernetes cluster (1.19+) or OpenShift (4.x+)
- BeyondTrust PasswordSafe API URL and OAuth credentials
- kubectl or oc CLI configured
Installation
- Create a Kubernetes Secret with your OAuth credentials:
kubectl create secret generic bt-oauth \
--from-literal=client-id=YOUR_CLIENT_ID \
--from-literal=client-secret=YOUR_CLIENT_SECRET
- Deploy the authenticator.
Deployment Modes
The authenticator supports two deployment patterns:
Init Container Mode
Authenticates once at pod startup and exits. Best for:
- Short-lived jobs
- Applications that don't need token refresh
- Minimal resource usage
Example:
initContainers:
- name: beyondtrust-auth
# Pin to an immutable digest. Find the value at
# https://hub.docker.com/r/beyondtrust/ps-authn-k8s-client/tags
image: beyondtrust/ps-authn-k8s-client:<version>@sha256:<digest>
env:
- name: BT_MODE
value: "init"
- name: BT_CLIENT_ID
valueFrom: {secretKeyRef: {name: bt-oauth, key: client-id}}
- name: BT_CLIENT_SECRET
valueFrom: {secretKeyRef: {name: bt-oauth, key: client-secret}}
- name: BT_API_URL
value: "https://passwordsafe.example.com/api/v3"
Sidecar Mode
Runs continuously with automatic token refresh. Best for:
- Long-running applications
- Production workloads
- Automatic credential rotation
Example:
containers:
- name: beyondtrust-authenticator
# Pin to an immutable digest. Find the value at
# https://hub.docker.com/r/beyondtrust/ps-authn-k8s-client/tags
image: beyondtrust/ps-authn-k8s-client:<version>@sha256:<digest>
env:
- name: BT_MODE
value: "sidecar"
- name: BT_CLIENT_ID
valueFrom: {secretKeyRef: {name: bt-oauth, key: client-id}}
# ... more configuration
Configuration
All configuration is done via environment variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| BT_CLIENT_ID | Yes | - | OAuth client ID |
| BT_CLIENT_SECRET | Yes | - | OAuth client secret |
| BT_API_URL | Yes | - | BeyondTrust API URL |
| BT_MODE | No | sidecar | Container mode (sidecar or init) |
| BT_TOKEN_PATH | No | /run/beyondtrust/access-token | Path to write the token file |
| BT_REFRESH_INTERVAL | No | 5m | Token refresh check interval |
| BT_REFRESH_BEFORE_EXPIRY | No | 1m | Refresh the token this long before expiry |
| BT_LOG_LEVEL | No | info | Log level (debug, info, warn, error) |
| BT_LOG_FORMAT | No | json | Log format (json or text) |
| BT_HEALTH_PORT | No | 8080 | Health check port |
| BT_TLS_VERIFY | No | true | Verify TLS certificates |
| BT_CA_BUNDLE | No | - | Custom CA bundle path |
Token File Format
The authenticator writes a JSON token file that your application can read. The file is written atomically (via a temporary file and rename) to prevent partial reads. File permissions are 0640 (Owner Read/Write, Group Read) and the parent directory is created with 0700 if it does not exist.
{
"access_token": "eyJhbGciOiJSUzI1NiIs...",
"token_type": "Bearer",
"expires_at": "2026-02-13T15:30:00Z",
"expires_in": 3600,
"scope": "publicapi",
"api_url": "https://passwordsafe.example.com/api/v3",
"client_id": "your-client-id",
"last_updated": "2026-02-13T14:30:00Z"
}
Reading the token in your app:
# Access token for API calls
TOKEN=$(jq -r '.access_token' /run/beyondtrust/access-token)
Version
The binary supports a -version flag to print version information and exit:
./authenticator -version
# BeyondTrust Kubernetes Authenticator v1.1.0
Health Checks
The sidecar exposes health and observability endpoints on BT_HEALTH_PORT (default 8080):
- /healthz: Liveness probe (is the process running?)
- /readyz: Readiness probe (is the token valid?)
- /metrics: Prometheus metrics in text exposition format
Configure in your pod spec:
livenessProbe:
httpGet:
path: /healthz
port: 8080
readinessProbe:
httpGet:
path: /readyz
port: 8080
Prometheus Metrics
The /metrics endpoint exposes the following Prometheus metrics:
| Metric | Type | Labels | Description |
|---|---|---|---|
| beyondtrust_auth_attempts_total | Counter | mode, status | Total authentication attempts (status: success / failure) |
| beyondtrust_auth_duration_seconds | Histogram | mode | End-to-end authentication latency including retries |
| beyondtrust_token_refreshes_total | Counter | status | Total proactive token refresh attempts |
| beyondtrust_token_expiry_seconds | Gauge | - | Seconds until the current token expires (negative = expired) |
Enable Prometheus scraping via pod annotations (already included in the deployment examples):
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/metrics"
OpenShift Deployment
The authenticator is designed to run under OpenShift's most restrictive restricted SecurityContextConstraints (SCC) policy without any cluster-admin grants.
How restricted SCC works
OpenShift's restricted SCC enforces MustRunAsRange for both runAsUser and fsGroup. The cluster automatically assigns a namespace-specific UID and GID range to each project. All containers in a pod share the same allocated UID, which means:
- Never set runAsUser in the pod or container spec; OpenShift assigns one at admission time.
- Never set fsGroup: it is also controlled by MustRunAsRange. Because both containers inherit the same UID, the app container can read the token file (it is the file owner) without needing group access.
- Always set allowPrivilegeEscalation: false and drop all Linux capabilities.
SCC compliance matrix
| Requirement | How we satisfy it |
|---|---|
| Non-root UID | runAsNonRoot: true on the pod; Dockerfile sets USER 1000; OpenShift overrides with its own range |
| No privilege escalation | allowPrivilegeEscalation: false on every container |
| No Linux capabilities | capabilities.drop: [ALL] on every container |
| Writable token volume | emptyDir (in-memory) mounted at /run/beyondtrust |
| No K8s API access | automountServiceAccountToken: false on the pod |
| SELinux compatible | emptyDir volumes receive the pod's SELinux label automatically, no :z annotation required |
SELinux and emptyDir
OpenShift with SELinux enforcing automatically labels emptyDir volumes with the pod's MCS label (s0:c1,c2 range). No seLinuxOptions or hostPath relabelling flags (:z, :Z) are needed. Using medium: Memory (tmpfs) is recommended to avoid writing the token to the node's disk and to guarantee cleanup on pod termination.
Quick start on OpenShift
# 1. Create the OAuth credentials secret
oc create secret generic bt-oauth \
--from-literal=client-id=YOUR_CLIENT_ID \
--from-literal=client-secret=YOUR_CLIENT_SECRET
# 2. Deploy — sidecar pattern (token kept fresh)
oc apply -f deploy/examples/openshift-sidecar.yaml
# 3. Or — init container pattern (token fetched once at startup)
oc apply -f deploy/examples/openshift-init.yaml
Two pattern-specific files are provided; use the one that matches your deployment:
- Sidecar pattern: continuous token refresh; includes liveness/readiness probes.
- Init container pattern: token fetched once; init container exits before the app starts.
Both files omit fsGroup and runAsUser, drop all capabilities, and use an in-memory emptyDir volume. Create the bt-oauth Secret with oc create secret before applying either manifest.
Production deployment example
This section walks through a complete production deployment of a Go application that retrieves database credentials from BeyondTrust Password Safe using a sidecar-managed token.
Architecture
Pod
├── beyondtrust-authenticator (sidecar)
│ - authenticates on startup
│ - refreshes token before expiry
│ - writes /run/beyondtrust/access-token (shared volume)
└── myapp (application container)
- reads token from shared volume
- calls Password Safe API to retrieve secrets
- Kubernetes Secret
Store the OAuth credentials as a Kubernetes Secret. Never hardcode credentials in the pod spec.
kubectl create secret generic bt-oauth \
--from-literal=client-id=YOUR_CLIENT_ID \
--from-literal=client-secret=YOUR_CLIENT_SECRET
- Deployment Manifest
Select the one that matches your environment.
| Use case | Key features |
|---|---|
| Token fetched once at pod startup | Init container exits before app starts; minimal resource footprint |
| Token kept fresh for the pod lifetime | Continuous refresh, liveness/readiness probes, Prometheus annotations |
| OpenShift restricted SCC (sidecar pattern) | No hardcoded UIDs, allowPrivilegeEscalation: false, all capabilities dropped |
| OpenShift restricted SCC (init container pattern) | No hardcoded UIDs, allowPrivilegeEscalation: false, all capabilities dropped |
All manifests include resource requests/limits, ephemeral-storage bounds, automountServiceAccountToken: false, and an emptyDir shared volume with medium: Memory.
- Go Application: Secret Retrieval Using the Library with the Sidecar Token
The key benefit of this project is credential isolation: the application container never holds BeyondTrust OAuth credentials. Use session.NewSessionFromToken from go-client-library-passwordsafe: it accepts the token written by the sidecar, signs in, and returns a ready-to-use handle for secret retrieval. No ClientID or ClientSecret is ever needed in the app container.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"os"
"time"
logging "github.com/BeyondTrust/go-client-library-passwordsafe/api/logging"
"github.com/BeyondTrust/go-client-library-passwordsafe/api/session"
"github.com/BeyondTrust/go-client-library-passwordsafe/api/utils"
backoff "github.com/cenkalti/backoff/v4"
"go.uber.org/zap"
)
const tokenPath = "/run/beyondtrust/access-token"
// tokenFile mirrors the JSON written by the authenticator sidecar.
type tokenFile struct {
AccessToken string `json:"access_token"`
ExpiresAt time.Time `json:"expires_at"`
}
// waitForToken blocks until the token file exists or the deadline is exceeded.
// When using an init container the file is guaranteed present before this
// container starts; the wait covers the pure-sidecar case where both
// containers start concurrently.
func waitForToken(deadline time.Duration) (*tokenFile, error) {
start := time.Now()
for {
data, err := os.ReadFile(tokenPath)
if err == nil {
var tf tokenFile
if err := json.Unmarshal(data, &tf); err != nil {
return nil, fmt.Errorf("parsing token file: %w", err)
}
if time.Now().After(tf.ExpiresAt) {
return nil, fmt.Errorf("token already expired at %v", tf.ExpiresAt)
}
return &tf, nil
}
if time.Since(start) > deadline {
return nil, fmt.Errorf("token file not available after %v", deadline)
}
time.Sleep(500 * time.Millisecond)
}
}
func main() {
// Wait up to 30 s for the sidecar to write a valid token.
tf, err := waitForToken(30 * time.Second)
if err != nil {
log.Fatalf("startup: %v", err)
}
apiURL := os.Getenv("BT_API_URL") // only env var the app needs
zapLogger, _ := zap.NewProduction()
logger := logging.NewZapLogger(zapLogger)
backoffDef := backoff.NewExponentialBackOff()
backoffDef.InitialInterval = 1 * time.Second
backoffDef.MaxElapsedTime = 2 * time.Minute
httpClient, err := utils.GetHttpClient(30, true, "", "", logger)
if err != nil {
log.Fatalf("http client: %v", err)
}
// ClientID and ClientSecret are intentionally omitted — the sidecar
// owns those credentials. NewSessionFromToken signs in using the token
// it has already obtained on our behalf.
sess, err := session.NewSessionFromToken(context.Background(), tf.AccessToken, session.Parameters{
EndpointURL: apiURL,
HTTPClient: *httpClient,
BackoffDefinition: backoffDef,
Logger: logger,
APIVersion: "3.1",
RetryMaxElapsedTimeSeconds: 30,
})
if err != nil {
log.Fatalf("session: %v", err)
}
defer func() {
if err := sess.Close(); err != nil {
log.Printf("session close: %v", err)
}
}()
// ── Retrieve Secrets Safe secrets ─────────────────────────────────────
// Paths follow the "folder/title" convention used in the Password Safe UI.
retrieved, err := sess.GetSecrets([]string{
"infrastructure/db-password",
"infrastructure/api-key",
}, "/")
if err != nil {
log.Fatalf("GetSecrets: %v", err)
}
// ── Retrieve a managed-account password ──────────────────────────────
// Paths follow the "managed-system/account-name" convention.
dbAccount, err := sess.GetManagedAccount("prod-db-server/app-service-account", "/")
if err != nil {
log.Fatalf("GetManagedAccount: %v", err)
}
// Use the secrets — never log them in production.
fmt.Println("Secrets retrieved successfully")
_ = retrieved["infrastructure/db-password"]
_ = retrieved["infrastructure/api-key"]
_ = dbAccount
}
The application container's env block only needs BT_API_URL:
# Application container — holds no BeyondTrust credentials
- name: myapp
image: myorg/myapp:latest
env:
- name: BT_API_URL
value: "https://passwordsafe.example.com/api/v3"
volumeMounts:
- name: bt-token
mountPath: /run/beyondtrust
readOnly: true
Why this approach is better than raw HTTP: the library handles server-side session management, request retries with exponential backoff, path parsing, file secret size limits, managed account workflows, and structured error responses, none of which you get by hand-rolling Authorization: Bearer calls. The sidecar provides the token; session.NewSessionFromToken provides the API abstraction. Neither requires credentials to be present in the application container.
Troubleshooting
`permission denied` writing token file
level=ERROR msg="Failed to write token" error="failed to write temp file: open /run/beyondtrust/access-token.tmp: permission denied"
Docker: The host directory passed to -v is owned by root (Docker creates it if it does not exist). Fix:
sudo rm -rf /tmp/bt-token
mkdir -p /tmp/bt-token
chmod o+w /tmp/bt-token
Kubernetes: Ensure the shared volume is an emptyDir and that fsGroup is set in the pod securityContext so the group ownership is applied at mount time:
securityContext:
fsGroup: 1000
volumes:
- name: token-vol
emptyDir:
medium: Memory
`401 Unauthorized` / authentication failed
level=ERROR msg="Authentication attempt failed" error="obtaining access token: ..."
- Verify BT_CLIENT_ID and BT_CLIENT_SECRET match the OAuth application registered in BeyondTrust Password Safe.
- Confirm BT_API_URL is set to your Password Safe API base URL (for example, https://passwordsafe.example.com/api/v3, with no trailing slash).
- Check that the OAuth application has the required scopes in Password Safe.
Pod stuck at `0/1 Ready` (readiness probe failing)
kubectl describe pod <pod-name> | grep -A6 Readiness
kubectl logs <pod-name> -c beyondtrust-authenticator
- If logs show authentication errors, check credentials (see above).
- If /readyz returns 503 with token_valid: false, the initial auth has not completed yet; wait for "Initial authentication successful" to appear in the logs.
- Increase initialDelaySeconds on the readiness probe if the BeyondTrust API is slow to respond at startup.
TLS / certificate errors
level=ERROR msg="Authentication attempt failed" error="creating HTTP client: tls: failed to verify certificate"
- Set BT_CA_BUNDLE to the path of a PEM file containing your internal CA certificate.
- For development only, set BT_TLS_VERIFY=false to skip TLS verification. Do not use in production.
Health server port conflict
level=ERROR msg="Health check server failed to bind" error="listen tcp :8080: bind: address already in use"
Set BT_HEALTH_PORT to a free port, or set it to 0 to disable the health server entirely:
BT_HEALTH_PORT=9090 # use a different port
BT_HEALTH_PORT=0     # disable health server
Token not refreshing
If "Refreshing token" never appears in the logs:
- Check BT_REFRESH_BEFORE_EXPIRY: it must be less than the token TTL. The default is 1m; increase it (e.g. 5m) so the refresh window is reached before expiry.
- Check BT_REFRESH_INTERVAL: this controls how often the expiry is checked. The default is 5m; reduce it for faster observation during testing.
OpenShift `SCC restricted` rejection
Error creating: pods ... is forbidden: unable to validate against any security context constraint
- Use openshift-sidecar.yaml or openshift-init.yaml as your deployment base; they have no hardcoded UIDs, drop all capabilities, and set allowPrivilegeEscalation: false.
- Do not set runAsUser or fsGroup to specific values in OpenShift; let the SCC assign the UID range for your namespace automatically.
- Verify the namespace has the restricted or restricted-v2 SCC assigned.