ProAuth Installation
Helm Chart Deployment
ProAuth is designed to run in a Kubernetes cluster; therefore the primary deployment method is a Helm release. Deploying with Helm charts has several advantages:
- The configuration values are defined on a logical level in values.yaml; the technical details of how those values are consumed at runtime are hidden.
- The Helm release also takes care of initialization actions such as database schema deployments and ensures that the service is only deployed after a successful and compatible database schema upgrade.
- Helm provides easy release management and rollback features.
ProAuth is delivered as two Helm charts: one for the backend (core) and one for the admin UI. If you do not host ProAuth in Kubernetes, you need to configure the containers directly and make sure the appropriate initialization logic (database, UI configuration) is executed at the right point during startup. Please refer to the chapter @sec:containerruntimeenvironment for detailed information.
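As a minimal sketch (the chart references, release names, and namespace below are placeholders; use the chart source and names provided with your ProAuth delivery), an installation with Helm could look like this:
bash
# Install or upgrade the ProAuth backend (core) chart with a custom values file.
# "proauth/proauth" and "proauth/proauthadminapp" are placeholder chart references.
helm upgrade --install proauth proauth/proauth \
  --namespace proauth --create-namespace \
  -f proauth-values.yaml

# Install or upgrade the admin UI chart with its own values file.
helm upgrade --install proauth-admin proauth/proauthadminapp \
  --namespace proauth \
  -f proauth-adminapp-values.yaml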
ProAuth Helm Chart Values
Application setting values
This section covers the most important settings.
- License data
  - Please provide a valid license string in order to use ProAuth.
- ProAuthRoot
  - ProAuth provides a root client application with an initial random client secret if ClientAppSecret is not set explicitly during deployment.
  - To enable and use SCIM in ProAuth, the SCIM token security key ScimTokenSecurityKey must be set. The key must have a minimum length of 128 bit (16 chars). It is needed to create individual SCIM tokens for the IDP instances where SCIM will be enabled and to verify the security token of every incoming SCIM request.
  - ProAuth needs a default certificate to sign and encrypt tokens within the OIDC flow. The default certificate needs to be provided through application settings, at least at the initial startup of ProAuth. Additional certificates can then be managed via API or UI.
- Encryption keys are used to encrypt critical information in ProAuth data stores. A valid X.509 certificate is needed. If a key rotation is necessary, the old certificates can be listed under keyrotationdecryptioncertificates. This enables the system to decrypt old values while already using the new key pair for all current encryption / decryption actions.
- The data section contains the SQL connection string to the ProAuth database.
- The base service settings contain general settings for the service to run.
- licensedata: content of the license data file
- clientappsecret: ProAuth Root client secret; when empty, a random value will be generated
- scimtokensecuritykey: necessary when SCIM is used; security key to create the SCIM endpoint tokens
  - The encryption algorithm requires a key size of at least 128 bits (16 chars)
- requirehttpsmetadata: default set to true
- sessionidletimeout: session timeout in minutes; default set to 20
- jobqueueinterval: job queue interval execution in hours; default set to 4
- emailsenderaddress, mailserverconfig: when provided, used as the default option value for all supported types:
  - Tenant (@sec:tenant-configure)
  - UserStore IDP (@sec:userstore-configure)
  - E-Mail TwoFactor (@sec:twofactor-email-configure)
  - The mail server config is a JSON definition; quotes need to be escaped (JSON samples are provided in the corresponding chapters)
- knownproxies: comma-separated list of IP addresses of proxies used for the x-forwarded-for header
- knownnetworks: comma-separated list of CIDR ranges of networks used for the x-forwarded-for header
- The enhanced logging configuration enables detailed logs for error analysis and is normally only used by a 4tecture representative.
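The default certificate and the SCIM token security key described above can be prepared up front. The following is only a sketch (it assumes openssl is available and that the certificate is supplied as a base64-encoded PFX; file names, subject, validity, and passwords are placeholders, and the exact expected certificate format should be checked against your delivery):
bash
# Generate a self-signed certificate for token signing/encryption (placeholder subject and validity).
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -subj "/CN=proauth-token-signing" \
  -keyout proauth-signing.key -out proauth-signing.crt

# Bundle key and certificate into a password-protected PFX and base64-encode it (GNU coreutils base64).
openssl pkcs12 -export -inkey proauth-signing.key -in proauth-signing.crt \
  -passout pass:ChangeMe -out proauth-signing.pfx
base64 -w0 proauth-signing.pfx > proauth-signing.pfx.b64

# Generate a random SCIM token security key (well above the 128 bit / 16 character minimum).
openssl rand -base64 24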
yml
appsettings:
license:
licensedata: VALUE_TO_OVERRIDE
proauthroot:
clientappsecret:
scimtokensecuritykey:
# defaultcertvalue: VALUE_TO_OVERRIDE
# defaultcertpassword: VALUE_TO_OVERRIDE
encryptionkeys:
mode: "" ## empty to disable, currently supported X509
certificate: ""
certificatepassword: ""
keyrotationdecryptioncertificates: []
#- certificate: ""
# certificatepassword: ""
#- certificate: ""
# certificatepassword: ""
data:
defaultconnection:
connectionstring: VALUE_TO_OVERRIDE
commandtimeoutinseconds: VALUE_TO_OVERRIDE
baseservicesettings:
hosturl: VALUE_TO_OVERRIDE
#requirehttpsmetadata: VALUE_TO_OVERRIDE
#sessionidletimeout: VALUE_TO_OVERRIDE
#jobqueueinterval: VALUE_TO_OVERRIDE
emailsenderaddress:
mailserverconfig:
#knownproxies: ""
#knownnetworks: ""
enhancedlogging:
enabled: false
logsensitivedata: false
Dapr settings
If ProAuth is hosted in a cluster with multiple instances, we rely on Dapr for the communication between services, event handling and shared state data.
If dapr.enabled is set to true, a valid Dapr configuration needs to be provided. You can either let the Helm chart generate a default Dapr configuration using Redis, or provide a custom Dapr configuration; in the latter case, set the flag deployDefaultComponents to false.
When deployDefaultComponents is true, the chart will create Dapr components for you using the names specified below.
When deployDefaultComponents is false, the chart will NOT create any Dapr components and will reference pre-existing components ONLY by name. You must pre-create components that match the configured names.
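Whichever option you choose, you can verify that the expected components exist before deploying ProAuth. A quick check with kubectl (a sketch; it assumes the Dapr CRDs are installed, and the namespace and component names are placeholders):
bash
# List the Dapr components in the target namespace and compare the names with
# nameStateStore, namePubSub and nameDbDeploymentWorkerPubSub.
kubectl get components.dapr.io -n proauth

# Inspect a single component in detail.
kubectl get components.dapr.io proauthstatestore -n proauth -o yaml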
redisHost: sample value redis-master.redis:6379
yml
dapr:
enabled: false
id: proauth
nameStateStore: "proauthstatestore"
namePubSub: "proauthglobalevents"
nameDbDeploymentWorkerPubSub: "proauthdatabasedeploymentuserstoreworker"
deployDefaultComponents: true
defaultComponents:
redisHost: VALUE_TO_OVERRIDE
redisPassword: VALUE_TO_OVERRIDE
redisDB: 0
maxLenApprox: 100
Dapr component requirements and delivery patterns
When you bring your own Dapr components (deployDefaultComponents=false), create them with the exact names configured above and ensure they satisfy the following requirements for ProAuth (backend):
- StateStore: any supported Dapr state store (no special metadata required)
- PubSub (global events): PubSub component with the metadata entry consumerID set to {uuid}. This ensures cache invalidation and pipeline reload events are fanned out to every ProAuth instance. For Redis Streams, set the metadata entry consumerID: "{uuid}" (as shown in the sample component below).
- PubSub for DB Deployment Worker: PubSub component that points to the same backend service as the global PubSub. Here we require the competing consumer pattern so only one worker instance processes a given event. For Redis Streams, use a shared consumer/group (do not set consumerID to a unique value per instance).
Sample Dapr components (ProAuth)
The following example shows a typical set of components when using Redis. Adjust names and secret references to match your environment.
yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: "proauthstatestore"
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: "redis-master.redis:6379"
- name: redisPassword
secretKeyRef:
name: proauth-dapr-secrets
key: redis-password
- name: ttlInSeconds
value: 1800
- name: redisDB
value: "0"
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: "proauthglobalevents"
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: "redis-master.redis:6379"
- name: redisPassword
secretKeyRef:
name: proauth-dapr-secrets
key: redis-password
- name: consumerID
value: "{uuid}"
- name: redisDB
value: "0"
- name: maxLenApprox
value: "100"
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: "proauthdatabasedeploymentuserstoreworker"
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: "redis-master.redis:6379"
- name: redisDB
value: "0"
- name: redisPassword
secretKeyRef:
name: proauth-dapr-secrets
key: redis-password
- name: maxLenApprox
value: "100"
- name: processingTimeout
value: "300s"Azure Identity
If ProAuth is hosted in Azure, the pods can run with a dedicated Azure Managed Identity. This enables a password-less configuration for access to other resources in Azure (e.g. Azure SQL). Since the database deployment needs higher access rights, there is a dedicated identity configuration for the database deployment. Please provide the appropriate data.
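As a sketch (assuming the Azure CLI is installed and you are logged in; the identity name and resource group are placeholders), a user-assigned managed identity can be created and its IDs read out for the values below:
bash
# Create a user-assigned managed identity.
az identity create --name proauth-identity --resource-group my-resource-group

# Read the values required by the chart: resourceID (id) and clientID (clientId).
az identity show --name proauth-identity --resource-group my-resource-group \
  --query "{resourceID:id, clientID:clientId}" -o table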
yml
azureidentity:
enabled: false
name: VALUE_TO_OVERRIDE
resourceID: VALUE_TO_OVERRIDE
clientID: VALUE_TO_OVERRIDE
dbdeployazureidentity:
enabled: false
name: VALUE_TO_OVERRIDE
resourceID: VALUE_TO_OVERRIDE
clientID: VALUE_TO_OVERRIDE
UserStore DB Deployment Worker
ProAuth provides a worker container which is able to automatically create and configure databases for newly created UserStore IDPs. To enable the deployment of this DB deployment worker container, enable it in the configuration.
dbdeployment: the group where all the DB Deployment Worker settings are configured
yml
dbdeploymentworker:
enabled: true
The database worker needs permissions to create databases and users on the target database server. Those settings depend on the target database server and the overall security setup. Currently, the following options are supported:

The different configuration options exist because of restrictions of Azure SQL and your cloud setup. During a deployment, a dedicated job runs to update the schemas of all existing databases. This job runs either with an SQL user or with an Azure Managed Identity. The same concept applies to the database worker: the pod can either run under a pod identity for deployment or use an SQL user. However, consider the following:
- Database creation is only possible through the API, not via SQL commands. Therefore, we need a user that creates the database via the API. This could be either of the following:
  - The pod identity (if configured, the DB deployment identity is used)
  - A dedicated service principal which is only used for creating the database. This is the preferred approach, since the pod identity for schema deployment usually does not have database creation permissions.
- If access to the database is authenticated by managed identities, the DB creation and user creation must be performed by either the pod identity or a dedicated service principal. A managed-identity-based user can only be created by a user that has Directory Reader permissions on the AAD tenant; this is not the case for SQL users.
- If access to the database is authenticated by SQL users, there is no need for a pod identity. The dedicated service principal is needed to create the database and is also used for the user creation. The schema deployment is always performed by the schema deployment user.
If the databases are hosted on Azure SQL, the database worker needs the required authentication settings for accessing the Azure API.
Settings under dbdeployment.azuresql:
- tenantid, clientid, clientsecret: AAD service principal (or client app) with proper permissions (see the sketch after this list); if managed identities will be configured, this user needs Directory Reader permissions
- subscriptionid: the Azure subscription ID in which the Azure SQL resources are hosted
- resourcegroupname: the resource group that contains the Azure SQL instance
- sqlservername: the Azure SQL server name
- elasticpoolname: if provided, the new UserStore databases are created in this elastic pool
- Managed Identity authentication:
  - If Managed Identity is enabled for ProAuth (azureidentity.enabled, dbdeployazureidentity.enabled), the security (users and roles) of the newly created databases will target those managed identities.
- SQL Server user login:
  - dbuser, dbpassword: database user which is created and assigned for read/write access
  - deploymentuser, deploymentpassword: database user which is created and assigned for schema deployment when deploying ProAuth updates
- Managed Identity users have priority over SQL Server users if both are configured.
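As a sketch for the dedicated service principal mentioned above (assuming the Azure CLI; the display name is a placeholder, and any required role assignments and the Directory Reader permission still have to be granted separately according to your setup):
bash
# Create a dedicated service principal for the DB deployment worker.
# The output contains appId (clientid), password (clientsecret) and tenant (tenantid).
az ad sp create-for-rbac --name proauth-dbdeployment-worker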
yml
dbdeploymentworker:
azuresql:
tenantid: VALUE_TO_OVERRIDE
clientid: VALUE_TO_OVERRIDE
clientsecret: VALUE_TO_OVERRIDE
subscriptionid: VALUE_TO_OVERRIDE
resourcegroupname: VALUE_TO_OVERRIDE
sqlservername: VALUE_TO_OVERRIDE
elasticpoolname: null
#dbuser: null
#dbpassword: null
#deploymentuser: null
#deploymentpassword: null
If the databases are hosted on an MS SQL Server, the database worker needs the required DB server permissions (roles) to create and configure the databases.
Settings under dbdeployment.sqlserver:
- connectionstring: the connection string to the SQL Server
- dbuser, dbpassword: SQL Server login which is created and assigned for read/write access
- deploymentuser, deploymentpassword: SQL Server login which is created and assigned for schema deployment when deploying ProAuth updates
yml
dbdeploymentworker:
sqlserver:
connectionstring: VALUE_TO_OVERRIDE
dbuser: null
dbpassword: null
deploymentuser: null
deploymentpassword: null
If the database needs to be deleted when the UserStore IDP is removed or the UserStore connection string is deleted, this can be enabled by setting the flag enabledeletionofuserstoredatabases.
yml
dbdeploymentworker:
enabledeletionofuserstoredatabases: false
INFO
The database will only be deleted when there is no other usage of the same database in another UserStore IDP connection string.
External Secrets Configuration
ProAuth Helm charts support using pre-created Kubernetes secrets instead of creating them automatically during deployment. This feature is particularly useful for organizations that want to manage secrets separately from the Helm deployment process through external secret management systems, GitOps workflows, or when using ProAuth as a subchart in larger deployments.
How External Secrets Work
When external secrets are enabled, the ProAuth chart will:
- First, attempt to discover existing secrets using a Kubernetes API lookup to automatically detect the keys in your pre-created secrets
- If the lookup fails (e.g. during helm template operations or subchart scenarios), fall back to the keys you specify in the keys configuration
- Reference your external secrets instead of creating new ones, giving you full control over secret lifecycle management
This approach provides flexibility while maintaining compatibility across different deployment scenarios.
ProAuth Core Chart External Secrets
By default, the ProAuth chart creates all necessary secrets automatically based on the values provided in values.yaml. However, you can configure the chart to reference external, pre-created secrets for specific configurations.
The external secrets configuration is controlled through the externalSecrets section:
yml
externalSecrets:
# User store connections aliases secret configuration
userstoreconnectionsaliases:
enabled: false # Set to true to use an external secret
secretName: "" # Name of the external secret to use
keys: [] # List of keys that exist in your external secret
# Data default connection secret configuration
datadefaultconnection:
enabled: false
secretName: ""
keys: []
# Database schema deployment secret configuration
dbschemadeployment:
enabled: false
secretName: ""
keys: []
# ProAuth root secret configuration
proauthroot:
enabled: false
secretName: ""
keys: []
# Base service settings secret configuration
baseservicesettings:
enabled: false
secretName: ""
keys: []
# DB deployment worker SQL Server secret configuration
dbdeploymentworkerSqlserver:
enabled: false
secretName: ""
keys: []
# DB deployment worker Azure SQL secret configuration
dbdeploymentworkerAzuresql:
enabled: false
secretName: ""
keys: []
# License secret configuration
license:
enabled: false
secretName: ""
keys: []
# Encryption keys secret configuration
encryptionkeys:
enabled: false
secretName: ""
keys: []
Required and Optional Keys
Each external secret type expects specific keys. You only need to include the keys that you actually create in your external secrets. Here are the expected keys for each secret type:
Base Service Settings (baseservicesettings):
- Required: hosturl, emailsenderaddress, mailserverconfig, useforwardedheaders
- Optional: requirehttpsmetadata, sessionidletimeout, jobqueueinterval, knownproxies, knownnetworks
Data Default Connection (datadefaultconnection):
- Required: connectionstring, commandtimeoutinseconds
ProAuth Root (proauthroot):
- Required: clientappsecret, scimtokensecuritykey
- Optional: defaultcertvalue, defaultcertpassword
Encryption Keys (encryptionkeys):
- Required: mode, certificate, certificatepassword
- Optional: KeyRotationDecryptionCertificates__<index>__Certificate, KeyRotationDecryptionCertificates__<index>__CertificatePassword
License (license):
- Required: licensedata
DB Deployment Worker SQL Server (dbdeploymentworkerSqlserver):
- Required: connectionstring
- Optional: containment, dbuser, dbpassword, deploymentuser, deploymentpassword
DB Deployment Worker Azure SQL (dbdeploymentworkerAzuresql):
- Required: subscriptionid, resourcegroupname, sqlservername, authenticationtype, dbcreationtype
- Optional: elasticpoolname, tenantid, clientid, clientsecret, sqladminuser, sqladminpassword
User Store Connection Aliases (userstoreconnectionsaliases):
- Required: your custom alias names (e.g. DefaultUserStoreConnection)
Database Schema Deployment (dbschemadeployment):
- Required: user, password
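As an example for the indexed key-rotation entries listed above, an encryption-keys secret with one rotation certificate could be created like this (a sketch; the secret name, certificate files, and passwords are placeholders):
bash
kubectl create secret generic proauth-encryption \
  --from-literal=mode="X509" \
  --from-literal=certificate="$(cat proauth-signing.pfx.b64)" \
  --from-literal=certificatepassword="ChangeMe" \
  --from-literal=KeyRotationDecryptionCertificates__0__Certificate="$(cat old-signing.pfx.b64)" \
  --from-literal=KeyRotationDecryptionCertificates__0__CertificatePassword="OldPassword"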
Configuration Best Practices
1. Always Specify the Keys Configuration
To ensure reliable deployment across all scenarios (including subchart usage and CI/CD pipelines), always specify the keys field with the actual keys that exist in your external secret:
yml
externalSecrets:
baseservicesettings:
enabled: true
secretName: "my-base-settings"
keys:
- "hosturl"
- "emailsenderaddress"
- "mailserverconfig"
- "useforwardedheaders"
# Only include optional keys if you created them in your secret
2. Use Consistent Secret Naming
Adopt a consistent naming pattern for your external secrets:
yml
externalSecrets:
baseservicesettings:
secretName: "proauth-base-settings"
datadefaultconnection:
secretName: "proauth-database"
encryptionkeys:
secretName: "proauth-encryption"
license:
secretName: "proauth-license"3. Include Only Existing Keys
Only specify keys in the keys array that you have actually created in your external secret. This prevents deployment errors and allows for flexible configurations.
Examples
Example 1: Using External Secret for Database Connection
- Create your secret manually:
bash
kubectl create secret generic proauth-database \
--from-literal=connectionstring="Server=myserver;Database=mydb;User Id=user;Password=pass;" \
--from-literal=commandtimeoutinseconds="30"
- Configure your values.yaml:
yml
externalSecrets:
datadefaultconnection:
enabled: true
secretName: "proauth-database"
keys:
- "connectionstring"
- "commandtimeoutinseconds"
# Leave the original appsettings.data.defaultconnection empty
appsettings:
data:
defaultconnection: {}
Example 2: Using External Secret for User Store Connection Aliases
- Create your secret manually:
bash
kubectl create secret generic proauth-userstore-aliases \
--from-literal=DefaultUserStoreConnection="Server=myserver;Database=userstore1;..." \
--from-literal=TestConnection="Server=testserver;Database=test;..."
- Configure your values.yaml:
yml
externalSecrets:
userstoreconnectionsaliases:
enabled: true
secretName: "proauth-userstore-aliases"
keys:
- "DefaultUserStoreConnection"
- "TestConnection"
# Leave the original appsettings empty
appsettings:
data:
userstoreconnections:
connectionstringaliases: {}
Example 3: Complete External Secrets Configuration
yml
externalSecrets:
baseservicesettings:
enabled: true
secretName: "proauth-base-settings"
keys:
- "hosturl"
- "emailsenderaddress"
- "mailserverconfig"
- "useforwardedheaders"
datadefaultconnection:
enabled: true
secretName: "proauth-database"
keys:
- "connectionstring"
- "commandtimeoutinseconds"
proauthroot:
enabled: true
secretName: "proauth-root-settings"
keys:
- "clientappsecret"
- "scimtokensecuritykey"
encryptionkeys:
enabled: true
secretName: "proauth-encryption"
keys:
- "mode"
- "certificate"
- "certificatepassword"
license:
enabled: true
secretName: "proauth-license"
keys:
- "licensedata"
# Clear original appsettings when using external secrets
appsettings:
baseservicesettings: {}
data:
defaultconnection: {}
proauthroot: {}
encryptionkeys: {}
license: {}
Using ProAuth as a Subchart
When using ProAuth as a subchart in a larger Helm deployment, external secrets are particularly useful. Make sure to:
- Create secrets before ProAuth deployment: Use Helm hooks or dependency ordering to ensure your secrets exist before ProAuth is deployed
- Always specify the keys configuration: This is crucial for subchart scenarios where secret lookup may not work during template rendering
- Use proper namespace: Ensure secrets are created in the same namespace where ProAuth will be deployed
Example parent chart structure:
yaml
# In parent chart values.yaml
proauth:
externalSecrets:
datadefaultconnection:
enabled: true
secretName: "shared-database-secret"
keys:
- "connectionstring"
- "commandtimeoutinseconds"
# In parent chart templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: shared-database-secret
namespace: {{ .Release.Namespace }}
type: Opaque
data:
connectionstring: {{ .Values.database.connectionString | b64enc | quote }}
commandtimeoutinseconds: {{ "30" | b64enc | quote }}
Troubleshooting External Secrets
If you encounter issues with external secrets:
- Verify secret existence: Ensure your external secret exists in the correct namespace
bash
kubectl get secret your-secret-name -n your-namespace
- Check secret keys: Verify the secret contains the expected keys
bash
kubectl get secret your-secret-name -n your-namespace -o jsonpath='{.data}' | jq 'keys'
- Validate configuration: Ensure your keys configuration matches the actual keys in your secret
- Test template rendering: Use helm template to verify the configuration works correctly
bash
helm template test-release ./proauth -f your-values.yaml
Migration from Internal to External Secrets
To migrate from internal secrets (values in appsettings) to external secrets:
- Create your external secrets with the required keys
- Enable external secrets in your values.yaml with the proper keys configuration
- Clear the corresponding appsettings sections to avoid conflicts
- Test the deployment in a non-production environment first
This approach gives you full control over secret management while maintaining the flexibility and ease of use of the ProAuth Helm chart.
External Service Accounts Configuration
Both ProAuth Helm charts now support using pre-created Kubernetes service accounts instead of creating them automatically during deployment. This is useful for organizations that want to manage service accounts and their RBAC permissions separately from the Helm deployment process.
ProAuth Core Chart External Service Accounts
By default, the ProAuth chart creates all necessary service accounts automatically. However, you can configure the chart to reference external, pre-created service accounts for specific workloads.
The external service accounts configuration is controlled through the externalServiceAccounts section:
yml
externalServiceAccounts:
# Main service account configuration (used by the main ProAuth deployment)
# This service account needs permissions for basic Kubernetes API access and k8s-wait-for functionality
# Required RBAC: Role with access to services, pods (get, watch, list) and jobs (get, watch, list)
main:
enabled: false # Set to true to use an external service account
name: "" # Name of the external service account to use
# Deploy service account configuration (used by database deployment jobs)
# This service account needs permissions for job management and k8s-wait-for functionality
# Required RBAC: Role with access to services, pods (get, watch, list) and jobs (get, watch, list)
deploy:
enabled: false # Set to true to use an external service account
name: "" # Name of the external service account to use
# Database management service account configuration (used by database deployment worker stateful set)
# This service account is used for database management operations
# Required RBAC: Depends on your specific database management requirements
dbManagement:
enabled: false # Set to true to use an external service account
name: "" # Name of the external service account to useImportant Notes:
- When using external service accounts, you are responsible for creating the necessary RBAC roles and role bindings
- The chart will only create RBAC resources if at least one service account is managed by the chart
- You can mix and match internal and external service accounts as needed
Example: Using External Service Account for Main Deployment
- Create your service account and RBAC manually:
bash
# Create the service account
kubectl create serviceaccount my-proauth-main-sa
# Create the required role (if not exists)
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: proauth-k8s-wait-for
rules:
- apiGroups: [""]
resources: ["services", "pods"]
verbs: ["get", "watch", "list"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "watch", "list"]
EOF
# Create the role binding
kubectl create rolebinding my-proauth-main-binding \
--role=proauth-k8s-wait-for \
--serviceaccount=default:my-proauth-main-sa
- Configure your values.yaml:
yml
externalServiceAccounts:
main:
enabled: true
name: "my-proauth-main-sa"
# deploy and dbManagement remain internal (chart-managed)
deploy:
enabled: false
dbManagement:
enabled: false
Example: Using All External Service Accounts
- Create all service accounts and RBAC manually:
bash
# Create service accounts
kubectl create serviceaccount my-proauth-main-sa
kubectl create serviceaccount my-proauth-deploy-sa
kubectl create serviceaccount my-proauth-db-sa
# Create and bind roles (you are responsible for all RBAC when using all external service accounts)
# ... (create appropriate roles and bindings for your requirements)
- Configure your values.yaml:
yml
externalServiceAccounts:
main:
enabled: true
name: "my-proauth-main-sa"
deploy:
enabled: true
name: "my-proauth-deploy-sa"
dbManagement:
enabled: true
name: "my-proauth-db-sa"Deployment Job Resources Configuration
Both ProAuth Helm charts now support configuring resource requests and limits for their respective initializer jobs. This is critical in environments with resource quotas or policies that require all pods to have resource limits.
ProAuth Core Chart - Database Deployment Job
The main ProAuth chart includes a database deployment job that can be configured with resource limits and requests:
yml
# Database deployment job resource configuration
deploymentJobResources:
limits:
cpu: "500m"
memory: "512Mi"
requests:
cpu: "100m"
memory: "128Mi"Example Usage:
yml
deploymentJobResources:
limits:
cpu: "1000m" # 1 CPU core
memory: "1Gi" # 1 GB memory
requests:
cpu: "200m" # 0.2 CPU cores
memory: "256Mi" # 256 MB memoryINFO
By default, deploymentJobResources is set to {} (empty), which means no resource limits or requests are applied to the jobs. This maintains backward compatibility with existing deployments.
INFO
Reasons for setting resource limits on all containers:
- Many Kubernetes environments have resource quotas that require resource requests and limits
- Some organizations use admission controllers that require all pods to have resource limits defined
- Defining resource limits helps with capacity planning and prevents jobs from consuming excessive resources
ProAuth Admin App Helm Chart Values
Application setting values
This section covers the most important settings.
- Authentication information to authenticate against the backend API
- Encryption keys are used to encrypt critical information in ProAuth data stores. A valid X.509 certificate is needed. If a key rotation is necessary, the old certificates can be listed under keyrotationdecryptioncertificates. This enables the system to decrypt old values while already using the new key pair for all current encryption / decryption actions.
- The base service settings contain general settings for the service to run.
- The initializer settings will be used to initialize the Admin UI configuration (View Definitions, Labels, etc.) by the initializer container. Please provide the appropriate settings to connect to the backend API via a client credential grant.
yml
appsettings:
authentication:
authority: VALUE_TO_OVERRIDE
clientid: VALUE_TO_OVERRIDE
clientsecret: VALUE_TO_OVERRIDE
tenantid: VALUE_TO_OVERRIDE
encryptionkeys:
mode: "" ## empty to disable, currently supported X509
certificate: ""
certificatepassword: ""
keyrotationdecryptioncertificates: []
#- certificate: ""
# certificatepassword: ""
#- certificate: ""
# certificatepassword: ""
baseservicesettings:
serviceurl: VALUE_TO_OVERRIDE
sessiontimeoutinminutes: 720
#knownproxies: ""
#knownnetworks: ""
initializersettings:
authority: VALUE_TO_OVERRIDE
serviceurl: VALUE_TO_OVERRIDE
resourceclientid: VALUE_TO_OVERRIDE
resourceclientsecret: VALUE_TO_OVERRIDE
resourcetenantid: VALUE_TO_OVERRIDE
External Secrets Configuration
The ProAuth Admin App chart also supports external secrets for managing sensitive configuration separately from the Helm deployment:
yml
externalSecrets:
# Base service settings secret configuration
baseservicesettings:
enabled: false # Set to true to use an external secret
secretName: "" # Name of the external secret to use
# Encryption keys secret configuration
encryptionkeys:
enabled: false
secretName: ""
# Authentication secret configuration
authentication:
enabled: false
secretName: ""
# Initializer settings secret configuration
initializersettings:
enabled: false
secretName: ""Example: Using External Secret for Authentication
- Create your secret manually:
bash
kubectl create secret generic my-auth-secret \
--from-literal=authority="https://login.microsoftonline.com/tenant" \
--from-literal=clientid="your-client-id" \
--from-literal=clientsecret="your-client-secret" \
--from-literal=tenantid="your-tenant-id"
- Configure your values.yaml:
yml
externalSecrets:
authentication:
enabled: true
secretName: "my-auth-secret"
# Leave the original appsettings.authentication empty
appsettings:
authentication: {}
ProAuth Admin App Chart External Service Accounts
By default, the ProAuth Admin App chart creates all necessary service accounts automatically. However, you can configure the chart to reference external, pre-created service accounts for specific workloads.
The external service accounts configuration is controlled through the externalServiceAccounts section:
yml
externalServiceAccounts:
# Main service account configuration (used by the main ProAuthAdminApp deployment)
# This service account needs permissions for basic Kubernetes API access and k8s-wait-for functionality
# Required RBAC: Role with access to services, pods (get, watch, list) and jobs (get, watch, list)
main:
enabled: false # Set to true to use an external service account
name: "" # Name of the external service account to use
# Deploy service account configuration (used by resource deployment jobs)
# This service account needs permissions for job management and k8s-wait-for functionality
# Required RBAC: Role with access to services, pods (get, watch, list) and jobs (get, watch, list)
deploy:
enabled: false # Set to true to use an external service account
name: "" # Name of the external service account to useExample: Using External Service Account for Admin App Main Deployment
- Create your service account and RBAC manually:
bash
# Create the service account
kubectl create serviceaccount my-adminapp-main-sa
# Create the required role (if not exists)
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: adminapp-k8s-wait-for
rules:
- apiGroups: [""]
resources: ["services", "pods"]
verbs: ["get", "watch", "list"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "watch", "list"]
EOF
# Create the role binding
kubectl create rolebinding my-adminapp-main-binding \
--role=adminapp-k8s-wait-for \
--serviceaccount=default:my-adminapp-main-sa
- Configure your values.yaml:
yml
externalServiceAccounts:
main:
enabled: true
name: "my-adminapp-main-sa"
# deploy remains internal (chart-managed)
deploy:
enabled: false
Deployment Job Resources Configuration
The Admin App chart includes a resource deployment job that can be configured with resource limits and requests:
yml
# Resource deployment job resource configuration
deploymentJobResources:
limits:
cpu: "300m"
memory: "256Mi"
requests:
cpu: "50m"
memory: "64Mi"Example Usage:
yml
deploymentJobResources:
limits:
cpu: "500m" # 0.5 CPU cores
memory: "512Mi" # 512 MB memory
requests:
cpu: "100m" # 0.1 CPU cores
memory: "128Mi" # 128 MB memoryDapr settings
If ProAuth is hosted in a cluster with multiple instances, we rely on Dapr for the communication between services, event handling and shared state data.
If dapr.enabled is set to true, a valid Dapr configuration needs to be provided. You can either let the Helm chart generate a default Dapr configuration using Redis, or provide a custom Dapr configuration; in the latter case, set the flag deployDefaultComponents to false.
When deployDefaultComponents is true, the chart will create a Dapr StateStore component using the configured name.
When deployDefaultComponents is false, the chart will NOT create any Dapr components and will reference a pre-existing StateStore component ONLY by name. You must pre-create the component with the configured name.
redisHost: sample value redis-master.redis:6379
yml
dapr:
enabled: false
id: proauthadminappserver
nameStateStore: "proauthadminappserverstatestore"
deployDefaultComponents: true
defaultComponents:
redisHost: VALUE_TO_OVERRIDE
redisPassword: VALUE_TO_OVERRIDE
redisDB: 0
maxLenApprox: 100
Dapr component requirements (Admin App)
When you bring your own Dapr components (deployDefaultComponents=false), create the StateStore with the exact name configured above. Requirements:
- ProAuthAdminApp (frontend)
- StateStore: any supported Dapr state store (no special metadata required). No PubSub is required by the Admin App.
Sample Dapr component (Admin App)
The following example shows a typical StateStore when using Redis. Adjust names and secret references to match your environment.
yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: "proauthadminappserverstatestore"
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: "redis-master.redis:6379"
- name: redisPassword
secretKeyRef:
name: proauthadminapp-dapr-secrets
key: redis-password
- name: ttlInSeconds
value: 1800
- name: redisDB
value: "0"Container Runtime Environment
ProAuth is delivered as container images, and those images can run in any OCI-compliant container environment. The advantages of the Helm chart do not apply here, so the deployment automation needs a bit more logic. From a runtime perspective, nothing changes.
The easiest way to configure the containers is by using an appropriate .env file to specify the environment variables to overwrite the appsettings.json values.
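As a sketch (assuming the standard ASP.NET Core convention of mapping nested configuration keys to environment variables with double underscores, the same convention used for the key rotation entries above; the image reference, port mapping, and all values are placeholders):
bash
# Create an .env file that overrides nested appsettings.json values (Section__SubSection__Key).
cat > proauth.env <<'EOF'
Data__DefaultConnection__ConnectionString=Server=myserver;Database=proauth;User Id=user;Password=pass;
BaseServiceSettings__HostUrl=https://auth.example.com
License__LicenseData=PLACEHOLDER_LICENSE_DATA
EncryptionKeys__Mode=X509
EncryptionKeys__Certificate=PLACEHOLDER_BASE64_PFX
EncryptionKeys__CertificatePassword=PLACEHOLDER_PASSWORD
EOF

# Start the container with the environment file (image reference and port mapping are placeholders).
docker run --env-file proauth.env -p 8080:8080 my-registry.example.com/proauth/core:latest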
ProAuth appsettings.json
The following settings should be configured to run ProAuth:
json
{
"Data": {
"DefaultConnection": {
"ConnectionString": "VALUE_TO_OVERRIDE",
"CommandTimeoutInSeconds": 30
},
"Events": {
"Type": "InMemory", // or Dapr
"InstanceName": "proauthglobalevents"
},
"StateStore": {
"Type": "InMemory", // or Dapr
"InstanceName": "proauthstatestore"
}
},
"BaseServiceSettings": {
"HostUrl": "VALUE_TO_OVERRIDE",
"EmailSenderAddress": null, // or proper value
"MailServerConfig": null // or proper value
},
"ProAuthRoot": {
"ClientAppSecret": null, // or own secret
"ScimTokenSecurityKey": null // needs to be set when using SCIM
},
"License": {
"LicenseFile": "",
"LicenseData": "VALUE_TO_OVERRIDE"
},
"EncryptionKeys": {
"Mode": "X509", // empty to disable
"Certificate": "VALUE_TO_OVERRIDE",
"CertificatePassword": "VALUE_TO_OVERRIDE",
"KeyRotationDecryptionCertificates": [
//{
// "Certificate": "VALUE_TO_OVERRIDE",
// "CertificatePassword": "VALUE_TO_OVERRIDE"
//}
]
}
}
Please refer to the chapter @sec:helmchartdeployment for detailed information about the different settings.
ProAuth Admin UI appsettings.json
The following settings should be configured to run the ProAuth Admin UI:
json
{
"BaseServiceSettings": {
"ServiceUrl": "VALUE_TO_OVERRIDE",
"SessionTimeoutInMinutes": 720
},
"AuthenticationSettings": {
"Authority": "VALUE_TO_OVERRIDE",
"ClientId": "VALUE_TO_OVERRIDE",
"ClientSecret": "VALUE_TO_OVERRIDE",
"TenantId": "VALUE_TO_OVERRIDE"
},
"Data": {
"StateStore": {
"Type": "InMemory",
"InstanceName": "proauthadminappserverstatestore"
}
},
"EncryptionKeys": {
"Mode": "X509", // empty to disable
"Certificate": "VALUE_TO_OVERRIDE",
"CertificatePassword": "VALUE_TO_OVERRIDE",
"KeyRotationDecryptionCertificates": [
//{
// "Certificate": "VALUE_TO_OVERRIDE",
// "CertificatePassword": "VALUE_TO_OVERRIDE"
//}
]
}
}
Please refer to the chapter @sec:helmchartdeployment for detailed information about the different settings.
Root Configuration
To run ProAuth, admin access to a database server is required; this must be set up beforehand.
If ProAuth is deployed to the cluster by the Helm Package, an Init container is executed first which deploys the database based on the arguments used to start the DatabaseMigrator in the Init container. The DatabaseMigrator creates the ProAuth database if it does not already exist and then deploys the database schema using the given dacpac files. If the ProAuth database already exists, the database schema is applied. The DatabaseMigrator then scans for existing UserStore instances by looking for UserStore ConnectionStrings in the options. If UserStore instances are found, they are also deployed based on the given dacpac files and the database schemas are applied.
After the Init Container has executed the DatabaseMigrator, the ProAuth deployment starts. During the deployment of ProAuth by the Helm Package, a connection string is required, which has access to a previously created database.
If ProAuth starts for the first time on an empty database, the Root DataInitializer is executed in ProAuth. The Root DataInitializer sets up a minimal ProAuth configuration, which ultimately contains a Root ClientApp that allows ProAuth to be set up automatically in a customer-specific way.
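To follow the DatabaseMigrator during a Helm deployment, the init container logs can be inspected. A sketch (pod name, namespace, label selector, and init container name are placeholders and depend on your release):
bash
# Find the ProAuth pod(s); the label selector is an assumption, adjust it to your chart's labels.
kubectl get pods -n proauth -l app.kubernetes.io/instance=proauth

# List the init containers of a pod and follow the one running the DatabaseMigrator.
kubectl get pod <proauth-pod> -n proauth -o jsonpath='{.spec.initContainers[*].name}'
kubectl logs <proauth-pod> -n proauth -c <init-container-name> --follow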
Root DataInitializer
The goal of the ProAuth Root DataInitializer is to set up an initial Root ClientApp with SysAdmin privileges. With this ClientApp, it is possible to configure ProAuth automatically through the API.
The ProAuth Root DataInitializer sets up the following configuration:
- Default Certificate (provided by settings)
- Root Customer
- Root ClientApp
- Invitation ClientApp
- Root Subscription
- Root Tenant
- Forward ClaimRule
- ServerCookie IDP