MindMap Gallery OCP 4.5 Documentation
This mind map of the OCP 4.5 documentation describes how the document set is organized, in particular the classification by roles and activities as well as by individual documents. Each organizational method is explained in terms of its definition, implementation steps, and expected outcomes.
Edited at 2021-06-28 23:54:43
OCP 4.5 Documentation
Cluster-level Task (OCP 3.11)
Day 2 Operations
Run-once Tasks
NTP synchronization
Entropy
Checking the default
Environment Health Checks
Checking complete environment health
Creating alerts using Prometheus
Host health
Router & Registry Health
Network Connectivity
Storage
Docker Storage
API Server Status
Controller Role Verification
Verifying correct MTU size
Creating an Environment-wide Backup
Creating a Master Host backup
Creating a Node Host backup
Backing up Registry Certificates
Backing up Other Installation Files
Backing up Application Data
etcd Backup
Backing up a Project
Backing up Persistent Volume Claims
Host-level Tasks
Adding a host to the Cluster
Master Host Tasks
Deprecating a master host
Creating a master host backup
Restoring a master host backup
Node Host Tasks
Deprecating a node host
Creating a node host backup
Restoring a node host backup
Node maintenance & next steps
etcd Tasks
etcd backup/restoring
Replacing an etcd host
Scaling etcd
Removing an etcd host
Project-level Task
Backup/Restoring a Project
Docker Tasks
Increasing Container Storage
Managing Container Registry Certificate
Managing Container Registries
Managing Certificates
Changing an Application's self-signed certificate to a CA-signed certificate
Cluster Administration
1. Overview
2. Managing Nodes
3. Restoring OpenShift Container Platform Components
4. Replacing a Master Host
5. Managing Users
6. Managing Projects
7. Managing Pods
8. Managing Networking
9. Configuring Service Account
10. Managing Role-based Access Control (RBAC)
11. Image Policy
12. Image Signatures
13. Scoped Tokens
14. Monitoring Images
15. Managing Security Context Constraints
16. Scheduling
17. Setting Quotas
18. Setting Multi-Project Quotas
19. Setting Limit Ranges
20. Pruning Objects
21. Extending the Kubernetes API with Custom Resources
22. Garbage Collection
23. Allocating Node Resources
24. Node Problem Detector
25. Overcommitting
26. Assigning Unique External IPs for Ingress Traffic
27. Handling Out-of-Resource Errors
28. Monitoring & Debugging Routers
29. High Availability
30. IPTables
31. Securing Builds by Strategy
32. Restricting Application Capabilities
33. sysctls
34. Encrypting Data at Datastore Layer
35. Encrypting Traffic between Nodes with IPSec
36. Building Dependency Trees
37. Replacing etcd Quorum
38. Restoring etcd Quorum
39. Troubleshooting OpenShift SDN
40. Diagnostics Tools
41. Idling Applications
42. Analyzing Cluster Capacity
43. Configuring Cluster Auto-Scaler in AWS
44. Disabling Features using Feature Gates
45. KURYR SDN Administration
Configuring Cluster
1. Overview
2. Setting up The Registry
3. Setting up a Router
4. Deploying Red Hat CloudForms
5. Prometheus
7. Master & Node Configuration
8. OpenShift Ansible Broker Configuration
9. Adding Hosts to Existing Cluster
10. Adding the Default Image Streams and Templates
11. Configuring Custom Certificates
12. Redeploying Certificates
13. Configuring Authentication and User Agent
14. Syncing Groups with LDAP
15. Configuring LDAP Failover
16. Configuring the SDN
17. Configuring Nuage SDN
18. Configuring NSX-T SDN
19. Configuring KURYR SDN
20. Configuring for AWS
21. Configuring for RHV
22. Configuring OpenStack
23. Configuring GCP
24. Configuring Azure
25. Configuring VMWare vSphere
26. Configuring Local Volumes
27. Configuring Persistent Storage
28. Persistent Storage Examples
29. Configuring Ephemeral Storage
30. Working with HTTP Proxies
31. Configuring Global Build Defaults & Overrides
32. Configuring Pipeline Execution
33. Configuring Route Timeouts
34. Configuring Native Container Routing
35. Routing from Edge Load Balancers
36. Aggregating Container Logs
37. Aggregate Logging Sizing Guidelines
38. Enabling Cluster Metrics
39. Customizing the Web Console
40. Deploying external Persistent Volume Provisioners
41. Installing the Operator Framework (Technology Preview)
By Roles & Activities
Cluster Installer
Install on
Baremetal
Dell EMC
HPE
Intel
Lenovo
Public Cloud Provider
AWS
Azure
GCP
Private Virtualization/Cloud
VMWare vSphere
RHOSP
RHV
Network Environment
Restricted Network
Existing Network
Install a Private Cluster on
AWS
Azure
GCP
Check Installation Logs
Access OCP Cluster
Using WebConsole
Install OpenShift Container Storage
Cluster Administrator
Manage Cluster Components
Manage Machine
Manage Container Registries
Manage Users and Groups
Manage Authentication
Manage Ingress, API Server & Service Certificates
Manage Networking
Manage Storage
Manage Operators
Change Cluster Components
Use Custom Resource Definition (CRD) to Modify the Cluster
Set Resource Quotas
Prune and Reclaim Resources
Scale & Tune Clusters
Update Cluster
Monitor Cluster Components
Work with Cluster Logging
Monitor Clusters
Remote Health Monitoring
Developer Activities
Work with Projects
Work with Applications
Use Developer CLI Tool (odo)
Create CI/CD Pipelines
Deploy Helm Charts
Understand Operators
Understand Image Builds
Create Images
Create Deployments & DeploymentConfigs
Create Templates
Create Operators
REST API Reference
By Documentation
Getting Started
Release Notes
New Features & Enhancements
1. Installation & Upgrades
1. Installing on vSphere using Installer-Provisioned Infrastructure
2. Installing on GCP using User-Provisioned Infrastructure & a Shared VPC
3. Three-node Bare Metal Deployment
4. Restricted Network Cluster Upgrade Improvements
5. Migrating Azure Private DNS Zones
6. Built-in Help for install-config.yaml Supported Fields
7. Encrypt EBS Instance Volumes with a KMS Key
8. Install to Pre-Existing VPC with Multiple CIDRs on AWS
9. Adding Custom Domain Names to AWS VPC DHCP Option Sets
10. Provisioning Bare Metal Hosts using IPv6
11. Custom Networks & Subnets for Clusters on RHOSP
12. Additional Networks for Clusters on RHOSP
13. Improved Load Balancer Upgrade Experience for Clusters that Use Kuryr
14. Multiple Version Schemes Accepted when Installing RPM Packages
15. SSH Configuration No Longer Required for Debug Information
16. Master Nodes Can be Named Any Valid Hostname
17. Octavia OVN Provider Driver Supported on Previous RHOSP Versions
18. Octavia OVN Provider Driver Supports Listeners on the Same Port
2. Security
3. Images
4. Machine API
5. Nodes
6. Cluster Monitoring
7. Cluster Logging
8. Web Console
9. Scale
10. Networking
11. Developer Experience
12. Backup & Restore
13. Disaster Recovery
14. Storage
15. Operator
16. OpenShift Virtualization
Notable Technical Changes
Deprecated & Removed Features
Technology Preview Features
Architecture
Security
Install
OpenShift Container Platform 4.5 Installing (Post Installation)
Troubleshooting Installation Issues
Prerequisites
Gathering Logs from a Failed Installation
Manually Gathering Logs with SSH Access to Hosts
Manually Gathering Logs without SSH Access to Hosts
Getting Debug Information from the Installation Program
Support for FIPS Cryptography
FIPS Validation in OCP
FIPS Support in Components that the Cluster Uses
Installing a Cluster in FIPS Mode
Installation Configuration
Installation Methods for Different Platforms
Installer-provisioned infrastructure
Default
AWS
Azure
GCP
RHV
Custom
AWS
Azure
GCP
OpenStack
RHV
Network Operator
AWS
Azure
GCP
Private Clusters
AWS
Azure
GCP
Existing Virtual Private Network
AWS
Azure
GCP
User-provisioned infrastructure options
Custom
AWS
Azure
GCP
OpenStack
Baremetal
vSphere
Network Operator
Baremetal
vSphere
Restricted Network
AWS
GCP
Baremetal
vSphere
Customizing Nodes
Adding day-1 Kernel Arguments
Adding Kernel Modules to Nodes
Building and Testing the Kernel Module Container
Provisioning a Kernel Module to OCP
Encrypting Disk during Installation
Enabling TPM v2 Disk Encryption
Enabling Tang Disk Encryption
Configuring Chrony Time Service
Creating a Mirror Registry for Installation in a Restricted Network
Installing OpenShift CLI Client
Configuring Credentials that allow Images to be mirrored
Mirroring the OCP Image Repository
Preparing Cluster to gather Support Data
Using Samples Operator ImageStreams with alternate or mirrored Registries
Available Cluster Customization
You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available.

You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.

For current documentation of the settings that you control by using these resources, use the oc explain command, for example: oc explain builds --api-version=config.openshift.io/v1
Cluster Configuration Resources
apiserver.config.openshift.io
authentication.config.openshift.io
build.config.openshift.io
console.config.openshift.io
featuregate.config.openshift.io
image.config.openshift.io
ingress.config.openshift.io
oauth.config.openshift.io
project.config.openshift.io
proxy.config.openshift.io
scheduler.config.openshift.io
Operator Configuration Resources
console.operator.openshift.io
config.imageregistry.operator.openshift.io
config.samples.operator.openshift.io
Additional Configuration Resources
alertmanager.monitoring.coreos.com
ingresscontroller.operator.openshift.io
Informational Resources
clusterversion.config.openshift.io
dns.config.openshift.io
infrastructure.config.openshift.io
network.config.openshift.io
Configuring Your Firewall
Required Web Sites
registry.redhat.io
quay.io
sso.redhat.com
cert-api.access.redhat.com
api.access.redhat.com
infogw.api.openshift.com
cloud.redhat.com/api/ingress
*.amazonaws.com
*.googleapis.com
accounts.google.com
management.azure.com
mirror.openshift.com
storage.googleapis.com/openshift-release
*.apps.<cluster_name>.<base_domain>
quay-registry.s3.amazonaws.com
api.openshift.com
art-rhcos-ci.s3.amazonaws.com
cloud.redhat.com/openshift
Configuring a Private Cluster
Setting DNS to Private
Setting the Ingress Controller to Private
Restricting the API Server to Private
Installing on (Installation Steps)
Baremetal
AWS
Azure
GCP
IBM Power
IBM Z & LinuxONE
OpenStack
RHV
vSphere
Upgrade
Updating Clusters
Configure
Post Installation Configuration
Cluster-level Tasks
Adjust Worker Nodes
Understand the difference between MachineSets and MachineConfigPools
Scaling a MachineSet manually
The MachineSet deletion policy
Create Infrastructure MachineSets
OCP Infrastructure Components
Kubernetes
OCP Control Plane
The Default Router
The Container Image Registry
The Cluster Metrics Collection or Monitoring Service
Cluster Aggregated Logging
Service Broker
Create Infrastructure MachineSets for Production Environments
Create MachineSets for different Cloud Platforms
Cluster Autoscaler
ClusterAutoscaler Resource Definition
Deploy ClusterAutoscaler
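The ClusterAutoscaler resource definition referenced above might look like the following sketch. It is a cluster-scoped singleton named "default"; the resource limits and scale-down timings are illustrative values, not from the source:

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    # Illustrative cap on the total number of nodes the autoscaler may create.
    maxNodesTotal: 24
  scaleDown:
    enabled: true
    # Wait 10 minutes after a node is added before considering scale-down.
    delayAfterAdd: 10m
    unneededTime: 10m
```

The resource is typically applied with oc create -f against a saved YAML file.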
Machine Autoscaler
Machine Autoscaler Resource Definition
Deploy MachineAutoscaler
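A MachineAutoscaler targets a single MachineSet and sets its replica bounds. A minimal sketch follows; the MachineAutoscaler and MachineSet names are hypothetical:

```yaml
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a            # hypothetical name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: my-cluster-worker-us-east-1a   # hypothetical MachineSet name
```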
Using Feature Gates (Technology Preview)
RotateKubeletServerCertificate
SupportPodPidsLimit
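One possible shape for enabling individual gates such as the two above, assuming the CustomNoUpgrade feature set is available in this release (a sketch, not a verified configuration for 4.5):

```yaml
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster          # FeatureGate is a cluster-scoped singleton
spec:
  # Turning on a feature set cannot be undone and makes the cluster
  # unsupported and not upgradeable.
  featureSet: CustomNoUpgrade
  customNoUpgrade:
    enabled:
    - RotateKubeletServerCertificate
    - SupportPodPidsLimit
```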
etcd Tasks
Recommended etcd Practice
etcd Encryption
Enable/Disable etcd encryption
Backup etcd Data
Restore etcd to previous Cluster State
Pod Disruption Budget
Understand how to use Pod disruption budgets to specify the number of Pods that must be up
Specify the number of pods that must be up with pod disruption budgets
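A minimal PodDisruptionBudget sketch for the pattern described above; the name and label are hypothetical:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-pdb           # hypothetical name
spec:
  # At least two matching pods must stay up during voluntary disruptions.
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app        # hypothetical label
```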
Node-level Tasks
Adding RHEL Compute Machines to a Cluster
About adding RHEL compute nodes to a cluster
System requirements for RHEL compute nodes
Preparing the machine to run the playbook
Preparing a RHEL compute node
Adding a RHEL compute machine to yourcluster
Required parameters for the Ansible hosts file
Removing RHCOS compute machines from a cluster
Deploy MachineHealthChecks
About MachineHealthChecks
Sample MachineHealthCheck resource
Create a MachineHealthCheck resource
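A sample MachineHealthCheck of the kind this block describes could be sketched as follows (the name and thresholds are illustrative):

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example                       # hypothetical name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
  # Remediate a machine whose node has reported NotReady for 5 minutes.
  - type: Ready
    status: "False"
    timeout: 300s
  # Stop remediating if more than 40% of targeted machines are unhealthy.
  maxUnhealthy: 40%
```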
Scale a MachineSet manually
The MachineSet delete policy
Understand the difference between MachineSets and the MachineConfigPool
Recommended Node Host Practices
Create a KubeletConfig CRD to edit kubelet parameters
Control plane node sizing
Recommended etcd practices
Setting up CPU Manager
Huge Pages
What huge pages do
How huge pages are consumed by apps
Configuring huge pages at Boot time
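How huge pages are consumed by apps can be sketched with a pod spec like this (the pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: example.com/app:latest     # hypothetical image
    resources:
      limits:
        # Request 50 pre-allocated 2Mi huge pages (100Mi total);
        # huge page requests must equal their limits.
        hugepages-2Mi: 100Mi
        memory: 100Mi
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```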
Device Plugins
Example device plug-ins
Nvidia GPU device plug-in for COS-based operating systems
Nvidia official GPU device plug-in
Solarflare device plug-in
KubeVirt device plug-ins: vfio and kvm
Methods for deploying a device plug-in
- Daemonsets are the recommended approach for device plug-in deployments.
- Upon start, the device plug-in will try to create a UNIX domain socket at /var/lib/kubelet/device-plugins/ on the node to serve RPCs from Device Manager.
- Since device plug-ins must manage hardware resources, access to the host file system, as well as socket creation, they must be run in a privileged security context.
- More specific details regarding deployment steps can be found with each device plug-in implementation.
Understanding the Device Manager
Enabling Device Manager
Taints & Tolerations
Understanding taints and tolerations
Understanding how to use toleration seconds to delay Pod evictions
Understanding how to use multiple taints
Preventing Pod eviction for node problems
Understanding Pod scheduling and node conditions (Taint Node by Condition)
Understanding evicting pods by condition (Taint-Based Evictions)
Adding taints and tolerations
Dedicating a Node for a User using taints and tolerations
Binding a user to a Node using taints and tolerations
Controlling Nodes with special hardware using taints and tolerations
Removing taints and tolerations
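The taint-and-toleration pairing covered in this block can be sketched as two fragments; the key, value, and node are hypothetical:

```yaml
# Fragment of a Node spec after tainting it, equivalent to:
#   oc adm taint nodes node1 dedicated=groupA:NoSchedule
spec:
  taints:
  - key: dedicated
    value: groupA
    effect: NoSchedule
---
# Matching fragment of a Pod spec; only pods carrying this toleration
# may be scheduled onto the tainted node.
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: groupA
    effect: NoSchedule
```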
Topology Manager
Topology Manager policies
Setting up Topology Manager
Pod interactions with Topology Manager policies
Resource Requests & Overcommit
For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node.

The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service.

Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted.
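The 1Gi request / 2Gi limit example above corresponds to a container spec fragment like this sketch (container name and image are hypothetical):

```yaml
spec:
  containers:
  - name: app                         # hypothetical container
    image: example.com/app:latest     # hypothetical image
    resources:
      requests:
        memory: 1Gi   # used by the scheduler to place the pod
      limits:
        memory: 2Gi   # enforced cap; 200% overcommit relative to the request
```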
Cluster-level Overcommit using the Cluster Resource Override Operator
Installing the Cluster Resource Override Operator using the web console
Installing the Cluster Resource Override Operator using the CLI
Configuring cluster-level overcommit
Node-level Overcommit
Understanding compute resources and containers
Understanding overcommitment and quality of service classes
Understanding swap memory and QOS
Understanding nodes overcommitment
Disabling or enforcing CPU limits using CPU CFS quotas
Reserving resources for system processes
Disabling overcommitment for a node
Project-level Limits
Disabling overcommitment for a project
Freeing Node Resources using Garbage Collection
Understanding how terminated containers are removed through garbage collection
Understanding how images are removed through garbage collection
Configuring garbage collection for containers and images
Using the Node Tuning Operator
Accessing an example Node Tuning Operator specification
Custom tuning specification
Default profiles set on a cluster
Supported Tuned daemon plug-ins
audio
cpu
disk
eeepc_she
modules
mounts
net
scheduler
scsi_host
selinux
sysctl
sysfs
usb
video
vm
Configuring The Maximum Number of Pods per Node
podsPerCore
maxPods
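Both parameters are set through a KubeletConfig resource; a sketch with hypothetical names and labels:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods                  # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods      # hypothetical label on the target MachineConfigPool
  kubeletConfig:
    podsPerCore: 10   # per-core cap
    maxPods: 250      # absolute cap; the lower of the two limits applies
```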
Network Configuration
Configuring Network Policy with OpenShift SDN
About network policy
Example NetworkPolicy object
Creating a NetworkPolicy object
Deleting a NetworkPolicy object
Viewing NetworkPolicy object
Configuring multitenant isolation using NetworkPolicy
Creating default network policies for a new project
Modifying the template for new projects
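An example NetworkPolicy object of the kind this block covers, sketched for multitenant-style isolation (the name is hypothetical):

```yaml
# Allow ingress only from pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}      # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}  # any pod in this namespace may connect
```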
Setting DNS to Private
Enabling The Cluster-wide Proxy
Cluster Network Operator Configuration
Configuration parameters for the OpenShift SDN default CNI network provider
Cluster Network Operator example configuration
Configuring Ingress Cluster Traffic
Use an Ingress Controller
Automatically assign an external IP by using a load balancer service
Manually assign an external IP to a service
Configure a NodePort
Red Hat OpenShift Service Mesh Supported Configurations
Supported configurations for Kiali on Red Hat OpenShift Service Mesh
Supported Mixer adapters
3scale Istio Adapter
Red Hat OpenShift Service Mesh installation activities
Service Mesh Operator
Elasticsearch Operator
Jaeger Operator
Kiali Operator
Optimizing Routing
Baseline Ingress Controller (router) performance
Ingress Controller (router) performance optimizations
Storage Configuration
Dynamic Provisioning
About dynamic provisioning
Available dynamic provisioning plug-ins
kubernetes.io/cinder
manila.csi.openstack.org
kubernetes.io/aws-ebs
kubernetes.io/azure-disk
kubernetes.io/azure-file
kubernetes.io/gce-pd
kubernetes.io/vsphere-volume
Defining a Storage Class
Basic StorageClass object definition
StorageClass annotations
RHOSP Cinder object definition
AWS Elastic Block Store (EBS) object definition
Azure Disk object definition
Azure File object definition
GCE PersistentDisk (gcePD) object definition
VMware vSphere object definition
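A basic StorageClass object definition, sketched here for the AWS EBS provisioner listed above (the class name is hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-example                  # hypothetical name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                          # EBS volume type
reclaimPolicy: Delete
```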
Changing The Default Storage Class
Optimizing Storage
Available Persistent Storage Options
Recommended Configurable Storage Technology
Specific application storage recommendations
Other specific application storage recommendations
Deploy OpenShift Container Storage
Prepare for Users
Understanding Identity Provider Configuration
About identity providers in OpenShift Container Platform
After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users.

The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster.

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a Custom Resource (CR) that describes that identity provider and add it to the cluster.

OpenShift Container Platform user names containing /, :, and % are not supported.
Supported identity providers
htpasswd
keystone
ldap
basic-authentication
request-header
github
gitlab
oidc
Identity Provider Parameters
name
mappingMethod
claim
lookup
generate
add
Sample identity provider CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
Using RBAC to Define & Apply Permissions
RBAC overview
Default Cluster Roles
Evaluate Authorization
Projects & Namespaces
Default Projects
View Cluster Roles & Bindings
View Local Roles & Bindings
Add Roles to Users
Create a Local Role
Create a Cluster Role
Local Role Binding Commands
Cluster Role Binding Commands
Create a Cluster Admin
The kubeadmin User
Remove the kubeadmin user
Image Configuration Resources
Image Controller Configuration Parameters
Configure Image Settings
Configuring additional trust stores for image registry access
Importing insecure registries and blocking registries
Configuring image registry repository mirroring
Installing Operators from OperatorHub
Install from OperatorHub using the Web Console
Install from OperatorHub using the CLI
Authentication & Authorization
Understand Authentication
Basic Concepts
Authentication
The authentication layer identifies the user associated with requests to the OCP API
Authorization
The authorization layer then uses information about the requesting user to determine if the request is allowed.
Admission
Admission plug-ins are used to helpregulate how OCP functions.
Admission plug-ins intercept requests to the master API to validate resource requests and ensure policies are adhered to, after the request is authenticated and authorized.
For example, they are commonly used to enforce security policy, resource limitations or configuration requirements.
Users
A user in OpenShift Container Platform is an entity that can make requests to the OCP API server. A user object represents an actor which can be granted permissions in the system by adding roles to it or to its groups. Typically, this represents the account of a developer or an administrator that is interacting with OpenShift Container Platform.
Type
System Users
Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely.
Cluster Administrator
system:admin
Per-node Users
system:node:node1.example.com
Users for use by routers and registries, and various others
system:openshift:registry
User that is used by default forunauthenticated requests
anonymous
Service Accounts
These are special system users associated with projects; some are created automatically when the project is first created.
system:serviceaccount:default:deployer
system:serviceaccount:foo:builder
Regular Users
This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API.
Groups
Type
Regular Group
A user can be assigned to one or more groups, each of which represents a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually.
System/Virtual Group
system:authenticated
Automatically associated with all authenticated users.
system:authenticated:oauth
Automatically associated with all users authenticated with an OAuth access token.
system:unauthenticated
Automatically associated with all unauthenticated users.
API Authentication
OCP OAuth Server
The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use.
OAuth token requests
Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Container Platform API:
- openshift-browser-client: requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins.
- openshift-challenging-client: requests tokens with a user-agent that can handle WWW-Authenticate challenges.
<namespace_route> refers to the namespace's route. This is found by running the following command:
oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host
API impersonation
You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation.
Authentication metrics for Prometheus
openshift_auth_basic_password_count counts the number of oc login user name and password attempts.
openshift_auth_basic_password_count_result counts the number of oc login user name and password attempts by result, success or error.
openshift_auth_form_password_count counts the number of web console login attempts.
openshift_auth_form_password_count_result counts the number of web console login attempts by result, success or error.
openshift_auth_password_total counts the total number of oc login and web console login attempts.
Method
OAuth Access Token
Obtained from the OpenShift Container Platform OAuth server using the <namespace_route>/oauth/authorize and <namespace_route>/oauth/token endpoints.
Sent as an Authorization: Bearer… header.
Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests.
X.509 Client Certificates
Requires an HTTPS connection to the APIserver.
Verified by the API server against a trusted certificate authority bundle.
The API server creates and distributes certificates to controllers to authenticate themselves.
Configure the Internal OAuth Server
OCP OAuth Server
OAuth Token Request Flows & Responses
Options for the Internal OAuth Server
Token Duration Options
Access tokens
Authorize codes
Grant Options
auto
prompt
Configure the Internal OAuth Server's Token Duration
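Token duration is configured on the OAuth singleton; a sketch with an illustrative value:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  tokenConfig:
    # Access tokens expire after 48 hours (value is illustrative).
    accessTokenMaxAgeSeconds: 172800
```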
Register an Additional OAuth Client
OAuth Server Metadata
Troubleshoot OAuth API Events
Understanding Identity Provider Configuration
About Identity Providers in OCP
By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a Custom Resource (CR) that describes that identity provider and add it to the cluster.
Supported Identity Providers
htpasswd
keystone
ldap
basic-authentication
request-header
github
gitlab
oidc
Remove the kubeadmin user
Prerequisites:
- You must have configured at least one identity provider.
- You must have added the cluster-admin role to a user.
- You must be logged in as an administrator.
Procedure:
$ oc delete secrets kubeadmin -n kube-system
Identity Provider Parameters
name
mappingMethod
Defines how new identities are mapped to users when they log in. Enter one of the following values:
- claim: the default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity.
- lookup: looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users.
- generate: provisions a user with the identity's preferred user name. If a user with the preferred user name is already mapped to an existing identity, a unique user name is generated, for example myuser2. This method should not be used in combination with external processes that require exact matches between OpenShift Container Platform user names and identity provider user names, such as LDAP group sync.
- add: provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names.
Sample Identity Provider CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
Configuring Identity Providers
Configure an htpasswd Identity Provider
By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a Custom Resource (CR) that describes that identity provider and add it to the cluster.
NOTE: OpenShift Container Platform user names containing /, :, and % are not supported.
About Identity Provider in OCP
To define an HTPasswd identity provider you must perform the following steps:
1. Create an htpasswd file to store the user and password information. Instructions are provided for Linux and Windows.
2. Create an OpenShift Container Platform secret to represent the htpasswd file.
3. Define the HTPasswd identity provider resource.
4. Apply the resource to the default OAuth configuration.
Create an HTPasswd file using Linux
Create an HTPasswd file using Windows
Create the HTPasswd Secret
Sample Identity Provider CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
Add an Identity Provider to the Cluster
Update users for an HTPasswd Identity Provider
Configure Identity Provider using the Web Console
Configure an LDAP Identity Provider
About LDAP Authentication
During authentication, the LDAP directory is searched for an entry that matches the provided user name. If a single unique match is found, a simple bind is attempted using the distinguished name (DN) of the entry plus the provided password.
These are the steps taken:
1. Generate a search filter by combining the attribute and filter in the configured url with the user-provided user name.
2. Search the directory using the generated filter. If the search does not return exactly one entry, deny access.
3. Attempt to bind to the LDAP server using the DN of the entry retrieved from the search, and the user-provided password.
4. If the bind is unsuccessful, deny access.
5. If the bind is successful, build an identity using the configured attributes as the identity, email address, display name, and preferred user name.
Create an LDAP Secret
Create a ConfigMap
Sample LDAP CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: ldapidp
    mappingMethod: claim
    type: LDAP
    ldap:
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: ""
      bindPassword:
        name: ldap-secret
      ca:
        name: ca-config-map
      insecure: false
      url: "ldap://ldap.example.com/ou=users,dc=acme,dc=com?uid"
Add an Identity Provider to the Cluster
Configure a Basic Authentication Identity Provider
About Basic Authentication
Create a Secret
Create a ConfigMap
Sample basic authentication CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: basicidp
    mappingMethod: claim
    type: BasicAuth
    basicAuth:
      url: https://www.example.com/remote-idp
      ca:
        name: ca-config-map
      tlsClientCert:
        name: client-cert-secret
      tlsClientKey:
        name: client-key-secret
Add an Identity Provider to the Cluster
Example of Apache HTTPD Configuration for Basic Identity Provider
Example /etc/httpd/conf.d/login.conf
<VirtualHost *:443>
  # CGI Scripts in here
  DocumentRoot /var/www/cgi-bin

  # SSL Directives
  SSLEngine on
  SSLCipherSuite PROFILE=SYSTEM
  SSLProxyCipherSuite PROFILE=SYSTEM
  SSLCertificateFile /etc/pki/tls/certs/localhost.crt
  SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

  # Configure HTTPD to execute scripts
  ScriptAlias /basic /var/www/cgi-bin

  # Handles a failed login attempt
  ErrorDocument 401 /basic/fail.cgi

  # Handles authentication
  <Location /basic/login.cgi>
    AuthType Basic
    AuthName "Please Log In"
    AuthBasicProvider file
    AuthUserFile /etc/httpd/conf/passwords
    Require valid-user
  </Location>
</VirtualHost>

Example /var/www/cgi-bin/login.cgi
#!/bin/bash
echo "Content-Type: application/json"
echo ""
echo '{"sub":"userid", "name":"'$REMOTE_USER'"}'
exit 0

Example /var/www/cgi-bin/fail.cgi
#!/bin/bash
echo "Content-Type: application/json"
echo ""
echo '{"error": "Login failure"}'
exit 0
Basic Authentication Troubleshooting
Using RBAC to Define & Apply Permissions
RBAC Overview
Default Cluster Roles
cluster-admin
A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project.
cluster-status
A user that can get basic cluster status information.
self-provisioner
A user that can create their own projects.
basic-user
A user that can get basic information about projects and users.
admin
A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota.
edit
A user that can modify most objects in a project but does not have the power to view or modify roles or bindings.
view
A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings.
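As an illustration of how these defaults are used, a local binding that grants the view role to a user might look like the following RoleBinding. The binding, user, and namespace names here are placeholders, not values from the documentation:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-binding        # hypothetical binding name
  namespace: my-project     # placeholder project/namespace
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice               # placeholder user name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                # one of the default cluster roles listed above
```

This is roughly the object that a command such as oc adm policy add-role-to-user view alice -n my-project would create.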
Evaluate Authorization
Cluster Role Aggregation
Projects & Namespaces
A project has
The mandatory Name
The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters.
The optional displayName
The optional displayName is how the project is displayed in the web console (defaults to name).
The optional description
The optional description can be a more detailed description of the project and is also visible in the web console.
Provide a unique scope for:
Named resources to avoid basic naming collisions
Delegated management authority to trusted users.
The ability to limit community resource consumption
Each project scopes its own set of:
Objects
bindings
componentstatuses
configmaps
endpoints
events
limitranges
namespaces
nodes
persistentvolumeclaims
persistentvolumes
pods
podtemplates
replicationcontrollers
resourcequotas
secrets
serviceaccounts
services
mutatingwebhookconfigurations
validatingwebhookconfigurations
customresourcedefinitions
apiservices
controllerrevisions
daemonsets
deployments
replicasets
statefulsets
tokenreviews
localsubjectaccessreviews
selfsubjectaccessreviews
selfsubjectrulesreviews
subjectaccessreviews
horizontalpodautoscalers
cronjobs
jobs
certificatesigningrequests
leases
endpointslices
events
ingresses
flowschemas
prioritylevelconfigurations
ingressclasses
ingresses
networkpolicies
runtimeclasses
poddisruptionbudgets
podsecuritypolicies
clusterrolebindings
clusterroles
rolebindings
roles
priorityclasses
csidrivers
csinodes
storageclasses
volumeattachments
Policies
Constraints
Service Accounts
Default Projects
OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by kubelet. Pods created for master components in these namespaces are already marked as critical.
View Cluster Roles & Bindings
Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings.
$ oc describe clusterrole.rbac
$ oc describe clusterrolebinding.rbac
View Local Roles & Bindings
Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings.
Users with the admin default cluster role bound locally can view and manage roles and bindings in that project.
$ oc describe rolebinding.rbac
$ oc describe rolebinding.rbac -n joe-project
Add Roles to Users
$ oc adm policy add-role-to-user admin alice -n joe
$ oc describe rolebinding.rbac -n joe
Create a Local Role
$ oc create role podview --verb=get --resource=pod -n blue
$ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue
Create a Cluster Role
$ oc create clusterrole podviewonly --verb=get --resource=pod
Local Role Binding
Table 1. Local role binding operations
$ oc adm policy who-can <verb> <resource>
Indicates which users can perform an action on a resource.
$ oc adm policy add-role-to-user <role> <username>
Binds a specified role to specified users in the current project.
$ oc adm policy remove-role-from-user <role> <username>
Removes a given role from specified users in the current project.
$ oc adm policy remove-user <username>
Removes specified users and all of their roles in the current project.
$ oc adm policy add-role-to-group <role> <groupname>
Binds a given role to specified groups in the current project.
$ oc adm policy remove-role-from-group <role> <groupname>
Removes a given role from specified groups in the current project.
$ oc adm policy remove-group <groupname>
Removes specified groups and all of their roles in the current project.
Cluster Role Binding
Table 2. Cluster role binding operations
$ oc adm policy add-cluster-role-to-user <role> <username>
Binds a given role to specified users for all projects in the cluster.
$ oc adm policy remove-cluster-role-from-user <role> <username>
Removes a given role from specified users for all projects in the cluster.
$ oc adm policy add-cluster-role-to-group <role> <groupname>
Binds a given role to specified groups for all projects in the cluster.
$ oc adm policy remove-cluster-role-from-group <role> <groupname>
Removes a given role from specified groups for all projects in the cluster.
Create a Cluster Admin
The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources.
Prerequisites
You must have created a user to define as the cluster admin.
Procedure
Define the user as a cluster admin:
$ oc adm policy add-cluster-role-to-user cluster-admin <user>
Removing the kubeadmin User
The kubeadmin User
OpenShift Container Platform creates a cluster administrator, kubeadmin, after the installation process completes.
This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program’s output. For example:
INFO Install complete!
INFO Run 'export KUBECONFIG=/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p ' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com
INFO Login to the console with user: kubeadmin, password:
Remove the kubeadmin User
Prerequisites
You must have configured at least one identity provider.
You must have added the cluster-admin role to a user.
You must be logged in as an administrator.
Procedure
Remove the kubeadmin secrets:
$ oc delete secrets kubeadmin -n kube-system
Configuring the User Agent
Understanding & Creating Service Accounts
Overview
A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user’s credentials.
When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that it can access the API without using a regular user’s credentials. For example, service accounts can allow:
Replication controllers to make API calls to create or delete pods.
Applications inside containers to make API calls for discovery purposes.
External applications to make API calls for monitoring or integration purposes.
Create Service Account
oc create sa robot
oc describe sa robot
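A workload then opts into that service account through its pod spec; a minimal sketch (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: robot-client            # hypothetical pod name
spec:
  serviceAccountName: robot     # the service account created above
  containers:
  - name: main
    image: example.com/app:latest   # placeholder image
```

Any API calls the container makes with the mounted token are then authorized as the robot service account rather than as a regular user.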
Using Service Account in Applications
Default Service Account
Cluster-level
replication-controller
Assigned the system:replication-controller role
deployment-controller
Assigned the system:deployment-controller role
build-controller
Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged Security Context Constraint in order to create privileged build pods.
Project-level
builder
Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry.
deployer
Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project.
default
Used to run all other pods unless they specify a different service account.
Using a service account’s credentials externally
$ oc describe secret robot-token-uzkbh -n top-secret
$ oc login --token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
Using a Service Account as an OAuth Client
Scoping Tokens
You can create scoped tokens to delegate some of your permissions to another user or service account. For example, a project administrator might want to delegate the power to create pods.
A scoped token is a token that identifies as a given user but is limited to certain actions by its scope. Only a user with the cluster-admin role can create scoped tokens.
Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules. Then, the request is matched against those rules. The request attributes must match at least one of the scope rules to be passed to the "normal" authorizer for further authorization checks.
User Scopes
User scopes are focused on getting information about a given user. They are intent-based, so the rules are automatically created for you:
user:full - Allows full read/write access to the API with all of the user’s permissions.
user:info - Allows read-only access to information about the user, such as name and groups.
user:check-access - Allows access to self-localsubjectaccessreviews and self-subjectaccessreviews. These are the variables where you pass an empty user and groups in your request object.
user:list-projects - Allows read-only access to list the projects the user has access to.
Role Scopes
The role scope allows you to have the same level of access as a given role filtered by namespace.
role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified by the cluster-role, but only in the specified namespace.
Caveat: This prevents escalating access. Even if the role allows access to resources like secrets, rolebindings, and roles, this scope will deny access to those resources. This helps prevent unexpected escalations. Many people do not think of a role like edit as being an escalating role, but with access to a secret it is.
role:<cluster-role name>:<namespace or * for all>:! - This is similar to the example above, except that including the bang causes this scope to allow escalating access.
Using Bound Service Account Tokens
Managing Security Context Constraints
About SCC
Similar to the way that RBAC resources control user access, administrators can use Security Context Constraints (SCCs) to control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with in order to be accepted into the system.
SCCs allow an administrator to control:
Whether a pod can run privileged containers.
The capabilities that a container can request.
The use of host directories as volumes.
The SELinux context of the container.
The container user ID.
The use of host namespaces and networking.
The allocation of an FSGroup that owns the pod’s volumes.
The configuration of allowable supplemental groups.
Whether a container requires the use of a read only root file system.
The usage of volume types.
The configuration of allowable seccomp profiles.
The cluster contains 8 Default SCCs
anyuid
hostaccess
hostmount-anyuid
hostnetwork
node-exporter
nonroot
privileged
The privileged SCC allows:
Users to run privileged pods
Pods to mount host directories as volumes
Pods to run as any user
Pods to run with any MCS label
Pods to use the host’s IPC namespace
Pods to use the host’s PID namespace
Pods to use any FSGroup
Pods to use any supplemental group
Pods to use any seccomp profiles
Pods to request any capabilities
restricted
The restricted SCC:
Ensures that pods cannot run as privileged.
Ensures that pods cannot mount host directory volumes.
Requires that a pod run as a user in a pre-allocated range of UIDs.
Requires that a pod run with a pre-allocated MCS label.
Allows pods to use any FSGroup.
Allows pods to use any supplemental group.
For more information about each SCC, see the kubernetes.io/description annotation available on the SCC.
SCC Settings
Controlled by a boolean
Fields of this type default to the most restrictive value. For example, AllowPrivilegedContainer is always set to false if unspecified.
Controlled by an allowable set
Fields of this type are checked against the set to ensure their value is allowed.
Controlled by a strategy
Items that have a strategy to generate a value provide:
A mechanism to generate the value, and
A mechanism to ensure that a specified value falls into the set of allowable values.
SCC Strategies
RunAsUser
MustRunAs - Requires a runAsUser to be configured. Uses the configured runAsUser as the default. Validates against the configured runAsUser.
MustRunAsRange - Requires minimum and maximum values to be defined if not using pre-allocated values. Uses the minimum as the default. Validates against the entire allowable range.
MustRunAsNonRoot - Requires that the pod be submitted with a non-zero runAsUser or have the USER directive defined in the image. No default provided.
RunAsAny - No default provided. Allows any runAsUser to be specified.
SELinuxContext
MustRunAs - Requires seLinuxOptions to be configured if not using pre-allocated values. Uses seLinuxOptions as the default. Validates against seLinuxOptions.
RunAsAny - No default provided. Allows any seLinuxOptions to be specified.
SupplementalGroups
MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against all ranges.
RunAsAny - No default provided. Allows any supplementalGroups to be specified.
FSGroup
MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against the first ID in the first range.
RunAsAny - No default provided. Allows any fsGroup ID to be specified.
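Put together, these strategies appear as fields on the SCC object itself. A minimal sketch with illustrative values, not a recommended policy; the SCC name and UID range are placeholders:

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-scc        # hypothetical SCC name
runAsUser:
  type: MustRunAsRange     # validated against the range below
  uidRangeMin: 1000        # placeholder minimum UID
  uidRangeMax: 2000        # placeholder maximum UID
seLinuxContext:
  type: MustRunAs          # falls back to pre-allocated values if none given
supplementalGroups:
  type: RunAsAny           # any supplemental groups allowed
fsGroup:
  type: MustRunAs
```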
Controlling Volumes
The usage of specific volume types can be controlled by setting the volumes field of the SCC. The allowable values of this field correspond to the volume sources that are defined when creating a volume: azureFile, azureDisk, flocker, flexVolume, hostPath, emptyDir, gcePersistentDisk, awsElasticBlockStore, gitRepo, secret, nfs, iscsi, glusterfs, persistentVolumeClaim, rbd, cinder, cephFS, downwardAPI, fc, configMap, vsphereVolume, quobyte, photonPersistentDisk, projected, portworxVolume, scaleIO, storageos, * (a special value to allow the use of all volume types), and none (a special value to disallow the use of all volume types; exists only for backwards compatibility).
The recommended minimum set of allowed volumes for new SCCs is:
emptyDir
persistentVolumeClaim
configMap
secret
projected
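In an SCC definition, that recommended minimum set is expressed through the volumes field; a sketch (the SCC name is a placeholder):

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-scc-volumes   # hypothetical SCC name
volumes:                      # only these volume sources are permitted
- emptyDir
- persistentVolumeClaim
- configMap
- secret
- projected
```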
Admission
SCC Prioritization
About Pre-Allocated SCC Values
Example SCC
Create SCC
Role-Based Access to SCC
SCC Reference Commands
List
$ oc get scc
Examine
$ oc describe scc restricted
Delete
$ oc delete scc <scc_name>
Update
$ oc edit scc <scc_name>
Impersonating the system:admin User
Syncing LDAP Groups
Networking
Understanding Networking
Accessing Hosts
Cluster Network Operator in OCP
DNS Operator in OCP
Ingress Operator in OCP
Configuring the Ingress Controller
Using SCTP on a Bare Metal Cluster
Network Policy
Multiple Networks
Hardware Networks
OpenShift SDN Default CNI Network Provider
OVN-Kubernetes Default CNI Network Provider
Configuring Routes
Configuring Ingress Cluster Traffic
About External IP
External IP Address Blocks for your Cluster
Configure Cluster Wide Proxy
Configure a Custom PKI
Load Balancing on RHOSP
Registry
Image Registry
Image Registry Operator in OCP
Setting Up & Configuring the Registry
Registry Options
Accessing the Registry
Exposing the Registry
Storage
Understanding Persistent Storage
Configuring Persistent Storage
Using Container Storage Interface
Expanding Persistent Volumes
Dynamic Provisioning
Migrate
Migration Toolkit for Containers
Migrating from OCP 3
Migrating from OCP 4.1
Migrating from OCP 4.2 or later
Manage
Backup & Restore
Backup etcd
Replacing an Unhealthy etcd Member
Shutting Down the Cluster gracefully
Restarting the Cluster gracefully
Disaster Recovery
Machine Management
Creating Machineset
Manually Scaling a Machineset
Modifying a Machineset
Deleting a Machine
Applying Autoscaling to OCP Cluster
Creating Infrastructure Machinesets
User-provisioned Infrastructure
Deploying Machine Health Checks
Metering
About Metering
Installing Metering
Upgrading Metering
Configuring Metering
Reports
Using Metering
Examples of Using Metering
Troubleshooting & Debugging Metering
Uninstalling Metering
Web Console
Accessing the Web Console
Using Dashboard to Get Cluster Information
Configuring the Web Console
Customizing the Web Console
About Developer Perspective in the Web Console
Disabling the Web Console
Monitor
Logging
Understanding Cluster Logging
Installing Cluster Logging
Configuring Cluster Logging Deployment
Viewing Cluster Logs
Forwarding Logs to Third-party Systems
Collecting & Storing Kubernetes Events
Updating Cluster Logging
Troubleshooting Cluster Logging
Uninstalling Cluster Logging
Exported Fields
Monitoring
Scalability & Performance
Recommended Practices for Installing Large Clusters
Recommended Host Practices
Recommended Cluster Scaling Practices
Using the Node Tuning Operator
Using Cluster Loader
Using CPU Manager
Using Topology Manager
Scaling the Cluster Monitoring Operator
Planning Environment According to Object Maximums
Optimizing Storage
Optimizing Routing
What Huge Pages Do & How They are Consumed by Applications
Support
Getting Support
Gathering Data about Cluster
Summarizing Cluster Specifications
Remote Health Monitoring with Connected Clusters
Troubleshooting
Integrate
Jaeger
OpenShift Virtualization
Service Mesh
Serverless Application
Develop
Applications
Projects
Application Life Cycle Management
Deployment
Quotas
Monitoring Project & Application Metrics using the Developer Perspective
Monitoring Application Health
Idling Application
Pruning Objects to Reclaim Resources
Using the Red Hat Marketplace
Builds
Understand Image Builds
Understand Build Configurations
Create Build Inputs
Manage Build Input
Use Build Strategies
Custom Image Builds with Buildah
Perform Basic Builds
Trigger & Modify Builds
Perform Advanced Builds
Use Red Hat Subscription in Builds
Secure Builds by Strategy
Build Configuration Resources
Troubleshoot Builds
Setup Additional Trusted CA for Builds
Create & Use ConfigMaps
Images
Configure the Samples Operator
Using the Samples Operator with an Alternate Registry
Understand Containers, Images & ImageStreams
Create Image
Manage Image
Manage ImageStream
Image Configuration Resources
Use Templates
Use Ruby on Rails
Use Image
Nodes
Working with Pods
Controlling Pod Placement onto Nodes
Using Jobs & Daemonsets
Working with Nodes
Working with Containers
Working with Clusters
Pipelines
Understand OpenShift Pipelines
Install OpenShift Pipelines
Uninstall OpenShift Pipelines
Create CI/CD Solutions for Applications using OpenShift Pipelines
Work with OpenShift Pipelines using the Developer Perspective
OpenShift Pipelines Release Notes
Operators
Understand Operator
What are Operators?
Operator Framework Glossary of Common Terms
Bundle
Bundle Image
Catalog Source
Catalog Image
Channel
Channel Head
ClusterServiceVersion
Dependency
Index Image
InstallPlan
OperatorGroup
Package
Registry
Subscription
Update Graph
Operator Framework Packaging Format
Package Manifest
Bundle
Manifests
Annotation
Dependencies
opm CLI
Operator Lifecycle Manager (OLM)
Understand OperatorHub
CRDs (Custom Resource Definitions)
Types
Kubernetes Operators managed by Operator Lifecycle Manager (OLM)
1. Cloud Credential Operator
2. OpenShift Controller Manager Operator
3. Cluster Version Operator
4. Console Operator
5. DNS Operator
6. ETCD Cluster Operator
7. Ingress Operator
8. Kubernetes API Server Operator
9. Kubernetes Controller Manager Operator
10. Kubernetes Scheduler Operator
11. Machine API Operator
12. Machine Config Operator
13. Marketplace Operator
14. Node Tuning Operator
15. OpenShift API Server Operator
16. Prometheus Operator
Cluster Operators managed by OpenShift Cluster Version Operator
1. Cluster Authentication Operator
2. Cluster AutoScaler Operator
3. Cluster Image Registry Operator
4. Cluster Monitoring Operator
5. Cluster Network Operator
6. Cluster Samples Operator
7. Cluster Storage Operator
8. Cluster SVCAT API Server Operator
9. Cluster SVCAT Controller Manager Operator
Tasks for
Application Developer
Create application from installedOperator
Install Operator in your namespace
Manage admission webhooks in OLM
Administrator
Viewing Operator Status
Adding/Deleting Operators to/from a Cluster
Allowing non-cluster Administrators to install Operators
Managing Custom Catalog
Using OLM on Restricted Network
Configuring Proxy support in OLM
Operator Developer
Get Started with Operator SDK
Create Ansible-based Operators
Create Helm-based Operators
Generate ClusterServiceVersion (CSV)
Work with Bundle Images
Validate Operators using the ScoreCard
Configure Built-in Monitoring with Prometheus
Configure Leader Election
Operator SDK Reference
Appendices
Cost Management
Getting Started with Cost Management
Managing Cost Data using Tagging
Using Cost Models
References
CLI Tools
REST API
OCP 4.1 New Features
Deprecated Features
Hawkular --> Cluster Monitoring
Cassandra --> Cluster Monitoring
Heapster --> Prometheus Adapter
Atomic Host --> RHEL CoreOS
System Containers --> RHEL CoreOS
projectatomic/docker-1.13 additional search registries --> CRI-O is the default container runtime on RHCOS and Red Hat Enterprise Linux.
oc adm diagnostic --> Operator-based Diagnostics
oc adm registry --> Image Registry Operator
Custom strategy builds using Docker --> If you want to continue using custom builds, you should replace your Docker invocations with Podman or Buildah. The custom build strategy will not be removed, but the functionality changed significantly.
Cockpit --> Improved Web Console
Stand-alone Registry Installation --> Quay is Red Hat’s enterprise container image registry.
DNSmasq --> CoreDNS is the default
External etcd Nodes --> etcd is always on the cluster
CloudForms OpenShift Provider and Podified CloudForms --> Built-in management tooling
Volume Provisioning via installer --> Use dynamic volumes or, if NFS is required, NFS provisioner.
Blue-green installation method --> Ease of upgrade is a core value of OCP 4.1
OpenShift Service Broker and Service Catalog --> Reference the Operator Framework and Operator Lifecycle Manager (OLM) to continue providing your applications to OpenShift 4 clusters.
oc adm ca --> Functions are managed by Operators internally.
oc adm create-bootstrap-policy-file --> Functions are managed by Operators internally.
oc adm policy reconcile-sccs --> Functions are managed by openshift-apiserver internally
Web Console --> a new Web Console
New Features & Enhancements
1. Operator
Operator Lifecycle Manager (OLM)
2. Installation & Upgrade
Operator Hub
3. Storage
4. Scale
Cluster Maximums
Node Tuning Operator
5. Cluster Monitoring
New Alerting User Interface
Telemeter
Autoscale Pod horizontally based on the Custom Metrics API
Autoscale Pod horizontally based on the Resource Metrics API
6. Developer Experience
Multi-stage Dockerfile Build Generally Available
7. Registry
The Registry is now managed by an Operator
8. Networking
Cluster Network Operator (CNO)
OpenShift SDN
Multus
SR-IOV
F5 Router Plug-in Support
9. Web Console
Developer Catalog
New Management Screen
10. Security
Notable Technical Changes
Build powered by Buildah
Security Context Constraints
Service CA Bundle Changes
OpenShift Service Broker & Service Catalog Deprecation
Service Catalog no longer installed by default
Template Service Broker no longer installed by default
OpenShift Ansible Service Broker no longer installed by default
Several oc adm commands are now deprecated
The Configurability of the Image Policy Admission Plugin is not present
Technology Preview Features
OCP 4.2 New Features
Deprecated Features
Deprecation of the Service Catalog, the Template Service Broker, the Ansible Service Broker, and their Operators
Deprecation of cluster role APIs
Deprecation of OperatorSources and CatalogSourceConfigs
Deprecation of /oapi endpoint from oc
Deprecation of the -short flag of oc version
oc adm migrate commands
Persistent volume snapshots
EFS
Recycle reclaim policy
New Features & Enhancements
1. Operator
New location for Operator Product Documentation
Scoped Operator Installation
Ingress Operator
Machine Config Operator
Node Feature Discovery Operator
Node Tuning Operator enhancements
2. Installation & Upgrade
OCP upgrades phased-rollout
CLI-based installation
openshift-install
oc adm upgrade
Installation in restricted networks
Three-node bare metal deployment
Cluster-wide egress proxy
New platform boundary (OpenShift + OS Integration)
OpenShift Container Platform is now aware of the entire infrastructure and brings the operating system under management. This makes installation and updates seamless across the platform and the underlying operating system. Everything is managed as a single entity.
Full stack automation (Installer-provisioned Installation - IPI) and pre-existing infrastructure (User-provisioned Installation - UPI).
Full stack automated deployment
AWS
GCP
Azure
RHOSP
Red Hat Cluster Application Migration Tools & Migration Assistant
3. Storage
Persistent Volume using the Local Storage Operator
OpenShift Container Storage Interface (CSI)
Raw Block Volume support
4. Scale
Cluster limits
5. Developer Experience
OpenShift Do
CodeReady Containers
6. Nodes
CRI-O support
Whitelisting of sysctls configuration
Master nodes are now schedulable
7. Networking
Installer-provisioned OpenShift on OpenStack
Open Virtual Networking (OVN) for Open vSwitch
Enable Internal Ingress Controller for private cluster
Kubernetes CNI plug-in additions & enhancements
Enablement of GPUs in Cluster
8. Web Console
Console customization options
New API Explorer
Machine Autoscaler
Developer Perspective
Prometheus queries
Identity Providers
General Web Console updates
Notable Technical Changes
corsAllowedOrigins
New CNI plug-ins
Multus: bridge
ipvlan
Cluster Network Operator supports SimpleMACVLAN
Builds maintain their layers
Builds on Windows
Ingress controller support disabled
Reduce OperatorHub complexity by removing CatalogSourceConfig usage
Global catalog namespace change
the openshift-marketplace namespace
OCP 4.3 New Features
Deprecated Features
Pipelines build strategy
Beta workload alerts
Service Catalog, Template Service Broker, Ansible Service Broker, and their Operators
Deprecation of OperatorSources and CatalogSourceConfigs
VirtualBox support for CodeReady Containers
Unsupported Features
Cluster logging no longer allows forwarding logs by editing the Fluentd Daemonset
Persistent volume snapshots
The ose-local-storage-provisioner container has been removed
New Features & Enhancements
1. Operator
Samples Operator
Image Registry Operator
Simplified mirroring of OperatorHub
Operator telemetry & alerts
2. Installation & Upgrade
OCP Upgrade phased-rollout
Support for FIPS cryptography
Deploy Private Cluster on AWS, Azure or GCP
3. Security
Automated rotation of service serving certificates CA
Encrypt data stored in etcd
4. Cluster Monitoring
Improvements for PromQL query browser in web console
Use Pod capacity metric for KubeletTooManyPods alert
Monitor your own services
Querying metrics in the web console
5. Machine API
Automatically repair damaged machines with machine health checking
6. Logging
Log forwarding
7. Developer Experience
OpenShift Do enhancements
Using Helm
8. Web Console
New Project dashboard
New NamespaceDashboard option in the ConsoleLink Custom Resource Definition
Provide cluster-wide third-party user interfaces
New ConsoleYAMLSample Custom Resource Definition
Open a Support case from the web console
View security vulnerabilities
New User Management section
Create alert receivers
Developer perspective
CSI provisioners now shown on storage class creation page
9. Networking
Configure network policy
Kuryr CNI support for Red Hat OpenStack Platform (RHOSP)
10. Scale
Cluster maximums
11. Storage
OpenShift Container Storage 4.2
Persistent storage Using iSCSI
Raw block volume support
CSI volume expansion
Use tolerations in Local Storage Operator
Notable Technical Changes
Operator SDK v0.12.0
Cluster logging Fluent forward configuration changes
OCP 4.4 New Features
Deprecated Features
OpenShift CLI config flag
OpenShift CLI timeout flag
OpenShift editor
machine CIDR network parameters
Service Catalog, Template Service Broker, Ansible Service Broker, and their Operators
Deprecation of OperatorSources, CatalogSourceConfigs, and packaging format
Removed Features
OpenShift CLI secret subcommands
OpenShift CLI build-logs command
Deprecated upstream Kubernetes metrics have been removed
High granularity request duration buckets in Prometheus
New Features & Enhancements
1. Operator
etcd cluster Operator
Insights Operator now collects anonymized CSRs
Remove Samples Operator if it cannot connect to registry.redhat.io
2. Installation & Upgrade
Installing a cluster on Microsoft Azure using user-provisioned infrastructure
Installing a cluster on Red Hat Virtualization using installer-provisioned infrastructure
Installing a cluster on OpenStack using user-provisioned infrastructure
Installing a cluster on OpenStack no longer requires the Swift object storage service
Clusters installed on OpenStack support self-signed certificates
OpenStack validates RHCOS images by checking sha256 checksum
Support for east-west traffic with OVN load balancing on OpenStack with Kuryr
3. Security
Support for bound service account tokens
The oauth-proxy imagestream is now available
kube-apiserver checks client certificates before tokens
4. Nodes
Evicting Pods using the descheduler
Controlling overcommit and managing container density on nodes
5. Cluster monitoring
Monitoring Dashboards in web console
hwmon collector disabled in node-exporter
cluster-reader can read node metrics
Cluster alert for when multiple containers are killed
New API server alerts
Permission updates for Prometheus Operator
Cluster monitoring component version updates
6. Web Console
IBM Marketplace integration in OperatorHub
Edit applications in the Topology view
Create Helm releases
7. Networking
Stream Control Transmission Protocol(SCTP) on OpenShift Container Platform
Using DNS forwarding
HAProxy upgraded to version 2.0
Ingress Enhancements
8. Storage
Persistent storage using CSI snapshots
Persistent storage using CSI cloning
9. Scale
Cluster maximums
10. Developer Experience
Automatic image pruning
Build objects report conditions in status
Recreate rollouts for image registry
odo enhancements
OpenShift Pipelines
Helm 3 GA support
11. Documentation updates & conventions
OpenShift documentation licensed underApache license 2.0
Copy button for docs.openshift.com site
OpenShift Container Engine renamed to OpenShift Kubernetes Engine
Documentation is now available for the 4.3 version of Azure Red Hat OpenShift
Notable Technical Changes
Sending cluster logs using the Fluentd syslog plug-in (RFC 3164)
Operator SDK v0.15.0
OCP 4.5 New Features
Deprecated Features
Jenkins Pipeline build strategy
v1beta1 CRDs
Custom label no longer in use
OperatorSources and CatalogSourceConfigs block cluster upgrades
Ignition config spec v2
Removed Features
OpenShift CLI commands and flags removed
The oc run OpenShift CLI command now only creates Pods
Service Catalog, Template Service Broker, and their Operators
CatalogSourceConfigs removed
New Features & Enhancements
1. Operators
Bundle Format for packaging Operators and opm CLI tool
v1 CRD support in Operator Lifecycle Manager
Report etcd member status conditions
Admission webhook support in OLM
ConfigMap configurations added from openshift-config namespace
Read-only Operator API
Upgrading metering and support for respecting a cluster-wide proxy configuration
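The v1 CRD support noted above refers to the `apiextensions.k8s.io/v1` API, which, unlike `v1beta1`, requires a structural schema for every served version. A minimal sketch with a hypothetical `CronTab` resource (group and field names are illustrative):

```yaml
# Hedged sketch: a v1 CustomResourceDefinition with a structural schema.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:     # structural schema is mandatory in v1
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```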
2. Installation & Upgrade
Installing a cluster on vSphere using installer-provisioned infrastructure
Installing a cluster on GCP using user-provisioned infrastructure and a shared VPC
Three-node bare metal deployments
Restricted network cluster upgrade improvements
Migrating Azure private DNS zones
Built-in help for install-config.yaml supported fields
Encrypt EBS instance volumes with a KMS key
Install to a pre-existing VPC with multiple CIDRs on AWS
Adding custom domain names to AWS Virtual Private Cloud (VPC) DHCP option sets
Provisioning bare metal hosts using IPv6 with Ironic
Custom networks and subnets for clusters on RHOSP
Additional networks for clusters on RHOSP
Multiple version schemes accepted when installing RPM packages
SSH configuration no longer required for debug information
Master nodes can be named any valid hostname
Octavia OVN provider driver supported on previous RHOSP versions
Octavia OVN provider driver supports listeners on the same port
3. Security
Using the oauth-proxy imagestream in restricted network installations
4. Images
Mirroring release images to and from files
Mirroring release image signatures
5. Machine API
AWS MachineSets support spot instances
Autoscaling the minimum number of machines to 0
MachineHealthCheck with an empty selector monitors all machines
Describing machine and MachineSet fields by using oc explain
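The MachineHealthCheck behavior noted above, where an empty selector matches all machines, can be sketched as follows; the resource name, conditions, and timeouts are illustrative assumptions:

```yaml
# Hedged sketch: a MachineHealthCheck whose empty selector matches every machine.
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: monitor-all
  namespace: openshift-machine-api
spec:
  selector: {}            # empty selector: monitor all machines in the cluster
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s         # node Ready=False for 5 minutes marks it unhealthy
  - type: Ready
    status: Unknown
    timeout: 300s
```

Because an empty selector covers control plane machines too, the remediation limits (such as `maxUnhealthy`) deserve careful review before applying a check this broad.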
6. Nodes
New descheduler strategy is available
Vertical Pod Autoscaler Operator
Anti-affinity control plane node scheduling on RHOSP
7. Cluster Monitoring
Monitor your own services
8. Cluster Logging
Elasticsearch version upgrade
New Elasticsearch log retention feature
Kibana link in web console moved
9. Web Console
New Infrastructure Features filters for Operators in OperatorHub
Developer Perspective
Streamlined steps for configuring alerts from the cluster dashboard
10. Scale
Cluster maximums
11. Networking
Migrating from the OpenShift SDN default CNI network provider
Ingress enhancements
HAProxy upgraded to version 2.0.14
HTTP/2 Ingress support
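HTTP/2 Ingress support is opt-in. One hedged sketch is enabling it per IngressController through an annotation; the annotation key shown here should be verified against the 4.5 networking docs before use:

```yaml
# Hedged sketch: enable HTTP/2 on the default IngressController.
# The annotation key is an assumption to verify against the release docs.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
  annotations:
    ingress.operator.openshift.io/default-enable-http2: "true"
```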
12. Developer experience
oc new-app now produces Deployment resources
Support node affinity scheduler in image registry CRD
Virtual hosted buckets for custom S3 endpoints
Node pull credentials during build and imagestream import
13. Backup & Restore
Gracefully shutting down and restarting a cluster
14. Disaster Recovery
Automatic control plane certificate recovery
15. Storage
Persistent storage using the AWS EBS CSI Driver Operator
Persistent storage using the OpenStack Manila CSI Driver Operator
Persistent storage using CSI inline ephemeral volumes
Persistent storage using CSI volume cloning
16. OpenShift Virtualization
OpenShift Virtualization support
Notable Technical Changes
Operator SDK v0.17.2
terminationGracePeriod parameter support
/readyz configuration for API server health probe