Category: openshift

The oc adm must-gather tool is essential for troubleshooting and diagnostics in OpenShift. With the release of OpenShift 4.17, new flags have been introduced to enhance flexibility and precision in data collection. These additions enable administrators to gather logs more efficiently while reducing unnecessary data collection.

New Flags in Must-Gather

--since

This flag allows users to collect logs newer than a specified duration. For example:

oc adm must-gather --since=24h

This command gathers logs from the past 24 hours, making it easier to pinpoint recent issues.

--since-time

The --since-time flag lets users specify an exact timestamp (RFC3339 format) to collect logs from a particular point in time.

oc adm must-gather --since-time=2025-02-10T11:12:39Z

This is useful for investigating incidents that occurred at a specific time.

Existing Flags for Enhanced Customization

Along with the new additions, several existing flags provide more control over the data collection process:

  • --all-images: Uses the default image for all operators annotated with operators.openshift.io/must-gather-image.
  • --dest-dir: Specifies a local directory to store gathered data.
  • --host-network: Runs must-gather pods with hostNetwork: true for capturing host-level data.
  • --image: Allows specifying a must-gather plugin image to run.
  • --node-name: Targets a specific node for data collection.
  • --node-selector: Selects nodes based on a node selector.
  • --run-namespace: Runs must-gather pods within an existing privileged namespace.
  • --source-dir: Defines the directory from which data is copied.
  • --timeout: Sets a time limit for data gathering.
  • --volume-percentage: Adjusts the maximum storage percentage for gathered data.
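
As an illustrative sketch, several of these flags can be combined in a single run; the image, directory, and time values below are assumptions, not defaults:

oc adm must-gather \
  --image=registry.redhat.io/openshift4/ose-must-gather:latest \
  --dest-dir=/tmp/must-gather \
  --timeout=30m \
  --since=2h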

Conclusion

The introduction of --since and --since-time in OpenShift 4.17 significantly improves must-gather’s efficiency by enabling targeted log collection. By leveraging these and other available flags, administrators can streamline troubleshooting and optimize diagnostics.

For a deeper dive into must-gather and its latest enhancements, check out the official OpenShift documentation.

openshift

Managing virtual machines in an Infrastructure as Code (IaC) environment requires efficiency and reliability. One of the central ideas here is having a single source of truth (SSoT) to ensure consistency in resources, improve automation, and leverage processes such as version control. In this kind of controlled environment, we can track and test changes and scale with ease.

This learning path will showcase how to use Red Hat OpenShift GitOps with a Git repository as a single source of truth for our infrastructure, thereby enhancing automation, consistency, and efficiency for VMs in Red Hat OpenShift Virtualization.

https://developers.redhat.com/learn/manage-openshift-virtual-machines-gitops?sc_cid=RHCTG0250000438530

openshift

Applying a specific node selector to all infrastructure components will guarantee that they are scheduled on nodes with that label. See more details on node selectors in "Placing pods on specific nodes using node selectors," and about node labels in "Understanding how to update labels on nodes."

Our node label and matching selector for infrastructure components will be node-role.kubernetes.io/infra: "".
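
For example, the default router is one such infrastructure component. A sketch of pinning it to the infra nodes through the IngressController's nodePlacement field (a merge patch; other infrastructure components expose analogous settings):

oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}}}}}'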

To prevent other workloads from also being scheduled on those infrastructure nodes, we need one of two solutions:

  • Apply a taint to the infrastructure nodes and tolerations to the desired infrastructure workloads (see the sketch after this list).
    OR
  • Apply a completely separate label to your other nodes and matching node selector to your other workloads such that they are mutually exclusive from infrastructure nodes.
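
As a sketch of the first option, tolerations can be added to an infrastructure workload with the same kind of merge patch; here again the IngressController, with tolerations matching the reserved taints applied later in this article:

oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"nodePlacement":{"tolerations":[
        {"key":"node-role.kubernetes.io/infra","value":"reserved","effect":"NoSchedule"},
        {"key":"node-role.kubernetes.io/infra","value":"reserved","effect":"NoExecute"}]}}}'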

TIP: To ensure high availability (HA), each cluster should have three infrastructure nodes, ideally spread across availability zones. See more details about rebooting nodes running critical infrastructure.

TIP: Review the infrastructure node sizing suggestions.

By default, all nodes except the masters are labeled with node-role.kubernetes.io/worker: "". We will be adding node-role.kubernetes.io/infra: "" to the infrastructure nodes.

However, if you want to remove the existing worker role from your infra nodes, you will need a custom MachineConfigPool (MCP) to ensure that those nodes continue to update correctly. This is because the worker MCP is responsible for updating and upgrading the nodes, and it finds them by looking for the worker node-role label. If you remove that label, you must have a MachineConfigPool that can find your infra nodes by the infra node-role label instead. Previously this was not the case, and removing the worker label could cause issues in OCP <= 4.3.

The infra MCP definition below selects all MachineConfigs whose role label is either "worker" or "infra" and applies them to any Machines or Nodes that have the "infra" role label. In this manner, you ensure that your infra nodes can upgrade without the "worker" role label.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
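
Once the pool exists, a quick sanity check (output omitted) confirms the infra nodes were adopted by the new MCP:

oc get machineconfigpool infra
oc get nodes -l node-role.kubernetes.io/infra=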

If you are not using the MachineSet API to manage your nodes, labels and taints are applied manually to each node:

Label it:

oc label node <node-name> node-role.kubernetes.io/infra=
oc label node <node-name> node-role.kubernetes.io=infra

Taint it:

oc adm taint nodes -l node-role.kubernetes.io/infra node-role.kubernetes.io/infra=reserved:NoSchedule node-role.kubernetes.io/infra=reserved:NoExecute
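
To confirm the labels and taints landed, something like the following can be used:

oc get nodes -l node-role.kubernetes.io/infra=
oc describe nodes -l node-role.kubernetes.io/infra= | grep -i taint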

openshift Uncategorized

Infrastructure nodes allow customers to isolate infrastructure workloads for two primary purposes:

  1. to prevent incurring billing costs against subscription counts and
  2. to separate maintenance and management.

This solution is meant to complement the official documentation on creating Infrastructure nodes in OpenShift 4. In addition, there is a great OpenShift Commons video describing this whole process: OpenShift Commons: Everything about Infra nodes.

To resolve the first problem, all that is needed is a node label added to a particular node, a set of nodes, or to Machines and a MachineSet. Red Hat subscription vCPU counts omit any vCPU reported by a node labeled node-role.kubernetes.io/infra: "", and you will not be charged for these resources by Red Hat. Please see How to confirm infra nodes are not included in subscription cost in OpenShift Cluster Manager? to confirm your vCPU count reports correctly after applying the configuration changes in this article.

To resolve the second problem, we need to schedule infrastructure workloads specifically to infrastructure nodes and also prevent other workloads from being scheduled on them. There are two strategies for accomplishing this that we will go into later.

You may ask why infrastructure workloads are different from those workloads running on the control plane. At a minimum, an OpenShift cluster contains 2 worker nodes in addition to 3 control plane nodes. While control plane components critical to the cluster operability are isolated on the masters, there are still some infrastructure workloads that by default run on the worker nodes – the same nodes on which cluster users deploy their applications.

Note: To know the workloads that can be executed in infrastructure nodes, check the “Red Hat OpenShift control plane and infrastructure nodes” section in OpenShift sizing and subscription guide for enterprise Kubernetes.

Changes to any nodes hosting these infrastructure components should not be undertaken lightly, and in general should be planned separately from changes to nodes running normal application workloads.

openshift

It is not possible to change the domain of the API, whether internal or external.

Starting with OpenShift 4.8, it is possible to change the domain of the console and downloads routes after cluster installation.
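
A sketch of that change, using the componentRoutes field of the cluster Ingress config (the hostname below is illustrative; a TLS secret for the custom hostname may also be required, see the linked document):

oc patch ingress.config.openshift.io cluster --type=merge \
  -p '{"spec":{"componentRoutes":[
        {"name":"console","namespace":"openshift-console","hostname":"console.apps.example.com"}]}}'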

Choose your domain name carefully.

For more information, see this document from Red Hat: https://access.redhat.com/solutions/4853401

openshift

Yesterday I was helping a customer deploy an OpenShift 4.16.x cluster. The first step was the bastion host preparation, which includes the setup of the OCP CLI.

We did the default installation of the CLI on top of RHEL 8.x, but after the installation we got the following error:

oc version
oc: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by oc)
oc: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by oc)
oc: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by oc)

We tried to compile the newer GLIBC, but without success.

The solution: download the CLI version compiled for RHEL 8. Link here for amd64
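
A sketch of the workaround; the tarball URL is a placeholder for the RHEL 8 build linked above, and the exact filename varies by release:

curl -LO <rhel8-oc-client-tarball-url>   # placeholder: use the amd64 RHEL 8 link above
tar -xzf openshift-client-linux-amd64-rhel8-*.tar.gz oc kubectl
sudo mv oc kubectl /usr/local/bin/
oc version --client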

Linux openshift

Today I got the following error when I tried to run the command:
./cpd-cli manage login-entitled-registry ${IBM_ENTITLEMENT_KEY}

Run command: podman run -d --name olm-utils-play --env CMD_PREFIX=manage -v /opt/cpd-cli-linux-EE-12.0.2-39/cpd-cli-workspace/olm-utils-workspace/work:/tmp/work icr.io/cpopen/cpd/olm-utils:latest
[ERROR] 2023-03-06T12:41:55.991666Z Command exception: Failed to start the olm-utils-play container: Error: runc: container_linux.go:370: starting container process caused: error adding seccomp filter rule for syscall bdflush: permission denied: OCI permission denied (exit status 126)
[ERROR] 2023-03-06T12:41:55.998354Z RunPluginCommand:Execution error: exit status 1

This error happened because the runc version was too old. My bastion host was RHEL 8.4. To solve the problem I just updated the system, and everything worked.
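
A sketch of the fix on RHEL 8 (package names assumed; a full dnf update achieves the same result):

sudo dnf update -y runc podman   # a newer runc handles the container's seccomp profile correctly
./cpd-cli manage login-entitled-registry ${IBM_ENTITLEMENT_KEY}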

Cloud openshift podman

If a user has been deleted from the OpenShift web console, they will no longer be able to log in. The user's account and associated resources will also be deleted. If the user needs access again, the account will have to be re-created.

To recreate the user, you can use the command:

oc create user <username>

Use the oc create useridentitymapping command to map the identity to the user.

Use the command oc get identities to list all the identities you have configured, and then map the user. For example:

oc create useridentitymapping homeldap:Y249a2VuaW8sb3U9 kenio

openshift

Linux Containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods.

Red Hat Enterprise Linux (RHEL) base images are meant to form the foundation for the container images you build. As of April 2019, new Universal Base Image (UBI) versions of RHEL standard, minimal, init, and Red Hat Software Collections images are available that add to those images the ability to be freely redistributed.

RHEL minimal images provide a base for your own container images that is less than half the size of the standard image, while still being able to draw on RHEL software repositories and maintain any compliance requirements your software has.

When building custom images using a Containerfile or Dockerfile, you sometimes need to install packages on top of the RHEL minimal images. You need to use microdnf to install them, not dnf/yum.
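
A minimal sketch; the image tag and packages are illustrative:

cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi-minimal
# microdnf replaces dnf/yum on the minimal image
RUN microdnf install -y tar gzip && microdnf clean all
EOF
podman build -t my-ubi-minimal .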

How minimal is it? As minimal as stated: no Python and no Python module dependencies, which is quite a few packages when you think about it.

I suppose the actual gap also comes from the fact that microdnf does not use Python:

  • There is no Python interface, so you cannot invoke microdnf from Python code using a consistent API. You will have to resort to the subprocess Python module.
  • dnf can be extended with many additional commands provided by the dnf-plugins-core and other plugin packages. Do not expect any of those features in microdnf; they will hardly ever make it there.

openshift

Today I will install CodeReady Containers, which lets you run OpenShift on your laptop. See this link. My RHEL 8.4 VM has a small disk, so first I need to resize the disk and then install CodeReady Containers.

Using these commands, I grew the disk from 20 GB to 50 GB.

First, you need to locate the VM disk with the command:

sudo virsh domblklist rhel8-1

the output was:

Target   Source
------------------------------------------------------
vda      /var/lib/libvirt/images/rhel8-2-clone.qcow2
sda      -

To resize the disk, the VM must not be running and must not have any snapshots.

Just run this command to add 30 GB:

sudo qemu-img resize /var/lib/libvirt/images/rhel8-2-clone.qcow2 +30G

Start the VM and verify the disk using the lsblk command:

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0              11:0    1 1024M  0 rom
vda             252:0    0   50G  0 disk
|-vda1          252:1    0    1G  0 part /boot
`-vda2          252:2    0   29G  0 part
  |-rhel-root   253:0    0   26G  0 lvm  /
  `-rhel-swap   253:1    0    3G  0 lvm  [SWAP]
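
Note that qemu-img resize only grows the virtual disk: the partition, the LVM volumes, and the filesystem still have to be extended inside the guest. A sketch assuming the default XFS-on-LVM layout shown above (growpart comes from the cloud-utils-growpart package):

sudo growpart /dev/vda 2                    # grow partition 2 to fill the disk
sudo pvresize /dev/vda2                     # let LVM see the new space
sudo lvextend -l +100%FREE /dev/rhel/root   # extend the root logical volume
sudo xfs_growfs /                           # grow the XFS filesystem online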

 

Linux openshift