MySphere Posts

The watch command is a useful utility in Unix-like systems that allows you to execute a command periodically and display its output. However, macOS does not come with watch pre-installed. If you’re running macOS Sequoia and want to use watch, follow the steps below to install it.

Recently I switched to a new MacBook Pro M2 and tried to use the command to watch some OpenShift logs, only to find that it is not available out of the box.

To install it, just use Homebrew:

brew install watch

Using watch on macOS

Now that watch is installed, you can start using it. The basic syntax is:

watch -n <seconds> <command>

For example, to monitor the disk usage of your system every two seconds, you can run:

watch -n 2 df -h

Additional Options

  • -d: Highlights the differences between updates.
  • -t: Turns off the title/header display.
  • -b: Beeps if the command exits with a non-zero status.
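
These options can be combined with the interval. For example, to refresh the disk usage every two seconds, highlight what changed, and hide the header (assuming the GNU watch installed via Homebrew above):

watch -n 2 -d -t df -h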

Alternative: Using a while Loop

If you prefer not to install watch, you can achieve similar functionality using a while loop in the terminal:

while true; do <command>; sleep <seconds>; done

For example:

while true; do df -h; sleep 2; done
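
To mimic watch's full-screen refresh more closely, you can also clear the terminal on each iteration (a small variation of the loop above):

while true; do clear; df -h; sleep 2; done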

This method works in any macOS version without requiring additional installations.

Linux MAC

Managing virtual machines in an Infrastructure as Code (IaC) environment requires efficiency and reliability. One of the central ideas here is having a single source of truth (SSoT) to ensure consistency in resources, improve automation, and leverage processes such as version control. In such an environment, we can track and test changes and scale with ease.

This learning path will showcase how to use Red Hat OpenShift GitOps with a Git repository as a single source of truth for our infrastructure, thereby enhancing automation, consistency, and efficiency for VMs in Red Hat OpenShift Virtualization.

https://developers.redhat.com/learn/manage-openshift-virtual-machines-gitops?sc_cid=RHCTG0250000438530

openshift

Machine Learning

When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.

  • To delete the node from a UPI installation, the node must first be marked unschedulable (cordoned) and then drained before it is deleted:

$ oc adm cordon <node_name>
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
- Also ensure that there are no jobs/cronjobs currently running or scheduled on this specific node, as draining does not take them into consideration.
- For Red Hat OpenShift Container Platform 4.7+, use the option `--delete-emptydir-data` in case `--delete-local-data` doesn't work; `--delete-local-data` is deprecated in favor of `--delete-emptydir-data` (see the example below).
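
For example, on 4.7+ the drain command becomes:

$ oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets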

Back up the node object so it can be recreated later if needed:

$ oc get node <node_name> -o yaml > backupnode.yaml

Before proceeding with the deletion, the node needs to be powered off:
$ oc delete node <node_name>

Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node once it is in shutdown mode.

Once the node is deleted, it can proceed to decommissioning (power-off); alternatively, if it needs to rejoin the cluster, you can either restart the kubelet on it or recreate the node object from the backup YAML:

$ oc create -f backupnode.yaml

The node can also be brought back by restarting the kubelet on it:

$ systemctl restart kubelet

If you then need to destroy all the data on the worker node and wipe the installed software, execute the following:

# nohup shred -n 25 -f -z /dev/[HDD]
This command repeatedly overwrites all data on /dev/[HDD], making it harder for even very expensive hardware probing to recover it. The data is rewritten 25 times (the count can be changed with -n [number]), and the -z parameter overwrites the device with zeros at the end of the cycle.

One should consider running this command from RescueCD.

In order to monitor the deletion of the node, get the kubelet live logs:

$ oc adm node-logs <node-name> -u kubelet

https://access.redhat.com/solutions/4976801

Uncategorized

Applying a specific node selector to all infrastructure components will guarantee that they will be scheduled on nodes with that label. See more details on node selectors in placing pods on specific nodes using node selectors, and about node labels in understanding how to update labels on nodes.

Our node label and matching selector for infrastructure components will be node-role.kubernetes.io/infra: "".

To prevent other workloads from also being scheduled on those infrastructure nodes, we need one of two solutions:

  • Apply a taint to the infrastructure nodes and tolerations to the desired infrastructure workloads.
    OR
  • Apply a completely separate label to your other nodes and matching node selector to your other workloads such that they are mutually exclusive from infrastructure nodes.

TIP: To ensure high availability (HA), each cluster should have three infrastructure nodes, ideally spread across availability zones. See more details about rebooting nodes running critical infrastructure.

TIP: Review the infrastructure node sizing suggestions.

By default all nodes except for masters will be labeled with node-role.kubernetes.io/worker: "". We will be adding node-role.kubernetes.io/infra: "" to infrastructure nodes.
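
You can check which role labels your nodes currently carry; the ROLES column of a plain node listing reflects the node-role.kubernetes.io/* labels, and a label selector lists only the nodes already labeled as infra:

oc get nodes
oc get nodes -l node-role.kubernetes.io/infra=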

However, if you want to remove the existing worker role from your infra nodes, you will need an MCP to ensure that all the nodes upgrade correctly. This is because the worker MCP is responsible for updating and upgrading the nodes, and it finds them by looking for this node-role label. If you remove the label, you must have a MachineConfigPool that can find your infra nodes by the infra node-role label instead. Previously this was not the case and removing the worker label could have caused issues in OCP <= 4.3.

The infra MCP definition below finds all MachineConfigs labeled both “worker” and “infra” and applies them to any Machines or Nodes that have the “infra” role label. This way, your infra nodes can upgrade without the “worker” role label.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
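
Save the definition to a file and apply it, then confirm the new pool picks up the infra nodes (the file name here is just an example):

oc create -f infra-mcp.yaml
oc get mcp infra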

If you are not using the MachineSet API to manage your nodes, labels and taints are applied manually to each node:

Label it:

oc label node <node-name> node-role.kubernetes.io/infra=
oc label node <node-name> node-role.kubernetes.io=infra

Taint it:

oc adm taint nodes -l node-role.kubernetes.io/infra node-role.kubernetes.io/infra=reserved:NoSchedule node-role.kubernetes.io/infra=reserved:NoExecute
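
The infrastructure workloads you want on these nodes then need a matching node selector and tolerations for both taints. A minimal sketch of the relevant pod spec fields (add them to the workload you are moving, for example via its operator or deployment spec):

nodeSelector:
  node-role.kubernetes.io/infra: ""
tolerations:
  - key: node-role.kubernetes.io/infra
    value: reserved
    effect: NoSchedule
  - key: node-role.kubernetes.io/infra
    value: reserved
    effect: NoExecute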

openshift Uncategorized

Infrastructure nodes allow customers to isolate infrastructure workloads for two primary purposes:

  1. to prevent incurring billing costs against subscription counts and
  2. to separate maintenance and management.

This solution is meant to complement the official documentation on creating Infrastructure nodes in OpenShift 4. In addition there is a great OpenShift Commons video describing this whole process: OpenShift Commons: Everything about Infra nodes

To resolve the first problem, all that is needed is a node label added to a particular node, set of nodes, or machines and machineset. Red Hat subscription vCPU counts omit any vCPU reported by a node labeled node-role.kubernetes.io/infra: "" and you will not be charged for these resources from Red Hat. Please see How to confirm infra nodes not included in subscription cost in OpenShift Cluster Manager? to confirm your vCPU reports correctly after applying the configuration changes in this article.

To resolve the second problem we need to schedule infrastructure workloads specifically to infrastructure nodes and also to prevent other workloads from being scheduled on infrastructure nodes. There are two strategies for accomplishing this that we will go into later.

You may ask why infrastructure workloads are different from those workloads running on the control plane. At a minimum, an OpenShift cluster contains 2 worker nodes in addition to 3 control plane nodes. While control plane components critical to the cluster operability are isolated on the masters, there are still some infrastructure workloads that by default run on the worker nodes – the same nodes on which cluster users deploy their applications.

Note: To know the workloads that can be executed in infrastructure nodes, check the “Red Hat OpenShift control plane and infrastructure nodes” section in OpenShift sizing and subscription guide for enterprise Kubernetes.

Planning node changes around any nodes hosting these infrastructure components should not be addressed lightly, and in general should be addressed separately from nodes specifically running normal application workloads.

openshift

It is not possible to change the domain of the API, either internal or external.

Starting with OpenShift 4.8, it is possible to change the domain of the console and downloads routes after cluster installation.
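
As a sketch of what that looks like in 4.8+, the console route hostname can be customized through the componentRoutes field of the cluster Ingress config (the hostname below is a placeholder; see the Red Hat solution linked below for the full procedure, including the TLS secret):

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  componentRoutes:
    - name: console
      namespace: openshift-console
      hostname: console.apps.example.com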

Choose your domain name carefully.

For more information, see this document from Red Hat: https://access.redhat.com/solutions/4853401

openshift

rsync is generally faster than scp for copying files, especially when transferring a large amount of data or syncing directories. Here’s why:

1. Incremental Transfers

  • rsync: Only transfers the parts of files that have changed, rather than the entire file. This makes subsequent transfers much faster.
  • scp: Always transfers the entire file, even if only a small part of it has changed.

2. Compression

  • rsync: Supports compression during the transfer (using the -z option), which reduces the amount of data sent over the network.
  • scp: Also supports compression (using the -C option), but it doesn’t have the same efficiency in skipping unchanged data.

3. Resume Support

  • rsync: Can resume interrupted transfers without starting over (using the --partial flag).
  • scp: Does not natively support resuming transfers. If the transfer is interrupted, you need to restart it.

4. Efficient Directory Handling

  • rsync: Designed for syncing directories, handling file metadata, permissions, and symbolic links efficiently.
  • scp: Less efficient for syncing directories and preserving metadata.

When to Use Each Tool

  • Use rsync if:
    • You need to sync large files or directories.
    • You expect the transfer might be interrupted.
    • Only parts of files or directories have changed.
  • Use scp if:
    • You need a simple, one-time transfer of a few files.
    • You don’t need incremental syncing or advanced features.

Command Examples:

  • rsync:

rsync -avz source_file user@remote:/path/to/destination

  • scp:

scp source_file user@remote:/path/to/destination
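
For syncing an entire directory with resume support, something like this works (paths and host are placeholders; the trailing slash on the source copies the directory contents rather than the directory itself):

rsync -avz --partial source_dir/ user@remote:/path/to/destination/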

In summary, rsync is more efficient for most use cases, particularly when dealing with large or frequently updated files.

Linux

I followed the instructions to create and share a folder on OMV, but I couldn't access the shared folder from my Mac or Linux machines.

On the Mac I got the error: The operation can’t be completed because the original item for “folder name” can’t be found.

When I tried to mount the shared folder on a Linux machine, I got Permission denied.

I discovered that only the first user created during the initial setup can access the shared folders.

I opened a terminal session and SSHed into the OMV machine. The permissions for the disks are shown below:

The failing disk had the permissions drwx------ (0700).

The only fix I found was to change the permissions to 0775, after which all users can mount the shared folders.
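
The fix itself is a single chmod on the disk's mount point, run over SSH on the OMV machine (the mount path below is a placeholder; check the real one with df -h):

chmod 0775 /srv/dev-disk-by-label-data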

Linux

Yesterday I was helping a customer deploy an OpenShift 4.16.x cluster. The first step was preparing the bastion host, which includes setting up the OCP CLI.

We did the default installation of the CLI on top of RHEL 8.x, but afterwards we got the following error:

$ oc version
oc: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by oc)
oc: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by oc)
oc: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by oc)
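
Those errors mean the downloaded oc binary was built against a newer glibc than the one shipped with the host (RHEL 8.x provides glibc 2.28). A quick way to confirm the host's version:

rpm -q glibc
ldd --version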

We tried to compile a newer GLIBC, but without success.

The solution: download the oc CLI version compiled for RHEL 8. Link here for amd64.

Linux openshift