MySphere Posts

Kubernetes has long been a cornerstone for managing containerized workloads, and its continuous evolution keeps it at the forefront of cloud-native technologies. One of the exciting advancements in recent releases is the enhancement of startup scaling capabilities, particularly through features like Kube Startup CPU Boost and dynamic resource scaling. In this blog post, we’ll dive into what startup scaling is, how it works, and why it’s a significant addition for Kubernetes users looking to optimize application performance during startup.

What is Startup Scaling in Kubernetes?

Startup scaling refers to the ability to dynamically allocate additional resources, such as CPU, to pods during their initialization phase to accelerate startup times. This is particularly useful for applications that require significant resources during their boot process but may not need those resources once they’re running steadily. By providing a temporary resource boost, Kubernetes ensures faster deployment and improved responsiveness without over-provisioning resources long-term.

The concept of startup scaling ties closely with Kubernetes’ broader autoscaling capabilities, including Horizontal Pod Autoscaling (HPA) and Vertical Pod Autoscaling (VPA). However, startup scaling specifically addresses the transient needs of applications during their startup phase, a critical period for performance-sensitive workloads.

Key Features of Startup Scaling

One of the standout implementations of startup scaling is Kube Startup CPU Boost, an open-source operator that emerged around the Kubernetes 1.28 timeframe and has been refined in subsequent releases. Here’s how it works:

  • Dynamic Resource Allocation: Kube Startup CPU Boost temporarily increases CPU resources for pods during their startup phase. Once the pod is fully initialized, the operator scales down the resources to their normal levels, optimizing resource utilization.
  • No Pod Restarts: Unlike traditional vertical scaling, which might require pod restarts to adjust resources, this feature leverages in-place resource resizing, a capability introduced in Kubernetes 1.27 and graduated to beta in 1.33. This ensures zero downtime during resource adjustments.
  • Targeted Use Cases: Startup scaling is ideal for applications with heavy initialization processes, such as machine learning workloads, databases, or complex microservices that perform significant computations or data loading during startup.
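To make the in-place resizing mechanism concrete, here is a minimal sketch of a manual resize on a Kubernetes 1.33+ cluster. The pod name, image, and CPU values are illustrative, and exact kubectl support for the resize subresource can vary by version:

# Create a pod whose CPU can be resized without a container restart:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
      limits:
        cpu: "500m"
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired   # resize CPU in place, no restart
EOF

# Raise the CPU request and limit in place through the resize subresource:
kubectl patch pod resize-demo --subresource resize \
  -p '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"1"},"limits":{"cpu":"1"}}}]}}'

Kube Startup CPU Boost essentially automates this kind of resize around a pod’s startup window.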

How Does Kube Startup CPU Boost Work?

The Kube Startup CPU Boost operator monitors pods and applies a predefined CPU boost policy during their startup phase. Here’s a simplified workflow:

  1. Pod Creation: When a pod is created, the operator identifies it as a candidate for CPU boost based on configured policies (e.g., specific labels or annotations).
  2. Resource Adjustment: The operator temporarily increases the pod’s CPU allocation (requests and/or limits) to speed up initialization.
  3. Monitoring and Scaling Down: Once the pod reaches a stable state (determined by readiness probes or a timeout), the operator reduces the CPU allocation back to its baseline, ensuring efficient resource usage.
  4. In-Place Resizing: Leveraging the in-place pod vertical scaling feature, these adjustments occur without restarting the pod, maintaining application availability.
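To illustrate, a boost policy for the operator might look roughly like the following. The apiVersion and field names are assumptions based on the concepts above (a label selector, a percentage boost, and a duration tied to readiness); check the project’s documentation for the actual CRD schema:

kubectl apply -f - <<'EOF'
# Illustrative only -- verify the real apiVersion and field names in the project's README.
apiVersion: autoscaling.x-k8s.io/v1alpha1
kind: StartupCPUBoost
metadata:
  name: boost-slow-starters
  namespace: demo
spec:
  selector:
    matchLabels:
      app: slow-starting-service      # pods that should receive the boost
  resourcePolicy:
    containerPolicies:
    - containerName: app
      percentageIncrease:
        value: 100                    # double CPU requests/limits during startup
  durationPolicy:
    podCondition:
      type: Ready                     # revert the boost once the pod reports Ready
      status: "True"
EOF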

This process is seamless and integrates with Kubernetes’ existing autoscaling mechanisms, making it a natural fit for clusters already using HPA or VPA.

Benefits of Startup Scaling

The introduction of startup scaling, particularly through Kube Startup CPU Boost, brings several advantages:

  • Faster Application Startup: By allocating more CPU during initialization, applications launch quicker, reducing latency for end-users.
  • Resource Efficiency: Temporary boosts prevent over-provisioning, ensuring resources are only allocated when needed.
  • Improved User Experience: Faster startup times are critical for user-facing applications, where delays can impact satisfaction.
  • Support for Resource-Intensive Workloads: AI/ML applications, databases, and other compute-heavy workloads benefit significantly from this feature.
  • No Downtime: In-place resource resizing ensures that scaling operations don’t disrupt running applications.

Getting Started with Startup Scaling

To leverage startup scaling in your Kubernetes cluster, you’ll need to:

  1. Enable the InPlacePodVerticalScaling Feature Gate: This is enabled by default in Kubernetes 1.33, allowing in-place resource resizing. Verify your cluster version and configuration to ensure compatibility.
  2. Install the Kube Startup CPU Boost Operator: This open-source operator can be deployed via a Helm chart or directly from its GitHub repository. Configure it with policies that match your workload requirements.
  3. Configure Pod Annotations: Use annotations to specify which pods should receive a CPU boost and define the boost parameters (e.g., duration or resource limits).
  4. Monitor and Optimize: Use Kubernetes monitoring tools like Prometheus or Grafana to track the impact of startup scaling on your application performance and resource usage.
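A rough command-line sketch of those steps follows. The manifest URL in step 3 is hypothetical (check the operator’s GitHub releases for the real install instructions), and the kind config in step 2 is only needed on clusters older than 1.33 where the feature gate is not on by default:

# 1. Confirm the cluster version (in-place resize is on by default from 1.33):
kubectl version

# 2. On an older test cluster, enable the feature gate explicitly (illustrative kind config):
cat > kind-boost.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  InPlacePodVerticalScaling: true
EOF
kind create cluster --config kind-boost.yaml

# 3. Install the Kube Startup CPU Boost operator (hypothetical URL, see the project README):
kubectl apply -f https://github.com/google/kube-startup-cpu-boost/releases/latest/download/manifests.yaml

# 4. Watch CPU usage of boosted pods while they start:
kubectl top pods -n demo --containers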

Best Practices

  • Test in a Staging Environment: Before enabling startup scaling in production, test it in a non-critical environment to understand its impact on your workloads.
  • Combine with Autoscaling: Use startup scaling alongside HPA and VPA for a comprehensive scaling strategy that handles both startup and runtime demands.
  • Monitor Resource Usage: Ensure your cluster has sufficient resources to handle temporary boosts, especially in multi-tenant environments.
  • Fine-Tune Boost Policies: Adjust boost duration and resource limits based on your application’s startup behavior to avoid over- or under-provisioning.

What’s Next for Startup Scaling?

As Kubernetes continues to evolve, we can expect further refinements to startup scaling. The graduation of in-place pod vertical scaling to beta in Kubernetes 1.33 is a promising step, and future releases may bring this feature to stable status. Additionally, enhancements to the Kube Startup CPU Boost operator could include more granular control over boost policies or integration with other resource types, such as memory or GPU.

Conclusion

Startup scaling, exemplified by Kube Startup CPU Boost, is a powerful addition to Kubernetes’ scaling arsenal. By addressing the unique resource needs of applications during their startup phase, it enables faster deployments, better resource efficiency, and improved user experiences. Whether you’re running AI/ML workloads, databases, or microservices, this feature can help optimize your Kubernetes cluster for performance and cost.

To learn more, check out the official Kubernetes documentation or explore the Kube Startup CPU Boost project on GitHub. Start experimenting with startup scaling today and see how it can transform your application deployments.

Uncategorized

IBM App Connect Enterprise 13.0.4 Evaluation Edition is available at the following link:

https://www.ibm.com/resources/mrs/assets?source=swg-wmbfd

This release builds on version 13.0 with a focus on usability, AI integration, and automation.

  • AI watsonx Code Assistant in Toolkit
  • AI Mapping Assist in Designer
  • AI Data Assist in Designer
  • Context Trees for improved Toolkit usability
  • New Toolkit Discovery Request Nodes: Azure Service Bus and IBM Planning Analytics
  • New Toolkit Discovery Input Nodes: Amazon EventBridge and Azure Service Bus
  • Toolkit Salesforce Input node new state persistence policy
  • OpenTelemetry support for Toolkit Kafka nodes
  • Outbound OAuth2.0 support in the REST Request and HTTP Request nodes
  • New support for MQTT version 5
  • Embedded Global Cache – new upsert method for use in JavaCompute nodes

For more details see this blog post: https://community.ibm.com/community/user/blogs/ben-thompson1/2025/06/18/ace-13-0-4-0?hlmlt=VT

ACE

Released on February 24, 2025, Red Hat OpenShift 4.18 is here, bringing a fresh wave of enhancements to the Kubernetes-powered hybrid cloud platform. Whether you’re a developer, a sysadmin, or an IT decision-maker, this update has something to pique your interest—think beefed-up security, slick virtualization upgrades, and tools to make your clusters easier to monitor and manage. Let’s dive into what’s new and why OpenShift 4.18 might just be the upgrade your team’s been waiting for.

Security That Packs a Punch

In today’s world, keeping sensitive data safe is non-negotiable, and OpenShift 4.18 steps up to the plate. One standout feature is the Secrets Store CSI Driver Operator, now fully available after being a tech preview since 4.14. This nifty tool lets your workloads tap into external secrets managers—like Azure Key Vault, Google Cloud Secret Manager, or HashiCorp Vault—without leaving sensitive credentials lying around on your cluster. Secrets are mounted as ephemeral volumes, meaning the cluster stays blissfully unaware of the juicy details. Pair this with OpenShift GitOps or Pipelines, and you’ve got a secure, streamlined way to handle credentials across your apps. It’s a game-changer for teams juggling compliance and agility.
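As a rough sketch of how this looks in practice, a SecretProviderClass describes where the secret lives and a pod mounts it through the CSI driver. The provider parameters below are placeholders for a Vault setup and differ per secrets manager, so treat the names and values as assumptions rather than a reference configuration:

oc apply -f - <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-db-creds
  namespace: my-app
spec:
  provider: vault                  # could be azure, gcp, aws or vault
  parameters:                      # provider-specific; placeholder values
    vaultAddress: "https://vault.example.com"
    roleName: "my-app"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/my-app/db"
        secretKey: "password"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: my-app
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:latest
    volumeMounts:
    - name: db-creds
      mountPath: /mnt/secrets      # secret shows up here as an ephemeral file
      readOnly: true
  volumes:
  - name: db-creds
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: app-db-creds
EOF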

Virtualization Gets a Glow-Up

If you’re running virtual machines alongside containers, OpenShift 4.18’s virtualization updates will catch your eye. Built on Kubernetes 1.31 and CRI-O 1.31, this release supercharges OpenShift Virtualization. A big win? User Defined Networks (UDNs) are now generally available, giving you custom networking options—Layer 2, Layer 3, or localnet—for your pods and VMs via OVN-Kubernetes. It’s all about flexibility.

Bare metal fans, rejoice: 4.18 expands support across Google Cloud (C3, C4, C4A, N4 machines) and Oracle Cloud Infrastructure, with deployment options via Assisted or Agent-based Installers. Plus, the Migration Toolkit for Virtualization (MTV) gets smarter with user-defined PVC names and optimized migration schedules, making VM transfers faster and less of a headache. Whether you’re hybrid cloud-curious or all-in, these updates make managing VMs smoother than ever.

Observability: See It All, Fix It Fast

Ever wished you could get a clearer picture of what’s happening in your cluster? OpenShift 4.18 delivers with the Cluster Observability Operator (COO) 1.0.0, now GA. This unifies metrics, logs, and traces into one tidy package, complete with dashboards, a troubleshooting UI, and distributed tracing. Add in multi-namespace Prometheus alerts and GPU accelerator metrics, and you’ve got a toolkit to spot issues before they spiral. It’s like giving your cluster a superpower: total visibility.

Developers, This One’s for You

The developer experience in 4.18 is all about small wins that add up. The OpenShift Console now boasts colored Tekton Pipeline logs (because who doesn’t love a little flair?), one-click YAML imports via OpenShift Lightspeed, and a YAML editor with dark/light mode support—perfect for late-night coding sessions. There’s also the OpenShift CLI Manager Operator (tech preview), which simplifies CLI management in disconnected environments. These tweaks might not scream “revolutionary,” but they’ll make your day-to-day a little smoother.

Under the Hood: Core Platform Goodies

OpenShift 4.18 swaps out RunC for Crun as the default container runtime (don’t worry, RunC’s still an option), aligning with OCI standards for a leaner, meaner runtime. Single-node clusters can now auto-recover after a shutdown—great for edge setups—and high-availability cloud clusters can snooze for up to 90 days without breaking a sweat. It’s the kind of reliability that keeps operations humming.

Lifecycle and Availability

Red Hat backs 4.18 with at least 6 months of full support (or 90 days after 4.19 drops, whichever’s longer), followed by maintenance support until August 2026. As an even-numbered release, it might snag Extended Update Support (EUS), stretching its lifecycle to 24 months. You can deploy it anywhere—on-prem, AWS, Azure, Google Cloud, Oracle Cloud, or bare metal—starting now.

Why It Matters

OpenShift 4.18 isn’t about reinventing the wheel; it’s about making the wheel spin better. From tighter security to sharper observability and a friendlier developer experience, it’s a release that listens to what users need in 2025: tools that work hard so you don’t have to. Whether you’re modernizing apps, managing VMs, or scaling across clouds, 4.18 has your back.

Ready to explore? Check out the [official OpenShift 4.18 Release Notes](https://docs.openshift.com) for the full scoop, and let us know what you think in the comments. What feature are you most excited to try?

Uncategorized

To create an AWS S3 bucket to use as a backup location for IBM Storage Fusion, you need to create the bucket in AWS and make sure it is properly configured for compatibility with IBM Storage Fusion. Below are the steps to achieve this:

Step 1: Sign in to the AWS Management Console

  1. Go to aws.amazon.com and sign in to your AWS account using your credentials.
  2. After you sign in, navigate to the AWS Management Console.

Step 2: Access the S3 service

  1. In the AWS Management Console, use the search bar at the top and enter S3.
  2. Select S3 from the results to open the Amazon S3 dashboard.

Step 3: Create an S3 bucket

  1. From the S3 dashboard, click the Create bucket button.
  2. Bucket Name: Enter a unique name for your bucket (for example, ibm-storage-fusion-backup-2025).
  3. Region: Choose an AWS Region that aligns with your IBM Storage Fusion implementation for optimal performance and compliance (e.g., us-east-1).
  4. Object ownership: Leave the default setting (ACLs disabled) unless you have specific requirements.
  5. Block public access: For security, keep Block all public access enabled unless IBM Storage Fusion explicitly requires public access (see IBM documentation).
  6. Bucket versioning: Enable versioning if you want to maintain multiple versions of backup objects for recovery purposes. This is recommended for backup scenarios.
  7. Encryption: Enable server-side encryption with Amazon S3-managed keys (SSE-S3) for added security.
  8. Click Create bucket to finish the configuration.
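If you prefer the command line, the same bucket can be created with the AWS CLI. This is a minimal sketch assuming the CLI is already configured with credentials for your account; the bucket name and region match the examples above:

# Create the bucket (for regions other than us-east-1, add --create-bucket-configuration LocationConstraint=<region>):
aws s3api create-bucket --bucket ibm-storage-fusion-backup-2025 --region us-east-1

# Keep all public access blocked:
aws s3api put-public-access-block --bucket ibm-storage-fusion-backup-2025 \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Enable versioning (recommended for backups):
aws s3api put-bucket-versioning --bucket ibm-storage-fusion-backup-2025 \
  --versioning-configuration Status=Enabled

# Enable default server-side encryption with S3-managed keys (SSE-S3):
aws s3api put-bucket-encryption --bucket ibm-storage-fusion-backup-2025 \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'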

Step 4: Configure the bucket for IBM Storage Fusion

IBM Storage Fusion often integrates with AWS S3 for object backup and storage through its data protection capabilities. To ensure compatibility:

  1. Bucket policy: You may need to configure a bucket policy to allow IBM Storage Fusion to access the bucket. This requires an IAM role or user with appropriate permissions (for example, s3:PutObject, s3:GetObject, s3:ListBucket). Policy example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/ibm-fusion-user"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": [
        "arn:aws:s3:::my-fusion-bucket",
        "arn:aws:s3:::my-fusion-bucket/*"
      ]
    }
  ]
}

Replace the account ID, user name, and bucket ARNs in the example policy with the values for your environment.

  2. Access credentials: Create an IAM user or role with programmatic access, and generate an access key ID and a secret access key. IBM Storage Fusion will use these to authenticate with S3.

In the console, go to IAM > Users > Add user, enable programmatic access, attach the AmazonS3FullAccess policy (or a custom policy), and save the credentials. If you use full access, you do not need the bucket policy shown above.
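The equivalent IAM setup with the AWS CLI looks roughly like this. The user name matches the example policy above; in production, prefer a scoped-down custom policy over AmazonS3FullAccess:

aws iam create-user --user-name ibm-fusion-user
aws iam attach-user-policy --user-name ibm-fusion-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate the access key ID / secret access key that IBM Storage Fusion will use:
aws iam create-access-key --user-name ibm-fusion-user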

  3. Lifecycle rules: Optionally, configure lifecycle rules to transition older backups to cost-effective storage classes, such as S3 Glacier or S3 Glacier Deep Archive, to optimize costs.
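For example, a rule that moves backups older than 90 days to Glacier Deep Archive could be applied like this (the rule ID and the 90-day threshold are just illustrative choices):

aws s3api put-bucket-lifecycle-configuration --bucket ibm-storage-fusion-backup-2025 \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}]
    }]
  }'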

Cloud

I was installing the IBM MQ Server on a Linux box yesterday and got the error below:

[root@gateway MQServer]# dnf install rpm-build
Updating Subscription Management repositories.
Instana 14 B/s | 20 B 00:01
Errors during downloading metadata for repository ‘instana-agent’:

Status code: 401 for https://_:[email protected]/agent/rpm/generic/x86_64/repodata/repomd.xml (IP: 34.253.98.124)
Error: Failed to download metadata for repo ‘instana-agent’: Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

Solution:

  1. Delete the Instana Agent repo file: rm /etc/yum.repos.d/Instana-Agent.repo
  2. Clear the dnf cache: dnf clean all
  3. Run dnf install rpm-build again.

Uncategorized

If you’ve ever tried to unzip a large file on macOS and encountered the mysterious “Error 513,” you’re not alone. This pesky error can pop up unexpectedly, leaving you scratching your head and wondering why your file won’t extract properly. Recently, I ran into this issue myself while trying to decompress a massive archive using the built-in Archive Utility on macOS. After some trial and error, I found a reliable solution: the Keka application. Here’s a rundown of what Error 513 is, why it happens, and how Keka saved the day.

What Is Error 513 on macOS?

Error 513 typically occurs when macOS’s default Archive Utility struggles to handle certain zip files—particularly large ones or those with complex structures. The error message might not give you much detail, often just stating that the operation couldn’t be completed. From my experience, it seems to be tied to limitations in how the native tool processes files, especially if they’re compressed in a way that macOS doesn’t fully support or if the file size pushes the utility beyond its comfort zone.

While the exact cause can vary (think file corruption, incompatible compression methods, or even permission issues), the result is the same: you’re stuck with a zip file that won’t budge. For me, it was a multi-gigabyte archive I’d downloaded, and no amount of retrying or rebooting would make Archive Utility cooperate.

The Solution: Keka to the Rescue

After a bit of digging online and some failed attempts with Terminal commands (like using unzip via Homebrew), I stumbled across Keka, a free and lightweight compression tool for macOS. Unlike the built-in Archive Utility, Keka is designed to handle a wider range of file formats and sizes with ease. Here’s how I used it to solve my Error 513 problem—and how you can, too.

Step 1: Download and Install Keka
  • Head over to the official Keka website (keka.io) or grab it from the Mac App Store if you prefer.
  • Installation is straightforward: just drag the app to your Applications folder, or let the App Store handle it for you.
Step 2: Open Your Problematic Zip File
  • Launch Keka from your Applications folder.
  • Drag and drop the zip file causing Error 513 onto the Keka window, or use the “Open” option in the app to locate it manually.
Step 3: Extract the File
  • Keka will automatically start extracting the file to the same directory as the original zip (you can change the destination if you’d like).
  • Sit back and let it work its magic. For my large file, Keka churned through it without a hitch—no Error 513 in sight.

Within minutes, I had my files unzipped and ready to use, something macOS’s default tool couldn’t manage despite multiple attempts.

Why Keka Works When Archive Utility Doesn’t

Keka’s strength lies in its versatility and robustness. It supports a variety of compression formats (like 7z, RAR, and more) and seems better equipped to handle edge cases—like oversized zip files—that trip up Archive Utility. Plus, it’s open-source, so it’s constantly being refined by a community of developers who actually care about making it work.

Bonus Tips

  • Check File Integrity: Before blaming the tool, ensure your zip file isn’t corrupted. You can test it in Keka by right-clicking the file and selecting “Verify” if you suspect an issue.
  • Permissions: If Keka still struggles, double-check the file’s permissions in Finder (Get Info > Sharing & Permissions) to ensure you have read/write access.
  • Update Keka: Make sure you’re running the latest version, as updates often fix bugs and improve compatibility.

Final Thoughts

Error 513 might be a roadblock when unzipping large files on macOS, but it doesn’t have to be a dealbreaker. For me, switching to Keka was a game-changer—fast, free, and frustration-free. If you’re tired of wrestling with Archive Utility’s limitations, give Keka a shot. It’s a small download that delivers big results, and it’ll likely become your go-to tool for all things compression-related on macOS.

Have you run into Error 513 before? Let me know how you tackled it—or if Keka worked for you too!

MAC

I have lots of photo files. Since 2006, when I purchased my first digital camera, the number of photos has grown quickly, and after getting an iPhone, the number of photos exploded.

With the high number of photos, the number of backups grew as well.

I decided to organize all backups and create folders using the format YYYY-MM from the metadata of the photo files.

Below is the Python script. The script runs on macOS:

import os
import shutil
import datetime
import logging
import tkinter as tk
from tkinter import filedialog
from PIL import Image, ExifTags
import pillow_heif
import piexif

# Setup logging
logging.basicConfig(level=logging.DEBUG, format="%(asctime)s - %(levelname)s - %(message)s")


def get_file_date(file_path):
    try:
        if file_path.lower().endswith(".heic") and pillow_heif.is_supported(file_path):
            heif_file = pillow_heif.open_heif(file_path, convert_hdr_to_8bit=False)
            exif_data = heif_file.info.get("exif")
            if exif_data:
                exif_dict = piexif.load(exif_data)
                date_str = exif_dict["Exif"].get(piexif.ExifIFD.DateTimeOriginal)
                if date_str:
                    return datetime.datetime.strptime(date_str.decode("utf-8"), "%Y:%m:%d %H:%M:%S")
        
        elif file_path.lower().endswith((".jpg", ".jpeg")):
            with Image.open(file_path) as img:
                exif_data = img.getexif()
                if exif_data:
                    exif_dict = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif_data.items()}
                    logging.debug(f"EXIF metadata for {file_path}: {exif_dict}")
                    
                    if "DateTimeOriginal" in exif_dict:
                        date_str = exif_dict["DateTimeOriginal"]
                    elif "DateTime" in exif_dict:
                        date_str = exif_dict["DateTime"]
                    else:
                        date_str = None
                        logging.warning(f"No DateTimeOriginal or DateTime found for {file_path}")
                    
                    if date_str:
                        try:
                            logging.debug(f"Extracted date string from EXIF: {date_str}")
                            return datetime.datetime.strptime(date_str, "%Y:%m:%d %H:%M:%S")
                        except ValueError as ve:
                            logging.error(f"Error parsing date for {file_path}: {ve}")
                    else:
                        logging.warning(f"DateTime metadata missing or unreadable for {file_path}")
                else:
                    logging.warning(f"No EXIF metadata found for {file_path}")
    
    except Exception as e:
        logging.error(f"Error extracting date from {file_path}: {e}")
    
    # Fall back to the file's birth time (creation date on macOS) when no usable EXIF date was found
    file_stats = os.stat(file_path)
    file_birth_time = file_stats.st_birthtime
    logging.debug(f"Using file birth time for {file_path}: {datetime.datetime.fromtimestamp(file_birth_time)}")
    return datetime.datetime.fromtimestamp(file_birth_time)

def move_files_to_folders(source_folder):
    for filename in os.listdir(source_folder):
        file_path = os.path.join(source_folder, filename)
        if filename.lower().endswith((".jpg", ".jpeg", ".heic", ".mov")):
            date_taken = get_file_date(file_path)
            if date_taken:
                folder_name = date_taken.strftime("%Y-%m")
            else:
                logging.warning(f"Could not determine date for {file_path}, using 'unknown' folder.")
                folder_name = "unknown"
            
            dest_folder = os.path.join(source_folder, folder_name)
            os.makedirs(dest_folder, exist_ok=True)
            
            dest_file_path = os.path.join(dest_folder, filename)
            count = 1
            while os.path.exists(dest_file_path):
                name, ext = os.path.splitext(filename)
                dest_file_path = os.path.join(dest_folder, f"{name}_{count}{ext}")
                count += 1
            
            shutil.move(file_path, dest_file_path)
            logging.info(f"Moved {filename} to {dest_folder}")

if __name__ == "__main__":
    root = tk.Tk()
    root.withdraw()
    folder_selected = filedialog.askdirectory(title="Select the folder containing files")
    if folder_selected:
        move_files_to_folders(folder_selected)
        logging.info("File organization complete.")
    else:
        logging.warning("No folder selected.")

Uncategorized

The oc adm must-gather tool is essential for troubleshooting and diagnostics in OpenShift. With the release of OpenShift 4.17, new flags have been introduced to enhance flexibility and precision in data collection. These additions enable administrators to gather logs more efficiently while reducing unnecessary data collection.

New Flags in Must-Gather

--since

This flag allows users to collect logs newer than a specified duration. For example:

oc adm must-gather --since=24h

This command gathers logs from the past 24 hours, making it easier to pinpoint recent issues.

--since-time

The --since-time flag lets users specify an exact timestamp (RFC3339 format) to collect logs from a particular point in time.

oc adm must-gather --since-time=2025-02-10T11:12:39Z

This is useful for investigating incidents that occurred at a specific time.

Existing Flags for Enhanced Customization

Along with the new additions, several existing flags provide more control over the data collection process:

  • --all-images: Uses the default image for all operators annotated with operators.openshift.io/must-gather-image.
  • --dest-dir: Specifies a local directory to store gathered data.
  • --host-network: Runs must-gather pods with hostNetwork: true for capturing host-level data.
  • --image: Allows specifying a must-gather plugin image to run.
  • --node-name: Targets a specific node for data collection.
  • --node-selector: Selects nodes based on a node selector.
  • --run-namespace: Runs must-gather pods within an existing privileged namespace.
  • --source-dir: Defines the directory from which data is copied.
  • --timeout: Sets a time limit for data gathering.
  • --volume-percentage: Adjusts the maximum storage percentage for gathered data.
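These flags can be combined. For example, the command below collects only the last two hours of data, writes it to a local directory, and gives up after 30 minutes:

oc adm must-gather --since=2h --dest-dir=./must-gather-$(date +%Y%m%d) --timeout=30m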

Conclusion

The introduction of --since and --since-time in OpenShift 4.17 significantly improves must-gather’s efficiency by enabling targeted log collection. By leveraging these and other available flags, administrators can streamline troubleshooting and optimize diagnostics.

For a deeper dive into must-gather and its latest enhancements, check out the official OpenShift documentation.

openshift

I set up an OpenShift 4.16 cluster using UPI on top of VMware. The cluster has 3 Masters, 3 Worker Nodes, and 3 InfraNodes. The infra nodes were necessary to install IBM Storage Fusion.

After the setup, I needed to create a load balancer in front of the OpenShift cluster. There are several options, and one of them is HAProxy.

I just installed a RHEL 9 server, added three IP addresses to the network interface, and set up HAProxy.

Prerequisites

  • A system running RHEL 9
  • Root or sudo privileges
  • A basic understanding of networking and load balancing

Step 1: Install HAProxy

First, update your system packages:

sudo dnf update -y

Then, install HAProxy using the package manager:

sudo dnf install haproxy -y

Verify the installation:

haproxy -v

Step 2: Configure HAProxy

The main configuration file for HAProxy is located at /etc/haproxy/haproxy.cfg. Open the file in a text editor:

sudo nano /etc/haproxy/haproxy.cfg

The configuration below was used for my cluster. Change the IP addresses to match your environment.

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
    #ssl-default-bind-ciphers PROFILE=SYSTEM
    #ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------

frontend api
    bind 192.168.252.171:6443
    default_backend controlplaneapi

frontend apiinternal
    bind 192.168.252.171:22623
    bind 192.168.252.171:22624
    default_backend controlplaneapiinternal

frontend secure
    bind 192.168.252.170:443
    default_backend secure

frontend insecure
    bind 192.168.252.170:80
    default_backend insecure

#---------------------------------------------------------------------
# static backend
#---------------------------------------------------------------------

backend controlplaneapi
    balance source
    server master-01  192.168.252.5:6443 check
    server master-02  192.168.252.6:6443 check
    server master-03  192.168.252.7:6443 check


backend controlplaneapiinternal
    balance source
    server master-01-22623  192.168.252.5:22623 check
    server master-02-22623  192.168.252.6:22623 check
    server master-03-22623  192.168.252.7:22623 check
    server master-01-22624  192.168.252.5:22624 check
    server master-02-22624  192.168.252.6:22624 check
    server master-03-22624  192.168.252.7:22624 check

backend secure
    balance source
    server worker-01  192.168.252.8:443 check
    server worker-02  192.168.252.9:443 check
    server worker-03  192.168.252.10:443 check
    server worker-04  192.168.252.11:443 check
    server worker-05  192.168.252.12:443 check
    server worker-06  192.168.252.13:443 check

backend insecure
    balance roundrobin
    server worker-01  192.168.252.8:80 check
    server worker-02  192.168.252.9:80 check
    server worker-03  192.168.252.10:80 check
    server worker-04  192.168.252.11:80 check
    server worker-05  192.168.252.12:80 check
    server worker-06  192.168.252.13:80 check

Uncategorized

The watch command is a useful utility in Unix-like systems that allows you to execute a command periodically and display its output. However, macOS does not come with watch pre-installed. If you’re running macOS Sequoia and want to use watch, follow the steps below to install it.

Recently I switched to a new MacBook Pro M2 and tried to use the command to watch some OpenShift logs, only to find that watch was not installed.

To install it, just use Homebrew:

brew install watch

Using watch on macOS

Now that watch is installed, you can start using it. The basic syntax is:

watch -n <seconds> <command>

For example, to monitor the disk usage of your system every two seconds, you can run:

watch -n 2 df -h

Additional Options

  • -d: Highlights the differences between updates.
  • -t: Turns off the title/header display.
  • -b: Beeps if the command exits with a non-zero status.
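For example, to keep an eye on OpenShift pods and highlight whatever changes between refreshes:

watch -d -n 5 oc get pods -n openshift-ingress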

Alternative: Using a while Loop

If you prefer not to install watch, you can achieve similar functionality using a while loop in the terminal:

while true; do <command>; sleep <seconds>; done

For example:

while true; do df -h; sleep 2; done

This method works in any macOS version without requiring additional installations.

Linux MAC