HFS 2.5.3 Release Notes for HPE Compute Scale-Up Server 3200 and Superdome Flex Family systems
Linux Operating Systems:
Minimum Linux Distro versions on Compute Scale-up Server 3200:
HFS version 2.5.3 on Compute Scale-up Server 3200 supports the following Linux distro versions:
HFS 2.5.3 on Superdome Flex and Superdome Flex 280 supports the following Linux versions:
For older OS versions, use the following HFS bundles:
Note: Customers running older distro versions not included in HFS 2.5.3 are still supported.
Installation:
The following Linux bootline options are recommended when installing. HFS adds these bootline options automatically when needed:
tsc=nowatchdog                  Prevent the watchdog from changing the clocksource from tsc.
add_efi_memmap                  Ensure all memory is included in the Linux memory map.
udev.children-max=512           Prevent driver load issues when booting.
nmi_watchdog=0                  Disable the SW watchdog, which may have scaling issues on large systems.
watchdog_thresh=30              Increase timeouts on large systems. 32-socket systems may require 60 seconds.
workqueue.watchdog_thresh=60    Increase timeouts on large systems. 32-socket systems may require 120 seconds.
pci=nobar                       Prevent Linux from assigning unassigned BARs.
console=ttyS0,115200            Enable the serial console.
earlyprintk=ttyS0,115200        Display early boot messages. Aids in debugging early boot issues.
Note: Removing "quiet" from the kernel bootline will also aid debugging boot issues.
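If these options must be added manually (HFS normally applies the ones it needs), one approach on RHEL-style systems is grubby. This is a sketch only; adjust the console device and watchdog thresholds for your system:

grubby --update-kernel=ALL --args="tsc=nowatchdog add_efi_memmap udev.children-max=512 nmi_watchdog=0 watchdog_thresh=30 workqueue.watchdog_thresh=60 pci=nobar console=ttyS0,115200 earlyprintk=ttyS0,115200"

On SLES, append the options to GRUB_CMDLINE_LINUX in /etc/default/grub and run grub2-mkconfig -o /boot/grub2/grub.cfg.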
Linux distro links:
HFS (HPE Foundation Software) Description:
HPE Foundation Software (HFS) includes automatic boot-time optimization utilities, reliability features, and technical support tools. Designed for high performance computing, these tools help maximize system performance and availability.
HPE Documentation Links:
HPE Foundation Software (HFS 2.5.3) on HPE Support Center
HPE Foundation Software (HFS 2.5.3) on Software Download Repository
HPE Compute Scale-up Server 3200 Linux Installation Guide
HPE Compute Scale-up Server 3200 Quick Specs
HPE Superdome Flex Administrator Guide
Managing System Performance with HPE Foundation Software
HPE Foundation Software (HFS) commands
PRODUCT MODEL(S):
HPE Compute Scale-Up Server 3200
HPE Superdome Flex
HPE Superdome Flex 280
HFS 2.5.3 ENHANCEMENTS / FIXES:
· Initial RHEL9.5 support
o Add RHEL 9.5 as a supported distro.
· hpe-auto-config:
o RAS CEC handling change
Skip CEC/mcelog configuration when corrected errors are handled by SFW. BIOS versions 009.010.051.000.2308020404 and later cloak corrected errors from the OS until a hidden threshold is reached. Once that threshold is reached, the error is sent to the OS with a specific bit set notifying the OS to immediately offline that memory, so configuring CEC or mcelog is not necessary.
o Add iommu=pt when iommu enabled
Add iommu=pt to work around an issue seen when the IOMMU is enabled, passthrough is not enabled, and the QAT driver is enabled. (A quick verification example follows this list.)
o Allow Rocky Linux
There is no official support for Rocky Linux. This change allows HFS to run on Rocky Linux as if it were RHEL.
· Oracle Linux:
o Install shim_certificate_hpe as part of HFS Oracle Linux pattern
Add the shim_certificate_hpe RPM to the “HPE Foundation Software for Oracle Linux” pattern. shim_certificate_hpe adds a signed EFI shim that allows HPE-signed RPMs to install when Secure Boot is enabled.
· meminfo:
o hwperf: meminfo script missing perl shebang
A recent change broke meminfo. Restore previous functionality.
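As a quick verification example (illustrative only, not part of this release), you can check whether iommu=pt is present on the running kernel command line:

grep -o 'iommu=pt' /proc/cmdline || echo "iommu=pt not set"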
SUPERSEDES:
Version: HFS 2.5.2
UPDATE RECOMMENDATION: Recommended
INSTALLATION INSTRUCTIONS:
Please review all instructions and the "Hewlett Packard Enterprise Support Tool License Terms" or your Hewlett Packard Enterprise support terms and conditions for precautions, scope of license, restrictions, and limitation of liability and warranties before installing this package. It is important that you read and understand these instructions completely before you begin; doing so can determine your success in completing the software update.
Linux Installation instructions:
Notes:
· Note: SLES 15 SP4 kernel version 5.14.21-150400.26.63 has a regression: it does not advertise AVX capabilities, causing poor performance in applications that want to use AVX instructions. This is fixed in kernel version 5.14.21-150400.26.66. See SUSE BZ 1211205 for details.
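As an illustrative check (not part of the release notes), you can confirm whether a running system is affected by verifying the kernel version and whether the kernel advertises AVX:

uname -r                    # 5.14.21-150400.26.63 is affected; 5.14.21-150400.26.66 or later has the fix
grep -c avx /proc/cpuinfo   # prints 0 on an affected kernel even when the CPU supports AVX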
HFS ISO installation Instructions:
Note: To see the list of groups to install, use: dnf grouplist
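For example (the group name below is illustrative; use the exact name reported by dnf grouplist):

dnf grouplist
dnf groupinstall "HPE Foundation Software"    # illustrative group name; substitute the name shown by grouplist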
f. Reboot the system to activate the change:
reboot
g. After the system reaches the EFI shell, in the RMC command window, enter:
power reset npar pnum=0
Note: Refer to your operating system documentation for details on adding directories of RPM packages as available software sources/repositories for use by zypper, dnf, and yum.
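As an illustration only (the repo id and path are examples, not shipped defaults), a directory of RPMs such as the mounted HFS ISO can be added as a dnf/yum repository with a file like the following; with zypper, the equivalent is "zypper addrepo dir:///mnt/hfs hfs-local":

# /etc/yum.repos.d/hfs-local.repo -- example only; point baseurl at your mount point
[hfs-local]
name=HPE Foundation Software (local media)
baseurl=file:///mnt/hfs
enabled=1
gpgcheck=0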
Installing HFS in a container:
These directions are in the hfs-container rpm README file.
# These are example commands to build a container, install HPE-HFS, and run the resulting
# container on an HPE Scale Up Server 3200 worker node in a Red Hat OpenShift Platform 4.14 cluster.
# This solution requires the Dockerfile and hfs-config.repo file included in the hfs-container rpm.
#
# This solution also requires the HPE HFS iso file, hpe-foundation-2.5.3-cd1-media-rhel94-x86_64.iso.
# One method used to provide the iso contents to the container build is a loop mount command shown below.
#
mkdir ./HPE-Foundation-Software-2.5.3
sudo mount -o loop ./hpe-foundation-2.5.3-cd1-media-rhel94-x86_64.iso ./HPE-Foundation-Software-2.5.3
#
# Commands to extract the hfs-container files from the HPE-HFS iso image
#
rpm2cpio HPE-Foundation-Software-2.5.3/RPMS/hfs-container*.rpm | cpio -icd
cp opt/hpe/container/* .
#
# With copies of the Dockerfile and hfs-config.repo, and the iso file mounted as shown above,
# the command below will build a local image using the definitions in Dockerfile.
#
sudo podman image build -f Dockerfile -t hfs:1
#
# The resulting container can be tested for basic functionality on the build host with the following command.
#
sudo podman run -it --network host --privileged --pid host --volume /:/host:ro localhost/hfs:1
#
# The commands below were developed from the Red Hat OpenShift documentation provided in the link below:
#
# https://docs.openshift.com/container-platform/4.13/registry/securing-exposing-registry.html#registry-exposing-hpe-hfs-registry-manually_securing-exposing-registry
#
#
# Login to the Openshift cluster
#
oc login https://api.${CLUSTER-HOST}:6443 -u kubeadmin -p ${PASSWORD}
#
# Get the default route to the internal OpenShift registry:
#
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
#
# Get the certificate of the Ingress Operator on the internal OpenShift registry:
#
oc get secret -n openshift-ingress router-certs-default -o go-template='{{index .data "tls.crt"}}' | base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/${HOST}.crt > /dev/null
#
# Enable the cluster’s default certificate to trust the route using the following commands:
#
sudo update-ca-trust enable
#
# Log in with podman using the default route:
#
sudo podman login -u kubeadmin -p $(oc whoami -t) $HOST
#
# Tag the local image with the destination project/imagestream:version
#
sudo podman tag localhost/hfs:1 ${HOST}/hpe-hfs/hfs:version1
#
# Create the hpe-hfs namespace for the rest of the process below
#
oc create namespace hpe-hfs
#
# Change to hpe-hfs namespace
#
oc project hpe-hfs
#
# Add permission to run privileged containers and pods from daemonsets
#
oc adm policy add-scc-to-user privileged -z default -n hpe-hfs
#
# Check if the destination imagestream already exists
#
oc describe imagestream hfs -n hpe-hfs
#
# If the imagestream does not exist, create a new one
#
oc create imagestream hfs -n hpe-hfs
#
# Push the image from the local podman registry to the internal OpenShift registry
#
sudo podman push ${HOST}/hpe-hfs/hfs:version1
#
# Check the destination imagestream
#
oc describe imagestream hfs -n hpe-hfs
#
# Run a test pod using the new image
#
oc apply -f ./hfs.yaml
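#
# For reference only: a minimal sketch of what hfs.yaml might contain (the actual file ships
# with the hfs-container rpm and may differ) -- a privileged pod named hfs-test that mounts
# the host filesystem read-only, mirroring the podman test run above. The internal-registry
# image path below is an assumption.
#
apiVersion: v1
kind: Pod
metadata:
  name: hfs-test
  namespace: hpe-hfs
spec:
  hostNetwork: true
  hostPID: true
  containers:
  - name: hfs
    image: image-registry.openshift-image-registry.svc:5000/hpe-hfs/hfs:version1
    securityContext:
      privileged: true
    volumeMounts:
    - name: host
      mountPath: /host
      readOnly: true
  volumes:
  - name: host
    hostPath:
      path: /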
#
# Check the status of the pod
#
oc get pods
#
# rsh in to the pod to manually change the configuration and execute the test
#
oc rsh hfs-test
#
# The following file is an example of tuning suggestions for the HPE Scale Up Server 3200.
# This example has 8 sockets and divides each socket in half, reserving half the cores for general
# purpose, OS, and OpenShift control plane workloads. The other half are isolated for
# low latency and high performance workloads. The hugepages section is also an example.
#
10-profile-hpe-auto-config.yaml
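For illustration only, the reserved/isolated split and hugepages settings described above might be expressed as an OpenShift PerformanceProfile along the lines of the fragment below. This is an assumption about the file's general shape, not the contents of the actual 10-profile-hpe-auto-config.yaml, and the CPU ranges and counts are placeholders:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: hpe-auto-config              # placeholder name
spec:
  cpu:
    reserved: "0-13,112-125"         # placeholder: half of each socket for OS and control plane
    isolated: "14-111,126-223"       # placeholder: remaining cores for low-latency workloads
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - size: 1G
      count: 16                      # placeholder count
  nodeSelector:
    node-role.kubernetes.io/worker: ""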
DISCLAIMER:
The information in this document is subject to change without notice. Hewlett Packard Enterprise makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett Packard Enterprise shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
This document contains proprietary information that is protected by copyright. All rights are reserved. No part of this document may be reproduced, photocopied, or translated to another language without the prior written consent of Hewlett Packard Enterprise.
(C) Copyright 2024 Hewlett Packard Enterprise Development L.P.