HFS 2.5.2 Release Notes for HPE Compute Scale-Up Server 3200 and Superdome Flex Family systems
Linux Operating Systems:
Minimum Linux Distro versions on Compute Scale-up Server 3200:
HFS version 2.5.2 on Compute Scale-up Server 3200 supports the following Linux distro versions:
HFS 2.5.2 on Superdome Flex and Superdome Flex 280 supports the following Linux versions:
For older OS versions, use the following HFS bundles:
Note: Customers running older distro versions not included in HFS 2.5.2 are still supported.
Installation:
The following Linux bootline options are recommended when installing; HFS adds them automatically when needed (see the GRUB example below):
· tsc=nowatchdog Prevent the watchdog from changing the clocksource from tsc.
· add_efi_memmap Ensure all memory is included in the Linux memory map.
· udev.children-max=512 Prevent driver load issues when booting.
· nmi_watchdog=0 Disable the software watchdog, which may have scaling issues on large systems.
· watchdog_thresh=30 Increase timeouts on large systems.
· workqueue.watchdog_thresh=60 Increase timeouts on large systems.
· pci=nobar Prevent Linux from assigning unassigned BARs.
· console=ttyS0,115200 Enable the serial console.
· earlyprintk=ttyS0,115200 Display early boot messages; aids in debugging early boot issues.
Note: Removing "quiet" from the kernel bootline will also aid debugging boot issues.
Linux distro links:
HFS (HPE Foundation Software) Description:
HPE Foundation Software (HFS) includes automatic boot-time optimization utilities, reliability features, and technical support tools. Designed for high performance computing, these tools help maximize system performance and availability.
HPE Documentation Links:
Download latest HPE Foundation Software (HFS 2.5.2) from HPE Support Center
Download latest HPE Foundation Software (HFS 2.5.2) from Software Download Repository
HPE Compute Scale-up Server 3200 Linux Installation Guide
HPE Compute Scale-up Server 3200 Quick Specs
HPE Superdome Flex Administrator Guide
Managing System Performance with HPE Foundation Software
HPE Foundation Software (HFS) commands
PRODUCT MODEL(S):
HPE Compute Scale-Up Server 3200
HPE Superdome Flex
HPE Superdome Flex 280
HFS 2.5.2 ENHANCEMENTS / FIXES:
· hpe-auto-config:
o Increase watchdog threshold timeout on large systems
On larger systems (>512 CPUs), increase the current settings of
watchdog_thresh=30 workqueue.watchdog_thresh=60
to
watchdog_thresh=60 workqueue.watchdog_thresh=120
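A quick way to confirm the thresholds that are active on a running system is to read the standard kernel interfaces (a minimal sketch):
# Soft-lockup watchdog threshold, in seconds
cat /proc/sys/kernel/watchdog_thresh
# Workqueue watchdog threshold, in seconds
cat /sys/module/workqueue/parameters/watchdog_thresh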
o Add iommu=pt for rhel9/rhel10
RHEL9/10 disables IOMMU passthrough by default. Disabled passthrough combined with intel_iommu=on,sm_on causes the qat_4xxx driver to trigger soft lockups on boot. If hpe-auto-config adds intel_iommu=on on RHEL9/10, it now also adds iommu=pt to enable passthrough.
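To confirm that passthrough took effect after boot, check the bootline and the default IOMMU domain type logged by the kernel (a minimal sketch; the exact log wording varies by kernel version):
# Verify the option made it onto the kernel command line
grep -o 'iommu=pt' /proc/cmdline
# Recent kernels log the default DMA domain type at boot
dmesg | grep -i 'default domain type'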
o Replace use of egrep with grep -E
egrep is deprecated and should be replaced with grep -E.
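For example, a search previously written with egrep becomes:
# Old, deprecated form
egrep 'watchdog_thresh=[0-9]+' /proc/cmdline
# Equivalent replacement
grep -E 'watchdog_thresh=[0-9]+' /proc/cmdline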
o Remove bau=0
The command-line setting bau=0 has not been needed for a long time, so it has been removed.
· numatools
o Update numatools for kernel 6.11.0
Handle upstream vma change.
o Handle removal of page_mapcount() from development kernels
Modify to handle upstream page_mapcount() change.
o Stricter compiler cleanup
Modify to handle stricter compiler warning settings.
o Integrate upstream changes
Modify to handle upstream pmd and pud changes.
· hwperf
o Don't bail completely on missing device
When a device in the cache disappears (hotplug remove), don't bail out after failing to find the device node. Instead, use the existing "missing_devices" path to report the missing device to the user (with tips to update the cache). If the user has execute permission for lspci (newer pciutils), try to use lspci when refreshing the cache while running as non-root. This might result in a run that uses the correct lspci but fails to cache it; the user is informed when this happens.
o Cleanup for stricter RHEL10 compiler settings
Modify to handle stricter compiler warning settings.
· dcd
o Update to dcd 4.4-6-1 (LINUXDS-454)
Update dcd to the latest version.
o Add SELinux support (LINUXDS-428)
Add SELinux configuration files for dcd (a verification sketch follows this list).
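On an SELinux-enabled system, the new dcd policy from the item above can be checked after installation (a sketch; the policy module name is an assumption based on the package name):
# Confirm SELinux is enforcing
getenforce
# Look for the dcd policy module (module name assumed)
sudo semodule -l | grep -i dcd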
SUPERSEDES:
Version: HFS 2.5.1
UPDATE RECOMMENDATION: Recommended
LANGUAGES:
International
English
INSTALLATION INSTRUCTIONS:
Please review all instructions and the "Hewlett Packard Enterprise Support Tool License Terms" or your Hewlett Packard Enterprise support terms and conditions for precautions, scope of license, restrictions, and limitation of liability and warranties, before installing this package. It is important that you read and understand these instructions completely before you begin. This can determine your success in completing the software update.
Linux Installation instructions:
Notes:
· Note: The SLES 15 SP4 kernel version 5.14.21-150400.26.63 has a regression: it does not advertise AVX capabilities, causing poor performance for applications that want to use AVX instructions. This is fixed in kernel version 5.14.21-150400.26.66; see SUSE BZ 1211205 for details.
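To check whether a running kernel advertises AVX, inspect the CPU flags (a minimal sketch; empty output on AVX-capable hardware indicates the regression):
# List the AVX-related flags the kernel advertises
grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u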
HFS ISO Installation Instructions:
Note: To see the list of groups to install, use: dnf grouplist
f. Reboot the system to activate the change:
reboot
g. After the system reaches the EFI shell, in the RMC command window, enter:
power reset npar pnum=0
Note: Refer to your operating system documentation for details on adding directories of RPM packages as available software sources/repositories for use by zypper, dnf, and yum.
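As a hedged example for a dnf- or yum-based distro, a mounted HFS ISO can be exposed as a local repository with a file like the following; the repo id, name, and mount path are illustrative:
# /etc/yum.repos.d/hpe-hfs.repo (illustrative path and repo id)
[hpe-hfs]
name=HPE Foundation Software (local ISO)
baseurl=file:///mnt/hpe-hfs
enabled=1
gpgcheck=0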
Installing HFS in a container:
These directions are in the hfs-container rpm README file.
# These are example commands to build a container, install HPE-HFS, and run the
# resulting container on an HPE Scale Up Server 3200 worker node in a Red Hat
# OpenShift Platform 4.14 cluster. This solution requires the Dockerfile and
# hfs-config.repo file included in the hfs-container rpm.
#
# This solution also requires the HPE HFS iso file,
# hpe-foundation-2.5.2-cd1-media-rhel94-x86_64.iso. One method used to provide
# the iso contents to the container build is the loop mount command shown below.
#
mkdir ./HPE-Foundation-Software-2.5.2
sudo mount -o loop ./hpe-foundation-2.5.2-cd1-media-rhel94-x86_64.iso ./HPE-Foundation-Software-2.5.2
#
# Commands to extract the hfs-container files from the HPE-HFS iso image
#
rpm2cpio HPE-Foundation-Software-2.5.2/RPMS/hfs-container*.rpm | cpio -icd
cp opt/hpe/container/* .
#
# With copies of the Dockerfile and hfs-config.repo, and the iso file mounted as
# shown above, the command below will build a local image using the definitions
# in Dockerfile.
#
sudo podman image build -f Dockerfile -t hfs:1
#
# The resulting container can be tested for basic functionality on the build
# host with the following command.
#
sudo podman run -it --network host --privileged --pid host --volume /:/host:ro localhost/hfs:1
#
# The commands below were developed from the Red Hat OpenShift documentation
# provided in the link below
#
# https://docs.openshift.com/container-platform/4.13/registry/securing-exposing-registry.html#registry-exposing-hpe-hfs-registry-manually_securing-exposing-registry
#
#
# Log in to the OpenShift cluster
#
oc login https://api.${CLUSTER_HOST}:6443 -u kubeadmin -p ${PASSWORD}
#
# Get the default route to the internal OpenShift registry:
#
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
#
# Get the certificate of the Ingress Operator on the internal OpenShift
# registry:
#
oc get secret -n openshift-ingress router-certs-default -o go-template='{{index .data "tls.crt"}}' | \
  base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/${HOST}.crt > /dev/null
#
# Enable the cluster's default certificate to trust the route using the
# following command:
#
sudo update-ca-trust enable
#
# Log in with podman using the default route:
#
sudo podman login -u kubeadmin -p $(oc whoami -t) $HOST
#
# Tag the local image with the destination project/imagestream:version
#
sudo podman tag localhost/hfs:1 ${HOST}/hpe-hfs/hfs:version1
#
# Create the hpe-hfs namespace for the rest of the process below
#
oc create namespace hpe-hfs
#
# Change to hpe-hfs namespace
#
oc project hpe-hfs
#
# Add permission to run privileged containers and pods from daemonsets
#
oc adm policy add-scc-to-user privileged -z default -n hpe-hfs
#
# Check if the destination imagestream already exists
#
oc describe imagestream hfs -n hpe-hfs
#
# If the imagestream does not exist, create a new one
#
oc create imagestream hfs -n hpe-hfs
#
# Push the image from the local podman registry to the internal OpenShift
# registry
#
sudo podman push ${HOST}/hpe-hfs/hfs:version1
#
# Check the destination imagestream
#
oc describe imagestream hfs -n hpe-hfs
#
# Run a test pod using the new image
#
oc apply -f ./hfs.yaml
#
# Check the status of the pod
#
oc get pods
#
# rsh into the pod to manually change the configuration and execute the test
#
oc rsh hfs-test
#
# The following file is an example of tuning suggestions for the HPE Scale Up
# Server 3200. This example has 8 sockets and divides each socket in half: the
# reserved cores serve general purpose, OS, and OpenShift control plane
# workloads, while the other half are isolated for low latency and high
# performance workloads. The hugepages section is also an example.
#
10-profile-hpe-auto-config.yaml
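A minimal sketch of what such a profile might contain, using the OpenShift Node Tuning Operator's PerformanceProfile API; the kind, CPU ranges, and hugepage counts below are illustrative assumptions, not the contents of the shipped file:
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: hpe-auto-config
spec:
  cpu:
    # Illustrative split; the shipped example divides each of the 8 sockets in half
    reserved: "0-31"
    isolated: "32-63"
  hugepages:
    defaultHugepagesSize: "1G"
    pages:
    - size: "1G"
      count: 16    # illustrative count
  nodeSelector:
    node-role.kubernetes.io/worker: ""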
DISCLAIMER:
The information in this document is subject to change without notice. Hewlett Packard Enterprise makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett Packard Enterprise shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
This document contains proprietary information that is protected by copyright. All rights are reserved. No part of this document may be reproduced, photocopied, or translated to another language without the prior written consent of Hewlett Packard Enterprise.
(C) Copyright 2024 Hewlett Packard Enterprise Development L.P.