HFS 2.5.6 Release Notes for HPE Compute Scale-Up Server 3200 and Superdome Flex Family systems
Linux Operating Systems:
Minimum Linux Distro versions on Compute Scale-up Server 3200:
HFS version 2.5.6 on Compute Scale-up Server 3200 supports Linux Distro versions:
HFS version 2.5.5 on Compute Scale-up Server 3200 supports Linux Distro versions:
HFS 2.5.4 on Superdome Flex and Superdome Flex 280 supports Linux versions.
For older OS versions, use the following HFS bundles:
Note:
Customers running older distro versions not included in HFS 2.5.6 are still
supported.
Installation:
The following Linux bootline options are recommended when installing. HFS will
add the bootline options automatically when needed:
tsc=nowatchdog                 | Prevent the watchdog from changing the clocksource from tsc.
add_efi_memmap                 | Ensure all memory is included in the Linux memory map.
udev.children-max=512          | Prevent driver load issues when booting.
nmi_watchdog=0                 | Disable the SW watchdog, which may have scaling issues on large systems.
watchdog_thresh=30             | Increase timeouts on large systems. 32-socket systems may require 60 seconds.
workqueue.watchdog_thresh=60   | Increase timeouts on large systems. 32-socket systems may require 120 seconds.
pci=nobar                      | Prevent Linux from assigning unassigned BARs.
pci=norom                      | Do not assign address space to expansion ROMs that do not already have BIOS-assigned address ranges.
console=ttyS0,115200           | Enable the serial console.
earlyprintk=ttyS0,115200       | Display early boot messages. Aids in debugging early boot issues.
Note:
Removing "quiet" from the kernel bootline will also aid debugging
boot issues.
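As an illustration only, these options can be appended to the kernel command line
through /etc/default/grub; the file path, grub2-mkconfig output path, and the idea
of setting them manually are assumptions here, since HFS adds the options
automatically when needed:

# /etc/default/grub (illustrative sketch; merge with any existing options)
GRUB_CMDLINE_LINUX="tsc=nowatchdog add_efi_memmap udev.children-max=512 \
nmi_watchdog=0 watchdog_thresh=30 workqueue.watchdog_thresh=60 \
pci=nobar pci=norom console=ttyS0,115200 earlyprintk=ttyS0,115200"

# Regenerate the grub configuration and reboot for the change to take effect:
grub2-mkconfig -o /boot/grub2/grub.cfg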
Linux distro links:
HFS (HPE Foundation Software) Description:
HPE Foundation Software (HFS) includes automatic boot-time optimization
utilities, reliability features, and technical support tools. Designed for
high performance computing, these tools help maximize system performance and
availability.
HPE Documentation Links:
HPE Foundation Software (HFS 2.5.6) on HPE Support Center
HPE Foundation Software (HFS 2.5.6) on Software Download Repository
HPE Compute Scale-up Server 3200 Linux Installation Guide
HPE Compute Scale-up Server 3200 Quick Specs
HPE Superdome Flex Administrator Guide
Managing System Performance with HPE Foundation Software
HPE Foundation Software (HFS) commands
PRODUCT MODEL(S):
HPE Compute Scale-Up Server 3200
HPE Superdome Flex
HPE Superdome Flex 280
HFS 2.5.6 ENHANCEMENTS / FIXES:
· kdump:
  o kdump speedup by using more CPUs.
  o Restore kdump dump status messages.
· Pilot4:
  o Remove Pilot4 PCI resource0 file rw permission (Superdome Flex and Flex 280).
· dcd:
  o Update dcd to 4.8-5.3.
SUPERSEDES:
Version: HFS 2.5.5
UPDATE RECOMMENDATION: Recommended
Notes:
· The Pilot4 Linux /sys/device resource0 file, when opened (as root) and
  mapped (mmap()), can generate PIC errors on consecutive reads or writes,
  which can crash the system. By default, the resource0 file has root-only rw
  permission, so non-root users cannot hit this issue. Also, the file needs to
  be mapped, meaning commands such as cat will not hit the issue. The change
  is to remove rw permission on the resource0 file, to prevent root programs
  from mapping the file and hitting the issue. root can change the permission
  back to rw if desired, though that exposes the issue, which can crash the
  system. Pilot4 is on the Superdome Flex and Flex 280 platforms. It is not on
  the Compute Scale-up Server 3200 platform.
  For more details and a workaround, see Customer Advisory a00108899.
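As an illustration only (the exact sysfs path of the Pilot4 device is
platform-specific, and the path below is a placeholder, not an actual path),
root could verify the shipped permissions, or restore rw at the cost of
re-exposing the issue:

ls -l /sys/bus/pci/devices/<pilot4-device>/resource0       # verify permissions
chmod u+rw /sys/bus/pci/devices/<pilot4-device>/resource0  # not recommended; re-exposes the issue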
INSTALLATION INSTRUCTIONS:
Please
review all instructions and the "Hewlett Packard Enterprise Support Tool
License Terms" or your Hewlett Packard Enterprise support terms and
conditions for precautions, scope of license, restrictions, and limitation of
liability and warranties, before installing this package. It is important that
you read and understand these instructions completely before you begin. This
can determine your success in completing the software update.
Linux Installation instructions:
Installation Notes:
· Note: SLES 15 SP4 kernel version 5.14.21-150400.26.63 has a regression: it
  does not advertise AVX capabilities, causing poor performance for
  applications that want to use AVX instructions. Fixed in version
  5.14.21-150400.26.66. See SUSE BZ 1211205 for details.
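As a quick check (illustrative only; output format may vary by distro), the AVX
flags advertised to user space and the running kernel version can be confirmed
with:

grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u
uname -r    # 5.14.21-150400.26.66 or later contains the fix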
HFS ISO installation Instructions:
Note: To see the list of groups to install, use: dnf grouplist
f. Reboot the system to activate the change:
   reboot
g. After the system reaches the EFI shell, in the RMC command window, enter:
   power reset npar pnum=0
Note:
Refer to your operating system documentation for details on adding directories
of RPM packages as available software sources/repositories for use by zypper,
dnf, and yum.
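As an illustration only (the mount point, repository id, and file name below
are assumptions), a mounted HFS ISO directory can be added as a local
repository for dnf/yum with a .repo file, or for zypper with addrepo:

# /etc/yum.repos.d/hpe-hfs-local.repo  (illustrative sketch)
[hpe-hfs-local]
name=HPE Foundation Software 2.5.6 (local ISO)
baseurl=file:///mnt/HPE-Foundation-Software-2.5.6
enabled=1
gpgcheck=0

# SLES equivalent (illustrative)
zypper addrepo file:///mnt/HPE-Foundation-Software-2.5.6 hpe-hfs-local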
Installing HFS in a container:
These directions are in the hfs-container rpm README file.
#
# These are example commands to build a container, install HPE-HFS, and run
# the resulting container on a HPE Scale Up Server 3200 worker node in a
# Red Hat OpenShift Platform 4.14 cluster. This solution requires the
# Dockerfile and hfs-config.repo file included in the hfs-container rpm.
#
# This solution also requires the HPE HFS iso file,
# hpe-foundation-2.5.6-cd1-media-rhel96-x86_64.iso. One method used to
# provide the iso contents to the container build is a loop mount command
# shown below.
#
mkdir ./HPE-Foundation-Software-2.5.6
sudo mount -o loop ./hpe-foundation-2.5.6-cd1-media-rhel96-x86_64.iso \
    ./HPE-Foundation-Software-2.5.6
#
# Commands to extract the hfs-container files from the HPE-HFS iso image
#
rpm2cpio HPE-Foundation-Software-2.5.6/RPMS/hfs-container*.rpm | cpio -icd
cp opt/hpe/container/* .
#
# With copies of the Dockerfile and hfs-config.repo, and the iso file mounted
# as shown above, the command below will build a local image using the
# definitions in Dockerfile.
#
sudo podman image build -f Dockerfile -t hfs:1
#
# The resulting container can be tested for basic functionality on the build
# host with the following command.
#
sudo podman run -it --network host --privileged --pid host \
    --volume /:/host:ro localhost/hfs:1
#
# The commands below were developed from the Red Hat OpenShift documentation
# provided in the link below
#
# https://docs.openshift.com/container-platform/4.13/registry/securing-exposing-registry.html#registry-exposing-hpe-hfs-registry-manually_securing-exposing-registry
#
#
# Log in to the OpenShift cluster
#
oc login https://api.${CLUSTER-HOST}:6443 -u kubeadmin -p ${PASSWORD}
#
# Get the default route to the internal OpenShift registry:
#
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
#
# Get the certificate of the Ingress Operator on the internal OpenShift
# registry:
#
oc get secret -n openshift-ingress router-certs-default \
    -o go-template='{{index .data "tls.crt"}}' |
    base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/${HOST}.crt > /dev/null
#
# Enable the cluster’s default certificate to trust the route using the
# following commands:
#
sudo update-ca-trust enable
#
# Log in with podman using the default route:
#
sudo podman login -u kubeadmin -p $(oc whoami -t) $HOST
#
# Tag the local image with the destination project/imagestream:version
#
sudo podman tag localhost/hfs:1 ${HOST}/hpe-hfs/hfs:version1
#
# Create the hpe-hfs namespace for the rest of the process below
#
oc create namespace hpe-hfs
#
# Change to hpe-hfs namespace
#
oc project hpe-hfs
#
# Add permission to run privileged containers and pods from daemonsets
#
oc adm policy add-scc-to-user privileged -z default -n hpe-hfs
#
# Check if the destination imagestream already exists
#
oc describe imagestream hfs -n hpe-hfs
#
# If the imagestream does not exist, create a new one
#
oc create imagestream hfs -n hpe-hfs
#
# Push the image from the local podman registry to the internal OpenShift
# registry
#
sudo podman push ${HOST}/hpe-hfs/hfs:version1
#
# Check the destination imagestream
#
oc describe imagestream hfs -n hpe-hfs
#
# Run a test pod using the new image
#
oc apply -f ./hfs.yaml
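#
# The hfs.yaml file is provided with the hfs-container rpm and is not
# reproduced in these notes. As an illustration only (the image path, names,
# and fields below are assumptions, not the shipped file), a minimal
# privileged test pod mirroring the podman run flags above might look like:
#
#   apiVersion: v1
#   kind: Pod
#   metadata:
#     name: hfs-test
#     namespace: hpe-hfs
#   spec:
#     hostNetwork: true
#     hostPID: true
#     containers:
#     - name: hfs
#       image: image-registry.openshift-image-registry.svc:5000/hpe-hfs/hfs:version1
#       command: ["sleep", "infinity"]   # assumption: keep the pod running for oc rsh
#       securityContext:
#         privileged: true
#       volumeMounts:
#       - name: host
#         mountPath: /host
#         readOnly: true
#     volumes:
#     - name: host
#       hostPath:
#         path: /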
#
# Check the status of the pod
#
oc get pods
#
# rsh in to the pod to manually change the configuration and execute the test
#
oc rsh hfs-test
#
# The following file is an example of tuning suggestions for the HPE Scale Up
# Server 3200. This example has 8 sockets and divides each socket in half for
# reserved cores for general purpose, OS, and OpenShift control plane
# workloads. The other half are isolated for low latency and high performance
# workloads. The hugepages section is also an example.
#
10-profile-hpe-auto-config.yaml
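#
# The 10-profile-hpe-auto-config.yaml file itself is provided with the
# hfs-container rpm and is not reproduced in these notes. As an illustration
# only, tuning of this kind can be expressed as a Node Tuning Operator
# PerformanceProfile; the CPU ranges, hugepages counts, and node selector
# below are assumptions, not the shipped file's contents:
#
#   apiVersion: performance.openshift.io/v2
#   kind: PerformanceProfile
#   metadata:
#     name: hpe-auto-config
#   spec:
#     cpu:
#       reserved: "0-31"     # illustrative range; real values depend on the socket/core layout
#       isolated: "32-447"   # illustrative range for low latency / high performance workloads
#     hugepages:
#       defaultHugepagesSize: "1G"
#       pages:
#       - size: "1G"
#         count: 16
#     nodeSelector:
#       node-role.kubernetes.io/worker: ""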
DISCLAIMER:
The information in this document is subject to change
without notice.
Hewlett Packard Enterprise makes no warranty of any
kind with regard to this material, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose. Hewlett
Packard Enterprise shall not be liable for errors contained herein or for
incidental or consequential damages in connection with the furnishing,
performance, or use of this material.
This document
contains proprietary information that is protected by copyright. All rights are
reserved. No part of this document may be reproduced, photocopied, or
translated to another language without the prior written consent of Hewlett
Packard Enterprise.
(C) Copyright 2025
Hewlett Packard Enterprise Development L.P.