HFS 2.5.1 Release Notes for HPE Compute Scale-up Server 3200 and Superdome Flex family systems
Linux Operating Systems:
Minimum Linux distro versions on Compute Scale-up Server 3200:
HFS 2.5.1 on Compute Scale-up Server 3200 supports the following Linux distro versions:
HFS 2.5.1 on Superdome Flex and Superdome Flex 280 supports the following Linux versions:
For older OS versions, use the following HFS bundles:
Note: Customers running older distro versions not included in HFS 2.5.1 are still supported.
Installation:
The following Linux bootline options are recommended when installing. HFS will add the bootline options automatically when needed (a manual example follows the note after this list):
· tsc=nowatchdog: Prevents the watchdog from changing the clocksource from tsc.
· add_efi_memmap: Ensures all memory is included in the Linux memory map.
· udev.children-max=512: Prevents driver load issues when booting.
· nmi_watchdog=0: Disables the software watchdog, which may have scaling issues on large systems.
· watchdog_thresh=30: Increases timeouts on large systems.
· workqueue.watchdog_thresh=60: Increases timeouts on large systems.
· pci=nobar: Prevents Linux from assigning unassigned BARs.
· console=ttyS0,115200: Enables the serial console.
· earlyprintk=ttyS0,115200: Displays early boot messages; aids in debugging early boot issues.
Note: Removing "quiet" from the kernel bootline will also aid debugging boot issues.
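For reference, a minimal sketch of adding these options by hand on a RHEL-style system (HFS normally adds them automatically; the grubby invocation here is illustrative, not the HFS mechanism):
# grubby --update-kernel=ALL --args="tsc=nowatchdog add_efi_memmap udev.children-max=512 nmi_watchdog=0 watchdog_thresh=30 workqueue.watchdog_thresh=60 pci=nobar console=ttyS0,115200"
# grubby --info=DEFAULT
The second command prints the resulting boot entry so the options can be verified before rebooting.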
Linux distro links:
HFS (HPE Foundation Software) Description:
HPE Foundation Software (HFS) includes automatic boot-time optimization utilities, reliability features, and technical support tools. Designed for high performance computing, these tools help maximize system performance and availability.
HPE Documentation Links:
Download latest HPE Foundation Software (HFS 2.5.1) from HPE Support Center
Download latest HPE Foundation Software (HFS 2.5.1) from Software Download Repository
HPE Compute Scale-up Server 3200 Linux Installation Guide
HPE Compute Scale-up Server 3200 Quick Specs
HPE Superdome Flex Administrator Guide
Managing System Performance with HPE Foundation Software
HPE Foundation Software (HFS) commands
PRODUCT MODEL(S):
HPE Compute Scale-up Server 3200
HPE Superdome Flex
HPE Superdome Flex 280
HFS 2.5.1 ENHANCEMENTS / FIXES:
· hpe-auto-config
o Add "tsx=on" for SAP HANA on newer RHEL versions
Upstream introduced disabling of TSX with commit 95c5824f75f3, and SAP HANA wants TSX, so Red Hat added the "tsx=on" boot parameter to the SAP notes (RHEL 8: 2777782; RHEL 9: 3108302). SUSE defaults to "tsx=on".
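On RHEL, the equivalent manual change could look like this (a sketch only; hpe-auto-config applies the parameter automatically when needed):
# grubby --update-kernel=ALL --args="tsx=on"
# cat /proc/cmdline    (after reboot, confirm tsx=on is present)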
o Add dracut.conf config to rpm file list
Previously, /etc/dracut.conf.d/hpe-auto-config.conf remained after uninstalling hpe-auto-config. The file has been added to the rpm file list so it is deleted on uninstall.
o kdump: Get apicid of CPU0 for disable_cpu_apicid
10_kdump.sh incorrectly assumed the APIC ID of CPU0 to be 0 when applying disable_cpu_apicid=0 to the kdump command line. CPU0 is not guaranteed to have an APIC ID of 0, and trying to kdump without the BSP disabled can lead to hangs or BIOS halts while the kdump kernel boots. A new function computes the initial APIC ID of CPU0 and applies the correct disable_cpu_apicid=X setting to the kdump command line.
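To illustrate the idea (a hypothetical sketch, not the actual 10_kdump.sh code), the initial APIC ID of CPU0 can be read from /proc/cpuinfo:
# Hypothetical sketch: print the "initial apicid" reported for processor 0
apicid=$(awk '/^processor/{p=$NF} p==0 && /^initial apicid/{print $NF; exit}' /proc/cpuinfo)
echo "disable_cpu_apicid=${apicid}"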
o kdump: Blacklist additional drivers
Add a few more drivers to the default blacklist. These drivers have known issues or are known not to be needed during kdump.
Drivers: nvidia nd_pmem dax_pmem iaa_crypt idxd qat_4xxx ipmi_si kvm mana
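As a rough equivalent, the same drivers could be blacklisted by hand on the kdump kernel command line (illustrative only; the mechanism HFS uses may differ). On RHEL, this string could be appended to KDUMP_COMMANDLINE_APPEND in /etc/sysconfig/kdump:
rd.driver.blacklist=nvidia,nd_pmem,dax_pmem,iaa_crypt,idxd,qat_4xxx,ipmi_si,kvm,mana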
· topology
o Add support for VMD devices
When VMD is enabled, connected NVMe drives receive PCI addresses that have no SMBIOS entries, and therefore no GEOIDs. The 5-digit segment values also cause the drives to be placed at the bottom of the --io output.
Connect each VMD NVMe address to its root address by reading and decoding the sysfs file structure. This root address has a type 9 SMBIOS entry, giving us a GEOID.
Sort the list of devices printed with --io to place devices in order of their slot address.
Skip printing "Intel Volume Management Device NVMe RAID Controller" devices that appear when VMD is enabled. These are not physical RAID cards we care about.
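To illustrate the sysfs decoding (the device addresses below are hypothetical examples; actual values vary by system):
# A VMD-attached NVMe device lives in a synthetic PCI domain (the 5-digit "10000" segment)
readlink -f /sys/class/nvme/nvme0/device
# .../pci0000:60/0000:60:05.5/pci10000:00/10000:01:00.0
# Walking up the path, the VMD root port (0000:60:05.5 here) is the address with a
# type 9 SMBIOS slot entry, which supplies the GEOID for the drive.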
· msr-tools
· shim_certificate_hpe
o New shim_certificate_hpe RPM
Oracle UEK changed the way Secure Boot keys are handled. Added /boot/efi/EFI/redhat/shim_certificate.efi so the HPE platform key is properly recognized.
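To confirm the certificate file is in place after installing the RPM (mokutil shown only as a general Secure Boot status check):
ls -l /boot/efi/EFI/redhat/shim_certificate.efi
mokutil --sb-state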
· SELinux config
o hpe_irqbalance: Add SELinux support
The SELinux subpackage provides a policy module for hpe_irqbalance and a simple man page for hpe_irqbalance-selinux. RHEL only, RHEL 8.8 and newer.
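For example, the policy module can be verified after installing the subpackage (a sketch; the package name is taken from the note above):
# dnf install hpe_irqbalance-selinux
# semodule -l | grep hpe_irqbalance
# man hpe_irqbalance-selinux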
SUPERSEDES:
Version: HFS 2.5.1
UPDATE RECOMMENDATION: Recommended
LANGUAGES:
International
English
INSTALLATION INSTRUCTIONS:
Please review all instructions and the "Hewlett Packard Enterprise Support Tool License Terms" or your Hewlett Packard Enterprise support terms and conditions for precautions, scope of license, restrictions, and limitation of liability and warranties before installing this package. It is important that you read and understand these instructions completely before you begin, as this can determine your success in completing the software update.
Linux Installation instructions:
Notes:
· Note: SLES 15 SP4 kernel version 5.14.21-150400.26.63 has a regression: it does not advertise AVX capabilities, causing poor performance in applications that want to use AVX instructions. Fixed in version 5.14.21-150400.26.66; see SUSE BZ 1211205 for details.
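To check whether a running kernel is affected (generic commands; capable CPUs should list avx flags):
# uname -r
# grep -m1 -o 'avx[^ ]*' /proc/cpuinfo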
HFS ISO installation Instructions:
Note: To see the list of groups to install, use: dnf grouplist
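For example (the group name below is hypothetical; use a name from the dnf grouplist output):
# dnf groupinstall "HPE Foundation Software"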
e.2. Upgrade:
- To upgrade, use:
# dnf upgrade
f. Reboot the system to activate the change:
# reboot
g. After the system reaches the EFI shell, in the RMC command window, enter:
power reset npar pnum=0
Note: Refer to your operating system documentation for details on adding directories of RPM packages as available software sources/repositories for use by zypper, dnf, and yum.
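As a sketch, a mounted HFS ISO directory could be registered as a dnf repository like this (the repo id and paths are examples only):
cat <<'EOF' | sudo tee /etc/yum.repos.d/hpe-hfs.repo
[hpe-hfs]
name=HPE Foundation Software 2.5.1
baseurl=file:///mnt/HPE-Foundation-Software-2.5.1
enabled=1
gpgcheck=0
EOF
# dnf repolist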
Installing HFS in a container:
These directions are in the hfs-container rpm README file.
# These are example commands to build a container, install HPE-HFS, and run the
# resulting container on an HPE Compute Scale-up Server 3200 worker node in a
# Red Hat OpenShift Platform 4.14 cluster. This solution requires the Dockerfile
# and hfs-config.repo file included in the hfs-container rpm.
#
# This solution also requires the HPE HFS iso file,
# hpe-foundation-2.5.1-cd1-media-rhel94-x86_64.iso. One method used to provide
# the iso contents to the container build is the loop mount command shown below.
#
mkdir ./HPE-Foundation-Software-2.5.1
sudo mount -o loop ./hpe-foundation-2.5.1-cd1-media-rhel94-x86_64.iso ./HPE-Foundation-Software-2.5.1
#
# Commands to extract the hfs-container files from the HPE-HFS iso image
#
rpm2cpio HPE-Foundation-Software-2.5.1/RPMS/hfs-container*.rpm | cpio -icd
cp opt/hpe/container/* .
#
# With copies of the Dockerfile and hfs-config.repo, and the iso file mounted as
# shown above, the command below will build a local image using the definitions
# in Dockerfile.
#
sudo podman image build -f Dockerfile -t hfs:1
#
# The resulting container can be tested for basic functionality on the build
# host with the following command.
#
sudo podman run -it --network host --privileged --pid host --volume /:/host:ro localhost/hfs:1
#
# The commands below were developed from the Red Hat OpenShift documentation
# provided in the link below:
#
# https://docs.openshift.com/container-platform/4.13/registry/securing-exposing-registry.html#registry-exposing-hpe-hfs-registry-manually_securing-exposing-registry
#
# Log in to the OpenShift cluster
#
oc login https://api.${CLUSTER_HOST}:6443 -u kubeadmin -p ${PASSWORD}
#
# Get the default route to the internal OpenShift registry:
#
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
#
# Get the certificate of the Ingress Operator on the internal OpenShift registry:
#
oc get secret -n openshift-ingress router-certs-default -o go-template='{{index .data "tls.crt"}}' | base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/${HOST}.crt > /dev/null
#
# Enable the cluster's default certificate to trust the route using the following command:
#
sudo update-ca-trust enable
#
# Log in with podman using the default route:
#
sudo podman login -u kubeadmin -p $(oc whoami -t) $HOST
#
# Tag the local image with the destination project/imagestream:version
#
sudo podman tag localhost/hfs:1 ${HOST}/hpe-hfs/hfs:version1
#
# Create the hpe-hfs namespace for the rest of the process below
#
oc create namespace hpe-hfs
#
# Change to hpe-hfs namespace
#
oc project hpe-hfs
#
# Add permission to run privileged containers and pods from daemonsets
#
oc adm policy add-scc-to-user privileged -z default -n hpe-hfs
#
# Check if the destination imagestream already exists
#
oc describe imagestream hfs -n hpe-hfs
#
# If the imagestream does not exist, create a new one
#
oc create imagestream hfs -n hpe-hfs
#
# Push the image from the local podman registry to the internal OpenShift registry
#
sudo podman push ${HOST}/hpe-hfs/hfs:version1
#
# Check the destination imagestream
#
oc describe imagestream hfs -n hpe-hfs
#
# Run a test pod using the new image
#
oc apply -f ./hfs.yaml
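#
# For reference, a minimal pod spec of the kind hfs.yaml might contain (a hypothetical
# sketch reconstructed from the podman test flags above; the hfs.yaml shipped in the
# hfs-container rpm is authoritative)
#
cat <<'EOF' > ./hfs-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hfs-test
  namespace: hpe-hfs
spec:
  hostNetwork: true            # mirrors podman --network host
  hostPID: true                # mirrors podman --pid host
  containers:
  - name: hfs
    image: image-registry.openshift-image-registry.svc:5000/hpe-hfs/hfs:version1
    securityContext:
      privileged: true         # mirrors podman --privileged
    volumeMounts:
    - name: host
      mountPath: /host
      readOnly: true           # mirrors podman --volume /:/host:ro
  volumes:
  - name: host
    hostPath:
      path: /
EOF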
#
# Check the status of the pod
#
oc get pods
#
# rsh in to the pod to manually change the configuration and execute the test
#
oc rsh hfs-test
#
# The following file is an example of tuning suggestions for the HPE Compute
# Scale-up Server 3200. This example has 8 sockets and divides each socket in
# half: one half is reserved for general purpose, OS, and OpenShift control
# plane workloads; the other half is isolated for low latency and high
# performance workloads. The hugepages section is also an example.
#
10-profile-hpe-auto-config.yaml
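#
# As a hedged sketch, tuning of that shape could be expressed as an OpenShift
# PerformanceProfile (all CPU ranges and hugepage counts below are illustrative
# placeholders; the shipped 10-profile-hpe-auto-config.yaml is authoritative)
#
cat <<'EOF' > ./10-profile-example.yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: hpe-auto-config-example
spec:
  cpu:
    reserved: "0-27"        # placeholder: the reserved half for OS/control plane
    isolated: "28-223"      # placeholder: the isolated half for low-latency workloads
  hugepages:
    defaultHugepagesSize: "1G"
    pages:
    - size: "1G"
      count: 64             # placeholder count
  nodeSelector:
    node-role.kubernetes.io/worker: ""
EOF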
DISCLAIMER:
The information in this document is subject to change without notice. Hewlett Packard Enterprise makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett Packard Enterprise shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
This document contains proprietary information that is protected by copyright. All rights are reserved. No part of this document may be reproduced, photocopied, or translated to another language without the prior written consent of Hewlett Packard Enterprise.
(C) Copyright 2024 Hewlett Packard Enterprise Development L.P.