This document lists the best practices that Workload Manager supports for evaluating SAP workloads running on Google Cloud. To learn about Workload Manager, see Product overview.
Best practices for SAP workloads
The following table shows the Workload Manager best practices for evaluating SAP workloads that run on Google Cloud.
Note that to enable Workload Manager for evaluating your SAP workloads, you must set up Google Cloud's Agent for SAP on the host VMs.
Select one or more rule categories to filter the following table.
This rule is deprecated. It has been replaced by the rule "Check that Google Cloud's Agent for SAP is set up correctly on all instances in the evaluation scope", which is provided by default at no charge.
To ensure that X4 instances are optimized to support SAP workloads, you must run the command-line utility provided by Google Cloud's Agent for SAP to verify that the OS configuration matches best practice recommendations.
For more information, see Post-deployment tasks in the SAP HANA planning guide.
Instances in the evaluation scope must have the Agent for SAP configured for Workload Manager Evaluation. If the agent has not been set up correctly, then evaluation results might be incomplete or inaccurate. This check is included by default, at no charge.
For more information, see Google Cloud's Agent for SAP planning guide and then Verify the agent version.
To receive support from SAP and Google Cloud for SAP HANA on a Compute Engine VM, you must use an operating system version that is certified by SAP and Google Cloud for use with SAP HANA.
For more information, see OS support for SAP HANA on Google Cloud.
To receive support from SAP and Google Cloud for SAP HANA on a Compute Engine custom VM, you must use a custom VM type that is certified by SAP and Google Cloud for use with SAP HANA.
For more information, see Certified custom machine types for SAP HANA.
For performance reasons, the SAP HANA /hana/data and /hana/log volumes must be mapped to the same type of SSD-based persistent disk. You can map both volumes to a single persistent disk or, if you use the same persistent disk type for each, map each volume to a separate persistent disk.
For more information, see the SAP HANA planning guide.
To receive support from SAP and Google Cloud for SAP HANA on a Compute Engine VM, you must use a VM type that is certified by SAP and Google Cloud for use with SAP HANA.
For more information, see Certified Compute Engine VMs for SAP HANA.
To receive support from SAP and Google Cloud for SAP NetWeaver on a Compute Engine custom VM, you must use a custom VM type that is certified by SAP and Google Cloud for use with SAP NetWeaver.
For more information, see Certified machines in the SAP NetWeaver planning guide.
To receive support from SAP and Google Cloud for SAP NetWeaver on a Compute Engine VM, you must use an operating system version that is certified by SAP and Google Cloud for use with SAP NetWeaver.
For more information, see OS support for SAP NetWeaver on Google Cloud.
To receive support from SAP and Google Cloud for SAP NetWeaver on a Compute Engine VM, you must use a VM type and CPU platform that is certified by SAP and Google Cloud for use with SAP NetWeaver.
For more information, see Machine types in the SAP NetWeaver planning guide.
log_disk_usage_reclaim_threshold parameter
If the log partition's file-system disk usage ('usedDiskSpace' as a percentage of 'totalDiskSpace') rises above the specified threshold, the logger automatically triggers an internal 'log release' (0 = disabled). By default, the logger keeps all free log segments cached for reuse; segments are removed only if a reclaim is triggered explicitly with 'ALTER SYSTEM RECLAIM LOG' or if a 'DiskFull'/'LogFull' event occurs at the logger level. You can use this threshold to trigger the reclaim internally before a 'DiskFull'/'LogFull' situation occurs.
For more information, see log_disk_usage_reclaim_threshold in the SAP HANA Configuration Parameter Reference.
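As a rough sketch, this parameter lives in the persistence section of the SAP HANA ini configuration; the threshold below is a hypothetical example, not a Google Cloud or SAP recommendation:

```ini
# global.ini — sketch only; choose a threshold that fits your log volume sizing
[persistence]
# trigger an internal 'log release' when log-partition usage exceeds 75%
# (75 is a hypothetical value; 0 = disabled, which is the default behavior)
log_disk_usage_reclaim_threshold = 75
```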
The backup catalog can grow quite large over time, especially if it is not regularly cleaned up. This can lead to performance problems and can make it difficult to find the backups that are needed.
For more information, see SAP HANA multiple issue caused by large Log Backups due to large Backup Catalog size in the SAP Knowledge Base.
Data and log compression can be used for the initial full data shipping, the subsequent delta data shipping, and the continuous log shipping. Data and log compression can be configured to reduce the amount of traffic between systems, especially over long distances (for example, when using the ASYNC replication mode).
For more information, see Data and Log Compression in the SAP HANA System Replication guide.
datashipping_parallel_channels parameter
The SAP HANA parameter datashipping_parallel_channels defines the number of network channels used by full or delta data shipping. The default value is 4, which means that four network channels are used to ship data.
For more information, see datashipping_parallel_channels in the SAP HANA Administration Guide.
In databases with an allocation limit of more than 235 GB, the gc_unused_memory_threshold_rel and gc_unused_memory_threshold_abs parameters must be configured. These parameters help reduce the risk of hiccups (for example, due to MemoryReclaim waits) when garbage collection happens reactively.
For more information, see SAP HANA Garbage Collection in the SAP Knowledge Base.
For block storage, SAP HANA requires a minimum throughput of 400 MB per second. If you are using SSD or balanced persistent disks, use the minimum size for that persistent disk type to provide the necessary throughput. If you are using extreme persistent disks, provision a minimum of 20,000 IOPS.
For more information, see Persistent disk storage in the SAP HANA planning guide.
load_table_numa_aware parameter
To improve the performance of NUMA-based SAP HANA systems, enable the load_table_numa_aware parameter. When this parameter is enabled, SAP HANA optimizes data placement across NUMA nodes during table loading.
For more information, see SAP HANA Non-Uniform Memory Access (NUMA) in the SAP Knowledge Base.
A permanent license key is required to operate an SAP HANA system. If a permanent license key expires, a (second) temporary license key is automatically generated and is valid for 28 days.
For more information, see License Keys for SAP HANA Database in the SAP Knowledge Base.
consensus parameter
In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, the default value of the consensus parameter is 1.2 times the value of the token parameter. We recommend that you don't modify this value. If you do change the default, make sure that the new value is at least 1.2 times the token value.
For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
join parameter
In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, set the Corosync join parameter to a value of 60 to conform to Google Cloud best practices.
For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
max_messages parameter
In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, to avoid message flooding between cluster nodes during token processing, set the Corosync max_messages parameter to a value of 20.
For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
token_retransmits_before_loss_const parameter
In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, set the Corosync token_retransmits_before_loss_const parameter to a value of 10 or more to conform to Google Cloud best practices.
For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
token parameter
In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, set the value of the Corosync token parameter to the recommended timeout value of 20000 to conform to the Google Cloud best practice for failure detection.
For more information, see Corosync configuration parameter values in the SAP HANA high-availability planning guide.
transport parameter
In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, set the Corosync transport protocol as appropriate for your operating system. For Red Hat systems of version 8 and later, set the parameter to knet. For other supported operating systems, a value of udpu is expected.
For more information, see the guide for your OS:
- For RHEL, see HA cluster configuration guide for SAP HANA on RHEL.
- For SLES, see HA cluster configuration guide for SAP HANA on SLES.
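Taken together, the Corosync recommendations in this document can be sketched as a totem block in corosync.conf. This is an illustrative fragment, not a complete file; the transport value shown assumes a RHEL 8 or later system, and consensus is written out explicitly at 1.2 times token even though it defaults to that ratio:

```ini
# /etc/corosync/corosync.conf — sketch of the recommended totem settings only
totem {
    transport: knet          # use udpu on other supported operating systems
    token: 20000             # failure-detection timeout recommended by Google Cloud
    consensus: 24000         # 1.2 x token; usually left at the default ratio
    token_retransmits_before_loss_const: 10
    join: 60
    max_messages: 20
}
```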
pcmk_delay_max on the fencing device cluster resource
To avoid fence race conditions in Linux Pacemaker high-availability clusters for SAP, the pcmk_delay_max parameter must be specified on the fencing device resource.
For more information, see Special Options for Fencing Resources.
SAPHana operations
The definition of the SAPHana resource in a Linux Pacemaker HA cluster contains a timeout value for the stop, start, promote, and demote operations. For Linux Pacemaker HA clusters for SAP on Google Cloud, we recommend a value of at least 3600 for each operation.
For more information, see the guide for your OS:
- For RHEL, see HA cluster configuration guide for SAP HANA on RHEL.
- For SLES, see HA cluster configuration guide for SAP HANA on SLES.
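As an illustration, on SLES these operation timeouts might appear in the SAPHana primitive definition as follows. The resource name, SID, and instance number are placeholders; see the configuration guide for your OS for the exact resource definition:

```
# crm shell sketch (SLES); rsc_SAPHana_SID_HDB00, SID, and HDB00 are placeholders
primitive rsc_SAPHana_SID_HDB00 ocf:suse:SAPHana \
    op start timeout=3600 \
    op stop timeout=3600 \
    op promote timeout=3600 \
    op demote timeout=3600
```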
If log_mode is set to 'normal', HANA creates regular log backups, allowing for point-in-time recovery (restoring up to the moment before a failure). If log_mode is set to 'overwrite', no log backups are created; you can only recover the database to the last data backup.
For more information, see Log Modes in the SAP HANA Administration Guide.
logshipping_async_buffer_size on the primary site
If system replication is disconnected during a full data shipment, replication has to start from scratch. To reduce the risk of buffer-full situations, the logshipping_async_buffer_size parameter can be adjusted to a value of 1 GB on the primary site.
For more information, see SAP HANA System Replication in the SAP Knowledge Base.
logshipping_max_retention_size parameter
In the context of logreplay operation modes, the logshipping_max_retention_size SAP HANA parameter defines the maximum amount of redo logs that are kept on the primary site for synchronization with the secondary site (default: 1 TB). If the underlying file system isn't large enough to hold the complete configured retention size, in the worst case the file system can run full and the primary site can come to a standstill.
For more information, see SAP HANA System Replication in the SAP Knowledge Base.
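For orientation, both log-shipping parameters live in the system_replication section of global.ini. The values below follow the recommendations above, but the units (bytes for the buffer size, MB for the retention size) are assumptions; verify them against the SAP documentation for your HANA revision before applying:

```ini
# global.ini — sketch only; confirm units against the SAP documentation
[system_replication]
logshipping_async_buffer_size = 1073741824    # 1 GB on the primary site (assumed to be in bytes)
logshipping_max_retention_size = 1048576      # default 1 TB (assumed to be in MB)
```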
max_cpuload_for_parallel_merge parameter
By default, multiple auto merges (up to num_merge_threads) of different tables or partitions can be executed up to a CPU utilization limit of 45%. As soon as this limit is exceeded, a maximum of one auto merge is executed at any time. In the worst case, this can result in an increased auto merge backlog even though sufficient system resources for handling parallel auto merges would still be available. In this case, consider increasing this parameter to a value that is both higher than the usual CPU utilization and lower than a critical limit that would allow auto merges to introduce resource bottlenecks.
For more information, see SAP HANA Delta Merges in the SAP Knowledge Base.
In a scale-out SAP HANA environment, maintaining a consistent OS and kernel version across all nodes within the system is crucial for optimal performance and stability.
For more information, see SAP HANA: Supported Operating Systems in the SAP Knowledge Base.
In a Linux Pacemaker high availability cluster for SAP on Google Cloud, using alias IP addresses that move between Compute Engine instances is discouraged as a failover mechanism because it doesn't meet the high availability requirements. In certain failure scenarios, such as a zonal failure event, you might not be able to remove an alias IP address from a compute instance. Consequently, you might not be able to move the alias IP address to another compute instance, making failover impossible.
For more information, see Alias IP VIP implementations in the SAP HANA high-availability planning guide.
To ensure high availability for your SAP system and to safeguard it from unforeseen host events, all resources in the Pacemaker managed cluster must be in the Started state.
For more information, see Resource agent is stopped in the Troubleshooting high-availability configurations for SAP guide.
In a Pacemaker cluster, the timeout parameter in op_defaults sets a global default for how long operations can take before they are considered failed. If a specific timeout is configured for an individual resource, it overrides the global default. Google Cloud recommends setting a default timeout value of 600.
For more information, see the guide for your OS:
- For RHEL, see Set the cluster defaults.
- For SLES, see Cluster bootstrap and more.
In a Linux Pacemaker high-availability cluster for SAP on Google Cloud, meta_attributes are configuration parameters that influence how a resource behaves within the cluster. For the ASCS resource, SUSE and Red Hat recommend setting resource-stickiness to a value of 5000. Also, for ENSA1, set migration-threshold to a value of 1 and failure-timeout to a value of 60.
For more information, see the guide for your OS:
- For RHEL, see Creating resource for managing the (A)SCS instance.
- For SLES - ENSA1, see Configuring the resources for the ASCS.
- For SLES - ENSA2, see Configuring the resources for the ASCS.
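As a sketch of how these meta attributes might look in an ENSA1 resource definition (crm shell syntax; the resource name is a placeholder, and the full primitive also needs instance attributes and operations per the SUSE or Red Hat guide):

```
# crm shell sketch (ENSA1); rsc_ascs_SID is a placeholder name
primitive rsc_ascs_SID ocf:heartbeat:SAPInstance \
    meta resource-stickiness=5000 \
         migration-threshold=1 \
         failure-timeout=60
```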
In a high availability SAP Central Services cluster (ABAP or Java), setting IS_ERS=true for the ERS resource is mandatory for an Enqueue Replication Server (ENSA1) configuration because it is used to identify the node where the ERS service is active. For an ENSA2 configuration, this setting is optional but recommended.
For more information, see the following guides:
- For RHEL, see Creating resource for managing the ERS instance or view the Red Hat Knowledgebase.
- For SLES, see Configuring cluster resources.
- For SAP NetWeaver Enqueue Replication 1 High Availability Cluster - SAP NetWeaver 7.40 and 7.50, see Configuring the resources for the ERS.
- For SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster, see Configuring the resources for the ERS instance.
In a Linux Pacemaker high-availability cluster, to help manage resource behavior and failover policies, the rsc_defaults primitive sets default meta_attributes for all resources. SUSE and Red Hat recommend setting resource-stickiness to a value of 1, indicating a low preference for resources to remain on their current node, and migration-threshold to a value of 3, allowing up to three failures on a node before a resource is moved to another node.
For more information, see the guide for your OS:
- For RHEL 9, see Configuring general cluster properties.
- For RHEL 8, see Configuring general cluster properties.
- For SLES, see Configuring the cluster base.
The SAPInstance primitive in Pacemaker manages SAP application instances, guaranteeing that they are started, stopped, and monitored correctly. To enhance the stability of SAP instances, Google Cloud recommends setting the instance attribute AUTOMATIC_RECOVER=false. Additionally, it is recommended to set the monitor operation to a timeout value of 60 and an interval value of 11 for SLES and 20 for RHEL.
For more information, see the guide for your OS:
- For RHEL, see Creating resource for managing the (A)SCS instance.
- For SLES, see Configuring the resources for ASCS.
To ensure high availability of SAP Central Services, in the DEFAULT.PFL file, the values for the serverhost and replicatorhost parameters need to align with the Pacemaker cluster configuration. This ensures continued operation even if one of the hosts experiences a failure, because the cluster can automatically fail over to the other host.
For more information, view the profile parameters for the installed ENSA version:
- For ENSA1, see Profile Parameters for the Enqueue Clients
- For ENSA2, see Profile Parameters of Enqueue Replicator 2
gcpstonith fence agent
The gcpstonith fencing module is deprecated. Migrate to the OS-bundled fence_gce fencing agent for optimal reliability and functionality with your Pacemaker cluster on Google Cloud. fence_gce is included in supported Linux distributions with the High Availability (HA) extension or add-on.
For more information, see the following guides:
- To set up fencing for an HA cluster on RHEL, see Set up fencing.
- To set up fencing for an HA cluster on SLES, see Set up fencing.
- To migrate from gcpstonith to fence_gce, see Fence agent gcpstonith is deprecated.
migration-threshold parameter to the recommended value for SAP HANA
To migrate the SAP HANA resource to a new cluster node in the event of a failure in a Linux Pacemaker high-availability cluster, the SAP HANA resource definition must specify the migration-threshold parameter with the recommended value of 5000. This parameter determines the number of errors before a failover occurs and the cluster node is marked as ineligible to host the SAP HANA resource.
For more information, see the guide for your OS:
- For RHEL, see HA cluster configuration guide for SAP HANA on RHEL.
- For SLES, see HA cluster configuration guide for SAP HANA on SLES.
In a Pacemaker configuration on Google Cloud, the health check primitive and the internal load balancer (ILB) primitive work together for high availability. The health check monitors the instance's status by listening on a specific port, while the ILB manages traffic routing. The recommended monitoring settings for the health check are an interval of 10 seconds and a timeout of 20 seconds. The recommended monitoring settings for the ILB are an interval of 3600 seconds and a timeout of 60 seconds.
For more information, see the guide for your OS:
- For RHEL, see Create a virtual IP address resource.
- For SLES, see Create a local cluster IP resource for the VIP address.
A Linux Pacemaker HA cluster contains a location preference constraint that has been set on one or more resources. For Linux Pacemaker HA clusters for SAP on Google Cloud, location constraints may prevent correct failover of cluster resources in the event of a failure. These constraints often occur when a resource is manually moved between nodes in the cluster.
For more information, see the guide for your OS:
- For RHEL, see Managing Cluster Resources.
- For SLES, see Manual resource migration.
To allow a Linux Pacemaker high-availability cluster configuration to monitor and manage its application resources, the cluster nodes that host those resources must not be in maintenance mode.
For more information, see the guide for your OS:
- For RHEL, see Performing cluster maintenance.
- For SLES, see Enable and disable maintenance mode in a High Availability Cluster.
resource-stickiness parameter for SAP HANA
In a Linux Pacemaker high-availability cluster for SAP HANA, set the resource-stickiness parameter to the recommended value of 1000. This parameter defines how strongly a resource prefers to remain on its current node. The value of 1000 is high enough to minimize unnecessary migration of the resource to another node.
For more information, see the guide for your OS:
- For RHEL, see Configuring general HA cluster properties.
- For SLES, see Configuring cluster properties and resources.
In a Linux Pacemaker high availability cluster for SAP HANA on Google Cloud, the meta attributes within the SAP HANA msl resource (classified as Primary or Secondary) determine how this resource is managed within the cluster.
For more information, see the guide for your OS:
- For RHEL, see Creating Promotable SAPHana resource.
- For SLES, see Create SAPHana resource.
In a Linux Pacemaker high availability cluster for SAP HANA on Google Cloud, the SAPHana resource contains configuration to control the availability and data protection of the SAP HANA System Replication that is managed by the HA cluster. Google Cloud recommends setting the values for the instance attributes as follows: AUTOMATED_REGISTER=true, DUPLICATE_PRIMARY_TIMEOUT=7200, and PREFER_SITE_TAKEOVER=true.
For more information, see the guide for your OS:
- For RHEL, see Creating Promotable SAPHana resource.
- For SLES, see Create SAPHana resource.
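The recommended instance attributes might appear in a resource definition sketched as follows (crm shell syntax; the resource name, SID, and instance number are placeholders, and the operations shown in the OS-specific guides are omitted here):

```
# crm shell sketch; SID and 00/HDB00 are placeholders
primitive rsc_SAPHana_SID_HDB00 ocf:suse:SAPHana \
    params SID=SID InstanceNumber=00 \
        AUTOMATED_REGISTER=true \
        DUPLICATE_PRIMARY_TIMEOUT=7200 \
        PREFER_SITE_TAKEOVER=true
```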
The SAPHana resource manages the instances that are part of the replicated SAP HANA pair. In the event of a failure of the SAP HANA primary replication instance, the SAPHana resource agent can trigger a takeover of SAP HANA System Replication based on how the resource agent parameters have been set. The interval and timeout values for the monitor operation should be set to the values recommended by the OS vendor. For Red Hat, the primary monitor should have an interval of 59 and a timeout of 700, while the secondary monitor should have an interval of 61 and a timeout of 700. For SUSE, the primary monitor should have an interval of 60 and a timeout of 700, while the secondary monitor should have an interval of 61 and a timeout of 700.
For more information, see the guide for your OS:
- For RHEL, see Creating Promotable SAPHana resource.
- For SLES, see Creating SAPHana resource.
To preserve the integrity and high availability of the cluster, the Pacemaker configuration should enable STONITH to activate node fencing and set an appropriate timeout to ensure the timely completion of STONITH operations. These settings are essential for isolating failed nodes and preventing them from disrupting the cluster's operations. It is recommended to set stonith-enabled=true and stonith-timeout to a value of 300 for optimal results.
For more information, see the guide for your OS:
- For RHEL, see Set the cluster defaults.
- For SLES, see Configure the general cluster properties.
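As a sketch, both cluster properties can be set from the command line; the crm variant (SLES) is shown below, with the equivalent pcs form (RHEL) noted in the comment:

```
# crm shell sketch (SLES)
# on RHEL, the equivalent is: pcs property set stonith-enabled=true stonith-timeout=300
crm configure property stonith-enabled=true
crm configure property stonith-timeout=300
```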
In a Linux Pacemaker high availability cluster for SAP HANA on Google Cloud, the meta attributes within the SAP HANA topology clone resource determine how this resource is managed within the cluster. The recommended settings for an SAP HANA topology resource are a clone_node_max value of 1 and an interleave value of true.
For more information, see the guide for your OS:
- For RHEL, see Creating cloned SAPHanaTopology resource.
- For SLES, see Creating SAPHanaTopology.
A Linux Pacemaker HA cluster contains a SAPHanaTopology resource that includes a monitor operation with an interval value and a timeout value. For Linux Pacemaker HA clusters for SAP on Google Cloud, we recommend a value between 10 and 60 seconds for the interval, and a value of 600 seconds for the timeout.
For more information, see the guide for your OS:
- For RHEL, see Create the SAPHanaTopology resource.
- For SLES, see Create the SAPHanaTopology primitive resource.
The timeout parameter defines the maximum amount of time allowed for an operation (such as starting or stopping a resource) to complete. If the operation does not finish within this time, it is considered to have failed. The recommended settings for an SAP HANA topology resource are a start timeout value of 600 and a stop timeout value of 300.
For more information, see the guide for your OS:
- For RHEL, see Create the SAPHanaTopology resource.
- For SLES, see SAPHanaTopology.
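Combining the monitor, start, and stop recommendations for SAPHanaTopology, the operation block might be sketched as follows (crm shell syntax; the resource name, SID, and instance number are placeholders):

```
# crm shell sketch; SID and HDB00 are placeholders
primitive rsc_SAPHanaTopology_SID_HDB00 ocf:suse:SAPHanaTopology \
    op monitor interval=10 timeout=600 \
    op start timeout=600 \
    op stop timeout=300
```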
parallel_merge_threads parameter
If parallel_merge_threads is set to a specific value, this value is used for parallelism, while token_per_table defines the number of consumed tokens.
For more information, see SAP HANA Delta Merges in the SAP Knowledge Base.
automatic_reorg_threshold parameter
The automatic_reorg_threshold parameter specifies when automatic reorganization of row store tables is triggered. If the value is set to 30 (the default), automatic reorganization won't be triggered as often as it could be.
For more information, see Incorrect SAP HANA Alert 71: 'Row store fragmentation' in the SAP Knowledge Base.
To guard against zonal failures, Google Cloud recommends that at least one instance hosting an SAP application server is running in a different zone than the SAP Central Services.
For more information, see the SAP HANA disaster recovery planning guide.
Google Cloud recommends that for Compute Engine instances, the Cloud API access scope is set to Allow full access to all Cloud APIs, and that the IAM permissions of the instance's service account are used to control access to Google Cloud resources.
For more information, see the section titled Enable access to Google Cloud APIs in the Agent for SAP installation guide.
Compute instances that have the deletionProtection option enabled are protected against accidental deletion. Google Cloud recommends enabling deletion protection for all instances that are critical to running SAP workloads.
For more information, see Prevent accidental VM deletion in the Compute Engine documentation.
In systems where the SAP NetWeaver version supports ENSA2 but the DEFAULT.PFL file still contains ENSA1 parameters, this mismatch might cause issues with enqueue server functionality and cluster behavior.
For more information, see Profile Parameters of Enqueue Replicator 2 in the SAP Help Portal.
To mitigate potential performance issues and guard against certain failure scenarios in a Pacemaker-managed high-availability cluster, Google Cloud recommends that you don't run the SAP application server processes on the same compute instances that host the SAP Central Services or Enqueue Replication Server (ERS). This is because application servers are not managed by the Pacemaker cluster and are not migrated to a new VM in the event of an outage.
For more information, see the section titled Distributed deployment with high availability in the reference architecture for SAP on Google Cloud.
To ensure that the VM restarts automatically in the event of a failure, enable the Compute Engine automatic restart policy for any VM that is running an SAP workload.
For more information, see Set VM host maintenance policy.
Creating backups regularly and implementing a proper backup strategy helps you recover your SAP HANA database in situations such as data corruption or data loss due to an unplanned outage or failure in your infrastructure. Google Cloud recommends following a backup strategy that includes creating at least one full system backup of your SAP HANA database weekly, and creating at least one delta backup or snapshot based backup of the SAP HANA data volume daily. Daily full system backups can also be used as a substitute for delta or snapshot based backups. More frequent backups may be necessary to meet specific RPO requirements.
For more information, see Backup and recovery in the SAP HANA operations guide, or Backup and recovery for SAP HANA on bare metal instances.
To protect against region-wide outages and maintain business continuity, the SAP HANA primary node and Disaster Recovery (DR) sites must be deployed in different geographical regions. This approach mitigates the risk of catastrophic events affecting the components deployed within a single region, reducing potential data loss and downtime beyond what zone-level redundancy can offer.
For more information, see the SAP HANA disaster recovery planning guide.
Compute Engine includes functionality based on Intel's Memory RAS that can significantly reduce the impact of memory errors that would otherwise cause VM crashes. When combined with SAP HANA's fast restart capability (available since HANA 2.0 SP04), SAP HANA systems are able to recover from such failure events. This configuration is recommended on all memory-optimized virtual machine families.
For more information, see SAP HANA Fast Restart option.
MIGRATE for SAP workloads
To prevent platform maintenance events from stopping or restarting a VM that is running SAP workloads, the onHostMaintenance parameter for the VM must be set to the recommended option MIGRATE. This recommendation does not apply to X4 or C3 Metal instances.
For more information, see Set VM host maintenance policy.
To ensure resiliency of an SAP HANA high-availability configuration, the primary and secondary nodes must exist in different zones in the same region.
For more information, see the SAP HANA planning guide.
To enable the best performance of the Hyperdisk volumes used with SAP HANA, you must set the values recommended by Google Cloud for the following SAP HANA properties: num_completion_queues, num_submit_queues, tables_preloaded_in_parallel, and load_table_numa_aware.
For more information, see Hyperdisk performance in the SAP HANA planning guide.
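For orientation, these properties are set in the SAP HANA ini files. The section names below follow common SAP conventions but are assumptions, and the values are deliberately left as placeholders; use the numbers from the Hyperdisk performance section of the planning guide:

```ini
# sketch only; section names are assumptions, and "..." marks values
# that must come from the Hyperdisk performance guide
[fileio]
num_completion_queues = ...
num_submit_queues = ...

[parallel]
tables_preloaded_in_parallel = ...
load_table_numa_aware = true    # enables NUMA-aware table loading
```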
For optimal performance of your SAP HANA system, Google Cloud recommends that you use a separate disk for each SAP HANA filesystem. Notably, the disks hosting the SAP HANA data and log volumes must not be used for any other function, such as serving as the installation path or system instance path. This recommendation also applies to the SAP HANA backup volume, if you save your backups to a disk. Hosting filesystems on separate disks is also recommended to be able to use data snapshots as a backup and recovery option.
For more information, see the Persistent disk storage section in the SAP HANA planning guide.
In an SAP HANA high-availability configuration, the system replication hook provided by the operating system vendor has not been implemented. This can lead to incorrect reporting of the replication state of SAP HANA System Replication to Linux Pacemaker clusters.
For more information, see the guide for your OS:
- For RHEL, see Enable the SAP HANA HA/DR provider hook.
- For SLES, see Enable the SAP HANA HA/DR provider hook.
For Compute Engine instance migrations across machine series, Google Cloud recommends setting the CPU platform to 'Automatic' before migrating. Setting a specific CPU platform is only advised if you want to use the same type for the target machine due to performance or advanced instruction set compatibility reasons.
For more information, see the Compute Engine guides for CPU platforms, how to Specify a minimum CPU platform for VM instances, and how to Remove a minimum CPU platform setting.
Enable UEFI boot for the VM by creating a custom image with the UEFI_COMPATIBLE guest OS feature or by selecting a pre-configured UEFI-compatible image. UEFI compatibility is a prerequisite for newer generation machine types in Google Cloud.
For more information, see the Compute Engine guides for Operating system details, Memory-optimized machine family for Compute Engine, and Enable guest operating system features.
Encryption protects backups from unauthorized access by encrypting the backup data before it is transferred to the backup location. This means that even if an unauthorized user gains access to the backup data, they cannot read it without the decryption key. This is applicable for both file-based backups and backups created using third-party backup tools. Google Cloud recommends that you enable backup encryption in the SAP HANA system.
For more information, see system backup encryption statement in the SAP HANA reference guide.
DEVELOPMENT privileges in a production environment
At least one user or role has the DEVELOPMENT privilege in the production database. Google Cloud recommends that no users have this privilege.
For more information, see the DEVELOPMENT privilege section in SAP HANA security checklists and recommendations.
The force_first_password_change parameter in SAP HANA specifies whether users are required to change their password after they are created. Google Cloud recommends that you enable the force_first_password_change parameter.
For more information, see password policy configuration options in the SAP HANA One security guide.
SAP_INTERNAL_HANA_SUPPORT privileges in a production environment
At least one account has the SAP_INTERNAL_HANA_SUPPORT role. Google Cloud recommends that you have no users with this role.
For more information, see the SAP_INTERNAL_HANA_SUPPORT role section in SAP HANA security checklists and recommendations.
last_used_passwords parameter
Password reuse is a common security vulnerability. The last_used_passwords parameter in SAP HANA specifies the number of past passwords that a user is not allowed to reuse when changing their current password. Google Cloud recommends that you set last_used_passwords to a value of 5 or higher.
For more information, see password policy configuration options in the SAP HANA One security guide.
Encryption protects SAP HANA logs from unauthorized access. One way to do this is to encrypt the logs at the operating system level. SAP HANA also supports encryption in the persistence layer, which can provide additional security. Google Cloud recommends that you encrypt log volumes.
For more information, see recommendations for data encryption in SAP HANA security checklists and recommendations.
maximum_invalid_connect_attempts parameter
The maximum_invalid_connect_attempts parameter in SAP HANA specifies the maximum number of failed logon attempts that are allowed; the user is locked as soon as this number is reached. Google Cloud recommends that you set maximum_invalid_connect_attempts to a value of 6 or higher.
For more information, see password policy configuration options in the SAP HANA One security guide.
maximum_password_lifetime parameter
The maximum_password_lifetime parameter in SAP HANA specifies the number of days after which a user's password expires, which enforces periodic password changes. Google Cloud recommends that you set maximum_password_lifetime to a value of 182 or lower.
For more information, see password policy configuration options in the SAP HANA One security guide.
maximum_unused_initial_password_lifetime
parameter
The initial password is only meant to serve a temporary purpose. The maximum_unused_initial_password_lifetime
parameter in SAP HANA specifies the number of days for which the initial password or any password set by a user administrator for a user is valid. Google Cloud recommends that you set maximum_unused_initial_password_lifetime
to a value of 7
or lower.
For more information, see password policy configuration options in the SAP HANA One security guide.
maximum_unused_productive_password_lifetime parameter
The maximum_unused_productive_password_lifetime parameter in SAP HANA specifies the number of days after which a password expires if the user has not logged on. This reduces the risk of compromised accounts due to prolonged password inactivity. Google Cloud recommends that you set maximum_unused_productive_password_lifetime to a value of 365 or lower.
For more information, see password policy configuration options in the SAP HANA One security guide.
minimal_password_length parameter
The minimal_password_length parameter in SAP HANA specifies the minimum number of characters that a password must contain. A password shorter than 8 characters is more likely to be guessed or cracked, which could allow an unauthorized user to access your system. Google Cloud recommends that you set minimal_password_length to a value of 8 or higher.
For more information, see password policy configuration options in the SAP HANA One security guide.
minimum_password_lifetime parameter
The minimum_password_lifetime parameter in SAP HANA specifies the minimum number of days that must elapse before a user can change their password. This parameter helps enforce password aging policies and improves system security by preventing users from changing their passwords too frequently.
For more information, see password policy configuration options in the SAP HANA One security guide.
password_expire_warning_time parameter
The password_expire_warning_time parameter in SAP HANA specifies how many days before a password expires the user is notified. Notifying users of upcoming expiration ensures that they change their passwords in time. The default value is 14 days.
For more information, see password policy configuration options in the SAP HANA One security guide.
password_layout parameter
The password_layout parameter in SAP HANA specifies the character types that a password must contain; at least one character of each selected character type is required.
For more information, see password policy configuration options in the SAP HANA One security guide.
password_lock_time parameter
The password_lock_time parameter in SAP HANA specifies the number of minutes for which a user is locked after the maximum number of failed logon attempts. Google Cloud recommends that you set password_lock_time to a value of 1440 or higher.
For more information, see password policy configuration options in the SAP HANA One security guide.
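As a quick sanity check, the password-policy recommendations above can be compared against a system's current values in one pass. The following Python sketch validates a dictionary of current parameter values against the thresholds recommended in this section; how you obtain the values (for example, from SAP HANA's password policy monitoring view or the relevant ini file) is outside this sketch, and the helper name is hypothetical.

```python
# Illustrative check of SAP HANA password-policy parameters against the
# thresholds recommended in this document. All helper names here are
# hypothetical; obtaining the current values is left to the caller.

# (parameter, comparison, recommended bound)
RECOMMENDATIONS = [
    ("force_first_password_change", "eq", "true"),
    ("last_used_passwords", "min", 5),
    ("maximum_invalid_connect_attempts", "min", 6),
    ("maximum_password_lifetime", "max", 182),
    ("maximum_unused_initial_password_lifetime", "max", 7),
    ("maximum_unused_productive_password_lifetime", "max", 365),
    ("minimal_password_length", "min", 8),
    ("password_lock_time", "min", 1440),
]

def check_password_policy(current):
    """Return a list of (parameter, current_value, recommended_bound) violations."""
    violations = []
    for name, op, bound in RECOMMENDATIONS:
        value = current.get(name)
        if value is None:
            violations.append((name, None, bound))          # parameter not set
        elif op == "eq" and str(value).lower() != bound:
            violations.append((name, value, bound))
        elif op == "min" and isinstance(value, int) and value < bound:
            violations.append((name, value, bound))
        elif op == "max" and isinstance(value, int) and value > bound:
            violations.append((name, value, bound))
    return violations

if __name__ == "__main__":
    policy = {
        "force_first_password_change": "true",
        "last_used_passwords": 5,
        "maximum_invalid_connect_attempts": 6,
        "maximum_password_lifetime": 365,  # above the recommended 182
        "maximum_unused_initial_password_lifetime": 7,
        "maximum_unused_productive_password_lifetime": 365,
        "minimal_password_length": 6,      # below the recommended 8
        "password_lock_time": 1440,
    }
    for name, value, bound in check_password_policy(policy):
        print(f"{name}: current={value}, recommended bound={bound}")
```

An empty result means every listed parameter meets the recommendation; anything returned names the parameter, its current value, and the recommended bound.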
Encryption protects SAP HANA data from unauthorized access. One way to do this is to encrypt the data at the operating system level. SAP HANA also supports encryption in the persistence layer, which can provide additional security. Google Cloud recommends that you encrypt data volumes.
For more information, see recommendations for data encryption in SAP HANA security checklists and recommendations.
CVE-2019-0357 is a vulnerability that allows database users with administrator privileges to run operating system commands as root on particular SAP HANA versions.
For more information, see the SAP security note for CVE-2019-0357 .
At least one user has the DEBUG or ATTACH DEBUGGER privilege in the system. Google Cloud recommends that you have no users with these privileges.
For more information, see recommendations for database users in SAP HANA security checklists and recommendations.
System replication is configured with allowed_sender when the listen interface is global .
For more information, see recommendations for network configurations in SAP HANA security checklists and recommendations.
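The replication check above can be automated against the system's configuration file. The following Python sketch inspects global.ini-style text and flags the case where system replication listens on the global interface without an allowed_sender restriction; treat the exact section and key names as assumptions to confirm against your SAP HANA version, not an official tool.

```python
# Sketch: when system replication listens on the global interface,
# allowed_sender should restrict which hosts may connect. The section
# and key names follow SAP HANA's global.ini convention and are
# assumptions for illustration.
import configparser

def replication_sender_restricted(global_ini_text):
    """Return False if listeninterface is .global but allowed_sender is unset."""
    cfg = configparser.ConfigParser()
    cfg.read_string(global_ini_text)
    section = "system_replication_communication"
    if not cfg.has_section(section):
        return True  # replication communication not configured in this file
    listen = cfg.get(section, "listeninterface", fallback="")
    sender = cfg.get(section, "allowed_sender", fallback="")
    if listen.strip() == ".global":
        return bool(sender.strip())
    return True

sample = """
[system_replication_communication]
listeninterface = .global
allowed_sender = hana-secondary1,hana-secondary2
"""
print(replication_sender_restricted(sample))
```

A False result indicates the unrestricted configuration that this rule warns about.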
Configuring the swap space on Linux-based SAP systems enhances performance by managing memory more efficiently. SAP recommendations for swap space are based on the available physical memory as well as the role of the system as part of the database or application layer.
For more information, see the SAP notes for Swap-space recommendation for Linux and HANA services use large SWAP memory .
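To compare configured swap against the SAP note's recommendation, you first need the host's physical memory and swap sizes. The following Python sketch parses /proc/meminfo-style text for that purpose; the recommended swap size itself comes from the SAP note for your memory size and system role, so it is passed in by the caller rather than hard-coded here.

```python
# Minimal sketch: read MemTotal and SwapTotal from /proc/meminfo-style
# text so that configured swap can be compared against the SAP note's
# recommendation for this host. The recommended value is caller-supplied.
import re

def mem_and_swap_gib(meminfo_text):
    """Return (ram_gib, swap_gib) parsed from /proc/meminfo content."""
    values = {}
    for line in meminfo_text.splitlines():
        m = re.match(r"(MemTotal|SwapTotal):\s+(\d+)\s+kB", line)
        if m:
            values[m.group(1)] = int(m.group(2)) / (1024 * 1024)  # kB -> GiB
    return values.get("MemTotal"), values.get("SwapTotal")

def swap_meets_recommendation(meminfo_text, recommended_swap_gib):
    """True if configured swap is at least the SAP-recommended size."""
    _, swap = mem_and_swap_gib(meminfo_text)
    return swap is not None and swap >= recommended_swap_gib

sample = "MemTotal:       264069772 kB\nSwapTotal:       2097148 kB\n"
ram, swap = mem_and_swap_gib(sample)
print(f"RAM: {ram:.0f} GiB, swap: {swap:.0f} GiB")
```

On a live host you would read the real /proc/meminfo instead of the sample string.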
Linux can use SELinux for enhanced security, but it can interfere with SAP server components. For SAP implementations, set SELinux to Disabled or Permissive mode. Disabling SELinux requires a system reboot, while permissive mode can be set without rebooting. This configuration ensures compatibility with SAP tools that are not SELinux-aware.
For more information, see SAP instance or Host Agent startup fails due to SELinux and Changing SELinux to permissive mode .
On Linux, system tuning services can help optimize the performance and stability of SAP workloads by setting recommended parameters for the SAP system. Google Cloud recommends enabling the sapconf or saptune service on SUSE Linux Enterprise Server, or the tuned service on Red Hat Enterprise Linux.
For more information, see the guide for your OS:
- For RHEL, see the SAP note Red Hat tuned-profiles for SAP .
- For SLES, see the SUSE guide Tuning systems with saptune .
The thread stack parameters default_stack_size_kb and worker_stack_size_kb determine the amount of stack memory available to a newly created thread.
For more information, see Indexserver Crash Due to STACK OVERFLOW in Evaluator::ExpressionParser in the SAP Knowledge Base.
Regular consistency checks are required to detect hidden corruptions as early as possible.
For more information, see SAP HANA Consistency Checks and Corruptions in the SAP Knowledge Base.
tables_preloaded_in_parallel parameter in X4 VMs
The tables_preloaded_in_parallel parameter lets you control the number of tables loaded in parallel after you start your SAP HANA system, providing flexibility for performance optimization. Google Cloud recommends a minimum value of 32.
For more information, see SAP HANA Loads and Unloads in the SAP Knowledge Base.
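A check of this recommendation can be sketched against the parameter's ini file. The following Python snippet verifies that tables_preloaded_in_parallel meets the recommended minimum of 32; the section name used here ([parallel] in indexserver.ini) is an assumption for illustration, so confirm the parameter's location in your SAP HANA version.

```python
# Sketch: verify tables_preloaded_in_parallel meets the recommended
# minimum of 32. The [parallel] section name is an assumption for
# illustration; confirm it for your SAP HANA version.
import configparser

def preload_parallelism_ok(indexserver_ini_text, minimum=32):
    """True if the configured parallel preload count meets the minimum."""
    cfg = configparser.ConfigParser()
    cfg.read_string(indexserver_ini_text)
    value = cfg.getint("parallel", "tables_preloaded_in_parallel", fallback=0)
    return value >= minimum

sample = "[parallel]\ntables_preloaded_in_parallel = 32\n"
print(preload_parallelism_ok(sample))
```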
In a scale-out SAP HANA environment, keeping time zones consistent across all hosts is crucial for system stability.
For more information, see Check HANA DB for DST switch in the SAP Knowledge Base.
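One simple way to verify time-zone consistency is to compare the configured time zone on every host in the scale-out system against the majority. The following Python sketch assumes you have already collected a host-to-timezone mapping (for example, from `timedatectl` output on each host); the helper name and input format are illustrative.

```python
# Sketch: flag scale-out hosts whose configured time zone differs from
# the majority. The host -> timezone mapping is assumed input, collected
# by whatever inventory mechanism you use.
from collections import Counter

def timezone_outliers(host_timezones):
    """Return hosts whose time zone differs from the most common one."""
    if not host_timezones:
        return []
    majority, _ = Counter(host_timezones.values()).most_common(1)[0]
    return sorted(h for h, tz in host_timezones.items() if tz != majority)

hosts = {
    "hana-node-1": "Europe/Berlin",
    "hana-node-2": "Europe/Berlin",
    "hana-node-3": "UTC",
}
print(timezone_outliers(hosts))  # the node whose zone differs from the rest
```

An empty result means all hosts agree; any names returned are the hosts to fix before the next DST switch.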