<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.scalelogicinc.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pa-P</id>
	<title>Scalelogic Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.scalelogicinc.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pa-P"/>
	<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/Special:Contributions/Pa-P"/>
	<updated>2026-05-05T04:11:08Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.5</generator>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Scale_Logic_ZX_ver.1.0_up33_Release_Notes&amp;diff=1849</id>
		<title>Scale Logic ZX ver.1.0 up33 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Scale_Logic_ZX_ver.1.0_up33_Release_Notes&amp;diff=1849"/>
		<updated>2026-04-02T12:35:13Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2026-03-04&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 65410&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS Encryption now available for ZFS volumes and ZFS datasets. ===&lt;br /&gt;
&lt;br /&gt;
=== Self-Encrypting Drives (SED) support for Toshiba MG08, MG10, MG11 series drives. ===&lt;br /&gt;
&lt;br /&gt;
=== LED disks blinking for NVMe drives using Intel VMD 4.0. ===&lt;br /&gt;
&lt;br /&gt;
=== Possibility to create an SMB or NFS share on a dataset with the .zfs/snapshot path. ===&lt;br /&gt;
&lt;br /&gt;
=== The rootconsole and launchpad are now enabled by default. ===&lt;br /&gt;
&lt;br /&gt;
=== Possibility to turn off the LSA (LSI Storage Authority). ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS (v2.3.5). ===&lt;br /&gt;
&lt;br /&gt;
=== Linux kernel (v5.15.189). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom NetXtreme-E Series 10/100GbE Network Controller driver (bnxt_en, v1.10.3-234.0.154.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Marvell FastLinQ 41000 Network Controller driver (qede, v8.74.6.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 100GbE Network Controller driver (ice, v2.3.14). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE Network Controller driver (i40e, v2.28.11). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10GbE Network Controller driver (ixgbe, v6.2.5). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE Network Controller driver (igb, v5.19.4). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA 9600-16e 12Gb Tri-Mode Storage Adapter driver (mpi3mr, v8.14.1.0.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA Adapter driver (mpt3sas, v55.00.00.00). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.734.00.00). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v2.14.3). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s GT HBA Adapter driver (esas5hba, v1.10.1f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s HBA Adapter driver (esas4hba, v1.56.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartHBA and SmartRAID Adapter driver (smartpqi, v2.1.36-026). ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox firmware update driver (mft, v4.33.0). ===&lt;br /&gt;
&lt;br /&gt;
=== LSI Storage Authority Software (v008.014.012.000). ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== Manual ZFS snapshot creation failed or timed out on the WebGUI in environments with a large number of ZFS snapshots. ===&lt;br /&gt;
&lt;br /&gt;
=== WebGUI did not work correctly after updating the system from up27 to later versions. ===&lt;br /&gt;
&lt;br /&gt;
=== Active Directory users were not handled correctly when the username did not match the user’s full name. ===&lt;br /&gt;
&lt;br /&gt;
=== Expired support license notification was displayed when using a trial Product Key. ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for ZX HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use the sync=always option for zvols and datasets in a cluster ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in a separate subnet ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run the Scrub scanner after a failover triggered by a power failure (dirty system shutdown) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use a UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== “Enable VIP-based target visibility” on an iSCSI Target eliminates the need for static discovery in the iSCSI initiator; static discovery is still fully supported but no longer required when using this feature ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to change any settings while the nodes are running different ZX versions, for example during a software update ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different server names for the cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work properly with InfiniBand controllers ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work stably with the ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== FC Target HA cluster does not support Persistent Reservation Synchronization, so it cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in a future release. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (formerly LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware. ===&lt;br /&gt;
*Please consult Broadcom for the specific firmware suitable for your hardware setup.&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is the default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record option is enabled by default. This option degrades performance, but data is written safely. To improve NFS performance you can use the Asynchronous data record option, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt; - Number of virtual processors: 4&amp;lt;br/&amp;gt; - Memory: minimum 8GB&amp;lt;br/&amp;gt; - Boot Disk: 20GB IDE disk&amp;lt;br/&amp;gt; - Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
Using physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when using virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt; - Guest OS: Other 2.6.x Linux (64-bit)&amp;lt;br/&amp;gt; - Number of Cores: 4&amp;lt;br/&amp;gt; - Memory: minimum 8GB&amp;lt;br/&amp;gt; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt; - Add at least 6 virtual disks&amp;lt;br/&amp;gt; - Edit Settings -&amp;gt; Options -&amp;gt; Advanced - General -&amp;gt; Configuration -&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt; - Start Registry Editor.&amp;lt;br/&amp;gt; - Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt; - Double-click DisableDeleteNotification.&amp;lt;br/&amp;gt; - In the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management -&amp;gt; [disk] -&amp;gt; Properties -&amp;gt; Tools. As the operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and refer to the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, please use (if possible) a single zvol on a zpool dedicated to deduplication, and delete the zpool that contains that single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;(Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - the percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - the percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and a 64KB Volume block size:&amp;lt;br/&amp;gt;(1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and a 128KB Volume block size:&amp;lt;br/&amp;gt;(1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and a 1MB Volume block size:&amp;lt;br/&amp;gt;(1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.66B&amp;lt;br/&amp;gt;1789569706.66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 1.66GB of RAM.&lt;br /&gt;
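The worst-case RAM estimates above can be checked with a short script (a sketch; the 320-byte DDT entry size and the 0.75/0.25 ARC reservation factors are taken from the formula above):

```python
def dedup_ram_bytes(zvol_bytes, block_bytes,
                    ddt_entry=320, arc_frac=0.75, ddt_frac=0.25):
    """Worst-case RAM needed for the deduplication table (DDT):
    one 320-byte entry per volume block, held in the DDT share of ARC."""
    entries = zvol_bytes / block_bytes
    return entries * ddt_entry / arc_frac / ddt_frac

TB = 1024 ** 4  # the examples above use 1TB = 1099511627776B
for bs_kb in (64, 128, 1024):
    ram = dedup_ram_bytes(TB, bs_kb * 1024)
    print(f"{bs_kb:>4}KB block size: {ram / 1024**3:.2f}GB RAM per 1TB of data")
```

Running it reproduces the three examples: roughly 26.67GB, 13.33GB and 1.67GB of RAM per 1TB of completely unique data.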
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, in which the data is completely unique and cannot be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD and deduplication will perform well with less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system format block size with the zvol volume block size. A simple example: Windows NTFS with the default 4k format block size on a zvol with the default 128k volume block size. With these defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To make all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
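The alignment arithmetic above is easy to check (a sketch; block sizes are in KB, and the file-system block size is assumed to divide the zvol block size):

```python
def alignment_positions(zvol_block_kb, fs_block_kb):
    """Number of distinct offsets at which a file-system block can land
    inside one zvol block; 1 means deduplication always lines up."""
    return zvol_block_kb // fs_block_kb

print(alignment_positions(128, 4))   # NTFS 4k on a 128k zvol  -> 32
print(alignment_positions(128, 64))  # NTFS 64k on a 128k zvol -> 2
print(alignment_positions(64, 64))   # matched sizes           -> 1
```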
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned natively by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the physical size of a zvol cannot show the deduplication benefit. To verify that deduplication has saved space, run a scrub and note the current physical data space on the pool as reported by the scrub. Then copy new data and run the scrub again; the scrub will report the new physical data space. Comparing the data size seen from the storage client side with the growth of physical data space reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with a broken write log disk cannot be imported using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Block sizes smaller than 64KB used with deduplication or a read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&lt;br /&gt;
&lt;br /&gt;
Where:&lt;br /&gt;
1099511627776B - 1TB Read Cache&lt;br /&gt;
4718592B - reserved size and labels&lt;br /&gt;
432B - bytes reserved by l2hdr structure&lt;br /&gt;
8192B - Volume block size&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
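The arithmetic above can be reproduced with a short sketch (the function name l2arc_ram_needed is ours; the constants come from the worked examples):

```python
def l2arc_ram_needed(read_cache_bytes, volume_block_bytes,
                     reserved_bytes=4718592, l2hdr_bytes=432):
    """RAM consumed by Read Cache headers, per the formula above."""
    return (read_cache_bytes - reserved_bytes) * l2hdr_bytes // volume_block_bytes

ONE_TB = 1024 ** 4  # 1TB Read Cache = 1099511627776 bytes

for block_kb in (8, 64, 128):
    ram = l2arc_ram_needed(ONE_TB, block_kb * 1024)
    print(f"{block_kb}KB block size: {ram}B = {ram / 1024**3:.2f}GB")
```

For a 1TB Read Cache this prints approximately 54GB, 6.75GB and 3.37GB for the 8KB, 64KB and 128KB block sizes, matching the worked examples.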
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the following error: &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach this disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider Configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded dataset quota prevents removing files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups makes some forms in the WebGUI very slow. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 25 datasets, the WebGUI becomes slow.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic ZX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic ZX with an Intel® Ethernet Controller XL710 Family adapter, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name is changed, e.g. during an export and import under a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Due to technical reasons, the High Availability shared storage cluster does not work properly when Infiniband controllers are used for the VIP interface configuration. This limitation will be removed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpectedly long failover time, especially in an HA-Cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover procedure moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA-cluster. If all pools are active on a single node and a failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V Clusters. In some environments under heavy load, cluster resource switching may also take too long. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;On Windows, to increase the iSCSI initiator timeout, perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run the regedit tool and find the registry key: &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
2. Change the value of the key from the default 60 to 600 (decimal, in seconds)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;On VMware, to increase the iSCSI initiator timeout, perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change value from default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;On XenServer, to increase the iSCSI initiator timeout, perform the following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach the SRs. This applies the new iSCSI timeout settings to the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.&lt;br /&gt;
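For scripted rollouts, the iscsid.conf edit described in the steps above can be automated. A minimal sketch, assuming the standard key = value layout of the file; the helper name set_iscsi_replacement_timeout is ours, and the demo runs on a temporary copy instead of the real /etc/iscsi/iscsid.conf:

```python
import os
import re
import tempfile

def set_iscsi_replacement_timeout(conf_path, seconds=600):
    """Rewrite node.session.timeo.replacement_timeout in an iscsid.conf-style file."""
    with open(conf_path) as f:
        text = f.read()
    new_text, count = re.subn(
        r"^(node\.session\.timeo\.replacement_timeout\s*=\s*)\d+",
        rf"\g<1>{seconds}",
        text,
        flags=re.M,
    )
    if count == 0:
        # Key not present: append it instead of editing in place.
        new_text = text.rstrip("\n") + f"\nnode.session.timeo.replacement_timeout = {seconds}\n"
    with open(conf_path, "w") as f:
        f.write(new_text)

# Demonstrate on a temporary copy rather than the real /etc/iscsi/iscsid.conf.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as tmp:
    tmp.write("node.session.timeo.replacement_timeout = 120\n")
set_iscsi_replacement_timeout(tmp.name)
print(open(tmp.name).read().strip())  # node.session.timeo.replacement_timeout = 600
os.unlink(tmp.name)
```

As in the manual steps, the SRs must still be detached and reattached (or new SRs created) for the changed timeout to take effect.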
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after updating to up11, the system may require re-activation. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic ZX as a Hyper-V or VMware guest, ALB, Round-Robin and Round-Robin with RDMA bonding are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writes in a VMware guest can make deleting a VMware snapshot take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can make the process of deleting a VMware snapshot take a long time. Please take this into consideration while setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling a quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling quota functionality on a dataset can interrupt active file transfers. Enable the quota on the dataset before using it in a production environment, or make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If ZX nodes are connected to the same AD server, they cannot have the same Server names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so going back to an earlier version is impossible. If you need to go back to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, a simultaneous power cycle of both nodes may lead to a situation where VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown or Normal, Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. A second option is to restart the Failover in ZX’s GUI. Refresh vSphere’s Adapters and Devices tabs afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from ZX up24 to ZX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after updating one node from ZX up24 to ZX up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation is finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of ZX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of ZX up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of ZX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random access workloads, which is common in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, the error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of ZX up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the ZX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message, type in the password and choose the option Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol in FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations the system page cache cannot flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the ZX cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference in FC configuration between up24 and earlier versions and later ones. The earlier versions allowed FC ports to be configured in initiator mode only, while later versions allow both target and initiator mode, with target as the default. Therefore, when using storage connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from ZX up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Updating a Metro Cluster node from ZX up26 or earlier changes the NVMe disk IDs. As a consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). To bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+X -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk, and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear under Unassigned disks at the bottom of the page, which makes it selectable in the pool’s Disk groups tab. Open the Disk groups tab of the pool, press the Add group button, and select Add read cache. The missing disk should now be available for selection as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can take a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long failover time in case of a Xen client with an iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is eventually moved successfully, but many errors show up in dmesg in the meantime. For such an environment, we recommend adding the following entry to the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== In case of a large number of disks, a zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in either Windows or VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from the share access list after their username is changed on the AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed, ZX needs to be informed about it. However, the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on ZX leads to a situation where the renamed user gets deleted from the share’s access list. The new username then needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset with the qla2xxx_scts driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain certain special characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The special characters # : + cannot be used in the password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in LSI Storage Authority Software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on one NFS access list and you would like to move it to another, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Then edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load may increase (especially waiting I/O), causing unstable operation. Expanding the pool size is recommended.&lt;br /&gt;
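&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For reference, on systems with console access the current pool usage can be checked with the standard ZFS command (standard OpenZFS properties; no ZX-specific options are assumed):&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool list -o name,size,allocated,free,capacity&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;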
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the WebGUI shows the old value taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes or small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down ZX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value that ZX interprets as running on battery. When the resulting timeout expires, ZX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updates from previous versions), the system cannot boot in UEFI mode if the boot medium is controlled by an LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is to change the boot mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After updating to ZX up29, the write-back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we disable the write-back cache on all HDDs by default, but we do not disable it on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it is enabled after the update. Open the TUI, invoke Extended tools by pressing CTRL+ALT+X, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting a JBOD with the write-back cache enabled on its disks can lead to data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from ZX up29, we disable the write-back cache on HDDs by default during the boot procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, it may take from a few minutes up to several dozen minutes to populate the list in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the gzip-9 compression algorithm, the system can become unstable while copying data to storage. This compression algorithm should be used only in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using more than 500 zvols in the system, the responsiveness of the WebGUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
=== It is recommended to use Fibre Channel groups in Fibre Channel Target HA Cluster environments that use the Fibre Channel switches. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Fibre Channel switches in FC Target HA Cluster environments, it is recommended to use only Fibre Channel groups (using the Fibre Channel Public group is not recommended).&lt;br /&gt;
&lt;br /&gt;
=== Manual export and import of zpool in the system or deactivation of the Fibre Channel group without first suspending or turning off the virtual machines on the VMware ESXi side may cause loss of access to the data by VMware ESXi. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before a manual export and import of a zpool in the system, or deactivation of the Fibre Channel group in a Fibre Channel Target HA Cluster environment, you must suspend or turn off the virtual machines on the VMware ESXi side. Otherwise, VMware ESXi may lose access to the data, and restarting it will be necessary.&lt;br /&gt;
&lt;br /&gt;
=== In Fibre Channel Target HA Cluster environments the VMware ESXi 6.7 must be used instead of VMware ESXi 7.0. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the VMware ESXi 7.0 in Fibre Channel Target HA Cluster environment, restarting one of the cluster nodes may cause the Fibre Channel paths to report a dead state.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes cluster nodes hang during boot of Scale Logic ZX. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case one of the cluster nodes hangs during the Scale Logic ZX boot, it must be restarted manually.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes when using IPMI hardware solutions, the cluster node may be restarted again by the IPMI watchdog ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, it is recommended to wait 5 minutes before turning on the cluster node after it was turned off.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes restarting one of the cluster nodes may cause some disks to be missing in the zpool configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, click the “Rescan storage” button on the WebGUI to solve this problem.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer reported the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot; This message can be safely ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of ZX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of ZX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI: open the ZX TUI and press the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message and type in the password. Choose the option Kernel module parameters, select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of ZX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
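&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;After the reboot, the active value can be verified from the console (shown for reference; the path is the standard Linux ZFS module parameter location in sysfs):&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /sys/module/zfs/parameters/zfs_vdev_max_active&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;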
&lt;br /&gt;
=== In case of no local storage disks in any Non-Shared storage HA Cluster node, the remote disks mirroring path connection status shows incorrect state: Disconnected. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; By design, each cluster node in a Non-Shared Storage HA Cluster must have at least one local storage disk before the remote disk mirroring path connection is created.&lt;br /&gt;
&lt;br /&gt;
=== In some environments using RDMA for the remote disks mirroring path, shutting down one of the cluster nodes may cause it to restart instead of shutting down. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In some environments using RDMA for the remote disks mirroring path, shutting down one of the cluster nodes may cause it to restart instead of shutting down.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the ATTO Fibre Channel Target in the HA cluster environment. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the ATTO Fibre Channel Target in an HA Cluster environment, after a power cycle of one of the cluster nodes the Fibre Channel paths report a dead state. In order to restore the correct status of these Fibre Channel paths, the VMware server must be restarted.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Additionally, when using the ATTO Fibre Channel Target in an HA Cluster environment, restarting a cluster node with both zpools imported in the system causes the second cluster node to be unexpectedly restarted.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;Therefore, using the ATTO Fibre Channel Target in the HA cluster environment is not recommended.&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic ZX supports only drives with a verified SED configuration; these are tagged as &amp;quot;SED&amp;quot; and listed on the Scale Logic ZX HCL.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported.&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait in the system after enabling the autotrim functionality on zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand, at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To ensure stable operation with RDMA, we recommend using a Mellanox ConnectX-4, ConnectX-5, or ConnectX-6 card.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it  manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in a combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - settings of the combo box manage the second virtual disk. &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of unavailability of a given subdomain, information about its status will not be updated on the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it is recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #104059 --&amp;gt;SAS Multipath configuration is not supported in the Non-Shared Storage Cluster. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of the Non-Shared Storage Cluster, the SAS Multipath configuration is not supported at all. In this scenario, all the disks need to be connected through one path only. In the case of using the JBOD configuration with disks connected through a pair of SAS cables, one of them must be disconnected.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the Broadcom HBA 9600 Storage Adapter the Hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, using the “Rescan” button in the Storage tab in the WebGUI may result in the “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic ZX (the network interface is usually: eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115494 --&amp;gt;Resilver progress bar in the HA Non-shared Cluster Storage environment may show values over 100%. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the HA Non-Shared storage cluster with compression and deduplication enabled it has been observed that the resilver progress bar on the WebGUI may display values exceeding 100%.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
=== The TDB UID/GIDs mapping does not function properly. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Workarounds:&lt;br /&gt;
&lt;br /&gt;
*Single-Domain Environments:&lt;br /&gt;
**Use the &amp;quot;autorid&amp;quot; option in the &amp;quot;ID mapping backend&amp;quot; settings.&lt;br /&gt;
**Alternatively, use &amp;quot;rid+tdb&amp;quot;:&lt;br /&gt;
**#Connect to the domain.&lt;br /&gt;
**#Navigate to the “Accessed domains” section.&lt;br /&gt;
**#Click the “Edit domain settings” button.&lt;br /&gt;
**#Set the UID/GID mapping to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range (e.g., 2,000,000 to 2,999,999).&lt;br /&gt;
&lt;br /&gt;
Note: The range 1,000,000 to 1,999,999 is reserved.&lt;br /&gt;
&lt;br /&gt;
*Multi-Domain Environments:&lt;br /&gt;
**The &amp;quot;autorid&amp;quot; option is not supported. Use one of the following:&lt;br /&gt;
**#&amp;quot;rid+tdb&amp;quot;&lt;br /&gt;
**#&amp;quot;ad (with RFC2307 schema) + tdb&amp;quot;&lt;br /&gt;
**Steps for configuration:&lt;br /&gt;
&amp;lt;ol style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Connect to the domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Navigate to the “Accessed domains” section.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Click the “Edit domain settings” button for each domain.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Set the UID/GIDs mapping to &amp;quot;rid&amp;quot; for all domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Define unique Min ID and Max ID ranges for each domain (e.g., 2,000,000 to 2,999,999 for the first domain, 3,000,000 to 3,999,999 for the second domain, etc.).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
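&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; For reference, the multi-domain workaround above corresponds to Samba idmap settings of the following shape (a sketch only: the domain names DOM1 and DOM2 are illustrative examples, and ZX manages these settings through the WebGUI rather than a hand-edited smb.conf):&lt;br /&gt;
&amp;lt;pre&amp;gt;idmap config * : backend = tdb&lt;br /&gt;
idmap config * : range = 1000000-1999999&lt;br /&gt;
idmap config DOM1 : backend = rid&lt;br /&gt;
idmap config DOM1 : range = 2000000-2999999&lt;br /&gt;
idmap config DOM2 : backend = rid&lt;br /&gt;
idmap config DOM2 : range = 3000000-3999999&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;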
&lt;br /&gt;
=== No Warning for Duplicate IP Addresses on Network Interfaces ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; No warning or error message is displayed if two network interfaces are configured with the same IP address. This can lead to network conflicts or connectivity issues. Users must manually verify configurations to avoid duplicates.&lt;br /&gt;
&lt;br /&gt;
=== No LED Management for aacraid Storage Controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; LED management is no longer supported for storage controllers using the aacraid driver, aligning with the manufacturer’s decision to discontinue these controllers. Users depending on LED indicators should explore alternative monitoring solutions or consider upgrading to supported hardware.&lt;br /&gt;
&lt;br /&gt;
=== LED Blinking Not Functional on NVMe Drives in Supermicro X12 Servers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; On Supermicro X12 servers, LED blinking functionality for NVMe drives is not operational. Users should rely on alternative methods to identify and manage drives.&lt;br /&gt;
&lt;br /&gt;
=== Web Server Settings in Maxview Storage Manager Not Preserved After Restart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Changes made to the Web server settings in Maxview Storage Manager revert to default values after a server restart. Custom configurations are lost upon reboot. This issue will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Unnecessary dmesg Entries After Zpool Export/Import ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Following a zpool export and import, dmesg may show entries such as &amp;quot;debugfs: Directory &#039;zdX&#039; with parent &#039;block&#039; already present!&amp;quot; While these entries do not affect functionality, they will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Discontinued IDE Disk Support in Scale Logic ZX Up31 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In Scale Logic ZX Up31, IDE disk support has been removed. Older servers or virtual machines relying on IDE disks may experience compatibility issues or failures. We recommend migrating to supported storage solutions to avoid disruptions. Future releases will not reintroduce IDE disk support.&lt;br /&gt;
&lt;br /&gt;
=== Consider Reducing Volume Block Size to 16KB for High Random Workloads ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; For workloads with high levels of random I/O, reducing the iSCSI volume block size to 16KB can improve performance. Users experiencing performance challenges with random workloads should consider this tuning option.&lt;br /&gt;
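&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Note that in ZFS the block size of a zvol can only be set at creation time. As a sketch of what this looks like at the ZFS level (standard OpenZFS syntax; the pool and zvol names are examples only, and zvols are normally created through the WebGUI):&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create -V 100G -o volblocksize=16K Pool-0/zvol00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;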
&lt;br /&gt;
=== Samba AD backend authentication fails after Microsoft Windows security updates (CVE-2025-49716) ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; After installing security updates addressing CVE-2025-49716, domain authentication fails when using the Samba AD backend.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1845</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1845"/>
		<updated>2026-03-19T14:09:26Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;relese-notes-wrapper&amp;quot;&amp;gt;&lt;br /&gt;
===== &#039;&#039;&#039;&#039;&#039;Release Notes&#039;&#039;&#039;&#039;&#039; =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span&amp;gt;{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
count = 1&lt;br /&gt;
mode = none&lt;br /&gt;
}}&amp;lt;/span&amp;gt;[[Release Notes|All release notes »]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 50&lt;br /&gt;
count= 50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 100&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=About&amp;diff=1844</id>
		<title>About</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=About&amp;diff=1844"/>
		<updated>2026-03-19T14:08:51Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This function summarizes all information connected with your license. The detailed information is divided into two panels:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ABOUT&#039;&#039;&#039;&amp;amp;nbsp;:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Version&#039;&#039;&#039; - shows detailed information about the installed ZX release.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Serial Number &#039;&#039;&#039; - the serial number of your ZX license.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Storage&#039;&#039;&#039; - the usable storage limit. The license either caps the storage size (the maximum value is shown) or is unlimited.&amp;lt;br/&amp;gt;The &#039;&#039;&#039;TRIAL&#039;&#039;&#039; license is an Unlimited Storage license. After the &#039;&#039;&#039;TRIAL&#039;&#039;&#039; period is over, the system&#039;s performance will be significantly reduced.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Version status&#039;&#039;&#039; - shows the current activation status, either &#039;Activated&#039; or &#039;Not Activated&#039;. If the status is &#039;Not Activated&#039;, the &#039;&#039;&#039;ACTIVATE&#039;&#039;&#039; button is available. Activation requires Internet access; the communication port needed for successful activation is 443 (source and destination). In case of activation errors, verify your firewall rules.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Expiration date&#039;&#039;&#039;&amp;amp;nbsp;: trial license expiration date.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;LICENSES&#039;&#039;&#039;&amp;amp;nbsp;:&lt;br /&gt;
&lt;br /&gt;
License keys are provided by your Scale Logic partner.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Product Key&#039;&#039;&#039; - the Product Key format is XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX&amp;lt;br/&amp;gt;&#039;&#039;&#039;Storage Key&#039;&#039;&#039; - extends the storage capacity managed by the ZX system.&amp;lt;br/&amp;gt;&#039;&#039;&#039;Feature Pack keys&#039;&#039;&#039; - add additional functions to the system.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=FC_group&amp;diff=1842</id>
		<title>FC group</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=FC_group&amp;diff=1842"/>
		<updated>2026-03-19T14:08:51Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__Each pool contains its own FC configuration. The configuration is stored with the pool and can be used on any machine where the pool is imported. However, FC targets are local to a particular machine, so each pool stores a mapping of the FC targets to be used on each machine. When the pool is imported on a machine where targets are not yet assigned in the pool configuration, they can be specified after the import.&lt;br /&gt;
&lt;br /&gt;
The FC configuration consists of groups that define which volumes are available on which ports to which initiators. Two types of groups are available: the public group and initiator groups. The public group allows any initiator connected to the configured ports to access the LUNs assigned to this group. The public group is present on a pool by default and cannot be removed or created. Initially, it has no volumes or ports assigned, so nothing is available until it is configured manually. Because this group accepts any initiator, initiators cannot be assigned to it. The second type of FC group is the initiator group, which defines which initiators can connect to the LUNs assigned to the group. An initiator that is not assigned to an FC group cannot connect through ports to LUNs. Multiple initiator groups can be configured to provide different access configurations to volumes through FC targets. An alias can be defined for each initiator group to make the group&#039;s purpose easier to identify.&lt;br /&gt;
&lt;br /&gt;
In general, a group gathers a set of ports, volumes, and initiators. LUNs added to a group define which volumes are available in the group. Ports assigned to a group define on which ports the LUNs in that group can be reached. Finally, initiators (in the case of an initiator group) define which initiators (ports of remote machines) can connect to the LUNs in the group using the ports assigned to it. For example, to allow initiators Ini0 and Ini1 to access volume Vol-01 through ports P0 and P1, create an initiator group with ports P0 and P1 assigned, then add volume Vol-01 and initiators Ini0 and Ini1 to this group. The same volume, initiator, or port can be assigned to more than one group. However, there are some limitations that the system will not allow to be violated:&lt;br /&gt;
&lt;br /&gt;
#The same target cannot be assigned to two groups that share a set of initiators.&lt;br /&gt;
#Due to the rule above, the same initiator cannot be assigned to two groups that share a set of ports.&lt;br /&gt;
#The target assigned to the public group cannot be used by an initiator group, and vice versa: a target assigned to an initiator group cannot be used in the public group.&lt;br /&gt;
#The target can be assigned to only one pool - the same port cannot be used in two groups that belong to different pools. If the target is used in an active pool and another pool that also uses this target is imported, the group using a conflicting target will be deactivated upon import.&lt;br /&gt;
&lt;br /&gt;
Moreover, a volume used by iSCSI cannot be assigned to any FC group, and vice versa.&lt;br /&gt;
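The group limitations listed above can be sketched as a simple pairwise validation. This is an illustrative in-memory model only; the `Group` class and `conflict()` helper are hypothetical and are not part of the ZX product or its API.

```python
# Illustrative model of the FC group constraints listed above.
# The Group class and conflict() helper are hypothetical assumptions,
# not part of ZX.
from dataclasses import dataclass, field

@dataclass
class Group:
    pool: str
    public: bool = False
    ports: set = field(default_factory=set)        # FC targets assigned to the group
    initiators: set = field(default_factory=set)   # remote initiator ports

def conflict(a, b):
    """Return a reason string if two groups cannot both be active, else None."""
    if not a.ports.intersection(b.ports):
        return None  # no shared target, no conflict (rules only apply to shared ports)
    # Rules 1 and 2 are two views of the same check:
    if a.initiators.intersection(b.initiators):
        return "same target in two groups sharing initiators"
    # Rule 3: public and initiator groups may not share a target.
    if a.public != b.public:
        return "target shared between the public group and an initiator group"
    # Rule 4: a target belongs to only one pool.
    if a.pool != b.pool:
        return "target used by groups belonging to different pools"
    return None
```

For example, two groups in the same pool that share port P0 and initiator Ini0 would be rejected, while groups on disjoint ports coexist freely.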
&lt;br /&gt;
&amp;lt;br/&amp;gt;A created group can be modified at any time: initiators, ports, or volumes can be assigned or removed. Be careful when modifying the volumes assigned to a group, because in some cases a connected initiator may lose access to a volume during the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Any FC group can be deactivated. When a group is inactive, the configuration it represents is not applied to the ports, so its LUNs are not available to initiators. A group can be deactivated either manually or by the system in case of configuration conflicts. Configuration conflicts arise mainly during a foreign pool import. A group on the imported pool is deactivated in case of the following conflicts:&lt;br /&gt;
&lt;br /&gt;
#A target used by the pool is already used by another active pool.&lt;br /&gt;
#One of the LUNs uses a SCSI ID that is already used by an FC or iSCSI LUN in another pool.&lt;br /&gt;
&lt;br /&gt;
A group that was deactivated due to a conflict can be activated manually after the conflict is resolved by modifying the configuration.&lt;br /&gt;
&lt;br /&gt;
The SCSI ID uniqueness rule needs a bit more explanation. This LUN identifier consists of 16 characters; however, two SCSI IDs that share the same first 8 characters are considered conflicting, because some initiators honor only the first 8 characters of the SCSI ID, which could lead to issues if two LUNs shared that prefix. In most cases you do not have to worry about this setting, because the system assigns a unique SCSI ID to the volume based on its name and creation timestamp. When no SCSI ID is specified, the one assigned to the volume is used for the LUN. Using the default (system-generated) SCSI ID is recommended; to use it, simply leave the SCSI ID empty during LUN configuration.&lt;br /&gt;
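The 8-character prefix rule above can be illustrated with a short check. The helper name and sample IDs are ours, not a ZX command:

```python
# Sketch of the SCSI ID conflict rule described above: two 16-character
# identifiers conflict when their first 8 characters are identical,
# because some initiators honor only that prefix. Illustrative only.
def scsi_ids_conflict(id_a, id_b):
    return id_a[:8] == id_b[:8]

# Distinct full IDs can still conflict on the prefix:
print(scsi_ids_conflict("3f1a9c02aaaaaaaa", "3f1a9c02bbbbbbbb"))  # True
print(scsi_ids_conflict("3f1a9c02aaaaaaaa", "4e2b8d13aaaaaaaa"))  # False
```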
&lt;br /&gt;
&lt;br /&gt;
== FC groups and encrypted resources ==&lt;br /&gt;
FC Groups can contain a mix of unencrypted and encrypted zvols. However, encryption introduces strict dependency rules that affect the availability of the entire group.&lt;br /&gt;
&lt;br /&gt;
=== Group locking mechanism ===&lt;br /&gt;
If any encrypted zvol assigned to an FC Group cannot be accessed due to an encryption issue, the system will:&lt;br /&gt;
&lt;br /&gt;
* Automatically set the FC Group status to &#039;&#039;&#039;Locked&#039;&#039;&#039; &lt;br /&gt;
* Block access to &#039;&#039;&#039;all zvols&#039;&#039;&#039; in that group, including unencrypted ones &lt;br /&gt;
* Prevent initiators from accessing any LUNs assigned to the group &lt;br /&gt;
&lt;br /&gt;
This behavior is intentional and ensures data consistency and security.&lt;br /&gt;
&lt;br /&gt;
In the GUI, encrypted zvols with access issues are marked with an &#039;&#039;&#039;error indicator&#039;&#039;&#039;, and a tooltip may display the cause of the issue (e.g., an incorrect encryption passphrase).&lt;br /&gt;
&lt;br /&gt;
=== Resolving a Locked FC Group ===&lt;br /&gt;
If an FC Group is locked:&lt;br /&gt;
&lt;br /&gt;
# Identify encrypted zvols in the group.&lt;br /&gt;
# Check their encryption status.&lt;br /&gt;
# Unlock the affected zvols by providing the correct encryption passphrase.&lt;br /&gt;
# Verify that all encrypted zvols are accessible.&lt;br /&gt;
&lt;br /&gt;
In general, once all encryption issues are resolved, encrypted resources are unlocked automatically and the FC Group returns to &#039;&#039;&#039;Active&#039;&#039;&#039; status.&lt;br /&gt;
&lt;br /&gt;
In rare cases, automatic activation may fail even though encryption issues have already been resolved. In such situations, deactivating and then reactivating the FC Group can be used to trigger the same validation procedures that are executed after resolving encryption-related errors. If encryption issues are fully resolved, the FC Group and all its resources will activate successfully. If not, the system will prevent activation and display an error.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Important:&#039;&#039;&#039;&lt;br /&gt;
 If the FC Group was &#039;&#039;&#039;manually deactivated while it was locked&#039;&#039;&#039;, resolving the encryption issues will still unlock the encrypted resources, but the FC Group will &#039;&#039;&#039;remain inactive&#039;&#039;&#039;. In this case, the group must be &#039;&#039;&#039;manually activated&#039;&#039;&#039; to make its resources available to initiators.&lt;br /&gt;
&lt;br /&gt;
=== Detaching blocked zvols as an alternative ===&lt;br /&gt;
As an alternative recovery method, a locked FC Group can be restored to an &#039;&#039;&#039;Active&#039;&#039;&#039; state by &#039;&#039;&#039;detaching blocked zvols&#039;&#039;&#039; (e.g., encrypted and inaccessible ones) from the FC Group.&lt;br /&gt;
&lt;br /&gt;
Detaching blocked zvols removes them from the group configuration. As a result:&lt;br /&gt;
&lt;br /&gt;
* The FC Group becomes &#039;&#039;&#039;Active&#039;&#039;&#039; again &lt;br /&gt;
* Initiators regain access to the remaining, non-blocked zvols assigned to the group&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After detaching blocked zvols:&lt;br /&gt;
&lt;br /&gt;
* The FC Group operates normally with the remaining zvols. &lt;br /&gt;
* Detached zvols remain unavailable until their encryption issues are resolved.&lt;br /&gt;
&lt;br /&gt;
If detached encrypted zvols are later unlocked:&lt;br /&gt;
&lt;br /&gt;
* They are &#039;&#039;&#039;not automatically reattached or activated.&#039;&#039;&#039; &lt;br /&gt;
* To make them available again, you must &#039;&#039;&#039;manually attach them to the FC Group.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This behavior ensures predictable recovery of FC Groups while preventing unintended exposure of storage resources after encryption-related access issues.&lt;br /&gt;
{{:Encryption}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Add_FC_volume&amp;diff=1840</id>
		<title>Add FC volume</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Add_FC_volume&amp;diff=1840"/>
		<updated>2026-03-19T14:08:51Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__{{:Add new zvol}}&lt;br /&gt;
=== Attach to Fibre Channel groups ===&lt;br /&gt;
This section allows you to assign the new zvol to one or more Fibre Channel (FC) groups and control how it will be presented to FC initiators.&lt;br /&gt;
&lt;br /&gt;
==== FC membership properties ====&lt;br /&gt;
&lt;br /&gt;
===== SCSI ID =====&lt;br /&gt;
A unique identifier of a device.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Automatic&#039;&#039;&#039; – a SCSI identifier is generated automatically. &lt;br /&gt;
* &#039;&#039;&#039;Generate&#039;&#039;&#039; – creates a new random SCSI identifier.&lt;br /&gt;
&lt;br /&gt;
In most cases, leaving the value set to &#039;&#039;&#039;automatic&#039;&#039;&#039; is sufficient.&lt;br /&gt;
&lt;br /&gt;
===== Write cache settings =====&lt;br /&gt;
Defines how write caching is exposed for this FC LUN:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write-through --- Block I/O (default)&#039;&#039;&#039; – write requests are completed only after data is safely stored on disk. Recommended for most environments. &lt;br /&gt;
* &#039;&#039;&#039;Write-through --- File I/O&#039;&#039;&#039; – write-through behaviour using the File I/O path. &lt;br /&gt;
* &#039;&#039;&#039;Write-back --- File I/O&#039;&#039;&#039; – enables write-back caching on the File I/O path. This provides the highest write performance, but cached data can be lost in case of a power outage or node failure (even in HA cluster mode). Use only when the environment can tolerate potential data loss and when appropriate protection such as UPS and battery-backed cache is in place. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- File I/O&#039;&#039;&#039; – exposes the LUN as read-only over File I/O. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- Block I/O&#039;&#039;&#039; – exposes the LUN as read-only over Block I/O.&lt;br /&gt;
&lt;br /&gt;
===== TRIM support =====&lt;br /&gt;
When enabled, allows the FC LUN to accept TRIM/UNMAP commands, returning freed blocks to the pool. Use this option only when the initiator and operating system fully support TRIM over Fibre Channel.&lt;br /&gt;
&lt;br /&gt;
===== FC groups =====&lt;br /&gt;
The &#039;&#039;&#039;FC groups&#039;&#039;&#039; table lists all configured Fibre Channel groups.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Alias&#039;&#039;&#039; – name of the FC group. Select the check box to assign the zvol to that group. &lt;br /&gt;
* &#039;&#039;&#039;LUN&#039;&#039;&#039; – LUN number under which the zvol will be exposed in the selected group. &lt;br /&gt;
** By default, the next available LUN number is assigned automatically. &lt;br /&gt;
** If manual entry is allowed, you can type a specific LUN number that is free within that group.&lt;br /&gt;
&lt;br /&gt;
If no FC group is selected, the zvol will not be available over Fibre Channel and can be assigned later from the FC configuration view.&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Add_new_zvol&amp;diff=1838</id>
		<title>Add new zvol</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Add_new_zvol&amp;diff=1838"/>
		<updated>2026-03-19T14:08:51Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
A zvol is a ZFS block device created inside a ZFS pool (zpool). In this documentation, the term zvol refers to a block-type resource that is typically exported as a LUN to hosts over iSCSI, Fibre Channel, or NVMe-oF. Zvols are usually used to:&lt;br /&gt;
&lt;br /&gt;
* Provide block storage for virtual machines, databases, or other applications that expect a disk device. &lt;br /&gt;
* Separate workloads that require different performance or data-protection policies (for example, different compression, block size, or deduplication settings). &lt;br /&gt;
* Control how space is consumed by different applications or tenants at the pool level.&amp;lt;/onlyinclude&amp;gt; &lt;br /&gt;
&lt;br /&gt;
 You first create a zvol, and then (optionally) attach it to a target to make it available to hosts.&lt;br /&gt;
&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
== Creating a zvol ==&lt;br /&gt;
&lt;br /&gt;
# Go to the zpool management view in the GUI. &lt;br /&gt;
# Select and expand the zpool in which you want to create the zvol. &lt;br /&gt;
# Navigate to the iSCSI Targets, FC Targets, or NVMe-oF Targets section. &lt;br /&gt;
# Click &#039;&#039;&#039;Add zvol&#039;&#039;&#039; to open the &#039;&#039;&#039;Add new zvol&#039;&#039;&#039; dialog. &lt;br /&gt;
# Configure &#039;&#039;&#039;Encryption settings&#039;&#039;&#039; and &#039;&#039;&#039;Zvol properties&#039;&#039;&#039;, then optionally attach the zvol to an &#039;&#039;&#039;iSCSI target&#039;&#039;&#039; or &#039;&#039;&#039;NVMe-oF subsystem&#039;&#039;&#039;, or assign it to &#039;&#039;&#039;FC groups&#039;&#039;&#039;. &lt;br /&gt;
# Review the configuration and click &#039;&#039;&#039;Add&#039;&#039;&#039;. The new zvol appears in the selected zpool.&lt;br /&gt;
&lt;br /&gt;
After creation, you can adjust most properties later; however, encryption and some layout-related parameters (such as volume block size) cannot be changed after data has been written.&lt;br /&gt;
&lt;br /&gt;
=== Encryption settings ===&lt;br /&gt;
This section is displayed at the top of the dialog. Encryption can be enabled only during zvol creation and cannot be disabled later for this resource.&lt;br /&gt;
&lt;br /&gt;
==== Encrypt resource ====&lt;br /&gt;
Enable this switch to create an encrypted zvol. If the switch remains disabled, the zvol is created unencrypted.&lt;br /&gt;
&lt;br /&gt;
==== Encryption method ====&lt;br /&gt;
Defines the encryption algorithm used when the zvol is encrypted.&lt;br /&gt;
&lt;br /&gt;
* By default, the method is inherited from &#039;&#039;&#039;Configuration → Resource encryption&#039;&#039;&#039; (for example, aes-256-gcm). &lt;br /&gt;
* You can select a different supported method for this zvol if required by policy or performance.&lt;br /&gt;
&lt;br /&gt;
For information about keys, unlocking behaviour, and error handling, see the [[Encryption]] article.&lt;br /&gt;
&lt;br /&gt;
=== Zvol properties ===&lt;br /&gt;
These fields define the behaviour and performance characteristics of the zvol.&lt;br /&gt;
&lt;br /&gt;
==== Name ====&lt;br /&gt;
&lt;br /&gt;
* The zvol name must be unique within the pool. &lt;br /&gt;
* Allowed characters: a–z  A–Z  0–9  .  _  -&lt;br /&gt;
&lt;br /&gt;
Renaming a zvol that is already exported through a target will change its internal path; any targets using that path must be updated before clients regain access.&lt;br /&gt;
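The naming rule above can be expressed as a one-line pattern. This validation helper is illustrative, assuming only the character set documented here; it is not a ZX API call:

```python
# Sketch of the zvol naming rule above: names may contain only
# a-z, A-Z, 0-9, '.', '_' and '-'. Illustrative helper, not a ZX API.
import re

_ZVOL_NAME = re.compile(r"^[A-Za-z0-9._-]+$")

def is_valid_zvol_name(name):
    return bool(_ZVOL_NAME.match(name))

print(is_valid_zvol_name("vm-data_01.img"))  # True
print(is_valid_zvol_name("vm data/01"))      # False (space and slash not allowed)
```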
&lt;br /&gt;
==== Size ====&lt;br /&gt;
&lt;br /&gt;
* Defines the logical capacity of the zvol. Enter the value and select the unit (e.g., GiB). &lt;br /&gt;
* The dialog shows the currently available physical space in the pool below the field.&lt;br /&gt;
&lt;br /&gt;
Effective space consumption depends on the provisioning mode, compression efficiency, and any additional data copies.&lt;br /&gt;
&lt;br /&gt;
==== Provisioning ====&lt;br /&gt;
Controls how space is allocated in the pool.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Thin provisioned (default)&#039;&#039;&#039;: Physical space is allocated on demand as data is written. This allows you to define logical capacities larger than the pool’s current free space, but you must monitor pool usage to avoid running out of space. &lt;br /&gt;
* &#039;&#039;&#039;Thick provisioned&#039;&#039;&#039;: The full size of the zvol is reserved immediately at creation time. This guarantees capacity for the zvol but reduces free space for other resources.&lt;br /&gt;
&lt;br /&gt;
Use thick provisioning only for workloads that require guaranteed capacity and for which overcommitment is unacceptable.&lt;br /&gt;
&lt;br /&gt;
==== Deduplication ====&lt;br /&gt;
Enables ZFS block-level deduplication for the zvol. Available options include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Disabled (default)&#039;&#039;&#039; – deduplication is off. &lt;br /&gt;
* &#039;&#039;&#039;On&#039;&#039;&#039; – alias for sha256. &lt;br /&gt;
* &#039;&#039;&#039;Verify&#039;&#039;&#039; – alias for sha256,verify; performs an extra block comparison step. &lt;br /&gt;
* &#039;&#039;&#039;sha256&#039;&#039;&#039; – deduplicates based on SHA-256 checksums; blocks with identical checksums share a single physical copy. &lt;br /&gt;
* &#039;&#039;&#039;sha256,verify&#039;&#039;&#039; – uses SHA-256 and additionally verifies candidate duplicate blocks to reduce the risk of hash collisions. This mode is very resource-intensive.&lt;br /&gt;
&lt;br /&gt;
Use deduplication only when you expect a high ratio of repeated blocks and have sufficient RAM (e.g., many similar VM images). For general-purpose workloads, leaving deduplication disabled is usually recommended. &lt;br /&gt;
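The checksum-based sharing described above can be shown in miniature. Here `hashlib` stands in for ZFS's internal checksumming; this is a conceptual sketch of why identical blocks collapse to one copy, not how ZX exposes dedup:

```python
# Miniature illustration of SHA-256 dedup identity as described above.
# hashlib stands in for ZFS's internal checksumming; conceptual only.
import hashlib

def block_key(block):
    # Blocks with the same digest are candidates to share one physical copy.
    # The "verify" modes additionally byte-compare candidates before sharing.
    return hashlib.sha256(block).hexdigest()

zeros_a = b"\x00" * 4096
zeros_b = b"\x00" * 4096
ones = b"\x01" * 4096
print(block_key(zeros_a) == block_key(zeros_b))  # True  -> stored once
print(block_key(zeros_a) == block_key(ones))     # False -> stored separately
```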
&lt;br /&gt;
==== Number of data copies ====&lt;br /&gt;
Controls how many ZFS data copies are stored for this zvol, in addition to pool-level redundancy (mirrors, RAIDZ, etc.). &lt;br /&gt;
&lt;br /&gt;
* Allowed values: &#039;&#039;&#039;1 (default), 2, 3&#039;&#039;&#039;. &lt;br /&gt;
* When possible, copies are placed on different physical disks. &lt;br /&gt;
* Additional copies increase used space and count against pool capacity. &lt;br /&gt;
* Only new writes use the current setting; existing data keeps the number of copies that was in effect when it was written.&lt;br /&gt;
&lt;br /&gt;
Use 2 or 3 copies only for small but critical zvols where additional local redundancy is more important than capacity efficiency. &lt;br /&gt;
&lt;br /&gt;
==== Compression ====&lt;br /&gt;
Defines the on-the-fly compression algorithm for zvol data.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;lz4 (default)&#039;&#039;&#039; – high-performance, general-purpose method that is recommended for most workloads. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – disables compression. &lt;br /&gt;
* Additional algorithms in the list: &lt;br /&gt;
** gzip-1 … gzip-9 (higher levels compress more but are slower), &lt;br /&gt;
** lzjb, &lt;br /&gt;
** zle (effective mainly for blocks of zeros).&lt;br /&gt;
&lt;br /&gt;
Keeping lz4 enabled is advisable for most zvols. Disable compression only when the data is already compressed and extremely latency-sensitive. &lt;br /&gt;
&lt;br /&gt;
==== Volume block size ====&lt;br /&gt;
Defines the block size used for the zvol. This is similar to choosing a sector size for a virtual disk.&lt;br /&gt;
&lt;br /&gt;
* Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB, and 1 MiB. &lt;br /&gt;
* Default value in the dialog: 64 KiB. &lt;br /&gt;
* The chosen size cannot be changed once meaningful data has been written.&lt;br /&gt;
&lt;br /&gt;
Guidelines:&lt;br /&gt;
&lt;br /&gt;
* Smaller blocks (e.g., 4-16 KiB) can improve performance for random I/O with small requests, at the cost of more metadata and slightly higher overhead. &lt;br /&gt;
* Larger blocks (e.g., 128 KiB or more) are suitable for large sequential workloads, such as backup or media storage.&lt;br /&gt;
&lt;br /&gt;
Choose the block size based on the typical I/O pattern of the applications that will use the zvol.&lt;br /&gt;
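The guidelines above can be condensed into a small decision helper. The thresholds below are our illustrative assumption, not official ZX recommendations:

```python
# Decision helper summarizing the block-size guidelines above.
# Thresholds are an illustrative assumption, not ZX recommendations.
def suggest_volblocksize(typical_io_kib):
    if typical_io_kib >= 128:
        return "128 KiB"  # large sequential workloads (backup, media)
    if typical_io_kib >= 32:
        return "64 KiB"   # mixed workloads: keep the dialog default
    return "16 KiB"       # small random I/O: less waste, more metadata
```

For instance, a database issuing 8 KiB random reads would land on a small block size, while a media archive streaming multi-megabyte files would use the largest.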
&lt;br /&gt;
==== Write cache sync requests ====&lt;br /&gt;
Controls how synchronous write operations are handled for this zvol (ZFS sync property).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Always (default)&#039;&#039;&#039;: All writes are treated as synchronous. Each transaction is committed and flushed to stable storage before the operation returns to the initiator. This provides the highest level of data safety and is recommended especially when no reliable UPS is available. &lt;br /&gt;
* &#039;&#039;&#039;Standard&#039;&#039;&#039;: Equivalent to sync=standard. Only writes that are explicitly requested as synchronous are forced to stable storage; other writes can stay cached for up to about one second before being committed. This improves performance, but the most recent (up to 1 second) cached data can be lost in case of a power outage. Use this option only in environments protected by a reliable UPS. &lt;br /&gt;
* &#039;&#039;&#039;Disabled&#039;&#039;&#039;: Equivalent to sync=disabled. Even explicitly synchronous writes are treated as asynchronous and may remain in cache for up to about one second. This provides the highest performance, but the most recent cached data may be lost during a power outage, and applications may observe inconsistent data. Use this option only for non-critical workloads and only in environments equipped with a reliable UPS. &lt;br /&gt;
&lt;br /&gt;
==== Write cache sync request handling (logbias) ====&lt;br /&gt;
Provides a hint about how synchronous writes should use log devices (if present in the pool).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write log device (Latency)&#039;&#039;&#039;: If dedicated log vdevs exist, they are used to minimize latency for synchronous writes. This is the recommended default for latency-sensitive workloads. &lt;br /&gt;
* &#039;&#039;&#039;In pool (Throughput)&#039;&#039;&#039;: Log vdevs are bypassed, and writes are optimized for aggregate throughput and efficient pool usage. This can be beneficial for streaming workloads where latency is less critical.&lt;br /&gt;
&lt;br /&gt;
This setting does not override pool layout; it only influences where synchronous data is staged before being committed to main storage. &lt;br /&gt;
&lt;br /&gt;
==== Read cache (primary, ARC) scope ====&lt;br /&gt;
Specifies what is cached in the primary memory cache (ARC) for this zvol.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache both data and metadata. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata; user data is read directly from disk. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – do not cache anything for this zvol in ARC. &lt;br /&gt;
&lt;br /&gt;
For large, sequential, or low-priority workloads you can reduce ARC pressure by switching to &#039;&#039;&#039;Metadata&#039;&#039;&#039; or &#039;&#039;&#039;None&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
==== Read cache (secondary, L2ARC) scope ====&lt;br /&gt;
Controls use of secondary cache devices (L2ARC), typically SSDs.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache both metadata and user data on L2ARC. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata on L2ARC. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – exclude this zvol from L2ARC caching.&lt;br /&gt;
&lt;br /&gt;
Use &#039;&#039;&#039;Metadata&#039;&#039;&#039; or &#039;&#039;&#039;None&#039;&#039;&#039; for zvols that would otherwise fill L2ARC with data that has low reuse value, thereby preserving cache space for more critical workloads.&amp;lt;/onlyinclude&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Attach to target ===&lt;br /&gt;
The &#039;&#039;&#039;Attach to target&#039;&#039;&#039; section at the bottom of the dialog allows you to export the newly created zvol as a LUN immediately. This section is optional; you can also attach the zvol later from the target configuration views.&lt;br /&gt;
&lt;br /&gt;
==== General behaviour ====&lt;br /&gt;
&lt;br /&gt;
* When the Attach to target checkbox is disabled, the zvol is created but not attached to any target. &lt;br /&gt;
* Enabling the checkbox expands the configuration panel and allows you to select or configure how the zvol will be presented to initiators.&lt;br /&gt;
&lt;br /&gt;
==== Fields ====&lt;br /&gt;
&#039;&#039;&#039;Target name&#039;&#039;&#039;: Select an existing target from the drop-down list. The zvol will be attached as a new LUN under this target.&lt;br /&gt;
&lt;br /&gt;
==== SCSI ID ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Automatic&#039;&#039;&#039; – uses an automatically generated SCSI identifier. &lt;br /&gt;
* &#039;&#039;&#039;Generate&#039;&#039;&#039; – creates a new random identifier if you need to control or refresh the ID.&lt;br /&gt;
&lt;br /&gt;
In most cases, leaving the default automatic value is sufficient.&lt;br /&gt;
&lt;br /&gt;
==== LUN ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;automatic&#039;&#039;&#039; – assigns the next available LUN number on the selected target. &lt;br /&gt;
* &#039;&#039;&#039;manual entry&#039;&#039;&#039; – specify a particular LUN number if your environment uses a specific numbering scheme; the number must not already be in use on that target.&lt;br /&gt;
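The automatic option above can be sketched as picking the lowest unused number on the target. This is an assumption about the numbering behaviour, shown for illustration only:

```python
# Sketch of "automatic" LUN assignment as described above: choose the
# lowest LUN number not already in use on the target. Illustrative only.
def next_free_lun(used):
    lun = 0
    while lun in used:
        lun += 1
    return lun

print(next_free_lun(set()))      # 0
print(next_free_lun({0, 1, 3}))  # 2 (the first gap)
```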
&lt;br /&gt;
==== Write cache settings ====&lt;br /&gt;
Defines how write caching is presented to the initiator for this LUN.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write-through --- Block I/O (default)&#039;&#039;&#039;: All writes are committed directly to stable storage before completion is reported to the initiator. This mode prioritizes data integrity and is recommended for most environments. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- Block I/O&#039;&#039;&#039;: Exposes the LUN as read-only on the Block I/O path. Any write attempts from the initiator are rejected. Use this option for volumes that must not be modified. &lt;br /&gt;
* &#039;&#039;&#039;Write-through --- File I/O&#039;&#039;&#039;: Similar to Write-through --- Block I/O, but handled through the File I/O path. Writes are acknowledged only after they are safely stored on disk. &lt;br /&gt;
* &#039;&#039;&#039;Write-back --- File I/O&#039;&#039;&#039;: Enables write-back caching on the File I/O path. Write requests are acknowledged after being stored in cache rather than on disk, which provides the highest write performance. However, cached data may be lost in case of a power failure or node crash (this risk exists even in HA cluster configurations), and resource failover can take noticeably longer. Use this option only when the environment can tolerate potential data loss and when additional protection (e.g., a battery-backed cache and a reliable UPS) is in place. &lt;br /&gt;
* &#039;&#039;&#039;Read only --- File I/O&#039;&#039;&#039;: Exposes the LUN as read-only on the File I/O path. Use when the initiator must have read access only, for example, for archival or reference datasets.&lt;br /&gt;
&lt;br /&gt;
==== TRIM support ====&lt;br /&gt;
&lt;br /&gt;
* When enabled, it allows the zvol to honor TRIM / UNMAP requests from the initiator so that released blocks can be returned to the pool. &lt;br /&gt;
* Use this option only when the operating system and initiator software fully support TRIM for the relevant protocol. &lt;br /&gt;
&lt;br /&gt;
TRIM can improve space efficiency for thin-provisioned zvols, but misconfigured initiators or unsupported combinations may cause unexpected behaviour.&lt;br /&gt;
&lt;br /&gt;
After you confirm the configuration and click Add, the zvol is created with the specified properties. If attachment is enabled, the zvol is also exposed as a LUN on the selected target and becomes available to connected initiators after they rescan their devices.&lt;br /&gt;
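As a hedged illustration of what happens behind this dialog, a thin-provisioned zvol can be created from the OpenZFS command line roughly as follows (the pool name tank and zvol name vol0 are assumptions; LUN attachment itself is handled by the appliance):&lt;br /&gt;

```shell
# Sketch only: tank and vol0 are assumed names, not taken from the GUI.
# -s makes the zvol sparse (thin-provisioned), which is what lets
# TRIM / UNMAP return released blocks to the pool; -V sets the logical size.
zfs create -s -V 100G -o volblocksize=16K tank/vol0

# Compare the logical size with the space actually consumed.
zfs get volsize,used,referenced tank/vol0
```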
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Dataset&amp;diff=1836</id>
		<title>Dataset</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Dataset&amp;diff=1836"/>
		<updated>2026-03-19T14:08:51Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__A &#039;&#039;&#039;dataset&#039;&#039;&#039; is a ZFS file system created inside a ZFS pool (zpool). In this documentation, dataset always refers to a file-system resource, typically used as NAS storage for SMB / NFS shares.&lt;br /&gt;
&lt;br /&gt;
Datasets are typically used to: &lt;br /&gt;
&lt;br /&gt;
* Provide NAS volumes for SMB and NFS shares.&lt;br /&gt;
* Separate data sets with different performance or data-protection policies (for example, different compression, recordsize, or deduplication settings).&lt;br /&gt;
* Apply independent quota and reservation limits for different workloads or tenants.&lt;br /&gt;
&lt;br /&gt;
You create a dataset first, then create shares (SMB / NFS, etc.) that point to it.&lt;br /&gt;
&lt;br /&gt;
== Creating a dataset ==&lt;br /&gt;
&lt;br /&gt;
# Go to the pool management view in the GUI. &lt;br /&gt;
# Select and expand the zpool where you want to create the dataset. &lt;br /&gt;
# Navigate to the &#039;&#039;&#039;Shares&#039;&#039;&#039; section for the selected zpool. &lt;br /&gt;
# Click &#039;&#039;&#039;Add dataset&#039;&#039;&#039; to open the dataset creation dialog. &lt;br /&gt;
# Configure the parameters.&lt;br /&gt;
# Review and confirm creation. The new dataset appears in the dataset list for the selected zpool. &lt;br /&gt;
&lt;br /&gt;
After creation, you can:&lt;br /&gt;
&lt;br /&gt;
* Assign SMB or NFS shares to the dataset in the appropriate shares configuration pages. &lt;br /&gt;
* Adjust most dataset properties later; however, &#039;&#039;&#039;encryption settings&#039;&#039;&#039; and some layout-related properties cannot be changed after creation.&lt;br /&gt;
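For reference, the GUI creation steps above correspond roughly to a single OpenZFS command; this is a sketch, and the names tank and projects are assumptions:&lt;br /&gt;

```shell
# The dialog collects the properties described below and applies them
# at creation time, roughly equivalent to:
zfs create -o compression=lz4 -o atime=off tank/projects

# Confirm that the new dataset appears in the pool.
zfs list -r tank
```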
&lt;br /&gt;
Below is a description of all dataset properties.&lt;br /&gt;
&lt;br /&gt;
=== Encryption settings ===&lt;br /&gt;
This section is displayed at the top of the dialog. &#039;&#039;&#039;Encryption can be enabled only during creation&#039;&#039;&#039;; once the dataset is created, encryption cannot be turned on or off.&lt;br /&gt;
&lt;br /&gt;
==== Encrypt resource ====&lt;br /&gt;
Enable this switch to create an encrypted dataset. When disabled, the dataset is created unencrypted.&lt;br /&gt;
&lt;br /&gt;
==== Encryption method ====&lt;br /&gt;
Shows the algorithm used when the dataset is encrypted.&lt;br /&gt;
&lt;br /&gt;
* By default, it inherits the value from the Configuration -&amp;gt; Resource encryption setting (for example, aes-256-gcm).&lt;br /&gt;
* You can select a different supported method for this dataset.&lt;br /&gt;
&lt;br /&gt;
For details about keys, unlocking, and error handling, see [[Encryption]].&lt;br /&gt;
&lt;br /&gt;
=== Dataset properties ===&lt;br /&gt;
These fields define the behaviour of the dataset itself.&lt;br /&gt;
&lt;br /&gt;
==== Name ====&lt;br /&gt;
&lt;br /&gt;
* The dataset name must be unique within the pool. &lt;br /&gt;
* Allowed characters: a–z  A–Z  0–9  .  _  -&lt;br /&gt;
&lt;br /&gt;
Changing the name of an existing dataset breaks paths used by its shares; clients will lose access until the share definitions are adjusted.&lt;br /&gt;
&lt;br /&gt;
==== Deduplication ====&lt;br /&gt;
Enables ZFS block-level deduplication for this dataset. Options:&lt;br /&gt;
&lt;br /&gt;
* Disabled (default) – deduplication is turned off. &lt;br /&gt;
* On – alias for “sha256”. &lt;br /&gt;
* Verify – alias for “sha256, Verify”; additionally compares blocks to reduce the risk of false matches. &lt;br /&gt;
* sha256 – uses the SHA-256 checksum for deduplication. When two blocks have the same checksum, they are treated as identical and only a single copy is stored. &lt;br /&gt;
* sha256, Verify – uses SHA-256 for deduplication and additionally verifies candidate duplicate blocks to detect possible hash collisions. This mode is very resource-intensive and is not recommended for general use.&lt;br /&gt;
&lt;br /&gt;
Use deduplication only for workloads with a high ratio of identical blocks and sufficient RAM (e.g., many similar VM images). For general data, it is usually better to keep it disabled.&lt;br /&gt;
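These drop-down options map to the ZFS dedup property; a hedged CLI sketch with an assumed pool and dataset name:&lt;br /&gt;

```shell
# Equivalent of the "sha256, Verify" option; resource-intensive,
# see the warning above.
zfs set dedup=sha256,verify tank/vmstore

# Check the deduplication ratio actually achieved pool-wide.
zpool get dedupratio tank
```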
&lt;br /&gt;
==== Number of data copies ====&lt;br /&gt;
Controls the number of ZFS data copies stored for this dataset, in addition to pool redundancy (mirror, RAIDZ, and so on).&lt;br /&gt;
&lt;br /&gt;
* Possible values: &#039;&#039;&#039;1 (default)&#039;&#039;&#039;, &#039;&#039;&#039;2&#039;&#039;&#039;, &#039;&#039;&#039;3&#039;&#039;&#039;. &lt;br /&gt;
* Copies are stored on different disks when possible. &lt;br /&gt;
* Extra copies increase used space and are counted towards quota and reservation. &lt;br /&gt;
* Only new writes use the current setting.&lt;br /&gt;
&lt;br /&gt;
Use higher values only for small but critical datasets where local redundancy is more important than capacity.&lt;br /&gt;
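This setting corresponds to the ZFS copies property; a sketch with assumed names:&lt;br /&gt;

```shell
# Keep two copies of every block of a small but critical dataset.
# Only data written after the change receives the extra copy.
zfs set copies=2 tank/configs
```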
&lt;br /&gt;
==== Compression ====&lt;br /&gt;
The compression algorithm used for this dataset.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;lz4 (default)&#039;&#039;&#039; – fast and generally recommended.&lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – disables compression. &lt;br /&gt;
* Other algorithms that can appear in the list: &lt;br /&gt;
** gzip levels 1–9 (1 = fastest, lowest compression; 9 = slowest, highest compression), &lt;br /&gt;
** lzjb, &lt;br /&gt;
** zle.&lt;br /&gt;
&lt;br /&gt;
Keep lz4 for most datasets. Disable compression only when data is already compressed and very latency-sensitive.&lt;br /&gt;
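The selection maps to the ZFS compression property; a sketch (names assumed):&lt;br /&gt;

```shell
# lz4 is the recommended default; gzip-9 trades CPU time for ratio.
zfs set compression=lz4 tank/projects

# compressratio reports the compression achieved on data written so far.
zfs get compression,compressratio tank/projects
```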
&lt;br /&gt;
==== Record size ====&lt;br /&gt;
Suggested block size for files stored in this dataset.&lt;br /&gt;
&lt;br /&gt;
* Designed primarily for database-type workloads that access large files in fixed-size records. &lt;br /&gt;
* For such workloads, setting the “record size” to at least match the database record size can significantly improve performance. &lt;br /&gt;
* For general-purpose datasets, changing the default is not recommended and may reduce performance. &lt;br /&gt;
* Values: 4, 8, 16, 32, 64, 128, 256, 512 KiB and 1 MiB; newer software versions allow values up to 16 MiB. &lt;br /&gt;
* Default: &#039;&#039;&#039;128 KiB&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
The new record size applies only to data written after the change; existing files keep their original block size.&lt;br /&gt;
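This is the ZFS recordsize property; a sketch with an assumed dataset name:&lt;br /&gt;

```shell
# Match the record size to the application I/O pattern, e.g. 1 MiB for
# large sequential media files; it affects newly written data only.
zfs set recordsize=1M tank/media
```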
&lt;br /&gt;
==== Write cache sync requests ====&lt;br /&gt;
Controls the ZFS &#039;&#039;&#039;sync&#039;&#039;&#039; property – how synchronous write operations are handled.&lt;br /&gt;
&lt;br /&gt;
Options in the drop-down:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Always&#039;&#039;&#039;: All file-system transactions are committed and flushed to stable storage before returning to the application. Best data safety; lower performance. &lt;br /&gt;
* &#039;&#039;&#039;Standard (default)&#039;&#039;&#039;: Synchronous operations are logged and flushed; however, to improve performance, the most recent cached data (approximately one second) may be lost if a sudden power failure occurs. Recommended only when the environment is protected by a reliable UPS, as indicated by the warning in the dialog. &lt;br /&gt;
* &#039;&#039;&#039;Disabled&#039;&#039;&#039;: Synchronous requests are treated as asynchronous; data is committed only when the next transaction group is written. This provides maximum performance but the highest risk of data loss and inconsistency. Use only for non-critical workloads where this risk is acceptable.&lt;br /&gt;
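The three options correspond to the values of the ZFS sync property; a sketch with assumed dataset names:&lt;br /&gt;

```shell
zfs set sync=always tank/db         # flush every transaction; safest
zfs set sync=standard tank/home     # default behaviour
zfs set sync=disabled tank/scratch  # fastest; accepts data-loss risk
```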
&lt;br /&gt;
==== Write cache sync request handling (logbias) ====&lt;br /&gt;
Provides a hint about how synchronous writes for this dataset should use log devices.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Write log device (Latency)&#039;&#039;&#039; – if the pool has separate log devices, they are used to minimize latency of synchronous writes. Recommended default. &lt;br /&gt;
* &#039;&#039;&#039;In pool (Throughput)&#039;&#039;&#039; – log devices are not used; the software optimizes for overall pool throughput and efficient use of resources.&lt;br /&gt;
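These options map to the ZFS logbias property; a sketch with assumed names:&lt;br /&gt;

```shell
zfs set logbias=latency tank/db         # use a separate log (SLOG) device
zfs set logbias=throughput tank/backups # bypass the SLOG for bulk writes
```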
&lt;br /&gt;
==== Read cache (primary, ARC) scope ====&lt;br /&gt;
Controls what is cached in main memory (ARC) for this dataset.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache data and metadata. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – do not cache anything from this dataset in ARC. &lt;br /&gt;
&lt;br /&gt;
You can reduce ARC pressure for large streaming or low-priority datasets by switching to “Metadata” or “None”.&lt;br /&gt;
&lt;br /&gt;
==== Read cache (secondary, L2ARC) scope ====&lt;br /&gt;
Controls what is cached on L2ARC devices (if present).&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;All (default)&#039;&#039;&#039; – cache data and metadata. &lt;br /&gt;
* &#039;&#039;&#039;Metadata&#039;&#039;&#039; – cache only metadata. &lt;br /&gt;
* &#039;&#039;&#039;None&#039;&#039;&#039; – do not cache this dataset in L2ARC.&lt;br /&gt;
&lt;br /&gt;
Use “Metadata” or “None” for datasets that would otherwise fill L2ARC with low-value data.&lt;br /&gt;
&lt;br /&gt;
==== Access time ====&lt;br /&gt;
Controls whether file access time (atime) is updated on reads.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Disabled (default)&#039;&#039;&#039; – access time is not updated, which avoids extra writes and can significantly improve performance. &lt;br /&gt;
* &#039;&#039;&#039;Enabled&#039;&#039;&#039; – access time is updated on each read; required by some legacy applications (for example, certain mailers).&lt;br /&gt;
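The two read-cache scopes and the access-time switch above map to the primarycache, secondarycache, and atime properties; a sketch for a large streaming dataset (name assumed):&lt;br /&gt;

```shell
zfs set primarycache=metadata tank/media  # keep ARC free for hotter data
zfs set secondarycache=none tank/media    # do not fill L2ARC with streams
zfs set atime=off tank/media              # no access-time writes on reads
```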
&lt;br /&gt;
=== Small data blocks policy ===&lt;br /&gt;
This section controls how &#039;&#039;&#039;small data blocks&#039;&#039;&#039; of this dataset are placed when the pool has a &#039;&#039;&#039;special devices group&#039;&#039;&#039; configured.&lt;br /&gt;
&lt;br /&gt;
* If no special devices group exists in the pool, the section is disabled, and an information banner appears: “Available only when a special devices group exists.” In this case, all data blocks are stored on regular data vdevs. &lt;br /&gt;
* When a special devices group exists and is healthy, the &#039;&#039;&#039;Small data block size&#039;&#039;&#039; list becomes active.&lt;br /&gt;
&lt;br /&gt;
==== Small data block size ====&lt;br /&gt;
Defines the maximum size of blocks that will be stored on special devices instead of regular data vdevs (this corresponds to the ZFS special_small_blocks property for the dataset). More info available in the “[[Small blocks policy settings]]” article.&lt;br /&gt;
&lt;br /&gt;
Available options in the drop-down:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Disable for the dataset&#039;&#039;&#039;: The small data blocks policy is disabled for this dataset, regardless of the pool settings. All data blocks (including small ones) are stored on regular data vdevs. &lt;br /&gt;
* &#039;&#039;&#039;4 KiB, 8 KiB, 16 KiB, 32 KiB, 64 KiB, 128 KiB, 256 KiB, 512 KiB, 1 MiB, 2 MiB, 4 MiB, 8 MiB, 16 MiB&#039;&#039;&#039;: Any data block with a logical size less than or equal to the selected value is stored on special devices. Larger blocks are stored on regular data vdevs. &lt;br /&gt;
* &#039;&#039;&#039;Inherit from the pool settings (default) [X KiB]&#039;&#039;&#039;: The dataset inherits the pool-level small blocks setting. The value in brackets ([X KiB]) shows the current pool threshold; e.g.: &lt;br /&gt;
** [0 KiB] – small data blocks policy is effectively disabled on the pool. &lt;br /&gt;
** [128 KiB] – blocks up to 128 KiB are redirected to special devices according to pool settings.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes and recommendations&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
* A &#039;&#039;&#039;higher threshold&#039;&#039;&#039; moves more data to special devices, which can improve performance for small, random I/O, but also increases capacity usage on the special devices group. &lt;br /&gt;
* A &#039;&#039;&#039;very small value&#039;&#039;&#039; (e.g., 4 KiB or 8 KiB) typically limits the placement mostly to metadata and very small files. &lt;br /&gt;
* If special or dedup devices are not supported by the pool layout (e.g., the pool contains RAIDZ data groups instead of mirror-based data vdevs), the small data blocks policy cannot be effectively used. Plan the pool layout accordingly. &lt;br /&gt;
* If the special devices group becomes degraded or unavailable, the performance and behaviour of datasets using the small data blocks policy can be affected; always monitor pool health.&lt;br /&gt;
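The drop-down corresponds to the ZFS special_small_blocks property; a hedged sketch with assumed names:&lt;br /&gt;

```shell
# Blocks up to 32 KiB from this dataset go to the special devices group;
# a value of 0 disables the policy for the dataset.
zfs set special_small_blocks=32K tank/projects
zfs get special_small_blocks tank/projects
```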
&lt;br /&gt;
=== Space management – quota and reservation ===&lt;br /&gt;
The bottom part of the dialog controls space limits for the dataset.&lt;br /&gt;
&lt;br /&gt;
==== Enable quota ====&lt;br /&gt;
When this switch is enabled:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Quota definition&#039;&#039;&#039; &lt;br /&gt;
** Hard limit on the total space that the dataset and all its descendants (child datasets, snapshots, clones) can consume. &lt;br /&gt;
** A unit (MiB, GiB, TiB) can be selected from the drop-down. &lt;br /&gt;
* &#039;&#039;&#039;Include snapshots and clones&#039;&#039;&#039; (checkbox) &lt;br /&gt;
** When checked (default), space used by snapshots and clones counts towards the quota. This matches standard ZFS behaviour and is usually recommended.&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
&lt;br /&gt;
* Quota cannot be smaller than reservation (if reservation is enabled). &lt;br /&gt;
* When the quota is reached, further writes fail with “out of space” for this dataset even if the pool still has free capacity.&lt;br /&gt;
&lt;br /&gt;
==== Enable reservation ====&lt;br /&gt;
When this switch is enabled:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Reserved space&#039;&#039;&#039; &lt;br /&gt;
** Amount of pool space reserved exclusively for this dataset. &lt;br /&gt;
** You cannot reserve more than the currently available free space in the pool. The dialog shows the currently available physical space below the field. &lt;br /&gt;
* &#039;&#039;&#039;Include snapshots and clones&#039;&#039;&#039; (checkbox) &lt;br /&gt;
** When checked, the reserved space covers the dataset and all its descendants (snapshots and clones). &lt;br /&gt;
** When unchecked, reserved space applies only to the dataset itself (behaviour similar to ZFS refreservation).&lt;br /&gt;
&lt;br /&gt;
Additional rules:&lt;br /&gt;
&lt;br /&gt;
* The sum of all reservations in a pool cannot exceed its free space. &lt;br /&gt;
* Quota must be greater than or equal to reservation.&lt;br /&gt;
&lt;br /&gt;
Use reservation only for datasets that must have guaranteed space, for example critical databases or backup targets.&lt;br /&gt;
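The quota and reservation switches map to the ZFS quota/refquota and reservation/refreservation properties; a sketch with assumed names (each pair shows alternatives depending on the snapshots-and-clones checkbox):&lt;br /&gt;

```shell
zfs set quota=1T tank/projects            # limit incl. snapshots and clones
zfs set refquota=800G tank/projects       # limit the dataset itself only

zfs set reservation=200G tank/projects    # guarantee incl. descendants
zfs set refreservation=200G tank/projects # guarantee the dataset itself only
```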
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Encryption&amp;diff=1834</id>
		<title>Encryption</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Encryption&amp;diff=1834"/>
		<updated>2026-03-19T14:08:51Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Encryption protects data stored in datasets and zvols within a ZFS pool (zpool). The encryption feature is available for every zpool, but encrypted resources can be created only after you configure a pool-wide encryption passphrase.&lt;br /&gt;
&lt;br /&gt;
Key characteristics:&lt;br /&gt;
&lt;br /&gt;
*Encryption applies to datasets and zvols; the zpool itself is not encrypted.&lt;br /&gt;
*All encrypted resources in one zpool share the same passphrase.&lt;br /&gt;
*Datasets and zvols can only be encrypted during their creation.&lt;br /&gt;
*You can later change the pool-wide encryption passphrase and the default encryption method.&lt;br /&gt;
&lt;br /&gt;
Use encryption when you need at-rest data protection within a specific zpool.&lt;br /&gt;
&lt;br /&gt;
== Configuring resource encryption ==&lt;br /&gt;
&lt;br /&gt;
#Go to &#039;&#039;&#039;Storage&#039;&#039;&#039;.&lt;br /&gt;
#Select the zpool you want to configure.&lt;br /&gt;
#Open the &#039;&#039;&#039;Configuration&#039;&#039;&#039; tab.&lt;br /&gt;
#Expand the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will see either the initial configuration fields or the current encryption status, depending on whether encryption has already been configured (for example, during [[Zpool_wizard|zpool creation]]). When no passphrase is configured for a zpool, the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section shows:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Default encryption method&#039;&#039;&#039; – algorithm that is preselected in the drop-down list and used by default for new encrypted datasets and zvols in this zpool, if you do not choose a different method during resource creation.&lt;br /&gt;
*&#039;&#039;&#039;Encryption passphrase&#039;&#039;&#039; – shared passphrase used to unlock all encrypted resources in this zpool.&lt;br /&gt;
*&#039;&#039;&#039;Confirm passphrase&#039;&#039;&#039; – repeat the passphrase for verification.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enter the passphrase twice, select the default method, and then click &#039;&#039;&#039;Save settings&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Important&#039;&#039;&#039;: The passphrase cannot be recovered if it is lost. Without the passphrase, encrypted resources in this zpool cannot be accessed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the passphrase is configured, you can start creating encrypted datasets and zvols in this zpool. More details on how to use encryption in resources can be found here:&lt;br /&gt;
&lt;br /&gt;
*Create a new zvol for iSCSI Target&lt;br /&gt;
*Create a new zvol for FC Group&lt;br /&gt;
*Create a new dataset&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
*Encryption can be enabled only at creation time. Existing datasets and zvols cannot be switched to encrypted mode by editing their properties.&lt;br /&gt;
*To protect existing data that is currently unencrypted, you must:&lt;br /&gt;
**Create a new encrypted dataset or zvol.&lt;br /&gt;
**Copy or replicate data from the old resource to the new encrypted one.&lt;br /&gt;
**Remove the unencrypted original if it is no longer needed.&lt;br /&gt;
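The migration procedure above can be sketched with OpenZFS commands (all names are assumptions; on this appliance, encrypted resources are normally created through the GUI):&lt;br /&gt;

```shell
# Create the encrypted destination (prompts for the passphrase).
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure

# Copy the unencrypted data into it via snapshot replication; data
# received under an encrypted parent is written encrypted.
zfs snapshot tank/plain@migrate
zfs send tank/plain@migrate | zfs receive tank/secure/plain

# Remove the unencrypted original only after verifying the copy.
zfs destroy -r tank/plain
```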
&lt;br /&gt;
== Managing a zpool with configured resource encryption ==&lt;br /&gt;
&lt;br /&gt;
When a passphrase is already configured, the &#039;&#039;&#039;Resource encryption&#039;&#039;&#039; section shows:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Passphrase status&#039;&#039;&#039; (for example, configured).&lt;br /&gt;
*&#039;&#039;&#039;Default encryption method&#039;&#039;&#039;.&lt;br /&gt;
*Buttons:&lt;br /&gt;
**Change passphrase&lt;br /&gt;
**Change encryption method&lt;br /&gt;
&lt;br /&gt;
=== Changing the encryption passphrase ===&lt;br /&gt;
&lt;br /&gt;
#Click &#039;&#039;&#039;Change passphrase&#039;&#039;&#039;.&lt;br /&gt;
#In the dialog:&lt;br /&gt;
##Enter &#039;&#039;&#039;New passphrase&#039;&#039;&#039;.&lt;br /&gt;
##Confirm passphrase.&lt;br /&gt;
##Enter the &#039;&#039;&#039;Administrator&#039;&#039;&#039; password to authorize the change.&lt;br /&gt;
#Click &#039;&#039;&#039;Change passphrase&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
After you confirm the change, the new passphrase is propagated to all existing encrypted datasets and zvols in the zpool. This synchronization may take some time, depending on the number of encrypted resources. A notification of the operation&#039;s start and completion is recorded in &#039;&#039;&#039;Event Viewer&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 While the synchronization is in progress, the User Interface is locked for changes and cannot be used until the operation finishes. &lt;br /&gt;
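At the ZFS level, this operation corresponds roughly to re-wrapping the encryption keys with zfs change-key on each encrypted resource (a sketch; the resource name is an assumption):&lt;br /&gt;

```shell
# Prompts interactively for the new passphrase; the data itself is not
# re-encrypted, only the wrapping key changes.
zfs change-key tank/secure
```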
&lt;br /&gt;
&lt;br /&gt;
=== Changing the default encryption method ===&lt;br /&gt;
&lt;br /&gt;
#Click &#039;&#039;&#039;Change encryption method&#039;&#039;&#039;.&lt;br /&gt;
#Select a new &#039;&#039;&#039;Default encryption method&#039;&#039;&#039; from the drop-down list.&lt;br /&gt;
#Click &#039;&#039;&#039;Save method&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The selected method is used as the default only for encrypted datasets and zvols created after this change. Existing encrypted resources keep their original encryption method, which cannot be changed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Available encryption methods ====&lt;br /&gt;
&lt;br /&gt;
The following methods are available for resource encryption:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-128-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 128-bit key in CCM (Counter with CBC-MAC) mode.&lt;br /&gt;
*Provides authenticated encryption with moderate CPU usage.&lt;br /&gt;
*Suitable when you need a balance between performance and security.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-192-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 192-bit key in CCM mode.&lt;br /&gt;
*Higher security margin than 128-bit, with slightly higher CPU cost.&lt;br /&gt;
*Use when you prefer stronger keys and can accept a small performance impact.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-256-CCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 256-bit key in CCM mode.&lt;br /&gt;
*Maximum key length in the CCM group.&lt;br /&gt;
*Use when the security margin is more important than performance.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-128-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 128-bit key in GCM (Galois/Counter Mode).&lt;br /&gt;
*Authenticated encryption optimized for performance on modern CPUs.&lt;br /&gt;
*Good choice when you need strong encryption with high throughput.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-192-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 192-bit key in GCM mode.&lt;br /&gt;
*Increases key size over AES-128-GCM while remaining performant.&lt;br /&gt;
*Use when you want a higher security margin but similar behavior to AES-128-GCM.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AES-256-GCM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*AES with a 256-bit key in GCM mode.&lt;br /&gt;
*Provides strong authenticated encryption and is widely used as a best-practice choice.&lt;br /&gt;
*Recommended default when your hardware can handle the additional CPU load.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
== Handling invalid or missing passphrase ==&lt;br /&gt;
&lt;br /&gt;
If the encryption passphrase is invalid or not configured on the current host, all encrypted datasets and zvols in the affected zpool are locked and cannot be accessed. When a locked zvol is attached to an iSCSI target, FC group, or NVMe-oF subsystem, these objects are effectively blocked as well, and no data can be accessed through them. For an encrypted dataset, all shares configured on it are also blocked.&lt;br /&gt;
&lt;br /&gt;
To restore access, enter the correct passphrase in &#039;&#039;&#039;Configuration → Resource encryption&#039;&#039;&#039; for the zpool. After a valid passphrase is provided, all locked, encrypted resources are automatically unlocked and become active again, provided that the related targets, groups, subsystems, or datasets were not manually deactivated beforehand.&lt;br /&gt;
&lt;br /&gt;
Such situations may occur, for example, when a zpool is imported on a different host or moved between cluster nodes. In a cluster environment, the passphrase is usually synchronized between nodes, so after a failover, the other node already has the required passphrase. However, if the passphrase change operation was interrupted, some encrypted resources may have been updated to the new passphrase while others still use the old one. On the original host, access may still work, but after exporting the zpool and importing it on another host, some or all encrypted resources can become partially locked. In this case, an event is recorded in the Event Viewer indicating that the passphrase change did not complete successfully.&lt;br /&gt;
&lt;br /&gt;
If this happens, first try to unlock the resources by entering the latest passphrase (the one you intended to change to). If this does not unlock all encrypted resources, enter the previous passphrase (the one used before the change), allow the passphrase change process to complete, and then change the passphrase again to the desired new value. This sequence should unify the passphrase across all encrypted resources in the zpool. Always monitor Event Viewer logs when working with encrypted resources and when changing passphrases.&lt;br /&gt;
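At the ZFS level, unlocking after entering the correct passphrase corresponds roughly to the following (pool name assumed):&lt;br /&gt;

```shell
# Load keys for all encrypted datasets and zvols in the pool
# (prompts for the passphrase), then mount the file systems.
zfs load-key -r tank
zfs mount -a
```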
&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Zpool_wizard&amp;diff=1832</id>
		<title>Zpool wizard</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Zpool_wizard&amp;diff=1832"/>
		<updated>2026-03-19T14:08:51Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision imported&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The &#039;&#039;&#039;Zpool Wizard&#039;&#039;&#039; guides you through the process of creating and configuring a new ZFS pool (zpool) from available disks. A zpool is the foundational storage construct in ZFS. It serves as a logical storage pool that combines multiple physical storage devices (disks) into &#039;&#039;&#039;vdevs&#039;&#039;&#039; (virtual devices), which collectively form the unified zpool. The wizard consists of multiple steps that allow you to configure data groups (vdevs), add optional device groups, adjust pool settings, and enable encryption if required.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Accessing the wizard ==&lt;br /&gt;
&lt;br /&gt;
#Navigate to &#039;&#039;&#039;Storage&#039;&#039;&#039;.&lt;br /&gt;
#Click &#039;&#039;&#039;Add zpool&#039;&#039;&#039;.&lt;br /&gt;
#The Zpool creation wizard will launch.&lt;br /&gt;
#Follow the guided steps to configure your zpool.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Zpool configuration steps ==&lt;br /&gt;
&lt;br /&gt;
=== Add data group ===&lt;br /&gt;
&lt;br /&gt;
In this step, available disks are listed. You can filter only unused disks using the toggle.&lt;br /&gt;
&lt;br /&gt;
#Select one or more disks from the list.&lt;br /&gt;
#Choose the desired redundancy level for the group:&lt;br /&gt;
#*&#039;&#039;&#039;Single&#039;&#039;&#039; - No redundancy. Any disk failure results in data loss.&lt;br /&gt;
#*&#039;&#039;&#039;Mirror&#039;&#039;&#039; - Data is stored on multiple disks. Capacity equals the size of one disk per mirror.&lt;br /&gt;
#**&#039;&#039;&#039;Mirror (Single Group)&#039;&#039;&#039;: All selected disks will be combined into a single mirrored group.&lt;br /&gt;
#**&#039;&#039;&#039;Mirror (Multiple Groups)&#039;&#039;&#039;: The selected disks will be paired into multiple mirrored groups, each consisting of two disks.&lt;br /&gt;
#*&#039;&#039;&#039;Z-1&#039;&#039;&#039; - Single-parity redundancy. One disk may fail without losing data. A minimum of three disks is required for a RAIDZ-1 group.&lt;br /&gt;
#*&#039;&#039;&#039;Z-2&#039;&#039;&#039; - Double-parity redundancy. Two disks may fail without losing data. A minimum of four disks is required for a RAIDZ-2 group.&lt;br /&gt;
#*&#039;&#039;&#039;Z-3&#039;&#039;&#039; - Triple-parity redundancy. Three disks may fail without losing data. A minimum of five disks is required for a RAIDZ-3 group.&lt;br /&gt;
#Click &#039;&#039;&#039;Add group&#039;&#039;&#039; to add the selected configuration.&lt;br /&gt;
#*The selected data group will appear in the right-hand panel. The total zpool capacity and licensed storage usage are displayed below.&lt;br /&gt;
#*To learn more about vdev types, refer to the following article: [[Redundancy in Disks Groups|Redundancy in Disk Groups]]&lt;br /&gt;
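The redundancy choices above can be sketched as zpool creation commands (pool and disk names are assumptions; the two commands are alternatives, not a sequence):&lt;br /&gt;

```shell
# Mirror (Multiple Groups): two 2-disk mirrors striped together.
zpool create tank mirror sda sdb mirror sdc sdd

# Z-2 alternative: any two of the four disks may fail without data loss.
zpool create tank raidz2 sda sdb sdc sdd
```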
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Add write log (optional) ===&lt;br /&gt;
&lt;br /&gt;
This feature allows you to configure the write log function with a selected redundancy level (single drive or mirror). The write log utilizes a separate intent log (SLOG) device. A fast SSD/NVMe should be used for this vdev.&lt;br /&gt;
&lt;br /&gt;
#Select disks from the available list.&lt;br /&gt;
#Choose redundancy type (&#039;&#039;&#039;Single&#039;&#039;&#039; or &#039;&#039;&#039;Mirror&#039;&#039;&#039;) for added reliability.&lt;br /&gt;
#Add the group to the zpool.&lt;br /&gt;
&lt;br /&gt;
Write log groups are displayed separately in the Other groups section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Key points to consider&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 • If multiple log devices are specified, write operations are load-balanced between the devices.&lt;br /&gt;
 • Log devices can be configured with redundancy by using mirrors to enhance fault tolerance.&lt;br /&gt;
 • RAIDZ vdev types are not supported for the intent log.&lt;br /&gt;
 &lt;br /&gt;
 This ensures efficient and reliable write operations while leveraging the selected redundancy level.&lt;br /&gt;
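A hedged sketch of adding a mirrored write log to an existing pool (pool and device names are assumptions):&lt;br /&gt;

```shell
# Mirrored SLOG on two fast NVMe devices for synchronous-write latency.
zpool add tank log mirror nvme0n1 nvme1n1
```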
&lt;br /&gt;
=== Add read cache (optional) ===&lt;br /&gt;
&lt;br /&gt;
This step allows you to assign SSDs as L2ARC (Level 2 Adaptive Replacement Cache) devices to boost read performance. Adding a read cache improves performance and reduces latency for storage systems under heavy read load. A cache device stores frequently accessed data from the storage pool, providing an additional layer of caching between main memory and disk. These devices cannot be configured as mirrors or RAIDZ groups. A fast SSD/NVMe should be used for this vdev.&lt;br /&gt;
&lt;br /&gt;
#Select a disk to be used as a cache device. Only &#039;&#039;&#039;Single&#039;&#039;&#039; redundancy is available.&lt;br /&gt;
#Confirm by adding the group.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Key benefits and considerations&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 • Cache devices are particularly useful for &#039;&#039;&#039;read-heavy workloads&#039;&#039;&#039; where the working dataset size exceeds the capacity of main memory.&lt;br /&gt;
 • By utilizing cache devices, a larger portion of the working dataset can be served from low-latency storage, improving performance significantly.&lt;br /&gt;
 • The greatest performance improvements are seen in workloads characterized by random reads of primarily static content.&lt;br /&gt;
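A sketch of adding a cache device (pool and device names are assumptions):&lt;br /&gt;

```shell
# L2ARC devices are always standalone ("Single"); multiple cache
# devices are load-balanced, never mirrored.
zpool add tank cache nvme2n1
```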
&lt;br /&gt;
=== &amp;lt;br/&amp;gt;Add special devices group (optional) ===&lt;br /&gt;
&lt;br /&gt;
 Special and deduplication vdevs require at least the same level of redundancy as data vdevs. &lt;br /&gt;
 Because RAIDZ vdevs do not provide compatible redundancy for these device groups, special vdevs and deduplication vdevs cannot be used in a ZFS pool that contains RAIDZ1, RAIDZ2, or RAIDZ3.&lt;br /&gt;
&lt;br /&gt;
A special devices group stores metadata and small-block data to improve performance. A fast SSD/NVMe should be used for this vdev.&lt;br /&gt;
&lt;br /&gt;
#Select one or more disks.&lt;br /&gt;
#Choose redundancy (&#039;&#039;&#039;Single&#039;&#039;&#039; or &#039;&#039;&#039;Mirror&#039;&#039;&#039;). &#039;&#039;&#039;The mirror redundancy level is recommended to prevent data loss&#039;&#039;&#039;.&lt;br /&gt;
#Add them as a group.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;Key features and benefits&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 • Storing metadata on special devices improves performance for metadata-intensive operations, such as file lookups and directory traversals.&lt;br /&gt;
 • Small files below a certain size threshold can also be stored on these devices, enhancing read and write speeds for such workloads.&lt;br /&gt;
 • Special devices are particularly beneficial for environments with a large number of small files or high metadata activity.&lt;br /&gt;
 &lt;br /&gt;
 Using special devices optimizes the overall performance of the ZFS pool by offloading critical metadata and small-file operations to faster storage.&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
=== Add deduplication group (optional) ===&lt;br /&gt;
&lt;br /&gt;
A deduplication group can be explicitly excluded from a special device group to hold deduplication tables. This allows the deduplication tables to be stored separately from the special device class.&lt;br /&gt;
&lt;br /&gt;
#Select disks for this purpose.&amp;amp;nbsp;Redundancy can be set to &#039;&#039;&#039;Single&#039;&#039;&#039; or &#039;&#039;&#039;Mirror&#039;&#039;&#039;.&amp;amp;nbsp;&#039;&#039;&#039;The mirror redundancy level is recommended to prevent data loss&#039;&#039;&#039;.&lt;br /&gt;
#Add the group to confirm.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;Key features and considerations&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 • Storing deduplication tables in a dedicated group improves the efficiency of deduplication processes by isolating them from other metadata operations.&lt;br /&gt;
 • This configuration provides flexibility in optimizing storage layout based on workload requirements.&lt;br /&gt;
 • Using a deduplication group is particularly beneficial for systems with high deduplication demands, ensuring better performance and management.&lt;br /&gt;
 &lt;br /&gt;
 This setup enhances deduplication performance while maintaining a clear separation of metadata and deduplication operations.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;br/&amp;gt;Add spare disks (optional) ===&lt;br /&gt;
&lt;br /&gt;
A spare disk is a special pseudo-vdev used to track available spare devices for a zpool. Using spare disks enhances the storage pool&#039;s reliability by enabling seamless drive replacement and reducing the risk of data loss.&lt;br /&gt;
&lt;br /&gt;
#Select the disk and add it to the &#039;&#039;&#039;Spare&#039;&#039;&#039; group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Configuration ===&lt;br /&gt;
&lt;br /&gt;
In this step, you configure the final pool settings:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Zpool name&#039;&#039;&#039; - Enter a unique name for the zpool for easy identification.&lt;br /&gt;
*&#039;&#039;&#039;autoTRIM&#039;&#039;&#039; - If supported by your devices, enable the AutoTRIM feature to reclaim unused space automatically. AutoTRIM helps optimize SSD performance and lifespan by notifying the controller when blocks are no longer in use.&lt;br /&gt;
*&#039;&#039;&#039;Initialize the zpool after creation&#039;&#039;&#039; - Writes patterns to unallocated space to avoid initial-write latency, especially in virtualized environments.&amp;amp;nbsp;The process may extend creation time and briefly affect performance.&lt;br /&gt;
&lt;br /&gt;
Proper configuration ensures that the Zpool is tailored to your needs and operates efficiently.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Resource encryption (optional) ===&lt;br /&gt;
&lt;br /&gt;
Encryption applies to datasets and zvols created in the ZFS pool. The zpool itself remains unencrypted.&lt;br /&gt;
&lt;br /&gt;
#Enable &#039;&#039;&#039;Configure encryption passphrase&#039;&#039;&#039;.&amp;amp;nbsp;&lt;br /&gt;
#Select a &#039;&#039;&#039;Default encryption method&#039;&#039;&#039;.&amp;amp;nbsp;&lt;br /&gt;
#Enter and confirm the passphrase.&lt;br /&gt;
&lt;br /&gt;
Proper configuration ensures that the Zpool is tailored to your needs and operates efficiently.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Note&#039;&#039;&#039;:&lt;br /&gt;
 • The passphrase cannot be recovered.&lt;br /&gt;
 • Encrypted resources inherit the passphrase unless changed later.&lt;br /&gt;
&lt;br /&gt;
=== Summary ===&lt;br /&gt;
&lt;br /&gt;
The summary page displays the complete zpool configuration before finalization. Click &#039;&#039;&#039;Add zpool&#039;&#039;&#039; to complete pool creation.&amp;amp;nbsp;The wizard will create the zpool with the selected configuration.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Remember&#039;&#039;&#039;:&lt;br /&gt;
 • Redundancy level cannot be changed after the ZFS pool is created.&lt;br /&gt;
 • Mixed disk sizes reduce usable capacity to the smallest disk in a vdev.&lt;br /&gt;
 • SSDs are recommended for write log, special devices, and deduplication groups.&lt;br /&gt;
 • Encryption passphrases cannot be recovered.&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=System_volume_upgrade&amp;diff=1700</id>
		<title>System volume upgrade</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=System_volume_upgrade&amp;diff=1700"/>
		<updated>2025-11-26T14:35:24Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: Created page with &amp;quot;__NOTOC__    &amp;#039;&amp;#039;&amp;#039;Note&amp;#039;&amp;#039;&amp;#039;: This upgrade improves system stability and performance but is irreversible. Pools upgraded to a 64 KB system volume volblocksize cannot be accessed by...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Note&#039;&#039;&#039;: This upgrade improves system stability and performance but is irreversible. Pools upgraded to a 64 KB system volume volblocksize cannot be accessed by older software versions. &lt;br /&gt;
&lt;br /&gt;
After installing a software version that supports the latest ZFS file system, you may be prompted to upgrade the system volume on each storage pool. This operation improves stability and performance by setting the system volume&#039;s volblocksize to &#039;&#039;&#039;64 KB&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Upgrade Notification ==&lt;br /&gt;
&lt;br /&gt;
When a pool uses an older system volume format, an information banner appears in the Storage view. It recommends performing the upgrade and specifies the required free space - &#039;&#039;&#039;8 GB&#039;&#039;&#039; on the pool. To begin, open the &#039;&#039;&#039;Options&#039;&#039;&#039; menu for the selected pool, then select &#039;&#039;&#039;&#039;Upgrade system volume&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Upgrade Process ==&lt;br /&gt;
&lt;br /&gt;
#A confirmation window appears, displaying a warning that the operation cannot be undone. For safety, please type the word &#039;&#039;&#039;&#039;upgrade&#039;&#039;&#039;&#039; into the confirmation field.&lt;br /&gt;
#Click &#039;&#039;&#039;Upgrade&#039;&#039;&#039; to start the process.&lt;br /&gt;
#A progress window is shown during the upgrade.&lt;br /&gt;
#When completed, a message appears indicating that the system volume has been upgraded to 64 KB. To finalize, you must &#039;&#039;&#039;export and import&#039;&#039;&#039; the pool or use the &#039;&#039;&#039;Move&#039;&#039;&#039; option if the pool belongs to a cluster.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== After Upgrade ==&lt;br /&gt;
&lt;br /&gt;
Once the system volume has been successfully updated, the pool’s status panel displays a message indicating that a &#039;&#039;&#039;pool export/import&#039;&#039;&#039; is required to complete the process. After performing this step, the system volume upgrade is fully applied.&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Active_Directory_(AD)_server_authentication&amp;diff=772</id>
		<title>Active Directory (AD) server authentication</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Active_Directory_(AD)_server_authentication&amp;diff=772"/>
		<updated>2025-11-26T13:31:33Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
This functionality is available in &#039;&#039;&#039;User Management &amp;gt; Share users/groups &amp;gt; Authorization protocols&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;To configure a connection to the existing Active Directory server:&lt;br /&gt;
&lt;br /&gt;
#Navigate to the&amp;amp;nbsp;&#039;&#039;&#039;User Management&amp;amp;nbsp;&#039;&#039;&#039;section in the left menu.&lt;br /&gt;
#Go to the &#039;&#039;&#039;Share users/groups&#039;&#039;&#039; tab.&lt;br /&gt;
#Find the &#039;&#039;&#039;Active Directory (AD) server authentication&#039;&#039;&#039; block.&lt;br /&gt;
#Enable the&amp;amp;nbsp;&#039;&#039;&#039;Enable protocol&#039;&#039;&#039;&amp;amp;nbsp;option.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AD server authentication status ==&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Connection&#039;&#039;&#039; - shows whether you are connected to an AD server or not.&lt;br /&gt;
*&#039;&#039;&#039;Users/groups list&#039;&#039;&#039; - shows when the lists of users and groups were last synchronized or if the synchronization is taking place at the moment.&lt;br /&gt;
&lt;br /&gt;
Users and groups are synchronized with an Active Directory server every 2 hours. Synchronization can also be started manually by using the &#039;&#039;&#039;Synchronize&#039;&#039;&#039;&amp;amp;nbsp;button.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AD server authentication settings ==&lt;br /&gt;
&lt;br /&gt;
To connect to the existing AD server, fill in the following fields with credentials provided by the AD server administrator and click the &#039;&#039;&#039;Apply&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Realm&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Administrator name&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;&amp;lt;br/&amp;gt;NOTE&#039;&#039;&#039;: Password cannot contain:&#039;&#039;&#039;&lt;br /&gt;
**special characters such as &#039; &amp;quot; ` ^ &amp;amp; $ # ~ [ ] \ / | *&amp;amp;nbsp;:&amp;amp;nbsp;? &amp;amp;lt; &amp;amp;gt;&lt;br /&gt;
**spaces&lt;br /&gt;
**less than 12 and more than 16 characters&lt;br /&gt;
*&#039;&#039;&#039;Organizational Unit (OU) - &#039;&#039;&#039;a direct path to the container where the Computer Organizational Unit is placed. The path must be entered starting from the primary container name within the domain structure. The container name set by default is &#039;&#039;&#039;Computers&#039;&#039;&#039;.&amp;amp;nbsp;If another container name is used instead, then &#039;&#039;&#039;Computers&#039;&#039;&#039; must be changed to the appropriate name. If the path to the container is nested, use a slash as the connector. In the screenshot below, the OU is in the &#039;&#039;&#039;Computers&#039;&#039;&#039; container that is nested in&amp;amp;nbsp;&#039;&#039;&#039;AllComputers &amp;gt; Marketing&#039;&#039;&#039;. In this example, the path to the OU is: &#039;&#039;&#039;AllComputers/Marketing/Computers&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;[[File:Ad-structure.png]]&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;NOTE&#039;&#039;&#039;: Container name can&#039;t contain:&#039;&#039;&#039;&lt;br /&gt;
**special characters such as , + &amp;quot; \ &amp;amp;lt; &amp;amp;gt;&amp;amp;nbsp;; = / #&lt;br /&gt;
**spaces&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div&amp;gt;&#039;&#039;&#039;The following reasons might prevent you from connecting to Active Directory:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
#Difference in time between Active Directory Server - if the time difference is greater than 5 minutes, the connection is not possible.&lt;br /&gt;
#The method of authenticating trusted domains - the authentication has to be set to two-way trust. Otherwise, it is not possible to read users and groups from trusted domains.&lt;br /&gt;
#DNS configuration - for an Active Directory domain, it is not possible to use a round-robin mechanism in DNS. This is connected to the fact that only one IP address is authorized. In a moment when another IP is obtained from DNS, the connection is not possible.&lt;br /&gt;
#The &#039;&#039;&#039;server name&#039;&#039;&#039; is the same as the Computer Organizational Unit (OU) named in the Active Directory (AD) server. If the object with the same name exists and the user that you use to log in to the AD server does not have permission to access this file, the connection will fail. The solution is to delete the existing computer object from the AD server. The following information explains how to delete the OU file:&lt;br /&gt;
&amp;lt;ul style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Log on to the Domain Controller with the domain administrator account. Press Windows Logo + R, enter &amp;quot;dsa.msc&amp;quot; and press Enter.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;In the &amp;quot;Active Directory Users and Computers&amp;quot; window, select the domain container in which the OU you are looking for is located.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Select the computer object and delete it.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&#039;&#039;&#039;Note&#039;&#039;&#039;: By default, any created Organizational Unit is protected from accidental deletion. To delete the OU, you need to clear the &amp;quot;Protect object from accidental deletion&amp;quot; checkbox, which you can find in the object properties in the &amp;quot;Object&amp;quot; tab. By deleting OU, you delete all nested objects that it contains as well.&lt;br /&gt;
:::&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Users and user groups management ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Management mode:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Scan single domain (default)&#039;&#039;&#039; - Using this function allows the user to obtain users and groups from the main domain only.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Scan all trusted domains&#039;&#039;&#039; - Using this function allows the user to obtain users and groups from the main and trusted domains.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;ID mapping backend:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;rid + tdb (default)&#039;&#039;&#039; - This option utilizes the rid backend for ID mapping to AD users. UID/GIDs range has to be entered manually The tdb backend is used when no other configuration is set. Recommended for large databases.Samba Wiki link for rid backend: [https://wiki.samba.org/index.php/Idmap_config_rid https://wiki.samba.org/index.php/Idmap_config_rid]&lt;br /&gt;
*&#039;&#039;&#039;ad (with RFC2307 schema) + tdb&#039;&#039;&#039; - Allows reading ID mappings from an AD server, provided that the uidNumber attributes for users and gidNumber attributes for groups were added in advance in the AD. This backend requires additional configuration of uidNumber and gidNumber on the AD server side. The tdb back end is used when no other configuration is set. Samba Wiki link for rid backend: [https://wiki.samba.org/index.php/Idmap_config_ad https://wiki.samba.org/index.php/Idmap_config_ad]&lt;br /&gt;
*&#039;&#039;&#039;autorid&#039;&#039;&#039; - Automatically configures the range to be used for each domain. The only configuration needed is the range of UID/GIDs used for user/group mappings and the number of IDs per domain. Samba Wiki link for autorid backend: [https://wiki.samba.org/index.php/Idmap_config_autorid https://wiki.samba.org/index.php/Idmap_config_autorid]&lt;br /&gt;
&lt;br /&gt;
 Autorid is not recommended in cluster environments.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;span style=&amp;quot;font-size:small&amp;quot;&amp;gt;The TDB UID/GIDs mapping does not work properly.&amp;lt;/span&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Single-Domain Environments&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div&amp;gt;It is recommended to use the &amp;quot;autorid&amp;quot; option in the &amp;quot;ID mapping backend&amp;quot; settings. Alternatively, you can use the &amp;quot;rid+tdb&amp;quot; option. If you choose &amp;quot;rid+tdb,&amp;quot; set the UID/GIDs mapping to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range (e.g., 2,000,000 to 2,999,999). The range 1,000,000 to 1,999,999 is reserved.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Multi-Domain Environments&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div&amp;gt;The &amp;quot;autorid&amp;quot; option cannot be used. Instead, use &amp;quot;rid+tdb&amp;quot; or &amp;quot;ad (with RFC2307 schema) + tdb.&amp;quot; Ensure the UID/GIDs mapping is set to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range for each domain (e.g., 2,000,000 to 2,999,999 for the first domain, 3,000,000 to 3,999,999 for the second domain, etc.).&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=NVMe-oF_Subsystems&amp;diff=1699</id>
		<title>NVMe-oF Subsystems</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=NVMe-oF_Subsystems&amp;diff=1699"/>
		<updated>2025-09-16T13:37:23Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: Redirected page to NVMe-oF Initiator&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT[[NVMe-oF Initiator]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=NVMe-oF_discover&amp;diff=1698</id>
		<title>NVMe-oF discover</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=NVMe-oF_discover&amp;diff=1698"/>
		<updated>2025-09-16T13:37:17Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: Redirected page to NVMe-oF Initiator&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT[[NVMe-oF Initiator]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=NVMe-oF_Subsystem_Connection_Problems&amp;diff=1697</id>
		<title>NVMe-oF Subsystem Connection Problems</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=NVMe-oF_Subsystem_Connection_Problems&amp;diff=1697"/>
		<updated>2025-09-16T13:37:12Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: Redirected page to NVMe-oF Initiator&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT[[NVMe-oF Initiator]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=NVMe-oF_Initiator&amp;diff=1696</id>
		<title>NVMe-oF Initiator</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=NVMe-oF_Initiator&amp;diff=1696"/>
		<updated>2025-09-16T13:36:55Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
The NVMe-oF (NVMe over Fabrics) initiator enables connections to external NVMe storage arrays (targets) via network protocols. This feature provides efficient and high-performance management of remote storage solutions, overcoming traditional cabling limitations by allowing substantial distances between servers and storage arrays.&lt;br /&gt;
&lt;br /&gt;
== Supported Protocols ==&lt;br /&gt;
&lt;br /&gt;
The software supports two principal NVMe-oF initiator protocols:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;TCP&#039;&#039;&#039; – A widely adopted protocol ensuring ease of implementation and compatibility with conventional networking infrastructure.&lt;br /&gt;
*&#039;&#039;&#039;RDMA&#039;&#039;&#039; – A protocol providing lower latency and higher performance, ideal for environments requiring exceptional throughput. RDMA requires specialized hardware, such as Mellanox/NVIDIA ConnectX or ATTO network interface cards, to fully utilize its capabilities.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Follow these steps to configure the NVMe-oF initiator:&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Start Discovery&#039;&#039;&#039;&lt;br /&gt;
#;Click the &amp;quot;&#039;&#039;&#039;Discover&#039;&#039;&#039;&amp;quot; button to start the discovery wizard.&lt;br /&gt;
#&#039;&#039;&#039;Enter Connection Details&#039;&#039;&#039;&lt;br /&gt;
#*&#039;&#039;&#039;Server IP&#039;&#039;&#039;: IP address of the NVMe storage target.&lt;br /&gt;
#*&#039;&#039;&#039;Server port&#039;&#039;&#039;: Network port for communication (&#039;&#039;&#039;default is 4420&#039;&#039;&#039;).&lt;br /&gt;
#*&#039;&#039;&#039;Server protocol&#039;&#039;&#039;: Choose between TCP and RDMA.&lt;br /&gt;
#*&#039;&#039;&#039;Advanced settings (optional)&#039;&#039;&#039;: Enable and specify the number of I/O queues. Leave blank or disabled to use the system default, or enter a specific number to override.&lt;br /&gt;
#:The number of I/O queues refers to the parallel channels through which data is transferred between the NVMe initiator and the target. Increasing this number can improve performance by enabling higher parallelism and reducing latency. However, each queue consumes system resources, and setting the number too high may exceed hardware or network capabilities, leading to connection issues. Adjust this value based on performance requirements and available resources.&lt;br /&gt;
#&#039;&#039;&#039;Proceed to Subsystems&#039;&#039;&#039;&lt;br /&gt;
#;Click &amp;quot;&#039;&#039;&#039;Next&#039;&#039;&#039;&amp;quot;. A list of available NVMe-oF subsystems will appear. Select the subsystems you want to connect to and click “&#039;&#039;&#039;Connect&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Manage Connection Paths ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Add a new path&#039;&#039;&#039;: Click the “&#039;&#039;&#039;Options&#039;&#039;&#039;” dropdown menu and select “&#039;&#039;&#039;Add path&#039;&#039;&#039;”. Enter the required connection details (Server IP, port, protocol, and optionally the number of I/O queues).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Disconnect a subsystem&#039;&#039;&#039;: Use the “&#039;&#039;&#039;Options&#039;&#039;&#039;” menu and select “&#039;&#039;&#039;Disconnect subsystem&#039;&#039;&#039;”.&lt;br /&gt;
&lt;br /&gt;
You can perform additional discoveries at any time to connect new subsystems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Practical Implementation ==&lt;br /&gt;
&lt;br /&gt;
After connecting to a subsystem, a list of available namespaces will be displayed, including:&lt;br /&gt;
&lt;br /&gt;
*Namespace ID&lt;br /&gt;
*Namespace capacity&lt;br /&gt;
*Namespace aliases&lt;br /&gt;
&lt;br /&gt;
Namespaces are sections of the NVMe controller on the storage array. They appear as independent NVMe disks to the server, can be identified by their alias, and are managed in the same manner as standard NVMe disks. Namespaces can be partitioned and added to storage pools.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: Only one partition per disk can be active within a single pool or data group to maintain redundancy and reliability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Multi-path Connectivity ==&lt;br /&gt;
&lt;br /&gt;
The initiator supports multi-path connectivity, allowing multiple redundant network paths to a single NVMe target. Each path requires a distinct IP address (Virtual IP) to ensure redundancy and high availability.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
If you encounter connection issues (e.g., “Could not connect to subsystem(s)” error), consider the following actions:&lt;br /&gt;
&lt;br /&gt;
#Check Network Connectivity:&lt;br /&gt;
#*Ensure that the server can ping the target’s IP address.&lt;br /&gt;
#*Verify that the correct port (default 4420) is open and not blocked by a firewall.&lt;br /&gt;
#Validate Target Configuration:&lt;br /&gt;
#*Verify that the NVMe target is online and properly configured to support NVMe over Fabrics (NVMe-oF) connections.&lt;br /&gt;
#*Ensure that access control lists (ACLs) or authentication settings on the target allow the initiator to establish a connection.&lt;br /&gt;
#Adjust I/O Queues:&lt;br /&gt;
#*If connection errors occur due to queue limits, try lowering the number of I/O queues in the advanced settings to match target capabilities.&lt;br /&gt;
#Use Alternative Paths:&lt;br /&gt;
#*If multiple network interfaces are available (typical in JBOD or HA environments), try using an alternative IP address or configure multi-path connectivity.&lt;br /&gt;
#Review Logs:&lt;br /&gt;
#*Check logs for detailed error messages that can guide further troubleshooting.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help_topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1206</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1206"/>
		<updated>2025-08-27T08:55:05Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===== &#039;&#039;Release Notes:&#039;&#039; =====&lt;br /&gt;
&lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
count = 1&lt;br /&gt;
mode = none&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div&amp;gt;[[Release Notes|All release notes »]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 50&lt;br /&gt;
count= 50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 100&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1205</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1205"/>
		<updated>2025-08-27T08:53:52Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===== &#039;&#039;Release Notes:&#039;&#039; =====&lt;br /&gt;
&lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
count = 1&lt;br /&gt;
mode = none&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;a href=&amp;quot;Release%20Notes&amp;quot;&amp;gt;All release notes »&amp;lt;/a&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 50&lt;br /&gt;
count= 50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 100&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1204</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1204"/>
		<updated>2025-08-27T08:53:10Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===== &#039;&#039;Release Notes:&#039;&#039; =====&lt;br /&gt;
&lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
count = 1&lt;br /&gt;
mode = none&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;amp;lt;a href=&amp;quot;Release%20Notes&amp;quot;&amp;amp;gt;All release notes&amp;amp;nbsp;»&amp;amp;lt;/a&amp;amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 50&lt;br /&gt;
count= 50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 100&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1203</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Main_Page&amp;diff=1203"/>
		<updated>2025-08-27T08:53:01Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h5&amp;gt; &amp;lt;i&amp;gt;Release Notes:&amp;lt;/i&amp;gt; &amp;lt;/h5&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;fck_mw_template&amp;quot;&amp;gt;{{fckLR#tag:DynamicPageList| fckLRcategory = Release Notes fckLRordermethod = categorysortkey fckLRorder = descendingfckLRcount = 1fckLRmode = nonefckLR}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;a href=&amp;quot;Release%20Notes&amp;quot;&amp;gt;All release notes »&amp;lt;/a&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;hr /&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;Help topics:&amp;lt;/b&amp;gt;&amp;lt;/i&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;fck_mw_template&amp;quot;&amp;gt;{{fckLR#tag:DynamicPageList|fckLRcategory = Help topicsfckLRcount=50fckLRordermethod = categorysortkey fckLRorder = ascendingfckLR}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;fck_mw_template&amp;quot;&amp;gt;{{fckLR#tag:DynamicPageList|fckLRcategory = Help topicsfckLRoffset = 50fckLRcount= 50fckLRordermethod = categorysortkey fckLRorder = ascendingfckLR}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td style=&amp;quot;vertical-align: top&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;fck_mw_template&amp;quot;&amp;gt;{{fckLR#tag:DynamicPageList|fckLRcategory = Help topicsfckLRoffset = 100fckLRcount=50fckLRordermethod = categorysortkey fckLRorder = ascendingfckLR}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;hr /&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;i&amp;gt;&amp;lt;b&amp;gt;ZFS and data storage articles:&amp;lt;/b&amp;gt;&amp;lt;/i&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;fck_mw_template&amp;quot;&amp;gt;{{fckLR#tag:DynamicPageList|fckLRcategory = ZFS and data storage articlesfckLRcount=60fckLRordermethod = categorysortkey fckLRorder = ascendingfckLR}}&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/table&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Scale_Logic_ZX_ver.1.0_up32_Release_Notes&amp;diff=1694</id>
		<title>Scale Logic ZX ver.1.0 up32 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Scale_Logic_ZX_ver.1.0_up32_Release_Notes&amp;diff=1694"/>
		<updated>2025-08-27T08:52:51Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2025-07-23&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 61683&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== NVMe over Fabrics (NVMe-oF) Initiator with Multipath I/O functionality. ===&lt;br /&gt;
&lt;br /&gt;
=== Partition labeling for NVMe Drives. ===&lt;br /&gt;
&lt;br /&gt;
=== VMware VAAI support for NFS protocol. ===&lt;br /&gt;
&lt;br /&gt;
=== Storage Pool initialization feature. ===&lt;br /&gt;
&lt;br /&gt;
=== Power button settings available in Console tools -&amp;gt; Add-ons. ===&lt;br /&gt;
&lt;br /&gt;
=== Configurable TRIM support for thick-provisioned zvols. ===&lt;br /&gt;
&lt;br /&gt;
=== Network statistics for bonded RDMA interfaces available in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
=== Display of support license information in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Linux kernel (v5.15.179). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM573xx and Broadcom BCM574xx controllers driver (bnxt_en, v1.10.3-232.0.155.5). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 100GbE Network Controller driver (ice, v1.14.13). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE Network Controller driver (i40e, v2.25.11). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10GbE Network Controller driver (ixgbe, v5.20.10). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE Network Controller driver (igb, v5.16.11). ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.3). ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox firmware update driver (mft, v4.31.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA 9600-16e 12Gb Tri-Mode Storage Adapter driver (mpi3mr, v8.12.1.0.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA Adapter driver (mpt3sas, v52.00.00.00). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.731.01.00). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 24Gb/s GT HBA Adapter driver (esas6hba, v1.01.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s GT HBA Adapter driver (esas5hba, v1.09.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s HBA Adapter driver (esas4hba, v1.55.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v2.11.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 8Gb Fibre Channel Adapter driver (celerity8fc, v2.28.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartHBA and SmartRAID Adapter driver (smartpqi, v2.1.32-035). ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool v4.23. ===&lt;br /&gt;
&lt;br /&gt;
=== Open-iSCSI Initiator (open-iscsi, v2.1.10). ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== The system clock and IPMI time are not synchronized. ===&lt;br /&gt;
&lt;br /&gt;
=== The SED feature does not work simultaneously with Samsung and Micron drives on the same system. ===&lt;br /&gt;
&lt;br /&gt;
=== The Replacement drive status is not cleared from the WebGUI after the replacement is complete. ===&lt;br /&gt;
&lt;br /&gt;
=== Unexpected pool move to another cluster node after starting the HA Cluster. ===&lt;br /&gt;
&lt;br /&gt;
=== Details of VMware datastores list are not retrieved from VMware vCenter/vSphere and not shown in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
=== Storage sizes exceeding 1PB are not displayed correctly on the WebGUI and system console. ===&lt;br /&gt;
&lt;br /&gt;
=== (SU90917): Vulnerability due to enabled NTP mode 6 queries. ===&lt;br /&gt;
&lt;br /&gt;
=== (SU90998): Workgroup name containing &amp;quot;_&amp;quot; character is not accepted during AD server authentication. ===&lt;br /&gt;
&lt;br /&gt;
=== Rollback performed on a mounted dataset causes I/O blocking. ===&lt;br /&gt;
&lt;br /&gt;
=== Samba with Active Directory round-robin configuration causes unstable behavior. ===&lt;br /&gt;
&lt;br /&gt;
=== Changing the HTTPS port does not update the automatic redirection from HTTP port 80. ===&lt;br /&gt;
&lt;br /&gt;
=== Removing disks from pools created before enabling Multipath I/O fails. ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for ZX HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use the sync=always option for zvols and datasets in a cluster ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in a separate subnetwork ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run the Scrub scanner after a failover triggered by a power failure (dirty system close) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use a UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use static discovery in all iSCSI initiators ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to change any settings while the nodes are running different ZX versions, for example during a software update ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different Server names for cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work properly with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
=== HA cluster does not work stably with the ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== FC Target HA cluster does not support Persistent Reservation synchronization, so it cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the Broadcom SAS vendor. ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;*Please consult the Broadcom vendor for the specific firmware suitable for your hardware setup.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Setting write cache sync requests (sync) to “always” for a zvol is the safest option and is the default. However, it can decrease write performance, since all operations are written and flushed directly to the persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the synchronous data record is enabled by default. This option makes performance worse, but data is written safely. To improve NFS performance, you can use the asynchronous data record, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
Using physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when using virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk&amp;amp;nbsp;: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID&amp;amp;nbsp;: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming the deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012 by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - In the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load in the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, there is no way to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, use (if possible) a single zvol on zpools dedicated to deduplication, and delete the zpool that contains the single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 64KB volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 128KB volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 1MB volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.67B / 1024 / 1024 / 1024 = 1.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 1.67GB of RAM.&lt;br /&gt;
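The RAM formula above can be sketched in a few lines of Python (an illustration only; the 320B DDT entry size, the 0.75 ARC share, and the 0.25 DDT share are the values quoted in the text):

```python
def dedup_ram_bytes(zvol_size, block_size, ddt_entry=320,
                    arc_share=0.75, ddt_share=0.25):
    """Worst-case RAM for the dedup table: one DDT entry per unique block,
    divided by the ARC share of RAM and the DDT share of the ARC."""
    return zvol_size / block_size * ddt_entry / arc_share / ddt_share

TIB = 1024 ** 4  # 1099511627776 bytes
GIB = 1024 ** 3

# The three examples from the text: 1TB of data at 64KB, 128KB, and 1MB block sizes.
for bs in (64 * 1024, 128 * 1024, 1024 * 1024):
    gb = dedup_ram_bytes(TIB, bs) / GIB
    print(f"{bs // 1024:>5} KB block size: {gb:.2f} GB RAM per TB of data")
```

Running this reproduces the 26.67GB, 13.33GB, and 1.67GB figures worked out above.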
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, when data is completely unique and will not be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based read cache is present, part of the deduplication table will be moved to the SSD and deduplication will perform well using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the client file system&#039;s format block size with the zvol volume block size. A simple example is a Windows NTFS file system with the default 4k format block size on a zvol with the default 128k volume block size. With these defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol keeps the default 128k volume block size, a deduplication match can fail only once, because a file can be aligned at only 2 (128/64) positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To have all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must also be 64k. NTFS=32k with zvol=32k also works, but in that case the deduplication table is twice as large. That is why NTFS=64k with zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
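The alignment argument above can be checked with a few lines. The position counts follow directly from the block-size ratios quoted in the text; the function name is made up for illustration:

```python
# Number of distinct positions a client-FS block can occupy inside a zvol
# block; deduplication only matches when two copies share the same alignment.

def alignment_positions(zvol_block_kb, client_block_kb):
    assert zvol_block_kb % client_block_kb == 0
    return zvol_block_kb // client_block_kb

print(alignment_positions(128, 4))    # NTFS 4k on a 128k zvol  -> 32 positions
print(alignment_positions(128, 64))   # NTFS 64k on a 128k zvol -> 2 positions
print(alignment_positions(64, 64))    # matched 64k/64k         -> 1 position
```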
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because ZFS aligns the data blocks natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by it. Then copy new data and run the scrub again; it will report the new physical data space. Comparing the data size seen from the storage client side with the growth of physical data space reported by the scrub gives the deduplication advantage. The exact pool deduplication ratio can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of more than 60 targets, the GUI will not display correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. Clear those disks before installation using the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from a temporary medium such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. In order to reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB used with deduplication or Read Cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
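The three worked examples reduce to one expression. The constants (4718592B reserved size and labels, 432B per l2hdr structure) are the ones quoted above; the helper name is illustrative:

```python
# RAM consumed by L2ARC headers for a given Read Cache size and block size,
# following the formula above; the constants are taken from the notes.

def l2arc_ram_bytes(cache_bytes, block_bytes,
                    reserved_bytes=4718592, l2hdr_bytes=432):
    return (cache_bytes - reserved_bytes) * l2hdr_bytes // block_bytes

TIB = 1024 ** 4
for kb in (8, 64, 128):
    ram = l2arc_ram_bytes(TIB, kb * 1024)
    print(f"{kb}KB blocks: {ram / 1024 ** 3:.2f} GB")
```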
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks which were part of the Zpool become immediately available. Before you can reuse disks previously used as a Spare or a Read Cache, you must first clean them with the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider Configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded dataset quota does not allow files to be removed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic ZX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic ZX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name changes, e.g. during export and import under a different name, the FSIDs for NFS shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Due to technical reasons, the High Availability shared storage cluster does not work properly when using Infiniband controllers for the VIP interface configuration. This limitation will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpectedly long failover time, especially in an HA cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover mechanism moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA cluster. If all pools are active on a single node and a failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V clusters. In some environments under heavy load, cluster resource switching may also take too long. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using Windows, to increase iSCSI initiator timeout, please perform following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run the regedit tool and find the &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime&#039;&#039; registry key&lt;br /&gt;
&lt;br /&gt;
2. Change the value of the key from the default 60 sec to 600 sec (decimal)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using VMware, to increase iSCSI initiator timeout, please perform following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change value from default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;In case of using XenServer, to increase iSCSI initiator timeout, please perform following steps:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach SRs. This will update the new iSCSI timeout settings for the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.&lt;br /&gt;
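The iscsid.conf edit in the steps above is a one-line substitution; a minimal sketch of it follows. The parameter name comes from the notes, while the helper function itself is hypothetical, not a supported tool:

```python
# Rewrite node.session.timeo.replacement_timeout in iscsid.conf-style text.
# The key name is taken from the release notes; this helper only illustrates
# the manual edit described above.
import re

def set_replacement_timeout(conf_text, seconds):
    pattern = r"^(node\.session\.timeo\.replacement_timeout\s*=\s*)\d+"
    return re.sub(pattern, lambda m: m.group(1) + str(seconds),
                  conf_text, flags=re.MULTILINE)

conf = "node.session.timeo.replacement_timeout = 120\n"
print(set_replacement_timeout(conf, 600), end="")
# node.session.timeo.replacement_timeout = 600
```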
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after the update to up11, the system may require re-activation. This issue will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using Scale Logic ZX as Hyper-V or VMware guest, bonding ALB, Round-Robin and Round-Robin with RDMA is not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause VMware snapshot deletion to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on dataset can cause file transfer interrupt ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Enable the quota on the dataset before using it in a production environment, or make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If ZX nodes are connected to the same AD server, they cannot have the same Server names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the Pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Downgrading to an earlier version is not possible; if you need to go back to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation like reboot, shutdown or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in and add it once again to the SW RAID.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created by the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, a simultaneous power cycle of both nodes may lead to a situation where VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown, or Normal, Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. A second option is to restart the Failover in ZX’s GUI. Refresh vSphere’s Adapters and Devices tabs afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of disk failure, remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from ZX up24 to ZX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after one node is updated from ZX up24 to ZX up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation is finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of ZX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of ZX up25, the default value of the Write Cache Log bias for zvols was “In Pool (Throughput)”. In the final release of ZX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which is a common factor in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, the error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure the FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of ZX up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the ZX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased FIO/WT performance can be observed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol in FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or an FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held in the system page cache and require manual flushing (in the GUI, use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the ZX cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference between the FC configurations in up24 and earlier versions and those in later versions. The earlier versions allowed FC ports to be configured in initiator mode only, while later versions allow both target and initiator modes, with target as the default. Therefore, if storage is connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from ZX up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The process of updating a Metro Cluster node from ZX up26 or earlier changes the NVMe disk IDs. As a consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). To bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+X -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk, and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear under Unassigned disks at the bottom of the page. Open the Disk groups tab of the pool, press the Add group button, and select Add read cache. The missing disk should now be available to select as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long time of a failover procedure in case of Xen client with iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. For such an environment, we recommend adding the following entry to the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
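&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; On the Xen client side, after editing /etc/multipath.conf, the multipath maps can usually be reloaded without a reboot (a sketch; verify the exact command against your distribution&#039;s multipath-tools documentation):&lt;br /&gt;
&amp;lt;pre&amp;gt;multipath -r&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;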
=== In case of large number of disks, zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, detach the iSCSI or FC target, perform the rollback operation, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from the share access list after changing the username on the AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed, ZX must be informed about it. Using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on ZX leads to a situation where the changed user gets deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset and the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The special characters #&amp;amp;nbsp;: + cannot be used in a password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in the LSI Storage Authority Software does not work with SMTP servers that require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on one NFS access list and you would like to move it to another access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Then edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load may increase (especially waiting I/O) and cause unstable operation. Expanding the pool size is recommended.&lt;br /&gt;
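&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Where console access is available, the current pool capacity can be verified with the standard ZFS tools (a sketch; Pool-0 is an example pool name):&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool list -o name,size,alloc,free,cap Pool-0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The CAP column shows the percentage of used space; keeping it below 80% avoids this issue.&lt;br /&gt;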
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the system shows the old values taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes or small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down ZX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, which ZX interprets as running on battery. When it times out, it shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is to change the booting mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to ZX up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we disable the write-back cache on all HDD disks by default, but we do not disable it on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it is enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+X, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to the data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, then restarting or disconnecting the JBOD can lead to data inconsistency. Starting from ZX up29, we disable the write-back cache on HDD disks by default during the boot-up procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, populating the list in the WebGUI may take from a few minutes up to several dozen minutes.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can behave unstably when copying data to storage. This compression algorithm should be used only in environments with very efficient processors.&lt;br /&gt;
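&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Where console access is available, a lighter compression algorithm such as lz4 can be set with the standard ZFS tools (a sketch; the pool and dataset names are examples):&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs set compression=lz4 Pool-0/data&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Note that the new setting applies only to newly written data; existing blocks keep the compression they were written with.&lt;br /&gt;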
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If more than 500 zvols are used in the system, the responsiveness of the Web-GUI may be low and the system may have problems with the import of zpools.&lt;br /&gt;
&lt;br /&gt;
=== It is recommended to use Fibre Channel groups in Fibre Channel Target HA Cluster environments that use the Fibre Channel switches. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When Fibre Channel switches are used in FC Target HA Cluster environments, it is recommended to use only Fibre Channel groups (using the Fibre Channel Public group is not recommended).&lt;br /&gt;
&lt;br /&gt;
=== Manual export and import of zpool in the system or deactivation of the Fibre Channel group without first suspending or turning off the virtual machines on the VMware ESXi side may cause loss of access to the data by VMware ESXi. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before a manual export and import of a zpool in the system or deactivation of the Fibre Channel group in Fibre Channel Target HA Cluster environment, you must suspend or turn off the virtual machines on the VMware ESXi side. Otherwise, the VMware ESXi may lose access to the data, and restarting it will be necessary.&lt;br /&gt;
&lt;br /&gt;
=== In Fibre Channel Target HA Cluster environments the VMware ESXi 6.7 must be used instead of VMware ESXi 7.0. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If VMware ESXi 7.0 is used in a Fibre Channel Target HA Cluster environment, restarting one of the cluster nodes may cause the Fibre Channel paths to report a dead state.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes cluster nodes hang during boot of Scale Logic ZX. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If one of the cluster nodes hangs during Scale Logic ZX boot, it must be manually restarted.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes, when using IPMI hardware solutions, the cluster node may be restarted again by the IPMI watchdog ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, it is recommended to wait 5 minutes before turning the cluster node back on after it has been turned off.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes restarting one of the cluster nodes may cause some disks to be missing in the zpool configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, click the “Rescan storage” button in the WebGUI to solve the problem.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of ZX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of ZX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI: open the ZX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters. Select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of ZX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
&lt;br /&gt;
=== In case of no local storage disks in any Non-Shared storage HA Cluster node, the remote disks mirroring path connection status shows incorrect state: Disconnected. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; By design, each cluster node in a Non-Shared storage HA Cluster must have at least one local storage disk before the remote disk mirroring path connection is created.&lt;br /&gt;
&lt;br /&gt;
=== In some environments, when using RDMA for the remote disks mirroring path, shutting down one of the cluster nodes may cause it to restart instead of shutting down. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In some environments, when RDMA is used for the remote disks mirroring path, shutting down one of the cluster nodes may cause it to restart instead of shutting down.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the ATTO Fibre Channel Target in the HA cluster environment. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If the ATTO Fibre Channel Target is used in an HA Cluster environment, after a power cycle of one of the cluster nodes the Fibre Channel paths report a dead state. To restore the correct status of these Fibre Channel paths, the VMware server must be restarted.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In addition, if the ATTO Fibre Channel Target is used in an HA cluster environment, restarting the cluster node with both zpools imported in the system causes the second cluster node to be unexpectedly restarted.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;Therefore, using the ATTO Fibre Channel Target in an HA cluster environment is not recommended.&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic ZX can be used only with drives that have a verified SED configuration.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check whether a given device is supported, see the HCL available on the Scale Logic webpage.&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on the zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If load or iowait increases drastically after enabling the autotrim functionality on the zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand and at a convenient time (e.g. when the system is working under less load).&lt;br /&gt;
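&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Where console access is available, a manual trim can be started and monitored with the standard ZFS tools (a sketch; Pool-0 is an example pool name):&lt;br /&gt;
&amp;lt;pre&amp;gt;zpool trim Pool-0&lt;br /&gt;
zpool status -t Pool-0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;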
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In order to provide stable work with RDMA we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts display data incorrectly for systems using Active-Backup bonding with RDMA. The charts reflect the usage of only one network interface included in the Active-Backup bond (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it  manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in a combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - settings of the combo box manage the second virtual disk. &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If a given subdomain is unavailable, information about its status will not be updated in the WebGUI (even when the refresh button is pressed).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it is recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #104059 --&amp;gt;SAS Multipath configuration is not supported in the Non-Shared Storage Cluster. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the Non-Shared Storage Cluster, the SAS Multipath configuration is not supported at all. In this scenario, all disks need to be connected through one path only. When a JBOD configuration with disks connected through a pair of SAS cables is used, one of the cables must be disconnected.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, using the “Rescan” button in the Storage tab in the WebGUI may result in the “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic ZX (the network interface is usually: eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115494 --&amp;gt;Resilver progress bar in the HA Non-shared Cluster Storage environment may show values over 100%. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When the HA Non-Shared storage cluster is used with compression and deduplication enabled, the resilver progress bar in the WebGUI may display values exceeding 100%.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
=== The TDB UID/GIDs mapping does not function properly. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Workarounds:&lt;br /&gt;
&lt;br /&gt;
*Single-Domain Environments:&lt;br /&gt;
**Use the &amp;quot;autorid&amp;quot; option in the &amp;quot;ID mapping backend&amp;quot; settings.&lt;br /&gt;
**Alternatively, use &amp;quot;rid+tdb&amp;quot;:&lt;br /&gt;
**#Connect to the domain.&lt;br /&gt;
**#Navigate to the “Accessed domains” section.&lt;br /&gt;
**#Click the “Edit domain settings” button.&lt;br /&gt;
**#Set the UID/GID mapping to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range (e.g., 2,000,000 to 2,999,999).&lt;br /&gt;
&lt;br /&gt;
Note: The range 1,000,000 to 1,999,999 is reserved.&lt;br /&gt;
&lt;br /&gt;
*Multi-Domain Environments:&lt;br /&gt;
**The &amp;quot;autorid&amp;quot; option is not supported. Use one of the following:&lt;br /&gt;
**#&amp;quot;rid+tdb&amp;quot;&lt;br /&gt;
**#&amp;quot;ad (with RFC2307 schema) + tdb&amp;quot;&lt;br /&gt;
**Steps for configuration:&lt;br /&gt;
&amp;lt;ol style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Connect to the domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Navigate to the “Accessed domains” section.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Click the “Edit domain settings” button for each domain.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Set the UID/GIDs mapping to &amp;quot;rid&amp;quot; for all domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Define unique Min ID and Max ID ranges for each domain (e.g., 2,000,000 to 2,999,999 for the first domain, 3,000,000 to 3,999,999 for the second domain, etc.).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
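The per-domain Min/Max ID ranges configured above are what the rid backend uses to derive Unix UIDs from Windows RIDs. A minimal sketch of that arithmetic follows (the formula is the documented Samba rid mapping with the default base RID of 0; the domain ranges and RID values here are illustrative only, not taken from this system):

```shell
#!/bin/sh
# Sketch of how the Samba "rid" backend maps a Windows RID to a Unix UID:
#   uid = low_id + rid   (with the default base_rid = 0)
# where [low_id, high_id] is the per-domain range. Ranges must not overlap
# between domains, and 1,000,000-1,999,999 is reserved by the system.

map_rid() { # usage: map_rid LOW_ID HIGH_ID RID
  low=$1; high=$2; rid=$3
  uid=$((low + rid))
  if [ "$uid" -gt "$high" ]; then
    echo "RID $rid falls outside the range $low-$high" >&2
    return 1
  fi
  echo "$uid"
}

# First domain:  2,000,000-2,999,999; second domain: 3,000,000-3,999,999
map_rid 2000000 2999999 1105   # -> 2001105
map_rid 3000000 3999999 1105   # -> 3001105
```

Keeping the ranges disjoint (and outside the reserved 1,000,000-1,999,999 block) guarantees that users from different domains can never collide on the same UID.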
&lt;br /&gt;
=== No Warning for Duplicate IP Addresses on Network Interfaces ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; No warning or error message is displayed if two network interfaces are configured with the same IP address. This can lead to network conflicts or connectivity issues. Users must manually verify configurations to avoid duplicates.&lt;br /&gt;
&lt;br /&gt;
=== No LED Management for aacraid Storage Controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; LED management is no longer supported for storage controllers using the aacraid driver, aligning with the manufacturer’s decision to discontinue these controllers. Users depending on LED indicators should explore alternative monitoring solutions or consider upgrading to supported hardware.&lt;br /&gt;
&lt;br /&gt;
=== LED Blinking Not Functional on NVMe Drives in Supermicro X12 Servers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; On Supermicro X12 servers, LED blinking functionality for NVMe drives is not operational. Users should rely on alternative methods to identify and manage drives.&lt;br /&gt;
&lt;br /&gt;
=== Web Server Settings in Maxview Storage Manager Not Preserved After Restart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Changes made to the Web server settings in Maxview Storage Manager revert to default values after a server restart. Custom configurations are lost upon reboot. This issue will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Unnecessary dmesg Entries After Zpool Export/Import ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Following a zpool export and import, dmesg may show entries such as &amp;quot;debugfs: Directory &#039;zdX&#039; with parent &#039;block&#039; already present!&amp;quot; While these entries do not affect functionality, they will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Discontinued IDE Disk Support in Scale Logic ZX Up31 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In Scale Logic ZX Up31, IDE disk support has been removed. Older servers or virtual machines relying on IDE disks may experience compatibility issues or failures. We recommend migrating to supported storage solutions to avoid disruptions. Future releases will not reintroduce IDE disk support.&lt;br /&gt;
&lt;br /&gt;
=== Consider Reducing Volume Block Size to 16KB for High Random Workloads ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; For workloads with high levels of random I/O, reducing the iSCSI volume block size to 16KB can improve performance. Users experiencing performance challenges with random workloads should consider this tuning option.&lt;br /&gt;
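As a hedged sketch of the tuning above: a zvol's block size is set at creation time via the ZFS volblocksize property and cannot be changed afterwards. The pool and zvol names below are placeholders:

```shell
# Create a 100G zvol with a 16K block size for high random-I/O workloads
# (names and size are illustrative; adjust to your pool layout).
zfs create -V 100G -o volblocksize=16K Pool-0/vol-random-io

# Confirm the property took effect:
zfs get volblocksize Pool-0/vol-random-io
```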
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=JBODs_%26_JBOFs&amp;diff=1609</id>
		<title>JBODs &amp; JBOFs</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=JBODs_%26_JBOFs&amp;diff=1609"/>
		<updated>2025-05-29T11:12:06Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This functionality is available in the &#039;&#039;&#039;Storage Settings &amp;gt; JBODs &amp;amp; JBOFs&#039;&#039;&#039; tab&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It’s used to obtain more information about the disks in a JBOD or JBOF by using external services, e.g. Redfish. For this reason, it is dedicated to disk enclosures with out-of-band management.&amp;lt;br/&amp;gt;&#039;&#039;&#039;The functionality only works on currently supported devices such as the SUPERMICRO SSG-136R-N32JBF.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the case of SUPERMICRO SSG-136R-N32JBF, the Redfish service is used to gain more information about disks, so an account with this service will be needed. To link an enclosure to the service, click on the &amp;quot;&#039;&#039;&#039;Add device&#039;&#039;&#039;&amp;quot; button. A pop-up with a form will appear. Fill in the form.&lt;br /&gt;
&lt;br /&gt;
To link a device through the service, the following information must be provided:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Name (alias)&#039;&#039;&#039; - Set a name for the enclosure that allows the device to be recognized when there are several machines of the same model.&lt;br /&gt;
*&#039;&#039;&#039;IP address / domain&#039;&#039;&#039; - The domain name or IP address of the device on the network.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039; - Enter the number of the port used to communicate with the device through the Redfish service. The default port number is 443. Change as needed.&lt;br /&gt;
*&#039;&#039;&#039;Username&#039;&#039;&#039; - Enter the username to the Redfish service.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039; - Enter the password associated with the username entered above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After filling in every field, click the &amp;quot;&#039;&#039;&#039;Add&#039;&#039;&#039;&amp;quot; button. The system will then connect to the service and start scanning all the available disks. This may take some time: the more disks there are in an enclosure, the longer the scan takes. After all the disks are scanned, the information will be available in the disk details section. Additional data such as:&lt;br /&gt;
&lt;br /&gt;
*Name of the enclosure in which the disk is located,&lt;br /&gt;
*Number of the slot in which the disk is located&lt;br /&gt;
&lt;br /&gt;
will also be displayed elsewhere in the WebGUI (e.g., in the pool&#039;s disk groups, the pool wizard, etc.).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE!&#039;&#039;&#039; When the connection status changes, a rescan of all disks is required. This occurs, e.g.:&lt;br /&gt;
&lt;br /&gt;
*When the device configuration changes,&lt;br /&gt;
*When the system is restarted,&lt;br /&gt;
*After network reconnection (when the connection has been lost), etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The connection status of the enclosure is displayed in the table in the &amp;quot;&#039;&#039;&#039;JBODs &amp;amp; JBOFs&#039;&#039;&#039;&amp;quot; tab at all times. Next to the connection status there’s a power state displayed that shows if the device is turned on.&amp;lt;br/&amp;gt;Every device that has been added can be edited, removed from the table, or selected to display its details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To do any of the above, use the context menu.&amp;lt;br/&amp;gt;The “&#039;&#039;&#039;Edit&#039;&#039;&#039;” option allows changing the device’s data or credentials.&amp;lt;br/&amp;gt;The “&#039;&#039;&#039;Details&#039;&#039;&#039;” option shows more information about an enclosure such as:&lt;br /&gt;
&lt;br /&gt;
*Name (alias)&lt;br /&gt;
*Model&lt;br /&gt;
*Vendor name&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The “&#039;&#039;&#039;Remove&#039;&#039;&#039;” option removes a device from the table. Removing a device disconnects it from the external service. After removal, any additional information provided by the service will no longer be displayed. In some cases, the option to turn on the LED for disks in the JBOD/JBOF may also become disabled.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Snapshots&amp;diff=1640</id>
		<title>Snapshots</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Snapshots&amp;diff=1640"/>
		<updated>2024-01-03T11:58:22Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: Created page with &amp;quot;&amp;lt;div&amp;gt;&amp;#039;&amp;#039;&amp;#039;This functionality is available in:&amp;amp;nbsp; Storage &amp;gt; PoolName &amp;gt; Snapshots tab&amp;#039;&amp;#039;&amp;#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;This section allows you to manage snapshots available in the zpool. The ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;This functionality is available in:&amp;amp;nbsp; Storage &amp;gt; PoolName &amp;gt; Snapshots tab&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;This section allows you to manage snapshots available in the zpool. The table contains a list of all existing snapshots, which you can sort by name or creation date and time by clicking on the respective column header.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;You can manage a single snapshot using the context menu, which offers the following options:&lt;br /&gt;
*Delete&lt;br /&gt;
*Snapshot details&lt;br /&gt;
*Clone&lt;br /&gt;
*Rollback&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Additionally, you can filter snapshots from a specific resource. Click the &#039;Select resource&#039; button above the table to open a popup with a list of resources. The resources are divided into two tabs: the first tab displays a list of zvols, and the second tab shows a dataset list. Select the resource you want to display snapshots for and click &#039;Apply&#039; to see a list of all snapshots from the selected resource.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To view all snapshots again, use the &#039;Clear filter&#039; button.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;While a single resource is selected, you can manually add a new snapshot to that resource using the &#039;Add new snapshot&#039; button located in the right corner above the table. Note that this button is active only when snapshots from one resource are listed.&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Active_Directory_(AD)_server_authentication&amp;diff=769</id>
		<title>Active Directory (AD) server authentication</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Active_Directory_(AD)_server_authentication&amp;diff=769"/>
		<updated>2024-01-03T11:50:20Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
This functionality is available in &#039;&#039;&#039;User Management &amp;gt; Share users/groups &amp;gt; Authorization protocols&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;To configure a connection to the existing Active Directory server:&lt;br /&gt;
&lt;br /&gt;
#Navigate to the&amp;amp;nbsp;&#039;&#039;&#039;User Management&amp;amp;nbsp;&#039;&#039;&#039;section in the left menu.&lt;br /&gt;
#Go to the &#039;&#039;&#039;Share users/groups&#039;&#039;&#039; tab.&lt;br /&gt;
#Find the &#039;&#039;&#039;Active Directory (AD) server authentication&#039;&#039;&#039; block.&lt;br /&gt;
#Enable the&amp;amp;nbsp;&#039;&#039;&#039;Enable protocol&#039;&#039;&#039;&amp;amp;nbsp;option.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AD server authentication status ==&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Connection&#039;&#039;&#039; - shows whether you are connected to an AD server or not.&lt;br /&gt;
*&#039;&#039;&#039;Users/groups list&#039;&#039;&#039; - shows when the lists of users and groups were last synchronized or if the synchronization is taking place at the moment.&lt;br /&gt;
&lt;br /&gt;
Users and groups are synchronized with an Active Directory server every 2 hours. Synchronization can also be started manually by using the &#039;&#039;&#039;Synchronize&#039;&#039;&#039;&amp;amp;nbsp;button.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AD server authentication settings ==&lt;br /&gt;
&lt;br /&gt;
To connect to the existing AD server, fill in the following fields with credentials provided by the AD server administrator and click the &#039;&#039;&#039;Apply&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Realm&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Administrator name&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;&amp;lt;br/&amp;gt;&#039;&#039;&#039;NOTE: Password cannot contain:&#039;&#039;&#039;&lt;br /&gt;
**special characters such as &#039; &amp;quot; ` ^ &amp;amp; $ # ~ [ ] \ / | *&amp;amp;nbsp;:&amp;amp;nbsp;? &amp;amp;lt; &amp;amp;gt;&lt;br /&gt;
**spaces&lt;br /&gt;
**fewer than 12 or more than 16 characters&lt;br /&gt;
*&#039;&#039;&#039;Organizational Unit (OU)&#039;&#039;&#039; - a direct path to the container where the Computer Organizational Unit is placed. The path must be entered starting from the primary container name within the domain structure. The container name set by default is &#039;&#039;&#039;Computers&#039;&#039;&#039;.&amp;amp;nbsp;If another container name is used instead, then &#039;&#039;&#039;Computers&#039;&#039;&#039; must be changed to the appropriate name. If the path to the container is nested, use a slash as the separator. In the screenshot below, the OU is in the &#039;&#039;&#039;Computers&#039;&#039;&#039; container that is nested in&amp;amp;nbsp;&#039;&#039;&#039;AllComputers &amp;gt; Marketing&#039;&#039;&#039;. In this example, the path to the OU is: &#039;&#039;&#039;AllComputers/Marketing/Computers&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;[[File:Ad-structure.png]]&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&#039;&#039;&#039;NOTE: Container name can&#039;t contain:&#039;&#039;&#039;&lt;br /&gt;
**special characters such as , + &amp;quot; \ &amp;amp;lt; &amp;amp;gt;&amp;amp;nbsp;; = / #&lt;br /&gt;
**spaces&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div&amp;gt;&#039;&#039;&#039;The following reasons might prevent you from connecting to Active Directory:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
#Time difference between the system and the Active Directory server - if the time difference is greater than 5 minutes, the connection is not possible.&lt;br /&gt;
#The method of authenticating trusted domains - the authentication has to be set to two-way trust. Otherwise, it is not possible to read users and groups from trusted domains.&lt;br /&gt;
#DNS configuration - for an Active Directory domain, it is not possible to use a round-robin mechanism in DNS, because only one IP address is authorized. If a different IP address is obtained from DNS, the connection is not possible.&lt;br /&gt;
#The &#039;&#039;&#039;server name&#039;&#039;&#039; is the same as the Computer Organizational Unit (OU) name in the Active Directory (AD) server. If an object with the same name exists and the user that you use to log in to the AD server does not have permission to access this object, the connection will fail. The solution is to delete the existing computer object from the AD server. The following information explains how to delete it:&lt;br /&gt;
&amp;lt;ul style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Log on to the Domain Controller with the domain administrator account. Press Windows Logo + R, enter &amp;quot;dsa.msc&amp;quot; and press Enter.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;In the &amp;quot;Active Directory Users and Computers&amp;quot; window, select the domain container in which the OU you are looking for is located.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Select the computer object and delete it.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&#039;&#039;&#039;Note&#039;&#039;&#039;: By default, any created Organizational Unit is protected from accidental deletion. To delete the OU, you need to clear the &amp;quot;Protect object from accidental deletion&amp;quot; checkbox, which you can find in the object properties in the &amp;quot;Object&amp;quot; tab. Deleting an OU also deletes all nested objects it contains.&lt;br /&gt;
:::&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Users and user groups management ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Management mode:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Scan single domain (default)&#039;&#039;&#039; - Using this function allows the user to obtain users and groups from the main domain only.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Scan all trusted domains&#039;&#039;&#039; - Using this function allows the user to obtain users and groups from the main and trusted domains.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;ID mapping back end:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;rid + tdb (default)&#039;&#039;&#039; - This option utilizes the rid backend for ID mapping of AD users. The UID/GID range has to be entered manually. The tdb backend is used when no other configuration is set. Recommended for large databases. Samba Wiki link for the rid backend: [https://wiki.samba.org/index.php/Idmap_config_rid https://wiki.samba.org/index.php/Idmap_config_rid]&lt;br /&gt;
*&#039;&#039;&#039;ad (with RFC2307 schema) + tdb&#039;&#039;&#039; - Allows reading ID mappings from an AD server, provided that the uidNumber attributes for users and gidNumber attributes for groups were added in advance in the AD. This backend requires additional configuration of uidNumber and gidNumber on the AD server side. The tdb backend is used when no other configuration is set. Samba Wiki link for the ad backend: [https://wiki.samba.org/index.php/Idmap_config_ad https://wiki.samba.org/index.php/Idmap_config_ad]&lt;br /&gt;
*&#039;&#039;&#039;autorid&#039;&#039;&#039; - This backend can be used if users are imported from a set of different domains. It automatically configures the range to be used for each domain. The only configuration needed is the range of UID/GIDs used for user/group mappings and the number of IDs per domain. Samba Wiki link for the autorid backend: [https://wiki.samba.org/index.php/Idmap_config_autorid https://wiki.samba.org/index.php/Idmap_config_autorid]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Ad-structure.png&amp;diff=1638</id>
		<title>File:Ad-structure.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Ad-structure.png&amp;diff=1638"/>
		<updated>2024-01-03T11:48:52Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:NYMNETWORKS-MIB-up30.txt&amp;diff=1637</id>
		<title>File:NYMNETWORKS-MIB-up30.txt</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:NYMNETWORKS-MIB-up30.txt&amp;diff=1637"/>
		<updated>2023-12-12T16:06:30Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:NYMNETWORKS-MIB.txt&amp;diff=1636</id>
		<title>File:NYMNETWORKS-MIB.txt</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:NYMNETWORKS-MIB.txt&amp;diff=1636"/>
		<updated>2023-12-12T16:06:12Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Scale_Logic_ZX_ver.1.0_up30_Release_Notes&amp;diff=1633</id>
		<title>Scale Logic ZX ver.1.0 up30 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Scale_Logic_ZX_ver.1.0_up30_Release_Notes&amp;diff=1633"/>
		<updated>2023-12-12T16:02:42Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2023-12-06&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 53984&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;span style=&amp;quot;color:#cc0033&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039; &amp;lt;/span&amp;gt;To upgrade the product, you need to have an active Technical Support plan. You will be prompted to re-activate your product after installing the upgrade to verify your Technical Support status.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have an active Technical Support plan, please contact the Scale Logic sales team or your reseller for further assistance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS Special Devices feature ===&lt;br /&gt;
&lt;br /&gt;
=== NVMe Disk Partitioning feature ===&lt;br /&gt;
&lt;br /&gt;
=== Self-Encrypting Drives (SED) support for HA Non-Shared Storage Clusters ===&lt;br /&gt;
&lt;br /&gt;
=== Active Directory with extended RID Range and RFC2307 compatibility ===&lt;br /&gt;
&lt;br /&gt;
=== Support for macOS Time Machine backup mechanism ===&lt;br /&gt;
&lt;br /&gt;
=== Support for the &amp;quot;hide unreadable folders and files&amp;quot; option in Samba ===&lt;br /&gt;
&lt;br /&gt;
=== Support for recycle bin in Samba for Microsoft Windows ===&lt;br /&gt;
&lt;br /&gt;
=== The &amp;quot;Send Compressed Data&amp;quot; Option in Scale Logic ZX On- &amp;amp; Off-Site Data Protection ===&lt;br /&gt;
&lt;br /&gt;
=== Support for Zero-configuration networking (zeroconf) feature with the services discovery options ===&lt;br /&gt;
&lt;br /&gt;
=== TUI: New predefined and editable custom storage performance profiles for testing purposes ===&lt;br /&gt;
&lt;br /&gt;
=== TRIM management for selected drives ===&lt;br /&gt;
&lt;br /&gt;
=== Active SMB user connections and active iSCSI connections statistics are available in the WebGUI in the Service Status tab ===&lt;br /&gt;
&lt;br /&gt;
=== S.M.A.R.T. monitoring functionality in the WebGUI ===&lt;br /&gt;
&lt;br /&gt;
=== ZFS ARC, L2ARC, and ZIL statistics in the WebGUI ===&lt;br /&gt;
&lt;br /&gt;
=== LSI SNMP Agent ===&lt;br /&gt;
&lt;br /&gt;
=== Checkmk agent turn off in the TUI ===&lt;br /&gt;
&lt;br /&gt;
=== Driver for Broadcom HBA 9600-16e 12Gb Tri-Mode Storage Adapter (mpi3mr, v8.6.1.0.0) ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Intel 100GbE Network Controller driver (ice, v1.11.14) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE Network Controller driver (i40e, v2.22.18) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10GbE Network Controller driver (ixgbe, v5.18.11) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE Network Controller driver (igb, v5.13.16) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom NeXtreme-E Series 10/100GbE Network Controller driver (bnxt_en, v1.10.2-223.0.162.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM57xx Network Controller driver (bnx2x, v1.715.13) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM57xx Network Controller driver (bnx2, v2.2.6a) ===&lt;br /&gt;
&lt;br /&gt;
=== Solarflare 10GbE Network Controller driver (sfc, v4.15.14.1001) ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio 10GbE Network Controller driver (cxgb4, v3.18.0.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA Adapter driver (mpt3sas, v45.00.00.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.724.02.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Marvell FastLinQ 41000 Network Controller driver (qede, v8.70.12.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Areca RAID Adapter driver (arcmsr, v1.50.00.13) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartHBA and SmartRAID Adapter driver (smartpqi, v2.1.22-040) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool (v3.10.00 (24308)) ===&lt;br /&gt;
&lt;br /&gt;
=== LSI Storage Authority Software (v008.004.010.000) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 6Gb/s HBA Adapter driver (esas2hba, v2.41.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s HBA Adapter driver (esas4hba, v1.51.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s GT HBA Adapter driver (esas5hba, v1.06.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v2.08.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 8Gb Fibre Channel Adapter driver (celerity8fc, v2.25.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Config Tool (v4.39) ===&lt;br /&gt;
&lt;br /&gt;
=== Emulex LightPulse Fibre Channel Adapter driver (lpfc, v12.8.614.22) ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox firmware update driver (mft, v4.23.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Check_mk agent (check_mk, v2.1.0p14) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114726 --&amp;gt;(SU 90895): In the environments with more than 128GB RAM, kernel panic logs are not saved by Kdump ===&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114673 --&amp;gt;In some environments, under a heavy load and while using On- and Off-site Data Protection, the connection to the SMB share is interrupted ===&lt;br /&gt;
&lt;br /&gt;
== Important notes for ZX HA configuration ==&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use the sync=always option for zvols and datasets in a cluster&amp;amp;nbsp; ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to use more than eight ping nodes ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to configure each IP address in a separate subnetwork ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to run the Scrub scanner after a failover triggered by power failure (dirty system close) ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use a UPS unit for each cluster node ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use static discovery in all iSCSI initiators ===&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended not to change any settings when both nodes do not have the same ZX version, for example during a software update ===&lt;br /&gt;
&lt;br /&gt;
=== It is necessary to use different Server names for cluster nodes ===&lt;br /&gt;
&lt;br /&gt;
=== The HA cluster does not work properly with InfiniBand controllers ===&lt;br /&gt;
&lt;br /&gt;
=== The HA cluster does not work stably with the ALB bonding mode ===&lt;br /&gt;
&lt;br /&gt;
=== The FC Target HA cluster does not support Persistent Reservation synchronization and cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases. ===&lt;br /&gt;
&lt;br /&gt;
=== When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the Broadcom SAS vendor. ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: justify&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;toctext&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;*Please consult the Broadcom vendor for the specific firmware that is suitable for your hardware setup.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
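The same four parameters can also be applied from the ESXi shell with esxcli rather than through the vSphere client. This is a hedged sketch: the adapter name vmhba64 is an example, so list your adapters first with `esxcli iscsi adapter list`:

```shell
# Apply the recommended iSCSI initiator settings on an ESXi host.
# vmhba64 is a placeholder adapter name; substitute your own.
esxcli iscsi adapter param set -A vmhba64 -k MaxOutstandingR2T -v 8
esxcli iscsi adapter param set -A vmhba64 -k FirstBurstLength  -v 65536
esxcli iscsi adapter param set -A vmhba64 -k MaxBurstLength    -v 1048576
esxcli iscsi adapter param set -A vmhba64 -k MaxRecvDataSegLen -v 1048576
```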
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Setting write cache sync requests (sync) to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record is enabled by default. This option worsens performance, but data is written safely. To improve NFS performance you can use the Asynchronous data record, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
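The sync trade-off described above is controlled per zvol or dataset with the ZFS sync property; a minimal sketch, with placeholder pool and resource names:

```shell
# Safest: every write is committed to stable storage before returning.
zfs set sync=always Pool-0/vol-db

# Faster, but up to ~5 seconds of cached writes can be lost on power
# failure; use only in environments protected by a UPS.
zfs set sync=standard Pool-0/vol-scratch

# Inspect the current settings:
zfs get sync Pool-0/vol-db Pool-0/vol-scratch
```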
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some minor problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
Using physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when using virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux (64-bit)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming the deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - In the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
With Windows 2008, it is not possible to reclaim the space released by deleted data on thin-provisioned LUNs.&lt;br /&gt;
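The Registry Editor steps above can also be applied from an elevated command prompt; a sketch using the stock `reg` tool (key and value names exactly as listed above):

```shell
:: Disable automatic reclaim ("file-delete notification")
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v DisableDeleteNotification /t REG_DWORD /d 1 /f

:: Set the value back to 0 before running the "Optimize" tool to reclaim free space
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v DisableDeleteNotification /t REG_DWORD /d 0 /f
```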
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, use (if possible) a single zvol on a zpool dedicated to deduplication, and delete the zpool that includes that single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 1MB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.66B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 1.66GB of RAM.&lt;br /&gt;
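The worked examples above follow directly from the formula; a minimal Python sketch of the same calculation (the constants come from the release notes; the function name is ours):

```python
def dedup_ram_bytes(zvol_size_bytes: int, volume_block_bytes: int) -> float:
    """Worst-case RAM needed for the deduplication table (DDT).

    (zvol size / volume block size) gives the number of blocks; each block
    costs one 320 B DDT entry, and the result is scaled up because only 75 %
    of RAM is reserved for ARC and only 25 % of ARC holds the DDT.
    """
    DDT_ENTRY_BYTES = 320        # size of one entry in the DDT table
    ARC_FRACTION = 0.75          # share of RAM reserved for ARC
    DDT_IN_ARC_FRACTION = 0.25   # share of ARC reserved for the DDT
    blocks = zvol_size_bytes / volume_block_bytes
    return blocks * DDT_ENTRY_BYTES / ARC_FRACTION / DDT_IN_ARC_FRACTION

# 1 TB of data with a 64 KB volume block size needs roughly 26.67 GB of RAM
ram_gb = dedup_ram_bytes(1 << 40, 64 * 1024) / (1 << 30)
```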
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations only apply to the worst-case scenario, in which the data is completely unique and cannot be deduplicated. For deduplicable data, the need for RAM decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will perform well while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the client file system&#039;s format block size with the zvol volume block size. A simple example: a Windows NTFS file system with the default 4k format block size on a zvol with the default 128k volume block size. With these defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To achieve matching for all files together with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must equal 64k. Another option is NTFS=32k and zvol=32k, but in that case the deduplication table will be twice as large. That is why NTFS=64k with zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
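The alignment argument above reduces to a simple ratio; a minimal sketch (the function name is ours, not from the product):

```python
def alignment_positions(zvol_block: int, fs_block: int) -> int:
    """Number of distinct positions a client file-system block can occupy
    inside one zvol block. Deduplication can only match blocks that share
    the same alignment, so 1 is the ideal value."""
    return zvol_block // fs_block

# NTFS 4k  on a 128k zvol: 32 possible alignments, dedup mostly misses
# NTFS 64k on a 128k zvol:  2 possible alignments, at most one miss per file
# NTFS 64k on a  64k zvol:  1 alignment, every identical block matches
```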
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned natively by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; it will show the new physical data space. Comparing the data size from the storage-client side with the data-space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the LOGs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
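On Windows, the NTFS Allocation unit size is chosen at format time; a hypothetical fragment (the drive letter `E:` is an assumption for illustration):

```shell
:: Quick-format E: with a 64K NTFS allocation unit size, matching a 64k zvol volume block size
format E: /FS:NTFS /A:64K /Q
```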
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to install the system on disks containing LVM metadata; such disks must be cleared before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to import a Zpool with a broken write log disk using the system’s functions, which is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will be increased. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 
57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block 
size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read 
Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 
7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
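The sizing rule above can be sketched in a few lines of Python. This is only an illustration of the arithmetic shown here; the 4718592B reserve and the 432B-per-block header cost are taken directly from the figures above:

```python
# RAM needed by ZFS read-cache headers, per the figures above:
# each cached block costs ~432 B of RAM; a fixed 4718592 B is subtracted
# from the cache device size before dividing into blocks.
def read_cache_ram_bytes(cache_bytes: int, block_bytes: int) -> int:
    """Approximate RAM consumed by headers for a read cache device."""
    return (cache_bytes - 4718592) * 432 // block_bytes

TIB = 1024 ** 4
print(read_cache_ram_bytes(TIB, 64 * 1024))    # 7247726208 (~6.75 GB)
print(read_cache_ram_bytes(TIB, 128 * 1024))   # 3623863104 (~3.37 GB)
```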
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding disks to and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the following error: &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach this disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them using the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider Configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded quota on a dataset prevents removing files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic ZX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. To upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic ZX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version: f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when a Zpool name changes, e.g. during export and import with a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== High Availability shared storage cluster does not work with Infiniband controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Due to technical reasons, the High Availability shared storage cluster does not work properly when using Infiniband controllers for the VIP interface configuration. This limitation will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Unexpected long failover time, especially with HA-Cluster with two or more pools ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The current failover mechanism moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA cluster. If all pools are active on a single node and failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V Clusters. In some environments under heavy load, switching cluster resources may also take too long. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;When using Windows, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Run the regedit tool and find the &#039;&#039;HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime&#039;&#039; registry key&lt;br /&gt;
&lt;br /&gt;
2. Change the value of the key from the default 60 sec to 600 sec (decimal)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;When using VMware, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Select the host in the vSphere Web Client navigator&lt;br /&gt;
&lt;br /&gt;
2. Go to Settings in the Manage tab&lt;br /&gt;
&lt;br /&gt;
3. Under System, select Advanced System Settings&lt;br /&gt;
&lt;br /&gt;
4. Choose the &#039;&#039;Misc.APDTimeout&#039;&#039; attribute and click the Edit icon&lt;br /&gt;
&lt;br /&gt;
5. Change the value from the default 140 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &#039;&#039;&#039;When using XenServer, perform the following steps to increase the iSCSI initiator timeout:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. For existing Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Detach and reattach the SRs. This will apply the new iSCSI timeout settings to the existing SRs.&lt;br /&gt;
&lt;br /&gt;
B. For new Storage Repositories (SR):&lt;br /&gt;
&lt;br /&gt;
1. Edit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
2. Find the line: node.session.timeo.replacement_timeout = 120&lt;br /&gt;
&lt;br /&gt;
3. Change the value from the default 120 to 600 sec.&lt;br /&gt;
&lt;br /&gt;
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.&lt;br /&gt;
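The XenServer steps above amount to a one-line substitution in iscsid.conf. A minimal, hedged sketch of that edit, operating on an in-memory copy of the file contents rather than /etc/iscsi/iscsid.conf itself:

```python
import re

# Sample fragment of /etc/iscsi/iscsid.conf with the default shipped value.
conf = "node.session.timeo.replacement_timeout = 120\n"

# Raise the iSCSI replacement timeout from the default 120 s to 600 s.
new_conf = re.sub(
    r"^(node\.session\.timeo\.replacement_timeout\s*=\s*)\d+",
    r"\g<1>600",
    conf,
    flags=re.MULTILINE,
)
print(new_conf.strip())  # node.session.timeo.replacement_timeout = 600
```

On a real system you would apply the same substitution to the file in place and then detach/reattach (or create) the SRs as described above.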
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after an update to up11 the system may require re-activation. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic ZX as a Hyper-V or VMware guest, the bonding modes ALB, Round-Robin and Round-Robin with RDMA are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can make deleting a VMware snapshot take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can make the process of deleting a VMware snapshot take a long time. Please take this into consideration when setting up the scheduler for the Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Please enable quota on a dataset before using it in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Nodes connected to the same AD server must have unique Server names ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If ZX nodes are connected to the same AD server, they cannot have the same Server name.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the Zpool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Downgrading to an earlier version is not possible. If you need to go back to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to boot from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown or power off while data is being mirrored onto the newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SAS-MPIO cannot be used with Cluster over Ethernet ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created with the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an FC Target HA environment, power cycling both nodes simultaneously may lead to a situation where VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown or Normal,Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. Alternatively, restart the Failover in ZX’s GUI. Refresh vSphere’s Adapters and Devices tab afterwards.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disks from the system before starting administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Separated mode after update from ZX up24 to ZX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In an HA cluster environment, after updating one node from ZX up24 to ZX up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and in the Failover status section select Stop Failover on both nodes. Once this operation is finished, select Start Failover.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of ZX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of ZX up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of ZX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which are common in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, the error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of ZX up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the ZX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Please note that in order to change this parameter, Failover must be stopped first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, one can observe significantly decreased performance on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating nodes of the ZX cluster from up24 and earlier versions changes FC ports to target mode resulting in losing connection to a storage connected via FC initiator ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is a significant difference between the FC configurations of up24 and earlier versions and later ones. The earlier versions allowed the FC ports to be configured in initiator mode only, while later versions allow both target and initiator modes, with target as the default. Therefore, when using storage connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Updating Metro Cluster node with NVMe disks as read cache from ZX up26 or earlier can cause the system to lose access to the NVMe disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The process of updating a Metro Cluster node from ZX up26 or earlier changes the NVMe disk IDs. In consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). In order to bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+x -&amp;gt; “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -&amp;gt; Storage. The missing NVMe disk should now appear in Unassigned disks at the bottom of the page, which allows you to select that disk in the pool’s Disk group tab. Open the Disk group tab of the pool, press the Add group button and select Add read cache. The missing disk should now be available to select as a read cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Long time of a failover procedure in case of Xen client with iSCSI MPIO configuration ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. For such an environment, we recommend adding the following entry in the device section of the configuration file /etc/multipath.conf:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;no_path_retry queue&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;The structure of the device section should look as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;device {&lt;br /&gt;
        vendor                  &amp;quot;SCST_FIO|SCST_BIO&amp;quot;&lt;br /&gt;
        product                 &amp;quot;*&amp;quot;&lt;br /&gt;
        path_selector           &amp;quot;round-robin 0&amp;quot;&lt;br /&gt;
        path_grouping_policy    multibus&lt;br /&gt;
        rr_min_io               100&lt;br /&gt;
        no_path_retry           queue&lt;br /&gt;
        }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
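For reference, a small hedged Python sketch that renders the same device stanza programmatically; the attribute names and values are exactly those shown above, and since multipath.conf is plain text this is only a convenience for generating or checking the fragment before deploying it:

```python
# Render the multipath.conf device stanza shown above from a dict,
# so scripts can generate or verify the fragment before deploying it.
ATTRS = {
    "vendor": '"SCST_FIO|SCST_BIO"',
    "product": '"*"',
    "path_selector": '"round-robin 0"',
    "path_grouping_policy": "multibus",
    "rr_min_io": "100",
    "no_path_retry": "queue",  # keeps I/O queued while all paths are down
}

def render_device_section(attrs: dict) -> str:
    """Build the device { ... } block line by line."""
    lines = ["device {"]
    for key, value in attrs.items():
        lines.append(f"        {key:<24}{value}")
    lines.append("}")
    return "\n".join(lines)

print(render_device_section(ATTRS))
```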
=== In case of large number of disks, zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabled VMD option in the BIOS leads to a problem with listing PCI devices ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; On some servers, an enabled VMD option in the BIOS causes PCI devices not to be listed properly. If this is the case, please disable the VMD option in the BIOS. This problem will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback operation and reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed, ZX needs to be informed about it. Using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on ZX leads to a situation where the changed user gets deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The special characters # : + cannot be used in a password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
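A hedged sketch of a pre-flight check for such a password; the forbidden set below contains only the three characters listed above, and the real validation performed by ZX may differ:

```python
# Characters the e-mail notification feature cannot handle, per the note above.
FORBIDDEN = set("#:+")

def email_password_ok(password: str) -> bool:
    """Return True if the password avoids the known-problematic characters."""
    return not (FORBIDDEN & set(password))

print(email_password_ok("s3cret!"))    # True
print(email_password_ok("pass#word"))  # False
```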
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The e-mail alert configuration in the LSI Storage Authority Software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on an NFS access list and you would like to move it to another access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Next, edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool reaches more than 80%, the system tries to utilize the available space to the maximum. As a result, the system load may increase, especially waiting I/O, causing unstable operation. Expanding the pool size is recommended.&lt;br /&gt;
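The 80% rule of thumb above can be wired into a simple monitoring check. A hedged sketch follows; the threshold comes from this note, and on a real system the used/size figures would come from something like `zpool list` output:

```python
# Warn when pool utilisation crosses the 80% threshold mentioned above.
THRESHOLD = 0.80

def pool_usage_warning(used_bytes: int, size_bytes: int) -> bool:
    """Return True when the pool is over the recommended utilisation."""
    return used_bytes / size_bytes > THRESHOLD

print(pool_usage_warning(70 * 1024**4, 100 * 1024**4))  # False (70% used)
print(pool_usage_warning(85 * 1024**4, 100 * 1024**4))  # True  (85% used)
```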
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system is performing actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the system shows the old value taken directly from cache memory. We recommend using the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate a very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
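As a hedged illustration of the recommendation above (the 64KB floor is taken from this note; block sizes are assumed to be given in bytes):

```python
# Minimum recommended block/record size for O&ODP-backed volumes, per above.
MIN_ODP_BLOCK = 64 * 1024

def odp_block_size_ok(block_bytes: int) -> bool:
    """True if the zvol block size / dataset record size is safe for O&ODP."""
    return block_bytes >= MIN_ODP_BLOCK

print(odp_block_size_ok(16 * 1024))   # False: 4KB-16KB sizes cause high load
print(odp_block_size_ok(128 * 1024))  # True
```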
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down ZX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, which ZX interprets as being on battery. When this times out, ZX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is to change the boot mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in the future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to ZX up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from ZX up29, we disable the write-back cache on all HDD disks by default, but we do not disable it on SSD drives or hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting a JBOD with the write-back cache enabled on disks can lead to data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from ZX up29, we disable the write-back cache on HDDs by default during the boot procedure. We do not disable it on SSD drives or hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there may be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, it may take from a few minutes up to several dozen minutes to populate the list in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can become unstable while copying data to storage. Use this compression algorithm only in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When more than 500 zvols are used in the system, the responsiveness of the Web-GUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
=== It is recommended to use Fibre Channel groups in Fibre Channel Target HA Cluster environments that use the Fibre Channel switches. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When Fibre Channel switches are used in FC Target HA Cluster environments, it is recommended to use only Fibre Channel groups (using the Fibre Channel Public group is not recommended).&lt;br /&gt;
&lt;br /&gt;
=== Manual export and import of zpool in the system or deactivation of the Fibre Channel group without first suspending or turning off the virtual machines on the VMware ESXi side may cause loss of access to the data by VMware ESXi. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before a manual export and import of a zpool in the system, or before deactivating the Fibre Channel group in a Fibre Channel Target HA Cluster environment, you must suspend or turn off the virtual machines on the VMware ESXi side. Otherwise, VMware ESXi may lose access to the data and will need to be restarted.&lt;br /&gt;
&lt;br /&gt;
=== In Fibre Channel Target HA Cluster environments the VMware ESXi 6.7 must be used instead of VMware ESXi 7.0. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When VMware ESXi 7.0 is used in a Fibre Channel Target HA Cluster environment, restarting one of the cluster nodes may cause the Fibre Channel paths to report a dead state.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes cluster nodes hang during boot of Scale Logic ZX. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If one of the cluster nodes hangs during the Scale Logic ZX boot, it must be restarted manually.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes, when using IPMI hardware solutions, the cluster node may be restarted again by the IPMI watchdog ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, it is recommended to wait 5 minutes before turning the cluster node back on after it was turned off.&lt;br /&gt;
&lt;br /&gt;
=== Sometimes restarting one of the cluster nodes may cause some disks to be missing in the zpool configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In this case, click the “Rescan storage” button in the WebGUI to solve the problem.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
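The suggested substitute for the removed check is fetching the time from an NTP server. For illustration only, the same probe can be sketched as a minimal SNTP client (a hypothetical helper, not part of ZX; `pool.ntp.org` is an assumed example server):

```python
import socket
import struct

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def build_sntp_request() -> bytes:
    # First byte: LI=0, Version=3, Mode=3 (client); the remaining 47 bytes are zero.
    return b"\x1b" + 47 * b"\0"

def parse_sntp_reply(data: bytes) -> int:
    # The Transmit Timestamp seconds field sits at bytes 40..43 of the 48-byte reply.
    ntp_seconds = struct.unpack("!I", data[40:44])[0]
    return ntp_seconds - NTP_DELTA

def check_internet(host: str = "pool.ntp.org", timeout: float = 2.0) -> int:
    """Return the server's Unix time; raises OSError on timeout, i.e. no connectivity."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_sntp_request(), (host, 123))
        data, _ = s.recvfrom(48)
    return parse_sntp_reply(data)
```

A successful reply confirms both DNS resolution and outbound UDP connectivity, which is what the Web-GUI time-and-date check verifies as well.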
=== After upgrading the system to a newer version, the event viewer may report an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot; In this situation, the message can be safely ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of ZX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of ZX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI: open the ZX TUI and press CTRL+ALT+W to launch Hardware configuration, press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message, type in the password, choose Kernel module parameters, select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of ZX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
&lt;br /&gt;
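On a running Linux/ZFS system this module parameter is also visible at /sys/module/zfs/parameters/zfs_vdev_max_active (runtime value only; the TUI procedure above is what makes the change persistent). A hedged sketch of the write-and-verify step, using a stand-in file so it runs anywhere:

```python
import os
import tempfile
from pathlib import Path

# Real sysfs location on a live ZFS system (runtime value only):
ZFS_PARAM = Path("/sys/module/zfs/parameters/zfs_vdev_max_active")

def set_vdev_max_active(param_file: Path, value: int) -> int:
    """Write the new value to the parameter file and read it back to verify."""
    param_file.write_text(f"{value}\n")
    return int(param_file.read_text())

# Stand-in file for demonstration (on the real system you would pass ZFS_PARAM as root):
fd, name = tempfile.mkstemp()
os.close(fd)
demo = Path(name)
demo.write_text("1\n")                      # simulate the problematic up29r2 default
assert set_vdev_max_active(demo, 1000) == 1000
demo.unlink()
```

Reading the value back after writing mirrors what the TUI does when it confirms the new setting before prompting for a reboot.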
=== In case of no local storage disks in any Non-Shared storage HA Cluster node, the remote disks mirroring path connection status shows incorrect state: Disconnected. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; By design, each cluster node in a Non-Shared Storage HA Cluster must have at least one local storage disk before the remote disk mirroring path connection is created.&lt;br /&gt;
&lt;br /&gt;
=== In some environments, when RDMA is used for the remote disk mirroring path, shutting down one of the cluster nodes may cause it to restart instead of shutting down. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In some environments, when RDMA is used for the remote disk mirroring path, shutting down one of the cluster nodes may cause it to restart instead of shutting down.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the ATTO Fibre Channel Target in the HA cluster environment. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of using the ATTO Fibre Channel Target in an HA Cluster environment, after a power cycle of one of the cluster nodes, the Fibre Channel paths report a dead state. In order to restore the correct status of these Fibre Channel paths, the VMware server must be restarted.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the ATTO Fibre Channel Target in an HA Cluster environment, restarting the cluster node with both zpools imported in the system causes the second cluster node to restart unexpectedly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;Therefore, using the ATTO Fibre Channel Target in the HA cluster environment is not recommended.&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic ZX supports only drives with a verified SED configuration.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check if a given device is supported, see the HCL list available on the Scale Logic webpage.&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality in the zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait after enabling the autotrim functionality in the zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand, at a convenient time (e.g. when the system is working under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In order to provide stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in a combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - settings of the combo box manage the second virtual disk. &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of unavailability of a given subdomain, information about its status will not be updated on the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it is recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #104059 --&amp;gt;SAS Multipath configuration is not supported in the Non-Shared Storage Cluster. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of the Non-Shared Storage Cluster, the SAS Multipath configuration is not supported at all. In this scenario, all the disks need to be connected through one path only. In the case of using the JBOD configuration with disks connected through a pair of SAS cables, one of them must be disconnected.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the Broadcom HBA 9600 Storage Adapter the Hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the Broadcom HBA 9600 Storage Adapter, clicking the “Rescan” button in the storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic ZX (the network interface is usually: eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115494 --&amp;gt;Resilver progress bar in the HA Non-shared Cluster Storage environment may show values over 100%. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the HA Non-Shared storage cluster with compression and deduplication enabled it has been observed that the resilver progress bar on the WebGUI may display values exceeding 100%.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #JBmail 31.10.2023 g. 17:29 --&amp;gt;Sometimes a web browser may display a blank help page. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Sometimes a web browser may redirect to the help page using HTTPS, which can result in a blank page being displayed. To resolve this issue, please ensure that your web browser is redirected to the help page using HTTP.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Service_discovery&amp;diff=1631</id>
		<title>Service discovery</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Service_discovery&amp;diff=1631"/>
		<updated>2023-12-12T15:30:38Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: Created page with &amp;quot;__NOTOC__   === Zeroconf service discovery === &amp;lt;div&amp;gt;This functionality allows discovering NAS devices by any operating system that supports zero-configuration networking (zero...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zeroconf service discovery ===&lt;br /&gt;
&amp;lt;div&amp;gt;This functionality allows NAS devices to be discovered by any operating system that supports zero-configuration networking (zeroconf), e.g., macOS.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Two types of services are currently supported:&lt;br /&gt;
*Discovering SMB services&lt;br /&gt;
*Discovering NAS for Time Machine via SMB&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
===== Discovering SMB services =====&lt;br /&gt;
&amp;lt;div&amp;gt;Allows the server to broadcast information about its SMB services.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
===== Discovering NAS for Time Machine via SMB =====&lt;br /&gt;
&amp;lt;div&amp;gt;Allows macOS users to perform backup tasks from the Mac computer to shared folders via SMB.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The following conditions must be met for the functionality to work:&lt;br /&gt;
*The “Discovering SMB services” option must be enabled.&lt;br /&gt;
*The “Discovering NAS for Time Machine via SMB” option must be enabled.&lt;br /&gt;
*The SMB protocol must be enabled, and the following options must be set as follows:&lt;br /&gt;
**Vfs_fruit ON&lt;br /&gt;
**oplocks ON&lt;br /&gt;
**level2 oplocks ON&lt;br /&gt;
**SMB2 leases ON&lt;br /&gt;
**kernel oplocks OFF&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;When all conditions above are met, the option that allows shares to be discovered by Time Machine becomes active. The next step is to select the &amp;quot;Enable macOS Time Machine support&amp;quot; option in every share the user wishes to make visible for Time Machine. All shares with this option enabled will be visible for Time Machine.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;Note!&amp;lt;/span&amp;gt; If any of the above settings are changed, shares will no longer appear in Time Machine.&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
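For reference, the SMB options listed above correspond to stock Samba settings. A minimal share sketch using assumed stock smb.conf syntax (ZX configures these through its WebGUI, not by hand-editing smb.conf; the share name and path are hypothetical):

```ini
[TimeMachineShare]
   path = /Pools/Pool-0/tm-share        ; hypothetical dataset path
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes             ; advertises the share to Time Machine
   oplocks = yes
   level2 oplocks = yes
   smb2 leases = yes
   kernel oplocks = no
```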
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=SNMP_settings&amp;diff=220</id>
		<title>SNMP settings</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=SNMP_settings&amp;diff=220"/>
		<updated>2023-12-12T15:19:43Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This function enables you to configure access over the &#039;&#039;&#039;SNMP&#039;&#039;&#039; protocol in versions 2 or 3.&lt;br /&gt;
&lt;br /&gt;
With SNMP enabled, you receive a wealth of information (CPU usage, system load, memory info, ethernet traffic, running processes).&amp;lt;br/&amp;gt;System location and system contact are only for your information.&amp;amp;nbsp;&amp;amp;nbsp;For example, when you connect from an SNMP client, you will see your location and name.&lt;br /&gt;
&lt;br /&gt;
SNMP version 3 provides encrypted transmission as well as authentication by username and password.&amp;lt;br/&amp;gt;SNMP version 2 does not have encrypted transmission, and authentication is done only via the community string.&lt;br /&gt;
&lt;br /&gt;
The community string you set can contain up to 20 characters, while the password needs to have at least 8 characters.&lt;br /&gt;
&lt;br /&gt;
Links to SNMP clients:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.muonics.com http://www.muonics.com]&amp;lt;/span&amp;gt;&lt;br /&gt;
*&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.mg-soft.com http://www.mg-soft.com]&amp;lt;/span&amp;gt;&lt;br /&gt;
*&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.manageengine.com http://www.manageengine.com]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|&lt;br /&gt;
Our storage system supports the SNMP protocol in MIB-II standard.&amp;amp;nbsp; List of MIBs:&lt;br /&gt;
&lt;br /&gt;
*mib-2.host&lt;br /&gt;
*mib-2.ip&lt;br /&gt;
*mib-2.tcp&lt;br /&gt;
*mib-2.udp&lt;br /&gt;
*mib-2.interfaces&lt;br /&gt;
*mib-2.at&lt;br /&gt;
*system&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Scale Logic ZX offers additional SNMP values to monitor Pool and ZFS attributes.&amp;lt;br/&amp;gt;It is necessary to query specific OIDs in order to receive those attributes.&lt;br /&gt;
&lt;br /&gt;
For basic ZFS parameters, the NYMNETWORKS-MIB is included:&lt;br /&gt;
&lt;br /&gt;
*up to version v.1.0 up29r4&amp;amp;nbsp; [[:Media:NYMNETWORKS-MIB.txt|NYMNETWORKS-MIB.txt]]&lt;br /&gt;
*from version v.1.0 up30 [[:Media:NYMNETWORKS-MIB-up30.txt|NYMNETWORKS-MIB.txt]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;snmpwalk -v 2c -m NYMNETWORKS-MIB -c community 192.168.251.79 .1.3.6.1.4.1.25359.1&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemName.1 = STRING: &amp;quot;Pool-0&amp;quot;&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemName.2 = STRING: &amp;quot;Pool-1&amp;quot;&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableKB.1 = Gauge32: 15861464&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableKB.2 = Gauge32: 15861672&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedKB.1 = Gauge32: 4327720&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedKB.2 = Gauge32: 4327512&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsPoolHealth.1 = INTEGER: online(1)&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsPoolHealth.2 = INTEGER: online(1)&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeKB.1 = Wrong Type (should be INTEGER): Gauge32: 20189184&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeKB.2 = Wrong Type (should be INTEGER): Gauge32: 20189184&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableMB.1 = Gauge32: 15489&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableMB.2 = Gauge32: 15489&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedMB.1 = Gauge32: 4226&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedMB.2 = Gauge32: 4226&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeMB.1 = Gauge32: 19716&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeMB.2 = Gauge32: 19716&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCSizeKB.0 = Gauge32: 61086&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMetadataSizeKB.0 = Gauge32: 9278&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCDataSizeKB.0 = Gauge32: 51808&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCHits.0 = Counter32: 229308&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMisses.0 = Counter32: 41260&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCTargetSize.0 = Gauge32: 64287&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMru.0 = Gauge32: 59529&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCHits.0 = Counter32: 
0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCMisses.0 = Counter32: 0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCReads.0 = Counter32: 0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCWrites.0 = Counter32: 0&lt;br /&gt;
&lt;br /&gt;
Additional information, like compression ratio, deduplication ratio, available space (in bytes), age (in seconds) of latest snapshot on volume,&amp;lt;br/&amp;gt;can be obtained with standard NET-SNMP-EXTEND-MIB:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Examples:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;deduplication&amp;quot; = STRING:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;deduplication Pool-0 1.00&amp;quot;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;compression&amp;quot; = STRING:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;compression Pool-0/vol00 1.01&amp;lt;br/&amp;gt;compression Pool-0/clone-vol00 1.00&amp;quot;&lt;br /&gt;
&lt;br /&gt;
NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;volumes_list&amp;quot; = STRING:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;quot;available Pool-0/vol00 11981377536&amp;lt;br/&amp;gt;available Pool-0/clone-vol00 11981377536&amp;quot;&lt;br /&gt;
&lt;br /&gt;
NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;snapshots_age&amp;quot; = STRING:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;quot;snapshot_age Pool-0/vol00 3&amp;lt;br/&amp;gt;snapshot_age Pool-0/vol01 371&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Untranslated OIDs:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;root@p-GA-880GM-USB3:/home/p# snmpwalk -v2c -c public 192.168.0.80&amp;amp;nbsp; 1.3.6.1.4.1.8072.1.3.2.3&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.11.99.111.109.112.114.101.115.115.105.111.110 = STRING: &amp;quot;compression Pool-0/vol00 1.01&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.12.115.110.97.112.115.104.111.116.95.97.103.101 = STRING: &amp;quot;snapshot_age Pool-0/vol00 3&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.12.118.111.108.117.109.101.115.95.108.105.115.116 = STRING: &amp;quot;available Pool-0/vol00 11981377536&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.13.100.101.100.117.112.108.105.99.97.116.105.111.110 = STRING: &amp;quot;deduplication Pool-0 1.00&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.11.99.111.109.112.114.101.115.115.105.111.110 = STRING: &amp;quot;compression Pool-0/vol00 1.01&amp;lt;br/&amp;gt;compression Pool-0/clone-vol00 1.00&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.12.115.110.97.112.115.104.111.116.95.97.103.101 = STRING: &amp;quot;snapshot_age Pool-0/vol00 3&amp;lt;br/&amp;gt;snapshot_age Pool-0/vol01 371&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.12.118.111.108.117.109.101.115.95.108.105.115.116 = STRING: &amp;quot;available Pool-0/vol00 11981377536&amp;lt;br/&amp;gt;available Pool-0/clone-vol00 11981377536&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.13.100.101.100.117.112.108.105.99.97.116.105.111.110 = STRING: &amp;quot;deduplication Pool-0 1.00&amp;quot;&lt;br /&gt;
&lt;br /&gt;
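The numeric suffix of each untranslated OID above is simply the extension name encoded as a length-prefixed string of ASCII character codes (the first number is the string length, each following number one character). A short decoding sketch (illustrative helper, not part of the product):

```python
def decode_extend_oid(suffix: str) -> str:
    """Decode a NET-SNMP-EXTEND-MIB row index, e.g. '11.99.111...' -> 'compression'.

    The index is a length-prefixed ASCII string: the first sub-identifier is the
    string length, and each following sub-identifier is one character code.
    """
    parts = [int(p) for p in suffix.split(".")]
    length, codes = parts[0], parts[1:]
    assert len(codes) == length, "length prefix does not match character count"
    return "".join(chr(c) for c in codes)

print(decode_extend_oid("11.99.111.109.112.114.101.115.115.105.111.110"))
# -> compression
```

This explains why `iso.3.6.1.4.1.8072.1.3.2.3.1.1.11.99.111.109...` returns the "compression" string: the trailing arcs spell out the extension name.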
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:WinFeatures.png&amp;diff=1630</id>
		<title>File:WinFeatures.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:WinFeatures.png&amp;diff=1630"/>
		<updated>2023-12-12T15:08:52Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:PC_view.png&amp;diff=1628</id>
		<title>File:PC view.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:PC_view.png&amp;diff=1628"/>
		<updated>2023-12-12T15:08:26Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: Pa-P uploaded a new version of &amp;amp;quot;File:PC view.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Command_prompt_mount_nfs_share1.png&amp;diff=1629</id>
		<title>File:Command prompt mount nfs share1.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Command_prompt_mount_nfs_share1.png&amp;diff=1629"/>
		<updated>2023-12-12T15:08:13Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:PC_view.png&amp;diff=1627</id>
		<title>File:PC view.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:PC_view.png&amp;diff=1627"/>
		<updated>2023-12-12T15:08:03Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario2_conf2.png&amp;diff=1626</id>
		<title>File:Multiple NICs scenario2 conf2.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario2_conf2.png&amp;diff=1626"/>
		<updated>2023-12-12T14:12:02Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_in_the_same_subnetwork_scenario1-issue1.png&amp;diff=1625</id>
		<title>File:Multiple NICs in the same subnetwork scenario1-issue1.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_in_the_same_subnetwork_scenario1-issue1.png&amp;diff=1625"/>
		<updated>2023-12-12T14:11:50Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_in_the_same_subnetwork_scenario1-issue2.png&amp;diff=1624</id>
		<title>File:Multiple NICs in the same subnetwork scenario1-issue2.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_in_the_same_subnetwork_scenario1-issue2.png&amp;diff=1624"/>
		<updated>2023-12-12T14:11:36Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_in_the_same_subnetwork_Scenario1.png&amp;diff=1623</id>
		<title>File:Multiple NICs in the same subnetwork Scenario1.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_in_the_same_subnetwork_Scenario1.png&amp;diff=1623"/>
		<updated>2023-12-12T14:11:25Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_in_the_same_subnetwork_scenario1-issue3.png&amp;diff=1622</id>
		<title>File:Multiple NICs in the same subnetwork scenario1-issue3.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_in_the_same_subnetwork_scenario1-issue3.png&amp;diff=1622"/>
		<updated>2023-12-12T14:11:15Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_static-routing-table.png&amp;diff=1621</id>
		<title>File:Multiple NICs static-routing-table.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_static-routing-table.png&amp;diff=1621"/>
		<updated>2023-12-12T14:11:02Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario3_issue1.png&amp;diff=1620</id>
		<title>File:Multiple NICs scenario3 issue1.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario3_issue1.png&amp;diff=1620"/>
		<updated>2023-12-12T14:10:49Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario2_conf1.png&amp;diff=1619</id>
		<title>File:Multiple NICs scenario2 conf1.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario2_conf1.png&amp;diff=1619"/>
		<updated>2023-12-12T14:10:39Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario3.png&amp;diff=1618</id>
		<title>File:Multiple NICs scenario3.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario3.png&amp;diff=1618"/>
		<updated>2023-12-12T14:10:25Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Network-setting.png&amp;diff=1617</id>
		<title>File:Network-setting.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Network-setting.png&amp;diff=1617"/>
		<updated>2023-12-12T14:10:08Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_static-routing-GUI.png&amp;diff=1616</id>
		<title>File:Multiple NICs static-routing-GUI.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_static-routing-GUI.png&amp;diff=1616"/>
		<updated>2023-12-12T14:09:54Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario3_issue2.png&amp;diff=1615</id>
		<title>File:Multiple NICs scenario3 issue2.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_scenario3_issue2.png&amp;diff=1615"/>
		<updated>2023-12-12T14:09:39Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_static-routing-new-route.png&amp;diff=1614</id>
		<title>File:Multiple NICs static-routing-new-route.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=File:Multiple_NICs_static-routing-new-route.png&amp;diff=1614"/>
		<updated>2023-12-12T14:07:40Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Mounting_NFS_Shares_in_MS_Windows&amp;diff=1613</id>
		<title>Mounting NFS Shares in MS Windows</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Mounting_NFS_Shares_in_MS_Windows&amp;diff=1613"/>
		<updated>2023-11-03T10:03:08Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;This article describes how to set up an MS Windows desktop machine to support Network File System (NFS) shares and how to connect to those shares. The presented method keeps the connection to NFS shares working when the pool moves. &#039;&#039;&#039;Recommended for cluster configurations&#039;&#039;&#039;.&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
Make sure that Services for NFS are enabled on your computer.&lt;br /&gt;
&lt;br /&gt;
#Go to &#039;&#039;&#039;Control Panel &amp;gt; Programs &amp;gt; Programs and Features&#039;&#039;&#039;.&lt;br /&gt;
#Click the &amp;quot;Turn Windows features on or off&amp;quot; option in the menu on the left side.&lt;br /&gt;
#Go to the &amp;quot;Services for NFS&amp;quot; option.&lt;br /&gt;
#Enable &amp;quot;Services for NFS&amp;quot; and both of its subfeatures.&lt;br /&gt;
#Reboot the computer to apply changes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
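The same features can also be enabled from an elevated command prompt. A minimal sketch using DISM – the feature names below are assumptions based on recent Windows releases and should be verified first, since they may differ between Windows versions:&lt;br /&gt;

```shell
:: List the available optional features to confirm the exact NFS feature names
dism /online /Get-Features | findstr /i nfs

:: Enable the NFS client (feature names assumed from recent Windows releases)
dism /online /Enable-Feature /FeatureName:ServicesForNFS-ClientOnly
dism /online /Enable-Feature /FeatureName:ClientForNFS-Infrastructure
```

As with the GUI method, a reboot may be required before the &amp;quot;mount&amp;quot; command becomes available.&lt;br /&gt;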
&amp;lt;div&amp;gt;[[File:WinFeatures.png|none|WinFeatures.png]]&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Use the following command-line syntax in the &amp;quot;Command prompt&amp;quot; window to mount the NFS share:&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&lt;br /&gt;
*Replace &amp;quot;server.name.or.IP&amp;quot; with the hostname or IP address of the server that hosts the NFS share.&lt;br /&gt;
*Replace &amp;quot;share_name&amp;quot; with the name of the NFS share (for example, &amp;quot;test_share&amp;quot;).&lt;br /&gt;
*Replace the &amp;quot;X&amp;quot; with the desired drive letter.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;pre&amp;gt;mount -o anon mtype=hard timeout=30 \\server.name.or.IP\share_name X:&amp;lt;/pre&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;An example of a correct command:&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;pre&amp;gt;mount -o anon mtype=hard timeout=30 \\192.168.188.3\test_share F:&amp;lt;/pre&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;After the command is executed, the following confirmation is displayed:&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[File:Command prompt mount nfs share1.png|none|750px|Command prompt mount nfs share.PNG]]&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The mounted NFS share is displayed in the &amp;quot;Network locations&amp;quot; section of the computer.&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[File:PC view.png|none|The mounted share in the PC view.png]]&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:ZFS and data storage articles]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=Multiple_NICs_in_the_same_subnetwork&amp;diff=1611</id>
		<title>Multiple NICs in the same subnetwork</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=Multiple_NICs_in_the_same_subnetwork&amp;diff=1611"/>
		<updated>2023-11-03T10:03:08Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div&amp;gt;This document explains how to configure different network layouts that involve having multiple network interfaces in the same subnet. Each layout has specific setup steps.&amp;lt;br/&amp;gt;&lt;br /&gt;
In the examples we use:&lt;br /&gt;
&lt;br /&gt;
*Host 1 to represent the storage server,&lt;br /&gt;
*Host 2 to represent the machine that communicates with the storage server.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT NOTE&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div&amp;gt;As a rule, we suggest avoiding network layouts that require multiple interfaces in the same subnet, because they can make some network services unstable. They can also cause routing issues – for example, a packet sent through eth0 may get a reply from eth1. To improve network performance, consider using bonding or iSCSI multipathing instead.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
== Scenario 1 ==&lt;br /&gt;
&amp;lt;div&amp;gt;Host 1 and Host 2 are connected through a switch. They have multiple network interfaces in the same subnet to communicate with each other.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs in the same subnetwork Scenario1.png|none|Multiple NICs in the same subnetwork Scenario1.png]]&amp;lt;/div&amp;gt;&amp;lt;div style=&amp;quot;text-align: center&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
==== CONFIGURATION ====&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
*The need for static routing depends on the service being used.&lt;br /&gt;
*If Host 2 initiates the connection, Host 1 may not need static routing, but setting it up improves reliability.&lt;br /&gt;
*If Host 1 initiates the connection, Host 1 needs static routing. This also applies if Host 2 is another storage server.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
==== POSSIBLE ISSUES ====&lt;br /&gt;
&amp;lt;div&amp;gt;Without static routing, Host 1 will use only the interface that comes first in the routing table (for example, eth1) for all connections it starts.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs in the same subnetwork scenario1-issue1.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The interface that comes first in the routing table is essential for Host 1 to initiate any connections. If it fails, Host 1 will lose this ability.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs in the same subnetwork scenario1-issue2.png]]&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;You can prevent this problem by configuring static routing for each interface.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs in the same subnetwork scenario1-issue3.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To configure static routing in the GUI, go to &amp;quot;System Settings&amp;quot;, then select the &amp;quot;Network&amp;quot; tab and click the &amp;quot;Add static routing&amp;quot; button in the &amp;quot;Static routing manager&amp;quot; section.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs static-routing-GUI.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The &#039;New route&#039; window will appear. 
The fields should be filled in with the appropriate values.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs static-routing-new-route.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;Network/Host IP&#039;&#039;&#039;: enter the IP address of the network card of the target host&amp;lt;br/&amp;gt;&#039;&#039;&#039;Netmask&#039;&#039;&#039;: when we use a static route for a specific end host, the subnet mask is 32-bit (255.255.255.255)&amp;lt;br/&amp;gt;&#039;&#039;&#039;Interface&#039;&#039;&#039;: from the drop-down menu, select the interface from which communication is to be established.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Below is an example static route configuration for three NICs in the same subnet.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs static-routing-table.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
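For comparison, on a plain Linux host the same per-interface pinning could be sketched with /32 routes (the addresses below are hypothetical – on the storage server itself, the GUI procedure described above is the supported method):&lt;br /&gt;

```shell
# Hypothetical layout: the remote host exposes 192.168.0.220-222 on three NICs.
# A /32 route pins each remote address to one specific local interface.
ip route add 192.168.0.220/32 dev eth0
ip route add 192.168.0.221/32 dev eth1
ip route add 192.168.0.222/32 dev eth2

# Show which interface the kernel would use for a given destination
ip route get 192.168.0.221
```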
== Scenario 2 ==&lt;br /&gt;
&amp;lt;div&amp;gt;Host 1 and Host 2 are connected through a switch. They have multiple network interfaces in the same subnet to communicate with each other. Additionally,&amp;amp;nbsp; Host 1 also has an interface for WAN access.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs scenario2 conf1.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
==== CONFIGURATION ====&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
*We strongly suggest using a different subnet for the WAN interface than for the other interfaces. In that case, this scenario is similar to Scenario 1 and you can follow the same setup steps.&lt;br /&gt;
*Unlike Scenario 1, however, this scenario always needs a gateway on the WAN interface (see the image below).&lt;br /&gt;
*Check the “Possible issues” section for Scenario 1 for problems that may also occur in this scenario.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs scenario2 conf2.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To set up the gateway, please press CTRL+ALT+N on the system console and choose the interface. Then, select &amp;quot;Gateway.&amp;quot;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Network-setting.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
== Scenario 3 ==&lt;br /&gt;
&amp;lt;div&amp;gt;Host 1 and Host 2 are directly connected. To improve connectivity between the hosts, several network interfaces are placed in the same subnet.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs scenario3.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
==== CONFIGURATION ====&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
*This scenario requires static routing to be set up on Host 1 and, depending on the operating system, on Host 2 as well.&lt;br /&gt;
*A local output interface must be assigned to every remote IP. This rule applies to both hosts.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
==== POTENTIAL ISSUES ====&lt;br /&gt;
&amp;lt;div&amp;gt;If Host 1 does not use static routing, it will always initiate connections through the interface that appears first in the routing table (in the example below, eth1).&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs scenario3 issue1.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;You can prevent this issue by configuring static routing for each interface.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Multiple NICs scenario3 issue2.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Category:ZFS and data storage articles]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=JBODs_%26_JBOFs&amp;diff=1608</id>
		<title>JBODs &amp; JBOFs</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=JBODs_%26_JBOFs&amp;diff=1608"/>
		<updated>2023-11-03T10:03:08Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This functionality is available in the &#039;&#039;&#039;Storage Settings &amp;gt; JBODs &amp;amp; JBOFs&#039;&#039;&#039; tab&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is used to obtain more information about the disks in a JBOD or JBOF through external services, e.g. Redfish. For this reason, it is dedicated to disk enclosures with out-of-band management.&lt;br /&gt;
&lt;br /&gt;
The functionality only works on currently supported devices such as &#039;&#039;&#039;VORTEX SHELF&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the case of VORTEX SHELF, the Redfish service is used to gain more information about disks, so an account with this service will be needed. To link an enclosure to the service, click on the &amp;quot;&#039;&#039;&#039;Add device&#039;&#039;&#039;&amp;quot; button. A pop-up with a form will appear. Fill in the form.&lt;br /&gt;
&lt;br /&gt;
To link a device through the service, the following information must be provided:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Name (alias)&#039;&#039;&#039; - Set a name for the enclosure that allows the device to be recognized if there are several machines of the same model.&lt;br /&gt;
*&#039;&#039;&#039;IP address / domain&#039;&#039;&#039; - The domain name or IP address under which the device is reachable on the network.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039; - Enter the number of the port used to communicate with the device through the Redfish service. The default port number is 443. Change as needed.&lt;br /&gt;
*&#039;&#039;&#039;Username&#039;&#039;&#039; - Enter the username to the Redfish service.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039; - Enter the password that’s associated with the user name that’s been entered above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After filling in every field, click the &amp;quot;&#039;&#039;&#039;Add&#039;&#039;&#039;&amp;quot; button. The system will then connect to the service and start scanning all the available disks. This may take some time: each disk takes a while to scan, so the more disks an enclosure contains, the longer the scan takes. After all the disks are scanned, the information will be available in the disk details section. Additional data such as:&lt;br /&gt;
&lt;br /&gt;
*Name of the enclosure in which the disk is located,&lt;br /&gt;
*Number of the slot in which the disk is located&lt;br /&gt;
&lt;br /&gt;
will also be displayed elsewhere in the interface (e.g., in the pool&#039;s disk groups, the pool wizard, etc.).&lt;br /&gt;
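To illustrate the kind of data the system retrieves, the Redfish service can also be queried manually – a sketch with curl, where the IP address, credentials, and the enclosure&#039;s exact resource layout are assumptions (only the /redfish/v1 service root is defined by the Redfish standard):&lt;br /&gt;

```shell
# Redfish exposes a standard service root at /redfish/v1 over HTTPS (default port 443).
# -k skips certificate verification, since enclosure BMCs often use self-signed certs.
curl -k -u admin:password https://192.168.0.50/redfish/v1/

# The chassis collection typically leads to the drive inventory of the enclosure
curl -k -u admin:password https://192.168.0.50/redfish/v1/Chassis
```

The credentials are the same ones entered in the &amp;quot;&#039;&#039;&#039;Add device&#039;&#039;&#039;&amp;quot; form.&lt;br /&gt;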
&lt;br /&gt;
&#039;&#039;&#039;NOTE!&#039;&#039;&#039; When the connection status changes, a rescan of all disks is required. This occurs, for example:&lt;br /&gt;
&lt;br /&gt;
*When a device&#039;s configuration changes,&lt;br /&gt;
*When the system is restarted,&lt;br /&gt;
*After a network reconnection (when the connection has been lost), etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The connection status of the enclosure is displayed in the table in the &amp;quot;&#039;&#039;&#039;JBODs &amp;amp; JBOFs&#039;&#039;&#039;&amp;quot; tab at all times. Next to the connection status, a power state shows whether the device is turned on.&amp;lt;br/&amp;gt;Every device that has been added can be edited, removed from the table, or selected to display its details.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To do any of the above, use the context menu.&amp;lt;br/&amp;gt;The “&#039;&#039;&#039;Edit&#039;&#039;&#039;” option allows changing the device’s data or credentials.&amp;lt;br/&amp;gt;The “&#039;&#039;&#039;Details&#039;&#039;&#039;” option shows more information about an enclosure such as:&lt;br /&gt;
&lt;br /&gt;
*Name (alias)&lt;br /&gt;
*Model&lt;br /&gt;
*Vendor name&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The “&#039;&#039;&#039;Remove&#039;&#039;&#039;” option removes a device from the table and disconnects it from the external service. Information provided by the service will no longer be displayed after the device is removed. In some cases, the option to turn on the LED for disks in the JBOD/JBOF may also become disabled.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/zx/index.php?title=FC_Public_Group&amp;diff=1606</id>
		<title>FC Public Group</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/zx/index.php?title=FC_Public_Group&amp;diff=1606"/>
		<updated>2023-11-03T10:03:08Z</updated>

		<summary type="html">&lt;p&gt;Pa-P: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This functionality is available in: &#039;&#039;&#039;Storage &amp;gt; FC Targets &amp;gt; Fibre Channel groups &amp;gt; Public group&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= What is an FC public group =&lt;br /&gt;
&lt;br /&gt;
A public group is a group of one or more Fibre Channel ports. Fibre Channel port groups help you organize and manage LUN mappings more easily. FC Public Group gives access to assigned volumes to any initiator that is able to connect to the FC target ports assigned to a given group.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT!&#039;&#039;&#039; &#039;&#039;&#039;It is recommended to use FC Public Group only with peer-to-peer FC connections to avoid unexpected behavior.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;A public group is present on a pool by default and cannot be removed or created. Initially, no volumes or targets are assigned to this group, so nothing is available until it is configured manually. Devices available in such a group are visible globally, and there is no need to configure a Fibre Channel initiator by assigning its WWN in the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Functionalities within the public group =&lt;br /&gt;
&lt;br /&gt;
Within the public group, you can:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Deactivate group&#039;&#039;&#039; - revokes access to the data in the group.&lt;br /&gt;
*&#039;&#039;&#039;Add target&#039;&#039;&#039; - specifies access to a public group for a given target.&lt;br /&gt;
*&#039;&#039;&#039;Attach an existing zvol&#039;&#039;&#039; - attaches an existing zvol to a public group.&lt;br /&gt;
*&#039;&#039;&#039;Add a new zvol&#039;&#039;&#039; - creates a new zvol. The options available here are described in this [[Add zvol|article]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;An existing zvol can be edited, deleted, detached from the group, or added to a backup task; for Fibre Channel targets, only detaching from the group is possible.&lt;br /&gt;
&lt;br /&gt;
While this feature is convenient, as it removes the need to configure the initiators that will have access to the group resources, it can cause unwanted side effects in configurations that utilize an FC switch. Note that using an FC public group may lead to the following on the initiator&#039;s side:&lt;br /&gt;
&lt;br /&gt;
#An unauthorized system connected to the same public group can gain access to FC resources.&lt;br /&gt;
#Unpredicted system states, e.g. creating a multipath.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;Note!&amp;amp;nbsp;&#039;&#039;&#039; You cannot add a target to a public group when it is assigned to another Fibre Channel group. The same target cannot be assigned to two groups that share a set of initiators.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Considerations and risks =&lt;br /&gt;
&lt;br /&gt;
When an FC switch is used and any of the server&#039;s FC ports in initiator mode is also connected to that switch, the server will connect to the volumes added to the public group. This happens because the public group allows connections from any initiator, including the server&#039;s own initiator-mode ports connected to the same FC switch. In other words, the server connects to itself, and the volumes it exports appear as disks connected to the server – in effect, a loopback. Moreover, depending on how many initiator-mode ports are connected to the same switch as the target ports, the same volume might be connected to the server multiple times, creating a multipath configuration if that feature is enabled. The safest way to avoid this situation is to use the public group only for ports that are directly connected, without a switch. FC switches usually allow the configuration of zones that describe which switch ports are logically interconnected; zoning can also resolve the issues described above, but it merely moves the configuration effort from the FC group&#039;s initiator configuration to the FC switch&#039;s zone configuration.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information about [[FC group|Fibre Channel groups]] can be found [[FC group|here]].&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Pa-P</name></author>
	</entry>
</feed>