<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.scalelogicinc.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ma-W</id>
	<title>Scalelogic Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.scalelogicinc.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ma-W"/>
	<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/Special:Contributions/Ma-W"/>
	<updated>2026-05-05T04:14:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.5</generator>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up32_Release_Notes&amp;diff=1506</id>
		<title>Scale Logic NX ver.1.0 up32 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up32_Release_Notes&amp;diff=1506"/>
		<updated>2025-08-06T14:37:15Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2025-07-23&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 61683&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== NVMe over Fabrics (NVMe-oF) Initiator with Multipath I/O functionality. ===&lt;br /&gt;
&lt;br /&gt;
=== Partition labeling for NVMe Drives. ===&lt;br /&gt;
&lt;br /&gt;
=== VMware VAAI support for NFS protocol. ===&lt;br /&gt;
&lt;br /&gt;
=== Storage Pool initialization feature. ===&lt;br /&gt;
&lt;br /&gt;
=== Power button settings available in Console tools -&amp;gt; Add-ons. ===&lt;br /&gt;
&lt;br /&gt;
=== Configurable TRIM support for thick-provisioned zvols. ===&lt;br /&gt;
&lt;br /&gt;
=== Network statistics for bonded RDMA interfaces available in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
=== Display of support license information in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Linux kernel (v5.15.179). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM573xx and Broadcom BCM574xx controllers driver (bnxt_en, v1.10.3-232.0.155.5). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 100GbE Network Controller driver (ice, v1.14.13). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE Network Controller driver (i40e, v2.25.11). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10GbE Network Controller driver (ixgbe, v5.20.10). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE Network Controller driver (igb, v5.16.11). ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.3). ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox firmware update driver (mft, v4.31.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA 9600-16e 12Gb Tri-Mode Storage Adapter driver (mpi3mr, v8.12.1.0.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA Adapter driver (mpt3sas, v52.00.00.00). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.731.01.00). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 24Gb/s GT HBA Adapter driver (esas6hba, v1.01.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s GT HBA Adapter driver (esas5hba, v1.09.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s HBA Adapter driver (esas4hba, v1.55.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v2.11.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 8Gb Fibre Channel Adapter driver (celerity8fc, v2.28.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartHBA and SmartRAID Adapter driver (smartpqi, v2.1.32-035). ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool v4.23. ===&lt;br /&gt;
&lt;br /&gt;
=== Open-iSCSI Initiator (open-iscsi, v2.1.10). ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== The system clock and IPMI time are not synchronized. ===&lt;br /&gt;
&lt;br /&gt;
=== The SED feature does not work simultaneously with Samsung and Micron drives on the same system. ===&lt;br /&gt;
&lt;br /&gt;
=== The Replacement drive status is not cleared from the WebGUI after the replacement is complete. ===&lt;br /&gt;
&lt;br /&gt;
=== Details of the VMware datastores list are not retrieved from VMware vCenter/vSphere and are not shown in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
=== Storage sizes exceeding 1PB are not displayed correctly on the WebGUI and system console. ===&lt;br /&gt;
&lt;br /&gt;
=== (SU90917): Vulnerability due to enabled NTP mode 6 queries. ===&lt;br /&gt;
&lt;br /&gt;
=== (SU90998): Workgroup name containing &amp;quot;_&amp;quot; character is not accepted during AD server authentication. ===&lt;br /&gt;
&lt;br /&gt;
=== Rollback performed on a mounted dataset causes I/O blocking. ===&lt;br /&gt;
&lt;br /&gt;
=== Samba with Active Directory round-robin configuration causes unstable behavior. ===&lt;br /&gt;
&lt;br /&gt;
=== Changing the HTTPS port does not update the automatic redirection from HTTP port 80. ===&lt;br /&gt;
&lt;br /&gt;
=== Removing disks from pools created before enabling Multipath I/O fails. ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is the default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a large performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, synchronous data recording is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use asynchronous data recording, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, minor problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the web browser’s cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
Using physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when using virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux (64-bit)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings -&amp;gt; Options -&amp;gt; Advanced - General -&amp;gt; Configuration -&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - In the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management -&amp;gt; [disk] -&amp;gt; Properties -&amp;gt; Tools. As the operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
In case of using Windows 2008, there is no possibility to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, use (if possible) a single zvol on a zpool dedicated to deduplication, and delete the zpool that includes that single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 1MB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.66B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; so for every extra 1TB of storage, the system needs an extra 1.66GB of RAM.&lt;br /&gt;
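The worked examples above can be reproduced with a short script. This is a sketch of the stated worst-case formula only; the function name is ours, and the 320-byte DDT entry size and the 0.75/0.25 reservation factors are taken directly from the formula above:

```python
def dedup_ram_bytes(zvol_bytes, block_bytes, ddt_entry=320,
                    arc_fraction=0.75, ddt_fraction=0.25):
    """Worst-case RAM needed for deduplication:
    (zvol size / volume block size) * DDT entry size,
    scaled by the ARC and DDT reservation fractions."""
    return zvol_bytes / block_bytes * ddt_entry / arc_fraction / ddt_fraction

TIB = 1024 ** 4  # 1099511627776 bytes, the 1TB used in the examples
GIB = 1024 ** 3

# 1TB of data at 64KB, 128KB and 1MB volume block sizes
for block in (64 * 1024, 128 * 1024, 1024 * 1024):
    ram = dedup_ram_bytes(TIB, block)
    print(f"{block // 1024}KB block: {ram / GIB:.2f}GB RAM per extra 1TB")
```

Running it reproduces the 26.67GB and 13.33GB figures from the examples (the 1MB case rounds to 1.67GB rather than the truncated 1.66GB quoted above).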
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, when data is completely unique and will not be deduplicated. For deduplicable data, the need for RAM decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD and deduplication will work with good performance using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system format block size with the zvol volume-block-size. A simple example is the Windows NTFS file system with its default format block size of 4k, while the zvol default volume-block-size is 128k. With defaults like these, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume-block-size stays at 128k, a deduplication match can fail at most once, because a file can be aligned at only 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To achieve matching for all files with efficient memory usage, NTFS must use a 64k format block size and the zvol volume-block-size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
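The alignment arithmetic above can be sanity-checked with a tiny hypothetical helper (block sizes in bytes; the helper name is ours, not part of the product):

```python
def alignment_positions(zvol_block, fs_block):
    """Number of distinct positions a file system block can occupy
    inside one zvol block; deduplication only matches blocks that
    share the same alignment."""
    assert zvol_block % fs_block == 0, "fs block must divide zvol block"
    return zvol_block // fs_block

print(alignment_positions(128 * 1024, 4 * 1024))   # 4k NTFS on 128k zvol
print(alignment_positions(128 * 1024, 64 * 1024))  # 64k NTFS on 128k zvol
print(alignment_positions(64 * 1024, 64 * 1024))   # matched sizes
```

With matched sizes there is exactly one possible alignment, which is why NTFS=64k with zvol=64k dedupes every file.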
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because ZFS aligns the data blocks natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; it will show the new physical data space. Comparing the data size from the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to install the system on disks containing LVM metadata; those disks must be cleared before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from a temporary medium such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. In order to reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;57981809664B / 1024 / 1024 / 1024 = 54GB&lt;br /&gt;
&lt;br /&gt;
Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by the l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;7247726208B / 1024 / 1024 / 1024 = 6.75GB&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding and detaching disks from groups can cause the next detach operation to fail, while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the following error: &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks, but not on the list of available disks. In this case, click the rescan button located in the add-group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks which were part of the Zpool become immediately available. Before you can reuse disks which were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited ability of the GUI to display a large number of elements ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider configuration works unstably.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota on a dataset prevents removing files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some WebGUI forms to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using Scale Logic NX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name is changed, e.g. during export and import with a different name, the FSIDs for NFS shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used to create a Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after updating to up11, the system may require re-activation. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using Scale Logic NX as a Hyper-V or VMware guest, ALB, Round-Robin and Round-Robin with RDMA bonding are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt file transfers. Please enable the quota before using the dataset in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Downgrading to an earlier version is not possible. If you need to go back to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown or power-off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in, and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created with the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of NX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which is a common factor in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message, type in the password, and choose the option Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, this can cause instability of the FC connection with client machines. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations the system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In case of a large number of disks, a zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A user gets deleted from the share access list after their username is changed on the AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed, NX has to be informed about it. However, using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX leads to a situation where the renamed user gets deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The following special characters: #&amp;amp;nbsp;: + cannot be used in a password for the e-mail notification feature. They can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The e-mail alert configuration in LSI Storage Authority Software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on an NFS access list and you would like to move it to another access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Next, edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on a zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load may increase, especially waiting I/O, causing unstable operation. Expanding the pool space is recommended.&lt;br /&gt;
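As an illustration of the 80% guideline above, a simple capacity check could be sketched as follows. This is a hypothetical helper, not part of NX; the function names and threshold handling are assumptions for the example only.

```python
# Flag a pool whose used space crosses the 80% threshold mentioned above.
# Helper names and threshold handling are illustrative only.
def pool_usage_percent(used_bytes: int, total_bytes: int) -> float:
    """Return used space as a percentage of total pool capacity."""
    return 100.0 * used_bytes / total_bytes

def needs_expansion(used_bytes: int, total_bytes: int, threshold: float = 80.0) -> bool:
    """True when usage exceeds the threshold and expanding the pool is advised."""
    return pool_usage_percent(used_bytes, total_bytes) > threshold

if __name__ == "__main__":
    # Example: 850GB used out of a 1000GB pool crosses the 80% mark.
    used, total = 850 * 1024**3, 1000 * 1024**3
    print(f"usage: {pool_usage_percent(used, total):.1f}%, expand: {needs_expansion(used, total)}")
```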
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system is performing actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the system shows an old value taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate a very high load, rendering the system unstable. We recommend using at least 64KB sizes for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, which NX interprets as being on battery. When the timeout expires, NX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updates from a previous version), the system cannot boot in UEFI mode if your boot medium is controlled by an LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is to change the boot mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After updating to NX up29, write-back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29, we disable the write-back cache on all HDD disks by default; we do not disable it on SSD drives or hardware RAID volumes. It can nevertheless happen that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of a write-back cache, so please make sure it&#039;s enabled after the update. Open TUI and invoke Extended tools by pressing CTRL+ALT+X, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting a JBOD with the write-back cache enabled on disks can lead to data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29, we disable the write-back cache on HDD disks by default during the bootup procedure. We do not disable the write-back cache on SSD drives or hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With a large number of snapshots (more than a few thousand) there may be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, populating the list in the WebGUI may take from a few minutes up to several dozen minutes.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With the gzip-9 compression algorithm, the system can become unstable when copying data to storage. Use this compression algorithm only in environments with very powerful processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 500 zvols in the system, the responsiveness of the WebGUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To check the internet connection, try to get the date and time from an NTP server using the WebGUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot; This message can be safely ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in TUI: open the NX TUI and press CTRL+ALT+W to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message and type in the password. Choose the option Kernel module parameters, select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of the NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
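&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Where shell access is available, the effective zfs_vdev_max_active value can also be verified through the standard OpenZFS module parameter interface (a sketch under the assumption that NX exposes the stock sysfs path; the path may differ on NX):&lt;br /&gt;
&amp;lt;pre&amp;gt;  # read the current per-vdev queue depth&lt;br /&gt;
  cat /sys/module/zfs/parameters/zfs_vdev_max_active&lt;br /&gt;
  # set it to 1000 until the next reboot (the TUI change makes it persistent)&lt;br /&gt;
  echo 1000 &amp;gt; /sys/module/zfs/parameters/zfs_vdev_max_active&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;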
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX enables the use of drives with a verified SED configuration only.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check whether a given device is supported, see the HCL available on the Scale Logic webpage ([https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX/]). To find devices for which the SED functionality is supported, enter “SED” in the keyword field of the &amp;quot;Search by component&amp;quot; form on the Scale Logic HCL page and click the search button (loupe icon).&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If load or iowait increases drastically after enabling the autotrim functionality on zpools, consider disabling it. We recommend running the &amp;quot;Trim&amp;quot; function manually, on demand and at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; For stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;Virtual hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through the IPMI settings, in Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck the &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in the combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - this combo box manages the second virtual disk.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If a given subdomain is unavailable, information about its status will not be updated in the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;Problems with user and group synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If problems occur, it’s recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, using the “Rescan” button in the storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in Scale Logic NX (usually eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
=== The TDB UID/GIDs mapping does not function properly. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Workarounds:&lt;br /&gt;
&lt;br /&gt;
*Single-Domain Environments:&lt;br /&gt;
**Use the &amp;quot;autorid&amp;quot; option in the &amp;quot;ID mapping backend&amp;quot; settings.&lt;br /&gt;
**Alternatively, use &amp;quot;rid+tdb&amp;quot;:&lt;br /&gt;
**#Connect to the domain.&lt;br /&gt;
**#Navigate to the “Accessed domains” section.&lt;br /&gt;
**#Click the “Edit domain settings” button.&lt;br /&gt;
**#Set the UID/GID mapping to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range (e.g., 2,000,000 to 2,999,999).&lt;br /&gt;
&lt;br /&gt;
Note: The range 1,000,000 to 1,999,999 is reserved.&lt;br /&gt;
&lt;br /&gt;
*Multi-Domain Environments:&lt;br /&gt;
**The &amp;quot;autorid&amp;quot; option is not supported. Use one of the following:&lt;br /&gt;
**#&amp;quot;rid+tdb&amp;quot;&lt;br /&gt;
**#&amp;quot;ad (with RFC2307 schema) + tdb&amp;quot;&lt;br /&gt;
**Steps for configuration:&lt;br /&gt;
&amp;lt;ol style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Connect to the domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Navigate to the “Accessed domains” section.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Click the “Edit domain settings” button for each domain.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Set the UID/GIDs mapping to &amp;quot;rid&amp;quot; for all domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Define unique Min ID and Max ID ranges for each domain (e.g., 2,000,000 to 2,999,999 for the first domain, 3,000,000 to 3,999,999 for the second domain, etc.).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== No Warning for Duplicate IP Addresses on Network Interfaces ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; No warning or error message is displayed if two network interfaces are configured with the same IP address. This can lead to network conflicts or connectivity issues. Users must manually verify configurations to avoid duplicates.&lt;br /&gt;
&lt;br /&gt;
=== No LED Management for aacraid Storage Controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; LED management is no longer supported for storage controllers using the aacraid driver, aligning with the manufacturer’s decision to discontinue these controllers. Users depending on LED indicators should explore alternative monitoring solutions or consider upgrading to supported hardware.&lt;br /&gt;
&lt;br /&gt;
=== LED Blinking Not Functional on NVMe Drives in Supermicro X12 Servers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; On Supermicro X12 servers, LED blinking functionality for NVMe drives is not operational. Users should rely on alternative methods to identify and manage drives.&lt;br /&gt;
&lt;br /&gt;
=== Web Server Settings in Maxview Storage Manager Not Preserved After Restart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Changes made to the Web server settings in Maxview Storage Manager revert to default values after a server restart. Custom configurations are lost upon reboot. This issue will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Unnecessary dmesg Entries After Zpool Export/Import ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Following a zpool export and import, dmesg may show entries such as &amp;quot;debugfs: Directory &#039;zdX&#039; with parent &#039;block&#039; already present!&amp;quot; While these entries do not affect functionality, they will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Discontinued IDE Disk Support in Scale Logic NX Up31 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In Scale Logic NX Up31, IDE disk support has been removed. Older servers or virtual machines relying on IDE disks may experience compatibility issues or failures. We recommend migrating to supported storage solutions to avoid disruptions. Future releases will not reintroduce IDE disk support.&lt;br /&gt;
&lt;br /&gt;
=== Consider Reducing Volume Block Size to 16KB for High Random Workloads ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; For workloads with high levels of random I/O, reducing the iSCSI volume block size to 16KB can improve performance. Users experiencing performance challenges with random workloads should consider this tuning option.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up32_Release_Notes&amp;diff=1505</id>
		<title>Scale Logic NX ver.1.0 up32 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up32_Release_Notes&amp;diff=1505"/>
		<updated>2025-08-06T14:32:39Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2025-07-23&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 61683&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== NVMe over Fabrics (NVMe-oF) Initiator with Multipath I/O functionality. ===&lt;br /&gt;
&lt;br /&gt;
=== Partition labeling for NVMe Drives. ===&lt;br /&gt;
&lt;br /&gt;
=== VMware VAAI support for NFS protocol. ===&lt;br /&gt;
&lt;br /&gt;
=== Storage Pool initialization feature. ===&lt;br /&gt;
&lt;br /&gt;
=== Power button settings available in Console tools -&amp;gt; Add-ons. ===&lt;br /&gt;
&lt;br /&gt;
=== Configurable TRIM support for thick-provisioned zvols. ===&lt;br /&gt;
&lt;br /&gt;
=== Network statistics for bonded RDMA interfaces available in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
=== Display of support license information in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Linux kernel (v5.15.179). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM573xx and Broadcom BCM574xx controllers driver (bnxt_en, v1.10.3-232.0.155.5). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 100GbE Network Controller driver (ice, v1.14.13). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE Network Controller driver (i40e, v2.25.11). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10GbE Network Controller driver (ixgbe, v5.20.10). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE Network Controller driver (igb, v5.16.11). ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.3). ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox firmware update driver (mft, v4.31.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA 9600-16e 12Gb Tri-Mode Storage Adapter driver (mpi3mr, v8.12.1.0.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA Adapter driver (mpt3sas, v52.00.00.00). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.731.01.00). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 24Gb/s GT HBA Adapter driver (esas6hba, v1.01.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s GT HBA Adapter driver (esas5hba, v1.09.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s HBA Adapter driver (esas4hba, v1.55.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v2.11.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 8Gb Fibre Channel Adapter driver (celerity8fc, v2.28.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartHBA and SmartRAID Adapter driver (smartpqi, v2.1.32-035). ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool v4.23. ===&lt;br /&gt;
&lt;br /&gt;
=== Open-iSCSI Initiator (open-iscsi, v2.1.10). ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== The system clock and IPMI time are not synchronized. ===&lt;br /&gt;
&lt;br /&gt;
=== The SED feature does not work simultaneously with Samsung and Micron drives on the same system. ===&lt;br /&gt;
&lt;br /&gt;
=== The Replacement drive status is not cleared from the WebGUI after the replacement is complete. ===&lt;br /&gt;
&lt;br /&gt;
=== Details of VMware datastores list are not retrieved from VMware vCenter/vSphere and not shown in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
=== Storage sizes exceeding 1PB are not displayed correctly on the WebGUI and system console. ===&lt;br /&gt;
&lt;br /&gt;
=== (SU90917): Vulnerability due to enabled NTP mode 6 queries. ===&lt;br /&gt;
&lt;br /&gt;
=== (SU90998): Workgroup name containing &amp;quot;_&amp;quot; character is not accepted during AD server authentication. ===&lt;br /&gt;
&lt;br /&gt;
=== Rollback performed on a mounted dataset causes I/O blocking. ===&lt;br /&gt;
&lt;br /&gt;
=== Samba with Active Directory round-robin configuration causes unstable behavior. ===&lt;br /&gt;
&lt;br /&gt;
=== Changing the HTTPS port does not update the automatic redirection from HTTP port 80. ===&lt;br /&gt;
&lt;br /&gt;
=== Removing disks from pools created before enabling Multipath I/O fails. ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance since all operations are written and flushed directly to persistent storage. When using sync=always, we strongly recommend using mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use the Asynchronous data record, but in that case a UPS is strongly recommended.&lt;br /&gt;
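&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Where ZFS command-line access is available, the sync policy can be inspected and changed per zvol with the standard OpenZFS commands (a sketch; the pool and zvol names below are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;  # show the current sync policy of a zvol&lt;br /&gt;
  zfs get sync Pool-0/zvol00&lt;br /&gt;
  # safest setting (default): flush every write to stable storage&lt;br /&gt;
  zfs set sync=always Pool-0/zvol00&lt;br /&gt;
  # faster, but up to ~5 seconds of cached writes can be lost on power failure&lt;br /&gt;
  zfs set sync=disabled Pool-0/zvol00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;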
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk&amp;amp;nbsp;: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID&amp;amp;nbsp;: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load in the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In the case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;amp;cmd=displayKC&amp;amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
In the case of Windows 2008, there is no way to reclaim the space released by deleted data on thin-provisioned LUNs.&lt;br /&gt;
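&lt;br /&gt;
The registry steps above for Windows 2012 can also be performed with the built-in fsutil tool (a sketch; run from an elevated command prompt):&lt;br /&gt;
&amp;lt;pre&amp;gt;  rem show the current delete notification (reclaim) state&lt;br /&gt;
  fsutil behavior query disabledeletenotify&lt;br /&gt;
  rem disable delete notification (equivalent to DisableDeleteNotification = 1)&lt;br /&gt;
  fsutil behavior set disabledeletenotify 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;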
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, please use (if possible) a single zvol on zpools dedicated to deduplication and delete the zpool which includes the single zvol.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 
/ 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 26.67GB RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 
14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 13.33GB RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;Example for 1TB data and 1MB Volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706,66B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 1789569706,66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;so for every extra 1TB of storage, system needs extra 1.66GB RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
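The DDT RAM calculation above can be sketched in code. This is an illustrative calculation only (the function name is ours, not part of the product); the constants — 320 B per DDT entry, 75% of RAM reserved for the ARC, and 25% of the ARC reserved for the DDT — come directly from the formula in these notes:

```python
# Worst-case RAM needed for the ZFS deduplication table (DDT),
# per the release-notes formula. Assumes completely unique data.
DDT_ENTRY_SIZE = 320        # bytes per entry in the DDT table
ARC_FRACTION = 0.75         # share of RAM reserved for the ARC
DDT_IN_ARC_FRACTION = 0.25  # share of the ARC reserved for the DDT

def dedup_ram_bytes(zvol_size_bytes: int, volume_block_size: int) -> float:
    """Worst-case RAM (bytes) to deduplicate completely unique data."""
    entries = zvol_size_bytes / volume_block_size
    return entries * DDT_ENTRY_SIZE / ARC_FRACTION / DDT_IN_ARC_FRACTION

# Reproduce the 1TB examples from the notes:
one_tib = 1099511627776
for block in (65536, 131072, 1048576):  # 64KB, 128KB, 1MB
    gib = dedup_ram_bytes(one_tib, block) / 1024**3
    print(f"{block // 1024}KB blocks -> {gib:.2f} GB RAM per extra 1TB")
```

The doubling relationship is easy to see here: halving the volume block size doubles the number of DDT entries and therefore doubles the worst-case RAM requirement.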
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, in which the data is completely unique and cannot be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will perform well while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system&#039;s format block size with the zvol volume block size. A simple example is the Windows NTFS file system with its default 4k format block size and the zvol&#039;s default 128k volume block size. With these defaults, deduplication will mostly NOT match, because files can be aligned in 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size stays at 128k, a deduplication match can fail only once, because a file can be aligned in 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To get all files matching with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must equal 64k. Another option is NTFS=32k and zvol=32k, but in that case the deduplication table will be twice as large. That is why NTFS=64k with zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because ZFS aligns the data blocks natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol&#039;s physical size cannot show the deduplication benefit. To verify that deduplication has saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; the scrub will now show the new physical data space. Comparing the data size reported on the storage client side with the growth in data space reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the LOGs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system&#039;s block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows Allocation unit size of NTFS should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to a single Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with a broken write log disk cannot be imported using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When damaged disks are replaced with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one causes the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk becomes unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB, used with deduplication or a read cache, will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by the l2hdr structure / Volume block size&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For an 8KB Volume block size and a 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by the l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;For a 64KB Volume block size and a 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;For a 128KB Volume block size and a 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
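The Read Cache RAM formula above can likewise be sketched in code. This is an illustrative calculation only (the function name is ours); the constants — 4718592 B of reserved size and labels, and 432 B of RAM per cached block for the l2hdr structure — come straight from the worked examples:

```python
# RAM consumed by Read Cache (L2ARC) headers, per the release-notes formula.
RESERVED_AND_LABELS = 4718592  # bytes of the cache device not usable for data
L2HDR_SIZE = 432               # bytes of RAM per cached block (l2hdr struct)

def read_cache_ram_bytes(cache_size_bytes: int, volume_block_size: int) -> float:
    """RAM (bytes) needed to index a Read Cache of the given size."""
    usable = cache_size_bytes - RESERVED_AND_LABELS
    return usable * L2HDR_SIZE / volume_block_size

# Reproduce the 1TB Read Cache examples from the notes:
one_tib = 1099511627776
for block in (8192, 65536, 131072):  # 8KB, 64KB, 128KB
    gib = read_cache_ram_bytes(one_tib, block) / 1024**3
    print(f"{block // 1024}KB blocks, 1TB cache -> {gib:.2f} GB RAM")
```

As with deduplication, the cost scales inversely with the volume block size: smaller blocks mean more cached entries to index, hence more RAM.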
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding disks to and detaching disks from groups can cause a subsequent detach operation to fail while the disk is still shown in the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed in the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes, after removing disks from groups, Spare/Read Cache/Write Log disks are displayed in the list of unassigned disks but not in the list of available disks. In this case, click the rescan button located in the add-group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited ability of the GUI to display a large number of elements ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones, or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider Configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota on a dataset does not allow files to be removed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets whose quota has been exceeded cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to become very slow. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to become slow.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. To upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name changes, e.g. during an export and import under a different name, the FSIDs of the NFS Shares located on that Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used to create a Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, the system may require re-activation after updating to up11. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When running Scale Logic NX as a Hyper-V or VMware guest, the ALB, Round-Robin, and Round-Robin with RDMA bonding modes are not supported. Please use another bonding type.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in VMware guest can cause that deleting VMware snapshot can take long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset hosting a VMware guest that performs many I/O operations can make deleting a VMware snapshot take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Enable quota on the dataset before using it in a production environment, or make sure that no file transfers are active when you enable it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as a Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so returning to an earlier version is not possible. If you need an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to boot from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown, or power off while data is being mirrored to a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, unplug the disk with incomplete data, boot the system, plug the disk back in, and add it to the SW RAID again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, we strongly recommend removing all backup tasks created via the CLI API and re-creating them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of disk failure, please remove the damaged disks from the system before starting the administrative work of replacing them. The order of these actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25, the default Write Cache Log bias for zvols was set to “In Pool (Throughput)”. In the final release of NX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a performance drop in environments with many random-access workloads, which are common in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the NX TUI and press the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message, type in the password, and choose the option Kernel module parameters. Select the qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fiber Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fiber Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held in the system page cache, which then requires manual flushing (in the GUI, use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In case of large number of disks, zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed on the AD server, NX must be informed of the change. However, running the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX causes the renamed user to be deleted from the share’s access list. The new username then needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset with the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The special characters #, : and + cannot be used in the password for the e-mail notification feature. They can break the authentication process.&lt;br /&gt;
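The constraint above can be checked before saving the notification settings. A minimal sketch, assuming the same forbidden set (#, : and +); the password value is an invented example, not a real credential:

```shell
# Reject a candidate notification password containing any of the
# characters known to break SMTP authentication (#, : and +).
pw='backup#2024'   # example value only
case $pw in
  *[#:+]*) result=unsafe ;;   # at least one forbidden character present
  *)       result=ok ;;
esac
echo "$result"
```

With the example value above the check prints `unsafe`, because the password contains `#`.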
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in LSI Storage Authority Software does not work with SMTP servers that require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on one NFS access list and you want to move it to another access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Then edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the remaining space to the maximum. As a result, the system load may increase (especially I/O wait) and the system may become unstable. Expanding the pool is recommended.&lt;br /&gt;
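The 80% rule above is easy to monitor from a script. A minimal sketch; the `capacity` value is a stand-in for the percentage a command such as `zpool list -H -o capacity` would report on a live system:

```shell
# Warn when pool usage crosses the 80% threshold described above.
capacity=85   # percent of pool space used (example value)
if [ "$capacity" -gt 80 ]; then
  status=expand   # above threshold: expanding the pool is recommended
else
  status=ok
fi
echo "pool status: $status"
```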
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some situations the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such cases the WebGUI shows old values taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes or small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value that NX interprets as running on battery. When the timeout elapses, NX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is changing the booting mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have an LDAP database of that size, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to NX up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we disable write-back cache on all HDD disks by default, but we do not disable it on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some hardware RAID volumes is turned off after the update. Hardware RAID volume performance can be heavily impacted by the lack of write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI, invoke Extended tools by pressing CTRL+ALT+X, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to the data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29 we disable write-back cache on HDD disks by default during the boot-up procedure. We do not disable write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, populating the list in the WebGUI may take from a few minutes up to several dozen minutes.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can become unstable while data is being copied to storage. Use this compression algorithm only in environments with very powerful processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 500 zvols in the system, the responsiveness of the Web-GUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI: open the NX TUI and press the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message, type in the password, and choose the option Kernel module parameters. Select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of the NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX can be used only with drives that have a verified SED configuration.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check whether a given device is supported, see the HCL available on the Scale Logic webpage ([https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX/]). To find the devices for which the SED functionality is supported, enter “SED” in the keyword field of the &amp;quot;Search by component&amp;quot; form on the Scale Logic HCL page and click the search button (loupe icon).&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality in the zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If load or iowait increases drastically after enabling the autotrim functionality on the zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand, at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To ensure stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in a combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - settings of the combo box manage the second virtual disk. &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If a given subdomain is unavailable, information about its status will not be updated in the WebGUI (even after pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it is recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, pressing the “Rescan” button in the storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic NX (usually eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
=== The TDB UID/GIDs mapping does not function properly. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Workarounds:&lt;br /&gt;
&lt;br /&gt;
*Single-Domain Environments:&lt;br /&gt;
**Use the &amp;quot;autorid&amp;quot; option in the &amp;quot;ID mapping backend&amp;quot; settings.&lt;br /&gt;
**Alternatively, use &amp;quot;rid+tdb&amp;quot;:&lt;br /&gt;
**#Connect to the domain.&lt;br /&gt;
**#Navigate to the “Accessed domains” section.&lt;br /&gt;
**#Click the “Edit domain settings” button.&lt;br /&gt;
**#Set the UID/GID mapping to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range (e.g., 2,000,000 to 2,999,999).&lt;br /&gt;
&lt;br /&gt;
Note: The range 1,000,000 to 1,999,999 is reserved.&lt;br /&gt;
&lt;br /&gt;
*Multi-Domain Environments:&lt;br /&gt;
**The &amp;quot;autorid&amp;quot; option is not supported. Use one of the following:&lt;br /&gt;
**#&amp;quot;rid+tdb&amp;quot;&lt;br /&gt;
**#&amp;quot;ad (with RFC2307 schema) + tdb&amp;quot;&lt;br /&gt;
**Steps for configuration:&lt;br /&gt;
&amp;lt;ol style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Connect to the domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Navigate to the “Accessed domains” section.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Click the “Edit domain settings” button for each domain.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Set the UID/GIDs mapping to &amp;quot;rid&amp;quot; for all domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Define unique Min ID and Max ID ranges for each domain (e.g., 2,000,000 to 2,999,999 for the first domain, 3,000,000 to 3,999,999 for the second domain, etc.).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
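The per-domain range rules above can be sanity-checked before they are entered in the domain settings. A minimal sketch, assuming example domain names and ranges (not values taken from any system), that flags a range touching the reserved 1,000,000 to 1,999,999 block:

```shell
# Check that no per-domain Min/Max ID range overlaps the reserved
# 1,000,000-1,999,999 block. Domain names and ranges are example values.
ranges="domainA:2000000:2999999
domainB:3000000:3999999"
ok=yes
while IFS=: read -r name min max; do
  # A range overlaps the reserved block if min <= 1999999 and max >= 1000000.
  if [ "$min" -le 1999999 ] && [ "$max" -ge 1000000 ]; then
    ok=no
  fi
done <<EOF
$ranges
EOF
echo "$ok"
```

With the example ranges above, both domains start at or above 2,000,000, so the check passes.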
=== No Warning for Duplicate IP Addresses on Network Interfaces ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; No warning or error message is displayed if two network interfaces are configured with the same IP address. This can lead to network conflicts or connectivity issues. Users must manually verify configurations to avoid duplicates.&lt;br /&gt;
&lt;br /&gt;
=== No LED Management for aacraid Storage Controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; LED management is no longer supported for storage controllers using the aacraid driver, aligning with the manufacturer’s decision to discontinue these controllers. Users depending on LED indicators should explore alternative monitoring solutions or consider upgrading to supported hardware.&lt;br /&gt;
&lt;br /&gt;
=== LED Blinking Not Functional on NVMe Drives in Supermicro X12 Servers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; On Supermicro X12 servers, LED blinking functionality for NVMe drives is not operational. Users should rely on alternative methods to identify and manage drives.&lt;br /&gt;
&lt;br /&gt;
=== Web Server Settings in Maxview Storage Manager Not Preserved After Restart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Changes made to the Web server settings in Maxview Storage Manager revert to default values after a server restart. Custom configurations are lost upon reboot. This issue will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Unnecessary dmesg Entries After Zpool Export/Import ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Following a zpool export and import, dmesg may show entries such as &amp;quot;debugfs: Directory &#039;zdX&#039; with parent &#039;block&#039; already present!&amp;quot; While these entries do not affect functionality, they will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Discontinued IDE Disk Support in Scale Logic NX Up31 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In Scale Logic NX Up31, IDE disk support has been removed. Older servers or virtual machines relying on IDE disks may experience compatibility issues or failures. We recommend migrating to supported storage solutions to avoid disruptions. Future releases will not reintroduce IDE disk support.&lt;br /&gt;
&lt;br /&gt;
=== Consider Reducing Volume Block Size to 16KB for High Random Workloads ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; For workloads with high levels of random I/O, reducing the iSCSI volume block size to 16KB can improve performance. Users experiencing performance challenges with random workloads should consider this tuning option.&lt;br /&gt;
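The reasoning behind the 16KB recommendation can be illustrated with simple arithmetic: a small random write rewrites one full volume block, so the larger the block, the more data is moved per request. The I/O size and block sizes below are illustrative values, not measurements from NX:

```shell
# Write amplification of a 4 KB random write at various volume block sizes.
io=4   # size of a single random write in KB (example value)
for bs in 16 64 128; do
  amp=$((bs / io))   # full blocks rewritten per request, relative to io size
  echo "volblocksize ${bs}KB -> ${amp}x write amplification"
done
```

A 16KB block size rewrites 4x the request size, while a 128KB block rewrites 32x, which is why smaller blocks can help heavily random workloads.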
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up31_ZFS_Upgrade&amp;diff=1503</id>
		<title>Scale Logic NX ver.1.0 up31 ZFS Upgrade</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up31_ZFS_Upgrade&amp;diff=1503"/>
		<updated>2025-01-21T11:36:31Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[File system upgrade]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=ISCSI_connections&amp;diff=1476</id>
		<title>ISCSI connections</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=ISCSI_connections&amp;diff=1476"/>
		<updated>2025-01-21T11:36:31Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Active iSCSI connections]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Critical_IO_Errors&amp;diff=1203</id>
		<title>Critical IO Errors</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Critical_IO_Errors&amp;diff=1203"/>
		<updated>2025-01-21T11:36:31Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Critical system error response policy]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=SMB_connections&amp;diff=1473</id>
		<title>SMB connections</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=SMB_connections&amp;diff=1473"/>
		<updated>2025-01-21T11:36:30Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Active SMB user connections]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Small_blocks_policy_settings&amp;diff=1485</id>
		<title>Small blocks policy settings</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Small_blocks_policy_settings&amp;diff=1485"/>
		<updated>2025-01-21T10:03:37Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;This feature is available only when the special devices group exists in the pool.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Devices assigned to the special devices group are designated for storing specific data, including metadata, indirect blocks of user data, and deduplication tables. Additionally, devices in the special devices group can be configured to handle small file blocks that are not listed above by applying the small blocks policy.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The size of the small block refers to the size of a single block of data configured on the dataset. The maximum size of such blocks can be set for each dataset under the “Record size” option.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Deduplication tables can alternatively be placed in a separate group known as the deduplication group.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;WARNING&#039;&#039;&#039;: If the size of the small block is greater than or equal to the value of record size on the dataset, all the blocks will be offloaded to the special devices group.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The small block size can be configured for the whole Pool or for each dataset separately. Available options range from 4 KiB to 16 MiB.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Configuring a larger small-block size can be helpful when the administrator expects a substantial number of small files that require low access times and should be kept separate from larger files.
Use this option only if the administrator understands what kind of data will be stored in the configured datasets and the maximum size of the offloaded files is exactly known, to avoid accidental data offload and special devices congestion.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
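The offload rule above can be sketched in a few lines (an illustrative model with assumed parameter names, not the actual ZFS implementation):

```python
def offloaded_to_special(block_kib, small_blocks_kib, record_kib):
    """Illustrative model of the small blocks policy (not the ZFS source).

    A block never exceeds the dataset record size, and it is offloaded to
    the special devices group when the configured threshold is at least
    the block size.  A threshold at or above the record size therefore
    offloads every block -- the WARNING case above.
    """
    block_kib = min(block_kib, record_kib)  # blocks are capped at record size
    return small_blocks_kib >= block_kib

print(offloaded_to_special(16, 32, 128))    # small file block: offloaded
print(offloaded_to_special(128, 32, 128))   # full-size block: stays on data vdevs
print(offloaded_to_special(128, 128, 128))  # threshold equals record size: everything offloads
```

With a 128 KiB record size and a 32 KiB threshold, only blocks of at most 32 KiB land on the special devices; raising the threshold to the record size offloads everything.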
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Remove_WMware_server.png&amp;diff=1501</id>
		<title>File:Remove WMware server.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Remove_WMware_server.png&amp;diff=1501"/>
		<updated>2025-01-21T10:00:58Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:VMware_server_details.png&amp;diff=1500</id>
		<title>File:VMware server details.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:VMware_server_details.png&amp;diff=1500"/>
		<updated>2025-01-21T10:00:07Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Details_of_WMware_server.png&amp;diff=1499</id>
		<title>File:Details of WMware server.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Details_of_WMware_server.png&amp;diff=1499"/>
		<updated>2025-01-21T09:59:31Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Add_VMware_server_form.png&amp;diff=1498</id>
		<title>File:Add VMware server form.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Add_VMware_server_form.png&amp;diff=1498"/>
		<updated>2025-01-21T09:59:04Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:V-center_add_server_option.png&amp;diff=1497</id>
		<title>File:V-center add server option.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:V-center_add_server_option.png&amp;diff=1497"/>
		<updated>2025-01-21T09:58:39Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Upgrade_file_system_confirmation.png&amp;diff=1496</id>
		<title>File:Upgrade file system confirmation.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Upgrade_file_system_confirmation.png&amp;diff=1496"/>
		<updated>2025-01-21T09:55:57Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Upgrade_file_system_option.png&amp;diff=1495</id>
		<title>File:Upgrade file system option.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Upgrade_file_system_option.png&amp;diff=1495"/>
		<updated>2025-01-21T09:55:24Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Remove_server_option.png&amp;diff=1494</id>
		<title>File:Remove server option.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Remove_server_option.png&amp;diff=1494"/>
		<updated>2025-01-21T09:54:20Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Add_server_form.png&amp;diff=1493</id>
		<title>File:Add server form.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Add_server_form.png&amp;diff=1493"/>
		<updated>2025-01-21T09:53:49Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Add_server_option.png&amp;diff=1492</id>
		<title>File:Add server option.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Add_server_option.png&amp;diff=1492"/>
		<updated>2025-01-21T09:53:21Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=1491</id>
		<title>Backup &amp; Recovery destination server</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=1491"/>
		<updated>2025-01-21T09:40:30Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;Remote servers that are added as snapshot targets are called Destination servers. To add one, click the “Add server” button.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server option.png|none|800px|Add server option.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address/domain&#039;&#039;&#039;: provide the IP address or hostname associated with your Destination server.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 40000 is set by default. Do not change this value unless explicitly requested by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: to establish the connection, provide the administrator password for the remote server.&lt;br /&gt;
*&#039;&#039;&#039;Description&#039;&#039;&#039;: This field is optional. Provide a short description of your server to identify it in the future.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server form.png|none|450px|Add server form.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Destination servers configured here will be available during the Destination configuration step of the Replication task wizard.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;If you wish to remove the Destination server click the “Remove” button. &#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Remove server option.png|none|800px|Remove server option.png]]&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_Esx_server&amp;diff=1489</id>
		<title>Backup &amp; Recovery Esx server</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_Esx_server&amp;diff=1489"/>
		<updated>2025-01-21T09:40:30Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;The Snapshot feature can be integrated with VMware ESX/vSphere snapshots. To integrate a new VMware server, click the “Add server” option.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[File:V-center add server option.png|800px|V-center add server option.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address&#039;&#039;&#039;: provide the IP address or hostname associated with your VMware server.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 443 is set by default. Do not change this value unless explicitly requested by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Username&#039;&#039;&#039;: provide the username of the VMware account to use for the integration. Using the root user is recommended.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: to establish the connection, provide the password for the VMware user account specified above.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[File:Add VMware server form.png|450px|Add VMware server form.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;The integrated server will be visible in the table below. The user can browse through the configured datastores and Virtual Machines by going to Options &amp;gt; Details.&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Details of WMware server.png|800px|Details of WMware server.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:VMware server details.png|450px|VMware server details.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;It is also possible to remove the server from integration by clicking the Remove option. &amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Remove WMware server.png|800px|Remove WMware server.png]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_replication_task&amp;diff=763</id>
		<title>Backup &amp; Recovery replication task</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_replication_task&amp;diff=763"/>
		<updated>2025-01-21T09:40:30Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Backup &amp;amp; Recovery: Overview Tabs ==&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Tasks&#039;&#039;&#039;: View the list of all tasks with their current statuses.&lt;br /&gt;
#&#039;&#039;&#039;Destination Servers&#039;&#039;&#039;: View all added destination servers. Use the &#039;&#039;&#039;Add Server&#039;&#039;&#039; button to configure a new server outside of the Backup Task Wizard.&lt;br /&gt;
#&#039;&#039;&#039;vCenter/vSphere Servers&#039;&#039;&#039;: View all added vCenter/vSphere servers. Use the &#039;&#039;&#039;Add Server&#039;&#039;&#039; button to configure a new server outside of the Backup Task Wizard.&lt;br /&gt;
&lt;br /&gt;
For additional support or detailed guidance, refer to the article [[On- and Off-site Data Protection]].&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
== Backup &amp;amp; Recovery: Creating a Replication Task ==&lt;br /&gt;
&lt;br /&gt;
To create a replication task, navigate to &#039;&#039;&#039;Backup &amp;amp; Recovery&#039;&#039;&#039; and click on the &#039;&#039;&#039;Add Replication Task&#039;&#039;&#039; button. This launches the Backup Task Wizard, which consists of the following steps:&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Source Configuration ===&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Resource Path&#039;&#039;&#039;: Browse and select the ZVOLs or datasets to be backed up. Confirm your selection by clicking &#039;&#039;&#039;Apply&#039;&#039;&#039;.&lt;br /&gt;
#&#039;&#039;&#039;Retention Interval Plan&#039;&#039;&#039;: Specify how often snapshots should be taken and how long they should be retained.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 2: Destination Configuration ===&lt;br /&gt;
&lt;br /&gt;
*If the &#039;&#039;&#039;Toggle Bar&#039;&#039;&#039; is disabled, no destination will be configured. Enable the toggle bar to activate &#039;&#039;&#039;Destination 1&#039;&#039;&#039;.&lt;br /&gt;
*The destination server can be either:&lt;br /&gt;
**&#039;&#039;&#039;Local Server&#039;&#039;&#039;: The same machine as the source.&lt;br /&gt;
**&#039;&#039;&#039;Remote Server&#039;&#039;&#039;: A different server. To configure a remote server:&lt;br /&gt;
**#Select &#039;&#039;&#039;Add New Server&#039;&#039;&#039;.&lt;br /&gt;
**#Provide the following details:&lt;br /&gt;
**#*&#039;&#039;&#039;IP Address/Domain&#039;&#039;&#039;&lt;br /&gt;
**#*&#039;&#039;&#039;Port&#039;&#039;&#039; (default: 40000)&lt;br /&gt;
**#*&#039;&#039;&#039;Password&#039;&#039;&#039;&lt;br /&gt;
**#*&#039;&#039;&#039;Description&#039;&#039;&#039; (optional)&lt;br /&gt;
**#After adding the server, select the appropriate &#039;&#039;&#039;Resource Path&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;Note&#039;&#039;&#039;: &#039;&#039;&#039;The resource path cannot have iSCSI targets attached (for ZVOLs) or shared datasets.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Specify the &#039;&#039;&#039;Retention Interval Plan&#039;&#039;&#039; for the destination.&lt;br /&gt;
*To configure additional destinations, click &#039;&#039;&#039;Add Another Destination&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
For detailed explanations of these options, refer to the article [[On- and Off-site Data Protection]].&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 3: vCenter/vSphere Server Integration ===&lt;br /&gt;
&lt;br /&gt;
*Add a vCenter/vSphere server to enable consistent snapshots.&lt;br /&gt;
&lt;br /&gt;
For detailed instructions, refer to the article [[On- and Off-site Data Protection]].&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 4: Task Properties ===&lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Task Description&#039;&#039;&#039;: Create a custom description for the task.&lt;br /&gt;
#&#039;&#039;&#039;Enable MBuffer&#039;&#039;&#039;: Buffer the data stream on the source and destination to prevent buffer underruns. Configure:&lt;br /&gt;
#*&#039;&#039;&#039;Buffer Size&#039;&#039;&#039;&lt;br /&gt;
#*&#039;&#039;&#039;Rate Limit&#039;&#039;&#039;&lt;br /&gt;
#&#039;&#039;&#039;Send Compressed Data&#039;&#039;&#039;: Enable this option to transfer compressed data directly without decompression, which speeds up the process and reduces network bandwidth usage.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=== Step 5: Summary ===&lt;br /&gt;
&lt;br /&gt;
*Review a summary of the configured settings.&lt;br /&gt;
*Click &#039;&#039;&#039;Add&#039;&#039;&#039; to finalize the task.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Critical_system_error_response_policy&amp;diff=1487</id>
		<title>Critical system error response policy</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Critical_system_error_response_policy&amp;diff=1487"/>
		<updated>2025-01-21T09:40:29Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;A system reboot may be necessary when a critical error is detected. The administrator can choose how each category of error is handled.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Possible critical errors are divided into three categories:&lt;br /&gt;
*&#039;&#039;&#039;ZFS pool I/O suspend&#039;&#039;&#039;: errors from this group are raised in case an uncorrectable I/O failure is encountered during read/write operation to the Pool. The I/O operation is suspended and the system awaits a reboot.&lt;br /&gt;
*&#039;&#039;&#039;Kernel oops or bug&#039;&#039;&#039;: a kernel oops is a deviation from the correct behavior of the Linux kernel that produces an error log. Such an error is not fatal but may endanger the system’s stability; a kernel oops often precedes a kernel panic, which shuts the system down immediately. A kernel bug refers to an internal error in the kernel code. Both put system integrity at risk, so an immediate reboot is highly recommended to avoid unexpected failures.&lt;br /&gt;
*&#039;&#039;&#039;Out-of-memory error&#039;&#039;&#039;: this error, abbreviated as OOM, refers to a state in which no additional memory can be allocated for programs or the operating system. Once it occurs, the system remains unresponsive until memory is freed or added. It is highly recommended to reboot the system as soon as possible.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;For each of the mentioned categories the following behavior patterns can be configured:&lt;br /&gt;
*&#039;&#039;&#039;Immediate&#039;&#039;&#039;: system will reboot the machine immediately after the error occurs (the event will not be recorded in the event viewer).&lt;br /&gt;
*&#039;&#039;&#039;Automatic&#039;&#039;&#039;: the system will restart 30 seconds after the error appears.&lt;br /&gt;
*&#039;&#039;&#039;Manual&#039;&#039;&#039;: system will prompt for manual restart.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
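The three behavior patterns can be modeled as a simple lookup (a sketch only; the mode names and the mapping are assumptions, not the product's code):

```python
# Reboot behavior per response mode, as described above (names assumed).
REBOOT_DELAY_SECONDS = {
    "immediate": 0,      # reboot right away; the event is not recorded
    "automatic": 30,     # reboot 30 seconds after the error appears
    "manual": None,      # wait for an administrator-initiated restart
}

def reboot_delay(mode):
    """Return the reboot delay in seconds, or None when a manual restart is required."""
    return REBOOT_DELAY_SECONDS[mode]

print(reboot_delay("automatic"))  # 30
```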
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Small_blocks_policy_settings&amp;diff=1484</id>
		<title>Small blocks policy settings</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Small_blocks_policy_settings&amp;diff=1484"/>
		<updated>2025-01-21T09:40:29Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;This feature is available only when the special devices group exists in the pool.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Devices assigned to the special devices group are designated for storing specific data, including metadata, indirect blocks of user data, and deduplication tables. Additionally, devices in the special devices group can be configured to handle small file blocks that are not listed above by applying the small blocks policy.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;The size of the small block refers to the size of a single block of data configured on the dataset. The maximum size of such blocks can be set for each dataset under the “Record size” option.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Deduplication tables can alternatively be placed in a separate group known as the deduplication group.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;WARNING&#039;&#039;&#039;: If the size of the small block is greater than or equal to the value of record size on the dataset, all the blocks will be offloaded to the special devices group.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The small block size can be configured for the whole Pool or for each dataset separately. Available options range from 4 KiB to 16 MiB.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Configuring a larger small-block size can be helpful when the administrator expects a substantial number of small files that require low access times and should be kept separate from larger files.
Use this option only if the administrator understands what kind of data will be stored in the configured datasets and the maximum size of the offloaded files is exactly known, to avoid accidental data offload and special devices congestion.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;[[Category:Help_topics]]&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File_system_upgrade&amp;diff=1482</id>
		<title>File system upgrade</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File_system_upgrade&amp;diff=1482"/>
		<updated>2025-01-21T09:40:29Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;After upgrading to a version with a newer ZFS filesystem, the following notification will be displayed upon first access:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;q&amp;gt;Zpools available for ZFS filesystem upgrade Upgrading Zpools to the latest ZFS file system is recommended. Although the file system upgrade is absolutely safe for your data and its integrity and will only take few minutes please be aware that this operation cannot be undone and accessing this zpool data will not be possible with older software versions. In order to upgrade a single Zpool, please use “Upgrade file system&amp;quot; from Zpool&#039;s option menu.&amp;lt;/q&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Additionally, the zpool itself will display the following zpool status:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;q&amp;gt;Some supported features are not enabled on the pool. The pool can still be used but it is recommended to upgrade it in order to fully utilize all system features. Action: Upgrade the pool using &amp;quot;Upgrade file system&amp;quot; in pool options menu. Once this is done, the pool will no longer be accessible by software that does not support new features.&amp;lt;/q&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;As prompted, expand the zpool options and choose &amp;quot;Upgrade file system&amp;quot;:&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Upgrade file system option.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The following window will appear. Type ‘upgrade’ and click the Upgrade button to proceed:&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Upgrade file system confirmation.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Once completed, the system will notify you that the zpool has been updated successfully.&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Zpool_wizard&amp;diff=89</id>
		<title>Zpool wizard</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Zpool_wizard&amp;diff=89"/>
		<updated>2025-01-21T09:40:29Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;A &#039;&#039;&#039;zpool&#039;&#039;&#039; is the foundational storage construct in ZFS. It serves as a logical storage pool that combines multiple physical storage devices (disks) into &#039;&#039;&#039;vdevs&#039;&#039;&#039; (virtual devices), which collectively form the unified zpool. From this zpool, ZFS creates and manages &#039;&#039;&#039;datasets&#039;&#039;&#039; (file systems) and &#039;&#039;&#039;zvols&#039;&#039;&#039; (block storage volumes).&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The zpool wizard is made up of the following steps:&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;1. Add data group&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section provides information about all storage devices connected to the storage server. To add the first Data Group to your Zpool, follow these steps:&lt;br /&gt;
&lt;br /&gt;
#Select the desired disks from the list on the left.&lt;br /&gt;
#Choose the redundancy type.&lt;br /&gt;
#Click the &amp;quot;Add group&amp;quot; button.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The available redundancy options for groups are as follows:&lt;br /&gt;
*&#039;&#039;&#039;Single&#039;&#039;&#039;: Each disk operates as an independent drive with no redundancy.&lt;br /&gt;
*&#039;&#039;&#039;Mirror&#039;&#039;&#039;: All data written to one device in the mirror is automatically replicated to another device, ensuring data redundancy. A minimum of two disks is required to create a mirrored vdev.&lt;br /&gt;
**&#039;&#039;&#039;Mirror (Single Group)&#039;&#039;&#039;: All selected disks will be combined into a single mirrored group.&lt;br /&gt;
**&#039;&#039;&#039;Mirror (Multiple Groups)&#039;&#039;&#039;: The selected disks will be paired into multiple mirrored groups, each consisting of two disks.&lt;br /&gt;
*&#039;&#039;&#039;RAIDZ-1&#039;&#039;&#039;: Allows for the failure of one disk per RAIDZ-1 group without losing data. A minimum of three disks is required for a RAIDZ-1 group.&lt;br /&gt;
*&#039;&#039;&#039;RAIDZ-2&#039;&#039;&#039;: Allows for the failure of two disks per RAIDZ-2 group without losing data. A minimum of four disks is required for a RAIDZ-2 group.&lt;br /&gt;
*&#039;&#039;&#039;RAIDZ-3&#039;&#039;&#039;: Allows for the failure of three disks per RAIDZ-3 group without losing data. A minimum of five disks is required for a RAIDZ-3 group.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To learn more about vdev types, please refer to the following article:&amp;amp;nbsp;[[Redundancy in Disks Groups]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
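The minimum disk counts listed above can be captured in a short validation helper (illustrative only; the names are assumptions, not the wizard's code):

```python
# Minimum number of disks per redundancy type, per the list above.
MIN_DISKS = {
    "single": 1,
    "mirror": 2,
    "raidz1": 3,
    "raidz2": 4,
    "raidz3": 5,
}

def can_create_group(redundancy, selected_disks):
    """Return True when enough disks are selected for the chosen redundancy."""
    return selected_disks >= MIN_DISKS[redundancy]

print(can_create_group("raidz2", 4))  # True: four disks meet the RAIDZ-2 minimum
print(can_create_group("raidz1", 2))  # False: RAIDZ-1 needs at least three disks
```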
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;2. Add write log&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;This feature allows you to configure the write log function using a chosen redundancy level (either a single drive or a mirror). The write log utilizes a separate intent log (SLOG) device. A fast SSD/NVMe should be used for this vdev.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key points to consider:&lt;br /&gt;
*If multiple log devices are specified, write operations are load-balanced between the devices.&lt;br /&gt;
*Log devices can be configured with redundancy by using mirrors to enhance fault tolerance.&lt;br /&gt;
*RAIDZ vdev types are not supported for the intent log.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;This ensures efficient and reliable write operations while leveraging the selected redundancy level.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;3. Add read cache&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;A cache device is used to store frequently accessed storage pool data, providing an additional layer of caching between main memory and disk. These devices cannot be configured as mirrors or RAIDZ groups. A fast SSD/NVMe should be used for this vdev.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key benefits and considerations:&lt;br /&gt;
*Cache devices are particularly useful for &#039;&#039;&#039;read-heavy workloads&#039;&#039;&#039; where the working dataset size exceeds the capacity of main memory.&lt;br /&gt;
*By utilizing cache devices, a larger portion of the working dataset can be served from low-latency storage, improving performance significantly.&lt;br /&gt;
*The greatest performance improvements are seen in workloads characterized by &#039;&#039;&#039;random reads&#039;&#039;&#039; of primarily static content.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Adding a read cache helps enhance performance and reduces latency for storage systems with high read demands.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;4. Add special devices group&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;Special devices are used to store specific types of data, such as metadata or small files, on dedicated storage devices separate from the main data pool. A fast SSD/NVMe drive should be used for this vdev.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key features and benefits:&lt;br /&gt;
*Storing metadata on special devices improves performance for metadata-intensive operations, such as file lookups and directory traversals.&lt;br /&gt;
*Small files below a certain size threshold can also be stored on these devices, enhancing read and write speeds for such workloads.&lt;br /&gt;
*Special devices are particularly beneficial for environments with a large number of small files or high metadata activity.&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;Using special devices optimizes the overall performance of the ZFS pool by offloading critical metadata and small-file operations to faster storage.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
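The OpenZFS equivalent of this step can be sketched as follows (names are placeholders; a mirror is used because losing the special vdev means losing the pool):

```shell
# Add a mirrored special vdev for pool metadata (and optionally small blocks).
zpool add tank special mirror /dev/nvme4n1 /dev/nvme5n1
```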
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;5. Add deduplication group&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;A deduplication group is a dedicated device group used to hold deduplication tables, allowing them to be stored separately from the special device class.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key features and considerations:&lt;br /&gt;
*Storing deduplication tables in a dedicated group improves the efficiency of deduplication processes by isolating them from other metadata operations.&lt;br /&gt;
*This configuration provides flexibility in optimizing storage layout based on workload requirements.&lt;br /&gt;
*Using a deduplication group is particularly beneficial for systems with high deduplication demands, ensuring better performance and management.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;This setup enhances deduplication performance while maintaining a clear separation of metadata and deduplication operations.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
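A command-line sketch of this step under OpenZFS (names are placeholders; mirroring is used for the same fault-tolerance reasons as the special device class):

```shell
# Add a mirrored dedup vdev that holds deduplication tables only.
zpool add tank dedup mirror /dev/nvme6n1 /dev/nvme7n1
```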
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;6. Add spare disks&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;A spare disk is a special pseudo-vdev used to track available spare devices for a zpool.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Using spare disks enhances the reliability of the storage pool by allowing seamless drive replacement and reducing the risk of data loss.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
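The equivalent OpenZFS operation can be sketched as (device names are placeholders):

```shell
# Register hot spares that ZFS can swap in automatically on disk failure.
zpool add tank spare /dev/sdx /dev/sdy
```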
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;7. Configuration&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;During this step, you can configure the Zpool by naming it and enabling additional features if required.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Key configurations:&lt;br /&gt;
*&#039;&#039;&#039;Zpool Name&#039;&#039;&#039;: Assign a unique and descriptive name to the Zpool for easy identification.&lt;br /&gt;
*&#039;&#039;&#039;Enable AutoTRIM&#039;&#039;&#039;: If supported by your devices, enable the AutoTRIM feature to automatically reclaim unused space. AutoTRIM helps optimize the performance and lifespan of SSDs by informing them when blocks are no longer in use.&lt;br /&gt;
*&#039;&#039;&#039;Small blocks policy settings&#039;&#039;&#039;: available if a special device group has been configured in Step 4. When the small block size is set for the pool, all datasets inherit this value by default; it can be changed for a particular dataset in its settings.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Proper configuration ensures that the Zpool is tailored to your needs and operates efficiently.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
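At the OpenZFS level these settings correspond roughly to the commands below (the pool name tank and the dataset tank/media are placeholders; autotrim is a pool property, while the small-blocks threshold is an inheritable dataset property):

```shell
# Enable AutoTRIM for the whole pool.
zpool set autotrim=on tank
# Pool-wide small-blocks default, inherited by all datasets.
zfs set special_small_blocks=64K tank
# Per-dataset override, as described above.
zfs set special_small_blocks=128K tank/media
```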
&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;&#039;&#039;&#039;8. Summary&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;div&amp;gt;This step provides a summary of the zpool configuration, detailing the arrangement of disk groups and their roles within the pool. Click ‘Add zpool’ to create a zpool.&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=1490</id>
		<title>Backup &amp; Recovery destination server</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_destination_server&amp;diff=1490"/>
		<updated>2025-01-20T12:04:40Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;Remote servers that are added as snapshot targets are called Destination servers. To add one, click the “Add server” button.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server option.png|none|800px|Add server option.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address/domain&#039;&#039;&#039;: provide the IP address or hostname associated with your Destination server.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 40000 is set by default. This value should not be changed unless explicitly requested by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: provide the administrator password of the remote server to establish the connection.&lt;br /&gt;
*&#039;&#039;&#039;Description&#039;&#039;&#039;: This field is optional. Provide a short description of your server to identify it in the future.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Add server form.png|none|450px|Add server form.png]]&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Destination servers configured here will be available during the Destination configuration step of the Replication task wizard.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;If you wish to remove a Destination server, click the “Remove” button. &#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;[[File:Remove server option.png|none|800px|Remove server option.png]]&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_Esx_server&amp;diff=1488</id>
		<title>Backup &amp; Recovery Esx server</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Backup_%26_Recovery_Esx_server&amp;diff=1488"/>
		<updated>2025-01-20T12:04:21Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;The snapshot feature can be integrated with VMware ESX/vSphere snapshots. To integrate a new VMware server, click the “Add server” option.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[File:V-center add server option.png|800px|V-center add server option.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;You will be asked for the following information:&lt;br /&gt;
*&#039;&#039;&#039;IP address&#039;&#039;&#039;: provide the IP address or hostname associated with your VMware server.&lt;br /&gt;
*&#039;&#039;&#039;Port&#039;&#039;&#039;: 443 is set by default. This value should not be changed unless explicitly requested by the Support Team.&lt;br /&gt;
*&#039;&#039;&#039;Username&#039;&#039;&#039;: provide the username for the VMware user you wish to use during integration. It is recommended to use the root user for this purpose.&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;: provide the password for the VMware user account specified above to establish the connection.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[File:Add VMware server form.png|450px|Add VMware server form.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;The integrated server will be visible in the table below. The user can browse through the configured datastores and Virtual Machines by going to Options &amp;gt; Details.&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Details of WMware server.png|800px|Details of WMware server.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:VMware server details.png|450px|VMware server details.png]]&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;It is also possible to remove the server from integration by clicking the Remove option. &amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;&#039;&#039;&#039;Remember to remove all Backup tasks related to this server beforehand!&#039;&#039;&#039;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
[[File:Remove WMware server.png|800px|Remove WMware server.png]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Main_Page&amp;diff=1111</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Main_Page&amp;diff=1111"/>
		<updated>2024-12-20T08:47:43Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===== &#039;&#039;Release Notes:&#039;&#039; =====&lt;br /&gt;
&lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList| &lt;br /&gt;
category = Release Notes &lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = descending&lt;br /&gt;
count = 1&lt;br /&gt;
mode = none&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div&amp;gt;[[Release Notes|All release notes »]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Help topics:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 50&lt;br /&gt;
count= 50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;vertical-align: top&amp;quot; | &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = Help topics&lt;br /&gt;
offset = 100&lt;br /&gt;
count=50&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;ZFS and data storage articles:&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
{{&lt;br /&gt;
#tag:DynamicPageList|&lt;br /&gt;
category = ZFS and data storage articles&lt;br /&gt;
count=60&lt;br /&gt;
ordermethod = categorysortkey &lt;br /&gt;
order = ascending&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=File:Ad-structure.png&amp;diff=1440</id>
		<title>File:Ad-structure.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=File:Ad-structure.png&amp;diff=1440"/>
		<updated>2024-12-19T15:14:02Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Ma-W uploaded a new version of &amp;amp;quot;File:Ad-structure.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Redundancy_in_Disks_Groups&amp;diff=1480</id>
		<title>Redundancy in Disks Groups</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Redundancy_in_Disks_Groups&amp;diff=1480"/>
		<updated>2024-12-19T15:12:55Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Disk group redundancy refers to the ability of a zpool to maintain data integrity and availability in the event of disk failures. This is achieved through mirrored or RAID-Z configurations, which store multiple copies of data across different disks. When a disk fails or data corruption is detected, ZFS can use the redundant copies to repair or reconstruct the lost data, ensuring the system continues to operate without data loss.&lt;br /&gt;
&lt;br /&gt;
It is important not to mix different types of data groups (vdevs) inside a storage zpool, as doing so can lead to performance and reliability issues; it is strongly recommended to use only one type of data vdev consistently.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;br/&amp;gt;Data Group redundancy level: 2-way mirror (2 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*The 2-way mirror accepts a single disk failure in a given vdev.&lt;br /&gt;
*The 2-way mirrors can be used for mission critical applications, but it is recommended not to exceed 12 vdevs in a zpool (recommended up to 12 x 2 = 24 disks for mission-critical applications and 24 x 2 = 48 disks for non-mission critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: as a rule, the zpool performance increases with the number of vdevs in the pool. For mission-critical applications using more than 12 groups, it is recommended to use 3-way mirrors, RAID-Z2, or RAID-Z3.&lt;br /&gt;
*For mission critical applications it is not recommended to use HDDs bigger than 4TB.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 3-way mirror (3 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*The 3-way mirror accepts up to two disk failures in a given vdev.&lt;br /&gt;
*3-way mirrors can be used for mission critical applications, but it is recommended not to exceed 16 vdevs in a zpool (recommended up to 16 x 3 = 48 disks for mission critical applications and 24 x 3 = 72 disks for non-mission critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the zpool performance increases with the number of vdevs in a zpool. For mission-critical applications, it is recommended to use RAID-Z3.&lt;br /&gt;
*For mission critical applications it is not recommended to use HDDs bigger than 10TB.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 4-way mirror (4 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*The 4-way mirror accepts up to three disk failures in a given vdev.&lt;br /&gt;
*It is also recommended not to exceed 24 4-way mirror vdevs in a zpool, as damage to a single group results in the destruction of the entire zpool (recommended up to 24 x 4 = 96 disks for mission-critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: as a rule, the zpool performance increases with the number of vdevs in the pool.&lt;br /&gt;
*HDDs bigger than 16TB should be avoided for mission critical applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: RAIDZ-1 (3-8 disks in a group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in a RAID-Z1 vdev.&lt;br /&gt;
*RAID-Z1 accepts one disk failure in a given vdev.&lt;br /&gt;
*The RAID-Z1 can be used for non-mission critical applications and it is not recommended to exceed 8 disks in a vdev. HDDs bigger than 4TB should be avoided.&lt;br /&gt;
*It is also not recommended to exceed 8 RAID-Z1 vdevs in a zpool, as damage to a single group results in the destruction of the entire zpool (recommended up to 8 x 8 = 64 disks for non-mission critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the zpool performance is doubled with 2 x RAID-Z1 vdevs with 4 disks each compared to a single RAID-Z1 vdev with 8 disks.&lt;br /&gt;
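The performance note above can be illustrated with a command-line sketch (pool and device names are placeholders): two smaller RAID-Z1 vdevs are striped together, roughly doubling the IOPS of a single 8-disk RAID-Z1 vdev.

```shell
# Two 4-disk RAID-Z1 vdevs striped into one pool,
# instead of a single 8-disk RAID-Z1 vdev.
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh
```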
&lt;br /&gt;
== &amp;lt;br/&amp;gt;Data Group redundancy level: RAIDZ-2 (4-24 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z2 group.&lt;br /&gt;
*The RAID-Z2 accepts up to two disk failures in a given vdev.&lt;br /&gt;
*The RAID-Z2 can be used for mission-critical applications.&lt;br /&gt;
*It is not recommended to exceed 12 disks in a vdev for mission-critical and 24 disks for non-mission critical applications.&lt;br /&gt;
*It is also not recommended to exceed 16 RAID-Z2 groups in a zpool, as damage to a single group results in the destruction of the entire zpool (recommended up to 16 x 12 = 192 disks for mission-critical applications and 16 x 24 = 384 disks for non-mission critical in a zpool). HDDs bigger than 16 TB should be avoided.&lt;br /&gt;
*If tolerance of three disk failures in a vdev is required, it is recommended to use RAID-Z3.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the pool performance is doubled with 2 x RAID-Z2 vdevs with 6 disks each compared to a single RAID-Z2 with 12 disks.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;br/&amp;gt;Data Group redundancy level: RAIDZ-3 (5-48 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z3 group.&lt;br /&gt;
*The RAID-Z3 accepts up to three disk failures in a given vdev.&lt;br /&gt;
*The RAID-Z3 can be used for mission-critical applications.&lt;br /&gt;
*It is not recommended to exceed 24 disks in a vdev for mission-critical and 48 disks for non-mission critical applications.&lt;br /&gt;
*It is also not recommended to exceed 24 RAID-Z3 groups in a zpool, as damage to a single group results in the destruction of the entire zpool (recommended up to 24 x 24 = 576 disks for mission critical applications and 24 x 48 = 1152 disks for non-mission critical applications in a zpool). HDDs bigger than 16TB should be avoided.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: the zpool performance is doubled with 2 x RAID-Z3 vdevs with 12 disks each compared to a single RAID-Z3 vdev with 24 disks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Write Log redundancy level ==&lt;br /&gt;
&lt;br /&gt;
*It should be configured as a 2-way mirror.&lt;br /&gt;
*When choosing a disk model for the Write Log, make sure to take the endurance parameter into consideration. Selecting a disk classified by the manufacturer as write intensive is strongly recommended.&lt;br /&gt;
*When selecting a disk size for the write log, consider the amount of data that can reach the server within three consecutive ZFS transactions, e.g. based on the network card bandwidth. If the transaction length is set to 5 seconds (default), the write log device should be able to accommodate the data that can be transferred within three transaction groups, i.e. 15 seconds of writing. Using a larger disk does not make sense economically, while a smaller one can become a performance bottleneck during synchronous writes. &#039;&#039;&#039;Practically speaking, 100GB for a write log should be more than enough.&#039;&#039;&#039;&lt;br /&gt;
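The sizing rule above can be sketched as a back-of-the-envelope calculation, assuming a hypothetical 25 GbE data link and the default 5-second transaction groups:

```shell
# ~3 GB/s (25 Gbit/s divided by 8) sustained for three 5-second
# transaction groups, i.e. 15 seconds of writing.
GBIT_PER_S=25
TXG_SECONDS=5
MIN_SLOG_GB=$(( GBIT_PER_S / 8 * 3 * TXG_SECONDS ))
echo "Minimum SLOG capacity: ~${MIN_SLOG_GB} GB"
# prints "Minimum SLOG capacity: ~45 GB"
```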
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Read Cache redundancy level ==&lt;br /&gt;
&lt;br /&gt;
Read Cache disks can only be configured as single disks, but it is possible to configure any number of them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Special devices and deduplication group redundancy level ==&lt;br /&gt;
&lt;br /&gt;
It should be configured as a 2-way mirror.&lt;br /&gt;
&lt;br /&gt;
[[Category:ZFS and data storage articles]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up31_Release_Notes&amp;diff=1478</id>
		<title>Scale Logic NX ver.1.0 up31 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up31_Release_Notes&amp;diff=1478"/>
		<updated>2024-12-19T15:12:55Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2024-12-16&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 58473&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== The iSCSI Target visibility and access to data through specified VIPs. ===&lt;br /&gt;
&lt;br /&gt;
=== Low-level console tools allow users to enable or disable the Transmit Packet Steering (XPS) mechanism for network interfaces (Hardware Configuration menu -&amp;gt; Tuning options -&amp;gt; Network interface options -&amp;gt; network interface -&amp;gt; XPS Transmit Packet Steering). ===&lt;br /&gt;
&lt;br /&gt;
=== Proxmox Qemu Guest Agent now available for improved NX VMs administration. ===&lt;br /&gt;
&lt;br /&gt;
=== Support for SED functionality on Micron drives. ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 24Gb/s GT HBA Adapter driver (esas6hba, v1.00.0f1). ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Linux kernel (v5.15.167). ===&lt;br /&gt;
&lt;br /&gt;
=== ZFS (v2.2.4). ===&lt;br /&gt;
&lt;br /&gt;
=== LSI Storage Authority Software (v008.009.009.000, please note that email notifications in LSA will need to be reconfigured). ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox ConnectX-4/5 Network Controller driver (native driver from linux kernel 5.15.167). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM573xx and Broadcom BCM574xx controllers driver (bnxt_en, v1.10.3-230.2.52.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 100GbE Network Controller driver (ice, v1.14.11). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE Network Controller driver (i40e, v2.25.9). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10GbE Network Controller driver (ixgbe, v5.20.9). ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE Network Controller driver (igb, v5.16.9). ===&lt;br /&gt;
&lt;br /&gt;
=== Marvell FastLinQ 41000 Network Controller driver (qede, v8.74.1.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.2). ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox firmware update driver (mft, v4.28.0-92). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA 9600-16e 12Gb Tri-Mode Storage Adapter driver (mpi3mr, v8.9.1.0.0). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA Adapter driver (mpt3sas, v50.00.00.00). ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.729.00.00). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s HBA Adapter driver (esas4hba, v1.54.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s GT HBA Adapter driver (esas5hba, v1.08.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v2.09.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 8Gb Fibre Channel Adapter driver (celerity8fc, v2.26.0f1). ===&lt;br /&gt;
&lt;br /&gt;
=== Areca RAID Adapter driver (arcmsr, v1.51.00.16). ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartHBA and SmartRAID Adapter driver (smartpqi, v2.1.28-025). ===&lt;br /&gt;
&lt;br /&gt;
=== S.M.A.R.T monitoring tool (smartmontools, v7.4). ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== SSL Medium Strength Cipher Suites Supported (SWEET32) vulnerability (CVE-2016-2183). ===&lt;br /&gt;
&lt;br /&gt;
=== The product activation mechanism does not complete successfully on Proxmox virtual machines. ===&lt;br /&gt;
&lt;br /&gt;
=== Unexpected reboots occurred during ultra-heavy write operations. ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
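The same parameters can also be changed from the ESXi command line with esxcli; the sketch below assumes a hypothetical adapter name vmhba33 (check yours with esxcli iscsi adapter list):

```shell
# Set the iSCSI initiator parameters on a software adapter.
# "vmhba33" is a placeholder adapter name.
esxcli iscsi adapter param set -A vmhba33 -k MaxOutstandingR2T -v 8
esxcli iscsi adapter param set -A vmhba33 -k FirstBurstLength -v 65536
esxcli iscsi adapter param set -A vmhba33 -k MaxBurstLength -v 1048576
esxcli iscsi adapter param set -A vmhba33 -k MaxRecvDataSegLen -v 1048576
```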
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record option is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance, you can use the Asynchronous data record option, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
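At the ZFS command-line level, the sync policy discussed above is a per-dataset property; a minimal sketch (tank/zvol0 is a placeholder name):

```shell
# Inspect and change the sync policy of a zvol.
zfs get sync tank/zvol0
zfs set sync=always tank/zvol0     # safest; pair with a mirrored write log
zfs set sync=standard tank/zvol0   # faster; UPS strongly recommended
```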
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the web browser’s cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk&amp;amp;nbsp;: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings -&amp;gt; Options -&amp;gt; Advanced-General -&amp;gt; Configuration -&amp;gt; Add row: disk.EnableUUID&amp;amp;nbsp;: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you predict frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to reclaim the free space in Windows 2012, please change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management -&amp;gt; [disk] -&amp;gt; Properties -&amp;gt; Tools. As the operation can generate a very high load in the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, use (if possible) a single zvol on a zpool dedicated to deduplication, and delete the zpool that contains that single zvol.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and 1MB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.67B / 1024 / 1024 / 1024 = 1.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
so for every extra 1TB of storage, the system needs an extra 1.67GB of RAM.&lt;br /&gt;
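The worst-case formula above can be sketched as a short calculation. This is an illustrative sketch only; the function and constant names are not part of any product API.

```python
# Sketch of the deduplication RAM formula above (names are illustrative).
DDT_ENTRY_BYTES = 320   # size of one entry in the DDT table
ARC_FRACTION = 0.75     # share of RAM reserved for ARC
DDT_FRACTION = 0.25     # share of ARC reserved for the DDT

def dedup_ram_bytes(zvol_bytes, block_bytes):
    """RAM needed in the worst case (fully unique data) for a deduplicated zvol."""
    entries = zvol_bytes / block_bytes
    return entries * DDT_ENTRY_BYTES / ARC_FRACTION / DDT_FRACTION

one_tb = 1024 ** 4
for block in (64 * 1024, 128 * 1024, 1024 * 1024):
    gib = dedup_ram_bytes(one_tb, block) / 1024 ** 3
    print(f"{block // 1024}KB blocks: {gib:.2f} GB RAM per TB of data")
```

Note the two divisors: only 25% of the 75% ARC reservation holds the DDT, so the raw table size is effectively multiplied by 16/3.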
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, in which the data is completely unique and will not deduplicate at all. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will perform well with less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system format block size to the zvol volume block size. A simple example is Windows NTFS with its default 4k format block size on a zvol with the default 128k volume block size. With these defaults deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size remains 128k, a deduplication match can fail at most once, because a file can be aligned at only 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To make all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must also equal 64k. NTFS=32k with zvol=32k also works, but the deduplication table would then be twice as large. That is why NTFS=64k with zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
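The alignment counting in the note above can be checked with a few lines. This is an illustrative sketch; block sizes are in KB and the function name is hypothetical.

```python
# Number of positions at which a client file-system block can sit inside
# one zvol block: (zvol block size / client format block size).
def alignment_positions(zvol_block_kb, fs_block_kb):
    return zvol_block_kb // fs_block_kb

print(alignment_positions(128, 4))    # NTFS 4k on 128k zvol: 32 positions
print(alignment_positions(128, 64))   # NTFS 64k on 128k zvol: 2 positions
print(alignment_positions(64, 64))    # NTFS 64k on 64k zvol: 1 position
```

With a single possible position, identical blocks always land identically aligned, which is why the 64k/64k pairing makes every duplicate match.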
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because ZFS aligns the data blocks natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication has saved space, run a scrub and note the current physical data size on the pool reported by the scrub. Next, copy new data onto the pool and run the scrub again; it will report the new physical data size. Comparing the data size seen from the storage-client side with the data-space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not display correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to a single Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You must clear those disks before installation using the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with a broken write log disk cannot be imported using the system’s own functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size does not exceed your purchased storage license capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one causes the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk becomes unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB, when used with deduplication or a read cache, cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by the l2hdr structure&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
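The Read Cache formula can likewise be sketched as a quick calculation. The constants are copied from the worked examples above; the names themselves are illustrative, not part of any product API.

```python
# Sketch of the Read Cache RAM formula above (names are illustrative).
RESERVED_BYTES = 4718592   # reserved size and labels
L2HDR_BYTES = 432          # bytes reserved by the l2hdr structure per block

def read_cache_ram_bytes(cache_bytes, block_bytes):
    """RAM needed to index an SSD read cache of the given size."""
    return (cache_bytes - RESERVED_BYTES) * L2HDR_BYTES / block_bytes

one_tb = 1024 ** 4
for block in (8 * 1024, 64 * 1024, 128 * 1024):
    gib = read_cache_ram_bytes(one_tb, block) / 1024 ** 3
    print(f"{block // 1024}KB blocks, 1TB cache: {gib:.2f} GB RAM")
```

Note how halving the volume block size doubles the number of cached blocks to index, and therefore roughly doubles the RAM needed for the same cache size.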
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Repeatedly adding disks to and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but are missing from the list of available disks. In this case, click the rescan button located in the add-group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become available immediately. Disks previously used as a Spare or a Read Cache must first be cleaned before they can be reused. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
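For administrators with shell access, the same cleanup can be sketched with standard OpenZFS and gdisk commands (these are not the NX-specific tool; /dev/sdX is a placeholder for the leftover disk, and both commands are destructive):&lt;br /&gt;

```shell
# Clear the leftover ZFS label from the former Spare / Read Cache disk (destructive).
zpool labelclear -f /dev/sdX

# Wipe the old partition table so the disk shows up as empty again (destructive).
sgdisk --zap-all /dev/sdX
```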
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones, or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider Configuration works unreliably.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota for dataset does not allow to remove files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, increase the quota and then remove the unnecessary files.&lt;br /&gt;
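On systems with shell access, the same workaround can be sketched with the standard OpenZFS command set (the pool/dataset names, sizes, and file path below are hypothetical examples):&lt;br /&gt;

```shell
# Temporarily raise the quota so deletions can succeed again.
zfs set quota=120G Pool-0/dataset-0

# Remove the unnecessary files, then restore the intended quota.
rm /Pool-0/dataset-0/old-backup.img
zfs set quota=100G Pool-0/dataset-0
```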
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some WebGUI forms to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
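For reference, the equivalent standard OpenZFS commands look like this (a sketch only; Pool-0 is a hypothetical pool name, and the upgrade is just as irreversible from the CLI):&lt;br /&gt;

```shell
# List pools whose on-disk format is older than the running ZFS version.
zpool upgrade

# Upgrade a single pool to the latest on-disk format (cannot be undone).
zpool upgrade Pool-0
```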
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using Scale Logic NX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name is changed, e.g. during an export and import under a different name, the FSIDs for NFS shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after updating to up11 the system may require re-activation. This issue will be removed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using Scale Logic NX as a Hyper-V or VMware guest, the ALB, Round-Robin, and Round-Robin with RDMA bonding modes are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can make deleting a VMware snapshot take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can make the process of deleting a VMware snapshot take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt file transfers. Before using it in a production environment, enable the quota on the dataset in advance, or make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so going back to an earlier version is impossible. If you need to return to an earlier version, you must reinstall that version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing operations such as reboot, shutdown, or power off while data is being mirrored onto the newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, unplug the disk with incomplete data, boot the system, plug the disk back in, and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created with the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25, the default value of the Write Cache Log bias for zvols was set to “In Pool (Throughput)”. In the final release of NX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which is a common factor in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, the error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -&amp;gt; Fibre Channel -&amp;gt; Targets and initiators assigned to this zpool. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25, qlini_mode was set to “enabled”). In order to verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters, select qla2xxx_scst QLogic Fibre Channel HBA Driver, and make sure the value of this parameter is set to “exclusive”.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations the system page cache cannot flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache and require manual flushing (in the GUI, use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In case of a large number of disks, a zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled back data are not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed on the AD server, NX must be informed about it. However, using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX leads to a situation where the changed user gets deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The following special characters #&amp;amp;nbsp;: + cannot be used in the password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in the LSI Storage Authority Software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on one NFS access list and you would like to move it to the other access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Then edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load, especially waiting I/O, may increase and cause unstable operation. Expanding the pool size is recommended.&lt;br /&gt;
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such cases, the system shows the old value taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value that NX interprets as running on battery. When the timeout is reached, NX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is changing the booting mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to NX up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we disable the write-back cache on all HDD disks by default, but we do not disable it on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it is enabled after the update. Open the TUI, invoke Extended tools by pressing CTRL+ALT+X, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to the data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29, we disable the write-back cache on HDD disks by default during the bootup procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand of them ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there may be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, populating the list in the WebGUI may take from a few minutes up to several dozen minutes.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the gzip-9 compression algorithm, the system can behave unstably when copying data to storage. Use this compression algorithm only in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using more than 500 zvols in the system, the responsiveness of the Web-GUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
=== Manual export and import of zpool in the system or deactivation of the Fibre Channel group without first suspending or turning off the virtual machines on the VMware ESXi side may cause loss of access to the data by VMware ESXi. ===&lt;br /&gt;
&lt;br /&gt;
Before a manual export and import of a zpool, you must suspend or turn off the virtual machines on the VMware ESXi side. Otherwise, VMware ESXi may lose access to the data, and restarting it will be necessary.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve it, change the value of zfs_vdev_max_active from 1 to 1000 in the TUI: open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters, select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of the NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
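On systems with shell access, the current value can also be verified through the standard OpenZFS module parameter interface (a read-only check; changing the value should still be done in the TUI as described above):&lt;br /&gt;

```shell
# Show the current value of the zfs_vdev_max_active module parameter.
# After the fix it should print 1000.
cat /sys/module/zfs/parameters/zfs_vdev_max_active
```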
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX supports only drives with a verified SED configuration.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check if a given device is supported, see the HCL list available on the Scale Logic webpage.&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality in the zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait in the system after enabling the autotrim functionality in the zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand, at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In order to provide stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts display data incorrectly for systems using Active-Backup bonding with RDMA. The charts reflect the usage of only one network interface included in the Active-Backup bond (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it  manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in a combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - settings of the combo box manage the second virtual disk. &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of unavailability of a given subdomain, information about its status will not be updated on the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it is recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the Broadcom HBA 9600 Storage Adapter the Hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In the case of using the Broadcom HBA 9600 Storage Adapter, using the “Rescan” button in the storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic NX (the network interface is usually: eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
=== The TDB UID/GIDs mapping does not function properly. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Workarounds:&lt;br /&gt;
&lt;br /&gt;
*Single-Domain Environments:&lt;br /&gt;
**Use the &amp;quot;autorid&amp;quot; option in the &amp;quot;ID mapping backend&amp;quot; settings.&lt;br /&gt;
**Alternatively, use &amp;quot;rid+tdb&amp;quot;:&lt;br /&gt;
**#Connect to the domain.&lt;br /&gt;
**#Navigate to the “Accessed domains” section.&lt;br /&gt;
**#Click the “Edit domain settings” button.&lt;br /&gt;
**#Set the UID/GID mapping to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range (e.g., 2,000,000 to 2,999,999).&lt;br /&gt;
&lt;br /&gt;
Note: The range 1,000,000 to 1,999,999 is reserved.&lt;br /&gt;
&lt;br /&gt;
*Multi-Domain Environments:&lt;br /&gt;
**The &amp;quot;autorid&amp;quot; option is not supported. Use one of the following:&lt;br /&gt;
**#&amp;quot;rid+tdb&amp;quot;&lt;br /&gt;
**#&amp;quot;ad (with RFC2307 schema) + tdb&amp;quot;&lt;br /&gt;
**Steps for configuration:&lt;br /&gt;
&amp;lt;ol style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Connect to the domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Navigate to the “Accessed domains” section.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Click the “Edit domain settings” button for each domain.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Set the UID/GIDs mapping to &amp;quot;rid&amp;quot; for all domains.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Define unique Min ID and Max ID ranges for each domain (e.g., 2,000,000 to 2,999,999 for the first domain, 3,000,000 to 3,999,999 for the second domain, etc.).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
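Under the hood, these WebGUI settings correspond to Samba idmap configuration. A minimal sketch of the resulting smb.conf idmap block for two domains with unique rid ranges (DOMAIN1/DOMAIN2 and the ranges are placeholders; the appliance generates the real configuration):

```ini
# Illustrative smb.conf idmap fragment -- domain names and ranges are
# placeholders, and the WebGUI manages the actual file.
[global]
   security = ADS
   # The default (*) backend catches unknown SIDs; 1,000,000-1,999,999 is reserved.
   idmap config * : backend = tdb
   idmap config * : range = 1000000-1999999
   # One rid block per domain, each with a unique, non-overlapping range.
   idmap config DOMAIN1 : backend = rid
   idmap config DOMAIN1 : range = 2000000-2999999
   idmap config DOMAIN2 : backend = rid
   idmap config DOMAIN2 : range = 3000000-3999999
```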
&lt;br /&gt;
=== No Warning for Duplicate IP Addresses on Network Interfaces ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; No warning or error message is displayed if two network interfaces are configured with the same IP address. This can lead to network conflicts or connectivity issues. Users must manually verify configurations to avoid duplicates.&lt;br /&gt;
&lt;br /&gt;
=== No LED Management for aacraid Storage Controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; LED management is no longer supported for storage controllers using the aacraid driver, aligning with the manufacturer’s decision to discontinue these controllers. Users depending on LED indicators should explore alternative monitoring solutions or consider upgrading to supported hardware.&lt;br /&gt;
&lt;br /&gt;
=== LED Blinking Not Functional on NVMe Drives in Supermicro X12 Servers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; On Supermicro X12 servers, LED blinking functionality for NVMe drives is not operational. Users should rely on alternative methods to identify and manage drives.&lt;br /&gt;
&lt;br /&gt;
=== Web Server Settings in Maxview Storage Manager Not Preserved After Restart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Changes made to the Web server settings in Maxview Storage Manager revert to default values after a server restart. Custom configurations are lost upon reboot. This issue will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Unnecessary dmesg Entries After Zpool Export/Import ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Following a zpool export and import, dmesg may show entries such as &amp;quot;debugfs: Directory &#039;zdX&#039; with parent &#039;block&#039; already present!&amp;quot; While these entries do not affect functionality, they will be addressed in a future release.&lt;br /&gt;
&lt;br /&gt;
=== Discontinued IDE Disk Support in Scale Logic NX Up31 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In Scale Logic NX Up31, IDE disk support has been removed. Older servers or virtual machines relying on IDE disks may experience compatibility issues or failures. We recommend migrating to supported storage solutions to avoid disruptions. Future releases will not reintroduce IDE disk support.&lt;br /&gt;
&lt;br /&gt;
=== Consider Reducing Volume Block Size to 16KB for High Random Workloads ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; For workloads with high levels of random I/O, reducing the iSCSI volume block size to 16KB can improve performance. Users experiencing performance challenges with random workloads should consider this tuning option.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Active_Directory_(AD)_server_authentication&amp;diff=727</id>
		<title>Active Directory (AD) server authentication</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Active_Directory_(AD)_server_authentication&amp;diff=727"/>
		<updated>2024-12-19T15:12:55Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
This functionality is available in &#039;&#039;&#039;User Management &amp;gt; Share users/groups &amp;gt; Authorization protocols&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;To configure a connection to the existing Active Directory server:&lt;br /&gt;
&lt;br /&gt;
#Navigate to the&amp;amp;nbsp;&#039;&#039;&#039;User Management&amp;amp;nbsp;&#039;&#039;&#039;section in the left menu.&lt;br /&gt;
#Go to the &#039;&#039;&#039;Share users/groups&#039;&#039;&#039; tab.&lt;br /&gt;
#Find the &#039;&#039;&#039;Active Directory (AD) server authentication&#039;&#039;&#039; block.&lt;br /&gt;
#Enable the&amp;amp;nbsp;&#039;&#039;&#039;Enable protocol&#039;&#039;&#039;&amp;amp;nbsp;option.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AD server authentication status ==&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Connection&#039;&#039;&#039; - shows whether you are connected to an AD server or not.&lt;br /&gt;
*&#039;&#039;&#039;Users/groups list&#039;&#039;&#039; - shows when the lists of users and groups were last synchronized or if the synchronization is taking place at the moment.&lt;br /&gt;
&lt;br /&gt;
Users and groups are synchronized with an Active Directory server every 2 hours. Synchronization can also be started manually by using the &#039;&#039;&#039;Synchronize&#039;&#039;&#039;&amp;amp;nbsp;button.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AD server authentication settings ==&lt;br /&gt;
&lt;br /&gt;
To connect to the existing AD server, fill in the following fields with credentials provided by the AD server administrator and click the &#039;&#039;&#039;Apply&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Realm&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Administrator name&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Password&#039;&#039;&#039;&amp;lt;br/&amp;gt;NOTE&#039;&#039;&#039;: Password cannot contain:&#039;&#039;&#039;&lt;br /&gt;
**special characters such as &#039; &amp;quot; ` ^ &amp;amp; $ # ~ [ ] \ / | *&amp;amp;nbsp;:&amp;amp;nbsp;? &amp;amp;lt; &amp;amp;gt;&lt;br /&gt;
**spaces&lt;br /&gt;
**fewer than 12 or more than 16 characters&lt;br /&gt;
*&#039;&#039;&#039;Organizational Unit (OU) - &#039;&#039;&#039;a direct path to the container where the Computer Organizational Unit is placed. The path must be entered starting from the primary container name within the domain structure. The container name set by default is &#039;&#039;&#039;Computers&#039;&#039;&#039;.&amp;amp;nbsp;If another container name is used instead, then &#039;&#039;&#039;Computers&#039;&#039;&#039; must be changed to the appropriate name. If the path to the container is nested, use a slash as the connector. In the screenshot below, the OU is in the &#039;&#039;&#039;Computers&#039;&#039;&#039; container that is nested in&amp;amp;nbsp;&#039;&#039;&#039;AllComputers &amp;gt; Marketing&#039;&#039;&#039;. In this example, the path to the OU is: &#039;&#039;&#039;AllComputers/Marketing/Computers&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;[[File:Ad-structure.png]]&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;NOTE&#039;&#039;&#039;: Container name can&#039;t contain:&#039;&#039;&#039;&lt;br /&gt;
**special characters such as , + &amp;quot; \ &amp;amp;lt; &amp;amp;gt;&amp;amp;nbsp;; = / #&lt;br /&gt;
**spaces&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div&amp;gt;&#039;&#039;&#039;The following reasons might prevent you from connecting to Active Directory:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
#Difference in time between the Active Directory server and the storage server - if the time difference is greater than 5 minutes, the connection is not possible.&lt;br /&gt;
#The method of authenticating trusted domains - the authentication has to be set to two-way trust. Otherwise, it is not possible to read users and groups from trusted domains.&lt;br /&gt;
#DNS configuration - for an Active Directory domain, it is not possible to use a round-robin mechanism in DNS. This is connected to the fact that only one IP address is authorized. In a moment when another IP is obtained from DNS, the connection is not possible.&lt;br /&gt;
#The &#039;&#039;&#039;server name&#039;&#039;&#039; is the same as the name of a computer object in the Computer Organizational Unit (OU) on the Active Directory (AD) server. If an object with the same name exists and the user you use to log in to the AD server does not have permission to access it, the connection will fail. The solution is to delete the existing computer object from the AD server. The following steps explain how to delete it:&lt;br /&gt;
&amp;lt;ul style=&amp;quot;margin-left: 80px;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Log on to the Domain Controller with the domain administrator account. Press Windows Logo + R, enter &amp;quot;dsa.msc&amp;quot; and press Enter.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;In the &amp;quot;Active Directory Users and Computers&amp;quot; window, select the domain container in which the OU you are looking for is located.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Select the computer object and delete it.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::&#039;&#039;&#039;Note&#039;&#039;&#039;: By default, any created Organizational Unit is protected from accidental deletion. To delete the OU, you need to clear the &amp;quot;Protect object from accidental deletion&amp;quot; checkbox, which you can find in the object properties in the &amp;quot;Object&amp;quot; tab. By deleting OU, you delete all nested objects that it contains as well.&lt;br /&gt;
:::&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Users and user groups management ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Management mode:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Scan single domain (default)&#039;&#039;&#039; - Using this function allows the user to obtain users and groups from the main domain only.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Scan all trusted domains&#039;&#039;&#039; - Using this function allows the user to obtain users and groups from the main and trusted domains.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;ID mapping backend:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;rid + tdb (default)&#039;&#039;&#039; - This option utilizes the rid backend for ID mapping of AD users. The UID/GID range has to be entered manually. The tdb backend is used when no other configuration is set. Recommended for large databases. Samba Wiki link for the rid backend: [https://wiki.samba.org/index.php/Idmap_config_rid https://wiki.samba.org/index.php/Idmap_config_rid]&lt;br /&gt;
*&#039;&#039;&#039;ad (with RFC2307 schema) + tdb&#039;&#039;&#039; - Allows reading ID mappings from an AD server, provided that the uidNumber attributes for users and gidNumber attributes for groups were added in advance in the AD. This backend requires additional configuration of uidNumber and gidNumber on the AD server side. The tdb backend is used when no other configuration is set. Samba Wiki link for the ad backend: [https://wiki.samba.org/index.php/Idmap_config_ad https://wiki.samba.org/index.php/Idmap_config_ad]&lt;br /&gt;
*&#039;&#039;&#039;autorid&#039;&#039;&#039; - This backend can be used if users are imported from a set of different domains. It automatically configures the range to be used for each domain. The only configuration needed is the range of UID/GIDs used for user/group mappings and the number of IDs per domain. Samba Wiki link for the autorid backend: [https://wiki.samba.org/index.php/Idmap_config_autorid https://wiki.samba.org/index.php/Idmap_config_autorid]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;span style=&amp;quot;font-size:small&amp;quot;&amp;gt;The TDB UID/GIDs mapping does not work properly.&amp;lt;/span&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Single-Domain Environments&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div&amp;gt;It is recommended to use the &amp;quot;autorid&amp;quot; option in the &amp;quot;ID mapping backend&amp;quot; settings. Alternatively, you can use the &amp;quot;rid+tdb&amp;quot; option. If you choose &amp;quot;rid+tdb,&amp;quot; set the UID/GIDs mapping to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range (e.g., 2,000,000 to 2,999,999). The range 1,000,000 to 1,999,999 is reserved.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Multi-Domain Environments&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;div&amp;gt;The &amp;quot;autorid&amp;quot; option cannot be used. Instead, use &amp;quot;rid+tdb&amp;quot; or &amp;quot;ad (with RFC2307 schema) + tdb.&amp;quot; Ensure the UID/GIDs mapping is set to &amp;quot;rid&amp;quot; and define the Min ID and Max ID range for each domain (e.g., 2,000,000 to 2,999,999 for the first domain, 3,000,000 to 3,999,999 for the second domain, etc.).&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=SNMP_settings&amp;diff=222</id>
		<title>SNMP settings</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=SNMP_settings&amp;diff=222"/>
		<updated>2024-12-19T15:12:55Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This function enables you to configure access over the &#039;&#039;&#039;SNMP&#039;&#039;&#039; protocol in versions 2 or 3.&lt;br /&gt;
&lt;br /&gt;
With SNMP enabled, you receive a wealth of information (CPU usage, system load, memory info, ethernet traffic, running processes).&amp;lt;br/&amp;gt;System location and system contact are only for your information.&amp;amp;nbsp;&amp;amp;nbsp;For example, when you connect from an SNMP client, you will see your location and name.&lt;br /&gt;
&lt;br /&gt;
SNMP, version 3 has an encrypted transmission feature as well as authentication by username and password.&amp;lt;br/&amp;gt;SNMP, version 2 does not have encrypted transmission, and authentication is done only via the community string.&lt;br /&gt;
&lt;br /&gt;
The community string you set can contain up to 20 characters, while the password needs to have at least 8 characters.&lt;br /&gt;
&lt;br /&gt;
Links to SNMP clients:&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.muonics.com http://www.muonics.com]&amp;lt;/span&amp;gt;&lt;br /&gt;
*&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.mg-soft.com http://www.mg-soft.com]&amp;lt;/span&amp;gt;&lt;br /&gt;
*&amp;lt;span style=&amp;quot;font-size:larger&amp;quot;&amp;gt;[http://www.manageengine.com http://www.manageengine.com]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|&lt;br /&gt;
Our storage system supports the SNMP protocol in MIB-II standard.&amp;amp;nbsp; List of MIBs:&lt;br /&gt;
&lt;br /&gt;
*mib-2.host&lt;br /&gt;
*mib-2.ip&lt;br /&gt;
*mib-2.tcp&lt;br /&gt;
*mib-2.udp&lt;br /&gt;
*mib-2.interfaces&lt;br /&gt;
*mib-2.at&lt;br /&gt;
*system&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
NX offers additional SNMP values to monitor Pool and ZFS attributes.&amp;lt;br/&amp;gt;It is necessary to query specific OIDs in order to receive those attributes.&lt;br /&gt;
&lt;br /&gt;
For basic ZFS parameters, the NYMNETWORKS-MIB MIB is included:&lt;br /&gt;
&lt;br /&gt;
*up to version v.1.0 up29r4&amp;amp;nbsp; [[:Media:NYMNETWORKS-MIB.txt|NYMNETWORKS-MIB.txt]]&lt;br /&gt;
*from version v.1.0 up30 [[:Media:NYMNETWORKS-MIB-up30.txt|NYMNETWORKS-MIB.txt]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;SNMP v2:&#039;&#039;&#039; snmpwalk -v 2c -m NYMNETWORKS-MIB -c community 192.168.251.79 .1.3.6.1.4.1.25359.1&amp;lt;br/&amp;gt;&#039;&#039;&#039;SNMP v3:&#039;&#039;&#039; snmpwalk -v3 -l authPriv -u nagios -a MD5 -x DES -A 12345678 -X 12345678 192.168.150.70&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemName.1 = STRING: &amp;quot;Pool-0&amp;quot;&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemName.2 = STRING: &amp;quot;Pool-1&amp;quot;&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableKB.1 = Gauge32: 15861464&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableKB.2 = Gauge32: 15861672&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedKB.1 = Gauge32: 4327720&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedKB.2 = Gauge32: 4327512&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsPoolHealth.1 = INTEGER: online(1)&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsPoolHealth.2 = INTEGER: online(1)&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeKB.1 = Wrong Type (should be INTEGER): Gauge32: 20189184&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeKB.2 = Wrong Type (should be INTEGER): Gauge32: 20189184&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableMB.1 = Gauge32: 15489&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemAvailableMB.2 = Gauge32: 15489&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedMB.1 = Gauge32: 4226&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemUsedMB.2 = Gauge32: 4226&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeMB.1 = Gauge32: 19716&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsFilesystemSizeMB.2 = Gauge32: 19716&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCSizeKB.0 = Gauge32: 61086&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMetadataSizeKB.0 = Gauge32: 9278&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCDataSizeKB.0 = Gauge32: 51808&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCHits.0 = Counter32: 229308&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMisses.0 = Counter32: 41260&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCTargetSize.0 = Gauge32: 64287&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsARCMru.0 = Gauge32: 59529&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCHits.0 = Counter32: 0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCMisses.0 = Counter32: 0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCReads.0 = Counter32: 
0&amp;lt;br/&amp;gt;NYMNETWORKS-MIB::zfsL2ARCWrites.0 = Counter32: 0&lt;br /&gt;
&lt;br /&gt;
Additional information, like compression ratio, deduplication ratio, available space (in bytes), age (in seconds) of latest snapshot on volume,&amp;lt;br/&amp;gt;can be obtained with standard NET-SNMP-EXTEND-MIB:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Examples:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;deduplication&amp;quot; = STRING:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;deduplication Pool-0 1.00&amp;quot;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;compression&amp;quot; = STRING:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;compression Pool-0/vol00 1.01&amp;lt;br/&amp;gt;compression Pool-0/clone-vol00 1.00&amp;quot;&lt;br /&gt;
&lt;br /&gt;
NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;volumes_list&amp;quot; = STRING:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;quot;available Pool-0/vol00 11981377536&amp;lt;br/&amp;gt;available Pool-0/clone-vol00 11981377536&amp;quot;&lt;br /&gt;
&lt;br /&gt;
NET-SNMP-EXTEND-MIB::nsExtendOutputFull.&amp;quot;snapshots_age&amp;quot; = STRING:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;quot;snapshot_age Pool-0/vol00 3&amp;lt;br/&amp;gt;snapshot_age Pool-0/vol01 371&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Untranslated OIDs:&#039;&#039;&#039;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;root@p-GA-880GM-USB3:/home/p# snmpwalk -v2c -c public 192.168.0.80&amp;amp;nbsp; 1.3.6.1.4.1.8072.1.3.2.3&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.11.99.111.109.112.114.101.115.115.105.111.110 = STRING: &amp;quot;compression Pool-0/vol00 1.01&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.12.115.110.97.112.115.104.111.116.95.97.103.101 = STRING: &amp;quot;snapshot_age Pool-0/vol00 3&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.12.118.111.108.117.109.101.115.95.108.105.115.116 = STRING: &amp;quot;available Pool-0/vol00 11981377536&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.1.13.100.101.100.117.112.108.105.99.97.116.105.111.110 = STRING: &amp;quot;deduplication Pool-0 1.00&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.11.99.111.109.112.114.101.115.115.105.111.110 = STRING: &amp;quot;compression Pool-0/vol00 1.01&amp;lt;br/&amp;gt;compression Pool-0/clone-vol00 1.00&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.12.115.110.97.112.115.104.111.116.95.97.103.101 = STRING: &amp;quot;snapshot_age Pool-0/vol00 3&amp;lt;br/&amp;gt;snapshot_age Pool-0/vol01 371&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.12.118.111.108.117.109.101.115.95.108.105.115.116 = STRING: &amp;quot;available Pool-0/vol00 11981377536&amp;lt;br/&amp;gt;available Pool-0/clone-vol00 11981377536&amp;quot;&amp;lt;br/&amp;gt;iso.3.6.1.4.1.8072.1.3.2.3.1.2.13.100.101.100.117.112.108.105.99.97.116.105.111.110 = STRING: &amp;quot;deduplication Pool-0 1.00&amp;quot;&lt;br /&gt;
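The numeric index in those untranslated OIDs is simply the extension name encoded as a length-prefixed list of decimal ASCII codes (e.g. 11.99.111... is the 11-character string &quot;compression&quot;). A small decoder sketch, not part of NX, that recovers the name:

```python
def decode_extend_index(oid_suffix: str) -> str:
    """Decode a NET-SNMP-EXTEND-MIB nsExtendOutputFull index.

    The suffix has the form 'length.code.code....', where each code is
    the decimal ASCII value of one character of the extension name.
    """
    parts = [int(p) for p in oid_suffix.split(".")]
    length, codes = parts[0], parts[1:]
    if len(codes) != length:
        raise ValueError("length prefix does not match number of codes")
    return "".join(chr(c) for c in codes)

# '11.99.111.109.112.114.101.115.115.105.111.110' decodes to 'compression'
name = decode_extend_index("11.99.111.109.112.114.101.115.115.105.111.110")
```

This makes it easy to tell which `nsExtendOutputFull` row a raw snmpwalk line refers to without loading the MIB.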
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Redundancy_in_Disks_Groups&amp;diff=1479</id>
		<title>Redundancy in Disks Groups</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Redundancy_in_Disks_Groups&amp;diff=1479"/>
		<updated>2024-12-19T10:12:28Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Disk group redundancy refers to the ability of a zpool to maintain data integrity and availability in the event of disk failures. This is achieved through mirrored or RAID-Z configurations, which store multiple copies of data across different disks. When a disk fails or data corruption is detected, ZFS can use the redundant copies to repair or reconstruct the lost data, ensuring the system continues to operate without data loss.&lt;br /&gt;
&lt;br /&gt;
It is important not to mix types of data groups (vdevs) inside a storage zpool, as doing so might lead to issues; it is strongly recommended to use only one type of vdev consistently.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;br/&amp;gt;Data Group redundancy level: 2-way mirror (2 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*A 2-way mirror tolerates a single disk failure in a given vdev.&lt;br /&gt;
*2-way mirrors can be used for mission-critical applications, but it is recommended not to exceed 12 vdevs in a zpool (recommended up to 12 x 2 = 24 disks for mission-critical applications and 24 x 2 = 48 disks for non-mission-critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: as a rule, zpool performance increases with the number of vdevs in the pool. For mission-critical applications using more than 12 vdevs, it is recommended to use 3-way mirrors, RAID-Z2, or RAID-Z3.&lt;br /&gt;
*For mission-critical applications it is not recommended to use HDDs larger than 4TB.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 3-way mirror (3 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*A 3-way mirror tolerates up to two disk failures in a given vdev.&lt;br /&gt;
*3-way mirrors can be used for mission-critical applications, but it is recommended not to exceed 16 vdevs in a zpool (recommended up to 16 x 3 = 48 disks for mission-critical applications and 24 x 3 = 72 disks for non-mission-critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: zpool performance increases with the number of vdevs in a zpool. For mission-critical applications, it is recommended to use RAID-Z3.&lt;br /&gt;
*For mission-critical applications it is not recommended to use HDDs larger than 10TB.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: 4-way mirror (4 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of mirror vdevs in the zpool.&lt;br /&gt;
*A 4-way mirror tolerates up to three disk failures in a given vdev.&lt;br /&gt;
*It is also recommended not to exceed 24 4-way mirror vdevs in a zpool, as damage to a single group results in the loss of the entire zpool (recommended up to 24 x 4 = 96 disks for mission-critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: as a rule, zpool performance increases with the number of vdevs in the pool.&lt;br /&gt;
*HDDs larger than 16TB should be avoided for mission-critical applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Data Group redundancy level: RAIDZ-1 (3-8 disks in a group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in a RAID-Z1 vdev.&lt;br /&gt;
*RAID-Z1 tolerates one disk failure in a given vdev.&lt;br /&gt;
*RAID-Z1 can be used for non-mission-critical applications, and it is not recommended to exceed 8 disks in a vdev. HDDs larger than 4TB should be avoided.&lt;br /&gt;
*It is also not recommended to exceed 8 RAID-Z1 vdevs in a zpool, as damage to a single group results in the loss of the entire zpool (recommended up to 8 x 8 = 64 disks for non-mission-critical applications in a zpool).&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: zpool performance roughly doubles with 2 x RAID-Z1 vdevs of 4 disks each compared to a single RAID-Z1 vdev with 8 disks.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;br/&amp;gt;Data Group redundancy level: RAIDZ-2 (4-24 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z2 group.&lt;br /&gt;
*RAID-Z2 tolerates up to two disk failures in a given vdev.&lt;br /&gt;
*RAID-Z2 can be used for mission-critical applications.&lt;br /&gt;
*It is not recommended to exceed 12 disks in a vdev for mission-critical and 24 disks for non-mission-critical applications.&lt;br /&gt;
*It is also not recommended to exceed 16 RAID-Z2 groups in a zpool, as damage to a single group results in the loss of the entire zpool (recommended up to 16 x 12 = 192 disks for mission-critical applications and 16 x 24 = 384 disks for non-mission-critical applications in a zpool). HDDs larger than 16TB should be avoided.&lt;br /&gt;
*If tolerance of three disk failures in a vdev is required, it is recommended to use RAID-Z3.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: pool performance roughly doubles with 2 x RAID-Z2 vdevs of 6 disks each compared to a single RAID-Z2 vdev with 12 disks.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;br/&amp;gt;Data Group redundancy level: RAIDZ-3 (5-48 disks per group) ==&lt;br /&gt;
&lt;br /&gt;
*The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z3 group.&lt;br /&gt;
*RAID-Z3 tolerates up to three disk failures in a given vdev.&lt;br /&gt;
*RAID-Z3 can be used for mission-critical applications.&lt;br /&gt;
*It is not recommended to exceed 24 disks in a vdev for mission-critical and 48 disks for non-mission-critical applications.&lt;br /&gt;
*It is also not recommended to exceed 24 RAID-Z3 groups in a zpool, as damage to a single group results in the loss of the entire zpool (recommended up to 24 x 24 = 576 disks for mission-critical applications and 24 x 48 = 1152 disks for non-mission-critical applications in a zpool). HDDs larger than 16TB should be avoided.&lt;br /&gt;
*&#039;&#039;&#039;Note&#039;&#039;&#039;: zpool performance roughly doubles with 2 x RAID-Z3 vdevs of 12 disks each compared to a single RAID-Z3 vdev with 24 disks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Write Log redundancy level ==&lt;br /&gt;
&lt;br /&gt;
*It should be configured as a 2-way mirror.&lt;br /&gt;
*When choosing a disk model for the Write Log, make sure to take the endurance parameter into consideration. Selecting a disk classified by the manufacturer as write intensive is strongly recommended.&lt;br /&gt;
*When selecting a disk size for the write log, consider the amount of data that can reach the server during three consecutive ZFS transactions, e.g. based on the network card bandwidth. If the transaction length is set to 5 seconds (the default), the write log device should be able to accommodate the data that can be transferred within three transaction groups, i.e. 15 seconds of writing. A larger disk does not make sense economically, while a smaller one can become a performance bottleneck during synchronous writes. &#039;&#039;&#039;Practically speaking, 100GB for a write log should be more than enough.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Read Cache redundancy level ==&lt;br /&gt;
&lt;br /&gt;
Read Cache disks can only be configured as single disks, but it is possible to configure any number of them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Special devices and deduplication group redundancy level ==&lt;br /&gt;
&lt;br /&gt;
It should be configured as a 2-way mirror.&lt;br /&gt;
&lt;br /&gt;
[[Category:ZFS and data storage articles]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=ISCSI_connections&amp;diff=1474</id>
		<title>ISCSI connections</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=ISCSI_connections&amp;diff=1474"/>
		<updated>2024-07-25T12:33:39Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to Active iSCSI connections&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Active iSCSI connections]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=SMB_connections&amp;diff=1471</id>
		<title>SMB connections</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=SMB_connections&amp;diff=1471"/>
		<updated>2024-07-25T12:32:01Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: Redirected page to Active SMB user connections&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Active SMB user connections]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Active_iSCSI_connections&amp;diff=1468</id>
		<title>Active iSCSI connections</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Active_iSCSI_connections&amp;diff=1468"/>
		<updated>2024-07-25T12:29:23Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;This functionality is available in &#039;&#039;&#039;Services Status &amp;gt; Connections&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The &amp;quot;Active iSCSI connections&amp;quot; section lists initiators actively connected to the targets. The table contains the following information:&lt;br /&gt;
*The name of the initiator connected to the server.&lt;br /&gt;
*The target&#039;s name to which the initiator is connected.&lt;br /&gt;
*Connection IP address.&lt;br /&gt;
*Session ID.&lt;br /&gt;
*Connection ID.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To show the details of a single session, use the context menu. As a result, you will see the following list of parameters:&lt;br /&gt;
*&#039;&#039;&#039;Initiator name&#039;&#039;&#039;: Initiator name connected to the target (Target name).&lt;br /&gt;
*&#039;&#039;&#039;Target name&#039;&#039;&#039;: Target name to which the initiator (Initiator name) is connected.&lt;br /&gt;
*&#039;&#039;&#039;Initiator IP&#039;&#039;&#039;: The IP address of the initiator that is connected to the target in this session.&lt;br /&gt;
*&#039;&#039;&#039;SID&#039;&#039;&#039;: Session ID.&lt;br /&gt;
*&#039;&#039;&#039;CID&#039;&#039;&#039;: Connection ID.&lt;br /&gt;
*&#039;&#039;&#039;State&#039;&#039;&#039;: Contains the processing state of this connection.&lt;br /&gt;
*&#039;&#039;&#039;Reinstating&#039;&#039;&#039;: Contains the reinstatement state of the session.&lt;br /&gt;
*&#039;&#039;&#039;Bidi IO count KB (bidi_io_count_kb)&#039;&#039;&#039;: Amount of data in KB transferred by the initiator since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Bidi command count (bidi_cmd_count)&#039;&#039;&#039;: Number of BIDI SCSI commands received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Read command count (read_cmd_count)&#039;&#039;&#039;: Number of READ SCSI commands received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Read IO count KB (read_io_count_kb)&#039;&#039;&#039;: Amount of data in KB read by the initiator since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;None command count (none_cmd_count)&#039;&#039;&#039;: Number of SCSI commands that transfer no data (e.g. INQUIRY or TEST UNIT READY) received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Write command count (write_cmd_count)&#039;&#039;&#039;: Number of WRITE SCSI commands received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Write IO count KB (write_io_count_kb)&#039;&#039;&#039;: Amount of data in KB written by the initiator since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Unknown command count (unknown_cmd_count)&#039;&#039;&#039;: Number of unknown SCSI commands received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Commands&#039;&#039;&#039;: Contains the overall number of SCSI commands in this session.&lt;br /&gt;
*&#039;&#039;&#039;Active commands&#039;&#039;&#039;: Contains the number of active SCSI commands in this session, i.e. commands not yet executed or currently being executed.&lt;br /&gt;
*&#039;&#039;&#039;FirstBurstLength&#039;&#039;&#039;: Specifies the maximum amount of unsolicited data, in bytes, an iSCSI initiator can send to the target during the execution of a single SCSI command. This parameter is sent by both the initiator and the target, and the negotiated value used is the minimum of the two.&lt;br /&gt;
*&#039;&#039;&#039;DataDigest&#039;&#039;&#039;: Increases data integrity. When the data digest parameter is enabled, the system performs a checksum over each PDU data part. The system verifies the data using the CRC32C algorithm. This parameter is sent by the initiator and target. Checksum enablement is negotiated only if both the initiator and target intend to use CRC32c.&lt;br /&gt;
*&#039;&#039;&#039;HeaderDigest&#039;&#039;&#039;: Increases data integrity. When the header digest parameter is enabled, the system performs a checksum over each header part of the iSCSI Protocol Data Unit (PDU). The system verifies the data using the CRC32C algorithm. This parameter is sent by the initiator and target. Checksum enablement is negotiated only if both the initiator and target intend to use CRC32c.&lt;br /&gt;
*&#039;&#039;&#039;ImmediateData&#039;&#039;&#039;: This allows the initiator to append unsolicited data to a command. To achieve better performance, this should be set to &amp;quot;Yes&amp;quot;. This parameter is sent by the initiator and target, and the negotiated value used is the logical product.&lt;br /&gt;
*&#039;&#039;&#039;InitialR2T&#039;&#039;&#039;: Turns on the default use of R2T; if disabled, allows an initiator to start sending data to a target as if it had received an initial R2T. If set to &amp;quot;Yes&amp;quot;, the initiator has to wait for the target to solicit SCSI data before sending it. Setting it to &amp;quot;No&amp;quot; allows the initiator to send a burst of FirstBurstLength bytes unsolicited right after and/or (depending on the setting of ImmediateData) together with the command. Thus setting it to &amp;quot;No&amp;quot; may improve performance. This parameter is sent by the initiator and target, and the negotiated value used is the logical sum.&lt;br /&gt;
*&#039;&#039;&#039;MaxBurstLength&#039;&#039;&#039;: Specifies the maximum amount of usable data in bytes (SCSI data payload) that can be sent in outgoing (SCSI Data-Out) or incoming (SCSI Data-In) packets. The value must be greater than or equal to the value of the FirstBurstLength parameter.&amp;lt;br/&amp;gt;Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host. This parameter is sent by the initiator and target, and the negotiated value used is the minimum.&lt;br /&gt;
*&#039;&#039;&#039;MaxOutstandingR2T&#039;&#039;&#039;: Defines the maximum number of R2T (Ready to Transfer) PDUs that can be in transit before an acknowledging PDU is received.&amp;lt;br/&amp;gt;Controls the maximum number of data transfers the target may request at once, each of up to MaxBurstLength bytes. This parameter is sent by the initiator and target, and the negotiated value used is the minimum.&lt;br /&gt;
*&#039;&#039;&#039;MaxRecvDataSegmentLength&#039;&#039;&#039;: Sets the maximum data segment length that can be received in an iSCSI PDU. Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host. This parameter is sent by the initiator and target, and the negotiated value used is the minimum.&lt;br /&gt;
*&#039;&#039;&#039;MaxXmitDataSegmentLength&#039;&#039;&#039;: Sets the maximum data segment length that can be sent in any iSCSI PDU. The value actually used is the minimum of MaxXmitDataSegmentLength and the MaxRecvDataSegmentLength announced by the initiator. Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
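The negotiation rules described above (length and count limits take the minimum of both offers, ImmediateData is the logical product, InitialR2T the logical sum) can be sketched as follows. This is an illustrative Python sketch with hypothetical example values, not part of the product:

```python
# Sketch of iSCSI login parameter negotiation, following the rules
# listed above: numeric limits take the minimum of the two offers,
# ImmediateData requires both sides (logical product), InitialR2T is
# enabled if either side requires it (logical sum).

def negotiate(initiator: dict, target: dict) -> dict:
    """Return the negotiated session parameters (illustrative helper)."""
    result = {}
    # Numeric parameters: the smaller offer wins.
    for key in ("FirstBurstLength", "MaxBurstLength",
                "MaxRecvDataSegmentLength", "MaxOutstandingR2T"):
        result[key] = min(initiator[key], target[key])
    # ImmediateData: enabled only if both sides allow it.
    result["ImmediateData"] = initiator["ImmediateData"] and target["ImmediateData"]
    # InitialR2T: enabled if either side insists on it.
    result["InitialR2T"] = initiator["InitialR2T"] or target["InitialR2T"]
    return result

initiator = {"FirstBurstLength": 262144, "MaxBurstLength": 1048576,
             "MaxRecvDataSegmentLength": 262144, "MaxOutstandingR2T": 8,
             "ImmediateData": True, "InitialR2T": False}
target = {"FirstBurstLength": 65536, "MaxBurstLength": 262144,
          "MaxRecvDataSegmentLength": 65536, "MaxOutstandingR2T": 1,
          "ImmediateData": True, "InitialR2T": False}

print(negotiate(initiator, target))
```

With these example offers the session would use FirstBurstLength 65536 (the target's smaller value) and a single outstanding R2T, while ImmediateData stays enabled because both sides allow it.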
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Change_version&amp;diff=1470</id>
		<title>Change version</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Change_version&amp;diff=1470"/>
		<updated>2024-07-25T12:26:00Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This option is available in the &#039;&#039;&#039;System Settings&#039;&#039;&#039; &amp;gt; &#039;&#039;&#039;Update&#039;&#039;&#039; tab.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Note!&#039;&#039;&#039; Always back up your data and configuration before changing the software version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Upgrade to the latest version ==&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Note!&#039;&#039;&#039; If a new software version includes an updated ZFS version, do not upgrade your zpools unless you&#039;re ready for a permanent change. Upgrading zpools to a more recent version of ZFS is irreversible. After upgrading, the zpool won&#039;t work with older software versions.&lt;br /&gt;
&lt;br /&gt;
As a rule, you should upgrade the software version by version. If such an upgrade is not possible and some versions must be skipped, please contact the Support Team to confirm the best procedure.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;To upgrade the software&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
#Go to the “System Settings” and then the “Update” tab.&lt;br /&gt;
#Click the “Upload update” button. Select the iso file with the release you want to install, and apply by clicking the “Upload” button.&lt;br /&gt;
#After the file is uploaded, the popup with the following options appears:&lt;br /&gt;
#*&#039;&#039;&#039;Cancel&#039;&#039;&#039; - use this option to close the popup window if you want to install the uploaded version later (see the details below). It is also recommended if you are installing a system version older than the one currently running. To downgrade the system to an older version, we recommend using the Change version option so you can decide whether to keep the current system settings. Downgrading the system version is described in the “Downgrade to an older version” section.&lt;br /&gt;
#*&#039;&#039;&#039;Change and reboot later&#039;&#039;&#039; - using this option will install the uploaded version, set it up as default, and boot it after the next restart. In this case, all current system settings will be saved and applied to the installed version.&lt;br /&gt;
#*&#039;&#039;&#039;Change and reboot now&#039;&#039;&#039; - using this option will install the uploaded version, set it up as default, and the new version will be applied immediately after the automatic reboot. In this case, all current system settings will be saved and applied to the installed version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the &amp;quot;Cancel&amp;quot; option was chosen after uploading the iso file, you can finish the update with the &amp;quot;Change version&amp;quot; option.&lt;br /&gt;
&lt;br /&gt;
#Go to the “Available version” table.&lt;br /&gt;
#Select the uploaded version.&lt;br /&gt;
#Using the context menu, select the “Change version” option.&lt;br /&gt;
#The popup appears.&lt;br /&gt;
#&#039;&#039;&#039;Check&#039;&#039;&#039; the &amp;quot;Keep all current system settings&amp;quot; option &#039;&#039;&#039;while upgrading&#039;&#039;&#039; to the newer version. All current system settings will be saved and applied to the installed version.&lt;br /&gt;
#Finish by clicking one of the available buttons:&lt;br /&gt;
#*&#039;&#039;&#039;Change and reboot later&#039;&#039;&#039; - using this option will install the selected version, set it as the default, and boot it after the next restart.&lt;br /&gt;
#*&#039;&#039;&#039;Change and reboot now&#039;&#039;&#039; - using this option will install the selected version and apply it immediately after the automatic reboot.&lt;br /&gt;
#*&#039;&#039;&#039;Cancel&#039;&#039;&#039; - use this option to close the popup window without taking any action if you do not want to continue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Downgrade to an older version ==&lt;br /&gt;
&lt;br /&gt;
Downgrading is not recommended and must be done after careful consideration. The newer software version may include bug fixes and security updates. Downgrading to an older version may result in exposure to known vulnerabilities or issues that have been addressed in the newer release.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;To avoid issues when downgrading:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Always check compatibility and differences between iterations.&lt;br /&gt;
*Back up your data and configurations.&lt;br /&gt;
*&#039;&#039;&#039;Do not downgrade the system by more than one version.&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Do not keep the current system settings.&#039;&#039;&#039; Downgrading the system with saved configuration settings from a newer version can cause issues for several reasons, for example:&lt;br /&gt;
**&#039;&#039;&#039;Incompatibility&#039;&#039;&#039;: A newer software version may introduce some changes in configuration settings or dependencies. When you downgrade to an older version, it might cause issues with interpreting or using the settings created by the newer version. This can lead to errors, crashes, or unexpected behavior.&lt;br /&gt;
**&#039;&#039;&#039;Missing features&#039;&#039;&#039;: A newer version of software may include new features, options, or optimizations unavailable in older versions. When you downgrade, you might lose access to these features, and the software may not handle the missing functionalities.&lt;br /&gt;
**&#039;&#039;&#039;Regression issues&#039;&#039;&#039;: Downgrading might not always be a straightforward process. Some changes made in the newer version may be tightly integrated into the system, and reverting to an older version could break these dependencies, causing problems.&lt;br /&gt;
**&#039;&#039;&#039;Configuration settings migration&#039;&#039;&#039;: If you have made significant changes to configuration settings in the newer version, these changes might not easily translate to the older version&#039;s configuration format. This can result in configuration conflicts or incomplete settings.&lt;br /&gt;
*Be prepared to reconfigure settings or adapt to any changes between versions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;To downgrade the system, it is recommended to use the “Change version” option:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#Go to the “System Settings” and then the “Update” tab.&lt;br /&gt;
#Go to the “Available version” table.&lt;br /&gt;
#Select the version to be installed.&lt;br /&gt;
#Using the context menu, select the “Change version” option.&lt;br /&gt;
#The popup appears.&lt;br /&gt;
#Choose whether to keep the current system settings.&lt;br /&gt;
#*Leave the &amp;quot;Keep all current system settings&amp;quot; option &#039;&#039;&#039;unchecked while downgrading&#039;&#039;&#039; to the older version. This will launch the older version with the default system settings of that version.&lt;br /&gt;
#Finish by clicking one of the available buttons:&lt;br /&gt;
#*&#039;&#039;&#039;Change and reboot later&#039;&#039;&#039; - using this option will install the selected version, set it as the default, and boot it after the next restart.&lt;br /&gt;
#*&#039;&#039;&#039;Change and reboot now&#039;&#039;&#039; - using this option will install the selected version and apply it immediately after the automatic reboot.&lt;br /&gt;
#*&#039;&#039;&#039;Cancel&#039;&#039;&#039; - use this option to close the popup window without taking any action if you do not want to continue.&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Active_iSCSI_connections&amp;diff=1467</id>
		<title>Active iSCSI connections</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Active_iSCSI_connections&amp;diff=1467"/>
		<updated>2024-07-25T12:26:00Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;This functionality is available in &#039;&#039;&#039;Services Status &amp;gt; Connections&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The &amp;quot;Active iSCSI connections&amp;quot; section lists initiators actively connected to the targets. The table contains the following information:&lt;br /&gt;
*The name of the initiator connected to the server.&lt;br /&gt;
*The target&#039;s name to which the initiator is connected.&lt;br /&gt;
*Connection IP address.&lt;br /&gt;
*Session ID.&lt;br /&gt;
*Connection ID.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To show the details of a single session, use the context menu. As a result, you will see the following list of parameters:&lt;br /&gt;
*&#039;&#039;&#039;Initiator name&#039;&#039;&#039;: Initiator name connected to the target (Target name).&lt;br /&gt;
*&#039;&#039;&#039;Target name&#039;&#039;&#039;: Target name to which the initiator (Initiator name) is connected.&lt;br /&gt;
*&#039;&#039;&#039;Initiator IP&#039;&#039;&#039;: The IP address of the initiator that is connected to the target in this session.&lt;br /&gt;
*&#039;&#039;&#039;SID&#039;&#039;&#039;: Session ID.&lt;br /&gt;
*&#039;&#039;&#039;CID&#039;&#039;&#039;: Connection ID.&lt;br /&gt;
*&#039;&#039;&#039;State&#039;&#039;&#039;: Contains the processing state of this connection.&lt;br /&gt;
*&#039;&#039;&#039;Reinstating&#039;&#039;&#039;: Contains the reinstatement state of the session.&lt;br /&gt;
*&#039;&#039;&#039;Bidi IO count KB (bidi_io_count_kb)&#039;&#039;&#039;: Amount of data in KB transferred by the initiator since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Bidi command count (bidi_cmd_count)&#039;&#039;&#039;: Number of BIDI SCSI commands received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Read command count (read_cmd_count)&#039;&#039;&#039;: Number of READ SCSI commands received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Read IO count KB (read_io_count_kb)&#039;&#039;&#039;: Amount of data in KB read by the initiator since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;None command count (none_cmd_count)&#039;&#039;&#039;: Number of SCSI commands that transfer no data (e.g. INQUIRY or TEST UNIT READY) received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Write command count (write_cmd_count)&#039;&#039;&#039;: Number of WRITE SCSI commands received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Write IO count KB (write_io_count_kb)&#039;&#039;&#039;: Amount of data in KB written by the initiator since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Unknown command count (unknown_cmd_count)&#039;&#039;&#039;: Number of unknown SCSI commands received since the beginning or the last reset.&lt;br /&gt;
*&#039;&#039;&#039;Commands&#039;&#039;&#039;: Contains the overall number of SCSI commands in this session.&lt;br /&gt;
*&#039;&#039;&#039;Active commands&#039;&#039;&#039;: Contains the number of active SCSI commands in this session, i.e. commands not yet executed or currently being executed.&lt;br /&gt;
*&#039;&#039;&#039;FirstBurstLength&#039;&#039;&#039;: Specifies the maximum amount of unsolicited data, in bytes, an iSCSI initiator can send to the target during the execution of a single SCSI command. This parameter is sent by both the initiator and the target, and the negotiated value used is the minimum of the two.&lt;br /&gt;
*&#039;&#039;&#039;DataDigest&#039;&#039;&#039;: Increases data integrity. When the data digest parameter is enabled, the system performs a checksum over each PDU data part. The system verifies the data using the CRC32C algorithm. This parameter is sent by the initiator and target. Checksum enablement is negotiated only if both the initiator and target intend to use CRC32c.&lt;br /&gt;
*&#039;&#039;&#039;HeaderDigest&#039;&#039;&#039;: Increases data integrity. When the header digest parameter is enabled, the system performs a checksum over each header part of the iSCSI Protocol Data Unit (PDU). The system verifies the data using the CRC32C algorithm. This parameter is sent by the initiator and target. Checksum enablement is negotiated only if both the initiator and target intend to use CRC32c.&lt;br /&gt;
*&#039;&#039;&#039;ImmediateData&#039;&#039;&#039;: This allows the initiator to append unsolicited data to a command. To achieve better performance, this should be set to &amp;quot;Yes&amp;quot;. This parameter is sent by the initiator and target, and the negotiated value used is the logical product.&lt;br /&gt;
*&#039;&#039;&#039;InitialR2T&#039;&#039;&#039;: Turns on the default use of R2T; if disabled, allows an initiator to start sending data to a target as if it had received an initial R2T. If set to &amp;quot;Yes&amp;quot;, the initiator has to wait for the target to solicit SCSI data before sending it. Setting it to &amp;quot;No&amp;quot; allows the initiator to send a burst of FirstBurstLength bytes unsolicited right after and/or (depending on the setting of ImmediateData) together with the command. Thus setting it to &amp;quot;No&amp;quot; may improve performance. This parameter is sent by the initiator and target, and the negotiated value used is the logical sum.&lt;br /&gt;
*&#039;&#039;&#039;MaxBurstLength&#039;&#039;&#039;: Specifies the maximum amount of usable data in bytes (SCSI data payload) that can be sent in outgoing (SCSI Data-Out) or incoming (SCSI Data-In) packets. The value must be greater than or equal to the value of the FirstBurstLength parameter.&amp;lt;br/&amp;gt;Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host. This parameter is sent by the initiator and target, and the negotiated value used is the minimum.&lt;br /&gt;
*&#039;&#039;&#039;MaxOutstandingR2T&#039;&#039;&#039;: Defines the maximum number of R2T (Ready to Transfer) PDUs that can be in transit before an acknowledging PDU is received.&amp;lt;br/&amp;gt;Controls the maximum number of data transfers the target may request at once, each of up to MaxBurstLength bytes. This parameter is sent by the initiator and target, and the negotiated value used is the minimum.&amp;lt;br/&amp;gt;SEE: [https://kb.scalelogicinc.com/how-can-we-improve-high-latency-links-using-maxoutstandingr2t-iscsi-parameter_1083.html https://kb.scalelogicinc.com/how-can-we-improve-high-latency-links-using-maxoutstandingr2t-iscsi-parameter_1083.html]&lt;br /&gt;
*&#039;&#039;&#039;MaxRecvDataSegmentLength&#039;&#039;&#039;: Sets the maximum data segment length that can be received in an iSCSI PDU. Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host. This parameter is sent by the initiator and target, and the negotiated value used is the minimum.&lt;br /&gt;
*&#039;&#039;&#039;MaxXmitDataSegmentLength&#039;&#039;&#039;: Sets the maximum data segment length that can be sent in any iSCSI PDU. The value actually used is the minimum of MaxXmitDataSegmentLength and the MaxRecvDataSegmentLength announced by the initiator. Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Active_SMB_user_connections&amp;diff=1465</id>
		<title>Active SMB user connections</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Active_SMB_user_connections&amp;diff=1465"/>
		<updated>2024-07-25T12:26:00Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;This functionality is available in Services Status &amp;gt; Connections tab&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;The &amp;quot;Active SMB user connections&amp;quot; section lists users who are connected to the shares. The main table contains the following information:&lt;br /&gt;
*User&#039;s name. If a user is connected as a guest, the user&#039;s name is shown as &amp;quot;nobody&amp;quot;. Users who are set as superusers (Storage Settings &amp;gt; NAS Settings &amp;gt; SMB) may be named &amp;quot;superuser(root)&amp;quot; in the table.&lt;br /&gt;
*The IP address through which the user is connected.&lt;br /&gt;
*The protocol through which the user is connected.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To see a user&#039;s active sessions, use the &amp;quot;Connected resources&amp;quot; button. After you click it, a popup appears listing the user&#039;s active sessions, i.e. all shares the user is currently connected to. The list also shows:&lt;br /&gt;
*Share&#039;s name.&lt;br /&gt;
*Resource&#039;s location (the name of the zpool and the name of the dataset).&lt;br /&gt;
*Date and time when the connection was established.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Non-data_group_disks&amp;diff=1463</id>
		<title>Non-data group disks</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Non-data_group_disks&amp;diff=1463"/>
		<updated>2024-07-25T12:26:00Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== General information ==&lt;br /&gt;
&amp;lt;div&amp;gt;This section displays partitioned disks only. &#039;&#039;&#039;Please note that only NVMe disks can be partitioned.&#039;&#039;&#039;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;NVMe disks available in the &amp;quot;Unassigned disks&amp;quot; section can be partitioned. Once a disk is divided, its partitions can be used as devices in the following non-data groups:&lt;br /&gt;
*Write log&lt;br /&gt;
*Read cache&lt;br /&gt;
*Special devices groups&lt;br /&gt;
*Deduplication data groups&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;When working with partitions, keep the following rules in mind:&lt;br /&gt;
#Partitions can only be used in the following non-data groups: write log, read cache, special devices group, and deduplication group.&lt;br /&gt;
#Only one partition per disk can be assigned to a single non-data group.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
== Creating partitions ==&lt;br /&gt;
&amp;lt;div&amp;gt;For disks suitable for partitioning, the &amp;quot;Make partitions (non-data group only)&amp;quot; option is available in the context menu.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;Important:&#039;&#039;&#039; All the partitions on a given disk must be created at once. It is not possible to edit or add partitions afterward.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;To create partitions:&lt;br /&gt;
#Navigate to the &amp;quot;Unassigned disks&amp;quot; card in the Storage section. It&#039;s located at the bottom of the page, under zpools.&lt;br /&gt;
#Select an NVMe disk that you wish to partition.&lt;br /&gt;
#Click the options icon, then choose &amp;quot;Make partitions (non-data group only)&amp;quot;.&lt;br /&gt;
#A popup window with the following options will appear:&lt;br /&gt;
#*&#039;&#039;&#039;Total disk size:&#039;&#039;&#039; indicates the overall capacity of the disk to be partitioned.&lt;br /&gt;
#*&#039;&#039;&#039;Remaining disk capacity left to use:&#039;&#039;&#039; displays the capacity still available after each partition is added, so you can size partitions without manual calculations.&lt;br /&gt;
#*&#039;&#039;&#039;Defined partitions:&#039;&#039;&#039; presented as a percentage of the disk capacity covered by the added partitions.&lt;br /&gt;
#*&#039;&#039;&#039;&amp;quot;Add partition&amp;quot; button:&#039;&#039;&#039; used to add a partition.&lt;br /&gt;
#Click the &amp;quot;Add partition&amp;quot; button and complete the brief form, providing the following information:&lt;br /&gt;
#*&#039;&#039;&#039;Partition&#039;s name suffix:&#039;&#039;&#039; this should be a number. By default, it begins from 1 but can be modified.&lt;br /&gt;
#:The partition name structure includes the following: a disk name + p + partition number (suffix), e.g., nvme1p1&lt;br /&gt;
#*&#039;&#039;&#039;Partition size:&#039;&#039;&#039; expressed in MB, GB, or TB. Sizes can be expressed in integers or decimals. The unit can be selected from the adjacent dropdown menu.&amp;lt;div&amp;gt;&#039;&#039;&#039;Note!&#039;&#039;&#039; &amp;lt;span style=&amp;quot;color:#ff0000&amp;quot;&amp;gt;All the partitions on a given disk must be created at once. It is not possible to edit or add partitions afterward.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
#After specifying partition sizes, click &amp;quot;Apply&amp;quot; to confirm and finalize the partition creation.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;All created partitions are listed in the &amp;quot;Non-data group disks&amp;quot; section. Unassigned partitions are also visible in the &amp;quot;Unassigned disks&amp;quot; section. They can be used as devices in any non-data group.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
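The sizing rules above (all partitions defined in one pass, remaining capacity tracked as partitions are added, names built as disk name + &quot;p&quot; + suffix, e.g. nvme1p1) can be sketched as follows. This is an illustrative Python sketch; the disk name and sizes are hypothetical examples, not real devices:

```python
# Sketch of the partition-planning arithmetic described above.
# Names follow the documented scheme: disk name + "p" + numeric suffix.

def plan_partitions(disk: str, total_gb: float, sizes_gb: list) -> dict:
    """Build partition names and track remaining capacity (illustrative)."""
    used = sum(sizes_gb)
    if used > total_gb:
        raise ValueError("Defined partitions exceed total disk size")
    # Suffixes start at 1 by default, matching the WebGUI behavior.
    names = [f"{disk}p{i}" for i, _ in enumerate(sizes_gb, start=1)]
    return {
        "partitions": dict(zip(names, sizes_gb)),
        "remaining_gb": total_gb - used,
        "defined_pct": round(100 * used / total_gb, 1),
    }

plan = plan_partitions("nvme1", total_gb=1000, sizes_gb=[200, 300])
print(plan)  # nvme1p1 and nvme1p2, 500 GB remaining, 50.0% defined
```

Since partitions cannot be edited or added later, the full list of sizes has to be decided before applying, which is why the arithmetic is done over the whole list at once.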
== Deleting partitions ==&lt;br /&gt;
&amp;lt;div&amp;gt;&#039;&#039;&#039;Important:&#039;&#039;&#039; Partitions can only be deleted all at once. Partitions allocated to non-data groups cannot be deleted. If any partition is assigned, the option to delete is disabled; to enable it, all partitions must be unassigned.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;This action is irreversible. After partitions are deleted, the disk returns to the &amp;quot;Unassigned disks&amp;quot; group.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;To delete partitions:&lt;br /&gt;
#Go to the &amp;quot;Non-data group disks&amp;quot; section.&lt;br /&gt;
#Select the disk whose partitions you want to delete. Click the &amp;quot;Delete all partitions&amp;quot; option in the context menu.&lt;br /&gt;
#A confirmation popup will appear. Confirm the action, and all partitions on the disk will be permanently deleted.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=TRIM&amp;diff=1461</id>
		<title>TRIM</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=TRIM&amp;diff=1461"/>
		<updated>2024-07-25T12:26:00Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== What is TRIM? ==&lt;br /&gt;
&amp;lt;div&amp;gt;TRIM is a feature associated with solid-state drives (SSDs). SSDs use NAND flash memory to store data; over time, as files are deleted or modified, free space within the SSD can become fragmented. This fragmentation can impact the performance and lifespan of the SSD. TRIM allows the operating system to inform the SSD which blocks of data are no longer in use, marking them as available for erasure. By doing so, the SSD can perform internal housekeeping tasks and optimize its performance by consolidating free space. This process helps maintain the SSD&#039;s efficiency, prevent write amplification, and extend its lifespan. TRIM support is contingent on the SSD and the operating system. The SSD must have TRIM functionality built into its firmware, and the operating system must support TRIM commands. When both the SSD and the operating system support TRIM, it ensures the optimal functioning and longevity of the SSD in computing systems.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
== How to use TRIM? ==&lt;br /&gt;
&amp;lt;div&amp;gt;The TRIM function can work in the system permanently (autoTRIM) or can be run manually and periodically (TRIM). Both options can work simultaneously: even if autoTRIM is enabled, TRIM can still be run manually.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
== How to use autoTRIM? ==&lt;br /&gt;
&amp;lt;div&amp;gt;This functionality is available in the zpool’s settings in the &#039;&#039;&#039;Configuration&#039;&#039;&#039; tab. After enabling the autoTRIM feature, the TRIM will work on all disks in the zpool that support this functionality.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;Note!&#039;&#039;&#039; The autoTRIM is not recommended for heavy workload systems - in such a case, using the function manually once every 3-6 months is recommended.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
To activate autoTRIM:&lt;br /&gt;
&lt;br /&gt;
#Go to Storage and expand the zpool options to find the Configuration tab.&lt;br /&gt;
#Enable the autoTRIM toggle.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;&#039;&#039;&#039;Note!&#039;&#039;&#039; The autoTRIM function status is not saved in the settings file and won&#039;t be restored when the &amp;quot;Restore settings&amp;quot; option is used. If that happens, enable it again.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Whether or not the &amp;quot;autoTRIM&amp;quot; function is enabled, it is recommended to run a manual &amp;quot;TRIM&amp;quot; periodically to ensure optimal performance. This can be done in the Status tab, in the TRIM section, by using the Run button.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
== How to use manual TRIM? ==&lt;br /&gt;
&amp;lt;div&amp;gt;This functionality is available in the zpool’s settings in the Status tab in the TRIM section. Note! It is only active if there are disks that support TRIM.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;To run TRIM manually:&lt;br /&gt;
#Go to Storage and expand the zpool options to find the Status tab with the TRIM section.&lt;br /&gt;
#Click the Run button.&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;It is possible to run TRIM on individual disks rather than on the entire zpool. To do so, choose the Selected disks option instead of the All disks option, then click Settings and tick the disks you want to trim. The number of selected disks for TRIM will then be displayed in this section (e.g. 5 out of 7). You can now start the TRIM process by pressing the Run button.&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
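For reference, the WebGUI actions above correspond to standard ZFS administration operations. The sketch below is purely illustrative (the pool and disk names are placeholders, and the appliance may not expose a shell); it assumes the usual `zpool set autotrim=on` and `zpool trim` syntax:

```python
# Hypothetical mapping of the WebGUI TRIM actions onto standard ZFS
# commands (pool and disk names are placeholders).

def autotrim_cmd(pool, enabled=True):
    """Build the command that toggles autoTRIM on a zpool."""
    state = "on" if enabled else "off"
    return ["zpool", "set", f"autotrim={state}", pool]

def manual_trim_cmd(pool, disks=None):
    """Build the command that starts a manual TRIM.

    With no disks given the whole pool is trimmed; otherwise only the
    selected disks are trimmed (the "Selected disks" option).
    """
    return ["zpool", "trim", pool] + list(disks or [])
```

For example, `manual_trim_cmd("tank", ["sda"])` builds a command that trims only the selected disk.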
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Upload_update&amp;diff=1135</id>
		<title>Upload update</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Upload_update&amp;diff=1135"/>
		<updated>2024-07-25T12:26:00Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This option is available in the &#039;&#039;&#039;System Settings&#039;&#039;&#039; &amp;gt; &#039;&#039;&#039;Update&#039;&#039;&#039; tab.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following can be uploaded into the Update tab:&lt;br /&gt;
&lt;br /&gt;
*A new release&lt;br /&gt;
*A small update provided by the Support Team.&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;Note!&#039;&#039;&#039; Always back up your data and configuration before changing the software version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;Related articles&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Change version|Change version]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Restore_settings&amp;diff=581</id>
		<title>Restore settings</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Restore_settings&amp;diff=581"/>
		<updated>2024-07-25T12:26:00Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;div&amp;gt;This functionality is available in the &#039;&#039;&#039;System Settings&#039;&#039;&#039; &amp;gt; &#039;&#039;&#039;Settings management&#039;&#039;&#039; tab.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;With this function, you can restore configuration settings that were previously saved to a file. If the settings file is not on the &amp;quot;Manually saved settings&amp;quot; list, upload it first: click the &amp;quot;Upload settings file&amp;quot; button, use the &amp;quot;Browse&amp;quot; option to select the file, and click the &amp;quot;Upload&amp;quot; button to complete the action. Once the file appears on the &amp;quot;Manually saved settings&amp;quot; list, you can restore the settings by using the &amp;quot;Restore&amp;quot; option from the context menu.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Alternatively, the restore process will start if the &amp;quot;Apply settings after uploaded&amp;quot; option is enabled during the upload of the settings file.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;div&amp;gt;Regardless of which option has been chosen (&amp;quot;Restore&amp;quot; or &amp;quot;Apply settings after uploaded&amp;quot;), a popup will appear. There are two types of settings that can be restored:&lt;br /&gt;
*System and storage settings which restore the following settings:&lt;br /&gt;
**Network, GUI and console configuration, services configuration, time and date settings, e-mail notifications, SNMP, CHAP users access discovery&lt;br /&gt;
*Pools settings that restore the following settings:&lt;br /&gt;
**Targets, shares, and virtual aliases together with their settings&amp;lt;/div&amp;gt;&lt;br /&gt;
 &#039;&#039;&#039;Note&#039;&#039;&#039; that the structure of the pool itself, i.e. its disk groups, datasets, zvols, snapshots, and clones will not be restored. Also, please remember that only pools imported at the moment of saving the settings will be restored.&lt;br /&gt;
&amp;lt;div&amp;gt;You can select which of the settings types described above to restore (only System and storage settings, only Pools settings, or both) by enabling the respective toggle button. To confirm, click the &amp;quot;Reboot &amp;amp; Restore&amp;quot; action button. For security reasons, additional verification is needed: on the popup that appears, type the word &amp;quot;reboot&amp;quot; and then press the &amp;quot;Reboot&amp;quot; button to confirm restoring the settings. The system will auto-save the current settings before it restores the settings from the selected file. The restored settings are available after the reboot.&amp;lt;/div&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Help topics]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30_Release_Notes&amp;diff=1438</id>
		<title>Scale Logic NX ver.1.0 up30 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30_Release_Notes&amp;diff=1438"/>
		<updated>2024-04-25T15:33:38Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2023-12-06&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 53984&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;span style=&amp;quot;color:#cc0033&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039; &amp;lt;/span&amp;gt;To upgrade the product, you need to have an active Technical Support plan. You will be prompted to re-activate your product after installing the upgrade to verify your Technical Support status.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have an active Technical Support plan, please contact Scale Logic sales team or your reseller for further assistance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS Special Devices feature ===&lt;br /&gt;
&lt;br /&gt;
=== NVMe Disk Partitioning feature ===&lt;br /&gt;
&lt;br /&gt;
=== Active Directory with extended RID Range and RFC2307 compatibility ===&lt;br /&gt;
&lt;br /&gt;
=== Support for macOS Time Machine backup mechanism ===&lt;br /&gt;
&lt;br /&gt;
=== Support for the &amp;quot;hide unreadable folders and files&amp;quot; option in Samba ===&lt;br /&gt;
&lt;br /&gt;
=== Support for recycle bin in Samba for Microsoft Windows ===&lt;br /&gt;
&lt;br /&gt;
=== The &amp;quot;Send Compressed Data&amp;quot; Option in Scale Logic NX On- &amp;amp; Off-Site Data Protection ===&lt;br /&gt;
&lt;br /&gt;
=== Support for Zero-configuration networking (zeroconf) feature with the services discovery options ===&lt;br /&gt;
&lt;br /&gt;
=== TUI: New predefined as well as editable custom storage performance profiles for tools testing purposes ===&lt;br /&gt;
&lt;br /&gt;
=== TRIM management for selected drives ===&lt;br /&gt;
&lt;br /&gt;
=== Active SMB user connections and active iSCSI connections statistics are available in the WebGUI in the Service Status tab ===&lt;br /&gt;
&lt;br /&gt;
=== S.M.A.R.T monitoring functionality in the WebGUI ===&lt;br /&gt;
&lt;br /&gt;
=== ZFS ARC, L2ARC, and ZIL statistics in the WebGUI ===&lt;br /&gt;
&lt;br /&gt;
=== LSI SNMP Agent ===&lt;br /&gt;
&lt;br /&gt;
=== Checkmk agent turn off in the TUI ===&lt;br /&gt;
&lt;br /&gt;
=== Driver for Broadcom HBA 9600-16e 12Gb Tri-Mode Storage Adapter (mpi3mr, v8.6.1.0.0) ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Intel 100GbE Network Controller driver (ice, v1.11.14) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10/40GbE Network Controller driver (i40e, v2.22.18) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 10GbE Network Controller driver (ixgbe, v5.18.11) ===&lt;br /&gt;
&lt;br /&gt;
=== Intel 1GbE Network Controller driver (igb, v5.13.16) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom NetXtreme-E Series 10/100GbE Network Controller driver (bnxt_en, v1.10.2-223.0.162.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM57xx Network Controller driver (bnx2x, v1.715.13) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom BCM57xx Network Controller driver (bnx2, v2.2.6a) ===&lt;br /&gt;
&lt;br /&gt;
=== Solarflare 10GbE Network Controller driver (sfc, v4.15.14.1001) ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio 10GbE Network Controller driver (cxgb4, v3.18.0.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom HBA Adapter driver (mpt3sas, v45.00.00.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.724.02.00) ===&lt;br /&gt;
&lt;br /&gt;
=== Marvell FastLinQ 41000 Network Controller driver (qede, v8.70.12.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Areca RAID Adapter driver (arcmsr, v1.50.00.13) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec SmartHBA and SmartRAID Adapter driver (smartpqi, v2.1.22-040) ===&lt;br /&gt;
&lt;br /&gt;
=== Microsemi Adaptec MaxView tool (v3.10.00 (24308)) ===&lt;br /&gt;
&lt;br /&gt;
=== LSI Storage Authority Software (v008.004.010.000) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 6Gb/s HBA Adapter driver (esas2hba, v2.41.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s HBA Adapter driver (esas4hba, v1.51.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO ExpressSAS 12Gb/s GT HBA Adapter driver (esas5hba, v1.06.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v2.08.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Celerity 8Gb Fibre Channel Adapter driver (celerity8fc, v2.25.0f1) ===&lt;br /&gt;
&lt;br /&gt;
=== ATTO Config Tool (v4.39) ===&lt;br /&gt;
&lt;br /&gt;
=== Emulex LightPulse Fibre Channel Adapter driver (lpfc, v12.8.614.22) ===&lt;br /&gt;
&lt;br /&gt;
=== Mellanox firmware update driver (mft, v4.23.0) ===&lt;br /&gt;
&lt;br /&gt;
=== Check_mk agent (check_mk, v2.1.0p14) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114726 --&amp;gt;(SU 90895): In the environments with more than 128GB RAM, kernel panic logs are not saved by Kdump ===&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114673 --&amp;gt;In some environments, under a heavy load and while using On- and Off-site Data Protection, the connection to the SMB share is interrupted ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
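For quick reference, the defaults and recommended values above can be kept in a small table. The structure below is purely illustrative (the names follow the ESX advanced-option labels shown above):

```python
# The four ESX iSCSI initiator parameters from the section above,
# with their defaults and recommended values (illustrative only).

ESX_ISCSI_TUNING = {
    # name: (ESX default, recommended value)
    "MaxOutstandingR2T": (1, 8),
    "FirstBurstLength": (262144, 65536),
    "MaxBurstLength": (262144, 1048576),
    "MaxRecvDataSegLen": (131072, 1048576),
}

def still_at_default(current):
    """Return the parameters whose current value equals the ESX
    default, i.e. those that still need to be changed."""
    return [name for name, (default, _recommended) in ESX_ISCSI_TUNING.items()
            if current.get(name) == default]
```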
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is the default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, synchronous data recording is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use asynchronous data recording, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the web browser&#039;s cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux (64-bit)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When large amounts of data are deleted, reclaiming the deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - In the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
With Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting the zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such operation only after-hours. To avoid this issue, please use (if possible) single zvol on zpools dedicated for deduplication and delete the zpool which includes the single zvol.&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of System RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp
;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - is the size of entry in DDT table&amp;lt;br/&amp;gt;0.75 - Percentage of RAM reservation for ARC (75%)&amp;lt;br/&amp;gt;0.25 - Percentage of DDT reservation in ARC (25%)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;
amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 
/ 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 26.67GB RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 
14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 13.33GB RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Example for 1TB data and 1MB Volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;(1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.66B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;1789569706.66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;so for every extra 1TB of storage, the system needs an extra 1.66GB of RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
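The per-terabyte RAM estimates above can be reproduced with a few lines of arithmetic. Below is a minimal Python sketch of the same calculation; the function name is illustrative, and the 320B entry size and the 0.75 and 0.25 factors are taken directly from the worked examples in this note, not from any ZFS API.

```python
# Worst-case deduplication-table RAM estimate, as in the examples above:
# one 320 B table entry per block, divided by the 0.75 and 0.25 factors
# quoted in the text. The function name is illustrative, not a real API.

def dedup_ram_bytes(data_bytes: float, block_bytes: int) -> float:
    """Worst-case RAM for the dedup table of completely unique data."""
    blocks = data_bytes / block_bytes
    return blocks * 320 / 0.75 / 0.25

TB = 1024 ** 4  # 1099511627776 bytes

# 1 MB volume block size: ~1.66 GB of RAM per extra 1 TB of storage
print(dedup_ram_bytes(TB, 1048576) / 1024 ** 3)

# 128 KB volume block size: ~13.33 GB of RAM per extra 1 TB of storage
print(dedup_ram_bytes(TB, 131072) / 1024 ** 3)
```

Note how a 1 MB block size needs roughly one eighth of the RAM that a 128 KB block size does: the cost scales inversely with the block size.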
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, where the data is completely unique and will not be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will perform well while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the client file-system format block size with the zvol volume block size. A simple example: the Windows NTFS default format block size is 4k, while the zvol default volume block size is 128k. With these defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To have all files match while using memory efficiently, NTFS must use a 64k format block size and the zvol volume block size must also be 64k. Another option is NTFS=32k and zvol=32k, but in that case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
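The alignment counts quoted above (32, 2 and 1 positions) all follow from a single ratio. A minimal Python illustration, assuming only the arithmetic described in the note:

```python
# The number of positions at which a client file-system block can land
# inside a zvol block is the ratio of the zvol volume block size to the
# file-system block size. Only a ratio of 1 guarantees that every write
# lines up for deduplication.

def alignment_positions(zvol_block: int, fs_block: int) -> int:
    return zvol_block // fs_block

print(alignment_positions(128 * 1024, 4 * 1024))   # 32: dedup rarely matches
print(alignment_positions(128 * 1024, 64 * 1024))  # 2: at most one initial miss
print(alignment_positions(64 * 1024, 64 * 1024))   # 1: every write matches
```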
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because ZFS aligns the data blocks natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data onto the pool and run the scrub again; it will report the new physical data space. Comparing the data size seen from the storage client side with the growth of the physical data space reported by the scrub gives the deduplication advantage. The exact deduplication ratio for the pool can be found in the LOGs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from a temporary medium such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with a broken write log disk cannot be imported using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Block sizes smaller than 64KB used with deduplication or Read Cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
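The three worked examples above all instantiate the same formula. A minimal Python sketch, using only the constants quoted in the text (4718592B reserved for size and labels, 432B per l2hdr structure):

```python
# Read Cache RAM formula from the release notes:
# RAM = (cache size - reserved size and labels) * l2hdr bytes / block size.
RESERVED_BYTES = 4718592  # reserved size and labels
L2HDR_BYTES = 432         # bytes reserved by the l2hdr structure per block

def read_cache_ram_bytes(cache_bytes: int, block_bytes: int) -> float:
    return (cache_bytes - RESERVED_BYTES) * L2HDR_BYTES / block_bytes

TB = 1024 ** 4  # 1 TB Read Cache
for block in (8192, 65536, 131072):
    gib = read_cache_ram_bytes(TB, block) / 1024 ** 3
    print(f"{block // 1024}KB block size: {gib:.2f} GB RAM for 1TB Read Cache")
```

As with deduplication, the RAM cost scales inversely with the volume block size, which is why block sizes below 64KB are discouraged when a Read Cache is in use.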
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding disks to and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks appear on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Exceeded quota for dataset does not allow to remove files ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some WebGUI forms to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. To upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name changes, e.g. during export and import under a different name, the FSIDs for NFS shares located on that Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, the system may require re-activation after updating to up11. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX as a Hyper-V or VMware guest, the ALB, Round-Robin and Round-Robin with RDMA bonding modes are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Please enable the quota before using the dataset in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Share cannot be named the same as Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A share with the same name as the Zpool will cause connection problems. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so going back to an earlier version is impossible. If you need to return to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25, the default Write Cache Log bias for zvols was set to “In Pool (Throughput)”. In the final release of NX up25 the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a performance drop in environments with many random-access workloads, which is a common characteristic of the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message, type in the password and choose the option Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on the FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held in the system page cache and require a manual flush (in the GUI, use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can take a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== With a large number of disks, a zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from the share access list after their username is changed on the AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed, NX must be informed of the change. Running the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX deletes the renamed user from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The special characters #, : and + cannot be used in a password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in the LSI Storage Authority software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on the NFS access list and you would like to move it to another access list, it has to be performed in two steps. First delete the IP address from the current list and apply the changes. Next edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load, especially waiting I/O, may increase and cause unstable operation. Expanding the pool size is recommended.&lt;br /&gt;
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such cases, the system shows the old value taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate very high load rendering the system unstable. We recommend using at least 64KB sizes for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, which NX interprets as being on battery. When the timeout expires, NX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updates from a previous version), the system cannot boot in UEFI mode if your boot medium is controlled by an LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is to change the boot mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== With hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After updating to NX up29, the write-back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we disable the write-back cache on all HDDs by default, but we do not disable it on SSD drives or hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes is turned off. Hardware RAID volume performance can be heavily impacted by the lack of a write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting a JBOD with the write-back cache enabled on disks can lead to data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29 we disable the write-back cache on HDDs by default during the boot procedure. We do not disable the write-back cache on SSD drives or hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, it may take from a few minutes up to several dozen minutes to populate the list in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can become unstable while copying data to storage. Use this compression algorithm only in environments with very powerful processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If more than 500 zvols are used in the system, the responsiveness of the WebGUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot; This message can be safely ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI. Open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX supports only drives with a verified SED configuration.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check if a given device is supported, see the HCL list available on the Scale Logic webpage.&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If load or iowait increases drastically after enabling the autotrim functionality on zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand and at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To provide stable operation with RDMA, we recommend using Mellanox ConnectX-4, ConnectX-5, or ConnectX-6 adapters.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts display data incorrectly for systems using Active-Backup bonding with RDMA. The charts reflect the usage of only one network interface included in the Active-Backup bond (charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a new version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;Virtual hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it  manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in a combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - settings of the combo box manage the second virtual disk. &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If a given subdomain is unavailable, information about its status will not be updated in the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;Problems with user and group synchronization in an Active Directory one-way trust configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it’s recommended to use a two-way trust configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, pressing the “Rescan” button in the Storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic NX (the network interface is usually: eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r1_Release_Notes&amp;diff=1454</id>
		<title>Scale Logic NX ver.1.0 up30r1 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r1_Release_Notes&amp;diff=1454"/>
		<updated>2024-04-25T15:32:47Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2023-12-22&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 54118&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;span style=&amp;quot;color:#cc0033&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039; &amp;lt;/span&amp;gt;To upgrade the product, you need to have an active Technical Support plan. You will be prompted to re-activate your product after installing the upgrade to verify your Technical Support status.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have an active Technical Support plan, please contact Scale Logic sales team or your reseller for further assistance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.727.03.00) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #--&amp;gt;The system experiences boot failure on servers using the Supermicro X13 motherboard. ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Setting write cache sync requests (sync) to “always” for a zvol is the safest option and is the default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recently cached data (up to 5 seconds) can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, Synchronous data record is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use Asynchronous data record, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Mozilla Firefox browser to navigate the system’s GUI. When using other browsers some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk&amp;amp;nbsp;: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID&amp;amp;nbsp;: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load on the system, it is recommended to perform it after hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
In case of using Windows 2008 there is no possibility to reclaim the space released by deleted data of thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after hours. To avoid this issue, please use (if possible) a single zvol on zpools dedicated to deduplication and delete the zpool which includes that single zvol.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for the ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of the ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 
/ 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 26.67GB RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 
14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 13.33GB RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;Example for 1TB data and 1MB Volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706,66B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 1789569706,66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;so for every extra 1TB of storage, system needs extra 1.66GB RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
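The worst-case DDT RAM formula above can be sketched as a small calculation. This is only an illustration of the arithmetic; the 320-byte entry size and the 0.75/0.25 ARC reservations are the values quoted in the text, not constants of any particular ZFS build.

```python
# Worst-case RAM needed for the deduplication table (DDT), per the
# formula above: (data size / volume block size) * 320B / 0.75 / 0.25.
DDT_ENTRY_BYTES = 320        # size of one DDT entry (from the text above)
ARC_FRACTION = 0.75          # fraction of RAM reserved for ARC
DDT_IN_ARC_FRACTION = 0.25   # fraction of ARC reserved for the DDT

def ddt_ram_bytes(data_bytes, block_bytes):
    """Worst-case DDT RAM for fully unique (non-deduplicable) data."""
    entries = data_bytes / block_bytes
    return entries * DDT_ENTRY_BYTES / ARC_FRACTION / DDT_IN_ARC_FRACTION

one_tib = 1024 ** 4
for block in (64 * 1024, 128 * 1024, 1024 * 1024):
    gib = ddt_ram_bytes(one_tib, block) / 1024 ** 3
    print(f"{block // 1024}KB block: {gib:.2f} GB RAM per extra 1TB")
```

Running this reproduces the three worked examples above (26.67, 13.33, and 1.67 GB per extra 1TB of stored data).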
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, where the data is completely unique and cannot be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will perform well while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system&#039;s format block size with the zvol volume block size. A simple example is the Windows NTFS file system with its default format block size of 4k and a zvol with the default volume block size of 128k. With defaults like these, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions in the pool. If the NTFS format block size is increased to 64k while the zvol volume block size remains 128k, a deduplication match can fail only once, because a file can be aligned at just 2 (128/64) different positions in the pool; every subsequent write will match, as both alignment options already exist in the pool. To have all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must also equal 64k. Another option is NTFS=32k and zvol=32k, but in that case the deduplication table will be twice as large. That is why NTFS=64k with zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
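The alignment argument above can be illustrated with a quick calculation. This is only a sketch: it counts how many positions a client-side file-system block can occupy inside one zvol block, which is what determines whether identical data lands on identical block boundaries.

```python
def alignment_positions(zvol_block_kb, fs_block_kb):
    """Number of positions a file-system block can occupy inside one
    zvol block; 1 means every write aligns identically, so dedup matches."""
    return zvol_block_kb // fs_block_kb

print(alignment_positions(128, 4))   # NTFS 4k on a 128k zvol: 32 positions
print(alignment_positions(128, 64))  # NTFS 64k on a 128k zvol: 2 positions
print(alignment_positions(64, 64))   # NTFS 64k on a 64k zvol: 1, always matches
```

The three calls reproduce the 32, 2, and 1 alignment cases discussed in the paragraph above.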
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are natively aligned by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space in the pool reported by the scrub. Next, copy new data and run the scrub again; it will now report the new physical data space. Comparing the data size seen on the storage client side with the growth of physical data space reported by the scrub gives the deduplication advantage. The exact pool deduplication ratio can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to a single Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You will need to clear those disks before installation using the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size does not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by the l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
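The Read Cache RAM formula above can be sketched the same way. This is only an illustration of the arithmetic; the reserved-size figure (4718592B) and the 432-byte l2hdr overhead are the values quoted in the text, not constants of any particular ZFS build.

```python
# RAM needed to index an SSD Read Cache (L2ARC), per the formula above:
# (cache size - reserved size and labels) * l2hdr bytes / volume block size.
RESERVED_BYTES = 4718592   # reserved size and labels (from the text above)
L2HDR_BYTES = 432          # bytes reserved by the l2hdr structure per block

def read_cache_ram_bytes(cache_bytes, block_bytes):
    """RAM needed to index a Read Cache of the given size."""
    return (cache_bytes - RESERVED_BYTES) * L2HDR_BYTES / block_bytes

one_tib = 1024 ** 4
for block in (8 * 1024, 64 * 1024, 128 * 1024):
    gib = read_cache_ram_bytes(one_tib, block) / 1024 ** 3
    print(f"{block // 1024}KB block, 1TB cache: {gib:.2f} GB RAM")
```

Running this reproduces the three worked examples above (about 54, 6.75, and 3.37 GB of RAM for a 1TB Read Cache).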
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Repeatedly adding and detaching disks from groups can cause a subsequent detach operation to fail while the disk still appears on the list of available disks. Attempting to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks appear on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add-group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them with the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating many snapshots, clones, or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012; on other Windows systems, the Scale Logic VSS Hardware Provider Configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Files cannot be removed from a dataset whose quota is exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some WebGUI forms to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system version. Although the file system upgrade is completely safe for your data and takes only a few minutes, be aware that this operation cannot be undone. To upgrade a single Zpool, use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX with an Intel® Ethernet Controller XL710 Family adapter, it is necessary to update the network controller&#039;s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS; otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name changes, e.g. during export and import under a different name, the FSIDs for NFS shares located on that Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
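The “Extended tools” function above is the supported route. As a hedged sketch, the equivalent low-level step on a generic Linux system would be `wipefs`; the guard function below only illustrates double-checking the target before wiping, since wiping the wrong disk destroys data. The device path /dev/sdc is hypothetical.

```shell
# Hedged sketch: guard a destructive wipe of LVM/ZFS signatures.
# On a live system the real command would be:  wipefs --all /dev/sdX
confirm_wipe() {
  case "$1" in
    /dev/sda)  echo "refusing $1: likely the boot disk" ;;
    /dev/*)    echo "would run: wipefs --all $1" ;;
    *)         echo "not a device path: $1" ;;
  esac
}
confirm_wipe /dev/sdc
```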
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, the system may require re-activation after updating to up11. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX as a Hyper-V or VMware guest, the ALB, Round-Robin and Round-Robin with RDMA bonding modes are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset backing a VMware guest that performs many I/O operations can make the process of deleting a VMware snapshot take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling quota functionality on a dataset can interrupt active file transfers. Please enable quota on the dataset before using it in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot be named the same as a Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as a Zpool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so going back to an earlier version is impossible. If you need to return to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation like reboot, shutdown or power-off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk that has incomplete data, boot the system, plug the disk back in and add it once again to the SW RAID.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created by the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of these actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25, the default Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of NX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which is a common factor in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25, qlini_mode was set to “enabled”). In order to verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select the qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
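As a hedged sketch, outside the TUI a loaded kernel module's parameter can usually be read back from sysfs on a generic Linux system; the exact path below is an assumption based on the standard /sys/module layout, and the TUI remains the supported way to change the value.

```shell
# Hedged sketch: read back and sanity-check the qlini_mode value.
# On a live system:  cat /sys/module/qla2xxx_scst/parameters/qlini_mode
check_qlini_mode() {
  if [ "$1" = "exclusive" ]; then
    echo "ok"
  else
    echo "wrong value '$1': fix via TUI"
  fi
}
check_qlini_mode exclusive
```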
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on the FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, instability of the FC connection with client machines can occur. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations the system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, and manual flushing is required (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In case of a large number of disks, zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before using the rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback operation, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed on the AD server, NX must be informed about it. However, using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX leads to a situation where the renamed user is deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The following special characters cannot be used in a password for the e-mail notification feature: #&amp;amp;nbsp;: +. They can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in the LSI Storage Authority Software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on one NFS access list and you would like to move it to another access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Next, edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load, especially waiting I/O, may increase and the system may become unstable. Expanding the pool space is recommended.&lt;br /&gt;
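A quick way to watch for the 80% threshold mentioned above is the standard OpenZFS `zpool list` command. This is a hedged sketch: the helper function and pool names are illustrative, not part of the product's own tooling.

```shell
# Hedged sketch: warn when a pool's capacity exceeds the 80% threshold.
check_capacity() {
  # $1 = capacity as an integer percentage (no % sign)
  if [ "$1" -gt 80 ]; then
    echo "WARNING: over 80% full"
  else
    echo "OK"
  fi
}

# Live usage:
#   zpool list -H -o name,capacity | while read -r name cap; do
#     echo "$name: $(check_capacity "${cap%\%}")"
#   done
check_capacity 85
```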
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the system shows the old value taken directly from cache memory. We recommend using the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using at least 64KB sizes for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, which NX interprets as being on battery. When it times out, it shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is changing the booting mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to NX up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we disable the write-back cache on all HDD disks by default, but we do not disable it on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to the data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29 we disable the write-back cache on HDD disks by default during the bootup procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, it may take from a few minutes up to several dozen minutes to populate the list in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the gzip-9 compression algorithm, the system can behave unstably when copying data to storage. Use this compression algorithm only in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using more than 500 zvols in the system, the responsiveness of the Web-GUI may be low and the system may have problems with the import of zpools.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer may report an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve it, please change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI. Open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of the NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
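As a hedged sketch, the current value can also be inspected from sysfs on a generic ZFS-on-Linux system before going through the TUI procedure; the path below is assumed from the standard module-parameter layout, and the TUI remains the supported way to change the value.

```shell
# Hedged sketch: inspect zfs_vdev_max_active and flag the bad default.
# On a live system:  cat /sys/module/zfs/parameters/zfs_vdev_max_active
check_vdev_max_active() {
  if [ "$1" -eq 1 ]; then
    echo "too low: raise to 1000 via TUI"
  else
    echo "ok: $1"
  fi
}
check_vdev_max_active 1
```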
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX enables the use of drives with a verified SED configuration only.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in the TUI also lists devices that are not currently supported. To check whether a given device is supported, see the HCL available on the Scale Logic webpage.&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait in the system after enabling the autotrim functionality on zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand and at a convenient time (e.g. when the system is working under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In order to provide stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a new version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it  manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in a combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - settings of the combo box manage the second virtual disk. &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of unavailability of a given subdomain, information about its status will not be updated on the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it’s recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers employ REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, using the “Rescan” button in the Storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic NX (the network interface is usually: eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in the TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r2_Release_Notes&amp;diff=1459</id>
		<title>Scale Logic NX ver.1.0 up30r2 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r2_Release_Notes&amp;diff=1459"/>
		<updated>2024-04-25T15:29:59Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2024-03-11&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 55016&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;span style=&amp;quot;color:#cc0033&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039; &amp;lt;/span&amp;gt;To upgrade the product, you need to have an active Technical Support plan. You will be prompted to re-activate your product after installing the upgrade to verify your Technical Support status.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have an active Technical Support plan, please contact Scale Logic sales team or your reseller for further assistance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== Support for LED disk location for NVMe drives on Intel platforms ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS (v2.1.14) ===&lt;br /&gt;
&lt;br /&gt;
=== Ledctl (v0.97) ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.1) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== The Hot-Plug mechanism for NVMe drives does not work properly on several environments ===&lt;br /&gt;
&lt;br /&gt;
=== The system restart or shutdown procedure does not function correctly in environments utilizing the HP Smart Array controller (hpsa driver) ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
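The same Advanced Options can also be scripted from the ESXi shell with `esxcli`. This is a hedged sketch: the adapter name vmhba64 is hypothetical (list yours with `esxcli iscsi adapter list` first), and the helper below only prints the commands so the intent is visible without a live ESXi host.

```shell
# Hedged sketch: emit the esxcli commands matching the table above.
emit_param_cmd() {
  # $1 = adapter name, $2 = parameter key, $3 = new value
  echo "esxcli iscsi adapter param set -A $1 -k $2 -v $3"
}

for kv in "MaxOutstandingR2T 8" "FirstBurstLength 65536" \
          "MaxBurstLength 1048576" "MaxRecvDataSegLen 1048576"; do
  set -- $kv                      # split "key value" into $1 and $2
  emit_param_cmd vmhba64 "$1" "$2"
done
```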
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to the persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record is enabled by default. This option worsens performance, but data is written safely. In order to improve NFS performance you can use the Asynchronous data record, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
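The sync behaviour described above is controlled by the standard OpenZFS `sync` property. This is a hedged sketch: `Pool-0/zvol00` is a hypothetical name, and the small helper only encodes the trade-off stated above (sync=always is safest; the faster modes should be paired with a UPS).

```shell
# Hedged sketch of the standard OpenZFS commands for the sync property:
#   zfs get sync Pool-0/zvol00            # inspect the current value
#   zfs set sync=always   Pool-0/zvol00   # safest; pair with mirrored log devices
#   zfs set sync=standard Pool-0/zvol00   # faster; UPS strongly recommended
pick_sync() {
  # $1 = "yes" if the system is protected by a UPS
  if [ "$1" = "yes" ]; then echo "standard"; else echo "always"; fi
}
pick_sync no
```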
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the web browser&#039;s cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux (64bit)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming the deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - In the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As this operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and consult the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, there is no way to reclaim the space released by deleted data on thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, where possible use a single zvol on a zpool dedicated to deduplication, and delete the zpool that contains that single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 1MB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.67B / 1024 / 1024 / 1024 = 1.67GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; So for every extra 1TB of storage, the system needs an extra 1.67GB of RAM.&lt;br /&gt;
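The worst-case RAM arithmetic above can be sanity-checked with a short script. This is only an illustrative sketch; the function name is ours, and the 320-byte DDT entry size and the 0.75/0.25 reservation fractions are taken directly from the formula above:

```python
# Worst-case RAM needed for the ZFS deduplication table (DDT), per the
# formula above: (zvol size / volume block size) * 320 B per DDT entry,
# divided by 0.75 (ARC share of RAM) and 0.25 (DDT share of ARC).
DDT_ENTRY_BYTES = 320    # size of one DDT table entry
ARC_SHARE = 0.75         # fraction of RAM reserved for ARC
DDT_SHARE = 0.25         # fraction of ARC available for the DDT

def dedup_ram_bytes(zvol_size: int, block_size: int) -> float:
    """Worst-case RAM (bytes) needed to hold the DDT for one zvol."""
    return (zvol_size / block_size) * DDT_ENTRY_BYTES / ARC_SHARE / DDT_SHARE

TIB = 1024 ** 4
GIB = 1024 ** 3

# 1 TB of data with a 64 KB volume block size -> ~26.67 GB of RAM
print(round(dedup_ram_bytes(TIB, 64 * 1024) / GIB, 2))
# 1 TB of data with a 128 KB volume block size -> ~13.33 GB of RAM
print(round(dedup_ram_bytes(TIB, 128 * 1024) / GIB, 2))
```

Doubling the volume block size halves the number of DDT entries, which is why the larger block sizes in the examples above need proportionally less RAM.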
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, in which the data is completely unique and nothing will be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD and deduplication will perform well while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the client file-system block size with the zvol volume block size. A simple example is Windows NTFS, whose default format block size is 4k, while the zvol volume block size defaults to 128k. With these defaults deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k with a 128k zvol volume block size, a deduplication match can fail at most once per file, because a file can be aligned at only 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To have all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must equal 64k. NTFS=32k with zvol=32k also works, but the deduplication table will be twice as large. That is why NTFS=64k with zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
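The alignment argument above reduces to a simple division: the number of positions a client file-system block can occupy inside one zvol block. This illustrative snippet (the function name is ours, not part of the product) makes the arithmetic explicit:

```python
# Number of possible alignments of a client file-system block inside one
# zvol volume block: zvol_block / fs_block. Deduplication can only match
# blocks that share the same alignment, so 1 is the ideal value.
def alignment_positions(zvol_block: int, fs_block: int) -> int:
    return zvol_block // fs_block

# NTFS 4k on a 128k zvol: 32 possible alignments, dedup mostly misses
print(alignment_positions(128 * 1024, 4 * 1024))
# NTFS 64k on a 128k zvol: 2 alignments, at most one miss per file
print(alignment_positions(128 * 1024, 64 * 1024))
# NTFS 64k on a 64k zvol: 1 alignment, every identical block can match
print(alignment_positions(64 * 1024, 64 * 1024))
```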
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned by ZFS natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; the scrub will now show the new physical data space. Comparing the data size on the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact pool deduplication ratio can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets share the same name but belong to different Zpools, all targets with that name will be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata; those disks must be cleared before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it becomes necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license capacity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB used with deduplication or a read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 
57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block 
size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read 
Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 
7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
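The arithmetic above can be double-checked with a short script. This is only an illustrative sketch: the helper function name is ours, while the 4718592B and 432B constants come directly from the formula in these notes.

```python
# Estimate the RAM needed for Read Cache headers, using the formula above:
# (cache_size - 4718592B) * 432B / volume_block_size.
def read_cache_ram_bytes(cache_size_b: int, block_size_b: int) -> int:
    """RAM in bytes needed for a given Read Cache size and volume block size."""
    return (cache_size_b - 4718592) * 432 // block_size_b

one_tb_cache = 1024 ** 4                             # 1TB Read Cache in bytes
ram = read_cache_ram_bytes(one_tb_cache, 131072)     # 128KB volume block size
print(ram)                                           # bytes needed
print(round(ram / 1024 ** 3, 2))                     # same value in GB
```

For the 128KB block size example this reproduces the 3623863104B (3.37GB) figure given above; smaller block sizes raise the RAM requirement proportionally.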
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Repeatedly adding disks to and detaching disks from groups can cause a subsequent detach operation to fail while the disk still appears on the list of available disks. Attempting to add such a disk to a group then fails with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes, after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but are missing from the list of available disks. In this case, click the rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks previously used as a Spare or a Read Cache, you must first clean them with the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating a large number of snapshots, clones, or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded dataset quota prevents files from being removed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets whose quota has been exceeded cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. To upgrade a single Zpool, use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX with an Intel® Ethernet Controller XL710 Family adapter, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name changes, e.g. during an export and import under a different name, the FSIDs of the NFS shares located on this Zpool also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, the system may require re-activation after updating to up11. This issue will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When running Scale Logic NX as a Hyper-V or VMware guest, the ALB, Round-Robin, and Round-Robin with RDMA bonding modes are not supported. Please use another bonding type.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset backing a VMware guest that performs many I/O operations can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Please enable the quota on a dataset before putting it into a production environment, or make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as a Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Downgrading to an earlier version is therefore impossible. If you need to go back to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown, or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, unplug the disk with the incomplete data, boot the system, plug the disk back in, and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If you used the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of these actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25, the default Write Cache Log bias for zvols was set to “In Pool (Throughput)”. In the final release of NX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which is common in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch the Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters, select the qla2xxx_scst QLogic Fibre Channel HBA Driver, and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on the FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or an FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations the system page cache cannot flush File I/O errors by itself and has to be flushed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache and require manual flushing (in the GUI, use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== With a large number of disks, a zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed on the AD server, NX must be informed of the change. However, using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX causes the renamed user to be deleted from the share’s access list. The new username then needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The special characters #, :, and + cannot be used in the password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in the LSI Storage Authority Software does not work with SMTP servers that require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on one NFS access list and you would like to move it to another, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Then edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load may increase, especially I/O wait, and the system may become unstable. Expanding the pool capacity is recommended.&lt;br /&gt;
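As a simple illustration of the 80% rule described above (the helper function and the pool figures are made up for this sketch, not output from a real system):

```python
# Hypothetical helper: flag zpools whose utilization exceeds the 80% threshold.
def needs_expansion(used_b: int, size_b: int, threshold: float = 0.80) -> bool:
    """Return True when used space exceeds the given fraction of pool capacity."""
    return used_b / size_b > threshold

# Illustrative pool figures only: (used bytes, total bytes).
pools = {"Pool-0": (85 * 2**40, 100 * 2**40),   # 85TiB used of 100TiB
         "Pool-1": (40 * 2**40, 100 * 2**40)}   # 40TiB used of 100TiB
for name, (used, size) in pools.items():
    print(name, "expansion recommended" if needs_expansion(used, size) else "OK")
```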
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some situations the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the WebGUI shows old values taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes or small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using block/record sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns a value that NX misinterprets as running on battery. When the timeout expires, NX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updates from a previous version), the system cannot boot in UEFI mode if the boot medium is controlled by an LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is to change the boot mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== With hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to NX up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29, we disable the write-back cache on all HDD disks by default, but we do not disable it on SSD drives and hardware RAID volumes. It can nevertheless happen that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to the data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, then restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29, we disable the write-back cache on HDD disks by default during the bootup procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, it may take from a few minutes up to several dozen minutes to populate the list in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can become unstable while copying data to storage. This compression algorithm should only be used in environments with very powerful processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 500 zvols in the system, the responsiveness of the Web-GUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To check the internet connection, try to get the date and time from an NTP server using the Web-GUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reported an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI. Open the NX TUI and use the CTRL+ALT+W key combination to launch the Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters, select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of the NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX can only be used with drives that have a verified SED configuration.&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported.&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait after enabling the autotrim functionality on zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand, and at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To ensure stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck the &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in the combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - this combo box manages the second virtual disk.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Rolling back VMware virtual machine data from snapshots using the On- &amp;amp; Off-site Data Protection functionality is not supported when the virtual machine consists of multiple disks. This specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If a given subdomain is unavailable, the information about its status is not updated in the WebGUI (even after pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it’s recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locate and disk fault functionality does not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, pressing the “Rescan” button in the Storage tab of the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic NX (usually eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r2_Release_Notes&amp;diff=1458</id>
		<title>Scale Logic NX ver.1.0 up30r2 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r2_Release_Notes&amp;diff=1458"/>
		<updated>2024-04-25T15:03:19Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2024-03-11&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 55016&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;span style=&amp;quot;color:#cc0033&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039; &amp;lt;/span&amp;gt;To upgrade the product, you need to have an active Technical Support plan. You will be prompted to re-activate your product after installing the upgrade to verify your Technical Support status.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have an active Technical Support plan, please contact the Scale Logic sales team or your reseller for further assistance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== Support for LED disk location for NVMe drives on Intel platforms ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS (v2.1.14) ===&lt;br /&gt;
&lt;br /&gt;
=== Ledctl (v0.97) ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.1) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== The Hot-Plug mechanism for NVMe drives does not work properly on several environments ===&lt;br /&gt;
&lt;br /&gt;
=== The system restart or shutdown procedure does not function correctly in environments utilizing the HP Smart Array controller (hpsa driver) ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
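If preferred, the same Advanced Options can also be applied from the ESXi command line with esxcli. This is only a sketch, not an official Scale Logic procedure: the adapter name vmhba33 is an assumption, and exact esxcli syntax can vary between ESXi versions.

```shell
# List iSCSI adapters first to find the software adapter name (assumed vmhba33 below).
esxcli iscsi adapter list

# Apply the recommended initiator parameters from the table above.
esxcli iscsi adapter param set --adapter=vmhba33 --key=MaxOutstandingR2T --value=8
esxcli iscsi adapter param set --adapter=vmhba33 --key=FirstBurstLength  --value=65536
esxcli iscsi adapter param set --adapter=vmhba33 --key=MaxBurstLength    --value=1048576
esxcli iscsi adapter param set --adapter=vmhba33 --key=MaxRecvDataSegLen --value=1048576
```

A rescan of the adapter (or a host reboot) may be needed before the new values take effect on existing sessions.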
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recently cached data (up to 5 seconds) can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, synchronous data recording is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use asynchronous data recording, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some minor problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings -&amp;gt; Options -&amp;gt; Advanced - General -&amp;gt; Configuration -&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management -&amp;gt; [disk] -&amp;gt; Properties -&amp;gt; Tools. As the operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. 
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, there is no possibility to reclaim the space released by deleting data on thin-provisioned LUNs.&lt;br /&gt;
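On Windows 2012, the DisableDeleteNotification registry value described above can also be toggled with the built-in fsutil tool from an elevated command prompt. This is a hedged sketch of the equivalent commands; verify the behavior on your own system before relying on it.

```shell
REM Query the current file-delete notification state (0 = enabled, 1 = disabled)
fsutil behavior query DisableDeleteNotify

REM Disable delete notification (turns off automatic space reclaim)
fsutil behavior set DisableDeleteNotify 1

REM Set back to 0 before running the "Optimize" tool to reclaim free space
fsutil behavior set DisableDeleteNotify 0
```

A change made with fsutil takes effect without a reboot, which makes it convenient for temporarily disabling reclaim during large deletions.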
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, please use (if possible) a single zvol on a zpool dedicated to deduplication, and delete the zpool which includes that single zvol.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of System RAM required for deduplication, use this formula:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                (Size of zvol / Volume block size) * 320B / 0.75 / 0.25&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 64KB Volume block size:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&lt;br /&gt;
                28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 128KB Volume block size:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&lt;br /&gt;
                14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 1MB Volume block size:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.66B&lt;br /&gt;
                1789569706.66B / 1024 / 1024 / 1024 = 1.66GB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
so for every extra 1TB of storage, the system needs an extra 1.66GB of RAM.&lt;br /&gt;
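As a cross-check, the worst-case formula above can be evaluated with a small awk script; the values below are the 1TB / 64KB example from this section.

```shell
# Worst-case deduplication RAM estimate:
#   (zvol size / volume block size) * 320 B per DDT entry / 0.75 / 0.25
awk 'BEGIN {
  zvol  = 1099511627776   # zvol size in bytes (1TB)
  block = 65536           # volume block size in bytes (64KB)
  ram   = (zvol / block) * 320 / 0.75 / 0.25
  printf "%.2f GB RAM per TB of unique data\n", ram / 1024 / 1024 / 1024
}'
# prints: 26.67 GB RAM per TB of unique data
```

Changing block to 131072 reproduces the 13.33GB figure for the 128KB volume block size.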
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations only apply to the worst-case scenario, when data is completely unique and will not be deduplicated. For deduplicable data, the RAM requirement decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD and deduplication will work with good performance while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system format block size with the zvol volume-block-size. A simple example is the Windows NTFS file system with its default format block size of 4k, while the zvol default volume-block-size is 128k. With such defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume-block-size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To have all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume-block-size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned natively by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; it will show the new physical data space. Comparing the data size on the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows Allocation unit size for NTFS should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to install the system on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from a temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 
57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block 
size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read 
Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 
7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 /1024 = 
6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read 
Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 
3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 /1024 = 3.37GB&lt;br /&gt;
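The worked examples above can be reproduced with a few lines of shell arithmetic. The constants (4718592 bytes of reserved size and labels, 432 bytes per l2hdr entry) are taken directly from the formula in this section; the `ram_needed` helper name is our own illustration, not an NX command.

```shell
#!/bin/sh
# RAM needed = (Read Cache size - reserved size and labels) * l2hdr bytes / block size
ram_needed() {  # args: cache_size_bytes volume_block_bytes
  echo $(( ($1 - 4718592) * 432 / $2 ))
}

CACHE=$((1024 * 1024 * 1024 * 1024))   # 1TB Read Cache
ram_needed "$CACHE" 8192     # 8KB   blocks -> 57981809664 (~54GB)
ram_needed "$CACHE" 65536    # 64KB  blocks -> 7247726208  (~6.75GB)
ram_needed "$CACHE" 131072   # 128KB blocks -> 3623863104  (~3.37GB)
```

Running it prints the same byte counts as the three examples above, which makes it easy to check other cache sizes and block sizes before provisioning RAM.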
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. An attempt to add this disk to a group then fails with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups, they may not be displayed on the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add-group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of it become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them using the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and the Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating many snapshots, clones, or zvols, some forms in the GUI become very slow. If you need to create many snapshots, clones, or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded dataset quota prevents files from being removed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups makes some WebGUI forms very slow. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. To upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name is changed, e.g. during an export and import under a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used when creating a Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, the system may require re-activation after an update to up11. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX as a Hyper-V or VMware guest, the ALB, Round-Robin, and Round-Robin with RDMA bonding modes are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can make deleting a VMware snapshot take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can make the process of deleting a VMware snapshot take a long time. Please take this into consideration when you set up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on dataset can cause file transfer interrupt ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Please enable the quota on the dataset before using it in a production environment, or make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so going back to an earlier version is impossible. If you need to return to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation like reboot, shutdown, or power off while data is being mirrored onto a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in, and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created via the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disk from the system before starting the administrative work to replace it. The order of these actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In early beta versions of NX up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of NX up25 the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a performance drop in environments with many random-access workloads, which are common in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
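As a cross-check outside the TUI, a kernel module parameter can also be read from sysfs. The sketch below is an assumption-labelled illustration: the `qla2xxx_scst` module name and the “exclusive” requirement come from the note above, while the `check_qlini_mode` helper and the sysfs path are standard Linux conventions, not an NX-documented interface.

```shell
#!/bin/sh
# Verify that qlini_mode is set to "exclusive", as required for FC Target mode.
check_qlini_mode() {
  mode=$1
  if [ "$mode" = "exclusive" ]; then
    echo "qlini_mode OK (exclusive)"
  else
    echo "WARNING: qlini_mode is '$mode', expected 'exclusive'"
  fi
}

# On a live NX system the value would be read from sysfs, e.g.:
#   mode=$(cat /sys/module/qla2xxx_scst/parameters/qlini_mode)
# Here the check is called with literal values for illustration:
check_qlini_mode exclusive
check_qlini_mode enabled
```

If the warning branch fires, change the parameter through the TUI procedure described above rather than editing sysfs directly, so the setting survives a reboot.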
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations the system page cache cannot flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (like overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In case of a large number of disks, a zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A user gets deleted from the share access list after changing the username on the AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed on the AD server, NX must be informed about it. However, using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX leads to a situation where the renamed user gets deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain certain special characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The following special characters: #&amp;amp;nbsp;: + cannot be used in a password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The e-mail alert configuration in LSI Storage Authority Software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on an NFS access list and you would like to move it to the other access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Then edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load may increase (especially waiting I/O), which can make the system unstable. Expanding the pool size is recommended.&lt;br /&gt;
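A simple way to watch for this threshold is to check the capacity column reported by `zpool list`. The sketch below is a hedged illustration: the 80% threshold comes from the note above, the `check_capacity` helper and the sample pool name `tank` are hypothetical, and only the commented-out line invokes a real `zpool` command.

```shell
#!/bin/sh
# Warn when a pool's used space exceeds 80%, the threshold named above.
check_capacity() {
  name=$1
  cap=${2%\%}                      # strip the trailing '%' from e.g. "83%"
  if [ "$cap" -gt 80 ]; then
    echo "WARNING: pool $name is ${cap}% full - consider expanding it"
  else
    echo "OK: pool $name is ${cap}% full"
  fi
}

# On a live system you would feed it real data, e.g.:
#   zpool list -H -o name,capacity | while read -r n c; do check_capacity "$n" "$c"; done
check_capacity tank 83%
check_capacity tank 42%
```

A check like this can be run from cron so the pool is expanded before the load problem described above appears.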
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such a case the WebGUI shows old values taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using at least 64KB sizes for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value that NX interprets as running on battery. When the timeout expires, NX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from a previous version), the system cannot boot in UEFI mode if your boot medium is controlled by an LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is changing the boot mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After updating to NX up29, the write-back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we disable the write-back cache on all HDD disks by default, but we do not disable it on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting a JBOD with the write-back cache enabled on disks can lead to data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, then restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29 we disable the write-back cache on HDD disks by default during the bootup procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, populating the list in the WebGUI may take from a few minutes up to several dozen minutes.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can behave unstably while copying data to storage. Use this compression algorithm only in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When more than 500 zvols are used in the system, the responsiveness of the WebGUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To check the internet connection, try to get the date and time from the NTP server using the WebGUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reports an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks after a new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI: open the NX TUI and press the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message and type in the password. Choose the option Kernel module parameters, select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of the NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
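The TUI path above is the supported way to make this change persistent. As a hedged aside, on a generic ZFS-on-Linux system the same tunable is exposed through sysfs, which can be useful to verify which value is currently active (this is the standard OpenZFS interface, not an NX-specific one, and a value written there does not survive a reboot):

```shell
# Show the currently active value (1 on an affected up29r2 installation).
cat /sys/module/zfs/parameters/zfs_vdev_max_active

# Raise it for the running kernel only; the TUI procedure is still
# required to make the setting permanent.
echo 1000 > /sys/module/zfs/parameters/zfs_vdev_max_active
```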
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX supports only drives with a verified SED configuration - they are tagged as &amp;quot;SED&amp;quot; and listed on the Scale Logic NX HCL. To configure the functionality properly, please follow the steps described in the Knowledge Base article: [https://kb.scalelogicinc.com/NX-sed-support-in-NX_3381.html https://kb.scalelogicinc.com/NX-sed-support-in-NX_3381.html]&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check whether a given device is supported, see the HCL available on the Scale Logic webpage ([https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]). To find the devices for which the SED functionality is supported, enter “SED” in the keyword field of the &amp;quot;Search by component&amp;quot; form on the Scale Logic HCL page and click the search button (loupe icon).&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If the system load or iowait increases drastically after enabling the autotrim functionality on the zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand and at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To ensure stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts display data incorrectly for systems using Active-Backup bonding with RDMA. The charts reflect the usage of only one network interface included in the Active-Backup bond (charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicate instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;Virtual hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck the &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in the combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - this combo box manages the second virtual disk.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Rolling back VMware virtual machine data from snapshots using the On- &amp;amp; Off-site Data Protection functionality is not supported when the virtual machines consist of multiple disks. This specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If a given subdomain becomes unavailable, information about its status will not be updated in the WebGUI (even after pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;Problems with user and group synchronization within an Active Directory one-way trust configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it is recommended to use a two-way trust configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal use of the REST API, we highly recommend that all customers employ REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality does not work properly when using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and disk faulty functionality does not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, pressing the “Rescan” button in the storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic NX (usually eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r2_Release_Notes&amp;diff=1457</id>
		<title>Scale Logic NX ver.1.0 up30r2 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r2_Release_Notes&amp;diff=1457"/>
		<updated>2024-04-25T15:02:46Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2024-03-11&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 55016&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;span style=&amp;quot;color:#cc0033&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039; &amp;lt;/span&amp;gt;To upgrade the product, you need to have an active Technical Support plan. You will be prompted to re-activate your product after installing the upgrade to verify your Technical Support status.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have an active Technical Support plan, please contact the Scale Logic sales team or your reseller for further assistance.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== Support for LED disk location for NVMe drives on Intel platforms ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS (v2.1.14) ===&lt;br /&gt;
&lt;br /&gt;
=== Ledctl (v0.97) ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.1) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== The Hot-Plug mechanism for NVMe drives does not work properly in several environments ===&lt;br /&gt;
&lt;br /&gt;
=== The system restart or shutdown procedure does not function correctly in environments utilizing the HP Smart Array controller (hpsa driver) ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
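The same initiator parameters can also be set from the ESXi command line with esxcli; the adapter name vmhba33 below is a placeholder for your iSCSI software adapter, and this is only a sketch - verify the exact syntax against the VMware documentation for your ESXi version:

```shell
# List the current iSCSI parameters of the software adapter.
esxcli iscsi adapter param get --adapter=vmhba33

# Apply the values recommended above (vmhba33 is a placeholder name).
esxcli iscsi adapter param set --adapter=vmhba33 --key=MaxOutstandingR2T --value=8
esxcli iscsi adapter param set --adapter=vmhba33 --key=FirstBurstLength  --value=65536
esxcli iscsi adapter param set --adapter=vmhba33 --key=MaxBurstLength    --value=1048576
esxcli iscsi adapter param set --adapter=vmhba33 --key=MaxRecvDataSegLen --value=1048576
```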
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is the default. However, it can decrease write performance, since all operations are written and flushed directly to the persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recently cached data (up to 5 seconds) can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record is enabled by default. This option reduces performance, but data is written safely. To improve NFS performance you can use the Asynchronous data record, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
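On systems where the standard OpenZFS command-line tools are available, the sync behavior described above maps to the zfs sync dataset property; Pool-0/zvol00 below is a hypothetical dataset name used only for illustration:

```shell
# Safest (the default described above): every sync request is honored
# and flushed to stable storage before returning.
zfs set sync=always Pool-0/zvol00

# Faster alternatives from the section above; with these, up to ~5 s of
# cached data can be lost on a sudden power failure - use only with a UPS.
zfs set sync=standard Pool-0/zvol00

# Verify the current setting.
zfs get sync Pool-0/zvol00
```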
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some minor problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux ( 64bit )&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk&amp;amp;nbsp;: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID&amp;amp;nbsp;: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As this operation can generate a very high load in the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
In case of using Windows 2008, there is no possibility to reclaim the space released by deleted data on thin-provisioned LUNs.&lt;br /&gt;
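On Windows, the same file-delete notification switch can also be toggled without editing the registry directly, using the built-in fsutil tool from an elevated command prompt (shown here as an equivalent to the registry steps above):

```shell
:: Disable automatic reclaim notifications (equivalent to setting the
:: DisableDeleteNotification registry value to 1).
fsutil behavior set DisableDeleteNotify 1

:: Re-enable them before running the "Optimize" tool to reclaim space.
fsutil behavior set DisableDeleteNotify 0

:: Query the current state.
fsutil behavior query DisableDeleteNotify
```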
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting the zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such operation only after-hours. To avoid this issue, please use (if possible) single zvol on zpools dedicated for deduplication and delete the zpool which includes the single zvol.&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of System RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp
;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - is the size of entry in DDT table&amp;lt;br/&amp;gt;0.75 - Percentage of RAM reservation for ARC (75%)&amp;lt;br/&amp;gt;0.25 - Percentage of DDT reservation in ARC (25%)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;
amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 
/ 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 26.67GB RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 
14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 13.33GB RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;Example for 1TB data and 1MB Volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;(1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.66B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;1789569706.66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;so for every extra 1TB of storage, the system needs an extra 1.66GB of RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
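The worst-case deduplication RAM figures above all follow one formula, which a short script (illustrative only, not part of NX) can reproduce. The constants, 320 B per deduplication-table entry and the 0.75 and 0.25 factors, are assumptions copied from the calculations in this section and may differ between ZFS builds.

```python
# Worst-case deduplication RAM estimate, reproducing the figures above.
# Constants are assumptions copied from this release-note section:
#   320 B  per deduplication-table entry
#   0.75   fraction of ARC available for metadata
#   0.25   fraction of metadata space available for the dedup table
def dedup_ram_bytes(data_bytes: int, block_size_bytes: int) -> float:
    entries = data_bytes / block_size_bytes  # one table entry per block
    return entries * 320 / 0.75 / 0.25

TIB = 1024 ** 4
GIB = 1024 ** 3

# 1TB of unique data, 128KB volume block size -> ~13.33GB of RAM
print(dedup_ram_bytes(TIB, 128 * 1024) / GIB)
# 1TB of unique data, 1MB volume block size  -> ~1.66GB of RAM
print(dedup_ram_bytes(TIB, 1024 * 1024) / GIB)
```

As the two calls show, the per-terabyte RAM cost scales inversely with the volume block size, which is why the section recommends larger blocks when deduplication is enabled.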
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, when data is completely unique and cannot be deduplicated. For deduplicable data, the need for RAM drastically decreases. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD and deduplication will work with good performance using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system format block size with the zvol volume-block-size. A simple example is a Windows NTFS file system with the default format block size of 4k, while the zvol default volume-block-size is 128k. With defaults like this, deduplication will mostly NOT match because files can be aligned in 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume-block-size remains 128k, a deduplication match can fail only once, because a file can be aligned in 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To achieve matching of all files with efficient memory usage, NTFS must use a 64k format block size and the zvol volume-block-size must equal 64k. Another option is NTFS=32k and zvol=32k, but in that case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
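The alignment arithmetic above (128/4 = 32 possible positions, 128/64 = 2) can be sketched with a tiny helper; the function name is purely illustrative and not part of NX.

```python
# Number of possible file alignments on the pool for a given client
# file-system allocation-unit size vs. zvol volume-block-size, as
# described above: deduplication can only match blocks that line up.
def alignments(zvol_block: int, fs_block: int) -> int:
    return zvol_block // fs_block

KB = 1024
print(alignments(128 * KB, 4 * KB))   # NTFS 4k on a 128k zvol -> 32 positions
print(alignments(128 * KB, 64 * KB))  # NTFS 64k on a 128k zvol -> 2 positions
print(alignments(64 * KB, 64 * KB))   # matched sizes -> 1 position, always hits
```

The fewer possible positions, the sooner every alignment already exists on the pool and every new write deduplicates.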
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned natively by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; the scrub will now show the new physical data space. Comparing the data size on the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from a temporary medium such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. In order to reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block size&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
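The three Read Cache examples above use the same formula with different block sizes; a short script (illustrative, not part of NX) reproduces them. The 4718592 B reserved size and the 432 B l2hdr overhead are constants taken from this section and may vary between ZFS versions.

```python
# Read Cache (L2ARC) RAM estimate, reproducing the figures above.
# Constants are assumptions copied from this release-note section:
#   4718592 B - reserved size and labels on the cache device
#   432 B     - RAM consumed per cached block (l2hdr structure)
def read_cache_ram_bytes(cache_bytes: int, block_size_bytes: int) -> float:
    return (cache_bytes - 4718592) * 432 / block_size_bytes

TIB = 1024 ** 4
GIB = 1024 ** 3

# 1TB Read Cache at 8KB, 64KB and 128KB volume block sizes
for bs in (8192, 65536, 131072):
    print(bs, read_cache_ram_bytes(TIB, bs) / GIB)
```

The script makes the trade-off explicit: halving the volume block size doubles the RAM needed to index the same Read Cache, which is why 64KB or larger blocks are recommended.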
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks which were part of the Zpool become immediately available. Before you can reuse disks which were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider Configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded dataset quota does not allow files to be removed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please resize the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name is changed, e.g. during export and import with a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after the update to up11 the system may require re-activation. This issue will be removed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using Scale Logic NX as a Hyper-V or VMware guest, bonding ALB, Round-Robin and Round-Robin with RDMA are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when you set up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on dataset can cause file transfer interrupt ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling quota functionality on a dataset can interrupt file transfers. Before enabling quota on a dataset in a production environment, make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Share can not be named the same as Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the Zpool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure, so going back to an earlier version is impossible. If you need to return to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation like reboot, shutdown or power off while data is being mirrored to a newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created with the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of disk failure, please remove the damaged disks from the system before starting administrative work to replace the disk. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25 the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of NX up25 the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note, that “In Pool (Throughput)” setting may cause a drop in performance in environments with a lot of random access workloads which is a common factor for a majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25 qlini_mode was set to “enabled”). To verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of mixed FIO/WT and FIO/WB zvol configurations over FC, one can observe significantly decreased performance on FIO/WT.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, this can cause instability of the FC connection with client machines. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (like overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache, which then requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In case of a large number of disks, zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback, and then reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed on the AD server, NX must be informed about the change. However, the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX deletes the changed user from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The following special characters #&amp;amp;nbsp;: + cannot be used in the password for the e-mail notification feature, as they can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in the LSI Storage Authority Software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on the NFS access list and you would like to move it to another access list, it has to be performed in two steps. First delete the IP address from the current list and apply the changes. Next edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool reaches more than 80%, the system tries to utilize the available space to the maximum. As a result, the system load (especially waiting I/O) may increase and cause unstable operation. Expanding the pool size is recommended.&lt;br /&gt;
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such cases the system shows the old value taken directly from cache memory. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving small zvol block sizes or dataset record sizes generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate very high load rendering the system unstable. We recommend using at least 64KB sizes for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, which NX interprets as running on battery. When it times out, it shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is changing the booting mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to NX up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we disable write-back cache on all HDD disks by default, but we do not disable write-back cache on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+T, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting a JBOD with the write-back cache enabled on disks can lead to data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If write-back cache is enabled on disks in a JBOD, then restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29 we disable write-back cache on HDD disks by default during the bootup procedure. We do not disable write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, populating the list in the WebGUI may take from a few minutes up to several dozen minutes.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the gzip-9 compression algorithm, the system can become unstable while copying data to storage. This compression algorithm should only be used in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using more than 500 zvols in the system, the responsiveness of the WebGUI may be low and the system may have problems with the import of zpools.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To check the internet connection, try to get the date and time from the NTP server using the WebGUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer may report an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot; This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter, which is set to 1 on a new installation of NX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI: open the NX TUI and press the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option Kernel module parameters, select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX supports only drives with a verified SED configuration - they are tagged as &amp;quot;SED&amp;quot; and listed on the Scale Logic NX HCL. To properly configure the functionality, please follow the steps described in the Knowledge Base article: [https://kb.scalelogicinc.com/NX-sed-support-in-NX_3381.html https://kb.scalelogicinc.com/NX-sed-support-in-NX_3381.html]&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI also lists devices that are not currently supported. To check if a given device is supported, see the HCL list available on the Scale Logic webpage ([https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]). To find devices for which the SED functionality is supported, enter “SED” in the keyword field of the &amp;quot;Search by component&amp;quot; form on the Scale Logic HCL page and click the search button (loupe icon).&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait in the system after enabling the autotrim functionality on zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand and at a convenient time (e.g. when the system is working under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To provide stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck the &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in the combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - the settings of this combo box manage the second virtual disk.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. The specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of unavailability of a given subdomain, information about its status will not be updated on the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it is recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers employ REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and disk faulty functionality do not work properly using the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and disk faulty functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, using the “Rescan” button in the storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in the Scale Logic NX (the network interface is usually: eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r2_Release_Notes&amp;diff=1456</id>
		<title>Scale Logic NX ver.1.0 up30r2 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r2_Release_Notes&amp;diff=1456"/>
		<updated>2024-04-25T14:58:02Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2024-03-11&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 55016&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== New ==&lt;br /&gt;
&lt;br /&gt;
=== Support for LED disk location for NVMe drives on Intel platforms ===&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== ZFS (v2.1.14) ===&lt;br /&gt;
&lt;br /&gt;
=== Ledctl (v0.97) ===&lt;br /&gt;
&lt;br /&gt;
=== Chelsio T4/T5 10 Gigabit Ethernet controller driver (cxgb4, v3.19.0.1) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== The Hot-Plug mechanism for NVMe drives does not work properly on several environments ===&lt;br /&gt;
&lt;br /&gt;
=== The system restart or shutdown procedure does not function correctly in environments utilizing the HP Smart Array controller (hpsa driver) ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance, since all operations are written and flushed directly to the persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares the Synchronous data record is enabled by default. This option makes performance worse, but data is written safely. To improve NFS performance you can use the Asynchronous data record, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the web browser cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
The utilization of physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when utilizing virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux (64bit)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt; Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - in the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, please change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load in the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools.
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, it is not possible to reclaim the space released by deleted data on thin-provisioned LUNs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load in the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, please use (if possible) a single zvol on zpools dedicated to deduplication, and delete the zpool which includes that single zvol.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of system RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB of data and a 64KB volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024
/ 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 26.67GB RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 
14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, system needs extra 13.33GB RAM.&lt;br /&gt;
&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;Example for 1TB data and 1MB Volume block size:&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706,66B&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; 1789569706,66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp;&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;so for every extra 1TB of storage, system needs extra 1.66GB RAM.&amp;lt;/div&amp;gt;&amp;lt;div&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
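The worked examples above can be sketched as a small calculation, e.g. in Python (a sketch only; the function name is illustrative, not part of the product):

```python
def ddt_ram_bytes(data_size_bytes, block_size_bytes):
    """Worst-case RAM needed for the deduplication table (DDT).

    320  - size of one DDT entry in bytes
    0.75 - fraction of RAM reserved for the ARC
    0.25 - fraction of the ARC reserved for the DDT
    """
    entries = data_size_bytes / block_size_bytes
    return entries * 320 / 0.75 / 0.25

TIB = 1024 ** 4  # 1099511627776 bytes
GIB = 1024 ** 3

# 1TB of data with a 64KB volume block size -> ~26.67GB of RAM
print(round(ddt_ram_bytes(TIB, 64 * 1024) / GIB, 2))   # 26.67
# 1TB of data with a 128KB volume block size -> ~13.33GB of RAM
print(round(ddt_ram_bytes(TIB, 128 * 1024) / GIB, 2))  # 13.33
```

Halving the block size doubles the number of DDT entries, which is why the RAM requirement scales inversely with the volume block size.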
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The calculations above apply only to the worst-case scenario, in which the data is completely unique and will not be deduplicated. For deduplicable data, the RAM requirement drastically decreases. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD, and deduplication will perform well while using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the client file system&#039;s format block size with the zvol volume block size. A simple example is a Windows NTFS file system with the default 4k format block size and a zvol with the default 128k volume block size. With these defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To make all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must also be 64k. Another option is NTFS=32k and zvol=32k, but in that case the deduplication table is twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
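The alignment argument above can be illustrated with a quick calculation (a sketch; the function name is mine, not from the product):

```python
def alignment_positions(zvol_block_size, fs_block_size):
    # A file written through the client file system can start at any
    # file-system-block boundary within a zvol block, so there are
    # zvol_block_size / fs_block_size possible alignments on the pool.
    return zvol_block_size // fs_block_size

KB = 1024
print(alignment_positions(128 * KB, 4 * KB))   # 32 -> dedup rarely matches
print(alignment_positions(128 * KB, 64 * KB))  # 2  -> at most one missed match
print(alignment_positions(64 * KB, 64 * KB))   # 1  -> every identical write matches
```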
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned natively by ZFS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol&#039;s physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; the scrub will now show the new physical data space. Comparing the data size reported on the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio for the pool can be found in the logs, in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The system cannot be installed on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. In order to reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Volume block sizes smaller than 64KB, when used with deduplication or a read cache, cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by the l2hdr structure&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For an 8KB Volume block size and a 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For a 64KB Volume block size and a 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 / 1024 = 6.75GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For a 128KB Volume block size and a 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 / 1024 = 3.37GB&lt;br /&gt;
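The Read Cache sizing above follows the same pattern and can be sketched in Python (a sketch only; the function name is illustrative):

```python
def l2arc_ram_bytes(cache_size_bytes, block_size_bytes,
                    reserved_bytes=4718592, l2hdr_bytes=432):
    """RAM needed to index an SSD Read Cache.

    Each cached block costs l2hdr_bytes (432B) of RAM for its header
    structure; the cache device loses reserved_bytes (4718592B) to
    reserved space and labels.
    """
    return (cache_size_bytes - reserved_bytes) * l2hdr_bytes / block_size_bytes

TIB = 1024 ** 4  # 1TB Read Cache
GIB = 1024 ** 3

print(round(l2arc_ram_bytes(TIB, 8 * 1024) / GIB, 2))    # 54.0
print(round(l2arc_ram_bytes(TIB, 64 * 1024) / GIB, 2))   # 6.75
print(round(l2arc_ram_bytes(TIB, 128 * 1024) / GIB, 2))  # 3.37
```

As with deduplication, the RAM cost scales inversely with the volume block size, which is another reason to prefer 64KB or larger blocks.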
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding disks to and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the error &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add-group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks that were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited possibility to display a large number of elements by the GUI ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some GUI forms work very slowly. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, the Scale Logic VSS Hardware Provider Configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded dataset quota does not allow files to be removed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some WebGUI forms to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; One of the factors taken into account when NFS FSIDs are generated is the Zpool name. This means that when the Zpool name is changed, e.g. during export and import with a different name, the FSIDs for NFS Shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used with the created Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, the system may require re-activation after the update to up11. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX as a Hyper-V or VMware guest, the ALB, Round-Robin and Round-Robin with RDMA bonding modes are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writes in a VMware guest can make deleting a VMware snapshot take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest that performs many I/O operations can make the process of deleting a VMware snapshot take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Please enable quota on the dataset before using it in a production environment, or make sure that no file transfers are active when enabling it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the pool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Downgrading to an earlier version is therefore not possible. If you need to go back to an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown or power off while data is being mirrored onto the newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk with incomplete data, boot the system, plug the disk back in and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created by the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disk from the system before starting the administrative work to replace it. The order of these actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25 the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of NX up25 the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with many random-access workloads, which are common in the majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25 qlini_mode was set to “enabled”). To verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low FIO/WT performance in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased FIO/WT performance can be observed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations the system page cache cannot flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held in the system page cache, which then requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in future releases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In case of a large number of disks, a zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled-back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback operation and reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A user gets deleted from the share access list after the username is changed on the AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed, NX must be informed about it. However, using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX deletes the renamed user from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset with the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The following special characters cannot be used in a password for the e-mail notification feature: #, :, +. They can break the authentication process.&lt;br /&gt;
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; E-mail alert configuration in LSI Storage Authority Software does not work with SMTP servers that require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on an NFS access list and you would like to move it to another access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Then edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the remaining space to the maximum. As a result, the system load (especially I/O wait) may increase and make the system unstable. Expanding the pool size is recommended.&lt;br /&gt;
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some situations the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such cases the WebGUI shows the old values taken directly from the cache. We recommend pressing the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes or small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, which NX interprets as running on battery. When the timeout expires, NX shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updates from a previous version), the system cannot boot in UEFI mode if the boot medium is controlled by an LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is to change the boot mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After update to NX up29 write back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we disable the write-back cache on all HDD disks by default, but we do not disable it on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes gets turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to the data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29 we disable the write-back cache on HDD disks by default during the boot-up procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, populating the list in the WebGUI may take from a few minutes up to several dozen minutes.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can become unstable while copying data to storage. Use this compression algorithm only in environments with very powerful processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When more than 500 zvols are used in the system, the responsiveness of the WebGUI may be low and the system may have problems importing zpools.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from an NTP server using the WebGUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer may report an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer may report the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve it, change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI. To do so, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of the NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
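&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;As a sketch only: if shell access to the appliance is available (an assumption - the TUI remains the supported method), the current value of this ZFS module parameter can also be read from sysfs, the standard location for ZFS on Linux module parameters:&lt;br /&gt;
&amp;lt;pre&amp;gt;  cat /sys/module/zfs/parameters/zfs_vdev_max_active&lt;br /&gt;
  (after the change described above, this should print 1000)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;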
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX supports only drives with a verified SED configuration - they are tagged as &amp;quot;SED&amp;quot; and listed on the Scale Logic NX HCL. To configure the functionality properly, please follow the steps described in the Knowledge Base article: [https://kb.scalelogicinc.com/NX-sed-support-in-NX_3381.html https://kb.scalelogicinc.com/NX-sed-support-in-NX_3381.html]&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in the TUI also lists devices that are not currently supported. To check whether a given device is supported, see the HCL available on the Scale Logic webpage ([https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]). To find the devices for which the SED functionality is supported, enter “SED” in the keyword field of the &amp;quot;Search by component&amp;quot; form on the Scale Logic HCL page and click the search button (magnifying glass icon).&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality on zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait in the system after enabling the autotrim functionality on zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually, on demand and at a convenient time (e.g. when the system is under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In order to provide stable operation with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts display data incorrectly for systems using Active-Backup bonding with RDMA. The charts reflect the usage of only one network interface included in the Active-Backup bond (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a new version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;Virtual hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through the IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck the &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in the combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - this combo box manages the second virtual disk.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. This specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If a given subdomain is unavailable, information about its status will not be updated in the WebGUI (even after pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it is recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we highly recommend that all customers employ REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locating and faulty disk indication functionality do not work properly with the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locating and faulty disk indication functionality do not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, pressing the “Rescan” button in the Storage tab of the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool using the primary network interface available in Scale Logic NX (usually eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in the TUI does not list devices when the Broadcom 9600 Storage Adapter is used.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
	<entry>
		<id>http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r1_Release_Notes&amp;diff=1453</id>
		<title>Scale Logic NX ver.1.0 up30r1 Release Notes</title>
		<link rel="alternate" type="text/html" href="http://wiki.scalelogicinc.com/nx/index.php?title=Scale_Logic_NX_ver.1.0_up30r1_Release_Notes&amp;diff=1453"/>
		<updated>2024-04-25T14:58:02Z</updated>

		<summary type="html">&lt;p&gt;Ma-W: 1 revision&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Release date: 2023-12-22&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Build: 54118&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;span style=&amp;quot;color:#cc0033&amp;quot;&amp;gt;&#039;&#039;&#039;Important!&#039;&#039;&#039; &amp;lt;/span&amp;gt;To upgrade the product, you need to have an active Technical Support plan. You will be prompted to re-activate your product after installing the upgrade to verify your Technical Support status.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have an active Technical Support plan, please contact the Scale Logic sales team or your reseller for further assistance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;cke_show_border cke_show_border cke_show_border&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| __TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Updated ==&lt;br /&gt;
&lt;br /&gt;
=== Broadcom MegaRAID Adapter driver (megaraid_sas, v07.727.03.00) ===&lt;br /&gt;
&lt;br /&gt;
== Fixed ==&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #--&amp;gt;The system experiences boot failure on servers using the Supermicro X13 motherboard. ===&lt;br /&gt;
&lt;br /&gt;
== Performance tuning ==&lt;br /&gt;
&lt;br /&gt;
=== iSCSI Target with VMware ESX performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of low iSCSI Target performance with VMware ESX, a few parameters need to be changed in the VMware ESX iSCSI Initiator. Go to Storage Adapters -&amp;gt; iSCSI Software Adapter -&amp;gt; Advanced Options and change the following settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;           &lt;br /&gt;
                MaxOutstandingR2T      change the default 1              to 8&lt;br /&gt;
&lt;br /&gt;
                FirstBurstLength       change the default 262144         to 65536&lt;br /&gt;
&lt;br /&gt;
                MaxBurstLength         change the default 262144         to 1048576&lt;br /&gt;
&lt;br /&gt;
                MaxRecvDataSegLen      change the default 131072         to 1048576&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Write cache sync requests performance tuning ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Write cache sync requests (sync) set to “always” for a zvol is the safest option and is set by default. However, it can decrease write performance since all operations are written and flushed directly to the persistent storage. When using sync=always, it is strongly recommended to use mirrored write log devices (devices with very fast random writes).&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The sync=standard and sync=disabled zvol options provide a huge performance improvement, but the most recent (up to 5 seconds of) cached data can be lost in case of a sudden power failure. Use these options only in environments equipped with a UPS.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For NFS shares, the Synchronous data record option is enabled by default. This option makes performance worse, but data is written safely. To improve NFS performance you can use the Asynchronous data record option, but in that case it is strongly recommended to use a UPS.&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
&lt;br /&gt;
=== Browser recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Web browser’s cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the web browser’s cache.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System as a guest in virtual environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Hyper-V:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of installing the system as a Hyper-V guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Number of virtual processors: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Boot Disk: 20GB IDE Disk&amp;lt;br/&amp;gt;&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;- Add at least 6 virtual disks&lt;br /&gt;
&lt;br /&gt;
Using physical hard drives in virtual machines hosted by Hyper-V is not supported and may cause problems. The problem does not occur when using virtual hard drives in virtual machines within a Hyper-V environment.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; VMware ESXi:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When installing the system as a VMware ESXi guest, please use the following settings:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Guest OS: Other 2.6.x Linux (64-bit)&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Number of Cores: 4&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Memory: Minimum 8GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Network Adapter: VMXNET 3&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - SCSI Controller Type: Paravirtual or LSI Logic SAS&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Boot Disk: 20GB Thick Provision Eager Zeroed&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Add at least 6 virtual disks&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Edit Settings-&amp;gt;Options-&amp;gt;Advanced-General-&amp;gt;Configuration-&amp;gt;Add row: disk.EnableUUID: TRUE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reclaim deleted blocks on thin-provisioned LUNs in various systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the &amp;quot;file-delete notification&amp;quot; feature in the system registry. To do so, follow the steps below:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Start Registry Editor.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - Double-click DisableDeleteNotification.&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; - In the Value data box, enter a value of 1, and then click OK.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To reclaim the free space in Windows 2012, change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the &amp;quot;Optimize&amp;quot; tool located in Disk Management-&amp;gt;[disk]-&amp;gt;Properties-&amp;gt;Tools. As the operation can generate a very high load on the system, it is recommended to perform it after-hours.&amp;lt;br/&amp;gt;In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools.
For details, please refer to the VMware Knowledge Base:&lt;br /&gt;
&lt;br /&gt;
For VMware ESXi 5.0: [http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2014849]&amp;lt;br/&amp;gt;For VMware ESXi 5.5 and newer: [https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513 https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;amp;cmd=displayKC&amp;amp;externalId=2057513]&amp;lt;br/&amp;gt;For VMware ESXi 6.7 and newer: search the Internet for “Space Reclamation Requests from VMFS Datastores” and read the vendor documentation.&lt;br /&gt;
&lt;br /&gt;
When using Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.&lt;br /&gt;
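As an alternative to editing the registry directly, the Windows 2012 delete-notification switch described above can also be toggled from an elevated command prompt with fsutil; a sketch:

```shell
:: Disable automatic reclaim (equivalent to setting the registry value
:: DisableDeleteNotification to 1)
fsutil behavior set DisableDeleteNotify 1

:: Re-enable it before running the Optimize tool to reclaim space
fsutil behavior set DisableDeleteNotify 0

:: Check the current state
fsutil behavior query DisableDeleteNotify
```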
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deduplication issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, please use (if possible) a single zvol on zpools dedicated for deduplication, and delete the zpool which includes the single zvol.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine the amount of System RAM required for deduplication, use this formula:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (Size of Zvol / Volume block size) * 320B / 0.75 / 0.25&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;320B - the size of an entry in the DDT table&amp;lt;br/&amp;gt;0.75 - percentage of RAM reserved for ARC (75%)&amp;lt;br/&amp;gt;0.25 - percentage of ARC reserved for the DDT (25%)&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 64KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 28633115306.67B / 1024 / 1024 / 1024 = 26.67GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 128KB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 14316557653.33B / 1024 / 1024 / 1024 = 13.33GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Example for 1TB data and 1MB Volume block size:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; (1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.66B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1789569706.66B / 1024 / 1024 / 1024 = 1.66GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;so for every extra 1TB of storage, the system needs an extra 1.66GB of RAM.&lt;br /&gt;
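The worst-case RAM formula above can be sketched as a small calculation; the 320-byte DDT entry size and the 75%/25% reservation ratios are the values stated in these notes:

```python
DDT_ENTRY = 320   # bytes per entry in the DDT table (per the notes)
ARC_SHARE = 0.75  # fraction of RAM reserved for ARC (75%)
DDT_SHARE = 0.25  # fraction of ARC reserved for the DDT (25%)

def dedup_ram_bytes(zvol_size: int, block_size: int) -> float:
    """Worst-case RAM needed for deduplication (fully unique data)."""
    return (zvol_size / block_size) * DDT_ENTRY / ARC_SHARE / DDT_SHARE

TIB = 1024 ** 4
GIB = 1024 ** 3
# Reproduce the three examples: 64KB, 128KB and 1MB volume block sizes
for bs in (64 * 1024, 128 * 1024, 1024 * 1024):
    print(f"{bs // 1024}KB block: {dedup_ram_bytes(TIB, bs) / GIB:.2f}GB per 1TB")
```

Note how doubling the volume block size halves the DDT and therefore the RAM requirement.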
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;The above calculations apply only to the worst-case scenario, when data is completely unique and will not be deduplicated. For deduplicable data, the RAM requirement drastically decreases. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD and deduplication will perform well using less RAM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With SAN (iSCSI) it is CRITICAL to match the user file system’s format block size with the zvol volume-block-size. A simple example is the Windows NTFS file system with its default format block size of 4k, while the zvol default volume-block-size is 128k. With defaults like these, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume-block-size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool. Every subsequent write will match, as both alignment options already exist on the pool. To have all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume-block-size must equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.&lt;br /&gt;
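The alignment counts quoted above (32 and 2) follow from simple integer division of the zvol block size by the client file system block size; a minimal illustration:

```python
def alignment_positions(zvol_block: int, fs_block: int) -> int:
    """Number of offsets at which a file system block can start
    inside one zvol volume block."""
    return zvol_block // fs_block

# 4k NTFS clusters inside a 128k zvol block: 32 possible alignments
assert alignment_positions(128 * 1024, 4 * 1024) == 32
# 64k NTFS clusters inside a 128k zvol block: only 2 possible alignments
assert alignment_positions(128 * 1024, 64 * 1024) == 2
# Matched sizes (64k/64k): exactly 1 alignment, so blocks always line up
assert alignment_positions(64 * 1024, 64 * 1024) == 1
```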
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;With NAS (NFS, SMB/CIFS) deduplication matching always works, because ZFS aligns the data blocks natively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: &#039;&#039;&#039;Deduplication works at the pool level, across the whole pool. This is why the zvol physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; it will show the new physical data space. Comparing the data size on the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact pool deduplication ratio can be found in the logs in zfs.log.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Zvols configuration issues and recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.&lt;br /&gt;
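From the Windows side, the allocation unit size is chosen at format time; a sketch for an iSCSI-attached disk (the drive letter is a placeholder):

```shell
:: Format the attached disk with a 64k NTFS allocation unit size
:: to match a 64k zvol volume block size
format E: /FS:NTFS /A:64K /Q
```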
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target number limit ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Targets with the same name are not assigned correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If two or more targets have the same name but belong to different Zpools, all targets with that name will be assigned to one Zpool during the import process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Installation on disks containing LVM metadata ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to install the system on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from temporary media such as a USB drive or DVD.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Import Zpool with broken write log ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is not possible to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. If it is necessary to import a Zpool with a broken write log, please contact technical support.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for larger ones can cause your storage license capacity to be exceeded ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size will not exceed your purchased storage license.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Periodically after some operations, the GUI needs to be manually refreshed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Replacing disks in data groups for smaller ones can cause an error and make the disk disappear from the list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Replacing a disk in a data group with a smaller one will cause the error &amp;quot;zpool unknown error, exit code 255&amp;quot;, and the disk will become unavailable. To reuse this disk, please use the &amp;quot;Remove ZFS data structures and disks partitions&amp;quot; function located in the Extended tools on the Console screen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== It is strongly recommended to use 64KB or higher Volume block size ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== RAM recommendations for Read Cache ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; To determine how much System RAM is required for Read Cache, use the following formula:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 8KB Volume block size and 1TB Read Cache:&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 
57981809664B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 57981809664B / 1024 / 1024 / 1024 = 54GB&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;Where:&amp;lt;br/&amp;gt;1099511627776B - 1TB Read Cache&amp;lt;br/&amp;gt;4718592B - reserved size and labels&amp;lt;br/&amp;gt;432B - bytes reserved by l2hdr structure&amp;lt;br/&amp;gt;8192B - Volume block 
size&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 64KB Volume block size and 1TB Read 
Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 
7247726208B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 7247726208B / 1024 / 1024 /1024 = 
6.75GB&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For 128KB Volume block size and 1TB Read 
Cache:&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 
3623863104B&amp;lt;br/&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 3623863104B / 1024 / 1024 /1024 = 3.37GB&lt;br /&gt;
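The Read Cache RAM figures above can be reproduced with a short script (a minimal sketch; the 432 B per-block header cost and the 4718592 B reserved region are the constants used in the worked examples above, and the function name is ours):

```python
# Approximate RAM consumed by Read Cache (L2ARC) headers, reproducing the
# worked examples above: each cached volume block costs 432 B of RAM, and a
# fixed 4718592 B of the cache device is reserved and not counted.
HEADER_BYTES_PER_BLOCK = 432
RESERVED_BYTES = 4718592

def read_cache_ram_bytes(cache_size_bytes: int, volume_block_size_bytes: int) -> int:
    """RAM (bytes) needed for headers of a fully populated Read Cache."""
    return (cache_size_bytes - RESERVED_BYTES) * HEADER_BYTES_PER_BLOCK // volume_block_size_bytes

TIB = 1024 ** 4  # 1 TB Read Cache = 1099511627776 B, as above

print(read_cache_ram_bytes(TIB, 64 * 1024))   # 7247726208 B, i.e. ~6.75 GB
print(read_cache_ram_bytes(TIB, 128 * 1024))  # 3623863104 B, i.e. ~3.37 GB
```

Note how doubling the volume block size halves the header overhead, which matches the two examples above.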
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multiple GUI disk operations may result in an inaccurate available disks list ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Multiple operations of adding disks to and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the following error: &amp;quot;[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification&amp;quot;. In this case, detach the disk once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== After removing disks from groups they may not be displayed on a list of available disks ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the rescan button located in the add group form.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Reusing disks from an exported and deleted Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After deleting an exported Zpool, not all disks which were part of the Zpool become immediately available. Before you can reuse disks that were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Negotiated speed of network interfaces may not display correctly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Limited ability of the GUI to display a large number of elements ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After creating multiple snapshots, clones or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Scale Logic VSS Hardware Provider system recommendations ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; It is strongly recommended to use Windows Server 2012. On other Windows systems, Scale Logic VSS Hardware Provider Configuration is unstable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== An exceeded dataset quota does not allow files to be removed ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datagroups ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; A Zpool with more than 20 datagroups causes some forms in the WebGUI to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Slow WebGUI with multiple datasets ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; More than 25 datasets cause the WebGUI to work slowly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== ZFS Upgrade ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; For Scale Logic NX users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use &amp;quot;WebGUI -&amp;gt; Zpool options -&amp;gt; Upgrade file system&amp;quot; from the Zpool&#039;s option menu.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Intel® Ethernet Controller XL710 Family ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller’s firmware to version f4.33.31377 a1.2 n4.42 e1932.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Motherboards with x2APIC technology ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== NFS FSIDs and Zpool name ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name is changed, e.g. during export and import under a different name, FSIDs for NFS shares located on this Zpool will also change.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Disks with LVM data cannot be used to create a Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; An attempt to create a Zpool with drives that contain LVM data will fail with the following error:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;quot;cannot open &#039;lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si&#039;: no such device in /dev must be a full path or shorthand device name&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activation may be lost after update ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In some environments, after updating to up11 the system may require re-activation. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Bonding ALB, Round-Robin and Round-Robin with RDMA do not work in Hyper-V and VMware environments ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using Scale Logic NX as a Hyper-V or VMware guest, the ALB, Round-Robin and Round-Robin with RDMA bonding modes are not supported. Please use another type of bonding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Using ODPS on a zvol/dataset with a VMware guest where many I/O operations are performed can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when you set up the scheduler for an Off-site Data Protection Service task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Enabling quota on a dataset can interrupt file transfers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Enabling the quota functionality on a dataset can interrupt active file transfers. Please enable the quota before using the dataset in a production environment, or make sure that no file transfers are active.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== A share cannot have the same name as the Zpool ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If a share has the same name as the Zpool, connection problems will occur. Please use different names.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== No persistent rules for network cards in virtual environment ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Changing the settings of virtual network cards (deleting them, changing the MAC address, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Downgrade to up17 or earlier is not possible ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from up18, the bootable medium always has a SW RAID structure. Going back to an earlier version is impossible; if you need an earlier version, you must reinstall it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== System cannot be installed on cciss based controllers ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be fixed in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Performing an operation such as reboot, shutdown or power off while data is being mirrored onto the newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk which has incomplete data, boot the system, plug the disk back in and add it to the SW RAID once again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== On- &amp;amp; Off-site Data Protection backward compatibility problem ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When using the On- &amp;amp; Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created by the CLI API and re-create them using the GUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Problem with maintenance in case of disk failure ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In case of a disk failure, please remove the damaged disks from the system before starting the administrative work to replace them. The order of actions is important.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Different Write Cache default setting for zvols in early beta versions of NX up25 ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the early beta versions of NX up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of NX up25, the Log bias is set to “Write log device (Latency)”.&amp;lt;br/&amp;gt;Please note that the “In Pool (Throughput)” setting may cause a drop in performance in environments with a lot of random access workloads, which is the case for a majority of production environments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== New default value for qlini_mode parameter for FC kernel module qla2xxx_scst ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of NX up25, qlini_mode was set to “enabled”). In order to verify the value of this parameter, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on FIO/WT zvols.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, the FC connection with client machines can become unstable. As a result, client machines may unexpectedly lose FC-connected resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In certain situations system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Under certain conditions (such as overfilling a zvol and then expanding its size), some File I/O errors may be held in the system page cache, which then requires manual flushing (in the GUI use Storage -&amp;gt; Rescan).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Synchronization of a large LDAP database can last for a long time (e.g. 10h for 380K users) and can be associated with high system load ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This problem will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== In case of a large number of disks, zpool move can take a long time ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In certain environments with a large number of disks (about 100 or more), the zpool move operation can take a long time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Rolled back data is not properly refreshed in Windows and VMware systems ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Before performing a rollback operation on a zvol, please detach the iSCSI or FC target, perform the rollback operation and reattach the target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== User gets deleted from share access list after changing its username on AD server ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an AD user is assigned to a share and the username is later changed on the AD server, NX must be informed about it. Using the &amp;quot;Synchronize and update shares configurations&amp;quot; operation on NX leads to a situation where the changed user gets deleted from the share’s access list. The new username needs to be added to the share’s access list manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== QLogic 32Gbit FC HBA is no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support QLogic 32Gbit FC adapters.&lt;br /&gt;
&lt;br /&gt;
=== Certain 16Gbit FC HBAs are no longer supported ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we no longer support certain 16Gbit FC adapters. If you are using a 16Gbit FC adapter based on the QLogic chipset using the qla2xxx_scst driver, please refer to our online hardware compatibility list (HCL) to verify whether that particular adapter is supported.&lt;br /&gt;
&lt;br /&gt;
Our HCL is available at this link: [https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]&lt;br /&gt;
&lt;br /&gt;
=== E-mail password cannot contain special non-ASCII characters ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The following special characters #&amp;amp;nbsp;: + cannot be used in the password for the e-mail notification feature. They can break the authentication process.&lt;br /&gt;
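A pre-check like the following can catch such passwords before they are saved (an illustrative sketch; the forbidden set is taken from the note above, and the function name is ours):

```python
# Reject e-mail notification passwords containing the characters that are
# known to break SMTP authentication per the note above: '#', ':' and '+'.
FORBIDDEN_CHARS = set("#:+")

def email_password_is_safe(password: str) -> bool:
    """True when the password contains none of the forbidden characters."""
    return not (FORBIDDEN_CHARS & set(password))

print(email_password_is_safe("S3cret!"))    # True
print(email_password_is_safe("pass#word"))  # False - contains '#'
```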
&lt;br /&gt;
=== LSA e-mail notifications do not work with SMTP servers requiring SSL/TLS authentication ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The e-mail alert configuration in LSI Storage Authority Software does not work with SMTP servers which require SSL/TLS authentication.&lt;br /&gt;
&lt;br /&gt;
=== Moving an IP address from the NFS share’s read-only access list to the read/write access list cannot be performed in one step ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If an IP address is already present on one NFS access list and you would like to move it to another access list, this has to be performed in two steps. First, delete the IP address from the current list and apply the changes. Next, edit the NFS share again and add the IP address to the other access list.&lt;br /&gt;
&lt;br /&gt;
=== If the used space on zpool reaches more than 80%, the system may generate high load and become unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the used space on the zpool exceeds 80%, the system tries to utilize the available space to the maximum. As a result, the system load may increase, especially waiting I/O, causing unstable operation. Expanding the pool size is recommended.&lt;br /&gt;
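The 80% rule above is easy to encode in a monitoring check (a minimal sketch; the function and the source of the used/size numbers, e.g. the output of zpool list, are assumptions):

```python
# Flag zpools whose used space exceeds the 80% threshold mentioned above,
# at which point the system may generate high load and become unstable.
USAGE_THRESHOLD = 0.80

def zpool_needs_expansion(used_bytes: int, size_bytes: int) -> bool:
    """True when the pool has crossed the 80% used-space threshold."""
    return used_bytes / size_bytes > USAGE_THRESHOLD

print(zpool_needs_expansion(850, 1000))  # True  -> expand the pool
print(zpool_needs_expansion(700, 1000))  # False -> still within limits
```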
&lt;br /&gt;
=== In certain situations WebGUI is not showing the current state of the system ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; There are situations when the system performs actions that take too long for the WebGUI to refresh the values in the web browser. In such a case, the system shows the old value taken directly from cache memory. We recommend using the F5 key to refresh the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== An ongoing O&amp;amp;ODP process involving a small zvol block size or dataset record size generates high load and renders the system unstable ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; O&amp;amp;ODP backup tasks involving small zvol block sizes as well as small dataset record sizes (4KB - 16KB) are known to generate very high load, rendering the system unstable. We recommend using sizes of at least 64KB for zvols and datasets.&lt;br /&gt;
&lt;br /&gt;
=== Runtime UPS calibration in the client-server configuration unexpectedly shuts down NX ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In the client-server configuration of the UPS, the runtime UPS calibration process returns an improper value, interpreted by NX as running on battery. When it times out, it shuts the system down.&lt;br /&gt;
&lt;br /&gt;
=== Starting from up29 (including updating from previous version), system cannot boot up in UEFI mode if your boot medium is controlled by LSI SAS 9300 HBA with outdated firmware ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Make sure your LSI SAS 9300 HBA has the latest firmware installed. A quick workaround is changing the booting mode from UEFI to Legacy.&lt;br /&gt;
&lt;br /&gt;
=== Bonded Mellanox network cards show negative values on the network usage chart ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; This issue will be solved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== In case of hundreds of thousands of LDAP users, the system starts very slowly ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; The LDAP database is stored on the boot medium. If you have such a large LDAP database, we recommend using an ultra-fast NVMe disk as the boot medium.&lt;br /&gt;
&lt;br /&gt;
=== After updating to NX up29, the write-back cache on some hardware RAID volumes can be unintentionally disabled ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Starting from NX up29 we disable the write-back cache on all HDD disks by default, but we do not disable the write-back cache on SSD drives and hardware RAID volumes. It can happen, however, that the write-back cache on some RAID volumes is turned off. Hardware RAID volume performance can be heavily impacted by the lack of the write-back cache, so please make sure it&#039;s enabled after the update. Open the TUI and invoke Extended tools by pressing CTRL+ALT+t, then select Disk write-back cache settings.&lt;br /&gt;
&lt;br /&gt;
=== Restarting or disconnecting JBOD with the write-back cache enabled on disks can lead to the data inconsistency ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If the write-back cache is enabled on disks in a JBOD, then restarting or disconnecting the JBOD can lead to data inconsistency. Starting from NX up29 we disable the write-back cache on HDD disks by default during the bootup procedure. We do not disable the write-back cache on SSD drives and hardware RAID volumes.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots are not displayed after a system reboot if there are more than a few thousand snapshots ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; If there is a large number of snapshots (more than a few thousand), there might be a significant delay in listing them in the WebGUI after a system reboot. Depending on the number of snapshots, it may take from a few minutes up to several dozen minutes to populate the list in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use the gzip-9 compression algorithm. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When the gzip-9 compression algorithm is used, the system can behave unstably while copying data to storage. It is possible to use this compression algorithm only in environments with very efficient processors.&lt;br /&gt;
&lt;br /&gt;
=== It is not recommended to use more than 500 zvols. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; When more than 500 zvols are used in the system, the responsiveness of the WebGUI may be low and the system may have problems with importing zpools.&lt;br /&gt;
&lt;br /&gt;
=== The Internet Connection Check functionality has been removed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; In order to check the internet connection, try to get the date and time from the NTP server using the WebGUI (System Settings -&amp;gt; System -&amp;gt; Time and date settings).&lt;br /&gt;
&lt;br /&gt;
=== After upgrading the system to a newer version, the event viewer reports an error message: An unexpected system reboot occurred. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; After upgrading the system to a newer version, the event viewer reports the error message: &amp;quot;An unexpected system reboot occurred. Run the &#039;Scrub scanner&#039; on all pools to check the system integrity. Analysis of logs and previous events can help to find the cause of this problem and prevent the issue in the future. For more information, refer to the help article.&amp;quot;. This message should be ignored.&lt;br /&gt;
&lt;br /&gt;
=== Low performance on remote disks in case of a new installation of NX up29r2. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp;The source of this problem is the zfs_vdev_max_active parameter being set to 1 on a new installation of NX up29r2. To resolve this problem, please change the value of the zfs_vdev_max_active parameter from 1 to 1000 in the TUI. In order to do so, open the NX TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press &amp;quot;Yes&amp;quot; to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select the zfs module, then the zfs_vdev_max_active parameter, and change its value to 1000. This operation requires a restart of the NX, which should be done by selecting the Reboot option in the TUI.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The SED functionality configuration issues. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED functionality in Scale Logic NX enables the use of drives with a verified SED configuration only - they are tagged as &amp;quot;SED&amp;quot; and listed on the Scale Logic NX HCL. In order to properly configure the functionality, please follow the steps described in the Knowledge Base article: [https://kb.scalelogicinc.com/NX-sed-support-in-NX_3381.html https://kb.scalelogicinc.com/NX-sed-support-in-NX_3381.html]&lt;br /&gt;
&lt;br /&gt;
=== The SED configuration tool available in TUI also lists devices that are not currently supported. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in the TUI also lists devices that are not currently supported. To check whether a given device is supported, see the HCL available on the Scale Logic webpage ([https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/ https://www.scalelogicinc.com/support/hardware-compatibility-list/Scale Logic-NX-dss/]). To find devices for which the SED functionality is supported, enter “SED” in the keyword field of the &amp;quot;Search by component&amp;quot; form on the Scale Logic HCL page and click the search button (loupe icon).&lt;br /&gt;
&lt;br /&gt;
=== Enabling the autotrim functionality in zpools may cause a drastic increase in load or iowait in the system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of a drastic increase in load or iowait in the system after enabling the autotrim functionality in zpools, consider disabling it. It is recommended to run the &amp;quot;Trim&amp;quot; function manually on demand and at a convenient time (e.g. when the system is working under less load).&lt;br /&gt;
&lt;br /&gt;
=== The Mellanox ConnectX-3 network controller is no longer supported in RDMA mode due to its instability. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In order to provide stable work with RDMA, we recommend using the Mellanox ConnectX-4, ConnectX-5, or ConnectX-6.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115142 --&amp;gt;The Network usage charts display incorrect data for an Active-Backup bonding with RDMA. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The Network usage charts incorrectly display data for systems using the Active-Backup bonding with RDMA. The charts only reflect the usage of one network interface included in the Active-Backup bonding (the charts for the second network interface are not generated at all).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115051 --&amp;gt;Duplicate entries appear in the Service Status tab in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In certain scenarios, the Service Status tab in the WebGUI shows duplicated instances of the same connection.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114628 --&amp;gt;Restoring data backups from the macOS Time Machine application may not work correctly with older versions of the macOS system. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems with restoring a copy of data from the Time Machine application, it is recommended to update the macOS system to a newer version.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114516 --&amp;gt;The Virtual Hard disks smaller than 1B are visible in the WebGUI. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; It’s possible to disable the virtual disks through the IPMI settings. In Settings -&amp;gt; Media Redirection Settings -&amp;gt; VMedia Instance Settings:&lt;br /&gt;
&amp;lt;pre&amp;gt;  Uncheck the &amp;quot;Emulate SD Media as USB disk to Host&amp;quot; checkbox - it manages one of the virtual disks.&lt;br /&gt;
  Set &amp;quot;Hard disk instances&amp;quot; to 0 in the combo box.&lt;br /&gt;
  Set &amp;quot;Remote KVM Hard disk instances&amp;quot; to 0 in the combo box - the settings of this combo box manage the second virtual disk.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== &amp;lt;!-- #114449 --&amp;gt;Unsupported configuration of VMware virtual machines (consisting of multiple disks) for data rollback from snapshots in On- &amp;amp; Off-site Data Protection. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; VMware virtual machine data rollbacks from snapshots using the On- &amp;amp; Off-site Data Protection functionality are not supported when the virtual machines consist of multiple disks. This specific virtual machine configuration is incompatible with the restoration process.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114351 --&amp;gt;Subdomain statuses in the User Management tab in the WebGUI are not updated. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; If a given subdomain is unavailable, information about its status will not be updated in the WebGUI (even by pressing the refresh button).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #114251 --&amp;gt;The problems with users and groups synchronization within the Active Directory one-way trusted configuration. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; In case of problems, it’s recommended to use a two-way trusted configuration.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #108558 --&amp;gt;Partial support for REST API v3. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The REST API v3 is currently only partially supported. As a result, not all operations can be executed using this version of the REST API. For optimal utilization of the REST API, we strongly recommend that all customers use REST API v4.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #99323 --&amp;gt;Username in LDAP database can’t be changed. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To modify a username in the LDAP database, the administrator needs to delete the user account and create a new one in the WebGUI.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115409 --&amp;gt;The hard disk LED locate and disk fault functionality does not work properly with the Broadcom HBA 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, the hard disk LED locate and disk fault functionality does not work.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115436 --&amp;gt;The Broadcom HBA 9600 Storage Adapter may cause “Target allocation failed, error -6” error messages in dmesg. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; When using the Broadcom HBA 9600 Storage Adapter, pressing the “Rescan” button in the Storage tab in the WebGUI may result in “Target allocation failed, error -6” error messages in dmesg.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #109737 --&amp;gt;The ARCHTTP tool, when in use, might erroneously redirect to another network interface. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; To avoid redirection to another network interface, it’s recommended to connect to the ARCHTTP tool through the primary network interface of Scale Logic NX (usually eth0).&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #116234 --&amp;gt;Identification LED blinking does not work on NVMe drives in AMD-based servers. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; Identification LED blinking on NVMe drives does not work on servers based on AMD processors. This problem will be resolved in a future release.&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;!-- #115416 --&amp;gt;The SED configuration tool available in the TUI does not list devices when using the Broadcom 9600 Storage Adapter. ===&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp; The SED configuration tool available in the TUI does not list devices when using the Broadcom 9600 Storage Adapter.&lt;br /&gt;
&lt;br /&gt;
[[Category:Release Notes]]&lt;/div&gt;</summary>
		<author><name>Ma-W</name></author>
	</entry>
</feed>