Unity is optimized for core IT applications. These include transactional workloads such as Oracle, SAP, SQL, Exchange, or SharePoint; server virtualization; end-user computing such as VDI; and all other applications that need traditional file, block, or unified storage. All models are available in an All Flash (AF) option.
Unity is also a good fit for partner-led configurations optimized for virtual applications with VMware and
Hyper-V integration. The Unity platform with multi-core optimized architecture unleashes the power of
Flash, taking full advantage of the latest Intel multi-core technology.
Unity meets the top new customer demands: ease of service and support, low total cost of ownership, simple configuration and implementation, top-notch features, and easy management.
Traditionally, the best practices for optimizing storage performance involved manual, resource intensive
processes. Unity allows SQL administrators to leverage an easy-to-use and potentially hands-off
mechanism for optimizing the performance of the most demanding applications. Automating the
movement of data between storage tiers saves both time and resources. Unity eliminates the need to
spend hours manually monitoring and analyzing data to determine a storage strategy, then maintaining,
relocating and migrating LUNs (Unity logical volumes) to the appropriate storage tiers.
The common business requirement in SAP environments is reducing TCO while improving performance
and service level delivery. Frequently, responsiveness to sensitive SAP applications has deteriorated
over time due to increased data volumes, unbalanced data stores, and changing business requirements.
By using Unity with block data, SAP deployments can gain a significant performance boost without the
need to redesign the applications, adjust the data layouts, or reload significant amounts of data. With
automated sub-LUN-level tiering and extended cache, administrators can properly balance data distribution across tiers, optimizing both capacity and performance.
Virtualization management integration allows the VMware administrator or the Microsoft Hyper-V
administrator to extend their familiar management console for Unity related activities.
Support for VMware vStorage APIs for Array Integration (VAAI) over both SAN and NAS connections allows Unity to be fully optimized for virtualized environments. EMC Virtual Storage Integrator (VSI) is targeted toward the VMware administrator. VSI supports Unity provisioning within vCenter, provides full visibility into physical storage, and increases management efficiency.
In the Microsoft Server 2012 and Hyper-V 3.0 space, Offloaded Data Transfer (ODX) allows Unity to be fully optimized for Windows virtual environments. This technology offloads storage-related functions from the server to the storage system.
EMC Storage Integrator (ESI) for Windows provides the ability to provision block and file storage for
Microsoft Windows or Microsoft SharePoint sites.
Access to Block Storage is provided by containers on both Storage Processors to hosts connected via FC
or iSCSI IO modules. A common software structure provides access to the Virtual Storage Pool and
configured LUN resources.
Access to Network Attached Storage (NAS) is also provided by containers on both Storage Processors to
Unix or Windows based clients and virtualized server environments via Ethernet IO modules. Again, a
common software structure provides internal access to the File Systems stored in the Unity Virtual
Storage Pool.
This architecture delivers unified storage functionality at the host and network interface, providing greater resource utilization and higher performance capabilities to the customer.
Homogeneous pools are recommended for applications with limited skew, such that their access profiles
can be very random across a large address range. Multiple LUNs with similar profiles can share the same
pool resources. These LUNs provide more predictable performance based on the disk type employed. In a
homogeneous pool, only one disk type (flash, SAS, or NL-SAS) is selected during pool creation.
Heterogeneous pools consist of multiple disk types. The system supports flash, SAS, and NL-SAS disks in
the same pool. There can be a maximum of three disk types in a heterogeneous pool. Data in a particular
LUN can reside on some or all of the different disk types. FAST VP is able to relocate slices across
different disk types in a heterogeneous pool to ensure the hottest data resides on the highest performance
drives.
A NAS server is required prior to creating file systems. NAS servers are used for NAS protocols only;
iSCSI block storage is provided natively and not through a NAS server.
The NAS server root file system and configuration data require a storage pool and an owning Storage Processor.
Each NAS server is a separate file server. Users on one NAS server cannot access data on another NAS server. Each NAS server has a separate configuration with independent network interfaces, sharing protocols, directory services, NDMP backup, and security.
With Unity, the storage pools are shared by all resource types, meaning that File systems, LUNs and
VMware Virtual Volumes or VVols can be provisioned out of the same unified pools without need for a
second level “file pool.” File systems are provisioned simply by choosing a storage pool and a previously
created NAS server.
Better performance is also provided through faster failovers, file shrink and expand, space-efficient snapshots, and simpler quotas.
The user can manually shrink the size of a file resource to a size that is equal to or less than the allocated
size. For example, if a thin File System with 565 GB of allocated space is shrunk from a size of 1 TB down
to 500 GB, both the total allocated size and the amount of pool space used decrease. As a consequence, the 65 GB of newly freed space is returned to the storage pool and added to its free space.
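The arithmetic in this example can be sketched as a quick sanity check (an illustrative calculation only, with sizes in GB):

```python
# Illustrative arithmetic for the thin file system shrink example (sizes in GB).
size_before = 1024       # original provisioned size: 1 TB
allocated = 565          # space actually allocated from the pool
size_after = 500         # target size after the shrink

# A shrink can only reclaim allocated space above the new size.
freed = max(allocated - size_after, 0)
allocated_after = allocated - freed

print(f"Freed back to pool: {freed} GB")            # 65 GB
print(f"Allocated after shrink: {allocated_after} GB")
```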
FAST Cache consists of one or more pairs of SAS Flash 2 drives in RAID 1 (1+1) and provides both read and
write caching. For reads, the FAST Cache driver copies data off the disks being accessed into the FAST
Cache. For writes, FAST Cache effectively buffers the data waiting to be written to disk.
At a system level, the FAST Cache reduces the load on back-end hard drives by identifying when a chunk
of data on a LUN is accessed frequently, and copying it temporarily to FAST Cache.
The storage system then services any subsequent requests for this data faster from the Flash disks that
make up the FAST Cache; thus, reducing the load on the disks in the LUNs that contain the data (the
underlying disks). The data is flushed out of cache when it is no longer accessed as frequently as other
data.
Subsets of the storage capacity are copied to the FAST Cache in chunks of 64 KB granularity.
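A rough sketch of this promote-on-frequent-access behavior might look like the following (a simplified model, not the actual driver logic; the access-count threshold of three is an assumption for illustration):

```python
from collections import defaultdict

CHUNK_SIZE = 64 * 1024          # FAST Cache copies data in 64 KB chunks
PROMOTE_THRESHOLD = 3           # assumed access-count trigger (illustrative)

access_counts = defaultdict(int)
fast_cache = set()              # chunk addresses currently held in FAST Cache

def on_read(lun_offset: int) -> str:
    """Return where a read is serviced from in this simplified model."""
    chunk = lun_offset // CHUNK_SIZE
    if chunk in fast_cache:
        return "fast-cache"
    access_counts[chunk] += 1
    if access_counts[chunk] >= PROMOTE_THRESHOLD:
        fast_cache.add(chunk)   # copy the hot chunk into FAST Cache
    return "hdd"

# The third access to the same chunk promotes it; later reads hit FAST Cache.
results = [on_read(128 * 1024) for _ in range(4)]
print(results)
```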
FAST Cache operations are non-disruptive to applications and users. It uses internal memory resources
and does not place any load on host resources.
Online expansion and shrinking of a FAST Cache is possible by adding or removing drives.
On Unity models offered in an All Flash configuration, the FAST Cache and FAST VP features are not available.
FAST Cache monitors the wear on flash disks and dynamically removes capacity (pages) via an unmap
command. This increases the amount of over-provisioning within the flash disks. FAST Cache updates
wear information every 7 days, and adjusts the amount of over-provisioning in flash disks to attempt to
maintain a minimum flash disk lifetime of 5 years. Based on the latest wear information, the weekly update can increase or decrease the amount of over-provisioning in the flash drives.
Unity Compression is intended to lower the cost of storage consumed and also improve the cost per IOPS through better utilization of system resources.
Compression is supported on physical hardware only. For Hybrid arrays, the functionality is only
supported on All Flash pools with no additional licenses required.
FAST VP enables the system to retain the most frequently accessed or important data on fast, high-
performance disks and move the less frequently accessed and less important data to lower-performance,
cost-effective disks.
FAST VP tracks data in a Pool at a granularity of 256 MB – a slice – and ranks slices according to their
level of activity and how recently that activity took place.
Slices that are heavily and frequently accessed are moved to the highest tier of storage, typically the SAS Flash drives, while the data that is accessed least is moved to lower-performing but higher-capacity storage, typically NL-SAS drives. The ranking process is automatic and requires no user intervention.
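The ranking step can be illustrated with a toy sketch (assuming, for simplicity, one slice per tier; real FAST VP fills each tier to capacity and also weighs recency and relocation cost):

```python
# Simplified FAST VP-style slice ranking (illustrative only).
SLICE_SIZE_MB = 256                 # FAST VP tracks data in 256 MB slices

slices = [
    {"id": 0, "io_count": 9500},    # hot
    {"id": 1, "io_count": 120},     # cold
    {"id": 2, "io_count": 4200},    # warm
]
tiers = ["flash", "sas", "nl-sas"]  # highest to lowest performance

# Rank slices by activity; the hottest slices land on the highest tier.
ranked = sorted(slices, key=lambda s: s["io_count"], reverse=True)
placement = {s["id"]: tiers[min(i, len(tiers) - 1)] for i, s in enumerate(ranked)}
print(placement)
```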
Limiting usage is not the only application of quotas. The quota tracking capability can be useful for tracking
and reporting usage by simply setting the quota limits to zero.
Quota limits can be designated for users, or a directory tree. Limits are stored in quota records for each
user and quota tree. Limits are also stored for users within a quota tree.
The File Size quota policy calculates disk usage based on logical file sizes, in 1 KB increments.
The Blocks quota policy calculates disk usage in 8 KB file system blocks.
Hard and soft limits can be set on the amount of disk space allowed to be used.
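The difference between the two policies can be illustrated as follows (the round-up behavior shown is an assumption for illustration; consult the quota documentation for the exact accounting):

```python
# Illustrative calculation of quota usage under the two policies.
KIB = 1024                # File Size policy counts in 1 KiB increments
BLOCK = 8 * 1024          # Blocks policy counts 8 KB file system blocks

def file_size_usage(file_sizes):
    """File Size policy: logical file sizes, rounded up to 1 KiB increments."""
    return sum(-(-size // KIB) * KIB for size in file_sizes)

def blocks_usage(file_sizes):
    """Blocks policy: blocks actually consumed, in 8 KB units."""
    return sum(-(-size // BLOCK) * BLOCK for size in file_sizes)

files = [500, 10_000]     # two files: 500 bytes and roughly 10 KB
print(file_size_usage(files), blocks_usage(files))
```

Note how the same two files account for different usage under each policy: a 500-byte file charges 1 KiB under the File Size policy but a full 8 KB block under the Blocks policy.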
QoS is either enabled or disabled in a Unity system; all host I/O limits are active when the feature is enabled. Host I/O limits take effect as soon as policies are created and assigned to storage resources. The feature provides system-wide Pause and Resume controls.
Limits can be set by throughput (I/Os per second), by bandwidth (kilobytes or megabytes per second), or by a combination of both. If both thresholds are set, the system limits traffic according to the threshold that is reached first.
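The reached-first behavior can be sketched as follows (an illustrative model, not the array's actual throttling implementation):

```python
# Dual-threshold limiting: traffic is throttled as soon as EITHER the
# IOPS threshold or the bandwidth threshold is reached (illustrative).
def over_limit(iops, kbps, max_iops=None, max_kbps=None):
    """Return True if either configured threshold has been reached."""
    if max_iops is not None and iops >= max_iops:
        return True
    if max_kbps is not None and kbps >= max_kbps:
        return True
    return False

# The bandwidth threshold trips first even though IOPS is still under its cap.
print(over_limit(iops=800, kbps=102_400, max_iops=1000, max_kbps=102_400))  # True
print(over_limit(iops=800, kbps=50_000, max_iops=1000, max_kbps=102_400))   # False
```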
A Host I/O Limit policy can be one of two types: absolute or density-based.
Only one I/O limit policy can be applied to an individual LUN or a LUN that is a member of a consistency
group.
When an I/O limit policy is applied to a group of LUNs, it can also be shared.
An internal key manager generates and manages encryption keys. This method is simpler, lower cost, and more maintainable than self-encrypting drives. With the encryption hardware embedded in the array, the solution is agnostic to drive vendor and drive type, allowing use of any disk drive and eliminating drive-specific vendor overhead.
This provides protection against data being read from a lost, stolen, or failed disk drive. It supports compliance with industry and government data security regulations that require or recommend encryption, including HIPAA (healthcare), PCI DSS (credit cards), and GLBA (finance).
Securely decommissioning arrays is easily accomplished by deleting pools, which in turn deletes all drive encryption keys and most often eliminates the need to shred disk drives. Encryption is a licensed feature and will not appear in the licenses page if the license is not active. No data-in-place upgrades are supported, and changing the encryption state requires a destructive re-initialization.
The introduction of UFS64 will require a new tape format. The format is named Format N. The previous
generation format for UFS32 is named Format N-1.
The backup module will format the data on tape in different ways based on the type of file system on
which the backup is performed. When backing up data on a UFS64 file system, the data will be written to
the tape in Format N. When backing up data on a UFS32 file system, the data will be written to the tape in
Format N-1.
The restore module will recognize the backup data in Format N-1 from older generation systems (VNX,
Celerra, VNXe, VNX2e, etc.) and restore them to a UFS64 file system on Unity arrays. When restoring
the backup data in Format N-1 to a UFS64 file system on a Unity system, all the new attributes will be set
with their default values as specified by the file system. When data is restored to UFS32 on a VNX,
Celerra, VNXe, VNX2e, etc., all the new attributes will be discarded as they are only applicable to UFS64.
Backup data in Format N cannot be restored to old generation systems (VNX, Celerra, VNXe, VNX2e,
etc.).
Caution: Deduplicated files in legacy backups cannot be restored to a UFS64 file system.
Available RAID protection levels include 1/0, 5, and 6, and all can coexist in the same array simultaneously to match different protection requirements.
Each disk drive has two data ports. This gives two separate paths to each drive, one from each Storage
Processor. If an SP fails, or any component of the path fails, the drive can still be accessed by the other
SP.
Proactive hot sparing enhances system robustness and delivers maximum reliability and availability.
Redundant power supplies, one for each Storage Processor, are included. In the event of a failure, one
power supply can power the entire Disk Processor Enclosure.
Each SP also has a Battery backup to allow for an orderly shutdown and cache de-staging to the Vault
SSD. In the event of a power failure, the Vault SSD provides the de-stage area for data in write cache that
is not yet committed to the disk.
Unified Snapshots are also the foundation for native asynchronous replication in Unity.
With Unified Snapshots, the storage required for your snapshot data comes out of the same storage pool
as your source LUN data so there is no separate management of Reserved LUNs.
Auto-delete and expiration can be configured so that snapshots are automatically deleted at a specified
time or based on user defined storage consumption thresholds.
Remote Replication is one method that enables data centers to avoid disruptions in operations. In a
disaster recovery scenario, if the source site becomes unavailable, the replicated data will still be available
for access from the remote site.
Remote Replication uses a Recovery Point Objective (RPO) which is an amount of data, measured in
units of time to perform automatic data synchronization between the source and remote systems. The
RPO for asynchronous replication is configurable. The RPO for synchronous replication is set to zero. The
RPO value represents the acceptable amount of data that may be lost in a disaster situation. The remote
data will be consistent to the configured RPO value.
Remote Replication is also beneficial for keeping data available during planned downtime scenarios. If a
production site has to be brought down for maintenance or testing the replica data can be made available
for access from the remote site. In a planned downtime situation, the remote data is synchronized to the
source before being made available and there is no data loss.
Native Asynchronous replication is for both File and Block. Supported Block Resources include LUNs,
Consistency Groups, and VMFS Datastores. Supported File Resources include File Systems, NAS
Servers, and VMware NFS Datastores.
Native Asynchronous Replication can be performed between Unity and UnityVSA systems for both block
and file storage and also between Unity or UnityVSA and VNXe3200, VNXe1600, or vVNX systems for
block storage.
For local replication, an internal connection is pre-made and used. For Remote Replication, replication
interfaces are used to send data between systems. The replication interfaces on each system must be
able to communicate with the other system. Replication interfaces can be used for both block and file
asynchronous replication connections/sessions.
Native Synchronous Replication is configurable through GUI, CLI, and the REST API to provide protection
for LUNs, Consistency Groups, and VMFS Datastores.
Data transport is over Fibre Channel only. Synchronous replication data transfer connections are
supported in switched or direct connect environments.
Data transfer is performed on the first FC port. The port location will change depending on the system I/O module and CNA layout in the Unity array. The FC port does not require configuration in Unisphere, and it can simultaneously be used for host I/O.
The Sync Replication Management Port is used for communications of operations between the local and
remote systems. Management commands are transferred over the SP’s MGMT Port via a LAN or WAN.
RecoverPoint CDP provides block replication functionality across all RecoverPoint supported platforms
and can be used for VNX1/VNX2 migration or replication to Unity.
RecoverPoint for VMs provides VM-granular protection of your VMs and associated data, and is compatible with Unity and other EMC products.
EMC AppSync is policy-driven, self-service software for managing copies of various applications and databases running on various EMC arrays. Unity leverages AppSync to enable application-consistent snapshots.
The File Import feature is fully managed from Unisphere, UEMCLI commands and REST API calls. Some
preparatory work is required on the VNX system, but creation, monitoring, and cutover are all managed in
Unity through the available management interfaces.
The operation is transparent to host I/O with little or no disruption to client access of data.
SANCopy must be enabled on the system. The SANCopy enabler is contained in the VNX Installation
Toolbox on the EMC Support site if needed.
The import of block data is configured and controlled from the Unity system using the SANCopy engine
running on the VNX. Then the data is migrated to Unity using a SANCopy push from the VNX.
The Native SANCopy Import feature is managed from Unisphere, UEMCLI commands and REST API
calls.
These are individual software products available for use in situations not covered by Unity’s native
migration and replication functionality.
The following additional tools can be used for VNX2 to Unity/UnityVSA data migration: RecoverPoint /
RecoverPoint for VMs and VPLEX for replication; SAN Copy, PPME, and VMware Storage vMotion for
block migration; and EMCopy and Rsync for file migration.
The following additional tools can be used for VNXe3200 to Unity/UnityVSA: Native Asynchronous Block
Replication / RecoverPoint / RecoverPoint for VMs and VPLEX for replication; PPME and VMware
Storage vMotion for block migration; and EMCopy and Rsync for file migration.
The following additional tools can be used for VNXe1600 to Unity/UnityVSA: Native Asynchronous Block
Replication for replication and VMware Storage vMotion for block migration.
Cloud Tiering Appliance (CTA) release 11 supports the Unity family as source file servers, allowing the tiering of file data to cloud storage based on policies. The release also includes support for Virtustream as a cloud destination; Microsoft Azure and Amazon S3 are also supported as cloud destinations.
The CTA 11 integration with Unity OE (v4.1 and later) only supports archiving/stubbing operations.
Encryption and compression are supported for the tiering operations.
Tiering from Unity systems to other EMC storage or SMB/NFS shares is not supported. The only
supported tiering destinations are Virtustream, Microsoft Azure, or Amazon S3.
The Unisphere wizards help the user to provision and manage the storage while automatically
implementing best practices for the configuration.
Unisphere for Unity supports a wide range of browsers including Google Chrome, Internet Explorer,
Mozilla Firefox, and Apple Safari.
Unisphere contains a complete system ecosystem, the highlight of which is Proactive Assist with call
home, and Cloud-based management.
Historical metrics display data collected within a preset or customized time range.
Real-time metrics display data collected during the current session, over a maximum time range of 15
minutes.
• Compare changes in performance across multiple metrics, such as network traffic, bandwidth, and
throughput.
• Analyze data at the aggregated level using line charts, to quickly determine whether there are any
performance issues.
The time range for all the charts displayed in the performance dashboard is configured using the Custom
link on the top of the main page.
It is also possible to add new charts to the Dashboard or create your own dashboard and add the desired
charts.
The available performance charts are System Cache, System I/O, System Resources, LUN, File System, VVol Datastore, Fibre Channel Port, iSCSI Interface, Ethernet Port, Drive, and Tenant. These options will differ depending on the chart type selection (historical or real-time).
Unity uses EMC Unisphere for managing the system and supports various kinds of tasks against both block and file storage. Supported tasks include configuring and monitoring the system, managing users, provisioning storage, protecting data, and controlling host access to storage.
UEMCLI is intended for advanced users who want to use commands in scripts for automating routine
tasks, such as provisioning storage or scheduling snapshots to protect stored data. It can also be used as
an interface in addition to other data exchange protocols, such as SNMP, that are supported by Unity
when integrating with other products.
For example, a third party that decides to develop a centralized monitor collecting alerts and other information from a set of systems, including Unity, can take advantage of UEMCLI.
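Such a monitor could drive UEMCLI from a script roughly as follows (the object path `/event/alert/hist` and flag syntax are assumptions based on typical UEMCLI usage; verify the exact commands against the Unisphere CLI User Guide for your release):

```python
import subprocess

# Hypothetical helper for scripting UEMCLI from a monitoring host.
def build_cmd(system_ip, user, password, *args):
    """Assemble a uemcli command line targeting one system."""
    return ["uemcli", "-d", system_ip, "-u", user, "-p", password, *args]

def run_uemcli(system_ip, user, password, *args):
    """Run uemcli and return its text output (raises on a non-zero exit)."""
    cmd = build_cmd(system_ip, user, password, *args)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Example: collect alert history for a centralized monitor
# (illustrative object path; not run here, since it needs a live system).
cmd = build_cmd("10.0.0.50", "admin", "secret", "/event/alert/hist", "show")
print(" ".join(cmd))
```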
REST API communications are stateless, meaning all information required to complete a request is contained within the request. The API is a set of resources, operations, and attributes that lets you manage the array through web browsers, command-line HTTP tools, programming languages like C++ and Java, and scripting languages like Perl and Python.
REST is very common within the IT industry and allows programs to easily integrate with the storage
system. REST API is more programmer-friendly than UEMCLI and doesn’t require a separate client.
The REST API allows interaction with Unisphere management functionality, including system settings and
monitoring, host and remote system connections, network settings, storage management, data protection,
including snapshots and replication, and it supports configuration management.
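A minimal sketch of such an interaction, using only Python's standard library, might look like this (the endpoint path, header, and field names follow the Unity REST API as generally documented, and authentication is omitted for brevity; verify all of these against the REST API Reference for your release):

```python
from urllib.request import Request

UNITY = "https://unity.example.com"   # hypothetical management address
headers = {
    "X-EMC-REST-CLIENT": "true",      # header the Unity REST API expects on requests
    "Accept": "application/json",
}

# A stateless GET: everything needed to complete the request is in the request.
url = f"{UNITY}/api/types/lun/instances?fields=name,sizeTotal"
req = Request(url, headers=headers)

# from urllib.request import urlopen
# with urlopen(req) as resp:          # uncomment against a live, authenticated system
#     print(resp.read())
print(req.full_url)
```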
CloudIQ is a proactive management system that contains all of the capabilities needed to resolve
problems fast. Once a user is signed up to the CloudIQ ecosystem, they can:
• Live chat with an EMC support rep and ask questions of other CloudIQ community members.
CloudIQ requires that Unity is configured with ESRS and a valid support account in order to work.
Unisphere Central is a network application that remotely monitors the status, activity, and resources of
multiple supported EMC storage systems from a central location. The application allows administrators to
take a look at their storage environment from a single interface and rapidly access the systems that need
attention or maintenance.
CEPA provides event notifications and contexts to consumer auditing and quota management applications
that monitor the SMB and NFS file system activity on the Unity system. The event publishing agent
delivers to the application both event notification and the associated context in one message.
The EMC Common Event Enabler (CEE) framework, installed on Windows or Linux servers, runs the CEPA facility (CEPA server) and can sometimes also run the third-party consumer software.
The feature is enabled via Unisphere GUI, and UEMCLI. SMB and NFS File system activity is monitored
from the third party application interface.
IP multi-tenancy allows Service Providers, with multiple customers on a single system, to isolate storage
resources for tenants, ensuring that tenant visibility and management are restricted to the assigned
resources only.
IP multi-tenancy provides the ability to assign isolated, file-based storage partitions to the NAS servers on a storage processor. Each tenant has its own network namespace: network interfaces, VLAN domain, routing table, IP firewall, DNS, etc. The network traffic is segregated at the kernel level on the SP.
Configuration and management of these resources is done via Unisphere GUI, and UEMCLI.
VVols are stored in storage containers which are created by the storage administrator. Storage Containers
have a 1:1 mapping with VVol datastores.
Unity supports both Block and File VMware VVols datastores deployments.
By using storage profiles for VM provisioning, the available datastores are categorized into compatible and incompatible categories. This ensures VMs are deployed and remain on storage that has the
appropriate characteristics for performance and availability. Using policies makes the VMware and storage
administrator’s job easier because they can be confident the datastore has the appropriate configuration
for the VM.
Traditionally, LUNs or file systems are used to store VM data. Data services, such as snapshots, are
applied at a LUN or file system level, which means all of the VMs on that datastore are snapped. VVols
enables applying data services at a VM-level granularity, so individual VMs can be snapped. These operations are also offloaded to the Unity system to improve efficiency and decrease resource utilization.
Unity is optimized for virtualized environments, not only in its storage capabilities, but also in its close
integration with VMware. Unity has EMC tools to enhance its integration with VMware, plus it works
closely with existing VMware features. Key features with which Unity seamlessly integrates are VMware vSphere Storage APIs Array Integration for SAN, VMware vSphere Storage APIs Array Integration for NAS, Virtual Storage Integrator, and VMware vCenter Site Recovery Manager.
VASA was introduced with vSphere 5.0 in 2011. The initial release (v1.0) of VASA is a read-only API that
simply gathers information about the storage system, focusing on LUN and File System properties and
data services, and displays this information in vCenter.
VASA v2.0 adds significant functionality to the protocol, including additional insight into the storage,
reporting of granular IO statistics, and active management of new storage concepts such as virtual
volumes and their related entities.
In general, a VASA session is created when a vCenter connects to the VASA Provider using the VASA
protocol; the protocol allows (and enforces) only one session per vCenter. Sessions are created with
information about the client’s context (FC/iSCSI initiators, NFS mounts, etc.) for use in filtering results.
Sessions are maintained in memory only; they are not currently persisted across restarts. vCenter will
detect a failed session and automatically start a new one.
Microsoft Windows Server 2016 and its System Center Virtual Machine Manager (SCVMM) will continue
to use the SMI-S API to manage external storage. The Unity array is designed to integrate with the
Microsoft Windows Server and SCVMM, and it provides APIs to support the storage health monitoring
feature.
Health monitoring requires storage vendors to deliver lifecycle indication of alerts on specific storage
objects, including: Array, LUN/Disk, LUN/Pool Capacity, File System, File Share, File System Capacity,
Fan/Power supply, LUN/LUN group replication as defined in the SMI-S Indication and Health Profiles.
In a typical environment, a storage admin creates a LUN, a Database admin creates a database and then
a SharePoint admin creates the Web application on the database.
If all the admins execute their respective tasks in a sequential order it would take about 90 minutes.
In the real world, this takes a lot longer as you are waiting for one or more admins to get to this work order.
With the EMC Storage Integrator (ESI), however, all that is needed is a storage pool, and the same task can be accomplished in about 20 minutes.
ESI also includes System Center integrations such as System Center Operations Manager (SCOM), SCO,
and SCVMM.
The UnityVSA is a Unified Array, providing Block (iSCSI), File (NFS & SMB/CIFS), and VVols in
one integrated platform. Easy configuration and management of the storage array is possible using the
same HTML5 Unisphere interface as Unity purpose-built storage arrays. A consistent feature set and data
services such as Unified Snapshots and Replication are available with the UnityVSA.
Benefits of this approach include a low-acquisition-cost option for hardware consolidation, multi-tenant storage instances, remote/branch-office storage environments, and environments for staging and testing that are easier to build, maintain, and destroy. UnityVSA can coexist with and provide storage to applications running on the
same server hardware, enabling customers to implement an affordable software-defined solution. Multiple
VSA instances can be deployed on a single server.
It is available as a free 4TB capacity Community Edition and a Professional Edition (10TB, 25TB and
50TB) subscription product offering with EMC support.
For users who initially purchase a 10TB or 25TB subscription and require additional capacity, the following
capacity upgrades are supported:
Capacity upgrades and license renewals can be installed non-disruptively. When a capacity upgrade is
installed, the limits on the system also scale accordingly.
For the virtual systems, the license keys are based on the system’s UUID (Universally Unique Identifier).
These keys, which are included in the License (.lic) files can be obtained through the ‘Get License Online’
link in the window. The user must provide the virtual system UUID and the license authorization code
(LAC) ID to download the license file locally.
The license file must then be transferred to a computer with access to the virtual Unity system. By clicking
on the Install License link, the user can upload the license file from the local machine to the storage
system after accepting the license agreement.
The following is a typical scenario for your UnityVSA. The customer purchases a license which is valid for 12 months. A month before expiration, they see license expiration alerts in Unisphere. These are repeated periodically: at 28 days to expiry, then 21, 14, 7, 6, 5, 4, 3, 2, and 1. There is also a ‘Get License’ link in the GUI that directs customers to Software Licensing Central, where they can renew their license. Once the license expires, users can continue to use the UnityVSA but cannot provision anything new until they renew their license.
Please note that support is bundled in with the VSA. So if a license expires, the customer’s support
contract expires too. They can never have the software without support or just support with an expired
software license. From a diagnosis standpoint, the support contract is your best gauge. The license
expiration date is also stored in ELMS.
With all-inclusive software, the UnityVSA allows users to set up NAS or SAN, optimize performance and efficiency, simplify storage management with FAST VP, protect data locally and remotely, and provision storage from within VMware vCenter.
Note that features that rely on specific physical hardware are not supported by the UnityVSA: FAST Cache, Data at Rest Encryption, and synchronous replication. UnityVSA is an Ethernet-based implementation and does not support Fibre Channel connectivity; for that reason, synchronous replication is not supported.
Links to the updated white paper can be found in the UnityVSA Info Hub, the Unity Technical
Documentation web page, and the EMC Online Support web site.
Links for the updated specifications of these models can be found in the Unity Hub, the Unity Technical
Documentation web page, and the EMC Online Support web site.
These documents are also available from links in documents generated using the SolVe Desktop tool.
The System Limits page of the Unisphere Settings window displays the size, capacity, and count limits of
various system components or storage resources including pool and LUN count, LUN and file system size,
and total pool capacity.
The user can scroll through the list and select a limit.
The bottom of the table shows the description for the selected limit, which may also include:
• A threshold of the specified limit above which the system will generate an alert
• A license identifier related to the given limit, since some system limits depend on the type of license installed
The Unity Technical Documentation web page provides access to key documents for the entire Unity
family of storage systems.
The page has links to white papers, specifications, guides, procedures, and the simple support matrix,
among other documentation.
The user can also enter the name or strings of text in the search field to locate a specific document.
The SolVe Desktop available menu options will depend on the access level. EMC employees and Service
Partners will have access to more options. Customers will have access to some installation and
configuration procedures, CRU replacement procedures, IPMI and CLI commands, and hardware
information reference.