The following are the major configuration steps for the storage, servers, and switches necessary for implementing the CLARiiON.
  1. Install Fibre Channel HBAs in all systems
  2. Install the EMC-qualified HBA driver (for example, the Emulex LP8000 port driver) on all systems
  3. Connect each host to both switches (Brocade/Cisco/McData)
  4. Connect SP1-A and SP2-A to the first switch
  5. Connect SP1-B and SP2-B to the second switch
  6. Note: For HA, you can instead cross the SP connections, attaching SPA1 and SPB1 to the first switch and SPA2 and SPB2 to the second switch
  7. Install the operating system on the Windows/Solaris/Linux/VMware hosts
  8. Connect all hosts to the Ethernet LAN
  9. Install the EMC CLARiiON Agent Configurator/Navisphere Agent on all hosts
  10. Install the EMC CLARiiON ATF software on all hosts if you are not using EMC PowerPath failover software; otherwise, install a supported version of EMC PowerPath on all hosts
  11. Install Navisphere Manager on one of the Windows NT hosts
  12. Configure Storage Groups using Navisphere Manager
  13. Assign Storage Groups to hosts as dedicated, cluster, or shared storage
  14. Install the cluster software on the hosts
  15. Test the cluster for node failover
  16. Create RAID Groups with the protection the application requires (RAID 5, RAID 1/0, etc.)
  17. Bind LUNs according to the application's device layout requirements
  18. Add the LUNs to the Storage Group
  19. Zone the SP ports and host HBAs on both switches
  20. Register the hosts on the CLARiiON using Navisphere Manager
  21. Add all hosts to the Storage Group
  22. Scan for the devices on each host
  23. Label and format the devices on each host
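The cabling and zoning steps above (connecting each HBA to one fabric, splitting the SP ports across the two switches, and zoning HBAs to SP ports) can be sketched as a small enumeration. All host and port names here are hypothetical examples, not names from the original configuration.

```python
# Sketch of the zoning implied by the cabling steps: each host HBA is zoned
# to the SP ports on its own fabric. Names are hypothetical examples.
fabric_a_sp_ports = ["SP1-A", "SP2-A"]   # A-side SP ports on the first switch
fabric_b_sp_ports = ["SP1-B", "SP2-B"]   # B-side SP ports on the second switch
hosts = {"host1": ("host1_hba0", "host1_hba1"),
         "host2": ("host2_hba0", "host2_hba1")}

def single_initiator_zones(hosts, fabric_a, fabric_b):
    """Return one zone per (HBA, SP port) pair, split across the two fabrics."""
    zones = []
    for host, (hba_a, hba_b) in hosts.items():
        zones += [(hba_a, sp) for sp in fabric_a]   # first HBA cabled to switch 1
        zones += [(hba_b, sp) for sp in fabric_b]   # second HBA cabled to switch 2
    return zones

zones = single_initiator_zones(hosts, fabric_a_sp_ports, fabric_b_sp_ports)
# Each host ends up with 4 paths: 2 SP ports per fabric x 2 fabrics.
```

With this layout every host keeps two live paths even if one switch or one SP fails, which is what the failover software (ATF or PowerPath) relies on.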

EMC is introducing a revolutionary new Virtual Matrix architecture within the Symmetrix system family which will redefine high-end storage capabilities. This new Symmetrix V-Max system architecture allows for unprecedented levels of scalability, and robust high availability is enabled by clustering, with fully redundant V-Max Engines and interconnects. Symmetrix V-Max, along with Enginuity 5874, delivers unprecedented performance, availability, functionality, and economic advantages.

The Symmetrix V-Max series, with its unique scale-out Virtual Matrix architecture, can be configured with 96 to 2,400 drives and usable capacity up to 2 PB. Systems provide up to 944 GB of mirrored global memory and up to 128 Fibre Channel ports, 64 FICON ports, 64 Gigabit Ethernet ports, or 64 iSCSI connections. The Symmetrix V-Max series is a distributed multi-node storage system that can scale from one to eight highly available V-Max Engines. Systems are configured around a central system bay and adjacent storage bays of up to 240 disks each. A full range of drive options is available, scaling from ultra-fast enterprise Flash drives, to Fibre Channel drives, to the highest-capacity 1 TB SATA II drives.

Enhanced device configuration and replication operations result in simpler, faster, and more efficient management of large virtual and physical environments. This allows organizations to save on administrative costs, reduce the risk of operational errors, and respond rapidly to changing business requirements. Enginuity 5874 also introduces cost- and performance-optimized business continuity solutions, including a zero-RPO two-site long-distance solution.

RAID Virtual Architecture (RVA):- Enginuity 5874 introduces a new RAID implementation infrastructure. This enhancement increases configuration options in SRDF environments by reducing the number of mirror positions for RAID 1 and RAID 5 devices. It also provides additional configuration options, for example, allowing LUN migrations in a Concurrent or Cascaded SRDF environment.

Large Volume Support:- Enginuity 5874 increases the maximum volume size to approximately 240 GB for open systems environments and 223 GB for mainframe environments.

512 Hyper Volumes per Physical Drive:- Enginuity 5874 supports up to 512 hyper volumes on a single drive, twice as many as Enginuity 5773. Customers can improve flexibility and capacity utilization by configuring more granular volumes that more closely meet their space requirements and leave less space unused.

Autoprovisioning Groups:- Autoprovisioning Groups reduce the complexity of Symmetrix device masking by allowing the creation of groups of host initiators, front-end ports, and storage volumes. This provides the ability to mask storage to multiple paths at once instead of one path at a time, reducing the time required and the potential for error in consolidated and virtualized server environments.

Concurrent Provisioning and Scripts:- Concurrent configuration changes provide the ability to run scripts concurrently instead of serially, improving system management efficiency. Uses for concurrent configuration changes include parallel device mapping, unmapping, and metavolume forming and dissolving from different hosts.
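The benefit of Autoprovisioning Groups is easiest to see as a counting exercise: one masking view ties together an initiator group, a port group, and a storage group, so every combination is masked in a single operation. A minimal sketch, with hypothetical initiator, port, and device names:

```python
# Sketch of an Autoprovisioning Groups masking view (hypothetical names):
# one view ties an initiator group, a port group, and a storage group
# together, masking every initiator/port/volume combination at once.
initiator_group = {"hba_wwn_1", "hba_wwn_2"}          # host initiators
port_group      = {"FA-7E:0", "FA-10E:0"}             # front-end director ports
storage_group   = {"dev_0A1", "dev_0A2", "dev_0A3"}   # Symmetrix volumes

# Paths covered by the single view, versus masking one path at a time:
paths_masked = len(initiator_group) * len(port_group) * len(storage_group)
```

Here one view replaces twelve individual masking operations; the gap widens quickly in consolidated server environments with many initiators and volumes.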

Dynamic Provisioning Enhancements:- Dynamic configuration changes allow the dynamic setting of the BCV and dynamic SRDF device attributes and decrease the impact to host I/O during the corresponding configuration manager operations.

New Management Integration: - With Enginuity 5874, the Symmetrix Management Console (SMC) and SMI-S provider are available on the Symmetrix system's Service Processor. This frees host resources and simplifies Symmetrix system management; by attaching the Service Processor to a customer network, the customer can open SMC and manage the Symmetrix system from anywhere in their enterprise.

Enhanced Virtual LUN:- With Enginuity 5874, Virtual LUN technology provides the ability to nondisruptively change the physical location on disk and/or the protection type of Symmetrix logical volumes, and allows the migration of open systems, mainframe, and System i volumes to unallocated storage or to existing volumes. Organizations can respond more easily to changing business requirements when using tiered storage in the array.

Enhanced Virtual Provisioning - Draining:- With Enginuity 5874, Virtual Provisioning support for draining of data devices allows the nondisruptive removal of one or more data devices from a thin device pool without losing the data that belongs to the thin devices. This feature allows for improved capacity utilization.

Enhanced Virtual Provisioning - Support for All RAID Types:- With Enginuity 5874, Virtual Provisioning no longer restricts RAID 5 data devices; all data device RAID types are now supported.
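Conceptually, draining re-homes the thin devices' extents from the device being removed onto the remaining data devices in the pool. The following is a deliberately simplified model of that idea (the device and extent names are made up, and real Enginuity behavior is considerably more involved):

```python
# Minimal model of data-device draining: extents on the drained device are
# redistributed across the remaining data devices in the thin pool.
# Hypothetical names; real Enginuity internals are more complex.
pool = {"data_dev_1": ["ext_a", "ext_b"],
        "data_dev_2": ["ext_c"],
        "data_dev_3": []}

def drain(pool, dev):
    """Remove 'dev' from the pool, redistributing its extents round-robin."""
    extents = pool.pop(dev)
    targets = sorted(pool)                    # deterministic target order
    for i, ext in enumerate(extents):
        pool[targets[i % len(targets)]].append(ext)
    return pool

drain(pool, "data_dev_1")
# data_dev_1 is gone, but ext_a and ext_b survive on the other data devices.
```

The point of the feature is exactly this property: capacity can be reclaimed from a pool without any loss of thin-device data and without disruption to the hosts using it.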

Veritas Disk Group Configuration Guidelines:-

1) Use multiple Disk Groups—preferably a minimum of four; place the DATA, REDO, TEMP, UNDO, and FRA archive logs in different (separate) Veritas Disk Groups

2) Optimally, use RAID 1 for tier 1 storage

3) Configure Disk Groups so that each contains LUNs of the same size and performance characteristics

4) Distribute Veritas Disk Group members over as many spindles as is practical for the site’s configuration and operational needs

Data Striping and Load Balancing:-

1) Veritas software-level striping: layout=stripe ncols=10 stripeunit=128k

2) Storage-level striping further parallelizes the individual I/O requests within storage

3) Using storage-level RAID protection reduces the amount of host-to-storage I/O traffic

4) EMC PowerPath should be used for load balancing and path failover

5) Use of metavolumes is optional

a) There is an upper limit on the number of LUNs that a host can address—typically ranging from 256 to 1,024 per HBA.

b) When these limits are reached, metavolumes are a convenient way to access more Symmetrix hypervolumes.
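Two small calculations underpin the striping and metavolume guidelines above. The first maps a volume offset to its stripe column for the layout=stripe ncols=10 stripeunit=128k example; the second shows when the per-HBA LUN limit forces hypervolumes to be combined into metavolumes. The figures of 2,400 hypervolumes and a 256-LUN limit are illustrative assumptions, not values from the original configuration.

```python
import math

STRIPE_UNIT = 128 * 1024   # stripeunit=128k from the vxassist layout above
NCOLS = 10                 # ncols=10

def stripe_column(offset_bytes):
    """Which of the 10 stripe columns a given volume offset lands on."""
    return (offset_bytes // STRIPE_UNIT) % NCOLS

# Successive 128 KB chunks rotate across all 10 LUNs:
# offset 0 -> column 0, offset 128K -> column 1, ..., offset 1280K -> column 0.

# Metavolume math (illustrative): if a host HBA can address only 256 LUNs but
# the layout calls for 2,400 hypervolumes, combining hypers into metavolumes
# of width >= ceil(2400 / 256) keeps the presented LUN count within the limit.
min_meta_width = math.ceil(2400 / 256)   # hypers per metavolume
```

The rotation across columns is what spreads a large sequential I/O stream over all ten LUNs; the metavolume width calculation is the "convenient way to access more hypervolumes" point made in (b) above.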


Volume Configuration with Veritas (Hypervolumes):


1) Five Veritas Disk Groups were created

2) Five Disk Groups are used because this number provides better granularity for performance planning

3) The use of five Disk Groups also provides increased flexibility when planning for the utilization of EMC replication technology within the context of an enterprise-scale workload

4) Having five Disk Groups permits the placement of data onto different storage tiers if desired

Hypervolume    Purpose    Size
1              DATA       32 GB
2              REDO       400 MB
3              DATA       32 GB
4              FRA        30 GB
5              TEMP       10 GB
6              FRA        30 GB
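Summing the table gives the capacity consumed by one set of these six hypervolumes, and grouping by purpose shows how the space splits across DATA, REDO, FRA, and TEMP (REDO's 400 MB is expressed as 0.4 GB):

```python
# Totals for the hypervolume layout in the table above (sizes in GB).
hypers = [("DATA", 32), ("REDO", 0.4), ("DATA", 32),
          ("FRA", 30), ("TEMP", 10), ("FRA", 30)]

total_gb = sum(size for _, size in hypers)      # one set of the six hypers

per_purpose = {}                                # GB consumed per purpose
for purpose, size in hypers:
    per_purpose[purpose] = per_purpose.get(purpose, 0) + size
```

One set of six hypervolumes consumes 134.4 GB, dominated by the 64 GB of DATA and 60 GB of FRA; this kind of per-purpose breakdown is what makes the five-Disk-Group split useful for performance and replication planning.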

For the configuration below, average disk utilization for RAID 1 should stay below 150 IOPS per disk and should not exceed 200 IOPS per disk.

-- 80 physical disks (40 mirrored pairs)

-- 240 devices visible to Veritas

-- Average user count ~ 16,000
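Multiplying the per-disk targets by the spindle count turns those guidelines into an array-level I/O budget for this configuration:

```python
# Back-of-envelope IOPS budget for the configuration above.
DISKS = 80                   # 40 RAID 1 mirrored pairs
TARGET_IOPS_PER_DISK = 150   # average utilization should stay below this
CEILING_IOPS_PER_DISK = 200  # hard ceiling per disk

target_array_iops = DISKS * TARGET_IOPS_PER_DISK     # comfortable budget
ceiling_array_iops = DISKS * CEILING_IOPS_PER_DISK   # do not exceed
```

So the 80 spindles give a comfortable budget of 12,000 back-end IOPS with a ceiling of 16,000; note that RAID 1 doubles each host write on the back end, so the host-visible write budget is lower still.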
