If you're looking for EMC interview questions for experienced candidates or freshers, you are in the right place. There are many opportunities at reputed companies around the world. According to research, EMC holds a progressive market share, so you still have the opportunity to move ahead in your career in EMC engineering. Mindmajix offers advanced EMC interview questions 2024 that help you crack your interview and acquire your dream career as an EMC engineer.
If you want to enrich your career and become a professional in EMC, then enroll in "EMC Training". This course will help you achieve excellence in this domain.
I will use the formula:
Total Approximate Drives required = (RAID Group IOPS / (Hard Drive Type IOPS)) + Large Random I/O adjustment + Hot Spares + System Drives
Capacity = Heads X Cylinders X Sectors X Block Size
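As a quick sanity check, the capacity formula can be sketched in Python (the geometry values below are hypothetical, chosen only to illustrate the multiplication):

```python
def chs_capacity_bytes(heads, cylinders, sectors_per_track, block_size=512):
    """Raw drive capacity from classic CHS geometry:
    Heads x Cylinders x Sectors x Block Size."""
    return heads * cylinders * sectors_per_track * block_size

# Example (hypothetical geometry): 16 heads, 16383 cylinders,
# 63 sectors per track, 512-byte blocks
cap = chs_capacity_bytes(16, 16383, 63, 512)
print(cap)  # 8455200768 bytes (~8.45 GB)
```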
Input/output operations per second (IOPS) is the measure of how many input/output operations a storage device can complete within one second.
IOPS is important for transaction-based applications.
IOPS performance is heavily dependent on the number and type of disk drives.
To calculate the IOPS of a Hard disk drive:
IOPS = 1 / (Average Latency + Average Seek Time)
(with both times expressed in seconds)
To calculate the back-end IOPS of a RAID group:
(Total Workload IOPS * Percentage of workload that is read operations) + (Total Workload IOPS * Percentage of workload that is write operations * RAID write penalty)
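A minimal Python sketch of this read/write split (the 70% read mix and RAID 5 write penalty of 4 below are assumed example values, not from the question):

```python
def backend_iops(workload_iops, read_fraction, raid_write_penalty):
    """Back-end disk IOPS: reads pass through once, writes are
    multiplied by the RAID write penalty (e.g. 4 for RAID 5)."""
    reads = workload_iops * read_fraction
    writes = workload_iops * (1 - read_fraction) * raid_write_penalty
    return reads + writes

# Example: 5000 host IOPS, 70% reads, RAID 5 (write penalty 4)
# 3500 read IOs + 6000 write IOs = 9500 back-end IOPS
print(round(backend_iops(5000, 0.70, 4)))  # 9500
```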
Max IOPS an HBA Port can generate to any LUN = (Device Queue Depth per LUN * (1 / (Storage Latency in ms/1000)))
The queue depth is the maximum number of commands that can be queued on the system at the same time.
Q (Queue Depth) = Execution Throttle = the maximum number of simultaneous I/Os for each LUN on a particular path to the storage port.
Calculation of the maximum queue depth: The queue depth is the number of I/O operations that can be run in parallel on a device.
Q = Storage Port Max Queue Depth / (I * L), where
I = number of initiators per storage port
L = number of LUNs in the storage group
T = P * Q * L, where
T = target port queue depth
P = paths connected to the target port
Q = queue depth
L = number of LUNs presented to the host through this port
Execution Throttle= (Maximum Storage Port Command Queue) / (Host Ports)
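The two queue-depth formulas above can be sketched together in Python (the port queue of 2048, 8 initiators, 16 LUNs, and 2 paths are assumed example values):

```python
def lun_queue_depth(storage_port_max_qd, initiators, luns):
    """Per-LUN queue depth: Q = storage port max queue depth / (I * L)."""
    return storage_port_max_qd // (initiators * luns)

def target_port_queue_load(paths, queue_depth, luns):
    """Outstanding commands a target port may see: T = P * Q * L."""
    return paths * queue_depth * luns

# Example (assumed numbers): port max queue 2048, 8 initiators, 16 LUNs
q = lun_queue_depth(2048, 8, 16)
print(q)                                   # 16
print(target_port_queue_load(2, q, 16))    # 512
```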
Total Approximate Drives required = (RAID Group IOPS / (Hard Drive Type IOPS)) + Large Random I/O adjustment + Hot Spares + System Drives
Capacity = Heads X Cylinders X Sectors X Block Size
The Rotational speed and latency time are related as follows:
Latency time = (1/((Rotational Speed in RPM)/60)) * 0.5 * 1000 milliseconds
Latency and RPM for HDDs:

Spindle speed (RPM) | Average rotational latency (ms) |
---|---|
7,200 | 4.17 |
10,000 | 3.00 |
15,000 | 2.00 |
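These latency values follow directly from the formula above; a small Python sketch reproduces them:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = half a revolution:
    (1 / (RPM / 60)) * 0.5 * 1000 milliseconds."""
    return (1 / (rpm / 60)) * 0.5 * 1000

for rpm in (7200, 10000, 15000):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2))
# 7200 4.17
# 10000 3.0
# 15000 2.0
```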
Core-Edge:
[Figure: Core-Edge topology]
Benefits are:
Servers/hosts use multipathing for failover: when one path from the host to the SAN becomes unavailable, the host switches to another path.
Servers/ hosts can also use multipathing for load balancing.
Types of policy:
I will use a Fixed (Preferred) path management policy to intelligently segment the workload across both controllers.
Rescan the devices:
# /usr/sbin/ioscan -C disk
ioscan – scan the I/O system
Generate device files: insf -e
Verify the new devices: ioscan -funC disk
On AIX:
Rescan the devices: cfgmgr -vl fcsX, where X is the FC adapter number
Verify the new devices: lsdev -Cc disk
On Linux:
# echo "scsi add-single-device H C I L" > /proc/scsi/scsi
(where H, C, I, and L are the host, channel, ID, and LUN numbers)
On Solaris:
Determine the FC channels: cfgadm -al
Force rescan: cfgadm -o force_update -c configure cX
where X is the FC channel number
Force rescan at HBA port level: luxadm -e forcelip /dev/fc/fpX
Force rescan on all FC devices: cfgadm -al -o show_FCP_dev
Install device files: devfsadm
Display all ports: luxadm -e port
Display HBA port information: luxadm -v display
Display HBA port information: luxadm -e dump_map
To force a Fibre Channel SAN disk rescan, use the device path from the luxadm -e port output.
# luxadm -e forcelip
Sign in to VMware Infrastructure Client. Select the ESX host and then click the “Configuration” tab. Select “Storage Adapters” from under Hardware. Click “Rescan”
ESX/ESXi 4.x and earlier
esxcfg-rescan
ESXi 5.x and later
esxcli storage core adapter rescan --all
1. AIX
lscfg -v -l fcs#
(fcs – FC Adapter)
SMIT
2. HP-UX
fcmsutil /dev/td#
(td – Tachyon Adapter)
SAM
3. WIN
emulexcfg -emc or
HBAnyware
I can use Storage Explorer to see detailed information about the Fibre Channel host bus adapters (HBAs).
4. Solaris
/usr/sbin/lpfc/lputil
Also I can use:
grep -i wwn /var/adm/messages
dmesg | grep -i wwn
5. VMware vSphere ESX/ESXi host
There are several ways to get HBA WWNs on a VMware ESX/ESXi host:
vSphere Client;
Using ESXi Shell;
Using Powershell / PowerCLI script.
6. LINUX
/sys/class/scsi_host/hostN/device/fc_host/hostN/port_name
where "N" is the host number for your Fibre Channel HBAs
I will check the OS log files/event logs for errors:
Yes. I have been using Brocade fabrics, and I have used "supportsave" to collect various logs for any issues.
Syntax:
supportsave [ os | platform | l2 | l3 | custom | core | all ]
Buffer credits, also called buffer-to-buffer credits (BB_Credit), are used as a flow-control method by Fibre Channel technology and represent the number of frames a port can store. Fibre Channel interfaces use buffer credits to ensure all frames are delivered to their destination. This flow-control mechanism ensures that Fibre Channel switches do not run out of buffers and therefore do not drop frames. Overall performance can be boosted by optimizing the buffer-to-buffer credits allotted to each port.
Number of Buffers: BB_Credit = [port speed] x [round trip time] / [frame size]
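A rough Python sketch of this sizing. It assumes about 10 µs of round-trip time per km of fiber, 8b/10b encoding (~100 MB/s per Gbit of line rate), and a 2112-byte full Fibre Channel frame; all of these are back-of-the-envelope assumptions, not figures from the question:

```python
def bb_credits_needed(link_gbps, distance_km, frame_bytes=2112):
    """Estimate buffer-to-buffer credits needed to keep a link full:
    credits = bytes in flight during the round trip / frame size."""
    rtt_s = distance_km * 10e-6            # assumed ~10 us round trip per km
    bytes_per_s = link_gbps * 1e9 / 10     # 8b/10b: ~100 MB/s per Gbit
    return bytes_per_s * rtt_s / frame_bytes

# Example: 2 Gbps link over 50 km of fiber
print(round(bb_credits_needed(2, 50)))  # ~47 credits (about 1 per km at 2 Gbps)
```

This matches the common rule of thumb of roughly one credit per kilometre at 2 Gbps with full-size frames.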
I have used Brocade SAN and it has these load balancing policies:
I will use a fan-out ratio, for example 10:1.
I will determine this ratio, based on the server platform and performance requirement by consulting Storage vendors.
Drooping = bandwidth inefficiency.
Drooping begins if: BB_Credit < RTT / SF, where
RTT = round-trip time
SF = serialization delay for a data frame
Design should address three separate levels:
Tier 1: 99.999% availability (5 minutes of downtime per year)
Tier 2: 99.9% availability (8.8 hours average downtime per year, 13.1 hours maximum)
Tier 3: 99% availability (3.7 days of downtime per year)
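These downtime figures can be verified with a short Python sketch:

```python
def downtime_minutes_per_year(availability_pct):
    """Allowed downtime per year, in minutes, for a given
    availability percentage (365-day year)."""
    minutes_per_year = 365 * 24 * 60
    return minutes_per_year * (1 - availability_pct / 100)

print(round(downtime_minutes_per_year(99.999), 2))        # ~5.26 minutes
print(round(downtime_minutes_per_year(99.9) / 60, 2))     # ~8.76 hours
print(round(downtime_minutes_per_year(99.0) / 60 / 24, 2))  # ~3.65 days
```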
SAN Storage array has data integrity built into it.
A storage array uses spare disk drives to take the place of any disk drives that are taken offline because of errors. Hot spares are available and will spare out predictively when a drive is failing.
There are two types of disk sparing:
Multimode fiber =large light carrying core, 62.5 microns or larger in diameter for short-distance transmissions with LED-based fiber optic equipment.
Multimode fibers have a much larger core than single-mode (50 μm, 62.5 μm, or even higher), allowing light to travel through multiple paths. [Figure: multimode fibre core]
Single-mode fiber =small light carrying core of 8 to 10 microns in diameter used for long-distance transmissions with laser diode-based fiber optic transmission equipment.
A single-mode fiber has a much smaller core, only about 9 microns, so the light travels in only one ray. [Figure: single-mode fibre core]
By using a Fibre Optic Loopback module.
Fibre Optic Loopback modules are also called optical loopback adapters. As a best practice, I send a loopback test to equipment one device at a time to isolate the problem.
I have used different types:
Others
It helps in testing the transmission capability and the receiver sensitivity of network equipment.
To use one, I plug one connector into the output port and the other into the input port of the equipment.
Platform | Tool |
---|---|
AIX | iostat |
HP-UX | sar, iostat, Glance+, vxstat |
Linux | iostat |
Windows | Performance Monitor |
Solaris | iostat |
VMware | esxtop |
To calculate IOPS per drive the formula I will use is:
1000 / (Seek Time + Latency) = IOPS
SSD drives have no movable parts and therefore have no RPM.
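A small Python sketch of the per-drive calculation (the 3.5 ms seek time below is an assumed figure for a 15k RPM drive, used only for illustration):

```python
def drive_iops(avg_seek_ms, avg_latency_ms):
    """IOPS = 1000 / (average seek time + average rotational latency),
    with both times in milliseconds."""
    return 1000 / (avg_seek_ms + avg_latency_ms)

# Example: 15k RPM drive, ~3.5 ms average seek, 2.0 ms rotational latency
print(round(drive_iops(3.5, 2.0)))  # ~182 IOPS
```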
The required bandwidth is determined by measuring the average number of write operations and the average size of write operations over a period of time.
Raw Capacity= Usable + Parity
I select it on the basis of transmission distance.
If the distance is less than a couple of miles, I will use a multimode fiber cable.
If the distance is more than 3-5 miles, I will use a single-mode fiber cable.
The device masking commands allow you to:
SAN zoning is a method of arranging Fibre Channel devices into logical groups over the physical configuration of the fabric.
SAN zoning may be utilized to implement compartmentalization of data for security purposes.
Each device in a SAN may be placed into multiple zones.
The base components of zoning are
1. Zones
2. Zone sets
3. Default zone
4. Zone members
A zone is a set of devices that can access each other through port-to-port connections. When you create a zone with a certain number of devices, only those devices are permitted to communicate with each other. This means that a device that is not a member of the zone cannot access devices that belong to the zone.
The figure shows the SAN connectivity, and zoning must be done accordingly. [Figure: SAN connectivity]
Hard zoning is zoning that is implemented in hardware.
Soft zoning is zoning that is implemented in software.
Hard zoning physically blocks access to a zone from any device outside of the zone.
Soft zoning uses filtering implemented in fibre channel switches to prevent ports from being seen from outside of their assigned zones. The security vulnerability in soft zoning is that the ports are still accessible if the user in another zone correctly guesses the fibre channel address.
Port zoning utilizes physical ports to define security zones. A user’s access to data is determined by what physical port he or she is connected to. With port zoning, zone information must be updated every time a user changes switch ports. In addition, port zoning does not allow zones to overlap. Port zoning is normally implemented using hard zoning, but could also be implemented using soft zoning.
WWN zoning uses name servers in the switches to either allow or block access to particular World Wide Names (WWNs) in the fabric. A major advantage of WWN zoning is the ability to re-cable the fabric without having to redo the zone information. WWN zoning is susceptible to unauthorized access, as the zone can be bypassed if an attacker is able to spoof the World Wide Name of an authorized HBA.
A Logical Unit Number or LUN is a logical reference to the entire physical disk or a subset of a larger physical disk or disk volume or portion of a storage subsystem.
LUN (Logical Unit Number) Masking is an authorization process that makes a LUN available to some hosts and unavailable to other hosts.
LUN Masking is implemented primarily at the HBA (Host Bus Adapter) level. LUN Masking implemented at this level is vulnerable to any attack that compromises the HBA. Some storage controllers also support LUN Masking.
LUN Masking is important because Windows-based servers attempt to write volume labels to all available LUNs. This can render the LUNs unusable by other operating systems and can result in data loss.
Device masking lets you control your host HBA access to certain storage arrays devices. A device masking database, based in the storage arrays unit, eliminates conflicts through centralized monitoring and access records. Both HBA and storage arrays director ports in their Channel topology are uniquely identified by a 64-bit World Wide Name (WWN). For ease of use, you can associate an ASCII World Wide Name (AWWN) with each WWN.
I will use the Persistent Binding for Tape Devices.
Persistent binding is a host-centric enforced way of directing an operating system to assign certain SCSI target IDs and LUNs.
Persistent Name Binding support is for target devices.
Persistent binding is provided for users to associate a specified device World Wide Port Name (WWPN) to a specified SCSI target ID.
For example, a specific host will always assign SCSI ID 3 to the first router it finds, and LUNs 0, 1, and 2 to the three tape drives attached to the router.
Practical examples:
For Emulex HBA on a Solaris host for setting up persistent binding:
# lputil
MAIN MENU
Using option 5 performs a manual persistent binding; the binding is stored in the /kernel/drv/lpfc.conf file.
lpfc.conf file looks like:
fcp-bind-WWNN="50060XY484411c6c11:lpfc0t1",
"50060XY4411c6c12:lpfc1t2";
sd.conf file looks like:
name="sd" parent="lpfc" target=1 lun=0;
name="sd" parent="lpfc" target=2 lun=0;
Reconfigure:
# touch /reconfigure
# shutdown -y -g0 -i6
Yes, on brocade:
1. I will create an alias.
alicreate "aliname", "member; member"
2. I will create a zone.
zonecreate "ZoneName", "alias1; alias2"
3. I will add the zone to the defined configuration.
cfgadd "ConfigName", "ZoneName"
# cfgadd "configuration_Name", "Zone_name"
4. I will save the defined configuration to persistent storage.
# cfgsave
5. I will enable the configuration.
cfgenable "ConfigName"
# cfgenable "configuration_Name"
Ravindra Savaram is a Technical Lead at Mindmajix.com. His passion lies in writing articles on the most popular IT platforms including Machine learning, DevOps, Data Science, Artificial Intelligence, RPA, Deep Learning, and so on. You can stay up to date on all these technologies by following him on LinkedIn and Twitter.