Implementing Cisco UCS Solutions. Starting with a description of Cisco UCS equipment options, this hands-on guide then introduces the Cisco UCS Emulator. Gain an overview of the main components of Cisco UCS, including the unified fabric and unified management, so that they can be uniformly implemented across your entire organization. After Cisco's successful entry into the server market, the three companies IBM, HP, and Dell decided to respond by starting profitable collaborations and founding a joint company whose head office is in...
Cisco Unified Computing System (UCS) is a powerful solution for modern data centers, raising efficiency and lowering costs. This tutorial helps professionals.
Cisco UCS is a unified solution that consolidates computing, network, and storage connectivity components along with centralized management.
The stateless computing blade server design simplifies troubleshooting, and Cisco's patented extended memory technology provides higher consolidation ratios for virtualized servers. This is a hands-on guide that takes you through deployment in Cisco UCS. With real-world examples for configuring and deploying Cisco UCS components, this book will prepare you for practical deployments of Cisco UCS data centre solutions.
If you want to learn and enhance your hands-on skills with Cisco UCS solutions, this book is certainly for you. You will be introduced to all areas of UCS solutions with practical configuration examples. Finally, you will learn about virtualized networking, third-party integration tools, and testing failure scenarios.
You will learn everything you need to know for real-world deployments of the rapidly growing Cisco UCS platform. Farhan Nadeem has been in the IT field for over 19 years. Farhan has proven experience in successfully engineering, deploying, administering, and troubleshooting heterogeneous infrastructure solutions. Starting with the Microsoft MCSE-NT certification, he has always stayed abreast of the latest technologies and server hardware through proactive learning and successful real-world deployments.
He has extensive work experience in complex heterogeneous environments comprising various hardware platforms, operating systems, and applications. This exposure has given him broad knowledge in investigating, designing, implementing, and managing infrastructure solutions. He progressively started focusing on virtualization technologies and the Cisco UCS platform and has completed several successful UCS deployments with multiple virtualization platforms.
When not working with computers, he enjoys spending time with his family. He has also technically reviewed the second edition of this book.
Prasenjit Sarkar is a product manager at Oracle for their public cloud, with a focus on cloud strategy, Oracle Ravello, cloud-native applications, and the API platform. His primary focus is driving Oracle's cloud computing business with commercial and public sector customers, helping to shape and deliver a strategy to build broad use of Oracle's Infrastructure as a Service offerings, such as Compute, Storage, and Database as a Service.
He has also authored six industry-leading books on virtualization, SDN, and physical compute, among other topics, as well as numerous research articles.
The specifications of the C M2 rack-mount servers are as follows: For a quick comparison of C-series servers, please visit http: and select all the servers, then click on the Compare button to get the results.

Getting started with mezzanine adapters

A huge variety of mezzanine adapters, also known as Virtual Interface Cards (VICs), is available from Cisco for both B-series blade servers and C-series rack servers.
Older adapters are of the fixed-port type and are not optimized for contemporary virtualized server environments. Some older third-party network cards are also available as an option. Newer adapters are optimized for virtualization and can provide dynamic virtual adapters; the number of virtual adapters depends on the VIC model. Our focus will be on those mezzanine adapters that are virtualization-optimized.
Cisco VICs are also known by their code name, Palo.

Power capacity and power plug types

The UCS blade chassis comes with options for up to four power supply units. Each is a single-phase unit providing 2,500 watts. Depending on the total number of power supplies in the chassis and the input power sources, UCS can be configured in one of the following three modes:

Nonredundant mode: the power supply units installed in the system provide adequate power.
A power supply failure results in a chassis failure. Load is evenly distributed among the power supplies; however, there is no power redundancy.

N+1 redundant mode: the extra power supply unit is in standby mode and the load is evenly distributed among the operational power supply units. If a single power supply fails, the standby unit replaces it immediately.
Grid redundant mode: in this mode, all four power supply units must be present in the system, and power should be supplied from two different power sources.
Power supply units must be configured in pairs: units 1 and 2 form one pair, and units 3 and 4 form the second. Ideally, separate physical power cabling from two independent utility grids should feed each pair of power supplies. If one power source fails, the power supply units on the other circuit continue to power the system. The connector on the other side of the power cable varies according to country-specific electrical standards. More information regarding power and environmental requirements is available at http: Care must be taken during the installation of all components, as failure to follow the installation procedure may result in component malfunction and bodily injury.
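The three redundancy modes described above can be sketched as a small capacity check. This is an illustrative sketch only: the function and its data model are invented for this example, and the 2,500 W per-unit rating is an assumption taken from the text, not from a data sheet.

```python
# Hypothetical sketch of the chassis power redundancy rules described above.
PSU_WATTS = 2500  # assumed rating of each single-phase unit

def usable_power(installed_psus: int, mode: str) -> int:
    """Return the watts available to the chassis under a redundancy mode."""
    if not 1 <= installed_psus <= 4:
        raise ValueError("the chassis holds between one and four PSUs")
    if mode == "non-redundant":
        # All installed units carry load; a single failure fails the chassis.
        return installed_psus * PSU_WATTS
    if mode == "n+1":
        # One unit is held in standby, so it adds no usable capacity.
        if installed_psus < 2:
            raise ValueError("N+1 needs at least two PSUs")
        return (installed_psus - 1) * PSU_WATTS
    if mode == "grid":
        # All four units required, fed as two pairs from separate sources;
        # the load must survive the loss of one entire grid (one pair).
        if installed_psus != 4:
            raise ValueError("grid redundancy requires all four PSUs")
        return 2 * PSU_WATTS
    raise ValueError(f"unknown mode: {mode}")
```

For example, four units in grid redundant mode yield the capacity of only two, because the sizing must assume the loss of a whole utility grid.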
UCS chassis don'ts: do not try to lift even an empty chassis alone; at least two people are required to handle the UCS chassis. Before the physical installation of the UCS solution, it is also imperative to consider other datacenter design factors. Anyone with prior server installation experience should be comfortable installing internal components using the guidelines provided in the blade server manual and following standard safety procedures.
ESD transient charges may build up thousands of volts, which can degrade or permanently damage electronic components. The Cisco ESD training course may be referred to at http: All Cisco UCS blade servers have a similar cover design with a button at the top front of the blade; this button needs to be pushed down.
Then there is a slight variation among models in the way the cover slides; it may slide towards the rear and upwards, or towards you and upwards.

1. Make sure you are wearing an ESD wrist wrap grounded to the blade server cover.
2. Move the lever up and remove the CPU blank cover.
3. Keep the blank cover in a safe place in case you later need to remove a CPU.
4. Pick up the CPU by its plastic edges and align it with the socket. The CPU can only fit one way.
5. Lower the mounting bracket with the side lever and secure the CPU into the socket.
6. Align the heat sink with its fins positioned to allow unobstructed airflow from front to back.
7. Gently tighten the heat sink screws onto the motherboard.

CPU removal is the reverse of the installation process. It is critical to place the socket blank cover back over the CPU socket; without the blank cover, damage could occur to the socket.
1. Move the clips on the sides of the memory slot away.
2. Hold the memory module by both edges in an upright position and firmly push it straight down, matching the notch of the module to the socket.
3. Close the side clips to hold the memory module.
Memory removal is the reverse of the installation process. If not all of the memory slots are populated, the memory modules must be inserted in pairs and split equally between the CPUs.
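The pairing rule above can be expressed as a quick validation. This is an illustrative sketch, not a Cisco tool: the function name and the two-CPU default are assumptions for the example, and real slot maps come from the server manual.

```python
# Hypothetical check of the DIMM placement rule: modules go in pairs,
# with the pairs split equally between the CPUs.
def place_dimms(dimm_count: int, cpus: int = 2) -> dict:
    """Return how many DIMM pairs each CPU receives."""
    if dimm_count % 2:
        raise ValueError("memory modules must be inserted in pairs")
    pairs = dimm_count // 2
    if pairs % cpus:
        raise ValueError("pairs must be split equally between the CPUs")
    # Assign an equal share of pairs to each CPU's slot bank.
    return {f"CPU{c}": pairs // cpus for c in range(1, cpus + 1)}
```

For example, eight DIMMs in a two-socket blade give each CPU two pairs, while six DIMMs would be rejected because three pairs cannot be split evenly.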
Refer to the server manual to identify memory slot pairs and the slot-to-CPU relationship. Blade servers B, B, and B support regular-thickness 15 mm hard drives, whereas the B supports thin 7 mm hard drives.

1. Remove the blank cover.
2. Press the button on the catch lever on the ejector arm.
3. Slide the hard disk completely into the slot.
4. Push the ejector lever until it clicks to lock the hard disk.

To remove a hard disk, press the release button, pull the catch lever outward, and slide the hard disk out. To insert a thin hard drive into the B server, or release one from it, release the catch by pushing it inwards while inserting or removing the hard disk.
Do not leave a hard disk slot empty. If you do not intend to replace the hard disk, cover the slot with a blank plate to ensure proper airflow. The procedure for installing these cards is the same for all servers, as follows:

1. Open the server top cover.
2. Grab the card by its edges and align its male molex connector with the female connector on the motherboard.
3. Press the card gently into the slot.
4. Once the card is properly seated, secure it by tightening the screw on top.

Mezzanine card removal is the reverse of the installation process.

Installation of blade servers on the chassis

Installation and removal of half-width and full-width blade servers is almost identical; the only difference is that half-width blade servers use one ejector arm, whereas full-width blade servers have two.
Carry out the following steps:

1. Make sure you are wearing an ESD wrist wrap grounded to the chassis.
2. Open one ejector arm for half-width blade servers, or both ejector arms for full-width blade servers.
3. Push the blade into the slot.
4. Once it is firmly in, close the ejector arm on the face of the server and tighten the screw by hand.
The removal of a blade server is the opposite of the installation process. In order to install a full-width blade, it is necessary to remove the central divider. Use a Phillips screwdriver to push the two clips, one downward and the other upward, and slide the divider out of the chassis.
All management and data-movement intelligence for chassis components and blade servers resides in the FIs and in the IOM modules, which act as line cards for the FIs. These links can be configured in a port channel for bandwidth aggregation. The figure on the right shows a configuration in which links from a single IOM are connected to different FIs. This is an invalid topology, and chassis discovery will therefore fail. Chassis discovery will also fail if a high-availability cluster is not established between the FIs. IOM interface connectivity to blade servers does not require user configuration.
IOM-to-FI connectivity, however, requires physical cabling, and there are a variety of possibilities in terms of physical interfaces. Depending on the bandwidth requirements and the model, it is possible to have one, two, four, or eight connections from an IOM to an FI.
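The cabling choice above is a capacity trade-off, which a back-of-the-envelope sketch makes concrete. The function below is illustrative only: the 10 Gb/s per-link speed and the FI server-port count passed in are assumptions for the example, not figures from a specific FI model's data sheet.

```python
# Hypothetical sketch: for each legal IOM-to-FI link count, report how
# many chassis one FI could host and the bandwidth each IOM would get.
LINK_GBPS = 10  # assumed per-link speed

def fabric_options(fi_server_ports: int) -> list:
    """Enumerate chassis capacity versus per-IOM bandwidth."""
    options = []
    for links_per_iom in (1, 2, 4, 8):
        options.append({
            "links_per_iom": links_per_iom,
            # Each link consumes one FI port, capping the chassis count.
            "chassis_per_fi": fi_server_ports // links_per_iom,
            "gbps_per_iom": links_per_iom * LINK_GBPS,
        })
    return options

for opt in fabric_options(fi_server_ports=32):
    print(opt)
```

With an assumed 32 server-facing ports, eight links per IOM give 80 Gb/s to each chassis but leave room for only four chassis, while single links allow thirty-two chassis at 10 Gb/s each.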
Although a larger number of links provides higher bandwidth for individual servers, each link consumes a physical port on the FI, so more links also decrease the total number of UCS chassis that can be connected to the FIs. The following figure shows a direct connection between the FIs and Nexus switches. These links can be aggregated into a PC. Each FI has two fast Ethernet ports. The following figure shows the FI-to-Nexus switch connectivity where links traverse the Nexus switches.
Both these connections are configured via vPC. It is also imperative to have vPC on a physical connection between the two Nexus switches; this is shown as two physical links between Nexus Switch 1 and Nexus Switch 2. Without this connectivity and configuration between the Nexus switches, vPC will not work.
Cisco has already displaced Dell from the number-three position. According to this report, Cisco's entry into the blade server market is causing a re-evaluation among installed blade server bases. Cisco's clustered FI-based design provides a complete solution for converged data connectivity, blade server provisioning, and management, whereas other vendors' offerings achieve the same functionality with increased complexity. Cisco UCS presented the industry with a paradigm shift in blade server and datacenter design and management.
In this chapter, we learned about the UCS solution's integral components and the physical installation of the UCS solution. We learned about the various available options for all the UCS components, including FIs, blade chassis, blade servers, rack-mount servers, and the internal parts of the servers. The detailed Gartner Magic Quadrant report is available at http: The UCS emulator can be used to become familiar with the UCS platform and also to provide demos to prospective clients.
It mimics the real UCS hardware with a configurable hardware inventory, including multiple chassis with multiple blade servers and multiple rack-mount servers. Working with the UCS emulator provides the feel of real hardware and allows the user to set up the UCS hardware virtually, becoming more comfortable with it before configuring the actual UCS hardware.
It is an excellent resource for getting hands-on experience with the UCS platform. The UCS emulator's requirements are so minimal that it can easily be installed in a home lab on a standard laptop or desktop. The UCS Platform Emulator is freely downloadable from the Cisco website. In order to download it, a Cisco Connection Online (CCO) login is required, which can be created by anyone interested in learning and working with Cisco technologies.
It is available on the Cisco developer network website. You can download the latest emulator package from the developer network and install it under various virtualization platforms. Old archives of previous UCS Platform Emulator versions are also available for download on the same page. A configuration created in the UCS Platform Emulator can also be exported as an XML file, which can then be imported into a production system.
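As a rough illustration of working with such an export, the standard library is enough to inspect what a file contains before importing it elsewhere. Note that the element names below are invented for the example; the real UCSPE export schema is not reproduced here.

```python
# Hypothetical sketch: parse an emulator-style XML inventory export and
# count the chassis and blades it describes.
import xml.etree.ElementTree as ET

SAMPLE_EXPORT = """
<inventory>
  <chassis id="1">
    <blade slot="1"/>
    <blade slot="2"/>
  </chassis>
  <chassis id="2">
    <blade slot="1"/>
  </chassis>
</inventory>
"""

def summarize(xml_text: str) -> dict:
    """Return a simple count of chassis and blades in the export."""
    root = ET.fromstring(xml_text)
    return {
        "chassis": len(root.findall("chassis")),
        # ".//blade" matches blade elements at any depth under the root.
        "blades": len(root.findall(".//blade")),
    }

print(summarize(SAMPLE_EXPORT))
```

A check like this is the kind of sanity pass that is useful before pushing a saved configuration toward a production system.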
This is extremely helpful for duplicating a production UCS system configuration for troubleshooting, testing, and development purposes.

System requirements

The minimum system requirements for the installation of the UCS emulator are modest. For a basic home lab environment, VMware Player is an excellent choice.
Users interested in advanced virtualization features such as snapshots may download VMware Workstation instead. Registration on the VMware website is required to download VMware Workstation, and it is not free. VMware Player can be installed on a desktop or laptop even with lower specifications, as given in the minimum system requirements for the UCS Platform Emulator. New laptops and desktops generally have more CPU speed than the minimum requirements.
Hard disk storage is very cheap, and new systems usually have it in abundance; system memory is usually the major consideration. VMware Player installation is a typical Microsoft Windows click-next installation. It is recommended that you close all running programs and save your data before installing VMware Player, as the system may require a reboot. VMware Player can be downloaded at https: A simple Google search for "UCS emulator download" will take you right to the correct section of the Cisco developer network website, where you will be required to enter your Cisco credentials to download the UCSPE.
1. Download the emulator.
2. Extract the downloaded file to an appropriate folder on the local system.
3. Go to the folder where the extracted files are stored, select the VMX (VMware virtual machine configuration) file, and click on Open, as highlighted in the following screenshot.
4. Once the VM is shown in the VMware Player inventory, click on Play virtual machine, as highlighted in the following screenshot.

Alternatively, the emulator can be imported as an OVA:

1. Download the OVA file.
2. Select an appropriate folder on the local system and click on Import, as shown in the following screenshot.
3. VMware Player will show the OVA import progress, which may take a few minutes, as shown in the following screenshot.
4. Once the import is complete, the VM can be played from VMware Player by clicking on the Play virtual machine button.

VMware Workstation provides some extra features, such as snapshots, that are not available in VMware Player. The most widely supported JRE version is 1. For web-based access, type the management IP into the browser as it appears in the VM console. Take a look at the following screenshot for more details. The web interface is divided into two main panes.
On the left is the Navigation pane and on the right is the main Work pane. The Navigation pane has tabs, as shown in the preceding screenshot, which are explained in the following table. CLI access provides a menu-driven console interface for making the previously mentioned changes. The following table summarizes the purpose of the Navigation pane menu tabs. For all system changes requiring a reboot, for example a factory reset, the default action selected is No; this should be changed to Yes in order to perform the action, otherwise the task will not be performed.
The following screenshot shows the Factory Reset configuration change and the four steps required to complete this task. In this scenario, the IP address can be assigned by a DHCP server running on the network, or it can be assigned manually. Perform the following steps to assign an IP manually through the console:

1. For Change settings, select y.
2. Add the IP, subnet mask, and gateway at the next prompts.
The interface will reinitialize with the new IP settings. This IP, username, and password are used for accessing the VM, as shown in the following screenshot. No network connectivity from the Fabric Interconnects to any northbound switch is possible.
UCSPE is initially configured with one blade chassis containing six blades, and two rack-mount servers. The following menu is used for hardware inventory control. The following table describes the purpose of each of its icons:

- Adds a new chassis: adds a new chassis to the hardware inventory.
- Loads a saved configuration: loads a previously stored hardware inventory.
- Imports an XML file: imports the hardware inventory from an XML file on the local system.
- Imports equipment from a live Cisco UCS system.
- Restarts the emulator with this hardware setup: restarts the emulator to enable the new hardware inventory.
- Saves configurations: saves the inventory configuration on the local system, from where it is available via the "Loads a saved configuration" icon.
- Exports the configuration as an XML file: exports the current configuration as an XML file, which can be saved when generated as an onscreen file.
- Validates the present configuration: a report is generated showing the configured hardware.

The Stash area serves as a virtual staging area where servers can be configured before being deployed to the chassis (B-series blades) or connected to Fabric Extenders, or FEXs (C-series rack-mounts).
The Stash area is shown in the following screenshot. Both blade and rack-mount servers can be added to and removed from the hardware inventory using the Stash area. This is accomplished by simply dragging and dropping servers and server components onto it. Components can also be dragged and dropped directly onto individual servers and chassis for addition and removal while the server is in the chassis (blade servers).
In order to modify an existing blade server's hardware configuration, it is recommended to eject the server to the Stash area and make the changes there. A server removed to the Stash area preserves its slot identity. Drag-and-drop from the hardware inventory to the Stash area is shown in the following screenshot.

The following icons are used in the Start-up Inventory on the work pane of the Stash area:

- Collapses all the items
- Expands all the items
- Empties hardware from the Stash area

The following icons are used for blade chassis in the Start-up Inventory work pane:

- Collapses all the items
- Expands all the items
- Duplicates the chassis
- Disconnects the chassis from the Fabric Interconnect
- Removes the chassis

Adding a new chassis with blade servers

A new chassis with blade servers can be added in several different ways.
The easiest way is to duplicate the current chassis, which creates an exact replica including the blade servers. Blade servers can then be dragged and dropped onto the chassis using the Stash area.
The other method is to add the chassis and blade servers manually.

Adding an empty chassis:

1. Click on the icon for adding a new chassis and provide the chassis ID and chassis name. An empty chassis will be added to the inventory.
2. Add chassis fans by dragging them from the hardware inventory catalog area at the bottom of the page.
3. Add chassis PSUs by dragging them from the hardware inventory catalog area at the bottom of the page.
A key point to consider is that a new blade server does not have any components. It is therefore recommended to drag a new server to the Stash area, add server components such as CPU and RAM, and then move the server to the chassis, as explained in the following steps:

1. Click on the Blades tab at the bottom of the page.
2. Click and hold the mouse's left button, and drag the desired blade server to the Stash area.
3. Repeat the previous step for the required server components.
4. Once the server configuration is complete, drag it to the blade chassis.
Repeat the same procedure for all new blade servers.

Configuring and adding a rack-mount server

Rack-mount servers can be added directly to the appropriate rack-mount server area or to the Stash area. It is recommended to configure the rack-mount server in the Stash area and then move it into place. The FEX series is automatically included in the rack-mount server area. The following steps explain this procedure in detail:

1. Click on the Rack Servers tab at the bottom of the page.
2. Click and hold the mouse's left button and drag the desired rack-mount server to the Stash area.
3. Once the server configuration is complete, drag it to the New Server area and provide an ID for the server.
4. Repeat the same procedure for all new rack-mount servers.
Modifying server components

In order to remove a blade server from the chassis, click on the Eject Server icon for that server. The ejected server will be moved to the Stash area, as shown in the following screenshot. In order to remove rack servers, click on the Delete Server icon for the server to be deleted; the deleted server will not be moved to the Stash area, as shown in the following screenshot. Adding server components can be achieved by dragging and dropping components directly onto the server.
It is recommended to first move the server to the Stash area to add or remove components. Once the required changes to the server inventory are done, the inventory can be saved for future use. The following screenshot shows the services in progress. It is recommended to use Mozilla-compatible browsers such as Firefox or Chrome. Java is also required, and JRE 1. In Microsoft Windows, you can check your version of Java from the Java icon in the Control Panel.
The following steps need to be carried out to launch UCSM using the platform emulator:

1. At the security prompt, accept the self-signed certificate, as shown in the following screenshot.
2. In the pop-up Login dialog box, type config for the User Name, type config for the Password, and click on Login, as shown in the following screenshot.
3. Allow the JRE security-related prompts to trust the application and allow it to run.