Installation Steps For Virtual Wire Mode Evaluation Forms

ES — For management of SonicWall Email Security appliances. The policy panels are used to configure SonicWall appliances.

From these pages, you can apply settings to all SonicWall appliances being managed by SonicWall GMS, all SonicWall appliances within a group, or individual SonicWall appliances. Introduction to Firewall Policies To open the Policies panel, click the Firewall tab at the top of the SonicWall GMS UI and then click Policies >System >Status. The Policies panel for the selected SonicWall appliance appears: System This covers a variety of SonicWall firewall appliance controls for managing system status information, registering the SonicWall firewall appliance, activating and managing SonicWall Security Services licenses, configuring SonicWall firewall appliance local and remote management options, managing firmware versions and preferences, and using the included diagnostic tools for troubleshooting. It also describes how to use GMS to configure general System Policy settings on managed SonicWall appliances. The following sections describe how to configure the system settings.

In a virtual wire (vwire) deployment, the firewall is installed transparently in the network (see figure below). This deployment mode is typically used when no switching or routing is needed or desired. A vwire deployment allows the firewall to be installed in any network environment without requiring any configuration changes.

Licensed Nodes (Unit-level view only)—Provides a Node License Status table listing the number of nodes your SonicWall security appliance is licensed to have connected at any one time, how many nodes are currently connected, and how many nodes you have in your Node license Exclusion List. Network This covers configuring the SonicWall firewall appliance for your network environment.

This describes how to configure network settings for SonicWall appliances. It is divided into sections for SonicWall security appliances running SonicOS Enhanced and SonicOS Standard. NOTE: For information on configuring wireless WAN (WWAN) settings, see. This describes how to configure the dialup settings for SonicWall SmartPath (SP) and SmartPath ISDN (SPi) appliances.

SonicWall SP appliances have a WAN Failover feature that enables automatic use of a built-in modem to establish Internet connectivity when the primary broadband connection becomes unavailable. This is ideal when the SonicWall appliance must remain connected to the Internet, regardless of network speed. This contains the following. WGS This describes how to configure Wireless Guest Services (WGS) enabled appliances running SonicOS Standard. For appliances running SonicOS Standard, these configuration options are available at the unit level. Wireless Guest Services allows the administrator to configure wireless access points for guest access.

Wireless Guest Services is configured with optional custom login pages, user accounts and is compatible with several different authentication methods including those which require external authentication. Firewall This describes how to configure Access Rules and App Control policies for SonicWall firewalls from the GMS management interface. This includes the following sections. Capture ATP Capture Advanced Threat Protection (ATP) is sold as an add-on security service to the firewall, similar to Gateway Anti-Virus (GAV). Capture ATP helps a firewall identify whether a file is malicious or not by transmitting the file to the cloud where the SonicWall Capture ATP service analyzes the file to determine if it contains a virus or other malicious elements.

Capture ATP then sends the results to the firewall. This is done in real time while the file is being processed by the firewall. This contains the following. Anti-Spam This provides a quick, efficient, and effective way to add anti-spam, anti-phishing, and anti-virus capabilities to your SonicWall firewall appliance.

There are two primary ways inbound messages are analyzed by the Anti-Spam feature - Advanced IP Reputation Management and Cloud-based Advanced Content Management. IP Address Reputation uses the GRID Network to identify the IP addresses of known spammers, and reject any mail from those senders without even allowing a connection. GRID Network Sender IP Reputation Management checks the IP address of incoming connecting requests against a series of lists and statistics to ensure that the connection has a probability of delivering valuable email. The lists are compiled using the collaborative intelligence of the SonicWall GRID Network.

Known spammers are prevented from connecting to the SonicWall firewall appliance, and their junk email payloads never consume system resources on the targeted systems. This includes the following. Web Filters SonicWall Content Security Manager (CSM) CF provides appliance-based Internet filtering that enhances security and employee productivity, optimizes network utilization, and mitigates legal liabilities by managing access to objectionable and unproductive Web content. This provides configuration tasks for deploying these services. High Availability This describes how to use GMS to configure High Availability that allows the administrator to specify a primary and secondary SonicWall appliance. In the case that the connection to the primary device fails, connectivity will transfer to the backup device. In addition, SonicWall GMS can utilize the same device pairing technology to implement different forms of load balancing.

Load balancing helps regulate the flow of network traffic by splitting that traffic between primary and secondary SonicWall devices. This includes the following.

Security Services This includes an overview of available SonicWall Security Services as well as instructions for activating the service, including FREE trials. These subscription-based services include SonicWall Gateway Anti-Virus, SonicWall Intrusion Prevention Service, SonicWall Content Filtering Service, SonicWall Client Anti-Virus, as well as other services. SonicWall firewall appliances offer several services for protecting networks against viruses and attacks. This provides concept overviews and configuration tasks for deploying these services. This contains the following.

NOTE: This feature is only available for SonicWall security appliances running SonicOS 6.1 and higher firmware. Log This covers managing the SonicWall firewall appliance’s logging, alerting, and reporting features. The SonicWall firewall appliance’s logging features provide a comprehensive set of log categories for monitoring security and network activities. This describes how to use GMS to configure where the SonicWall appliance(s) send their logs, how often the logs are sent, and what information is included.

This includes the following. The Events >Current Alerts screen displays all active alerts for this appliance.

Introduction to Email Security Policies After a SonicWall Email Security appliance has been added to SonicWall GMS, the unit can be managed through the ES Policies panel. System The System >Status window displays both general deployment status and individual appliance status for Email Security appliances. The System >Tools section provides options to force your SonicWall ES appliance to synchronize its license and subscription information with MySonicWall.com immediately. The System >Info screen allows you to edit Email Security appliance information on a global or unit level. Register/Upgrades The Register/Upgrades >Register ESA screen provides the ability to register ESA appliances with your mysonicwall.com account.

Viewing System Status The System Status page provides a comprehensive collection of information to help you manage your SonicWall security appliances and SonicWall Security Services licenses. In the global view mode, it provides a summary of all of the devices that are managed by the SonicWall GMS, including the number of appliances, whether the appliances are up or down, and the number of security services subscriptions. To view a summary of all devices managed by the GMS, click the Change View icon at the top left and select GlobalView. Expand the System tree in the middle panel, and click on Status.

The Status page displays. At the individual appliance level, the Status page provides more details such as the serial number, firmware version, and information on management, reporting, and security service subscriptions.

To view a summary of the status of an individual appliance, select the appliance in the left pane, and then click System >Status in the navigation pane. The Status page displays.

If tasks are pending for the selected unit, GMS provides a hyperlink that takes the user to the Tasks Screen for that unit. Also in System >Status, GMS displays the Last Log Entry for the unit with a hyperlink that takes the user to the unit Logs screen. The links are only provided if the user actually has permissions to access those screens on the Console tab. In the Subscription section header, GMS provides a click here for details link that displays your current subscription details on the Register/Upgrades >Search screen. The search parameters are pre-populated for retrieving the subscription services that are currently active on the appliance(s) and the search is executed and the results are sorted by Expiry Date for your convenience. This page provides a PDF icon that you can click to get a PDF file containing the same content as the Web page. At the bottom of the status screen, GMS provides a way to retrieve dynamic information about the selected appliance, and also provides a link to the GMS Getting Started Guide.

You can click the Fetch Information link to view dynamic information such as the modem speed and active profile used (only for dial-up appliances). You can retrieve this information by clicking Fetch Information at the global, group, or unit level.

The actual results, however, are displayed only at the unit level. To view the SonicWall GMS Getting Started Guide, click Open Getting Started Instructions In New Window. Configuring Administrator Settings System >Administrator The System >Administration page provides settings for the configuration of the SonicWall Security Appliance for secure and remote management. The Administrator page configures administrator settings for the SonicWall appliance. These settings affect both GMS and other administrators.

To change administrator settings on one or more SonicWall appliances, complete the following steps. NOTE: Not all UI elements have Tooltips. If a Tooltip does not display after hovering your mouse over an element for a couple of seconds, you can safely conclude that it does not have an associated Tooltip.

When applicable, Tooltips display the minimum, maximum, and default values for form entries. These entries are generated directly from the GMS firmware, so the values will be correct for the specific platform and firmware combination you are using. Tooltips are enabled by default. To disable Tooltips, clear Enable Tooltip. You can configure the duration of time before Tooltips display. OCSP check interval 1~72 (in hours) – Enter the interval between OCSP checks, in hours. The minimum interval is 1 hour, the maximum is 72 hours, and the default is 24 hours.

Using the Client Certificate Check If you use the client certificate check without a CAC, you must manually import the client certificate into the browser. If you use the Client Certificate Check with a CAC, the client certificate is automatically installed on the browser by middleware. When you begin a management session through HTTPS, the certificate selection window is displayed asking you to confirm the certificate. After you select the client certificate from the drop-down menu, the HTTPS/SSL connection is resumed, and the SonicWall security appliance checks the Client Certificate Issuer to verify that the client certificate is signed by the CA. If a match is found, the administrator login page is displayed. If no match is found, the browser displays a standard browser connection fail message, such as "cannot display web page!" If OCSP is enabled, before the administrator login page is displayed, the browser performs an OCSP check and displays the following message while it is checking.

Client Certificate OCSP Checking. If a match is found, the administrator login page is displayed, and you can use your administrator credentials to continue managing the SonicWall security appliance. If no match is found, the browser displays the following message: OCSP Checking fail! Please contact system administrator!
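
The sequence of checks described above can be summarized in a short sketch. This is purely illustrative Python, not SonicOS code; the function and field names (client_certificate_login, ocsp_status_lookup, and so on) are hypothetical.

```python
# Hypothetical sketch of the login-time checks described above; the function
# names and data structures are illustrative, not part of the SonicOS API.

def client_certificate_login(cert, trusted_issuers, ocsp_enabled, ocsp_status_lookup):
    """Return the page the browser should receive after presenting a client cert."""
    # 1. Verify the client certificate was signed by a configured CA.
    if cert["issuer"] not in trusted_issuers:
        return "connection failed"          # browser shows its standard error page

    # 2. Optionally confirm the certificate has not been revoked via OCSP.
    if ocsp_enabled:
        status = ocsp_status_lookup(cert["serial"])   # queries the OCSP responder
        if status != "good":
            return "OCSP Checking fail! Please contact system administrator!"

    # 3. All checks passed: present the administrator login page.
    return "administrator login page"


if __name__ == "__main__":
    issuers = {"Example Corp Root CA"}
    cert = {"issuer": "Example Corp Root CA", "serial": "0x1A2B"}
    print(client_certificate_login(cert, issuers, True, lambda serial: "good"))
```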

Troubleshooting User Lock Out When using the client certificate feature, these situations can lock the user out of the SonicWall security appliance. The Download URL section provides fields for specifying the URL address of a site for downloading the SonicPoint images. SonicOS Enhanced 5.0 and higher does not contain an image of the SonicPoint firmware. If your SonicWall appliance has Internet connectivity, it will automatically download the correct version of the SonicPoint image from the SonicWall server when you connect a SonicPoint device. If your SonicWall appliance does not have Internet access, or has access only through a proxy server, you must manually specify a URL for the SonicPoint firmware.

You do not need to include the prefix, but you do need to include the filename at the end of the URL. The filename should have a .bin extension. NOTE: To avoid conflicts when route policies are deleted and re-created, updating this option (which creates a management interface address object and configures a route policy) causes a system reboot. This management interface provides a trusted interface to the management appliance. Network connections to this interface are very limited. If the NTP, DNS, and SYSLOG servers are configured in the MGMT subnet, the appliance uses the MGMT IP as the source IP and creates the MGMT address object and route policies automatically. All traffic from the management interface is routed by this policy.

Created routes display on the Network >Routing page. The MGMT address object and route policies are created or updated for the IPv4 management IP. Because the IPv6 management IP address object is created by default, this feature does not apply to IPv6 management IP address object creation. If you have configured security associations on the appliance, the Security Association Information section displays at the bottom of the Management page. Enter the SA keys in the Encryption Key and Authentication Key fields and click Change Only SA Keys. One-Touch Configuration Overrides The One-Touch Configuration Overrides feature is configured on the System >Management page. It can be thought of as a quick tune-up for your SonicWall network security appliance’s security settings.

With a single click, One-Touch Configuration Override applies over sixty configuration settings to implement SonicWall’s recommended best practices. These settings ensure that your appliance is taking advantage of SonicWall’s security features. NOTE: NDPP is a part of Common Criteria (CC) certification. However, NDPP in GMS is not currently certified. The security objectives for a device that claims compliance to a Protection Profile are defined as follows: Compliant TOEs (Targets Of Evaluation) will provide security functionality that address threats to the TOE and implement policies that are imposed by law or regulation. The security functionality provided includes protected communications to and between elements of the TOE; administrative access to the TOE and its configuration capabilities; system monitoring for detection of security relevant events; control of resource availability; and the ability to verify the source of updates to the TOE. You enable NDPP by selecting Enable NDPP Mode on the System >Settings page.

Once you do this, a popup message displays with the NDPP mode setting compliance checklist. The checklist displays every setting in your current GMS configuration that violates NDPP compliance so that you can change these settings.

You need to navigate around the GMS management interface to make the changes. The checklist for an appliance with factory default settings is shown in the following procedure. To enable NDPP and see a list of which of your current configurations are not allowed or are not present, complete the following steps. SCEP - Manage certificates using the Simple Certificate Enrollment Protocol (SCEP) standard About Certificates A digital certificate is an electronic means to verify identity by using a trusted third-party known as a Certificate Authority (CA). SonicWall now supports third-party certificates in addition to the existing Authentication Service. SonicWall security appliances interoperate with any X.509v3-compliant provider of Certificates. However, SonicWall security appliances have been tested with the following vendors of Certificate Authority Certificates.

A search function is available for NTP Servers. Select your search criteria in the NTP Server Search section, then click Search. A list of servers that match your criteria will display. From here you can edit the server settings or delete unwanted servers from the list.

Configuring Schedules You can configure schedule groups on the Policies panel, in System >Schedules. Schedule Groups are groups of schedules to which you can apply firewall rules. For example, you might want to block access to auction sites during business hours, but allow employees to access the sites after hours.

You can apply rules to specific schedule times or all schedules within a Schedule Group. For example, you might create an Engineering Work Hours group that runs from 11:00 AM to 9:00 PM, Monday through Friday and 12:00 PM to 5:00 PM, Saturday and Sunday.
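
As an illustration of the Engineering Work Hours example, the following hedged Python sketch models a schedule group as a set of day-and-time windows and tests whether a given moment falls inside it. The names and times are taken from the example above; nothing here is GMS code.

```python
# A minimal sketch (not GMS code) of the "Engineering Work Hours" example above:
# a schedule group is a set of (days, start, end) entries, and a rule applies
# whenever the current time falls inside any entry.
from datetime import datetime, time

ENGINEERING_WORK_HOURS = [
    # (days of week: Mon=0 .. Sun=6, start, end)
    ({0, 1, 2, 3, 4}, time(11, 0), time(21, 0)),   # Mon-Fri 11:00 AM - 9:00 PM
    ({5, 6},          time(12, 0), time(17, 0)),   # Sat-Sun 12:00 PM - 5:00 PM
]

def schedule_group_active(group, when: datetime) -> bool:
    """True if 'when' falls inside any schedule in the group."""
    return any(when.weekday() in days and start <= when.time() < end
               for days, start, end in group)

print(schedule_group_active(ENGINEERING_WORK_HOURS, datetime(2024, 5, 6, 12, 30)))  # Monday noon -> True
print(schedule_group_active(ENGINEERING_WORK_HOURS, datetime(2024, 5, 6, 22, 0)))   # Monday night -> False
```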

After it is configured, you can apply specific firewall rules to the entire Engineering Work Hours Schedule Group or only to the weekday schedule. To create a Schedule Group, complete the following steps. If the user selects to update the target parent node and all unit nodes, a “Modify Task Description and Schedule” panel opens in place of the Preview panel (this panel does not appear if the user selects “Update only target parent node”). If the “Modify Task Description and Schedule” panel opens, the user can edit the task description in the “Description” field. They might also adjust the schedule for inheritance, or continue with the default scheduling. If the user chooses to edit the timing by clicking on the arrow next to “Schedule,” a calendar expands allowing the user to click on a radio button for “Immediate” execution, or to select an alternate day and time for inheritance to occur.

After the user has completed any edits, they select either “Accept” or “Cancel” to execute or cancel the scheduled inheritance, respectively. After the inheritance operation begins, a progress bar appears, along with text stating the operation might take a few minutes, depending on the volume of data to be inherited.

After the inheritance operation is complete, the desired settings from the unit or group node should now be updated and reflected in the parent node’s settings, as well as in the settings of all other units, if selected. NOTE: For the Access/Services and Access/Rules pages, by default, inheriting group settings overwrites the values at the unit level with the group values. If you wish for SonicWall GMS to append the group settings to the values at the unit level, you need to enable the Append Group Settings option on the General/GMS Settings page on the Console tab. For more information on inheritance, refer to. Synchronizing Appliances If a change is made to the SonicWall appliance through any means other than through GMS, GMS is notified of the change through the syslog data stream. You can configure an alert through the Granular Event Management framework to send email notification when a local administrator makes changes to a SonicWall appliance through the local user interface rather than through GMS. After the syslog notification is received, GMS schedules a task to synchronize its database with the local change.

After the task successfully executes, the current configuration (prefs) file is read from the SonicWall appliance and loaded into the database. Auto-synchronization automatically occurs whenever GMS receives a local change notification status syslog message from a SonicWall appliance. You can also force an auto-synchronization at any time for a SonicWall appliance or a group of SonicWall appliances. To do this, complete the following steps. NOTE: The auto-synchronization feature can be disabled on the Console/Management Settings screen by unchecking Enable Auto Synchronization. Synchronizing with MySonicWall.com SonicWall appliances check their licenses/subscriptions with MySonicWall.com once every 24 hours. Using Synchronize with mysonicwall.com Now, a user can have an appliance synchronize this information with mysonicwall.com without waiting for the 24-hour schedule.

To force the SonicWall to synchronize with mysonicwall.com now, complete the following steps. To synchronize the selected SonicWall appliance(s), click Synchronize with mysonicwall.com Now. GMS schedules a task to synchronize the selected SonicWall appliances’ license information into GMS. Manually Uploading Signature Updates For SonicWall appliances that do not have direct access to the Internet (for example, appliances in high-security environments) you can manually upload updates to security service signatures.

To instruct GMS to download updates to security service signatures, complete the following steps. After entering the street address, city, state, zip code, and country appliance contact information, click Locate Geocode. This populates the GeoLocation field with the SonicWall appliance latitude and longitude coordinates. Similarly, you can enter the latitude or longitude coordinates, and click Locate Address to populate the address information fields. The location information enables your SonicWall appliance to display on the Dashboard Geographic Map. For more information on using the Dashboard Geographic Map to drag and drop the location of your unit, refer to.

To reset all screen settings and start over, click Reset. Configuring System Settings GMS enables you to save SonicWall appliance settings to the GMS database so that they can be used for restoration purposes. GMS can automatically take backups of the appliance configuration files on a regular schedule and store them in the database. The schedule is configured with the Automatically save option on the Console >Management >GMS Settings screen. Here you can specify that a backup should never be taken, or that backups should be taken on a daily or weekly schedule.

If the schedules are set for daily or weekly, then the backups are done for all appliances for which Enable Prefs File Backup is selected in this screen. To purge older backups, you can specify how many of the latest prefs files should be stored in the database. The listbox here displays all the Prefs files backed up, along with the firmware version. In addition to automatic backups, you can manually force a Prefs backup by selecting Store settings. To save or apply SonicWall appliance settings, complete the following steps. Tap Mode—Provides the same visibility as Inspect Mode, but differs from the latter in that it ingests a mirrored packet stream via a single switch port on the SonicWall security appliance, eliminating the need for physically intermediated insertion. Tap Mode is designed for use in environments employing network taps, smart taps, port mirrors, or SPAN ports to deliver packets to external devices for inspection or collection.

Like all other forms of Wire Mode, Tap Mode can operate on multiple concurrent port instances, supporting discrete streams from multiple taps. The following describes the basic interfaces for a SonicWall appliance.

The WAN interface can use a static or dynamic IP address and can connect to the Internet through Transmission Control Protocol (TCP), Point-to-Point Protocol over Ethernet (PPPoE), Level 2 Tunneling Protocol (L2TP), or Point-to-Point Tunneling Protocol (PPTP). A SonicWall appliance might have one, many, or no optional interfaces.

Optional interfaces can be configured for LAN, WAN, DMZ, WLAN, or Multicast connections, or they can be disabled. Interfaces Virtual Interfaces (VLAN) On the SonicWall NSA Series and SonicWall PRO 2040/3060/4060/4100/5060 security appliances, virtual Interfaces are sub-interfaces assigned to a physical interface. Virtual interfaces allow you to have more than one interface on one physical connection. Virtual interfaces provide many of the same features as physical interfaces, including Zone assignment, DHCP Server, and NAT and Access Rule controls. Selecting Layer 2 Bridged mode is not possible for a VLAN interface.

VLAN support on SonicOS Enhanced is achieved by means of sub-interfaces, which are logical interfaces nested beneath a physical interface. Every unique VLAN ID requires its own sub-interface. For reasons of security and control, SonicOS does not participate in any VLAN trunking protocols, but instead requires that each VLAN that is to be supported be configured and assigned appropriate security characteristics. VLAN Interfaces SonicOS Enhanced 4.0 and higher can apply bandwidth management to both egress (outbound) and ingress (inbound) traffic on the WAN interface. Outbound bandwidth management is done using Class Based Queuing.

Inbound Bandwidth Management is done by implementing an ACK delay algorithm that uses TCP’s intrinsic behavior to control the traffic. Class Based Queuing (CBQ) provides guaranteed and maximum bandwidth Quality of Service (QoS) for the SonicWall security appliance.

Every packet destined to the WAN interface is queued in the corresponding priority queue. The scheduler then dequeues the packets and transmits them on the link depending on the guaranteed bandwidth for the flow and the available link bandwidth. Configuring Network Settings in SonicOS Enhanced The following sections describe how to configure network settings in SonicOS Enhanced. Select Disable stateful-inspection on this bridge-pair to enable asymmetric routing on this interface. Layer 2 Bridge Bypass Relay Control The Engage physical bypass on malfunction option enables Layer 2 Bridge Bypass Relay Control, also known as “Fail to Wire.” The bypass relay option provides the user the choice of avoiding disruption of network traffic by bypassing the firewall in the event of a malfunction. The bypass relay is closed for any unexpected anomaly (power failure, watchdog exception, fallback to safe-mode).

NOTE: The Wire Mode feature is supported only on NSA and SuperMassive platforms. Wire Mode 2.0 can be configured on any zone (except wireless zones). Wire Mode is a simplified form of Layer 2 Bridge Mode, and is configured as a pair of interfaces.

In Wire Mode, the destination zone is the Paired Interface Zone. Access rules are applied to the Wire Mode pair based on the direction of traffic between the source Zone and its Paired Interface Zone. For example, if the source Zone is WAN and the Paired Interface Zone is LAN, then WAN to LAN and LAN to WAN rules are applied, depending on the direction of the traffic. In Wire Mode, administrators can enable Link State Propagation, which propagates the link status of an interface to its paired interface. If an interface goes down, its paired interface is forced down to mirror the link status of the first interface. Both interfaces in a Wired Mode pair always have the same link status. In Wire Mode, administrators can Disable Stateful Inspection.

When Disable Stateful Inspection is selected, Stateful Packet Inspection (SPI) is turned off and new connections can be established without enforcing a 3-way TCP handshake. Disable Stateful Inspection must be selected if asymmetrical routes are deployed. When the Bypass when SonicOS is restarting or down option is selected, and the Wire Mode Type is set to Secure, traffic continues to flow even when the SonicWall Security Appliance is rebooting or is down. The Bypass when SonicOS is restarting or down option is always enabled and is not editable when Disable Stateful Inspection is selected.
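
The Link State Propagation behavior described above can be pictured with a small sketch. This is an illustrative Python model only; the WirePair class and interface names are hypothetical, not part of SonicOS.

```python
# Hypothetical illustration of Link State Propagation on a Wire Mode pair:
# when one interface changes link state, its partner is forced to match.
class WirePair:
    def __init__(self, a: str, b: str):
        self.links = {a: True, b: True}   # True = link up
        self.peer = {a: b, b: a}

    def set_link(self, iface: str, up: bool):
        self.links[iface] = up
        self.links[self.peer[iface]] = up   # propagate state to the paired interface

pair = WirePair("X2", "X3")
pair.set_link("X2", False)
print(pair.links)   # {'X2': False, 'X3': False} - both sides mirror the failure
```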

To configure Wire Mode 2.0, complete the following steps. Schedule—Select the schedule for when the interface is enabled. The default value is Always on. The available options can be customized in the System >Schedule page. The default choices are: Always On; Work Hours or M-T-W-TH-F 08:00-17:00 (these two options are the same schedules); M-T-W-TH-F 00:00-08:00; After Hours or M-T-W-TH-F 17:00-24:00 (these two options are the same schedules); Weekend Hours or SA-SU 00:00-24:00 (these two options are the same schedules); AppFlow Report Hours or SU-M-T-W-TH-F-S 00:00-24:00; TSR Report Hours. None (default)—Disables BWM.

GMS can apply bandwidth management to both egress (outbound) and ingress (inbound) traffic on the interfaces in the WAN zone. Outbound bandwidth management is done using Class Based Queuing. Inbound Bandwidth Management is done by implementing an ACK delay algorithm that uses TCP’s intrinsic behavior to control the traffic. Class Based Queuing (CBQ) provides guaranteed and maximum bandwidth Quality of Service (QoS) for the SonicWall security appliance. Every packet destined to the WAN interface is queued in the corresponding priority queue. The scheduler then dequeues the packets and transmits them on the link depending on the guaranteed bandwidth for the flow and the available link bandwidth.
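
The following is a rough, illustrative Python sketch of the class-based queuing idea: each traffic class has a guaranteed share of the link, the scheduler serves guarantees in priority order, and leftover capacity is then distributed. The class names, packet sizes, and link speed are made up for the example; this is not how SonicOS implements CBQ internally.

```python
# Illustrative sketch (not SonicOS code) of the class-based queuing idea above:
# each traffic class gets a guaranteed share of the link, and the scheduler
# dequeues from the highest-priority class whose guarantee is not yet exhausted.
from collections import deque

LINK_KBPS = 1000

classes = {
    # name: priority, guaranteed kbps, queue of packet sizes in kilobits
    "voice":   {"priority": 0, "guaranteed": 300, "queue": deque([80, 80, 80])},
    "default": {"priority": 1, "guaranteed": 700, "queue": deque([500, 500, 500])},
}

def schedule_one_interval():
    """Dequeue up to one interval's worth of traffic, honouring guarantees first."""
    sent = {name: 0 for name in classes}
    # Pass 1: guaranteed bandwidth, in priority order.
    for name, cls in sorted(classes.items(), key=lambda kv: kv[1]["priority"]):
        while cls["queue"] and sent[name] + cls["queue"][0] <= cls["guaranteed"]:
            sent[name] += cls["queue"].popleft()
    # Pass 2: any leftover link capacity goes to whatever is still queued.
    leftover = LINK_KBPS - sum(sent.values())
    for name, cls in classes.items():
        while cls["queue"] and cls["queue"][0] <= leftover:
            pkt = cls["queue"].popleft()
            sent[name] += pkt
            leftover -= pkt
    return sent

print(schedule_one_interval())   # kilobits transmitted per class this interval
```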

Balancing the bandwidth allocated to different network traffic and then assigning priorities to traffic improves network performance. Use the Bandwidth Management section of the Edit Interface screen to enable or disable the ingress and egress bandwidth management.

Egress and Ingress available link bandwidth can be used to configure the upstream and downstream connection speeds in kilobits per second. NOTE: The Link Aggregation features are supported only on NSA and SuperMassive platforms. Link Aggregation groups up to four Ethernet interfaces together, forming a single logical link to support greater throughput than a single physical interface could support. This is referred to as a Link Aggregation Group (LAG). This provides the ability to send multi-gigabit traffic between two Ethernet domains.

All ports in an aggregate link must be connected to the same switch. The firewall uses a round-robin algorithm for load balancing traffic across the interfaces in a Link Aggregation Group. Link Aggregation also provides a measure of redundancy, in that if one interface in the LAG goes down, the other interfaces remain connected.
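
A minimal sketch of the round-robin distribution over healthy LAG members might look like the following. This is illustrative Python only; the interface names are hypothetical and the real firewall logic is more involved.

```python
# A small sketch of the round-robin idea described above (illustrative only):
# traffic is spread across the LAG members that are still up, and losing one
# member simply removes it from the rotation.
from itertools import cycle

class LinkAggregationGroup:
    def __init__(self, members):
        self.members = list(members)
        self.up = set(members)
        self._rr = cycle(self.members)

    def set_link(self, iface, is_up):
        (self.up.add if is_up else self.up.discard)(iface)

    def next_interface(self):
        """Pick the next healthy member in round-robin order."""
        for _ in range(len(self.members)):
            iface = next(self._rr)
            if iface in self.up:
                return iface
        raise RuntimeError("all LAG members are down")

lag = LinkAggregationGroup(["X2", "X3", "X4", "X5"])
print([lag.next_interface() for _ in range(4)])   # X2, X3, X4, X5
lag.set_link("X3", False)
print([lag.next_interface() for _ in range(3)])   # X3 is skipped
```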

Link Aggregation is referred to using different terminology by different vendors, including Port Channel, Ether Channel, Trunk, and Port Grouping. Link Aggregation failover SonicWall provides multiple methods for protecting against loss of connectivity in the case of a link failure, including High Availability (HA), Load Balancing Groups (LB Groups), and now Link Aggregation.

If all three of these features are configured on a firewall, the following order of precedence is followed in the case of a link failure: High Availability, Link Aggregation, and then Load Balancing Groups. HA takes precedence over Link Aggregation. Because each link in the LAG carries an equal share of the load, the loss of a link on the Active firewall forces a failover to the Idle firewall (if all of its links remain connected). Physical monitoring needs to be configured only on the primary aggregate port. When Link Aggregation is used with a LB Group, Link Aggregation takes precedence. LB takes over only if all the ports in the aggregate link are down.

Link Aggregation Configuration To configure Link Aggregation, complete the following steps. NOTE: The Port Redundancy features are supported only on NSA and SuperMassive platforms. Port Redundancy provides a simple method for configuring a redundant port for a physical Ethernet port. This is a valuable feature, particularly in high-end deployments, to protect against switch failures being a single point of failure. When the primary interface is active, it processes all traffic to and from the interface. If the primary interface goes down, the secondary interface takes over all outgoing and incoming traffic.

The secondary interface assumes the MAC address of the primary interface and sends the appropriate gratuitous ARP on a failover event. When the primary interface comes up again, it resumes responsibility for all traffic handling duties from the secondary interface. In a typical Port Redundancy configuration, the primary and secondary interfaces are connected to different switches. This provides for a failover path in case the primary switch goes down. Both switches must be on the same Ethernet domain. Port Redundancy can also be configured with both interfaces connected to the same switch.
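
The failover behavior described above, where the secondary port assumes the primary's MAC address and announces it with a gratuitous ARP, can be sketched as follows. This is an illustrative Python model with hypothetical interface names and MAC address; it is not SonicOS code.

```python
# Illustrative sketch of the failover behaviour described above (not SonicOS code):
# the secondary port takes over the primary's MAC address and announces it.
class RedundantPort:
    def __init__(self, primary, secondary, mac):
        self.primary, self.secondary, self.mac = primary, secondary, mac
        self.active = primary

    def link_down(self, iface):
        if iface == self.primary and self.active == self.primary:
            self.active = self.secondary
            self.send_gratuitous_arp(self.secondary)   # update the switches' MAC tables

    def link_up(self, iface):
        if iface == self.primary:
            self.active = self.primary                  # primary resumes traffic handling

    def send_gratuitous_arp(self, iface):
        print(f"gratuitous ARP: {self.mac} is now reachable via {iface}")

pair = RedundantPort("X4", "X5", "00:17:c5:00:00:01")
pair.link_down("X4")
print(pair.active)   # X5 now handles all traffic
```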

Port Redundancy Failover SonicWall provides multiple methods for protecting against loss of connectivity in the case of a link failure, including High Availability (HA), Load Balancing Groups (LB Groups), and now Port Redundancy. If all three of these features are configured on a firewall, the following order of precedence is followed in the case of a link failure: Port Redundancy, High Availability, and then the LB Group. When Port Redundancy is used with HA, Port Redundancy takes precedence. Typically an interface failover causes an HA failover to occur, but if a redundant port is available for that interface, then an interface failover occurs but not an HA failover. If both the primary and secondary redundant ports go down, then an HA failover occurs (assuming the secondary firewall has the corresponding port active). When Port Redundancy is used with a LB Group, Port Redundancy again takes precedence. Any single port (primary or secondary) failures are handled by Port Redundancy just like with HA.

When both the ports are down then LB kicks in and tries to find an alternate interface. Port Redundancy Configuration To configure Port Redundancy, complete the following steps. WAN Failover and Load Balancing WAN Failover enables you to configure one of the user-defined interfaces as a secondary WAN port. The secondary WAN port can be used in a simple “active/passive” setup to allow traffic to be only routed through the secondary WAN port if the Primary WAN port is unavailable. This allows the SonicWall to maintain a persistent connection for WAN port traffic by “failing over” to the secondary WAN port. For a SonicWall appliance with a WWAN interface, such as a TZ 190, you can configure failover using the WWAN interface.

Failover between the Ethernet WAN (the WAN port, OPT port, or both) and the WWAN is supported through the WAN Connection Model setting. This feature also allows you to do simple load balancing for the WAN traffic on the SonicWall. You can select a method of dividing the outbound WAN traffic between the two WAN ports and balance network traffic. Load-balancing is currently only supported on Ethernet WAN interfaces, but not on WWAN interfaces.

The SonicWall can monitor WAN traffic using Physical Monitoring that detects if the link is unplugged or disconnected, or Physical and Logical Monitoring that monitors traffic at a higher level, such as upstream connectivity interruptions. Depending on what you selected from the Type drop-down menu, one of these options displays:
– Basic Failover: Preempt and failback to preferred interfaces when possible—Select to enable rank to determine the order of preemption. Selected by default.
– Spill-over: When bandwidth exceeds BandwidthLimit Kbit/s on PrimaryInterface, new flows will go to the alternate group members in Round Robin manner—Specify the bandwidth for the Primary in the field.

If this value is exceeded, new flows are then sent to alternate group members according to the order listed in the Selected column. The default value is 0.
– Round Robin, Spillover, and Ratio: Use Source and Destination IP Address binding—This option is especially useful when using HTTP/HTTPS redirection or in a similar situation. For example, connection A and connection B need to be on the same WAN interface, the source and destination IP addresses in connection A are the same as those for connection B, but a different service is being used. In this case, source and destination IP address binding is required to keep both connections on the same WAN interface so that the transactions do not fail. This option is not selected by default.
Probe responder.global.sonicwall.com on all interfaces in this group—Enable this check box to automatically set Logical/Probe Monitoring on all interfaces in the Group.
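
The Source and Destination IP Address binding option described above can be illustrated with a short sketch: flows that share a source/destination pair are pinned to whichever WAN interface served the first flow. This is hedged, illustrative Python with made-up interface names and addresses, not the firewall's implementation.

```python
# Minimal sketch of source-and-destination IP address binding for WAN load
# balancing (illustrative only): flows with the same source/destination pair
# are pinned to whichever WAN interface the first flow was assigned.
from itertools import cycle

wan_members = cycle(["X1", "X2"])        # round-robin pool
bindings = {}                            # (src_ip, dst_ip) -> WAN interface

def pick_interface(src_ip: str, dst_ip: str) -> str:
    key = (src_ip, dst_ip)
    if key not in bindings:
        bindings[key] = next(wan_members)
    return bindings[key]

# Connection A (HTTP) and connection B (HTTPS) share src/dst, so they stay together.
print(pick_interface("10.0.0.5", "203.0.113.10"))   # e.g. X1
print(pick_interface("10.0.0.5", "203.0.113.10"))   # same pair -> same interface
print(pick_interface("10.0.0.6", "198.51.100.20"))  # next flow -> X2
```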

When enabled, TCP probe packets are sent to the global SNWL host that responds to SNWL TCP packets, responder.global.sonicwall.com, using a target probe destination address of 204.212.10. When this check box is selected, the rest of the probe configuration enables built-in settings automatically. The same probe will be applied to all four WAN Ethernet interfaces. Configuring Multiple WAN Interfaces The Multiple WAN (MWAN) feature allows the administrator to configure all but one of the appliance's interfaces for WAN network routing (one interface must remain configured for the LAN zone for local administration). All of the WAN interfaces can be probed using the SNWL Global Responder host. Multiple WAN is configured across the following sections of the UI.

Configuring Network Interfaces for Multiple WAN The Network >Interfaces page allows more than two WAN interfaces to be configured for routing. It is possible to configure WAN interfaces in the Network Interfaces page, but not include them in the Failover & LB. Only the Primary WAN Ethernet Interface is required to be part of the LB group whenever LB has been enabled. Any WAN interface that does not belong to the LB group is not included in the LB function, but does normal WAN routing functions.

NOTE: A virtual WAN interface might belong to the LB group. However, prior to using within the LB group, ensure that the virtual WAN network is fully routable like that of a physical WAN. Routing the Default & Secondary Default Gateways for Multiple WAN Because the gateway address objects previously associated with the Primary WAN and Secondary WAN are now deprecated, user-configured Static Routes need to be re-created in order to use the correct gateway address objects associated with the WAN interfaces.

This must be configured manually as part of the firmware upgrade procedure on the Network >Routing page. The old address object, Default Gateway, corresponds to the default gateway associated with the Primary WAN in the LB group. The Secondary Default Gateway address object corresponds to the default gateway associated with Alternate WAN #1.

NOTE: After re-adding the routes, delete the old ones referring to the Default and Secondary Default Gateways. Configuring DNS for Multiple WAN If DNS name resolution issues are encountered with multiple WAN interfaces, you might need to select the Specify DNS Servers Manually option on the Network >DNS page and set the servers to Public DNS Servers (ICANN or non-ICANN). Depending on your location, some DNS Servers might respond faster than others. Verify that these servers work correctly from your installation prior to using your SonicWall appliance.

Configuring Zones A Zone is a logical grouping of one or more interfaces designed to make management, such as the definition and application of Access Rules, a simpler and more intuitive process than following a strict physical interface scheme. There are four fixed Zone types: Trusted, Untrusted, Public, and Encrypted. Trusted is associated with LAN Zones. These fixed Zone types cannot be modified or deleted. A Zone instance is created from a Zone type and named accordingly, such as Sales, Finance, and so on.

Only the number of interfaces limits the number of Zone instances for Trusted and Untrusted Zone types. The Untrusted Zone type (such as the WAN) is restricted to two Zone instances. The Encrypted Zone type is a special system Zone comprising all VPN traffic and does not have any associated interfaces.

Trusted and Public Zone types offer an option, Interface Trust, to automate the creation of Access Rules to allow traffic to flow between the Interfaces of a Zone instance. For example, if the LAN Zone has interfaces X0, X3, and X5 assigned to it, checking Allow Interface Trust on the LAN Zone creates the necessary Access Rules to allow hosts on these Interfaces to communicate with each other. To add or edit a Zone, complete the following steps. Configure any of the following options: – Enforce Guest Login over HTTPS—Requires guests to use HTTPS instead of HTTP to access the guest services. – Enable inter-guest communication—Allows guests connecting to SonicPoints in this Zone to communicate directly and wirelessly with each other. – Bypass AV Check for Guests—Allows guest traffic to bypass Anti-Virus protection.

– Enable External Guest Authentication—Requires guests connecting from the device or network you select to authenticate before gaining access. This feature, based on Lightweight Hotspot Messaging (LHM) is used for authenticating Hotspot users and providing them parametrically bound network access.

NOTE: Refer to the SonicWall Lightweight Hotspot Messaging tech note available at the SonicWall documentation Web site for complete configuration of the Enable External Guest Authentication feature. – Custom Authentication Page—Redirects you to a custom authentication page when you first connect to the zone. Click Configure to set up the custom authentication page. Enter either a URL to an authentication page or a custom challenge statement in the text field, and click OK.

– Post Authentication Page—Directs you to the page you specify immediately after successful authentication. Enter a URL for the post-authentication page in the field. – Bypass Guest Authentication—Allows the appliance to integrate into environments already using some form of user-level authentication. This feature automates the Guest Services authentication process, allowing you to reach Guest Services resources without requiring authentication. This feature should only be used when unrestricted Guest Services access is desired, or when another device upstream of the appliance is enforcing authentication.

– Redirect SMTP traffic to—Redirects SMTP traffic incoming on this zone to an SMTP server you specify. Select the address object from which to redirect traffic. – Deny Networks—Blocks traffic from the networks you name.

Select the subnet, address group, or IP address from which to block traffic. – Pass Networks—Automatically allows traffic through the zone from the networks you select. – Max Guests—Specifies the maximum number of guest users allowed to connect to the zone. The default is 10. Select WiFiSec Enforcement to require that all traffic that enters into the WLAN Zone interface be either IPsec traffic, WPA traffic, or both. With WiFiSec Enforcement enabled, all non-guest wireless clients connected to SonicPoints attached to an interface belonging to a Zone on which WiFiSec is enforced are required to use the strong security of IPsec. The VPN connection inherent in WiFiSec terminates at the “WLAN GroupVPN”, which you can configure independently of “WAN GroupVPN” or other Zone GroupVPN instances.

If you select both WiFiSec Enforcement, and SMA Enforcement, the Wireless zone allows traffic authenticated by either a SMA or an IPsec VPN. Click the Guest Services tab. You can choose from the following configuration options for Wireless Guest Services: – Enable Wireless Guest Services—Enables guest services on the WLAN zone.

– Enforce Guest Login over HTTPS—Requires guests to use HTTPS instead of HTTP to access the guest services. – Enable inter-guest communication—Allows guests connecting to SonicPoints in this WLAN Zone to communicate directly and wirelessly with each other. – Bypass AV Check for Guests—Allows guest traffic to bypass Anti-Virus protection. – Enable External Guest Authentication—Requires guests connecting from the device or network you select to authenticate before gaining access. This feature, based on Lightweight Hotspot Messaging (LHM) is used for authenticating Hotspot users and providing them parametrically bound network access. NOTE: Refer to the SonicWall Lightweight Hotspot Messaging tech note available at the SonicWall documentation Web site for complete configuration of the Enable External Guest Authentication feature.

– Custom Authentication Page—Redirects you to a custom authentication page when you first connect to a SonicPoint in the WLAN zone. Click Configure to set up the custom authentication page. Enter either a URL to an authentication page or a custom challenge statement in the text field, and click OK. – Post Authentication Page—Directs you to the page you specify immediately after successful authentication. Enter a URL for the post-authentication page in the field.

– Bypass Guest Authentication—Allows a SonicPoint running WGS to integrate into environments already using some form of user-level authentication. This feature automates the WGS authentication process, allowing wireless users to reach WGS resources without requiring authentication.

This feature should only be used when unrestricted WGS access is desired, or when another device upstream of the SonicPoint is enforcing authentication. – Redirect SMTP traffic to—Redirects SMTP traffic incoming on this zone to an SMTP server you specify. Select the address object to redirect traffic to. – Deny Networks—Blocks traffic from the networks you name. Select the subnet, address group, or IP address to block traffic from.

– Pass Networks—Automatically allows traffic through the WLAN zone from the networks you select. – Max Guests—Specifies the maximum number of guest users allowed to connect to the WLAN zone. The default is 10. – Enable Dynamic Address Translation (DAT)—Wireless Guest Services (WGS) provides spur of the moment “hotspot” access to wireless-capable guests and visitors. For easy connectivity, WGS allows wireless users to authenticate and associate, obtain IP settings from the SonicWall appliance Wireless DHCP services, and authenticate using any Web-browser. Without DAT, if a WGS user is not a DHCP client, but instead has static IP settings incompatible with the Wireless WLAN network settings, network connectivity is prevented until the user’s settings change to compatible values. Dynamic Address Translation (DAT) is a form of Network Address Translation (NAT) that allows the SonicWall Wireless to support any IP addressing scheme for WGS users.

For example, the SonicWall Wireless WLAN interface is configured with an address of 172.16.31.1, and one WGS client has a static IP Address of 192.168.0.10 and a default gateway of 192.168.0.1, while another has a static IP address of 10.1.1.10 and a gateway of 10.1.1.1, and DAT enables network communication for both of these clients. When you are finished, click Update. The settings are changed for the selected SonicWall appliance. To clear all screen settings and start over, click Reset.

DNS Rebinding Attack Prevention DNS rebinding is a DNS-based attack on code embedded in web pages. Normally, requests from code embedded in web pages (JavaScript, Java, and Flash) are bound to the website they originate from. DNS rebinding attackers register a domain which is delegated to a DNS server they control. The domains exploit very short TTL parameters to scan the attacked network and do other malicious activities.
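
A minimal sketch of the core defense, assuming a hypothetical local subnet list and the *.sonicwall.com allowed-domain example used below, might look like this in Python: DNS answers that resolve an outside domain to a locally connected or routed address are rejected unless the domain is explicitly allowed.

```python
# Hedged sketch of the core DNS-rebinding defence: answers that map an outside
# domain to a locally connected/routed address are dropped unless the domain is
# on the allowed list (mirrors the "Allowed Domains" option described below).
import fnmatch
import ipaddress

LOCAL_SUBNETS = [ipaddress.ip_network("192.168.168.0/24"), ipaddress.ip_network("10.0.0.0/8")]
ALLOWED_DOMAINS = ["*.sonicwall.com"]   # an FQDN Address Object/Group in the real feature

def dns_answer_allowed(domain: str, answer_ip: str) -> bool:
    ip = ipaddress.ip_address(answer_ip)
    if not any(ip in net for net in LOCAL_SUBNETS):
        return True                                         # public answer: fine
    return any(fnmatch.fnmatch(domain, pat) for pat in ALLOWED_DOMAINS)

print(dns_answer_allowed("attacker.example", "192.168.168.50"))   # False - rebinding attempt
print(dns_answer_allowed("portal.sonicwall.com", "10.1.1.5"))     # True - explicitly allowed
```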

To configure DNS, complete the following steps. (Optional) For the Allowed Domains pull-down menu, select an FQDN Address Object/Group containing allowed domain-names (for example, *.sonicwall.com) for which locally connected/routed subnets should be considered legal responses. Configuring Dynamic DNS Dynamic DNS (DDNS) is a service provided by various companies and organizations that dynamically changes IP addresses to automatically update DNS records without manual intervention. This service allows for network access using domain names rather than IP addresses, even when the target’s IP addresses change. DDNS is supported for IPv6 as well as IPv4.

To configure Dynamic DNS on the SonicWall security appliance, complete these steps. TIP: By default, LAN to WAN has a NAT policy predefined on the firewall.

The Network Address Translation (NAT) engine in SonicOS allows you to define granular NAT policies for your incoming and outgoing traffic. By default, the firewall has a preconfigured NAT policy to allow all systems connected to the X0 interface to perform Many-to-One NAT using the IP address of the X1 interface, and a policy to not perform NAT when traffic crosses between the other interfaces. This section explains how to set up the most common NAT policies. Understanding how to use NAT policies starts with the construction of an IP packet. Every packet contains addressing information that allows the packet to get to its destination, and for the destination to respond to the original requester. The packet contains (among other things) the requester’s IP address, the protocol information of the requester, and the destination’s IP address.

The NAT Policies engine in SonicOS can inspect the relevant portions of the packet and can dynamically rewrite the information in specified fields for incoming, as well as outgoing traffic. You can add up to 512 NAT Policies on a SonicWall Security Appliance running SonicOS, and they can be as granular as you need. It is also possible to create multiple NAT policies for the same object. For instance, you can specify that an internal server use one IP address when accessing Telnet servers, and to use a totally different IP address for all other protocols. Because the NAT engine in SonicOS supports inbound port forwarding, it is possible to hide multiple internal servers off the WAN IP address of the firewall.
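
To make the match-and-rewrite idea concrete, here is a hedged Python sketch of a NAT policy object with Original and Translated fields. The class, field values, and addresses are illustrative assumptions, not the SonicOS NAT engine.

```python
# Illustrative sketch (not the SonicOS engine) of how a NAT policy pairs match
# criteria with rewrite actions for the fields described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

@dataclass
class NatPolicy:
    original_source: str          # "any" or an address
    original_destination: str
    original_service: Optional[int]
    translated_source: str        # "original" keeps the field unchanged
    translated_destination: str

    def matches(self, pkt: Packet) -> bool:
        return (self.original_source in ("any", pkt.src_ip) and
                self.original_destination in ("any", pkt.dst_ip) and
                self.original_service in (None, pkt.dst_port))

    def apply(self, pkt: Packet) -> Packet:
        src = pkt.src_ip if self.translated_source == "original" else self.translated_source
        dst = pkt.dst_ip if self.translated_destination == "original" else self.translated_destination
        return Packet(src, dst, pkt.dst_port)

# Many-to-one outbound NAT: everything from the LAN leaves as the WAN IP.
outbound = NatPolicy("any", "any", None, "203.0.113.1", "original")
pkt = Packet("192.168.168.10", "198.51.100.7", 443)
if outbound.matches(pkt):
    print(outbound.apply(pkt))   # source rewritten to 203.0.113.1, destination unchanged
```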

The more granular the NAT Policy, the more precedence it takes. The table below shows the maximum number of routes and NAT policies allowed for each network security appliance model.

Maximum routes and NAT policies allowed per firewall model:
SM 9600: 3072 routes; 4096 static / 2048 dynamic NAT policies
SM 9400: 3072 routes; 4096 static / 2048 dynamic NAT policies
SM 9200: 3072 routes; 4096 static / 2048 dynamic NAT policies
NSA 6600: 2048 routes; 4096 static / 2048 dynamic NAT policies
NSA 5600: 2048 routes; 4096 static / 2048 dynamic NAT policies
NSA 4600: 1088 routes; 2048 static / 1024 dynamic NAT policies
NSA 3600: 1088 routes; 2048 static / 1024 dynamic NAT policies
NSA 2600: 1088 routes; 2048 static / 1024 dynamic NAT policies
TZ600: 256 routes; 1024 static / 512 dynamic NAT policies
TZ500/TZ500 W: 256 routes; 1024 static / 512 dynamic NAT policies
TZ400/TZ400 W: 256 routes; 1024 static / 512 dynamic NAT policies
TZ300/TZ300 W: 256 routes; 1024 static / 512 dynamic NAT policies
SOHO W: 256 routes; 1024 static / 512 dynamic NAT policies
Topics. About NAT64 Beginning with GMS 8.3, GMS supports the NAT64 feature that enables an IPv6-only client to contact an IPv4-only server through an IPv6-to-IPv4 translation device known as a NAT64 translator. NAT64 provides the ability to access legacy IPv4-only servers from IPv6 networks; a SonicWall with NAT64 is placed as the intermediary router.

As a NAT64 translator, GMS allows an IPv6-only client from any zone to initiate communication to an IPv4-only server with proper route configuration. GMS maps IPv6 addresses to IPv4 addresses so IPv6 traffic changes to IPv4 traffic and vice versa.

IPv6 address pools (represented as Address Objects) and IPv4 address pools are created to allow mapping by translating packet headers between IPv6 and IPv4. The IPv4 addresses of IPv4 hosts are translated to and from IPv6 addresses by using an IPv6 prefix configured in GMS. A DNS64 server enables NAT64: either the IPv6 client must be configured with a DNS64 server, or the DNS server address the IPv6 client gets automatically from the gateway must be a DNS64 server. The DNS64 server of an IPv6-only client creates AAAA (IPv6) records from A (IPv4) records.

GMS does not act as a DNS64 server and does not support High Availability for NAT64. For NAT64 traffic matches, two mixed connection caches are created. Thus, the capacity for NAT64 connection caches is half that for pure IPv4 or IPv6 connections. Pref64::/n The DNS64 server uses Pref64::/n to judge whether an IPv6 address is an IPv4-converted IPv6 address by comparing the first n bits with Pref64::. DNS64 creates IPv4-converted IPv6 addresses by synthesizing Pref64:: with IPv4 address records and sending a DNS response to IPv6-only clients.
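
The Pref64 synthesis can be illustrated with standard IPv6 address arithmetic. The sketch below uses the well-known /96 NAT64 prefix 64:ff9b:: from RFC 6052 as an example; any configured Pref64::/96 works the same way, and this is illustrative Python rather than GMS code.

```python
# Sketch of the Pref64 synthesis described above, using the well-known NAT64
# prefix 64:ff9b::/96 as an example (any configured /96 Pref64 works the same way).
import ipaddress

PREF64 = ipaddress.IPv6Network("64:ff9b::/96")

def ipv4_embedded_ipv6(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the last 32 bits of the Pref64 prefix (RFC 6052, /96 case)."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Address(int(PREF64.network_address) | v4)

def embedded_ipv4(ipv6: str) -> ipaddress.IPv4Address:
    """Recover the IPv4 address from an IPv4-embedded IPv6 address."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(ipv6)) & 0xFFFFFFFF)

print(ipv4_embedded_ipv6("198.51.100.7"))     # 64:ff9b::c633:6407
print(embedded_ipv4("64:ff9b::c633:6407"))    # 198.51.100.7
```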

Pref64::/n defines a source network that can go from an IPv6-only client through NAT64 to an IPv4-only client. In GMS, an Address Object of type Network can be configured with Pref64::/n to represent all IPv6 clients that can do NAT64. For configuring a Pref64::/n Address Object, see Default Pref64 Network Address Object on page 408. The following terms are used in this section:
DNS64—DNS Extensions for Network Address Translation from IPv6 Clients to IPv4 Servers
IPv4-converted IPv6 addresses—IPv6 addresses used to represent IPv4 nodes in an IPv6 network
IPv4-embedded IPv6 addresses—IPv6 addresses in which 32 bits contain an IPv4 address
NAT—Network Address Translation
NAT64—Stateful Network Address and Protocol Translation from IPv6 Clients to IPv4 Servers
NATPT—Network Address Translation - Protocol Translation
PMTUD—Path MTU discovery
XLATs—IP/ICMP translators
NAT Policies Tab The NAT Policies tab allows you to view and manage your NAT Policies. Viewing NAT Policy Entries Topics. All Types Displays all the NAT policies, including Custom Policies and Default Policies. Initially, before you create NAT policies, only the Default Policies display.

Default Policies Displays only Default Policies. Custom Policies Displays only those NAT policies you configure. Filtering the Display You can enter the policy number (the number listed in the # column) in the Search field to display a specific VPN policy. You can also enter alphanumeric search patterns, such as WLAN, X1 IP, or Private, to display only those policies of interest. Displaying Information about Policies Moving your pointer over the Comment icon in the Configure column of NAT Policies table displays the comments entered in the Comments field of the Add NAT Policy dialog for custom policies.

Default policies have a brief description of the type of NAT policy, such as IKE NAT Policy or NAT Management Policy. Moving your pointer over the Statistics icon in the Configure column of the NAT Policies table displays traffic statistics for the NAT policy. Deleting Entries Clicking the Delete icon deletes the NAT Policy entry. If the icon is dimmed, the NAT Policy is a default entry, and you cannot delete it. Selecting the checkboxes of specific custom policies makes the Delete button available.

Clicking the button deletes the selected policies. Clicking Delete All deletes all custom policies. SonicWall appliances support Network Address Translation (NAT).

NAT is the automated translation of IP addresses between different networks. For example, a company might use private IP addresses on a LAN that are represented by a single IP address on the WAN side of the SonicWall appliance. SonicWall appliances support two types of NAT. NOTE: IP address/port combinations are dynamic and not preserved for new connections. For example, the first connection for an IP address might use port 2302, but the second connection might use 2832. IPv6 address objects display in the Original Source, Original Destination, Translated Source, and Translated Destination columns of the NAT Policies table. To add a NAT Policy, click the Add NAT Policy link.

To edit an existing policy, click the Configure icon for the policy you want to edit. The procedures for adding and editing NAT policies for IPv6 are the same as for IPv4. Common Types of Mapping SonicWall supports several types of address mapping.

These include: Many-to-Many Mapping—many local IP addresses are mapped to many public IP addresses. If the number of public IP addresses is greater than or equal to the number of local IP addresses, the SonicWall appliance uses Address-to-Address translation. If the number of public IP addresses is less than the number of local IP addresses, the SonicWall appliance uses NAPT. If there are 10 private IP addresses and 5 public IP addresses, two private IP addresses will be assigned to each public IP address using NAPT. SonicWall NAT Policy Fields When configuring a NAT Policy, you configure a group of settings that specifies how the IP address originates and how it will be translated.

Additionally, you can apply a group of filters that allow you to apply different policies to specific services and interfaces. Translated Destination—specifies the IP address or IP address range to which the original destination will be mapped. This drop-down menu setting is what the firewall translates the specified Original Destination to as it exits the firewall, whether it is to another interface, or into/out-of VPN tunnels. When creating outbound NAT policies, this entry is usually set to Original, as the destination of the packet is not being changed, but the source is being changed. However, these Address Object entries can be single host entries, address ranges, or IP subnets.

Original Service—used to filter destination addresses by service, this field specifies a Service Object that can be a single service or group of services. This drop-down menu setting is used to identify the IP service in the packet crossing the firewall, whether it is across interfaces, or into/out-of VPN tunnels. You can use the predefined services on the firewall, or you can create your own entries. For many NAT policies, this field is set to Any, as the policy is only altering source or destination IP addresses. NOTE: If you map more than one private IP address to the same public IP address, the private IP addresses will automatically be configured for port mapping or NAPT. To configure one-to-one mapping from the public network to the private network, select the Address Object that corresponds to the public network IP address in the Original Destination field and the private IP address that is used to reach the server in the Translated Destination field.

Leave the other fields alone, unless you want to filter by service or interface. NOTE: If you map one public IP address to more than one private IP address, the public IP address is mapped to the first private IP address. Load balancing is not supported. Additionally, you must set the Original Source to Any.
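As an illustration only, an inbound one-to-one policy built from the fields described above could be represented as the following Python dictionary. The Address Object names are hypothetical, and the field values are a sketch of a typical inbound one-to-one policy rather than defaults pulled from the product.

    # Hypothetical representation of the inbound one-to-one NAT policy described above.
    # "Public-Web-IP" and "Internal-Web-Server" are made-up Address Object names.
    inbound_one_to_one = {
        "Original Source": "Any",                          # required for inbound one-to-one
        "Translated Source": "Original",                   # the source is not changed
        "Original Destination": "Public-Web-IP",           # public Address Object
        "Translated Destination": "Internal-Web-Server",   # private Address Object
        "Original Service": "Any",                         # or a specific Service Object to filter
        "Translated Service": "Original",
    }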

Many-to-One Mapping To configure many-to-one mapping from the private network to the public network, select the Address Object that corresponds to the private network IP addresses in the Original Source field and the public IP address that is used to reach the Internet in the Translated Source field. Leave the other fields alone, unless you want to filter by service or interface. NOTE: You can also specify Any in the Original Source field and the Address Object of the LAN interface in the Translated Source field. Many-to-Many Mapping To configure many-to-many mapping from the private network to the public network, select the Address Object that corresponds to the private network IP addresses in the Original Source field and the public IP addresses to which they are mapped in the Translated Source field.

Leave the other fields alone, unless you want to filter by service or interface. NOTE: If the IP address range specified in the Original Source is larger than the Translated Source, the SonicWall appliance uses port mapping or NAPT. If the Translated Source is equal to or larger than the Original Source, addresses are individually mapped. To configure many-to-many mapping from the public network to the private network, select the Address Object that corresponds to the public network IP addresses in the Original Destination field and the IP addresses on the private network in the Translated Destination field. Leave the other fields alone, unless you want to filter by service or interface. NOTE: If the IP address range specified in the Original Destination is smaller than the Translated Destination, the addresses are individually mapped to the first IP addresses in the translated range. If the Translated Destination is equal to or smaller than the Original Destination, addresses are individually mapped.
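To make the range-size rules above concrete, here is a minimal Python sketch. It is illustrative only: the round-robin sharing of translated addresses under NAPT is an assumption for the example, not documented SonicWall behavior.

    # Hypothetical mapping of an original address range onto a translated range.
    def map_ranges(original, translated):
        if len(original) <= len(translated):
            # Equal or larger translated range: addresses are individually mapped.
            return {"mode": "address-to-address",
                    "map": dict(zip(original, translated))}
        # Smaller translated range: translated addresses are shared via port mapping (NAPT).
        return {"mode": "NAPT (port mapping)",
                "map": {addr: translated[i % len(translated)]
                        for i, addr in enumerate(original)}}

    # Example: 10 private addresses over 5 public addresses -> two per public address via NAPT.
    plan = map_ranges(["10.0.0.%d" % n for n in range(1, 11)],
                      ["203.0.113.%d" % n for n in range(1, 6)])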

NAT Load Balancing and Probing NAT load balancing provides the ability to balance incoming traffic across multiple, similar network resources. Load Balancing distributes traffic among similar network resources so that no single server becomes overwhelmed, allowing for reliability and redundancy. If one server becomes unavailable, traffic is routed to available resources, providing maximum uptime. With probing enabled, the SonicWall uses one of two methods to probe the addresses in the load-balancing group: a simple ICMP ping query or a TCP socket open query to determine whether the resource is alive. Per the configurable intervals, the SonicWall can direct traffic away from a non-responding resource, and return traffic to the resource after it has begun to respond again. NAT Load Balancing Methods NAT load balancing is configured on the Advanced tab of a NAT policy.
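Before the list of methods, here is a rough Python sketch of the probing behavior described above. It is not the firewall's implementation: the fixed timeout, the probe port, and the use of a plain TCP connect in place of the ICMP option are all assumptions for the example.

    import socket

    # Hypothetical probe: report which members of a load-balancing group respond
    # to a TCP "socket open" query within the timeout.
    def probe_group(members, port=80, timeout=2.0):
        alive = []
        for host in members:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    alive.append(host)        # resource responded; keep it in rotation
            except OSError:
                pass                          # no response; direct traffic away from it
        return alive

    # Example with a hypothetical server group:
    # available = probe_group(["192.0.2.10", "192.0.2.11", "192.0.2.12"])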

SonicOS offers the following NAT methods. When you are finished, click Update. The policy is added and you are returned to the NAT Policies screen. Configuring Web Proxy Forwarding Settings A Web proxy server intercepts HTTP requests and determines if it has stored copies of the requested Web pages. If it does not, the proxy completes the request to the server on the Internet, returning the requested information to the user and also saving it locally for future requests. Setting up a Web proxy server on a network can be cumbersome, because each computer on the network must be configured to direct Web requests to the server.

If there is a proxy server on the SonicWall appliance’s network, you can move the SonicWall appliance between the network and the proxy server, and enable Web Proxy Forwarding. This forwards all WAN requests to the proxy server without requiring the computers to be individually configured. Configuring Automatic Proxy Forwarding (Web Only). After the SonicWall appliance has been updated, a message confirming the update is displayed at the bottom of the browser window. Bypass Proxy Servers Upon Proxy Failure If a Web proxy server is specified on the Firewall >Web Proxy page, selecting Bypass Proxy Servers Upon Proxy Server Failure allows clients behind the SonicWall appliance to bypass the Web proxy server in the event it becomes unavailable. Instead, the client’s browser accesses the Internet directly as if a Web proxy server is not specified.

Adding a Proxy Server To add a Web Proxy server through which users’ web requests pass, complete the following steps. Click Update.
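Conceptually, Web Proxy Forwarding with Bypass Proxy Servers Upon Proxy Server Failure behaves like the following sketch. This is purely illustrative Python with a hypothetical proxy address; on the appliance the forwarding happens transparently, without any client-side code.

    import urllib.request

    PROXY = "http://10.0.0.50:3128"   # hypothetical Web proxy on the LAN

    def fetch(url, bypass_on_failure=True):
        # Try the configured proxy first; optionally fall back to a direct request.
        try:
            opener = urllib.request.build_opener(
                urllib.request.ProxyHandler({"http": PROXY, "https": PROXY}))
            return opener.open(url, timeout=5).read()
        except OSError:
            if not bypass_on_failure:
                raise
            # Proxy unavailable: go directly to the Internet, as if no proxy were specified.
            return urllib.request.urlopen(url, timeout=5).read()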

Configuring Routing in SonicOS Enhanced If you have routers on your interfaces, you can configure the SonicWall appliance to route network traffic to specific predefined destinations. Static routes must be defined if the network connected to an interface is segmented into subnets, either for size or practical considerations. For example, a subnet can be created to isolate a section of a company, such as finance, from network traffic on the rest of the LAN, DMZ, or WAN.

To add static routes, complete the following steps. When you are finished, click Update. The route settings are configured for the selected SonicWall appliance(s). To clear all screen settings and start over, click Reset.

Probe-Enabled Policy Based Routing Configuration For appliances running SonicOS Enhanced 5.5 and above, you can optionally configure a Network Monitor policy for the route. When a Network Monitor policy is used, the static route is dynamically disabled or enabled, based on the state of the probe for the policy. Policy Based Routing is fully supported for IPv6 by selecting IPv6 address objects and gateways for route policies on the Network >Routing page. IPv6 address objects are listed in the Source, Destination, and Gateway columns of the Route Policies table. Configuring routing policies for IPv6 is nearly identical to IPv4. To configure a policy based route, complete the following steps. Click Update to apply the configuration.
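A minimal sketch of the probe-enabled behavior follows. The logic is assumed for illustration, not the SonicOS implementation: the static route is treated as installed only while its Network Monitor probe reports the monitored path as up.

    # Hypothetical model of a probe-enabled static route.
    class ProbeEnabledRoute:
        def __init__(self, destination, gateway, probe):
            self.destination = destination
            self.gateway = gateway
            self.probe = probe        # callable returning True if the monitored path is up
            self.enabled = True

        def refresh(self):
            # Dynamically enable or disable the route based on the probe state.
            self.enabled = bool(self.probe())
            return self.enabled

    # Example with a stub probe that always succeeds:
    # route = ProbeEnabledRoute("10.20.0.0/16", "10.1.1.254", probe=lambda: True)
    # route.refresh()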

Configuring RIP in SonicOS Enhanced Routing Information Protocol (RIP) is a distance-vector routing protocol that is commonly used in small homogeneous networks. Using RIP, a router periodically sends its entire routing table to its closest neighbor, which passes the information to its next neighbor, and so on. Eventually, all routers within the network have the information about the routing paths.

When attempting to route packets, a router checks the routing table and selects the path that requires the fewest hops. SonicWall appliances support RIPv1 or RIPv2 to advertise their static and dynamic routes to other routers on the network. Changes in the status of VPN tunnels between the SonicWall and remote VPN gateways are also reflected in the RIPv2 advertisements. Choose between RIPv1 or RIPv2 based on your router’s capabilities or configuration. RIPv1 is an earlier version of the protocol that has fewer features, and it also sends packets through broadcast instead of multicast. RIPv2 packets are backwards-compatible and can be accepted by some RIPv1 implementations that provide an option of listening for multicast packets. The RIPv2 Enabled (broadcast) selection broadcasts packets instead of multicasting packets, and is for heterogeneous networks with a mixture of RIPv1 and RIPv2 routers.
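The fewest-hops behavior can be sketched as follows. This is illustrative Python only; real RIP also involves split horizon, timers, and the 15-hop limit, which are omitted here.

    # Hypothetical distance-vector update: merge a neighbor's advertised routing table,
    # keeping whichever path to each destination has the fewest hops.
    def rip_update(my_table, neighbor, neighbor_table):
        for destination, hops in neighbor_table.items():
            candidate = hops + 1                     # one extra hop to reach the neighbor
            best = my_table.get(destination)
            if best is None or candidate < best["hops"]:
                my_table[destination] = {"hops": candidate, "next_hop": neighbor}
        return my_table

    table = {"192.168.1.0/24": {"hops": 1, "next_hop": "direct"}}
    rip_update(table, "10.0.0.2", {"192.168.2.0/24": 2, "192.168.1.0/24": 5})
    # table now reaches 192.168.2.0/24 via 10.0.0.2 in 3 hops and keeps the shorter local route.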

The images in this section show management interfaces running SonicOS 5.9 and higher firmware versions. To configure RIP, refer to the following subsections.

Provide a password. Advanced Routing Services For appliances running SonicOS versions 5.6 and higher, VPN Tunnel Interfaces can be configured for advanced routing. To do so, you must enable advanced routing for the tunnel interface on the Advanced tab of its configuration. See for more information. After you have enabled advanced routing for a Tunnel Interface, it is displayed in the list with the other interfaces in the Advanced Routing table on the Network >RIP page.

The RIP configurations for Tunnel Interfaces are very similar to the configurations for traditional interfaces with the addition of two new options that are listed at the bottom of the RIP configuration window under a new Global Unnumbered Configuration heading. When running SonicOS version 5.9 or higher, a BGP drop-down menu is available under the Advanced Routing Services heading. This menu gives you the options to enable or disable the BGP feature and is only available if Use Advanced Routing is clicked. Global Unnumbered Configuration Because Tunnel Interfaces are not physical interfaces and have no inherent IP address, they must “borrow” the IP address of another interface. Therefore, the advanced routing configuration for a Tunnel Interface includes the following options for specifying the source and destination IP addresses for the tunnel. When more than one Tunnel Interface on an appliance is connected to the same remote device, each Tunnel Interface must use a unique borrowed interface. Depending on the specific circumstances of your network configuration, these guidelines might not be essential to ensure that the Tunnel Interface functions properly.

But these guidelines are SonicWall best practices that will avoid potential network connectivity issues. Global RIP Configuration To configure the Global RIP settings, complete the following steps. When you are finished, click Update. The settings are changed for the SonicWall appliance. To clear all screen settings and start over, click Reset. Configuring IP Helper The IP Helper allows the SonicWall to forward DHCP requests originating from the interfaces on a SonicWall to a centralized DHCP server on behalf of the requesting client.

IP Helper is used extensively in routed VLAN environments where a DHCP server is not available for each interface, or where the layer three routing mechanism is not capable of acting as a DHCP server itself. The IP Helper also allows NetBIOS broadcasts to be forwarded with DHCP client requests.
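In spirit, the IP Helper acts like a small UDP relay. The sketch below is greatly simplified compared with a real DHCP relay agent (which also rewrites the giaddr field), and the server address is hypothetical; it simply forwards a broadcast DHCP request to a central server on the client's behalf.

    import socket

    DHCP_SERVER = ("10.10.10.5", 67)   # hypothetical centralized DHCP server

    def relay_dhcp_once(listen_ip="0.0.0.0"):
        # Receive one broadcast DHCP request (UDP/67) and forward it to the central server.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind((listen_ip, 67))
            request, client = sock.recvfrom(4096)   # broadcast from the requesting client
            sock.sendto(request, DHCP_SERVER)       # forward on behalf of the client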

When you are finished, click Update. The settings are changed for the selected SonicWall appliance. To clear all screen settings and start over, click Reset. Configuring ARP ARP (Address Resolution Protocol) maps layer three (IP addresses) to layer two (physical or MAC addresses) to enable communications between hosts residing on the same subnet.

ARP is a broadcast protocol that can create excessive amounts of network traffic on your network. To minimize the broadcast traffic, an ARP cache is maintained to store and reuse previously learned ARP information. To configure ARP, complete the following steps. Bind MAC Address—Enabling the Bind MAC Address option in the Add Static ARP window binds the MAC address specified to the designated IP address and interface. This can be used to ensure that a particular workstation (as recognized by the network card's unique MAC address) can only be used on a specified interface on the SonicWall.

After the MAC address is bound to an interface, the SonicWall will not respond to that MAC address on any other interface. It also removes any dynamically cached references to that MAC address that might have been present, and it prohibits additional (non-unique) static mappings of that MAC address. Update IP Address Dynamically—The Update IP Address Dynamically setting in the Add Static ARP window is a sub-feature of the Bind MAC Address option. This allows for a MAC address to be bound to an interface when DHCP is being used to dynamically allocate IP addressing. Enabling this option blurs the IP Address field, and populates the ARP Cache with the IP Address allocated by the SonicWall's internal DHCP server, or by the external DHCP server if IP Helper is in use.
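A simplified model of the Bind MAC Address behavior is shown below. The data structures are assumed for illustration, not SonicOS internals: once a MAC address is bound to an interface, lookups for that MAC on any other interface are rejected and any dynamically cached reference is dropped.

    # Hypothetical static ARP table with MAC-to-interface binding.
    class ArpTable:
        def __init__(self):
            self.bindings = {}   # mac -> (ip, interface)
            self.dynamic = {}    # mac -> (ip, interface)

        def bind(self, mac, ip, interface):
            self.bindings[mac] = (ip, interface)
            self.dynamic.pop(mac, None)          # remove any dynamically cached reference

        def respond(self, mac, interface):
            bound = self.bindings.get(mac)
            if bound is None:
                return True                      # unbound MAC: normal ARP handling
            return bound[1] == interface         # bound MAC is answered only on its own interface

    # Example: arp = ArpTable(); arp.bind("00:11:22:33:44:55", "192.168.1.20", "X0")
    # arp.respond("00:11:22:33:44:55", "X1") returns False.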

Secondary Subnets with Static ARP The Static ARP feature allows for secondary subnets to be added on other interfaces, and without the addition of automatic NAT rules. Adding a Secondary Subnet using the Static ARP Method. Optional: Add a static route on upstream device(s) so that they know which gateway IP to use to reach the secondary subnet. Flushing the ARP Cache It is sometimes necessary to flush the ARP cache if the IP address has changed for a device on the network.

Because the IP address is linked to a physical address, the IP address can change but still be associated with the physical address in the ARP Cache. Flushing the ARP Cache allows new information to be gathered and stored in the ARP Cache. Click Flush ARP Cache to clear the information. To configure a specific length of time for the entry to time out, enter a value in minutes in the ARP Cache entry time out (minutes) field.

Navigating and Sorting the ARP Cache Table Entries To view ARP cache information, click Request ARP Cache display from unit(s). The ARP Cache table provides easy pagination for viewing a large number of ARP entries.

You can navigate a large number of ARP entries listed in the ARP Cache table by using the navigation control bar located at the top right of the ARP Cache table. Navigation control bar includes four buttons. The far left button displays the first page of the table.

The far right button displays the last page. The inside left and right arrow buttons move to the previous or next page, respectively. You can enter the policy number (the number listed before the policy name in the # Name column) in the Items field to move to a specific ARP entry.

The default table configuration displays 50 entries per page. You can change this default number of entries for tables on the System >Administration page. You can sort the entries in the table by clicking on the column header. The entries are sorted by ascending or descending order.

The arrow to the right of the column entry indicates the sorting status. A down arrow means ascending order.

An up arrow indicates descending order. Configuring Neighbor Discovery The Neighbor Discovery Protocol (NDP) is a messaging protocol that was created as part of IPv6 to complete a number of the tasks that ICMP and ARP accomplish in IPv4. Just like ARP, Neighbor Discovery builds a cache of dynamic entries, and the administrator can configure static Neighbor Discovery entries. The following table shows the IPv6 neighbor messages and functions that are analogous to the traditional IPv4 neighbor messages.
IPv6 neighbor messages and functions (IPv4 message / IPv6 equivalent):
ARP request message / Neighbor solicitation message
ARP reply message / Neighbor advertisement message
ARP cache / Neighbor cache
Gratuitous ARP / Duplicate address detection
Router solicitation message (optional) / Router solicitation (required)
Router advertisement message (optional) / Router advertisement (required)
Redirect message / Redirect message
NDP objects Use the NDP Object Search tool to find existing NDP objects.

The search results display in the NDP Objects table. Each entry details the IP Address, MAC Address, and Interface of the NDP object. From this table you can add, edit, or delete NDP objects by clicking the Add New NDP Object or Delete NDP Object(s) links, or by clicking Configure for an existing object. NDP Cache Request an NDP cache list by clicking the Request NDP Cache List from Firewall link located in the “Request NDP Cache List” section. The requested list displays in the NDP Cache Objects table, where information about the IP Address, Type, MAC Address, Interface, Timeout, and Flush is shown. To search for particular NDP cache lists, use the NDP Cache Search tool.

The filtered results display in the NDP Cache Objects table. To flush the NDP cache, click the Flush NDP Cache link. Configuring SwitchPorts The SwitchPorts page allows you to manage the assignments of ports to PortShield interfaces. A PortShield interface is a virtual interface with a set of ports assigned to it.

To configure a SwitchPort, complete the following steps. NOTE: The NSA2600 firewall does not support PortShield, and the SM 9800 and SOHO W firewalls do not support the X-Series Solution. A PortShield interface is a virtual interface with a set of ports, including ports on Dell Networking X-Series, or extended, switches, assigned to it. PortShield architecture enables you to configure some or all of the LAN ports into separate security contexts, providing protection not only from the WAN and DMZ, but between devices inside your network as well. In effect, each context has its own wire-speed PortShield that enjoys the protection of a dedicated, deep packet inspection firewall.

On the Network >PortShield Groups page, you can manually group ports together so that they share a common network subnet as well as common zone settings. IMPORTANT: When an extended switch has been powered off and then the firewall is restarted (rebooted), it could take up to five minutes before the firewall discovers the extended switch and reports the Status of the switch as Connected. When configuring extended switches in a PortShield group, it could take up to five minutes for the configuration to be displayed on the Network >PortShield Groups page. Configuring MAC-IP Anti-Spoof MAC and IP address-based attacks are increasingly common in today’s network security environment. These types of attacks often target a Local Area Network (LAN) and can originate from either outside or inside a network. In fact, anywhere internal LANs are somewhat exposed, such as in office conference rooms, schools, or libraries, could provide an opening to these types of attacks.

These attacks also go by various names: man-in-the-middle attacks, ARP poisoning, SPITS. The MAC-IP Anti-Spoof feature lowers the risk of these attacks by providing administrators with different ways to control access to a network, and by eliminating spoofing attacks at OSI Layer 2/3. The effectiveness of the MAC-IP Anti-Spoof feature focuses on two areas. The first is admission control which allows administrators the ability to select which devices gain access to the network. The second area is the elimination of spoofing attacks, such as denial-of-service attacks, at Layer 2.

To achieve these goals, two caches of information must be built: the MAC-IP Anti-Spoof Cache, and the ARP Cache. The MAC-IP Anti-Spoof cache validates incoming packets and determines whether they are to be allowed inside the network. An incoming packet’s source MAC and IP addresses are looked up in this cache. If they are found, the packet is allowed through. The MAC-IP Anti-Spoof cache is built through one or more of the following sub-systems. Interface Settings To edit MAC-IP Anti-Spoof settings within the Network Security Appliance management interface, go to the Network >MAC-IP Anti-spoof page. To configure settings for a particular interface, click the pencil icon in the Configure column for the desired interface.
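As background before the interface settings procedure continues, the cache admission check just described can be modeled roughly as follows. This is an illustrative Python sketch whose keys mirror the description above; it is not firewall code.

    # Hypothetical MAC-IP Anti-Spoof cache lookup for an incoming packet.
    def admit(packet, anti_spoof_cache):
        # Allow the packet only if its (source MAC, source IP) pair is in the cache.
        key = (packet["src_mac"], packet["src_ip"])
        return key in anti_spoof_cache

    cache = {("00:11:22:33:44:55", "192.168.1.20")}
    admit({"src_mac": "00:11:22:33:44:55", "src_ip": "192.168.1.20"}, cache)   # True
    admit({"src_mac": "00:11:22:33:44:55", "src_ip": "192.168.1.99"}, cache)   # False: spoofed IP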

The Settings window is displayed for the selected interface. In this window, the following settings can be enabled or disabled by clicking on the corresponding check box. After your setting selections for this interface are complete, click OK. The following options are available. If you need to edit a static Anti-Spoof cache entry, click the pencil icon, under the Configure column, on the same line.

Single, or multiple, static anti-spoof cache entries can be deleted. To do this, select the “delete check box” next to each entry, then click Delete Anti-Spoof Cache(s). To clear cache statistics, select the desired devices, then click Clear Stats. Some packet types are bypassed even though the MAC-IP Anti-Spoof feature is enabled: 1) Non-IP packets, 2) DHCP packets with source IP as 0, 3) Packets from a VPN tunnel, 4) Packets with invalid unicast IPs as their source IPs, and 5) Packets from interfaces where the Management status is not enabled under anti-spoof settings. The Anti-Spoof Cache Search section provides the ability to search the entries in the cache. To search the MAC-IP Anti-Spoof Cache, complete the following steps. NOTE: Spoof Detected List display is available only at the Unit level.

The Spoof Detect List displays devices that failed to pass the ingress anti-spoof cache check. Entries on this list can be added as a static anti-spoof entry. To view the Spoof Detect List, click the Request Spoof Detected List from Firewall link.

To add an entry to the static anti-spoof list, click on the pencil icon under the “Add” column for the desired device. An alert message window opens, asking if you wish to add this static entry. Click OK to proceed. Entries can be flushed from the list by clicking Flush. The name of each device can also be resolved using NetBios, by clicking Resolve. Configuring Network Monitor This section describes how to configure the Network Monitor feature, which provides a flexible mechanism for monitoring network path viability. The results and status of this monitoring are displayed on the Network Monitor page, and are also provided to affected client components and logged in the system log.

Each custom NM policy defines a destination Address Object to be probed. This Address Object might be a Host, Group, Range, or FQDN. When the destination Address Object is a Group, Range or FQDN with multiple resolved addresses, Network Monitor probes each probe target and derives the NM Policy state based on the results.

GMS monitors the status of any remote host in the local or remote network. GMS checks the availability of traffic between the appliance and the target host in real time, ensuring that the target host can receive network traffic.

GMS also displays the status of the monitored host on the Network >Network Monitor page. To add a network monitor policy on the SonicWall security appliance, complete these steps. TCP - This probe uses the route table to find the egress interface and next-hop for the defined probe targets. A TCP SYN packet is sent to the probe target with the source IP address of the egress interface. A successful response is counted independently for each probe target when the target responds with either a SYN/ACK or RST through the same interface within the Response Timeout time window.

When a SYN/ACK is received, a RST is sent to close the connection. If a RST is received, no response is returned. Click Update to submit the Network Monitor policy. Then click Update on the Network >Network Monitor page. Configuring Probe-Enabled Policy Based Routing When configuring a static route, you can optionally configure a Network Monitor policy for the route. When a Network Monitor policy is used, the static route is dynamically disabled or enabled, based on the state of the probe for the policy.
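Returning to the TCP probe described above, it can be approximated in Python as follows. This is a sketch under two assumptions: a completed connection stands in for a received SYN/ACK, a connection refusal stands in for a received RST, and deriving the overall policy state as "any target up" is likewise assumed rather than documented.

    import socket

    def tcp_probe(target, port, response_timeout=5.0):
        # Return True if the target answers within the timeout, whether with SYN/ACK or RST.
        try:
            with socket.create_connection((target, port), timeout=response_timeout):
                return True                 # SYN/ACK received; closing the socket ends the session
        except ConnectionRefusedError:
            return True                     # RST received: counted as a successful response
        except OSError:
            return False                    # timeout or unreachable: the probe failed

    def policy_state(targets, port):
        # Probe each target in the Group/Range and derive an overall state (assumed "any member up").
        results = {t: tcp_probe(t, port) for t in targets}
        return ("UP" if any(results.values()) else "DOWN"), results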

For more information, see. Configuring Network Settings in SonicOS Standard The following sections describe how to configure network settings in SonicOS Standard. NOTE: Web proxy forwarding settings are identical in SonicOS Standard and Enhanced. For configuration information, refer to in the SonicOS Enhanced section of this chapter. Configuring Intranet Settings SonicWalls can be installed between LAN segments of intranets to prevent unauthorized access to certain resources. For example, if the administrative offices of a school are on the same network as the student computer lab, they can be separated by a SonicWall.

The SonicWall Intranet Configuration figure shows how a SonicWall appliance can be installed between two network segments on an intranet. When you are finished, click Update.

The settings are changed for each selected SonicWall appliance. To clear all screen settings and start over, click Reset. Configuring RIP in SonicOS Standard RIP is a distance-vector routing protocol that is commonly used in small homogeneous networks.

Using RIP, a router periodically sends its entire routing table to its closest neighbor that passes the information to its next neighbor, and so on. Eventually, all routers within the network will have the information about the routing paths. When attempting to route packets, a router checks the routing table and selects the path that requires the fewest hops. RIP is not supported by all SonicWall appliances. To configure RIP, complete the following steps. When you are finished, click Update. The settings are changed for each selected SonicWall appliance.

To clear all screen settings and start over, click Reset. Configuring OPT Addresses SonicWall appliances protect users by preventing Internet users from accessing systems within the LAN (WorkPort). However, this security also prevents users from reaching servers intended for public access, such as Web and mail servers. To allow these services, many SonicWall models have a special Demilitarized Zone (DMZ) port (also known as the HomePort) which is used for public servers. The DMZ sits between the LAN (WorkPort) and the Internet. Servers on the DMZ are publicly accessible, but are protected from denial of service attacks such as SYN Flood and Ping of Death. Although the DMZ port is optional, it is strongly recommended for public servers or when connecting the servers directly to the Internet where they are not protected.

When you are finished, click Update. The settings are changed for each selected SonicWall appliance. To clear all screen settings and start over, click Reset. Configuring One-to-One NAT One-to-One NAT maps valid external IP addresses to internal addresses hidden by NAT. This enables you to hide most of your network by using internal IP addresses. However, some machines might require access. This enables you to allow direct access when necessary.

To do this, assign a range of internal IP addresses to a range of external IP addresses of equal size. The first internal IP address corresponds to the first external IP address, the second internal IP address to the second external IP address, and so on. For example, if an ISP has assigned IP addresses 209.19.28.16 through 209.19.28.31 with 209.19.28.16 as the NAT public address and the address range 192.168.168.1 through 192.168.168.255 is used on the LAN (WorkPort), the following table shows how the IP addresses are assigned. To add additional IP address ranges, repeat Step through for each range. When you are finished, click Update.
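Before continuing with the remaining steps, the positional pairing described above works out as in this small sketch. It is illustrative only, with hypothetical ranges; it simply pairs the nth internal address with the nth external address.

    import ipaddress

    def one_to_one_map(first_internal, first_external, count):
        # Pair internal and external addresses positionally: first with first, second with second, and so on.
        internal = ipaddress.ip_address(first_internal)
        external = ipaddress.ip_address(first_external)
        return {str(internal + i): str(external + i) for i in range(count)}

    # Hypothetical example: map four internal hosts to four public addresses.
    # one_to_one_map("192.168.168.10", "203.0.113.20", 4)
    # -> {"192.168.168.10": "203.0.113.20", ..., "192.168.168.13": "203.0.113.23"}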

The settings are changed for each selected SonicWall appliance. To clear all screen settings and start over, click Reset. Configuring Ethernet Settings This section describes how to configure Ethernet settings on each port of the SonicWall appliance(s). The Ethernet Settings screen is only available on SonicWall 6.x.x.x firmware versions and SonicOS Standard firmware versions.

To configure Ethernet settings, complete the following steps. NOTE: The X-Series Solution is not supported on the SM 9800, NSA 2600, or SOHO W firewalls. Critical network elements, such as a firewall and switch, need to be managed, usually individually. GMS allows unified management of both the firewall and a Dell Networking X-Series switch using the firewall management interface (UI) and GMS. GMS Support of X-Series Switches The maximum number of interfaces available on the SonicWall firewalls varies depending on the model, as shown below.
Interfaces per firewall:
SM 9600: 20 (4 10 GbE SFP+, 8 1 GbE SFP, 8 1 GbE copper), 1 GbE Management, and 1 Console
SM 9400: 20 (4 10 GbE SFP+, 8 1 GbE SFP, 8 1 GbE copper), 1 GbE Management, and 1 Console
SM 9200: 20 (4 10 GbE SFP+, 8 1 GbE SFP, 8 1 GbE copper), 1 GbE Management, and 1 Console

Building Architectures to Solve Business Problems About the Authors John George, Reference Architect, Infrastructure and Cloud Engineering, NetApp John George is a reference architect on the NetApp Infrastructure and Cloud Engineering team and is focused on developing, validating, and supporting cloud infrastructure solutions that include NetApp products. Before his current role, he supported and administered Nortel's worldwide training network and VPN infrastructure. John holds a master's degree in computer engineering from Clemson University.

Mike Mankovsky, Cisco Systems Mike Mankovsky is a Cisco Unified Computing System architect, focusing on Microsoft solutions with extensive experience in Hyper-V, storage systems, and Microsoft Exchange Server. He has expert product knowledge of Microsoft Windows storage technologies and data protection technologies. Chris O'Brien, Technical Marketing Engineer, Server Access Virtualization Business Unit, Cisco Systems Chris O'Brien is currently focused on developing infrastructure best practices and solutions that are designed, tested, and documented to facilitate and improve customer deployments. Previously, O'Brien was an application developer and has worked in the IT industry for more than 15 years. Chris Reno, Reference Architect, Infrastructure and Cloud Engineering, NetApp Chris Reno is a reference architect in the NetApp Infrastructure and Cloud Enablement group and is focused on creating, validating, supporting, and evangelizing solutions based on NetApp products. Before being employed in his current role, he worked with NetApp product engineers designing and developing innovative ways to perform Q and A for NetApp products, including enablement of a large grid infrastructure using physical and virtualized computing resources. In these roles, Chris gained expertise in stateless computing, netboot architectures, and virtualization.

Glenn Sizemore, NetApp Glenn Sizemore is a private cloud reference architect in the Microsoft Solutions Group at NetApp, where he specializes in cloud and automation. Since joining NetApp, Glenn has delivered a variety of Microsoft-based solutions, ranging from general best practices guidance to co-authoring the NetApp Hyper-V Cloud Fast Track with Cisco reference architecture. Lindsey Street, Systems Architect, Infrastructure and Cloud Engineering, NetApp Lindsey Street is a systems architect on the NetApp Infrastructure and Cloud Engineering team. She focuses on the architecture, implementation, compatibility, and security of innovative vendor technologies to develop competitive and high-performance end-to-end cloud solutions for customers.

Lindsey started her career in 2006 at Nortel as an interoperability test engineer, testing customer equipment interoperability for certification. Lindsey has a bachelor of science degree in computer networking and a master of science in information security from East Carolina University.

Adam Fazio, Microsoft Adam Fazio is a Solution Architect in the Worldwide Datacenter and Private Cloud Center of Excellence organization with a passion for evolving customers' IT infrastructure from a cost-center to a key strategic asset. With focus on the broad Core Infrastructure Optimization model, his specialties include: Private & Hybrid Cloud, Datacenter, Virtualization, Management & Operations, Storage, Networking, Security, Directory Services, People & Process. In his 14 years in IT, Adam has successfully led strategic projects for Government, Education Sector, and Fortune 100 organizations. Adam is a lead architect for Microsoft's Datacenter Services Solution and the Microsoft Private Cloud Fast Track program.

Adam is a course instructor, published writer and regular conference speaker on Microsoft Cloud, Datacenter, and Infrastructure solutions. Joel Yoker, Microsoft Joel Yoker is an Architect in the Americas Office of the Chief Technical Officer (OCTO) organization focusing on Private Cloud, Datacenter, Virtualization, Management & Operations, Storage, Networking, Security, Directory Services, Auditing and Compliance. In his 14 years at Microsoft, Joel has successfully led strategic projects for Government, Education Sector, and Fortune 100 organizations. Joel is a lead architect for Microsoft's Datacenter Services Solution and the Microsoft Private Cloud Fast Track program and serves as course instructor, published writer and regular conference speaker on Microsoft Private Cloud, virtualization and infrastructure solutions. Jeff Baker, Microsoft Jeff Baker is an Architect in the Center of Excellence for Private Cloud at Microsoft Corporation. Jeff has worked with datacenter focused virtualization, management and operations technologies for almost a decade of his 14 plus years in IT.

Jeff has IT experience in a broad range of industries including Government, Education, Healthcare, Energy and Fortune 100 companies. Jeff has worked extensively with Microsoft's Datacenter Services Solution and the Microsoft Private Cloud Fast Track programs. Jeff is a regular conference speaker on Microsoft Private Cloud, virtualization and infrastructure solutions. About Cisco Validated Design (CVD) Program The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments.

For more information visit. ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, 'DESIGNS') IN THIS MANUAL ARE PRESENTED 'AS IS,' WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

This document describes FlexPod for Microsoft Private Cloud Fast Track v3 from Cisco and NetApp and discusses design choices and deployment best practices using this shared infrastructure platform. Customer Challenges As customers transition toward shared infrastructure or cloud computing, they face a number of questions, such as the following: • How do I start the transition? • What will be my return on investment? • How do I build an infrastructure that is ready for the future?

• How do I transition from my current infrastructure cost-effectively? • Will my applications run properly in a shared infrastructure? • How do I manage the infrastructure? The FlexPod architecture is designed to help you answer these questions by providing proven guidance and measurable value. By introducing standardization, FlexPod helps customers mitigate the risk and uncertainty involved in planning, designing, and implementing a new data center infrastructure.

The result is a more predictive and adaptable architecture capable of meeting and exceeding customers' IT demands. FlexPod Program Benefits Cisco and NetApp have thoroughly tested and verified the FlexPod solution architecture and its many use cases while creating a portfolio of detailed documentation, information, and references to assist customers in transforming their data centers to this shared infrastructure model.

This portfolio includes the following items: • Best practice architectural design • Workload sizing and scaling guidance • Implementation and deployment instructions • Technical specifications (rules for what is, and what is not, a FlexPod configuration) • Frequently asked questions (FAQs) • Cisco Validated Designs and NetApp Validated Architectures (NVAs) focused on a variety of use cases Cisco and NetApp have also built an experienced support team focused on FlexPod solutions, from customer account and technical sales representatives to professional services and technical support engineers. The support alliance formed by NetApp and Cisco provides customers and channel services partners with direct access to technical experts who collaborate with multiple vendors and have access to shared lab resources to resolve potential issues. FlexPod supports tight integration with virtualized and cloud infrastructures, making it the logical choice for long-term investment. The following IT initiatives are addressed by the FlexPod solution: Integrated Systems FlexPod is a prevalidated infrastructure that brings together computing, storage, and network to simplify and accelerate data center builds and application rollouts while reducing the risks.

These integrated systems provide a standardized approach in the data center that supports staff expertise, application onboarding, and automation, as well as operational efficiencies that are important for compliance and certification. Fabric Infrastructure Resilience FlexPod is a highly available and scalable infrastructure that IT can evolve over time to support multiple physical and virtual application workloads. FlexPod contains no single point of failure at any level, from the server through the network to the storage. The fabric is fully redundant and scalable, providing smooth traffic failover should any individual component fail at the physical or virtual layer. Fabric Convergence Cisco Unified Fabric is a data center network that supports both traditional LAN traffic and all types of storage traffic, including the lossless requirements for block-level storage transport over Fibre Channel.

Cisco Unified Fabric creates high-performance, low-latency, and highly available networks serving a diverse set of data center needs. FlexPod Gen-II uses Cisco Unified Fabric to offer a wire-once environment that accelerates application deployment. Each reference architecture in the Fast Track program combines concise guidance with validated configurations for the computing, network, storage, and virtualization and management layers. Each architecture presents multiple design patterns for using the architecture and describes the minimum requirements for validating each Fast Track solution.

This document describes the Enterprise Medium configuration and the Converged Infrastructure design pattern. The Converged Infrastructure in this context of Microsoft Private Cloud is the sharing of network topology between network and storage network traffic. This typically implies Ethernet network devices and network controllers with particular features to provide segregation, QoS (performance), and scalability.

The result is a network fabric with less physical complexity, greater agility, and lower costs than those associated with traditional fiber-based storage networks (). Figure 4 Converged Fabric Design Pattern.

FlexPod This section provides an overview on the FlexPod design. System Overview FlexPod is a best practice data center architecture that is built with three components: • Cisco UCS • Cisco Nexus switches • NetApp fabric-attached storage (FAS) systems These components are connected and configured according to the best practices of both Cisco and NetApp to provide the ideal platform for running a variety of enterprise workloads with confidence.

FlexPod can scale up for greater performance and capacity (adding computing, network, or storage resources individually as needed), or it can scale out for environments that need multiple consistent deployments (rolling out additional FlexPod stacks). FlexPod delivers not only a baseline configuration but also the flexibility to be sized and optimized to accommodate many different use cases. Typically, the more scalable and flexible a solution is, the more difficult it becomes to maintain a single unified architecture capable of offering the same features and functionality across each implementation. This is one of the key benefits of FlexPod. Each of the component families shown in Figure 1 (Cisco UCS, Cisco Nexus, and NetApp FAS) offers platform and resource options to scale the infrastructure up or down while supporting the same features and functionality that are required under the configuration and connectivity best practices of FlexPod.

Design Principles FlexPod addresses four primary design principles: scalability, elasticity, availability, and manageability. These architecture goals are as follows: • Application availability: Ensure accessible and ready-to-use services • Scalability: Address increasing demands with appropriate resources • Flexibility: Provide new services or recovered resources without requiring infrastructure modification • Manageability: Facilitate efficient infrastructure operations through open standards and APIs. Note Performance and security are key design criteria that were not directly addressed in this project but have been addressed in other collateral, benchmarking and solution testing efforts.

Functionality and basic security elements were validated. FlexPod Discrete Uplink Design The figure below represents the FlexPod Discrete Uplink Design with Data ONTAP operating in 7-Mode.

Data ONTAP operating in 7-Mode is NetApp's traditional functional model. As depicted, the FAS devices are configured in a high-availability (HA) pair delivering five nines (99.999 percent) availability. Scalability is achieved through the addition of storage capacity (disk and shelves) as well as through additional controllers, whether they be FAS 2200, 3200, or 6200 series. The controllers are deployed only in HA pairs, meaning more HA pairs can be added for scalability, but each pair is managed separately. Figure 5 FlexPod Discrete Uplink Design with 7-Mode Data ONTAP.

The FlexPod Discrete Uplink Design is an end-to-end Ethernet transport solution supporting multiple LAN protocols and most notably FCoE. The solution provides a unified 10 Gigabit Ethernet enabled fabric defined by dedicated FCoE uplinks and dedicated Ethernet uplinks between the Cisco UCS fabric interconnects and the Cisco Nexus switches, as well as converged connectivity between the NetApp storage devices and the same multipurpose Cisco Nexus platforms.

The Discrete Uplink Design does not employ a dedicated SAN switching environment and requires no dedicated Fibre Channel connectivity. The Cisco Nexus 5500 Series Switches are configured in N port ID virtualization (NPIV) mode, providing storage services for the FCoE-based traffic traversing its fabric. As illustrated in, link aggregation technology plays an important role, providing improved aggregate bandwidth and link resiliency across the solution stack. The NetApp storage controllers, Cisco UCS, and Nexus 5500 platforms all support active port channeling using 802.3ad standard Link Aggregation Control Protocol (LACP). Port channeling is a link aggregation technique offering link fault tolerance and traffic distribution (load balancing) for improved aggregate bandwidth across member ports.

In addition, the Cisco Nexus 5000 Series features virtual port channel (vPC) capabilities. VPCs allow links that are physically connected to two different Cisco Nexus 5500 Series devices to appear as a single 'logical' port channel to a third device, essentially offering device fault tolerance. VPCs address aggregate bandwidth and link and device resiliency. The Cisco UCS fabric interconnects and NetApp FAS controllers benefit from the Nexus vPC abstraction, gaining link and device resiliency, as well as full utilization of a nonblocking Ethernet fabric. Note The Spanning Tree protocol does not actively block redundant physical links in a properly configured vPC-enabled environment, so all ports should forward on vPC member ports.

This dedicated uplink design leverages FCoE-capable NetApp FAS controllers. From a storage traffic perspective, both standard LACP and Cisco's vPC link aggregation technologies play an important role in FlexPod distinct uplink design. Figure 5 shows the use of dedicated FCoE uplinks between the Cisco UCS Fabric Interconnects and Cisco Nexus 5500 Unified Switches. The Cisco UCS Fabric Interconnects operate in the N-Port Virtualization (NPV) mode, meaning the servers' FC traffic is either manually or automatically pinned to a specific FCoE uplink, in this case to one of the two FCoE port channels. The use of discrete FCoE port channels with distinct VSANs allows an organization to maintain traditional SAN A/B fabric separation best practices, including separate zone databases. The vPC links between the Cisco Nexus 5500 switches' and NetApp storage controllers' Unified Target Adapters (UTAs) are converged, supporting both FCoE and traditional Ethernet traffic at 10 Gigabit, providing a robust 'last mile' connection between the initiator and target. Organizations with the following characteristics or needs may wish to use the 7-Mode design: • Existing Data ONTAP 7G and Data ONTAP 8.x 7-Mode customers who are looking to upgrade • Midsize enterprise customers who are primarily interested in the FAS2000 series • Customers who absolutely require SnapVault®, synchronous SnapMirror®, MetroCluster™, SnapLock® software, IPv6, or Data ONTAP Edge.

Note It is always advisable to seek advice from experts. Please consider reaching out to your NetApp account team or partner for further guidance.

The 'Logical Build' section provides more details regarding the design of the physical components and virtual environment consisting of Windows Server 2012 with Hyper-V, Cisco UCS, and NetApp storage controllers. Integrated System Components The following components are required to deploy the Discrete Uplink design: • Cisco UCS • Cisco Nexus 5500 Series Switch • Cisco Nexus 1000V Switch for Hyper-V • NetApp FAS and Data ONTAP • Windows Server 2012 with Hyper-V Role • System Center 2012 SP1 Cisco UCS The Cisco Unified Computing System is a next-generation solution for blade and rack server computing. It is an innovative data center platform that unites computing, network, storage access, and virtualization into a cohesive system designed to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers.

The system is an integrated, scalable, multichassis platform in which all resources participate in a unified management domain. Managed as a single system, whether it has one server or 160 servers with thousands of virtual machines, Cisco UCS decouples scale from complexity. It accelerates the delivery of new services simply, reliably, and securely through end-to-end provisioning and migration support for both virtualized and nonvirtualized systems. Cisco UCS consists of the following components: • Cisco UCS 6200 Series Fabric Interconnects ( ) is a series of line-rate, low-latency, lossless, 10-Gbps Ethernet and FCoE interconnect switches providing the management and communication backbone for Cisco UCS.

Cisco UCS supports VM-FEX technology. • Cisco UCS 5100 Series Blade Server Chassis ( ) supports up to eight blade servers and up to two fabric extenders in a 6-rack unit (RU) enclosure.

• Cisco UCS B-Series Blade Servers ( ): Increase performance, efficiency, versatility, and productivity with these Intel-based blade servers. • Cisco UCS adapters ( ): Wire-once architecture offers a range of options to converge the fabric, optimize virtualization, and simplify management. Cisco adapters support VM-FEX technology. • Cisco UCS C-Series Rack Servers ( ) deliver unified computing in an industry-standard form factor to reduce TCO and increase agility. • Cisco UCS Manager ( ) provides unified, embedded management of all software and hardware components in the Cisco UCS. For more information, see. Cisco Nexus 2232PP 10GE Fabric Extender The Cisco Nexus 2232PP provides 32 10 Gigabit Ethernet and FCoE Small Form-Factor Pluggable Plus (SFP+) server ports and eight 10 Gigabit Ethernet and FCoE SFP+ uplink ports in a compact 1RU form factor.

The built-in standalone software, Cisco Integrated Management Controller, manages Cisco UCS C-Series Rack Servers. When a UCS C-Series rack server is integrated with Cisco UCS Manager via the Nexus 2232 platform, the management controller does not manage the server anymore. Instead, it is managed by the Cisco UCS Manager software, using the Cisco UCS Manager GUI or command-line interface (CLI).

The Nexus 2232 provides data and control traffic support for the integrated UCS C-Series server. Cisco VM Fabric Extender Cisco VM-FEX technology collapses virtual switching infrastructure and physical switching infrastructure into a single, easy-to-manage environment. Benefits include: • Simplified operations: Eliminates the need for a separate virtual networking infrastructure • Improved network security: Contains VLAN proliferation • Optimized network utilization: Reduces broadcast domains • Enhanced application performance: Offloads virtual machine switching from host CPU to parent switch application-specific integrated circuits (ASICs) VM-FEX is supported on Windows Server 2012 Hyper-V hypervisors and fully supports workload mobility through Quick Migration and Live Migration. VM-FEX eliminates the virtual switch within the hypervisor by providing individual virtual machines with virtual ports on the physical network switch. VM I/O is sent directly to the upstream physical network switch that takes full responsibility for VM switching and policy enforcement.

This leads to consistent treatment for all network traffic, virtual or physical. VM-FEX collapses virtual and physical switching layers into one and reduces the number of network management points by an order of magnitude. The single root I/O virtualization (SR-IOV) specs do, however, describe how a hardware device can expose multiple 'lightweight' hardware surfaces for use by virtual machines. These are called virtual functions (VFs). VFs are associated with a physical function (PF). The PF is what the parent partition uses in Hyper-V and is equivalent to the regular bus/device/function (BDF) addressed Peripheral Component Interconnect (PCI) device you may have heard of before.

The PF is responsible for arbitration relating to policy decisions (such as link speed or MAC addresses in use by VMs in the case of networking) and for I/O from the parent partition itself. Although a VF could be used by the parent partition, in Windows Server 2012, VFs are used only by VMs. A single PCI Express device can expose multiple PFs, each with its own set of VF resources. While software-based devices work extremely efficiently, they have an unavoidable overhead to the I/O path.

Consequently, software-based devices introduce latency, increase overall path length, and consume computing cycles. With SR-IOV capability, part of the network adapter hardware is exposed inside the virtual machine and provides a direct I/O path to the network hardware. For this reason, a vendor-specific driver needs to be loaded into the VM in order to use the VF network adapter (). Figure 6 VM-FEX from the Hyper-V Node Point Perspective. As illustrated in, the I/O data path from the VF does not go across the virtual machine bus (VMBus) or through the Windows hypervisor.

It is a direct hardware path from the VF in the VM to the NIC. Also note that the control path for the VF is through VMBus (back to the PF driver in the parent partition). Cisco Nexus 1000V Switch for Hyper-V Cisco Nexus 1000V Series Switches provide a comprehensive and extensible architectural platform for virtual machine and cloud networking. These switches are designed to accelerate server virtualization and multitenant cloud deployments in a secure and operationally transparent manner.

A network for public access that can be teamed to provide failover for the cluster. Highly available host servers are one critical component of a dynamic, virtual infrastructure. A Hyper-V host failover cluster is a group of independent servers that work together to increase the availability of applications and services. The clustered servers (nodes) are connected physically.

If one of the cluster nodes fails, another node begins to provide service. In the case of a planned live migration, users experience no perceptible service interruption. Microsoft System Center 2012 SP1 Microsoft System Center 2012 SP1 helps organizations deliver flexible and cost-effective private cloud infrastructure in a self-service model, while using existing data center hardware and software investments. It provides a common management experience across data centers and private or partner hosted clouds. To deliver the best experience for modern applications, System Center 2012 SP1 offers deep insight into applications, right down to client script performance.

System Center 2012 SP1 delivers the tools and capabilities that organizations need to scale their capacity and, where necessary, use cloud resources as well. Microsoft System Center 2012 offers unique application management capabilities that can enable you to deliver agile, predictable application services. Using the App Controller, Operations Manager, and Virtual Machine Manager components of System Center 2012, you can deliver 'applications as a service,' where a 'service' is a deployed instance of a cloud-style application, along with its associated configuration and virtual infrastructure. The following application management capabilities are included: Standardized Application Provisioning • Virtual Machine Manager offers service templates to help you define standardized application blueprints.

A service template would typically include specifications for the hardware, operating system, and application packages that compose the service. • Supports multiple package types for Microsoft .NET applications, including MS Deploy for the web tier (IIS), Microsoft Server Application Virtualization (Server App-V) for the application tier, and SQL Server DAC for the data tier. • Specifies application configuration requirements such as topology, elasticity and scale-out rules, health thresholds, and upgrade rules. • Server App-V, a unique technology in Virtual Machine Manager, optimizes applications for private cloud deployments by abstracting the application from the underlying OS and virtual infrastructure. By enabling image-based management, Server App-V simplifies application upgrades and maintenance.

Comprehensive Hybrid Application Management • App Controller offers application owners a single view to manage application services and virtual machines, whether they are on-premises, at service providers, or on Windows Azure. • App Controller provides the ability to deploy and migrate virtual machines to the Windows Azure Virtual Machine service. You can migrate core applications such as Microsoft SQL Server, Active Directory, and Microsoft SharePoint Server from on-premises environments to Windows Azure with just a few mouse clicks. 360-Degree Application Monitoring, Diagnosis, and Dev-Ops • Operations Manager offers deep application and transaction monitoring insight for .NET applications (and J2EE application servers) and helps you efficiently isolate the root cause of application performance issues down to the offending line of code. • Outside-in monitoring with Global Service Monitor (GSM) and Operations Manager provides real-time visibility into application performance as experienced by end users. • Operations Manager and GSM integrate with Microsoft Visual Studio to facilitate dev-ops collaboration, thereby helping you remediate application issues faster. • Operations Manager offers easy-to-use reporting and custom dashboarding.

Using the Service Manager and Orchestrator components of System Center 2012, you can automate core organizational process workflows such as incident management, problem management, change management, and release management. You can also integrate and extend your existing toolsets and build flexible workflows (or runbooks) to automate processes across your IT assets and organizations. The following service delivery and automation capabilities are provided: Standardize IT Services • Define standardized service offerings by using dependencies in a centralized configuration management database (CMDB). • Publish standardized service offerings through the Service Catalog offered by Service Manager.

• Provision and allocate pooled infrastructure resources to internal business unit ITs (BUITs) using the Cloud Services Process Pack (CSPP) that's natively integrated into Service Manager. • Chargeback (or showback) storage, network, and computing costs to BUITs; specify pricing for BUITs at different levels of granularity.

• Helps ensure compliance with pertinent industry regulations and business needs with the IT GRC Process Pack. Enable IT Service Consumers to Identify, Access, and Request Services • Enable self-service infrastructure with the self-service portal offered by Service Manager. • Set access and resource quota levels on a per-user or per-BUIT basis.

• Capture and track required service request information. Automate Processes and Systems Necessary to Fulfill Service Requests • Integrate and extend automation across System Center and third-party management toolsets (including BMC, HP, IBM, and VMware) with Orchestrator Integration Packs; extend automation to Windows Azure virtual machine workflows without the need for coding or scripting. • Orchestrate automated workflows across multiple processes, departments, and systems. • Automate provisioning of service requests for end-to-end request fulfillment. Microsoft System Center 2012 SP1 provides a common management toolset to help you configure, provision, monitor, and operate your IT infrastructure. If your infrastructure is like that of most organizations, you have physical and virtual resources running heterogeneous operating systems.

The integrated physical, virtual, private, and public cloud management capabilities in System Center 2012 can help you ensure efficient IT management and optimized ROI of those resources. The following infrastructure management capabilities are provided: Provision your Physical and Virtual Infrastructure • Support deployment and configuration of virtual servers and Hyper-V with Virtual Machine Manager. • Manage VMware vSphere and Citrix XenServer using one interface. • Automatically deploy Hyper-V to bare metal servers and create Hyper-V clusters. • Provision everything from operating systems to physical servers, patches, and endpoint protection with Configuration Manager. Provision Private Clouds • Use 'create cloud' functionality in Virtual Machine Manager to aggregate virtual resources running on Hyper-V, vSphere, and XenServer into a unified private cloud fabric.

• Customize and assign private cloud resources to suit your organization's needs. • Deliver self-service capability for application owners to request and automate provisioning of new private cloud resources. Operate Your Infrastructure • Use a single console and customizable dashboards in Operations Manager to monitor and manage your physical, virtual, networking, application, and cloud resources. • Dynamically optimize virtual resources for load balancing and power efficiency. • Protect your physical and virtual resources with Endpoint Protection and Data Protection Manager.

• Automatically patch your physical and virtual resources with Configuration Manager and Virtual Machine Manager. • Automatically track and create custom reports for hardware inventory, software inventory, and software usage metering. Domain and Element Management This section provides general descriptions of the domain and element managers used during the validation effort. The following managers are used: • Cisco UCS Manager • Cisco UCS PowerTool • Cisco VM-FEX Port Profile Configuration Utility • Nexus 1000V for Hyper-V VSM • NetApp OnCommand System Manager • NetApp SnapDrive for Windows • NetApp SnapManager for Hyper-V • Microsoft System Center 2012 SP1 – App Controller – Operations Manager – Orchestrator – Service Manager – Virtual Machine Manager Cisco UCS Manager Cisco UCS Manager provides unified, centralized, embedded management of all Cisco UCS software and hardware components across multiple chassis and thousands of virtual machines.

Administrators use this software to manage the entire Cisco UCS as a single logical entity through an intuitive GUI, a CLI, or an XML API. Cisco UCS Manager resides on a pair of Cisco UCS 6200 Series Fabric Interconnects using a clustered, active-standby configuration for high availability. The software gives administrators a single interface for performing server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection. Cisco UCS Manager service profiles and templates support versatile role- and policy-based management, and system configuration information can be exported to CMDBs to facilitate processes based on ITIL ® concepts. Service profiles let server, network, and storage administrators treat Cisco UCS servers as raw computing capacity to be allocated and reallocated as needed. The profiles define server I/O properties and are stored in the Cisco UCS 6200 Series Fabric Interconnects.

Using service profiles, administrators can provision infrastructure resources in minutes instead of days, creating a more dynamic environment and more efficient use of server capacity. Each service profile consists of a server software definition and the server's LAN and SAN connectivity requirements. When a service profile is deployed to a server, Cisco UCS Manager automatically configures the server, adapters, fabric extenders, and fabric interconnects to match the configuration specified in the profile. The automatic configuration of servers, NICs, host bus adapters (HBAs), and LAN and SAN switches lowers the risk of human error, improves consistency, and decreases server deployment times. Service profiles benefit both virtualized and nonvirtualized environments. The profiles increase the mobility of nonvirtualized servers, such as when moving workloads from server to server or taking a server offline for service or an upgrade. Profiles can also be used in conjunction with virtualization clusters to bring new resources online easily, complementing existing virtual machine mobility.

For more information on Cisco UCS Manager, visit:. Cisco UCS PowerTool Cisco UCS PowerTool is a PowerShell module that helps automate all aspects of Cisco UCS Manager, including server, network, storage, and hypervisor management. PowerTool enables easy integration with existing IT management processes and tools. Cisco UCS PowerTool is a flexible and powerful command-line toolkit that includes more than 1500 PowerShell cmdlets, providing customers with an efficient, cost-effective, and easy-to-use interface to integrate and automate UCS management with Microsoft products and many third-party products. It lets you take advantage of the flexible and powerful scripting environment offered by Microsoft PowerShell.
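As a brief illustration of this scripting model, the hedged sketch below connects to a Cisco UCS domain, inventories the blades, and lists service profile instances; the fabric interconnect address and credentials are placeholders, and the module name reflects the PowerTool 1.x release.

# Hedged sketch: basic Cisco UCS PowerTool inventory session
Import-Module CiscoUcsPs
$cred = Get-Credential
$handle = Connect-Ucs -Name 10.0.0.10 -Credential $cred
# List blades and the service profiles currently associated with them
Get-UcsBlade | Select-Object Dn, Model, NumOfCpus, TotalMemory, OperState
Get-UcsServiceProfile -Type instance | Select-Object Name, AssocState, PnDn
Disconnect-Ucs -Ucs $handle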

Cisco VM-FEX Port Profile Configuration Utility The Cisco VM-FEX Port Profile Configuration Utility maps a Cisco UCS port profile to the virtual switch port that connects a virtual machine NIC to a virtual function. This utility is available as a Microsoft Management Console (MMC) snap-in and as a set of PowerShell cmdlets. Cisco Nexus 1000V for Hyper-V The Cisco Nexus 1000V is a logical switch that fully integrates into Windows Server 2012 Hyper-V and Virtual Machine Manager 2012 SP1. The Cisco Nexus 1000V operationally emulates a physical modular switch, with a Virtual Supervisor Module (VSM) providing control and management functionality to multiple line cards. In the case of the Nexus 1000V, the Cisco Virtual Ethernet Module (VEM) is a forwarding extension for the Hyper-V logical switch when installed on the Hyper-V host. Figure 7 describes the Cisco Nexus 1000V architecture. Figure 7 Cisco Nexus 1000V for Hyper-V Architecture.

NetApp OnCommand System Manager NetApp OnCommand System Manager makes it possible for administrators to manage individual NetApp storage systems or clusters through an easy-to-use browser-based interface. System Manager comes with wizards and workflows, simplifying common storage tasks such as creating volumes, LUNs, qtrees, shares, and exports, which saves time and prevents errors. System Manager works across all NetApp storage: FAS2000, FAS3000, and FAS6000 series as well as V-Series systems. Figure 8 shows a sample screen in NetApp OnCommand System Manager. Figure 8 Sample NetApp OnCommand System Manager Screen.

NetApp SnapDrive for Windows NetApp SnapDrive for Windows (SDW) is an enterprise-class storage and data management application that simplifies storage management and increases availability of application data. The key functionality includes storage provisioning, file system-consistent data Snapshot copies, rapid application recovery, and the ability to manage data easily. SDW complements the native file system and volume manager and integrates seamlessly with the clustering technology supported by the host OS. NetApp SnapManager for Hyper-V NetApp SnapManager for Hyper-V (SMHV) automates and simplifies backup and restore operations for virtual machines running in Microsoft Windows Server 2012 Hyper-V environments hosted on Data ONTAP storage systems. SMHV enables application-consistent dataset backups according to protection policies set by the storage administrator. VM backups can also be restored from those application-consistent backups. SnapManager for Hyper-V makes it possible to back up and restore multiple VMs across multiple hosts.

Policies can be applied to the datasets to automate backup tasks such as scheduling, retention, and replication. System Center 2012 SP1 App Controller App Controller is a member of the Microsoft System Center suite. It provides a common self-service experience that can help administrators easily configure, deploy, and manage virtual machines and services across private clouds. App Controller provides the user interface for connecting and managing workloads post-provisioning. System Center 2012 SP1 Operations Manager Operations Manager is a member of the Microsoft System Center suite. It provides infrastructure monitoring that is flexible and cost-effective, helps ensure the predictable performance and availability of vital applications, and offers comprehensive monitoring for your data center and private cloud.

System Center 2012 SP1 Service Manager Service Manager is a member of the Microsoft System Center suite. It provides an integrated platform for automating and adapting your organization's IT service management best practices, such as those found in Microsoft Operations Framework (MOF) and ITIL. It provides built-in processes for incident and problem resolution, change control, and asset lifecycle management. System Center 2012 SP1 Virtual Machine Manager Virtual Machine Manager (VMM) is a member of the Microsoft System Center suite. It is a management solution for the virtualized data center, enabling you to configure and manage your virtualization host, networking, and storage resources in order to create and deploy virtual machines and services to private clouds that you have created. Microsoft SQL Server 2012 SP1 Microsoft SQL Server is a highly available database management and analysis system for e-commerce, line-of-business, and data warehousing solutions.

It stores data and provides reporting services for the System Center components. A Closer Look at FlexPod Discrete Uplink Design Physical Build: Hardware and Software Revisions Table 3 describes the hardware and software versions used during validation. It is important to note that Cisco, NetApp, and Microsoft maintain interoperability matrixes that should be referenced to determine support for a specific implementation of FlexPod; please refer to the interoperability links published by each vendor. Table 3 Validated Software and Firmware Versions (updated with current updates). Logical Build The referenced figure illustrates the FlexPod Discrete Uplink Design. The design is physically redundant across the stack, addressing Layer 1 high availability requirements, but there are additional Cisco and NetApp technologies and features that make for an even more effective solution.

This section discusses the logical configuration validated for FlexPod. FlexPod allows organizations to adjust the individual components of the system to meet their particular scale or performance requirements. One key design decision in the Cisco UCS domain is the selection of I/O components. Numerous combinations of I/O adapter, I/O module (IOM), and fabric interconnect are available, so it is important to understand the impact of these selections on the overall flexibility, scalability, and resiliency of the fabric. Figure 9 illustrates the available backplane connections in the Cisco UCS 5100 Series chassis.

As the illustration shows, each of the two fabric extenders (IOMs) has four 10GBASE-KR (802.3ap) standardized Ethernet backplane paths available for connection to the half-width blade slot. This means that each half-width slot has the potential to support up to 80 Gb of aggregate traffic (two IOMs x four 10-Gb KR lanes each). What is realized depends on several factors, namely: • Fabric extender model (2204 or 2208) • Modular LAN on Motherboard (mLOM) card • Mezzanine slot card The Cisco UCS 2208XP Fabric Extender has eight 10 Gigabit Ethernet, FCoE-capable, SFP+ ports that connect the blade chassis to the fabric interconnect. The Cisco UCS 2204XP has four external ports with identical characteristics to connect to the fabric interconnect. Each Cisco UCS 2208XP has 32 10 Gigabit Ethernet ports connected through the midplane KR lanes to each half-width slot in the chassis, while the 2204XP has 16.

This means the 2204XP enables two KR lanes per half-width blade slot while the 2208XP enables all four. The number of KR lanes indicates the potential I/O available to the chassis and therefore to the blades. Figure 9 Cisco UCS B-Series M3 Server Chassis Backplane Connections. Port aggregation is supported by the second-generation Cisco UCS 6200 Series Fabric Interconnects, 2200 Series Fabric Extenders, and 1200 Series VICs. This capability allows for workload rebalancing between these devices, providing link fault tolerance in addition to increased aggregate bandwidth within the fabric. It should be noted that in the presence of second-generation VICs and fabric extenders, fabric port channels will automatically be created in the fabric.

Fabric port channels between the fabric extenders and fabric interconnects are controlled via the Chassis/FEX discovery policy. Figure 10 illustrates the two modes of operation for this policy.

In Discrete Mode, each FEX KR connection, and therefore each server connection, is tied or pinned to a network fabric connection homed to a port on the fabric interconnect. In the presence of a failure on the external 'link,' all KR connections are disabled within the FEX I/O module. In the case of a fabric port channel discovery policy, the failure of a network fabric link allows for redistribution of flows across the remaining port channel members. This is less disruptive to the fabric. Figure 10 Example of Discrete Mode Versus Port Channel Mode. Note First-generation Cisco UCS hardware is compatible with the second-generation gear, but it will operate only in discrete mode. Figure 11 represents one of the Cisco UCS B200 M3 backplane connections validated for the FlexPod.

The B200 M3 uses a VIC 1240 in the mLOM slot with an empty mezzanine slot. The FEX 2204XP enables two KR lanes to the half-width blade, while the global discovery policy dictates the formation of a fabric port channel. Figure 11 Validated Cisco UCS Backplane Configurations VIC 1240 Only. Figure 12 illustrates another Cisco UCS B200 M3 instance in the test bed. In this instance the mezzanine slot is populated with the port expander option.

This passive device provides connectivity for the unused ports on the Cisco UCS VIC 1240, essentially enabling the 40-Gb potential of the mLOM card. Beyond the raw capacity improvement is the creation of two more automatic port channels between the fabric extender and the server. This provides link resiliency at the adapter level and doubles the bandwidth available to the system (dual 2x10 Gb port channels).

Figure 12 Validated Cisco UCS Backplane Configuration: VIC 1240 with Port Extender. Note See Appendix B for additional combinations of Cisco UCS second-generation hardware and the connectivity options they afford. The FlexPod defines two FCoE port channels (Po1 and Po2) and two LAN port channels (Po13 and Po14).

The FCoE port channels carry only Fibre Channel traffic that is associated with a VSAN/VLAN set, with each set in turn supported on only one side of the fabric, A or B. In this example, the virtual HBA (vHBA) 'FABRIC-A' is defined in the service profile. The vHBA uses a virtual circuit, VC 737, to traverse the Cisco UCS unified fabric to port channel Po1, where FCoE traffic egresses the Cisco UCS domain and enters the Cisco Nexus 5500 platform. Fabric A supports a distinct VSAN, which is not present on Fabric B, thus maintaining fabric isolation.

It has been said that design is the art of compromise; however, with the FlexPod architecture there is very little sacrifice. Both availability and performance are attainable; the question becomes which combination meets the application and business requirements of the organization. Table 4 describes the availability and performance aspects of the second-generation Cisco UCS I/O gear. Table 4 Cisco UCS B-Series M3 FEX 2204XP and 2208XP Options. Note Third-party generation 3 PCIe adapters were not validated.

A balanced fabric is critical within any data center environment. Given the myriad traffic types (live migration, CSV, FCoE, public, control traffic, etc.) the FlexPod must be able to provide for specific traffic requirements while simultaneously being able to absorb traffic spikes and protect against traffic loss.

To address these requirements, the Cisco UCS QoS system classes and Cisco Nexus QoS policies should be configured. In this validation effort the FlexPod was configured to support jumbo frames with a maximum transmission unit (MTU) size of 9000, and this MTU was assigned to the Best-Effort class. With regard to jumbo frames, it is important to make sure the MTU settings are applied uniformly across the stack to prevent fragmentation and the negative performance implications that inconsistent MTUs may introduce. Cisco UCS C-Series Server Design Cisco UCS Manager 2.1 provides two connectivity modes for Cisco UCS C-Series Rack Server management: • Dual-wire management (shared LAN on motherboard [LOM]): This management mode is supported in Cisco UCS Manager releases earlier than 2.1. Shared LOM ports on the rack server are used exclusively for carrying management traffic.

A separate cable connected to one of the ports on the PCIe card carries the data traffic. Using two separate cables for managing data traffic and management traffic is also referred to as dual-wire management. • Single-wire management (Sideband): Cisco UCS Manager version 2.1 introduces an additional rack server management mode using the Network Controller Sideband Interface (NC-SI). The Cisco UCS Virtual Interface Card 1225 uses NC-SI, which can carry both data traffic and management traffic on the same cable. This new feature is referred to as single-wire management and allows for denser server-to-FEX deployments. From a functional perspective, the 1 RU Nexus FEX 2232PP replaces the UCS 2204 or 2208 IOMs that are located within the UCS 5108 blade chassis.

Each 10 Gigabit Ethernet VIC port connects to Fabric A or B via the FEX. The FEX and fabric interconnects form port channels automatically based on the chassis discovery policy, providing link resiliency to the C-Series server. This is identical to the behavior of the IOM-to-fabric-interconnect connectivity. From a logical perspective, the virtual circuits formed within the Cisco UCS domain are consistent between the B-Series and C-Series deployment models, as are the virtual constructs formed at the Hyper-V layer. Cisco Nexus 5500 Series Switch As Figure 13 shows, the Cisco Nexus 5500 Series Switch provides Ethernet and, in particular, FCoE connectivity for the Cisco UCS domain as well as for the NetApp storage controllers.

From an Ethernet perspective, the Nexus 5500 uses virtual PortChannel (vPC), which allows links that are physically connected to two different Cisco Nexus 5000 Series devices to appear as a single port channel to a third device, in this case the Cisco UCS fabric interconnects and NetApp controllers. vPCs provide the following benefits: • Allow a single device to use a port channel across two upstream devices • Eliminate Spanning Tree Protocol blocked ports • Provide a loop-free topology • Use all available uplink bandwidth • Provide fast convergence if either the link or a device fails • Provide link-level resiliency • Help ensure high availability Figure 13 Discrete Uplink Design: Nexus 5500 and NetApp Storage Focus. vPC requires a 'peer link,' which is documented as port channel 10 in Figure 13. It is important to note that the VLAN associated with the FCoE traffic does not traverse this peer link.

Remember that the FCoE VLAN is associated or mapped to a VSAN, typically using the same numeric ID. It is crucial that the fabrics do not mix, maintaining SAN A/B isolation best practices.

In addition, the vPC links facing the UCS fabric interconnects, vPC13 and vPC14, do not carry any FCoE traffic. Do not define any FCoE VLANs on these links. However, the vPCs connected to the NetApp UTAs are converged, supporting both FCoE and all other VLANs associated with LAN protocols. The vPC peer keepalive link is a required component of a vPC configuration. The peer keepalive link allows each vPC-enabled switch to monitor the health of its peer. This link accelerates convergence and reduces the occurrence of split-brain scenarios.

In this validated solution, the vPC peer keepalive link uses the out-of-band management network. (This link is not shown in Figure 13.) Each Cisco Nexus 5500 Series Switch defines a port channel dedicated to FCoE and connected to the Cisco UCS fabric interconnects, in this instance Po15 and Po16.

Each discrete port channel supports a single VLAN associated with Fabric A or Fabric B. A virtual Fibre Channel interface (vfc) is then bound to the logical port channel interface.

This same construct is applied to the vPCs facing the NetApp storage controllers, in this example vfc11 and vfc12. This assures universal accessibility of the fabric to each NetApp storage node in case of failures. To maintain SAN A and B isolation, vfc11 and vfc12 are associated with different VLAN/VSAN pairings, meaning the vPCs facing the NetApp storage systems support all LAN and FCoE traffic but have unique FCoE VLANs defined on each Nexus switch. Note It is considered a best practice to name each vfc after the port channel on which it resides; for example, vfc15 is on port channel 15. The Nexus 5500 in the FlexPod design provides Fibre Channel services to the Cisco UCS and NetApp FAS platforms. Internally, the Nexus 5500 platforms perform FC zoning to enforce access policy between UCS-based initiators and FAS-based targets. FlexPod is a converged infrastructure platform.

This convergence is possible due to the support of Ethernet enhancements across the integrated computing stack with regard to bandwidth allocation and flow control based on traffic classification. As such, it is important to implement these QoS techniques to help ensure quality of service in the FlexPod. • Priority Flow Control (PFC) 802.1Qbb: Lossless Ethernet using a PAUSE mechanism on a per-class of service (CoS) basis • Enhanced Transmission Selection (ETS) 802.1Qaz: Traffic protection through bandwidth management • Data Center Bridging Capability Exchange (DCBX): Negotiates Ethernet functionality between devices (PFC, ETS, and CoS values) The Nexus 5500 supports these capabilities through QoS policy. QoS is enabled by default and managed using Cisco Modular QoS CLI (MQC), providing class-based traffic control.

The Nexus system will instantiate basic QoS classes for Ethernet traffic and a system FCoE class (class-fcoe) when the FCoE feature is enabled. It is important to align the QoS settings (CoS, MTU) within the Nexus 5500 and the Cisco UCS fabric interconnects. DCBX signaling can affect the NetApp controller, so be sure to allocate the proper bandwidth, based on the site's application needs, to the appropriate CoS classes and keep MTU settings consistent in the environment to avoid fragmentation issues and improve performance. The following list summarizes the best practices used in the validation of the FlexPod architecture:
• Nexus 5500 features enabled
– FCoE uses PFC, ETS, and DCBX to provide a lossless fabric
– NPIV allows the network fabric port (N-port) to be virtualized and support multiple Fibre Channel initiators on a single physical port
– LACP
– Cisco vPC for link and device resiliency
– Link Layer Discovery Protocol (LLDP) allows the Nexus 5000 to share and discover DCBX features and capabilities between neighboring FCoE-capable devices
– Enable Cisco Discovery Protocol for infrastructure visibility and troubleshooting
• vPC considerations
– Define a unique domain ID
– Set the priority of the intended vPC primary switch lower than the secondary (default priority is 32768)
– Establish peer keepalive connectivity. It is recommended to use the out-of-band management network (mgmt0) or a dedicated switched virtual interface (SVI)
– Enable the vPC auto-recovery feature
– Enable IP ARP synchronization to optimize convergence across the vPC peer link.

Note: Cisco Fabric Services over Ethernet is responsible for synchronization of configuration, Spanning Tree, MAC and VLAN information, which removes the requirement for explicit configuration. The service is enabled by default.

– A minimum of two 10 Gigabit Ethernet connections are required for vPC
– All port channels should be configured in LACP active mode
• Spanning tree considerations
– Make sure that the path cost method is set to long. This setting accounts for 10 Gigabit Ethernet links in the environment
– Do not modify the spanning tree priority, the assumption being that this is an access layer deployment

– Loopguard is disabled by default
– Bridge Protocol Data Unit (BPDU) guard and filtering are enabled by default
– Bridge assurance is enabled only on the vPC peer link
– Ports facing the NetApp storage controller and UCS are defined as 'edge' trunk ports
For configuration details, refer to the Cisco Nexus 5000 Series Switches configuration guides. Hyper-V The Hyper-V role enables you to create and manage a virtualized computing environment by using virtualization technology that is built into Windows Server 2012. Installing the Hyper-V role installs the required components and optionally installs management tools.
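A minimal sketch of enabling the role from the command line on Windows Server 2012 is shown below; it assumes a fresh server installation that can be rebooted immediately.

# Hedged sketch: install the Hyper-V role and management tools, then reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
# After the reboot, the Hyper-V PowerShell module is available
Get-Command -Module Hyper-V | Measure-Object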

The required components include the Windows hypervisor, Hyper-V Virtual Machine Management Service, the virtualization Windows Management Instrumentation (WMI) provider, and other virtualization components such as the VMBus, virtualization service provider (VSP), and virtual infrastructure driver (VID). The management tools for the Hyper-V role consist of: • GUI-based management tools: Hyper-V Manager, an MMC snap-in, and Virtual Machine Connection, which provides access to the video output of a virtual machine so you can interact with the VM. • Hyper-V-specific cmdlets for Windows PowerShell. Windows Server 2012 includes a Hyper-V module, which provides command-line access to all the functionality available in the GUI, as well as functionality not available through the GUI. Windows Server 2012 introduced many new features and enhancements for Hyper-V. The following are some of the more notable enhancements that are used in this design.

• Host Scale-Up: Greatly expands support for host processors and memory. New features include support for up to 64 virtual processors and 1 TB of memory for Hyper-V guests, a new VHDX virtual hard disk format with a larger disk capacity of up to 64 TB, and additional resiliency. These features help ensure that your virtualization infrastructure can support the configuration of large, high-performance virtual machines to support workloads that might need to scale up significantly.

• SR-IOV: Support for single root I/O virtualization (SR-IOV)-capable network devices lets an SR-IOV virtual function of a physical network adapter be assigned directly to a virtual machine. This increases network throughput and reduces network latency while also reducing the host CPU overhead that is required for processing network traffic. Refer back to Figure 6 to see the architecture of SR-IOV support in Hyper-V. Cisco Virtual Machine Fabric Extender Cisco Virtual Machine Fabric Extender (VM-FEX) addresses both management and performance concerns in the data center by unifying physical and virtual switch management.

The use of Cisco VM-FEX collapses both virtual and physical networking into a single infrastructure, reducing the number of network management points and enabling consistent provisioning, configuration, and management policy within the enterprise. This integration point between the physical and virtual domains of the data center allows administrators to efficiently manage both their virtual and physical network resources. The decision to use VM-FEX is typically driven by application requirements such as performance and the operational preferences of the IT organization. The Cisco UCS VIC offers each virtual machine a virtual Ethernet interface or vNIC.

This vNIC provides direct access to the fabric interconnects and Nexus 5500 Series switches, where forwarding decisions can be made for each VM using a VM-FEX interface. Cisco VM-FEX technology for Hyper-V provides SR-IOV networking devices.
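On the Hyper-V side, SR-IOV must be requested when the external virtual switch is created and then weighted per virtual machine network adapter; a hedged sketch follows, in which the physical adapter name and VM name are placeholders and the NIC, BIOS, and drivers are assumed to support SR-IOV.

# Hedged sketch: create an SR-IOV-capable virtual switch and request a virtual function for a VM
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "10GbE-Port1" -EnableIov $true
Set-VMNetworkAdapter -VMName "App-VM01" -IovWeight 100
# Confirm whether a virtual function was actually assigned
Get-VMNetworkAdapter -VMName "App-VM01" | Select-Object VMName, IovWeight, Status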

SR-IOV works in conjunction with system chipset support for virtualization technologies. This provides remapping of interrupts and DMA and allows SR-IOV-capable devices to be assigned directly to a virtual machine. Hyper-V in Windows Server 2012 enables support for SR-IOV-capable network devices and allows an SR-IOV virtual function of a physical network adapter to be assigned directly to a virtual machine. This increases network throughput and reduces network latency, while also reducing the host CPU overhead required for processing network traffic. For more information on the configuration limits associated with VM-FEX, go to. FlexPod Discrete Uplink Design with Data ONTAP Operating in 7-Mode Figure 14 shows FlexPod with Data ONTAP operating in 7-mode.

A 7-mode deployment consists of only two storage controllers with shared media. The NetApp FAS controllers use redundant 10-Gb converged adapters configured in a two-port interface group (IFGRP). Each port of the IFGRP is connected to one of the upstream switches, allowing multiple active paths by using the Nexus vPC feature. An IFGRP is a mechanism that allows the aggregation of multiple network interfaces into one logical unit. Combining links aids in network availability and bandwidth. NetApp provides three types of IFGRPs for network port aggregation and redundancy: • Single mode • Static multimode • Dynamic multimode Dynamic multimode IFGRPs are recommended due to their increased reliability and error reporting and also because of their compatibility with vPCs.

A dynamic multimode IFGRP uses LACP to group multiple interfaces together to act as a single logical link. This provides intelligent communication between the storage controller and the Cisco Nexus switches and enables load balancing across physical interfaces as well as failover capabilities. From a Fibre Channel perspective, SAN A (red in Figure 14) and SAN B (blue in Figure 14) fabric isolation is maintained across the architecture with dedicated FCoE channels and virtual interfaces. The 7-mode design allocates Fibre Channel interfaces with SAN A and SAN B access for each controller in the HA pair. Figure 14 Discrete Uplink Design with Data ONTAP Operating in 7-Mode. Private Cloud Architecture Principles The Fast Track architecture attempts to achieve the principles, patterns, and concepts outlined in the referenced Microsoft TechNet article. Please refer to this article if you need clarification on why a particular design choice was made in the Fast Track management architecture.

The introduction to the article provides a synopsis: A key goal is to allow IT organizations to utilize the principles and concepts described in the content set to offer Infrastructure as a Service (IaaS), allowing any workload hosted on this infrastructure to automatically inherit a set of cloud-like attributes. Fundamentally, the consumer should have the perception of infinite capacity and continuous availability of the services they consume. They should also see a clear correlation between the amount of services they consume and the price they pay for these services. Achieving this requires virtualization of all elements of the infrastructure, compute (processing and memory), network, and storage, into a fabric that is presented to the container or the virtual machine. It also requires the IT organization to take a service provider's approach to delivering infrastructure, necessitating a high degree of IT service management maturity.

Moreover, most of the operational functions must be automated to minimize the variance as much as possible while creating a set of predictable models that simplify management. Private Cloud Reference Model Infrastructure as a service (IaaS) is the application of private cloud architecture principles to deliver infrastructure. As the cloud ecosystem matures, product features and capabilities broaden and deepen. The reference model described in this section and shown in Figure 15 is used as a guide for delivering a holistic solution that spans all the layers required for mature IaaS. The model acts as a guide to assist architects in their efforts to holistically address the development of a private cloud architecture. This model is a reference only. Some elements are emphasized more than others in the technical reference architecture, and that preference is based on experience operating private clouds in real-world environments.

Figure 15 Private Cloud Reference Model. The reference model is split into the following layers: • The software, platform, and infrastructure layers represent the technology stack.

Each layer provides services to the layer above. • The service operations and management layers represent the process perspective and include the management tools required to implement the process. • The service delivery layer represents the alignment between business and IT. This reference model is a deliberate attempt to blend the technology and process perspectives, because cloud computing is as much about service management as it is about the technologies involved in it. For examples, see and. For further reading, please see.

Private Cloud Management Overview Fabric Management As we discuss later in the 'Management Architecture' section, fabric management involves treating discrete capacity pools of servers, storage, and networks as a single fabric. Key capabilities of the fabric management system include: • Hardware integration • Fabric provisioning • Virtual machine and application provisioning • Resource optimization • Health and performance monitoring • Maintenance • Reporting Process Automation and Orchestration The orchestration layer, which manages the automation and management components, must be implemented as the interface between the IT organization and the infrastructure. Orchestration provides the bridge between IT business logic, such as 'deploy a new web-server virtual machine when capacity reaches 85 percent,' and the dozens of steps in an automated workflow that are required to actually implement such a change. Ideally, the orchestration layer provides a graphical interface that combines complex workflows with events and activities across multiple management-system components and forms an end-to-end IT business process.

The orchestration layer must provide the ability to design, test, implement, and monitor these IT workflows. Service Delivery Service Management System A service management system is a set of tools designed to facilitate service management processes.

Ideally, these tools should integrate data and information from the entire set of tools found in the management layer. The service management system should process and present the data as needed. At a minimum, the service management system should link to the configuration management system (CMS), commonly known as the configuration management database (CMDB), and should log and track incidents, problems, and changes. The service management system should be integrated with the service health modeling system so that incident tickets can be generated automatically. User Self-Service Self-service capability is a characteristic of private cloud computing and must be present in any implementation. The intent is to permit users to approach a self-service capability and be presented with options available for provisioning. The capability may be basic (provisioning of a virtual machine with a predefined configuration), more advanced (allowing configuration options on top of the base configuration), or complex (implementing a platform capability or service).

Self-service capability is a critical business driver that allows members of an organization to become more agile in responding to business needs with IT capabilities that align and conform to internal business and IT requirements. The interface between IT and the business should be abstracted to a well-defined, simple, and approved set of service options. The options should be presented as a menu in a portal or available from the command line. The business can select these services from the catalog, start the provisioning process, and be notified upon completion, at which point they are charged only for the services actually used. Service Catalog Service catalog management involves defining and maintaining a catalog of services offered to consumers.

This catalog will list the following: • Classes of services that are available • Requirements to be eligible for each service class • Service-level attributes and targets included with each service class • Cost models for each service class The service catalog might also include specific virtual machine templates designed for different workload patterns. Each template will define the VM configuration specifics, such as the amount of allocated CPU, memory, and storage.
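One way to express such service classes is as VMM hardware profiles that the catalog's templates reference; the following hedged sketch uses illustrative names and sizes only and assumes the VMM 2012 SP1 cmdlets are available.

# Hedged sketch: define per-class hardware profiles and review existing templates
Import-Module virtualmachinemanager
New-SCHardwareProfile -Name "Class-Small" -CPUCount 1 -MemoryMB 2048
New-SCHardwareProfile -Name "Class-Medium" -CPUCount 2 -MemoryMB 4096
New-SCHardwareProfile -Name "Class-Large" -CPUCount 4 -MemoryMB 8192
# Review the sizing of the templates currently published in the catalog
Get-SCVMTemplate | Select-Object Name, CPUCount, Memory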

Capacity Management Capacity management defines the processes necessary to achieve the perception of infinite capacity. Capacity must be managed to meet existing and future peak demand while controlling underutilization. Business relationship and demand management are key inputs into effective capacity management and require a service provider's approach. Predictability and optimization of resource usage are primary principles in achieving capacity management objectives.

Availability Management Availability management defines processes necessary to achieve the perception of continuous availability. Continuity management defines how risks will be managed in a disaster scenario to help make sure minimum service levels are maintained.

The principles of resiliency and automation are fundamental here. Service Level Management Service-level management (SLM) is the process of negotiating SLAs and making sure the agreements are met. SLAs define target levels for cost, quality, and agility by service class, as well as the metrics for measuring actual performance. Managing SLAs is necessary for achieving the perception of infinite capacity and continuous availability. SLM also requires a service provider's approach by IT.

Service Lifecycle Management Service lifecycle management takes an end-to-end management view of a service. A typical journey starts with identifying a business need, then moves to managing a business relationship, and concludes when that service becomes available. Service strategy drives service design. After launch, the service is transitioned to operations and refined through continual service improvement. A service provider's approach is critical to successful service lifecycle management. Operations Change Management Change management controls the lifecycle of all changes.

The primary objective of change management is to eliminate, or at least minimize, disruption while desired changes are made to the services. Change management focuses on understanding and balancing the cost and risk of making the change versus the potential benefit of the change to the business or the service. Providing predictability and minimizing human involvement are the core principles for achieving a mature service management process and making sure changes can be made without affecting the perception of continuous availability. Incident and Problem Management Incident management quickly resolves events that affect, or threaten to affect, services with minimal disruption. Problem management identifies and resolves the root causes of incidents. It also tries to prevent or minimize the impact of possible incidents. Configuration Management Configuration management involves making sure that the assets required to deliver services are properly controlled.

The goal is to have accurate and effective information about those assets available when and where it is needed. This information includes details about asset configuration and the relationships between assets. Configuration management typically requires a CMDB, which is used to store configuration records throughout their lifecycle. The configuration management system maintains one or more CMDBs, and each CMDB stores attributes of configuration items and relationships to other configuration items.

Management Architecture Fabric and Fabric Management At a high level, the Fast Track architectures include the concepts of a computing, storage, and network fabric. This fabric is logically and physically independent of components such as System Center that provide management of the fabric, that is, fabric management (Figure 16).

Figure 16 High-Level Diagram of Fast Track Architecture. Fabric The fabric is defined as all of the physical and virtual resources under the scope of management within the fabric management infrastructure. The fabric is typically the entire computing, storage, and network infrastructure, usually implemented as Hyper-V host clusters managed by the System Center infrastructure. For private cloud infrastructures, the fabric constitutes a resource pool that consists of one or more scale units. In a modular architecture, a scale unit is the point to which a module in the architecture can scale before another module is required. For example, an individual server is a scale unit because it can be expanded to a certain point in terms of CPU and RAM; however, once it reaches its maximum scalability, an additional server is required to continue scaling. Each scale unit also has an associated amount of physical installation and configuration labor.

With larger scale units, such as a preconfigured full rack of servers, the labor overhead can be minimized. It is critical to know the scale limits of all components, both hardware and software, when determining the optimum scale units for the overall architecture. Scale units allow the documentation of all the requirements needed for implementation, including space; power; heating, ventilation and air conditioning (HVAC); and connectivity. Fabric Management Fabric management involves treating discrete capacity pools of servers, storage, and networks as a single fabric. The fabric is then subdivided into capacity clouds, or resource pools, that carry characteristics such as delegation of access and administration, SLAs, and cost metering. Fabric management allows the centralization and automation of complex management functions that can be carried out in a highly standardized, repeatable fashion to increase availability and lower operational costs. Fabric Management Host Architecture In a private cloud infrastructure, it is recommended that the systems that make up the resource pools be physically separate from the systems that provide management.

Much like the concept of having a top-of-rack switch, this separation is recommended to provide dedicated fabric management hosts to manage the underlying services that provide capacity to the private cloud infrastructure. This model helps make sure that the availability of the fabric is separated from fabric management and, regardless of the state of the underlying fabric resource pools, management of the infrastructure and its workloads is maintained at all times. To support this level of availability and separation, Fast Track private cloud architectures should contain a separate set of hosts, a minimum of two, configured as a failover cluster in which the Hyper-V role is enabled. Furthermore, these hosts should contain highly available, virtualized instances of the management infrastructure (System Center), stored on dedicated CSVs, to support fabric management operations. All management hosts will use the Windows Server 2012 Datacenter Edition operating system with the Hyper-V role enabled. For the specified scalability, the supporting System Center products and their dependencies will run within Hyper-V virtual machines on the management hosts.

For enterprise implementations, a minimum two-node fabric management cluster is required, with four nodes recommended for scale and availability, to provide high availability of the fabric management workloads. This fabric management cluster is dedicated to the virtual machines running the suite of products providing IaaS management functionality and is not intended to run additional customer workloads outside of those that provide management capabilities over the fabric infrastructure. For additional management scale points, additional management host capacity might be required. The host architecture is illustrated in Figure 17. Figure 17 Management Fabric Infrastructure.
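A hedged sketch of forming this cluster with the in-box Failover Clustering module is shown below; the node names, cluster name, cluster IP address, and disk name are placeholders for the environment.

# Hedged sketch: validate and create the two-node fabric management cluster, then add a CSV
Import-Module FailoverClusters
Test-Cluster -Node "MGMT-HOST01","MGMT-HOST02"
New-Cluster -Name "FT-MGMT-CL01" -Node "MGMT-HOST01","MGMT-HOST02" -StaticAddress 192.168.10.20
# Promote a clustered disk to a Cluster Shared Volume for the management VMs
Add-ClusterSharedVolume -Name "Cluster Disk 1"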

Management Host Computing (CPU) The management virtual machine workloads are expected to have a fairly high level of utilization. A conservative virtual CPU-to-logical processor ratio of two or fewer should be used. This ratio implies a minimum of two sockets per fabric management host, with six to eight cores per socket. During maintenance or failure of one of the two nodes, this CPU ratio will be temporarily exceeded. The following recommendation is provided for each fabric management host within the configuration: • Minimum 12 logical CPUs and 96 virtual CPUs Management Host Memory (RAM) Host memory should be sized appropriately to support the System Center products and their dependencies providing IaaS management functionality. Table 5 lists recommendations for each fabric management host within the configuration: Table 5 Recommendations for Fabric Management Hosts (192-GB RAM recommended). Management Host Network Use multiple network adapters, multiport network adapters, or both on each host server.

For converged designs, network technologies that provide teaming or virtual NICs can be used, provided that two or more physical adapters can be teamed for redundancy and multiple vNICs and VLANs can be presented to the hosts for traffic segmentation and bandwidth control. 10 Gigabit Ethernet or higher network interfaces must be used to reduce bandwidth contention and simplify the network configuration through consolidation. Management Host Storage Connectivity The requirement for storage is simply that shared storage is provided with sufficient connectivity, but no particular storage technology is required. The following guidance is provided to assist with storage connectivity choices.

For storage attached directly to the host, an internal SATA or SAS controller is required (for boot volumes), unless the design is 100 percent SAN based, including booting from SAN for the host operating system. Note A two-node or four-node host cluster can also be deployed. The amount of RAM will need to be increased to 192 GB (256 GB recommended) if deploying a two-node host cluster for the management fabric. The management architecture consists of a minimum of two physical nodes in a failover cluster with shared storage and redundant network connections. This architecture provides a highly available platform for the management systems. Some management systems have additional highly available options, and in these cases, the most effective highly available option will be used.

Topology dependent. Note that in Fast Track, Service Manager is used solely for private cloud virtual machine management. An advanced deployment topology can support up to 50,000 computers. As shown by the component scalability in the table, the default Fast Track deployment can support the management of up to 8000 virtual machines and associated fabric hosts, based on the deployment of a single 64-node Windows Server 2012 Hyper-V failover cluster. Note that individual components such as Operations Manager can be scaled further to support larger and more complex environments.

In these cases, a four-node management cluster would be required to support scale. Prerequisite Infrastructure Active Directory Domain Services (AD DS) AD DS is a required foundational component. Fast Track provides support for Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 AD DS customer deployments.

Previous versions are not directly supported for all workflow provisioning and deprovisioning automation. It is assumed that AD DS deployments exist at the customer site, and deployment of these services is not in scope for the typical deployment. • Forests and domains: The preferred approach is to integrate into an existing AD DS forest and domain, but this is not a strict requirement.

A dedicated resource forest or domain may also be employed as an additional part of the deployment. Fast Track does support multiple domains or multiple forests in a trusted environment using two-way forest trusts. • Trusts: Fast Track allows multidomain support within a single forest in which two-way forest (Kerberos) trusts exist between all domains.

This is referred to as multidomain or interforest support. Domain Name System (DNS) DNS name resolution is a required element for System Center 2012 SP1 components and the process automation solution. AD DS integrated DNS is required for the automated provisioning and deprovisioning components within Orchestrator runbooks as part of the solution.

The solution provides full support and automation for Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 AD DS integrated DNS deployments. Use of non-Microsoft or non-AD DS integrated DNS solutions might be possible, but they would not provide for automated creation and removal of DNS records related to virtual machine provisioning and deprovisioning processes.

Use of solutions outside of AD DS integrated DNS would either require manual intervention for these scenarios or require modifications to Cloud Services Process Pack Orchestrator runbooks. Dynamic Host Configuration Protocol (DHCP) To support dynamic provisioning and management of physical and virtual computing capacity within the IaaS infrastructure, use DHCP for all physical and virtual machines by default to support runbook automation. For physical hosts such as the fabric management cluster nodes and the scale-unit cluster nodes, DHCP reservations are recommended so that physical servers and NICs have known IP addresses while providing centralized management of those addresses through DHCP. Windows DHCP is required for automated provisioning and deprovisioning components within Orchestrator runbooks as part of the solution. DHCP is used to support host cluster provisioning, DHCP reservations, and other areas supporting dynamic provisioning of computing within the infrastructure. The solution provides full support and automation for Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 versions of the DHCP server role. Use of solutions outside of the Windows DHCP server role requires additional testing and validation.
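The record and reservation handling that these runbooks automate can be approximated with the in-box DnsServer and DhcpServer modules on Windows Server 2012; the zone, scope, addresses, and MAC below are placeholders for illustration.

# Hedged sketch: register a provisioned VM in AD DS integrated DNS and reserve its DHCP address
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "vm-web01" -IPv4Address 192.168.20.15
Add-DhcpServerv4Reservation -ScopeId 192.168.20.0 -IPAddress 192.168.20.15 -ClientId "00-15-5D-0A-0B-0C" -Name "vm-web01" -Description "Fabric-provisioned VM"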

Consolidated SQL Server Design Under System Center 2012 SP1, the support matrix for the various versions of SQL Server has been simplified. System Center 2012 SP1 supports SQL Server 2008 R2 and SQL Server 2012 fully, with limited component support for SQL Server 2008. Table 7 provides a compatibility matrix.

Table 7 Compatibility of System Center 2012 SP1 Components with SQL Server (RTM or later). To support advanced availability scenarios and more flexible storage options, SQL Server 2012 is required for Fast Track deployments of fabric management. Two SQL Server 2012 virtual machines must be deployed as a guest failover cluster to support the solution, with an option to scale to a four-node cluster. This multinode SQL Server failover cluster will contain all the databases for each System Center product in discrete instances by product and function. This separation of instances allows for division by unique requirements and scaling over time as the needs of each component grow. Note Not all features are supported for failover cluster installations. Some features cannot be combined on instances, and some allow configuration only at initial installation.

As a general rule, database engine services and analysis services will be hosted in separate instances within the failover cluster. Because SQL Server Reporting Services (SSRS) is not supported in a failover cluster, SSRS will be installed on the hosting System Center component server, the Operations Manager Reporting Server. This installation, however, will be 'files only,' and the SSRS configuration will point to remote reporting services databases hosted on the component instance on the SQL cluster.

The exception to this is the System Center Operations Manager (SCOM) Analysis Services and Reporting Services configuration. For this instance, Analysis Services and Reporting Services must be installed on the same server and in the same instance to support VMM and Operations Manager integration. All instances are required to be configured with Windows authentication. In System Center 2012 SP1, the App Controller and Orchestrator components can share an instance of SQL Server with the SharePoint farm, providing additional consolidation of the SQL instance requirements. Table 8 outlines the options required for each instance. Table 8 Database Instances and Requirements. Note For a more detailed version of this diagram, see the Appendix.

Note The Operations Manager and Service Manager database sizing assumes a managed infrastructure of 8000 virtual machines. Additional references for sizing are provided in the component sections below.

Virtual Machine Manager System Center 2012 SP1 VMM is required. Two VMM servers are deployed and configured in a failover cluster, using a dedicated SQL Server instance on the virtualized SQL Server cluster. One library share on the VMM servers will be utilized. Additional library servers can be added as needed. The VMM and Operations Manager integration is configured during the installation process.

The following hardware configurations will be used: Servers • Two guest clustered virtual machines • Windows Server 2012 • Four virtual CPUs • 8 GB memory • Two vNICs (one for client connections, one for cluster communications) • Storage: One operating system VHDX, one data VHDX (pass-through volume, iSCSI LUN or virtual Fibre Channel LUN) Operations Manager System Center 2012 SP1 Operations Manager is required. A minimum of two Operations Manager servers are deployed in a single management group, using a dedicated SQL Server instance on the virtualized SQL Server cluster. An Operations Manager agent is required to be installed on every management host and scale unit cluster node to support health monitoring functionality. Additionally, agents may be installed on every guest VM to provide guest-level monitoring capabilities.
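Agent deployment can be scripted from the management group; a hedged sketch follows, with the management server and host names as placeholders and the OperationsManager module assumed to be present on the machine running the commands.

# Hedged sketch: push the Operations Manager agent to a Hyper-V host
Import-Module OperationsManager
$mgmtServer = Get-SCOMManagementServer -Name "SCOM-MS01.contoso.local"
Install-SCOMAgent -DNSHostName "HV-HOST01.contoso.local" -PrimaryManagementServer $mgmtServer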

Note that Operations Manager gateway servers and additional management servers are supported for custom solutions; however, for the base reference implementation these additional roles are not implemented. The Operations Manager installation uses a dedicated SQL Server instance on the virtualized SQL Server cluster. The installation will follow a split SQL Server configuration: SQL Server Reporting Services and Operations Manager components will reside on the Operations Manager VM, while the SQL Server Reporting Services and Operations Manager databases will use a dedicated instance on the virtualized SQL Server cluster.

Note that for the Fast Track implementation the data warehouse is sized for 90-day retention (908 GB) instead of the default retention period. Management Scenarios Listed below are the primary management scenarios addressed in Fast Track, although the management layer can provide many more capabilities. • Fabric management • Fabric provisioning • Virtual machine provisioning and deprovisioning • IT service provisioning (including platform and application provisioning) • Fabric and IT service maintenance • Fabric and IT service monitoring • Resource optimization • Service management • Reporting (used by chargeback, capacity, service management, health, and performance) • Backup and disaster recovery • Security Fabric Management Fabric management is the act of pooling multiple disparate computing resources together and being able to subdivide, allocate, and manage them as a single fabric.

The various methods outlined in the sections that follow make fabric management possible. Hardware Integration Hardware integration refers to the management system being able to perform deployment or operational tasks directly against the underlying physical infrastructure, such as storage arrays, network devices, and servers.

Storage Integration and Management In VMM, you can discover, classify, and provision remote storage on supported storage arrays through the VMM console. VMM fully automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that is managed by VMM. SAN Integration To activate the storage features, VMM uses the Windows Storage Management API (SMAPI) to manage external storage through a storage management provider (SMP), or uses SMAPI together with the Microsoft standards-based storage management service to communicate with Storage Management Initiative Specification (SMI-S) compliant storage.

The Microsoft standards-based storage management service is an optional server feature that allows communication with SMI-S storage providers. It is activated during installation of System Center 2012 SP1. NetApp storage arrays have an SMI-S provider that is installed on the VMM Management server and enables the management of the NetApp storage array.
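Registering an SMI-S provider with VMM is done once per array and can be scripted with the VMM PowerShell module. The provider URL, port, Run As account, pool, and classification names below are examples only (parameter names can vary slightly between VMM releases); use the values supplied with the NetApp SMI-S provider in your environment.

# Run As account holding credentials for the SMI-S provider
$runAs = Get-SCRunAsAccount -Name "SMI-S Provider Account"
# Register the provider (CIM-XML over HTTPS on the provider host)
Add-SCStorageProvider -Name "NetApp SMI-S" -RunAsAccount $runAs `
    -NetworkDeviceName "https://smis01.contoso.local" -TCPPort 5989
# Refresh the provider so VMM discovers arrays and pools
$provider = Get-SCStorageProvider -Name "NetApp SMI-S"
Read-SCStorageProvider -StorageProvider $provider
# Classify a discovered pool so it can be assigned to hosts and clouds
$class = New-SCStorageClassification -Name "Gold" -Description "Primary SAN pool"
$pool = Get-SCStoragePool -Name "aggr1"
Set-SCStoragePool -StoragePool $pool -StorageClassification $class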

Windows Server 2012 Based Storage Integration Windows Server 2012 provides support for using SMB 3.0 file shares as shared storage for Hyper-V 2012. System Center 2012 SP1 allows you to assign SMB file shares to Hyper-V standalone hosts and clusters. System Center 2012 SP1 provides support for the Microsoft iSCSI software target using an SMI-S provider.
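Assigning an SMB 3.0 file share to a Hyper-V cluster can likewise be scripted. This is a sketch under assumed names (the file server, share, and cluster below are placeholders); the share must already exist and be known to VMM as a storage resource.

# Existing Hyper-V host cluster managed by VMM
$cluster = Get-SCVMHostCluster -Name "HVCluster01"
# Make the SMB 3.0 share available to the cluster for virtual machine placement
Register-SCStorageFileShare -FileSharePath "\\FS01.contoso.local\VMShare01" -VMHostCluster $cluster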

The Microsoft iSCSI software target is now fully integrated into Windows Server 2012. The installation file (.msi) for the SMI-S provider for Microsoft iSCSI target server is included in the System Center 2012 SP1 installation. Network Integration and Management Networking in VMM includes several enhancements that allow administrators to efficiently provision network resources for a virtualized environment. The networking enhancements include the following: Logical Networks System Center 2012 SP1 allows you to easily connect virtual machines to a network that serves a particular function in your environment, for example, the 'back-end,' 'front-end,' or 'backup' network.

To do this, associate IP subnets and, if needed, VLANs together into named units called logical networks. You can design your logical networks to fit your environment. Load Balancer Integration Networking in VMM includes load-balancing integration so that you can automatically provision load balancers in your virtualized environment. Load-balancing integration works together with other network enhancements in VMM. By adding a load balancer to VMM, you can load balance requests to the virtual machines that make up a service tier. You can use Microsoft Windows Network Load Balancing (NLB), or you can add supported hardware load balancers through the VMM console. NLB is included when you install VMM.

NLB uses round robin as the load-balancing method. To add supported hardware load balancers, you must install a configuration provider that is available from the load balancer manufacturer. The configuration provider is a plug-in to VMM that translates VMM PowerShell commands to API calls that are specific to a load balancer manufacturer and model. Switches and Ports VMM in System Center 2012 SP1 allows you to consistently configure identical capabilities for network adapters across multiple hosts by using port profiles and logical switches. Port profiles and logical switches act as containers for the properties or capabilities that you want your network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you can specify the capabilities in port profiles and logical switches, which you can then apply to the appropriate adapters. This can simplify the configuration process.
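A hedged sketch of the port profile and logical switch workflow follows; the names are placeholders, and some cmdlet parameters can differ between VMM releases. It creates an uplink port profile scoped to an existing logical network definition and attaches it to a new logical switch, which can then be applied to host network adapters.

# Uplink port profile tied to an existing logical network definition (site)
$lnd = Get-SCLogicalNetworkDefinition -Name "Datacenter_0"
$uplink = New-SCNativeUplinkPortProfile -Name "Uplink-Prod" -LogicalNetworkDefinition $lnd `
    -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent" `
    -EnableNetworkVirtualization $false
# Logical switch that carries the uplink profile; apply it to hosts afterward
$switch = New-SCLogicalSwitch -Name "Prod-LogicalSwitch" -EnableSriov $false -SwitchUplinkMode "Team"
New-SCUplinkPortProfileSet -Name "Uplink-Prod-Set" -LogicalSwitch $switch -NativeUplinkPortProfile $uplink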

Virtual Machine Networks Virtual machine networks use network virtualization, which extends the concept of server virtualization so that you can deploy multiple virtual networks (VM networks) on the same physical network. VM networks can be configured in multiple ways: Network virtualization (Hyper-V network virtualization): If you wish to support multiple tenants (also called clients or customers) with their own networks, isolated from the networks of others, use network virtualization.

To do this, create a logical network, and on top of that logical network create multiple VM networks, each of which uses the option to isolate using Hyper-V network virtualization. With this isolation, your tenants can use any IP addresses that they want for their virtual machines, regardless of the IP addresses that are used on other VM networks. Also, you can allow your tenants to configure some aspects of their own networks, based on limits that you specify. Note that if you use network virtualization and the virtual machines require network communication outside of the private subnet, you will need to provide a gateway. See 'How to Add a Gateway in System Center 2012 SP1.' VLAN-based configuration: If you are working with networks that use familiar VLAN technology for network isolation, you can manage those networks as they are, using VMM to simplify the management process.
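The sketch below, using placeholder names and subnets, follows that pattern: it creates a logical network with one site, then layers an isolated tenant VM network on top of it with Hyper-V network virtualization. Exact parameters may vary with the VMM build.

# Logical network and one site (logical network definition) scoped to a host group
$ln = New-SCLogicalNetwork -Name "Tenant-Infra" -EnableNetworkVirtualization $true
$hostGroup = Get-SCVMHostGroup -Name "Production"
$providerSubnet = New-SCSubnetVLan -Subnet "10.10.0.0/24" -VLanID 0
New-SCLogicalNetworkDefinition -Name "Tenant-Infra_Site0" -LogicalNetwork $ln `
    -VMHostGroup $hostGroup -SubnetVLan $providerSubnet
# Isolated tenant VM network on top of the logical network
$vmNet = New-SCVMNetwork -Name "TenantA" -LogicalNetwork $ln -IsolationType "WindowsNetworkVirtualization"
$tenantSubnet = New-SCSubnetVLan -Subnet "192.168.0.0/24"
New-SCVMSubnet -Name "TenantA_Subnet0" -VMNetwork $vmNet -SubnetVLan $tenantSubnet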

Virtual Switch Extension Management VMM 2012 SP1 can use a vendor-provided network-management console and the VMM management server together. You can configure settings or capabilities in the vendor-provided network-management console, also known as the management console for a forwarding extension, and then use the console and the VMM management server in a coordinated way. To do this, you must first install the provider software that is provided by the vendor on the VMM management server. Then you can add the virtual switch extension manager to VMM, which will cause the VMM management server to connect to the vendor network-management database and import network settings and capabilities from that database. The result is that you can see those settings and capabilities, and all your other settings and capabilities, together in VMM. Fabric Provisioning In accordance with the principle of standardization and automation, creating the fabric and adding capacity should be an automated process.

There are multiple scenarios for adding fabric resources in VMM. This section specifically refers to bare-metal provisioning of Hyper-V hosts and host clusters. In VMM, this is achieved through a multistep process:
1. Provisioning Hyper-V hosts
2. Configuring host properties, networking, and storage
3.
Note For additional in-guest workload and application-specific monitoring, simply deploy an Operations Manager agent within the virtual machine operating system and install the desired management pack. Be aware that this scenario would not be considered fabric monitoring.
Reporting A private cloud solution must provide a centralized reporting capability.

The reporting capability should provide standard reports detailing capacity, utilization, and other system metrics. The reporting functionality serves as the foundation for capacity-based, or utilization-based, billing and chargeback to tenants. In a service-oriented IT model, reporting serves the following purposes:
• Systems performance and health
• Capacity metering and planning
• Service-level availability
• Usage-based metering and chargeback
• Incident and problem reports that help IT focus efforts
As a result of VMM and Operations Manager integration, several reports are created and available by default.

However, metering and chargeback reports and incident and problem reports are enabled through the use of Service Manager. Table 11 lists the default reports created through the integration of VMM and Operations Manager.

Table 11 Default Reports Available Through VMM and Operations Manager Integration. Note You can also design your own reports. Service Management System The goal of System Center 2012 Service Manager is to support IT service management in a broad sense.

This includes implementing ITIL and MOF processes, such as change and incident management, and it can also include processes like allocating resources from a private cloud. Service Manager maintains a configuration management database (CMDB). The CMDB is the repository for nearly all configuration- and management-related information in the System Center 2012 environment. For the System Center Cloud Services Process Pack, this information includes VMM resources such as VM templates and VM service templates, which are all copied regularly from the VMM library into the CMDB. This allows objects such as VMs and users to be tied to Orchestrator runbooks for automated tasks such as request fulfillment, metering, and chargeback. User Self-Service The Microsoft Private Cloud Self-Service Solution consists of the following:
• Service Manager self-service portal with the Cloud Services Process Pack
• App Controller
Service Manager 2012 SP1 provides its own self-service portal. Using the information in the CMDB, Service Manager 2012 can create a service catalog that shows the services available to a particular user.

For example, when a user wants to create a virtual machine in the group's cloud, instead of passing the request directly on to VMM, as the App Controller does, Service Manager starts an Orchestrator workflow to handle the request. The workflow contacts the user's manager to get an approval for the request. If the request is approved, the workflow then starts an Orchestrator runbook.

The Service Manager self-service portal consists of two parts and requires a Service Manager server and database:
• Web content server
• SharePoint web part
Note These roles must be co-located on a single dedicated server. The Cloud Services Process Pack is an add-on component that enables IaaS capabilities through the Service Manager self-service portal and Orchestrator runbooks.

It provides:
• Standardized and well-defined processes for requesting and managing cloud services, including the ability to define projects, capacity pools, and virtual machines
• Natively supported request, approval, and notification capabilities to allow businesses to effectively manage their own allocated infrastructure capacity pools
App Controller is the portal that a self-service user uses, after a request has been fulfilled, to connect to and manage his or her virtual machines and services. App Controller connects directly to VMM, using the credentials of the authenticated user to display his or her virtual machines and services, and to provide a configurable set of actions. Service Management The service management layer provides the means for automating and adapting IT service management best practices, found in MOF 4.0 and ITIL, to provide built-in processes for incident resolution, problem resolution, and change control. MOF provides relevant, practical, and accessible guidance for today's IT professionals.

It is a downloadable framework that encompasses the entire service management lifecycle (see Figure 20). For more information about MOF, see. Figure 20 MOF 4.0 Model. Operations Manager also has the ability to integrate with Visual Studio Team Foundation Server. Streamlining the communications between development and IT operations teams, often called dev-ops, can help decrease the time it takes for application maintenance and delivery to move into the production stage, where your application delivers value to customers.

To speed interactions between these teams, it is essential to quickly detect and fix problems that might need assistance from the engineering team. For more information, see. Security The three pillars of IT security are confidentiality, integrity, and availability.

IT infrastructure threat modeling is the practice of considering what attacks might be attempted against the different components in an IT infrastructure. Generally, threat modeling assumes the following conditions:
• Organizations have resources (in this case, IT components) that they wish to protect
• All resources are likely to exhibit some vulnerability
• People might exploit these vulnerabilities to cause damage or gain unauthorized access to information
• Properly applied security countermeasures help mitigate threats that exist because of vulnerabilities
The IT infrastructure threat modeling process is a systematic analysis of IT components that compiles component information into profiles. The goal of the process is to develop a threat model portfolio, which is a collection of component profiles. One way to establish these pillars as a basis for threat modeling IT infrastructure is through MOF, a framework that provides practical guidance for managing IT practices and activities throughout the entire IT lifecycle. A service management function (SMF) in the Plan phase of MOF addresses creating plans for confidentiality, integrity, availability, continuity, and capacity. Another SMF in the Plan phase provides context to help understand the reasons for policies and their creation, validation, and enforcement, and includes processes to communicate policy, incorporate feedback, and help IT maintain compliance with directives.

The Deliver phase contains several SMFs that help make sure that project planning, solution building, and the final release of the solution are accomplished in ways that fulfill requirements and create a solution that is fully supportable and maintainable when operating in production. Figure 21 Security Threat Modeling. For more information on threat modeling, see the following resources:
Security for Microsoft private cloud is founded on three pillars: protected infrastructure, application access, and network access, as described in the sections that follow. Protected Infrastructure A defense-in-depth strategy is used at each layer of the Microsoft private cloud architecture. Security technologies and controls must be implemented in a coordinated fashion. An entry point represents data or process flow that crosses a trust boundary.

Any portions of an IT infrastructure in which data or processes cross from a less-trusted zone into a more-trusted zone should have a higher review priority. Users, processes, and IT components all operate at specific trust levels that vary between fully trusted and fully untrusted. Typically, parity exists between the level of trust assigned to a user, process, or IT component and the level of trust associated with the zone in which the user, process, or component resides. Malicious software poses numerous threats to organizations, from intercepting a user's login credentials with a keystroke logger to achieving complete control over a computer or an entire network by using a rootkit. Malicious software can cause websites to become inaccessible, destroy or corrupt data, and reformat hard disks.

Effects can include additional costs to disinfect computers, restore files, and reenter or re-create lost data. Virus attacks can also cause project teams to miss deadlines, leading to breach of contract or loss of customer confidence. Organizations that are subject to regulatory compliance can be prosecuted and fined.

A defense-in-depth strategy, with overlapping layers of security, is a strong way to counter these threats. The least-privileged user account (LUA) approach is an important part of that defensive strategy.

The LUA approach directs users to follow the principle of least privilege and log in with limited user accounts. This strategy also aims to limit the use of administrative credentials to administrators for administrative tasks only. Application Access AD DS provides the means to manage the identities and relationships that make up a Microsoft private cloud.

Integrated with Windows Server 2008 R2 and Windows Server 2012, AD DS provides the functionality needed to centrally configure and administer system, user, and application settings. Windows Identity Foundation allows .NET developers to externalize identity logic from their applications, improving developer productivity, enhancing application security, and allowing interoperability. Developers can enjoy greater productivity while applying the same tools and programming model to build on-premises software as well as cloud services.

They can create more secure applications by reducing custom implementations and using a single simplified identity model based on claims. Network Access Windows Firewall with Advanced Security combines a host firewall and IP Security (IPsec). Unlike a perimeter firewall, Windows Firewall with Advanced Security runs on each computer running a supported version of Windows and provides local defense from network attacks that might pass through your perimeter network or originate inside your organization. It also contributes to computer-to-computer connection security by allowing you to require authentication and data protection for communications. Network Access Protection (NAP) is a platform that allows network administrators to define specific levels of network access based on a client's identity, the groups to which the client belongs, and the degree to which the client complies with corporate governance policy. If a client is not compliant, NAP provides a mechanism for automatically bringing the client into compliance (a process known as remediation) and then dynamically increasing its level of network access. NAP includes an API that developers and vendors can use to integrate their products and use this health state validation, access enforcement, and ongoing compliance evaluation.
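As an illustration of how host firewall and IPsec policy can be combined, the following Windows PowerShell sketch (NetSecurity module, built into Windows Server 2012) adds a scoped inbound firewall rule and a connection security rule that requires authentication for inbound traffic. The rule names, port, and management subnet are examples only.

# Allow WinRM management traffic only from the management subnet
New-NetFirewallRule -DisplayName "Allow WinRM from Mgmt" -Direction Inbound -Protocol TCP `
    -LocalPort 5985 -RemoteAddress "10.0.1.0/24" -Action Allow
# Require IPsec authentication for inbound connections, request it for outbound
New-NetIPsecRule -DisplayName "Require Inbound Authentication" `
    -InboundSecurity Require -OutboundSecurity Request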

You can logically isolate server and domain resources to limit access to authenticated and authorized computers. This involves creating a logical network inside an existing physical network in which computers share a common set of requirements for more secure communications. In order to establish connectivity, each computer in the logically isolated network must provide authentication credentials to other computers in the isolated network, to prevent unauthorized computers and programs from gaining access to resources inappropriately. Requests from computers that are not part of the isolated network will be ignored. Desktop management and security have traditionally existed as two separate disciplines, yet both play central roles in helping to keep users safe and productive. Management provides proper system configuration, deploys patches against vulnerabilities, and delivers necessary security updates. Security provides critical threat detection, incident response, and remediation of system infection.

System Center 2012 SP1 Endpoint Protection (formerly known as Forefront Endpoint Protection 2012) aligns these two work streams into a single infrastructure. Key Features System Center 2012 SP1 Endpoint Protection makes it easier to help protect critical desktop and server operating systems against viruses, spyware, rootkits, and other threats:
• Single console for endpoint management and security: Configuration Manager provides a single interface for managing and securing desktops that reduces complexity and improves troubleshooting and reporting insights.
• Central policy creation: Administrators have a central location for creating and applying all client-related policies.
• Enterprise scalability: Use of the Configuration Manager infrastructure in System Center 2012 Endpoint Protection makes it possible to efficiently deploy clients and policies in large organizations around the globe. By using Configuration Manager distribution points and an automatic software deployment model, organizations can quickly deploy updates without relying on Windows Server Update Services (WSUS).

• Highly accurate and efficient threat detection: The anti-malware engine in System Center 2012 SP1 Endpoint Protection helps protect against the latest malware and rootkits, with a low false-positive rate, and helps to keep employees productive with scanning that has a low impact on performance.
• Behavioral threat detection: System Center 2012 SP1 Endpoint Protection uses system behavior and file reputation data to identify and block attacks on client systems from previously unknown threats. Detection methods include behavior monitoring, the cloud-based dynamic signature service, and dynamic translation.
• Vulnerability shielding: System Center 2012 SP1 Endpoint Protection helps prevent exploitation of endpoint vulnerabilities with deep protocol analysis of network traffic.
• Automated agent replacement: System Center 2012 SP1 Endpoint Protection automatically detects and removes common endpoint security agents, to lower the time and effort needed to deploy new protection.
• Windows Firewall management: System Center 2012 SP1 Endpoint Protection makes sure that Windows Firewall is active and working properly to help protect against network-layer threats.

It also allows administrators to more easily manage protections across the enterprise. Service Delivery Layer As the primary interface with the business, the service delivery layer is expected to know or obtain answers to the following questions:
• What services does the business want?
• What level of service are the business decision makers willing to pay for?
• How can private cloud move IT from being a cost center to becoming a strategic partner with the business?
With these questions in mind, IT departments must address two main issues within the service layer:
• How do we provide a cloudlike platform for business services that meets business objectives?
• How do we adopt an easily understood, usage-based cost model that can be used to influence business decisions?
An organization must adopt the private cloud architecture principles in order to meet the business objectives of a cloudlike service. See the section for more information on these principles. Figure 22 Service Delivery Layer of Dynamic Data Center Model. The components of the service delivery layer are as follows:
• Financial management: Financial management incorporates the functions and processes used to meet a service provider's budgeting, accounting, metering, and charging requirements.

The primary financial management concerns in a private cloud are providing cost transparency to the business and structuring a usage-based cost model for the consumer. Achieving these goals is a basic precursor to achieving the principle of encouraging desired consumer behavior. • Demand management: Demand management involves understanding and influencing customer demands for services, and includes the capacity to meet these demands. The principles of perceived infinite capacity and continuous availability are fundamental to stimulating customer demand for cloud-based services. A resilient, predictable environment with predictable capacity management is necessary to adhere to these principles.

Cost, quality, and agility factors influence consumer demand for these services. • Business relationship management: Business relationship management is the strategic interface between the business and IT. If an IT department is to adhere to the principle that it must act as a service provider, mature business relationship management is critical.

The business should define the capabilities of the required services and partner with the IT department on solution procurement. The business will also need to work closely with the IT department to define future capacity requirements to continue to adhere to the principle of perceived infinite capacity. • Service catalog: The output of demand and business relationship management will be a list of services or service classes offered and documented in the service catalog. This catalog describes each service class, eligibility requirements for each service class, service-level attributes, targets included with each service class (such as availability targets), and cost models for each service class. The catalog must be managed over time to reflect changing business needs and objectives. • Service lifecycle management: Service lifecycle management takes an end-to-end management view of a service.

A typical journey starts with identification of a business need and continues through business relationship management to the time when that service becomes available. Service strategy drives service design.

After launch, the service is transitioned to operations and refined through continual service improvement. Taking a service provider's approach is critical to successful service lifecycle management. • Service-level management: Service-level management is the process of negotiating SLAs and making sure they are met. SLAs define target levels for cost, quality, and agility by service class as well as the metrics for measuring actual performance.

Managing SLAs is necessary to achieve the perception of infinite capacity and continuous availability. This, too, requires IT departments to implement a service provider's approach. • Continuity and availability management: Availability management defines processes necessary to achieve the perception of continuous availability. Continuity management defines how risks will be managed in a disaster scenario to help make sure that minimum service levels are maintained. The principles of resiliency and automation are fundamental here.

• Capacity management: Capacity management defines the processes necessary to achieve the perception of infinite capacity. Capacity must be managed to meet existing and future peak demand while controlling underutilization. Business relationship and demand management are key inputs into effective capacity management and require a service provider's approach. Predictability and optimization of resource usage are primary principles in achieving capacity management objectives.

• Information security management: Information security management strives to make sure that all requirements are met for confidentiality, integrity, and availability of the organization's assets, information, data, and services. An organization's particular information security policies will drive the architecture, design, and operations of a private cloud.

Resource segmentation and multitenancy requirements are important factors to consider during this process. Operations The operations layer defines the operational processes and procedures necessary to deliver IT as a service (see Figure 23). This layer uses IT service management concepts that can be found in prevailing best practices such as ITIL and MOF. The main focus of the operations layer is to carry out the business requirements defined at the service delivery layer. Cloudlike service attributes cannot be achieved through technology alone; mature IT service management will be required.

The operations capabilities are common to all three service models: IaaS, platform as a service (PaaS), and software as a service (SaaS). Figure 23 Operations Layer of Dynamic Data Center Model. The components of the operations layer include the following: • Change management: Change management is responsible for controlling the lifecycle of all changes. The primary objective is to implement beneficial changes with minimum disruption to the perception of continuous availability.

Change management determines the cost and risk of making changes and balances them against the potential benefits to the business or service. Offering predictability and minimizing human involvement are the core principles behind a mature change management process. • Service asset and configuration management: Service asset and configuration management maintains information on the assets, components, and infrastructure needed to provide a service. Accurate configuration data for each component, and its relationship to other components, must be captured and maintained. This data should include historical, current, and expected future states, and it should be easily available to those who need it. Mature service asset and configuration management processes are necessary for achieving predictability. • Release and deployment management: Release and deployment management involves seeing that changes to a service are built, tested, and deployed with minimal disruption to the service or production environment.

Change management provides the approval mechanism (determining what will be changed and why), but release and deployment management is the mechanism for determining how changes are implemented. Predictability and minimal human involvement in the release and deployment process are critical to achieving cost, quality, and agility goals. • Knowledge management: Knowledge management is responsible for gathering, analyzing, storing, and sharing information within an organization.

Mature knowledge management processes are necessary to achieve a service provider's approach, and are a key element of IT service management. • Incident and problem management: The goal of incident and problem management is to resolve disruptive, or potentially disruptive, events with maximum speed and minimum disruption.

Problem management also identifies the root causes of past incidents and seeks to identify and prevent, or minimize the impact of, future ones. In a private cloud, the resiliency of the infrastructure helps make sure that faults, when they occur, have a minimal impact on service availability. Resilient design promotes rapid restoration of service continuity. Predictability and minimal human involvement are necessary to achieve this resiliency. • Request fulfillment: The goal of request fulfillment is to manage user requests for services. As the IT department adopts a service provider's approach, it should define available services in a service catalog based on business functionality.

The catalog should encourage desired user behavior by exposing cost, quality, and agility factors to the user. Self-service portals, when appropriate, can assist the drive toward minimal human involvement.

• Access management: The goal of access management is to deny access to unauthorized users while making sure that authorized users have access to needed services. Access management implements security policies defined by information security management at the service delivery layer.

Maintaining smooth access for authorized users is critical to achieving the perception of continuous availability. Adopting a service provider's approach to access management will also make sure that resource segmentation and multitenancy are addressed. • Systems administration: The goal of systems administration is to perform the daily, weekly, monthly, and as-needed tasks required for system health. A mature approach to systems administration is required for achieving a service provider's approach and for promoting predictability. The vast majority of systems administration tasks should be automated. Conclusion FlexPod with Microsoft Private Cloud is the optimal shared infrastructure foundation on which to deploy a variety of IT workloads. Cisco and NetApp have created a platform that is both flexible and scalable for multiple use cases and applications.

One common use case is to deploy Windows Server 2012 with Hyper-V as the virtualization solution, as described in this document. From virtual desktop infrastructure to Microsoft Exchange Server, Microsoft SharePoint Server, Microsoft SQL Server, and SAP, FlexPod can efficiently and effectively support business-critical applications running simultaneously from the same shared infrastructure. The flexibility and scalability of FlexPod also enable customers to start out with a right-sized infrastructure that can ultimately grow with and adapt to their evolving business requirements. Appendix Validated Bill of Materials The following product information is provided for reference and will require modification depending on specific customer environments.

Considerations include optics, cabling preferences, application workload, and performance expectations.