Introduction to Windows Deployment Services

This fourth-generation deployment accelerator integrates with recently released Microsoft deployment technologies such as Windows Deployment Services (WDS) to create a single path for image creation and automated installation. Microsoft Deployment is the recommended process and tool set for automating server and desktop deployment. You can download the complete Microsoft Deployment solution and its corresponding source code at no cost.

The following video provides an overview of Microsoft Deployment with demonstrations of key features. System Center Configuration Manager provides adaptive, policy-based automation to manage the full deployment, update, and extension life cycle for servers, clients, and handheld devices, across physical, virtual, distributed, or mobile systems.

It packs the most complete feature set for deployment. Note: Tracing may affect performance. Therefore, we recommend that you disable the tracing functionality when you do not need to generate a log. After you set this registry entry, trace information for the Windows Deployment Services server component is logged in the following file:

To do this, set the following registry entries. Where applicable, use the following options to obtain extra information. If you perform legacy management functions, set the following registry entry to enable tracing in the RISetup component:
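As an illustration only, file tracing for the WDSServer service is commonly enabled through a value under the software tracing key; the exact key name and resulting log location should be confirmed against current Microsoft documentation for your Windows Server version:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Tracing\WDSServer]
"EnableFileTracing"=dword:00000001
```

With a value like this set, trace output is typically written beneath %windir%\Tracing. Remember to set the value back to 0 when the log is no longer needed, since tracing carries a performance cost.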

Start a second instance of the WDSCapture component. Then, reproduce the problem by using the second instance of WDSCapture. The following are the definitions of the logging levels: The NONE logging level disables the logging functionality. By default, this logging level is used. The INFO logging level logs errors, warnings, and informational events.

This is the highest, most verbose logging level. At this point, the image storage location has been identified, and the size and number of servers has been determined for each location. This section determines the mechanisms used to provide fault tolerance for the WIM-based and VHD-based operating system images within the Windows Deployment Services infrastructure; mechanisms for keeping images consistent are also determined. To increase the availability of the infrastructure, the share through which the WIM-based and VHD-based images are accessed can be made fault tolerant.

Determine the method for making all shares used in the infrastructure fault tolerant. Distributed File System (DFS) can be used to provide a fault-tolerant method for accessing file shares. DFS allows the administrator to define a file namespace and provide multiple targets for folders contained within the namespace. When a client attempts to access a DFS-enabled share, the request is handled by the nearest DFS server hosting that particular share.

This can be used to control which server will provide the client with the install image. This is particularly useful for controlling bandwidth usage on WANs between clients in satellite sites and remote Windows Deployment Services servers in the hub. If the operating system images are stored locally, DFS can still be used to present a unified namespace with failover capabilities. Server clustering can increase the fault tolerance of a single content storage system file share.

The file share becomes a clustered resource running on a cluster with two or more computers. If the computer hosting the file share fails, the file share moves to a remaining active node.

The most practical approach to using server clustering is when an existing file server cluster is already implemented within the environment and can meet the expected capacity and performance requirements introduced by Windows Deployment Services.

Note: Server clustering cannot be used to provide fault tolerance when the content is stored locally on the Windows Deployment Services system, because Windows Deployment Services is not cluster-aware; server clustering, therefore, is not supported in that configuration. Although a share hosted in a server cluster can become part of a DFS namespace, the content of the share cannot be replicated using Distributed File System Replication.

Configuring the DFS namespace can be moderately difficult. Microsoft provides guidance for implementing this form of file protection using DFS. Server clustering tends to be extremely complex to set up due to the interaction between networks, shared storage, and specialized hardware and software configurations. If using existing file servers, then DFS is fairly low in cost as it is built into the operating system.

DFS allows for multiple copies of the target share to be accessible at the same time through a single namespace. The cluster allows only one copy of the target share to be accessible through a single namespace on the cluster. The following questions have been known to affect fault-tolerant design decisions: The mechanism for managing images is determined in this step. Although the file system or remote share has been made fault tolerant, it's possible that images that are shared across systems may become inconsistent with respect to each other.

This step identifies the method for maintaining and managing the consistency of the images. Images are managed locally at each server. In order to share images with other Windows Deployment Services servers, the images are manually copied to the target machines. Manually copying and managing images at each Windows Deployment Services instance is extremely complex to coordinate, particularly if a standard image is required for all sites.

Configuring DFS with replication can be moderately difficult. Microsoft provides guidance for implementing this form of file protection using DFS Replication. Third-party replication systems will vary in the complexity and knowledge required for implementation and operation. Manually copying and managing images at each Windows Deployment Services instance is extremely costly and prone to errors. If using existing file servers, then DFS is fairly low in cost because it is built into the operating system, although upgrading to Windows Server 2003 R2 or later is required for DFS Replication.

Depending upon the licensing costs, third-party replication systems could be costly. In addition, another skill set must be learned in order to appropriately manage the system. Although this method can increase the fault tolerance of the system by providing duplicate copies of the images, it's an extremely poor option due to the human factor.

When replication is combined with DFS namespaces, an extremely fault-tolerant system can be provided. The design of DFS Replication allows for a high level of performance in replicating data on networks.

The replication method implemented can have an effect on the performance of the infrastructure as the number of nodes increases. After deciding which method to use to ensure image consistency, record the chosen method in Table A-6 in Appendix A.

If replication is being used to distribute images to multiple Windows Deployment Services servers, then the refresh policy of the Boot Configuration Data (BCD) stores must be configured on each of the Windows Deployment Services servers. The frequency with which the server regenerates these stores is controlled by the refresh period configuration.

This setting is required in replication scenarios so that changes made to boot images (add, remove, rename, and so on) on the master server are reflected in the boot menus that clients receive from remote servers. This time interval should be set to an appropriate value based on the frequency of image updates. If changes to boot images do not occur very often, or if it is acceptable to have a long delay between a modification and when clients at remote sites see the changes, then set this to a higher value.

If changes to boot images are made often, or if booting clients are expected to immediately pick up the changes, then set this to a lower value. However, be careful in setting a low value. BCD generation causes CPU and disk overhead on the Windows Deployment Services server, and configuring the value to too small a window can harm performance on the server. A good default value is 30 minutes. These folders contain server-specific data.
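For reference, on the full server role this policy is typically set from the command line with WDSUTIL; the sketch below (enabling the policy with a 60-minute refresh period) should be verified against the WDSUTIL documentation for your server version:

```
WDSUTIL /Set-Server /BcdRefreshPolicy /Enabled:Yes /RefreshPeriod:60
```

The refresh period is expressed in minutes, so a value of 60 trades roughly hourly BCD regeneration overhead against a delay of up to an hour before remote clients see boot image changes.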

For each share utilized in the infrastructure, the mechanism for ensuring fault tolerance has been determined. In addition, image consistency among the servers has been addressed. All of the data collected in this step was recorded in Table A-6 in Appendix A. For each new Windows Deployment Services instance, determine the method used by clients to discover the Windows Deployment Services servers.

When the Windows Deployment Services server and the PXE client reside on the same network segment, no additional changes to the infrastructure are required.

The broadcast will be heard by the Windows Deployment Services server. On networks where the clients and the Windows Deployment Services server are located on separate subnets, a mechanism for discovering the Windows Deployment Services server is required. Clients can discover Windows Deployment Services servers either through network boot referrals or through IP helper updates.

Configuration of the IP helpers should follow these guidelines. Configuring network boot referrals can be complex because each DHCP server has to be associated with a Windows Deployment Services server.
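For the IP helper approach, the update is typically a per-interface setting on the router. For example, on a Cisco IOS router the client-facing interface would carry entries like the following (the interface name and addresses are illustrative only):

```
interface Vlan10
 ip helper-address 10.0.0.10    ! DHCP server
 ip helper-address 10.0.0.25    ! Windows Deployment Services server
```

Each helper address causes the router to forward client DHCP/PXE broadcasts as unicast to that server, which is why both the DHCP server and the Windows Deployment Services server usually appear in the list.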

This option reduces complexity because IP helper tables on the routers are updated with the IP address of the Windows Deployment Services servers. This option provides no fault tolerance; if the Windows Deployment Services server being referred is down, the client will fail to complete the PXE boot process.

This option provides fault tolerance provided there is more than one Windows Deployment Services server that can answer the client. This option allows any Windows Deployment Services server in the location to answer a client request. In locations where the clients and Windows Deployment Services servers are separated by a router, a mechanism for discovering the servers was determined. The method for Windows Deployment Services discovery was recorded in Table A-7 in Appendix A so that it can be implemented at deployment.

This guide summarized the critical design decisions, activities, and tasks required to enable a successful design of a Windows Deployment Services infrastructure. It addressed the technical aspects, service characteristics, and business requirements required to complete a comprehensive review of the decision-making process.

This guide, when used in conjunction with product documentation, allows organizations to confidently plan the implementation of Windows Deployment Services technologies. In addition to the product documentation, the following web links contain supplemental information on product concepts, features, and capabilities addressed in this guide. Use the job aids that follow to record the information required for the organization to implement Windows Deployment Services. Step 1. Use this job aid to record Windows Deployment Services instance locations and the number of instances required.

Step 2. Use this job aid to record whether a new installation will be put into place or if an upgrade of existing infrastructure is to be used for each location. Step 3. Use this job aid to record for each instance whether a full Windows Deployment Services server will be deployed, or only the Transport Server role.

Step 4. Use this job aid to record the physical and virtual instance requirements. Step 5. Use this job aid to record the fault-tolerance and consistency method for each share used in the infrastructure. Step 6. Use this job aid to record the Windows Deployment Services client discovery method. The following information identifies important monitoring counters used for capacity planning and performance monitoring of a system. Over-committing CPU resources can adversely affect all the workloads on the same server, causing significant performance issues for a large number of users.

Because CPU resource use patterns can vary significantly, no single metric or unit can quantify total resource requirements. At the highest level, measurements can be taken to see how the processor is utilized within the entire system and whether threads are being delayed.

The following table lists the performance counters for capturing the overall average processor utilization and the number of threads waiting in the processor Ready Queue over the measurement interval. As a general rule, processors that are running for sustained periods at greater than 90 percent busy are running at their CPU capacity limits. Processors running regularly in the 75—90 percent range are near their capacity constraints and should be closely monitored.

Processors regularly reporting 20 percent or less utilization can make good candidates for consolidation. For response-oriented workloads, sustained periods of utilization above 80 percent should be investigated closely as this can affect the responsiveness of the system.
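As a sketch, the rule-of-thumb thresholds above can be encoded as a simple classifier. The function name and labels are illustrative only and are not part of any Microsoft tooling:

```python
def classify_cpu_utilization(avg_busy_percent: float) -> str:
    """Classify sustained average CPU utilization using the
    rule-of-thumb thresholds described above."""
    if avg_busy_percent > 90:
        return "at capacity limit"        # sustained > 90% busy
    if avg_busy_percent >= 75:
        return "near capacity - monitor"  # 75-90% range
    if avg_busy_percent <= 20:
        return "consolidation candidate"  # regularly <= 20% utilized
    return "normal"

print(classify_cpu_utilization(95))  # at capacity limit
print(classify_cpu_utilization(80))  # near capacity - monitor
print(classify_cpu_utilization(15))  # consolidation candidate
```

For response-oriented workloads, a stricter investigation threshold (around 80 percent, per the guidance above) may be more appropriate than the generic 90 percent limit.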

For throughput-oriented workloads, extended periods of high utilization are seldom a concern, except as a capacity constraint. Unique hardware factors in multiprocessor configurations and the use of Hyper-Threaded logical processors raise difficult interpretation issues that are beyond the scope of this document. Additionally, comparing results between 32-bit and 64-bit versions of the processor is not as straightforward as comparing performance characteristics across like hardware and processor families.

The Processor Queue Length can be used to identify if processor contention, or high CPU utilization, is caused by the processor capacity being insufficient to handle the workloads assigned to it.

The Processor Queue Length shows the number of threads that are delayed in the processor Ready Queue and are waiting to be scheduled for execution. The value listed is the last observed value at the time the measurement was taken. On a machine with a single processor, observations where the queue length is greater than 5 are a warning sign that there is frequently more work available than the processor can handle readily.

When this number is greater than 10, it is an extremely strong indicator that the processor is at capacity, particularly when coupled with high CPU utilization. On systems with multiple processors, divide the queue length by the number of physical processors. On a multiprocessor system configured using hard processor affinity (that is, processes are assigned to specific CPU cores), large values for the queue length can indicate that the configuration is unbalanced.
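The per-processor interpretation above can be sketched as follows; the function name and return labels are illustrative, and the thresholds (5 and 10 per processor) come from the guidance in this section:

```python
def queue_pressure(queue_length: int, physical_processors: int = 1) -> str:
    """Interpret Processor Queue Length by normalizing against the
    physical processor count, then applying the single-processor
    thresholds (>5 warning, >10 at capacity)."""
    per_cpu = queue_length / physical_processors
    if per_cpu > 10:
        return "processor at capacity"
    if per_cpu > 5:
        return "more work than the processor can handle readily"
    return "ok"

print(queue_pressure(12))     # processor at capacity
print(queue_pressure(12, 4))  # ok (only 3 waiting threads per processor)
```

Note that the counter is a point-in-time observation, so conclusions should be drawn from repeated samples rather than a single reading.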

Although Processor Queue Length typically is not used for capacity planning, it can be used to identify whether systems within the environment are truly capable of running the loads or if additional processors or faster processors should be purchased for future servers. In order to sufficiently cover memory utilization on a server, both physical and virtual memory usage needs to be monitored.

Low memory conditions can lead to problems ranging from performance degradation, such as excessive paging when physical memory is low, to catastrophic failures, such as widespread application failures or system crashes when virtual memory becomes exhausted. As physical RAM becomes scarce, the virtual memory manager frees up RAM by transferring the information in a memory page to a cache on the disk. Excessive paging to disk might consume too much of the available disk bandwidth and slow down applications attempting to access their files on the same disk or disks.

For capacity planning, watch for upward trends in this counter. Excessive paging can usually be reduced by adding memory. Because disk bandwidth is finite, capacity used for paging operations is unavailable for application-oriented file operations. The Available MBytes counter displays the amount of physical memory, in megabytes, that is immediately available for allocation to a process or for system use. The percent of Available Megabytes can be used to indicate whether additional memory is required.

Add memory if this value drops consistently below 10 percent. To calculate the percent of Available Megabytes: This is the primary indicator to determine whether the supply of physical memory is ample. Downward trends can indicate a need for additional memory. Counters are available for Available Bytes and Available KBytes. The Pool Paged Bytes is the size, in bytes, of the paged pool, an area of system memory used by the operating system for objects that can be written to disk when they are not being used.
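The percent of Available Megabytes described above can be sketched as follows, assuming it is computed as the Available MBytes counter divided by total installed physical memory (the function names and the 10 percent threshold reflect the guidance in this section):

```python
def percent_available(available_mbytes: float, total_physical_mbytes: float) -> float:
    """Percent of physical memory immediately available, from the
    Available MBytes counter and total installed RAM in megabytes."""
    return available_mbytes / total_physical_mbytes * 100

def needs_more_memory(available_mbytes: float, total_physical_mbytes: float) -> bool:
    # Add memory if availability consistently drops below 10 percent.
    return percent_available(available_mbytes, total_physical_mbytes) < 10

print(percent_available(2048, 8192))  # 25.0
print(needs_more_memory(512, 8192))   # True (only 6.25% available)
```

As with the processor counters, a single sample below the threshold is not conclusive; the guidance calls for acting when the value drops consistently.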

The Pool Nonpaged Bytes is the size, in bytes, of the nonpaged pool, an area of system memory used by the operating system for objects that can never be written to disk but must remain in physical memory as long as they are allocated.

This ratio can be used as a memory contention index to help in planning capacity. As this approaches zero, additional memory needs to be added to the system to allow both the Nonpaged pool and Page pool to grow.

Status information for each TCP connection is stored in the Nonpaged pool. By adding memory, additional space can be allocated to the Nonpaged pool to handle additional TCP connections. The Transition Faults counter returns the number of soft or transition faults during the sampling interval. Transition faults occur when a trimmed page on the Standby list is re-referenced. The page is then returned to the working set. It is important to note that the page was never saved to disk.

An upward trend is an indicator that there may be a developing memory shortage. High rates of transition faults on their own do not indicate a performance problem. However, if the Available Megabytes is at or near its minimum threshold value, usually 10 percent, then it indicates that the operating system has to work to maintain a pool of available pages.

The Committed Bytes counter measures the amount of committed virtual memory. Committed memory is allocated memory for which the system must reserve space, either in physical RAM or in the paging file, so that this memory can be addressed by threads running in the associated process.

A memory contention index called the Committed Bytes:RAM can be calculated to aid in capacity planning and performance. When the ratio is greater than 1, virtual memory exceeds the size of RAM and some memory management will be necessary.
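The Committed Bytes:RAM contention index can be computed as a simple ratio; the function below is a sketch (names are illustrative), using the rule from this section that a value above 1 means virtual memory exceeds physical RAM:

```python
def committed_bytes_to_ram_ratio(committed_bytes: int, ram_bytes: int) -> float:
    """Committed Bytes : RAM contention index. A ratio above 1 means
    committed virtual memory exceeds the size of physical RAM."""
    return committed_bytes / ram_bytes

GB = 1024 ** 3
ratio = committed_bytes_to_ram_ratio(12 * GB, 8 * GB)  # 12 GB committed, 8 GB RAM
print(ratio)      # 1.5
print(ratio > 1)  # True -> consider adding memory
```

Tracking this ratio over time is more useful than a single reading, since an upward trend shows demand growing relative to installed RAM.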

Memory should be added when the ratio exceeds 1. The Working Set counter shows the amount of memory allocated to a given process that can be addressed without causing a page fault to occur. Watch for upward trends for important applications. To measure their working sets, application-specific counters need to be used. Step 4: Select the default path or enter a custom path to store the directories, then click Next. Step 5: When a warning message appears, click Yes to confirm.

The configuration process for Windows Deployment Services takes several minutes, depending on the speed of the server being used. Once Windows Deployment Services has been successfully configured, it is ready to use. To sum up, this post has introduced what Windows Deployment Services is and explained its purpose and requirements.


