Monday, 7 April 2014

File Services in Windows Server 2012

File Services: a new approach in Windows Server 2012

Article by Ian Murphy, 4 March 2013

Within Windows Server 2012, a new range of file services offers improved functionality and new features for sysadmins

File servers are the basic bread-and-butter servers on computer networks. With previous versions of Windows Server, basic file services were installed on all servers. With Windows Server 2012 (WS2012), the File and Storage Services (FSS) role is installed by default, but the additional services that can be enabled as part of FSS have to be specifically configured for a server.
The basic functionality installed by default is there only so that PowerShell or Server Manager can manage the storage connected to a server. If you want to use the additional roles, you can install them through Server Manager or via PowerShell.
One of the major management tools in WS2012 is PowerShell, and there is a specific set of cmdlets that enables administrators to manage FSS; a short installation sketch follows below.
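
As a rough sketch (the role-service names below should be verified with Get-WindowsFeature, and the selection shown is only an example), the FSS role services can be listed and installed from an elevated PowerShell session:

    # List the File and Storage Services role services and their install state
    Get-WindowsFeature FileAndStorage-Services, FS-FileServer, FS-BranchCache,
        FS-Data-Deduplication, FS-DFS-Namespace, FS-DFS-Replication,
        FS-Resource-Manager, FS-VSS-Agent, FS-iSCSITarget-Server, FS-NFS-Service

    # Install an individual role service, for example Data Deduplication,
    # together with its management tools
    Install-WindowsFeature FS-Data-Deduplication -IncludeManagementTools
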
Additional FSS Roles
There are two groups of roles that can be added once FSS is installed. One group, Storage Services, is part of the default installation and allows the administrator to manage storage pools and storage spaces.
The second group, File and iSCSI Services, contains some default roles and a number of new and enhanced roles. The additional roles are:
  1. File Server (default)
     Allows the management of file shares on the network.
     
  2. BranchCache for network files
     First introduced in Windows Server 2008 R2 (WS2008R2), BranchCache speeds up access to data in branch offices by caching commonly used files at the branch. It uses data deduplication to reduce network traffic over the WAN by sending only unique blocks of data. Where a block is identified as being used by multiple files, only the pointer to that block of data is sent.
     In WS2012, Microsoft has done some significant tuning and simplification of the BranchCache feature. It is now tightly integrated into the File Server role and takes advantage of the data deduplication feature (see below). Administrators can preload cache data before users have requested it, which allows for read-ahead on large data sets or where a branch is working on a specific project (a short preloading sketch appears after this list). All cached content is now encrypted by default to increase security and reduce the chances of data theft.
     
  3. Data deduplication (default)
     Now part of the WS2012 operating system, data deduplication is a much more efficient way of reducing the space required to store data than NTFS compression or Single Instance Storage. Deduplication works by looking for common blocks of data and replacing them with a pointer to a master copy of that data block.
     The amount of space saved depends heavily on the type of data being stored. For example, a file server holding Office documents (PowerPoint, Word, Excel, etc) should be able to reduce the used space by 50 percent. If the server is storing a lot of virtual machine hard disks (VHDs), the savings could be as high as 95 percent. The reality for most organisations will be somewhere between the two.
     Historically, data deduplication has been very heavy on compute resources, which has led companies to use it only in overnight batch processing, and then only on backups or older data tiers. With WS2012, data deduplication can be used on live storage without penalising users or applications, because the feature throttles its own CPU and memory use. A sketch of enabling it on a volume appears after this list.
     If BranchCache and data deduplication are both used on file servers, the amount of time and bandwidth required to synchronise data across the WAN is significantly reduced.
     
  4. DFS Namespaces
     For performance and security, data may be spread across a number of different physical locations. Using DFS Namespaces, those locations can be presented to users as if they were a single hierarchical set of folders.
     In WS2012, Microsoft has added some new features. The most important for administrators is full PowerShell support for DFS with a set of custom cmdlets (a short sketch appears after this list). At the same time, Microsoft has deprecated the existing DFS administrative tools to ensure that administrators have a single, consistent management option.
     DFS is DirectAccess friendly, which means that users who connect to the corporate network across DirectAccess can access DFS Namespaces as easily as if they were locally connected.
     
  5. DFS Replication
     Replicates files between multiple servers across multiple sites. Only the portion of each file that has changed is actually replicated, so after the first synchronisation the amount of WAN traffic is substantially reduced.
     DFS Replication supports volumes on which data deduplication is running, thereby keeping storage space to a minimum. In addition, DFS will route DirectAccess requests for files to the DFS location closest to the user to reduce latency and network bandwidth use.
     
  6. File Server Resource Manager (FSRM) (default)
     FSRM is a suite of tools that enables administrators to manage and classify data. One of the key elements of FSRM is the ability to create a File Classification Infrastructure (FCI). FCI makes it possible for administrators to apply policies to files based on their classification.
     Automatic classification was available in WS2008R2, but with WS2012 Microsoft has added PowerShell cmdlets and more granular control over how data is classified. WS2012 also adds support for manual classification of files, which enables users to apply their own classifications without creating new rules.
     There are a couple of reasons for doing this. It may be for security, where the policies relate to the sensitivity of the data and the need to encrypt the files. Alternatively, it may be about the retention of data as required by compliance legislation. In WS2012, Microsoft has introduced Dynamic Access Control (DAC) to restrict access to files, and the policies to support DAC are applied through FSRM.
     Among the more commonly used tools within FSRM is Quota Management, which controls how much space individual users, folders and even disks can have. In environments where space is very restricted, this is an essential but often underused tool.
     A feature that is not used enough is File Screening Management (FSM). Although it is seen as a blunt tool that can be circumvented, FSM enables the administrator to restrict the storage of files with specific extensions such as AVI or MP3. There needs to be a careful balance between legitimate use of multimedia material inside the company and the prevention of illegal content being stored on corporate servers.
     Updated and new File Management Tasks include support for Active Directory Rights Management Services (AD RMS), which enforces the encryption of any file with an AD RMS protector. This can be done manually or through an existing template. Another feature is dynamic namespace support, which configures file management tasks based on the content of a specified folder.
     One of the most frustrating problems for users and administrators alike is dealing with Access Denied messages. With WS2012 FSRM, administrators can now customise the access-denied error message. This might be to enable a file access request email from the access-denied dialog, or to direct the user to the local administrator who is responsible for managing those resources. A sketch covering quotas, file screens and the access-denied message appears after this list.

  7. File Server Volume Shadow Copy Service (VSS) Agent Service
     Enables application-consistent snapshots (shadow copies). With previous versions of Windows Server, VSS only supported shadow copies of data on the local server. With WS2012, Microsoft has added VSS for SMB File Shares, which extends shadow copy support to network volumes.
     Administrators install the File Server VSS Agent on the file server where the application data is located. They then install the VSS provider on the server where the application is located. The provider talks to the agent using the new File Server Remote VSS protocol in order to manage the shadow copies of the data.
     The big benefit here is that there is no longer any need to take the volume offline when making backup copies of data on a network SMB share. This is part of an extended set of features in WS2012 that are now available through SMB 3.0.

  8. iSCSI Target Server (default)
     By merging Windows Storage Server into the main Windows Server code base, the iSCSI Target Server role now enables any server to act as a block storage device accessible over the network. One of the main uses of iSCSI across WS2012 networks is likely to be diskless booting of servers using differencing virtual disks (a sketch appears after this list).
     A single differencing disk can support up to 256 computers, reducing storage requirements by up to 90 percent. This use of a 'golden master' image makes it much easier to maintain server images, as there are fewer images to patch and the patches can be deployed faster. Installation of servers is also much faster when done through a 'golden image' stored on an iSCSI Target Server. As the disk already exists, a server can be deployed in a couple of minutes rather than hours.
     In WS2012, iSCSI also supports non-Windows iSCSI initiators, making it possible to share storage on Windows servers with non-Windows machines.

  9. iSCSI Target Server Provider (VDS and VSS)
     Enables the management of iSCSI virtual disks.

  10. Server for Network File System (NFS)
     Allows mixed UNIX and Windows environments to share files between machines and users via the NFS protocol (a short sketch appears after this list).
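
The BranchCache preloading described in item 2 can be scripted with the BranchCache cmdlets. This is only a sketch; the share, staging and package paths are made-up examples and the parameters should be checked against the module's help:

    # On the head-office file server: generate content information (hashes)
    # for a share before any branch user has requested the files
    Publish-BCFileContent -Path "D:\Shares\Projects" -StagingPath "D:\BCStaging" -Recurse

    # Package the staged content information so it can be moved to the branch
    Export-BCCachePackage -StagingPath "D:\BCStaging" -Destination "D:\BCPackage"

    # On the branch hosted cache server: import the package file produced above
    # to preload the cache (the file name is whatever Export-BCCachePackage created)
    Import-BCCachePackage -Path "E:\Transfer\PeerDistPackage.zip"
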
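Data deduplication (item 3) is enabled per volume. A minimal sketch, where E: stands in for a data volume:

    # Add the Data Deduplication role service
    Install-WindowsFeature FS-Data-Deduplication

    # Enable deduplication on the volume and only process files older than three days
    Enable-DedupVolume -Volume "E:"
    Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3

    # Start an optimisation job immediately rather than waiting for the schedule
    Start-DedupJob -Volume "E:" -Type Optimization

    # Report how much space has been reclaimed
    Get-DedupStatus -Volume "E:"
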
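The DFS Namespaces cmdlets mentioned in item 4 cover namespace roots and folders. A minimal sketch using invented server, domain and share names:

    # Create a domain-based namespace root (the underlying share \\FS01\Public must already exist)
    New-DfsnRoot -Path "\\corp.example.com\Public" -TargetPath "\\FS01\Public" -Type DomainV2

    # Add a namespace folder that points at a share on a different server
    New-DfsnFolder -Path "\\corp.example.com\Public\Projects" -TargetPath "\\FS02\Projects"
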
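The FSRM features in item 6 (quotas, file screening and the customised access-denied message) map onto cmdlets in the FileServerResourceManager module. A sketch with example paths and sizes; the exact parameters of Set-FsrmAdrSetting should be checked against the module's help:

    # Apply a 5 GB quota to a user's folder
    New-FsrmQuota -Path "E:\Shares\Users\jsmith" -Size 5GB

    # Block audio and video files on a public share
    New-FsrmFileGroup -Name "Blocked media" -IncludePattern @("*.mp3", "*.avi")
    New-FsrmFileScreen -Path "E:\Shares\Public" -IncludeGroup "Blocked media" -Active

    # Customise the access-denied message shown to users
    Set-FsrmAdrSetting -Event AccessDenied -Enabled:$true -DisplayMessage "Contact the data owner for this share to request access."
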
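Item 8's diskless-boot scenario uses a 'golden master' VHD with per-server differencing disks on the iSCSI Target Server. A sketch with invented paths, target names and initiator IQNs:

    # Create a differencing virtual disk whose parent is the golden master image
    New-IscsiVirtualDisk -Path "E:\iSCSI\Server01.vhd" -ParentPath "E:\iSCSI\GoldenMaster.vhd"

    # Create a target that only the booting server's iSCSI initiator may connect to
    New-IscsiServerTarget -TargetName "Server01-Boot" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:server01.corp.example.com"

    # Assign the differencing disk to that target
    Add-IscsiVirtualDiskTargetMapping -TargetName "Server01-Boot" -Path "E:\iSCSI\Server01.vhd"
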
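For Server for NFS (item 10), shares can be created and granted to UNIX hosts with the NFS cmdlets. A small sketch with invented names; parameter values should be checked against the module's help:

    # Create an NFS share and give a UNIX host read/write access
    New-NfsShare -Name "Engineering" -Path "E:\Shares\Engineering"
    Grant-NfsSharePermission -Name "Engineering" -ClientName "unixhost01" -ClientType host -Permission readwrite
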
File and storage services enabled outside of FSS
In addition to the FSS roles and features listed above, Microsoft has enhanced and added other file service functionality:
  1. Offline Files
     Offline Files caches files locally so that the user is unaffected by slow or dropped network connections. With WS2012, Always Offline is a new mode that automatically moves the user to offline mode when it detects network problems.
         
  2. Thin Provisioning
     Thin provisioning is a just-in-time storage allocation mechanism that extends storage volumes as they fill up. The advantage of thin provisioning is that it eliminates the need to allocate large amounts of storage on a 'just-in-case' basis, an approach that often leaves a lot of storage unused and unavailable. If the storage is no longer needed, thin provisioning supports a feature Microsoft calls Trim Storage, which reallocates unused storage back into the general storage pool until it is needed (a sketch appears after this list).
         
  3. Storage Spaces
     This is a new technology in WS2012 that makes it easy for administrators to virtualise storage by creating storage pools from existing disks and then creating virtual disks known as storage spaces. For smaller organisations that cannot afford a Storage Area Network (SAN), Storage Spaces is an easier and cheaper solution.
     Storage spaces are used in WS2012 to replace Dynamic Disks, which are being deprecated. This means that when upgrading servers from earlier versions of Windows Server, administrators need to think about how they will migrate their disk configurations to Storage Spaces.
     Like RAID, Storage Spaces can be used to provide resiliency for storage. Support for disk striping, mirroring and parity is built into Storage Spaces. The advantage of using Storage Spaces rather than physical drives for parity is that there is no need to have unused or spare physical drives available to the storage subsystem. Instead, new parity drives can be created from available storage and then provisioned. This can be done through PowerShell cmdlets (a sketch appears after this list). For example, if a Storage Spaces drive fails, the parity is immediately used to provide cover. As soon as the parity is brought into use, a PowerShell cmdlet can create another parity drive to ensure continuous protection. If this were being done with physical drives, there would be an operational risk while the parity drive was in use and the failed drive was waiting for a physical replacement.
         
  4. Dynamic Access Control (DAC)
     This is a whole new way of managing and auditing user access from Microsoft, through a new approach to data and user permissions. The existing user management approach inside Windows Server has been around for a long time now, and although it has evolved, it has a lot of pain points for administrators.
     DAC is a claims-based approach to access control. It underpins data classification and allows documents to be managed based on their content. User access is controlled by policies; that is nothing new for Windows Server, but these policies are much more powerful yet are defined using natural-language statements (a sketch appears after this list).
     Auditing is a key feature of DAC, which makes it possible to align user access logs with compliance legislation such as Basel III and Sarbanes-Oxley (SOX).
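
The Storage Spaces workflow in item 3 maps onto a handful of storage cmdlets. A sketch with invented friendly names; the storage subsystem name on a given server should be checked with Get-StorageSubSystem:

    # Gather the physical disks that are eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true

    # Create a storage pool from those disks
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # Create a parity-protected storage space (virtual disk) from the pool
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "DataSpace" -ResiliencySettingName Parity -UseMaximumSize

    # Initialise, partition and format the new disk
    Get-VirtualDisk -FriendlyName "DataSpace" | Get-Disk | Initialize-Disk -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume
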
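Thin provisioning (item 2) is a property of a virtual disk created from a storage pool. A one-line sketch, assuming a pool called "Pool1" such as the one created in the Storage Spaces example:

    # Present a 2 TB volume while consuming physical space only as data is written
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ThinData" -Size 2TB -ProvisioningType Thin -ResiliencySettingName Simple
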
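The claims and central access policies behind DAC (item 4) are defined in Active Directory and published to file servers through Group Policy. A heavily simplified sketch using the Active Directory cmdlets with invented names; the conditional permissions themselves are normally authored in Active Directory Administrative Center:

    # Define a user claim sourced from the 'department' attribute in Active Directory
    New-ADClaimType -DisplayName "Department" -SourceAttribute department

    # Create a central access rule; its conditional permissions (the natural-language
    # "User.Department equals Finance" style statements) are attached to this rule
    New-ADCentralAccessRule -Name "Finance documents"

    # Group the rule into a central access policy for Group Policy to deploy to file servers
    New-ADCentralAccessPolicy -Name "Finance policy"
    Add-ADCentralAccessPolicyMember -Identity "Finance policy" -Members "Finance documents"
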
Conclusion
Microsoft has invested a lot of effort in improving the file services and functionality inside WS2012. Those administrators who already have WS2008R2 will see a lot of changes and new features. Administrators coming from earlier versions of Windows Server may find themselves struggling to decide where to start.
