
The importance of following ALL the authoritative restore steps


Hello, David Everett here again. Recently a customer contacted Microsoft Product Support to determine why the Connect to Domain Controller option in Active Directory Users and Computers (aka: ADUC or dsa.msc) was generating an incomplete list of Domain Controllers (DCs) for one domain. Even though the list of available DCs was truncated we found we could manually enter the name of any DC not in the list and Active Directory Users and Computers would connect to the DC without issue.

Determining the scope of the issue:

Wanting to see if the truncated list of DCs was specific to Active Directory Users and Computers or if other tools also failed to locate all the DCs we ran nltest.exe /dclist:contoso.com. The output shown below revealed a complete list of domain controllers for contoso.com but many were missing their [DS] Site: information. We found that those DCs missing their [DS] Site: information happened to be the same DCs missing when Connect to Domain Controller was selected. One final observation was that the list of available DCs varied from one DC to the next when selecting Connect to Domain Controller.

Get list of DCs in domain 'contoso.com' from '\\dc01.contoso.com '.
     MAYDC01.contoso.com       [DS] Site: Mayberry
     MAYDC02.contoso.com       [DS] Site: Mayberry
     DALDC01.contoso.com        [DS] Site: Dallas
     DALDC02.contoso.com           
     LADC01.contoso.com        [DS] Site: LosAngeles
     LADC02.contoso.com           
     SEADC01.contoso.com           
     SEADC02.contoso.com   

The two DCs in the Los Angeles site saw themselves in the list of available DCs but not the other DC in the same site. Suspecting Active Directory (AD) replication might be at fault we ran Repadmin /showrepl * /csv > Showrepl.csv and found AD replication was free of errors forest wide.

Checking for Database Inconsistencies:

Since AD Replication was not at fault our focus switched to AD database inconsistencies. We focused on three primary objects which house all of the metadata needed for DC discovery:

  • The Distinguished Name (DN) of the DC’s object in the domain partition

CN=LADC01,OU=Domain Controllers,DC=contoso,DC=com

  • The DN of the DC’s NTDS Settings object and

CN=NTDS Settings,CN=LADC01,CN=Servers,CN=LosAngeles,CN=Sites,CN=Configuration,DC=contoso,DC=com

  • The DN of the DC’s Server object which resides just above the DC’s NTDS Settings object in the Configuration partition

CN=LADC01,CN=Servers,CN=LosAngeles,CN=Sites,CN=Configuration,DC=contoso,DC=com

Using LDP.EXE we connected to both DCs in the Los Angeles site and gathered dumps of all three objects for both DCs and compared the output. For those who tend to avoid this tool, see MSKB 252335 on how to Bind and Connect but make certain to select CN=Configuration,DC=forestrootdomain from the Base DN: drop-down. Expand the configuration partition on the left until you locate the server object of the DC that was restored.

[Screenshot: LDP dump of LADC01's Server object, taken while bound to LADC01]

This LDP dump of LADC01’s Server object in the Configuration partition was taken while bound to LADC01 (notice the DC name in the blue title bar indicating which DC we’re bound to). Looking at the third attribute from the bottom, we find the serverReference forward-link attribute. This attribute contains the DN path of the corresponding DC object in the DC=contoso,DC=com partition. Below is an LDP dump of LADC01’s Server object while bound to LADC02. Notice the serverReference forward-link attribute is missing, which indicates it is not populated in this DC’s copy of the AD database.

[Screenshot: LDP dump of LADC01's Server object, taken while bound to LADC02; serverReference is absent]

When we examined the LDP dumps of LADC02’s Server object we found the same was true. LADC02 had a DN for its own DC object but the LDP dump of LADC02’s Server object taken while bound to LADC01 had an empty serverReference attribute. Finally, those DCs which always appear in the list of domain controllers had a populated serverReference attribute on all DCs.

To determine how widespread this issue was, we queried the serverReference attribute for both Los Angeles DCs from every DC in the forest using the repadmin /showattr command below. DCs with a healthy copy of the database returned the DC object’s DN in the serverReference attribute; on the rest, the attribute was empty:

Repadmin.exe /showattr * CN=LADC01,CN=Servers,CN=LosAngeles,CN=Sites,CN=Configuration,DC=contoso,DC=com /atts:serverReference

Fixing the problem:

We connected to the configuration partition on LADC01 using adsiedit.msc and manually added the “CN=LADC02,OU=Domain Controllers,DC=contoso,DC=com” DN to the serverReference attribute on LADC02’s Server object. This change made LADC02 appear in the list of available DCs when the Connect to Domain Controller option was selected in Active Directory Users and Computers. Also, nltest.exe /dclist:contoso.com now showed [DS] Site: LosAngeles next to LADC02.contoso.com on all DCs. Not shown here, but once the DN of the DC’s object in the contoso.com domain was added to the serverReference attribute, the serverReferenceBL back-link attribute was automatically populated on the DC object in the contoso.com domain.
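
If you would rather script the same fix than click through adsiedit.msc, an LDIF import works too. Here is a minimal sketch, using the DNs from this case and a made-up file name of fix-serverref.ldf:

# illustrative file: fix-serverref.ldf
dn: CN=LADC02,CN=Servers,CN=LosAngeles,CN=Sites,CN=Configuration,DC=contoso,DC=com
changetype: modify
replace: serverReference
serverReference: CN=LADC02,OU=Domain Controllers,DC=contoso,DC=com
-

Import it against the DC that should originate the change with ldifde -i -f fix-serverref.ldf -s LADC01.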

Determining how this occurred:

Now that we understood why the DC list was incomplete we started looking for how this occurred. To do this we gathered replication metadata from these three objects for both LADC01 and LADC02. The command used to gather the metadata from LADC01 for LADC02’s server object in the configuration partition is:

repadmin.exe /showobjmeta LADC01 CN=LADC02,CN=Servers,CN=LosAngeles,CN=Sites,CN=Configuration,DC=contoso,DC=com

Comparing the objects dumped from LADC01 and LADC02, we found the Ver (version) numbers matched. It wasn’t until we looked at the metadata of the DC object in the domain partition and compared it with the corresponding Server object in the configuration partition that we understood what occurred.

Here is a showobjmeta dump of CN=LADC02,CN=Servers,CN=LosAngeles,CN=Sites,CN=Configuration,DC=contoso,DC=com from LADC01:

repadmin.exe /showobjmeta ladc01 CN=LADC02,CN=Servers,CN=LosAngeles,CN=Sites,CN=Configuration,DC=contoso,DC=com

11 entries.
Loc.USN     Originating DC                        Org.USN     Org.Time/Date        Ver   Attribute
=======     ============                          === ======  =============        ===   =========
   8363     b5c14a75-7f99-4f31-b84b-d755190a2c0d  213256008   2007-04-15 15:04:39    1   objectClass
  85895     LosAngeles\LADC02                     85895       2007-04-15 17:11:27    2   cn
   8363     b5c14a75-7f99-4f31-b84b-d755190a2c0d  213256008   2007-04-15 15:04:39    1   instanceType
   8363     b5c14a75-7f99-4f31-b84b-d755190a2c0d  213256008   2007-04-15 15:04:39    1   whenCreated
<snip>

Here is a truncated showobjmeta dump of CN=LADC02,OU=Domain Controllers,DC=contoso,DC=com from LADC01:

repadmin.exe /showobjmeta ladc01 CN=LADC02,OU=Domain Controllers,DC=contoso,DC=com

41 entries.
Loc.USN   Originating DC                        Org.USN       Org.Time/Date        Ver    Attribute
=======   ============                          === ========  =============        ===    =========
92248340  fb36d148-19fd-43f0-8876-91a027863f79  155898        2009-11-18 12:56:34  100001 objectClass
92248339  77dba4f6-3870-4eb5-b46a-4f1fb1ee0be6  92248339      2009-11-18 12:59:51    4    cn
92248340  fb36d148-19fd-43f0-8876-91a027863f79  155898        2009-11-18 12:56:34  100001 description
92248340  fb36d148-19fd-43f0-8876-91a027863f79  155898        2009-11-18 12:56:34  100001 instanceType
9027     4855f23c-744c-488d-852c-9c170dd3359c   108176481     2007-04-15 18:10:11    1    whenCreated
<snip>

Interpretation of the data:

The version numbers of the attributes on LADC02’s DC object in the domain partition are 100,000 higher than those on the DC’s corresponding Server object in the configuration partition. This strongly suggests the DC object in the Domain Controllers OU was authoritatively restored with the default version increase of 100,000 while the DC’s corresponding Server object in the configuration partition was not authoritatively restored. The customer then remembered accidentally deleting several of the DCs a while back and performing an authoritative restore on the entire Domain Controllers OU.

Understanding the inconsistencies:

Now that we knew an authoritative restore of the domain controllers OU was performed we needed to determine why the serverReference and serverReferenceBL attributes for restored DCs were missing and different across all DCs.

Anyone who has performed authoritative restores of users and groups will recall an issue where group membership is not correct on replica DCs after users and groups are authoritatively restored; this is discussed at length in KB280079. In the case of restored users and groups, when a user is deleted, their membership in the remaining groups is removed. If the user is then restored, but the group is not, the membership will not be restored on any DC except the DC where the restore took place. For those wondering what this has to do with DCs being restored: it is identical. DCs are security principals just like users, and the DC’s Server object in the configuration partition behaves much like a group. If the DC object is deleted from the domain partition, the serverReference attribute containing the forward link is NULL’ed out on the Server object in the configuration partition. If just the DC object in the domain partition is restored, the serverReference attribute on the corresponding Server object in the configuration partition will not be updated on replica DCs once the restored DC object inbound-replicates to them.

Avoiding this issue:

Since the release of Windows Server 2003 Service Pack 1, ntdsutil.exe has automatically created LDF files for all partitions in the forest where restored objects have back-links. This is discussed further in MSKB 840001. In the case of user accounts, you ensure all users have the correct group membership on all DCs by allowing the restored user accounts to replicate to all DCs/GCs. Once all DCs have the restored account, you run ldifde -i -f <AR*.ldf> against the recovery DC to import the users’ group memberships. Doing this ensures the user’s DN is added to the member attribute on the group and the version of the member attribute is bumped higher, causing it to replicate to all DCs. Since all DCs have a copy of the restored user account in their local database, the DN on the member attribute is retained. As a rule of thumb, if you are authoritatively restoring users, computers or groups you should always import the LDF files created by ntdsutil.exe and avoid issues like this.
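
For example, here is a minimal PowerShell sketch that imports every ntdsutil-generated LDF file in the current folder (the ar_*.ldf pattern and the RECOVERYDC name are illustrative; use the file names ntdsutil actually wrote and your own recovery DC):

# import each authoritative-restore LDF against the recovery DC; -k continues past "already exists" errors
Get-ChildItem ar_*.ldf | ForEach-Object { ldifde -i -k -f $_.Name -s RECOVERYDC }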

Or even better, deploy Windows Server 2008 R2 and enable the AD Recycle Bin; it automatically handles back-links and forward links.

- Dave “metadata” Everett


Best practices around Active Directory Authoritative Restores in Windows Server 2003 and 2008


It’s your guest writer Herbert Mauerer again.

A very common AD disaster is an unexpected deletion or modification of objects. Unlike a bad football match or family meeting, you can prepare for that and make the crisis more bearable. In this blog, I will discuss best practices of Windows Server 2003 and 2008 forest level backup and restore. I will not discuss Windows 2000 as it’s at the end of its lifetime, and also not Windows 2008 R2 because we have a pretty good solution for object deletion without the need for backups: AD Recycle Bin.

AD Object deletions and unwanted modifications are often due to human error. Sometimes a bad script does that or a provisioning system, but then these are also created by humans. The common factor is the data loss. Now there is quite some complexity in the current KB article on AD object restores:

840001 How to restore deleted user accounts and their group memberships in Active Directory
http://support.microsoft.com/default.aspx?scid=kb;EN-US;840001

We basically follow method 1 in the article 840001, with a few tweaks and preparations:

  1. Windows Server 2003 Service Pack 2 or newer and Windows Server 2003 Forest Functional level. This allows the restoration of links using LDIF files written by NTDSUTIL.EXE.
  2. Preparing for the restore by converting the LEGACY group memberships to Link-Value Replication (LVR) group memberships.

Step 2 requires that you remove and re-add all group members. This is relatively easy using the Windows Server 2003 command line tools for querying and changing AD objects:

Dsget group CN=GroupX,OU=Groups,DC=contoso,DC=com /members | dsmod group CN=GroupX,OU=Groups,DC=contoso,DC=com /chmbr

This command gets the members of the group and replaces the members with the output, thus adding them back as LVR members. This may mean some replication traffic if you apply it to many groups within a short time. However, since the removal and re-adding of members happens in a very short time, you should not see the members go away, as all changes should be replicated in the same replication job.

There are a few problems and caveats with this approach:

  1. The version of DSGET.EXE in Windows Server 2003 gets a maximum of 1500 members, and the command above would discard the rest. In one of my test environments, DSGET does not show any members at all. I have not seen either problem with the Windows Server 2008 or later version of DSGET.
  2. If a group has lots of members, the execution of DSMOD.EXE can overflow the AD database version store, and adding the members would fail. So although the change of the membership type is a one-time action, it’s certainly a good idea to export the member list to files first. I describe that below.
  3. Use updated operating system file versions; details are also below.

How to Convert Groups for LVR

You can determine the state of linked attribute values like group memberships from the replication meta-data on the object. When you run "repadmin.exe /showobjmeta", the link "Type" says LEGACY:

Type     Attribute       Last Mod Time   Originating DC  Loc.USN Org.USN     Ver  Distinguished Name
=======  ========        =============   =============== ========= ========= ===  ==========================================
LEGACY  member                                                                  CN=test-user1,OU=Test-OU,DC=contoso,DC=com
PRESENT   member   2008-09-16 18:22:29   HQ\contoso-DC1  175379684 175379684   1  CN=test-user2,OU=Test-OU,DC=contoso,DC=com

The line for LEGACY does not have data on the latest change; all these values share that with the attribute meta-data listed in the first part of the output.

Hint: I use the "for" command in CMD for loops for direct execution on the command line. When you use CMD scripts, you need to change loop variables like "%f" to "%%f".

The steps are:

1. Freeze the group memberships, stop all group member management.

2. Export the groups in your domain:

Dsquery group dc=contoso,dc=com /limit 0 > grouplist.txt

Review the list and remove all built-in groups. You cannot remove all members there, and these groups usually don’t have many members to begin with. Also, there’s hope nobody deletes the memberships for these.

3. Get all group members, using a Windows Vista computer with RSAT or a Windows Server 2008 computer:

For /f "delims=" %f in (grouplist.txt) do DSGET group %f /members > groupmembers\%f

Important: The DNs of the groups must not contain the characters ":\/*?", because the DNs are used as file names here. If you need to rename a group or an OU to avoid these characters, changing the CN is sufficient.

4. Now we count the members:

For the counting, I use an older version of the MKS toolkit’s WC.EXE, which counts lines, words and characters. You need a similar tool that counts lines in text files. You may have a favorite tool for that, or use a text editor that provides a line count for the bigger output files. A PowerShell alternative is sketched at the end of this step.

For /f "delims=" %f in (grouplist.txt) do wc groupmembers\%f >> membercount.txt

The file membercount.txt has the count of members in the first column. Groups with more than 5000 members require special treatment. I suggest moving their member list into a folder "big-groups1".
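
If you don’t have a wc-style tool handy, a rough PowerShell equivalent (my own sketch; the tab-separated layout is arbitrary) is:

# count the lines (= members) in every exported member list
Get-ChildItem groupmembers | ForEach-Object {
    "{0}`t{1}" -f (Get-Content $_.FullName | Measure-Object -Line).Lines, $_.Name
} | Set-Content membercount.txt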

5. For all other groups, execute:

Dir /b groupmembers > grouplist-small.txt
For /f "delims=" %f in (grouplist-small.txt) do dsmod group "%f" /chmbr < "groupmembers\%f"

Hint: "%f" is in double quotes here as "dir /b" does not print quotes, and we want to handle names with spaces properly.

So after this, the small groups are off our radar.

6. Now for the big groups:

My suggestion is to split the membership lists into multiple text files of approximately 5000 lines and put these sections into separate folders with the same file names (the DN of the group). Replace the membership with the first section, then add the remaining sections (note the /chmbr on the first folder and /addmbr on the rest, so a later pass does not wipe out an earlier one); a PowerShell sketch of the split itself follows these commands:

Dir /b big-groups1 > big-groups1.txt
For /f "delims=" %f in (big-groups1.txt) do dsmod group "%f" /chmbr < "big-groups1\%f"

Dir /b big-groups2 > big-groups2.txt
For /f "delims=" %f in (big-groups2.txt) do dsmod group "%f" /addmbr < "big-groups2\%f"
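
The split itself can be sketched in PowerShell (the source file name is hypothetical, and Get-Content -ReadCount hands the pipeline arrays of at most 5000 lines):

# hypothetical member list exported in step 3
$src = "groupmembers\CN=BigGroup,OU=Groups,DC=contoso,DC=com"
$part = 0
Get-Content $src -ReadCount 5000 | ForEach-Object {
    $part++
    # create big-groups1, big-groups2, ... as needed
    if (-not (Test-Path "big-groups$part")) { New-Item -ItemType Directory -Path "big-groups$part" | Out-Null }
    # write this section under the same file name (the DN of the group)
    $_ | Set-Content (Join-Path "big-groups$part" (Split-Path $src -Leaf))
}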

7. Verify the change by getting the group meta-data:

For /f "delims=" %f in (grouplist.txt) do repadmin /showobjmeta . %f > group-meta\%f

The output files in “group-meta” should now show "LEGACY" only for attributes other than member.

Well, this was quite some work. The good news is that we don’t have to worry about the primary group… :-)

I suggest you test-drive this with test groups; for these tests it does not matter if the members are in LVR mode already. Then expand the work to smaller OU trees until you tackle the rest of the domain. One idea would be to split out parts of grouplist.txt so you impact only a few groups at a time.

Suggested Fixes

After Windows Server 2003 Service Pack 2 was released, we had two problems affecting replication after an (authoritative) restore:

937855 After you restore deleted objects by performing an authoritative restoration on a Windows Server 2003-based domain controller, the linked attributes of some objects are not replicated to the other domain controllers
http://support.microsoft.com/default.aspx?scid=kb;EN-US;937855

943576 Active Directory objects may not be replicated from the restored server after an authoritative restore on a Windows Server 2003-based domain controller
http://support.microsoft.com/default.aspx?scid=kb;EN-US;943576

When you install fix 943576 on the DCs you back up on a regular basis, you should not see any issues. In the long run, you should get this fix on all DCs in the enterprise. This problem is fixed in Windows Server 2008.

On top of that, there are also problems with the tool we use to restore the objects, NTDSUTIL.

Problems with object names containing extended characters:
886689 The Ntdsutil authoritative restore operation is not successful if the distinguished name path contains extended characters in Windows Server 2003 and in Windows 2000
http://support.microsoft.com/default.aspx?scid=kb;EN-US;886689

910823 Error message when you try to import .ldf files on a computer that is running Windows Server 2003 with Service Pack 1: "Add error on line LineNumber: No such object"
http://support.microsoft.com/default.aspx?scid=kb;EN-US;910823

Problems with excessive links in the LDIF files:

951320 The ntdsutil.exe utility in Windows Server 2003 writes out too many links to .ldf files during an authoritative restore process
http://support.microsoft.com/default.aspx?scid=kb;EN-US;951320

The good news is that having fix 951320 on the DCs you back up often also takes care of problem 910823. Fix 951320 is included in Windows Server 2008 Service Pack 2.

There is a new problem with authoritative restores where objects are not treated correctly in replication. We have a corrective fix in NTDSUTIL for that:

974803 The domain controller runs slower or stops responding when the garbage collection process runs
http://support.microsoft.com/default.aspx?scid=kb;EN-US;974803

This last problem is not fixed in Windows Server 2008 yet. The problem cannot happen on Windows Server 2008 R2.

How to Avoid Accidental Deletion

Most of the AD object recovery cases we see in our support work are because of accidental deletions. Typically, whole OUs with lots of users, groups and computer accounts are affected. These objects have attributes that can’t be recovered by re-creation, as you can’t get the same SID again.

For this problem, Microsoft has implemented an option in the Windows Server 2008 Admin Tools, also available for Windows Vista using RSAT (Remote Server Administration Tools). This version has a check box "Protect object from accidental deletion" when "advanced view" is enabled. This flag is set automatically on all OUs you create with this admin tool.

When this is used, the OU itself and its parent get ACEs (Access Control Entries) that deny "Everyone" the permission to delete the object itself and child objects. This also works with domains hosted on Windows Server 2003. You can add the same ACEs using your own tools and scripts and you will get the same effect.

Thus such a mass deletion should not happen by accident anymore, as you first need to step through clearing the check box. Yes, the deletion might still happen from a script, but then the script is written to remove these ACEs, which moves the problem to the next level: the script programmer. At this point, you can’t call it accidental anymore.

A Solution

For the object deletion scenario, Windows Server 2008 R2 offers the AD Recycle Bin. The forest needs to run in the Windows Server 2008 R2 forest mode and the feature needs to be enabled separately.

Objects that are deleted no longer have any attributes removed, and linked attributes get a third state to signal that they are inactive, pointing to a deleted object. When the object is reactivated, the link state is switched back to active.

The graphical user interface for this facility is not in this release, but the groundwork is done, PowerShell is available, and there are scripts to help you perform the recovery.

Until you can enable this feature, the task is to be prepared for object deletions. This article will still be useful for unwanted object attribute modifications.

Herbert “by way of Germany” Mauerer

Friday Mail Sack: Cluedo Edition


Hello there folks, it's Ned. I’ve been out of pocket for a few weeks and I am moving to a new role here, plus Scott and Jonathan are busy as #$%#^& too, so that all adds up to the blog suffering a bit and the mail sack being pushed a few times. Never fear, we’re back with some goodness and frankness. Heck, Jonathan answered a bunch of these rather than sipping cognac while wearing a smoking jacket, which is his Friday routine. Today we talk certs, group policy, backups, PowerShell, passwords, Uphclean, RODC+FRS+SYSVOL+DFSR, and blog editing. There were a lot of questions in the past few weeks that required some interesting investigations on our part – keep them coming.

Let us adjourn to the conservatory.

Question

Do you know of a way to set user passwords to expire after 30 days of inactivity?

Answer

There is no automatic method for this, but with a bit of scripting it would be pretty trivial to implement. Run this sample command as an admin user (in your test environment first!!!):

Dsquery.exe user -inactive 4 | dsmod.exe user -mustchpwd yes

Dsquery will find all users in the domain that have not logged on for 4 weeks or longer, then pipe that list of DNs into the Dsmod command, which sets the “must change password at next logon” (pwdLastSet) flag on each of those users.


You can also use AD PowerShell in Win2008 R2/Windows 7 RSAT to do this.

search-adaccount -accountinactive -timespan 30 -usersonly | set-aduser -changepasswordatlogon 1

The PowerShell method works a little differently; Dsquery only considers inactive accounts that have logged on. Search-adaccount also considers users that have never logged on. This means it will find a few “users” that cannot usually have their password change flags enabled, such as Guest, KRBTGT, and TDO accounts that are actually trusts between domains. If someone wants to post a slick example of bypassing those, please send them along (as the clock ran down here).
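
As a starting point, here is a rough, untested sketch that simply filters on account names (it assumes trust accounts are the only ones whose names end in $, which a smarter filter would verify via objectClass):

# skip Guest, krbtgt, and (by the trailing-$ convention) trust accounts
Search-ADAccount -AccountInactive -TimeSpan 30.00:00:00 -UsersOnly |
    Where-Object { $_.SamAccountName -ne 'Guest' -and $_.SamAccountName -ne 'krbtgt' -and $_.SamAccountName -notlike '*$' } |
    Set-ADUser -ChangePasswordAtLogon $true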

Question

As it’s stated here: http://technet.microsoft.com/en-us/library/cc753609%28WS.10%29.aspx  

"You are not required to run the ntdsutil snapshot operation to use Dsamain.exe. You can instead use a backup of the AD DS or AD LDS database or another domain controller or AD LDS server. The ntdsutil snapshot operation simply provides a convenient data input for Dsamain.exe."

I should be able to mount a snapshot and use dsamain to read AD content with only a full backup of AD. But I can't. Using ntdsutil I can list and mount snapshots from AD, but I can't run "dsamain -dbpath full_path_to_ntds.dit".

Answer

You have to extract the .DIT file from the backup.

1. First run wbadmin get versions. In the output, locate your most recent backup and note the Version identifier:

wbadmin get versions

2. Extract the Active Directory files from the backup. Run:

 wbadmin start recovery -versions:<version identifier> -itemtype:app -items:AD -recoverytarget:<drive>

3. A folder called Active Directory will be created on the recovery drive. Contained therein you'll find the NTDS.DIT file. To mount it, run:

dsamain -dbpath <recovery folder>\ntds.dit -ldapPort 4321

4. The .DIT file will be mounted, and you can use LDP or ADSIEDIT to connect to the directory on port 4321 and browse it.

Question

I ran into the issue described in KB976922 where "Run only specified Windows Applications" or “Run only allowed Windows applications” is blank when you are mixing Windows XP/Windows Server 2003 and Windows 7/Windows Server 2008 R2 computers. Some forum posts on TechNet state that this was being fixed in Win7 and Win2008 R2, which appears to be untrue. Is this being fixed in SP1 or later?

Answer

It’s still broken in Win7/R2 and still broken in SP1. It’s quite likely to remain broken forever, as there are so many workarounds and the technology in question actually dates back to before Group Policy – it was part of Windows 95 (!!!) system policies. Using this policy isn’t very safe. It’s often something that was configured many years ago that lives on through inertia.

Windows 7 and Windows Server 2008 R2 introduced AppLocker to:

  • Help prevent malicious software (malware) and unsupported applications from affecting computers in your environment.
  • Prevent users from installing and using unauthorized applications.
  • Implement application control policy to satisfy security policy or compliance requirements in your organization.

Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008 all support Software Restriction Policies (SAFER), which also control applications similarly to AppLocker. Both AppLocker and SAFER replace that legacy policy setting with something less limited and less easily bypassed.

For more information about AppLocker, please review:
http://technet.microsoft.com/en-us/library/dd723678(WS.10).aspx

For more information about SAFER, please review:
http://technet.microsoft.com/en-us/library/bb457006.aspx

I updated the KB to reflect all this too.

Question

Is it possible to store computer certificates in a Trusted Platform Module (TPM) in Windows 7?

Answer

The default Windows Key Storage Provider (KSP) does not use a TPM to store private keys. That doesn't mean that some third party can't provide a KSP that implements the Trusted Computer Group (TCG) 1.2 standard to interact with a TPM and use it to store private keys. It just means that Windows 7 doesn't have such a KSP by default.

Question

It appears that there is a new version of Uphclean available (http://www.microsoft.com/downloads/en/details.aspx?FamilyId=1B286E6D-8912-4E18-B570-42470E2F3582&displaylang=en). What’s new about this version and is it safe to run on Win2003?

Answer

The new 1.6 version only fixes a security vulnerability and is definitely recommended if you are using older versions. It has no other announced functionality changes. As Robin has said previously, Uphclean is otherwise deceased and 2.0 beta will not be maintained or updated. Uphclean has never been an officially supported MS tool, so use is always at your own risk.

Question

My RODCs are not replicating SYSVOL even though there are multiple inbound AD connections showing when DSSITE.MSC is pointed to an affected RODC. Examining the DFSR event log shows:

Log Name: DFS Replication
Source: DFSR
Date: 5/20/2009 10:54:56 AM
Event ID: 6804
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: 2008r2-04.contoso.com
Description:
The DFS Replication service has detected that no connections are configured for replication group Domain System Volume. No data is being replicated for this replication group.

New RODCs that are promoted work fine. Demoting and promoting an affected RODC fixes the issue.

Answer

Somebody has deleted the automatically generated "RODC Connection (FRS)" objects for these affected RODCs.

  • This may have been done because the customer saw that the connections were named "FRS" and they thought that with DFSR replicating SYSVOL that they were no longer required.
  • Or they may have created manual connection objects per their own processes and deleted these old ones.

RODCs require a special flag on their connection objects for SYSVOL replication to work. If it is not present, SYSVOL replication will not work for FRS or DFSR. To fix these servers:

1. Logon to a writable DC in the affected forest as an Enterprise Admin.

2. Run DSSITE.MSC and navigate to an affected RODC within its site, down to the NTDS Settings object. There may be no connections listed here, or there may be manually created connections.


3. Create a new connection object. Ideally, it will be named the same as the default (ex: "RODC Connection (FRS)").


4. Edit that connection in ADSIEDIT.MSC or with DSSITE.MSC attribute editor tab. Navigate to the "Options" attribute and add the value of "0x40".


5. Create more connections using these steps as necessary.

6. Force AD replication outbound from this DC to the RODCs, or wait for convergence. When the DFSR service on the RODC sees these connections, SYSVOL will begin replicating again.

More info about this 0x40 flag: http://msdn.microsoft.com/en-us/library/dd340911(PROT.10).aspx

RT (NTDSCONN_OPT_RODC_TOPOLOGY, 0x00000040): The NTDSCONN_OPT_RODC_TOPOLOGY bit in the options attribute indicates whether the connection can be used for DRS replication [MS-DRDM]. When set, the connection should be ignored by DRS replication and used only by FRS replication.

Despite the mention only of FRS in this article, the 0x40 value is required for both DFSR and FRS. Other connections for AD replication are still separately required and will exist on the RODC locally.
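
If you would rather script step 4 than click through ADSIEDIT, here is a small AD PowerShell sketch (Win2008 R2 RSAT; the connection DN is hypothetical) that ORs 0x40 into the options attribute:

# hypothetical DN of the manually created RODC connection object
$dn = "CN=RODC Connection (FRS),CN=NTDS Settings,CN=RODC01,CN=Servers,CN=Branch,CN=Sites,CN=Configuration,DC=contoso,DC=com"
$conn = Get-ADObject -Identity $dn -Properties options
# set the NTDSCONN_OPT_RODC_TOPOLOGY bit without disturbing any other option bits
Set-ADObject -Identity $dn -Replace @{options = ($conn.options -bor 0x40)}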

Question

What editor do you use to update and maintain this blog?

Answer

Windows Live Writer 2011 (here). Before this version I was hesitant to recommend it, as the older flavors had idiosyncrasies and were irritating. WLW 2011 is a joy, I highly recommend it. The price is right too: free, with no adware. And it makes adding content easy…

 
[Embedded examples: Peter Elson artwork, the complete 5 minutes and 36 seconds of Lando Calrissian dialog, a map, a picture of Ned, and ovine-related emoticons.]

That’s all for this week.

- Ned “Colonel Mustard” Pyle and Jonathan “Professor Plum” Stephens

Friday Mail Sack: Not Particularly Terrifying Edition


Hiya folks, Ned here again. In today’s Mail Sack I discuss SP1, DFSR, GPP passwords, USMT, backups, AD disk configurations, and the importance of costumed pets.

Boo.

Question

Is it safe to use the Windows 7 and Windows Server 2008 R2 Service Pack 1 Release Candidate builds in production? They came out this week and it looks like SP1 is pretty close to being done.

Answer

No. This build is for testing only, just like the beta. The EULA specifically states that this is not for production servers and you will get no support running it in those environments.

For more info and test support:

Question

I need to ramp up on USMT for our planned Windows 7 rollout early next year. I’ve found a lot of documentation but do you have recommendations on how I can learn it progressively? I know nothing about USMT so I’m not sure where to start.

Answer

I would recommend going this route:

Intro

  1. What Does USMT Migrate?
  2. Common Migration Scenarios
  3. Quick Start Checklist
  4. Step-by-Step: Basic Windows Migration using USMT for IT Professionals
  5. Step-by-Step: Offline Migration with USMT 4.0
  6. How USMT Works
  7. Requirements

Intermediate

  1. ScanState Syntax
  2. LoadState Syntax
  3. Config.xml File
  4. Create a Custom XML File
  5. Customize USMT XML Files
  6. USMT Custom XML the Free and Easy Way
  7. Exclude Files and Settings
  8. Include Files and Settings
  9. Reroute Files and Settings
  10. Migrate EFS Files and Certificates
  11. Offline Migration
  12. USMT, OST, and PST
  13. Understanding USMT 4.0 Behavior with UEL and UE
  14. Controlling USMT Desktop Shell Icon Behavior from XP (and how to create registry values out of thin air)
  15. Get Shiny with USMT: Turning the Aero Theme on During XP to Windows 7 Migration

Advanced

  1. Conflicts and Precedence
  2. Recognized Environment Variables
  3. USMT and /SF
  4. XML Elements Library

Troubleshooting

  1. Common Issues
  2. USMT 4.0: Cryptic Messages with Easy Fixes
  3. Don’t mess about with USMT’s included manifests!
  4. Log Files
  5. Return Codes
  6. USMT 4.0 and Custom Exclusion Troubleshooting
  7. USMT 4 and WinPE: Common Issues

Question

Is there a way to generate a daily DFSR health report?

Answer

You can use DFSRADMIN.EXE HEALTH NEW <options> as part of a Scheduled Task to generate a report every morning before you get your coffee.

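
For example, something along these lines drops a report every morning at 6 (the replication group name, member name, and report path are illustrative, and you should verify the parameter names against DFSRADMIN HEALTH NEW /? on your build):

schtasks /create /tn "DFSR Health Report" /sc daily /st 06:00 /tr "dfsradmin health new /rgname:RG01 /refmemname:SRV01 /repname:C:\Reports\dfsr-health"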

Question

Is there any good reason to separate the AD Logs, DB and SYSVOL onto separate drives? Performance maybe?

Answer

Thomas Aquinas would have made a good DS engineer:

"If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments [if] one suffices."

We’ve not really pursued that performance line of thinking, as it turned out to be of little value on most DCs: AD’s database and logs are mostly static. In most environments, for every write to an AD DB there are thousands of reads. If your average disk read/write latency is under 25ms for any disks that hold the AD database and its transaction logs, you are in the sweet spot. LSA tries to load as much of the DB into physical RAM as possible, and it also keeps common query and index data in physical memory, so disk perf isn’t super relevant unless you are incredibly starved for RAM. Server hardware is so much better now than when AD was invented that it’s just hard to buy crappy equipment – this isn’t Exchange or SQL where every little bit counts.

Guidance around separating the files for SYSVOL was always pretty suspicious. That data is glacially static (in most environments it might only see a few changes a year, if ever). It has almost no data being read during GP processing either so disk performance is almost immaterial. I have never personally worked a case of a slow disk subsystem making GP processing slow.

We still provide plenty of space guidance though, and that might make you need to separate things out:

http://technet.microsoft.com/en-us/library/cc753439(WS.10).aspx

Since Win2008 and later made it so easy to grow and shrink volumes though, even that is not a big deal anymore.

Question

We are looking to make some mass refreshes to our local admin passwords on servers and workstations. Initially I started looking at some 3rd party tools, but they are a little pricey. Then I recalled the "Local Users and Groups" option in Group Policy preferences. However, I have seen some rumblings on the Internet about the password stored in the XML being not completely secure.

Answer

We consider that password system in GPP XML files “obscured” rather than “securely encrypted”.

The password is obfuscated with AES-256 (i.e. encrypted, but with a publicly known static key). If you were to control permissions to the GP folder containing the policy (so that it no longer had “Authenticated Users” or any other user groups with READ access), as well as use IPSEC to protect the traffic on the wire, it would be reasonably secure from anyone but admins and the computers themselves. Alan Burchill goes into a clever GPP technique for periodic password changes here:

How to use Group Policy Preferences to set change Passwords

He also makes the excellent point that a reasonably secure periodic password change system is better than just having the same password unchanged for years on end! Again, I would add to his example that using IPSEC and removing the “Authenticated Users” group from that group policy’s folder in SYSVOL (and replacing it with “Domain Computers”) is healthy paranoia.

Official ruling here, regardless of above:

http://blogs.technet.com/b/grouppolicy/archive/2009/04/22/passwords-in-group-policy-preferences-updated.aspx

Try to not get spit all over me when you scream in the Comments section…

Question

Can DFSR read-only folders be backed up incrementally? Files’ Archive bits never change when I run a backup, so how can the backup software know to grab only changed files?

Answer

Absolutely. And here’s a treat for you:

The Archive bit has been dead since Windows Vista.

If you run a backup on a non-read-only replicated folder (or anywhere else) using Windows Server Backup, you will notice that the Archive bit never gets dropped either. The Volume Shadow Copy Service instead uses the NTFS USN journal to track files included in incremental backups. Some backup solutions might still use Archive bits, but Windows does not – it is dangerous to rely on the bit, as so many third party apps (or even just users) can clear the attribute and break your backups. There’s next to no TechNet info on this out there, but SriramB (the lead developer of DPM) talks about this at length:

http://social.technet.microsoft.com/Forums/en-US/windowsbackup/thread/df7045fb-9d88-453c-93c0-5e0613107d89
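
If you’re curious, you can look at the journal the backup engine leans on with fsutil from an elevated prompt; it prints the journal ID and the current USN range for the volume:

fsutil usn queryjournal c: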

Now obviously, you cannot restore files directly into a read-only replicated folder as the IO blocking driver won’t allow it. If you try with WSB it will report error “Access is Denied”.


If you are restoring a backed up read-only replica, you have two options:

  1. Convert that replicated folder back to read-write temporarily, restore the data and allow it to replicate out, then set the folder back to read-only.
  2. Restore the data to an alternate location and copy or move it into the read-write replicated folder.

 

As for other randomness…

Best Comeback Comment of the Year

From our recent hiring post:

Artem -

Crap. You know, I've recently joined Microsoft here in Russia. And guess what? No free Starbucks!

NedPyle -

Congrats on the job. Sorry about the Starbucks. I'm sure there's a vodka dispenser joke here somewhere, but I'll leave that to you. :-P

Artem -

Yep, it's in the Samovar right in the lobby hall. The problem is like in any big company there's a policy for everything. And in today's tough economy, free vodka is reserved for customer meetings only. Usually a policy is not a big problem, but not this one. It is enforced by bear guards.

    Halloween

    For those of you that aren’t from the US, Ireland, Canada, and the Isle of Limes: this week marks the Halloween holiday where kids dress up in costumes and run around getting free candy from neighbors. If you get stiffed on candy, it’s your responsibility to burn down that neighbor’s house. Wait, that’s just Detroit.

    It’s also an opportunity for people who were born without the shame gene to dress up their animals in cute outfits. Yay Internet! Here are some good ones for the dog lovers.


    (from http://www.dogbirthdaysandparties.com)


    (from http://www.premierphotographer.com)

    (from http://www.dreamdogs.co.uk)

    (From http://www.gearfuse.com)

    Cat lovers can get bent.

    And finally, don’t forget to watch Night of the Living Dead, courtesy of the excellent Archive.org and the public domain law. Still Romero’s best zombie movie ever. Which makes it the best zombie movie ever. You must do it with all lights off, preferably in a house in the woods.

    - Ned “ghouls night out” Pyle

    Friday Mail Sack: Geek Week Edition


    Hey all, Ned here again. Welcome back from Christmas, New Years, etc. Today we talk some BitLocker, SSL, DFS, FRS, MS news, and some geeky goo. Despite us being offline for the past few weeks, we weren’t deluged with new questions – glad you took some time off, you deserved it.

    Yoink!

    Question

Is it possible to have the Windows 7 machines that were BitLockered before the AD DS backup was set up automatically check in and store their recovery information? I have seen the two manage-bde commands that are needed, but I was wondering if there was a script somewhere that could run at logon or system startup to register all those keys.

    Answer

    Yes, our sister site AskCore has a sample VBS you can use:

    http://blogs.technet.com/b/askcore/archive/2010/04/06/how-to-backup-recovery-information-in-ad-after-bitlocker-is-turned-on-in-windows-7.aspx

    Despite the security and AD nature of BitLocker, it is not supported by us in DS – instead, the core operating system team handles it, as they own storage. Punt!

    Question

    Please summarize the support (or lack thereof) for receiving fragmented SSL/TLS handshake messages by OS version and service pack. Which, if any, service pack(s) of WinXP or Vista supports receipt of fragmented handshake messages? For each OS/version that supports receipt of fragmented handshake messages:

    • What is the size limit for messages fragmented into multiple records?
    • What is the size limit for each certificate in a certificates message?
    • Would a valid 122K byte certificate with 6700 DNS names in the subject Alt Names extension be honored?
    • If not, what are the size and DNS name count limits?
    • Must a fragmented message begin at the beginning of a record?
• Or can a record contain the last fragment of one handshake message and the first fragment of the next one?

    Answer

    [From Jonathan, naturally]

    Only Windows 7 and Windows Server 2008 R2 have support for SSL record fragmentation, and then only so far as to accept and coalesce fragmented records. This support was introduced in the RTM release, and does not require a service pack update. Previous Windows OS versions do not support SSL record fragmentation of any sort.

    Per RFC 2246, message length is an unsigned 24-bit integer, so the maximum message length is 16,777,215 ((1<<24) - 1) bytes.

There is no size limit on a certificate itself, but there is a size limit on each individual extension. On Windows, the size of a certificate extension must not exceed 4096 bytes. For example, 151 25-character DNS name entries, plus the overhead for encoding (~2 bytes per name), come in at 4,081 bytes, just under the 4KB limit.

    Fragmented handshake records are supported (exceptions below), including the following cases:

    1. A 1 byte handshake fragment can be included in the end of a record.
2. A client can receive a 1 byte fragment in a 1 byte record.

    The exceptions are:

    1. TLS alerts cannot be fragmented.
    2. The ClientHello must have at least 6 bytes, otherwise there is insufficient information to determine protocol version.
    3. ClientHello must not be fragmented.

    Question

I went to add a new server as a DFS replication partner and noticed that the "Replication Folders" tab now says "Not Published". I then looked at all the replication objects and they also say "Not Published". The strange thing is our namespace is still responding and seems to be conforming to the rules in place. Should I go through and republish all the replication groups to the namespace? What would cause this type of thing to happen?

    Answer

    First, some background. The attribute msDFSR-Dfspath on that replicated folder in AD is what stores a DFS Namespace path and lets the GUI populate those values. This is on the global DFSR RF “content” object within a given replication group. For example, a replicated folder named “primarybit” that exists in a replication group called “warrenpritest1” in the “Contoso.com” domain would show this:

[Screenshots: the msDFSR-Dfspath attribute on the replicated folder's content object]

Often though, no one ever set this value and it is only noticed a long time later – a problem that never was. :) The only way this normally gets set is if you use DFSMGMT.MSC to first create a DFS Namespace, create some links, then get prompted to configure replication (or if you create an RG and then select “share and publish in namespace”). If you just set up DFSR by itself, this field doesn’t get populated. It has no real effect on DFSR, DFSN, or end users – the field exists purely as a convenience to the administrator so that they know that the replication and namespace are related; just a visual thing for DFSMGMT.MSC.

    You can edit the attribute manually to be the DFS Link path you want using ADSIEDIT, but I recommend instead using:

DFSRADMIN.EXE RF SET /RFDFSPath <other options>
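
Filled in with this post’s example names (the namespace path is made up; verify the exact parameter spelling with DFSRADMIN RF SET /?), that would look something like:

DFSRADMIN.EXE RF SET /RGName:warrenpritest1 /RFName:primarybit /RFDfsPath:\\contoso.com\public\primarybit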

    Once that’s done it will all fill in:

[Screenshots: the msDFSR-Dfspath attribute and the publication status, now populated]

    If you want to see when it might have been deleted, you can use:

    REPADMIN /SHOWMETA <DN of that content set>

    It will show when they were modified:

[Screenshot: the repadmin output showing the modification times]

    Question

    [After a bit further chatting in the above Q & A]

    It turns out that happened exactly when I migrated to 2008 mode in DFS. I wonder if I missed a step or something?

    Answer

    Ah! So that would be expected – when you “migrate” DFSN between modes you are actually recreating them from scratch. When the namespace is deleted that value is being cleaned out, but never put back – because the DFSN migration tools have no idea about DFSR at all. If you wanted to fix that as part of your migration, you can just add the DFSRADMIN command above to your steps.

    Question

    I was using FRSDIAG to look at a system. The connstat.txt log file it created is blank. Do you know what can cause this?

    Answer

    Anything that makes the command NTFRSUTIL.EXE SETS not work normally will cause this; FRSDIAG just calls that command-line tool then parses the NTFRS_SETS.TXT output to make connstat.txt.

    In this case it was FRS being in Journal Wrap. Since the NTFRS_SETS.TXT log only showed “DOMAIN SYSTEM VOLUME (SYSVOL SHARE) in state JRNL_WRAP_ERROR... DELETED REPLICA SETS” there was nothing useful to parse.

    I’ve also seen it when a server had all of its FRS replica registry settings removed from under the Parameters registry key:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Replica Sets <-- gone

    The service will start up and you will get an FRS event 13516. But then nothing will replicate ever. You will have to use a D2 non-authoritative restore to fix the server.
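
For reference, D2 is the BurFlags registry value described in KB 290762; here is a hedged command-line sketch of triggering the non-authoritative restore (understand BurFlags before you run this):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD2
net stop ntfrs
net start ntfrs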

    Geeky Time

    Want another way to know when the AskDS blog is updated? You can use the NetworkedBlogs Facebook App. This does not mean that I am going to create a Facebook account for myself. Not doing Twitter either. I got into computers 25 years ago to avoid being social.

    On that subject, Mark sent in a link that any self-respecting geek should read: “Wake Up Geek Culture. Time to Die”. It’s written by Patton Oswalt, who is awesome and usually totally NSFW; in this case he kept it mostly PG. Just to prime the pump:

    When our coworkers nodded along to Springsteen and Madonna songs at the local Bennigan’s, my select friends and I would quietly trade out-of-context lines from Monty Python sketches—a thieves’ cant, a code language used for identification. We needed it, too, because the essence of our culture—our “escape hatch” culture—would begin to change in 1987.

    That was the year the final issue of Watchmen came out, in October. After that, it seemed like everything that was part of my otaku world was out in the open and up for grabs, if only out of context. I wasn’t seeing the hard line between “nerds” and “normals” anymore. It was the last year that a T-shirt or music preference or pastime (Dungeons & Dragons had long since lost its dangerous, Satanic, suicide-inducing street cred) could set you apart from the surface dwellers. Pretty soon, being the only person who was into something didn’t make you outcast; it made you ahead of the curve and someone people were quicker to befriend than shun. Ironically, surface dwellers began repurposing the symbols and phrases and tokens of the erstwhile outcast underground.

    Fast-forward to now: Boba Fett’s helmet emblazoned on sleeveless T-shirts worn by gym douches hefting dumbbells. The Glee kids performing the songs from The Rocky Horror Picture Show. And Toad the Wet Sprocket, a band that took its name from a Monty Python riff, joining the permanent soundtrack of a night out at Bennigan’s. Our below-the-topsoil passions have been rudely dug up and displayed in the noonday sun. The Lord of the Rings used to be ours and only ours simply because of the sheer ******* thickness of the books. Twenty years later, the entire cast and crew would be trooping onstage at the Oscars to collect their statuettes, and replicas of the One Ring would be sold as bling.

    For the record, I know the last words of Roy Batty too and it sickens me.

    Next, the best Kinect hack yet – Ultra Seven!

    Definitely watch the whole thing. Hopefully there will be no Ultraman versus Spectreman slap fights in the comments section. Tokusatsu always seems to get people’s blood up.

    If you don’t follow IO9 and Rock Paper Shotgun you are not maximizing your egghead quotient. They have started off the year with a few must-reads if you are a sci-fi or PC gaming spaz like myself:

    There was plenty of interesting stuff at CES 2011, but the thing that caught my eye was the new Touch Mouse. How exciting can a mouse with no buttons be, right? Watch this video:

    Finally, in case you missed it, we are going to start supporting System on a Chip RISC processors in the next version of Windows – specifically ARM. Everything old is new again! According to NVIDIA this is the end of Intel and AMD, but I wouldn’t start throwing away all your x86 motherboards just yet.

    Until next time.

    Ned “can you at least fry the chicken head?” Pyle

    Restrictions for Unauthenticated RPC Clients: The group policy that punches your domain in the face


Hi folks, Ned here again. Around six years ago we released Service Pack 1 for Windows Server 2003. Like Windows XP SP2, it was a security-focused update. It was the first major server update since the Trustworthy Computing initiative began, so there were things like a bootstrapping firewall, Data Execution Prevention, and the Security Configuration Wizard.

    Amongst all this, the RPC developers added these new configurable group policy settings:

    Computer Configuration \ <policies> \ Administrative Templates \ System \ Remote Procedure Call

    Restrictions for unauthenticated RPC clients
    RPC endpoint mapper client authentication

    Which map to the DWORD registry settings:

    HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Rpc

    RestrictRemoteClients
    EnableAuthEpResolution

    These two settings add an additional authentication "callback capability" to RPC connections. Ordinarily, no authentication is required to make the initial connection to the endpoint mapper (EPM). The EPM is the network service that tells a client what TCP/UDP ports to use in further communications. In Windows, those further communications to the actual application are what typically get authenticated and encrypted. For example, DFSR is an RPC application that uses RPC_C_AUTHN_LEVEL_PKT_PRIVACY with Kerberos required, with Mutual Auth required, and with Impersonation blocked. The EPM connection not requiring authentication is not critical, as there is no application data transmitted: EPM is like a phone book or perhaps more appropriately, a switchboard with an operator.

That quest for Trustworthy Computing added these extra security policies. In doing so, it introduced a very dangerous scenario for domain-based computing: one of the possible policy settings requires that all applications initiating the RPC conversation send along this authentication data or be able to understand a callback request to authenticate.

The problem is that most applications have no idea how to satisfy the setting's requirements.

    The Argument

    One of the options for Restrictions for unauthenticated RPC clients is "Authenticated without Exceptions".


When enabled, RPC applications are required to authenticate to the RPC service on the destination computer. If your application doesn't know how to do this, it is no longer allowed to connect at all.
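
For reference, the policy just writes the RestrictRemoteClients registry value shown earlier; as far as I know, the data maps to 0 (None), 1 (Authenticated, the default when the policy is enabled), and 2 (Authenticated without Exceptions). So the dangerous choice boils down to a registry write like this, shown only so you can recognize it, not as a recommendation:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Rpc" /v RestrictRemoteClients /t REG_DWORD /d 2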

    Which brings us to…

    The Brawl

    Having configured this policy in your domain on your DCs, members, and clients, you will now see the following issues no matter your credentials or admin rights:

    Group policy fails to apply with errors:

    GPUPDATE /FORCE returns:

The processing of Group Policy failed. Windows could not resolve the computer name. This could be caused by one or more of the following:
    a) Name Resolution failure on the current domain controller.
    b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).
    Computer Policy update has completed successfully.
    To diagnose the failure, review the event log or invoke gpmc.msc to access information about Group Policy results
    .

    The System Event log returns errors 1053 and 1055 for group policy:

The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following:
    a) Name Resolution failure on the current domain controller.
    b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

    The Group Policy Operational event log will show error 7320:

    Error: retrieved account information. Error code 0x5.
    Error: Failed to register for connectivity notification. Error code 0x32.

    Active Directory Replication fails with errors:

    Repadmin.exe returns:

    DsBindWithCred to RPC <servername> failed with status 5 (0x5)

    DSSites.msc returns:

[Screenshot: DSSITES.MSC error dialog]

    Directory Service event log returns:

    Warning 1655:
       
    Active Directory Domain Services attempted to communicate with the following global catalog and the attempts were unsuccessful.
    Global catalog:
    \\somedc.cohowineyard.com
    The operation in progress might be unable to continue. Active Directory Domain Services will use the domain controller locator to try to find an available global catalog server.
    Additional Data
    Error value:
    5 Access is denied.

    Error 1126:

    Active Directory Domain Services was unable to establish a connection with the global catalog.
     
    Additional Data
    Error value:
    1355 The specified domain either does not exist or could not be contacted.
    Internal ID:
    3200e7b

    Warning 2092:

    This server is the owner of the following FSMO role, but does not consider it valid. For the partition which contains the FSMO, this server has not replicated successfully with any of its partners since this server has been restarted. Replication errors are preventing validation of this role. Operations which require contacting a FSMO operation master will fail until this condition is corrected.

    Domain join fails with error:

    Changing the primary domain DNS name of this computer to "" failed.
    The name will remain "<something>".
    The error was:
    Access is denied


    After failed join above, rebooting computer and attempting a domain logon fails with error:

    The security database on the server does not have a computer account for this workstation trust relationship.


    Remotely connecting to WMI returns error:

    Win32: Access is denied.

    image

    Remotely connecting to Routing and Remote Access returns error:

    You do not have sufficient permissions to complete the operation

    image

    Remotely connecting to Disk Management returns error:

    You do not have access rights to logical disk manager

    image

    Remotely connecting to Component Services (DCOM) returns error:

    Either the machine does not exist or you don't have permission to access this machine

    image

    Running DFSR Health Reports returns errors:

    Domain Controller is unreachable
    Cannot access the local WMI repository
    Cannot connect to reporting DCOM server

    image

    DFSR does not replicate nor start initial sync, with errors:

    DFSR Event log error 1202:

    The DFS Replication service failed to contact domain controller to access configuration information. Replication is stopped. The service will try again during the next configuration polling cycle, which will occur in 60 minutes. This event can be caused by TCP/IP connectivity, firewall, Active Directory Domain Services, or DNS issues.

    error: 160 (one or more arguments are not correct)

    DFSRMIG does not allow configuration of SYSVOL migration and returns error:

    "Unable to connect to the Primary DC's AD. Please make sure that the PDC is reachable and retry the command later"

    FRS does not replicate and returns event log warning 13562:

    Could not bind to a Domain Controller. Will try again at next polling cycle.

    Remotely connecting to Windows Firewall with Advanced Security returns error:

    You do not have the correct permissions to open the Windows Firewall with Advanced Security Console.
    Error code: 0x5

    image

    Remotely connecting to Share and Storage Management returns error:

    Connection to the Virtual Disk Service failed. A VDS (Virtual Disk Service) error occurred while performing the requested operation.

    image

    Remotely connecting to Storage Explorer returns error:

    Access is denied.

    image

    Remotely connecting to Windows Server Backup returns error:

    The Windows Server Backup engine is not accessible on the computer that you want to manage backups on. Make sure you are a member of the Administrators or Backup Operators group on that computer.

    image

Remotely connecting to DHCP Management returns error:

    Access is Denied

RPC Endpoint connections seen through a network capture show errors:

    Note how the client (10.90.0.94) attempts to bind to the EPM on a DC (10.90.0.101) and gets rejected with status 0x5 (Access is Denied).

    image

    Depending on the calling application - in this case, the Group Policy service running on a Win7 client that is trying to refresh policy - it may continue to try binding many times before giving up. Again, the DC responds with the unhelpful error "REASON_NOT_SPECIFIED" and keeps rejecting the GP service.

    image

    For comparison, a normal working EPM bind of the GP service looks like this:

    image

    Restitution

Anyone notice the Catch-22 above? If you deployed this setting using domain-based group policy to your DCs, you have no way to undo it!  This is another example of “always test security changes before deploying to production”. Many virtualization products are free, like Hyper-V and Virtual PC – even a single virtualized DC environment would have shown gross problems after you tried to use this policy.

    To fix your environment:

    1. You must delete or unlink the whole policy that includes this RPC setting:

    image

2. Delete or rename this specific policy's GUID folder in each DC's SYSVOL folder (remember, file replication is not working, so this must be done on every server individually).

    image

    image

3. Manually visit all DCs and delete the RestrictRemoteClients registry setting (a command-line sketch follows these steps).

    image

    4. Reboot all DCs to get your domain back in operation. Not all at once, of course!
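For step 3, a minimal command-line sketch using the in-box reg.exe. The value lives under the policy's registry key shown below; verify the path on your own DCs before deleting anything:

reg delete "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\RPC" /v RestrictRemoteClients /f

Running that on each DC (then rebooting, per step 4) removes the restriction that group policy can no longer deliver.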

    These are only the affected Windows in-box applications and components that I have identified. The full list probably includes 99% of all third party RPC applications ever written.

    Parole

Some security audit consulting company may ask you to turn this policy on to be compliant with their standards. Make sure you show them this article and make them explain why. You can also point out that our Security Compliance Manager tool does not recommend enabling "Authenticated without Exceptions" even in Specialized Security Limited Functionality networks (and SSLF is far too restrictive for most businesses). This setting is really only useful on unmanaged, standalone, non-domain-joined computers, such as a DMZ network where you want to close an RPC connection vector - probably just web servers, configured with local policy.

You should always get an in-depth explanation of any third-party security audit's findings and recommendations; many a CritSit case here started with a customer implicitly trusting an auditor's recommendations. That auditor is not going to be there to troubleshoot for you when everything goes to crap. Disconnecting all your DCs from the network makes them more secure. So does disabling all your user accounts. Neither is practical.

If you absolutely must turn on Restrictions for unauthenticated RPC clients, make sure it is set only to "Authenticated", and guarantee that RPC endpoint mapper client authentication is also enabled. Then test like your job depends on it - because it does. Your applications may still fail even with this setting in its less restrictive mode. Not all group policies are intended for domains.

    By the way, if you are a software development company you should be giving the Security Development Lifecycle a frank appraisal. It is a completely free force for good.

    Until next time.

    Ned "2005? I am feeling old" Pyle

    Disk Image Backups and Multi-Master Databases (or: how to avoid early retirement)


    Hi folks, Ned here again. We published a KB a while back around the dangers of using virtualized snapshots with DFSR:

    Distributed File System Replication (DFSR) no longer replicates files after restoring a virtualized server's snapshot

Customers have asked me some follow-up questions, which I address today. Not because the KB is missing info (it's flawless, I wrote it ;-P), but because they are now nervous about their DCs and backups. With good reason, it turns out.

    Today I discuss the risks of restoring an entire disk image of a multi-master server. In practical Windows OS terms, this refers to Domain Controllers, servers running DFSR, or servers running FRS; the latter two servers might be member servers or also DCs. All of them use databases to interchange files or objects with no single server being the only originator of data.

    The Dangerous Way to Backup Multi-Master Servers

• Backing up only a virtualized multi-master server's VHD file from outside the running OS. For example, running Windows Server Backup or DPM on a Hyper-V host machine and backing up all the guest VHD files. This includes full volume backups of the Hyper-V host.
• Backing up only a multi-master server's disk image from outside the running OS. For example, running a SAN disk block-based backup that captures the server's disk partitions as raw data blocks and does not run a VSS-based backup within the running server OS.

Note: It is OK to take these kinds of outside backups as long as you are also getting a backup that runs within each running multi-master guest computer. Naturally, that internal backup requirement makes the outside backup redundant, though.

    What happens

    What's the big deal? Haven't you read somewhere that we recommend VSS full disk backups?

    Yes and no. And no. And furthermore, no.

    Starting in Windows Server 2008, we incorporated special VSS writer and Hyper-V integration components to prevent insidiously difficult-to-fix USN issues that came from restoring domain controllers as "files". Rather than simply chop a DC off at the knees with USN Rollback protection, the AD developers had a clever idea: the integration components tell the guest OS that the server is a restored backup and resets its invocation ID.

    After restore, you'll see this Directory Services 1109 event when the DC boots up:

    image

    This only prevents a problem; it's not the actual solution. Meaning that this DC immediately replicates inbound from a partner and discards all of its local differences that came from the restored "backup". Anything created on that DC before it last replicated outbound is lost forever. Quite like these "oh crap" steps we have here for the truly desperate who are fighting snapshot USN rollbacks; much better than nothing.

    Now things get crummy:

    • This VSS+Hyper-V behavior only works if you back up the running Windows Server 2008 and 2008 R2 DC guests. If backed up while turned off, the restore will activate USN rollback protection as noted in KB875495 (events 2095, 1113, 1115, 2103) and trash AD on that DC.
• Windows Server 2008 and 2008 R2 only implement this protection as part of the Hyper-V integration components, so third-party full disk image restores or other virtualization products have to implement it themselves. They may not, leading to USN rollback protection as noted in KB875495 (events 2095, 1113, 1115, 2103) and trashed AD on that DC.
    • Windows Server 2003 DCs do not have this restore capability even as part of Hyper-V. Restoring their VHD as a file immediately invokes USN rollback protection as noted in KB875495 (events 2095, 1113, 1115, 2103), again leading to trashed AD on that DC.
    • DFSR (for SYSVOL or otherwise) does not have this restore capability in any OS version. Restoring a DFSR server's VHD file or disk image leads to the same database destruction as noted in KB2517913 (events 2212, 2104, 2004, 2106).
    • FRS (for SYSVOL or otherwise) does not have this restore capability in any OS version. Restoring an FRS server's VHD file or disk image does not stop FRS replication for new files. However, all subfolders under the FRS-replicated folder (such as SYSVOL) - along with their file and folder contents - disappear from the server. This deletion will not replicate outbound, but if you add a new DC and use this restored server as a source DC, the new DC will have inconsistent data. There is no indication of the issue in the event logs. Files created in those subfolders on working servers will not replicate to this server, nor will their parent folders. To repair the issue, perform a "D2 burflag" operation on the restored server for all FRS replicas, as described in KB290762.

Multi-master databases are some of the most complex software in the world, and one-size-fits-all backup and restore solutions are not appropriate for them.

    The Safe Way to Backup Multi-Master Servers

When dealing with any Windows server that hosts a multi-master database, the safest method is taking a full/incremental backup (specifically including System State) using VSS within the running operating system itself. System State backs up all aspects of a DC (including SYSVOL, whether replicated by DFSR or FRS), but does not include custom DFSR or FRS replicated content, which is why we recommend full/incremental backups of all the volumes. This goes for virtualized guests or physical servers. Avoid relying solely on techniques that back up the entire server as a single virtualized guest VHD file or back up the raw disk image of that server. As I've shown above, this makes the backups easier, but you are making the restore much harder.
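For example, a minimal sketch using the in-box Windows Server Backup command line, run inside the guest OS. The drive letters here are hypothetical, and note that the -systemState switch arrived in Windows Server 2008 R2; on plain 2008, use wbadmin start systemstatebackup instead:

wbadmin start backup -backupTarget:E: -include:C:,D: -systemState -quiet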

    And when it gets to game time, the restore is what keeps you employed: your boss doesn't care how easy you made your life with backups that don’t work.

    Final thoughts

Beware any vendor that claims they can do zero-impact server restores like those I mentioned in the "Dangerous" section. Make them prove they can restore a single domain controller in a two-DC domain – where you created new users and group policies after the backup – without any issues. Don't take the word of some salesman: make them demonstrate my scenario above. You don’t want to build your backup plans around something that doesn’t work as advertised.

    Our fearless writers are banging away on TechNet as I write this to ensure we're not giving out any misleading info around virtualized server backups and restores. If you find any articles that look scary, please feel free to send us an email and I'll see to the edits.

    Until next time.

    - Ned "one of these servers is not like the other" Pyle

    Friday Mail Sack: Tuesday To You Edition


Hi folks, Ned here again. It’s a long weekend here in the United States, so today I talk to you (er, tell myself) about a domain join issue you can only see in Win7/R2 or later, what USMT hard link migrations really do, how to poke LDAP in legacy PowerShell, time zone migration, and an emerging issue for which we need your feedback.

    Question

    None of our Windows Server 2008 R2 or Windows 7 computers can join the domain – they all show error:

    “The following error occurred attempting to join the domain "contoso.com": The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.”

    image

Windows Vista, Windows Server 2008, and older operating systems join without issue in the exact same domain, using the same user credentials.

    Answer

    Not a very actionable error – which service do you mean, Windows!? If you look at the System event log there are no errors or mention of broken services. Fortunately, any domain join operations are logged in another spot – %systemroot%\debug\netsetup.log. If you crack open that log and look for references to “service” you find:

    05/27/2011 16:00:39:403 Calling NetpQueryService to get Netlogon service state.
    05/27/2011 16:00:39:403 NetpJoinDomainLocal: NetpQueryService returned: 0x0.
    05/27/2011 16:00:39:434 NetpSetLsaPrimaryDomain: for 'CONTOSO' status: 0x0
    05/27/2011 16:00:39:434 NetpJoinDomainLocal: status of setting LSA pri. domain: 0x0
    05/27/2011 16:00:39:434 NetpManageLocalGroupsForJoin: Adding groups for new domain, removing groups from old domain, if any.
    05/27/2011 16:00:39:434 NetpManageLocalGroups: Populating list of account SIDs.
    05/27/2011 16:00:39:465 NetpManageLocalGroupsForJoin: status of modifying groups related to domain 'CONTOSO' to local groups: 0x0
    05/27/2011 16:00:39:465 NetpManageLocalGroupsForJoin: INFO: No old domain groups to process.
    05/27/2011 16:00:39:465 NetpJoinDomainLocal: Status of managing local groups: 0x0
    05/27/2011 16:00:39:637 NetpJoinDomainLocal: status of setting ComputerNamePhysicalDnsDomain to 'contoso.com': 0x0
    05/27/2011 16:00:39:637 NetpJoinDomainLocal: Controlling services and setting service start type.
    05/27/2011 16:00:39:637 NetpControlServices: start service 'NETLOGON' failed: 0x422
    05/27/2011 16:00:39:637 NetpJoinDomainLocal: initiating a rollback due to earlier errors

    Aha – the Netlogon service. Without that service running, you cannot join a domain. What’s 0x422?

    c:\>err.exe 0x422

    ERROR_SERVICE_DISABLED winerror.h
    # The service cannot be started, either because it is
    # disabled or because it has no enabled devices associated
    # with it.

    Nice, that’s our guy. It appears that the service was disabled and the join process is trying to start it. And it almost worked too – if you run services.msc, it will say that Netlogon is set to “Automatic” (and if you look at another machine you have not yet tried to join, it is set to “Disabled” instead of the default “Manual”). The problem here is that the join code is only setting the start state through direct registry edits instead of using Service Control Manager. This is necessary in Win7/R2 because we now always go through the offline domain join code (even when online) and for reasons that I can’t explain without showing you our source code, we can’t talk to SCM while we’re in the boot path or we can have hung startups. So the offline code set the start type correctly and the next boot up would have joined successfully – but since the service is still disabled according to SCM, you cannot start it. It’s one of those “it hurts if I do this” type issues.

    And why did the older operating systems work? They don’t support offline domain join and are allowed to talk to the Service Control Manager whenever they like. So they tell him to set the Netlogon service start type, then tell him to start the service – and he does.

    The lesson here is that a service set to Manual by default should not be set to disabled without a good reason. It’s not like it’s going to accidentally start in either case, nor will anyone without permissions be able to start it. You are just putting a second lock on the bank vault. It’s already safe enough.
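If you've already hit this, a quick sketch of checking and correcting the start type with the in-box service control tool (note the space after start= is required by sc.exe syntax):

sc.exe qc Netlogon
sc.exe config Netlogon start= demand
sc.exe start Netlogon

Setting the start type back through SCM this way, then retrying the join, avoids the reboot dance.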

    Question

    USMT is always going on about hard link migrations. I’ve used them and those migrations are fast… but what the heck is it and why do I care?

    Answer

    A hard link is simply a way for NTFS to point to the same file from multiple spots, always on the same volume. It has nothing to do with USMT (who is just a customer). Instead of making many copies of a file, you are making copies of how you get to the file. The file itself only exists once. Any changes to the file through one path or another are always reflected on the same physical file on the disk. This means that when USMT is storing a hard link “copy” of a file it is just telling NTFS to make another pointer to the same file data and is not copying anything – which makes it wicked fast.

    Let’s say I have a file like so:

    c:\hithere\bwaamp.txt

    If I open it up I see:

    image

    Really though, it’s NTFS pointing to some file data with some metadata that tells you the name and path. Now I will use FSUTIL.EXE to create a hard link:

    C:\>fsutil.exe hardlink create c:\someotherplace\bwaamp.txt c:\hithere\bwaamp.txt
    Hardlink created for c:\someotherplace\bwaamp.txt <<===>> c:\hithere\bwaamp.txt

    I can use that other path to open the same data (it helps if you don’t think of these as files):

    image

    I can even create a hard link where the file name is not the same (remember – we’re pointing to file data and giving the user some friendly metadata):

    C:\>fsutil.exe hardlink create c:\yayntfs\sneaky!.txt c:\hithere\bwaamp.txt
    Hardlink created for c:\yayntfs\sneaky!.txt <<===>> c:\hithere\bwaamp.txt

    And it still goes to the same spot.

    image

What if I edit this new “sneaky!.txt” file and then open the original “bwaamp.txt”?

    image

    Perhaps a terrible Visio diagram will help:

    hardlink

    When you delete one of these representations of the file, you are actually deleting the hard link. When the last one is deleted, you are deleting the actual file data.
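On Windows 7 and later, fsutil can also enumerate every link that points at the same data. A quick sketch using the paths from above (the exact output format may vary slightly):

C:\>fsutil.exe hardlink list c:\hithere\bwaamp.txt
\hithere\bwaamp.txt
\someotherplace\bwaamp.txt
\yayntfs\sneaky!.txt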

    It’s magic, smoke and mirrors, hoodoo. If you want a more disk-oriented (aka: yaaaaaaawwwwnnn) explanation, check out this article. Rob and Joseph have never met a File Record Segment Header they didn’t like. I bet they are a real hit at parties…

    Question

    How can I use PowerShell to detect if a specific DC is reachable via LDAP? Don’t say AD PowerShell, this environment doesn’t have Windows 7 or 2008 R2 yet! :-)

    Answer

One way is going straight to .NET and using the DirectoryServices namespace:

New-Object System.DirectoryServices.DirectoryEntry("LDAP://yourdc:389/dc=yourdomaindn")

    For example:

    image
    Yay!

    image
    Boo!

    Returning anything but success is a problem you can then evaluate.
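For instance, a minimal sketch that turns this into a pass/fail check. The DC and DN names are placeholders, and this assumes PowerShell 2.0 for try/catch (on 1.0 you'd use trap instead):

$entry = New-Object System.DirectoryServices.DirectoryEntry("LDAP://yourdc:389/dc=contoso,dc=com")
try {
    # RefreshCache forces the actual LDAP bind; it throws if the DC is unreachable
    $entry.RefreshCache()
    Write-Host "LDAP bind succeeded"
} catch {
    Write-Host "LDAP bind failed: $($_.Exception.Message)"
}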

    As always, I welcome more in the Comments. I suspect people have a variety of techniques (third parties, WMI LDAP provider, and so on).

    Question

    Is USMT supposed to migrate the current time zone selection?

    Answer

    Nope. Whenever you use timedate.cpl, you are updating this registry key:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation

    Windows XP has very different data in that key when compared to Vista and Windows 7:

Windows XP

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"ActiveTimeBias"=dword:000000f0
"Bias"=dword:0000012c
"DaylightBias"=dword:ffffffc4
"DaylightName"="Eastern Daylight Time"
"DaylightStart"=hex:00,00,03,00,02,00,02,00,00,00,00,00,00,00,00,00
"StandardBias"=dword:00000000
"StandardName"="Eastern Standard Time"
"StandardStart"=hex:00,00,0b,00,01,00,02,00,00,00,00,00,00,00,00,00

Windows 7

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"ActiveTimeBias"=dword:000000f0
"Bias"=dword:0000012c
"DaylightBias"=dword:ffffffc4
"DaylightName"="@tzres.dll,-111"
"DaylightStart"=hex:00,00,03,00,02,00,02,00,00,00,00,00,00,00,00,00
"DynamicDaylightTimeDisabled"=dword:00000000
"StandardBias"=dword:00000000
"StandardName"="@tzres.dll,-112"
"StandardStart"=hex:00,00,0b,00,01,00,02,00,00,00,00,00,00,00,00,00
"TimeZoneKeyName"="Eastern Standard Time"

The developers from the Time team simply didn’t want USMT to assume anything, as they knew there were significant version differences; to do so would have required an expensive USMT plugin DLL for a task that would likely be redundant to most customers’ imaging techniques. There are manifests (such as "INTERNATIONAL-TIMEZONES-DL.MAN") that migrate any additional custom time zones to the up-level computers, but again, this does not include the currently specified time zone. Not even when migrating from Win7 to Win7.

    But that doesn’t mean that you are out of luck. Come on, this is me! :-)

    To migrate the current zone setting from XP to any OS you have the following options:

    To migrate the current zone setting from Vista to Vista, Vista to 7, or 7 to 7, you have the following options:

    • Any of the three mentioned above for XP
• Use this sample USMT custom XML (after making sure that nothing has changed between this blog post being written and you reading it). Woo, with fancy OS detection code!

<?xml version="1.0" encoding="utf-8" ?>
<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/currenttimezonesample">
  <component type="Application" context="System">
    <displayName>Copy the currently selected timezone as long as Vista or later OS</displayName>
    <role role="Settings">
      <!-- Check as this is only valid for an up-level OS >= Windows Vista -->
      <detects>
        <detect>
          <condition>MigXmlHelper.IsOSLaterThan("NT", "6.0.0.0")</condition>
        </detect>
      </detects>
      <rules>
        <include>
          <objectSet>
            <pattern type="Registry">HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\* [*]</pattern>
          </objectSet>
        </include>
      </rules>
    </role>
  </component>
</migration>
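You would then pass that custom XML alongside the defaults on both ends. A sketch, where "timezone.xml" is whatever you name the file above:

scanstate.exe c:\store /i:migdocs.xml /i:migapp.xml /i:timezone.xml /o
loadstate.exe c:\store /i:migdocs.xml /i:migapp.xml /i:timezone.xml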

    Question for our readers

    We’ve had a number of cases come in this week with the logon failure:

    Logon Process Initialization Error
    Interactive logon process initialization has failed.
    Please consult the Event Logs for more details.

    You may also find an application event if you connect remotely to the computer (interactive logon is impossible at this point):

    ID: 4005
    Source: Microsoft-Windows-Winlogon
    Version: 6.0
    Message: The Windows logon process has unexpectedly terminated.

    In the cases we’ve seen this week, the problem was seen after restoring a backup when using a specific third party backup product. The backup was restored to either Hyper-V or VMware guests (but this may be coincidental). After the restore large portions of the registry were missing and most of our recovery tools (SFC, Recovery Console, diskpart, etc.) would not function. If you have seen this, please email us with the backup product and version you are using. We need to contact this vendor and get this fixed, and your evidence will help. I can’t mention the suspected company name here yet, as if we’re wrong I’d be creating a legal firestorm, but if all the private emails say the same company we’ll have enough justification for them to examine this problem and fix it.

    ------------

    Have a safe weekend, and take a moment to think of what Memorial Day really means besides grilling, racing, and a day off.

    Ned “I bet SGrinker has the bratwurst hookup” Pyle


    Friday Mail Sack: Best Post This Year Edition


    Hi folks, Ned here and welcoming you to 2012 with a new Friday Mail Sack. Catching up from our holiday hiatus, today we talk about:

    So put down that nicotine gum and get to reading!

    Question

    Is there an "official" stance on removing built-in admin shares (C$, ADMIN$, etc.) in Windows? I’m not sure this would make things more secure or not. Larry Osterman wrote a nice article on its origins but doesn’t give any advice.

    Answer

    The official stance is from the KB that states how to do it:

    Generally, Microsoft recommends that you do not modify these special shared resources.

    Even better, here are many things that will break if you do this:

    Overview of problems that may occur when administrative shares are missing
    http://support.microsoft.com/default.aspx?scid=kb;EN-US;842715

    That’s not a complete list; it wasn’t updated for Vista/2008 and later. It’s so bad though that there’s no point, frankly. Removing these shares does not increase security, as only administrators can use those shares and you cannot prevent administrators from putting them back or creating equivalent custom shares.

    This is one of those “don’t do it just because you can” customizations.

    Question

    The Windows PowerShell Get-ADDomainController cmdlet finds DCs, but not much actual attribute data from them. The examples on TechNet are not great. How do I get it to return useful info?

    Answer

You have to use another cmdlet in tandem, without pipelining: Get-ADComputer. The Get-ADDomainController cmdlet is good mainly for searching. The Get-ADComputer cmdlet, on the other hand, does not accept pipeline input from Get-ADDomainController. Instead, you use a pseudo “nested function” to first find the PDC, then get data about that DC. For example (this is all one command, wrapped):

    get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property * | format-list operatingsystem,operatingsystemservicepack

When you run this, PowerShell first processes the command within the parentheses, which finds the PDC. Then it runs get-adcomputer, using the “Name” property returned by get-addomaincontroller. Then it passes the results through the pipeline to be formatted. So it’s 1-2-3: discover, query, format.

    Voila. Here I return the OS of the PDC, all without having any idea which server actually holds that role:

    clip_image002[6]

    Moreover, before the Internet clubs me like a baby seal: yes, a more efficient way to return data is to ensure that the –property list contains only those attributes desired:

    image
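That leaner query would look something like this (a sketch of the same command, just with the property list trimmed to what’s needed):

get-adcomputer (get-addomaincontroller -Discover -Service "PrimaryDC").name -property operatingsystem,operatingsystemservicepack | format-list operatingsystem,operatingsystemservicepack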

    Get-ADDomainController can find all sorts of interesting things via its –service argument:

    PrimaryDC
    GlobalCatalog
    KDC
    TimeService
    ReliableTimeService
    ADWS

The Get-ADDomain cmdlet can also find FSMO role holders and other big-picture domain stuff – for example, the RID Master that you need to monitor.
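A quick sketch of pulling the role holders that way:

Get-ADDomain | Format-List PDCEmulator, RIDMaster, InfrastructureMaster
Get-ADForest | Format-List SchemaMaster, DomainNamingMaster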

    Question

I know about Kerberos “token bloat” with user accounts that are members of too many groups. Does this also affect computers added to too many groups? What would be some practical effects of that? We want to use a lot of them in the near future for some application … stuff.

    Answer

Yes, things will break. To demonstrate, I used PowerShell to create 2,000 groups in my domain and added a computer named “7-01” to them:

    image
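Roughly like this – a sketch, not my exact script; the group names and OU path are made up:

1..2000 | ForEach-Object {
    New-ADGroup -Name "TestGroup$_" -GroupScope Global -Path "OU=TestGroups,DC=contoso,DC=com"
    Add-ADGroupMember -Identity "TestGroup$_" -Members "7-01$"
}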

I then restarted the 7-01 computer. Uh oh, the System Event log is un-pleased. At this point, 7-01 is no longer applying computer group policy, getting startup scripts, or allowing any of its services to log on remotely to DCs:

    image 

    Oh, and check out this gem:

    image

    I’m sure no one will go on a wild goose chase after seeing that message. Applications will be freaking out even more, likely with the oh-so-helpful error 0x80090350:

    “The system detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.”

    Don’t do it. MaxTokenSize is probably in your future if you do, and it has limits that you cannot design your way out of. IT uniqueness is bad.

    Question

We have XP systems using two partitions (C: and D:) migrating to Windows 7 with USMT. The OS is on C: and the user profiles are on D:. We’ll use that D: partition to hold the USMT store. After migration, we’ll remove the second partition and expand the first partition into the space that frees up.

When restoring via loadstate, will the user profiles end up on C: or on D:? If the profiles end up on D:, we will not be able to delete the second partition, obviously, and we want to stop doing that regardless.

    Answer

    You don’t have to do anything; it just works. Because the new profile destination is on C, USMT just slots everything in there automagically :). The profiles will be on C and nothing will be on D except the store itself and any non-profile folders*:

    clip_image001
    XP, before migrating

    clip_image001[5]
    Win7, after migrating

    If users have any non-profile folders on D, that will require a custom rerouting xml to ensure they are moved to C during loadstate and not obliterated when D is deleted later. Or just add a MOVE line to whatever DISKPART script you are using to expand the partition.
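For example, a one-line sketch for the deployment script, run before D: is deleted (the folder names are hypothetical):

robocopy.exe D:\Reports C:\Reports /E /MOVE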

    Question

    Should we stop the DFSR service before performing a backup or restore?

    Answer

Manually stopping the DFSR service is not recommended. When backing up using the DFSR VSS Writer – which is the only supported way – replication is stopped automatically, so there is no reason to stop the service or manually pause replication:

    Event ID=1102
    Severity=Informational
    The DFS Replication service has temporarily stopped replication because another
    application is performing a backup or restore operation. Replication will resume
    after the backup or restore operation has finished.

    Event ID=1104
    Severity=Informational
    The DFS Replication service successfully restarted replication after a backup
    or restore operation.

    Another bit of implied evidence – Windows Server Backup does not stop the service.

    Stopping the DFSR service for extended periods leaves you open to the risk of a USN journal wrap. And what if someone/something thinks that the service being stopped is “bad” and starts it up in the middle of the backup? Probably nothing bad happens, but certainly nothing good. Why risk it?

    Question

In an environment where AGPM controls all GPOs, what is the best practice when application setup routines make edits "under the hood" to GPOs, such as the Default Domain Controllers GPO? For example, Exchange setup makes changes to User Rights Assignment (SeSecurityPrivilege). Obviously, if this setup process makes such edits on the live GPO in SYSVOL the changes will happen, only to have those critical edits lost and overwritten the next time an admin re-deploys with AGPM.

    Answer

    [via Fabian “Wunderbar” Müller  – Ned]

    From my point of view:

    1. The Default Domain and Default Domain Controller Policies should be edited very rarely. Manual changes as well as automated changes (e.g. by the mentioned Exchange setup) should be well known and therefore the workaround in 2) should be feasible.

2. After those planned changes are performed, you have to use “import from production” to bring the production GPO into the AGPM archive, so the production change is reflected in AGPM. Another way is to periodically “import from production” the default policies, or to implement a manual / human process that requires the “import from production” step before a change to these policies is made using AGPM.

    Not a perfect answer, but manageable.

    Question

In testing the rerouting of folders, I took this example from TechNet and placed it in a separate custom.xml.  When using this custom.xml along with the other defaults (migdocs.xml and migapp.xml unchanged), the EngineeringDrafts folder is copied to %CSIDL_DESKTOP%\EngineeringDrafts, but there’s also a copy at C:\EngineeringDrafts on the destination computer.

I assume this is not expected behavior.  Is there something I’m missing?

    Answer

    Expected behavior, pretty well hidden though:

    http://technet.microsoft.com/en-us/library/dd560751(v=WS.10).aspx

If you have an <include> rule in one component and a <locationModify> rule in another component for the same file, the file will be migrated in both places. That is, it will be included based on the <include> rule and it will be migrated based on the <locationModify> rule.

That original rerouting article could state this more plainly, I think. Hardly anyone does this relativeMove operation; it’s very expensive for disk space – one of those “you can, but you shouldn’t” capabilities of USMT. The first example also has an invalid character in it (the apostrophe in “user’s” on line 12, position 91 – argh!).

    Don’t just comment out those areas in migdocs though; you are then turning off most of the data migration. Instead, create a copy of the migdocs.xml and modify it to include your rerouting exceptions, then use that as your custom XML and stop including the factory migdocs.xml.

    There’s an example attached to this blog post down at the bottom. Note the exclude in the System context and the include/modify in the user context:

    image

    image

    Don’t just modify the existing migdocs.xml and keep using it un-renamed either; that becomes a versioning nightmare down the road.

    Question

    I'm reading up on CAPolicy.inf files, and it looks like there is an error in the documentation that keeps being copied around. TechNet lists RenewalValidityPeriod=Years and RenewalValidityPeriodUnits=20 under the "Windows Server 2003" sample. This is the opposite of the Windows 2000 sample, and intuitively the "PeriodUnits" should be something like "Years" or "Weeks", while the "Period" would be an integer value. I see this on AskDS here and here also.

    Answer

    [via Jonathan “scissor fingers” Stephens  – Ned]

    You're right that the two settings seem like they should be reversed, but unfortunately this is not correct. All of the *Period values can be set to Minutes, Hours, Days, Weeks, Months or Years, while all of the *PeriodUnits values should be set to some integer.

    Originally, the two types of values were intended to be exactly what one intuitively believes they should be -- *PeriodUnits was to be Day, Weeks, Months, etc. while *Period was to be the integer value. Unfortunately, the two were mixed up early in the development cycle for Windows 2000 and, once the error was discovered, it was really too late to fix what is ultimately a cosmetic problem. We just decided to document the correct values for each setting. So in actuality, it is the Windows 2000 documentation that is incorrect as it was written using the original specs and did not take the switch into account. I’ll get that fixed.
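So a correct CAPolicy.inf fragment looks backwards but works – a minimal sketch of the renewal section, with the integer in *PeriodUnits and the word in *Period as documented:

[certsrv_server]
RenewalKeyLength=2048
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=20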

    Question

    Is there a way to control the number, verbosity, or contents of the DFSR cluster debug logs (DfsrClus_nnnnn.log and DfsrClus_nnnnn.log.gz in %windir%\debug)?

    Answer

Nope, sorry. It’s all statically defined:

    • Severity = 5
    • Max log messages per log = 10000
    • Max number of log files = 999

    Question

In your previous article you say that any registry modifications should be completed with a resource restart (take the resource offline and bring it back online) instead of a direct service restart. However, the official whitepaper (on page 16) says that the CA service should be restarted by using "net stop certsvc && net start certsvc".

Also, I want to clarify something about a clustered CA database backup/restore. Say a DB was damaged or destroyed, and I have a full backup of the CA DB. Before restoring, do I stop only the AD CS service resource (cluadmin.msc), or do I stop the CA service directly (net stop certsvc)?

    Answer

    [via Rob “there's a Squatch in These Woods” Greene  – Ned]

The CertSvc service has no idea that it belongs to a cluster. That’s why you set up the CA as a generic service within Cluster Administrator and configure the CA registry hive within Cluster Administrator.

When you update the registry keys on the active CA cluster node, the Cluster service monitors the registry key changes. When the resource is taken offline, the Cluster service makes a new copy of the registry keys so that the other node gets the update. When you stop and start the CA service directly, the Cluster service has no idea why the service stopped and started, since it was done outside of the cluster, and those registry key settings are never updated on the stand-by node. General guidance for clusters is to manage the resource state (stop/start) within Cluster Administrator and not through Services.msc, NET STOP, SC, etc.

    As far as the CA Database restore: just logon to the Active CA node and run the certutil or CA MMC to perform the operation. There’s no need to touch the service manually.

    Other stuff

    The Microsoft Premier Field Organization has started a new blog that you should definitely be reading.

    Welcome to your nightmare (Thanks Mark!)

    Totally immature and therefore funny. Doubles as a gender test.

    Speaking of George Lucas re-imaginings, check out this awesome shot-by-shot comparison of Raiders and 30 other previous adventure films:


    Indy whipped first!

    I am completely addicted to Panzer Corps; if you ever played Panzer General in the 90’s, you will be too.

    Apropos throwback video gaming and even more re-imagining, here is Battlestar Galactica as a 1990’s RPG:

       
    The mail sack becomes meta of meta of meta

    Like Legos? Love Simon Pegg? This is for you.

    Best sci-fi books of 2011, according to IO9.

    What’s your New Year’s resolution? Mine is to stop swearing so much.

     

    Until next time,

    - Ned “$#%^&@!%^#$%^” Pyle

    Monthly Mail Sack: Yes, I Finally Admit It Edition


    Heya folks, Ned here again. Rather than continue the lie that this series comes out every Friday like it once did, I am taking the corporate approach and rebranding the mail sack. Maybe we’ll have the occasional Collector’s Edition versions.

This week, er, month, I answer your questions on:

    Let’s incentivize our value props!

    Question

    Everywhere I look, I find documentation saying that when Kerberos skew exceeds five minutes in a Windows forest, the sky falls and the four horsemen arrive.

    I recall years ago at a Microsoft summit when I brought that time skew issue up and the developer I was speaking to said no, that isn't the case anymore, you can log on fine. I recently re-tested that and sure enough, no amount of skew on my member machine against a DC prevents me from authenticating.

Looking at the network trace, I see the KRB_AP_ERR_SKEW response to the AS REQ, which is followed by the Kerberos connection being torn down, immediately followed by the connection being re-established and another AS REQ that works just fine and is answered with a proper AS REP.

    My first question is.... Am I missing something?

    My second question is... While I realize that third party Kerb clients may or may not have this functionality, are there instances where it doesn't work within Windows Kerb clients? Or could it affect other scenarios like AD replication?

    Answer

    Nope, you’re not missing anything. If I try to logon from my highly-skewed Windows client and apply group policy, the network traffic will look approximately like:

Frame  Source  Destination  Packet Data Summary
1      Client  DC           AS Request Cname: client$ Realm: CONTOSO.COM Sname:
2      DC      Client       KRB_ERROR - KRB_AP_ERR_SKEW (37)
3      Client  DC           AS Request Cname: client$ Realm: CONTOSO.COM Sname: krbtgt/CONTOSO.COM
4      DC      Client       AS Response Ticket[Realm: CONTOSO.COM, Sname: krbtgt/CONTOSO.COM]
5      Client  DC           TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
6      DC      Client       KRB_ERROR - KRB_AP_ERR_SKEW (37)
7      Client  DC           TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
8      DC      Client       TGS Response Cname: client$

When your client sends a time stamp that is outside the range of Maximum tolerance for computer clock synchronization, the DC comes back with that KRB_AP_ERR_SKEW error – but the error also contains an encrypted copy of the DC’s own time stamp. The client uses that to create a valid time stamp to send back. This doesn’t decrease the security of the design because we are still using encryption and requiring knowledge of the secrets, plus there is still only – by default – 5 minutes for an attacker to break the encryption and start impersonating the principal or attempt replay attacks. Which is not feasible with even XP’s 11-year-old cipher suites, much less Windows 8’s.

    This isn’t some Microsoft wackiness either – RFC 4430 states:

If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out.

    If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message.

    The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.

    Hmmm… SHOULD. Here’s where things get more muddy and I address your second question. No one actually has to honor this skew correction:

    1. Windows 2000 didn’t always honor it. But it’s dead as fried chicken, so who cares.
    2. Not all third parties honor it.
3. Windows XP and Windows Server 2003 do honor it, but there were bugs that sometimes prevented it (long gone, AFAIK). Later Windows OSes do too, of course, and I know of no regressions.
    4. If the clock of the client computer is faster than the clock time of the domain controller plus the lifetime of Kerberos ticket (10 hours, by default), the Kerberos ticket is invalid and auth fails.
5. Some non-client logon application scenarios enforce the strict skew tolerance and don’t care to adjust, because of other time needs tied to Kerberos and security. AD replication is one of them – event LSASRV 40960 with extended error 0xC0000133 comes to mind in this scenario, as does trying to run DSSite.msc “replicate now” and getting back error 0x576 “There is a time and / or date difference between the client and the server.” I have recent case evidence of Dcpromo enforcing the 5 minutes with Kerberos strictly, even in Windows Server 2008 R2, although I have not personally tried to validate it. I’ve seen it with appliances and firewalls too.

    With that RFC’s indecisiveness and the other caveats, we beat the “just make sure it’s no more than 5 minutes” drum in all of our docs and here on AskDS. It’s too much trouble to get into what-ifs.

    We have a KB tucked away on this here but it is nearly un-findable.

    Awesome question.

    Question

    I’ve found articles on using Windows PowerShell to locate all domain controllers in a domain, and even all GCs in a forest, but I can’t find one to return all DCs in a forest. Get-AdDomainController seems to be limited to a single domain. Is this possible?

    Answer

It’s trickier than you might think. I can think of two ways to do this; perhaps commenters will have others. The first is to get the domains in the forest, then find one domain controller in each domain and ask it to list all the domain controllers in its own domain. This gets around Get-ADDomainController’s single-domain limitation (single line, wrapped):

    (get-adforest).domains | foreach {Get-ADDomainController -discover -DomainName $_} | foreach {Get-addomaincontroller -filter * -server $_} | ft hostname

The second is to go directly to the native .NET AD DS forest class to return the domains for the forest, then loop through each one returning the domain controllers (single line, wrapped):

    [system.directoryservices.activedirectory.Forest]::GetCurrentForest().domains | foreach {$_.DomainControllers} | foreach {$_.hostname}

This also led to updated TechNet content. Good work, Internet!

    Question

Hi, I've been reading up on RID issuance management and the new RID Master changes in Windows Server 2012. They still leave me with a question, however: why are RIDs even needed in a SID? Can't the SID be incremented on its own? The domain identifier seems to be an adequately large number, larger than the 30-bit RID anyway. I know there's a good reason for it, but I just can't find any material that says why there are separate domain and relative IDs in a SID.

    Answer

The main reason is that a SID needs the domain identifier portion to have contextual meaning. By using the same domain identifier on all security principals from a domain, we can quickly and easily identify SIDs issued from one domain or another within a forest. This is useful for a variety of security reasons under the hood.

    That also allows us a useful technique called “SID compression”, where we want to save space in a user’s security data in memory. For example, let’s say I am a member of five domain security groups:

    DOMAINSID-RID1
    DOMAINSID-RID2
    DOMAINSID-RID3
    DOMAINSID-RID4
    DOMAINSID-RID5

    With a constant domain identifier portion on all five, I now have the option to use one domain SID portion on all the other associated ones, without using all the memory up with duplicate data:

    DOMAINSID-RID1
    “-RID2
    “-RID3
    “-RID4
    “-RID5

The consistent domain portion also fixes a big problem: if SIDs held no domain context, keeping track of where they were issued would be a much bigger task. We’d need some sort of big master database (“The SID Master”?) in an environment that understood all forests and domains and local computers and everything. Otherwise we’d have a higher chance of duplication through differing parts of a company. Since the domain portion of the SID is unique and the RID portion is an unsigned integer that only climbs, it’s pretty easy for RID Masters to take care of that case in each domain.

    You can read more about this in coma-inducing detail here: http://technet.microsoft.com/en-us/library/cc778824.aspx.

    Question

When I want to set folder and application redirection for our users in a different forest (with a forest trust) in our Remote Desktop Services server farm, I cannot find users or groups from the other domain. Is there a workaround?

    Answer

The Object Picker in this case doesn’t allow you to select objects from the other forest – this is a limitation of the UI that the Folder Redirection folks put in place. They write their own FR GP management tools, not the GP team.

Windows, by default, does not process group policy from user logon across a forest; it automatically uses loopback Replace. Therefore, you can configure a Folder Redirection policy in the resource domain for users and link that policy to the OU in the domain where the Terminal Servers reside. Only users from a different forest should receive the folder redirection policy, which you can then base on a group in the local forest.

    Question

    Does USMT support migrating multi-monitor settings from Windows XP computers, such as which one is primary, the resolutions, etc.?

    Answer

USMT 4.0 does not support migrating any monitor settings from any OS to any OS (screen resolution, monitor layout, multi-monitor, etc.). Migrating hardware settings and drivers from one computer to another is dangerous, so USMT does not attempt it. I strongly discourage you from trying to make this work through custom XML for the same reason – you may end up with unusable machines.

    Starting in USMT 5.0, a new replacement manifest – Windows 7 to Windows 7, Windows 7 to Windows 8, or Windows 8 to Windows 8 only – named “DisplayConfigSettings_Win7Update.man” was added. For the first time in USMT, it migrates:

    <pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Connectivity\* [*]</pattern>
    <pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Configuration\* [*]</pattern>

This is OK on Win7 and Win8 because the OS itself knows what valid and invalid are in that context and discards/fixes things as necessary. I.e. this is safe only because USMT doesn’t actually do anything but copy some values, relying on the OS to fix things after migration is over.

    Question

Our proprietary application is having memory pressure issues, and it manifests when someone runs gpupdate or waits for GP to refresh; sometimes it’s bad enough to cause a crash.  I was curious if there is a way to stop the policy refresh from occurring.

    Answer

Only in Vista and later does preventing total refresh become vaguely possible; you could prevent the group policy service from running at all (no, I am not going to explain how). The internet is filled with thousands of people repeating a myth that preventing GP refresh is possible with an imaginary registry value on Win2003/XP – it isn’t.

    What you could do here is prevent background refresh altogether. See the policies in the “administrative templates\system\group policy” section of GP:

1. You could enable the policy “group policy refresh interval for computers” and apply it to that one server, setting the background refresh interval to 45 days (the max). That way the server would be far more likely to reboot in the meantime for a patch Tuesday or whatever, and it would never get a chance to refresh automatically.

    2. You could also enable each of the group policy extension policies (ex: “disk quota policy processing”, “registry policy processing”) and set the “do not apply during periodic background processing” option on each one.  This may not actually prevent GPUPDATE /FORCE though – each CSE may decide to ignore your background refresh setting; you will have to test, as this sounds boring.

Keep in mind for #1 that there are two of those background refresh policies – one per user (“group policy refresh interval for users”), one per computer (“group policy refresh interval for computers”). They both operate in terms of each boot up or each interactive logon, on a per-computer/per-user basis respectively. I.e. if you log on as a user, you apply your policy; that policy will then not refresh for 45 days for that user if you stay logged on the whole time. If you log off at day 22 and log back on, policy applies again, because that is not a refresh – it’s interactive logon foreground policy application.

    Ditto for computers, only replace “logon” with “boot up”. So it will apply the policy at every boot up, but since your computers reboot daily, never again until the next bootup.

    After those thoughts… get a better server or a better app. :)

    Question

    I’m testing Virtualized Domain Controller cloning in Windows Server 2012 on Hyper-V and I have DCs with snapshots. Bad bad bad, I know, but we have our reasons and we at least know that we need to delete them when cloning.

    Is there a way to keep the snapshots on the source computer, but not use VM exports? I.e. I just want the new copied VM to not have the old source machine’s snapshots.

    Answer

    Yes, through the new Hyper-V disk management Windows PowerShell cmdlets or through the management snap-in.

    Graphical method

    1. Examine the settings of your VM and determine which disk is the active one. When using snapshots, it will be an AVHD/X file.

    image

    2. Inspect that disk and you see the parent as well.

    image

    3. Now use the Edit Disk… option in the Hyper-V manager to select that AVHD/X file:

    image

    4. Merge the disk to a new copy:

    image

    image

    Windows PowerShell method

    Much simpler, although slightly counter-intuitive. Just use:

Convert-VHD

    For example, to export the entire chain of a VM's disk snapshots and parent disk into a new single disk with no snapshots named DC4-CLONED.VHDX:

    image
    Violin!
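The command in that screenshot amounts to something like this – a sketch; the paths are hypothetical and the snapshot file name will match your own VM:

Convert-VHD -Path 'D:\VMs\DC4\DC4_Snapshot.avhdx' -DestinationPath 'D:\VMs\DC4-CLONED.VHDX' -VHDType Dynamic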

You don’t actually have to convert the disk type in this scenario (note how I went from dynamic to dynamic). There is also Merge-VHD for more complex differencing disk and snapshot scenarios, but it requires some extra finagling and disk copying, and isn’t usually necessary. The graphical merge option works well there too.

    As a side note, the original Understand And Troubleshoot VDC guide now redirects to TechNet. Coming soon(ish) is an RTM-updated version of the original guide, in web format, with new architecture, troubleshooting, and other info. I robbed part of my answer above from it – as you can tell by the higher quality screenshots than you usually see on AskDS – and I’ll be sure to announce it. Hard.

    Question

It has always been my opinion that if a DC holding a FSMO role goes down, the best approach is to seize the role on another DC, rebuild the failed DC from scratch, then transfer the role back. It’s also been my opinion that as long as you have more than one DC and there has not been any data loss or corruption, it is better not to restore.

    What is the Microsoft take on this?

    Answer

    This is one of those “it depends” scenarios:

1. The downside to restoring from (usually proprietary) backup solutions is that the restore process just isn’t something most customers test and work the kinks out of until it actually happens; tons of time is spent digging out the right tapes, finding the right software, looking up the restore process, contacting the vendor, etc. Often a restore doesn’t work at all, so all the attempts are just wasted effort. I freely admit that my judgment is tainted by my MS Support experience here – customers do not call us to say how great their backups worked, only that they have a down DC and they can’t get their backups to restore.

    The upside is if your recent backup contained local changes that had never replicated outbound due to latency, restoring them (even non-auth) still means that those changes will have a chance to replicate out. E.g. if someone changed their password or some group was created on that server and captured by the backup, you are not losing any changes. It also includes all the other things that you might not have been aware of – such as custom DFS configurations, operating as a DNS server that a bunch of machines were solely pointed to, 3rd party applications pointed directly to the DC by IP/Name for LDAP or PDC or whatever (looking at you, Open Source software!), etc. You don’t have to be as “aware”, per se.

    2. The downside to seizing the FSMO roles and cutting your losses is the converse of my previous point around latent changes; those objects and attributes that could not replicate out but were caught by the backup are gone forever. You also might miss some of those one-offs where someone was specifically targeting that server – but you will hear from them, don’t worry; it won’t be too hard to put things back.

The upside is you get back in business much faster in most cases; I can usually rebuild a Win2008 R2 server and make it a DC before you even find the guy that has the combo to the backup tape vault. You also don’t get the interruptions in service for Windows from missing FSMO roles, such as DCs that were low on their RID pool and now cannot retrieve more (this only matters with default pool sizes, obviously; some customers raise their pool sizes to combat this effect). It’s typically a more reliable approach too – after all, your backup may contain the same time bomb of settings or corruption or whatever that made your DC go offline in the first place. Moreover, the backup is unlikely to contain the most recent changes regardless – backups usually run overnight, so any un-replicated originating updates made during the day are going to be nuked in both cases.

    For all these reasons, we in MS Support generally recommend a rebuild rather than a restore, all things being equal. Ideally, you fix the actual server and do neither!

    As a side note, restoring the RID master used to cause issues that we first fixed in Win2000 SP3. This has unfortunately lived on as a myth that you cannot safely restore the RID master. Nevertheless, if someone impatiently seizes that role, then someone else restores that backup, you get a new problem where you cannot issue RIDs anymore. Your DC will also refuse to claim role ownership with a restored RID Master (or any FSMO role) if your restored server has an AD replication problem that prevents at least one good replication with a partner. Keep those in mind for planning no matter how the argument turns out!
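    If the verdict is to seize, a minimal sketch using the AD PowerShell module (the DC name and role are examples; -Force performs a seizure rather than a graceful transfer, so only use it when the old role holder is really gone):

    # Seize the RID master onto a surviving DC
    Move-ADDirectoryServerOperationMasterRole -Identity "DC02" -OperationMasterRole RIDMaster -Force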

    Question

    I am trying out Windows Server 2012 and its new Minimal Server Interface. Is there a way to use WMI to determine if a server is running with a Full Installation, Core Installation, or a Minimal Shell installation?

    Answer

    Indeed, although it hasn’t made its way to MSDN quite yet. The Win32_ServerFeature class returns a few new properties in our latest operating system. You can use WMIC or Windows PowerShell to browse the installed ones. For example:

    image

    The “99” ID is Server Graphical Shell, which means, in practical terms, “Full Installation”. If 99 is not present but 478 is, it’s a MinShell server. If the “478” ID is also missing, it’s a Core server.

    E.g. if you wanted to apply some group policy that only applied to MinShell servers, you’d set your query to return true if 99 was not present but 478 was present.
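    To make that concrete, here is a minimal PowerShell sketch (assuming the Win32_ServerFeature ID behavior described above) that buckets a server into one of the three installation types:

    # Enumerate installed feature IDs via Win32_ServerFeature
    $ids = (Get-WmiObject -Class Win32_ServerFeature).ID

    if ($ids -contains 99)      { 'Full Installation (Server Graphical Shell present)' }
    elseif ($ids -contains 478) { 'Minimal Server Interface (MinShell)' }
    else                        { 'Core Installation' }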

    Other Stuff

    Speaking of which, Windows Server 2012 General Availability is September 4th. If you manage to miss the run up, you might want to visit an optometrist and/or social media consultant.

    Stop worrying so much about the end of the world and think it through.

    So awesome:


    And so fake :(

    If you are married to a psychotic Solitaire player who poo-poo’ed switching totally to the Windows 8 Consumer Preview because they could not get their mainline fix of card games, we have you covered now in Windows 8 RTM. Just run the Store app and swipe for the Charms Bar, then search for Solitaire.

    image

    It’s free and exactly 17 times better than the old in-box version:

    image
    OMG Lisa, stop yelling at me! 

    Is this the greatest geek advert of all time?


    Yes. Yes it is.

    When people ask me why I stopped listening to Metallica after the Black Album, this is how I reply:

    Hetfield in Milan
    Ride the lightning Mercedes

    We have quite a few fresh, youthful faces here in MS Support these days and someone asked me what “Mall Hair” was when I mentioned it. If you graduated high school between 1984 and 1994 in the Midwestern United States, you already know.

    Finally – I am heading to Sydney in late September to yammer in-depth about Windows Server 2012 and Windows 8. Anyone have any good ideas for things to do? So far I’ve heard “bridge climb”, which is apparently the way Australians trick idiot tourists into paying for death. They probably follow it up with “funnel-web spider petting zoo” and “swim with the saltwater crocodiles”. Lunatics.

    Until next time,

    - Ned “I bet James Hetfield knows where I can get a tropical drink by the pool” Pyle

    Two lines that can save your AD from a crisis


    Editor's note:  This is the first of very likely many "DS Quickies".  "Quickies" are shorter technical blog posts that relate hopefully-useful information and concepts for you to use in administering your networks.  We thought about doing these on Twitter or something, but sadly we're still too technical to be bound by a 140-character limit :-)

    For those of you who really look forward to the larger articles to help explain different facets of Windows, Active Directory, or troubleshooting, don't worry - there will still be plenty of those too. 

     

    Hi! This is Gonzalo writing to you from the support team for Latin America.

    Recently we got a call from a customer where one of the administrators had accidentally executed a script that was intended to delete local users… on a domain controller. The result was that all domain users were deleted from the environment in just a couple of seconds. The good thing was that this customer had previously enabled Recycle Bin, but it still took a couple of hours to recover all users as this was a very large environment. This type of issue is something that comes up all the time, and it’s always painful for the customers who run into it. I have worked many cases where the lack of proper protection on objects caused a lot of issues for customer environments, and in some cases even ended up costing administrators their jobs, all because of an accidental click. But how can we avoid this?

    If you take a look at the properties of any object in Active Directory, you will notice a checkbox named “Protect object from accidental deletion” on the Object tab. When this is enabled, permissions are set that deny deletion of this object to Everyone.


     

    With the exception of Organizational Units, this setting is not enabled by default on objects in Active Directory; when creating an object, it needs to be set manually. The challenge is how to easily enable this on thousands of objects.

    ANSWER! PowerShell!

    Two simple PowerShell commands will enable you to set accidental deletion protection on all objects in your Active Directory. The first command will set this on any users or computers (or any object with value user on the ObjectClass attribute). The second command will set this on any Organizational Unit where the setting is not already enabled.

     

    # Protect all user and computer objects (computer objects also carry objectClass "user")
    Get-ADObject -Filter {ObjectClass -eq "user"} | Set-ADObject -ProtectedFromAccidentalDeletion:$true

    # Protect all Organizational Units
    Get-ADOrganizationalUnit -Filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true

     

    Once you run these commands, your environment will be protected against accidental (or intentional) deletion of objects.
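    If you want to double-check the result, here is a quick sketch that reports any user or computer objects still left unprotected (ProtectedFromAccidentalDeletion is an extended property exposed by the AD module):

    # List user/computer objects that are still unprotected
    Get-ADObject -Filter {ObjectClass -eq "user"} -Properties ProtectedFromAccidentalDeletion |
        Where-Object { -not $_.ProtectedFromAccidentalDeletion } |
        Select-Object Name, DistinguishedName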

    Note: As a proof of concept, I tested the script that my customer used with the accidental deletion protection enabled and none of the objects in my Active Directory environment were deleted.

     

    Gonzalo “keep your job” Reyna

    Migrating your Certification Authority Hashing Algorithm from SHA1 to SHA2

    $
    0
    0

     

    Hey all, Rob Greene here again. Well it’s been a very long while since I have written anything for the AskDS blog. I’ve been heads down supporting all the new cool technology from Microsoft.

    I wanted to see if I could head off some cases coming our way with regard to the whole SHA1 deprecation that seems to be getting talked about on all kinds of PKI related websites. I am not discussing anything new about Microsoft SHA1 deprecation plans. If you want information on this topic please look at the following link: SHA1 Deprecation Policy – http://blogs.technet.com/b/pki/archive/2013/11/12/sha1-deprecation-policy.aspx

    It does appear that some web browsers are on a faster timeline to disallow SHA1 certificates, as Google Chrome has outlined in this blog: http://blog.chromium.org/2014/09/gradually-sunsetting-sha-1.html

    So as you would suspect, we are starting to get a few calls from customers wanting to know how to migrate their current Microsoft PKI hierarchy to support SHA2 algorithms. We actually do have a TechNet article explaining the process.

    Before you go through this process of updating your current PKI hierarchy, I have one question for you. Are you sure that all operating systems, devices, and applications that currently use internal certificates in your enterprise actually support SHA2 algorithms?

    How about that ancient Java-based application running on the 20-year-old IBM AS400 that basically runs the backbone of your corporate data? Does the AS400 / Java version running on it support SHA2 certificates so that it can do LDAPS calls to the domain controller for user authentication?

    What about the old version of Apache or Tomcat web servers you have running? Do they support SHA2 certificates for the websites they host?

    You are basically going to have to test every application within your environment to make sure that it will be able to do certificate chaining and revocation checking against certificates and CRLs that have been signed using one of the SHA2 algorithms. Heck, you might remember we have the following hotfixes so that Windows XP SP3 and Windows Server 2003 SP2 can properly chain a certificate that contains certification authorities signed using SHA2 algorithms.

    Windows Server 2003 and Windows XP clients cannot obtain certificates from a Windows Server 2008-based certification authority (CA) if the CA is configured to use SHA2 256 or higher encryption

    http://support.microsoft.com/kb/968730/EN-US

    Applications that use the Cryptography API cannot validate an X.509 certificate in Windows Server 2003

    http://support.microsoft.com/kb/938397/EN-US

    Inevitably we get the question “What would you recommend, Microsoft?” Well, that is really a loaded question, since we have no idea what is in your vast enterprise environment outside of Microsoft operating systems and applications. When this question comes up, the only thing that we can say is that any currently supported Microsoft operating system or application should have no problem supporting a certificate chain or CRL signed using SHA2 algorithms. So if that is the only thing in your environment you could easily follow the migration steps and be done. However, if you are using a Microsoft operating system outside of mainstream support, it most likely does not support SHA2 algorithms. I actually had a customer ask if Windows CE supported SHA2, and I had to tell him it does not. (Who knew you guys still ran those things in your environments!)

    If you have any 3rd party applications or operating systems, then I would suggest you look on the vendor’s website or contact their technical support to get a definitive answer about support for SHA2 algorithms. If you are using a product that has no support, then you might need to stand up a SHA2 certificate chain in a lab environment and test the product. Once a problem has been identified you can work with that vendor to find out if they have a new version of the application and/or operating system that supports SHA2, or find out when they plan on supporting it.

    If you do end up needing to support some applications that currently do not support SHA2 algorithms, I would suggest that you look into bringing up a new PKI hierarchy alongside your current SHA1 PKI hierarchy. Slowly begin migrating SHA2 supported applications and operating systems over to the new hierarchy and only allow applications and operating systems that support SHA1 on the existing PKI hierarchy.

    Nah, I want to do the migration!

    So if you made it down to this part of the blog you either actually want to do the migration or curiosity has definitely got the better of you, so let’s get to it. The TechNet article below discusses how to migrate your private key from using a Cryptographic Service Provider (CSP) which only supports SHA1 to a Key Storage Provider (KSP) that supports SHA2 algorithms:

    Migrating a Certification Authority Key from a Cryptographic Service Provider (CSP) to a Key Storage Provider (KSP) – http://technet.microsoft.com/en-us/library/dn771627.aspx

    In addition to this process, I would first recommend that you export all the private and public key pairs that your Certification Authority has before going through the steps outlined in the above TechNet article. The article seems to assume you have already taken good backups of the Certification Authority’s private keys and public certificates.
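    A minimal sketch of that backup (run elevated on the CA itself; the folder path is just an example, and certutil will prompt for a password to protect the exported key material):

    certutil -backupKey C:\CABackup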

    Keep in mind that if your Certification Authority has been in production for any length of time, you have more than likely renewed the Certification Authority certificate at least once in its lifetime. You can quickly find out by looking at the properties of the CA on the General tab.

    When you change the hashing algorithm over to a SHA2 algorithm, you are going to have to migrate all CA certificates to use the newer Key Storage Providers if you are currently using Cryptographic Service Providers. If you are NOT using the Microsoft providers, please consult your 3rd party vendor to find out their recommended way to migrate from CSPs to KSPs. This also includes certification authorities that use Hardware Security Modules (HSMs).

    Steps 1-9 in the article further explain backing up the CA configuration and then changing from CSPs over to KSPs. This is required, as I mentioned earlier, since SHA2 algorithms are only supported by Key Storage Providers (KSPs), which were not available prior to Windows Server 2008 Certification Authorities. If you migrated your Windows Server 2003 CA to one of the newer operating systems, you were kind of stuck using CSPs.

    Step 10 is all about switching over to use SHA2 algorithms, and then starting the Certification Authority back up.

    So there you go. You have your existing Certification Authority issuing SHA2 algorithm certificates and CRLs. This does not mean that you will start seeing SHA256 RSA as the signature algorithm or SHA256 as the signature hash algorithm on the certification authority’s own certificates. For that to happen you would need to do the following:

    · Update the configuration on the CA that issued its certificate and then renew with a new key.

    · If it is a Root CA then you also need to renew with a new key.

    Once the certification authority has been configured to use SHA2 hashing algorithms, not only will newly issued certificates be signed using the new hashing algorithm, all of the certification authority’s CRLs will also be signed using the new hashing algorithm.

    Run CertUtil –CRL on the certification authority, which causes the CA to generate new CRLs. Once this is done, double-click one of the CRLs and you will see the new signature algorithm.
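    If you prefer to verify without the UI, a small sketch (the path and file name are examples; CRLs publish to the CertEnroll folder by default) that dumps the CRL so you can read the Signature Algorithm field:

    certutil -dump "C:\Windows\System32\CertSrv\CertEnroll\Contoso-CA.crl"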

    As you can tell, not only do newly issued end entity certificates get signed using the SHA2 algorithm, so do all the CRLs that the CA needs to publish. This is why you not only have to update the current CA certificate to use KSPs, you also need to update the previous CA certificates as long as they are still issuing new CRLs. Existing CA certificates issue new CRLs until they expire; once a CA certificate has expired, it no longer issues CRLs.

    As you can see, the simple question of “can I migrate my current certification authority from SHA1 to SHA2” is really not such an easy one for us here at Microsoft to answer. I would suspect that most of you are like me and would like to err on the side of caution in this regard. If this were my environment, I would stand up a new PKI hierarchy that is built using SHA2 algorithms from the start. Once that has been accomplished, I would test each application in the environment that leverages certificates. When I run into an application that does not support SHA2, I would contact the vendor and get on record when they are going to start supporting SHA2, or ask the application owner when they plan to stop using the application. Once all this is documented, I would revisit these end dates to see if the vendor has updated support or find out if the application owner has replaced the application with something that does support SHA2 algorithms.

    Rob “Pass the Hashbrowns” Greene


    A Treatise on Group Policy Troubleshooting–now with GPSVC Log Analysis!


    Hi all, David Ani here from Romania. This guide outlines basic steps used to troubleshoot Group Policy application errors using the Group Policy Service Debug logs (gpsvc.log). A basic understanding of the logging discussed here will save time and may prevent you from having to open a support ticket with Microsoft. Let's get started.

    The gpsvc log has evolved from the User Environment Debug Logs (userenv log) in Windows XP and Windows Server 2003 but the basics are still there and the pattern is the same. There are also changes from 2008 to 2012 in the logging itself but they are minor and will not prevent you from understanding your first steps in analyzing the debug logs.

    Overview of Group Policy Client Service (GPSVC)

    • One of the major changes that came with Windows Vista and later operating systems is the new Group Policy Client service. Earlier operating systems used the WinLogon service to apply Group Policy. However, the new Group Policy Client service improves the overall stability of the Group Policy infrastructure and the operating system by isolating it from the WinLogon process.
    • The service is responsible for applying settings configured by administrators to computers and users through the Group Policy component. If the service is stopped or disabled, the settings will not be applied, so applications and components will not be manageable through Group Policy. Please keep in mind that, to increase security, users cannot start or stop the Group Policy Client service. In the Services snap-in, the options to start, stop, pause, and resume the Group Policy Client are unavailable.
    • Finally, any components or applications that depend on the Group Policy component will not be functional if the service is stopped or disabled.

    Note: The important thing to remember is that the Group Policy Client is a service running on every OS since Vista and is responsible for applying GPOs. The process itself will run under a svchost instance, which you can check by using the “tasklist /svc” command line.

    clip_image003

    One final point: Since the startup value for the service is Automatic (Trigger Start), you may not always see it in the list of running services. It will start, perform its actions, and then stop.

    Group Policy processing overview
    Group Policy processing happens in two phases:

    • Group Policy Core Processing - where the client enumerates all Group Policies together with all settings that need to be applied. It will connect to a Domain Controller, accessing Active Directory and SYSVOL and gather all the required data in order to process the policies.
    • Group Policy CSE Processing – Client Side Extensions (CSEs) are responsible for client side policy processing. These CSEs ensure all settings configured in the GPOs will be applied to the workstation or server.

    Note: The Group Policy architecture includes both server and client-side components. The server component includes the user interface (GPEdit.msc, GPMC.msc) that an administrator can use to configure a unique policy. GPEdit.msc is always present, even on client SKUs, while GPMC.msc and GPME.msc get installed either via RSAT or if the machine is a domain controller. When Group Policy is applied to a user or computer, the client component interprets the policy and makes the appropriate changes to the environment. These are known as Group Policy client-side extensions. 

    See the following post for a reference list for most of the CSEs: http://blogs.technet.com/b/mempson/archive/2010/12/01/group-policy-client-side-extension-list.aspx

    In troubleshooting a given extension's application of policy, the administrator can view the configuration parameters for that extension. These parameters are in the form of registry values. There are two things to keep in mind:

    • When configuring GPOs in your Domain you must make sure they have been replicated to all domain controllers, both in AD and SYSVOL. It is important to understand that AD replication is not the same as SYSVOL replication and one can be successful while the other may not. However, if you have a Windows 8 or Windows Server 2012 or later OS, this is easily verified using the Group Policy Management Console (GPMC) and the status tab for an Organizational Unit (OU).
    • At a high level, we know that the majority of your GPO settings are just registry keys that need to be delivered and set on a client under the user or machine keys.

    First troubleshooting steps

    • Start by using GPResult or the Group Policy Results wizard in GPMC and check which GPOs have been applied. What are the winning GPOs? Are there contradictory settings? Finally, be on the lookout for Loopback Policy Processing that can sometimes deliver unexpected results.

    Note: To have a better understanding of Loopback Policy Processing please review this post: http://blogs.technet.com/b/askds/archive/2013/02/08/circle-back-to-loopback.aspx

    • On the target client, you can run GPResult /v or /h and verify that the GPO is there and listed under “Applied GPOs.” Is it listed? It should look the same as the results from the Group Policy Results wizard in GPMC (see the example commands after the note below). If not, verify replication and that policy has been recently applied.

    Note: You can always force a group policy update on a client with gpupdate /force. This will require admin privileges for the computer side policies. If you do not have admin rights an old fashioned reboot should force policy to apply.
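    As a concrete example of the gpresult side (the report path is arbitrary), /h writes the full HTML report with the same data as the GPMC wizard, while /r prints a quick summary to the console:

    gpresult /h C:\Temp\GPReport.html
    gpresult /r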

    • If the Group Policy is unexpectedly listed under “Denied GPOs”, then please check the following:

    – If the reason for “Denied GPOs” is empty, then you probably have linked a User Configuration GPO to an OU with computers or the other way around. Link the GPO to the corresponding OU, the one which contains your users.

    – If the reason for “Denied GPOs” is “Access Denied (Security Filtering)”, then make sure you have the correct objects (Authenticated Users or desired Group) in “Security Filtering” in GPMC. Target objects need at least “Read” and “Apply Group Policy” permissions.

    – If the reason for “Denied GPOs” is “False WMI Filter”, then make sure you configure the WMI filter accordingly, so that the GPO works with the WMI filter for the desired user and computers.

    See the following TechNet reference for more on WMI Filters: http://technet.microsoft.com/en-us/library/cc787382(v=ws.10).aspx

    – If the Group Policy isn’t listed in gpresult.exe at all, verify the scope by ensuring that the user or computer object in Active Directory resides in the OU tree the Group Policy is linked to in GPMC.


    Start Advanced Troubleshooting

    • If the problem cannot be identified from the previous steps, then we can enable gpsvc logging. On the client where the GPO Problem occurs follow these steps to enable Group Policy Service debug logging.

    1. Click Start, click Run, type regedit, and then click OK.
    2. Locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion
    3. On the Edit menu, point to New, and then click Key.
    4. Type Diagnostics, and then press ENTER.
    5. Right-click the Diagnostics subkey, point to New, and then click DWORD (32-bit) Value.
    6. Type GPSvcDebugLevel, and then press ENTER.
    7. Right-click GPSvcDebugLevel, and then click Modify.
    8. In the Value data box, type 30002 (Hexadecimal), and then click OK.

    9. Exit Registry Editor.
    10. View the Gpsvc.log file in the following folder: %windir%\debug\usermode

    Note – If the usermode folder does not exist, create it under %windir%\debug.
    If the usermode folder does not exist under %WINDIR%\debug\ the gpsvc.log file will not be created.
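    If you prefer to script these steps, here is an equivalent sketch (run from an elevated PowerShell prompt; it also creates the usermode folder mentioned in the note):

    # Create the Diagnostics key and the GPSvcDebugLevel value (0x30002)
    New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics' -Force | Out-Null
    New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics' -Name 'GPSvcDebugLevel' -PropertyType DWord -Value 0x30002 -Force | Out-Null

    # Make sure the log folder exists
    New-Item -Path "$env:windir\debug\usermode" -ItemType Directory -Force | Out-Null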

    • Now, you can either do a “gpupdate /force” to trigger GPO processing or do a restart of the machine in order to get a clean boot application of group policy (Foreground vs Background GPO Processing).
    • After that, the log itself should be found under: C:\Windows\Debug\Usermode\gpsvc.log

    An important note for Windows 7 / Windows Server 2008 R2 or older operating systems: on multiprocessor machines, we might have concurrent threads writing to the log at the same time. In heavy logging scenarios, one of the write attempts may fail and we may lose debug log information.
    Concurrent processing is very common in group policy troubleshooting, since you usually run "gpupdate /force" without specifying user or machine processing separately. To reduce the chance of lost logging while troubleshooting, initiate machine and user policy processing separately:

    • Gpupdate /force /target:computer
    • Gpupdate /force /target:user


    Analysis – Understanding PID, TID and Dependencies
    Now let's get started with the GPSVC Log analysis! The first thing to understand is the Process Identifier (PID) and Thread Identifier (TID) of a gpsvc log. Here is an example:

    GPSVC(31c.328) 10:01:56:711 GroupPolicyClientServiceMain

    What are those? Take “GPSVC(31c.328)” as an example: the first number, 31c, directly relates to the PID, and the second number, 328, relates to the TID. We know that 31c doesn’t look like a PID, but that’s because it is in hexadecimal. Translating it to decimal (0x31c = 796) gives you the PID of the SVCHOST process containing the GPSVC.

    Then we have a TID, which will differ for every thread the GPClient is working on. One thing to consider: we will have two different threads for Machine and User GPO processing, so make sure you follow the correct one.

    Example:

    GPSVC(31c.328) 10:01:56:711 CGPService::Start: InstantiateGPEngine
    GPSVC(31c.328) 10:01:56:726 CGPService::InitializeRPCServer starting RPCServer

    GPSVC(31c.328) 10:01:56:741 CGPService::InitializeRPCServer finished starting RPCServer. status 0x0
    GPSVC(31c.328) 10:01:56:741 CGPService::Start: CreateGPSessions
    GPSVC(31c.328) 10:01:56:758 Updating the service status to be RUNNING.

    This shows that the GPService Engine is being started and we can see that it also checks for dependencies (RPCServer) to be started.
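    If you want to match the log to a live process, a quick sketch (796 is what the example 0x31c works out to; your PID will differ):

    [int]"0x31c"                          # converts the hex PID to decimal: 796
    tasklist /svc /fi "PID eq 796"        # shows the svchost instance hosting gpsvc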

    Synchronous vs Asynchronous Processing
    I will not spend a lot of time explaining this because there is a great post from the GP Team out there which explains this very well. This is important to understand because it has a big impact on how settings are applied and when. Look at:
    http://blogs.technet.com/b/grouppolicy/archive/2013/05/23/group-policy-and-logon-impact.aspx

    Synchronous vs. asynchronous processing
    Foreground processing can operate under two different modes—synchronously or asynchronously. The default foreground processing mode for Windows clients since Windows XP has been asynchronous.

    Asynchronous GP processing does not prevent the user from using their desktop while GP processing completes. For example, when the computer is starting up GP asynchronous processing starts to occur for the computer. In the meantime, the user is presented the Windows logon prompt. Likewise, for asynchronous user processing, the user logs on and is presented with their desktop while GP finishes processing. There is no delay in getting either their logon prompt or their desktop during asynchronous GP processing. When foreground processing is synchronous, the user is not presented with the logon prompt until computer GP processing has completed after a system boot. Likewise the user will not see their desktop at logon until user GP processing completes. This can have the effect of making the user feel like the system is running slow. To summarize, synchronous processing can impact startup time while asynchronous does not.

    Foreground processing will run synchronously for two reasons:

    1)      The administrator forces synchronous processing through a policy setting. This can be done by enabling the Computer Configuration\Policies\Administrative Templates\System\Logon\Always wait for the network at computer startup and logon policy setting. Enabling this setting will make all foreground processing synchronous. This is commonly used for troubleshooting problems with Group Policy processing, but doesn’t always get turned back off again.

    Note: For more information on fast logon optimization see:
    305293 Description of the Windows Fast Logon Optimization feature
    http://support.microsoft.com/kb/305293

    2)      A particular CSE requires synchronous foreground processing. There are four CSEs provided by Microsoft that currently require synchronous foreground processing: Software Installation, Folder Redirection, Microsoft Disk Quota and GP Preferences Drive Mapping. If any of these are enabled within one or more GPOs, they will trigger the next foreground processing cycle to run synchronously when they are changed.

    Action: Avoid synchronous CSEs and don’t force synchronous policy. If usage of synchronous CSEs is necessary, minimize changes to these policy settings.

    Analysis – Starting to read into the gpsvc log

    First, we identify where the machine settings are starting, because they process first:

    GPSVC(31c.37c) 10:01:57:101 CStatusMessage::UpdateWinlogonStatusMessage::++ (bMachine: 1)
    GPSVC(31c.37c) 10:01:57:101 Message Status = <Applying computer settings>
    GPSVC(31c.37c) 10:01:57:101 User SID = MACHINE SID
    GPSVC(31c.37c) 10:01:57:101 Setting GPsession state = 1
    GPSVC(31c.174) 10:01:57:101 CGroupPolicySession::ApplyGroupPolicyForPrincipal::++ (bTriggered: 0, bConsole: 0)

    The above lines are quite clear: “<Applying computer settings>” and “User SID = MACHINE SID” point out that we are dealing with the machine context. The “bConsole: 0” part means “Boolean Console” with a value of 0, as in false, meaning there is no console user – this is machine processing.

     

    GPSVC(31c.174) 10:01:57:101 Waiting for connectivity before applying policies
    GPSVC(31c.174) 10:01:57:116 CGPApplicationService::MachinePolicyStartedWaitingOnNetwork.
    GPSVC(31c.564) 10:01:57:804 NlaGetIntranetCapability returned Not Ready error. Consider it as NOT intranet capable.
    GPSVC(31c.564) 10:01:57:804 There is no connectivity. Waiting for connectivity again…
    GPSVC(31c.564) 10:01:59:319 There is connectivity.
    GPSVC(31c.564) 10:01:59:319 Wait For Connectivity: Succeeded
    GPSVC(31c.174) 10:01:59:319 We have network connectivity… proceeding to apply policy.

    This shows us that, at this moment in time, the machine does not have connectivity. However, it does state that it is going to wait for connectivity before applying the policies. After two seconds, we can see that it does find connectivity and moves on with GPO processing.
    It is important to understand that there is a default timeout when waiting for connectivity. The default value is 30 seconds, which is configurable.
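    For completeness, a hedged sketch of raising that wait: the “Specify startup policy processing wait time” policy setting (Computer Configuration\Administrative Templates\System\Group Policy) writes the value below, in seconds. Setting it directly is shown here only for illustration; in production, use the policy.

    # Raise the startup connectivity wait to 60 seconds
    New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System' -Force | Out-Null
    New-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System' -Name 'GpNetworkStartTimeoutPolicyValue' -PropertyType DWord -Value 60 -Force | Out-Null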

    Connectivity
    Now let’s check a bad case scenario where there won’t be a connection available and we run into a timeout:

    GPSVC(324.148) 04:58:34:301 Waiting for connectivity before applying policies
    GPSVC(324.578) 04:59:04:301 CConnectivityWatcher::WaitForConnectivity: Failed WaitForSingleObject.
    GPSVC(324.148) 04:59:04:301 Wait for network connectivity timed out… proceeding to apply policy.
    GPSVC(324.148) 04:59:04:301 CGroupPolicySession::ApplyGroupPolicyForPrincipal::ApplyGroupPolicy (dwFlags: 7).
    GPSVC(324.148) 04:59:04:317 Application complete with bConnectivityFailure = 1.

    As we can see, after 30 seconds it fails with a timeout and then proceeds to apply policies.
    Without a network connection there are no policies from the domain, and no version checks can be made between the cached policies and the domain ones.
    In such cases you will always encounter “bConnectivityFailure = 1”, which isn’t only typical of a general network connectivity issue, but of every connectivity problem the machine encounters, a failed LDAP bind for example.

    Slow Link Detection

    GPSVC(31c.174) 10:01:59:397 GetDomainControllerConnectionInfo: Enabling bandwidth estimate.
    GPSVC(31c.174) 10:01:59:397 Started bandwidth estimation successfully
    GPSVC(31c.174) 10:01:59:976 Estimated bandwidth : DestinationIP = 192.168.1.102
    GPSVC(31c.174) 10:01:59:976 Estimated bandwidth : SourceIP = 192.168.1.105
    GPSVC(31c.174) 10:02:00:007 IsSlowLink: Bandwidth Threshold (WINLOGON) = 500.
    GPSVC(31c.174) 10:02:00:007 IsSlowLink: Bandwidth Threshold (SYSTEM) = 500.
    GPSVC(31c.174) 10:02:00:007 IsSlowLink: WWAN Policy (SYSTEM) = 0.
    GPSVC(31c.174) 10:02:00:007 IsSlowLink: Current Bandwidth >= Bandwidth Threshold.

    Moving further, we can see that a bandwidth estimation is taking place; since Vista, this is done through Network Location Awareness (NLA).

    Slow Link Detection Backgrounder from our very own "Group Policy Slow Link Detection using Windows Vista and later"

    The Group Policy service begins bandwidth estimation after it successfully locates a domain controller. Domain controller location includes the IP address of the domain controller. The first action performed during bandwidth estimation is an authenticated LDAP connect and bind to the domain controller returned during the DC Locator process.

    This connection to the domain controller is done under the user's security context and uses Kerberos for authentication. This connection does not support using NTLM. Therefore, this authentication sequence must succeed using Kerberos for Group Policy to continue to process. Once successful, the Group Policy service closes the LDAP connection. The Group Policy service makes an authenticated LDAP connection in computer context when user policy processing is configured in loopback-replace mode.

    The Group Policy service then determines the network name. The service accomplishes this by using IPHelper APIs to determine the best network interface in which to communicate with the IP address of the domain controller. Additionally, the domain controller and network name are saved in the client computer's registry for future use.

    The Group Policy service is ready to determine the status of the link between the client computer and the domain controller. The service asks NLA to report the estimated bandwidth it measured while earlier Group Policy actions occurred. The Group Policy service compares the value returned by NLA to the GroupPolicyMinTransferRate named value stored in Registry.

    The default minimum transfer rate to measure Group Policy slow link is 500 (Kbps). The link between the domain controller and the client is slow if the estimated bandwidth returned by NLA is lower than the value stored in the registry. The policy value has precedence over the preference value if both values appear in the registry. After successfully determining the link state (fast or slow—no errors), then the Group Policy service writes the slow link status into the Group Policy history, which is stored in the registry. The named value is IsSlowLink.

    If the Group Policy service encounters an error, it reads the last recorded value from the history key and uses that true or false value for the slow link status.
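    Since the excerpt notes that the domain controller and network name are cached in the registry for future use, here is a small sketch to peek at that cache (key path as commonly seen on clients; treat it as read-only):

    # Inspect the Group Policy history information written by the GP service
    Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\History'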

    There is updated client-side behavior with Windows 8.1 and later:
    What's New in Group Policy in Windows Server – Policy Caching

    In Windows Server 2012 R2 and Windows 8.1, when Group Policy gets the latest version of a policy from the domain controller, it writes that policy to a local store. Then if Group Policy is running in synchronous mode the next time the computer reboots, it reads the most recently downloaded version of the policy from the local store, instead of downloading it from the network. This reduces the time it takes to process the policy. Consequently, the boot time is shorter in synchronous mode. This is especially important if you have a latent connection to the domain controller, for example, with DirectAccess or for computers that are off premises. This behavior is controllable by a new policy called Configure Group Policy Caching.

    - The updated slow link detection only takes place during synchronous policy processing. It “pings” the Domain Controller by calling DsGetDcName and measures the duration.

    - By default, the Configure Group Policy Caching group policy setting is set to Not Configured. The feature will be enabled by default and using the default values for slow link detection (500ms) and time-out for communicating with a Domain Controller (5000ms) to determine whether it is on the network, if the below conditions are met:

    o The Turn off background refresh of Group Policy policy setting is Not Configured or Disabled.

    o The Configure Group Policy slow link detection policy setting is Not Configured, or, when Enabled, contains a value for Connection speed (Kbps) that is not outlandish (500 is the default value).

    o The Set Group Policy refresh interval for computers is Not Configured or, when Enabled, contains values for Minutes that are not outlandish (90 and 30 at the default values).

    Order of processing settings
    Next on the agenda is retrieving GPOs from the domain. Here we come to Group Policy processing and precedence: Group Policy objects that apply to a user (or computer) do not all have the same precedence.
    Settings that are applied later can override settings that are applied earlier. The policies are applied in the hierarchy Local machine –> Sites –> Domains –> Organizational Units (LSDOU).
    For nested organizational units, GPOs linked to parent organizational units are applied before GPOs linked to child organizational units.

    Note: The order in which GPOs are processed is significant because when policy is applied, it overwrites policy that was applied earlier.

    There are of course some exceptions to the rule:

    • A GPO link may be enforced, or disabled, or both.
    • A GPO may have its user settings disabled, its computer settings disabled, or all settings disabled.
    • An organizational unit or a domain may have Block Inheritance set.
    • Loopback may be enabled. 

    For a better understanding regarding these, please have a look in the following TechNet article: http://technet.microsoft.com/en-us/library/bb742376.aspx

    How does the order of processing look in a gpsvc log
    In the gpsvc log you will notice that the LDAP search is done starting at the OU level and going up to the site level.

    "The Group Policy service uses the distinguished name of the computer or user to determine the list of OUs and the domain it must search for group policy objects. The Group Policy service builds this list by analyzing the distinguished name from left to right. The service scans the name looking for each instance of OU= in the name. The service then copies the distinguished name to a list, which is used later. The Group Policy service continues to scan the distinguished name for OUs until it encounters the first instance of DC=. At this point, the Group Policy service has found the domain name, finally it searches for policies at site level."

    As you have probably noticed in our example, we only have two GPOs, one at the OU level and one at the Domain level.

    The searches are done using the policies’ GUIDs and not their names, the same way you would find them in Sysvol: not by name but by policy GUID.
    It is always a best practice to know each policy’s name and its GUID; it makes the log much easier to work with while troubleshooting.
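    A handy sketch for mapping a GUID from the log back to its friendly name (requires the GroupPolicy module; the GUID is the example one from this log):

    Get-GPO -Guid '{CC02524C-727C-4816-A298-D63D12E68C0F}' | Select-Object DisplayName, Id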

    GPSVC(31c.174) 10:01:59:413 GetGPOInfo: Entering…
    GPSVC(31c.174) 10:01:59:413 GetMachineToken: Looping for authentication again.
    GPSVC(31c.174) 10:01:59:413 SearchDSObject: Searching <OU=Workstations,DC=contoso,DC=lab>
    GPSVC(31c.174) 10:01:59:413 SearchDSObject: Found GPO(s): <[LDAP://cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab;0]>
    GPSVC(31c.174) 10:01:59:413 ProcessGPO(Machine): ==============================
    GPSVC(31c.174) 10:01:59:413 ProcessGPO(Machine): Deferring search for LDAP://cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab
    GPSVC(31c.174) 10:01:59:413 SearchDSObject: Searching <DC=contoso,DC=lab>
    GPSVC(31c.174) 10:01:59:413 SearchDSObject: Found GPO(s): <[LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab;0]>
    GPSVC(31c.174) 10:01:59:413 ProcessGPO(Machine): ==============================
    GPSVC(31c.174) 10:01:59:413 ProcessGPO(Machine): Deferring search for LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab
    GPSVC(31c.174) 10:01:59:522 SearchDSObject: Searching <CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=contoso,DC=lab>
    GPSVC(31c.174) 10:01:59:522 SearchDSObject: No GPO(s) for this object.

    You can see whether the policy is enabled, disabled, or enforced here:

    GPSVC(31c.174) 10:01:59:413 SearchDSObject: Searching <OU=Workstations,DC=contoso,DC=lab>
    GPSVC(31c.174) 10:01:59:413 SearchDSObject: Found GPO(s): <[LDAP://cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab;0]>

    Note the 0 at the end of the LDAP query; this is the default setting. If the value were 1 instead of 0, the policy would be disabled. In other words, a value of 1 means the policy is linked to that particular OU, domain, or site but is disabled. If the value is 2, the policy has been set to “Enforced.”

    A setting of “Enforced” means that if two separate GPOs have the same setting defined, but hold different values, the one that is set to “Enforced” will win and will be applied to the client. If a policy is set to “Enforced” at an OU/domain level and an OU below that is set to block inheritance, then the policy set for “Enforced” will still apply. You cannot block a policy from applying if “Enforced” has been set.

    Example of an enforced policy:

    GPSVC(328.7fc) 07:01:14:334 SearchDSObject: Searching <OU=Workstations,DC=contoso,DC=lab>
    GPSVC(328.7fc) 07:01:14:334 SearchDSObject: Found GPO(s): <[LDAP://cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab;2]>
    GPSVC(328.7fc) 07:01:14:334 AllocGpLink: GPO cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab has enforced link.
    GPSVC(328.7fc) 07:01:14:334 ProcessGPO(Machine): ==============================
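    You can also read these link options straight out of AD; a sketch (using the OU DN from the example) that dumps the raw gPLink attribute, where the digit after the semicolon is the 0/1/2 flag discussed above:

    Get-ADObject 'OU=Workstations,DC=contoso,DC=lab' -Properties gPLink | Select-Object -ExpandProperty gPLink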

    Now let‘s move down the log and we‘ll find the next step where the policies are being processed:

    GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): ==============================
    GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): Searching <CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab>
    GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): Machine has access to this GPO.
    GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): Found common name of: <{31B2F340-016D-11D2-945F-00C04FB984F9}>
    GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine):
    GPO passes the filter check.
    GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): Found functionality version of: 2
    GPSVC(31c.174) 10:02:00:007 ProcessGPO(Machine): Found file system path of: \\contoso.lab\sysvol\contoso.lab\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found display name of: <Default Domain Policy>
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found machine version of: GPC is 17, GPT is 17
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found flags of: 0
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{53D6AB1B-2488-11D1-A28C-00C04FB94F17}{53D6AB1D-2488-11D1-A28C-00C04FB94F17}][{827D319E-6EAC-11D2-A4EA-00C04F79F83A}{803E14A0-B4FB-11D0-A0D0-00A0C90F574B}][{B1BE8D72-6EAC-11D2-A4EA-00C04F79F83A}{53D6AB1B-2488-11D1-A28C-00C04FB94F17}{53D6AB1D-2488-11D1-A28C-00C04FB94F17}]
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): ==============================

     

    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): ==============================
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Searching <cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab>
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Machine has access to this GPO.
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found common name of: <{CC02524C-727C-4816-A298-D63D12E68C0F}>
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): GPO passes the filter check.
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found functionality version of: 2
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found file system path of: \\contoso.lab\SysVol\contoso.lab\Policies\{CC02524C-727C-4816-A298-D63D12E68C0F}
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found display name of: <GPO Guide test>
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found machine version of: GPC is 1, GPT is 1
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found flags of: 0
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{D02B1F72-3407-48AE-BA88-E8213C6761F1}]
    GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): ==============================

    First, we find the path where the GPO is stored in AD. As you can see, the GPO is still represented by its GUID and not its name: Searching <cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab>
    After that, it checks whether the machine has access to the policy; if yes, the computer can apply the policy, and if not, it cannot. As per the example: Machine has access to this GPO.

    Moving on, if a WMI filter is applied to a policy, it will be evaluated to see whether the filter matches the current machine or user.
    The WMI filter can be found in AD. If you are using GPMC, it appears in the box at the very bottom of the right-hand pane after highlighting the policy. From our example: GPO passes the filter check.

    The functionality version has to be 2 for a Windows 2003 or later OS to apply the policy. From our example: Found functionality version of: 2
    A search in Sysvol for the GPO is also executed; as explained in the beginning, both AD and Sysvol must be aware of the GPO and its settings. From our example: Found file system path of: <\\contoso.lab\SysVol\contoso.lab\Policies\{CC02524C-727C-4816-A298-D63D12E68C0F}>

    The next part is where we check the GPC (Group Policy Container, AD) and the GPT (Group Policy Template, Sysvol) for the version numbers. We check the version numbers to determine if the policy has changed since the last time it was applied. If the version numbers are different (GPC different than GPT) then we either have an AD replication or File replication problem. From our example we can see that there’s a match between those two: Found machine version of: GPC is 1, GPT is 1
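    To check this by hand, a sketch (using the example GPO and domain) that compares the GPC version in AD with the GPT version in Sysvol; on a healthy domain the numbers should match:

    # GPC version (AD)
    (Get-ADObject 'cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab' -Properties versionNumber).versionNumber

    # GPT version (Sysvol)
    Get-Content '\\contoso.lab\SysVol\contoso.lab\Policies\{CC02524C-727C-4816-A298-D63D12E68C0F}\GPT.ini'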

    The extensions in the next line refer to the CSEs (client-side extension GUIDs) and will vary from policy to policy. As explained, they are the components on the client responsible for carrying out our settings. From our example: GPSVC(31c.174) 10:02:00:038 ProcessGPO(Machine): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{D02B1F72-3407-48AE-BA88-E8213C6761F1}]

    Let‘s have a look at an example with a WMI Filter being used, which does not suit our current system:

    GPSVC(328.7fc) 08:04:32:803 ProcessGPO(Machine): ==============================
    GPSVC(328.7fc) 08:04:32:803 ProcessGPO(Machine): Searching <cn={CC02524C-727C-4816-A298-D63D12E68C0F},cn=policies,cn=system,DC=contoso,DC=lab>
    GPSVC(328.7fc) 08:04:32:803 ProcessGPO(Machine): Machine has access to this GPO.
    GPSVC(328.7fc) 08:04:32:803 ProcessGPO(Machine): Found common name of: <{CC02524C-727C-4816-A298-D63D12E68C0F}> GPSVC(328.7fc) 08:04:32:803 FilterCheck: Found WMI Filter id of: <[contoso.lab;{CD718707-ACBD-4AD7-8130-05D61C897783};0]>
    GPSVC(328.7fc) 08:04:32:913 ProcessGPO(Machine): The GPO does not pass the filter check and so will not be applied.
    GPSVC(328.7fc) 08:04:32:913 ProcessGPO(Machine): Found functionality version of: 2
    GPSVC(328.7fc) 08:04:32:913 ProcessGPO(Machine): Found file system path of: \\contoso.lab\SysVol\contoso.lab\Policies\{CC02524C-727C-4816-A298-D63D12E68C0F}
    GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): Found display name of: <GPO Guide test>
    GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): Found machine version of: GPC is 1, GPT is 1
    GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): Found flags of: 0
    GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{D02B1F72-3407-48AE-BA88-E8213C6761F1}]
    GPSVC(328.7fc) 08:04:32:928 ProcessGPO(Machine): ==============================

    In this scenario a WMI filter was used which specifies that the OS has to be Windows XP, so in order to apply the GPO the system OS has to match the filter. As our OS is Windows Server 2012 R2, the filter does not match and so the GPO will not apply.

    Now we come to the part where we process CSE’s for particular settings, such as Folder Redirection, Disk Quota, etc. If the particular extension is not being used then you can simply ignore this section.

    GPSVC(31c.174) 10:02:00:038 ProcessGPOs(Machine): Get 2 GPOs to process.
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {35378EAC-683F-11D2-A89A-00C04FBBCFA2}
    GPSVC(31c.174) 10:02:00:038 ReadStatus: Read Extension's Previous status successfully.
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {0ACDD40C-75AC-47ab-BAA0-BF6DE7E7FE63}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {0E28E245-9368-4853-AD84-6DA3BA35BB75}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {16be69fa-4209-4250-88cb-716cf41954e0}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {17D89FEC-5C44-4972-B12D-241CAEF74509}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {1A6364EB-776B-4120-ADE1-B63A406A76B5}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {25537BA6-77A8-11D2-9B6C-0000F8080861}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {3610eda5-77ef-11d2-8dc5-00c04fa31a66}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {3A0DBA37-F8B2-4356-83DE-3E90BD5C261F}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {426031c0-0b47-4852-b0ca-ac3d37bfcb39}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {42B5FAAE-6536-11d2-AE5A-0000F87571E3}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {4bcd6cde-777b-48b6-9804-43568e23545d}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {4CFB60C1-FAA6-47f1-89AA-0B18730C9FD3}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {4D2F9B6F-1E52-4711-A382-6A8B1A003DE6}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {5794DAFD-BE60-433f-88A2-1A31939AC01F}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {6232C319-91AC-4931-9385-E70C2B099F0E}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {6A4C88C6-C502-4f74-8F60-2CB23EDC24E2}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {7150F9BF-48AD-4da4-A49C-29EF4A8369BA}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {728EE579-943C-4519-9EF7-AB56765798ED}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {74EE6C03-5363-4554-B161-627540339CAB}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {7B849a69-220F-451E-B3FE-2CB811AF94AE}
    GPSVC(31c.174) 10:02:00:038 ReadExtStatus: Reading Previous Status for extension {827D319E-6EAC-11D2-A4EA-00C04F79F83A}

    Note:

    • You can always do a search for each of these GUIDs on MSDN and you should be able to find their proper names.
    • At the end of the machine GPO thread, we can also see the Foreground processing that we talked about in the beginning. We can see that the Foreground processing was Synchronous and that the next one will be Synchronous as well.
    • The end of the machine GPO processing thread comes to an end and we can see that it was completed with a bConnectivityFailure = 0.

    GPSVC(31c.174) 10:02:00:397 ProcessGPOs(Machine): SKU is SYNC: Mode: 1, Reason: 7
    GPSVC(31c.174) 10:02:00:397 gpGetFgPolicyRefreshInfo (Machine): Mode: Synchronous, Reason: 7
    GPSVC(31c.174) 10:02:00:397 gpSetFgPolicyRefreshInfo (bPrev: 1, szUserSid: Machine, info.mode: Synchronous)
    GPSVC(31c.174) 10:02:00:397 SetFgRefreshInfo: Previous Machine Fg policy Synchronous, Reason: SKU.
    GPSVC(31c.174) 10:02:00:397 gpSetFgPolicyRefreshInfo (bPrev: 0, szUserSid: Machine, info.mode: Synchronous)
    GPSVC(31c.174) 10:02:00:397 SetFgRefreshInfo: Next Machine Fg policy Synchronous, Reason: SKU.
    GPSVC(31c.174) 10:02:00:397 ProcessGPOs(Machine): Policies changed – checking if UBPM trigger events need to be fired
    GPSVC(31c.174) 10:02:00:397 CheckAndFireGPTriggerEvent: Fired Policy present UBPM trigger event for Machine.
    GPSVC(31c.174) 10:02:00:397 Application complete with bConnectivityFailure = 0.

     

    User GPO Thread

    This next part of the GPO log is dedicated to the user thread.

    While the machine thread had the TID (31c.174), the user thread has (31c.b8), which you can notice when the thread actually starts. You can see that the user SID is found.
    Also, notice this time the “bConsole: 1” at the end, instead of the 0 we had for the machine.

    GPSVC(31c.704) 10:02:47:147 CGPEventSubSystem::GroupPolicyOnLogon::++ (SessionId: 1)
    GPSVC(31c.704) 10:02:47:147 CGPApplicationService::UserLogonEvent::++ (SessionId: 1, ServiceRestart: 0)
    GPSVC(31c.704) 10:02:47:147 CGPApplicationService::CheckAndCreateCriticalPolicySection.
    GPSVC(31c.704) 10:02:47:147 User SID = <S-1-5-21-646618010-1986442393-1057151281-1103>
    GPSVC(31c.b8) 10:02:47:147 CGroupPolicySession::ApplyGroupPolicyForPrincipal::++ (bTriggered: 0, bConsole: 1)

    You can see that it does the network check again and that it is also prepared to wait for network.

    GPSVC(31c.b8) 10:02:47:147 CGPApplicationService::GetTimeToWaitOnNetwork.
    GPSVC(31c.b8) 10:02:47:147 CGPMachineStartupConnectivity::CalculateWaitTimeoutFromHistory: Average is 3334.
    GPSVC(31c.b8) 10:02:47:147 CGPMachineStartupConnectivity::CalculateWaitTimeoutFromHistory: Current is 2203.
    GPSVC(31c.b8) 10:02:47:147 CGPMachineStartupConnectivity::CalculateWaitTimeoutFromHistory: Taking min of 6668 and 120000.
    GPSVC(31c.b8) 10:02:47:147 CGPApplicationService::GetStartTimeForNetworkWait.
    GPSVC(31c.b8) 10:02:47:147 StartTime For network wait: 3750ms

    In this case it decides to wait for network with timeout 0 ms because it already has network connectivity and so moves on to processing GPOs.

    GPSVC(31c.b8) 10:02:47:147 UserPolicy: Waiting for machine policy wait for network event with timeout 0 ms
    GPSVC(31c.b8) 10:02:47:147 CGroupPolicySession::ApplyGroupPolicyForPrincipal::ApplyGroupPolicy (dwFlags: 38).

    The next part remains the same as for the machine thread, it searches and returns networks found, number of interfaces and bandwidth check.

    GPSVC(31c.b8) 10:02:47:147 NlaQueryNetSignatures returned 1 networks
    GPSVC(31c.b8) 10:02:47:147 NSI Information (Network GUID) : {1F777393-0B42-11E3-80AD-806E6F6E6963}
    GPSVC(31c.b8) 10:02:47:147 # of interfaces : 1
    GPSVC(31c.b8) 10:02:47:147 Interface ID: {9869CFDA-7F10-4B3F-B97A-56580E30CED7}
    GPSVC(31c.b8) 10:02:47:163 GetDomainControllerConnectionInfo: Enabling bandwidth estimate.
    GPSVC(31c.b8) 10:02:47:475 Started bandwidth estimation successfully
    GPSVC(31c.b8) 10:02:47:851 IsSlowLink: Current Bandwidth >= Bandwidth Threshold.

    The ldap query for the GPOs is done in the same manner as for the machine thread:

    GPSVC(31c.b8) 10:02:47:490 GetGPOInfo: Entering…
    GPSVC(31c.b8) 10:02:47:490 SearchDSObject: Searching <OU=Admin Users,DC=contoso,DC=lab>
    GPSVC(31c.b8) 10:02:47:490 SearchDSObject: Found GPO(s): <[LDAP://cn={CCF581E3-E2ED-441F-B932-B78A3DFAE09B},cn=policies,cn=system,DC=contoso,DC=lab;0]>
    GPSVC(31c.b8) 10:02:47:490 ProcessGPO(User): ==============================
    GPSVC(31c.b8) 10:02:47:490 ProcessGPO(User): Deferring search for LDAP://cn={CCF581E3-E2ED-441F-B932-B78A3DFAE09B},cn=policies,cn=system,DC=contoso,DC=lab
    GPSVC(31c.b8) 10:02:47:490 SearchDSObject: Searching <DC=contoso,DC=lab>
    GPSVC(31c.b8) 10:02:47:490 SearchDSObject: Found GPO(s): <[LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab;0]>
    GPSVC(31c.b8) 10:02:47:490 ProcessGPO(User): ==============================
    GPSVC(31c.b8) 10:02:47:490 ProcessGPO(User): Deferring search for LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab
    GPSVC(31c.b8) 10:02:47:490 SearchDSObject: Searching <CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=contoso,DC=lab>
    GPSVC(31c.b8) 10:02:47:490 SearchDSObject: No GPO(s) for this object.
    GPSVC(31c.b8) 10:02:47:490 EvaluateDeferredGPOs: Searching for GPOs in cn=policies,cn=system,DC=contoso,DC=lab
    GPSVC(31c.b8) 10:02:47:490 EvaluateDeferredGPOs: Adding filters (&(!(flags:1.2.840.113556.1.4.803:=1))(gPCUserExtensionNames=[*])((|(distinguishedName=CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab)(distinguishedName=cn={CCF581E3-E2ED-441F-B932-B78A3DFAE09B},cn=policies,cn=system,DC=contoso,DC=lab))))

We can see that the GPOs are processed exactly as explained in the machine part; the difference is that this time the GPO has to be available for the user rather than the machine. The important thing in the following example is that the Default Domain Policy (we know it is the Default Domain Policy because it has the hardcoded GUID {31B2F340-016D-11D2-945F-00C04FB984F9}, which is the same in every domain) contains no extensions for the user side, and is therefore reported as having no extensions:

    GPSVC(31c.b8) 10:02:47:851 EvalList: Object <CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=lab> cannot be accessed/is disabled/or has no extensions
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): ==============================
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Searching <cn={CCF581E3-E2ED-441F-B932-B78A3DFAE09B},cn=policies,cn=system,DC=contoso,DC=lab>
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): User has access to this GPO.
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found common name of: <{CCF581E3-E2ED-441F-B932-B78A3DFAE09B}>
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User):
    GPO passes the filter check.
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found functionality version of: 2
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found file system path of: \\contoso.lab\SysVol\contoso.lab\Policies\{CCF581E3-E2ED-441F-B932-B78A3DFAE09B}
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found display name of: <GPO Guide Test Admin Users>
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found user version of: GPC is 3, GPT is 3
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found flags of: 0
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): Found extensions: [{35378EAC-683F-11D2-A89A-00C04FBBCFA2}{D02B1F73-3407-48AE-BA88-E8213C6761F1}]
    GPSVC(31c.b8) 10:02:47:851 ProcessGPO(User): ==============================

    After that, our policy settings are processed directly into the registry by the CSE:

    GPSVC(318.7ac) 02:02:02:187 SetRegistryValue: NoWindowsMarketplace => 1 [OK]
    GPSVC(318.7ac) 02:02:02:187 SetRegistryValue: ScreenSaveActive => 0 [OK]

It then moves on to process the CSEs for particular settings, such as Folder Redirection, Disk Quota, etc., exactly as was done for the machine thread.

Here it is the same as for the machine thread: the user thread also finishes with bConnectivityFailure = 0, and everything was applied as expected.

    GPSVC(31c.b8) 10:02:47:912 User logged in on active session
    GPSVC(31c.b8) 10:02:47:912 ApplyGroupPolicy: Getting ready to create background thread GPOThread.
    GPSVC(31c.b8) 10:02:47:912 CGroupPolicySession::ApplyGroupPolicyForPrincipal Setting m_pPolicyInfoReadyEvent
    GPSVC(31c.b8) 10:02:47:912 Application complete with bConnectivityFailure = 0.

In the gpsvc log you will always have confirmation of whether the “problematic” GPO was processed; this verifies that the GPO was read and applied from the domain. The registry values that the GPO contains should be applied on the client side by the CSEs, so if you see a GPO being applied in the gpsvc log but the desired setting is not taking effect on the client, it is a good idea to check the registry values yourself with regedit to ensure they have been set properly.

If these registry values are being changed after they have been applied, a good Microsoft tool for further troubleshooting is Process Monitor, which can be used to watch those registry settings and see who is changing them.
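For example, here is a minimal PowerShell sketch for spot-checking a value the registry CSE should have written. I am assuming the ScreenSaveActive value from the log above lives under the user Desktop policies key; substitute the key and value name your own setting writes:

# Minimal sketch: read a policy value the registry CSE should have set.
# Assumption: ScreenSaveActive (seen in the log above) is written under the
# user's Desktop policies key; substitute your own setting's key/value.
Get-ItemProperty -Path 'HKCU:\Software\Policies\Microsoft\Windows\Control Panel\Desktop' |
    Select-Object ScreenSaveActive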

There are certainly all sorts of problem scenarios that I haven’t covered in this guide. It is meant as a starter guide to give you an idea of how to follow up, using the gpsvc log, when your domain GPOs aren’t getting applied.

Finally, as Client Side Extensions (CSEs) play a major role in GPO settings distribution, here is a list for those of you who want to go deeper with CSE logging, which you can enable to gather more information about CSE state:

Scripts and Administrative Templates CSE Debug Logging (gptext.dll)
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon

    ValueName: GPTextDebugLevel
    ValueType: REG_DWORD
    Value Data: 0x00010002
    Options: 0x00000001 = DL_Normal
    0x00000002 = DL_Verbose
    0x00010000 = DL_Logfile
    0x00020000 = DL_Debugger

    Log File: C:\WINNT\debug\usermode\gptext.log
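If you prefer to set this with PowerShell rather than REGEDIT, here is a minimal sketch that writes the value exactly as listed above (run from an elevated prompt):

# Minimal sketch: enable gptext.dll CSE debug logging with the values above.
$key = 'HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon'
New-ItemProperty -Path $key -Name 'GPTextDebugLevel' -PropertyType DWord -Value 0x00010002 -Force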

    Security CSE WINLOGON Debug Logging (scecli.dll)
    KB article: 245422 How to Enable Logging for Security Configuration Client Processing in Windows 2000

HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{827D319E-6EAC-11D2-A4EA-00C04F79F83A}

    ValueName: ExtensionDebugLevel
    ValueType: REG_DWORD
    Value Data: 2
    Options: 0 = Log Nothing
    1 = Log only errors
    2 = Log all transactions

    Log File: C:\WINNT\security\logs\winlogon.log

    Folder Redirection CSE Debug Logging (fdeploy.dll)
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics

    ValueName: fdeployDebugLevel
    ValueType: REG_DWORD
    Value Data: 0x0f

    Log File: C:\WINNT\debug\usermode\fdeploy.log

    Offline Files CSE Debug Logging (cscui.dll)
    KB article: 225516 How to Enable the Offline Files Notifications Window in Windows 2000

    Software Installation CSE Verbose logging (appmgmts.dll)
    KB article: 246509 Troubleshooting Program Deployment by Using Verbose Logging
HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics

    ValueName: AppmgmtDebugLevel
    ValueType: REG_DWORD
    Value Data: 0x9B or 0x4B

    Log File: C:\WINNT\debug\usermode\appmgmt.log

    Software Installation CSE Windows Installer Verbose logging
    KB article: 314852 How to enable Windows Installer logging

    HKLM\Software\Policies\Microsoft\Windows\Installer

    ValueName: Logging
    Value Type: Reg_SZ
    Value Data: voicewarmup

    Log File: C:\WINNT\temp\MSI*.log

    Desktop Standard CSE Debug Logging
    KB article: 931066 How to enable tracing for client-side extensions in PolicyMaker

    GPEDIT – Group Policy Editor Console Debug Logging
    TechNet article: Enabling Logging for Group Policy Editor
    HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon

    Value Name: GPEditDebugLevel
    Value Type: REG_DWORD
    Value Data: 0x10002

    Log File: %windir%\debug\usermode\gpedit.log

    GPMC – Group Policy Management Console Debug Logging
    TechNet article: Enable Logging for Group Policy Management Console
    HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics

    Value Name: GPMgmtTraceLevel
    Value Type: REG_DWORD
    Value Data: 2

    HKLM\Software\Microsoft\Windows NT\CurrentVersion\Diagnostics

    Value Name: GPMgmtLogFileOnly
    Value Type: REG_DWORD
    Value Data: 1

    Log File: C:\Documents and Settings\<user>\Local Settings\Temp\gpmgmt.log

     

    RSOP – Resultant Set of Policies Debug Logging
    Debug Logging for RSoP Procedures:
    HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon

    Value Name: RsopDebugLevel
    Value Type: REG_DWORD
    Value Data: 0x00010004

    Log File: %windir%\system32\debug\USERMODE\GPDAS.LOG

    WMI Debug Logging
    ASKPERF blog post: WMI Debug Logging

    I hope this was interesting and shed some light on how to start analyzing the gpsvc log.

    Thank you,

    David Ani

    Troubleshoot ADFS 2.0 with these new articles


    Hi all, here’s a quick public service announcement to highlight some recently published ADFS 2.0 troubleshooting guidance. We get a lot of questions about configuring and troubleshooting ADFS 2.0, so our support and content teams have pitched in to create a series of troubleshooting articles to cover the most common scenarios.

    ADFS 2.0 connectivity problems: “This page cannot be displayed” – You receive a “This page cannot be displayed” error message when you try to access an application on a website that uses AD FS 2.0. Provides a resolution.

ADFS 2.0 service configuration and startup issues: ADFS service won’t start – Provides troubleshooting steps for ADFS service configuration and startup problems.

ADFS 2.0 certificate problems: “An error occurred during an attempt to build the certificate chain” – A certificate-related change in AD FS 2.0 causes certificate, SSL, and trust errors, including Event 133. Provides a resolution.

ADFS 2.0 authentication problems: “Not Authorized HTTP error 401” – You cannot authenticate an account in AD FS 2.0, you are prompted for credentials, and event 111 is logged. Provides a resolution.

    ADFS 2.0 claims rules problems: “Access is denied” – You receive an “Access Denied” error message when you try to access an application in AD FS 2.0. Provides a resolution.

    We hope you will find these troubleshooters useful. You can provide feedback and comments at the bottom of each KB if you want to help us improve them.

    Windows 10 Group Policy (.ADMX) Templates now available for download


    Hi everyone, Ajay here.  I wanted to let you all know that we have released the Windows 10 Group Policy (.ADMX) templates on our download center as an MSI installer package. These .ADMX templates are released as a separate download package so you can manage group policy for Windows 10 clients more easily.

    This new package includes additional (.ADMX) templates which are not included in the RTM version of Windows 10.

     

    1. DeliveryOptimization.admx
    2. fileservervssagent.admx
    3. gamedvr.admx
    4. grouppolicypreferences.admx
    5. grouppolicy-server.admx
    6. mmcsnapins2.admx
    7. terminalserver-server.admx
    8. textinput.admx
    9. userdatabackup.admx
    10. windowsserver.admx

    To download the Windows 10 Group Policy (.ADMX) templates, please visit http://www.microsoft.com/en-us/download/details.aspx?id=48257

    To review which settings are new in Windows 10, review the Windows 10 ADMX spreadsheet here: http://www.microsoft.com/en-us/download/details.aspx?id=25250

    Ajay Sarkaria


    Manage Developer Mode on Windows 10 using Group Policy


    Hi All,

We’ve had a few folks want to know how to disable Developer Mode using Group Policy, but still allow side-loaded apps to be installed.  Here is a quick note on how to do this. (A more AD-centric post from Linda Taylor is on its way.)

On the Windows 10 device, click the Windows logo key clip_image001 and then click Settings.

    clip_image002

    Click on Update & Security

    clip_image003

    From the left-side pane, select For developers and from the right-side pane, choose the level that you need.

    clip_image004

    · If you choose Sideload apps: You can install an .appx and any certificate that is needed to run the app with the PowerShell script that is created with the package. Or you can use manual steps to install the certificate and package separately.

    · If you choose Developer mode: You can debug your apps on that device. You can also sideload any apps if you choose developer mode, even ones that you have not developed on the device. You just have to install the .appx with its certificate for sideloading.

    Use Group Policy Editor (gpedit) to enable your device:

Using Group Policy Editor (gpedit.msc), Developer Mode can be enabled or disabled on computers running Windows 10.

1. Open the Windows Run box: press Windows logo key + R.

    2. Type in gpedit.msc and then press Enter.

    3. In Group Policy Editor navigate to Computer Configuration\Administrative Templates\Windows Components\App Package Deployment.

    4. From the right-side pane, double click on Allow all trusted apps to install and click on Enabled button.

    5. Click on Apply and then OK .

    Notes:

    · Allow all trusted apps to install

o If you want to disable access to everything in For developers, disable this policy setting.

    o If you enable this policy setting, you can install any LOB or developer-signed Windows Store app.

If you want to allow side-loaded apps to install but disable the other Developer Mode options, disable "Developer mode" and enable "Allow all trusted apps to install" (see the sketch after these notes).

    · Group policies are applied every 90 minutes, plus or minus a random amount up to 30 minutes. To apply the policy immediately, run gpupdate from the command prompt.
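As promised above, here is a minimal PowerShell sketch for setting this policy value locally for a quick test. Treat the key and value name as an assumption on my part (AllowAllTrustedApps under the Appx policies key is what this ADMX setting is commonly documented to write); verify against your own ADMX before relying on it, and prefer managing this through Group Policy in production:

# Sketch only: locally set the "Allow all trusted apps to install" policy value.
# Assumption: the policy writes AllowAllTrustedApps under the Appx policies key.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Appx'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'AllowAllTrustedApps' -PropertyType DWord -Value 1 -Force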

    For more information on Developer Mode, see the following MSDN article:
    https://msdn.microsoft.com/library/windows/apps/xaml/dn706236.aspx?f=255&MSPPError=-2147217396

    SHA1 Key Migration to SHA256 for a two tier PKI hierarchy


Hello. Jim here again to take you through the migration steps for moving your two tier PKI hierarchy from SHA1 to SHA256. I will not be explaining the differences between the two or the supportability / security implications of either. That information is readily available, easily discoverable, and is referenced in the links provided below. Please note the following:

    Server Authentication certificates: CAs must begin issuing new certificates using only the SHA-2 algorithm after January 1, 2016. Windows will no longer trust certificates signed with SHA-1 after January 1, 2017.

    If your organization uses its own PKI hierarchy (you do not purchase certificates from a third-party), you will not be affected by the SHA1 deprecation. Microsoft's SHA1 deprecation plan ONLY APPLIES to certificates issued by members of the Microsoft Trusted Root Certificate program.  Your internal PKI hierarchy may continue to use SHA1; however, it is a security risk and diligence should be taken to move to SHA256 as soon as possible.

In this post, I will be following the steps documented here with some modifications: Migrating a Certification Authority Key from a Cryptographic Service Provider (CSP) to a Key Storage Provider (KSP) – https://technet.microsoft.com/en-us/library/dn771627.aspx

    The steps that follow in this blog will match the steps in the TechNet article above with the addition of screenshots and additional information that the TechNet article lacks.

    Additional recommended reading:

    The following blog written by Robert Greene will also be referenced and should be reviewed – http://blogs.technet.com/b/askds/archive/2015/04/01/migrating-your-certification-authority-hashing-algorithm-from-sha1-to-sha2.aspx

This Wiki article written by Roger Grimes should be reviewed as well – http://social.technet.microsoft.com/wiki/contents/articles/31296.implementing-sha-2-in-active-directory-certificate-services.aspx

    Microsoft Trusted Root Certificate: Program Requirements – https://technet.microsoft.com/en-us/library/cc751157.aspx

    The scenario for this exercise is as follows:

    A two tier PKI hierarchy consisting of an Offline ROOT and an Online subordinate enterprise issuing CA.

    Operating Systems:
    Offline ROOT and Online subordinate are both Windows 2008 R2 SP1

    OFFLINE ROOT
    CANAME – CONTOSOROOT-CA

    clip_image001

    ONLINE SUBORDINATE ISSUING CA
    CANAME – ContosoSUB-CA

    clip_image003

First, you should verify whether your CA is using a Cryptographic Service Provider (CSP) or a Key Storage Provider (KSP). This determines whether you have to go through all the steps or can skip straight to changing the CA hash algorithm to SHA2. The command for this is in step 3. The line to take note of in the output of this command is “Provider =”. If the Provider = line shows any of the top five service providers highlighted below, the CA is using a CSP and you must do the conversion steps. The RSA#Microsoft Software Key Storage Provider and everything below it are KSPs.

    clip_image005

    Here is sample output of the command – Certutil –store my <Your CA common name>

    As you can see, the provider is a CSP.

    clip_image006
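If you just want the provider line without scrolling through the full dump, you can filter the certutil output; a small sketch using the subordinate CA name from this exercise:

# Sketch: show only the "Provider =" line from the CA certificate store dump.
certutil -store my "ContosoSUB-CA" | Select-String 'Provider'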

    If you are using a Hardware Storage Module (HSM) you should contact your HSM vendor for special guidance on migrating from a CSP to a KSP. The steps for changing the Hashing algorithm to a SHA2 algorithm would still be the same for HSM based CA’s.

Some customers use their HSM for the CA private/public key, but use Microsoft CSPs for the Encryption CSP (used for the CA Exchange certificate).

    We will begin at the OFFLINE ROOT.

    BACKUP! BACKUP! BACKUP the CA and Private KEY of both the OFFLINE ROOT and Online issuing CA. If you have more than one CA Certificate (you have renewed multiple times), all of them will need to be backed up.

Use the MMC to back up the private key, or use CERTSRV.msc and right-click the CA name to back up, as follows, on both the online subordinate issuing CA and the OFFLINE ROOT CA –

    clip_image008

    clip_image010

    Provide a password for the private key file.

    clip_image012

You may also back up the registry location as indicated in step 1C.

    Step 2– Stop the CA Service

    Step 3- This command was discussed earlier to determine the provider.

    • Certutil –store my <Your CA common name>

Step 4 and Step 6 from the above referenced TechNet article should be done via the UI.

    a. Open the MMC – load the Certificates snapin for the LOCAL COMPUTER

    b. Right click each CA certificate (If you have more than 1) – export

    c. Yes, export the private key

    d. Check – Include all certificates in the certification path if possible

    e. Check – Delete the private key if the export is successful

    clip_image014

    f. Click next and continue with the export.

    Step 5
    Copy the resultant .pfx file to a Windows 8 or Windows Server 2012 computer

    Conversion requires a Windows Server 2012 certutil.exe, as Windows Server 2008 (and prior) do not support the necessary KSP conversion commands. If you want to convert a CA certificate on an ADCS version prior to Windows Server 2012, you must export the CA certificate off of the CA, import onto Windows Server 2012 or later using certutil.exe with the -KSP option, then export the newly signed certificate as a PFX file, and re-import on the original server.

    Run the command in Step 5 on the Windows 8 or Windows Server 2012 computer.

    • Certutil –csp <KSP name> -importpfx <Your CA cert/key PFX file>

    clip_image016

    Step 6

    a. To be done on the Windows 8 or Windows Server 2012 computer as previously indicated using the MMC.

    b. Open the MMC – load the Certificates snapin for the LOCAL COMPUTER

    c. Right click the CA certificate you just imported – All Tasks – export

*I have seen an issue where the “Yes, export the private key” option is dimmed after running the conversion command and trying to export via the MMC. If you encounter this behavior, simply reimport the .PFX file manually and check the box Mark this key as exportable during the import. This will not affect the previous conversion.

    d. Yes, export the private key.

    e. Check – Include all certificates in the certification path if possible

    f. Check – Delete the private key if the export is successful

    g. Click next and continue with the export.

    h. Copy the resultant .pfx file back to the destination 2008 R2 ROOTCA

    Step 7

    You can again use the UI (MMC) to import the .pfx back to the computer store on the ROOTCA

    *Don’t forget during the import to Mark this key as exportable.

    clip_image018

    ***IMPORTANT***

If you have renewed your CA multiple times with the same key, then after exporting the first CA certificate as indicated above in step 4 and step 6, you are breaking the private key association with the previously renewed CA certificates.  This is because you are deleting the private key upon successful export.  After doing the conversion and importing the resultant .pfx file on the CA (remembering to mark the private key as exportable), you must run the following command from an elevated command prompt for each of the additional CA certificates that were renewed previously:

certutil –repairstore MY <serialnumber>

    The Serial number is found on the details tab of the CA certificate.  This will repair the association of the public certificate to the private key.
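Putting that together, a short hedged example; the serial number below is a stand-in, so use the actual serial from the Details tab of each previously renewed CA certificate:

# Sketch: list the CA certificates in the local machine MY store to find
# the serial numbers of the previously renewed CA certificates...
certutil -store MY

# ...then repair the public/private key association for each one.
# The serial number below is hypothetical.
certutil -repairstore MY "612d349a000000000002"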


    Step 8

    Your CSP.reg file must contain the information highlighted at the top –

    clip_image020

    Step 8c

    clip_image022

    Step 8d– Run CSP.reg

    Step 9

    Your EncryptionCSP.reg file must contain the information highlighted at the top –

    clip_image024

    Step 9c– verification – certutil -v -getreg ca\encryptioncsp\EncryptionAlgorithm

    Step 9d– Run EncryptionCsp.reg

    Step 10

    Change the CA hash algorithm to SHA256

    clip_image026

    Start the CA Service

    Step 11

    For a root CA: You will not see the migration take effect for the CA certificate itself until you complete the migration of the root CA, and then renew the certificate for the root CA.

    Before we renew the OFFLINE ROOT certificate this is how it looks:

    clip_image028

    Renewing the CA’s own certificate with a new or existing (same) key would depend on the remaining validity of the certificate. If the certificate is at or nearing 50% of its lifetime, it would be a good idea to renew with a new key. See the following for additional information on CA certificate renewal –

    https://technet.microsoft.com/en-us/library/cc730605.aspx

    After we renew the OFFLINE ROOT certificate with a new key or the same key, its own Certificate will be signed with the SHA256 signature as indicated in the screenshot below:

    clip_image030

    Your OFFLINE ROOT CA is now completely configured for SHA256.

    Running CERTUTIL –CRL will generate a new CRL file also signed using SHA256

    clip_image032

    By default, CRT, CRL and delta CRL files are published on the CA in the following location – %SystemRoot%\System32\CertSrv\CertEnroll. The format of the CRL file name is the "sanitized name" of the CA plus, in parentheses, the "key id" of the CA (if the CA certificate has been renewed with a new key) and a .CRL extension. See the following for more information on CRL distribution points and the CRL file name – https://technet.microsoft.com/en-us/library/cc782162%28v=ws.10%29.aspx

    Copy this new .CRL file to a domain joined computer and publish it to Active Directory while logged on as an Enterprise Administrator from an elevated command prompt.

    Do the same for the new SHA256 ROOT CA certificate.

    • certutil -f -dspublish <.CRT file> RootCA
    • certutil –f -dspublish <.CRL file>

    Now continue with the migration of the Online Issuing Subordinate CA.

    Step 1– Backup the CA database and Private Key.

    Backup the CA registry settings

    Step 2– Stop the CA Service.

    Step 3- Get the details of your CA certificates

    Certutil –store my “Your SubCA name”

    image

    I have never renewed the Subordinate CA certificate so there is only one.

    Step 4 – 6

    As you know from what was previously accomplished with the OFFLINE ROOT, steps 4-6 are done via the MMC and we must do the conversion on a Windows 8 or Windows 2012 or later computer for reasons explained earlier.

    clip_image035

    *When you import the converted SUBCA .pfx file via the MMC, you must remember to again Mark this key as exportable.

    Step 8 – Step 9

    Creating and importing the registry files for CSP and CSP Encryption (see above)

    Step 10- Change the CA hash algorithm to SHA-2

    clip_image037

    Now in the screenshot below you can see the Hash Algorithm is SHA256.

    clip_image039

    The Subordinate CA’s own certificate is still SHA1. In order to change this to SHA256 you must renew the Subordinate CA’s certificate. When you renew the Subordinate CA’s certificate it will be signed with SHA256. This is because we previously changed the hash algorithm on the OFFLINE ROOT to SHA256.

    Renew the Subordinate CA’s certificate following the proper steps for creating the request and submitting it to the OFFLINE ROOT. Information on whether to renew with a new key or the same key was provided earlier. Then you will copy the resultant .CER file back to the Subordinate CA and install it via the Certification Authority management interface.

    If you receive the following error when installing the new CA certificate –

    clip_image041

    Check the newly procured Subordinate CA certificate via the MMC. On the certification path tab, it will indicate under certificate status that – “The signature of the certificate cannot be verified”

    This error could have several causes. You did not –dspublish the new OFFLINE ROOT .CRT file and .CRL file to Active Directory as previously instructed.

    clip_image043

Or you did publish the Root CA certificate, but the Subordinate CA has not done autoenrollment (AE) yet and therefore has not downloaded the “NEW” Root CA certificate via AE methods; or AE may be disabled on the CA altogether.

    After the files are published to AD and after verification of AE and group policy updates on the Subordinate CA, the install and subsequent starting of Certificate Services will succeed.

    Now in addition to the Hash Algorithm being SHA256 on the Subordinate CA, the Signature on its own certificate will also be SHA256.

    clip_image045

    The Subordinate CA’s .CRL files are also now signed with SHA256 –

    clip_image047

    Your migration to SHA256 on the Subordinate CA is now completed.

    I hope you found this information helpful and informative. I hope it will make your SHA256 migration project planning and implementation less daunting.

    Jim Tierney

    “Administrative limit for this request was exceeded" Error from Active Directory


    Hello, Ryan Ries here with my first AskDS post! I recently ran into an issue with a particular environment where Active Directory and UNIX systems were being integrated.  Microsoft has several attributes in AD to facilitate this, and one of those attributes is the memberUid attribute on security group objects.  You add user IDs to the memberUid attribute of the security group, and Active Directory will treat that as group membership from UNIX systems for the purposes of authentication/authorization.

    All was well and good for a long time. The group grew and grew to over a thousand users, until one day we wanted to add another UNIX user, and we were greeted with this error:

    “The administrative limit for this request was exceeded.”

    Wait, there’s a limit on this attribute? I wonder what that limit is.

    MSDN documentation states that the rangeUpper property of the memberUid attribute is 256,000. This support KB also mentions that:

    “The attribute size limit for the memberUID attribute in the schema is 256,000 characters. It depends on the individual value length on how many user identifiers (UIDs) will fit into the attribute.”

    And you can even see it for yourself if you fancy a gander at your schema:

    Something doesn’t add up here – we’ve only added around 1200 users to the memberUid attribute of this security group. Sure it’s a big group, but that doesn’t exceed 256,000 characters; not even close. Adding up all the names that I’ve added to the attribute, I figure it adds up to somewhere around 10,000 characters. Not 256,000.

    So what gives?

    (If you’ve been following along and you’ve already figured out the problem yourself, then please contact us! We’re hiring!)

    The problem here is that we’re hitting a different limit as we continue to add members to the memberUid attribute, way before we get to 256k characters.

    The memberUid attribute is a multivalued attribute, however it is not a linked attribute.  This means that it has a limitation on its maximum size that is less than the 256,000 characters shown on the memberUid attributeSchema object.

    You can distinguish between which attributes are linked or not based on whether those attributeSchema objects have values in their linkID attribute.
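You can check both rangeUpper and linkID straight from the schema with the AD PowerShell module; a minimal sketch:

# Sketch: read rangeUpper and linkID for memberUid from the schema partition.
# An empty linkID confirms that memberUid is multivalued but NOT linked.
Import-Module ActiveDirectory
Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext `
    -LDAPFilter '(lDAPDisplayName=memberUid)' `
    -Properties rangeUpper, linkID |
    Select-Object Name, rangeUpper, linkID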

    Example of a multivalued and linked attribute:

    Example of a multivalued but not linked attribute:

    So if the limit is not really 256,000 characters, then what is it?

    From How the Data Store Works on TechNet:

    “The maximum size of a database record is 8110 bytes, based on an 8-kilobyte (KB) page size. Because of variable overhead requirements and the variable number of attributes that an object might have, it is impossible to provide a precise limit for the maximum number of multivalues that an object can store in its attributes. …

    The only value that can actually be computed is the maximum number of values in a nonlinked, multivalued attribute when the object has only one attribute (which is impossible). In Windows 2000 Active Directory, this number is computed at 1575 values. From this value, taking various overhead estimates into account and generalizing about the other values that the object might store, the practical limit for number of multivalues stored by an object is estimated at 800 nonlinked values per object across all attributes.

    Attributes that represent links do not count in this value. For example, the members linked, multivalued attribute of a group object can store many thousands of values because the values are links only.

    The practical limit of 800 nonlinked values per object is increased in Windows Server 2003 and later. When the forest has a functional level of Windows Server 2003 or higher, for a theoretical record that has only one attribute with the minimum of overhead, the maximum number of multivalues possible in one record is computed at 3937. Using similar estimates for overhead, a practical limit for nonlinked multivalues in one record is approximately 1200. These numbers are provided only to point out that the maximum size of an object is somewhat larger in Windows Server 2003 and later.”

    (Emphasis is mine.)

    Alright, so according to the above article, if I’m in an Active Directory domain running all Server 2003 or better, which I am, then a “practical” limit for non-linked multi-value attributes should be approximately 1200 values.

    So let’s put that to the test, shall we?

I wrote a quick and dirty test script in PowerShell that generates a random 8-character string from a pool of characters (i.e., a random fictitious user ID) and adds it to the memberUid attribute of a security group, in a loop until an error tells us that no more values can be added:

# This script is for testing purposes only!
# Requires the ActiveDirectory module (RSAT) for Set-ADGroup.
Import-Module ActiveDirectory

# Pool of characters used to build the random 8-character user IDs.
$ValidChars = @('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',
                'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't',
                'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D',
                'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N',
                'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
                'Y', 'Z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9')

[String]$Str = [String]::Empty
[Int]$Bytes  = 0
[Int]$Uids   = 0

While ($Uids -lt 1000000)
{
    # Build a random, fictitious 8-character user ID.
    $Str = [String]::Empty
    1..8 | ForEach-Object { $Str += ($ValidChars | Get-Random) }
    Try
    {
        # Add the random UID as a new memberUid value on the group.
        Set-ADGroup 'TestGroup' -Add @{ memberUid = $Str } -ErrorAction Stop
    }
    Catch
    {
        # We hit the limit - report how far we got, then stop.
        Write-Error $_.Exception.Message
        Write-Host "$Bytes bytes $Uids users added"
        Break
    }
    $Bytes += 8
    $Uids  += 1
}

    Here’s the output from when I run the script:

    Huh… whaddya’ know? Approximately 1200 users before we hit the “administrative limit,” just like the article suggests.

    One way of getting around this attribute's maximum size would be to use nested groups, or to break the user IDs apart into two separate groups… although this may cause you to have to change some code on your UNIX systems. It’s typically not a fun day when you first realize this limit exists. Better to know about it beforehand.

    Another attribute in Active Directory that could potentially hit a similar limit is the servicePrincipalName attribute, as you can read about in this AskPFEPlat article.

    Until next time!

    Ryan Ries

    Using Repadmin with ADLDS and Lingering objects


     

    Hi! Linda Taylor here from the UK Directory Services escalation team. This time on ADLDS, Repadmin, lingering objects and even PowerShell….

    The other day a colleague was trying to remove a lingering object in ADLDS. He asked me about which repadmin syntax would work for ADLDS and it occurred to us both that all the documented examples we found for repadmin were only for AD DS.

    So, here are some ADLDS specific examples of repadmin use.

For the purpose of this post I will be using 2 servers with ADLDS. Both servers belong to the Root.contoso.com domain and they replicate a partition called DC=Fabrikam.

      LDS1 runs ADLDS on port 50002.
      RootDC1 runs ADLDS on port 51995.

    1. Who is replicating my partition?

    If you have many servers in your replica set you may want to find out which ADLDS servers are replicating a specific partition. ….Yes! The AD PowerShell module works against ADLDS.

    You just need to add the :port on the end of the servername.

    One way to list which servers are replicating a specific application partition is to query the attribute msDs-MasteredBy on the respective partition. This attribute contains a list of NTDS server settings objects for the servers which replicate this partition.

    You can do this with ADSIEDIT or ldp.exe or PowerShell or any other means.

PowerShell example: use the Get-ADObject cmdlet. I will target my command at localhost:51995 (I am running this on RootDC1).

    powershell_lindakup_ADLDS
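For reference, the command in the screenshot is along these lines (a sketch using the partition and port from this example):

# Sketch: read msDS-MasteredBy from the DC=Fabrikam partition head via the
# local ADLDS instance on port 51995 to list the replicating servers.
Import-Module ActiveDirectory
Get-ADObject -Identity 'DC=Fabrikam' -Server 'localhost:51995' `
    -Properties msDS-MasteredBy |
    Select-Object -ExpandProperty msDS-MasteredBy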

Notice there are 2 NTDS Settings objects returned, and the server name is recorded as ServerName$ADLDSInstanceName.

So this tells me that, according to localhost:51995, the DC=Fabrikam partition is replicated between server LDS1$instance1 and server ROOTDC1$instance1.

    2. REPADMIN for ADLDS

    Generic rules and Tips:

    • For most commands the golden rule is to simply use the port inside the DSA_NAME or DSA_LIST parameters like lds1:50002 or lds1.contoso.com:50002. That’s it!

    For example:

    CMD

     

• There are some things which do not apply to ADLDS: anything involving FSMO roles that ADLDS does not have, like the PDC and RID masters, or the Global Catalog – again, no such thing in ADLDS.
• A very useful switch for ADLDS is the /homeserver switch:

Usually, by default, repadmin assumes you are working with AD and will use the locator, or will attempt to connect to the local server on port 389 if that fails. However, for ADLDS the /homeserver switch allows you to specify an ADLDS server:port.

For example, if you want to get the replication status for all ADLDS servers in a configuration set (as for AD you would run repadmin /showrepl * /csv), for ADLDS you can run the following:

    Repadmin /showrepl /homeserver:localhost:50002 * /csv >out.csv

    Then you can open the OUT.CSV using something like Excel or even notepad and view a nice summary of the replication status for all servers. You can then sort this and chop it around to your liking.

    The below explanation of HOMESERVER is taken from repadmin /listhelp output:

If the DSA_LIST argument is a resolvable server name (such as a DNS or WINS name) this will be used as the homeserver. If a non-resolvable parameter is used for the DSA_LIST, repadmin will use the locator to find a server to be used as the homeserver. If the locator does not find a server, repadmin will try the local box (port 389).

The /homeserver:[dns name] option is available to explicitly control home server selection. This is especially useful when there are more than one forest or configuration set possible. For example, the DSA_LIST command "fsmo_istg:site1" would target the locally joined domain's directory, so to target an AD/LDS instance, /homeserver:adldsinstance:50000 could be used to resolve the fsmo_istg to site1 defined in the ADAM configuration set on adldsinstance:50000 instead of the fsmo_istg to site1 defined in the locally joined domain.

Finally, a particular gotcha that can send you in the wrong troubleshooting direction is an LDAP 0x51 “server down” error, which is returned if you forget to add the DSA_NAME and/or port to your repadmin command, like this:

    lindakup_CMD2_ADLDS

    3. Lingering objects in ADLDS

Just like in AD, you can get lingering objects in ADLDS. The only difference is that there is no Global Catalog in ADLDS, and thus no lingering objects are possible in a read-only partition.

    EVENT ID 1988 or 2042:

If you bring an outdated instance (past tombstone lifetime, or TSL) back online in ADLDS, you may see event 1988 as per http://support.microsoft.com/kb/870695/EN-US “Outdated Active Directory objects generate event ID 1988”.

On Windows Server 2012 R2 you will see event 2042 instead, telling you that it has been more than a tombstone lifetime since you last replicated, so replication is disabled.

    What to do next?

    First you want to check for lingering objects and remove if necessary.

1. To check for lingering objects, you can use repadmin /removelingeringobjects with the /advisory_mode switch (see the example after the event screenshot below).

My colleague Ian Farr, or “Posh Chap” as we call him, recently worked with a customer on such a case and put together a great PowerShell blog post with a one-liner for detecting and removing lingering objects from ADLDS with PowerShell. Check it out here:

    http://blogs.technet.com/b/poshchap/archive/2014/05/09/one-liner-collect-ad-lds-lingering-object-advisory-mode-1946-events.aspx

    Example event 1946:

    Event1946
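For completeness, the advisory-mode check uses the same syntax as the removal command in step 2 below, with /advisory_mode appended, for example:

Repadmin /removelingeringobjects lds1:50002 8fc92fdd-e5ec-45fb-b7d3-120f9f9f192 DC=Fabrikam /advisory_mode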

2.  Once you have detected any lingering objects and decided that they need to be removed, you can remove them using the same repadmin command as in Ian’s blog, but without /advisory_mode.

    Example command to remove lingering objects:

    Repadmin /removelingeringobjects lds1:50002 8fc92fdd-e5ec-45fb-b7d3-120f9f9f192 DC=Fabrikam

where lds1:50002 is the LDS instance and port from which to remove lingering objects,

8fc92fdd-e5ec-45fb-b7d3-120f9f9f192 is the DSA GUID of a good LDS server/instance, and

DC=Fabrikam is the partition from which to remove lingering objects.

    For each lingering object removed you will see event 1945.

    Event1945

You can use Ian’s one-liner again to get a list of all the objects which were removed.

    As a good practice you should also do the lingering object checks for the Configuration partition.

    Once all lingering objects are removed replication can be re-enabled again and you can go down the pub…(maybe).

    I hope this is useful.

    Linda.

    Speaking in Ciphers and other Enigmatic tongues…update!


    Hi! Jim Tierney here again to talk to you about Cryptographic Algorithms, SCHANNEL and other bits of wonderment. My original post on the topic has gone through a rewrite to bring you up to date on recent changes in this space. 
    So, your company purchases this new super awesome vulnerability and compliance management software suite, and they just ran a scan on your Windows Server 2008 domain controllers and lo! The software reports back that you have weak ciphers enabled, highlighted in RED, flashing, with that "you have failed" font, and including a link to the following Microsoft documentation –

    KB245030 How to Restrict the Use of Certain Cryptographic Algorithms and Protocols in Schannel.dll:

    http://support.microsoft.com/kb/245030/en-us

    The report may look similar to this:

    SSL Server Has SSLv2 Enabled Vulnerability port 3269/tcp over SSL

    THREAT:
    The Secure Socket Layer (SSL) protocol allows for secure communication between a client and a server.
    There are known flaws in the SSLv2 protocol. A man-in-the-middle attacker can force the communication to a less secure level and then attempt to break the weak encryption. The attacker can also truncate encrypted messages.

    SOLUTION:
    Disable SSLv2.

Upon hearing this information, you fire up your browser and read the aforementioned KB 245030 top to bottom, then RDP into your DCs and begin checking the locations specified by the article. Much to your dismay, you notice that the locations specified in the article are not correct for your Windows 2008 R2 DCs. On your 2008 R2 DCs you see the following at this registry location
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL:

    clip_image001

    "Darn you Microsoft documentation!!!!!!" you scream aloud as you shake your fist in the general direction of Redmond, WA….

    This is how it looks on a Windows 2003 Server:

    clip_image002

    Easy now…

The registry keys and their content in Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 look different from those in Windows Server 2003 and prior.

    Here is the registry location on Windows 7 – 2012 R2 and its default contents:

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel]
    "EventLogging"=dword:00000001[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\CipherSuites]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Hashes][HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms][HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols][HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0][HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001

    Allow me to explain the above content that is displayed in standard REGEDIT export format:

     

    • The Ciphers key should contain no values or subkeys
    • The CipherSuites key should contain no values or subkeys
    • The Hashes key should contain no values or subkeys
    • The KeyExchangeAlgorithms key should contain no values or subkeys
    • The Protocols key should contain the following sub-keys and value:
      Protocols
          SSL 2.0
             Client
                 DisabledByDefault REG_DWORD 0x00000001 (value)

    The following table lists the Windows SCHANNEL protocols and whether or not they are enabled or disabled by default in each operating system listed:

    image

    *Remember to install the following update if you plan on or are currently using SHA512 certificates:

    SHA512 is disabled in Windows when you use TLS 1.2
    http://support.microsoft.com/kb/2973337/EN-US

Similar to Windows Server 2003, these protocols can be disabled for the server or client architecture. This means that either the protocol can be omitted from the list of supported protocols included in the Client Hello when initiating an SSL connection, or it can be disabled on the server so that even if a client requests SSL 2.0 in its Client Hello, the server will not respond with that protocol.

    The client and server subkeys designate each protocol. You can disable a protocol for either the client or the server, but disabling Ciphers, Hashes, or CipherSuites affects BOTH client and server sides. You would have to create the necessary subkeys beneath the Protocols key to achieve this.

    For example:

    Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
"DisabledByDefault"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Server]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]

    This is how it looks in the registry after they have been created:

    clip_image005

    Client SSL 2.0 is disabled by default on Windows Server 2008, 2008 R2, 2012 and 2012 R2.

    This means the computer will not use SSL 2.0 to initiate a Client Hello.

    So it looks like this in the registry:

    clip_image006

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
"DisabledByDefault"=dword:00000001

    Just like Ciphers and KeyExchangeAlgorithms, Protocols can be enabled or disabled.
To disable other protocols, select which side of the conversation you want to disable the protocol on, and add the "Enabled"=dword:00000000 value. The example below disables SSL 2.0 for the server in addition to SSL 2.0 for the client.

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]

    DisabledByDefault =dword:00000001 <Default client disabled as I said earlier>

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]

    Enabled =dword:00000000 <disables SSL 2.0 server side>

    clip_image007

    After this, you will need to reboot the server. You probably do not want to disable TLS settings. I just added them here for a visual reference.

***For Windows Server 2008 R2, if you want to enable server-side TLS 1.1 and 1.2, you MUST create the registry entries as follows:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
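If you would rather script those keys than hand-edit the registry, here is a minimal PowerShell sketch that creates the same two keys and values shown above (run elevated, and reboot afterwards):

# Sketch: create the server-side TLS 1.1 / TLS 1.2 keys from the entries above.
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols'
foreach ($proto in 'TLS 1.1', 'TLS 1.2') {
    $key = Join-Path $base "$proto\Server"
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    New-ItemProperty -Path $key -Name 'DisabledByDefault' -PropertyType DWord -Value 0 -Force
}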

    So why would you go through all this trouble to disable protocols and such, anyway? Well, there may be a regulatory requirement that your company's web servers should only support Federal Information Processing Standards (FIPS) 140-1/2 certified cryptographic algorithms and protocols. Currently, TLS is the only protocol that satisfies such a requirement. Luckily, enforcing this compliant behavior does not require you to manually modify registry settings as described above. You can enforce FIPS compliance via group policy as explained by the following:

The effects of enabling the "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing" security setting in Windows XP and in later versions of Windows – http://support.microsoft.com/kb/811833

    The 811833 article talks specifically about the group policy setting below which by default is NOT defined –

    Computer Configuration\ Windows Settings \Security Settings \Local Policies\ Security Options

    clip_image008

The policy above, when applied, will modify the following registry locations and their value content.
Be advised that this FipsAlgorithmPolicy information is stored in different ways as well –

    Windows 7/2008
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy]
    "Enabled"=dword:00000000 <Default is disabled>


    Windows 2003/XP
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
    Fipsalgorithmpolicy =dword:00000000 <Default is disabled>

    Enabling this group policy setting effectively disables everything except TLS.
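A quick way to check the effective FIPS flag on the Windows 7/2008 family is to read the value the policy writes, per the location above; a minimal sketch:

# Sketch: read the FIPS algorithm policy flag (Windows 7 / 2008 location above).
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' |
    Select-Object Enabled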

    More Examples
    Let’s continue with more examples. A vulnerability report may also indicate the presence of other Ciphers it deems to be “weak”.

    Below I have built a .reg file that when imported will disable the following Ciphers:

    56-bit DES

    40-bit RC4

    Behold!

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 128]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 256]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\NULL]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168]

    After importing these registry settings, you must reboot the server.

    The vulnerability report might also mention that 40-bit DES is enabled, but that would be a false positive because Windows Server 2008 doesn't support 40-bit DES at all. For example, you might see this in a vulnerability report:

    Here is the list of weak SSL ciphers supported by the remote server:
    Low Strength Ciphers (< 56-bit key)
    SSLv3
    EXP-ADH-DES-CBC-SHA Kx=DH(512) Au=None Enc=DES(40) Mac=SHA1 export

    TLSv1
    EXP-ADH-DES-CBC-SHA Kx=DH(512) Au=None Enc=DES(40) Mac=SHA1 export

If this is reported and it is necessary to get rid of these entries, you can also disable the Diffie-Hellman key exchange algorithm (another component of the two cipher suites described above, designated with Kx=DH(512)).

    To do this, make the following registry changes:

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms\Diffie-Hellman]
    "Enabled"=dword:00000000

    You have to create the sub-key Diffie-Hellman yourself. Make this change and reboot the server.

This step is NOT advised or required… I am offering it as an option to you to make the vulnerability scanning tool pass the test.

    Keep in mind, also, that this will disable any cipher suite that relies upon Diffie-Hellman for key exchange.

    You will probably not want to disable ANY cipher suites that rely on Diffie-Hellman. Secure communications such as IPSec and SSL both use Diffie-Hellman for key exchange. If you are running OpenVPN on a Linux/Unix server you are probably using Diffie-Hellman for key exchange. The point I am trying to make here is you should not have to disable the Diffie-Hellman Key Exchange algorithm to satisfy a vulnerability scan.

    Advanced Ciphers have arrived!!!
Advanced ciphers were added to Windows 8.1 / Windows Server 2012 R2 computers by KB 2929781, released in April 2014, and again by monthly rollup KB 2919355, released in May 2014.

    Updated cipher suites were released as part of two fixes.

    KB 2919355 for Windows 8.1 and Windows Server 2012 R2 computers

    MS14-066 for Windows 7 and Windows 8 clients and Windows Server 2008 R2 and Windows Server 2012 Servers.

While these updates shipped new ciphers, the cipher suite priority ordering could not be updated correctly.

KB 3042058, released in March 2015, is a follow-up package to correct that issue. It is NOT applicable to Windows Server 2008 (non-R2).

    You can set a preference list for which cipher suites the server will negotiate first with a client that supports them.

    You can review this MSDN article on how to set the cipher suite prioritization list via GPO: http://msdn.microsoft.com/en-us/library/windows/desktop/bb870930(v=vs.85).aspx#adding__removing__and_prioritizing_cipher_suites

    Default location and ordering of Cipher Suites:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010002

    clip_image010
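To dump the current ordering on a machine, you can read that key with PowerShell. A sketch; I am assuming the ordering lives in the Functions multi-string value at this location, so verify on your own build:

# Sketch: dump the current SSL cipher suite ordering.
# Assumption: the ordering is stored in the 'Functions' REG_MULTI_SZ value.
(Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010002').Functions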

    Location of Cipher Suite ordering that is modified by setting this group policy –

    Computer Configuration\Administrative Templates\Network\SSL Configuration Settings\SSL Cipher Suite Order

    clip_image012

    When the SSL Cipher Suite Order group policy is modified and applied successfully it modifies the following location in the registry:

    HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002

    The Group Policy would dictate the effective cipher suites. Once this policy is applied, the settings here take precedence over what is in the default location. The GPO should override anything else configured on the computer. The Microsoft Schannel team does not support directly manipulating the registry.

    Group Policy settings are domain settings configured by a domain administrator and should always have precedence over local settings configured by local administrators.

Being secure is a good thing, and depending on your environment, it may be necessary to restrict certain cryptographic algorithms from use. Just make sure you do your due diligence about testing these settings. It is also well worth your time to really understand how the security vulnerability software your company just purchased does its testing. A double-sided network trace will reveal both sides of the client-server hello and which cryptographic algorithms are being offered from each side over the wire.

    Jim “Insert cryptic witticism here” Tierney
