Friday, December 30, 2016

Updating Device Guard Code Integrity Policies

In previous posts about Device Guard, I spent a lot of time talking about initial code integrity (CI) configurations and bypasses. What I haven't covered until now, however, is an extremely important topic: how does one effectively install software and update CI policies accordingly? In this post, I will walk you through how I got Chrome installed on my Surface Book running with an enforced Device Guard code integrity policy.

The first questions I posed to myself were:
  1. Should I place my system into audit mode, install the software, and base an updated policy on CodeIntegrity event log entries?
  2. Or should I install the software on a separate, non-Device Guard-protected system, analyze the file footprint, develop a policy based on the installed files, deploy, and test?
My preference is option #2, as I would prefer not to place a system back into audit mode if I can avoid it. That said, audit mode would yield the most accurate results, as it would tell you exactly which binaries would have been blocked and, therefore, exactly which binaries to base whitelist rules on. In this case, there's no right or wrong answer. My decision to go with option #2 was to base my rules solely on binaries that execute post-installation, not during installation. My mantra with whitelisting is to be as restrictive as is reasonable.
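For reference, if you do opt for audit mode (option #1), the blocked-in-audit binaries surface in the CodeIntegrity operational event log. A quick sketch of pulling them (event ID 3076 corresponds to binaries that would have been blocked had the policy been enforced):

```powershell
# Pull Code Integrity audit events (ID 3076: "would have been blocked")
# to see what an updated policy would need to cover.
Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' |
    Where-Object { $_.Id -eq 3076 } |
    Select-Object -First 10 -Property TimeCreated, Message
```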

So how did I go about beginning to enumerate the file footprint of Chrome?
  1. I opened Chrome, ran it as I usually would, and used PowerShell to enumerate loaded modules.
  2. I also happened to know that the Google updater runs as a scheduled task so I wanted to obtain the binaries executed via scheduled tasks as well.
I executed the following to get a rough sense of where Chrome files were installed:

(Get-Process -Name *Chrome*).Modules.FileName | Sort-Object -Unique
(Get-ScheduledTask -TaskName *Google*).Actions.Execute | Sort-Object -Unique

To my surprise and satisfaction, Google manages to house nearly all of its binaries in C:\Program Files (x86)\Google. This makes for a great starting point for building Chrome whitelist rules.

Next, I had to ask myself the following:
  1. Am I okay with whitelisting anything signed by Google?
  2. Do I only want to whitelist Chrome? i.e., all Chrome-related EXEs and the DLLs they rely upon.
  3. I will probably want Chrome to be able to update itself without Device Guard getting in the way, right?
While I like the idea of whitelisting just Chrome, there are some potential pitfalls. By whitelisting just Chrome, I would need to be aware of every EXE and DLL that Chrome requires to function. I could certainly compile that list, but it would be a relatively work-intensive effort. With that list, I would then create whitelist rules using the FilePublisher file rule level. This would be great initially, and it would potentially be the most restrictive strategy that still allows Chrome to update itself. The issue: what happens when Google decides to include one or more additional DLLs in the software installation? Device Guard will block them, and I will be forced to update my policy yet again. I'm all about applying a paranoid mindset to my policy, but at the end of the day, I need to get work done beyond constantly updating CI policies.

So the whitelist strategy I choose in this instance is to allow code signed by Google and to allow Chrome to update itself. This strategy equates to using the "Publisher" file rule level - "a combination of the PcaCertificate level (typically one certificate below the root) and the common name (CN) of the leaf certificate. This rule level allows organizations to trust a certificate from a major CA (such as Symantec), but only if the leaf certificate is from a specific company (such as Intel, for device drivers)."

I like the "Publisher" file rule level because it offers the most flexibility and longevity for a specific vendor's code signing certificate. If you look at the certificate chain for chrome.exe, you will see that the issuing PCA (i.e. the issuer above the leaf certificate) is Symantec. Obviously, we wouldn't want to whitelist all code signed by certs issued by Symantec, but I'm okay allowing code signed by Google, who received their certificate from Symantec.
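You can inspect that chain yourself with Get-AuthenticodeSignature. A quick sketch (the chrome.exe path below assumes a default install location):

```powershell
# Inspect chrome.exe's leaf certificate and its issuing PCA
$Sig = Get-AuthenticodeSignature 'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'
$Sig.SignerCertificate.Subject  # the leaf certificate (Google)
$Sig.SignerCertificate.Issuer   # the issuing PCA (Symantec)
```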

[Figure: certificate chain for chrome.exe]
So now I'm ready to create the first draft of my code integrity rules for Chrome.

I always start by creating a FilePublisher rule set for the binaries I want to whitelist because it lets me see which binaries are tied to which certificates.

$GooglePEs = Get-SystemDriver -ScanPath 'C:\Program Files (x86)\Google' -UserPEs
New-CIPolicy -FilePath Google_FilePub.xml -DriverFiles $GooglePEs -Level FilePublisher -UserPEs

What resulted was the following ruleset. Everything looked fine except for a single Microsoft rule that was generated for d3dcompiler_47.dll. I looked in my master rule policy and saw that I already had this rule. Being a bit obsessive, I wanted a pristine ruleset consisting of only Google rules. This is good practice anyway once you get into the habit of managing large whitelist rulesets: keep separate policy XMLs for each whitelisting scenario you run into, then merge them accordingly. After removing the Microsoft binary from the list, what resulted was a much cleaner ruleset (with the Publisher level applied this time) consisting of only two signer rules.

$OnlyGooglePEs = $GooglePEs | ? { -not $_.FriendlyName.EndsWith('d3dcompiler_47.dll') }
New-CIPolicy -FilePath Google_Publisher.xml -DriverFiles $OnlyGooglePEs -Level Publisher -UserPEs
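As a quick sanity check, you can parse the generated policy XML and confirm that only Google signer rules made it in. A sketch:

```powershell
# List the signer rules present in the generated policy XML
[xml]$Policy = Get-Content .\Google_Publisher.xml
$Policy.SiPolicy.Signers.Signer | Select-Object -Property Name
```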

So now, all I should need to do is merge the new rules into my master ruleset, redeploy, reboot, and if all works well, Chrome should install and execute without issue.

$MasterRuleXml = 'FinalPolicy.xml'
$ChromeRules = New-CIPolicyRule -DriverFiles $OnlyGooglePEs -Level Publisher
Merge-CIPolicy -OutputFilePath FinalPolicy_Merged.xml -PolicyPaths $MasterRuleXml -Rules $ChromeRules
ConvertFrom-CIPolicy -XmlFilePath .\FinalPolicy_Merged.xml -BinaryFilePath SIPolicy.p7b
# Finally, on the Device Guard system, replace the existing
# SIPolicy.p7b with the one that was just generated and reboot.
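Concretely, for a locally deployed policy (as opposed to one distributed via Group Policy or MDM), that replacement might look like this; the destination path below is the default local policy location:

```powershell
# Assumes a locally deployed policy; adjust if your policy is
# distributed via Group Policy or MDM instead.
Copy-Item .\SIPolicy.p7b -Destination 'C:\Windows\System32\CodeIntegrity\SIPolicy.p7b' -Force
Restart-Computer
```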

One thing I neglected to account for was the initial Chrome installer binary. I could have incorporated it into this process, but I wanted to try my luck and assume Google used the same certificate to sign the installer. Luckily, they did, and everything installed and executed perfectly. I consider this luck because I happened to select a software publisher (Google) that employs decent code signing practices.
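If you'd rather check than gamble, you can verify the installer's signer up front. A sketch (the installer filename here is an assumption):

```powershell
# Verify the downloaded installer chains to the expected signer
# before running it ('ChromeSetup.exe' is a hypothetical filename).
Get-AuthenticodeSignature .\ChromeSetup.exe |
    Select-Object -Property Status, @{ n = 'Signer'; e = { $_.SignerCertificate.Subject } }
```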

Conclusion

In future blog posts, I will document my experiences deploying software that doesn't adhere to proper signing practices or doesn't even sign their code. Hopefully, the Google Chrome case study will, at a minimum, ease you into the process of updating code integrity policies for new software deployments.

The bottom line is that this isn't an easy process. Are there ways in which Microsoft could improve the code integrity policy generation/update/deployment/auditing experience? Absolutely! Even if they did though, the responsibility ultimately lies on you to make informed decisions about what software you trust and how you choose to enforce that trust!

4 comments:

  1. Hi,

    Because of your amazing blog I also started to use Device Guard; so far I'm very impressed, thanks for your info. But I have one problem with my DG-protected system:

    I have a Windows 10 Universal App that always generates an (unsigned) .js file in its folder at every startup; the file path is "C:\Users\*\AppData\Local\Packages\SkyDeutschlandAG.SkyGo_5syynrx1xchwe\LocalState\Crittercism\Breadcrumbs.js". Of course Device Guard is blocking that. Creating a FileName rule for this .js file doesn't work; New-CIPolicy can't create an XML for it. Only a Hash rule would create an XML, but that doesn't work because the file changes after every start of the application. I suppose there is no way to have a working DG solution for that?

    Replies
    1. While I haven't played much with modern app enforcement, this sounds like a case where Microsoft recommends using AppLocker alongside Device Guard. I would ensure that "Required:Enforce Store Applications" is not present in your CI policy and refer to the following docs:

      * "Windows Defender Device Guard with AppLocker" - https://docs.microsoft.com/en-us/windows/device-security/device-guard/introduction-to-device-guard-virtualization-based-security-and-code-integrity-policies#other-features-that-relate-to-windows-defender-device-guard
      * https://docs.microsoft.com/en-us/windows/device-security/applocker/manage-packaged-apps-with-applocker#understanding-packaged-apps-and-packaged-app-installers-for-applocker

      While I'm sure that's not the ideal solution you're looking for, I hope that helps. The only thing thus far (as far as I can tell) that AppLocker still has that DG doesn't is user/group-specific rules and specific policy enforcement options for modern apps.

    2. Thanks for your answer! We have used AppLocker for over a year now under Win 10 and it does a good job on modern apps, so I was just curious how DG would handle this. I already deleted the option "Required: Enforce Store Applications" in my CI policy, but that didn't help. It's a modern app that always recreates the Breadcrumbs.js file, which is unsigned of course and always has a different hash - so DG is just doing its job, I would say :-) Other modern apps work fine with DG; it's just this special case where it doesn't. Maybe a redesign of this particular app would be good - just not creating the .js file every time, or doing it without .js :-)

    3. Bummer. Sorry that didn't work out. I would reach out to the Device Guard team and supply your feedback. They can be reached at dgext@microsoft.com. Thanks for your feedback!
