Windows on AWS: The Final Frontier

Just kidding, mostly. Anyone can next, next, finish their way to deploying a Windows-based EC2 instance in AWS. The trick, as I have learned, is making it actually useful for me.

What do I mean by that? Well, coming from an entirely on-premises Active Directory environment I had a number of challenges when it came to automatically provisioning instances in AWS and making them work in tandem with my existing environment. Something as simple as renaming the new Windows machine became hours of toil, trial, and, most of all, error. Then came the step of joining it to the AD domain, adding a user to the local administrators group, setting the IP as static vs DHCP, and configuring backups.

As I took on this endeavor, one of the things that surprised me the most was the sheer lack of documentation and accounts from other users attempting what I was doing. Seeing as these are the basic building blocks of Windows Server provisioning, I was amazed that I couldn't find more references for a 12-year-old platform (EC2). I suppose that's why I'm writing this. All of the following is presented from a fairly high level and assumes a certain level of familiarity with AWS, CloudFormation, and Active Directory.

The basic building blocks that allowed me to successfully accomplish my goal are the following:

  • An existing VPC with Subnets
  • A VPN tunnel from my existing AWS VPC to my on-premises environment (Not required if using AWS Directory Service as your domain)
  • AD Connector
  • CloudFormation
  • AWS Systems Manager
  • PowerShell (with the AWS Tools for PowerShell installed)
  • Group Policy

While it seems pretty straightforward, the execution was anything but. I'll start with CloudFormation.

CloudFormation is the native AWS orchestration engine. It allows for the automated provisioning of resources within AWS using either JSON or YAML templates. I'll admit, JSON was completely new to me as I started this project so that presented its own world of challenges. If you're completely new to editing/creating JSON, I would use a text editor that supports JSON natively. I used Visual Studio Code with the CloudFormation extension. I also started with a template provided by AWS.

While I went through the process of importing my image used in the on-premises environment, it's not required. In fact, unless you've heavily customized the image, it might even be a better use of your time to simply take AWS's standard AMI, customize it, and save it as your new image for deployment.
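
If you go the customize-and-save route, capturing the instance as a new AMI is a one-liner with the AWS Tools for PowerShell. A minimal sketch (the instance ID, image name, and tag are placeholders, not values from my environment):

```powershell
# Capture a customized instance as a reusable AMI
# (instance ID and names below are placeholders for your own values).
$imageId = New-EC2Image -InstanceId 'i-0123456789abcdef0' `
                        -Name 'win2016-base-custom' `
                        -Description 'Customized Server 2016 base image'

# Tag the AMI so it's easy to reference from the CF template later
New-EC2Tag -Resource $imageId -Tag @{ Key = 'Role'; Value = 'BaseImage' }
```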

Here is the JSON template I ended up with. It starts by asking you for the following:

  • What type of instance do you want to deploy (t2.micro, m4.xlarge, etc.)?
  • What should the name of the new instance be?
  • What is your existing AD FQDN (FLast@domain.com)?
  • What Subnet/Availability Zone should this be provisioned in?
  • What is the use case/class of this server (Prod, Dev, Test [used for tagging purposes])?
  • Should this server be backed up automatically (Used for tagging)?
  • What Operating System would you like to deploy (Server 2012R2, Server 2016, etc.)?
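
To give a feel for how those questions translate into a template, here's a trimmed sketch of what a Parameters section like mine might look like. The parameter names and allowed values are illustrative, not my exact template:

```json
"Parameters": {
  "InstanceType": {
    "Type": "String",
    "Default": "t2.micro",
    "AllowedValues": ["t2.micro", "m4.xlarge"],
    "Description": "EC2 instance type to deploy"
  },
  "ServerName": {
    "Type": "String",
    "Description": "Hostname for the new Windows instance"
  },
  "SubnetId": {
    "Type": "AWS::EC2::Subnet::Id",
    "Description": "Subnet (and therefore Availability Zone) to launch into"
  },
  "Environment": {
    "Type": "String",
    "AllowedValues": ["Prod", "Dev", "Test"],
    "Description": "Server class, used for tagging"
  },
  "Backup": {
    "Type": "String",
    "AllowedValues": ["Auto", "None"],
    "Description": "Tag value read later by the snapshot scripts"
  }
}
```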

As part of the server provisioning process, the PowerShell script found in the UserData section of the CF template is run. This script actually builds two additional scripts on the machine (Add_Admin.ps1 & Signal_Complete.ps1). After that, it signals CF to continue provisioning (via a wait condition), renames the machine to the specified value, and then reboots.
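
For illustration, the UserData might be shaped roughly like this. To be clear, this is a sketch of the pattern, not my exact script; the paths, group member, and wait-handle URL are all placeholders:

```powershell
<powershell>
# Sketch of the UserData the CF template runs on first boot
# (paths, account names, and wait-handle URLs are placeholders).
$name = 'NEWSERVER01'   # injected by CloudFormation from the ServerName parameter

# Write the two helper scripts that Group Policy will run after domain join
Set-Content -Path 'C:\Scripts\Add_Admin.ps1' -Value @'
Add-LocalGroupMember -Group 'Administrators' -Member 'DOMAIN\SomeUser'
'@

Set-Content -Path 'C:\Scripts\Signal_Complete.ps1' -Value @'
# cfn-signal tells the final wait condition that the build finished
& 'C:\Program Files\Amazon\cfn-bootstrap\cfn-signal.exe' -s true '<final-wait-handle-url>'
'@

# Signal the first wait condition so CF continues, then rename and reboot
& 'C:\Program Files\Amazon\cfn-bootstrap\cfn-signal.exe' -s true '<first-wait-handle-url>'
Rename-Computer -NewName $name -Force
Restart-Computer -Force
</powershell>
```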

Once the machine comes back up, using AWS Systems Manager in conjunction with either AD Connector or AWS Directory Service, the machine is automatically joined to the domain and rebooted again.
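
The domain join itself can be handled by an SSM document using the aws:domainJoin plugin. A minimal sketch (the directory ID, domain name, and DNS addresses are placeholders for your own directory's values):

```json
{
  "schemaVersion": "1.2",
  "description": "Join Windows instances to the directory",
  "runtimeConfig": {
    "aws:domainJoin": {
      "properties": {
        "directoryId": "d-0123456789",
        "directoryName": "corp.example.com",
        "dnsIpAddresses": ["10.0.0.10", "10.0.0.11"]
      }
    }
  }
}
```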

After the machine finishes booting, three scripts are triggered via a Group Policy immediate task:

  • Add_Admin.ps1 (created by CF)
  • DHCP_2_Static.ps1 (stored in netlogon or a shared location)
  • Signal_Complete.ps1 (created by CF)

This adds the specified user to the local administrators group, takes the current IP address assigned by AWS DHCP and sets it statically, and signals CF that the build is complete. I also used the alerting feature for CF to send me an email via SNS telling me when it was finished. In total, depending on the instance size you choose, the deployment takes about 9-11 minutes. After the template was finished, I was able to publish it in AWS Service Catalog to give it a prettier front-end.
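
Of the three scripts, the DHCP-to-static conversion is the least obvious, so here's a rough sketch of the approach (not my exact DHCP_2_Static.ps1; the interface selection logic is my own assumption):

```powershell
# Sketch: pin the DHCP-assigned address in place so a lease change can't move it.
# Assumes the interface with the default gateway is the one to convert.
$if   = Get-NetIPConfiguration | Where-Object { $_.IPv4DefaultGateway } | Select-Object -First 1
$ip   = $if.IPv4Address.IPAddress
$gw   = $if.IPv4DefaultGateway.NextHop
$dns  = $if.DNSServer.ServerAddresses
$plen = (Get-NetIPAddress -InterfaceIndex $if.InterfaceIndex -AddressFamily IPv4 |
         Select-Object -First 1).PrefixLength

# Re-apply the same address, gateway, and DNS servers statically
Remove-NetIPAddress -InterfaceIndex $if.InterfaceIndex -AddressFamily IPv4 -Confirm:$false
New-NetIPAddress -InterfaceIndex $if.InterfaceIndex -IPAddress $ip `
                 -PrefixLength $plen -DefaultGateway $gw
Set-DnsClientServerAddress -InterfaceIndex $if.InterfaceIndex -ServerAddresses $dns
```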

The next step was backing it up. Using PowerShell, I created a couple of scripts to be run daily from an EC2 instance with an IAM role allowing access to EC2. Essentially these scripts, Snapshot_EC2.ps1 & Remove_Snapshots_EC2.ps1, run as scheduled tasks once a day. The snapshot script reaches out and looks for any instances tagged with the key of Backup and the value of Auto (set by the CF template), gets any EBS volumes attached to it, and snapshots them. The remove snapshots script then queries instances for the same tag, gets their EBS volumes, gets the snapshots associated with them, and deletes them after a set number of days (30 in my example).
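
The core of both scripts looks something like the following. The tag key/value match the CF template; the description string and cleanup filter are illustrative, not my exact code:

```powershell
# Sketch of the daily snapshot job: find Backup=Auto instances and
# snapshot every attached EBS volume.
$filter = @{ Name = 'tag:Backup'; Values = 'Auto' }
foreach ($instance in (Get-EC2Instance -Filter $filter).Instances) {
    foreach ($bdm in $instance.BlockDeviceMappings) {
        New-EC2Snapshot -VolumeId $bdm.Ebs.VolumeId `
                        -Description "Auto backup of $($instance.InstanceId)"
    }
}

# Companion cleanup: delete matching snapshots older than 30 days
$cutoff = (Get-Date).AddDays(-30)
Get-EC2Snapshot -Owner 'self' |
    Where-Object { $_.StartTime -lt $cutoff -and $_.Description -like 'Auto backup*' } |
    ForEach-Object { Remove-EC2Snapshot -SnapshotId $_.SnapshotId -Force }
```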

The EC2 instance I have running this operation is started automatically via a CloudWatch Rule and Lambda, and then stopped after a set amount of time via a second CloudWatch Rule (though in hindsight I could have just had the removal script shut the machine down when it finishes ¯\_(ツ)_/¯ ).
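
That hindsight option is only a couple of lines, for what it's worth. A sketch, assuming the script runs on the instance itself and can reach the instance metadata endpoint:

```powershell
# Let the cleanup script stop its own instance when it finishes,
# instead of relying on a second CloudWatch Rule.
$myId = Invoke-RestMethod -Uri 'http://169.254.169.254/latest/meta-data/instance-id'
Stop-EC2Instance -InstanceId $myId
```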

That's it! Easy, right?

All in all it took me about a week or so to muddle my way to a workable solution. Hopefully this helps some other folks out there. Feel free to drop me a line with any questions or comments.

