Blog: Deploying Windows IIS and SQL Server on AWS with CloudFormation

In order to modernize their existing applications, databases, and infrastructure, a major state-run university with nine campuses approached DB Best with a desire not only to create their ideal database environment but to cut costs along the way. During our intensive discovery stage, we determined that we could help our client cut costs by migrating their Windows Internet Information Services (IIS) and SQL Server workloads to the Amazon Web Services (AWS) cloud. Over the course of our partnership, DB Best not only shifted them to the cloud, but built, tested, and deployed the next generation of their database infrastructure. Oh, and we did it all a year sooner than planned. How? Read on to find out how we standardized deployment of Windows IIS and SQL Server on AWS using CloudFormation templates.

Where we applied our best practices for migrating Windows solutions utilizing SQL Server to AWS

We guided our customer through three phases to migrate their web application servers to EC2 instances on AWS and their SQL Server databases to Amazon RDS for SQL Server:

  • Designing and implementing their architecture for high availability and disaster recovery (HA/DR)
  • Automating the provisioning of test and production environments, the migration of their database to RDS, and the deployment of their application, along with training and documentation
  • Analyzing and optimizing their application performance

Why Amazon RDS for SQL Server?

When taking stock of their infrastructure, our client informed us that they wanted us to shift their internal career site to the cloud. The site was running on a SQL Server database engine with web servers deployed on the Windows Server Internet Information Services (IIS) platform. The site enabled employees to apply for other job positions within the state. Hiring managers and HR could then evaluate the job submissions and find the best candidate for the job. We classified this solution as “Business Critical,” meaning that downtime of only a few hours would be acceptable in the event of a service interruption.

Migrating Windows Solutions Utilizing SQL Server to AWS - RDS vs EC2

Because the solution was confined to a geographical area within the United States that happened to have an Amazon data center, there was no need to create a multi-data-center high availability solution using Always On Availability Groups. This made Amazon RDS for SQL Server a highly attractive solution for our client for the following reasons:

  • Small DBA staff. The university had a very limited DBA staff that needed to spend more time supporting the development teams rather than on the day-to-day operations of their database systems. Since RDS does the heavy lifting for managed backups, point-in-time restore, and patch management, the university had one less thing to worry about.
  • Ability to script the entire deployment. One of the primary goals for the project was that the deployment had to be completely automated. The operations and development teams appreciated that all of the configuration and deployment processes could be scripted using AWS CloudFormation. This included everything needed for the infrastructure, web server EC2 stack, and the RDS database stack so that everything stays in sync.
  • Database Mirroring in RDS was good enough. Because this was a business-critical system with limited scale required at the database engine level, Database Mirroring in SQL Server could easily satisfy our client’s Recovery Time Objective (RTO) service level agreement (see the template sketch following this list).
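To make that last point concrete, here is a minimal sketch of how a Multi-AZ (database mirroring) RDS instance for SQL Server can be declared in a CloudFormation template. The resource name, subnet group, and security group references below are illustrative rather than taken verbatim from the project's rds.template.json (shown in part later in this post), and Multi-AZ requires SQL Server Standard or Enterprise Edition.

"SqlServerDbInstance" : {
  "Type" : "AWS::RDS::DBInstance",
  "Properties" : {
    "Engine" : "sqlserver-se",
    "LicenseModel" : "license-included",
    "DBInstanceClass" : { "Ref" : "DBInstanceClass" },
    "AllocatedStorage" : "200",
    "MasterUsername" : { "Ref" : "DBUser" },
    "MasterUserPassword" : { "Ref" : "DBPassword" },
    "MultiAZ" : true,
    "BackupRetentionPeriod" : 7,
    "DBSubnetGroupName" : { "Ref" : "DbSubnetGroup" },
    "VPCSecurityGroups" : [ { "Ref" : "DbSecurityGroupId" } ]
  }
}

Setting MultiAZ to true is what tells RDS to maintain the synchronous standby replica, and BackupRetentionPeriod enables the managed automated backups mentioned above.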

How to design high availability for Windows IIS and SQL Server on AWS

Overall, the application uses a two-tier architecture with IIS on the front end to manage web services and SQL Server running on Amazon RDS on the back end. The application was deployed to the us-west-2 Region across the us-west-2a and us-west-2b Availability Zones, as shown below.

High availability architecture for Windows IIS and SQL Server on Amazon RDS
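One piece of groundwork for this architecture, typically defined in an infrastructure template such as the project's infrastructure.template.json, is a subnet in each Availability Zone inside the VPC. The snippet below is a simplified, hypothetical illustration of that pattern: the subnet names and CIDR blocks are not the customer's actual values, although IDOfVPC matches the parameter name in the main configuration file shown later.

"PublicSubnetA" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "VpcId" : { "Ref" : "IDOfVPC" },
    "AvailabilityZone" : "us-west-2a",
    "CidrBlock" : "10.0.1.0/24",
    "MapPublicIpOnLaunch" : true
  }
},
"PublicSubnetB" : {
  "Type" : "AWS::EC2::Subnet",
  "Properties" : {
    "VpcId" : { "Ref" : "IDOfVPC" },
    "AvailabilityZone" : "us-west-2b",
    "CidrBlock" : "10.0.2.0/24",
    "MapPublicIpOnLaunch" : true
  }
}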

All web traffic is routed through Amazon Route 53 with load balancing across both Availability Zones. The auto-scaled EC2 instances running IIS then direct their database traffic to the mirrored primary replica on RDS. In the event of an automatic, manual, or forced service failover, the IIS servers redirect their traffic to the new primary replica. If the old primary replica database is no longer serviceable, the automation creates a new secondary replica on RDS and adds it to the database mirroring configuration.
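The web-traffic half of that routing can be expressed in CloudFormation along the lines of the following sketch: a Route 53 alias record pointing at the load balancer that fronts the IIS instances. The record name and the WebLoadBalancer resource are hypothetical placeholders rather than the customer's actual names, and the snippet assumes a classic Elastic Load Balancer.

"SiteDnsRecord" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "HostedZoneName" : "xxxxx.edu.",
    "Name" : "careers.xxxxx.edu.",
    "Type" : "A",
    "AliasTarget" : {
      "DNSName" : { "Fn::GetAtt" : [ "WebLoadBalancer", "DNSName" ] },
      "HostedZoneId" : { "Fn::GetAtt" : [ "WebLoadBalancer", "CanonicalHostedZoneNameID" ] }
    }
  }
}

On the database side, no DNS changes are needed: the IIS servers keep using the same RDS endpoint, and RDS repoints that endpoint to the new primary replica after a failover.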

While RDS handles managed backups, we also created a Lambda function that copies the backup file into an Amazon S3 bucket, where it is retained until the backup’s retention period expires.
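The function itself appears later in the RDS template. To give a sense of how such a function can be invoked on a schedule, here is one common pattern using a CloudWatch Events rule. This is a hedged sketch rather than the project's exact wiring: the rule and permission names are hypothetical, the 60-minute rate is only an example of the MinutesBetweenSnapshots idea, and it assumes the Lambda resource is named LambdaFunction, as in the snippet shown further below.

"SnapshotScheduleRule" : {
  "Type" : "AWS::Events::Rule",
  "Properties" : {
    "Description" : "Invoke the backup-copy Lambda on a fixed schedule",
    "ScheduleExpression" : "rate(60 minutes)",
    "State" : "ENABLED",
    "Targets" : [ {
      "Arn" : { "Fn::GetAtt" : [ "LambdaFunction", "Arn" ] },
      "Id" : "BackupCopyLambdaTarget"
    } ]
  }
},
"SnapshotSchedulePermission" : {
  "Type" : "AWS::Lambda::Permission",
  "Properties" : {
    "FunctionName" : { "Ref" : "LambdaFunction" },
    "Action" : "lambda:InvokeFunction",
    "Principal" : "events.amazonaws.com",
    "SourceArn" : { "Fn::GetAtt" : [ "SnapshotScheduleRule", "Arn" ] }
  }
}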

Our customer required auditing of all user and programmatic activity in their AWS accounts. So, all systems are monitored using Amazon CloudWatch, with log files written to Amazon S3. If a critical event occurs, a message is sent to the DBA team using Amazon Simple Notification Service (SNS).
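A minimal sketch of that alerting path in CloudFormation looks like the following: an SNS topic subscribed to the DBA team's email address, plus a CloudWatch alarm that publishes to it when a key metric crosses a threshold. The topic name, email address, metric, and threshold here are illustrative assumptions, and DbInstance refers to the RDS instance resource, as in the Lambda snippet later in this post.

"DbaAlertTopic" : {
  "Type" : "AWS::SNS::Topic",
  "Properties" : {
    "Subscription" : [
      { "Endpoint" : "dba-team@xxxxx.edu", "Protocol" : "email" }
    ]
  }
},
"RdsFreeStorageAlarm" : {
  "Type" : "AWS::CloudWatch::Alarm",
  "Properties" : {
    "AlarmDescription" : "Alert the DBA team when RDS free storage drops below 10 GB",
    "Namespace" : "AWS/RDS",
    "MetricName" : "FreeStorageSpace",
    "Dimensions" : [ { "Name" : "DBInstanceIdentifier", "Value" : { "Ref" : "DbInstance" } } ],
    "Statistic" : "Average",
    "Period" : 300,
    "EvaluationPeriods" : 1,
    "Threshold" : 10737418240,
    "ComparisonOperator" : "LessThanThreshold",
    "AlarmActions" : [ { "Ref" : "DbaAlertTopic" } ]
  }
}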

Automating deployment using CloudFormation and Visual Studio

Our client’s development team was already using Visual Studio. In addition, the development team was considering working with the operations team to incorporate Continuous Integration (CI) and Continuous Deployment (CD) through Visual Studio. Since our client had no existing investment in tooling for deploying infrastructure as code, AWS CloudFormation was an obvious choice for deploying their environment.

It all starts with the main configuration file for CloudFormation. Here is a snippet of the main JSON file, which defines the deployment settings and references the three JSON templates for the infrastructure, RDS, and EC2 stacks.

{
  "AwsAccessKey" : "XXXXXXXXXXXXXXX",
  "AwsSecretKey" : "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "AwsRegion" : "us-west-2",
  "DomainNameInDB" : "xxxxx.edu",
  "EnvironmentName" : "DEV",
  "TemplatesFolderPath" : "C:\\workfolder",
  "BackupsFolderPath" : "C:\\workfolder\\backups\\2017-12-12",
  "AppPackageFilePath" : "C:\\workfolder\\1.0.zip",
  "LambdaFilePathCreateSnapshot" : "C:\\workfolder\\CreateSnapShot.zip",
  "LambdaFilePathMoveLogs" : "C:\\workfolder\\MoveSqlLogsToS3.zip",
  "InfrastructureTemplateName" : "infrastructure.template.json",
  "RdsTemplateName" : "rds.template.json",
  "Ec2TemplateName" : "ec2.template.json",
  "TimeZone" : "Pacific Standard Time",
  "InfrastructureStackParams" : {
    "AdminCidr" : "0.0.0.0/32",
    "PublicCidr" : "0.0.0.0/32",
    "IDOfVPC" : "vpc-xxxxxxxx",
    "VPCCIDR" : "10.0.0.0/16",
    "IDOfInternetGateway" : "igw-xxxxxxxx"
  },
  "Ec2StackParams" : {
    "CertArn" : "arn:aws:acm:us-west-2:xxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "ImageId" : "ami-2ea61356",
    "InstanceType" : "t2.micro",
    "KeyName" : "xxxx-west-key",
    "DBJobBuilderUser" : "jbuilderdbo",
    "DBJobBuilderUserPassword" : "xxxxxxxx",
    "AutoScalingDesiredCapacity" : "1",
    "AutoScalingMinSize" : "1",
    "AutoScalingMaxSize" : "1"
  },
  "DatabaseStackParams" : {
    "EngineName" : "sqlserver-ex",
    "DBInstanceClass" : "db.t2.micro",
    "DBUser" : "xxxxxxxx",
    "DBPassword" : "xxxxxxxx"
  }
}
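The deployment automation reads this configuration file and launches the infrastructure, RDS, and EC2 stacks with the listed parameters. As a side note, CloudFormation can also compose related templates directly through nested stacks; the snippet below is a minimal, hypothetical sketch of that pattern (the S3 URL and parameter values are placeholders), not a piece of the project's actual templates.

"InfrastructureStack" : {
  "Type" : "AWS::CloudFormation::Stack",
  "Properties" : {
    "TemplateURL" : "https://s3.amazonaws.com/your-template-bucket/infrastructure.template.json",
    "Parameters" : {
      "IDOfVPC" : "vpc-xxxxxxxx",
      "VPCCIDR" : "10.0.0.0/16",
      "IDOfInternetGateway" : "igw-xxxxxxxx"
    },
    "TimeoutInMinutes" : 15
  }
}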

With CloudFormation, you can use a WYSIWYG designer to define the overall system. Here is what the EC2 configuration looked like for the web servers across the different availability zones with auto-scaling controlled by a CPU usage alarm.

Using an AWS CloudFormation template for automating the deployment of Windows IIS and SQL Server on AWS
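In JSON form, that auto-scaling arrangement boils down to three cooperating resources: the Auto Scaling group, a scaling policy, and the CPU alarm that triggers it. The sketch below is simplified and partly hypothetical (the alarm threshold and the WebLoadBalancer reference are illustrative), but the parameter names match the Ec2StackParams section of the main configuration file, and WebServerGroup and LaunchConfig match the resource names used in the UserData snippet later in this post.

"WebServerGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "AvailabilityZones" : [ "us-west-2a", "us-west-2b" ],
    "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
    "MinSize" : { "Ref" : "AutoScalingMinSize" },
    "MaxSize" : { "Ref" : "AutoScalingMaxSize" },
    "DesiredCapacity" : { "Ref" : "AutoScalingDesiredCapacity" },
    "LoadBalancerNames" : [ { "Ref" : "WebLoadBalancer" } ]
  }
},
"ScaleUpPolicy" : {
  "Type" : "AWS::AutoScaling::ScalingPolicy",
  "Properties" : {
    "AutoScalingGroupName" : { "Ref" : "WebServerGroup" },
    "AdjustmentType" : "ChangeInCapacity",
    "ScalingAdjustment" : 1,
    "Cooldown" : "300"
  }
},
"CpuHighAlarm" : {
  "Type" : "AWS::CloudWatch::Alarm",
  "Properties" : {
    "AlarmDescription" : "Add a web server when average CPU exceeds 75% for 10 minutes",
    "Namespace" : "AWS/EC2",
    "MetricName" : "CPUUtilization",
    "Dimensions" : [ { "Name" : "AutoScalingGroupName", "Value" : { "Ref" : "WebServerGroup" } } ],
    "Statistic" : "Average",
    "Period" : 300,
    "EvaluationPeriods" : 2,
    "Threshold" : 75,
    "ComparisonOperator" : "GreaterThanThreshold",
    "AlarmActions" : [ { "Ref" : "ScaleUpPolicy" } ]
  }
}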

 

In this portion of the overall RDS configuration, you can see how we set up the Lambda function used to back up the SQL Server database to S3 storage.

Migrating Windows Solutions Utilizing SQL Server to AWS RDS Cloud Formation template

Here is a portion of the resulting JSON that defines the Lambda function for performing the database backup.

"LambdaFunction":{
"Type"
: "AWS::Lambda::Function",
"Properties"
: {
"FunctionName"
: { "Ref": "LambdaFunctionName" },
"Handler"
: { "Ref": "LambdaHandler" },
"Role"
: { "Ref": "Role/Arn" } ,
"Code"
: {
"S3Bucket"
: { "Ref": "LambdaCodeS3Bucket" } ,
"S3Key"
: { "Ref": "LambdaCodeZipFileName" }
},
"Runtime"
: {"Ref": "LambdaRuntime" ) ,
"Timeout"
: {"Ref": "LambdaExecutionTimeOut" } ,
"Environment"; {
"Variables"
: {
"AccessKey"
: {"Ref": "AccessKey" } ,
"SecretKey"
: {"Ref": "SecretKey" } ,
"DBInstanceName"
: ( "Ref": "DbInstance" } ,
"CurrentRegionSting"
: {"Ref": "AWS::Region" } ,
"TargetRegionSting"
: {"Ref": "LambdaTargetRegion" },
"LastSnapshotBucket"
: {"Ref": "LastSnapshotBucket" } ,
"LastSnapshotfileName"
: {"Ref": "LastSnapshotFileName" } ,
"MinutesBetweenSnapshots"
: {"Ref": "MinutesBetweenSnapshots" ),
"RetentionPeriodinDays"
: {"Ref": "RetentionPeriodinDays"},
"Environment"
: {"Ref": "EnvironmentName" }
}
}
}
}
}

Finally, here is an example of how we integrated PowerShell into the CloudFormation JSON file to configure the EC2 instances in our Auto Scaling group. The ability to embed scripting code such as PowerShell makes automated deployments incredibly flexible.

"LauncnConfig" : {
"Type"
: "AWS::Autoscaling::LaunchConfiguration",
"Properties"
: {
"KeyName"
: { "Ref" : "KeyName" },
"Imageid"
: { ''Ref" : nimageid" },
"SecurityGroups"
: [ { "Ref" : "WeblnstanceSecurityGroupld" } ] ,
"InstanceType"
: { "Ref" : "InstanceType" } ,
"IamInstanceProfile"
: {"Ref" : "InstanceProfile" };
"UserData"
: { "Fn::Base64" : { "Fn::Join" : [ "", [
"\n"
{ "Fn::Sub"
: "Read-S3Object -BucketName ${DeploymentBucketName} -Key $ {AppZipName}. zip -File c: \\inetpub\\..."
" Add-Type -assembly 'System.IO.Compression.FileSystem'\n",
" [System. IO.Compression.ZipFile]::ExtractToDirectory('c:\\inetpub\\deploy\\...", {"Ref"
: "AppZipName"},".zip',.."
" $executionPolicy = Get-ExecutionPolicy\n",
"Set-ExecutionPolicy Bypass -scope Process -force\n",
"get-childitem c:\\inetpub\\deploy -recurse -force | ?{$_.name -eq 'deploy.psl'} | \n",
{ "Fn
: :Sub" : " ForEach-Object { Invoke-Expression {$_.FullName + ' ''${AppZipName}'' ''${DBServerAddress}"
"
Set-ExecutionPolicy $executionPolicy -scope Process -force\n",
"
Get-Childitem c:\\inetpub\\deploy -Recurse I Remove-Item -force -Recurse\n",
"
cfn-signal -e 0 --stack ", { "Ref": AWS:StackName" } , " --resource ''WebServerGroup'' --region " { "Ref","..."}"
"
\n"
] ]
} }
}
}

AWS makes the experience even better with the AWS Toolkit for Visual Studio. The development and operations teams can initiate the creation of the production and dev/test environments directly from Visual Studio simply by modifying the template file parameters and submitting the CloudFormation request, giving them a complete DevOps experience.

More importantly, they can easily check for dependencies so that they won’t impact existing services as part of a deployment cycle.

Just because you are using RDS doesn’t mean you don’t need your DBAs

Although the customer’s system worked, we became aware of inefficiencies in the code. Our DBA team noticed some very specific issues while performing load testing to help size the RDS instance. For example, we found that the developers were not taking advantage of cached result sets when using the .NET Entity Framework in their applications, which resulted in unnecessary SELECT statement requests.

Another issue we found was that one of the function calls used to retrieve data returned over 1,300 records and then relied on .NET LINQ (Language Integrated Query) to filter the result set, even though the application only needed about 20% of the data each time. We modified the base query to return just the data needed, reducing the execution time for the operation.

Sometimes we find that customers are so worried about just keeping things running that they don’t step back to look for obvious performance issues. By going in and fixing these issues, we were able to make their user acceptance testing script run on average 588% faster than on their on-premises solution. As a result, we were able to save them additional costs by configuring a smaller RDS instance.

Customer benefits of migrating Windows solutions utilizing SQL Server to AWS

Our customer did not have the staff to do the work on their own, yet we were still able to accelerate their timeline. We migrated their on-premises application to the cloud so that it can easily scale out to support additional campuses while improving HA and DR.

In conclusion, we were able to help our customer modernize their data estate seamlessly and efficiently. We were also able to skill up the client’s team of DBAs so that they could become self-sufficient in handling any issues that may arise within their data infrastructure. By migrating their Windows solution running SQL Server to AWS, we helped our customer leverage Amazon RDS, which provides all the manageability and scalability they will ever need.

With a managed PaaS infrastructure, modernized applications, and a team that is fully skilled up to manage their own data infrastructure, our customer is now very happy. Which, at the end of the day, is why we do what we do.

Contact us to find out how we can help you improve your overall business by improving the way you handle your information.