CI/CD with AWS Elastic Beanstalk, Azure DevOps and Angular 7

Setting up a continuous integration pipeline is essential when working with a large team. It's too easy to assume things 'just work.' When they don't, it's great to know which commit sent it off the rails.

Azure DevOps shines in Azure, but it is also useful in the AWS environment. As the 'cloud wars' have heated up, many organizations have turned to using both AWS and Azure to balance costs and reduce the chance of vendor lock-in.

As of the writing of this post, you'll want to set up your AWS settings and use an AWS YAML command BEFORE you create a new pipeline. It's a quirky little bug documented in this GitHub issue. If that doesn't make sense yet, don't worry: follow the steps below and it won't strike!

We will also set this up to deploy to Nginx with Docker. Since Angular sites are static HTML and compiled JavaScript, the most cost-effective way to deploy to production on AWS is to use S3 and CloudFront; I may do a separate blog post on how to set that up. The issue you'll run into with CloudFront is that it works by caching your files at edge locations, and invalidating that cache comes with a cost. In development and testing environments, caches aren't your friend: you want product owners and QA testing against the latest version, so deploying to a Docker container gives us a simple, cost-effective environment that we can swap out as needed during the day.

Step One. Create a New Project and Add an AWS Service Connection

Create a new project on https://dev.azure.com

In the lower left-hand side of the screen you'll see your Project Settings. Click that and go to Pipelines -> Service Connections. Add a new service connection to AWS and the screen below shows up. The Connection Name can be anything you want, but you'll need to use it for the awsCredentials field in the YAML. In this example I used AWS-test. The Access Key ID and Secret Access Key are the only required values you'll need.
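For reference, that Connection Name is exactly what the AWS tasks in the pipeline YAML (Step Five) point at, for example:

awsCredentials: 'AWS-test'   # must match the Service Connection name exactly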

Step Two. Fix our Angular setup and add a Dockerfile

Since this isn't a lesson on Angular or Git, we'll use John Papa's Tour of Heroes:
https://github.com/johnpapa/angular-tour-of-heroes. Just fork that to your GitHub account to start. We do have to make a couple of changes. You can see my commit here.

update the output path in angular.json

Change the output path from dist to elastic-beanstalk/dist. We'll have a couple of new files to include alongside our dist files, and they will be added to this elastic-beanstalk folder:

 "outputPath": "elastic-beanstalk/dist",
add a Dockerfile to the elastic-beanstalk folder

Make sure it is spelled Dockerfile and not dockerfile; in AWS this filename is case-sensitive. Since all our 'server' has to do is send static files to the client, this isn't very complicated:

FROM nginx:latest

RUN rm -rf /usr/share/nginx/html/*
## Copy angular output to nginx
COPY ./dist /usr/share/nginx/html

## Set the permission for NGINX web folder
RUN chmod 777 -R /usr/share/nginx/html

## Overwrite the default NGINX config
## using the custom config file
COPY ./custom-nginx-file.conf /etc/nginx/conf.d/default.conf

EXPOSE 8080

CMD ["nginx", "-g", "daemon off;"]
add a Dockerrun.aws.json to the elastic-beanstalk folder
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
      "ContainerPort": "8080"
  }]
}
add a custom-nginx-file.conf to the elastic-beanstalk folder
server {
    listen 8080;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}

Step Three. Add a New Pipeline

Once that's forked, click 'New Pipeline' from your new project and select the repo you just forked on GitHub. You'll notice on this page there is an option to use the visual designer. We won't use it in this tutorial, but it is a great way to get familiar with the Azure DevOps toolset. You can visually add building blocks, and once you have them figured out, you can view the corresponding YAML.

While the visual designer is nice for getting familiar with the available commands, the YAML puts your DevOps steps in code and under version control, which is where you want it.

Step Four. Create your Beanstalk

You don't have to do anything too fancy yet. Just go to the Elastic Beanstalk page in the AWS Console, click Create New Application, and give it a name; I used angular-docker-test in my example.
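If you prefer scripting to clicking, a rough AWS CLI sketch of the same setup follows. The application and environment names match what the pipeline YAML below expects, but the solution stack name is an assumption, so list the current stacks first and pick a Docker one:

# List the solution stacks currently available to your account
aws elasticbeanstalk list-available-solution-stacks

# Create the application (the name used in the pipeline YAML below)
aws elasticbeanstalk create-application --application-name angular-docker-test

# Create a single-instance Docker environment for it
aws elasticbeanstalk create-environment \
  --application-name angular-docker-test \
  --environment-name AngularDockerTest-env \
  --solution-stack-name "64bit Amazon Linux 2018.03 v2.12.0 running Docker 18.06.1-ce" \
  --option-settings Namespace=aws:elasticbeanstalk:environment,OptionName=EnvironmentType,Value=SingleInstance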

Step Five. YAML

You can copy-paste the YAML file below. Make sure the awsCredentials, regionName, and applicationName are correct, and are the values you used in the steps above.

While it's running, we'll do a deep dive into what this YAML means below:

trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'
  displayName: 'Install Node.js'

- script: |
    npm install -g @angular/cli
    npm install
    ng build --prod
    cd elastic-beanstalk
    zip -r output.zip ./*
  displayName: 'install, build and zip'

- task: S3Upload@1
  displayName: 'S3 Upload: angular-docker-bucket'
  inputs:
    awsCredentials: 'AWS-test'
    regionName: 'us-east-2'
    bucketName: 'angular-docker-bucket'
    sourceFolder: 'elastic-beanstalk'
    globExpressions: output.zip
    createBucket: true

- task: BeanstalkDeployApplication@1
  displayName: 'Deploy to Elastic Beanstalk: angular-docker-test'
  inputs:
    awsCredentials: 'AWS-test'
    regionName: 'us-east-2'
    applicationName: 'angular-docker-test'
    environmentName: 'AngularDockerTest-env'
    applicationType: s3
    deploymentBundleBucket: 'angular-docker-bucket'
    deploymentBundleKey: output.zip
    logRequest: true
    logResponse: true

Our build server is Ubuntu, and we install Node.js version 8.x. Note that this is just the server building our solution, not the server we are deploying to, which is defined in the Dockerfile above.

pool:
  vmImage: 'Ubuntu-16.04'
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'
  displayName: 'Install Node.js'

Next we build our Angular solution, just as we would from the command line. We need the compiled output and the Docker-related files in a single zip file for the next step, so we use the zip command:

- script: |
    npm install -g @angular/cli
    npm install
    ng build --prod
    cd elastic-beanstalk
    zip -r output.zip ./*
  displayName: 'install, build and zip'

Next, we take the output.zip file we just built and upload it to S3:

- task: S3Upload@1
  displayName: 'S3 Upload: angular-docker-bucket'
  inputs:
    awsCredentials: 'AWS-test'
    regionName: 'us-east-2'
    bucketName: 'angular-docker-bucket'
    sourceFolder: 'elastic-beanstalk'
    globExpressions: output.zip
    createBucket: true

Now we deploy to Beanstalk. Notice that we are taking the S3 object uploaded in the last step and using it to update our Elastic Beanstalk environment.

- task: BeanstalkDeployApplication@1
  displayName: 'Deploy to Elastic Beanstalk: angular-docker-test'
  inputs:
    awsCredentials: 'AWS-test'
    regionName: 'us-east-2'
    applicationName: 'angular-docker-test'
    environmentName: 'AngularDockerTest-env'
    applicationType: s3
    deploymentBundleBucket: 'angular-docker-bucket'
    deploymentBundleKey: output.zip
    logRequest: true
    logResponse: true

Lots of places you can go from here, both in terms of Azure DevOps and the complexity of your Elastic Beanstalk configuration. That said, this is a great stopping point!

Creating URL Redirects with Boto3

Even though most of my back-end development is in C#, I still find Python to be a great tool for working with AWS. I highly recommend Jupyter Notebooks and the AWS Boto3 SDK.

When I first started learning the AWS toolset, I began by using the Management Console. While that is great for some operations, being able to add and maintain resources programmatically makes DevOps work much more comfortable and repeatable. Also, using Jupyter notebooks lets you connect documentation to code, which is excellent when working on a team.

I am assuming you have Anaconda's distribution of Python and the Boto3 AWS Python SDK installed.
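If Boto3 isn't installed yet, it's a one-liner from your Anaconda prompt:

pip install boto3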

Creating a bucket is easy.

import boto3

s3 = boto3.resource('s3')
# Keep a reference to the new bucket so we can set its ACL and add objects below
bucket = s3.create_bucket(Bucket='redirect-22992299',
                          CreateBucketConfiguration={'LocationConstraint': 'us-east-2'})

The 'Resource' we create here represents the higher-level interface to AWS; we use it to create a new S3 bucket. Notice that bucket names are unique across all AWS accounts, so you will need to pick your own unique name here.

Next we will allow the public to read our S3 bucket.

bucket.Acl().put(ACL='public-read') 

We still need the actual redirects. For these, we will add two objects to our newly created S3 bucket. We could write an HTML redirect in the 'Body' below, but AWS provides a WebsiteRedirectLocation property, which we will use so that AWS redirects before ever sending any HTML to the client.

bucket.put_object(Bucket='redirect-22992299', 
                    Key='index.html', 
                    Body='0',
                    WebsiteRedirectLocation='http://www.designingforthefuture.com')
bucket.put_object(Bucket='redirect-22992299', 
                    Key='error.html', 
                    Body='0',
                    WebsiteRedirectLocation='http://www.designingforthefuture.com')

Even though we allowed public access to our S3 bucket above, objects created in the bucket are private by default. To make our two newly created objects publicly readable, we set their permissions:

object = s3.Bucket('redirect-22992299').Object('index.html')
object.Acl().put(ACL='public-read')
object = s3.Bucket('redirect-22992299').Object('error.html')
object.Acl().put(ACL='public-read')

Now we designate our S3 bucket as a website. To do this, you'll notice we can't use our original s3 resource. Instead, we create a new object using the lower-level 'client' interface to AWS, which maps more closely to the underlying service operations.

website_configuration = {
    'ErrorDocument': {'Key': 'error.html'},
    'IndexDocument': {'Suffix': 'index.html'},
}
s3client = boto3.client('s3')
s3client.put_bucket_website(
    Bucket='redirect-22992299',
    WebsiteConfiguration=website_configuration
)

Done! With this bucket set up, I added an A alias record in Route 53 pointing at the redirect. You can also set this up when content in an S3 bucket has expired and you want users redirected to your current site.
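If you'd like to script the Route 53 piece as well, here is a rough Boto3 sketch. The hosted zone ID, record name, and the region-specific S3 website hosted zone ID are placeholders you'd look up for your own account, and note that for an S3 website alias the bucket name generally has to match the record name:

route53 = boto3.client('route53')
route53.change_resource_record_sets(
    HostedZoneId='YOUR_HOSTED_ZONE_ID',  # the hosted zone for your domain
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'redirect.example.com',
                'Type': 'A',
                'AliasTarget': {
                    # Region-specific hosted zone ID for S3 website endpoints
                    # (look up the us-east-2 value in the AWS docs)
                    'HostedZoneId': 'S3_WEBSITE_HOSTED_ZONE_ID',
                    'DNSName': 's3-website.us-east-2.amazonaws.com',
                    'EvaluateTargetHealth': False
                }
            }
        }]
    }
)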

See the complete Jupyter Notebook for full details: https://github.com/rinckd/designingforthefuture-blog/blob/master/notebooks/AWS/01_S3_buckets/01_S3_Buckets.ipynb
