CI/CD with AWS Elastic Beanstalk, Azure DevOps and Angular 7

Setting up a continuous integration pipeline is essential when working with a large team. It's too easy to assume things 'just work.' When they don't, it's great to know which commit sent it off the rails.

Azure DevOps shines in Azure but is also useful in the AWS environment. As the 'cloud wars' have heated up, many organizations have adopted both AWS and Azure to balance costs and reduce the risk of vendor lock-in.

As of this writing, you'll want to set up your AWS settings and use an AWS YAML command BEFORE you create a new pipeline. It's a quirky little bug documented in this GitHub issue. If that doesn't make sense, don't worry - follow the steps below and it won't strike!

We will also set this up to deploy to Nginx with Docker. Since Angular sites are static HTML and compiled JavaScript, the most cost-effective way to deploy to production on AWS is S3 and CloudFront; I may do a separate blog post on how to set that up. The issue you'll run into with CloudFront is that it works by caching your files at edge locations, and invalidating that cache comes at a cost. In development and testing environments, caches aren't your friend: you want product owners and QA testing the latest version, so deploying to a Docker container gives us a simple, cost-effective environment that we can swap out as needed during the day.

Step One. Create a New Project and Add AWS Service Connection

Create a new project on https://dev.azure.com

In the lower-left corner of the screen you'll see your Project Settings. Click that and go to Pipelines -> Service Connections. Add a new service connection to AWS and the configuration screen appears. The Connection Name can be anything you want, but you'll need to use it for the awsCredentials field in the YAML; in this example I used AWS-test. The Access Key ID and Secret Access Key are the only required values.

Step Two. Fix our Angular setup and add a Dockerfile

Since this isn't a lesson on Angular or Git, we'll use John Papa's Tour of Heroes:
https://github.com/johnpapa/angular-tour-of-heroes. Just fork that to your GitHub account to start. We do have to make a couple of changes. You can see my commit here

update path in angular.json

Change the output path from dist to elastic-beanstalk/dist. We'll have a couple of new files to include alongside our dist files, which will be added to this elastic-beanstalk folder:

 "outputPath": "elastic-beanstalk/dist",
add a Dockerfile to the elastic-beanstalk folder

Make sure it is spelled Dockerfile and not dockerfile; on AWS this filename is case sensitive. Since all our 'server' has to do is send static files to the client, this isn't very complicated:

FROM nginx:latest

RUN rm -rf /usr/share/nginx/html/*
## Copy angular output to nginx
COPY ./dist /usr/share/nginx/html

## Set the permission for NGINX web folder
RUN chmod 777 -R /usr/share/nginx/html

## Overwrite the default NGINX config
## using the custom config file
COPY ./custom-nginx-file.conf /etc/nginx/conf.d/default.conf

EXPOSE 8080

CMD ["nginx", "-g", "daemon off;"]
add a Dockerrun.aws.json to the elastic-beanstalk folder
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{
      "ContainerPort": "8080"
  }]
}
add a custom-nginx-file.conf to the elastic-beanstalk folder
server {
    listen 8080;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}
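
Before wiring up the pipeline, it's worth a quick local smoke test. A minimal sketch, assuming Docker is installed and you run it from the repo root (the image tag angular-docker-test is arbitrary):

ng build --prod
docker build -t angular-docker-test ./elastic-beanstalk
docker run --rm -p 8080:8080 angular-docker-test

If http://localhost:8080 serves the Tour of Heroes app, the Dockerfile, the nginx config, and the Angular output path all line up.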

Step Three. Add a New Pipeline

Once that's forked, click 'New Pipeline' in your new project and select the forked repo you just created on GitHub. You'll notice on this page there is an option to use the visual designer. We won't use it in this tutorial, but it is a great way to get familiar with the Azure DevOps toolset: you can visually add building blocks, and once you have them figured out, you can view the corresponding YAML.

While the visual designer is nice for getting familiar with the available commands, the YAML puts your DevOps steps in code and under version control, which is where you want it.

Step Four. Create your Beanstalk

You don't have to do anything too fancy yet. Just go to the Elastic Beanstalk page in the AWS Console, click Create New Application, and give it a name; I used angular-docker-test in my example.
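
If you prefer to script this step, there's an AWS CLI equivalent (assuming the AWS CLI is installed and configured for the same region you'll use in the pipeline):

aws elasticbeanstalk create-application --application-name angular-docker-test

You'll still need an environment (AngularDockerTest-env in the YAML below); creating one from the console with the Docker platform is the quickest route.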

Step Five. YAML

You can copy-paste the YAML file below. Make sure the awsCredentials, regionName, and applicationName are correct, and are the values you used in the steps above.

While it's running, we'll do a deep dive into what this YAML means below:

trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'
  displayName: 'Install Node.js'

- script: |
    npm install -g @angular/cli
    npm install
    ng build --prod
    cd elastic-beanstalk
    zip -r output.zip ./*
  displayName: 'install, build and zip'

- task: S3Upload@1
  displayName: 'S3 Upload: angular-docker-bucket'
  inputs:
    awsCredentials: 'AWS-test'
    regionName: 'us-east-2'
    bucketName: 'angular-docker-bucket'
    sourceFolder: 'elastic-beanstalk'
    globExpressions: output.zip
    createBucket: true

- task: BeanstalkDeployApplication@1
  displayName: 'Deploy to Elastic Beanstalk: angular-docker-test'
  inputs:
    awsCredentials: 'AWS-test'
    regionName: 'us-east-2'
    applicationName: 'angular-docker-test'
    environmentName: 'AngularDockerTest-env'
    applicationType: s3
    deploymentBundleBucket: 'angular-docker-bucket'
    deploymentBundleKey: output.zip
    logRequest: true
    logResponse: true

Our build server is Ubuntu, and we install Node.js version 8.x. Note that this is just the server building our solution, not the server we are deploying onto, which is specified in the Dockerfile above.

pool:
  vmImage: 'Ubuntu-16.04'
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'
  displayName: 'Install Node.js'

Next we build our Angular solution, just as we would from the command line. We need the compiled output and the Docker files in a single zip for the next step, so we use Ubuntu's zip command.

- script: |
    npm install -g @angular/cli
    npm install
    ng build --prod
    cd elastic-beanstalk
    zip -r output.zip ./*
  displayName: 'install, build and zip'

We take our built output.zip file and upload it to S3:

- task: S3Upload@1
  displayName: 'S3 Upload: angular-docker-bucket'
  inputs:
    awsCredentials: 'AWS-test'
    regionName: 'us-east-2'
    bucketName: 'angular-docker-bucket'
    sourceFolder: 'elastic-beanstalk'
    globExpressions: output.zip
    createBucket: true

Now we deploy to Elastic Beanstalk. Notice that we take the S3 object uploaded in the last step and use it to update our Elastic Beanstalk environment.

- task: BeanstalkDeployApplication@1
  displayName: 'Deploy to Elastic Beanstalk: angular-docker-test'
  inputs:
    awsCredentials: 'AWS-test'
    regionName: 'us-east-2'
    applicationName: 'angular-docker-test'
    environmentName: 'AngularDockerTest-env'
    applicationType: s3
    deploymentBundleBucket: 'angular-docker-bucket'
    deploymentBundleKey: output.zip
    logRequest: true
    logResponse: true

There are lots of places you can go from here, both in terms of Azure DevOps and the complexity of your Elastic Beanstalk configuration. That said, this is a great stopping point!

Creating URL Redirects with Boto3

Even though most of my back-end development is with C#, I still find Python to be a great tool to work with AWS. I highly recommend Jupyter Notebooks and the AWS Boto3 SDK.

When I first started learning the AWS toolset, I began with the Management Console. While this is great for some operations, being able to add and maintain resources programmatically makes DevOps much more comfortable and repeatable. Using Jupyter notebooks also lets you keep documentation next to code, which is excellent when working on a team.

I am assuming you have Anaconda's distribution of Python and the Boto3 AWS Python SDK installed.
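
If either is missing, both install cleanly from the command line (assuming pip is on your path; conda install works too):

pip install boto3 jupyter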

Creating a bucket is easy.

import boto3

s3 = boto3.resource('s3')
# Keep the returned Bucket object so we can use it below
bucket = s3.create_bucket(Bucket='redirect-22992299',
                          CreateBucketConfiguration={'LocationConstraint': 'us-east-2'})

The 'resource' we create here represents the higher-level interface to AWS, and here we use it to create a new S3 bucket. Note that bucket names are unique across all AWS users, so you will need to pick your own unique identifier. We keep the returned Bucket object, since we'll use it in the following steps.

Next we will allow the public to read our S3 bucket.

bucket.Acl().put(ACL='public-read') 

We still need the actual redirects, so we will add two objects to our newly created S3 bucket. We could write an HTML redirect in the 'Body' below, but AWS provides a WebsiteRedirectLocation property, which we will use so that AWS redirects before ever sending HTML to the client.

# The Bucket resource already knows its own name, so Bucket= is not needed
bucket.put_object(Key='index.html',
                  Body='0',
                  WebsiteRedirectLocation='http://www.designingforthefuture.com')
bucket.put_object(Key='error.html',
                  Body='0',
                  WebsiteRedirectLocation='http://www.designingforthefuture.com')

Even though we granted the public access to our S3 bucket above, objects created in the bucket are private by default. To make our two newly created objects publicly readable, we set their ACLs:

for key in ['index.html', 'error.html']:
    obj = s3.Bucket('redirect-22992299').Object(key)
    obj.Acl().put(ACL='public-read')

Now we designate our S3 bucket as a website. Notice that we can't use our original s3 resource for this; instead we create a 'client', the lower-level interface to AWS, whose methods map more directly onto the underlying service APIs.

website_configuration = {
    'ErrorDocument': {'Key': 'error.html'},
    'IndexDocument': {'Suffix': 'index.html'},
}
s3client = boto3.client('s3')
s3client.put_bucket_website(
    Bucket='redirect-22992299',
    WebsiteConfiguration=website_configuration
)

Done! With this bucket set up, I added an A-record alias in Route 53 that uses the redirect. You can also use this approach when an S3 object has expired and you want users redirected to your current site.
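
To verify the redirect without waiting on DNS, you can hit the bucket's website endpoint directly. A quick check, assuming the us-east-2 endpoint format and the bucket name used above; http.client is used because, unlike urllib, it won't silently follow the 301:

import http.client

conn = http.client.HTTPConnection('redirect-22992299.s3-website.us-east-2.amazonaws.com')
conn.request('HEAD', '/')
resp = conn.getresponse()
# Expect 301 and the WebsiteRedirectLocation URL in the Location header
print(resp.status, resp.getheader('Location'))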

See the complete Jupyter Notebook for full details: https://github.com/rinckd/designingforthefuture-blog/blob/master/notebooks/AWS/01_S3_buckets/01_S3_Buckets.ipynb

Posting Jupyter Notebooks to Blogs

It is nice to be able to publish Jupyter notebooks straight to GitHub. I love looking through François Chollet's work with Jupyter Notebooks and Keras; for example, this page: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.4-visualizing-what-convnets-learn.ipynb

At the same time, in a blog it's nice to be able to talk about smaller snippets of code.
The best way to do this is the nbconvert tool that installs with Jupyter.

You can convert a notebook with this command line snippet:

jupyter nbconvert --to html --template basic .\protobufs.ipynb

The --template basic argument strips the HTML headers from your file, something you'd otherwise have to do by hand if you just used the File -> Save functionality inside a Jupyter notebook session.

Grab the generated .html file and add something like this to the top to style things to your liking:

<style type="text/css">
.highlight{background: #f8f8f8; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .1em;padding:0em .5em;border-radius: 4px;}
.k{color: #338822; font-weight: bold;}
.kn{color: #338822; font-weight: bold;}
.mi{color: #000000;}
.o{color: #000000;}
.ow{color: #BA22FF;  font-weight: bold;}
.nb{color: #338822;}
.n{color: #000000;}
.s{color: #cc2222;}
.se{color: #cc2222; font-weight: bold;}
.si{color: #C06688; font-weight: bold;}
.nn{color: #4D00FF; font-weight: bold;}
</style>

Then paste it into your blog. Simple.

.NET CORE ASYNC CONSOLE APPS

Not Your Dad's .NET!

Microsoft architect David Fowler called the Generic Host model a hidden gem, and I agree. Using the same constructor-based dependency injection model common to ASP.NET Core in console apps makes for easier testing, secret storage, and configuration. It means your microservice or serverless app can use many of the same patterns and architecture you use when developing in ASP.NET Core.

So, how does it work? I'll walk through the process of creating a small API library with Visual Studio Code and the .NET Core CLI. This method works on Mac, PC, or Linux. Even if you are on a PC, I highly recommend trying this out; all of the configuration for .NET Core can be done outside of Visual Studio these days. Even though my day-to-day workflow includes VS 2017, I usually take a hybrid approach and keep VS Code open for various tasks.

Make sure you are using .NET Core 2.1 or higher to take advantage of the features we will be using; as you'll see, we'll also want C# 7.1. If this is your first time working with .NET Core, download the latest .NET Core SDK.

1. Create a new .NET CORE Console App

Open Visual Studio Code, choose Open Folder, and open the folder you want to create your new application in. Then select the View menu and open the Terminal. In the terminal, type the following lines:

dotnet new sln --name ApiTester
dotnet new console --name Api.ConsoleApp
dotnet new classlib --name Api.Library
dotnet sln add .\Api.ConsoleApp\Api.ConsoleApp.csproj
dotnet sln add .\Api.Library\Api.Library.csproj
dotnet restore
dotnet run --project .\Api.ConsoleApp\

If you see Hello World! repeated back to you, you're in!

On Mac / Linux, the Visual Studio .sln file lets you run dotnet restore against all the projects in the solution at once. For Windows users, it gives you a .sln file you can open in Visual Studio 2019.

[Screenshot: the new solution and projects open in VS Code]

2. Main is Async These Days

So, now we'll need to add the Hosting package:

dotnet add .\Api.ConsoleApp\Api.ConsoleApp.csproj package Microsoft.Extensions.Hosting

and update your Api.ConsoleApp/Program.cs to the following:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using System;
using System.Threading.Tasks;

namespace Api.ConsoleApp
{
    class Program
    {
        static async Task Main(string[] args)
        {
            var host = new HostBuilder()
            .ConfigureHostConfiguration(configBuilder => {
                // TODO
            })
            .ConfigureServices((hostContext, services) => {
                // TODO
            })
            .Build();
            await host.RunAsync();
        }
    }
}

If you try building and running this with:

dotnet run --project .\Api.ConsoleApp\

you'll most likely get an error:

error CS1983: The return type of an async method must be void.

At the time of this writing, the default language version of C# is still 7.0.

If you were in Visual Studio 2017, you'd have to do some mousing around to set this straight. Luckily, in VS Code this is straightforward: just update Api.ConsoleApp.csproj to use C# 7.1:

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.2</TargetFramework>
    <LangVersion>7.1</LangVersion>
  </PropertyGroup>

Now, when you run the Console App, you'll see the Host Builder in action.
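
At this point the host starts and simply waits for Ctrl+C. To watch the wiring actually do something, here is a minimal sketch: a hypothetical GreetingService (my own example, not part of the original walkthrough) registered as an IHostedService.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

namespace Api.ConsoleApp
{
    // A trivial hosted service: the host calls StartAsync on startup
    // and StopAsync on shutdown (e.g. Ctrl+C).
    public class GreetingService : IHostedService
    {
        public Task StartAsync(CancellationToken cancellationToken)
        {
            Console.WriteLine("Host started - GreetingService is running.");
            return Task.CompletedTask;
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            Console.WriteLine("Host stopping - goodbye.");
            return Task.CompletedTask;
        }
    }
}

Then fill in the ConfigureServices TODO in Program.cs:

.ConfigureServices((hostContext, services) => {
    services.AddHostedService<GreetingService>();
})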

3. Using the HostBuilder

If you work at a large company, you likely have old code; Microsoft is certainly no exception. By using the generic HostBuilder class you are actually leap-frogging into the future, beyond where the current ASP.NET Core platform is.

This is still a bit of speculation, but it appears that the Generic Host is under development to replace IWebHost in a future release. See, for example, this post.

If you are familiar with ASP.NET Core, you'll notice there isn't a Startup class as found in ASP.NET Core; if you are interested in why, see this issue on GitHub. You'll also find that not all code is reusable between your console app and ASP.NET Core. IWebHostBuilder is part of the Microsoft.AspNetCore.Hosting.Abstractions library, while the IHost interface lives in Microsoft.Extensions.Hosting.Abstractions. This means that if you want to reuse code that depends on anything in Microsoft.AspNetCore.Hosting, it won't work, and you'll end up doing some annoying copy-pasta. See https://andrewlock.net/the-asp-net-core-generic-host-namespace-clashes-and-extension-methods/ for a great discussion of this. That said, most of your business and data access layer code should be fine. More importantly, this is the direction Microsoft intends to go, so embrace it!
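
One pattern that sidesteps the namespace clash: put shared registrations behind an IServiceCollection extension method, since IServiceCollection lives in Microsoft.Extensions.DependencyInjection and is available to both hosts. A sketch under that assumption (the AddApiLibrary name is my own):

using Microsoft.Extensions.DependencyInjection;

namespace Api.Library
{
    // Callable from both the Generic Host's ConfigureServices and
    // an ASP.NET Core Startup.ConfigureServices.
    public static class ServiceCollectionExtensions
    {
        public static IServiceCollection AddApiLibrary(this IServiceCollection services)
        {
            // Register business/data-access services here, e.g.:
            // services.AddSingleton<IApiClient, ApiClient>();
            return services;
        }
    }
}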

We will do something useful with this Console App in the next sections. Stay Tuned!

Comments

Back to Blogging

After a long hiatus, I've decided to write again. Most of the topics will be about .NET, Angular, and AWS, because that's where my passions currently lie. That said, I also want to talk a little about Domain-Driven Design and microservices. As I have gotten older, I have made my way through many codebases and online projects, both good and bad.

While I can't pretend to write perfect code, I do have a better understanding of what makes code maintainable. Though Domain-Driven Design doesn't have hard rules, time and time again when I see that a programmer has used a well-thought-out domain and a clear repository pattern, I breathe a sigh of relief. Yes, the code may be ten years old, but refactoring is going to be possible.
