Installing Cumulus

Software requirements (Linux/MacOS)

Make sure all the requirements are installed before starting the main installation.

Git Repositories (all cloned in later steps):

  • cumulus (Cumulus core; cloned in Download Cumulus repository)
  • template-deploy (cloned as your DAAC deployment repository in Download DAAC repository)
  • cumulus-dashboard (cloned in Deploy Cumulus dashboard)

Credentials:

  • CMR username and password. Optional if you are not exporting metadata to CMR.

  • EarthData Client login username and password. User must have the ability to administer and/or create applications in URS. It’s recommended to obtain an account in the test environment (UAT).

The following software installation procedure will only work on Debian-based Linux distributions (e.g. Ubuntu)

git

Open Terminal and enter the following command to install git:

$ sudo apt install git

Git is a free and open-source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

cURL

Open Terminal and enter the following command to install cURL:

$ sudo apt install curl

cURL is a software project providing a library and command-line tool for transferring data using various protocols.

yarn

Open Terminal and enter the following commands to install yarn:

$ curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
$ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
$ sudo apt-get update && sudo apt-get install yarn

Yarn is a package manager for your code. It allows you to use and share code with other developers from around the world.

npm

Open Terminal and enter the following command to install npm:

$ sudo apt install npm

npm is the package manager for JavaScript.

nodejs

Open Terminal and enter the following commands to install nodejs. First, download the setup script:

$ curl -sL https://deb.nodesource.com/setup_7.x -o nodesource_setup.sh

To view the downloaded script (optional):

$ nano nodesource_setup.sh

Run the script under sudo, then install nodejs:

$ sudo bash nodesource_setup.sh
$ sudo apt-get install nodejs

Also, install the essential build dependencies:

$ sudo apt-get install build-essential

Node.js is an open-source, cross-platform JavaScript run-time environment for executing JavaScript code server-side.

nvm

Open Terminal and enter the following command to install nvm:

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash

To verify the install:

$ command -v nvm

On Linux, after running the install script, if you get nvm: command not found or see no feedback from your terminal, close the current terminal, open a new one, and verify again.

Then install and use the required Node version:

$ nvm install 6.10.0
$ nvm use 6.10.0

Some Node versions are not compatible with the Cumulus deployment, so install exactly the version specified above.
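
To confirm the expected Node version is active before deploying, a quick check (the version string shown assumes the 6.10.0 install above):

$ node --version
v6.10.0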

lerna

Open Terminal and enter the following command to install lerna as a global module:

$ yarn global add lerna

Lerna is a tool for managing JavaScript projects with multiple packages; we use it to manage multiple Cumulus packages in the same repo.

AWS CLI

Open Terminal and enter the following command to install the AWS CLI:

$ sudo apt install awscli

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

kes

Open Terminal and enter the following command to install kes:

$ sudo npm install -g kes

Kes helps manage and deploy AWS resources using CloudFormation; it is also used to deploy Lambda functions and create API Gateway resources.

Download Cumulus repository

Step 1: Clone Cumulus repository

Open Terminal, enter the following command

$ git clone https://github.com/cumulus-nasa/cumulus.git

git clone <url> clones the repository at <url> to the local system

Step 2: Change directory

$ cd cumulus

Changes directory to the repository root

Step 3: Deploy a particular version of Cumulus (Optional)

$ git checkout <ref/branch/tag>

Checks out a particular reference: a version (tag) or branch of Cumulus core
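
For example, to list the available release tags and check out one of them (v1.1.0 below is a hypothetical tag name; substitute a real tag from the list):

$ git tag --list
$ git checkout v1.1.0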

Step 4: Install local dependencies

$ npm install

npm install installs the latest versions of all modules listed as dependencies in package.json

Step 5: Starting bootstrap

$ npm run ybootstrap

npm run executes package scripts using locally installed executables

Step 6: Build Application

$ npm run build

Build the Cumulus application

Download DAAC repository

If you are already working with an existing ghrc-deploy repository that is configured appropriately for the version of Cumulus you intend to deploy or update, open Terminal and run npm install inside the ghrc-deploy directory to install dependencies (this is just Step 4 below), then skip to AWS configuration.

Step 1: Change directory

$ cd /home/user

Changes to the directory that contains the Cumulus repository, so the two repositories sit at the same level

Step 2: Clone DAAC repository

$ git clone https://github.com/cumulus-nasa/template-deploy ghrc-deploy

Clones the template-deploy repo to the local system, naming it appropriately for your DAAC or organization (example: ghrc-deploy)

Step 3: Enter repository root directory

$ cd ghrc-deploy

Changes directory to ghrc-deploy

Step 4: Install packages

$ npm install

npm install installs the latest versions of all modules listed as dependencies in package.json

The npm install command adds the kes utility to ghrc-deploy's node_modules directory; it will be used later for most of the AWS deployment commands

Step 5: Copy the sample template into your repository

$ cp -r ../cumulus/packages/deployment/app.example ./app

cp -r copies files and directories recursively from source to destination

Step 6: Build Application

$ npm run build

Build the Cumulus application

Step 7: Commit Changes (Optional)

$ git remote set-url origin https://github.com/cumulus-nasa/ghrc-deploy
$ git push origin master

Create a new remote repository (here named ghrc-deploy) for your DAAC’s configuration changes, point origin at it, and push

AWS configuration

Create S3 Buckets

Buckets can be created on the command line with AWS CLI or via the web interface on the AWS console

Creating S3 Buckets through web interface:

Step 1: Sign in to AWS console

Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/

Step 2: Create bucket

Click Create bucket in the console. The following S3 buckets should be created (replacing <prefix> with whatever you’d like, generally your organization/DAAC’s name):

<prefix>-internal
<prefix>-private
<prefix>-protected
<prefix>-public

These buckets do not need any non-default permissions to function with Cumulus; however, your local security requirements may vary.
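
These buckets can also be created from the command line; a minimal sketch using the AWS CLI (assuming your credentials are already configured, and substituting your own prefix and region):

$ aws s3 mb s3://<prefix>-internal --region <region>
$ aws s3 mb s3://<prefix>-private --region <region>
$ aws s3 mb s3://<prefix>-protected --region <region>
$ aws s3 mb s3://<prefix>-public --region <region>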

Step 3: Choose Bucket name

In the Bucket name field, type a unique DNS-compliant name for your new bucket. Create your own bucket name using the naming guidelines:

  • The name must be unique across all existing bucket names in Amazon S3
  • After you create the bucket you cannot change the name, so choose wisely
  • Choose a bucket name that reflects the objects in the bucket because the bucket name is visible in the URL that points to the objects that you’re going to put in your bucket

For information about naming buckets, see Rules for Bucket Naming in the Amazon Simple Storage Service Developer Guide

Step 4: Choose Region

Choose your preferred region, then click Create

Set Access Keys

Step 1: Sign in to AWS console

Sign in to the AWS Management Console and open the Amazon IAM Dashboard at https://console.aws.amazon.com/iam

Step 2: Get user’s security credentials

This user should have IAM create-user permissions (an admin role)

New User

  • On the IAM Dashboard page, choose Users in the navigation bar, then click Add user to create a new user
  • Enter the username and access type, then click Next
  • Set permissions for the new user
  • Review the permissions, then click Create user
  • The user’s Access key ID & Secret access key will be visible. Choose Download .csv to save the key file to your computer

The secret access key cannot be retrieved again after the dialog box is closed

Existing User

  • On the IAM Dashboard page, choose Users in the navigation bar, select the existing user, then click Security credentials

  • Click Create access key to generate a new Access key ID & Secret access key

Step 3: Export the access keys

Set the environment variables in Terminal:

$ export AWS_ACCESS_KEY_ID=<AWS access key>
$ export AWS_SECRET_ACCESS_KEY=<AWS secret key>
$ export AWS_REGION=<region>          # region code, e.g. us-west-2

export sets the environment variable.

If you don’t want to set environment variables, access keys can be stored locally via the AWS CLI

$ aws configure

Enter the keys, region name, and output format as shown below

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
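
To verify that the credentials are picked up correctly, you can call AWS STS; aws sts get-caller-identity is a standard AWS CLI command, and the account and ARN returned (example values shown) should match the user you created:

$ aws sts get-caller-identity
{
    "UserId": "AIDAEXAMPLEID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/<username>"
}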

Create a deployer role

Step 1: Add new deployment

Add new deployment to ghrc-deploy/deployer/config.yml

The deployer configuration sets up an IAM role with permissions for deploying the Cumulus stack

All deployments in the various config.yml files inherit from the default deployment, and new deployments only need to override relevant settings

The various config fields are described below. All items in < > brackets are intended to be configured with user-set values

Sample new deployment added to config.yml:

<deployer-deployment-name>:
  prefix: <stack-prefix>
  stackName: <stack-name>
  stackNameNoDash: <DashlessStackName>
  buckets:
    internal: <prefix>-internal
    shared_data_bucket: cumulus-data-shared

deployer-deployment-name: The name (e.g. dev) of the ‘deployment’ - this key tells kes which configuration set (in addition to the default values) to use when creating the CloudFormation template

prefix: This value will prefix CloudFormation-created deployer resources and permissions

This value must match the prefix used in the IAM role creation and the Cumulus application stack name must start with <prefix>

stackName: The name of this deployer stack in CloudFormation (e.g. <prefix>-deployer)

stackNameNoDash: A representation of the stack name that has dashes removed. This will be used for components that should be associated with the stack but do not allow dashes in the identifier

internal: The internal bucket name previously created in the Create S3 Buckets step, preferably <prefix>-internal for ease of identification

shared_data_bucket: Devseed-managed shared bucket (contains custom ingest lambda functions/common ancillary files)
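
For illustration, a filled-in version of the entry above, assuming a hypothetical deployment named dev with the prefix ghrc:

dev:
  prefix: ghrc
  stackName: ghrc-deployer
  stackNameNoDash: GhrcDeployer
  buckets:
    internal: ghrc-internal
    shared_data_bucket: cumulus-data-shared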

Step 2: Deploy deployer stack

Use the kes utility installed with Cumulus to deploy your configurations to AWS. This must be done from the ghrc-deploy repository root

$ cd /home/user/ghrc-deploy
$ ./node_modules/.bin/kes cf deploy --kes-folder deployer --deployment <deployer-deployment-name> --region <region>

A successful completion will result in output similar to:

$ ./node_modules/.bin/kes cf deploy --kes-folder deployer --deployment default --region us-east-1

Template saved to deployer/cloudformation.yml
Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus-iam/cloudformation.yml
Waiting for the CF operation to complete
CF operation is in state of CREATE_COMPLETE

This creates a new DeployerRole role in the IAM Console named <deployer-stack-name>-DeployerRole-<generatedhashvalue>. Note its Role ARN for later
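
The Role ARN can also be retrieved from the CLI; a sketch using a standard JMESPath query (the 'DeployerRole' fragment matches the naming pattern above):

$ aws iam list-roles --query "Roles[?contains(RoleName, 'DeployerRole')].Arn" --output text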

Create IAM role

Step 1: Add new deployment

Add new deployment to ghrc-deploy/iam/config.yml

The iam configuration creates 4 roles and an instance profile used internally by the Cumulus stack

Sample new deployment added to config.yml:

<iam-deployment-name>:          # e.g. dev (Note: Omit brackets, i.e. NOT <dev>)
  prefix: <stack-prefix>  # prefixes CloudFormation-created iam resources and permissions, MUST MATCH prefix in deployer stack
  stackName: <stack-name> # name of this iam stack in CloudFormation (e.g. <prefix>-iams)
  buckets:
    internal: <prefix>-internal  # Note: these are the bucket names, not the prefix from above
    private: <prefix>-private
    protected: <prefix>-protected
    public: <prefix>-public

The various config fields are described below. All items in < > brackets are intended to be configured with user-set values

iam-deployment-name: The name (e.g. dev) of the ‘deployment’ - this key tells kes which configuration set (in addition to the default values) to use when creating the CloudFormation template

prefix: This value will prefix CloudFormation-created iam resources and permissions

This value must match the prefix used in the Deployer role creation, and the Cumulus application stack name must start with <prefix>

stackName: The name of this iam stack in CloudFormation (e.g. <prefix>-iams)

buckets: The buckets created in the Create S3 Buckets step
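
Continuing the hypothetical dev deployment with prefix ghrc from the deployer step, the iam entry would look like:

dev:
  prefix: ghrc
  stackName: ghrc-iams
  buckets:
    internal: ghrc-internal
    private: ghrc-private
    protected: ghrc-protected
    public: ghrc-public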

Step 2: Deploy iam stack

$ ./node_modules/.bin/kes cf deploy --kes-folder iam --deployment <iam-deployment-name> --region <region>

Example:

$ ./node_modules/.bin/kes cf deploy --kes-folder iam --deployment default --region us-east-1

Compiling the main template
Template saved to iam/cloudformation.yml
Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus-iam/cloudformation.yml

If this deployment fails, check the deployment details in the AWS CloudFormation Console for information. Permissions may need to be updated by your AWS administrator

If the iam deployment command succeeds, you should see 4 new roles in the IAM Console:

<stack-name>-ecs
<stack-name>-lambda-api-gateway
<stack-name>-lambda-processing
<stack-name>-steprole

The same information can be obtained from the AWS CLI command: aws iam list-roles

The iam deployment also creates an instance profile named <stack-name>-ecs that can be viewed from the AWS CLI command: aws iam list-instance-profiles

Step 3: Assign a policy to a user

This AssumeRole policy, when applied to a user, allows the user to act with the permissions described by the DeployerRole

Using the command line interface or the IAM console, create and assign a policy to the user who will deploy Cumulus

In the IAM console, choose Policies from the navigation bar, click Create policy, select the JSON tab, and paste the code below into the policy creator interface:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "<arn:DeployerRole>"
        }
    ]
}

Replace <arn:DeployerRole> with the Role ARN value created when you deployed the ghrc-deployer stack

For example, create a new user called ghrc-client in the IAM console and assign it the AssumeRole policy.

The AWS CLI command aws iam list-roles | grep Arn will show you the ARNs

If the command aws iam list-roles | grep Arn fails, IAM permissions must be added to the AssumeRole policy user: in the IAM console -> Users -> select ghrc-client -> Permissions -> select the existing policy -> Edit policy -> Import managed policy -> check the IAMFullAccess policy
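
The same policy can also be created and attached from the CLI; a sketch assuming the JSON above is saved as assume-role-policy.json and the user is named ghrc-client (the policy name below is arbitrary):

$ # create the policy from the saved JSON document (name is hypothetical)
$ aws iam create-policy --policy-name DeployerAssumeRole \
    --policy-document file://assume-role-policy.json
$ # attach it to the deploying user
$ aws iam attach-user-policy --user-name ghrc-client \
    --policy-arn arn:aws:iam::<aws-account-id>:policy/DeployerAssumeRole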

Step 4: Update AWS Access keys

Create Access keys for the AssumeRole policy user and replace the previous values in environment variables

4[a]: Sign in to AWS console

Sign in to the AWS Management Console and open the Amazon IAM Dashboard at https://console.aws.amazon.com/iam

4[b]: Get user’s security Credentials

  • On the IAM Dashboard page, choose Users in the navigation bar, select the AssumeRole policy user, then click Security credentials
  • Click Create access key to generate a new Access key ID & Secret access key

4[c]: Export the access keys

Set the environment variables in Terminal:

$ export AWS_ACCESS_KEY_ID=<AWS access key>
$ export AWS_SECRET_ACCESS_KEY=<AWS secret key>
$ export AWS_REGION=<region>          # region code, e.g. us-west-2

export sets the environment variable.

If you don’t want to set environment variables, access keys can be stored locally via the AWS CLI

$ aws configure

Enter the keys, region name, and output format as shown below

AWS Access Key ID [******EXAMPLE]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [****CYEXAMPLEKEY]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [us-west-2]: us-west-2
Default output format [json]: json

Building a Docker image

Step 1: Installing Docker

Docker is an open-source project for automating the deployment of applications as portable, self-sufficient containers that can run on the cloud or on-premises.

$ sudo apt install docker.io

Step 2: Installing Pip

Pip is a package management system used to install and manage software packages written in Python.

$ sudo apt install python-pip

Step 3: Upgrade AWS CLI

$ pip install awscli --upgrade --user

Step 4: Authenticate Docker to an Amazon ECR registry

$ aws ecr get-login --no-include-email --region <AWS-region>

If you receive an Unknown options: --no-include-email error, remove --no-include-email and execute the command. Replace <AWS-region> with your AWS region code. Example: --region us-east-1

The resulting output is a docker login command that you use to authenticate your Docker client to your Amazon ECR registry.

Output will be in the format:

docker login -u AWS -p <password> https://aws_account_id.dkr.ecr.us-east-1.amazonaws.com

Sample Output:

docker login -u AWS -p eyJwYXlsb2FkIjoiUWZp\
                       NM0l3VXVUL0xxZklibG1DTVhPaTg5dmdlRG80VHAxZzNiZ3F1ajFvNUI4SFY2cCswRk12TWVYS3FLY09LN2FZc20vNXpJZ05TT1Y2QjB4dHNKZTRrVkNVdEtOeEx3YVJEVEMvN2RHd0tOVExTZDg1SDlhRE9yTTZWUGxjYUYzcEl3T3pFcEhScXB3YkNqdE5Ob1ozbWVLNk1OC9LSWNqTjRJRTBQNk16QXBpSjJ0eHZPc1MydWhDVjloZlM2dGNkZTFzZGZnQnJTbHZ3UVdpT1ZiU3BhMU5QN2k2TVdmMGJ3VitmRTl6MGdVYk9VdjR0K05aVnlrTHBzQWMrVFJwVUlkRElJV1EwRExOeXZKVHhUVllXNFFrTDlrd1JPYkN1VVdlUGovNmpkaDdCaG5MOHZNcXlKdWgwREEwZXBSeFlrZkV4eWZkbEJ5MmFJVThUa1dSSXIxVW9oQ1gxeE01Uk9LWk1vNlgwSS9yZmplUGNJYjhSOE5iMlhUUEZYWTl4NStVYzY4WHp1dGpZRTEwRGg5MGVqK3VIUkcyK3ZVZ0t2WFlsNXgzQmFDbkp6eFJ6VjF4RG5kM3VYN1RCSnVNSGRQWGdOaEpPQzhZUXZnOW11TjJaS3kzczlNYVI3NzdVSDdBcVBUclpSLzhNMG9SRkhYNnIvTENLc096S2tNNWt3blR1Mk1tbHdTMSt4d2VBNHYyaEdlSUt2NU1FNkV1WS9TY2VwSDBKdk1kZEFMNjVuYUxacThIUGZKbWt0Z2lvRjFYdjNtOS8xQisxL2xnYVRqZTJZeHI2MUdpM1Mydk96Wkh2OHg5NWFiRWZmOEUyQXYrREhBTXJEcGY0UkpOVXNwczkwbEVPdDlaaVhlbDRKVHNoNk9vbi9EcVVadVArTVFUOFlkMCtiNU5XRGp2dW96QWFRTGRvU3hrczN0U2lXVjR2ZitadTRVSE10V0pGOEM0SDJIa01rQW1QWExVN0pMTkxlNGtNWlNGVzcxMnZHbkxZbUJIUmdFbld2Y2V3WndLS3h2Rmd0RlR6NVBVNmVWdVFOaGJ1VkZycHRXc2VpSWNLVmRsbnppRm55UTNNemEwVXJ0UVgyMHBheE16Z3VtUXdjIiwiZGF0YWtleSI6IkFRRUJBSGh3bTBZYUlTSmVSdEptNW4xRzZ1cWVla1h1b1hYUGU1VUZScTgvMTR3QUFBSDR3ZkFZSktvWklodmNOQVFjR29HOHdiUUlCQURCb0Jna3Foa2lHOXcwQkJ3RXdIZ1lKWUlaSUFXVURCQUV1TUJFRURBQ29YQ0tIcBmdkgvODRQd0lCRUlB2dDaG8wcm0ydm92ejBTd0RRVEg5Y0d4K0ZXcXJLRm80T3NtTVRzeVd2WUltMnQzLzE2MjF4NldLYk0zMlhzMnBiQ0E1dGpYN1FlcFZpTE09IiwidmVyc2lvbiI6IjIiLCJ0eXBlIjoiREFUQV9LRVkiLCJleHBpcmF0aW9uIjoxNTE2NDIxMzk0fQ== https://831488300700.dkr.ecr.us-east-1.amazonaws.com

Step 5: Login with credentials

Copy and paste the output from the previous step, prefixed with sudo:

$ sudo docker login -u AWS -p <password-from-previous-step> https://831488300700.dkr.ecr.us-east-1.amazonaws.com

You will see Login Succeeded on successful login.

Step 6: Build Docker

Change directory to the location of the Dockerfile, then execute:

$ sudo docker build -t <YourImageName> .

The trailing . is part of the command; it tells Docker to use the files in the current directory as the build context. Replace <YourImageName> with your preferred image name, without the angle brackets. Example: $ sudo docker build -t msut2 .

Step 7: Push Docker image to Amazon ECR

Upload the Docker image to AWS ECR:

$ sudo docker push <aws_account_id.dkr.ecr.us-east-1.amazonaws.com>/<YourImageName>[:TAG]

Example:

$ sudo docker push 831488300700.dkr.ecr.us-east-1.amazonaws.com/msut2:latest
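
Note that the push target must exist as a repository in ECR, and the local image must be tagged with the full registry path before pushing; a sketch using the sample names above:

$ # the ECR repository must exist before pushing
$ aws ecr create-repository --repository-name msut2
$ # tag the locally built image with the full registry path
$ sudo docker tag msut2:latest 831488300700.dkr.ecr.us-east-1.amazonaws.com/msut2:latest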

Create Earthdata application

Step 1: Log in to EarthData

Go to the EarthData website

If you are an existing user, enter your username and password to log in to EarthData. If you are a new user, click Register to create an account.

After a successful login, click the Applications menu and select My Applications

Step 2: Create/Use an Application

The User should have administrative privileges to create an Application in EarthData

If you don’t have an existing Application, click Create a New Application

Specify the necessary details to create a new Application

Use any URL for the Redirect URL; it will be deleted in a later step

Step 3: Add Redirect URLs

Click the home icon in your Application to view the Application’s details

Make a note of the Client ID and the Application’s password; these will be specified in the ghrc-deploy/app/.env file. Note that the Client ID is in mixed-case letters

The following step should be done after deploying ghrc-deploy/app/config.yml; you will get the endpoint URLs after a successful deployment.

Click the Manage menu, select Redirect Uris

Add the Redirect URLs (API & Distribution)

Example:

https://<czbbkscuy6>.execute-api.us-east-1.amazonaws.com/dev/token
https://<kido2r7kji>.execute-api.us-east-1.amazonaws.com/dev/redirect

Configure Cumulus stack

You should either add a new root-level key for your configuration or modify the existing default configuration key to whatever you’d like your new deployment to be.

If you’re re-deploying based on an existing configuration you can skip this configuration step unless values have been updated or you’d like to add a new deployment to your deployment configuration file.

Step 1: Edit the ghrc-deploy/app/config.yml file

The various configuration sections are described below with a sample config.yml file

Sample config.yml

<cumulus-deployment-name>:
  stackName: <prefix>-cumulus
  stackNameNoDash: <Prefix>Cumulus

  apiStage: dev

  vpc:
    vpcId: <vpc-id>
    subnets:
      - <subnet-id>

  ecs:
    instanceType: t2.micro
    desiredInstances: 0
    availabilityZone: <subnet-id-availabilityZone>

  buckets:
    internal: <prefix>-internal

  iams:
    ecsRoleArn: arn:aws:iam::<aws-account-id>:role/<iams-prefix>-ecs
    lambdaApiGatewayRoleArn: arn:aws:iam::<aws-account-id>:role/<iams-prefix>-lambda-api-gateway
    lambdaProcessingRoleArn: arn:aws:iam::<aws-account-id>:role/<iams-prefix>-lambda-processing
    stepRoleArn: arn:aws:iam::<aws-account-id>:role/<iams-prefix>-steprole
    instanceProfile: arn:aws:iam::<aws-account-id>:instance-profile/<iams-prefix>-ecs

  urs_url: https://uat.urs.earthdata.nasa.gov/ #make sure to include the trailing slash

  # if not specified the value of the apigateway backend endpoint is used
  # api_backend_url: https://apigateway-url-to-api-backend/ #make sure to include the trailing slash

  # if not specified the value of the apigateway dist url is used
  # api_distribution_url: https://apigateway-url-to-distribution-app/ #make sure to include the trailing slash

  # URS users who should have access to the dashboard application (EarthData usernames)
  users:
    - username: <user>
    - username: <user2>

cumulus-deployment-name: The name (e.g. dev) of the ‘deployment’ - this key tells kes which configuration set (in addition to the default values) to use when creating the CloudFormation template

This value is used by kes only to identify the configuration set to use and should not appear in any AWS object

stackName: The name of this Cumulus stack in CloudFormation (e.g. <prefix>-cumulus).

This stack name must start with the prefix listed in the IAM role configuration, or the deployment will fail

stackNameNoDash: A representation of the stack name that has dashes removed. This will be used for components that should be associated with the stack but do not allow dashes in the identifier.

vpc: Configure your virtual private cloud. You can find the <vpc-id> and <subnet-id> values on the VPC Dashboard: vpcId under Your VPCs, and subnets under Subnets. When you choose a subnet, be sure to also note its availability zone for the ecs configuration.

ecs: Configuration for the Amazon EC2 Container Service (ECS) instances. Update availabilityZone with information from the EC2 Dashboard; you can launch a new instance or use an existing running instance. Note that the instanceType and desiredInstances above were selected for a sample install; you will have to specify appropriate values to deploy and use ECS machines. See EC2 Instance Types for more information.
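
A subnet's availability zone can also be looked up from the CLI; a sketch using a standard AWS CLI call (substitute your own subnet ID):

$ aws ec2 describe-subnets --subnet-ids <subnet-id> --query "Subnets[0].AvailabilityZone"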

buckets: The config buckets should map to the same names you used when creating buckets in the AWS Configuration step.

iams: Add the ARNs for each of the four roles and one instanceProfile created in the IAM Roles step. You can retrieve the ARNs from:

$ aws iam list-roles | grep Arn
$ aws iam list-instance-profiles | grep Arn

To locate the IAM roles in the console, select Roles in the navigation bar, then select the automatically created roles that correspond to the ‘iams’ roles in the configuration file. Within each, the Role ARN is displayed at the top of the tab.

users: List of EarthData users you wish to have access to your dashboard application. These users will be populated in your <stackname>-UsersTable DynamoDB table (in addition to the default_users defined in the Cumulus default template).
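
After deployment, you can confirm the users were populated; a sketch using the AWS CLI (substitute your actual stack name):

$ aws dynamodb scan --table-name <stackname>-UsersTable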

Step 2: Configure EarthData application

The Cumulus stack is expected to authenticate with Earthdata Login. You must create and register a new application; use the User Acceptance Tools (UAT) site unless you changed urs_url above. Follow the directions on how to register an application. Use any URL for the Redirect URL; it will be deleted in a later step.

Step 3: Set up environment file

If you’re adding a new deployment to an existing configuration repository or re-deploying an existing Cumulus configuration you should skip to Deploy the Cumulus Stack, as these values should already be configured.

In the ghrc-deploy directory, open Terminal and enter the following command:

$ nano app/.env

This creates a new .env file in the app subdirectory. Add the following values:

 CMR_PASSWORD=cmrpassword
 EARTHDATA_CLIENT_ID=clientid
 EARTHDATA_CLIENT_PASSWORD=clientpassword

Save (ctrl + o) and close (ctrl + x) the file

For security, it is highly recommended that you prevent app/.env from being accidentally committed to the repository by listing it in the .gitignore file at the root of this repository.
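
If it is not already listed, a one-liner to add it (run from the ghrc-deploy root):

$ echo "app/.env" >> .gitignore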

Step 4: Deploy the Cumulus Stack

Once the preceding configuration steps have completed, run the following to deploy Cumulus from your ghrc-deploy root directory:

$ ./node_modules/.bin/kes cf deploy --kes-folder app --region <region> \
  --template ./node_modules/@cumulus/deployment/app \
  --deployment <cumulus-deployment-name> --role <arn:deployerRole>

You can monitor the progress of the stack deployment from the AWS CloudFormation Console

A successful completion will result in output similar to:

$ ./node_modules/.bin/kes cf deploy --kes-folder app --region us-east-1 \
  --template ../cumulus/packages/deployment/app \
  --deployment default --role arn:aws:iam::820488300700:role/ghrcitsc-cumulus-deployer-DeployerRole-1T8XWPH4JJXRO


  Compiling the main template

    adding: daac-ops-api/ (stored 0%)
    adding: daac-ops-api/index.js (deflated 85%)


    adding: discover-granules/ (stored 0%)
    adding: discover-granules/index.js (deflated 82%)


    adding: post-to-cmr/ (stored 0%)
    adding: post-to-cmr/index.js (deflated 83%)


    adding: sync-granule/ (stored 0%)
    adding: sync-granule/index.js (deflated 82%)


    adding: amsr2-l1-discover/ (stored 0%)
    adding: amsr2-l1-discover/index.js (deflated 84%)

  Already Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus/lambdas/3596994818023e59ff65247f52ceed25b4be2be9/sync-granule.zip
  Already Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus/lambdas/4135748ee2c4cb4e51597fadf89b4c282428e275/post-to-cmr.zip
  Already Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus/lambdas/4af1c2527061e7e72e828036ba6eb20644435c97/discover-granules.zip
  Already Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus/lambdas/b48c86f2a0acee7415a2c336cc2b3641e66419de/daac-ops-api.zip
  Already Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus/lambdas/6744f43c48964d3f9781cb831a832df109a126ce/amsr2-l1-discover.zip
  Template saved to app/cloudformation.yml
  Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus/cloudformation.yml
  Waiting for the CF operation to complete
  CF operation is in state of CREATE_COMPLETE

  Here are the important URLs for this deployment:

  Distribution:  https://pfee8g6b8k.execute-api.us-east-1.amazonaws.com/dev/
  Add this url to URS:  https://pfee8g6b8k.execute-api.us-east-1.amazonaws.com/dev/redirect

  Api:  https://ow8sxcm8mi.execute-api.us-east-1.amazonaws.com/dev/
  Add this url to URS:  https://ow8sxcm8mi.execute-api.us-east-1.amazonaws.com/dev/token

  Uploading Workflow Input Templates
  Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus/workflows/DiscoverGranules.json
  Uploaded: s3://ghrcitsc-internal/ghrcitsc-cumulus/workflows/list.json
  distribution endpoints with the id pfee8g6b8k redeployed.
  api endpoints with the id ow8sxcm8mi redeployed.




Be sure to copy the URLs, as you will use them to update your EarthData application.

Deploy Cumulus dashboard

Step 1: Create S3 bucket for Dashboard:

Buckets can be created on the command line with AWS CLI or via the web interface on the AWS console

Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/

Click Create bucket in the console and name it <prefix>-dashboard, using the same <prefix> used to create the previous S3 buckets

Choose your preferred region, then click Create

Step 2: Configure the bucket to host a website:

In the AWS S3 console, select the <prefix>-dashboard bucket, click the Properties tab -> enable Static Website Hosting, and enter index.html in the Index document field. Click Save. (or)

In CLI:

$ aws s3 website s3://<prefix>-dashboard --index-document index.html

The bucket’s URL will be

 http://<prefix>-dashboard.s3-website-<region>.amazonaws.com

Alternatively, you can find the URL in the AWS S3 console: select the <prefix>-dashboard bucket, click the Properties tab -> Static Website Hosting; the Endpoint field provides your URL.

Step 3: Install dashboard repository:

Clone the repository in the same directory level as the Cumulus repository download

$ git clone https://github.com/cumulus-nasa/cumulus-dashboard
$ cd cumulus-dashboard

Changes the directory to cumulus-dashboard

$ npm install

npm install will install all modules listed as dependencies in package.json

Step 4: Configure dashboard:

Modify the config.js document in cumulus-dashboard/app/scripts/config/

Replace the default apiRoot https://wjdkfyb6t6.execute-api.us-east-1.amazonaws.com/dev/ with your app’s apiRoot.

The apiRoot can be found a number of ways. The easiest is to note it in the output of the app deployment step. You can also find it from the AWS console -> Amazon API Gateway -> APIs -> <prefix>-cumulus-backend -> Dashboard, reading the URL at the top labeled “Invoke this API”

The app deployment step looks like this:

kes cf deploy --kes-folder app --region <region> \
  --template ../cumulus/packages/deployment/app \
  --deployment <cumulus-deployment-name>

When the above command executes, output is generated; the apiRoot is displayed at the end, labeled Api

Example:

Api:  https://<czbbkscuy6>.execute-api.us-east-1.amazonaws.com/dev/

Step 5: Build dashboard repository:

Build the dashboard from the directory cumulus-dashboard

$ npm run build

Step 6: Deploy dashboard:

Deploy dashboard to S3 bucket from the cumulus-dashboard directory:

In AWS CLI:

  $ aws s3 sync dist s3://<prefix>-dashboard --acl public-read

The AWS user must have write permissions on <prefix>-dashboard to execute the above command

(or)

From the S3 Console:

In the AWS S3 console, select the <prefix>-dashboard bucket and click Upload.

Add the contents of the ‘cumulus-dashboard/dist’ subdirectory to the upload, then select ‘Next’.

On the permissions window, allow the public to view, then click Upload. (or)

Upload the subdirectory, then paste the following policy into <prefix>-dashboard -> Permissions tab -> Bucket Policy to grant read permission to the public:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<prefix>-dashboard/*"
        }
    ]
}

Replace <prefix>-dashboard in the above policy with your bucket name
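
The same policy can be applied from the CLI; a sketch assuming the JSON above is saved as dashboard-policy.json:

$ aws s3api put-bucket-policy --bucket <prefix>-dashboard --policy file://dashboard-policy.json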

You should be able to visit the dashboard website at http://<prefix>-dashboard.s3-website-<region>.amazonaws.com (or find the URL in the AWS S3 console under <prefix>-dashboard -> “Properties” -> “Static website hosting” -> “Endpoint”) and log in with a user that you configured for access in the Configure Cumulus Stack step.

Updating Cumulus deployment

Once deployed for the first time, any future updates to the role/stack configuration files/version of Cumulus can be deployed and will update the appropriate portions of the stack as needed.

Update roles:

Execute the following in the ghrc-deploy directory:

$ kes cf deploy --kes-folder iam --deployment <deployment-name> \
  --region <region> # e.g. us-east-1

Update stack:

If you modified any values in app/config.yml or need to update the stack, execute the following in the ghrc-deploy directory:

$ kes cf deploy --kes-folder app --region <region> \
  --template ../cumulus/packages/deployment/app \
  --deployment <cumulus-deployment-name> --role <arn:deployerRole>

Cumulus Versioning:

Cumulus uses a global versioning approach, meaning version numbers are consistent across all packages and tasks, and semantic versioning to track major, minor, and patch versions (e.g. 1.0.0). We use Lerna to manage versioning. Any change will force lerna to increment the version of all packages.

Publishing to NPM:

Execute the following in the cumulus directory:

$ lerna publish

To specify the level of change for the new version:

$ lerna publish --cd-version (major | minor | patch | prerelease)