{
  "Version": "2012-10-17",
  "Id": "S3-Account-Permissions",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": [ "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:root" ]
      },
      "Action": [ "s3:GetObject", "s3:PutObject" ],
      "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ]
    }
  ]
}
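To try the policy from the command line, a minimal sketch (assuming the JSON above is saved locally as policy.json and the bucket mybucket belongs to your account):
$ aws s3api put-bucket-policy --bucket mybucket --policy file://policy.json
# confirm the policy was applied
$ aws s3api get-bucket-policy --bucket mybucket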
# check if python3 is installed
$ python3 --version
# otherwise install it
$ brew install python3
# check if pip3 is installed
$ pip3 --version
# otherwise install it
$ curl -O https://bootstrap.pypa.io/get-pip.py
$ python3 get-pip.py --user
# use pip3 to install awscli
$ pip3 install awscli --upgrade --user
$ aws --version
# if the command is not found, add the Python (3.x) user bin directory to the PATH
$ cd ~
$ nano .bash_profile
PATH="~/Library/Python/3.5/bin:<SOME OTHER EXISTING BINARIES>:${PATH}"
$ source ~/.bash_profile
$ aws --version
$ aws configure
AWS Access Key ID [None]: [YOUR Access Key ID]
AWS Secret Access Key [None]: [YOUR Secret Access Key]
Default region name [None]: us-east-1
Default output format [None]: [press ENTER to accept the default]
# get your user info
$ aws iam get-user
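A quick sanity check for the configured credentials; get-caller-identity works even when IAM read permissions are missing:
$ aws sts get-caller-identity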
1. Access your aws admin account.
2. Explore various aws services from the console.
3. Go to IAM Management Console.
4. Create another user ($USER1) with the following IAM policy:
a. AmazonS3ReadOnlyAccess
5. Sign in to the console as this new user and see what it can and cannot do.
6. Go to IAM Management Console.
7. Generate your Access key ID and Secret access key. Save it in a file.
8. Install aws-cli on your machine.
9. Configure aws-cli using your access keys.
10. Execute:
$ aws iam get-user
$ aws iam list-policies
$ aws iam create-user --user-name $USER2
$ aws iam create-login-profile --user-name $USER2 --password .Egen2017.
11. Delete both users: $USER1 and $USER2.
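The cleanup can also be scripted; a rough CLI sketch (a user's login profile and attached policies must be removed before the user can be deleted):
$ aws iam delete-login-profile --user-name $USER2
$ aws iam detach-user-policy --user-name $USER1 --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
$ aws iam delete-user --user-name $USER2
$ aws iam delete-user --user-name $USER1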
1. Buy a domain from godaddy.com or Route53.
2. Create a hosted zone for the domain.
3. In godaddy.com domain settings, update the DNS NameServers for the domain
with the ones provided by Route53.
4. Add a CNAME record ($USER-route.egen.cloud) for the domain and point it at some other domain.
5. Try opening the CNAME in the browser. It should resolve within a few minutes once DNS propagates.
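To check the delegation and the new record without a browser, a quick sketch using dig (nslookup works too; record name as in step 4):
dig NS egen.cloud +short                 # should list the Route53 name servers
dig CNAME $USER-route.egen.cloud +short  # should return the target domain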
1. Go to Amazon SES Console and Verify a New Domain
2. Enter the domain: alerts.$USER.egen.cloud
3. Since egen.cloud is on Route53 in the same account,
SES will automatically set up every record in your Route53 hosted zone.
4. Click Use Route53 and then Create Record Sets.
5. Create two files, destination.json and message.json (see the example sketches after this list), as explained at
http://docs.aws.amazon.com/cli/latest/reference/ses/send-email.html#examples
6. Once verification is complete, open a terminal and execute:
aws ses send-email --from alerts@alerts.$USER.egen.cloud --destination file://destination.json --message file://message.json
7. Check your email inbox.
8. Code sample for sending emails via aws-ses-sdk
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-an-email-using-sdk-programmatically.html
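A sketch of the two files from step 5, following the format in the linked send-email examples (addresses and text are placeholders):
# destination.json
{
  "ToAddresses": ["your-address@example.com"],
  "CcAddresses": [],
  "BccAddresses": []
}
# message.json
{
  "Subject": { "Data": "SES test from alerts.$USER.egen.cloud", "Charset": "UTF-8" },
  "Body": {
    "Text": { "Data": "This is a test email sent via Amazon SES.", "Charset": "UTF-8" }
  }
}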
1. Go to AWS SNS Console
2. Open Text Messaging (SMS) section
3. Send a text message (SMS) of the type Transactional
4. Via aws cli
aws sns publish --phone-number [PHONE_NUMBER_WITH_COUNTRY_CODE] --message "Hi How are you"
5. Java sample for sending SMS via aws-sns-sdk
http://docs.aws.amazon.com/sns/latest/dg/sms_publish-to-phone.html#sms_publish_sdk_java
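The default SMS type can also be set once via the CLI instead of per message in the console; a sketch (this is an account-level setting):
aws sns set-sms-attributes --attributes DefaultSMSType=Transactional
# verify
aws sns get-sms-attributes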
- create an S3 bucket
a. go to S3 Console
b. create a bucket $USER-cdn-bucket with
Manage public permissions: Grant public read access to this bucket
c. upload static site files to this bucket
d. select all files and then make them public
e. create another S3 bucket for logs $USER-cdn-logs
- create cloudfront distribution
a. go to CloudFront Console and click Create Distribution
b. Origin Settings:
Origin Domain Name: Select your s3 bucket $USER-cdn-bucket
Restrict Bucket Access: No
c. Add CNAME: $USER-cdn.egen.cloud
d. Default Root Object: index.html
e. Select all default settings for other options
f. Click Create Distribution
- review cloudfront distribution
a. once distribution is Deployed, click on your distribution
b. copy the Domain Name and open it in the browser
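The deployment status can also be watched from the CLI; a rough sketch using a JMESPath query over list-distributions:
aws cloudfront list-distributions \
  --query "DistributionList.Items[].{Id:Id,Domain:DomainName,Status:Status}"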
EBS-backed Instances
Instance Store-backed Instances
- launch an EC2 instance in the default VPC
a. Select AMI: Ubuntu 16.x
b. Select instance type: m4.large
c. Instance Details: Default settings
d. Add Storage: additional 100GB GP2 EBS volume
e. Add Tag: Name with value $USER-instance to both instance and volume
f. Security Group: Existing Default Security Group
g. Review
h. Launch
i. Create and download a new keypair and save it with the name $USER-awskey
j. Save the $USER-awskey.pem file in the home folder of your Mac.
k. Launch the instance and copy the instance ID
- Review EC2 instance by searching with the tag $USER-instance
- SSH to the EC2 instance
a. copy the IPv4 Public IP from the instance details panel
b. open terminal, execute
ssh -i $USER-awskey.pem ubuntu@instance-public-ip-address
c. if it doesn't work, check and open the port 22 (SSH Port) in the security group.
d. if "UNPROTECTED PRIVATE KEY FILE" error is throw, change the perms on the .pem file by executing
chmod 600 $USER-awskey.pem
e. you should be SSHed in the EC2 instance now.
- Explore EC2 instance
pwd
whoami
date
cat /etc/hostname
ps aux
ps aux --sort -rss
netstat -tulpn
- Create a Route53 A record for your instance's IP
a. Open Route53 console
b. select egen.cloud hosted zone.
c. Create Record Set of Type: A - IPv4 Address
Name: $USER-instance.egen.cloud
Value: Public IPv4 of your instance
d. Try $USER-instance.egen.cloud in the browser. Nothing loads yet, because no web server is running on the instance.
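To confirm the A record itself is correct (the browser fails only because nothing is listening yet), a quick check sketch:
dig A $USER-instance.egen.cloud +short   # should print the instance's public IPv4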
- Install nginx on the EC2 instance
a. SSH to the box
b. execute the following to install nginx and confirm it is listening on port 80
sudo apt-get update
sudo apt-get install nginx
netstat -tulpn
c. Try http://$USER-instance.egen.cloud in the browser. Works?
d. if it doesn't work, check and open the port 80 (HTTP Port) in the security group.
e. You should see default nginx page
- Install Oracle Java8 on your EC2 instance
a. execute the following
sudo apt-get update
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
b. Accept Oracle's agreement and Binary License
c. check if it's installed
java -version
which java
- Install Node.js on your EC2 instance
a. execute the following to install Node.js using nvm
sudo apt-get update
sudo apt-get install build-essential libssl-dev
curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh -o install_nvm.sh
bash install_nvm.sh
source ~/.profile
nvm ls-remote
nvm install 8.9.0
node -v
npm -v
- deploy front-end app
a. copy your front-end dist code to the EC2 instance
scp -i $USER-awskey.pem -r aws-bootcamp-ui/dist ubuntu@$USER-instance.egen.cloud:/home/ubuntu
b. SSH to the EC2 instance
ssh -i $USER-awskey.pem ubuntu@$USER-instance.egen.cloud
sudo rm -r /var/www/html/*
sudo cp -R dist/* /var/www/html
c. update nginx config for your site:
cd /etc/nginx/conf.d
sudo nano $USER-instance.conf
d. add the following to the .conf file
server {
    listen 80;
    server_name $USER-instance.egen.cloud;

    location / {
        root /var/www/html/;
        index index.html index.htm;
        # HTML5 mode
        try_files $uri $uri/ /index.html;
    }
}
e. test and restart nginx
sudo nginx -t
sudo service nginx restart
f. Try http://$USER-instance.egen.cloud in the browser. See your site there?
- deploy nodejs/springboot app
a. if it's a Spring Boot app, copy your local jar to the EC2 instance
scp -i $USER-awskey.pem aws-bootcamp-spring/build/aws-bootcamp-spring.jar ubuntu@$USER-instance.egen.cloud:/home/ubuntu
b. if it's a Node.js app, copy your source code to the EC2 instance
scp -i $USER-awskey.pem -r aws-bootcamp-nodejs/* ubuntu@$USER-instance.egen.cloud:/home/ubuntu
c. SSH to the EC2 instance
ssh -i $USER-awskey.pem ubuntu@$USER-instance.egen.cloud
d. if it's a Node.js app, execute the following:
cd aws-bootcamp-nodejs
npm install
nohup node server.js 8080 &> output8080.out &
e. if it's a Spring Boot app, execute the following:
nohup java -Dserver.port=8080 -jar aws-bootcamp-spring.jar &> output8080.out &
f. Open the appropriate ports for the REST APIs in the default security group, e.g. 3000, 8080.
g. See if you can access http://$USER-instance.egen.cloud:3000 or http://$USER-instance.egen.cloud:8080
- configure nginx to act as proxy
a. edit conf file
cd /etc/nginx/conf.d
sudo nano $USER-instance.conf
b. edit it to the following:
server {
    listen 80;
    server_name $USER-instance.egen.cloud;

    location / {
        root /var/www/html;
        index index.html index.htm;
        # HTML5 mode
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080/;
    }
}
c. Check the following in the browser:
http://$USER-instance.egen.cloud/api
d. Block port 8080 in the security group and verify the following no longer works (a curl sketch follows this list):
http://$USER-instance.egen.cloud:8080
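A quick verification sketch from your local machine, assuming the API responds on its root path; the proxied URL should return a response, while the direct port should time out once 8080 is blocked:
curl -i http://$USER-instance.egen.cloud/api/
curl -i --max-time 5 http://$USER-instance.egen.cloud:8080/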
- configure nginx for load balancing
a. create additional instances of the REST API on different ports
nohup node server.js 8081 &> output8081.out &
nohup node server.js 8082 &> output8082.out &
nohup java -Dserver.port=8081 -jar aws-bootcamp-spring.jar &> output8081.out &
nohup java -Dserver.port=8082 -jar aws-bootcamp-spring.jar &> output8082.out &
b. edit the nginx config to add the following upstream block:
upstream apiservers {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}
c. and update proxy_pass to the following in the .conf
proxy_pass http://apiservers/;
- create a vpc
a. Go to VPC Console and click Create VPC
Name Tag: $USER-vpc
Use the VPC IPv4 CIDR block assigned to you in Column D
Tenancy: Default
b. Update VPC to enable DNS Hostnames
- create 2 subnets (repeat the steps below twice, once per AZ)
a. Click Create Subnet
Name tag: $USER-vpc-subnet-[1a-1b]
VPC: Select your $USER-vpc
Availability Zone: us-east-[1a-1b]
IPv4 CIDR block: Use the ones assigned to you in Column E one by one
- create 2 security groups
a. Add a name to the default security group for your VPC: $USER-vpc-sg-default
b. Create two new security groups:
$USER-vpc-sg-public
$USER-vpc-sg-private
c. You should now see three security groups in your VPC.
d. Update inbound rules for $USER-vpc-sg-private and $USER-vpc-sg-public
Type: ALL Traffic
Protocol: ALL
Port Range: ALL
Source: select your security groups one by one
e. Each of your custom security groups should have 2 inbound rules for ALL Traffic.
- update your default Network ACL
a. Find the default Network ACL for your VPC
b. Give it a name: $USER-vpc-nwacl
- create 1 internet gateway
a. Create a new Internet Gateway with name: $USER-vpc-internet-gateway
b. Attach this gateway to your VPC
- update default routing table for the VPC
a. Name your default routing table: $USER-vpc-routing-table
b. Associate all your VPC subnets with it
c. Add a route for all outbound traffic to your internet gateway
Destination: 0.0.0.0/0
Target: ID of your internet gateway
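As a quick check, the route table and its routes can also be inspected from the CLI; a sketch (replace [vpc-id] with your VPC's ID):
aws ec2 describe-route-tables --filters Name=vpc-id,Values=[vpc-id]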
- launch an EC2 instance in the VPC with the public security group
a. Select AMI: Ubuntu 16.x
b. Select instance type: m4.large
c. Instance Details:
Network: select $USER-vpc
Subnet: Select any subnet from your VPC
Auto Assign IP: Enable
d. Add Storage: additional 100GB GP2 EBS volume
e. Add Tag: Name with value $USER-manager to both instance and volume
f. Security Group: Select Existing Public Security Group $USER-vpc-sg-public
g. Review
h. Launch
i. Use the existing keypair $USER-awskey
j. Launch the instance and copy the instance ID
- launch 2 EC2 instances in the VPC with the private security group
a. Select AMI: Ubuntu 16.x
b. Select instance type: m4.large
c. Instance Details:
Network: select $USER-vpc
Subnet: Select any subnet from your VPC
Auto Assign IP: Enable
d. Add Storage: additional 100GB GP2 EBS volume
e. Add Tag: Name with values $USER-node1 and $USER-node2 (one per instance)
f. Security Group: Select Existing Private Security Group $USER-vpc-sg-private
g. Review
h. Launch
i. Use the existing keypair $USER-awskey
j. Launch the instances and copy the instance IDs
k. Try SSHing to these instances from your machine. It should not work, since the private security group only allows traffic from within the VPC's security groups.
- create Route53 entries for your public instance
a. Open egen.cloud hosted zone
b. Add one A record for the manager:
$USER-manager.egen.cloud ---> manager instance's Public IPv4
- SSH config for private instances
a. Copy your .pem key from local machine to the manager instance
scp -i $USER-awskey.pem $USER-awskey.pem ubuntu@$USER-manager.egen.cloud:/home/ubuntu
b. SSH to your manager
ssh -i $USER-awskey.pem ubuntu@$USER-manager.egen.cloud
c. From the manager, you can SSH to your private instances
ssh -i $USER-awskey.pem ubuntu@instance-private-ip-address
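Copying the private key onto the manager works, but an alternative sketch using SSH agent forwarding keeps the .pem file on your local machine:
# on your local machine
eval "$(ssh-agent -s)"
ssh-add $USER-awskey.pem
ssh -A ubuntu@$USER-manager.egen.cloud
# then, from the manager, no key file is needed
ssh ubuntu@instance-private-ip-address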
- create Route53 Private Hosted Zone for your VPC
a. Create new Hosted Zone
Domain Name: $USER
Type: Private Hosted Zone
VPC ID: Select Your VPC by ID
b. create A records for your instances using private IP addresses
manager.$USER ---> manager instance's Private IP
node1.$USER ---> node1 instance's Private IP
node2.$USER ---> node2 instance's Private IP
c. Instead of using IPs among instances, you can now use the private DNS names
d. try pinging instances from each other
ping manager.$USER
ping node1.$USER
ping node2.$USER
- create a simple nginx-based Dockerfile with the following content
a. create a new folder $USER-site
mkdir $USER-site
cd $USER-site
nano Dockerfile
b. Add the following to the Dockerfile
FROM nginx
c. execute the following to build the image
docker build -t $USER-site-image .
docker images
d. start containers from the image
docker run -d -p 3000:80 --name $USER-site1 $USER-site-image
docker run -d -p 3001:80 --name $USER-site2 $USER-site-image
e. check browser with localhost:3000 and localhost:3001
- pull an existing Docker image and run containers
a. pull docker image
docker pull salitrapraveen/whalehost
b. start containers
docker run -d -p 3000:80 --name $USER-whale1 salitrapraveen/whalehost
docker run -d -p 3001:80 --name $USER-whale2 salitrapraveen/whalehost
- helpful Docker commands
docker ps                                      # list running containers
docker ps -a                                   # list all containers, including stopped ones
docker images                                  # list local images
docker container prune -f                      # remove all stopped containers
docker image prune -af                         # remove all unused images
docker build -t [name:tag] .                   # build an image from the Dockerfile in the current directory
docker push [name:tag]                         # push an image to a registry
docker tag [imagename] [name:tag]              # add a tag to an existing image
docker run -d -p [hostport]:[containerport] --name [containername] [imagename]   # run a container in the background
docker exec -it [containerid] /bin/bash        # open a shell inside a running container
docker logs [containerid]                      # show a container's logs
docker stop [containerid]                      # stop a running container
docker start [containerid]                     # start a stopped container
docker rm [containerid]                        # remove a container
docker rmi [imageid or imagename]              # remove an image
- create a Dockerfile for an nginx-based static site
a. create the following default.conf alongside the source code
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
b. keep the following Dockerfile with the source code
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY default.conf /etc/nginx/conf.d/default.conf
COPY dist /usr/share/nginx/html
RUN chmod 755 -R /usr/share/nginx/html
c. create and tag the image
docker build -t $USER-site-image .
d. run the containers
docker run -d -p 3000:80 --name $USER-site $USER-site-image
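If the image needs to run on other hosts later (for example the swarm nodes), one option is pushing it to Docker Hub; a sketch assuming you have a Docker Hub account, here called [dockerhub-user]:
docker login
docker tag $USER-site-image [dockerhub-user]/$USER-site-image:latest
docker push [dockerhub-user]/$USER-site-image:latest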
- install docker on the manager node
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
apt-cache policy docker-ce
sudo apt-get install -y docker-ce
sudo systemctl status docker
sudo usermod -aG docker ${USER}   # log out and back in (or re-SSH) for the group change to take effect
- init docker swarm on the manager node
docker swarm init --advertise-addr [manager_private_ip_address]
# for example:
docker swarm init --advertise-addr 172.31.18.18
- try docker swarm commands
docker service ls
docker node ls
- join other nodes to the swarm
a. on the manager, get join token for other worker nodes
docker swarm join-token worker
b. copy the result of the above command, SSH to the other node and execute it
c. come back to the manager and see if the node has joined
docker node ls
- create a docker service
a. on the manager node
docker service create --replicas 1 --name [servicename] [image:tag]
docker service ls
docker service ps [servicename]
docker service inspect [servicename]
docker service logs [servicename]
docker service rm [servicename]
- assign a port to the service
docker service create --replicas 1 --name [servicename] -p [hostport]:[serviceport] [image:tag]
- scale the service
docker service scale [servicename]=[number of instances]
docker service ls
- run the visualizer
a. execute
docker service create \
--name=viz \
--publish=4324:8080/tcp \
--constraint=node.role==manager \
--mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
dockersamples/visualizer
b. open port 4324 in the security group and open it in the browser
- update a docker service
docker service update --image [image:tag] [servicename]
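For a controlled rolling update, additional flags can pace the rollout; a sketch (placeholders in brackets):
docker service update --image [image:tag] --update-parallelism 1 --update-delay 10s [servicename]
docker service ps [servicename]   # watch old tasks being replaced by new ones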