This is a video demo of mounting an FSx drive on AWS between 2 windows-based EC2 instances.
Category: AWS
Outstanding Kubernetes course -Kubernetes Hands-On Deploy by Richard Chesterwood
This is just an amazing course. A no-brainer “just buy it with confidence” purchase. Please see my video on what this course will offer you.
https://www.udemy.com/kubernetes-microservices/
Please watch
Moving the ASP.NET Microservices course application up into a Kubernetes cluster in AWS
My link to the Frank Ozz course (a definite must-purchase) on Udemy is here; his course shows how to deploy his application into a Kubernetes cluster. In these examples, I ported his deployments up to AWS, running Kops as the Kubernetes cluster.
I show 2 examples:
One where you have a Load Balancer for the token server, and one where you expose a port within the K8s cluster itself for the token server (no Load Balancer).
Disclaimer: Whatever cloud platform you use, please make sure to delete your cluster/instances when you’re done IF you’re experimenting. You don’t want to incur unnecessary charges.
Also (VERY IMPORTANT): since this example uses a tokenserver microservice that fronts IdentityServer4, it’s critical that you SSL-terminate your load balancer -or SSL-terminate your Kubernetes Ingress controller, depending on which way you go. You do “not” want to run this (or any) app in production using only HTTP. I have a previous video that shows how to SSL-terminate the ingress controller.
You can use wildcard certs and ACM on AWS for the load balancer(s) -or- a free CA like “Let’s Encrypt” (or any other CA you choose) for your Kubernetes ingress controllers.
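For reference, a TLS-terminated Ingress might look roughly like the sketch below. This is a hypothetical example, not from the videos: the hostnames, service name, and TLS secret are placeholders, and it assumes an NGINX ingress controller with the cert (e.g., from Let’s Encrypt) already stored in a Kubernetes secret.

```yaml
# Hypothetical sketch -- host, service name, and secret are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tokenserver-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - tokenserver.example.com
      secretName: tokenserver-tls   # TLS cert/key, e.g. issued via Let's Encrypt
  rules:
    - host: tokenserver.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tokenserver   # placeholder service name
                port:
                  number: 80
```

With this shape, TLS terminates at the ingress controller and traffic to the backing service travels inside the cluster.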
Having said that above, here are the 2 videos.
Using the LoadBalancer
Using NodePort
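The two exposure styles in the videos above correspond to the two Kubernetes Service types. A minimal sketch (the service/app names and ports are placeholders, not taken from the videos):

```yaml
# Option 1: Service of type LoadBalancer -- Kubernetes (via Kops on AWS)
# provisions an external AWS load balancer for you.
apiVersion: v1
kind: Service
metadata:
  name: tokenserver-lb
spec:
  type: LoadBalancer
  selector:
    app: tokenserver        # placeholder pod label
  ports:
    - port: 80              # port exposed by the load balancer
      targetPort: 5000      # container port (placeholder)
---
# Option 2: Service of type NodePort -- exposes a port on every cluster
# node instead; no external load balancer is created.
apiVersion: v1
kind: Service
metadata:
  name: tokenserver-np
spec:
  type: NodePort
  selector:
    app: tokenserver
  ports:
    - port: 80
      targetPort: 5000
      nodePort: 30080       # must fall in the default 30000-32767 range
```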
A working (but quirky) demo of a Kubernetes Ingress controller in AWS with TLS/SSL termination.
Admittedly, I struggled a little bit in this video due to a browser caching issue and needing to clear out HSTS (which was redirecting http requests over to https due to a previous demo). As an extra, you get to see how I got around that by clearing out HSTS from Chrome 🙂
Please refer to this video to see my “quirky” demo on getting a sample website up and running in a Kubernetes cluster that was SSL/TLS terminated in a Kubernetes Ingress controller. The demo is in AWS and I used KOPS to spin up the cluster.
AWS SageMaker for ML on AWS (one example w/the AWS Linear Learner algorithm)
I created 2 videos based on my exploration that demonstrate AWS SageMaker using the “linear learner” algorithm, which you can read about here:
https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html
BTW, you’ll see this example uses the “mnist.pkl.gz” dataset, which is the globally known MNIST dataset. Info about it can be found here:
https://en.wikipedia.org/wiki/MNIST_database
Part 1 – Setup/Train and Deploy
Part 2 – Tear down/deleting endpoints, model and S3 artifacts.
The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning. It was created by "re-mixing" the samples from NIST's original datasets. The creators felt that since NIST's training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments. Furthermore, the black and white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced grayscale levels.
AWS recommends using SageMaker instead of its original ML service (which is not available for new accounts).
– A simplified ML service that allows you to build and deploy your ML models (using many different out-of-the-box algorithms) on AWS. The built-in algorithms are not pre-trained, so we need to format the training data to fit the model’s input specifications. SageMaker will save the model parameters to S3 once training is completed. You can set up HTTPS endpoints.
– Linear Learner and Factorization Machines algorithms are supported for “classification and regression”, and Seq2Seq for sequence-to-sequence tasks such as machine translation and text summarization. K-Means for clustering (logically grouping data) and Principal Component Analysis (PCA) for dimensionality reduction. XGBoost, DeepAR (time-series forecasting), etc.
Regression – Output prediction is a continuous real value
Classification – Output prediction is a categorical value, such as a binary label (a vegetable or a mineral, for example)
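To make the regression/classification distinction concrete, here is a tiny NumPy sketch on toy data (not SageMaker itself): a least-squares fit produces a continuous prediction, and thresholding that prediction turns it into a binary class. The data, threshold, and function names are made up for illustration.

```python
import numpy as np

# Toy 1-D data: y is roughly 2*x + 1 with a little noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.9])

# Regression: fit weight w and bias b by least squares -> continuous output
A = np.vstack([x, np.ones_like(x)]).T
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(value):
    """Continuous (regression) prediction."""
    return w * value + b

def classify(value, threshold=5.0):
    """Binary (classification) prediction: 1 if the fit crosses the threshold."""
    return 1 if predict(value) >= threshold else 0

print(predict(2.5))              # a continuous real value (about 6)
print(classify(1.0), classify(4.0))  # 0 1 -- categorical binary values
```

The same underlying model backs both outputs; the only difference is whether you report the raw value or a category derived from it, which mirrors how Linear Learner supports both problem types.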
– Uses services like AWS Glue:
AWS Glue is a fully managed ETL (extract, transform, and load) service that makes it simple and cost-effective to categorize your data, clean it, enrich it, and move it reliably between various data stores. You can use it to move data from stores such as Redshift and Aurora as input to your ML models.
– Has many built-in algorithms, so you as the developer don’t have to write the model code yourself. Each of the out-of-the-box “models” is hosted in a Docker container on AWS.
– Uses open source Jupyter (Python) notebooks, which are used by many data scientists, to load input data and train the models.
– Once you train your models, you can create “endpoints” where your deployed model can be accessed programmatically by your software, etc.
– To use a built-in algorithm:
1) Retrieve the training data (Explore and clean the data)
2) Format and serialize the data (put it in the format that the algorithm wants to see) and then upload it to S3
3) Train with the built-in algorithm (stored in containers): set up the estimator and train with the input data.
4) Deploy the model which creates an endpoint configuration and endpoint for the prediction responses.
5) Use the endpoint for inference.
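As an illustration of step 2 above, Linear Learner can accept CSV training input with the label in the first column and no header row. A minimal sketch of that formatting (the toy features/labels are made up, and the actual S3 upload is omitted):

```python
import io
import numpy as np

# Toy training set: 3 examples, 2 features each, with binary labels
features = np.array([[0.1, 0.2],
                     [0.8, 0.9],
                     [0.4, 0.5]])
labels = np.array([0, 1, 0])

# CSV format for training: label first, then features, no header row
rows = np.column_stack([labels, features])
buf = io.StringIO()
np.savetxt(buf, rows, delimiter=",", fmt="%g")
csv_payload = buf.getvalue()
print(csv_payload)
# 0,0.1,0.2
# 1,0.8,0.9
# 0,0.4,0.5
```

From here you would upload the payload to S3 (e.g., with boto3) and point the training job’s input channel at that location.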
– The boto3 Python SDK offers access to other AWS services such as S3, EC2, etc.