GCP Workshop
Hello everyone!
I recently completed a two-day workshop on Google Cloud Platform (GCP) by LinuxWorld Informatics Pvt Ltd under the great mentor Vimal Daga sir. As always, it was a wonderful experience under his guidance. In just eight hours, Vimal sir shared a lot of knowledge with us about GCP so that we can utilize the power and resources provided by Google. He started with basics like what the cloud is and why we should outsource our infrastructure, making everyone comfortable with the cloud, and then slowly moved towards more advanced concepts. As I was part of his Hybrid Multi Cloud training, I could relate it to AWS and was able to understand the concepts easily. I got to learn about multiple GCP services, including:
> Google Compute Engine (GCE) > Google Kubernetes Engine (GKE) > Google App Engine (GAE)
> VPC and network peering > IAM > Cloud SQL
> Storage services > Load Balancer
Here is a summary of what I learned from the workshop:
Day 1
- We need an OS to run a program, and to install an OS we require compute and storage. We can purchase and use our own resources, but that demands more cost and time, and it's risky.
- So we can outsource our infrastructure with cloud computing; a cloud can be private, public, or hybrid.
- GCP provides public cloud resources on demand; for every resource Google has a product: for compute it has GCE, for networking VPC.
- There are multiple zones in a region, which is important for disaster recovery.
- Ways to connect with the Google API: web UI (console), CLI, or a program.
- Creating projects on GCP is compulsory in order to manage resources and quotas.
- Created a project from the command line: gcloud projects create projectid --name=projectname
- Enabled API services: gcloud services enable <service> --project projectid
- Created instances after enabling the Compute Engine API.
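The project and instance steps above can be sketched end to end; a minimal example, where the project ID, instance name, zone, and machine type are all placeholder choices of mine, not from the workshop:

```shell
# Create a project (project IDs must be globally unique; "my-gcp-demo" is a placeholder)
gcloud projects create my-gcp-demo --name="My GCP Demo"

# Enable the Compute Engine API for that project
gcloud services enable compute.googleapis.com --project my-gcp-demo

# Launch an instance once the API is enabled
gcloud compute instances create demo-vm \
    --project my-gcp-demo \
    --zone us-central1-a \
    --machine-type e2-micro \
    --image-family debian-12 \
    --image-project debian-cloud
```

Note that billing must also be linked to the project before Compute Engine resources can actually be created.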
- Virtual Private Cloud (VPC) provides Network as a Service (NaaS), used for creating labs or subnets.
- Launching an OS inside a subnet is compulsory.
- Created two VPCs in two different projects with different regions, created an instance in each VPC, set firewall rules for the VPCs, and used network peering to connect both OSes over GCP's private, secure, and fast network.
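The VPC and peering steps could look roughly like this; a sketch with placeholder names and IP ranges (the peer project would need a mirror-image peering command for the connection to become active):

```shell
# Create a custom-mode VPC and a subnet in it
gcloud compute networks create demo-vpc --subnet-mode=custom
gcloud compute networks subnets create demo-subnet \
    --network demo-vpc --region us-central1 --range 10.0.1.0/24

# Firewall rule: allow SSH into instances in this VPC
gcloud compute firewall-rules create demo-allow-ssh \
    --network demo-vpc --allow tcp:22

# Peer this VPC with a VPC in another project
gcloud compute networks peerings create demo-peering \
    --network demo-vpc \
    --peer-project other-project-id \
    --peer-network other-vpc
```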
Day 2
- Connected to the GCP account through the CLI: installed the gcloud SDK and launched an instance from the CLI.
- Established SSH connectivity with the instance; on Windows, gcloud uses PuTTY by default.
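The SSH step can also be done entirely through gcloud, which generates and manages the key pair for you; a sketch using the placeholder instance name and zone from above:

```shell
# Open an interactive SSH session to the instance
gcloud compute ssh demo-vm --zone us-central1-a

# Or run a single remote command instead of an interactive shell
gcloud compute ssh demo-vm --zone us-central1-a --command "hostname"
```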
- We create multiple projects for isolation, to manage quotas, and to provide access to users through IAM.
- Container technology: Docker. Docker by itself doesn't have the capability to manage and monitor containers, so we use Kubernetes (k8s), a management tool.
- Creating our own master-slave cluster would be painful and we would have to manage it ourselves, so we use the managed service GKE (Google Kubernetes Engine), which provides KaaS (Kubernetes as a Service).
- Created a k8s cluster spanning all zones in the region, and created a node pool.
- kubectl is the program used to connect to the master: run the gcloud container clusters get-credentials command, which fetches the cluster credentials and configures kubectl, and then we can run kubectl commands.
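The GKE steps above can be sketched as follows; cluster name, region, and node count are placeholder assumptions:

```shell
# Create a regional GKE cluster (nodes spread across the region's zones)
gcloud container clusters create demo-cluster \
    --region us-central1 --num-nodes 1

# Fetch credentials so kubectl can talk to the cluster's control plane
gcloud container clusters get-credentials demo-cluster --region us-central1

# kubectl now works against the cluster
kubectl get nodes
```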
- Created a deployment, which uses the ReplicaSet controller to monitor pods and maintain our desired state.
- Exposed the deployment with the Kubernetes LoadBalancer service type; on GKE this automatically provisions a GCP load balancer.
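A minimal sketch of the deployment-and-expose flow, assuming a generic nginx image rather than whatever image was used in the workshop:

```shell
# Create a deployment; k8s creates a ReplicaSet to keep the desired pods running
kubectl create deployment web --image=nginx

# Expose it as a LoadBalancer service; GKE provisions a GCP load balancer for it
kubectl expose deployment web --type=LoadBalancer --port=80

# Watch until the service is assigned an EXTERNAL-IP
kubectl get service web --watch
```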
- Storage in GCP: Cloud Storage provides object storage like S3 in AWS; for databases we used the managed database service (Cloud SQL).
- Created a MySQL instance in Cloud SQL, edited the instance and its network settings, and created a database inside it.
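The Cloud SQL steps can be sketched like this; instance name, tier, region, password, and database name are all placeholder assumptions:

```shell
# Create a managed MySQL instance (takes a few minutes)
gcloud sql instances create demo-mysql \
    --database-version=MYSQL_8_0 \
    --tier=db-f1-micro \
    --region=us-central1

# Set the root password, then create a database inside the instance
gcloud sql users set-password root --host=% \
    --instance=demo-mysql --password=change-me
gcloud sql databases create wordpress --instance=demo-mysql
```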
- Created a setup where the client connects to the load balancer, the LB forwards to WordPress running in the k8s cluster, and WordPress gets its storage from the MySQL database created earlier.
- We use IAM for creating different roles with different powers, like owner, viewer, and editor.
- Added members to a project, gave each a role, and optionally added conditions.
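Granting a member a role on a project can be done from the CLI as well; a sketch with a placeholder user email and project ID:

```shell
# Grant a user the Viewer role on the project
gcloud projects add-iam-policy-binding my-gcp-demo \
    --member="user:someone@example.com" \
    --role="roles/viewer"

# Inspect the resulting policy
gcloud projects get-iam-policy my-gcp-demo
```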
- GAE (Google App Engine) provides PaaS and is useful for developers. Created an app and then deployed code using the gcloud app deploy command from the directory containing the code. It also creates versions, so we can roll back. Use gcloud app deploy --project projectid to deploy to a specified project.
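A rough sketch of the App Engine flow; the project ID and region are placeholders, and the rollback line assumes a hypothetical earlier version name:

```shell
# One-time: create the App Engine application in the project
gcloud app create --project my-gcp-demo --region us-central

# From the directory containing app.yaml: deploy (each deploy creates a new version)
gcloud app deploy --project my-gcp-demo

# Roll back by routing all traffic to an earlier version
gcloud app services set-traffic default --splits OLD_VERSION=1
```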
- Cloud Functions provides Function as a Service (FaaS), like AWS Lambda.
Very thankful to Vimal Daga sir for this amazing workshop.
Thank You So Much !