diff --git a/page_content/compute_engine_overview.txt b/page_content/compute_engine_overview.txt new file mode 100644 index 0000000..5ce5c4b --- /dev/null +++ b/page_content/compute_engine_overview.txt @@ -0,0 +1,177 @@
Compute Engine is an infrastructure as a service (IaaS) product that offers self-managed virtual machine (VM) instances and bare metal instances. Compute Engine offers VMs with a KVM hypervisor, operating systems for both Linux and Windows, and local and durable storage options. You can configure and control Compute Engine resources using the Google Cloud console, the Google Cloud CLI, or a REST-based API. You can also use a variety of programming languages available with Google's
.

Here are some of the benefits of using Compute Engine:

Extensibility: Compute Engine integrates with Google Cloud technologies such as Cloud Storage, Google Kubernetes Engine, and BigQuery to extend beyond basic computational capability and create more complex and sophisticated applications.

Scalability: Scale the number of compute resources as needed without having to manage your own infrastructure. This is useful for businesses that experience sudden increases in traffic, because you can quickly add more instances to handle the increase and remove the instances after they are no longer needed.

Reliability: Google's infrastructure is highly reliable, with a 99.9% uptime guarantee.

Cost-effectiveness: Compute Engine offers a variety of pricing options to fit your budget. Also, you only pay for the resources that you use, and there are no up-front costs.

What Compute Engine provides

Compute Engine provides flexibility so that you can run a wide range of applications and workloads that support your needs. From batch processing to web serving to high-performance computing, you can configure Compute Engine to meet your needs.
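As noted above, you can manage Compute Engine resources with the Google Cloud CLI. As a minimal sketch only (the instance name, zone, machine type, and image below are illustrative example values, not recommendations), creating a VM instance looks like this:

```
# Illustrative sketch: create a small VM instance with the gcloud CLI.
# example-instance, the zone, machine type, and image are placeholder values.
gcloud compute instances create example-instance \
    --zone=us-central1-a \
    --machine-type=e2-small \
    --image-family=debian-12 \
    --image-project=debian-cloud
```

The same operation can be performed from the Google Cloud console or through the REST API.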
Location selection

Google offers worldwide regions for you to deploy Compute Engine resources. You can choose a region that best fits the requirements of your workload, based on considerations such as:

Region-specific restrictions

User latency by region

Latency requirements of your application

Amount of control over latency

Balance between low latency and simplicity

For more information about regions and zones, see
.

Compute Engine machine types

Compute Engine provides a comprehensive set of machine families, each containing machine types to choose from when you create a compute instance. Each machine family comprises machine series and predefined machine types within each series.

Compute Engine offers general-purpose, compute-optimized, storage-optimized, memory-optimized, and accelerator-optimized machine families. If a preconfigured, general-purpose machine type doesn't meet your needs, you can create a custom machine type with customized CPU and memory resources for some of the machine series.

For more information, see the
.

Operating systems

Compute Engine provides many preconfigured public operating system images for both Linux and Windows. Most public images are provided for no additional cost, but there are some
 for which you are billed. You are not billed for importing custom images, but you will incur an
 while you keep the custom image in your project.

Storage options

You can choose from several block storage options, including Persistent Disk, Google Cloud Hyperdisk, and Local SSD:

Persistent Disk: High-performance and redundant network storage. Each volume is striped across hundreds of physical disks.

Hyperdisk: The fastest redundant network storage for Compute Engine, with configurable performance and volumes that can be resized dynamically. Each volume is striped across hundreds of physical disks.
You can also reduce costs and disk management complexity by purchasing capacity and performance in advance with Hyperdisk Storage Pools. Hyperdisk Storage Pools provide an aggregate amount of capacity and performance that you can share among the disks created in the pool.

Local SSD: Physical drives that are attached directly to the same server as a compute instance. They offer better performance, but are not durable. If the instance is shut down, then the Local SSD disks are deleted.

Each option has unique price and performance characteristics. For cost comparisons, see
. For more information about disk types, see
.

What's next

See the
 and
 that are available for your use.

Read an
.

Learn about the various
.
\ No newline at end of file diff --git a/page_content/deploy_a_function.txt b/page_content/deploy_a_function.txt new file mode 100644 index 0000000..71eb788 --- /dev/null +++ b/page_content/deploy_a_function.txt @@ -0,0 +1,200 @@
Deploy a function

This guide shows you how to deploy a function from source code using the `gcloud functions` command. To learn how to deploy a function using the `gcloud run` command, see Deploy a Cloud Run function using the gcloud CLI.

The deployment process takes your source code and configuration settings and builds a runnable image that Cloud Run functions manages automatically in order to handle requests to your function.

Deployment basics

For an introduction to the types of functions you can deploy, see Write Cloud Run functions.

Users deploying functions must have the Cloud Functions Developer IAM role or a role that includes the same permissions. See also Additional configuration for deployment.

In the Google Cloud console, activate Cloud Shell.
Activate Cloud Shell

At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt.
Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

Use the `gcloud functions deploy` command to deploy a function:

```
gcloud functions deploy YOUR_FUNCTION_NAME \
  --region=YOUR_REGION \
  --runtime=YOUR_RUNTIME \
  --source=YOUR_SOURCE_LOCATION \
  --entry-point=YOUR_CODE_ENTRYPOINT \
  TRIGGER_FLAGS
```

The first argument, `YOUR_FUNCTION_NAME`, is a name for your deployed function. The function name must start with a letter followed by up to 62 letters, numbers, hyphens, or underscores, and must end with a letter or a number. In the name of the Cloud Run service that is created for your function, underscores are replaced with hyphens and uppercase letters are converted to lowercase. For example, `Function_1` will be given the name `function-1` in Cloud Run.

Note: Run `gcloud config set functions/gen2 true` to make all future deployments use 2nd gen by default.

The `--region` flag specifies the region in which to deploy your function. See Locations for a list of regions supported by Cloud Run.

The `--runtime` flag specifies which language runtime your function uses. See Runtime support for a list of supported runtime IDs.

The `--source` flag specifies the location of your function source code.

The `--entry-point` flag specifies the entry point to your function in your source code. This is the code that will be executed when your function runs. The value of this flag must be a function name or fully-qualified class name that exists in your source code. For more information, see Function entry point.

To specify the trigger for your function, additional flags (represented as `TRIGGER_FLAGS` above) are required, depending on the trigger you want to use:

Trigger flags
Trigger description

`--trigger-http`
Trigger the function with an HTTP(S) request.
`--trigger-topic=YOUR_PUBSUB_TOPIC`
Trigger the function when a message is published to the specified Pub/Sub topic.

`--trigger-bucket=YOUR_STORAGE_BUCKET`
Trigger the function when an object is created or overwritten in the specified Cloud Storage bucket.

`--trigger-event-filters=EVENTARC_EVENT_FILTERS`
Trigger the function with Eventarc when an event that matches the specified filters occurs.

For a complete reference on the deployment command and its flags, see the `gcloud functions deploy` documentation. For more details about `gcloud functions deploy` configuration flags, refer to the Cloud Run documentation.

When deployment finishes successfully, functions appear with a green check mark in the Cloud Run overview page in the Google Cloud console.

The initial deployment of a function may take several minutes, while the underlying infrastructure is provisioned. Redeploying an existing function is faster, and incoming traffic is automatically migrated to the newest version.

Note: Instances provisioned with a previous version of a function may continue running and processing traffic for several minutes after a new deployment has finished. This ensures that traffic sent to your function while a deployment is in progress isn't dropped. Also note that when a deployment fails, if there is a previous version of the function, it will continue to be available in most cases.

HTTP endpoint URL

When you create a function with the `gcloud functions` command or the Cloud Functions v2 API, the function has a `cloudfunctions.net` HTTP endpoint URL by default. If you take this function and deploy it on Cloud Run, your function also receives a `run.app` HTTP endpoint URL. However, functions created in Cloud Run won't have a `cloudfunctions.net` HTTP endpoint URL. A function's `cloudfunctions.net` URL and `run.app` URL behave in exactly the same way: they are interchangeable, and both are used to trigger your function.
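The service-name rule described earlier (underscores become hyphens and uppercase letters are lowercased) can be illustrated in shell. The `tr` mapping below is only a sketch of the documented transformation, not the mechanism Cloud Run itself uses:

```shell
# Map a function name to its Cloud Run service name as described above:
# uppercase letters are lowercased and underscores become hyphens.
fn_name="Function_1"
service_name="$(printf '%s' "$fn_name" | tr 'A-Z_' 'a-z-')"
echo "$service_name"   # prints: function-1
```

This matches the `Function_1` to `function-1` example given in the deployment steps.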
+Terraform examples +For examples about how to deploy functions using Terraform, see the +Terraform HTTP example + and +Terraform Pub/Sub example +. +Configure networking +Functions created using the +Cloud Functions v2 API +(for example, by using +`gcloud functions` +, the REST API, or Terraform) can be +managed with the +Cloud Run Admin API +as well as the Cloud Functions v2 API. +Note: + If you created a Cloud Run function using +`gcloud run` + commands or the Cloud Run Admin API, you can't manage that function +with +`gcloud functions` + commands or the Cloud Functions v2 API. +To learn more about managing networks for functions, including how to route +VPC network traffic +, see +Best practices for Cloud Run networking +. +Learn how to deploy Cloud Run functions on Cloud Run +Deploying functions on Cloud Run is similar to the steps described in +this document, but with some added advantages: +You can use the Google Cloud console, as well as the gcloud CLI +( +`gcloud run deploy` +). +The steps for specifying triggers are slightly different. To learn more, see +triggers and retries +and +examples of function triggers +. +Cloud Run offers a broader array of configuration options: +Minimum instances +Concurrency +Container configuration +CPU limits +Memory limits +Request timeout +Secrets +Environment variables +Execution environment +HTTP/2 +Service accounts +Cloud SQL connections +Session affinity and traffic splitting +Tags +Networking \ No newline at end of file diff --git a/page_content/deploy_to_compute_engine.txt b/page_content/deploy_to_compute_engine.txt new file mode 100644 index 0000000..0b5b896 --- /dev/null +++ b/page_content/deploy_to_compute_engine.txt @@ -0,0 +1,2882 @@ +This guide explains how to perform zero-downtime blue/green deployments on +Compute Engine Managed Instance Groups (MIGs) using Cloud Build and +Terraform. 
Cloud Build enables you to automate a variety of developer processes, including building and deploying applications to various Google Cloud runtimes such as Compute Engine,
,
,
and
.
 enable you to operate applications on multiple identical Virtual Machines (VMs). You can make your workloads scalable and highly available by taking advantage of automated MIG services, including autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating. Using the blue/green continuous deployment model, you will learn how to gradually transfer user traffic from one MIG (blue) to another MIG (green), both of which are running in production.

Design overview

The following diagram shows the blue/green deployment model used by the code sample described in this document:

At a high level, this model includes the following components:

Two Compute Engine VM pools: Blue and Green.

Three external HTTP(S) load balancers:
A Blue/Green load balancer that routes traffic from end users to either the Blue or the Green pool of VM instances.
A Blue load balancer that routes traffic from QA engineers and developers to the Blue VM instance pool.
A Green load balancer that routes traffic from QA engineers and developers to the Green VM instance pool.

Two sets of users:
End users who have access to the Blue/Green load balancer, which points them to either the Blue or the Green instance pool.
QA engineers and developers who require access to both sets of pools for development and testing purposes. They can access both the Blue and the Green load balancers, which route them to the Blue instance pool and the Green instance pool respectively.

The Blue and the Green VM pools are implemented as Compute Engine MIGs, and external IP addresses are routed to the VMs in the MIGs through external HTTP(S) load balancers. The code sample described in this document uses Terraform to configure this infrastructure.
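Once the stack is up, you can observe which color pool the Blue/Green (splitter) load balancer is currently serving by fetching its public address a few times. This is an illustrative sketch only; `SPLITTER_IP` is a placeholder standing in for the address that the deployment pipeline prints:

```
# Illustrative only: SPLITTER_IP is a placeholder for the splitter load
# balancer's public IP address reported at the end of the apply pipeline.
SPLITTER_IP="203.0.113.10"
for i in 1 2 3; do
  # Each response comes from whichever color pool is currently active.
  curl -s "http://${SPLITTER_IP}/" | head -n 1
done
```

The Blue and Green load balancers can be probed the same way to compare the two pools side by side.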
+The following diagram illustrates the developer operations that happens in the +deployment: + +In the diagram above, the red arrows represent the bootstrapping flow that +occurs when you set up the deployment infrastructure for the first time, and the +blue arrows represent the GitOps flow that occurs during every deployment. +To set up this infrastructure, you run a setup script that starts the bootstrap +process and sets up the components for the GitOps flow. +The setup script executes a Cloud Build pipeline that performs the +following operations: +Creates a repository in +named +copy-of-gcp-mig-simple + and copies the source code from the GitHub +sample repository to the repository in Cloud Source Repositories. +Creates two + named +apply + and +destroy +. +Note: + Cloud Build supports first-class integration with GitHub, +GitLab, and Bitbucket. Cloud Source Repositories is used in this sample for +demonstration purposes. +Caution: + Effective June 17, 2024, Cloud Source Repositories isn't available + to new customers. If your organization hasn't + previously used Cloud Source Repositories, you can't enable the API or use + Cloud Source Repositories. New projects not connected to an organization can't enable the + Cloud Source Repositories API. Organizations that have used Cloud Source Repositories prior to + June 17, 2024 are not affected by this change. +The +apply + trigger is attached to a Terraform file named +main.tfvars + in the +Cloud Source Repositories. This file contains the Terraform variables representing +the blue and the green load balancers. +To set up the deployment, you update the variables in the +main.tfvars + file. 
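The exact variables in main.tfvars come from the sample repository. Purely as a hypothetical sketch (the variable names below are illustrative, not taken from the repo), a tfvars file that designates the active color might look like:

```hcl
# Hypothetical sketch of infra/main.tfvars; variable names are illustrative.
MIG_VER_BLUE     = "v1"    # version label deployed to the blue MIG
MIG_VER_GREEN    = "v1"    # version label deployed to the green MIG
MIG_ACTIVE_COLOR = "blue"  # color the splitter load balancer serves to end users
```

Changing the active color value and committing the file is what drives a blue/green switchover in this flow.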
+The +apply + trigger runs a Cloud Build pipeline that executes +tf_apply + and performs the following operations: +Creates two Compute Engine MIGs (one for green and one for blue), four +Compute Engine VM instances (two for the green MIG and two for the blue +MIG), the three load balancers (blue, green, and the splitter), and three +public IP addresses. +Prints out the IP addresses that you can use to see the deployed +applications in the blue and the green instances. +The destroy trigger is triggered manually to delete all the resources created by +the apply trigger. +Objectives +Use Cloud Build and Terraform to set up external HTTP(S) load +balancers with Compute Engine VM instance group backends. +Perform blue/green deployments on the VM instances. +Costs + + In this document, you use the following billable components of Google Cloud: + + + + + + To generate a cost estimate based on your projected usage, + use the +. + + New Google Cloud users might be eligible for a +. + +When you finish the tasks that are described in this document, you can avoid + continued billing by deleting the resources that you created. For more information, see +. +Before you begin + + + + Sign in to your Google Cloud account. If you're new to + Google Cloud, + to evaluate how our products perform in + real-world scenarios. New customers also get $300 in free credits to + run, test, and deploy workloads. + + + + the Google Cloud CLI. + +If you're using an external identity provider (IdP), you must first + +. + + To + the gcloud CLI, run the following command: + +gcloud + +init +. +Note +: If you don't plan to keep the + resources that you create in this procedure, create a project instead of + selecting an existing project. After you finish these steps, you can + delete the project, removing all resources associated with the project. +Create a Google Cloud project: +gcloud projects create +PROJECT_ID +Replace +PROJECT_ID + with a name for the Google Cloud project you are creating. 
Select the Google Cloud project that you created:
gcloud config set project PROJECT_ID
Replace PROJECT_ID with your Google Cloud project name.

Trying it out

Run the setup script from the Google code sample repository:
bash <(curl https://raw.githubusercontent.com/GoogleCloudPlatform/cloud-build-samples/main/mig-blue-green/setup.sh)

When the setup script asks for user consent, enter yes.

The script finishes running in a few seconds.

In the Google Cloud console, open the Cloud Build Build history page:

Click on the latest build.

You see the Build details page, which shows a Cloud Build pipeline with three build steps: the first build step creates a repository in Cloud Source Repositories, the second step clones the contents of the sample repository in GitHub to Cloud Source Repositories, and the third step adds two build triggers.

Open Cloud Source Repositories:

From the repositories list, click copy-of-gcp-mig-simple.

In the History tab at the bottom of the page, you'll see one commit with the description A copy of https://github.com/GoogleCloudPlatform/cloud-build-samples.git made by Cloud Build to create a repository named copy-of-gcp-mig-simple.
Open the Cloud Build Triggers page:

You'll see two build triggers named apply and destroy. The apply trigger is attached to the infra/main.tfvars file in the main branch. This trigger is executed anytime the file is updated. The destroy trigger is a manual trigger.

To start the deploy process, update the infra/main.tfvars file:

In your terminal window, create and navigate into a folder named deploy-compute-engine:
mkdir ~/deploy-compute-engine
cd ~/deploy-compute-engine

Clone the copy-of-gcp-mig-simple repo:
gcloud source repos clone copy-of-mig-blue-green

Navigate into the cloned directory:
cd ./copy-of-mig-blue-green

Update infra/main.tfvars to replace blue with green:
sed -i '' -e 's/blue/green/g' infra/main.tfvars

Add the updated file:
git add .

Commit the file:
git commit -m "Promote green"

Push the file:
git push

Making changes to infra/main.tfvars triggers the execution of the apply trigger, which starts the deployment.

Open Cloud Source Repositories:

From the repositories list, click copy-of-gcp-mig-simple.

You'll see the commit with the description Promote green in the History tab at the bottom of the page.

To view the execution of the apply trigger, open the Build history page in the Google Cloud console:

Open the Build details page by clicking on the first build.

You will see the apply trigger pipeline with two build steps. The first build step executes Terraform apply to create the Compute Engine and load balancing resources for the deployment. The second build step prints out the IP address where you can see the application running.

Open the IP address corresponding to the green MIG in a browser.
You'll see +a screenshot similar to the following showing the deployment: + +Go to the Compute Engine +Instance group + page to see the Blue and the +Green instance groups: +Open the +VM instances + page to see the four VM instances: +Open the +External IP addresses + page to see the three load balancers: +Understanding the code +Source code for this code sample includes: +Source code related to the setup script. +Source code related to the Cloud Build pipelines. +Source code related to the Terraform templates. +Setup script +setup.sh + is the setup script that runs the bootstrap process and creates the +components for the blue/green deployment. The script performs the following +operations: +Enables the Cloud Build, Resource Manager, +Compute Engine, and Cloud Source Repositories APIs. +Grants the +roles/editor + IAM role to the +Cloud Build service account in your project. This role is +required for Cloud Build to create and set up the necessary +GitOps components for the deployment. +Grants the +roles/source.admin + IAM role to the +Cloud Build service account in your project. This role is +required for the Cloud Build service account to create the +Cloud Source Repositories in your project and clone the contents of the sample +GitHub repository to your Cloud Source Repositories. +Generates a Cloud Build pipeline named +bootstrap.cloudbuild.yaml + inline, that: +Creates a new repository in Cloud Source Repositories. +Copies the source code from the sample GitHub repository to the +new repository in Cloud Source Repositories. +Creates the apply and destroy build triggers. 
```
set -e

BLUE='\033[1;34m'
RED='\033[1;31m'
GREEN='\033[1;32m'
NC='\033[0m'

echo -e "\n${GREEN}######################################################"
echo -e "# #"
echo -e "# Zero-Downtime Blue/Green VM Deployments Using #"
echo -e "# Managed Instance Groups, Cloud Build & Terraform #"
echo -e "# #"
echo -e "######################################################${NC}\n"

echo -e "\nSTARTED ${GREEN}setup.sh:${NC}"
echo -e "\nIt's ${RED}safe to re-run${NC} this script to ${RED}recreate${NC} all resources.\n"

echo "> Checking GCP CLI tool is installed"
gcloud --version > /dev/null 2>&1

readonly EXPLICIT_PROJECT_ID="$1"
readonly EXPLICIT_CONSENT="$2"

if [ -z "$EXPLICIT_PROJECT_ID" ]; then
  echo "> No explicit project id provided, trying to infer"
  PROJECT_ID="$(gcloud config get-value project)"
else
  PROJECT_ID="$EXPLICIT_PROJECT_ID"
fi

if [ -z "$PROJECT_ID" ]; then
  echo "ERROR: GCP project id was not provided as parameter and could not be inferred"
  exit 1
else
  readonly PROJECT_NUM="$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')"
  if [ -z "$PROJECT_NUM" ]; then
    echo "ERROR: GCP project number could not be determined"
    exit 1
  fi
  echo -e "\nYou are about to:"
  echo -e " * modify project ${RED}${PROJECT_ID}/${PROJECT_NUM}${NC}"
  echo -e " * ${RED}enable${NC} various GCP APIs"
  echo -e " * make Cloud Build ${RED}editor${NC} of your project"
  echo -e " * ${RED}execute${NC} Cloud Builds and Terraform plans to create"
  echo -e " * ${RED}4 VMs${NC}, ${RED}3 load balancers${NC}, ${RED}3 public IP addresses${NC}"
  echo -e " * incur ${RED}charges${NC} in your billing account as a result\n"
fi

if [ "$EXPLICIT_CONSENT" == "yes" ]; then
  echo "Proceeding under explicit consent"
  readonly CONSENT="$EXPLICIT_CONSENT"
else
  echo -e "Enter ${BLUE}'yes'${NC} if you want to proceed:"
  read CONSENT
fi

if [ "$CONSENT" != "yes" ]; then
  echo -e "\nERROR: Aborted by user"
  exit 1
else
  echo -e "\n......................................................"
  echo -e "\n> Received user consent"
fi

#
# Executes action with one randomly delayed retry.
#
function do_with_retry {
  COMMAND="$@"
  echo "Trying $COMMAND"
  (eval $COMMAND && echo "Success on first try") || ( \
    echo "Waiting few seconds to retry" && \
    sleep 10 && \
    echo "Retrying $COMMAND" && \
    eval $COMMAND \
  )
}

echo "> Enabling required APIs"
# Some of these can be enabled later with Terraform, but I personally
# prefer to do all API enablement in one place with gcloud.
gcloud services enable \
  --project=$PROJECT_ID \
  cloudbuild.googleapis.com \
  cloudresourcemanager.googleapis.com \
  compute.googleapis.com \
  sourcerepo.googleapis.com \
  --no-user-output-enabled \
  --quiet

echo "> Adding Cloud Build to roles/editor"
gcloud projects add-iam-policy-binding \
  "$PROJECT_ID" \
  --member="serviceAccount:$PROJECT_NUM@cloudbuild.gserviceaccount.com" \
  --role='roles/editor' \
  --condition=None \
  --no-user-output-enabled \
  --quiet

echo "> Adding Cloud Build to roles/source.admin"
gcloud projects add-iam-policy-binding \
  "$PROJECT_ID" \
  --member="serviceAccount:$PROJECT_NUM@cloudbuild.gserviceaccount.com" \
  --condition=None \
  --role='roles/source.admin' \
  --no-user-output-enabled \
  --quiet

echo "> Configuring bootstrap job"
rm -rf "./bootstrap.cloudbuild.yaml"
cat << 'EOT_BOOT' > "./bootstrap.cloudbuild.yaml"
tags:
- "mig-blue-green-bootstrapping"
steps:
- id: create_new_cloud_source_repo
  name: "gcr.io/cloud-builders/gcloud"
  script: |
    #!/bin/bash
    set -e
    echo "(Re)Creating source code repository"
    gcloud source repos delete \
      "copy-of-mig-blue-green" \
      --quiet || true
    gcloud source repos create \
      "copy-of-mig-blue-green" \
      --quiet
- id: copy_demo_source_into_new_cloud_source_repo
  name: "gcr.io/cloud-builders/gcloud"
  env:
  - "PROJECT_ID=$PROJECT_ID"
  - "PROJECT_NUMBER=$PROJECT_NUMBER"
  script: |
    #!/bin/bash
    set -e
    readonly GIT_REPO="https://github.com/GoogleCloudPlatform/cloud-build-samples.git"
    echo "Cloning demo source repo"
    mkdir /workspace/from/
    cd /workspace/from/
    git clone $GIT_REPO ./original
    cd ./original
    echo "Cloning new empty repo"
    mkdir /workspace/to/
    cd /workspace/to/
    gcloud source repos clone \
      "copy-of-mig-blue-green"
    cd ./copy-of-mig-blue-green
    echo "Making a copy"
    cp -r /workspace/from/original/mig-blue-green/* ./
    echo "Setting git identity"
    git config user.email \
      "$PROJECT_NUMBER@cloudbuild.gserviceaccount.com"
    git config user.name \
      "Cloud Build"
    echo "Commit & push"
    git add .
    git commit \
      -m "A copy of $GIT_REPO"
    git push
- id: add_pipeline_triggers
  name: "gcr.io/cloud-builders/gcloud"
  env:
  - "PROJECT_ID=$PROJECT_ID"
  script: |
    #!/bin/bash
    set -e
    echo "(Re)Creating destroy trigger"
    gcloud builds triggers delete "destroy" --quiet || true
    gcloud builds triggers create manual \
      --name="destroy" \
      --repo="https://source.developers.google.com/p/$PROJECT_ID/r/copy-of-mig-blue-green" \
      --branch="master" \
      --build-config="pipelines/destroy.cloudbuild.yaml" \
      --repo-type=CLOUD_SOURCE_REPOSITORIES \
      --quiet
    echo "(Re)Creating apply trigger"
    gcloud builds triggers delete "apply" --quiet || true
    gcloud builds triggers create cloud-source-repositories \
      --name="apply" \
      --repo="copy-of-mig-blue-green" \
      --branch-pattern="master" \
      --build-config="pipelines/apply.cloudbuild.yaml" \
      --included-files="infra/main.tfvars" \
      --quiet
EOT_BOOT

echo "> Waiting API enablement propagation"
do_with_retry "(gcloud builds list --project "$PROJECT_ID" --quiet && gcloud compute instances list --project "$PROJECT_ID" --quiet && gcloud source repos list --project "$PROJECT_ID" --quiet) > /dev/null 2>&1" > /dev/null 2>&1

echo "> Executing bootstrap job"
gcloud beta builds submit \
  --project "$PROJECT_ID" \
  --config ./bootstrap.cloudbuild.yaml \
  --no-source \
  --no-user-output-enabled \
  --quiet

rm ./bootstrap.cloudbuild.yaml

echo -e "\n${GREEN}All done. Now you can:${NC}"
echo -e " * manually run 'apply' and 'destroy' triggers to manage deployment lifecycle"
echo -e " * commit change to 'infra/main.tfvars' and see 'apply' pipeline trigger automatically"
echo -e "\n${GREEN}Few key links:${NC}"
echo -e " * Dashboard: https://console.cloud.google.com/home/dashboard?project=$PROJECT_ID"
echo -e " * Repo: https://source.cloud.google.com/$PROJECT_ID/copy-of-mig-blue-green"
echo -e " * Cloud Build Triggers: https://console.cloud.google.com/cloud-build/triggers;region=global?project=$PROJECT_ID"
echo -e " * Cloud Build History: https://console.cloud.google.com/cloud-build/builds?project=$PROJECT_ID"
echo -e "\n............................."
echo -e "\n${GREEN}COMPLETED!${NC}"
```

Cloud Build pipelines

apply.cloudbuild.yaml and destroy.cloudbuild.yaml are the Cloud Build config files that the setup script uses to set up the resources for the GitOps flow. apply.cloudbuild.yaml contains two build steps:

The tf_apply build step, which calls the function tf_install_in_cloud_build_step to install Terraform and then calls tf_apply to create the resources used in the GitOps flow. The functions tf_install_in_cloud_build_step and tf_apply are defined in bash_utils.sh, and the build step uses the source command to call them.

The describe_deployment build step, which calls the function describe_deployment to print out the IP addresses of the load balancers.

destroy.cloudbuild.yaml calls tf_destroy, which deletes all the resources created by tf_apply.
+The functions +tf_install_in_cloud_build_step +, +tf_apply +, +describe_deployment +, and +tf_destroy + are defined in the file +bash_utils.sh +. +The build config files use the +source + command to call the functions. +steps +: + +- + +id +: + +run-terraform-apply + +name +: + +"gcr.io/cloud-builders/gcloud" + +env +: + +- + +"PROJECT_ID=$PROJECT_ID" + +script +: + +| + +#!/bin/bash + +set -e + +source /workspace/lib/bash_utils.sh + +tf_install_in_cloud_build_step + +tf_apply + +- + +id +: + +describe-deployment + +name +: + +"gcr.io/cloud-builders/gcloud" + +env +: + +- + +"PROJECT_ID=$PROJECT_ID" + +script +: + +| + +#!/bin/bash + +set -e + +source /workspace/lib/bash_utils.sh + +describe_deployment +tags +: + +- + +"mig-blue-green-apply" +steps +: + +- + +id +: + +run-terraform-destroy + +name +: + +"gcr.io/cloud-builders/gcloud" + +env +: + +- + +"PROJECT_ID=$PROJECT_ID" + +script +: + +| + +#!/bin/bash + +set -e + +source /workspace/lib/bash_utils.sh + +tf_install_in_cloud_build_step + +tf_destroy +tags +: + +- + +"mig-blue-green-destroy" +The following code shows the function +tf_install_in_cloud_build_step + that's +defined in +bash_utils.sh +. The build config files call this function to +install Terraform on the fly. It creates a Cloud Storage bucket to +record the Terraform status. +function + +tf_install_in_cloud_build_step + +{ + +echo + +"Installing deps" + +apt + +update + +apt + +install + +\ + +unzip + +\ + +wget + +\ + +-y + +echo + +"Manually installing Terraform" + +wget + +https://releases.hashicorp.com/terraform/1.3.4/terraform_1.3.4_linux_386.zip + +unzip + +-q + +terraform_1.3.4_linux_386.zip + +mv + +./terraform + +/usr/bin/ + +rm + +-rf + +terraform_1.3.4_linux_386.zip + +echo + +"Verifying installation" + +terraform + +-v + +echo + +"Creating Terraform state storage bucket +$BUCKET_NAME +" + +gcloud + +storage + +buckets + +create + +\ + +"gs:// +$BUCKET_NAME +" + +|| + +echo + +"Already exists..." 
  echo "Configure Terraform provider and state bucket"
  cat > "/workspace/infra/provider.tf" <<EOT_PROVIDER_TF
terraform {
  required_version = ">= 0.13"

  backend "gcs" {
    bucket = "$BUCKET_NAME"
  }

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 3.77, < 5.0"
    }
  }
}
EOT_PROVIDER_TF

  echo "$(cat /workspace/infra/provider.tf)"
}

The following code snippet shows the function tf_apply that's defined in
bash_utils.sh. It first calls terraform init, which loads all modules and
custom libraries, and then runs terraform apply, which loads the variables
from the main.tfvars file.

function tf_apply {
  echo "Running Terraform init"
  terraform \
    -chdir="$TF_CHDIR" \
    init

  echo "Running Terraform apply"
  terraform \
    -chdir="$TF_CHDIR" \
    apply \
    -auto-approve \
    -var project="$PROJECT_ID" \
    -var-file="main.tfvars"
}

The following code snippet shows the function describe_deployment that's
defined in bash_utils.sh. It uses gcloud compute addresses describe to fetch
the IP addresses of the load balancers by name and prints them out.
function describe_deployment {
  NS="ns1-"
  echo -e "Deployment configuration:\n$(cat infra/main.tfvars)"
  echo -e \
    "Here is how to connect to:" \
    "\n\t* active color MIG: http://$(gcloud compute addresses describe ${NS}splitter-address-name --region=us-west1 --format='value(address)')/" \
    "\n\t* blue color MIG: http://$(gcloud compute addresses describe ${NS}blue-address-name --region=us-west1 --format='value(address)')/" \
    "\n\t* green color MIG: http://$(gcloud compute addresses describe ${NS}green-address-name --region=us-west1 --format='value(address)')/"
  echo "Good luck!"
}

The following code snippet shows the function tf_destroy that's defined in
bash_utils.sh. It calls terraform init, which loads all modules and custom
libraries, and then runs terraform destroy, which deletes the resources
defined by the Terraform configuration and variables.

function tf_destroy {
  echo "Running Terraform init"
  terraform \
    -chdir="$TF_CHDIR" \
    init

  echo "Running Terraform destroy"
  terraform \
    -chdir="$TF_CHDIR" \
    destroy \
    -auto-approve \
    -var project="$PROJECT_ID" \
    -var-file="main.tfvars"
}

Terraform templates

You'll find all the Terraform configuration files and variables in the
copy-of-gcp-mig-simple/infra/ folder.

* main.tf: the Terraform configuration file.
* main.tfvars: this file defines the Terraform variables.
* mig/ and splitter/: these folders contain the modules that define the
  load balancers. The mig/ folder contains the Terraform configuration file
  that defines the MIG for the Blue and the Green load balancers. The Blue
  and the Green MIGs are identical, therefore they are defined once and
  instantiated for the blue and the green objects.
The Terraform configuration file for the splitter load balancer is in the
splitter/ folder.

The following code snippet shows the contents of infra/main.tfvars. It
contains three variables: two that determine what application version to
deploy to the Blue and the Green pools, and one for the active color: Blue
or Green. Changes to this file trigger the deployment.

MIG_VER_BLUE = "v1"
MIG_VER_GREEN = "v1"
MIG_ACTIVE_COLOR = "blue"

The following is a code snippet from infra/main.tf. In this snippet:

* A variable is defined for the Google Cloud project.
* Google is set as the Terraform provider.
* A variable is defined for the namespace. All objects created by Terraform
  are prefixed with this variable so that multiple versions of the
  application can be deployed in the same project and the object names don't
  collide with each other.
* The variables MIG_VER_BLUE, MIG_VER_GREEN, and MIG_ACTIVE_COLOR are the
  bindings for the variables in the infra/main.tfvars file.

variable "project" {
  type        = string
  description = "GCP project we are working in."
}

provider "google" {
  project = var.project
  region  = "us-west1"
  zone    = "us-west1-a"
}

variable "ns" {
  type        = string
  default     = "ns1-"
  description = "The namespace used for all resources in this plan."
}

variable "MIG_VER_BLUE" {
  type        = string
  description = "Version tag for 'blue' deployment."
}

variable "MIG_VER_GREEN" {
  type        = string
  description = "Version tag for 'green' deployment."
}

variable "MIG_ACTIVE_COLOR" {
  type        = string
  description = "Active color (blue | green)."
}

The following code snippet from infra/main.tf shows the instantiation of the
splitter module. This module takes in the active color so that the splitter
load balancer knows which MIG to deploy the application to.
module "splitter-lb" {
  source               = "./splitter"
  project              = var.project
  ns                   = "${var.ns}splitter-"
  active_color         = var.MIG_ACTIVE_COLOR
  instance_group_blue  = module.blue.google_compute_instance_group_manager_default.instance_group
  instance_group_green = module.green.google_compute_instance_group_manager_default.instance_group
}

The following code snippet from infra/main.tf defines two identical modules
for the Blue and Green MIGs. Each takes in the color, the network, and the
subnetwork, which are defined in the splitter module.

module "blue" {
  source                               = "./mig"
  project                              = var.project
  app_version                          = var.MIG_VER_BLUE
  ns                                   = var.ns
  color                                = "blue"
  google_compute_network               = module.splitter-lb.google_compute_network
  google_compute_subnetwork            = module.splitter-lb.google_compute_subnetwork_default
  google_compute_subnetwork_proxy_only = module.splitter-lb.google_compute_subnetwork_proxy_only
}

module "green" {
  source                               = "./mig"
  project                              = var.project
  app_version                          = var.MIG_VER_GREEN
  ns                                   = var.ns
  color                                = "green"
  google_compute_network               = module.splitter-lb.google_compute_network
  google_compute_subnetwork            = module.splitter-lb.google_compute_subnetwork_default
  google_compute_subnetwork_proxy_only = module.splitter-lb.google_compute_subnetwork_proxy_only
}

The file splitter/main.tf defines the objects that are created for the
splitter MIG. The following is a code snippet from splitter/main.tf that
contains the logic to switch between the Green and the Blue MIG. It's backed
by the service google_compute_region_backend_service, which can route traffic
to two backends: var.instance_group_blue or var.instance_group_green.
capacity_scaler defines how much of the traffic to route.
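The capacity_scaler weighting described above is also what enables the canary variant the tutorial mentions. As a hedged sketch (not part of the sample), the two backend blocks could carry fractional weights instead of the all-or-nothing 1/0 values; the 0.9/0.1 split below is illustrative, and the resulting traffic split is approximate because capacity_scaler scales each backend's available capacity rather than setting an exact ratio:

```hcl
backend {
  group           = var.instance_group_blue
  balancing_mode  = "UTILIZATION"
  capacity_scaler = 0.9 # keep ~90% of traffic on the stable color
}

backend {
  group           = var.instance_group_green
  balancing_mode  = "UTILIZATION"
  capacity_scaler = 0.1 # canary ~10% of traffic to the new color
}
```

capacity_scaler accepts values between 0 and 1, so the weights could be driven from a Terraform variable in the same way MIG_ACTIVE_COLOR is today.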
The following code routes 100% of the traffic to the specified color, but you
can update this code for a canary deployment to route the traffic to only a
subset of users.

resource "google_compute_region_backend_service" "default" {
  name                  = local.l7-xlb-backend-service
  region                = "us-west1"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  health_checks         = [google_compute_region_health_check.default.id]
  protocol              = "HTTP"
  session_affinity      = "NONE"
  timeout_sec           = 30

  backend {
    group           = var.instance_group_blue
    balancing_mode  = "UTILIZATION"
    capacity_scaler = var.active_color == "blue" ? 1 : 0
  }

  backend {
    group           = var.instance_group_green
    balancing_mode  = "UTILIZATION"
    capacity_scaler = var.active_color == "green" ? 1 : 0
  }
}

The file mig/main.tf defines the objects pertaining to the Blue and the Green
MIGs. The following code snippet from this file defines the Compute Engine
instance template that's used to create the VM pools. Note that this instance
template has the Terraform lifecycle property set to create_before_destroy.
This is because, when updating the version of the pool, you cannot use the
template to create the new version of the pool while it is still being used
by the previous version of the pool. But if the older version of the pool is
destroyed before the new template is created, there's a period of time when
the pools are down. To avoid this scenario, we set the Terraform lifecycle to
create_before_destroy so that the newer version of a VM pool is created
before the older version is destroyed.
resource "google_compute_instance_template" "default" {
  name = local.l7-xlb-backend-template

  disk {
    auto_delete  = true
    boot         = true
    device_name  = "persistent-disk-0"
    mode         = "READ_WRITE"
    source_image = "projects/debian-cloud/global/images/family/debian-10"
    type         = "PERSISTENT"
  }

  labels = {
    managed-by-cnrm = "true"
  }

  machine_type = "n1-standard-1"

  metadata = {
    startup-script = <<EOF
    #! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo a2ensite default-ssl
    sudo a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://169.254.169.254/computeMetadata/v1/instance/name)"
    sudo echo "Hello, World!
    version: ${var.app_version}
    ns: ${var.ns}
    hostname: $vm_hostname" | \
    tee /var/www/html/index.html
    sudo systemctl restart apache2
    EOF
  }

  network_interface {
    access_config {
      network_tier = "PREMIUM"
    }
    network    = var.google_compute_network.id
    subnetwork = var.google_compute_subnetwork.id
  }

  region = "us-west1"

  scheduling {
    automatic_restart   = true
    on_host_maintenance = "MIGRATE"
    provisioning_model  = "STANDARD"
  }

  tags = ["load-balanced-backend"]

  # NOTE: the name of this resource must be unique for every update;
  # this is why we have app_version in the name; this way the new
  # resource has a different name than the old one and both can
  # exist at the same time
  lifecycle {
    create_before_destroy = true
  }
}

Clean up

To avoid incurring charges to your Google Cloud account for the resources
used in this tutorial, either delete the project that contains the resources,
or keep the project and delete the individual resources.

Delete individual resources

1. Delete the Compute Engine resources created by the apply trigger:

   1. Open the Cloud Build Triggers page:
   2. In the Triggers table, locate the row corresponding to the destroy
      trigger, and click Run. When the trigger completes execution, the
      resources created by the apply trigger are deleted.

2. Delete the resources created during bootstrapping by running the following
   command in your terminal window:

   bash <(curl https://raw.githubusercontent.com/GoogleCloudPlatform/cloud-build-samples/main/mig-blue-green/teardown.sh)

Delete the project

Caution: Deleting a project has the following effects:

* Everything in the project is deleted. If you used an existing project for
  the tasks in this document, when you delete it, you also delete any other
  work you've done in the project.
* Custom project IDs are lost.
+ When you created this project, you might have created a custom project ID that you want to use in + the future. To preserve the URLs that use the project ID, such as an +appspot.com + URL, delete selected resources inside the project instead of deleting the whole project. + + If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects + can help you avoid exceeding project quota limits. + +Delete a Google Cloud project: +gcloud projects delete +PROJECT_ID +What's next +Learn more about +. +Learn how to +. \ No newline at end of file diff --git a/page_content/troubleshooting_using_the_serial_console.txt b/page_content/troubleshooting_using_the_serial_console.txt new file mode 100644 index 0000000..c555251 --- /dev/null +++ b/page_content/troubleshooting_using_the_serial_console.txt @@ -0,0 +1,1419 @@ +Linux + + Windows + +This page describes how to enable interactive access to an instance's +serial console to debug boot and networking issues, troubleshoot malfunctioning +instances, interact with the GRand Unified Bootloader (GRUB), and perform other +troubleshooting tasks. +Note: + You can't enable interactive access to the serial console for bare metal +instances; the serial console is read-only for bare metal instances. To execute +commands interactively, you can connect to the instance using SSH after the +instance starts. +A virtual machine (VM) instance has four virtual serial ports. Interacting +with a serial port is similar to using a terminal window, in that input and +output is entirely in text mode and there is no graphical interface or mouse +support. The instance's operating system, BIOS, and other system-level +entities often write output to the serial ports, and can accept input such +as commands or answers to prompts. Typically, these system-level entities use +the first serial port (port 1) and serial port 1 is often referred to as the +serial console. 
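As context for the read-only option mentioned below, serial port output can be fetched without enabling interactive access at all. This sketch uses the gcloud CLI; the VM name example-instance and zone us-central1-a are assumptions for illustration:

```
gcloud compute instances get-serial-port-output example-instance \
    --zone=us-central1-a \
    --port=1
```

This prints whatever the instance has written to the selected serial port, which is often enough for diagnosing boot problems before resorting to the interactive console.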
If you only need to view serial port output without issuing any commands to
the serial console, you can call the method or use Cloud Logging to read
information that your instance has written to its serial port; see .

However, if you run into problems accessing your instance through SSH, or
need to troubleshoot an instance that has not fully booted, you can enable
interactive access to the serial console, which lets you connect to and
interact with any of your instance's serial ports. For example, you can
directly run commands and respond to prompts in the serial port.

When you enable or disable the serial port, you can use any Boolean value
that is accepted by the metadata server. For more information, see .

Before you begin

If you haven't already, then set up authentication. Authentication is the
process by which your identity is verified for access to Google Cloud
services and APIs. To run code or samples from a local development
environment, you can authenticate to Compute Engine by selecting one of the
following options:

Select the tab for how you plan to use the samples on this page:

Console

When you use the Google Cloud console to access Google Cloud services and
APIs, you don't need to set up authentication.

gcloud

After installing the Google Cloud CLI, initialize it by running the
following command:

gcloud init

If you're using an external identity provider (IdP), you must first .

Note: If you installed the gcloud CLI previously, make sure you have the
latest version by running gcloud components update.

REST

To use the REST API samples on this page in a local development environment,
you use the credentials you provide to the gcloud CLI.
After installing the Google Cloud CLI, initialize it by running the
following command:

gcloud init

If you're using an external identity provider (IdP), you must first .

For more information, see  in the Google Cloud authentication documentation.

Permissions required for this task

To perform this task, you must have the following :

* compute.instances.setMetadata on the VM, if enabling interactive access on
  a specific VM
* compute.projects.setCommonInstanceMetadata on the project, if enabling
  interactive access for all VMs in the project
* iam.serviceAccountUser role on the instance's service account

Enabling interactive access on the serial console

Enable interactive serial console access for individual VM instances or for
an entire project.

Caution: The interactive serial console does not support IP-based access
restrictions such as IP allowlists, unless you use . If you enable the
interactive serial console on an instance, clients can attempt to connect to
that instance from any IP address. Anybody can connect to that instance if
they know the correct SSH key, username, project ID, zone, and instance name.

Enabling access for a project

Enabling interactive serial console access on a project enables access for
all VM instances that are part of that project.

By default, interactive serial port access is disabled. You can also
explicitly disable it by setting the serial-port-enable key to FALSE. In
either case, any per-instance setting overrides the project-level setting or
the default setting.

Console

1. In the Google Cloud console, go to the Metadata page.
2. Click Edit to edit metadata entries.
3. Add a new entry that uses the key serial-port-enable and value TRUE.
4. Save your changes.
+ gcloud +Using the Google Cloud CLI, enter the +command as follows: +gcloud compute project-info add-metadata \ + --metadata serial-port-enable=TRUE + REST +In the API, make a request to the +method, providing the +serial-port-enable + key with a value of +TRUE +: +{ + "fingerprint": "FikclA7UBC0=", + "items": [ + { + "key": "serial-port-enable", + "value": "TRUE" + } + ] +} +Enabling access for a VM instance +Enable interactive serial console access for a specific instance. A per-instance +setting, if it exists, overrides any project-level setting. You can also +disable access for a specific instance, even if access is enabled on the project +level, by setting +serial-port-enable + to +FALSE +, instead of +TRUE +. Similarly, +you can enable access for one or more instances even if it is disabled for the +project, explicitly or by default. + Console +In the Google Cloud console, go to the +VM instances + page. +Click the instance you want to enable access for. +Click +Edit +. +Under the +Remote access + section, toggle the +Enable connecting to +serial ports + checkbox. +Save your changes. + gcloud +Using the Google Cloud CLI, enter the +command, replacing +instance-name + with the name of +your instance. +gcloud compute instances add-metadata +instance-name + \ + --metadata serial-port-enable=TRUE + REST +In the API, make a request to the +method with the +serial-port-enable + key and a value of +TRUE +: +POST https://compute.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/instances/example-instance/setMetadata +{ + "fingerprint": "zhma6O1w2l8=", + "items": [ + { + "key": "serial-port-enable", + "value": "TRUE" + } + ] +} +Configure serial console for a bare metal instance +For bare metal instances, increase the bit rate, also known as baud rate, for +the serial console to 115,200 bps (~11.5kB/sec). Using a slower speed results +in garbled or missing console output. +Bootloader configuration varies between operating systems and OS versions. 
Refer to the OS distributor's documentation for instructions.

If modifying the bit rate for the current session, add a kernel command-line
parameter similar to the following:

console=ttyS0,115200

If modifying the GRUB configuration, use a directive similar to the
following:

serial --speed=115200

Make sure that you update the actual bootloader configuration. This can be
done with update-grub, grub2-mkconfig, or a similar command.

Connecting to a serial console

Compute Engine offers regional serial console gateways for each Google Cloud
region. After enabling interactive access for a VM's serial console, you can
connect to a regional serial console.

The serial console authenticates users with . Specifically, you must add
your public SSH key to the project or instance metadata and store your
private key on the local machine from which you want to connect. The
gcloud CLI and the Google Cloud console automatically add SSH keys to the
project for you. If you are using a third-party client, you might need to
add SSH keys manually, and you can additionally .

When you use the Google Cloud CLI to connect, host key authentication is
done automatically on your behalf.

Caution: Directly connecting to the serial console using its IP address
rather than its hostname is not recommended. Serial console IP addresses can
change without notice.

Console

To connect to a VM's regional serial console, do the following:

1. In the Google Cloud console, go to the VM instances page.
2. Click the instance you want to connect to.
3. Under Remote access, click Connect to serial console to connect on the
   default port (port 1).
4. If you want to connect to another serial port, click the down arrow next
   to the Connect to serial console button and change the port number
   accordingly. For Windows instances, open the drop-down menu next to the
   button and connect to Port 2 to access the serial console.
gcloud

Caution: As of March 31, 2025, the serial console SSH host key endpoint was
deprecated and a new endpoint was introduced. We recommend that you update
the gcloud CLI to version 515.0.0 or later to enable you to use the new
endpoint. For more information, see .

To connect to a VM's regional serial console, use the :

gcloud compute connect-to-serial-port VM_NAME --port=PORT_NUMBER

Replace the following:

* VM_NAME: the name of the VM whose serial console you want to connect to.
* PORT_NUMBER: the number of the port that you want to connect to. For Linux
  VMs, use 1; for Windows VMs, use 2. To learn more about port numbers, see .
  Note: The default port number is 1.

Other SSH clients

Note: You must have added your public key to the project or instance
metadata before you can use a third-party SSH client. If you have used the
gcloud CLI in the past to connect to other instances in the same project,
your PUBLIC_KEY_FILE is likely located at
$HOME/.ssh/google_compute_engine.pub. If you have never connected to an
instance in this project before (and so have never added public keys), you
need to add your SSH keys to the project or instance metadata before you can
connect using a third-party SSH client. See  for more information.

You can connect to an instance's serial console using other third-party SSH
clients, as long as the client lets you connect to TCP port 9600. Before you
connect, you can optionally .

Caution: As of March 31, 2025, the serial console SSH host key endpoint was
deprecated and a new endpoint was introduced. If you previously , we
recommend that you repeat this process using the new endpoint. For more
information, see .

To connect to a VM's regional serial console, run one of the following
commands, depending on your VM's OS:

To connect to a Linux VM:

ssh -i PRIVATE_SSH_KEY_FILE -p 9600 PROJECT_ID.ZONE.VM_NAME.USERNAME.OPTIONS@REGION-ssh-serialport.googleapis.com

To connect to a Windows VM:

ssh -i PRIVATE_SSH_KEY_FILE -p 9600 PROJECT_ID.ZONE.VM_NAME.USERNAME.OPTIONS.port=2@REGION-ssh-serialport.googleapis.com

Replace the following:

* PRIVATE_SSH_KEY_FILE: the private SSH key for the instance.
* PROJECT_ID: the project ID for this VM instance.
* ZONE: the zone of the VM instance.
* REGION: the region of the VM instance.
* VM_NAME: the name of the VM instance.
* USERNAME: the username you are using to connect to your instance.
  Typically, this is the username on your local machine.
* OPTIONS: additional options you can specify for this connection. For
  example, you can specify a certain serial port and specify any . The port
  number can be 1 through 4, inclusive. To learn more about port numbers,
  see . If omitted, you connect to serial port 1.

Caution: The global serial console gateway was deprecated on April 30, 2024
and is no longer available for use in new projects or projects where it
hasn't previously been used. If you use the global serial console gateway,
transition to using regional gateways instead. For more information, see .

To connect to the global serial console gateway, replace
REGION-ssh-serialport.googleapis.com with ssh-serialport.googleapis.com as
the hostname.

If you are having trouble connecting using a third-party SSH client, you can
run the gcloud compute connect-to-serial-port command with the --dry-run
command-line option to see the SSH command that it would have run on your
behalf, and then compare the options with the command you are using.

Validate third-party SSH client connections

When you use a third-party SSH client that isn't the Google Cloud CLI, you
can ensure that you're protected against impersonation or man-in-the-middle
attacks by checking Google's Serial Port SSH host key.
To set up your system to check the SSH host key, complete the following
steps:

1. Download the SSH host key for the serial console you will be using:

   * For regional connections, the SSH host key for a region can be found at
     https://www.gstatic.com/vm_serial_port_public_keys/REGION/REGION.pub

     Caution: As of March 31, 2025, the previous endpoint of
     https://www.gstatic.com/vm_serial_port/REGION/REGION.pub is deprecated.
     For more information, see .

   * For global connections, download 

2. Open your known hosts file, generally located at ~/.ssh/known_hosts.

3. Add the contents of the SSH host key, with the server's hostname
   prepended to the key. For example, if the us-central1 server key contains
   the line ssh-rsa AAAAB3NzaC1yc..., then ~/.ssh/known_hosts should have a
   line like this:

   [us-central1-ssh-serialport.googleapis.com]:9600 ssh-rsa AAAAB3NzaC1yc...

For security reasons, Google might occasionally change the Google Serial
Port SSH host key. If your client fails to authenticate the server key,
immediately end the connection attempt and complete the earlier steps to
download a new Google Serial Port SSH host key.

If, after updating the host key, you continue to receive a host
authentication error from your client, stop attempts to connect to the
serial port and contact Google support. Don't provide any credentials over a
connection where host key validation has failed.

Disconnecting from the serial console

To disconnect from the serial console:

1. Press the ENTER key.
2. Type ~. (tilde, followed by a period).

You can discover other commands by typing ~? or by examining the man page
for SSH: man ssh

Don't try to disconnect using any of the following methods:

* The CTRL+ALT+DELETE key combination or other similar combinations. This
  doesn't work because the serial console does not recognize PC keyboard
  combinations.
* The exit or logout command. This doesn't work because the guest is not
  aware of any network or modem connections.
Using this command causes the console to close and then reopen, and you
remain connected to the session. If you would like to enable the exit and
logout commands for your session, you can enable them by setting the 
option.

Connecting to a serial console with a login prompt

If you are trying to troubleshoot an issue with a VM that has booted
completely, or an issue that occurs after the VM has booted past single-user
mode, you might be prompted for login information when trying to access the
serial console.

By default, Google-supplied Linux system images are not configured to allow
password-based logins for local users. However, Google-supplied Windows
images are configured to allow password-based logins for local users.

If your VM is running an image that is preconfigured with serial port
logins, you need to set up a local password on the VM so that you can sign
in to the serial console, if prompted. You can set up a local password after
connecting to the VM or by using a startup script.

Note: This step is not required if you are interacting with the system
during or prior to boot, or with a serial-port-based service that does not
require a password. This step is also not required if you have configured
getty to sign in automatically without a password using the "-a root" flag.

Setting up a local password using a startup script

You can use a startup script to set up a local password that lets you
connect to the serial console during or after VM creation.

To set up a local password in an existing VM, select one of the following
options:

Linux

1. In the Google Cloud console, go to the VM instances page.
2. In the Name column, click the name of the VM for which you want to add a
   local password. The details page of the VM opens.
3. Click Edit. The page to edit the details of the VM opens.
4. In the Metadata > Automation section, do the following:

   1. If the VM has an existing startup script, then remove it and store
      the script somewhere safe.
   2. Add the following startup script:

      #!/bin/bash
      useradd USERNAME
      echo 'USERNAME:PASSWORD' | chpasswd
      usermod -aG google-sudoers USERNAME

      Replace the following:

      * USERNAME: the username that you want to add.
      * PASSWORD: the password for the username. Because some operating
        systems require a minimum password length and complexity, specify a
        password as follows:
        * Use at least 12 characters.
        * Use a mix of upper and lower case letters, numbers, and symbols.

5. Click Save. The details page of the VM opens.
6. Click Reset.
7. Connect to the serial console.
8. When prompted, enter your login information.

Windows

1. In the Google Cloud console, go to the VM instances page.
2. In the Name column, click the name of the VM for which you want to add a
   local password. The details page of the VM opens.
3. Click Edit. The page to edit the details of the VM opens.
4. In the Metadata section, do the following:

   1. If the VM has an existing startup script, then store the script
      somewhere safe, and then, to delete the script, click Delete item.
   2. Click Add item.
   3. In the Key field, enter windows-startup-script-cmd.
   4. In the Value field, enter the following script:

      net user USERNAME PASSWORD /ADD /Y
      net localgroup administrators USERNAME /ADD

      Replace the following:

      * USERNAME: the username that you want to add.
      * PASSWORD: the password for the username. Because some operating
        systems require a minimum password length and complexity, specify a
        password as follows:
        * Use at least 12 characters.
        * Use a mix of upper and lower case letters, numbers, and symbols.

5. Click Save. The details page of the VM opens.
6. Click Reset.
7. Connect to the serial console.
8. When prompted, enter your login information.

After the user has been created, replace the startup script with the startup
script that you stored earlier in this section.
Setting up a local password using passwd on the VM

The following instructions describe how to set up a local password for a
user on a VM so that the user can log on to the serial console of that VM
using the specified password.

1. Connect to the VM. Replace instance-name with the name of your instance:

   gcloud compute ssh instance-name

2. On the VM, create a local password with the following command. This sets
   a password for the user that you are currently logged in as:

   sudo passwd $(whoami)

3. Follow the prompts to create a password.
4. Next, log out of the instance and connect to the serial console. Enter
   your login information when prompted.

Setting up a login on other serial ports

Login prompts are enabled on port 1 by default on most Linux operating
systems. However, port 1 can often be overwhelmed by logging data and other
information being printed to the port. Instead, you can choose to enable a
login prompt on another port, such as port 2 (ttyS1), by executing one of
the following commands on your VM. You can see a list of available ports for
a VM in .

The following table lists images preconfigured with a serial console login
and the default ports.

Operating system    Ports with a login prompt by default    Service management
CentOS 6            1                                       upstart
CentOS 7            1                                       systemd
CoreOS              1                                       systemd
COS                 1                                       systemd
Debian 8            1                                       systemd
Debian 9            1                                       systemd
OpenSUSE 13         1                                       systemd
OpenSUSE Leap       1                                       systemd
RHEL 6              1                                       upstart
RHEL 7              1                                       systemd
SLES 11             1                                       sysvinit
SLES 12             1                                       systemd
Ubuntu 14.04        1                                       upstart
Ubuntu 16.04        1                                       systemd
Ubuntu 17.04        1                                       systemd
Ubuntu 17.10        1                                       systemd
Windows             COM2                                    N/A

To enable login prompts on additional serial ports, complete the following
instructions.

Note: The following instructions are divided between operating systems
running systemd, upstart, or sysvinit. For all sets of instructions, you can
replace ttyS1 with another port if you want to.
systemd

For Linux operating systems using systemd:

- Enable the service temporarily, until the next reboot:

  sudo systemctl start serial-getty@ttyS1.service

- Enable the service permanently, starting with the next reboot:

  sudo systemctl enable serial-getty@ttyS1.service

upstart

For Linux operating systems using upstart:

1. Create a new /etc/init/ttyS1.conf file that reflects ttyS1 by
   copying and modifying an existing ttyS0.conf file. For example:

   On Ubuntu 14.04:

   sudo sh -c "sed -e s/ttyS0/ttyS1/g < /etc/init/ttyS0.conf > /etc/init/ttyS1.conf"

   Note: Ubuntu 12.04 does not have a ttyS0.conf file, so Google
   recommends that you copy the ttyS0.conf file from Ubuntu 14.04 and
   use it on the instance running Ubuntu 12.04.

   On RHEL 6.8 and CentOS 6.8:

   sudo sh -c "sed -ne '/^# # ttyS0/,/^# exec/p' < /etc/init/serial.conf | sed -e 's/ttyS0/ttyS1/g' -e 's/^# *//' > /etc/init/ttyS1.conf"

2. Start a login prompt on ttyS1 without restarting:

   sudo start ttyS1

sysvinit

For Linux operating systems using sysvinit, run the following commands:

  sudo sed -i~ -e 's/^#T\([01]\)/T\1/' /etc/inittab
  sudo telinit q

Understanding serial port numbering

Each virtual machine instance has four serial ports. For consistency
with the API, each port is numbered 1 through 4. Linux and similar
systems number their serial ports 0 through 3. For example, on many
operating system images, the corresponding devices are /dev/ttyS0
through /dev/ttyS3. Windows refers to serial ports as COM1 through COM4.
To connect to what Windows considers COM3 and Linux considers ttyS2, you
would specify port 3. Use the following table to figure out which port
you want to connect to.
Virtual machine instance serial ports     Standard Linux serial ports     Windows COM ports
1                                         /dev/ttyS0                      COM1
2                                         /dev/ttyS1                      COM2
3                                         /dev/ttyS2                      COM3
4                                         /dev/ttyS3                      COM4

Note that many Linux images use port 1 (/dev/ttyS0) for logging messages
from the kernel and system programs.

Sending a serial break

The Magic SysRq feature lets you perform low-level tasks regardless of
the system's state. For example, you can sync file systems, reboot the
instance, end processes, and unmount file systems using the Magic SysRq
key feature.

To send a Magic SysRq command using a simulated serial break:

1. Press the ENTER key.
2. Type ~B (tilde, followed by uppercase B).
3. Type the Magic SysRq command.

Note: The Magic SysRq key is normally implemented by using PC keyboard
scan codes, but there is no direct equivalent on a serial port. Sending
a simulated serial break is the recommended method of accessing the
Magic SysRq feature.

Viewing serial console audit logs

Compute Engine provides audit logs to track who has connected to and
disconnected from an instance's serial console. To view logs, you must
have the appropriate logging permissions or be a project viewer or
editor.

1. In the Google Cloud console, go to the Logs Explorer page.
2. Expand the drop-down menu and select GCE VM Instance.
3. In the search bar, type ssh-serialport.googleapis.com and press
   Enter. A list of audit logs appears. The logs describe connections
   to and disconnections from a serial console. Expand any of the
   entries to get more information.

For any of the audit logs, you can:

- Expand the protoPayload property.
- Look for methodName to see which activity the log applies to (either
  a connection or a disconnection request). For example, a log that
  tracks a disconnection from the serial console has the method name
  "google.ssh-serialport.v1.disconnect", and a connection log has
  "google.ssh-serialport.v1.connect". An audit log entry is recorded at
  the beginning and end of each session on the serial console.
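As a quick aid for the port numbering table above, the offset between API port numbers, Linux devices, and Windows COM names can be sketched as follows. The helper name is illustrative.

```python
def serial_port_names(api_port: int) -> tuple:
    """Map a Compute Engine serial port number (1-4) to the Linux
    device and Windows COM name, per the numbering table above."""
    if not 1 <= api_port <= 4:
        raise ValueError("serial ports are numbered 1 through 4")
    # Linux numbers ports from 0; Windows and the API number from 1.
    return (f"/dev/ttyS{api_port - 1}", f"COM{api_port}")

print(serial_port_names(3))  # ('/dev/ttyS2', 'COM3')
```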
There are different audit log properties for different log types. For
example, audit logs for connections have properties that are specific to
connection logs, while audit logs for disconnections have their own set
of properties. Certain audit log properties are shared between both log
types.

All serial console logs

The following properties and values apply to all serial console logs:

- requestMetadata.callerIp: The IP address and port number from which
  the connection originated.
- serviceName: ssh-serialport.googleapis.com
- resourceName: A string containing the project ID, zone, instance
  name, and serial port number to indicate which serial console the log
  pertains to. For example,
  projects/myproject/zones/us-east1-a/instances/example-instance/SerialPort/2
  is port number 2, also known as COM2 or /dev/ttyS1, for the instance
  example-instance.
- resource.labels: Properties identifying the instance ID, zone, and
  project ID.
- timestamp: A timestamp indicating when the session began or ended.
- severity: NOTICE
- operation.id: An ID string that uniquely identifies the session; you
  can use it to associate a disconnect entry with the corresponding
  connection entry.
- operation.producer: ssh-serialport.googleapis.com

Connection logs

The following properties and values are specific to connection logs:

- methodName: google.ssh-serialport.v1.connect
- status.message: Connection succeeded.
- request.serialConsoleOptions: Any options that were specified with
  the request, including the serial port number.
- request.@type: type.googleapis.com/google.compute.SerialConsoleSessionBegin
- request.username: The username specified for this request. This is
  used to select the public key to match.
- operation.first: TRUE
- status.code: For successful connection requests, a status.code value
  of google.rpc.Code.OK indicates that the operation completed
  successfully without any errors. Because the enum value for this code
  is 0, the status.code property is not displayed; however, any code
  that checks for a status.code value of google.rpc.Code.OK works as
  expected.

Disconnection logs

The following properties and values are specific to disconnection logs:

- methodName: google.ssh-serialport.v1.disconnect
- response.duration: The amount of time, in seconds, that the session
  lasted.
- response.@type: type.googleapis.com/google.compute.SerialConsoleSessionEnd
- operation.last: TRUE

Failed connection logs

When a connection fails, Compute Engine creates an audit log entry. A
failed connection log looks very similar to a successful connection
entry, but has the following properties to indicate a failed connection:

- severity: ERROR
- status.code: The canonical error code that best describes the error.
  The following error codes might appear:
  - google.rpc.Code.INVALID_ARGUMENT: The connection failed because
    the client provided an invalid port number or tried to reach an
    unknown channel.
  - google.rpc.Code.PERMISSION_DENIED: You have not enabled
    interactive serial console access in the metadata server.
  - google.rpc.Code.UNAUTHENTICATED: No SSH keys were found, or no
    matching SSH key was found, for this instance.
  - google.rpc.Code.UNKNOWN: There was an unknown error with your
    request. You can reach out to Google on the support channels.
- status.message: The human-readable message for this entry.
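Putting the tables above together, entries pulled from Cloud Logging can be classified by methodName and severity. A minimal sketch, assuming entries shaped like the properties described above (the function name and sample dicts are hypothetical):

```python
def classify_serial_console_entry(entry: dict) -> str:
    """Classify a serial console audit log entry using the properties
    described above: methodName distinguishes connect from disconnect,
    and severity ERROR marks a failed connection."""
    method = entry.get("protoPayload", {}).get("methodName", "")
    if entry.get("severity") == "ERROR":
        return "failed-connection"
    if method == "google.ssh-serialport.v1.connect":
        return "connection"
    if method == "google.ssh-serialport.v1.disconnect":
        return "disconnection"
    return "unknown"

# Hypothetical entries shaped like the tables above.
connect_entry = {
    "severity": "NOTICE",
    "protoPayload": {"methodName": "google.ssh-serialport.v1.connect"},
}
failed_entry = {
    "severity": "ERROR",
    "protoPayload": {"methodName": "google.ssh-serialport.v1.connect"},
}
print(classify_serial_console_entry(connect_entry))  # connection
print(classify_serial_console_entry(failed_entry))   # failed-connection
```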
Disabling interactive serial console access

You can disable interactive serial console access by changing metadata
on the specific instance or project, or by setting an organization
policy that disables interactive serial console access to all VM
instances in one or more projects in the organization.

Disabling interactive serial console on a particular instance or project

Project owners and editors, as well as users who have been granted the
compute.instanceAdmin.v1 role, can disable access to the serial console
by changing the metadata on the particular instance or project. Similar
to enabling access, set the serial-port-enable metadata key to FALSE:

serial-port-enable=FALSE

For example, using the Google Cloud CLI, you can apply this metadata to
a specific instance as follows:

gcloud compute instances add-metadata instance-name \
    --metadata=serial-port-enable=FALSE

To apply the metadata to the project:

gcloud compute project-info add-metadata \
    --metadata=serial-port-enable=FALSE

Disabling interactive serial console access through Organization Policy

If you have been granted the orgpolicy.policyAdmin role on the
organization, you can set an organization policy that prevents
interactive access to the serial console, regardless of whether
interactive serial console access is enabled on the metadata server.
After the organization policy is set, the policy effectively overrides
the serial-port-enable metadata key, and no users of the organization or
project can enable interactive serial console access. By default, this
constraint is set to FALSE.

The constraint for disabling interactive serial console access is as
follows:

compute.disableSerialPortAccess

Complete the following instructions to set this policy on the
organization. After setting the policy, you can grant exemptions on a
per-project basis.

gcloud

To set the policy using the Google Cloud CLI, run the
gcloud resource-manager org-policies enable-enforce command. Replace
organization-id with your organization ID.
For example, 1759840282.

gcloud resource-manager org-policies enable-enforce \
    --organization organization-id compute.disableSerialPortAccess

REST

To set the policy in the API, make a POST request to the following URL.
Replace organization-name with your organization name, for example,
organizations/1759840282.

POST https://cloudresourcemanager.googleapis.com/v1/organization-name:setOrgPolicy

The request body should contain a policy object with the following
constraint:

"constraint": "constraints/compute.disableSerialPortAccess"

For example:

{
  "policy": {
    "booleanPolicy": {
      "enforced": true
    },
    "constraint": "constraints/compute.disableSerialPortAccess"
  }
}

The policy takes effect immediately, so any projects under the
organization immediately stop allowing interactive access to the serial
console.

To temporarily disable the policy, use the disable-enforce command:

gcloud resource-manager org-policies disable-enforce \
    --organization organization-id compute.disableSerialPortAccess

Alternatively, you can make an API request where the request body sets
the enforced parameter to false:

{
  "policy": {
    "booleanPolicy": {
      "enforced": false
    },
    "constraint": "constraints/compute.disableSerialPortAccess"
  }
}

Setting the organization policy at the project level

You can set the same organization policy on a per-project basis, which
overrides the setting at the organization level.

gcloud

To turn off enforcement of this policy for a specific project, run the
following command. Replace project-id with your project ID.

gcloud resource-manager org-policies disable-enforce \
    --project project-id compute.disableSerialPortAccess

You can turn on enforcement of this policy by using the enable-enforce
command with the same values.
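The enforce and disable request bodies differ only in the value of the enforced flag, so they can be generated from one helper. This is a sketch for building the JSON payload shown above; the function name is hypothetical.

```python
import json

def serial_port_policy_body(enforced: bool) -> str:
    """Build the setOrgPolicy request body shown above.
    enforced=True blocks interactive serial port access;
    enforced=False lifts the block."""
    body = {
        "policy": {
            "booleanPolicy": {"enforced": enforced},
            "constraint": "constraints/compute.disableSerialPortAccess",
        }
    }
    return json.dumps(body, indent=2)

print(serial_port_policy_body(True))
```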
REST

In the API, make a POST request to the following URL to enable
interactive serial console access for the project, replacing project-id
with the project ID:

POST https://cloudresourcemanager.googleapis.com/v1/projects/project-id:setOrgPolicy

The request body should contain a policy object with the following
constraint:

"constraint": "constraints/compute.disableSerialPortAccess"

For example:

{
  "policy": {
    "booleanPolicy": {
      "enforced": false
    },
    "constraint": "constraints/compute.disableSerialPortAccess"
  }
}

Tips and tricks

- If you are having trouble connecting using a standard SSH client, but
  gcloud compute connect-to-serial-port connects successfully, it might
  be helpful to run gcloud compute connect-to-serial-port with the
  --dry-run command-line option to see the SSH command that it would
  have run on your behalf, and to compare its options with the command
  you are using.
- If you're using a Windows VM with OS Login enabled and you encounter
  an UNAUTHENTICATED error, verify that your public SSH keys have been
  posted to your project or instance metadata.
- You can set any bit rate (also known as baud rate) you like, such as
  stty 9600, but the feature normally forces the effective rate to
  115,200 bps (about 11.5 KB/sec). This is because many public images
  default to slow bit rates on the serial console, such as 9,600 bps,
  and would otherwise boot slowly.
- Some OS images have inconvenient defaults on the serial port. For
  example, on CentOS 7, the stty icrnl default means that the Enter
  key on the console sends a CR, also known as ^M. The bash shell
  might mask this until you try to set a password, at which point you
  might wonder why the password: prompt seems stuck.
- Some public images have job control keys, such as ^Z and ^C, that
  are disabled by default if you attach a shell to a port in certain
  ways.
The setsid command might fix this. Otherwise, if you see a
"job control is disabled in this shell" message, be careful not to run
commands that you will need to interrupt.

- You might find it helpful to tell the system the size of the window
  you're using, so that bash and editors can manage it properly.
  Otherwise, you might see odd display behavior when bash or an editor
  manipulates the display based on incorrect assumptions about the
  number of rows and columns available. Use the stty rows Y cols X
  command to set the window size, and the stty -a command to see the
  current settings. For example, if your window is 120 characters by
  60 lines:

  stty rows 60 cols 120

- If you connect using SSH from machine A to machine B, and then to
  machine C, creating a nested SSH session, and you want to use tilde
  (~) commands to disconnect or to send a serial break, you need to
  add enough extra tilde characters to the command to reach the right
  SSH client. A command following a single tilde is interpreted by the
  SSH client on machine A; a command following two consecutive tildes
  (Enter ~~) is interpreted by the client on machine B, and so on. You
  only need to press Enter once, because it is passed all the way
  through to the innermost SSH destination. This is true for any SSH
  client that provides the tilde escape feature.
- If you lose track of how many tilde characters you need, press the
  Enter key and then type tilde characters one at a time until the
  instance echoes the tilde back. This echo indicates that you have
  reached the end of the chain, so to send a tilde command to the most
  deeply nested SSH client, you need one less tilde than the number
  you typed.

Advanced options

You can also use the following advanced options with the serial port.

Controlling max connections

You can set the max-connections property to control how many concurrent
connections can be made to the serial port at a time.
The default and maximum number of connections is 5. For example:

gcloud compute connect-to-serial-port instance-name \
    --port port-number \
    --extra-args max-connections=3

ssh -i private-ssh-key-file -p 9600 project-id.zone.instance-name.username.max-connections=3@ssh-serialport.googleapis.com

Setting replay options

By default, each time you connect to the serial console, you receive a
replay of the last 10 lines of data, regardless of whether another SSH
client has already seen those lines. You can change how many and which
lines are returned by setting the following options:

- replay-lines=N: Set N to the number of lines that you want replayed.
  For example, if N is 50, the last 50 lines of console output are
  included.
- replay-bytes=N: Replays the most recent N bytes. You can also set N
  to new, which replays all output that has not yet been sent to any
  client.
- replay-from=N: Replays output starting from an absolute byte index
  that you provide. You can get the current byte index of the serial
  console output by making a getSerialPortOutput request. If you set
  replay-from, all other replay options are ignored.

With the Google Cloud CLI, append the following to your
connect-to-serial-port command, where N is the specified number of
lines (or bytes, or an absolute byte index, depending on which replay
option you select):

--extra-args replay-lines=N

If you are using a third-party SSH client, provide the option in your
SSH command:

ssh -i private-ssh-key-file -p 9600 myproject.us-central1-f.example-instance.jane.port=3.replay-lines=N@ssh-serialport.googleapis.com

You can also combine these options. For example:

replay-lines=N and replay-bytes=new

Replay the specified number of lines, or replay all output not
previously sent to any client, whichever is larger.
The first client to connect with this flag combination sees all the
output that has been sent to the serial port; clients that connect
subsequently see only the last N lines. Examples:

gcloud compute connect-to-serial-port instance-name \
    --port port-number --extra-args replay-lines=N,replay-bytes=new

ssh -i private-ssh-key-file -p 9600 project-id.zone.instance-name.username.replay-lines=N.replay-bytes=new@ssh-serialport.googleapis.com

replay-lines=N and replay-bytes=M

Replay up to, but not more than, the number of lines or bytes described
by these flags, whichever is less. This combination won't replay more
than N lines or more than M bytes.

gcloud compute connect-to-serial-port instance-name \
    --port port-number --extra-args replay-lines=N,replay-bytes=M

ssh -i private-ssh-key-file -p 9600 project-id.zone.instance-name.username.replay-lines=N.replay-bytes=M@ssh-serialport.googleapis.com

Handling dropped output

The most recent 1 MiB of output for each serial port is always
available, and your SSH client generally shouldn't miss any output from
the serial port. If, for some reason, your SSH client stops accepting
output for a period of time but does not disconnect, and more than
1 MiB of new data is produced, your SSH client might miss some output.
When your SSH client is not accepting data fast enough to keep up with
the output of the serial console port, you can set the
on-dropped-output property to determine how the console behaves.

Set any of the following applicable options with this property:

- insert-stderr-note: Insert a note on the SSH client's stderr
  indicating that output was dropped. This is the default option.
- ignore: Silently drop output and do nothing.
- disconnect: Stop the connection.
For example:

gcloud compute connect-to-serial-port instance-name \
    --port port-number \
    --extra-args on-dropped-output=ignore

ssh -i private-ssh-key-file -p 9600 project-id.zone.instance-name.username.on-dropped-output=ignore@ssh-serialport.googleapis.com

Enabling disconnect using exit or logout commands

You can enable disconnecting on exit or logout commands by setting the
on-dtr-low property to disconnect when you connect to the serial
console.

With the Google Cloud CLI, append the following flag to your
connect-to-serial-port command:

--extra-args on-dtr-low=disconnect

If you are using a third-party SSH client, provide the option in your
SSH command:

ssh -i private-ssh-key-file -p 9600 myproject.us-central1-f.example-instance.jane.port=3.on-dtr-low=disconnect@ssh-serialport.googleapis.com

Enabling the disconnect option might cause your instance to disconnect
one or more times while the instance is rebooting, because the
operating system resets the serial ports while booting up.

Note: With some operating systems, this setting has no effect on serial
port 1. However, it should work on ports 2 through 4 for most operating
systems, and on port 1 for some systems.

The default setting for the on-dtr-low option is none. With the default
setting, you can reboot your instance without being disconnected from
the serial console, but the console won't disconnect through normal
means such as the exit or logout commands, or normal key combinations
like Ctrl+D.

What's next

- Learn more about the API.
- Learn how to retain and view serial port output even after a VM
  instance is deleted.
- Learn more about applying organization policies.
\ No newline at end of file
diff --git a/raw_soup/deploy_a_function_RAW.txt b/raw_soup/deploy_a_function_RAW.txt
new file mode 100644
index 0000000..f93af10
--- /dev/null
+++ b/raw_soup/deploy_a_function_RAW.txt
@@ -0,0 +1,165 @@
+

Deploy a function

+

This guide shows you how to deploy a function from source code using the +`gcloud functions` command. To learn how to deploy a function using the +`gcloud run` command, see +Deploy a Cloud Run function using the gcloud CLI.

+

The deployment process takes your source code and configuration settings and +builds a runnable image that Cloud Run functions +manages automatically in order to handle requests to your function.

+

Deployment basics

+

For an introduction on the type of functions you can deploy, see +Write Cloud Run functions.

+

Users deploying functions must have the +Cloud Functions Developer +IAM role or a role that includes the same permissions. See also +Additional configuration for deployment.

+
    +
  1. +

    In the Google Cloud console, activate Cloud Shell.

    +

    Activate Cloud Shell

    +

    + At the bottom of the Google Cloud console, a + Cloud Shell + session starts and displays a command-line prompt. Cloud Shell is a shell environment + with the Google Cloud CLI + already installed and with values already set for + your current project. It can take a few seconds for the session to initialize. +

    +
  2. +
  3. Use the `gcloud functions deploy` +command to deploy a function:

    +
    ```
    gcloud functions deploy YOUR_FUNCTION_NAME \
      --region=YOUR_REGION \
      --runtime=YOUR_RUNTIME \
      --source=YOUR_SOURCE_LOCATION \
      --entry-point=YOUR_CODE_ENTRYPOINT \
      TRIGGER_FLAGS
    ```

    The first argument, `YOUR_FUNCTION_NAME`, is a name for +your deployed function. The function name must start with a letter, +followed by up to 62 letters, numbers, hyphens, or underscores, and must +end with a letter or a number. In the name of the Cloud Run service that +is created for your function, underscores are replaced with hyphens and +uppercase letters are converted to lowercase. For example, +`Function_1` is given the name `function-1` in Cloud Run.
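The underscore and case conversion described above can be sketched as a small helper. The function name is hypothetical; the conversion rules are the ones stated in the paragraph above.

```python
def cloud_run_service_name(function_name: str) -> str:
    """Derive the Cloud Run service name from a function name:
    underscores become hyphens, and uppercase letters are lowercased."""
    return function_name.replace("_", "-").lower()

print(cloud_run_service_name("Function_1"))  # function-1
```

Note that this sketch only covers the renaming rules; it does not validate the length or character constraints described above.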

    + +
      +
    • The `--region` flag +specifies the region in which to deploy your function. See +Locations for a list of regions supported by +Cloud Run.

    • +
    • The `--runtime` flag +specifies which language runtime your function uses. See +Runtime support for a list of supported +runtime IDs.

    • +
    • The `--source` flag +specifies the location of your function source code.

    • +
    • The `--entry-point` +flag specifies the entry point to your function in your source code. This is +the code that will be executed when your function runs. The value of this +flag must be a function name or fully-qualified class name that exists in +your source code. For more information, see +Function entry point.

    • +
    • To specify the trigger for your +function, additional flags (represented as +`TRIGGER_FLAGS` above) are required, depending on +the trigger you want to use:

      Trigger flags                                       Trigger description
      `--trigger-http`                                    Trigger the function with an HTTP(S) request.
      `--trigger-topic=YOUR_PUBSUB_TOPIC`                 Trigger the function when a message is published to the specified Pub/Sub topic.
      `--trigger-bucket=YOUR_STORAGE_BUCKET`              Trigger the function when an object is created or overwritten in the specified Cloud Storage bucket.
      `--trigger-event-filters=EVENTARC_EVENT_FILTERS`    Trigger the function with Eventarc when an event that matches the specified filters occurs.
      +

      For a complete reference on the deployment command and its flags, see the +`gcloud functions deploy` +documentation.

      +

      For more details about `gcloud functions deploy` configuration flags, +refer to Cloud Run documentation.

    • +
  4. +
+
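The deploy command and flags described in the steps above can be assembled programmatically, for example when scripting deployments. This is a sketch; the helper name and all argument values are placeholders, and the flags are the ones documented above.

```python
def build_deploy_command(name, region, runtime, source, entry_point,
                         trigger_flags):
    """Assemble the gcloud functions deploy invocation described above.
    trigger_flags is a list such as ["--trigger-http"]."""
    cmd = [
        "gcloud", "functions", "deploy", name,
        f"--region={region}",
        f"--runtime={runtime}",
        f"--source={source}",
        f"--entry-point={entry_point}",
    ]
    cmd.extend(trigger_flags)
    return cmd

# Placeholder values for illustration only.
cmd = build_deploy_command(
    "my-function", "us-central1", "python312", ".", "main",
    ["--trigger-http"],
)
print(" ".join(cmd))
```

A list of arguments like this can be passed to `subprocess.run` without shell quoting concerns.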

When deployment finishes successfully, functions appear with a green check +mark in the Cloud Run overview page in the +Google Cloud console.

+

The initial deployment of a function may take several minutes while the +underlying infrastructure is provisioned. Redeploying an existing function +is faster, and incoming traffic is automatically migrated to the newest version.

+ +

HTTP endpoint URL

+

When you create a function with the `gcloud functions` command or the +Cloud Functions v2 API, by default, the function has a +`cloudfunctions.net` HTTP endpoint URL. If you take this function and deploy it +on Cloud Run, your function also receives a `run.app` HTTP endpoint +URL. However, functions created in Cloud Run won't have a +`cloudfunctions.net` HTTP endpoint URL. A function's `cloudfunctions.net` URL +and `run.app` URL behave in exactly the same way: they are interchangeable, +and either can be used to trigger your function.

+

Terraform examples

+

For examples about how to deploy functions using Terraform, see the +Terraform HTTP example and +Terraform Pub/Sub example.

+

Configure networking

+

Functions created using the Cloud Functions v2 API +(for example, by using `gcloud functions`, the REST API, or Terraform) can be +managed with the Cloud Run Admin API +as well as the Cloud Functions v2 API.

+ +

To learn more about managing networks for functions, including how to route +VPC network traffic, see +Best practices for Cloud Run networking.

+

Learn how to deploy Cloud Run functions on Cloud Run

+

Deploying functions on Cloud Run is similar to the steps described in +this document, but with some added advantages:

+ + +
\ No newline at end of file