---
title: 'GCP ACE Problems 001'
disqus: hackmd
tags: Cathay
---

{%hackmd BJrTq20hE %}

GCP ACE Problems 001
===
![downloads](https://img.shields.io/github/downloads/atom/atom/total.svg)

## Hint

:::info
:bulb: This article provides solutions and detailed explanations for GCP ACE certification problems.
:::

--------------------------

1. Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and must be able to determine who accessed a given instance. What should you do?
A. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance.
B. Ask each member of the team to generate a new SSH key pair and to send you their public key. Use a configuration management tool to deploy those keys on each instance.
C. Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the "compute.osAdminLogin" role to the Google group corresponding to this team.
D. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance.
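As a sketch of what the role grant in option C involves, the role can be bound to the team's Google group at the project level with gcloud; the project ID and group address below are placeholders, not values from the question:

```shell
# Hypothetical values: replace with your real project ID and Google group.
# Granting roles/compute.osAdminLogin to a group lets every member log in
# with admin (root) privileges via OS Login, and access is auditable per user.
gcloud projects add-iam-policy-binding my-project \
    --member="group:ops-team@example.com" \
    --role="roles/compute.osAdminLogin"
```

Because each member authenticates with their own Google identity, the audit logs can attribute access to a specific person.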
--------------------------
Best answer: C

Option C fits best because it offers the most operational efficiency and security. Each team member creates their own SSH key pair and adds the public key to their Google account. You then grant the "compute.osAdminLogin" role to the team's Google group, giving members administrative login access to the Compute Engine instances. This maximizes operational efficiency: you do not need to configure SSH keys on each instance individually, and members can easily be added to or removed from the Google group. Because each member authenticates with their own Google account, access to a given instance can be traced back to a specific user, and no private key ever has to be sent over email or another insecure channel.
Option A is unsuitable because it distributes a single private key to every team member, which risks the key being leaked or lost, and because configuring the key on every instance reduces operational efficiency.
Option B is unsuitable because it requires every member to send you their public key and requires a configuration management tool to deploy those keys to every instance, which adds operational cost and reduces efficiency.
Option D is unsuitable because a shared project-wide public SSH key makes it impossible to tell which member accessed a given instance, and it again requires handing the same private key to every team member, risking leakage.

--------------------------

2. You need to create a custom VPC with a single subnet. The subnet's range must be as large as possible. Which range should you use?
A. 0.0.0.0/0
B. 10.0.0.0/8
C. 172.16.0.0/12
D. 192.168.0.0/16

--------------------------
Best answer: B

10.0.0.0/8 is the largest private (RFC 1918) range, providing more than 16,000,000 IP addresses, which makes it the right choice when the subnet must be as large as possible.
A. 0.0.0.0/0 covers every possible IP address; it is not a valid subnet range and does not meet the requirement.
C. 172.16.0.0/12 provides about 1,000,000 IP addresses, fewer than option B.
D. 192.168.0.0/16 provides about 65,000 IP addresses, fewer than option B.

--------------------------

3. You want to select and configure a cost-effective solution for relational data on Google Cloud Platform. You are working with a small set of operational data in one geographic location. You need to support point-in-time recovery. What should you do?
A. Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected.
B. Select Cloud SQL (MySQL). Select the create failover replicas option.
C. Select Cloud Spanner. Set up your instance with 2 nodes.
D. Select Cloud Spanner. Set up your instance as multi-regional.

--------------------------
Best answer: A

The best solution here is Cloud SQL (MySQL) with binary logging enabled. Cloud SQL (MySQL) is the cost-effective choice for a small relational dataset in a single location, and enabling binary logging is what makes point-in-time recovery possible: the retained binary logs let you restore the database to a specific moment in time. The other options fall short for these reasons:
B. Failover replicas provide high availability, but they do not provide point-in-time recovery.
C. Cloud Spanner is a distributed database that offers high availability and performance, but it is not the most cost-effective option, and it is only warranted when data must be served from multiple geographic locations.
D.
The main reason to set up Cloud Spanner as multi-regional is high availability and cross-region data access. Here the data lives in a single geographic location, so Cloud Spanner is unnecessary.

--------------------------

4. You want to configure autohealing for network load balancing for a group of Compute Engine instances that run in multiple zones, using the fewest possible steps. You need to configure re-creation of VMs if they are unresponsive after 3 attempts of 10 seconds each. What should you do?
A. Create an HTTP load balancer with a backend configuration that references an existing instance group. Set the health check to healthy (HTTP)
B. Create an HTTP load balancer with a backend configuration that references an existing instance group. Define a balancing mode and set the maximum RPS to 10.
C. Create a managed instance group. Set the Autohealing health check to healthy (HTTP)
D. Create a managed instance group. Verify that the autoscaling setting is on.

--------------------------
Best answer: C

Option C is the best answer. To get autohealing, you create a managed instance group and configure an autohealing health check on it. When a VM fails the health check, the group deletes and re-creates it, keeping the instances available.
Option A is a poor choice because it configures an HTTP health check for load balancing but no autohealing action; a crashed or failed VM would not be re-created.
Option B is a poor choice because it only defines a balancing mode and request rate, not a health check or an autohealing action.
Option D is a poor choice because it only confirms that autoscaling is enabled and sets no autohealing health check. Autoscaling changes the number of instances in response to load, but it does not re-create unresponsive instances.

--------------------------

5. You are using multiple configurations for gcloud. You want to review the configured Kubernetes Engine cluster of an inactive configuration using the fewest possible steps. What should you do?
A. Use gcloud config configurations describe to review the output.
B. Use gcloud config configurations activate and gcloud config list to review the output.
C. Use kubectl config get-contexts to review the output.
D. Use kubectl config use-context and kubectl config view to review the output.
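For reference, the commands the options refer to can be sketched as follows; `my-inactive-config` is a placeholder configuration name, not part of the question:

```shell
# List every kubectl context (cluster/user pairs) known to your kubeconfig,
# including clusters that belong to inactive gcloud configurations.
kubectl config get-contexts

# Alternative paths mentioned in the other options:
gcloud config configurations describe my-inactive-config
gcloud config configurations activate my-inactive-config && gcloud config list
```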
--------------------------
Best answer: C

Use kubectl config get-contexts to review the Kubernetes Engine clusters of configurations that are not active.
Option A does not work because it only shows information about the gcloud configuration itself, not the Kubernetes Engine cluster.
Option B does not work because after switching to the inactive configuration, gcloud config list would show the wrong Kubernetes Engine cluster.
Option D does not work because it requires running kubectl config use-context before viewing the cluster, which takes more steps.

--------------------------

6. Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google's recommended practices. Which storage option should you use?
A. Multi-Regional Storage
B. Regional Storage
C. Nearline Storage
D. Coldline Storage

--------------------------
Best answer: D

Coldline Storage is recommended. Coldline Storage is a low-cost option intended for long-term, rarely accessed data such as backups and archives, while still allowing fast access when the data is needed. Its economics make it the ideal choice for long-term storage and disaster recovery backups.
The other options are unsuitable because:
A. Multi-Regional Storage is for frequently accessed data that needs high availability.
B. Regional Storage is for data that is read and written regularly within one region.
C. Nearline Storage is for data accessed infrequently (around once a month) but more often than backup archives.

--------------------------

7. Several employees at your company have been creating projects with Cloud Platform and paying for it with their personal credit cards, which the company reimburses. The company wants to centralize all these projects under a single, new billing account. What should you do?
A. Contact cloud-billing@google.com with your bank account details and request a corporate billing account for your company.
B. Create a ticket with Google Support and wait for their call to share your credit card details over the phone.
C. In the Google Platform Console, go to the Resource Manager and move all projects to the root Organization.
D. In the Google Cloud Platform Console, create a new billing account and set up a payment method.

--------------------------
Best answer: D

Option A is unsuitable because emailing Google your bank details is not how billing is set up, and it does not create a new billing account.
Option B is unsuitable because opening a support ticket and sharing credit card details over the phone is not how billing is set up, and it also does not create a new billing account.
Option C is unsuitable because moving all projects to the root organization does not create a new billing account.
The best option is therefore D: create a new billing account in the Google Cloud Platform Console, set up a payment method, and centralize all of the company's Cloud Platform projects under it.

--------------------------

8.
You have an application that looks for its licensing server on the IP 10.0.3.21. You need to deploy the licensing server on Compute Engine. You do not want to change the configuration of the application and want the application to be able to reach the licensing server. What should you do?
A. Reserve the IP 10.0.3.21 as a static internal IP address using gcloud and assign it to the licensing server.
B. Reserve the IP 10.0.3.21 as a static public IP address using gcloud and assign it to the licensing server.
C. Use the IP 10.0.3.21 as a custom ephemeral IP address and assign it to the licensing server.
D. Start the licensing server with an automatic ephemeral IP address, and then promote it to a static internal IP address.

--------------------------
Best answer: A

Because the application connects to the licensing server at a specific IP address, that address must be reserved. The best practice is to reserve 10.0.3.21 as a static internal IP address with gcloud and assign it to the licensing server. Since the address is static, it will not change when the VM restarts, so the application can always reach the server. Option B is not the best answer because 10.0.3.21 is a private (RFC 1918) address that cannot be a public IP, and exposing a licensing server publicly would be a security risk. Option C is only a temporary arrangement, since an ephemeral address can change. Option D involves changing the server's IP address while it is running, which could break the application's connection.

--------------------------

9. You are deploying an application to App Engine. You want the number of instances to scale based on request rate. You need at least 3 unoccupied instances at all times. Which scaling type should you use?
A. Manual Scaling with 3 instances.
B. Basic Scaling with min_instances set to 3.
C. Basic Scaling with max_instances set to 3.
D. Automatic Scaling with min_idle_instances set to 3.

--------------------------
Best answer: D

Option D is the best answer because automatic scaling with min_idle_instances is the App Engine mechanism that matches the requirement. Option A is unsuitable because manual scaling requires manual configuration and management and does not scale with request rate. Option B is unsuitable because basic scaling adds instances in response to requests but does not keep at least 3 idle instances available. Option C is unsuitable because capping the instance count at 3 can leave too little capacity when the request rate is high. Option D guarantees the required number of idle instances at all times.

--------------------------

10. You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new project, using the fewest possible steps. What should you do?
A. Use gcloud iam roles copy and specify the production project as the destination project.
B.
Use gcloud iam roles copy and specify your organization as the destination organization.
C. In the Google Cloud Platform Console, use the 'create role from role' functionality.
D. In the Google Cloud Platform Console, use the 'create role' functionality and select all applicable permissions.

--------------------------
Best answer: A

The gcloud iam roles copy command copies the existing custom roles to the destination project, which is the simplest and fastest approach. Option B does not apply here because it copies the roles to the organization level, not to the project. Options C and D both require selecting permissions by hand, which is far less convenient than copying the existing roles directly.

--------------------------

11. You need a dynamic way of provisioning VMs on Compute Engine. The exact specifications will be in a dedicated configuration file. You want to follow Google's recommended practices. Which method should you use?
A. Deployment Manager
B. Cloud Composer
C. Managed Instance Group
D. Unmanaged Instance Group

--------------------------
Best answer: A

Deployment Manager is Google's recommended way to provision VMs dynamically from a configuration file. A Deployment Manager configuration specifies the complete configuration of the Compute Engine instances with no interactive intervention required. Cloud Composer is for orchestrating workflows, a Managed Instance Group automatically scales to handle larger workloads, and an Unmanaged Instance Group requires manual maintenance.

--------------------------

12. You have a Dockerfile that you need to deploy on Kubernetes Engine. What should you do?
A. Use kubectl app deploy <dockerfilename>.
B. Use gcloud app deploy <dockerfilename>.
C. Create a docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
D. Create a docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.

--------------------------
Best answer: C

Build a Docker image from the Dockerfile, push it to Container Registry, then write a Deployment YAML file that points at that image and create the Deployment with kubectl.
Options A and B are not Kubernetes Engine commands and cannot be used to deploy a Dockerfile.
Option D is wrong because the image should be uploaded to Container Registry, not Cloud Storage.

--------------------------

13.
Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible. What should you do?
A. Download and deploy the Jenkins Java WAR to App Engine Standard.
B. Create a new Compute Engine instance and install Jenkins through the command line interface.
C. Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image.
D. Use GCP Marketplace to launch the Jenkins solution.

--------------------------
Best answer: D

GCP Marketplace offers a ready-made Jenkins solution that can be deployed in a few clicks, which is the fastest route. The other options can work but require more manual setup and configuration.
Option A deploys the Jenkins Java WAR to App Engine Standard, which you would have to configure and manage yourself.
Option B requires installing and configuring Jenkins by hand, which takes more time and ongoing management.
Option C requires creating a Kubernetes cluster and deploying the Jenkins Docker image, which again means more configuration and management.

--------------------------

14. You need to update a deployment in Deployment Manager without any resource downtime in the deployment. Which command should you use?
A. gcloud deployment-manager deployments create --config <deployment-config-path>
B. gcloud deployment-manager deployments update --config <deployment-config-path>
C. gcloud deployment-manager resources create --config <deployment-config-path>
D. gcloud deployment-manager resources update --config <deployment-config-path>

--------------------------
Best answer: B

gcloud deployment-manager deployments update updates an existing deployment in Deployment Manager without interrupting its resources, and it can update the affected resources selectively for finer control.
The other options:
A. gcloud deployment-manager deployments create creates a new deployment rather than updating one.
C. gcloud deployment-manager resources create would create new resources; it is not the supported way to update a deployment.
D. gcloud deployment-manager resources update would target individual resources rather than the deployment; the deployments update command is the supported way to apply a new configuration.

--------------------------

15. You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing. What should you do?
A. Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.
B. Use the command line to run a dry run query to estimate the number of bytes read.
Then convert that bytes estimate to dollars using the Pricing Calculator.
C. Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.
D. Run a select count (*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.

--------------------------
Best answer: B

Explanation: run a dry run query to estimate the number of bytes that will be read, then convert that estimate to a cost with the Pricing Calculator. On-demand pricing charges by bytes read, so the other options miss the mark. Option A switches to Flat-Rate pricing, which is unnecessary. Option C estimates the bytes returned, which is not what you are billed for. Option D only estimates the number of records, not the bytes that need to be read.

--------------------------

16. You have a single binary application that you want to run on Google Cloud Platform. You decided to automatically scale the application based on underlying infrastructure CPU usage. Your organizational policies require you to use virtual machines directly. You need to ensure that the application scaling is operationally efficient and completed as quickly as possible. What should you do?
A. Create a Google Kubernetes Engine cluster, and use horizontal pod autoscaling to scale the application.
B. Create an instance template, and use the template in a managed instance group with autoscaling configured.
C. Create an instance template, and use the template in a managed instance group that scales up and down based on the time of day.
D. Use a set of third-party tools to build automation around scaling the application up and down, based on Stackdriver CPU usage monitoring.

--------------------------
Best answer: B

Analysis: the best option here is a managed instance group. Managed instance groups are Google Cloud Platform's recommended autoscaling mechanism because they can scale the number of instances up and down quickly and reliably in response to infrastructure CPU usage. Using an instance template, the group can start new VMs rapidly while honoring the configured minimum and maximum instance counts. The other options fall short:
A. Kubernetes Engine does autoscale, but it would mean running the application in containers rather than on virtual machines directly, which violates the organizational policy.
C. This option scales on a schedule rather than in response to CPU usage, so it does not meet the requirement.
D. Third-party tooling adds complexity and cost, and is unlikely to be as fast and reliable as GCP's native managed instance groups.

--------------------------

17.
You are analyzing Google Cloud Platform service costs from three separate projects. You want to use this information to create service cost estimates by service type, daily and monthly, for the next six months using standard query syntax. What should you do?
A. Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.
B. Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.
C. Export your transactions to a local file, and perform analysis with a desktop tool.
D. Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.

--------------------------
Best answer: D

BigQuery is an analytics tool built for large datasets, so exporting your GCP billing data to a BigQuery dataset is a good way to analyze service costs. In BigQuery you can run complex analyses and reports over your costs with standard SQL, and use the built-in time functions to produce daily and monthly cost reports. The other options allow some analysis, but they lack BigQuery's standard SQL query capability over large billing datasets, so they are not the best answer.

--------------------------

18. You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year from their creation. How should you set up the policy?
A. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365-90)
B. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.
C. Use gsutil rewrite and set the Delete action to 275 days (365-90).
D. Use gsutil rewrite and set the Delete action to 365 days.
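A lifecycle policy along the lines of option B can be written as a JSON configuration and applied with `gsutil lifecycle set`; this is a sketch, and the bucket name is a placeholder:

```shell
# Sketch of option B's policy as an Object Lifecycle Management config.
# Both "age" conditions count from each object's creation time: objects
# move to Coldline at 90 days and are deleted at 365 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 90}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
EOF

# Apply it to the bucket (placeholder name); requires gsutil credentials:
# gsutil lifecycle set lifecycle.json gs://my-video-bucket
```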
--------------------------
Best answer: B

Explanation: to move videos in a specific Cloud Storage regional bucket to Coldline after 90 days and delete them one year after creation, use Cloud Storage Object Lifecycle Management with Age conditions that define when each action fires.
Option A is incorrect because it sets the Delete action to 275 days. Age conditions are always measured from the object's creation time, so deletion one year after creation means an age of 365 days, not 275.
Options C and D are incorrect because gsutil rewrite is unnecessary for this task; Object Lifecycle Management is the mechanism for this kind of policy.
The correct answer is therefore B: use Object Lifecycle Management with Age conditions, a SetStorageClass action at 90 days, and a Delete action at 365 days.

--------------------------

19. You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses this service account instead of the default Compute Engine service account. What should you do?
A. When creating the VM via the web console, specify the service account under the 'Identity and API Access' section.
B. Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.
C. Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.
D. Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.

--------------------------
Best answer: A

Option explanations:
A. When creating the VM via the web console, specify the service account under the 'Identity and API Access' section. This is the correct option: attaching the service account to the VM lets the VM obtain credentials for that account automatically from the metadata server, with no key file to distribute or protect. If you create the VM from the gcloud command line instead, the equivalent is:
```shell
gcloud compute instances create INSTANCE_NAME \
    --service-account=SERVICE_ACCOUNT_EMAIL \
    --scopes=https://www.googleapis.com/auth/cloud-platform
```
C. Download the service account's JSON private key and add it to the VM's custom metadata under the key compute-engine-service-account. This is wrong: Compute Engine does not read a service account out of custom metadata, so the VM would still run as the default Compute Engine service account, and storing a private key in metadata exposes it to anything that can read the metadata.
B.
Download the service account's JSON private key and add it to the project metadata under the key compute-engine-service-account. This option is wrong: compute-engine-service-account is not a key that Compute Engine reads to select a service account, so adding a private key there accomplishes nothing except exposing the key to everyone who can read project metadata.
D. Download the service account's JSON private key, then ssh into the VM after creation and save the key under ~/.gcloud/compute-engine-service-account.json. This can be made to work by pointing applications at the key file, but it requires manually copying a key onto every VM, adds operational complexity, and risks the key being deleted or leaked. Attaching the service account to the VM itself, rather than distributing key files, is the simpler and safer approach.

--------------------------

20. You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number of steps. What should you do?
A. Install an RDP client on your desktop. Verify that a firewall rule for port 3389 exists.
B. Install an RDP client on your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance.
C. Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in.
D. Set a Windows username and password in the GCP Console. Verify that a firewall rule for port 3389 exists. Click the RDP button in the GCP Console, and supply the credentials to log in.

--------------------------
Best answer: D

Setting a Windows username and password in the GCP Console, confirming that a firewall rule allows RDP traffic on port 3389, and then clicking the RDP button in the console is the path with the fewest steps: the console generates the credentials and launches the RDP session for you.
Option A verifies the firewall rule but never creates Windows credentials, so you could not log in.
Option B creates credentials but skips checking the port 3389 firewall rule; if that rule is missing, the RDP connection will fail.
Option C is incorrect because it checks port 22, which is used by SSH; RDP uses port 3389.

--------------------------

21. You have one GCP account running in your default region and zone and another account running in a non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface. What should you do?
A. Create two configurations using gcloud config configurations create [NAME].
Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances.
B. Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.
C. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.
D. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.

--------------------------
Best answer: A

Option A proposes creating two configurations and then switching between the accounts with gcloud config configurations activate [NAME] before running the commands that start the Compute Engine instances. This lets you give each account its own default region and zone and switch between them easily.
Options B and C suggest starting the instances with gcloud configurations list and gcloud config list, but those commands only display information; they do not switch between accounts or start instances.
Option D activates two configurations and then runs gcloud configurations list, which is not even a valid gcloud command; the correct form is gcloud config configurations list, and in any case listing configurations does not start an instance.

--------------------------

22. You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined resources are properly met before committing it to the project. You want the most rapid feedback on your changes. What should you do?
A. Use granular logging statements within a Deployment Manager template authored in Python.
B. Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.
C. Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.
D. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.
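The preview workflow named in option D can be sketched as follows; the deployment and config file names are placeholders:

```shell
# Create the deployment in preview mode: Deployment Manager expands the
# template and checks resource dependencies without creating anything.
gcloud deployment-manager deployments create my-deployment \
    --config my-config.yaml --preview

# Inspect the previewed (intended) state of the resources:
gcloud deployment-manager deployments describe my-deployment

# Then either apply the previewed changes or discard them:
# gcloud deployment-manager deployments update my-deployment
# gcloud deployment-manager deployments delete my-deployment
```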
--------------------------
Best answer: D

Executing the Deployment Manager template with the --preview option in the same project and observing the state of the interdependent resources is the fastest way to get feedback. Preview mode expands the template and shows the intended changes without actually creating or modifying any resources, so you can quickly check that the template behaves as expected.
Option A (granular logging statements in a Python-authored template) might produce more detailed logs, but it takes longer to determine whether all resource dependencies are properly met.
Option B (monitoring Deployment Manager activity on the Stackdriver Logging page) can surface rich detail, but only after an actual execution, so it is not the fastest feedback.
Option C (running the template against a separate project with the same configuration and watching for failures) is not the fastest either, because it means performing a full deployment in another project, which takes more time.

--------------------------

23. You have a project for your App Engine application that serves a development environment. The required testing has succeeded and you want to create a new project to serve as your production environment. What should you do?
A. Use gcloud to create the new project, and then deploy your application to the new project.
B. Use gcloud to create the new project and to copy the deployed application to the new project.
C. Create a Deployment Manager configuration file that copies the current App Engine deployment into a new project.
D. Deploy your application again using gcloud and specify the project parameter with the new project name to create the new project.

--------------------------
Best answer: A

When working with App Engine, the standard way to create a production environment is to create a new project with gcloud and then deploy the application into it with gcloud. This is the simplest, standard approach and needs neither Deployment Manager nor any copying of the application.
Option B is not feasible because a deployed App Engine application cannot simply be copied between projects.
Option C might be made to work but is more complicated than using the gcloud commands directly.
Option D looks similar to A, but gcloud app deploy does not create a project; the project must already exist before you can deploy to it.

--------------------------

24. You need to configure IAM access audit logging in BigQuery for external auditors. You want to follow Google-recommended practices. What should you do?
A. Add the auditors group to the 'logging.viewer' and 'bigQuery.dataViewer' predefined IAM roles.
B. Add the auditors group to two new custom IAM roles.
C. Add the auditor user accounts to the 'logging.viewer' and 'bigQuery.dataViewer' predefined IAM roles.
D. Add the auditor user accounts to two new custom IAM roles.
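For reference, granting predefined roles to a group looks like this in gcloud; the project ID and group address are placeholders:

```shell
# Hypothetical project and group; roles/logging.viewer and
# roles/bigquery.dataViewer are the predefined roles named in the options.
gcloud projects add-iam-policy-binding my-project \
    --member="group:auditors@example.com" \
    --role="roles/logging.viewer"

gcloud projects add-iam-policy-binding my-project \
    --member="group:auditors@example.com" \
    --role="roles/bigquery.dataViewer"
```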
--------------------------
Best answer: A

Explanation: following Google's recommendations, the auditors should be added as a group to the predefined 'logging.viewer' and 'bigQuery.dataViewer' IAM roles so that they can view the IAM access audit logs in BigQuery. Option A is therefore correct.
Option B is unsuitable because there is no need to create custom IAM roles when predefined roles already cover the requirement.
Option C is unsuitable because granting roles to individual user accounts instead of a group goes against Google's recommendation and is harder to manage as auditors come and go.
Option D is unsuitable for both reasons: it uses individual accounts and unnecessary custom roles, which only adds management overhead.

--------------------------

25. You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow Google-recommended practices. What should you do?
A. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/devstorage.write_only'.
B. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/cloud-platform'.
C. Create a service account and add it to the IAM role 'storage.objectCreator' for that bucket.
D. Create a service account and add it to the IAM role 'storage.objectAdmin' for that bucket.

--------------------------
Best answer: C

First we need a service account, a Google identity managed by Google Cloud Platform that is used for authentication. Then the service account must be granted the specific permission it needs: here, the ability to create objects in one particular bucket. Adding the service account to the 'storage.objectCreator' IAM role on that bucket does exactly that, and granting fine-grained IAM roles rather than relying on broad access scopes is the Google-recommended practice.
Option A is incorrect: there is no 'devstorage.write_only' Cloud Storage scope, and access scopes are a legacy mechanism that should not substitute for IAM roles.
Option B is incorrect because the 'cloud-platform' scope is far too broad; it would expose access to resources across Google Cloud Platform rather than just the bucket.
Option D is unnecessary and not best practice: 'storage.objectAdmin' grants full control over objects, which exceeds the requirement to write objects.

--------------------------

26. You have sensitive data stored in three Cloud Storage buckets and have enabled data access logging. You want to verify activities for a particular user for these buckets, using the fewest possible steps. You need to verify the addition of metadata labels and which files have been viewed from those buckets. What should you do?
A. Using the GCP Console, filter the Activity log to view the information.
B. Using the GCP Console, filter the Stackdriver log to view the information.
C.
View the bucket in the Storage section of the GCP Console.
D. Create a trace in Stackdriver to view the information.

--------------------------
Best answer: B

In this scenario we want to review a specific user's activity against the three Cloud Storage buckets, including the addition of metadata labels and which files were viewed. The Stackdriver log is the easiest place to do this: data access logs for Cloud Storage are recorded there in detail, and filters make it simple to narrow the records down to one user. Option B is therefore the best fit.
The other options:
A. The Activity log in the GCP Console can be filtered by user, but it does not surface enough detail for this scenario.
C. The Storage section only shows details about the buckets themselves; it contains no per-user activity records.
D. A Stackdriver trace follows requests through an application; it does not provide Cloud Storage bucket activity records.

--------------------------

27. You are the project owner of a GCP project and want to delegate control to colleagues to manage buckets and files in Cloud Storage. You want to follow Google-recommended practices. Which IAM roles should you grant your colleagues?
A. Project Editor
B. Storage Admin
C. Storage Object Admin
D. Storage Object Creator

--------------------------
Best answer: B

The Storage Admin role can manage Cloud Storage buckets themselves (create, delete, change settings) as well as the objects inside them. It is the recommended role for delegating full control over Cloud Storage in GCP.
The other options:
A. Project Editor can modify all resources in the project, not just Cloud Storage, which grants far more permission than necessary.
C. Storage Object Admin can manage the objects in buckets but not the buckets themselves, so it cannot create or delete buckets.
D. Storage Object Creator can only create objects; it cannot delete or modify them, let alone manage buckets.

--------------------------

28. You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps. What should you do?
A. Create a signed URL with a four-hour expiration and share the URL with the company.
B. Set object access to 'public' and use object lifecycle management to remove the object after four hours.
C. Configure the storage bucket as a static website and furnish the object's URL to the company.
Delete the object from the storage bucket after four hours.
D. Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.

--------------------------
Best answer: A

Create a signed URL with a four-hour expiration and share the URL with the company.
Option A is the most secure and simplest choice. A signed URL restricts access cryptographically, so only whoever holds the URL can reach the sensitive data, and the four-hour expiration guarantees that access closes after a limited window, minimizing the risk of leakage. Among the other options, option B makes the object 'public', exposing the data to anyone and increasing risk. In option C the website URL is unauthenticated, so an attacker who obtains it can read the sensitive data. Option D adds management burden: a new bucket must be created, the object copied into it, and the bucket deleted by hand afterwards.

--------------------------

29. You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?
A. Deploy the monitoring pod in a StatefulSet object.
B. Deploy the monitoring pod in a DaemonSet object.
C. Reference the monitoring pod in a Deployment object.
D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

--------------------------
Best answer: B

Because every node must run one monitoring pod, a DaemonSet object is the right fit: a DaemonSet guarantees that one copy of the pod runs on each node, including nodes the cluster autoscaler adds later. A StatefulSet, by contrast, is for stateful applications that need stable identities, and a Deployment schedules a desired number of replicas with no guarantee of one pod per node. Finally, the "cluster initializer" in option D is for cluster-level initial configuration, not for deploying pods.

--------------------------

30. You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do?
A. Enable the Cloud Pub/Sub API in the API Library on the GCP Console.
B. Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it.
C. Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed.
D.
Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.

--------------------------
Best answer: A

Before an App Engine application can use Cloud Pub/Sub messages, the Cloud Pub/Sub API must be enabled. The right way to do that is through the API Library in the GCP Console, after which the application can use the API. Option B is incorrect because the API is not enabled automatically when a service account accesses it; you must enable it yourself. Option C is incorrect because automatically enabling every API an application uses is not something Deployment Manager does. Option D is incorrect because granting the service account the Cloud Pub/Sub Admin role does not enable the API, and API enablement cannot be done implicitly from the application's first connection.

--------------------------

31. You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver Monitoring dashboard. What should you do?
A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.
B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.
C. Configure a single Stackdriver account, and link all projects to the same account.
D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.

--------------------------
Best answer: C

Explanation: option C is correct because a single Stackdriver account can be configured for multiple projects, consolidating the reporting for all resources in one place. Each project is linked to that account, so the monitoring reports for every project are visible from the same Stackdriver dashboard. This is simpler than the other options because it requires neither one Stackdriver account per project nor a complex Shared VPC and permission setup across projects.
Option A is not the best choice because it requires configuring a Shared VPC, which must be managed across all participating projects, and network connectivity is unrelated to consolidating monitoring.
Option B is not the best choice because it creates a Stackdriver account per project and grants each project's service account permissions on every other project's Stackdriver account, which quickly becomes tedious and hard to manage.
Option D is wrong because Stackdriver Groups organize resources within an account by criteria such as labels; adding project names as group criteria does not link those projects into the account, so their resources would not be monitored.

--------------------------

32. You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of the VM should run per GCP project. How should you configure the instance group?
A.
Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.

--------------------------
Best answer: B

Since exactly one VM instance should run the application, autoscaling must be turned off and both the minimum and maximum number of instances set to 1. Option B does exactly that; the managed instance group will still re-create the instance if it fails, so the application stays running. Options A and C enable autoscaling, which is unnecessary here, and options C and D allow up to two instances, which violates the single-instance requirement.

--------------------------

33. You want to verify the IAM users and roles assigned within a GCP project named my-project. What should you do?
A. Run gcloud iam roles list. Review the output section.
B. Run gcloud iam service-accounts list. Review the output section.
C. Navigate to the project and then to the IAM section in the GCP Console. Review the members and roles.
D. Navigate to the project and then to the Roles section in the GCP Console. Review the roles and status.

--------------------------
Best answer: C

Viewing and managing IAM users and roles in the GCP Console is the most direct method. Option C gives the correct steps: open the project, go to the IAM section, and review the members and the roles assigned to them.
Options A and B list different information: option A lists IAM roles rather than members, and option B lists service accounts.
Option D shows the list of roles in the project but not which members they are assigned to. Option C is therefore the best choice.

--------------------------

34. You have one project called proj-sa where you manage all your service accounts. You want to be able to use a service account from this project to take snapshots of VMs running in another project called proj-vm. What should you do?
A. Download the private key from the service account, and add it to each VMs custom metadata.
B. Download the private key from the service account, and add the private key to each VM's SSH keys.
C. Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.
D. When creating the VMs, set the service account's API scope for Compute Engine to read/write.
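The kind of cross-project role grant discussed here can be sketched with gcloud; the service account email below is a hypothetical example:

```shell
# Grant a service account from proj-sa the Compute Storage Admin role
# on proj-vm, so it can create disk snapshots there. The service account
# email is a placeholder, not from the question.
gcloud projects add-iam-policy-binding proj-vm \
    --member="serviceAccount:snapshot-sa@proj-sa.iam.gserviceaccount.com" \
    --role="roles/compute.storageAdmin"
```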
--------------------------

Best answer: C

In GCP, to let a service account operate in another project, you grant that service account an IAM role with sufficient permissions in the target project. Here, the service account from proj-sa should be granted the Compute Storage Admin IAM role in proj-vm so that it can take snapshots there. Options A and B are incorrect because placing a private key in VM custom metadata or SSH keys is insecure. Option D is also incorrect: API scopes only limit what a VM's attached service account may call; they do not grant the snapshot permissions the service account needs in the other project.

--------------------------

35. You created a Google Cloud Platform project with an App Engine application inside the project. You initially configured the application to be served from the us-central region. Now you want the application to be served from the asia-northeast1 region. What should you do?

A. Change the default region property setting in the existing GCP project to asia-northeast1.
B. Change the region property setting in the existing App Engine application from us-central to asia-northeast1.
C. Create a second App Engine application in the existing GCP project and specify asia-northeast1 as the region to serve your application.
D. Create a new GCP project and create an App Engine application inside this new project. Specify asia-northeast1 as the region to serve your application.

--------------------------

Best answer: D

An App Engine application's region is chosen when the application is created and cannot be changed afterwards, and each GCP project can contain only one App Engine application. The only way to serve the application from asia-northeast1 is therefore to create a new project and create the App Engine application there with asia-northeast1 as its region. Option A is incorrect because a project's default region setting does not move an existing App Engine application. Option B is incorrect because the region of an existing App Engine application is immutable. Option C is incorrect because a project cannot contain a second App Engine application.

--------------------------

36. You need to grant access for three users so that they can view and edit table data on a Cloud Spanner instance. What should you do?

A. Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to the role.
B. Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to a new group. Add the group to the role.
C. Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to the role.
D. Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to a new group. Add the group to the role.
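The role-inspection command that appears in the options, followed by the kind of group binding the options describe, can be sketched as below. The project ID my-project is from the question; the group address is an assumption:

```shell
# Inspect the permissions carried by the Database User role.
gcloud iam roles describe roles/spanner.databaseUser

# Hypothetical: bind a group containing the three users to that role.
gcloud projects add-iam-policy-binding my-project \
    --member="group:spanner-users@example.com" \
    --role="roles/spanner.databaseUser"
```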
--------------------------

Best answer: B

To give three users view and edit access to table data on a Cloud Spanner instance, the best practice is to add the users to a new group and assign that group the roles/spanner.databaseUser role. Option B does exactly this. Option A inspects the same role but adds the users to it individually rather than through a group, which goes against the recommended practice of managing access via groups. Options C and D use the roles/spanner.viewer role, which only allows viewing, not editing, table data, so they do not satisfy the requirement.

--------------------------

37. You create a new Google Kubernetes Engine (GKE) cluster and want to make sure that it always runs a supported and stable version of Kubernetes. What should you do?

A. Enable the Node Auto-Repair feature for your GKE cluster.
B. Enable the Node Auto-Upgrades feature for your GKE cluster.
C. Select the latest available cluster version for your GKE cluster.
D. Select "Container-Optimized OS (cos)" as a node image for your GKE cluster.

--------------------------

Best answer: B

Google Kubernetes Engine (GKE) supports several Kubernetes versions and releases new ones regularly. To keep a cluster on a supported and stable version, enable Node Auto-Upgrades, which automatically upgrades each node to a stable supported version. Option A, Node Auto-Repair, repairs or replaces unresponsive nodes but does not address version currency. Option C, picking the latest available version, is a one-time choice rather than ongoing maintenance, and the newest release may contain undiscovered issues. Option D, Container-Optimized OS, is a lightweight, secure node image optimized for running containers, but it does not keep the Kubernetes version up to date.

--------------------------

38. You have an instance group that you want to load balance. You want the load balancer to terminate the client SSL session. The instance group is used to serve a public web application over HTTPS. You want to follow Google-recommended practices. What should you do?

A. Configure an HTTP(S) load balancer.
B. Configure an internal TCP load balancer.
C. Configure an external SSL proxy load balancer.
D. Configure an external TCP proxy load balancer.

--------------------------

Best answer: A

The application is a public web application served over HTTPS, and Google recommends the HTTP(S) load balancer for HTTP and HTTPS traffic. It terminates the client SSL session at the load balancer, is external and global, and provides HTTP-aware features such as URL-based routing. Option B is internal only, so it cannot serve a public application. Option C, the SSL proxy load balancer, also terminates SSL, but Google recommends it only for non-HTTP(S) SSL traffic; for HTTPS web applications, the HTTP(S) load balancer is preferred. Option D, the TCP proxy load balancer, does not terminate the SSL session.

--------------------------

39.
You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly. How should you upload the file?

A. Use the GCP Console to transfer the file instead of gsutil.
B. Enable parallel composite uploads using gsutil on the file transfer.
C. Decrease the TCP window size on the machine initiating the transfer.
D. Change the storage class of the bucket from Nearline to Multi-Regional.

--------------------------

Best answer: B

Enabling parallel composite uploads in gsutil splits the single file into smaller parts and uploads those parts in parallel, maximizing use of the available bandwidth so the whole file transfers as fast as the WAN link allows. Option A is a poor choice because uploading a large file through the GCP Console is much slower than gsutil. Option C is a poor choice because decreasing the TCP window size reduces bandwidth utilization and slows the upload. Option D is a poor choice because changing the bucket's storage class to Multi-Regional may increase storage costs but does not increase upload speed.

--------------------------

40. You've deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:

You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should you do?

A. Store the database password inside the Docker image of the container, not in the YAML file.
B. Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.
C. Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.
D. Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.
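The Secret-based approach described in option B can be sketched with kubectl. The deployment name myapp1 comes from the question; the secret name and password value are placeholder assumptions:

```shell
# Hypothetical sketch: store the password in a Secret under the
# key DB_PASSWORD.
kubectl create secret generic myapp1-db --from-literal=DB_PASSWORD='s3cr3t'

# Import the Secret's keys as environment variables on the deployment,
# so the container receives DB_PASSWORD without it appearing in the YAML.
kubectl set env deployment/myapp1 --from=secret/myapp1-db
```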
--------------------------

Best answer: B

To keep sensitive information out of the YAML file, the password should be stored in a Kubernetes Secret, which is the Google-recommended practice. A Secret is a Kubernetes resource object for storing and managing sensitive information: the password lives in the Secret and is referenced through the container's environment variables. Populating DB_PASSWORD from the Secret keeps the password out of the manifest and makes it easy to rotate. Option A is not the best choice because baking the password into the Docker image exposes it to anyone who obtains the image. Option C works mechanically, but Secrets are the intended home for sensitive data, while ConfigMaps are meant for non-confidential configuration. Option D is not the best choice because storing the password in a persistent volume still carries security risk and adds operational complexity.

--------------------------

41. You are running an application on multiple virtual machines within a managed instance group and have autoscaling enabled. The autoscaling policy is configured so that additional instances are added to the group if the CPU utilization of instances goes above 80%. VMs are added until the instance group reaches its maximum limit of five VMs or until CPU utilization of instances lowers to 80%. The initial delay for HTTP health checks against the instances is set to 30 seconds. The virtual machine instances take around three minutes to become available for users. You observe that when the instance group autoscales, it adds more instances than necessary to support the levels of end-user traffic. You want to properly maintain instance group sizes when autoscaling. What should you do?

A. Set the maximum number of instances to 1.
B. Decrease the maximum number of instances to 3.
C. Use a TCP health check instead of an HTTP health check.
D. Increase the initial delay of the HTTP health check to 200 seconds.

--------------------------

Best answer: D

When VM instances need a few minutes after startup before they can serve traffic, the initial delay of the health check should cover that warm-up period. The initial delay is the time the health checker waits after an instance starts before counting failed checks against it. Here, instances take about three minutes to become available, so raising the HTTP health check's initial delay to 200 seconds gives them time to finish starting and initializing. Waiting until new instances are actually serving prevents the autoscaler from over-provisioning while they are still warming up. Option A caps the group at one instance, which cannot serve a workload that needs multiple VMs. Option B reduces the maximum to 3, which may not handle peak traffic. Option C swaps in a TCP health check, which does not address the problem of adding too many instances.

--------------------------

42. You need to select and configure compute resources for a set of batch processing jobs. These jobs take around 2 hours to complete and are run nightly.
You want to minimize service costs. What should you do?

A. Select Google Kubernetes Engine. Use a single-node cluster with a small instance type.
B. Select Google Kubernetes Engine. Use a three-node cluster with micro instance types.
C. Select Compute Engine. Use preemptible VM instances of the appropriate standard machine type.
D. Select Compute Engine. Use VM instance types that support micro bursting.

--------------------------

Best answer: C

Option C uses Compute Engine with preemptible VM instances. Preemptible VMs are priced well below regular VMs and suit short-lived, non-urgent, or flexible workloads such as batch jobs. Their main limitation is that the system can reclaim them at any time, so the jobs must tolerate restarts. Options A and B run the workload on Kubernetes Engine, a managed Kubernetes service where workloads run in containers; Kubernetes suits long-running, elastic workloads, but for a short nightly batch job it adds management overhead without saving cost. Option D uses instance types that support CPU bursting, which handles brief spikes rather than a sustained 2-hour batch run, and such instances can end up costing more than appropriately sized regular VMs, so this does not minimize cost.

--------------------------

43. You recently deployed a new version of an application to App Engine and then discovered a bug in the release. You need to immediately revert to the prior version of the application. What should you do?

A. Run gcloud app restore.
B. On the App Engine page of the GCP Console, select the application that needs to be reverted and click Revert.
C. On the App Engine Versions page of the GCP Console, route 100% of the traffic to the previous version.
D. Deploy the original version as a separate application. Then go to App Engine settings and split traffic between applications so that the original version serves 100% of the requests.

--------------------------

Best answer: C

The App Engine Versions page manages the deployed versions of a service and supports shifting traffic between them, so you can simply route 100% of traffic back to the previous version. The switch takes effect immediately and restores normal operation quickly. On the other options: A, gcloud app restore is not a supported command for rolling back a release. B, there is no Revert workflow on the App Engine page that reroutes traffic to a previous version. D, deploying the original version as a separate application and splitting traffic between applications requires an extra deployment and added complexity, which defeats the goal of an immediate rollback.

--------------------------

44. You want to configure 10 Compute Engine instances for availability when maintenance occurs.
Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?

A. Create an instance template for the instances. Set the 'Automatic Restart' to on. Set the 'On-host maintenance' to Migrate VM instance. Add the instance template to an instance group.
B. Create an instance template for the instances. Set 'Automatic Restart' to off. Set 'On-host maintenance' to Terminate VM instances. Add the instance template to an instance group.
C. Create an instance group for the instances. Set the 'Autohealing' health check to healthy (HTTP).
D. Create an instance group for the instance. Verify that the 'Advanced creation options' setting for 'do not retry machine creation' is set to off.

--------------------------

Best answer: A

Option A satisfies both requirements: instances restart automatically after a crash, and they live-migrate during host maintenance, so they stay available. Concretely: first, create an instance template with 'Automatic Restart' set to on, so a crashed instance is restarted automatically. Second, set 'On-host maintenance' to 'Migrate VM instance', so instances are live-migrated to another host during maintenance instead of going down. Finally, add the template to an instance group so all the instances are created consistently from it. Option B is wrong because turning 'Automatic Restart' off means crashed instances stay down, and terminating VMs during host maintenance destroys availability. Option C only configures a health check; it sets up neither automatic restart nor migration during maintenance. Option D only controls whether machine creation is retried, which addresses neither requirement.

--------------------------

45. You host a static website on Cloud Storage. Recently, you began to include links to PDF files on this site. Currently, when users click on the links to these PDF files, their browsers prompt them to save the file onto their local system. Instead, you want the clicked PDF files to be displayed within the browser window directly, without prompting the user to save the file locally. What should you do?

A. Enable Cloud CDN on the website frontend.
B. Enable 'Share publicly' on the PDF file objects.
C.
Set Content-Type metadata to application/pdf on the PDF file objects.
D. Add a label to the storage bucket with a key of Content-Type and value of application/pdf.

--------------------------

Best answer: C

The Content-Type header returned with an object determines how the browser handles it. When Content-Type is set to application/pdf, the browser will typically render the PDF in the browser window instead of prompting a download. Concretely: in the Cloud Storage console, browse to the bucket holding the PDFs, select each PDF object, and set its Content-Type metadata to application/pdf; the browser then knows how to handle the file and displays it inline. Option A is not a good choice because Cloud CDN accelerates delivery but does not control how the browser handles the file. Option B is not a good choice because 'Share publicly' only makes the object publicly accessible; it does not change download behavior. Option D is also not a good choice because bucket labels are organizational metadata; they do not tell the browser how to handle a file.

--------------------------

46. You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to have 8 GB of memory. What should you do?

A. Rely on live migration to move the workload to a machine with more memory.
B. Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.
C. Stop the VM, change the machine type to n1-standard-8, and start the VM.
D. Stop the VM, increase the memory to 8 GB, and start the VM.

--------------------------

Best answer: D

Only the memory needs to grow, not the vCPU count, and a stopped VM's machine type can be edited to a custom machine type with exactly the memory required. Option D does this directly: stop the VM, raise its memory to 8 GB, and start it again. Option A is not a good choice because live migration moves a VM between hosts transparently; it does not change the VM's memory size. Option B is not correct because adding metadata to the VM does not change its actual memory. Option C would also require stopping the VM but over-provisions: n1-standard-8 has 8 vCPUs and 30 GB of memory, far more than the required 8 GB, so it is not the best answer.

--------------------------

47. You have production and test workloads that you want to deploy on Compute Engine. Production VMs need to be in a different subnet than the test VMs. All the VMs must be able to reach each other over Internal IP without creating additional routes. You need to set up VPC and the 2 subnets. Which configuration meets these requirements?

A. Create a single custom VPC with 2 subnets.
Create each subnet in a different region and with a different CIDR range.
B. Create a single custom VPC with 2 subnets. Create each subnet in the same region and with the same CIDR range.
C. Create 2 custom VPCs, each with a single subnet. Create each subnet in a different region and with a different CIDR range.
D. Create 2 custom VPCs, each with a single subnet. Create each subnet in the same region and with the same CIDR range.

--------------------------

Best answer: A

Option A meets every requirement. With a single custom VPC containing two subnets, each with its own CIDR range, the production and test VMs are separated by subnet yet can reach each other over internal IP with no extra routes, because subnets in the same VPC route to each other automatically. Option B does not work because two subnets in the same VPC cannot use the same CIDR range; overlapping ranges cause IP conflicts and routing problems. Option C does not work because putting production and test VMs in separate VPCs means they cannot reach each other over internal IP without additional configuration such as VPC peering or VPN. Option D combines both problems: overlapping CIDR ranges plus two separate VPCs that would need cross-VPC setup for internal connectivity, adding configuration and management complexity.

--------------------------

48. You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated. What should you do?

A. Create a health check on port 443 and use that when creating the Managed Instance Group.
B. Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.
C. In the Instance Template, add the label 'health-check'.
D. In the Instance Template, add a startup script that sends a heartbeat to the metadata server.

--------------------------

Best answer: A

Creating a health check on port 443 (the HTTPS port) and attaching it when creating the managed instance group enables autohealing: the group probes each VM, and any VM that fails the health check is recreated automatically. Option B does not fit because a multi-zone configuration spreads instances across zones for availability; it does not by itself recreate unhealthy VMs. Option C does not fit because adding a label to the template has no effect on health checking; an actual health check must be configured and referenced by the group. Option D does not fit because a heartbeat to the metadata server is not a mechanism the instance group uses to detect failure; autohealing requires a configured health check.

--------------------------

49. Your company has a Google Cloud Platform project that uses BigQuery for data warehousing.
Your data science team changes frequently and has few members. You need to allow members of this team to perform queries. You want to follow Google-recommended practices. What should you do?

A. 1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery jobUser role to the group.
B. 1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery dataViewer user role to the group.
C. 1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery jobUser role to the group.
D. 1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery dataViewer user role to the group.

--------------------------

Best answer: C

The team needs to run BigQuery queries, and Google-recommended practice is to manage access through groups in Cloud Identity rather than granting roles to individual users, especially for a team whose membership changes frequently. The steps are: create a dedicated Google group, add each data scientist's user account to the group, and assign the BigQuery jobUser role to the group. The jobUser role allows members to run jobs, including queries, which is exactly what the question asks for. Option D also uses a group, but dataViewer only allows reading data; it does not allow running query jobs. Options A and B grant roles to individual user accounts, which is harder to maintain as the team changes and goes against the recommended practice.

--------------------------

50. Your company has a 3-tier solution running on Compute Engine. The configuration of the current infrastructure is shown below. Each tier has a service account that is associated with all instances within it. You need to enable communication on TCP port 8080 between tiers as follows: Instances in tier #1 must communicate with tier #2. Instances in tier #2 must communicate with tier #3. What should you do?

A. 1. Create an ingress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.2.0/24) • Protocols: allow all 2.
Create an ingress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.1.0/24) • Protocols: allow all
B. 1. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #2 service account • Source filter: all instances with tier #1 service account • Protocols: allow TCP:8080 2. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #3 service account • Source filter: all instances with tier #2 service account • Protocols: allow TCP:8080
C. 1. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #2 service account • Source filter: all instances with tier #1 service account • Protocols: allow all 2. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #3 service account • Source filter: all instances with tier #2 service account • Protocols: allow all
D. 1. Create an egress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.2.0/24) • Protocols: allow TCP:8080 2. Create an egress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.1.0/24) • Protocols: allow TCP:8080

--------------------------

Best answer: B

Option B creates two ingress rules: one targeting instances with the tier #2 service account, with the source filter set to instances with the tier #1 service account, allowing TCP:8080; and one targeting instances with the tier #3 service account, sourced from instances with the tier #2 service account, also allowing TCP:8080. This is the best answer because it follows the principle of least privilege: only the required port is opened, only from each tier to the next, and only between instances identified by their service accounts rather than between all instances. Ingress rules on the target tiers are also the correct mechanism for admitting this traffic. Options A and C allow all protocols among broad sets of instances instead of just TCP:8080 between the specific tiers, which opens a much wider attack surface. Option D uses egress rules, which control outbound traffic from the sources; it does not create the required ingress allowances on the target tiers.

--------------------------

51. You are given a project with a single Virtual Private Cloud (VPC) and a single subnetwork in the us-central1 region.
There is a Compute Engine instance hosting an application in this subnetwork. You need to deploy a new instance in the same project in the europe-west1 region. This new instance needs access to the application. You want to follow Google-recommended practices. What should you do?

A. 1. Create a subnetwork in the same VPC, in europe-west1. 2. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.
B. 1. Create a VPC and a subnetwork in europe-west1. 2. Expose the application with an internal load balancer. 3. Create the new instance in the new subnetwork and use the load balancer's address as the endpoint.
C. 1. Create a subnetwork in the same VPC, in europe-west1. 2. Use Cloud VPN to connect the two subnetworks. 3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.
D. 1. Create a VPC and a subnetwork in europe-west1. 2. Peer the 2 VPCs. 3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

--------------------------

Best answer: A

A VPC in Google Cloud is a global resource: subnets are regional, but instances in different regions of the same VPC communicate over internal IP with no extra configuration. The Google-recommended approach is therefore to create a subnetwork in the same VPC in europe-west1, create the new instance there, and reach the application via the first instance's private address. Option B creates a second VPC, which would then need peering or VPN before the two instances could talk, adding unnecessary components. Option C adds a Cloud VPN between two subnets of the same VPC, which is unnecessary because subnets of one VPC already route to each other, and it adds configuration complexity. Option D likewise creates and peers a second VPC, which adds configuration and management effort that the single-VPC solution avoids.

--------------------------

52. Your projects incurred more costs than you expected last month. Your research reveals that a development GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What should you do?

A. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.
B. 1.
Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.
C. 1. Go to the GKE console, and delete existing clusters. 2. Recreate a new cluster. 3. Clear the option to enable legacy Stackdriver Logging.
D. 1. Go to the GKE console, and delete existing clusters. 2. Recreate a new cluster. 3. Clear the option to enable legacy Stackdriver Monitoring.

--------------------------

Best answer: A

Disabling the log source for the GKE Container resource in the Logs ingestion window of Stackdriver Logging stops ingestion of those container logs immediately, in a single step. The other options either do not stop the right logs or take extra steps. Option B disables the GKE Cluster Operations resource, which covers cluster operation logs, not the container logs that caused the cost. Options C and D both delete and recreate the cluster, which is far more disruptive, causes downtime, and is not the minimum number of steps; D additionally changes Monitoring rather than Logging.

--------------------------

53. You have a website hosted on App Engine standard environment. You want 1% of your users to see a new test version of the website. You want to minimize complexity. What should you do?

A. Deploy the new version in the same application and use the --migrate option.
B. Deploy the new version in the same application and use the --splits option to give a weight of 99 to the current version and a weight of 1 to the new version.
C. Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.
D. Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.

--------------------------

Best answer: B

The --splits option lets the new version and the current version run side by side in the same application, with traffic divided by weight: 99 to the current version and 1 to the new one. This is the simplest approach, avoiding both a new application and any load-balancer configuration. Option A, --migrate, moves all traffic to the new version rather than splitting off 1%. Options C and D require creating a second App Engine application, which a project cannot contain, and option D additionally needs load-balancer configuration; both add complexity compared with option B.

--------------------------

54. You have a web application deployed as a managed instance group. You have a new version of the application to gradually deploy.
Your web application is currently receiving live web traffic. You want to ensure that the available capacity does not decrease during the deployment. What should you do?

A. Perform a rolling-action start-update with maxSurge set to 0 and maxUnavailable set to 1.
B. Perform a rolling-action start-update with maxSurge set to 1 and maxUnavailable set to 0.
C. Create a new managed instance group with an updated instance template. Add the group to the backend service for the load balancer. When all instances in the new managed instance group are healthy, delete the old managed instance group.
D. Create a new instance template with the new application version. Update the existing managed instance group with the new instance template. Delete the instances in the managed instance group to allow the managed instance group to recreate the instance using the new instance template.

--------------------------

Best answer: B

With maxSurge set to 1 and maxUnavailable set to 0, the rolling update first creates one new instance above the target size, waits for it to become healthy, and only then removes an old one, so serving capacity never drops below the current level. Option A does the opposite: maxSurge 0 with maxUnavailable 1 takes an instance out of service before its replacement exists, reducing available capacity during the update. Option C requires running a second managed instance group and reconfiguring the load balancer, which is more complex than a rolling update. Option D deletes existing instances so the group recreates them from the new template, which also reduces capacity and can fail live requests while instances are being replaced.

--------------------------

55. You are building an application that stores relational data from users. Users across the globe will use this application. Your CTO is concerned about the scaling requirements because the size of the user base is unknown. You need to implement a database solution that can scale with your user growth with minimum configuration changes. Which storage solution should you use?

A. Cloud SQL
B. Cloud Spanner
C. Cloud Firestore
D.
Cloud Datastore

--------------------------

Best answer: B

Cloud Spanner is a globally distributed relational database that scales horizontally and automatically, offering high availability, strong consistency, and transactions across multiple regions. It also provides built-in backup and restore and integrates with other Google Cloud services. Because it scales horizontally without major reconfiguration, it can absorb unknown user growth with minimal configuration changes, which is exactly what the question asks for. Briefly, on the other options: Cloud SQL is a relational database service, but its capacity must be scaled manually. Cloud Firestore scales automatically, but it is a NoSQL document database, not relational. Cloud Datastore is likewise a NoSQL document database, and it has been superseded by Cloud Firestore.

--------------------------

56. You are the organization and billing administrator for your company. The engineering team has the Project Creator role on the organization. You do not want the engineering team to be able to link projects to the billing account. Only the finance team should be able to link a project to a billing account, but they should not be able to make any other changes to projects. What should you do?

A. Assign the finance team only the Billing Account User role on the billing account.
B. Assign the engineering team only the Billing Account User role on the billing account.
C. Assign the finance team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.
D. Assign the engineering team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.

--------------------------

Best answer: C

Linking a project to a billing account requires permissions on both sides: Billing Account User on the billing account (to attach projects to it) and Project Billing Manager on the project side (to manage a project's billing linkage without any other project access). Assigning the finance team both roles, as option C does, lets them link projects to the billing account while making no other changes to projects, and withholding both roles from the engineering team prevents them from linking projects at all. Option A is not enough because Billing Account User alone lacks the project-side permission to attach billing to a project. Options B and D are wrong because they would give the engineering team exactly the ability that should be withheld.

--------------------------

57. You have an application running in Google Kubernetes Engine (GKE) with cluster autoscaling enabled. The application exposes a TCP endpoint. There are several replicas of this application.
You have a Compute Engine instance in the same region, but in another Virtual Private Cloud (VPC), called gce-network, that has no overlapping IP ranges with the first VPC. This instance needs to connect to the application on GKE. You want to minimize effort. What should you do?

A. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend. 2. Set the service's externalTrafficPolicy to Cluster. 3. Configure the Compute Engine instance to use the address of the load balancer that has been created.
B. 1. In GKE, create a Service of type NodePort that uses the application's Pods as backend. 2. Create a Compute Engine instance called proxy with 2 network interfaces, one in each VPC. 3. Use iptables on this instance to forward traffic from gce-network to the GKE nodes. 4. Configure the Compute Engine instance to use the address of proxy in gce-network as endpoint.
C. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend. 2. Add an annotation to this service: cloud.google.com/load-balancer-type: Internal 3. Peer the two VPCs together. 4. Configure the Compute Engine instance to use the address of the load balancer that has been created.
D. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend. 2. Add a Cloud Armor Security Policy to the load balancer that whitelists the internal IPs of the MIG's instances. 3. Configure the Compute Engine instance to use the address of the load balancer that has been created.

--------------------------

Best answer: C

Creating a Service of type LoadBalancer with the cloud.google.com/load-balancer-type: Internal annotation exposes the application on an internal load balancer, and peering the two VPCs (possible here because their IP ranges do not overlap) lets the instance in gce-network reach that internal address directly. This keeps the traffic private and requires only a Service annotation plus a peering, so the effort is minimal. Option A works but exposes the TCP endpoint on a public external load balancer, sending what should be internal traffic over a public address. Option B requires building and maintaining a dual-NIC proxy instance with hand-written iptables rules, which is significant effort and a single point of failure. Option D still uses an external load balancer and adds a Cloud Armor policy, which is unnecessary complexity for this problem.

--------------------------

58. Your organization is a financial company that needs to store audit log files for 3 years.
Your organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention. What should you do?

A. Create an export to the sink that saves logs from Cloud Audit to BigQuery.
B. Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
C. Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.
D. Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.

--------------------------

Best answer: B

With hundreds of projects and a 3-year retention requirement, the organization needs a retention approach that is cheap and easy to operate. Exporting Cloud Audit logs through a sink to a Coldline Storage bucket is the best fit: Coldline is a low-cost storage class designed precisely for long-term storage of rarely accessed data. Option A stores the logs in BigQuery, which costs more than Coldline for data that will mostly sit untouched. Options C and D both require building and maintaining custom pipelines, which takes more time and resources, and they land the logs in BigQuery or Cloud SQL, neither of which is a cost-effective archive.

--------------------------

59. You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?

A. Create a Cloud Memorystore for Redis instance with 32-GB capacity.
B. Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
C. Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
D. Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.
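The custom instance type described in option B would be created roughly as follows; the instance name and zone are placeholder assumptions, not values from the question:

```shell
# Hypothetical sketch: a custom machine type sized for a memory-heavy,
# CPU-light workload (32 GB of RAM with the accompanying vCPU count).
gcloud compute instances create cache-proxy \
    --zone=us-central1-a \
    --custom-cpu=6 --custom-memory=32GB
```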
--------------------------

Best answer: B

A custom Compute Engine instance type lets you pay for exactly the resources the workload needs: enough memory for the 30-GB in-memory cache plus 2 GB for the remaining processes, with the minimum vCPU count a 32-GB custom machine type allows. Because the proxy consumes almost no CPU, this is the cheapest configuration that actually runs the reverse-proxy software. Option A does not fit because Cloud Memorystore is a managed Redis service: it can serve as a cache backend, but you cannot run your own HTTP reverse proxy on it, and a 32-GB Memorystore instance also costs considerably more than an equivalent VM. Option C is over-provisioned and complex: n1-standard-32 nodes plus a Kubernetes Engine cluster for a single proxy adds cost and management overhead. Option D does not meet the requirement because an SSD persistent disk is not memory; the cache must be in RAM for a latency-sensitive site, and n1-standard-1 has only 3.75 GB of memory.

--------------------------

60. You are hosting an application on bare-metal servers in your own data center. The application needs access to Cloud Storage. However, security policies prevent the servers hosting the application from having public IP addresses or access to the internet. You want to follow Google-recommended practices to provide the application with access to Cloud Storage. What should you do?

A. 1. Use nslookup to get the IP address for storage.googleapis.com. 2. Negotiate with the security team to be able to give a public IP address to the servers. 3. Only allow egress traffic from those servers to the IP addresses for storage.googleapis.com.
B. 1. Using Cloud VPN, create a VPN tunnel to a Virtual Private Cloud (VPC) in Google Cloud. 2. In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance. 3. Configure your servers to use that instance as a proxy to access Cloud Storage.
C. 1. Use Migrate for Compute Engine (formerly known as Velostrata) to migrate those servers to Compute Engine. 2. Create an internal load balancer (ILB) that uses storage.googleapis.com as backend. 3. Configure your new instances to use this ILB as proxy.
D. 1. Using Cloud VPN or Interconnect, create a tunnel to a VPC in Google Cloud. 2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel. 3.
In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.
--------------------------
Best answer: D

Option D creates a Cloud VPN or Interconnect tunnel and configures DNS resolution on the on-premises network. By advertising the 199.36.153.4/30 range (restricted.googleapis.com) through the tunnel, all traffic to *.googleapis.com is routed over the private connection, so the application servers can reach Cloud Storage in the VPC without public IP addresses or internet access. This satisfies the security policy while following Google-recommended practices.
Option A is wrong because it requires giving the servers public IP addresses, which violates the security policy.
Option B provides access to Cloud Storage through a Squid proxy server, but it requires running that proxy inside the VPC, and you must also ensure the Squid server's high availability, or the application's reliability may suffer.
Option C migrates the application servers into Google Cloud. That may require reconfiguring the existing application, involves significant preparation work, and raises cost questions for the existing facilities.
Therefore option D is the best answer: it meets the security requirements without public IPs or internet access, and it uses the practice Google recommends.
--------------------------
61. You want to deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic. You want to follow Google-recommended practices. What should you do?
A. 1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic. 2. Call your application on Cloud Run from the Cloud Function for every message.
B. 1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run. 2. Create a Cloud Pub/Sub subscription for that topic. 3. Make your application pull messages from that subscription.
C. 1. Create a service account. 2. Give the Cloud Run Invoker role to that service account for your Cloud Run application. 3. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run application as the push endpoint.
D. 1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal. 2. Create a Cloud Pub/Sub subscription for that topic. 3. In the same Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.
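For reference, the push-subscription setup described in option C can be sketched with gcloud as follows; the project, service, topic, and endpoint URL below are all placeholders.

```shell
# Sketch only: my-project, my-app, my-topic, and the run.app URL are assumed names.
gcloud iam service-accounts create cloud-run-pubsub-invoker

# Allow the service account to invoke the Cloud Run service.
gcloud run services add-iam-policy-binding my-app \
    --region=us-central1 \
    --member="serviceAccount:cloud-run-pubsub-invoker@my-project.iam.gserviceaccount.com" \
    --role="roles/run.invoker"

# Create a push subscription that authenticates as that service account.
gcloud pubsub subscriptions create my-subscription \
    --topic=my-topic \
    --push-endpoint=https://my-app-xyz.a.run.app/ \
    --push-auth-service-account="cloud-run-pubsub-invoker@my-project.iam.gserviceaccount.com"
```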
--------------------------
Best answer: C

The Google-recommended way to process Pub/Sub messages with Cloud Run is a push subscription authenticated with a dedicated service account: create a service account, grant it the Cloud Run Invoker role on your Cloud Run service, and create a Pub/Sub subscription that uses that service account with your Cloud Run application as the push endpoint. Pub/Sub then delivers each message to the service over HTTPS, and Cloud Run scales with the incoming requests, including down to zero.
Option A adds an unnecessary Cloud Function in front of Cloud Run, which increases cost and latency for every message.
Option B makes the application pull from a subscription, which forces it to keep polling; Cloud Run instances are designed to be driven by incoming requests, so the push model is the recommended pattern.
Option D requires GKE, which adds management and configuration complexity.
--------------------------
62. You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few requests per day. You want to minimize costs. What should you do?
A. Deploy the container on Cloud Run.
B. Deploy the container on Cloud Run on GKE.
C. Deploy the container on App Engine Flexible.
D. Deploy the container on GKE with cluster autoscaling and horizontal pod autoscaling enabled.
--------------------------
Best answer: A

Because the application receives very few requests per day, Cloud Run is an excellent choice: it bills on demand, so you only pay while requests are being handled, and it is fully managed, so there is no infrastructure to maintain.
Option B suits applications that need more control and other tools from the Kubernetes ecosystem; it is a comparatively expensive solution.
Option C is reasonably economical, but App Engine Flexible is not as easy to scale and manage as Cloud Run and does not scale down to zero.
Option D suits applications that need more control and flexibility, but it is likewise not as easy to scale and manage as Cloud Run, and GKE requires more infrastructure management.
--------------------------
63. Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow. What should you do?
A. Link the acquired company's projects to your company's billing account.
B. Configure the acquired company's billing account and your company's billing account to export the billing data into the same BigQuery dataset.
C. Migrate the acquired company's projects into your company's GCP organization. Link the migrated projects to your company's billing account.
D. Create a new GCP organization and a new billing account.
Migrate the acquired company's projects and your company's projects into the new GCP organization and link the projects to the new billing account.
--------------------------
Best answer: A

A billing account can pay for projects regardless of which organization they belong to, so the fastest way to get all costs onto a single invoice is to link the acquired company's projects directly to your company's billing account. The change takes effect immediately, which satisfies the "as of tomorrow" requirement.
Option B collects both companies' billing data in one BigQuery dataset for analysis, but the two billing accounts would still be invoiced separately.
Option C would also consolidate the costs, but migrating hundreds of projects between organizations is a significant effort that cannot realistically be completed by tomorrow, and it is not required just to consolidate billing.
Option D has the same migration problem as C, with even more work, since it additionally requires setting up a new organization and a new billing account.
--------------------------
64. You built an application on Google Cloud that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What should you do?
A. Add the support team group to the roles/monitoring.viewer role
B. Add the support team group to the roles/spanner.databaseUser role.
C. Add the support team group to the roles/spanner.databaseReader role.
D. Add the support team group to the roles/stackdriver.accounts.viewer role.
--------------------------
Best answer: A

Adding the support team's group to roles/monitoring.viewer is the best solution: the role grants read-only access to monitoring data and nothing else, which follows Google-recommended practices. By contrast, roles/spanner.databaseUser allows reading and modifying database contents, and roles/spanner.databaseReader allows querying and reading table data, so both would expose data the support team must not see. roles/stackdriver.accounts.viewer only grants visibility into Stackdriver account configuration and is unrelated to monitoring the Cloud Spanner environment.
--------------------------
65. For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Cloud Logging agent on all the instances. You want to minimize cost. What should you do?
A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances' metadata to add the following value: logs-destination: bq://platform-logs.
B. 1. In Cloud Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2.
Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
C. 1. In Cloud Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.
D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery Job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) 3. Use Cloud Scheduler to trigger this Cloud Function once a day.
--------------------------
Best answer: C

Option C is the simplest way to get all Compute Engine logs into BigQuery: filter the Compute Engine logs in Cloud Logging and export them through a sink with the platform-logs dataset as the destination. No Cloud Function is needed, and the instances' service accounts do not need BigQuery editor permissions, so this is both simpler and safer than option A, because you do not grant every Compute Engine instance write access to the platform-logs dataset.
Option A grants the service accounts edit access to the BigQuery dataset, which is more access than necessary, and it requires configuring metadata on every instance, which is slow and error-prone.
Option B routes the logs through Pub/Sub and a Cloud Function, which adds extra compute, storage, and messaging costs.
Option D requires building a Cloud Function with the BigQuery User role and scheduling it with Cloud Scheduler, which again adds extra cost and complexity.
--------------------------
66. You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services. What should you do?
A. Add the cluster's API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.
B. Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.
C. With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.
D.
In the cluster's definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.
--------------------------
Best answer: A

Deployment Manager natively manages Google Cloud APIs, but you can register additional APIs as Type Providers. By adding the new GKE cluster's Kubernetes API as a Type Provider in the same deployment, you can declare the DaemonSet as a resource of that new type, so both the cluster and the DaemonSet are created by Deployment Manager alone. That satisfies the "fewest possible services" requirement.
Option B is incorrect: the Runtime Configurator only stores configuration data as Config and Variable resources; it does not create Kubernetes objects, so a Config resource containing the DaemonSet definition would never be applied to the cluster.
Option C would work, but it adds an extra Compute Engine instance and a startup script just to run kubectl, which is an unnecessary additional service.
Option D is incorrect: cluster metadata is passed to the nodes as plain key/value pairs; putting a manifest there does not create the DaemonSet.
--------------------------
67. You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do?
A. Use service account credentials in your on-premises application.
B. Use gcloud to create a key file for the service account that has appropriate permissions.
C. Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
D. Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from your data center.
--------------------------
Best answer: B

With gcloud you can create a key file for the service account, and the on-premises application can then authenticate to Google Cloud APIs as that service account, using the key file to obtain access tokens for its API calls.
A is not chosen because, while using service account credentials in the on-premises application is the goal, you still need a concrete credential (the key file from option B), and the key must be handled and protected carefully.
C is not chosen because a direct interconnect provides network connectivity, not authentication, and would require additional hardware and network facilities, adding complexity and unnecessary cost.
D is not chosen because duplicating the service account's permissions onto a user account is unnecessary and adds avoidable complexity to security and account management.
--------------------------
68. You are using Container Registry to centrally store your company's container images in a separate project.
In another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry. What should you do?
A. In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.
B. When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under 'Access scopes'.
C. Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account and use it as an imagePullSecrets in Kubernetes.
D. Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account.
--------------------------
Best answer: A

Container Registry stores images in Cloud Storage buckets, so pulling an image only requires read access to those buckets. Granting the Storage Object Viewer role in the registry project to the service account used by the GKE nodes gives the cluster exactly the permission it needs, following the principle of least privilege.
B is incorrect: access scopes only limit what a node is allowed to request; they do not grant any IAM permission in the other project, so the nodes' service account would still be denied, and "full access" scopes are discouraged in any case.
C would work but requires creating a key, distributing it, and configuring imagePullSecrets, which is extra setup and an extra secret to manage.
D requires setting ACLs on every image object, which does not scale for a large registry and is error-prone.
--------------------------
69. You deployed a new application inside your Google Kubernetes Engine cluster using the YAML file specified below. You check the status of the deployed pods and notice that one of them is still in PENDING status: You want to find out why the pod is stuck in pending status. What should you do?
A. Review details of the myapp-service Service object and check for error messages.
B. Review details of the myapp-deployment Deployment object and check for error messages.
C. Review details of myapp-deployment-58ddbbb995-lp86m Pod and check for warning messages.
D. View logs of the container in myapp-deployment-58ddbbb995-lp86m pod and check for warning messages.
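The pod-inspection step described in option C is typically done with kubectl; the pod name below comes from the question itself.

```shell
# "describe" prints the pod's Events section, where a Pending pod's
# scheduling problems (e.g. insufficient CPU/memory, failed image pull)
# are reported.
kubectl describe pod myapp-deployment-58ddbbb995-lp86m

# Recent cluster events, sorted oldest-first, can also point at the cause.
kubectl get events --sort-by=.metadata.creationTimestamp
```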
--------------------------
Best answer: C

A Pod is the smallest schedulable unit in Kubernetes. When a pod is stuck in PENDING, the scheduler has not been able to place it on a node, so the place to look is the pod's own details and events. Reviewing the myapp-deployment-58ddbbb995-lp86m pod and checking its warning messages will typically reveal the cause, such as insufficient cluster resources or a failed container image pull.
Options A and B look at the Service and Deployment objects, which are not directly associated with the pod's scheduling and do not carry the events that explain why it cannot run.
Option D cannot work for a PENDING pod: no container has started yet, so there are no container logs to view; the pod's details provide the needed context much faster.
--------------------------
70. You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?
A. After the VM has been created, use your Google Account credentials to log in into the VM.
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
C. When creating the VM, add metadata to the instance using 'windows-password' as the key and a password as the value.
D. After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.
--------------------------
Best answer: B

Option B provides the credentials needed for RDP: gcloud compute reset-windows-password generates a new random password for a Windows account on the VM, and you can then log in over RDP with that password.
Option A is incorrect because Google Account credentials are not valid for an RDP connection to a Windows VM.
Option C is incorrect because a 'windows-password' metadata key has no special meaning and does not set the VM's RDP credentials.
Option D is incorrect because the default Compute Engine service account's JSON private key is only used to access Google Cloud resources through the API or gcloud; it cannot be used for an RDP connection to a Windows VM.
--------------------------
71. You want to configure an SSH connection to a single Compute Engine instance for users in the dev1 group. This instance is the only resource in this particular Google Cloud Platform project that the dev1 users should be able to connect to. What should you do?
A. Set metadata to enable-oslogin=true for the instance. Grant the dev1 group the compute.osLogin role. Direct them to use the Cloud Shell to ssh to that instance.
B. Set metadata to enable-oslogin=true for the instance. Set the service account to no service account for that instance.
Direct them to use the Cloud Shell to ssh to that instance.
C. Enable block project wide keys for the instance. Generate an SSH key for each user in the dev1 group. Distribute the keys to dev1 users and direct them to use their third-party tools to connect.
D. Enable block project wide keys for the instance. Generate an SSH key and associate the key with that instance. Distribute the key to dev1 users and direct them to use their third-party tools to connect.
--------------------------
Best answer: A

Option A is best because it uses built-in Google Cloud Platform functionality while minimizing security risk and management effort. Setting the enable-oslogin=true metadata on the instance and granting the dev1 group the compute.osLogin role lets dev1 users log in to that instance with their Google accounts. Because this instance is the only resource in the project that dev1 users should connect to, the access is appropriately scoped. Using Cloud Shell for the SSH connection is also secure and convenient, since no keys or passwords need to be distributed.
Option B also enables OS Login, but setting the instance to "no service account" can break other services, and it grants no access to the dev1 group at all; Cloud Shell is also safer and more convenient than third-party tools.
Options C and D require manually generating and distributing SSH keys between the instance and the users, which adds management complexity and security risk and is far less convenient than option A.
--------------------------
72. You need to produce a list of the enabled Google Cloud Platform APIs for a GCP project using the gcloud command line in the Cloud Shell. The project name is my-project. What should you do?
A. Run gcloud projects list to get the project ID, and then run gcloud services list --project <project ID>.
B. Run gcloud init to set the current project to my-project, and then run gcloud services list --available.
C. Run gcloud info to view the account value, and then run gcloud services list --account <Account>.
D. Run gcloud projects describe <project ID> to verify the project value, and then run gcloud services list --available.
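The two commands from option A can be sketched as below; note that my-project is the project *name* from the question, while gcloud services list expects a project *ID*.

```shell
# Look up the project ID for the project named "my-project".
PROJECT_ID=$(gcloud projects list \
    --filter="name:my-project" \
    --format="value(projectId)")

# List only the APIs that are currently enabled in that project.
gcloud services list --enabled --project "$PROJECT_ID"
```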
--------------------------
Best answer: A

Option A is correct: gcloud projects list gives you the project ID for the project named my-project, and gcloud services list --project <project ID> then lists the APIs that are enabled for that project.
Option B is incorrect: after gcloud init sets the current project, gcloud services list --available shows the APIs that can be enabled, not the ones that are already enabled.
Option C is incorrect: --account selects which credentials gcloud uses; it lists APIs for an account rather than the project's enabled APIs.
Option D is incorrect for the same reason as B: --available lists available APIs, not enabled ones.
--------------------------
73. You are building a new version of an application hosted in an App Engine environment. You want to test the new version with 1% of users before you completely switch your application over to the new version. What should you do?
A. Deploy a new version of your application in Google Kubernetes Engine instead of App Engine and then use GCP Console to split traffic.
B. Deploy a new version of your application in a Compute Engine instance instead of App Engine and then use GCP Console to split traffic.
C. Deploy a new version as a separate app in App Engine. Then configure App Engine using GCP Console to split traffic between the two apps.
D. Deploy a new version of your application in App Engine. Then go to App Engine settings in GCP Console and split traffic between the current version and newly deployed versions accordingly.
--------------------------
Best answer: D

After deploying the new version to the same App Engine application, you can use the traffic-splitting feature in the GCP Console to send 1% of traffic to the new version and the rest to the current one. This is the simplest solution and matches the environment described in the question.
Options A and B are inappropriate because they move the application to Google Kubernetes Engine or a Compute Engine instance, which does not match the App Engine environment in the question, and both require more complex setup and management.
Option C deploys the new version as a separate application, which adds management and maintenance complexity; in App Engine you can simply deploy a new version without creating a separate application, and traffic can only be split between versions of the same service, not between two apps.
Therefore the best choice is D: deploy the new version in App Engine and split traffic in the GCP Console.
--------------------------
74. You need to provide a cost estimate for a Kubernetes cluster using the GCP pricing calculator for Kubernetes. Your workload requires high IOPs, and you will also be using disk snapshots. You start by entering the number of nodes, average hours, and average days.
What should you do next?
A. Fill in local SSD. Fill in persistent disk storage and snapshot storage.
B. Fill in local SSD. Add estimated cost for cluster management.
C. Select Add GPUs. Fill in persistent disk storage and snapshot storage.
D. Select Add GPUs. Add estimated cost for cluster management.
--------------------------
Best answer: A

After entering the number of nodes, average hours, and average days, the next step is to fill in local SSD, persistent disk storage, and snapshot storage.
Option A is correct because local SSD covers the high-IOPS requirement, while persistent disk storage and snapshot storage cover the disk snapshots; all of these must be included in the cluster's cost estimate.
Options B and D add an estimated cluster-management cost, which is not a required input because the calculator accounts for those fees automatically; adding it would only make the estimate inaccurate.
Option C adds GPUs, which the question never mentions; that would add unnecessary cost, and selecting GPUs does not address the storage fields that still need to be filled in.
--------------------------
75. You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS on a public IP address. What should you do?
A. Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.
B. Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service.
C. Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.
D. Create a HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptable rule. Configure the DNS name of your application using the public IP of the node HAProxy is running on.
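A minimal sketch of the NodePort Service plus Ingress combination from option A is shown below; the app label, ports, and TLS secret name are assumptions, and the certificate secret (myapp-tls) would be created separately before this manifest is applied.

```shell
# Sketch only: names, ports, and the TLS secret are assumed values.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort          # required for a GKE Ingress backend
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  tls:
  - secretName: myapp-tls  # certificate created separately
  defaultBackend:
    service:
      name: myapp-service
      port:
        number: 80
EOF
```

On GKE, creating this Ingress provisions an external HTTP(S) load balancer with a public IP that terminates HTTPS using the certificate in myapp-tls.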
--------------------------
Best answer: A

Creating a NodePort Service for the application plus a Kubernetes Ingress that exposes it through a Cloud Load Balancer is the standard way to publish a service from a GKE cluster: the load balancer provides a public IP, serves HTTPS, and keeps working as the cluster autoscales.
Option B uses a ClusterIP Service, which is only reachable inside the cluster and cannot be accessed directly from the outside.
Option C publishes the application on every node and puts each node's IP in DNS, which is neither scalable nor maintainable.
Option D requires manually maintaining an HAProxy pod and iptables rules, and ties the public IP to a single node, which is inferior to GKE's managed load balancing.
--------------------------
76. You need to enable traffic between multiple groups of Compute Engine instances that are currently running two different GCP projects. Each group of Compute Engine instances is running in its own VPC. What should you do?
A. Verify that both projects are in a GCP Organization. Create a new VPC and add all instances.
B. Verify that both projects are in a GCP Organization. Share the VPC from one project and request that the Compute Engine instances in the other project use this shared VPC.
C. Verify that you are the Project Administrator of both projects. Create two new VPCs and add all instances.
D. Verify that you are the Project Administrator of both projects. Create a new VPC and add all instances.
--------------------------
Best answer: B

Sharing one VPC is the best approach in this scenario. First verify that both projects belong to the same GCP organization, which Shared VPC requires. With Shared VPC you can share one project's network with the other project, so the instance groups can reach each other without recreating any instances or resources, which simplifies administration and reduces management cost. The other options are less suitable because they all require creating new VPCs and moving every instance into them.
--------------------------
77. You want to add a new auditor to a Google Cloud Platform project. The auditor should be allowed to read, but not modify, all project items. How should you configure the auditor's permissions?
A. Create a custom role with view-only project permissions. Add the user's account to the custom role.
B. Create a custom role with view-only service permissions. Add the user's account to the custom role.
C. Select the built-in IAM project Viewer role. Add the user's account to this role.
D. Select the built-in IAM service Viewer role. Add the user's account to this role.
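Granting a built-in role to a single user, as in option C, is a one-line IAM binding; the project ID and email address below are placeholders.

```shell
# roles/viewer is the built-in project-wide read-only role.
gcloud projects add-iam-policy-binding my-project \
    --member="user:auditor@example.com" \
    --role="roles/viewer"
```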
--------------------------
Best answer: C

The built-in project Viewer role grants read-only access to all resources in the project, which is exactly what the auditor needs, and Google recommends using predefined roles instead of custom roles whenever a predefined role fits the requirement. Adding the auditor's account to roles/viewer therefore requires no role maintenance at all.
Options A and B require creating and maintaining a custom role, which is unnecessary effort when a predefined role already matches; option B is additionally scoped to service permissions rather than the whole project, so it would not cover all project items.
Option D refers to a service-level viewer role, which likewise would not cover all project items.
--------------------------
78. You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?
A. Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.
B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
D. Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
--------------------------
Best answer: D

Adding a new GPU-enabled node pool to the existing GKE cluster minimizes both effort and cost: the existing non-GPU node pool keeps serving the other non-production workloads, and the ML team simply adds the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specifications so their training pods are scheduled onto the GPU nodes. The other options have these problems:
A: an "accelerator: gpu" annotation only labels the pod; it is not a scheduling mechanism, so it cannot guarantee the pods run on GPU hardware, and it would also have to be repeated for every team that needs GPUs.
B: recreating all nodes with GPUs is unnecessary and expensive, because only the ML team needs GPUs.
C: building a separate self-managed cluster is not the best choice, because you would then operate two different systems, which increases management effort and cost.
--------------------------
79. Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes. What should you do?
A. Use gcloud to expand the IP range of the current subnet.
B. Delete the subnet, and recreate it using a wider range of IP addresses.
C. Create a new project. Use Shared VPC to share the current network with the new project.
D.
Create a new subnet with the same starting IP but a wider range to overwrite the current subnet.
--------------------------
Best answer: A

Option A is correct: gcloud can expand the existing subnet's IP range in place, providing enough addresses for the new VMs. Because it is still the same subnet, existing and new VMs can reach each other without any additional routes.
Option B is a poor choice because deleting and recreating the subnet would require reconfiguring every VM attached to it, which takes considerable time and effort.
Option C is a poor choice because creating a new project and sharing the current network through Shared VPC does not add any IP addresses to the exhausted subnet.
Option D is a poor choice because subnet ranges in a VPC cannot overlap, so a new subnet cannot simply overwrite the current one, and attempting to replace it would again require reconfiguring the VMs.
--------------------------
80. Your organization uses G Suite for communication and collaboration. All users in your organization have a G Suite account. You want to grant some G Suite users access to your Cloud Platform project. What should you do?
A. Enable Cloud Identity in the GCP Console for your domain.
B. Grant them the required IAM roles using their G Suite email address.
C. Create a CSV sheet with all users' email addresses. Use the gcloud command line tool to convert them into Google Cloud Platform accounts.
D. In the G Suite console, add the users to a special group called cloud-console-users@yourdomain.com. Rely on the default behavior of the Cloud Platform to grant users access if they are members of this group.
--------------------------
Best answer: B

G Suite accounts are already Google identities, so you can grant IAM roles directly to a user's G Suite email address; that is all that is needed to give those users access to the project.
Option A is wrong because Cloud Identity exists to provision identities for users who do not already have managed Google accounts, which is not the situation here.
Option C is wrong because no such conversion step exists: G Suite accounts already work with IAM, and a manual CSV process would be pointless extra work.
Option D is wrong because GCP has no default behavior that grants project access to members of a cloud-console-users@yourdomain.com group.
--------------------------
81. You have a Google Cloud Platform account with access to both production and development projects. You need to create an automated process to list all compute instances in development and production projects on a daily basis. What should you do?
A. Create two configurations using gcloud config. Write a script that sets configurations as active, individually. For each configuration, use gcloud compute instances list to get a list of compute resources.
B. Create two configurations using gsutil config.
Write a script that sets configurations as active, individually. For each configuration, use gsutil compute instances list to get a list of compute resources.
C. Go to Cloud Shell and export this information to Cloud Storage on a daily basis.
D. Go to GCP Console and export this information to Cloud SQL on a daily basis.
--------------------------
Best answer: A

With the gcloud command-line tool and two named configurations, one for the production project and one for the development project, this process is easy to automate: a simple script activates each configuration in turn and calls gcloud compute instances list to list all compute instances in that project.
Option B is incorrect because gsutil is the Cloud Storage tool; it has no command for listing compute instances.
Option C is incorrect because Cloud Shell is an interactive environment and is not suited to an unattended daily job.
Option D is incorrect because manually exporting from the console is not automated, and Cloud SQL is not an appropriate tool for storing a list of compute instances.
--------------------------
82. You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient only in SQL and need access to the data stored in this file. You want to find a cost-effective way to complete their request as soon as possible. What should you do?
A. Load data in Cloud Datastore and run a SQL query against it.
B. Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this table after you complete your request.
C. Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these external tables to complete your request.
D. Create a Hadoop cluster and copy the AVRO file to NDFS by compressing it. Load the file in a hive table and provide access to your analysts so that they can run SQL queries.
--------------------------
Best answer: B

Create a BigQuery table, load the data into it, let the analysts run their SQL queries, and drop the table once the request is complete. This is the simplest and most cost-effective solution:
BigQuery charges for the data scanned by queries, so costs are incurred only while the request is being served, and dropping the table afterwards avoids ongoing storage charges.
BigQuery can load data directly from Cloud Storage, so no separate copy step is needed, which keeps the data consistent and the storage cost low.
BigQuery delivers good performance when working with a dataset of this size.
Problems with the other options:
A. Cloud Datastore is a non-relational data store with its own API and query language; it is not a good solution for analyzing a large dataset with SQL.
C. BigQuery supports external tables, but querying external data can cause performance problems, and this approach is not as fast or cost-effective as loading the data directly.
D. A Hadoop cluster requires significant resources and administration, and is neither the simplest nor the cheapest solution.
--------------------------
83. You need to verify that a Google Cloud Platform service account was created at a particular time.
What should you do?
A. Filter the Activity log to view the Configuration category. Filter the Resource type to Service Account.
B. Filter the Activity log to view the Configuration category. Filter the Resource type to Google Project.
C. Filter the Activity log to view the Data Access category. Filter the Resource type to Service Account.
D. Filter the Activity log to view the Data Access category. Filter the Resource type to Google Project.
--------------------------
Best answer: A

Creating a service account is a configuration change, so it appears in the Activity log under the Configuration category with the Service Account resource type; filtering on those two values shows when the account was created. The Google Project resource type in options B and D does not surface the service account's creation, and the Data Access category in options C and D records data reads rather than configuration changes, so it is not relevant here.
--------------------------
84. You deployed an LDAP server on Compute Engine that is reachable via TLS through port 636 using UDP. You want to make sure it is reachable by clients over that port. What should you do?
A. Add the network tag allow-udp-636 to the VM instance running the LDAP server.
B. Create a route called allow-udp-636 and set the next hop to be the VM instance running the LDAP server.
C. Add a network tag of your choice to the instance. Create a firewall rule to allow ingress on UDP port 636 for that network tag.
D. Add a network tag of your choice to the instance running the LDAP server. Create a firewall rule to allow egress on UDP port 636 for that network tag.
--------------------------
Best answer: C

The best solution is to assign a network tag to the VM running the LDAP server and create a firewall rule that allows ingress traffic on UDP port 636 for that tag; hence option C. The other options fail for these reasons:
A. Adding a tag named allow-udp-636 sounds plausible, but a tag has no built-in meaning; more importantly, without a firewall rule referencing it, no rule explicitly allows ingress traffic to reach the VM.
B. In Google Cloud Platform, routes define the paths that traffic takes through the network; they neither allow nor deny traffic, so a route named allow-udp-636 does not affect whether the VM can receive traffic.
D. Inbound and outbound traffic require different firewall rules. Since clients connect inbound, the rule must allow ingress; an egress rule does not solve the problem.
--------------------------
85. You need to set a budget alert for use of Compute Engine services on one of the three Google Cloud Platform projects that you manage. All three projects are linked to a single billing account. What should you do?
A. Verify that you are the project billing administrator.
Select the associated billing account and create a budget and alert for the appropriate project.
B. Verify that you are the project billing administrator. Select the associated billing account and create a budget and a custom alert.
C. Verify that you are the project administrator. Select the associated billing account and create a budget for the appropriate project.
D. Verify that you are project administrator. Select the associated billing account and create a budget and a custom alert.
--------------------------
Best answer: A

First verify that you are the project's billing administrator, then select the associated billing account and create a budget and alert scoped to the appropriate project. Option A is correct because the billing administrator has the permissions to create budgets and alerts at the project level. Option B is incorrect because the budget is not scoped to the appropriate project and a custom alert is unnecessary. Options C and D are incorrect because a project administrator does not have the billing permissions needed to set budgets and alerts.
--------------------------
86. You are migrating a production-critical on-premises application that requires 96 vCPUs to perform its task. You want to make sure the application runs in a similar environment on GCP. What should you do?
A. When creating the VM, use machine type n1-standard-96.
B. When creating the VM, use Intel Skylake as the CPU platform.
C. Create the VM using Compute Engine default settings. Use gcloud to modify the running instance to have 96 vCPUs.
D. Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing Recommendations.
--------------------------
Best answer: A

Choosing the n1-standard-96 machine type at creation time gives the VM 96 vCPUs and 360 GB of memory, providing an environment similar to the on-premises deployment and ensuring the application's performance and availability on GCP from the start.
Option B only selects a CPU platform; it does not provide the required number of vCPU cores.
Option C would require changing the machine type of a running instance, which means stopping it first, causing performance problems and unnecessary downtime.
Option D starts with default settings and adjusts later based on Rightsizing Recommendations, which risks downtime and instability for a production-critical application.
--------------------------
87. You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also occasionally updated at month-end. What should you do?
A.
Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.
B. Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.
C. Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.
D. Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.
--------------------------
Best answer: B

A lifecycle rule that moves objects with newer versions to Nearline Storage after 30 days is the best choice. The previous versions are still accessed once a month for reporting and occasionally updated at month-end, and Nearline is designed for data accessed about once a month, so it keeps storage costs down without penalizing the regular monthly access.
Option A is not the best choice because Coldline is intended for data that is accessed far less frequently; reading and updating the archive every month from Coldline would incur higher access costs.
Options C and D archive objects based only on their age in regional storage, without targeting the noncurrent versions, so they do not implement the required versioned-archive behavior; option C additionally has the same Coldline access-cost problem as option A.
--------------------------
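The Nearline lifecycle rule from option B can be sketched with gsutil; the bucket name is a placeholder, and the "isLive": false condition restricts the rule to noncurrent (previous) object versions in a versioned bucket.

```shell
# Sketch only: my-archive-bucket is an assumed bucket name.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30, "isLive": false}
    }
  ]
}
EOF

# Apply the lifecycle configuration to the bucket.
gsutil lifecycle set lifecycle.json gs://my-archive-bucket
```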