Google Cloud Associate Cloud Engineer Practice Exam
Questions and Answers with Explanations 2023
- Question 1: Incorrect A large enterprise has created multiple organizations in GCP. They would like to connect the VPC networks across organizations. What should they do?
- Implement VPC Network Peering between VPCs (Correct)
- Implement a VPN between VPCs
- Define firewall rules to allow egress traffic to other VPC networks
- Implement a Shared VPC (Incorrect)
Explanation Since the connected networks are in different organizations, they must use VPC Network Peering. Shared VPC is only available within a single organization. Firewall rule changes may be needed, but they are not sufficient on their own. VPNs are used to connect GCP networks with on-premises networks. For more information, see https://cloud.google.com/vpc/docs/vpc-peering.
- Question 2: Correct You want to use Cloud Identity to create identities. You have received a verification record for your domain. Where would you add that record?
- In the domain's DNS settings (Correct)
- In the billing account for your organization
- In IAM settings for each identity
- In the metadata of each resource created in your organization
Explanation Cloud Identity provides domain verification records, which are added to the DNS settings for the domain. IAM is used to control access granted to identities; it is not a place to manage domains. The billing account is used for payment tracking; it is not a place to manage domains. Resources do have metadata, but that metadata is not used to manage domains. For more information on verifying domains, see https://cloud.google.com/identity/docs/verify-domain.
- Question 3: Incorrect You want to run a Kubernetes cluster for a high availability set of applications. What type of cluster would you use?
- Regional (Correct)
- Multi-zonal (Incorrect)
- Single zone
- Multi-regional
Explanation Regional clusters have replicas of the control plane, while single-zone and multi-zonal clusters have only one control plane. There is no such thing as a multi-regional cluster. For more information, see https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-creating-a-highly-available-gke-cluster.
- Question 4: Correct You will be running an application that requires high levels of security. You want to ensure the application does not run on a server that has been compromised by a rootkit or other kernel-level malware. What kind of virtual machine would you use?
- Shielded VM (Correct)
- Hardened VM
- Preemptible VM
- GPU-enabled VM
Explanation Shielded VMs are hardened virtual machines that use Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring. Preemptible VMs can be taken back by Google at any time but cost significantly less than standard prices. Hardened VM is not a valid option in Compute Engine. GPU-enabled VMs can improve the performance of compute-intensive applications, such as training machine learning models. For more information, see https://cloud.google.com/security/shielded-cloud/shielded-vm.
- Question 5: Correct A software development team is using Google Container Registry to manage container images. You have recently joined the team and want to view metadata about existing container images. What command would you use?
- gcloud container images list (Correct)
- gcloud images container list
- gcloud container list metadata
- gcloud container metadata list
Explanation The correct command is gcloud container images list. The other options are not valid gcloud commands. For more information, see https://cloud.google.com/sdk/gcloud/reference/container/images/list.
- Question 6: Incorrect A developer is trying to upload files from their local device to a Compute Engine VM using the gcloud compute scp command. The copy command is failing. What would you check to try to correct the problem?
- Ensure firewall rules allow traffic to port 22 to allow SSH connections. (Correct)
- Add the identity of the developer to the administrator group for the VM. (Incorrect)
- Grant the identity the roles/compute.admin role
- Grant the identity compute.admin permission
Explanation To copy files to a VM, a firewall rule must be in place to allow traffic on port 22, the default SSH port. Administrator privileges are not needed to upload a file, so the other three options are not correct. For more information, see https://cloud.google.com/compute/docs/instances/transfer-files.
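The fix described in Question 6 can be sketched with two commands. The rule name, file, VM name, and zone below are illustrative placeholders, not values from the question.

```shell
# Open TCP port 22 so SSH-based tools (including scp) can reach the VM.
# "allow-ssh", "dev-vm", and the zone are assumed example values.
gcloud compute firewall-rules create allow-ssh \
    --network=default \
    --allow=tcp:22 \
    --source-ranges=0.0.0.0/0

# Copy a local file to the VM over SSH.
gcloud compute scp ./build.tar.gz dev-vm:/tmp/ --zone=us-central1-a
```

In practice, restricting --source-ranges to your office or VPN CIDR is safer than opening port 22 to 0.0.0.0/0.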
- Question 7: Correct A manager in your company is having trouble tracking the use and cost of resources across several projects. In particular, they do not know which resources are created by the different teams they manage. What would you suggest the manager use to help better understand which resources are used by which team?
- Labels (Correct)
- Trace logs
- Audit logs
- IAM policies
Explanation Labels are key-value pairs attached to resources and used to manage them. The manager could use a key-value pair with the key 'team-name' and the value set to the name of the team that created the resource. Audit logs do not necessarily have the names of teams that own a resource. Traces are used for performance monitoring and analysis. IAM policies are used to control access to resources, not to track which team created them. For more information, see https://cloud.google.com/resource-manager/docs/creating-managing-labels.
- Question 8: Correct As a consultant to a mid-sized retailer, you have been asked to help choose a managed database platform for the company's inventory management application. The retailer's market is limited to the Northeast United States. What service would you recommend?
- Cloud SQL (Correct)
- Cloud Spanner
- Cloud Dataproc
- Bigtable
Explanation Cloud SQL is a managed relational database service suitable for regionally used applications. Cloud Spanner is also a managed relational database, but it is designed for multi-region and global applications. Bigtable is a NoSQL wide-column database, not a relational database. Cloud Dataproc is a managed Spark/Hadoop service, not a relational database. For more information, see https://cloud.google.com/sql/docs.
- Question 9: Correct You have just created a custom mode network using the command gcloud compute networks create. You want to eventually deploy instances in multiple regions. What is the next thing you should do?
- Create subnets in regions where you plan to deploy instances (Correct)
- Create subnets in all regions
- Create a VPN between the custom mode network and other networks in the VPC.
- Create firewall rules to load balance traffic
Explanation After creating a custom mode network, you will need to create subnets in the regions where instances will be deployed. You do not have to create subnets in all regions, but an instance cannot be deployed to a region without a subnet. Firewall rules are used to control the ingress and egress of data; they are not used to load balance. VPNs are used to provide connectivity between Google Cloud and outside networks, such as an on-premises network. For more information, see https://cloud.google.com/vpc/docs/using-vpc and https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/create.
- Question 10: Correct You will be creating a GKE cluster and want to use Cloud Operations for GKE instead of legacy monitoring and logging. If you create the cluster using a gcloud container clusters create command, what parameter would you specify to explicitly enable Cloud Operations for GKE?
- --enable-stackdriver-kubernetes (Correct)
- --enable-cloud-operations
- --enable-gke-monitor
- --disable-legacy-monitoring
Explanation The correct way to enable Cloud Operations for GKE is to use the parameter --enable-stackdriver-kubernetes. The other options are not valid parameter names. For more information, see https://cloud.google.com/sdk/gcloud/reference/container/clusters/create.
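Question 9's workflow can be sketched as follows; the network name, regions, and CIDR ranges are assumptions for illustration.

```shell
# Custom mode networks start with no subnets.
gcloud compute networks create my-net --subnet-mode=custom

# Add a subnet only in each region where instances will run.
gcloud compute networks subnets create us-east-subnet \
    --network=my-net --region=us-east1 --range=10.0.1.0/24
gcloud compute networks subnets create eu-west-subnet \
    --network=my-net --region=europe-west1 --range=10.0.2.0/24
```

An instance launched in a region with no subnet on this network will fail, which is why subnets come before instances here.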
- Question 11: Incorrect Your company has a complicated billing structure for GCP projects. You would like to set up multiple configurations for use with the command line interface. What command would you use to create those?
- gcloud config configurations create (Correct)
- gcloud config configurations set (Incorrect)
- gcloud configurations create
- gcloud configurations set
Explanation The correct command is gcloud config configurations create. gcloud configurations create, gcloud config configurations set, and gcloud configurations set are not valid gcloud commands to create configurations. For more information, see https://cloud.google.com/sdk/gcloud/reference/config/configurations/create.
- Question 12: Correct A startup has created an IoT application that analyzes data from sensors deployed on vehicles. The application depends on a database that can write large volumes of data at low latency. The startup has used HBase in the past but wants to migrate to a managed database service. What service would you recommend?
- Bigtable (Correct)
- BigQuery
- Cloud Spanner
- Cloud Dataproc
Explanation Bigtable is a wide-column database with low latency writes that is well suited for IoT data storage. BigQuery is a data warehouse service. Cloud Dataproc is a managed Spark/Hadoop service. Cloud Spanner is a global-scale relational database designed for transaction processing. For more information, see https://cloud.google.com/bigtable/docs/schema-design and https://cloud.google.com/bigtable/docs/schema-design-steps.
- Question 13: Incorrect A data warehouse administrator is trying to load data from Cloud Storage to BigQuery. What permissions will they need?
- bigquery.tables.create (Correct)
- bigquery.tables.updateData (Correct)
- bigquery.jobs.create (Correct)
- bigquery.jobs.list
- bigquery.tables.list
Explanation To load data, an identity must have bigquery.tables.create, bigquery.tables.updateData, and bigquery.jobs.create. bigquery.tables.list is needed to list tables and metadata on tables. For more information, see https://cloud.google.com/bigquery/docs/batch-loading-data and https://cloud.google.com/bigquery/docs/access-control.
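A load job like the one in Question 13, which exercises bigquery.tables.create, bigquery.tables.updateData, and bigquery.jobs.create, might look like this; the dataset, bucket, and schema are hypothetical.

```shell
# Load a CSV from Cloud Storage into a BigQuery table, creating the
# table if needed; the header row is skipped.
bq load \
    --source_format=CSV \
    --skip_leading_rows=1 \
    my_dataset.inventory \
    gs://my-bucket/inventory.csv \
    sku:STRING,quantity:INTEGER,price:FLOAT
```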
- Question 14: Incorrect A group of developers are creating a multi-tiered application. Each tier is in its own project. The developers would like to work with a common VPC network. What would you use to implement this?
- Create a shared VPC (Correct)
- Create routes between subnets of each project (Incorrect)
- Create a VPN between projects
- Create firewall rules to load balance traffic between each project's subnets.
Explanation A shared VPC allows projects to share a common VPC network. VPNs are used to link VPCs to on-premises networks. Routes and firewall rules are not sufficient for implementing a common VPC. Firewall rules are not used to load balance; they are used to control the ingress and egress of traffic on a network. For more information, see https://cloud.google.com/vpc/docs/shared-vpc and https://cloud.google.com/composer/docs/how-to/managing/configuring-shared-vpc.
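Setting up the Shared VPC from Question 14 involves a host project and one service project per tier. The project IDs below are placeholders, and the commands require the Shared VPC Admin role, typically granted at the organization level.

```shell
# Designate the host project whose VPC will be shared.
gcloud compute shared-vpc enable host-project-id

# Attach each tier's project as a service project of the host.
gcloud compute shared-vpc associated-projects add tier1-project-id \
    --host-project=host-project-id
```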
- Question 15: Incorrect You have created a Kubernetes Engine cluster that will run machine learning training processes and machine learning prediction processes. The training processes require more CPU and memory than the prediction processes. How would you configure the cluster to support this?
- Use two node pools, one configured with more CPU and memory than the other. (Correct)
- Use multiple pods with some configured for more CPU and memory. (Incorrect)
- Increase the number of replica sets for the machine learning training process.
- Increase the number of deployments for the machine learning training process.
Explanation Node pools are used to configure resources for particular workloads. All nodes in a node pool are configured the same. Replica sets and deployments do not control the number of CPUs or amount of memory.
- Question 16: Correct A client of yours has a Python 3 application that usually has very little load but sometimes experiences sudden and extreme spikes in traffic. They want to run it in GCP but they want to keep costs as low as possible. They also want to minimize management overhead. What service would you recommend?
- App Engine (Correct)
- Compute Engine
- Cloud Functions
- Kubernetes Engine
Explanation App Engine is designed for applications written in supported languages, including Python 3, that need to run at low cost and scale in response to rapid increases in load. App Engine is a managed service and as such minimizes operational overhead. Compute Engine and Kubernetes Engine both require more management overhead. Cloud Functions are used to respond to events in GCP, not to execute a continually running application. For more information, see https://cloud.google.com/appengine/docs/standard.
- Question 17: Correct An auditor is reviewing your GCP use. They have asked for access to any audit logs available in GCP. What audit logs are available for each project, folder, and organization?
- Admin Activity (Correct)
- Data Access (Correct)
- System Event (Correct)
- User Login
- Performance Metrics
- Policy Access
Explanation Cloud Audit Logs maintain three audit logs: Admin Activity logs, Data Access logs, and System Event logs. There is no such thing as a Policy Access log, a User Login log, or a Performance Metrics log in GCP Audit Logs. For more information, see https://cloud.google.com/logging/docs/audit.
- Question 18: Correct A startup is implementing an IoT application that will ingest data at high speeds. The architect for the startup has decided that data should be ingested in a queue that can store the data until the processing application is able to process it. The architect also wants to use a managed service in Google Cloud. What service would you recommend?
- Cloud Pub/Sub (Correct)
- Bigtable
- Cloud Dataflow
- Cloud Dataproc
Explanation Cloud Pub/Sub is a queuing service that is used to ingest data and store it until it can be processed. Bigtable is a NoSQL database, not a queueing service. Cloud Dataflow is a stream and batch processing service, not a queueing service. Cloud Dataproc is a managed Spark/Hadoop service. For more information, see https://cloud.google.com/pubsub/docs/overview.
- Question 19: Correct You want to deploy an application to a Kubernetes Engine cluster using a manifest file called my-app.yaml. What command would you use?
- kubectl apply -f my-app.yaml (Correct)
- gcloud deployment apply my-app.yaml
- kubectl deployment apply my-app.yaml
- gcloud containers deployment apply my-app.yaml
Explanation The correct answer is to use kubectl apply -f with the name of the manifest file. Deployments are Kubernetes abstractions and are managed using kubectl, not gcloud. The other options are not valid commands. For more information, see https://kubernetes.io/docs/reference/kubectl/overview/.
- Question 20: Correct The contents of a Cloud Storage bucket called free-photos-gcp are currently stored in the multiregional storage class. You want to change the storage class to nearline. What command would you use?
- gsutil rewrite -s nearline gs://free-photos-gcp (Correct)
- gsutil rewrite --from multiregional --to nearline gs://free-photos-gcp
- gsutil migrate -s nearline gs://free-photos-gcp
- gsutil migrate --from multiregional --to nearline gs://free-photos-gcp
Explanation The correct command for changing the storage class is gsutil rewrite with the target storage class and bucket specified. gsutil migrate is not a valid command. There is no need to specify the parameters --from or --to. For more information, see https://cloud.google.com/storage/docs/gsutil/commands/rewrite.
- Question 21: Correct A photographer wants to share images they have stored in a Cloud Storage bucket called free-photos-on-gcp. What command would you use to allow all users to read these files?
- gsutil iam ch allUsers:objectViewer gs://free-photos-on-gcp (Correct)
- gcloud iam ch allUsers:Viewer gs://free-photos-on-gcp
- gcloud ch allUsers:objectViewer gs://free-photos-on-gcp
- gsutil ch allUsers:Viewer gs://free-photos-on-gcp
Explanation The correct command is gsutil iam ch allUsers:objectViewer gs://free-photos-on-gcp. gsutil is used with Cloud Storage, not gcloud, so the gcloud options are incorrect. The term objectViewer is the correct way to grant read access to objects in a bucket. For more information, see https://cloud.google.com/storage/docs/gsutil/commands/iam.
- Question 22: Incorrect As a developer using GCP, you will need to set up a local development environment. You will want to authorize the use of gcloud commands to access resources. What commands could you use to authorize access?
- gcloud init (Correct)
- gcloud auth login (Correct)
- gcloud config login (Incorrect)
- gcloud login
Explanation gcloud init will authorize access and perform other common setup steps. gcloud auth login will authorize access only. gcloud login and gcloud config login are not valid commands. For more information, see https://cloud.google.com/sdk/docs/initializing.
- Question 23: Correct You are creating a set of virtual machines in Compute Engine. GCP will automatically assign an IP address to each. What type of IP address will be assigned?
- Regional internal address (Correct)
- Regional external address
- Global internal address
- Global external address
Explanation GCP assigns regional internal IP addresses to VM instances, including GKE pods, nodes, and services. They are also used for Internal TCP/UDP Load Balancing and Internal HTTP(S) Load Balancing. For more information, see https://cloud.google.com/compute/docs/ip-addresses.
- Question 24: Incorrect You have created a set of firewall rules to control ingress and egress traffic to a network. Traffic that you intended to allow to leave the network appears to be blocked. What could you do to get information to help you diagnose the problem?
- Enable firewall rule logging for each of the firewall rules (Correct)
- Use Cloud Debugger to debug the firewall rules (Incorrect)
- Enable Cloud Monitoring of each firewall rule
- Enable Cloud Trace of each firewall rule
Explanation Firewall rule logging can be enabled for each firewall rule. Each time the rule is applied to allow or deny traffic, a connection record is created. Connection records can be viewed in Cloud Logging. Cloud Monitoring is used for collecting and viewing metrics on resource performance. Cloud Trace is used to understand performance in distributed systems. Cloud Debugger is used by developers to identify and correct errors in code. For more information, see https://cloud.google.com/vpc/docs/firewall-rules-logging.
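For the diagnosis in Question 24, logging can be switched on per rule and the resulting connection records read back from Cloud Logging; the rule name is an assumed example.

```shell
# Turn on connection logging for an existing firewall rule.
gcloud compute firewall-rules update my-egress-rule --enable-logging

# Inspect recent firewall connection records in Cloud Logging.
gcloud logging read \
    'logName:"compute.googleapis.com%2Ffirewall"' --limit=5
```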
- Question 25: Incorrect The CFO of your company feels you are spending too much on BigQuery. You determine that a few long running queries are costing more than they should. You would like to experiment with different ways of writing these queries. You'd like to know the estimated cost of running each query without actually running them. How could you do this?
- Use the --dry-run option with a bq query command (Correct)
- Use the --estimate-cost option with the bq command
- Use the --estimate-cost option with the gcloud command
- Use the Pricing Calculator
Explanation The correct answer is to use the --dry-run option with the bq query command. The Pricing Calculator can give you an estimate of aggregate costs based on storage and amount of data queried, but it does not provide estimates of the cost of running a specific query. There is no --estimate-cost option with either the bq or gcloud command. For more information, see https://cloud.google.com/bigquery/docs/estimate-costs.
- Question 26: Correct During an audit, auditors determined that there are insufficient access controls on Cloud Storage buckets. The auditors recommend you use uniform bucket-level access. After applying uniform bucket-level access, some users that had access to objects in buckets no longer have access. What could be the cause?
- Users do not have IAM permissions that allow them access to objects in buckets. Prior to setting uniform bucket-level access, those users had access through ACLs. (Correct)
- Users do not have permissions through ACLs that allow them access to objects in buckets. Prior to setting uniform bucket-level access, those users had access through IAM. (Incorrect)
- Applying uniform bucket-level access removes all access privileges. No user will have access until permissions are reset.
- ACLs are removed when uniform bucket-level access is applied. ACLs must be recreated.
Explanation Access is granted to Cloud Storage objects using IAM or access control lists (ACLs). When uniform bucket-level access is applied, users only have access through IAM roles and permissions. A user that could access objects before uniform bucket-level access was applied but not after must have had access through ACLs. For more information, see https://cloud.google.com/storage/docs/uniform-bucket-level-access.
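The dry-run estimate from Question 25 looks like this with the bq CLI (note the CLI spells the flag --dry_run); the project, dataset, and table are hypothetical.

```shell
# Validates the query and reports how many bytes it would process,
# without running it or incurring query charges.
bq query --use_legacy_sql=false --dry_run \
'SELECT name FROM `my-project.my_dataset.my_table` WHERE name LIKE "A%"'
```

The reported byte count can be multiplied by the current on-demand query price to estimate the cost of the query.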
- Question 27: Correct A new team member has just created a new project in GCP. What role is automatically granted to them when they create the project?
- roles/owner (Correct)
- roles/editor
- roles/viewer
- roles/browser
Explanation When you create a project, you are automatically granted the roles/owner role. The owner role includes the permissions granted by roles/editor, roles/viewer, and roles/browser. For more information, see https://cloud.google.com/resource-manager/docs/access-control-proj.
- Question 28: Incorrect You want to load balance an application that receives traffic from other resources in the same VPC. All traffic is TCP with IPv4 addresses. What load balancer would you recommend?
- Internal TCP/UDP Load Balancing (Correct)
- Network TCP/UDP Load Balancing (Incorrect)
- TCP Proxy Load Balancing
- SSL Proxy Load Balancing
Explanation Internal TCP/UDP Load Balancing is used for internal traffic, that is, traffic that does not come from the internet. SSL Proxy, TCP Proxy, and Network TCP/UDP Load Balancing are used with external traffic. For more information, see https://cloud.google.com/load-balancing/docs/choosing-load-balancer.
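A minimal sketch of the Internal TCP/UDP Load Balancing setup from Question 28, assuming instance groups are added to the backend service separately; every resource name, region, and port here is a placeholder.

```shell
# Regional health check and backend service using the INTERNAL scheme.
gcloud compute health-checks create tcp my-hc --port=80 --region=us-central1
gcloud compute backend-services create my-int-bes \
    --load-balancing-scheme=INTERNAL --protocol=TCP \
    --region=us-central1 \
    --health-checks=my-hc --health-checks-region=us-central1

# Internal forwarding rule, reachable only from inside the VPC.
gcloud compute forwarding-rules create my-int-fr \
    --load-balancing-scheme=INTERNAL --ports=80 --region=us-central1 \
    --network=default --subnet=default --backend-service=my-int-bes
```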
- Question 29: Incorrect Your organization has created multiple projects in several folders. You have been assigned to manage them and want to get descriptive information about each project. What command would you use to get metadata about a project?
- gcloud projects describe <PROJECT_ID> (Correct)
- gcloud projects describe <PROJECT_NAME> (Incorrect)
- gcloud describe projects <PROJECT_ID>
- gcloud describe projects <PROJECT_NAME>
Explanation The correct command is gcloud projects describe <PROJECT_ID>. gcloud projects describe <PROJECT_NAME> is incorrect because PROJECT_NAME is not used in this command; the command takes the project ID. The gcloud describe projects options are wrong because describe and projects are in the wrong order. For more information, see https://cloud.google.com/sdk/gcloud/reference/projects/describe.
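Question 29's command in context; the project ID is a placeholder. describe prints project metadata such as projectId, projectNumber, name, and lifecycleState.

```shell
# Show metadata for one project.
gcloud projects describe my-project-id

# List all projects you can view, to find project IDs to describe.
gcloud projects list
```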
- Question 30: Incorrect A client has asked for your advice about building a data transformation pipeline. The pipeline will read data from Cloud Storage and Cloud Spanner, merge data from the two sources, and write the data to a BigQuery data set. The client does not want to manage servers or other infrastructure, if possible. What GCP service would you recommend?
- Cloud Data Fusion (Correct)
- Cloud Build (Incorrect)
- Compute Engine
- Cloud Dataprep
Explanation Cloud Data Fusion is a managed service that is designed for building data transformation pipelines. Compute Engine is not a managed service. Cloud Dataprep is used to prepare data for analytics and machine learning. Cloud Build is a service for creating container images. For more information, see https://cloud.google.com/data-fusion/docs/how-to.
- Question 31: Correct A group of data scientists need access to data stored in Cloud Bigtable. You want to follow Google-recommended best practices for security. What role would you assign to the data scientists to allow them to read data from Bigtable?
- roles/bigtable.reader (Correct)
- roles/bigtable.admin
- roles/bigtable.user
- roles/bigtable.owner
Explanation The roles/bigtable.reader role gives the data scientists the ability to read data but not write data or modify the database. This follows the Principle of Least Privilege as recommended by Google. roles/bigtable.admin gives permissions to administer all instances in a project, which is not needed by a data scientist. roles/bigtable.user provides read and write permissions, but data scientists do not need write permission. There is no predefined role called roles/bigtable.owner. For more information, see https://cloud.google.com/bigtable/docs/access-control.
- Question 32: Correct An application running in Compute Engine sometimes gets spikes in load. You want to add instances automatically when load increases significantly and plan to use managed instance groups. What would you need to create in order to automatically scale the cluster?
- Instance template (Correct)
- Persistent Disk
- Snapshot
- Load balancer
Explanation An instance template is needed to enable Compute Engine to automatically add instances to a managed instance group. Snapshots are not required to add instances to a managed instance group. Persistent disks are not needed to control the addition of nodes to a managed instance group. Load balancers are used with managed instance groups but are not the thing that automatically adds nodes to the group. For more information, see https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed-instances.
- Question 33: Incorrect You have created a target pool with instances in two zones which are in the same region. The target pool is not functioning correctly. What could be the cause of the problem?
- The target pool is missing a health check. (Correct)
- The target pool nodes are configured with different memory specifications (Incorrect)
- The target pool is not sending metrics to Cloud Monitoring.
- The target pool is not sending logs to Cloud Logging.
Explanation Target pools must have a health check to function properly. Nodes can be in different zones but must be in the same region. Cloud Monitoring and Cloud Logging are useful, but they are not required for the target pool to function properly. Nodes in a pool have the same configuration. For more information, see https://cloud.google.com/load-balancing/docs/target-pools.
- Question 34: Incorrect Kubernetes Engine collects application logs by default when the log data is written where?
- STDOUT
- STDERR (Correct)
- SYSLOG (Incorrect)
- SYSERR
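Question 32's pieces fit together roughly as follows; the resource names, machine type, image, and autoscaling thresholds are assumptions for illustration.

```shell
# The instance template defines what each automatically created VM looks like.
gcloud compute instance-templates create web-template \
    --machine-type=e2-medium \
    --image-family=debian-12 --image-project=debian-cloud

# The managed instance group creates instances from the template.
gcloud compute instance-groups managed create web-mig \
    --template=web-template --size=2 --zone=us-central1-a

# Autoscaling adds instances when average CPU utilization passes 75%.
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a --max-num-replicas=10 \
    --target-cpu-utilization=0.75
```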