AWS SysOps Administrator Documentation V1.0

Course Overview


Welcome to the AWS SysOps Administrator Associate question page for exam code SOA-C02.

Exam Essentials

AWS Fundamentals

Know how to create an AWS account and secure the root user. 

The exam validates your ability to implement security controls to meet compliance requirements. Understanding how to operate the IAM service and how to secure the AWS root user is essential for this exam objective. This includes setting robust password policies, enabling MFA for users, and managing policies.

Understand how to use the AWS Management Console and CLI. 

Another important skill validated by the exam is performing operations by using the AWS Management Console and the AWS CLI. Experiment with the AWS CLI and observe the results in the Management Console; this builds familiarity with basic commands, options, and result parsing. Keep in mind that the exam may include one or more lab components in which you are given a scenario with a set of tasks to perform, and you may be expected to complete those tasks using the AWS Management Console or the AWS CLI.

Be familiar with the AWS Personal Health Dashboard. 

Identifying, classifying, and remediating incidents is an essential part of your role as a SysOps administrator. Understand how to obtain and respond to AWS service degradation, scheduled changes, or resource-impacting issues by leveraging the AWS Health API and the AWS Personal Health Dashboard.

Understand the AWS global infrastructure and all components. 

Implementing high availability and resilient environments will demand that you differentiate between the use of a single availability zone and multi-AZ deployments for a variety of services. Understanding the AWS global infrastructure and all components is critical for such implementation types.

Understand the purpose and function of as many AWS services as possible, starting with those listed in the exam guide. 

The easiest questions on the certification exam are those that challenge you to remember the name of a service, what the service does, and in which use case you would choose it over a different solution. Navigate to the certification exam guide appendix and make a note of all the services listed as in scope. For each of those services, try to remember what it does, how it is structured, and how to use it. As of this writing, there are over 65 services and features that might be covered on the exam. The exam guide appendix also lists out-of-scope services and features. While the exam will probably not include complex scenarios that test deep knowledge of out-of-scope services, you should at least recognize the name and function of every service listed, even if you have not yet learned how to operate it.

Account Creation, Security and Compliance

Know the shared responsibility model. 

The shared responsibility model is fundamental to everything in AWS security. Understand how the line of responsibility shifts as you move from unmanaged services like EC2 to managed services such as RDS or S3.

Recognize that everything is an API. 

This fact drives IAM policies but also helps explain the role of services like AWS Organizations and AWS Control Tower and their ability to provide guardrails and service control policies based on API actions.

Remember authentication vs. authorization. 

Recall which services serve which function.

Know Directory Services use cases. 

There are three directory services and each has a different use case. You will be expected to be able to select the best service for a customer’s use case.

Know IAM policies well. 

Because everything is an API and policies define authorization based on APIs, you will find policies everywhere. Recognize the different types of policies and where they are used.
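As a concrete illustration, the shape of an identity-based policy document can be sketched in Python; the bucket name and statement ID below are made up for illustration, not taken from any real account.

```python
import json

# A minimal identity-based policy (a sketch, not an official AWS example):
# it allows read-only access to one hypothetical bucket. The bucket name
# "example-reports" is an assumption for illustration.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketRead",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",
                "arn:aws:s3:::example-reports/*",
            ],
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

The same JSON shape appears in identity-based policies, resource-based policies, and service control policies; what differs is where the document is attached and which elements (such as Principal) are required.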

Understand the common tasks in IAM. 

Be prepared to do common tasks in IAM, such as enabling and managing multifactor authentication, setting the password policy, and running a credential report. This list is not exhaustive, but just a reminder not to forget to review the simple and common tasks of a systems administrator.

AWS Cost Management

Understand the importance of implementing AWS cost allocation tags. AWS cost allocation tags provide extended visibility and reporting across AWS Budgets, Cost and Usage Reports, and Cost Explorer through customizable tags such as cost center, project name, or department. This can provide a starting point for cost optimization automation tasks.

Understand how to export AWS Cost and Usage Reports. Properly storing and reviewing AWS Cost and Usage Reports provides valuable insight into prior cost optimization initiatives and AWS cost history. Some organizations will be required to meet compliance or regulatory needs and must store AWS Cost and Usage Reports for archival purposes. This process is accomplished by storing AWS Cost and Usage Reports in an Amazon S3 bucket and disabling the overwrite feature.

Understand how to create AWS Budget notifications and actions. AWS Budget notifications provide advanced warning of potential cost overages. This advanced warning provides an opportunity to automate preventive measures to avoid costly overages.

Understand how to identify and remediate unused resources using AWS Cost Explorer. AWS Cost Explorer can help identify underused or unused services using custom filters and dimensions. The built-in AWS Cost Explorer reports allow a breakdown of the top five cost-accruing AWS services and analysis of Reserved Instance utilization.

Understand AWS managed service opportunities to reduce cost. AWS Managed services provide a simplified administration and configuration of commonly used building blocks like containers, databases, and storage. These managed services can reduce IT administration overhead and reduce overall administrative costs.

Understand when to use Savings Plans. Savings Plans are a flexible pricing model used to lower prices of services utilized under the Compute, EC2 Instance, and SageMaker Savings Plans. AWS Cost Explorer provides recommendations for which Savings Plans will realize the biggest savings.

Automated Security Services and Compliance

Understand how to review Trusted Advisor security checks. Trusted Advisor offers several security checks and guidance for an AWS account. It offers basic security checks for IAM Use, MFA on Root Account, Security Groups – Specific Ports Unrestricted, and Amazon S3 Bucket Permissions. Understand that Business and Enterprise support unlocks additional security checks, which assist an organization in reporting or automating security best practices.

Understand Security Hub findings and reports. Security Hub provides a centralized management and reporting location for security findings in an AWS account. Security Hub enables further automation and management of security findings by integrating with other AWS services like GuardDuty, Macie, AWS WAF, and Shield. Automation using EventBridge and Security Hub is important when creating automated remediation actions based on findings from any source.

Understand GuardDuty threat detection findings. GuardDuty provides analysis of threats from VPC flow logs, DNS logs, and CloudTrail activity. Any threats identified within a GuardDuty-monitored region are available as findings in the GuardDuty console. GuardDuty provides findings for EC2 resources, S3 resources, and IAM resources, as well as Kubernetes resources within an account. You can review GuardDuty findings in the GuardDuty console, Security Hub, or the AWS CLI, or by using API operations.

Be able to review findings in Inspector. Inspector offers findings for vulnerabilities discovered within your web applications hosted in the AWS Cloud. Inspector offers findings for package vulnerability types and network reachability types. You can review findings using the Inspector console and dashboard, or in AWS Security Hub, or by exporting the findings to CSV or JSON format for use with other applications.

Understand how to implement encryption at rest using AWS KMS. Key Management Service offers the creation and management of symmetric and asymmetric keys to encrypt data stored within AWS at rest. Understand how AWS KMS keys are rotated, managed, and used with envelope encryption to securely store data in AWS services like S3, and how AWS KMS is used to encrypt EBS volumes to protect data at rest.

Know how to implement encryption in transit using AWS Certificate Manager. The AWS Certificate Manager (ACM) service lets you create and manage certificates to protect data in transit. Understand how ACM integrates with CloudFront and Elastic Load Balancing, and when to use private certificates within an infrastructure.

Know how to enforce a data classification scheme with Macie. The Macie service offers pattern detection and analysis of data stored in S3 to identify and classify PII. Know how to review Macie findings and create data discovery jobs to enforce data classification in S3 buckets.

Understand how to securely store secrets using Secrets Manager. The Secrets Manager service tightly integrates with several AWS services. Understand how to create, store, and rotate secrets for use with RDS, DocumentDB, and Redshift clusters. Understand which secrets you can automatically rotate and how to access secrets using CloudFormation.

Be able to configure AWS network protection services using AWS Shield. Shield offers DDoS protection for AWS resources and edge locations. Know how to enable Shield Advanced and review findings from Shield to mitigate and reduce DDoS attacks.

Understand how to configure AWS network protection services using AWS WAF. AWS WAF provides protection for web applications running in the AWS Cloud. Understand how to create and apply AWS WAF ACLs, rules, and rule groups. Know when to use AWS-managed rule groups and when to create custom rule groups, and how to review findings and apply remediation using Firewall Manager for out-of-compliance rule groups.

Compute

Know what to monitor. Be sure to have basic hands-on knowledge of the many monitoring and optimization tools such as Cost Explorer, Compute Optimizer, and, of course, CloudWatch. Don't neglect basics like the Monitoring tab of resources such as EC2 and ELB. For each compute type, be sure you know which metrics are important. Know basic versus detailed monitoring.

Know how and when to optimize compute pricing. Savings plans and Spot instances are very important mechanisms for reducing and managing cost and availability. Know the capabilities and limitations of Compute Optimizer.

Be familiar with EC2 enhanced capabilities. The best way to learn is to go through the setup screens item by item. Be sure you can give a short explanation of what each feature does and when you would use it. Take a look at the left-hand navigation in the EC2 console. Is there anything there you aren't fluent in? Click each option and explore.

Get hands-on knowledge. In the console, can you perform every step in the life cycle of an AMI, EBS, EC2, and Image Builder?

Know your auto scaling. Auto Scaling has a significant number of options. Be sure to review these in more depth and understand which would be appropriate for solving any given scaling problem.

Storage, Migration and Transfer

Understand how Amazon FSx uses multiple Availability Zones. Amazon FSx is a fully managed AWS service designed to increase fault tolerance and high availability. You can increase fault tolerance and high availability with Amazon FSx storage services by enabling cross-region replication or by extending the storage services using a hybrid architecture approach.

Understand how to use AWS DataSync in migrations. AWS DataSync provides an automated option for keeping migration data updated until the application migration date. Once the application is ready to move fully into AWS, the data has already been kept in sync and is ready for migration completion. You can also use AWS DataSync as a migration tool to transfer large datasets during off-peak business hours to reduce the impact on network bandwidth.

Understand how to implement AWS Backup for cloud-native backup management. AWS Backup provides a centralized and automated method for managing backups across supported AWS services. You can create and maintain backups centrally using backup and restoration plans. You can use on-demand backups for resources as needed to augment scheduled frequency backups in backup plans. AWS Backup integration with AWS Organizations allows full management across all AWS accounts in an AWS Organization.

Understand how to implement Amazon Data Lifecycle Manager. Amazon Data Lifecycle Manager provides automated EBS Snapshot and EBS-backed AMI creation and management within an AWS account. Amazon Data Lifecycle Manager creates EBS Snapshots using snapshot Lifecycle policies to automate protection of individual EBS volumes or all EBS volumes attached to an EC2 instance. Amazon Data Lifecycle Manager supports cross-account copy event policies to automate snapshot copies across accounts. Policy schedules handle the frequency of EBS Snapshots, fast snapshot restore settings, and cross-region copy rules.

Understand use cases for enabling Amazon S3 cross-region replication. Amazon S3 cross-region replication increases the fault tolerance of application data stored in Amazon S3 buckets. This feature is also useful in creating backup and disaster recovery options for application data between regions, overall increasing data fault tolerance and in some cases application fault tolerance.

Understand how to implement Amazon S3 Lifecycle rules. You can use Amazon S3 Lifecycle rules to move data between storage classes. Lifecycle rules can assist in reducing storage costs by moving data between Amazon S3 Standard storage class into other longer-term storage classes like Amazon S3 Glacier Flexible Retrieval tier or Amazon S3 Deep Archive tier. You can configure Lifecycle rules to move data based on the needs of your organization and the retention needs using Amazon S3 Deep Archive and Amazon S3 Glacier storage tiers.
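The rules described above take the shape of a Lifecycle configuration document; a sketch follows, in which the prefix, day counts, and rule ID are illustrative assumptions rather than recommendations.

```python
import json

# A sketch of an S3 Lifecycle configuration: logs transition to
# Standard-IA after 30 days, Glacier Flexible Retrieval after 90,
# Deep Archive after a year, and expire after two. All values here
# are assumptions for illustration.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
            "Expiration": {"Days": 730},
        }
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```

Note that transition days must increase as data moves to colder classes, and expiration must come after the last transition.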

Understand use cases for Amazon S3 Glacier storage classes. You can use Amazon S3 Glacier storage classes for longer-term cold storage of data that does not require frequent access. Financial services, healthcare, and organizations required to maintain data for extended periods of time can benefit from the cost savings of Amazon S3 Glacier storage classes. Depending on the frequency of retrieval, each Amazon S3 Glacier Storage class offers different recovery times and associated costs when retrieving data from the service. When designing a backup and recovery plan, take into consideration the potential duration of recovery that can impact recovery time objectives (RTOs).

Understand how to configure Amazon S3 static web hosting. Amazon S3 static web hosting enables a cost-effective solution to provide static web content, web redirects, or simple web solutions to expand business capabilities. Organizations often configure Amazon S3 static web hosting to provide splash pages or scalable short-term web file sharing. You can also use S3 static web hosting as a disaster recovery method to supply vital details or serve as a landing page when an organization is experiencing issues.

Understand how to monitor Amazon EBS volume performance. Each Amazon EBS volume type offers unique performance benefits to applications running on Amazon EC2. Depending on the needs of the application, increasing read or write IOPS can drastically improve performance of an application. Monitoring EBS volume performance using Amazon CloudWatch is critical when working with high-performance applications to determine bottlenecks and potential performance risks, or to ensure the proper volume types and features are enabled.

Understand how to implement S3 performance features. Amazon S3 offers several performance features: multipart uploads improve upload reliability in environments sensitive to network outages, and Amazon S3 Transfer Acceleration increases transfer speeds to and from Amazon S3 buckets. Multipart uploads offer significant performance benefits when working with large datasets by providing improved throughput, the ability to pause and resume, and quick recovery from network issues. Amazon S3 Transfer Acceleration shortens the distance between client applications and AWS servers by using CloudFront edge locations for Amazon S3 bucket access, and it maximizes bandwidth utilization by minimizing the effect of distance on bandwidth.
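The multipart-upload limits can be made concrete with a little arithmetic; the 100 MiB default part size below is an arbitrary choice for illustration.

```python
import math

# Rough multipart-upload math: S3 requires parts of at least 5 MiB
# (except the last part) and allows at most 10,000 parts per upload.
# Given an object size and a part size, compute how many parts are needed.
MiB = 1024 * 1024

def part_count(object_size: int, part_size: int = 100 * MiB) -> int:
    if part_size < 5 * MiB:
        raise ValueError("S3 parts must be at least 5 MiB (except the last)")
    count = math.ceil(object_size / part_size)
    if count > 10_000:
        raise ValueError("S3 allows at most 10,000 parts per upload")
    return count

# A 5 GiB object with 100 MiB parts: 5120 / 100 = 51.2, so 52 parts.
print(part_count(5 * 1024 * MiB))  # 52
```

Because each part can be retried independently, a single dropped connection costs only one part's worth of re-upload rather than the whole object.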

Databases

Know the engines supported by RDS. This may seem pretty basic, but don't overlook some basic memorization. Amazon Aurora is one of the engines in RDS; don't be confused by the fact that Aurora is compatible with PostgreSQL and MySQL. The engines are Oracle, SQL Server, MySQL, MariaDB, PostgreSQL, IBM Db2, and Amazon Aurora.

Be able to compare Memcached and Redis. Both are supported caching engines in Amazon ElastiCache, but they have different features and capabilities. At a basic level, Memcached is easier to get set up but lacks some of the more advanced features of Redis. You will want to be able to compare and contrast the two engines and recognize when to use each.

Know what can be monitored and where to find the data. Databases are very resource intensive as well as business critical. Performance monitoring is an essential task, and AWS provides a variety of ways to monitor performance. Of particular importance are Enhanced Metrics and Performance Insights. However, don’t forget about common tools like CloudWatch, SNS, and EventBridge.

Have a caching strategy. Given a scenario, be able to select an appropriate caching strategy. Two of the most common strategies are lazy loading and write-through.
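A minimal sketch of the lazy-loading strategy, using plain dicts to stand in for the cache (ElastiCache) and the database (RDS or DynamoDB):

```python
# Lazy loading (cache-aside): check the cache first, and only query the
# database on a miss, then populate the cache for subsequent reads.
# The dicts below are stand-ins for a real cache and database.
database = {"user:1": "Alice", "user:2": "Bob"}
cache = {}

def get(key):
    if key in cache:           # cache hit: no database round trip
        return cache[key]
    value = database.get(key)  # cache miss: read from the database...
    cache[key] = value         # ...and populate the cache for next time
    return value

print(get("user:1"))  # miss -> loaded from the database, prints "Alice"
print(get("user:1"))  # hit  -> served from the cache, prints "Alice"
```

A write-through strategy would instead update the cache at the same time the database is written, trading slightly slower writes for fresher reads; lazy loading caches only what is actually requested but can serve stale data until entries expire.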

Understand pricing. Remember that for the exam you will not need to know exact prices for any services. Instead, you will need to be able to choose the most cost-effective solution for a given scenario.

Understand performance and cost. You will almost always be balancing performance and cost. Pay particular attention to which the customer in a scenario is most concerned with.

Understand high availability and disaster recovery. Be able to differentiate between high availability and disaster recovery. These concerns overlap significantly, but there are differences. There are many options for both, so you will want to be familiar with them and when each should be used.

Monitoring, Logging and Remediation

Recognize use cases for which CloudWatch is well suited. CloudWatch is a monitoring service, and it’s going to be your first line of defense in troubleshooting. You’ll collect and track metrics and set alarms, from simple metrics such as CPU usage to application-tuned custom ones that you define.

Recite the core components of a CloudWatch event. An event in CloudWatch has the event itself, a target, and a rule. Events indicate changes in your environment, targets process events, and rules match events and route them to targets. Get these three concepts straight, and you’ll be set to handle the more conceptual exam questions.
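The triad can be sketched in miniature. The event pattern below follows the EventBridge pattern shape, but the matcher is deliberately simplified and is not the real matching algorithm; in a real setup the rule would route matching events to a target such as an SNS topic or a Lambda function.

```python
# A rule's event pattern: match EC2 instances entering the "stopped" state.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["stopped"]},
}

def matches(event: dict, pattern: dict) -> bool:
    """Very simplified matching: every pattern field must list the
    event's value. Real EventBridge matching is far richer than this."""
    for field, allowed in pattern.items():
        if isinstance(allowed, dict):
            if not matches(event.get(field, {}), allowed):
                return False
        elif event.get(field) not in allowed:
            return False
    return True

event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"state": "stopped"},
}
print(matches(event, event_pattern))  # True
```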

Recognize names of CloudWatch metrics. You will be asked about default AWS monitoring—CPU, status, disk, network, but not memory—and about the various metric names. Although memorizing all of them is tough, you should review AWS-defined metric names before taking the exam.

Explain how a CloudWatch alarm works. An alarm is a construct that watches a metric and is triggered at certain thresholds. This alarm can then itself trigger actions, such as an Auto Scaling group scaling out or perhaps a Lambda function running. You should be able to describe this process and recognize its usefulness.

List and explain the three CloudWatch alarm states. An alarm can be in three states: OK, ALARM, and INSUFFICIENT_DATA. OK means the metric is within the defined threshold; ALARM means the alarm is “going off” because the metric is outside or has crossed the defined threshold. INSUFFICIENT_DATA should be obvious: there's not enough data to report yet.
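A toy evaluation function makes the three states concrete; real alarms evaluate a configurable number of periods against the threshold, which this sketch collapses to a single check.

```python
# Derive an alarm state from recent datapoints: OK within the threshold,
# ALARM when breached, INSUFFICIENT_DATA when there is nothing to evaluate.
def alarm_state(datapoints, threshold):
    if not datapoints:
        return "INSUFFICIENT_DATA"
    if max(datapoints) > threshold:
        return "ALARM"
    return "OK"

print(alarm_state([42.0, 55.5], threshold=80))  # OK
print(alarm_state([42.0, 91.3], threshold=80))  # ALARM
print(alarm_state([], threshold=80))            # INSUFFICIENT_DATA
```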

Create custom metrics for anything above the hypervisor. It’s subtle, but important: CloudWatch knows nothing about specific tasks that are affecting performance, because applications all live above the AWS virtualization layer. This is why CloudWatch doesn’t provide a memory metric that lives above the hypervisor. You can create custom metrics, but they’ll require an agent, and they will be more limited than metrics that can interact below that virtualization layer.

Differentiate between CloudWatch, CloudTrail, and AWS Config. In a nutshell, CloudWatch is for real-time performance and health monitoring of your environment. CloudTrail monitors API logs and events within your AWS environment. AWS Config monitors the configuration of your environment. All three provide compliance, auditing, and security.

Describe the two types of trails: cross-region and single-region. A cross-region trail functions in all regions of your account. All logs are then placed in a single S3 bucket. A single-region trail applies to one region only and can place logs in any S3 bucket, regardless of that bucket’s region.

Explain how cross-region trails automatically function in new regions. A cross-region trail will automatically begin capturing activity in any new region that is stood up in an environment without any user intervention. Logs for new activity are placed in the same S3 bucket as logs for existing regions and aggregated in seamlessly.

Describe the best practices for acting on and reviewing CloudTrail logs. It is not enough to simply turn on CloudTrail and create a few trails. You should set up CloudWatch alarms related to those trails and potentially send events out via SNS. You should also be continually reviewing logs via a CloudWatch (not CloudTrail!) dashboard that has alarms connected to your CloudTrail logs in S3. Further, you may want to consider using a tool like Amazon Athena for deeper analysis of large log file stores.

Explain the use of AWS Config in monitoring, especially as compared to CloudWatch and CloudTrail. CloudWatch monitors the status of running applications. CloudTrail logs and provides audit trails, especially for API calls. AWS Config is distinct from both of these as it is concerned with the configuration of resources, rather than their runtime state. Anything that affects the setup of a resource and its interaction with other AWS resources is largely under this umbrella.

List the benefits of AWS Config. AWS Config provides centralized configuration management without requiring third-party tools. It also provides configuration audit trails, a sort of configuration equivalent to the API audit trails provided by CloudTrail. And through both of these AWS Config adds a layer of security and compliance to your application by ensuring changes to your environments are always surfaced and evaluated.

Explain AWS Config rules. A rule simply states that a certain configuration—or more often, a certain part of a configuration—should be within a set of values. That rule is broken when a change moves configuration outside of allowed thresholds for those values.

Explain how AWS Config rules are evaluated. There is typically code associated with a rule defining how that rule is evaluated. If you define a custom rule, you’ll write your own code to evaluate configuration and report back as to whether the configuration follows or breaks the custom rule. This code is then attached to the rule as a Lambda function.
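A sketch of what the evaluation logic inside such a Lambda function might look like; the required "CostCenter" tag is a hypothetical custom rule, not an AWS-defined one, and the real handler would also report results back via the Config API.

```python
# Custom AWS Config rule logic (sketch): inspect a configuration item
# and decide compliance. The hypothetical rule: every EC2 instance must
# carry a "CostCenter" tag.
def evaluate_compliance(configuration_item: dict) -> str:
    if configuration_item.get("resourceType") != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    tags = configuration_item.get("tags", {})
    return "COMPLIANT" if "CostCenter" in tags else "NON_COMPLIANT"

print(evaluate_compliance(
    {"resourceType": "AWS::EC2::Instance", "tags": {"CostCenter": "1234"}}
))  # COMPLIANT
```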

Describe the two ways rule evaluation can be triggered. Two triggers cause AWS Config rules to evaluate: change-based triggers and periodic triggers. A change-based trigger causes a rule to evaluate a configuration when there’s a change in the environment. A periodic trigger evaluates a configuration at a predefined frequency.

Explain how AWS Systems Manager is able to help with operational tasks. AWS Systems Manager provides tools that allow you to monitor and maintain your instances, while allowing for the creation of patch baselines and compliance monitoring.

Explain the use of the various components of AWS Systems Manager. Know what the various components of AWS Systems Manager do. The Run command allows you to execute command documents against AWS resources. Patch Manager allows you to automate the installation of security patches and application updates. The Parameter Store creates a central location to store secrets and other parameters like license keys. Session Manager allows you to remotely administer your systems without opening up ports in your security groups. State Manager helps you monitor the compliance of your systems in regard to versioning and proving that baseline software is installed.

Networking

Calculate CIDR. The exam will not expect you to do math in your head, so you will not need to calculate arbitrary CIDR ranges. However, you will want to be able to recognize large versus small ranges. For example, 10.0.0.0/16 is the largest available range with 65,536 IP addresses. 10.0.0.0/28 is the smallest with 16 IP addresses. Five IP addresses are always reserved for AWS use. That means that the available IP addresses in a /28 CIDR would be 11.
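The arithmetic above can be checked with Python's standard library:

```python
import ipaddress

# A /16 holds 65,536 addresses and a /28 holds 16; AWS reserves 5
# addresses per subnet, leaving 11 usable in a /28.
large = ipaddress.ip_network("10.0.0.0/16")
small = ipaddress.ip_network("10.0.0.0/28")

print(large.num_addresses)      # 65536
print(small.num_addresses)      # 16
print(small.num_addresses - 5)  # 11 usable after AWS's 5 reserved
```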

Practice building a VPC. AWS has published a handy reference deployment (formerly a QuickStart) at https://aws.amazon.com/solutions/implementations/vpc. Study the structure and addresses. Can you set this up manually?

Understand public vs. private IP addresses. All VPC resources will receive a private IP address. There are three private IP ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. Public IP addresses can, optionally, be assigned to most resources. Know how to create elastic IP addresses and attach them to an EC2 instance.
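A quick way to test whether an address falls in one of those three private ranges; note that Python's built-in is_private covers more than RFC 1918, so the check below uses the explicit list.

```python
import ipaddress

# The three RFC 1918 private ranges listed above.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("172.31.0.5"))  # True  (falls inside 172.16.0.0/12)
print(is_rfc1918("8.8.8.8"))     # False (public)
```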

Know your gateways and connectivity. VPCs are designed to be an isolated network environment in the same way that your on-premises datacenter is an isolated network. A variety of gateways and endpoints are available. Be familiar with when to use each. Ask: How would I connect to S3? How would I connect to my on-premises network? How would I connect to another VPC?

Understand how to monitor your network. Many monitoring tools are available in AWS, including CloudWatch, CloudTrail, and X-Ray. For monitoring network traffic in the VPC, Traffic Mirroring will be your best solution. Know how to enable it and what it can monitor.

Understand network architecture. Studying architectural diagrams can be very helpful: for example, see https://aws.amazon.com/architecture/reference-architecture-diagrams. Make note of the paths and route tables. Note which services are used in a given scenario.

Content Delivery

Understand how DNS works. The Domain Name System (DNS) is used to resolve names to IP addresses, and vice versa. In a forward lookup, a name is resolved to an IP address; in a reverse lookup, an IP address is resolved to a hostname. Your client will query the local DNS server for a record. If your local DNS server knows the answer, it will respond with it; if it doesn't, it will query the root and top-level domain (TLD) DNS servers and work its way down the chain until it locates the authoritative DNS server and gets the response for the query.
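The reverse-lookup naming can be seen directly: a PTR record for an IPv4 address lives under the in-addr.arpa zone with the octets reversed, and the standard library will construct that name.

```python
import ipaddress

# Build the PTR record name used for a reverse lookup of 192.0.2.10
# (a documentation-range address): the octets are reversed and the
# in-addr.arpa suffix appended.
ptr_name = ipaddress.ip_address("192.0.2.10").reverse_pointer
print(ptr_name)  # 10.2.0.192.in-addr.arpa
```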

Know the various DNS record types. Know the main DNS record types and when you would want to use them. Know how to use A records, PTR records, CNAME records, alias records, MX records, and TXT records. Alias records are important to Route 53 and the exam.

Understand what routing policies do. You should know how to use the routing policies and set them up, including traffic flows.

Know how health checks work. Remember the different types of health checks and how they work in relation to failovers.

Remember how CloudFront works. Know how to implement a distribution, how to invalidate a cached object, and what OAI and OAC do.

Know the purpose of AWS Global Accelerator and when to use it. Know what an accelerator and anycast IP addresses are.

Deployment, Provisioning and Automation

Remember that you are responsible for managing your app. The great thing about Elastic Beanstalk is that you are responsible for managing your application but that AWS is responsible for maintaining the underlying services since Elastic Beanstalk is a managed service. You still need to ensure that your platform is patched, a task that is made simpler with managed updates.

Remember the deployment modes for applications. Deployment modes are a popular line of questioning on the exam. Remember the differences and use cases between all-at-once, rolling, rolling with additional batches, and immutable.

Understand what CloudFormation does. CloudFormation allows you to build your infrastructure from a template, which ensures that resources are built the same way every time. CloudFormation lets you practice infrastructure as code (IaC).

Define the relationship between templates and stacks. Templates are the definition of your environment, whereas stacks are instances of the template. This means that the stack contains all the resources defined in the template. Stacks are an all-or-nothing deal; if any one resource fails to be built successfully, then the entire stack will fail and be rolled back.
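A minimal template, expressed here as a Python dict for illustration (the logical ID ReportsBucket is an arbitrary choice): each stack launched from it gets its own instance of the bucket it defines.

```python
import json

# A sketch of a minimal CloudFormation template with the common
# top-level sections: one S3 bucket resource and one output.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal template: one S3 bucket.",
    "Resources": {
        "ReportsBucket": {
            "Type": "AWS::S3::Bucket"
        }
    },
    "Outputs": {
        "BucketName": {"Value": {"Ref": "ReportsBucket"}}
    },
}

print(json.dumps(template, indent=2))
```

Creating two stacks from this one template yields two independent buckets; deleting a stack removes its resources together, and a failed resource rolls the whole stack back.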

Remember what the sections in a CloudFormation template are used for. You need to remember what the various sections in the CloudFormation template are used for. You won’t be expected to write your own template on the exam, but you may be shown samples and asked questions based on what you are seeing.

Module 1 - Questions

Questions Dump

Miguel Fidalgo Questions

Question 1

  • “A company has an internal web application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone. A SysOps administrator must make the application highly available. Which action should the SysOps administrator take to meet this requirement?”

R: Update the Auto Scaling group to launch new instances in a second Availability Zone in the same AWS Region.

Question 2

  • “A company hosts a website on multiple Amazon EC2 instances that run in an Auto Scaling group. Users are reporting slow responses during peak times between 6 PM and 11 PM every weekend. A SysOps administrator must implement a solution to improve performance during these peak times. What is the MOST operationally efficient solution that meets these requirements?”

R: Configure a scheduled scaling action with a recurrence option to change the desired capacity before and after peak times

Question 3

  • “A company is running a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The company configured an Amazon CloudFront distribution and set the ALB as the origin. The company created an Amazon Route 53 CNAME record to send all traffic through the CloudFront distribution. As an unintended side effect, mobile users are now being served the desktop version of the website. Which action should a SysOps administrator take to resolve this issue?”

R: Configure the CloudFront distribution behavior to forward the User-Agent header.

Question 4

  • “A SysOps administrator has enabled AWS CloudTrail in an AWS account. If CloudTrail is disabled, it must be re-enabled immediately. What should the SysOps administrator do to meet these requirements WITHOUT writing custom code?”

R: Create an AWS Config rule that is invoked when CloudTrail configuration changes. Apply the AWS-ConfigureCloudTrailLogging automatic remediation action.

Question 5

  • “A company hosts its website on Amazon EC2 instances behind an Application Load Balancer. The company manages its DNS with Amazon Route 53, and wants to point its domain’s zone apex to the website. Which type of record should be used to meet these requirements?”

R: An alias record for the domain’s zone apex

Question 6

  • “A company must ensure that any objects uploaded to an S3 bucket are encrypted. Which of the following actions will meet this requirement? (Choose two.)”

R: 1. Implement Amazon S3 default encryption to make sure that any object being uploaded is encrypted before it is stored. 2. Implement S3 bucket policies to deny unencrypted objects from being uploaded to the buckets
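The second part of the answer, a bucket policy that denies unencrypted uploads, can be sketched as follows. The bucket name is a placeholder, and this is one common pattern (a `Null` condition that denies any `PutObject` lacking the encryption header), not the only valid policy.

```python
import json

# Minimal sketch of a bucket policy that denies PutObject requests that
# do not carry the server-side encryption header. "example-bucket" is a
# placeholder name.
BUCKET = "example-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        # Null=true means "the key is absent", i.e. no SSE header was sent.
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}
print(json.dumps(policy, indent=2))
```

Combined with S3 default encryption, this gives both a safety net (objects are encrypted at rest by default) and an explicit guardrail (unencrypted upload requests are rejected).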

Question 7

  • “A company has a stateful web application that is hosted on Amazon EC2 instances in an Auto Scaling group. The instances run behind an Application Load Balancer (ALB) that has a single target group. The ALB is configured as the origin in an Amazon CloudFront distribution. Users are reporting random logouts from the web application. Which combination of actions should a SysOps administrator take to resolve this problem? (Choose two.)”

R: 1. Configure cookie forwarding in the CloudFront distribution cache behavior. 2. Enable sticky sessions on the ALB target group.

Question 8

  • “A company is running a serverless application on AWS Lambda. The application stores data in an Amazon RDS for MySQL DB instance. Usage has steadily increased, and recently there have been numerous “too many connections” errors when the Lambda function attempts to connect to the database. The company already has configured the database to use the maximum max_connections value that is possible. What should a SysOps administrator do to resolve these errors?”

R: Use Amazon RDS Proxy to create a proxy. Update the connection string in the Lambda function.

Question 9

  • “A SysOps administrator is deploying an application on 10 Amazon EC2 instances. The application must be highly available. The instances must be placed on distinct underlying hardware. What should the SysOps administrator do to meet these requirements?”

R: Launch the instances into a spread placement group in a single AWS Region.
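A sketch of the API calls behind this answer, with placeholder names and AMI ID. Note the documented limit of seven running instances per Availability Zone per spread placement group, so ten instances require at least two AZs.

```python
# Sketch: create a spread placement group and launch instances into it
# (boto3 kwargs only; the group name and AMI ID are placeholders).
# A spread group places each instance on distinct underlying hardware,
# with a limit of 7 running instances per AZ per group.
placement_group = {"GroupName": "app-spread", "Strategy": "spread"}
run_kwargs = {
    "ImageId": "ami-12345678",          # placeholder AMI
    "InstanceType": "t3.micro",
    "MinCount": 5,
    "MaxCount": 5,
    "Placement": {"GroupName": "app-spread"},
}
# ec2_client.create_placement_group(**placement_group)
# ec2_client.run_instances(**run_kwargs)  # repeat per subnet/AZ
```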

Question 10

  • “A SysOps administrator is troubleshooting an AWS CloudFormation template whereby multiple Amazon EC2 instances are being created. The template is working in us-east-1, but it is failing in us-west-2 with the error code: AMI [ami-12345678] does not exist How should the Administrator ensure that the AWS CloudFormation template is working in every region?”

R: Modify the AWS CloudFormation template by including the AMI IDs in the “Mappings” section. Refer to the proper mapping within the template for the proper AMI ID.
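The Mappings pattern can be illustrated with a Python dict that mirrors the template structure, plus a tiny resolver that mimics `Fn::FindInMap`. The AMI IDs and logical names here are placeholders.

```python
# Dict mirroring a CloudFormation template that maps Region -> AMI ID.
# All IDs and names are placeholders for illustration.
template = {
    "Mappings": {
        "RegionMap": {
            "us-east-1": {"AMI": "ami-12345678"},
            "us-west-2": {"AMI": "ami-87654321"},
        }
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # In the real template this property would be:
                #   ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
                "ImageId": ["RegionMap", "AWS::Region", "AMI"],
            },
        }
    },
}

def find_in_map(tpl, map_name, top_key, second_key):
    """Mimic Fn::FindInMap resolution for illustration only."""
    return tpl["Mappings"][map_name][top_key][second_key]

print(find_in_map(template, "RegionMap", "us-west-2", "AMI"))  # -> ami-87654321
```

Because the AMI ID is looked up per Region at stack creation time, the same template works in every Region listed in the map.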

Question 11

  • “A SysOps administrator is provisioning an Amazon Elastic File System (Amazon EFS) file system to provide shared storage across multiple Amazon EC2 instances. The instances all exist in the same VPC across multiple Availability Zones. There are two instances in each Availability Zone. The SysOps administrator must make the file system accessible to each instance with the lowest possible latency. Which solution will meet these requirements?”

R: Create a mount target in each Availability Zone of the VPC. Use the mount target to mount the EFS file system on the instances in the respective Availability Zone.

Question 12

  • “A SysOps administrator has successfully deployed a VPC with an AWS CloudFormation template. The SysOps administrator wants to deploy the same template across multiple accounts that are managed through AWS Organizations. Which solution will meet this requirement with the LEAST operational overhead?”

R: Use AWS CloudFormation StackSets from the management account to deploy the template in each of the accounts.

Question 13

  • “A company is running distributed computing software to manage a fleet of 20 Amazon EC2 instances for calculations. The fleet includes 2 control nodes and 18 task nodes to run the calculations. Control nodes can automatically start the task nodes. Currently, all the nodes run on demand. The control nodes must be available 24 hours a day, 7 days a week. The task nodes run for 4 hours each day. A SysOps administrator needs to optimize the cost of this solution. Which combination of actions will meet these requirements? (Choose two.)”

  • Purchase EC2 Instance Savings Plans for the control nodes.

  • Use Spot Instances for the task nodes. Use On-Demand Instances if there is no Spot availability.

Question 14

  • “A company is supposed to receive a data file every hour in an Amazon S3 bucket. An S3 event notification invokes an AWS Lambda function each time a file arrives. The function processes the data for use by an application. The application team notices that sometimes the file does not arrive. The application team wants to receive a notification whenever the file does not arrive. What is the MOST operationally efficient solution that meets these requirements?”

Create an Amazon CloudWatch alarm to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert the application team when the Invocations metric of the Lambda function is zero for an hour. Configure the alarm to treat missing data as breaching.
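The alarm in this answer can be sketched as `put_metric_alarm` kwargs. The function name and SNS topic ARN are placeholders; the key detail is `TreatMissingData: breaching`, so an hour with no invocation data at all still triggers the alarm.

```python
# Sketch of put_metric_alarm kwargs for the "file never arrived" alert;
# the function name and topic ARN are placeholders.
alarm = {
    "AlarmName": "hourly-file-missing",
    "Namespace": "AWS/Lambda",
    "MetricName": "Invocations",
    "Dimensions": [{"Name": "FunctionName", "Value": "process-hourly-file"}],
    "Statistic": "Sum",
    "Period": 3600,                   # evaluate one hour at a time
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "LessThanThreshold",
    "TreatMissingData": "breaching",  # no data points at all still alarms
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:file-alerts"],
}
# cloudwatch_client.put_metric_alarm(**alarm)
```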

Question 15

“A company recently acquired another corporation and all of that corporation’s AWS accounts. A financial analyst needs the cost data from these accounts. A SysOps administrator uses Cost Explorer to generate cost and usage reports. The SysOps administrator notices that ‘No Tagkey’ represents 20% of the monthly cost. What should the SysOps administrator do to tag the ‘No Tagkey’ resources?”

Use Tag Editor to find and tag all the untagged resources

Question 16

“While setting up an AWS managed VPN connection, a SysOps administrator creates a customer gateway resource in AWS. The customer gateway device resides in a data center with a NAT gateway in front of it. What address should be used to create the customer gateway resource?”

The public IP address of the NAT device in front of the customer gateway device

Question 17

“A company has a web application that is experiencing performance problems many times each night. A root cause analysis reveals sudden increases in CPU utilization that last 5 minutes on an Amazon EC2 Linux instance. A SysOps administrator must find the process ID (PID) of the service or process that is consuming more CPU. What should the SysOps administrator do to collect the process utilization information with the LEAST amount of effort?”

Configure the Amazon CloudWatch agent procstat plugin to capture CPU process metrics.

Question 18

“A SysOps administrator configured AWS Backup to capture snapshots from a single Amazon EC2 instance that has one Amazon Elastic Block Store (Amazon EBS) volume attached. On the first snapshot, the EBS volume has 10 GiB of data. On the second snapshot, the EBS volume still contains 10 GiB of data, but 4 GiB have changed. On the third snapshot, 2 GiB of data have been added to the volume, for a total of 12 GiB. How much total storage is required to store these snapshots?”

16 GiB
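The arithmetic behind this answer: EBS snapshots are incremental, so only the first snapshot stores the full volume, and each later snapshot stores only the blocks that changed or were added since the previous one.

```python
# EBS snapshots are incremental: the first snapshot stores the full
# volume data; subsequent snapshots store only changed or added blocks.
first = 10   # GiB: full 10 GiB of initial data
second = 4   # GiB: only the 4 GiB of blocks that changed
third = 2    # GiB: only the 2 GiB of blocks that were added
total = first + second + third
print(total)  # -> 16
```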

Question 19

“A team is managing an AWS account that is a member of an organization in AWS Organizations. The organization has consolidated billing features enabled. The account hosts several applications. A SysOps administrator has applied tags to resources within the account to reflect the environment. The team needs a report of the breakdown of charges by environment. What should the SysOps administrator do to meet this requirement?”

Activate the tag keys for cost allocation on the organization’s management account

Question 20

“A company uses an AWS CloudFormation template to provision an Amazon EC2 instance and an Amazon RDS DB instance. A SysOps administrator must update the template to ensure that the DB instance is created before the EC2 instance is launched. What should the SysOps administrator do to meet this requirement?”

Add the DependsOn attribute to the EC2 instance resource, and provide the logical name of the RDS resource.
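A sketch of the `DependsOn` pattern, written as a dict mirroring the template's Resources section; logical names and properties are placeholders.

```python
# Dict mirroring a template where the EC2 instance declares an explicit
# dependency on the RDS DB instance. Names and properties are placeholders.
resources = {
    "Database": {
        "Type": "AWS::RDS::DBInstance",
        "Properties": {"Engine": "mysql",
                       "DBInstanceClass": "db.t3.micro",
                       "AllocatedStorage": 20},
    },
    "WebServer": {
        "Type": "AWS::EC2::Instance",
        # CloudFormation will not begin creating WebServer until
        # Database reaches CREATE_COMPLETE.
        "DependsOn": "Database",
        "Properties": {"ImageId": "ami-12345678",
                       "InstanceType": "t3.micro"},
    },
}
```

The attribute goes on the resource that must wait (the EC2 instance) and names the resource it waits for (the DB instance), not the other way around.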

Question 21

“A company hosts a static website on Amazon S3. The website is served by an Amazon CloudFront distribution with a default TTL of 86,400 seconds. The company recently uploaded an updated version of the website to Amazon S3. However, users still see the old content when they refresh the site. A SysOps administrator must make the new version of the website visible to users as soon as possible. Which solution meets these requirements?”

Create an invalidation on the CloudFront distribution for the old S3 objects.

Question 22

“A SysOps administrator is responsible for managing a company’s cloud infrastructure with AWS CloudFormation. The SysOps administrator needs to create a single resource that consists of multiple AWS services. The resource must support creation and deletion through the CloudFormation console. Which CloudFormation resource type should the SysOps administrator create to meet these requirements?”

Custom::MyCustomType

Question 23

“A new website will run on Amazon EC2 instances behind an Application Load Balancer. Amazon Route 53 will be used to manage DNS records. What type of record should be set in Route 53 to point the website’s apex domain name (for example, company.com) to the Application Load Balancer?”

ALIAS

Question 24

“A company is implementing security and compliance by using AWS Trusted Advisor. The company’s SysOps team is validating the list of Trusted Advisor checks that it can access. Which factor will affect the quantity of available Trusted Advisor checks?”

The AWS Support plan

Question 25

“A SysOps administrator is investigating issues on an Amazon RDS for MariaDB DB instance. The SysOps administrator wants to display the database load categorized by detailed wait events. How can the SysOps administrator accomplish this goal?”

Enable Amazon RDS Performance Insights

Question 26

“A company is planning to host an application on a set of Amazon EC2 instances that are distributed across multiple Availability Zones. The application must be able to scale to millions of requests each second. A SysOps administrator must design a solution to distribute the traffic to the EC2 instances. The solution must be optimized to handle sudden and volatile traffic patterns while using a single static IP address for each Availability Zone. Which solution will meet these requirements?”

Network Load Balancer

Question 27

“A SysOps administrator is using AWS CloudFormation StackSets to create AWS resources in two AWS Regions in the same AWS account. A stack operation fails in one Region and returns the stack instance status of OUTDATED. What is the cause of this failure?”

The CloudFormation template is trying to create a global resource that is not unique.

Question 28

“A SysOps administrator must configure Amazon S3 to host a simple nonproduction webpage. The SysOps administrator has created an empty S3 bucket from the AWS Management Console. The S3 bucket has the default configuration in place. Which combination of actions should the SysOps administrator take to complete this process? (Choose two.)”

  • Turn off the “Block all public access” setting. Set a bucket policy that allows the s3:GetObject action for all principals (“Principal”: “*”).

  • Create an index.html document. Configure static website hosting, and upload the index document to the S3 bucket.

Question 29

“A company is using an Amazon Aurora MySQL DB cluster that has point-in-time recovery, backtracking, and automatic backup enabled. A SysOps administrator needs to be able to roll back the DB cluster to a specific recovery point within the previous 72 hours. Restores must be completed in the same production DB cluster. Which solution will meet these requirements?”

Use backtracking to rewind the existing DB cluster to the desired recovery point.

Question 30

“A user working in the Amazon EC2 console increased the size of an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 Windows instance. The change is not reflected in the file system. What should a SysOps administrator do to resolve this issue?”

Extend the file system with operating system-level tools to use the new storage capacity.

Question 31

“A SysOps administrator is using Amazon EC2 instances to host an application. The SysOps administrator needs to grant permissions for the application to access an Amazon DynamoDB table. Which solution will meet this requirement?”

Create an IAM role to access the DynamoDB table. Assign the IAM role to the EC2 instance profile.

Question 32

“A SysOps administrator wants to protect objects in an Amazon S3 bucket from accidental overwrite and deletion. Noncurrent objects must be kept for 90 days and then must be permanently deleted. Objects must reside within the same AWS Region as the original S3 bucket. Which solution meets these requirements?”

Enable S3 Versioning on the S3 bucket. Create an S3 Lifecycle policy for the bucket to expire noncurrent objects after 90 days.
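The lifecycle rule from this answer can be sketched as a configuration dict; the bucket name is a placeholder, and S3 Versioning must already be enabled for noncurrent-version rules to have any effect.

```python
# Sketch of a lifecycle configuration that permanently deletes
# noncurrent object versions 90 days after they become noncurrent.
# The bucket name below is a placeholder.
lifecycle = {
    "Rules": [{
        "ID": "expire-noncurrent-90d",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},   # empty prefix = apply to all objects
        "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
    }]
}
# s3_client.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle)
```

Versioning protects against accidental overwrite and deletion (old versions remain recoverable), while the rule caps retention at 90 days, and all versions stay in the bucket's own Region.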

Question 33

“A company has an application that customers use to search for records on a website. The application’s data is stored in an Amazon Aurora DB cluster. The application’s usage varies by season and by day of the week. The website’s popularity is increasing, and the website is experiencing slower performance because of increased load on the DB cluster during periods of peak activity. The application logs show that the performance issues occur when users are searching for information. The same search is rarely performed multiple times. A SysOps administrator must improve the performance of the platform by using a solution that maximizes resource efficiency. Which solution will meet these requirements?”

Deploy an Aurora Replica for the DB cluster. Modify the application to use the reader endpoint for search operations. Use Aurora Auto Scaling to scale the number of replicas based on load.

Question 34

“A company uses AWS Organizations to manage multiple AWS accounts. Corporate policy mandates that only specific AWS Regions can be used to store and process customer data. A SysOps administrator must prevent the provisioning of Amazon EC2 instances in unauthorized Regions by anyone in the company. What is the MOST operationally efficient solution that meets these requirements?”

Create a service control policy (SCP) in AWS Organizations to deny the ec2:RunInstances action in all unauthorized Regions. Attach this policy to the root level of the organization.
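A sketch of such an SCP, using the `aws:RequestedRegion` condition key. The approved Region list here is an example assumption; the policy would be attached at the organization root so it applies to every member account.

```python
import json

# Sketch of a service control policy that denies launching EC2 instances
# outside an approved Region list (the list is an example assumption).
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyEC2OutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
            }
        },
    }],
}
print(json.dumps(scp, indent=2))
```

Because SCPs set the maximum permissions for member accounts, even an account administrator cannot override this restriction from within an affected account.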

Question 35

“A company’s public website is hosted in an Amazon S3 bucket in the us-east-1 Region behind an Amazon CloudFront distribution. The company wants to ensure that the website is protected from DDoS attacks. A SysOps administrator needs to deploy a solution that gives the company the ability to maintain control over the rate limit at which DDoS protections are applied. Which solution will meet these requirements?”

Deploy a global-scoped AWS WAF web ACL with an allow default action. Configure an AWS WAF rate-based rule to block matching traffic. Associate the web ACL with the CloudFront distribution

Question 36

“A SysOps administrator developed a Python script that uses the AWS SDK to conduct several maintenance tasks. The script needs to run automatically every night. What is the MOST operationally efficient solution that meets this requirement?”

Convert the Python script to an AWS Lambda function. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke the function every night.
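The scheduling half of this answer can be sketched as EventBridge API kwargs; the rule name and function ARN are placeholders, and the nightly time is an example.

```python
# Sketch: an EventBridge rule that invokes a Lambda function nightly
# (boto3 kwargs; the names, ARN, and time are placeholder assumptions).
rule = {
    "Name": "nightly-maintenance",
    "ScheduleExpression": "cron(0 2 * * ? *)",  # 02:00 UTC every day
    "State": "ENABLED",
}
target = {
    "Rule": "nightly-maintenance",
    "Targets": [{
        "Id": "maintenance-fn",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:maintenance",
    }],
}
# events_client.put_rule(**rule)
# events_client.put_targets(**target)
# (The Lambda function also needs a resource-based permission allowing
#  events.amazonaws.com to invoke it.)
```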

Question 37

“A SysOps administrator must create a solution that immediately notifies software developers if an AWS Lambda function experiences an error. Which solution will meet this requirement?”

Create an Amazon Simple Notification Service (Amazon SNS) topic with an email subscription for each developer. Create an Amazon CloudWatch alarm by using the Errors metric and the Lambda function name as a dimension. Configure the alarm to send a notification to the SNS topic when the alarm state reaches ALARM

Question 38

“A company has a private Amazon S3 bucket that contains sensitive information. A SysOps administrator needs to keep logs of the IP addresses from authentication failures that result from attempts to access objects in the bucket. The logs must be stored so that they cannot be overwritten or deleted for 90 days. Which solution will meet these requirements?”

Turn on access logging for the S3 bucket. Configure the access logs to be saved in a second S3 bucket. Turn on S3 Object Lock on the second S3 bucket, and configure a default retention period of 90 days.

Question 39

“A SysOps administrator migrates NAT instances to NAT gateways. After the migration, an application that is hosted on Amazon EC2 instances in a private subnet cannot access the internet. Which of the following are possible reasons for this problem? (Choose two.)”

  • The application is using a protocol that the NAT gateway does not support.

  • The NAT gateway is not in the Available state

Question 40

“A company runs an application on an Amazon EC2 instance. A SysOps administrator creates an Auto Scaling group and an Application Load Balancer (ALB) to handle an increase in demand. However, the EC2 instances are failing the health check. What should the SysOps administrator do to troubleshoot this issue?”

Verify that the application is running on the protocol and the port that the listener is expecting

Question 41

“A SysOps administrator has created an AWS Service Catalog portfolio and has shared the portfolio with a second AWS account in the company. The second account is controlled by a different administrator. Which action will the administrator of the second account be able to perform?”

Add a product from the imported portfolio to a local portfolio.

Question 42

“A company has migrated its application to AWS. The company will host the application on Amazon EC2 instances of multiple instance families. During initial testing, a SysOps administrator identifies performance issues on selected EC2 instances. The company has a strict budget allocation policy, so the SysOps administrator must use the right resource types with the performance characteristics to match the workload. What should the SysOps administrator do to meet this requirement?”

Review and take action on AWS Compute Optimizer recommendations. Purchase Compute Savings Plans to reduce the cost that is required to run the compute resources.

Question 43

“A SysOps administrator is tasked with deploying a company’s infrastructure as code. The SysOps administrator want to write a single template that can be reused for multiple environments. How should the SysOps administrator use AWS CloudFormation to create a solution?”

Use parameters in a CloudFormation template

Question 44

“A SysOps administrator is responsible for a large fleet of Amazon EC2 instances and must know whether any instances will be affected by upcoming hardware maintenance. Which option would provide this information with the LEAST administrative overhead?”

Review the AWS Personal Health Dashboard

Question 45

“A SysOps administrator is attempting to deploy resources by using an AWS CloudFormation template. An Amazon EC2 instance that is defined in the template fails to launch and produces an InsufficientInstanceCapacity error. Which actions should the SysOps administrator take to resolve this error? (Choose two.)”

  • Modify the AWS CloudFormation template to not specify an Availability Zone for the EC2 instance.

  • Modify the AWS CloudFormation template to use a different EC2 instance type.

Question 46

“A company hosts a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 to route traffic. The company also has a static website that is configured in an Amazon S3 bucket. A SysOps administrator must use the static website as a backup to the web application. The failover to the static website must be fully automated. Which combination of actions will meet these requirements? (Choose two.)”

  • Create a primary failover routing policy record. Configure the value to be the ALB. Associate the record with a Route 53 health check.

  • Create a secondary failover routing policy record. Configure the value to be the static website.

Question 47

“A data analytics application is running on an Amazon EC2 instance. A SysOps administrator must add custom dimensions to the metrics collected by the Amazon CloudWatch agent. How can the SysOps administrator meet this requirement?”

  • Create an append_dimensions field in the Amazon CloudWatch agent configuration file to collect the metrics

Question 48

“A company stores its data in an Amazon S3 bucket. The company is required to classify the data and find any sensitive personal information in its S3 files. Which solution will meet these requirements?”

Enable Amazon Macie. Create a discovery job that uses the managed data identifier.

Question 49

“A company hosts a web portal on Amazon EC2 instances. The web portal uses an Elastic Load Balancer (ELB) and Amazon Route 53 for its public DNS service. The ELB and the EC2 instances are deployed by way of a single AWS CloudFormation stack in the us-east-1 Region. The web portal must be highly available across multiple Regions. Which configuration will meet these requirements?”

R: Deploy a copy of the stack in the us-west-2 Region. Create an additional A record in Route 53 that includes the ELB in us-west-2 as an alias target. Configure the A records with a failover routing policy and health checks. Use the ELB in us-east-1 as the primary record and the ELB in us-west-2 as the secondary record.

Question 50

“A SysOps administrator is investigating why a user has been unable to use RDP to connect over the internet from their home computer to a bastion server running on an Amazon EC2 Windows instance. Which of the following are possible causes of this issue? (Choose two.)”

  • A network ACL associated with the bastion’s subnet is blocking the network traffic.

  • The route table associated with the bastion’s subnet does not have a route to the internet gateway.

Question 51

“A SysOps administrator is examining the following AWS CloudFormation template. [Template shown as an image.] Why will the stack creation fail?”

R: The PrivateDnsName cannot be set from a CloudFormation template.

Question 52

“A new application runs on Amazon EC2 instances and accesses data in an Amazon RDS database instance. When fully deployed in production, the application fails. The database can be queried from a console on a bastion host. When looking at the web server logs, the following error is repeated multiple times: *** Error Establishing a Database Connection Which of the following may be causes of the connectivity problems? (Choose two.)”

  • The security group for the database does not have the appropriate ingress rule from the web server to the database.

  • The port used by the application developer does not match the port specified in the RDS configuration.

Question 53

“A compliance team requires all administrator passwords for Amazon RDS DB instances to be changed at least annually. Which solution meets this requirement in the MOST operationally efficient manner?”

Store the database credentials in AWS Secrets Manager. Configure automatic rotation for the secret every 365 days.
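Enabling the rotation from this answer can be sketched as `rotate_secret` kwargs; the secret name and rotation function ARN are placeholders, and a rotation Lambda function (such as one provided by Secrets Manager for RDS) must already exist.

```python
# Sketch of enabling automatic rotation every 365 days on an existing
# secret (boto3 kwargs; the secret name and ARN are placeholders).
rotation = {
    "SecretId": "prod/rds/admin",
    "RotationLambdaARN":
        "arn:aws:lambda:us-east-1:123456789012:function:rds-rotator",
    "RotationRules": {"AutomaticallyAfterDays": 365},
}
# secretsmanager_client.rotate_secret(**rotation)
```

Once configured, Secrets Manager rotates the password on schedule with no manual work, which is what makes this the most operationally efficient option.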

Question 54

“A SysOps administrator is responsible for managing a fleet of Amazon EC2 instances. These EC2 instances upload build artifacts to a third-party service. The third-party service recently implemented a strict IP allow list that requires all build uploads to come from a single IP address. What change should the systems administrator make to the existing build fleet to comply with this new requirement?”

Move all of the EC2 instances behind a NAT gateway and provide the gateway IP address to the service.

Question 55

“A company uses an Amazon CloudFront distribution to deliver its website. Traffic logs for the website must be centrally stored, and all data must be encrypted at rest. Which solution will meet these requirements?”

Create an Amazon S3 bucket that is configured with default server-side encryption that uses AES-256. Configure CloudFront to use the S3 bucket as a log destination

Question 56

“An organization created an Amazon Elastic File System (Amazon EFS) volume with a file system ID of fs-85ba41fc, and it is actively used by 10 Amazon EC2 hosts. The organization has become concerned that the file system is not encrypted. How can this be resolved?”

Enable encryption on a newly created volume and copy all data from the original volume. Reconnect each host to the new volume

Question 57

“A company uses an AWS Service Catalog portfolio to create and manage resources. A SysOps administrator must create a replica of the company’s existing AWS infrastructure in a new AWS account. What is the MOST operationally efficient way to meet this requirement?”

Share the AWS Service Catalog portfolio with the new AWS account. Import the portfolio into the new AWS account.

Question 58

“A SysOps administrator must manage the security of an AWS account. Recently, an IAM user’s access key was mistakenly uploaded to a public code repository. The SysOps administrator must identify anything that was changed by using this access key. How should the SysOps administrator meet these requirements?”

Search AWS CloudTrail event history for all events initiated with the compromised access key within the suspected timeframe.

Question 59

“A company runs a retail website on multiple Amazon EC2 instances behind an Application Load Balancer (ALB). The company must secure traffic to the website over an HTTPS connection. Which combination of actions should a SysOps administrator take to meet these requirements? (Choose two.)”

  • Attach the certificate to the ALB

  • Create a public certificate in AWS Certificate Manager (ACM).

Question 60

“A company has a stateful, long-running workload on a single xlarge general purpose Amazon EC2 On-Demand Instance. Metrics show that the service is always using 80% of its available memory and 40% of its available CPU. A SysOps administrator must reduce the cost of the service without negatively affecting performance. Which change in instance type will meet these requirements?”

Change to one large memory optimized On-Demand Instance

Question 61

“A company asks a SysOps administrator to ensure that AWS CloudTrail files are not tampered with after they are created. Currently, the company uses AWS Identity and Access Management (IAM) to restrict access to specific trails. The company’s security team needs the ability to trace the integrity of each file. What is the MOST operationally efficient solution that meets these requirements?”

Enable the CloudTrail log file integrity validation feature on the trail. The security team can use the digest file that CloudTrail creates to verify the integrity of the delivered files.

Question 62

When the AWS Cloud infrastructure experiences an event that may impact an organization, which AWS service can be used to see which of the organization’s resources are affected?

AWS Personal Health Dashboard

Question 63

“A company is using an AWS KMS customer master key (CMK) with imported key material. The company references the CMK by its alias in the Java application to encrypt data. The CMK must be rotated every 6 months. What is the process to rotate the key?”

Create a new CMK with new imported material, and update the key alias to point to the new CMK.

Question 64

“The security team is concerned because the number of AWS Identity and Access Management (IAM) policies being used in the environment is increasing. The team tasked a SysOps administrator to report on the current number of IAM policies in use and the total available IAM policies. Which AWS service should the administrator use to check how current IAM policy usage compares to current service limits?”

AWS Trusted Advisor

Question 65

“A SysOps administrator is trying to set up an Amazon Route 53 domain name to route traffic to a website hosted on Amazon S3. The domain name of the website is www.anycompany.com, and the S3 bucket name is DOC-EXAMPLE-BUCKET. After the record set is set up in Route 53, the domain name www.anycompany.com does not seem to work, and the static website is not displayed in the browser. Which of the following is a cause of this?”

The S3 bucket name must match the record set name in Route 53

Question 66

“A SysOps administrator has used AWS CloudFormation to deploy a serverless application into a production VPC. The application consists of an AWS Lambda function, an Amazon DynamoDB table, and an Amazon API Gateway API. The SysOps administrator must delete the AWS CloudFormation stack without deleting the DynamoDB table. Which action should the SysOps administrator take before deleting the AWS CloudFormation stack?”

Add a Retain deletion policy to the DynamoDB resource in the AWS CloudFormation stack.
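The `DeletionPolicy: Retain` pattern can be sketched as a dict mirroring the template; the table name and schema are placeholders.

```python
# Dict mirroring a template where the DynamoDB table carries
# DeletionPolicy: Retain, so deleting the stack leaves the table in
# place. Names and key schema are placeholders.
resources = {
    "AppTable": {
        "Type": "AWS::DynamoDB::Table",
        "DeletionPolicy": "Retain",   # table survives stack deletion
        "Properties": {
            "BillingMode": "PAY_PER_REQUEST",
            "AttributeDefinitions": [
                {"AttributeName": "pk", "AttributeType": "S"}
            ],
            "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
        },
    }
}
```

After the attribute is added (via a stack update), the stack can be deleted normally: CloudFormation removes the Lambda function and the API but skips the retained table.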

Question 67

“A SysOps administrator is notified that an Amazon EC2 instance has stopped responding. The AWS Management Console indicates that the system checks are failing. What should the administrator do first to resolve this issue?”

Stop and then start the EC2 instance so that it can be launched on a new host.

Question 68

“A software development company has multiple developers who work on the same product. Each developer must have their own development environments, and these development environments must be identical. Each development environment consists of Amazon EC2 instances and an Amazon RDS DB instance. The development environments should be created only when necessary, and they must be terminated each night to minimize costs. What is the MOST operationally efficient solution that meets these requirements?”

Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to delete the AWS CloudFormation stacks.

Question 69

“A company is partnering with an external vendor to provide data processing services. For this integration, the vendor must host the company’s data in an Amazon S3 bucket in the vendor’s AWS account. The vendor is allowing the company to provide an AWS Key Management Service (AWS KMS) key to encrypt the company’s data. The vendor has provided an IAM role Amazon Resources Name (ARN) to the company for this integration. What should a SysOps administrator do to configure this integration?”

Create a new KMS key. Add the vendor’s IAM role ARN to the KMS key policy. Provide the new KMS key ARN to the vendor

Question 70

“A SysOps administrator is using AWS Systems Manager Patch Manager to patch a fleet of Amazon EC2 instances. The SysOps administrator has configured a patch baseline and a maintenance window. The SysOps administrator also has used an instance tag to identify which instances to patch. The SysOps administrator must give Systems Manager the ability to access the EC2 instances. Which additional action must the SysOps administrator perform to meet this requirement?”

Attach an IAM instance profile with access to Systems Manager to the instances.

Question 71

“A company hosts its website on Amazon EC2 instances in the us-east-1 Region. The company is preparing to extend its website into the eu-central-1 Region, but the database must remain only in us-east-1. After deployment, the EC2 instances in eu-central-1 are unable to connect to the database in us-east-1. What is the MOST operationally efficient solution that will resolve this connectivity issue?”

Create a VPC peering connection between the two Regions. Add the private IP address range of the instances to the inbound rule of the database security group.

Question 72

“A company wants to create an automated solution for all accounts managed by AWS Organizations to detect any security groups that use 0.0.0.0/0 as the source address for inbound traffic. The company also wants to automatically remediate any noncompliant security groups by restricting access to a specific CIDR block that corresponds with the company’s intranet. Which set of actions should the SysOps administrator take to create a solution?”

Create an AWS Config rule to detect noncompliant security groups. Set up automatic remediation to change the 0.0.0.0/0 source address to the approved CIDR block
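The detection and remediation logic described here can be illustrated in plain Python over the structure that `describe-security-groups` returns. This is only a sketch of the check the Config rule would perform (the approved CIDR below is made up):

```python
APPROVED_CIDR = "10.20.0.0/16"  # assumed company intranet CIDR block

def remediate(ip_permissions):
    """Return (noncompliant, fixed) for a list of ingress rules shaped like
    the IpPermissions field of describe-security-groups output."""
    noncompliant = False
    fixed = []
    for perm in ip_permissions:
        ranges = []
        for r in perm.get("IpRanges", []):
            if r.get("CidrIp") == "0.0.0.0/0":
                noncompliant = True
                ranges.append({**r, "CidrIp": APPROVED_CIDR})  # restrict the source
            else:
                ranges.append(r)
        fixed.append({**perm, "IpRanges": ranges})
    return noncompliant, fixed

bad_rule = [{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]
flagged, repaired = remediate(bad_rule)
print(flagged, repaired[0]["IpRanges"][0]["CidrIp"])  # -> True 10.20.0.0/16
```

In practice the rule would be the `restricted-ssh`/custom Config rule plus an SSM Automation remediation document; the snippet only shows the decision being made.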

Question 73

“A company requires that all activity in its AWS account be logged using AWS CloudTrail. Additionally, a SysOps administrator must know when CloudTrail log files are modified or deleted. How should the SysOps administrator meet these requirements?”

Enable log file integrity validation. Use the AWS CLI to validate the log files.
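Integrity validation works because CloudTrail delivers hourly digest files containing the SHA-256 hash of each log file, so any modification changes the hash. The core idea, reduced to stdlib Python (not the real digest file format):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

log_file = b'{"Records": [{"eventName": "CreateUser"}]}'
recorded_digest = sha256_hex(log_file)  # stored in the digest file at delivery time

# Later validation: recompute the hash and compare.
assert sha256_hex(log_file) == recorded_digest             # untouched -> valid
tampered = log_file.replace(b"CreateUser", b"DeleteUser")
assert sha256_hex(tampered) != recorded_digest             # modified -> detected
```

The actual check is `aws cloudtrail validate-logs`, which additionally verifies the RSA signature on each digest file and the chain linking each digest to the previous one, so deleted log files are detected as well as modified ones.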

Question 74

“A company is planning to host its stateful web-based applications on AWS. A SysOps administrator is using an Auto Scaling group of Amazon EC2 instances. The web applications will run 24 hours a day, 7 days a week throughout the year. The company must be able to change the instance type within the same instance family later in the year based on the traffic and usage patterns. Which EC2 instance purchasing option will meet these requirements MOST cost-effectively?”

Convertible Reserved Instances

Question 75

“An application runs on Amazon EC2 instances in an Auto Scaling group. Following the deployment of a new feature on the EC2 instances, some instances were marked as unhealthy and then replaced by the Auto Scaling group. The EC2 instances terminated before a SysOps administrator could determine the cause of the health status changes. To troubleshoot this issue, the SysOps administrator wants to ensure that an AWS Lambda function is invoked in this situation. How should the SysOps administrator meet these requirements?”

Add a lifecycle hook to the Auto Scaling group to invoke the Lambda function through Amazon EventBridge (Amazon CloudWatch Events).

Question 76

“A company runs an application that hosts critical data for several clients. The company uses AWS CloudTrail to track user activities on various AWS resources. To meet new security requirements, the company needs to protect the CloudTrail log files from being modified, deleted, or forged. Which solution will meet these requirements?”

Enable CloudTrail log file integrity validation

Question 77

“A global company operates out of five AWS Regions. A SysOps administrator wants to identify all the company’s tagged and untagged Amazon EC2 instances. The company requires the output to display the instance ID and tags. What is the MOST operationally efficient way for the SysOps administrator to meet these requirements?”

Use Tag Editor in AWS Resource Groups. Select all Regions, and choose a resource type of AWS::EC2::Instance.

Question 78

“A company needs to upload gigabytes of files every day. The company needs to achieve higher throughput and upload speeds to Amazon S3. Which action should a SysOps administrator take to meet this requirement?”

Enable S3 Transfer Acceleration and use the acceleration endpoint when uploading files.

Question 79

“A SysOps administrator maintains the security and compliance of a company’s AWS account. To ensure the company’s Amazon EC2 instances are following company policy, a SysOps administrator wants to terminate any EC2 instance that does not contain a department tag. Noncompliant resources must be terminated in near-real time. Which solution will meet these requirements?”

Create an AWS Config rule with the required-tags managed rule to identify noncompliant resources. Configure automatic remediation to run the AWS-TerminateEC2Instance automation document to terminate noncompliant resources.

Question 80

“A company uploaded its website files to an Amazon S3 bucket that has S3 Versioning enabled. The company uses an Amazon CloudFront distribution with the S3 bucket as the origin. The company recently modified the files, but the object names remained the same. Users report that old content is still appearing on the website. How should a SysOps administrator remediate this issue?”

Create a CloudFront invalidation, and add the path of the updated files.

Question 81

“A company has two VPC networks named VPC A and VPC B. The VPC A CIDR block is 10.0.0.0/16 and the VPC B CIDR block is 172.31.0.0/16. The company wants to establish a VPC peering connection named pcx-12345 between both VPCs. Which rules should appear in the route table of VPC A after configuration? (Choose two.)”

Destination: 10.0.0.0/16, Target: Local
Destination: 172.31.0.0/16, Target: pcx-12345
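Two things make this answer work: the VPC CIDR blocks must not overlap (a hard requirement for peering), and VPC A's route table needs a route for the peer's CIDR pointing at the peering connection. Both can be checked with Python's stdlib `ipaddress` module:

```python
import ipaddress

vpc_a = ipaddress.ip_network("10.0.0.0/16")
vpc_b = ipaddress.ip_network("172.31.0.0/16")

# Peering is only possible when the CIDR blocks do not overlap.
assert not vpc_a.overlaps(vpc_b)

# VPC A's route table after configuration (targets as named in the question).
routes_a = {vpc_a: "local", vpc_b: "pcx-12345"}

def target_for(ip, routes):
    """Pick the route whose destination contains the IP (longest prefix wins)."""
    matches = [net for net in routes if ipaddress.ip_address(ip) in net]
    return routes[max(matches, key=lambda n: n.prefixlen)] if matches else None

print(target_for("172.31.4.10", routes_a))  # -> pcx-12345 (traffic to VPC B)
print(target_for("10.0.9.9", routes_a))     # -> local (traffic within VPC A)
```

VPC B's route table needs the mirror image: 172.31.0.0/16 → Local and 10.0.0.0/16 → pcx-12345.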

Question 82

“A company analyzes sales data for its customers. Customers upload files to one of the company’s Amazon S3 buckets, and a message is posted to an Amazon Simple Queue Service (Amazon SQS) queue that contains the object Amazon Resource Name (ARN). An application that runs on an Amazon EC2 instance polls the queue and processes the messages. The processing time depends on the size of the file. Customers are reporting delays in the processing of their files. A SysOps administrator decides to configure Amazon EC2 Auto Scaling as the first step. The SysOps administrator creates an Amazon Machine Image (AMI) that is based on the existing EC2 instance. The SysOps administrator also creates a launch template that references the AMI. How should the SysOps administrator configure the Auto Scaling policy to improve the response time?” Create a custom metric based on the ApproximateNumberOfMessagesVisible metric and the number of instances in the InService state in the Auto Scaling group. Modify the application to calculate the metric and post the metric to Amazon CloudWatch once each minute. Create an Auto Scaling policy based on this metric to scale the number of instances.
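The custom metric described in this answer is usually called "backlog per instance": queue depth divided by in-service instances. A target-tracking policy can then hold it at a level derived from how long each message takes to process. A sketch of both calculations (the numbers are illustrative):

```python
import math

def backlog_per_instance(visible_messages: int, in_service: int) -> float:
    """ApproximateNumberOfMessagesVisible divided by InService instances."""
    return visible_messages / max(in_service, 1)

def desired_capacity(visible_messages: int, seconds_per_message: float,
                     acceptable_latency_s: float) -> int:
    """Instances needed to drain the backlog within the latency target."""
    per_instance_target = acceptable_latency_s / seconds_per_message
    return math.ceil(visible_messages / per_instance_target)

# 1,000 queued files, ~10 s each, 5-minute processing target.
print(backlog_per_instance(1000, 4))    # -> 250.0 (the metric posted to CloudWatch)
print(desired_capacity(1000, 10, 300))  # -> 34 instances to meet the target
```

Plain CPU-based scaling would miss this workload because processing time depends on file size, not instance CPU; the queue depth is the real demand signal.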

Question 83

“A company runs a multi-tier web application with two Amazon EC2 instances in one Availability Zone in the us-east-1 Region. A SysOps administrator must migrate one of the EC2 instances to a new Availability Zone. Which solution will accomplish this?” Create an Amazon Machine Image (AMI) from the EC2 instance and launch it in a different Availability Zone. Terminate the original instance

Question 84

“A company is expanding its fleet of Amazon EC2 instances before an expected increase of traffic. When a SysOps administrator attempts to add more instances, an InstanceLimitExceeded error is returned. What should the SysOps administrator do to resolve this error?” Use Service Quotas to request an EC2 quota increase.

Question 85

“A company wants to prohibit its developers from using a particular family of Amazon EC2 instances. The company uses AWS Organizations and wants to apply the restriction across multiple accounts. What is the MOST operationally efficient way for the company to apply service control policies (SCPs) to meet these requirements?” Add the accounts to an organizational unit (OU). Apply the SCPs to the OU.

Question 86

“An application is running on an Amazon EC2 instance in a VPC with the default DHCP option set. The application connects to an on-premises Microsoft SQL Server database with the DNS name mssql.example.com. The application is unable to resolve the database DNS name. Which solution will fix this problem?” Create an Amazon Route 53 Resolver outbound endpoint. Add a forwarding rule for the domain example.com. Associate the forwarding rule with the VPC.

Question 87

“A company’s application is hosted by an internet provider at app.example.com. The company wants to access the application by using www.company.com, which the company owns and manages with Amazon Route 53. Which Route 53 record should be created to address this?” CNAME record

Question 88

“A company expanded its web application to serve a worldwide audience. A SysOps administrator has implemented a multi-Region AWS deployment for all production infrastructure. The SysOps administrator must route traffic based on the location of resources. Which Amazon Route 53 routing policy should the SysOps administrator use to meet this requirement?” Geoproximity routing policy

Question 89

“A SysOps administrator wants to upload a file that is 1 TB in size from on-premises to an Amazon S3 bucket using multipart uploads. What should the SysOps administrator do to meet this requirement?” Use the s3 cp command

Question 90

“An application team is working with a SysOps administrator to define Amazon CloudWatch alarms for an application. The application team does not know the application’s expected usage or expected growth. Which solution should the SysOps administrator recommend?” Create CloudWatch alarms that are based on anomaly detection

Question 91

“A company runs a stateless application that is hosted on an Amazon EC2 instance. Users are reporting performance issues. A SysOps administrator reviews the Amazon CloudWatch metrics for the application and notices that the instance’s CPU utilization frequently reaches 90% during business hours. What is the MOST operationally efficient solution that will improve the application’s responsiveness?” Create an Auto Scaling group, and assign it to an Application Load Balancer. Configure a target tracking scaling policy that is based on the average CPU utilization of the Auto Scaling group.

Question 92

“An ecommerce company uses an Amazon ElastiCache for Memcached cluster for in-memory caching of popular product queries on the shopping site. When viewing recent Amazon CloudWatch metrics data for the ElastiCache cluster, the SysOps administrator notices a large number of evictions. Which of the following actions will reduce these evictions? (Choose two.)” “Add an additional node to the ElastiCache cluster. Increase the individual node size inside the ElastiCache cluster.”

Question 93

“A SysOps administrator wants to provide access to AWS services by attaching an IAM policy to multiple IAM users. The SysOps administrator also wants to be able to change the policy and create new versions. Which combination of actions will meet these requirements? (Choose two.)” “Create a customer managed policy. Add the users to an IAM user group. Attach the policy to the group.”

Question 94

“A company stores critical data in Amazon S3 buckets. A SysOps administrator must build a solution to record all S3 API activity. Which action will meet this requirement?” Create an AWS CloudTrail trail to log data events for all S3 objects.

Question 95

“A company runs an application that uses a MySQL database on an Amazon EC2 instance. The EC2 instance has a General Purpose SSD Amazon Elastic Block Store (Amazon EBS) volume. The company made changes to the application code and now wants to perform load testing to evaluate the impact of the code changes. A SysOps administrator must create a new MySQL instance from a snapshot of the existing production instance. This new instance needs to perform as similarly as possible to the production instance. Which restore option meets these requirements?” Use EBS fast snapshot restore to create a new General Purpose SSD EBS volume from the production snapshot.

Question 96

“A team of on-call engineers frequently needs to connect to Amazon EC2 instances in a private subnet to troubleshoot and run commands. The instances use either the latest AWS-provided Windows Amazon Machine Images (AMIs) or Amazon Linux AMIs. The team has an existing IAM role for authorization. A SysOps administrator must provide the team with access to the instances by granting IAM permissions to this role. Which solution will meet this requirement?” Add a statement to the IAM role policy to allow the ssm:StartSession action on the instances. Instruct the team to use AWS Systems Manager Session Manager to connect to the instances by using the assumed IAM role.

Question 97

“A company needs to ensure strict adherence to a budget for 25 applications deployed on AWS. Separate teams are responsible for storage, compute, and database costs. A SysOps administrator must implement an automated solution to alert each team when their projected spend will exceed a quarterly amount that has been set by the finance department. The solution cannot incur additional compute, storage, or database costs. Which solution will meet these requirements?” Use AWS Budgets to create a cost budget for each team, filtering by the services they own. Specify the budget amount defined by the finance department along with a forecasted cost threshold. Enter the appropriate email recipients for each budget.

Question 98

“A company hosts a static website on Amazon S3. An Amazon CloudFront distribution presents this site to global users. The company uses the Managed- CachingDisabled CloudFront cache policy. The company’s developers confirm that they frequently update a file in Amazon S3 with new information. Users report that the website presents correct information when the website first loads the file. However, the users’ browsers do not retrieve the updated file after a refresh. What should a SysOps administrator recommend to fix this issue?” Add a Cache-Control header field with max-age=0 to the S3 object.

Question 99

“A company has a policy that requires all Amazon EC2 instances to have a specific set of tags. If an EC2 instance does not have the required tags, the noncompliant instance should be terminated. What is the MOST operationally efficient solution that meets these requirements?” Create an AWS Config rule to check if the required tags are present. If an EC2 instance is noncompliant, invoke an AWS Systems Manager Automation document to terminate the instance.

Question 100

“A SysOps administrator wants to manage a web server application with AWS Elastic Beanstalk. The Elastic Beanstalk service must maintain full capacity for new deployments at all times. Which deployment policies satisfy this requirement? (Choose two.)” “Immutable. Rolling with additional batch.”
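Why those two: All-at-once and plain Rolling take instances out of service while deploying, whereas Immutable and Rolling-with-additional-batch launch new capacity before touching the old fleet. A simplified model of the in-service count during a deployment:

```python
def min_capacity_during_deploy(policy: str, fleet: int, batch: int) -> int:
    """Lowest in-service instance count while a deployment runs
    (simplified model of Elastic Beanstalk deployment policies)."""
    if policy == "all_at_once":
        return 0                      # whole fleet updated simultaneously
    if policy == "rolling":
        return fleet - batch          # one batch is out of service at a time
    if policy == "rolling_additional_batch":
        return fleet                  # an extra batch is launched first
    if policy == "immutable":
        return fleet                  # a full new fleet is built alongside the old one
    raise ValueError(policy)

fleet, batch = 6, 2
full = [p for p in ("all_at_once", "rolling", "rolling_additional_batch", "immutable")
        if min_capacity_during_deploy(p, fleet, batch) >= fleet]
print(full)  # -> ['rolling_additional_batch', 'immutable']
```

The model ignores health-check timing and traffic shifting, but it captures the exam-relevant distinction: only the last two policies never dip below full capacity.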

Question 101

“A company has an Auto Scaling group of Amazon EC2 instances that scale based on average CPU utilization. The Auto Scaling group events log indicates an InsufficientInstanceCapacity error. Which actions should a SysOps administrator take to remediate this issue? (Choose two.)” “Configure the Auto Scaling group in different Availability Zones. Change the instance type that the company is using.”

Question 102

“A SysOps administrator needs to control access to groups of Amazon EC2 instances using AWS Systems Manager Session Manager. Specific tags on the EC2 instances have already been added. Which additional actions should the administrator take to control access? (Choose two.)” “Attach an IAM policy to the users or groups that require access to the EC2 instances. Create an IAM policy that grants access to any EC2 instances with a tag specified in the Condition element.”

Question 103

“A company has an AWS Lambda function in Account A. The Lambda function needs to read the objects in an Amazon S3 bucket in Account B. A SysOps administrator must create corresponding IAM roles in both accounts. Which solution will meet these requirements?” In Account A, create a Lambda execution role to assume the role in Account B. In Account B, create a role that the function can assume to gain access to the S3 bucket.

Question 104

“An AWS Lambda function is intermittently failing several times a day. A SysOps administrator must find out how often this error has occurred in the last 7 days. Which action will meet this requirement in the MOST operationally efficient manner?” Use Amazon CloudWatch Logs Insights to query the associated Lambda function logs.

Question 105

“A company is using Amazon CloudFront to serve static content for its web application to its users. The CloudFront distribution uses an existing on-premises website as a custom origin. The company requires the use of TLS between CloudFront and the origin server. This configuration has worked as expected for several months. However, users are now experiencing HTTP 502 (Bad Gateway) errors when they view webpages that include content from the CloudFront distribution. What should a SysOps administrator do to resolve this problem?” Examine the expiration date on the certificate on the origin site. Validate that the certificate has not expired. Replace the certificate if necessary.

Question 106

“An Amazon CloudFront distribution has a single Amazon S3 bucket as its origin. A SysOps administrator must ensure that users can access the S3 bucket only through requests from the CloudFront endpoint. Which solution will meet these requirements?” Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Update the S3 bucket policy to restrict access to the OAI

Question 107

“A SysOps administrator is designing a solution for an Amazon RDS for PostgreSQL DB instance. Database credentials must be stored and rotated monthly. The applications that connect to the DB instance send write-intensive traffic with variable client connections that sometimes increase significantly in a short period of time. Which solution should a SysOps administrator choose to meet these requirements?” Configure AWS Secrets Manager to automatically rotate the credentials for the DB instance. Use RDS Proxy to handle the increases in database connections.

Question 108

“A company wants to reduce costs for jobs that can be completed at any time. The jobs currently run by using multiple Amazon EC2 On-Demand Instances, and the jobs take slightly less than 2 hours to complete. If a job fails for any reason, it must be restarted from the beginning. Which solution will meet these requirements MOST cost-effectively?” Submit a request for Spot Instances with a defined duration for the jobs.

Question 109

“An environment consists of 100 Amazon EC2 Windows instances. The Amazon CloudWatch agent is deployed and running on all EC2 Instances with a baseline configuration file to capture log files. There is a new requirement to capture the DHCP log files that exist on 50 of the instances. What is the MOST operationally efficient way to meet this new requirement?” Create an additional CloudWatch agent configuration file to capture the DHCP logs. Use the AWS Systems Manager Run Command to restart the CloudWatch agent on each EC2 instance with the append-config option to apply the additional configuration file.

Question 110

“A company has 10 Amazon EC2 instances in its production account. A SysOps administrator must ensure that email notifications are sent to administrators each time there is an EC2 instance state change. Which solution will meet this requirement?” Create an Amazon EventBridge (Amazon CloudWatch Events) rule that publishes a message to an Amazon Simple Notification Service (Amazon SNS) topic when an EC2 instance state changes. This SNS topic then sends notifications to its email subscribers.

Question 111

“A company has an application that runs on a fleet of Amazon EC2 instances behind an Elastic Load Balancer. The instances run in an Auto Scaling group. The application’s performance remains consistent throughout most of each day. However, an increase in user traffic slows the performance during the same 4-hour period of time each day. What is the MOST operationally efficient solution that will resolve this issue?” Create a scheduled scaling action to scale out the number of EC2 instances shortly before the increase in user traffic occurs

Question 112

“A company hosts an application on an Amazon EC2 instance in a single AWS Region. The application requires support for non-HTTP TCP traffic and HTTP traffic. The company wants to deliver content with low latency by leveraging the AWS network. The company also wants to implement an Auto Scaling group with an Elastic Load Balancer. How should a SysOps administrator meet these requirements?” Create an Auto Scaling group with a Network Load Balancer (NLB). Add an accelerator with AWS Global Accelerator with the NLB as an endpoint.

Question 113

“A SysOps administrator has an AWS CloudFormation template that is used to deploy an encrypted Amazon Machine Image (AMI). The CloudFormation template will be used in a second account so the SysOps administrator copies the encrypted AMI to the second account. When launching the new CloudFormation stack in the second account, it fails. Which action should the SysOps administrator take to correct the issue?” Re-encrypt the destination AMI with an AWS Key Management Service (AWS KMS) key from the destination account.

Question 114

“A company’s SysOps administrator deploys four new Amazon EC2 instances by using the standard Amazon Linux 2 Amazon Machine Image (AMI). The company needs to be able to use AWS Systems Manager to manage the instances. The SysOps administrator notices that the instances do not appear in the Systems Manager console. What must the SysOps administrator do to resolve this issue? “ Attach an IAM instance profile to the instances. Ensure that the instance profile contains the AmazonSSMManagedInstanceCore policy.

Question 115

“A SysOps administrator is maintaining a web application using an Amazon CloudFront web distribution, an Application Load Balancer (ALB), Amazon RDS, and Amazon EC2 in a VPC. All services have logging enabled. The administrator needs to investigate HTTP Layer 7 status codes from the web application. Which log sources contain the status codes? (Choose two.)” “ALB access logs. CloudFront access logs.”

Question 116

“A company wants to be alerted through email when IAM CreateUser API calls are made within its AWS account. Which combination of actions should a SysOps administrator take to meet this requirement? (Choose two.) “ “Create an Amazon EventBridge (Amazon CloudWatch Events) rule with AWS CloudTrail as the event source and IAM CreateUser as the specific API call for the event pattern. Use an Amazon Simple Notification Service (Amazon SNS) topic as an event target with an email subscription”

Question 117

“A database is running on an Amazon RDS Multi-AZ DB instance. A recent security audit found the database to be out of compliance because it was not encrypted. Which approach will resolve the encryption requirement?” Take a snapshot of the RDS instance, copy and encrypt the snapshot, and then restore it to a new RDS instance.

Question 118

“A company using AWS Organizations requires that no Amazon S3 buckets in its production accounts should ever be deleted. What is the SIMPLEST approach the SysOps administrator can take to ensure S3 buckets in those accounts can never be deleted?” Use service control policies to deny the s3:DeleteBucket action on all buckets in production accounts.
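An SCP Deny overrides any Allow granted inside the member accounts, which is why attaching one policy at the organization or OU level is the simplest enforcement point. The policy itself is short; a sketch built as a Python dict:

```python
import json

# SCP denying bucket deletion in every account it is attached to.
# An explicit Deny in an SCP cannot be overridden by member-account IAM policies.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyS3BucketDeletion",
        "Effect": "Deny",
        "Action": "s3:DeleteBucket",
        "Resource": "*",
    }],
}
print(json.dumps(scp, indent=2))
```

Attaching this to the OU containing the production accounts covers current and future accounts in that OU without touching individual IAM users or roles.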

Question 119

“A company has an application that is running on Amazon EC2 instances in a VPC. The application needs access to download software updates from the internet. The VPC has public subnets and private subnets. The company’s security policy requires all EC2 instances to be deployed in private subnets. What should a SysOps administrator do to meet these requirements?” Add a NAT gateway to a public subnet. In the route table for the private subnets, add a route to the NAT gateway.

Question 120

“A development team recently deployed a new version of a web application to production. After the release, penetration testing revealed a cross-site scripting vulnerability that could expose user data. Which AWS service will mitigate this issue? “ AWS WAF

Question 121

“A SysOps administrator must configure a resilient tier of Amazon EC2 instances for a high performance computing (HPC) application. The HPC application requires minimum latency between nodes. Which actions should the SysOps administrator take to meet these requirements? (Choose two.)” “Launch the EC2 instances into a cluster placement group. Place the EC2 instances in an Auto Scaling group within a single subnet.”

Question 122

“A company’s customers are reporting increased latency while accessing static web content from Amazon S3. A SysOps administrator observed a very high rate of read operations on a particular S3 bucket. What will minimize latency by reducing load on the S3 bucket? “ Create an Amazon CloudFront distribution with the S3 bucket as the origin.

Question 123

“A SysOps administrator needs to develop a solution that provides email notification and inserts a record into a database every time a file is put into an Amazon S3 bucket. What is the MOST operationally efficient solution that meets these requirements? “ Set up an S3 event notification that targets an Amazon Simple Notification Service (Amazon SNS) topic. Create two subscriptions for the SNS topic. Use one subscription to send the email notification. Use the other subscription to invoke an AWS Lambda function that inserts the record into the database.

Question 124

“A company hosts a web application on Amazon EC2 instances behind an Application Load Balancer. The instances are in an Amazon EC2 Auto Scaling group. The application is accessed with a public URL. A SysOps administrator needs to implement a monitoring solution that checks the availability of the application and follows the same routes and actions as a customer. The SysOps administrator must receive a notification if less than 95% of the monitoring runs find no errors. Which solution will meet these requirements? “

Create an Amazon CloudWatch Synthetics canary with a script that follows customer routes. Schedule the canary to run on a recurring schedule. Create a CloudWatch alarm that publishes a message to an Amazon Simple Notification Service (Amazon SNS) topic when the SuccessPercent metric is less than 95%.

Question 125

“A SysOps administrator uses AWS Systems Manager Session Manager to connect to instances. After the SysOps administrator launches a new Amazon EC2 instance, the EC2 instance does not appear in the Session Manager list of systems that are available for connection. The SysOps administrator verifies that Systems Manager Agent is installed, updated, and running on the EC2 instance. What is the most likely cause of this issue?”

The EC2 instance does not have an attached IAM role that allows Session Manager to connect to the EC2 instance.

Question 126

“A SysOps administrator is unable to launch Amazon EC2 instances into a VPC because there are no available private IPv4 addresses in the VPC. Which combination of actions must the SysOps administrator take to launch the instances? (Choose two.) “

  • Associate a secondary IPv4 CIDR block with the VPC.

  • Create a new subnet for the VPC.
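The secondary CIDR block in the first step must not overlap any block already associated with the VPC. A quick check with the stdlib `ipaddress` module, using made-up ranges (real VPCs have additional restrictions on which ranges can be associated):

```python
import ipaddress

existing = [ipaddress.ip_network("10.0.0.0/24")]  # assumed exhausted VPC CIDR

def can_associate(candidate: str) -> bool:
    """A secondary CIDR can be associated only if it overlaps no existing block."""
    net = ipaddress.ip_network(candidate)
    return not any(net.overlaps(cur) for cur in existing)

print(can_associate("10.0.0.0/16"))   # -> False (contains the existing /24)
print(can_associate("10.1.0.0/16"))   # -> True (disjoint range)

# Why the VPC ran out in the first place: AWS reserves 5 addresses per subnet.
print(existing[0].num_addresses - 5)  # -> 251 usable IPs in a /24 subnet
```

After associating a non-overlapping block, the second step (a new subnet carved from it) is what actually makes the addresses launchable.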

Question 127

“A SysOps administrator is creating an Amazon EC2 Auto Scaling group in a new AWS account. After adding some instances, the SysOps administrator notices that the group has not reached the minimum number of instances. The SysOps administrator receives the following error message: Launching a new EC2 instance. Status Reason: Your quota allows for 0 more running instance(s). You requested at least 1. Launching EC2 instance failed. Which action will resolve this issue? “ Request a quota increase for the instance type family by using Service Quotas on the AWS Management Console.

Question 128

“A SysOps administrator is creating two AWS CloudFormation templates. The first template will create a VPC with associated resources, such as subnets, route tables, and an internet gateway. The second template will deploy application resources within the VPC that was created by the first template. The second template should refer to the resources created by the first template. How can this be accomplished with the LEAST amount of administrative effort? “ Add an export field to the outputs of the first template and import the values in the second template.

Question 129

“A company runs a web application on three Amazon EC2 instances behind an Application Load Balancer (ALB). The company notices that random periods of increased traffic cause a degradation in the application’s performance. A SysOps administrator must scale the application to meet the increased traffic. Which solution meets these requirements? “ Deploy the application to an Auto Scaling group of EC2 instances with a target tracking scaling policy. Attach the ALB to the Auto Scaling group.

Question 130

“A company has a high-performance Windows workload. The workload requires a storage volume that provides consistent performance of 10,000 IOPS. The company does not want to pay for additional unneeded capacity to achieve this performance. Which solution will meet these requirements with the LEAST cost? “

Use a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume that is configured with 10,000 provisioned IOPS
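gp3 decouples IOPS from volume size: every volume gets a 3,000-IOPS baseline (provisionable up to 16,000), so the target is hit by paying only for the extra IOPS. With gp2, IOPS scale with size at 3 IOPS/GiB, forcing the purchase of capacity just to reach a performance number. The arithmetic:

```python
GP3_BASELINE_IOPS = 3_000   # included with every gp3 volume
GP3_MAX_IOPS = 16_000       # gp3 provisioning ceiling
GP2_IOPS_PER_GIB = 3        # gp2 baseline scaling rule

def gp3_extra_iops(target: int) -> int:
    """Additional IOPS to provision on gp3 beyond the included baseline."""
    if target > GP3_MAX_IOPS:
        raise ValueError("beyond gp3 limit; consider io2")
    return max(0, target - GP3_BASELINE_IOPS)

def gp2_size_for_iops(target: int) -> int:
    """GiB of gp2 needed to reach the target via 3 IOPS/GiB (ceiling division)."""
    return -(-target // GP2_IOPS_PER_GIB)

print(gp3_extra_iops(10_000))     # -> 7000: pay only for the IOPS delta
print(gp2_size_for_iops(10_000))  # -> 3334 GiB of possibly unneeded capacity
```

That 3,334 GiB of gp2 is exactly the "additional unneeded capacity" the question rules out, which is why gp3 with provisioned IOPS is the least-cost answer.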

Question 131

“A SysOps administrator must create a solution that automatically shuts down any Amazon EC2 instances that have less than 10% average CPU utilization for 60 minutes or more. Which solution will meet this requirement in the MOST operationally efficient manner? “ Implement an Amazon CloudWatch alarm for each EC2 instance to monitor average CPU utilization. Set the period at 1 hour, and set the threshold at 10%. Configure an EC2 action on the alarm to stop the instance
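The alarm's evaluation logic, reduced to code: one 1-hour period, average the CPU datapoints, breach when the average is under the 10% threshold. The sample values below are illustrative:

```python
def alarm_breaching(cpu_samples, threshold=10.0):
    """True when the period's average CPU utilization is below the threshold
    (one 1-hour evaluation period, as in the alarm described above)."""
    return sum(cpu_samples) / len(cpu_samples) < threshold

idle_hour = [3.0, 5.5, 4.2, 6.1]        # 15-minute datapoints, mostly idle
busy_hour = [55.0, 80.2, 64.7, 71.3]

print(alarm_breaching(idle_hour))  # -> True: the EC2 stop action fires
print(alarm_breaching(busy_hour))  # -> False: instance keeps running
```

This is operationally efficient because CloudWatch alarms support a native EC2 stop action, so no Lambda function or scheduled script is needed.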

Question 132

“A SysOps administrator is unable to authenticate an AWS CLI call to an AWS service. Which of the following is the cause of this issue? “

There is no access key.

Question 133

“A company requires that all IAM user accounts that have not been used for 90 days or more must have their access keys and passwords immediately disabled. A SysOps administrator must automate the process of disabling unused keys using the MOST operationally efficient method. How should the SysOps administrator implement this solution? “ Set up an AWS Config managed rule to identify IAM users that have not been active for 90 days. Set up an AWS Systems Manager automation runbook to disable the AWS access keys for these IAM users

Question 134

“A company creates custom AMI images by launching new Amazon EC2 instances from an AWS CloudFormation template. It installs and configures necessary software through AWS OpsWorks, and takes images of each EC2 instance. The process of installing and configuring software can take between 2 to 3 hours, but at times, the process stalls due to installation errors. The SysOps administrator must modify the CloudFormation template so if the process stalls, the entire stack will fail and roll back. Based on these requirements, what should be added to the template? “ CreationPolicy with a timeout set to 4 hours.
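A CreationPolicy makes CloudFormation wait for a success signal (typically sent by cfn-signal from the instance once installation finishes) and fail the resource, rolling back the stack, if no signal arrives before the timeout. A sketch of the resource fragment built as a Python dict (the logical ID and AMI ID are placeholders; the timeout is an ISO 8601 duration):

```python
import json

resource = {
    "BuildInstance": {                               # illustrative logical ID
        "Type": "AWS::EC2::Instance",
        "Properties": {"ImageId": "ami-12345678"},   # placeholder AMI
        "CreationPolicy": {
            "ResourceSignal": {
                "Count": 1,         # signals required (cfn-signal on success)
                "Timeout": "PT4H",  # fail and roll back after 4 hours
            }
        },
    }
}
print(json.dumps(resource, indent=2))
```

The 4-hour timeout leaves headroom over the usual 2-to-3-hour install, so a stalled installation is the only thing that trips the rollback.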

Question 135

“A company runs workloads on 90 Amazon EC2 instances in the eu-west-1 Region in an AWS account. In 2 months, the company will migrate the workloads from eu-west-1 to the eu-west-3 Region. The company needs to reduce the cost of the EC2 instances. The company is willing to make a 1-year commitment that will begin next week. The company must choose an EC2 instance purchasing option that will provide discounts for the 90 EC2 instances regardless of Region during the 1-year period. Which solution will meet these requirements?” Purchase a Compute Savings Plan.

Question 136

“A SysOps administrator has created a VPC that contains a public subnet and a private subnet. Amazon EC2 instances that were launched in the private subnet cannot access the internet. The default network ACL is active on all subnets in the VPC, and all security groups allow all outbound traffic. Which solution will provide the EC2 instances in the private subnet with access to the internet? “ Create a NAT gateway in the public subnet. Create a route from the private subnet to the NAT gateway

Question 137

“A company plans to run a public web application on Amazon EC2 instances behind an Elastic Load Balancer (ELB). The company’s security team wants to protect the website by using AWS Certificate Manager (ACM) certificates. The ELB must automatically redirect any HTTP requests to HTTPS. Which solution will meet these requirements? “ Create an Application Load Balancer that has one HTTP listener on port 80 and one HTTPS protocol listener on port 443. Attach an SSL/TLS certificate to listener port 443. Create a rule to redirect requests from port 80 to port 443

Question 138

“A company wants to track its AWS costs in all member accounts that are part of an organization in AWS Organizations. Managers of the member accounts want to receive a notification when the estimated costs exceed a predetermined amount each month. The managers are unable to configure a billing alarm. The IAM permissions for all users are correct. What could be the cause of this issue? “ The management/payer account does not have billing alerts turned on.

Question 139

“A company is using Amazon Elastic Container Service (Amazon ECS) to run a containerized application on Amazon EC2 instances. A SysOps administrator needs to monitor only traffic flows between the ECS tasks. Which combination of steps should the SysOps administrator take to meet this requirement? (Choose two.) “ “ Configure VPC Flow Logs on the elastic network interface of each task. Specify the awsvpc network mode in the task definition.”

Question 140

“A company uses AWS Organizations to manage multiple AWS accounts. The company’s SysOps team has been using a manual process to create and manage IAM roles. The team requires an automated solution to create and manage the necessary IAM roles for multiple AWS accounts. What is the MOST operationally efficient solution that meets these requirements? “ “ Use AWS CloudFormation StackSets with AWS Organizations to deploy and manage IAM roles for the AWS accounts”

Question 141

“A SysOps administrator needs to configure automatic rotation for Amazon RDS database credentials. The credentials must rotate every 30 days. The solution must integrate with Amazon RDS. Which solution will meet these requirements with the LEAST operational overhead?” Store the credentials in AWS Secrets Manager. Configure automatic rotation with a rotation interval of 30 days.

Question 142

“A company’s SysOps administrator attempts to restore an Amazon Elastic Block Store (Amazon EBS) snapshot. However, the snapshot is missing because another system administrator accidentally deleted the snapshot. The company needs the ability to recover snapshots for a specified period of time after snapshots are deleted. Which solution will provide this functionality? “ “ Create a Recycle Bin retention rule for EBS snapshots for the desired retention period.”

Question 143

“A SysOps administrator recently configured Amazon S3 Cross-Region Replication on an S3 bucket. Which of the following does this feature replicate to the destination S3 bucket by default? “ Object metadata

Question 144

“A company has a workload that is sending log data to Amazon CloudWatch Logs. One of the fields includes a measure of application latency. A SysOps administrator needs to monitor the p90 statistic of this field over time. What should the SysOps administrator do to meet this requirement? “ Create a metric filter on the log data
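
Once a metric filter extracts the latency field into a CloudWatch metric, CloudWatch can compute percentile statistics such as p90 server-side. As an illustration of what a p90 value means, here is a minimal nearest-rank percentile sketch (the function name and sample values are hypothetical):

```python
def percentile(values, p):
    """Nearest-rank percentile: the smallest value with at least p% of samples at or below it."""
    if not values:
        raise ValueError("no samples")
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(n * p / 100), at least 1
    return ordered[int(rank) - 1]

# Hypothetical latency samples (ms) extracted by the metric filter.
latencies_ms = [12, 15, 14, 90, 13, 16, 18, 250, 17, 11]
p90 = percentile(latencies_ms, 90)
print(p90)  # 90 - the outlier at 250 only shows up at higher percentiles
```

The takeaway: p90 summarizes tail latency without being dominated by the single worst request, which is why it is monitored over time rather than the average.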

Question 145

“A company wants to archive sensitive data on Amazon S3 Glacier. The company’s regulatory and compliance requirements do not allow any modifications to the data by any account. Which solution meets these requirements? “ “ Attach a vault lock policy to an S3 Glacier vault that contains the archived data. Use the lock ID to validate the vault lock policy within 24 hours. “

Question 146

“A company manages an application that uses Amazon ElastiCache for Redis with two extra-large nodes spread across two different Availability Zones. The company’s IT team discovers that the ElastiCache for Redis cluster has 75% freeable memory. The application must maintain high availability. What is the MOST cost-effective way to resize the cluster? “ Perform an online resizing for the ElastiCache for Redis cluster. Change the node types from extra-large nodes to large nodes.

Question 147

“A company must migrate its applications to AWS. The company is using Chef recipes for configuration management. The company wants to continue to use the existing Chef recipes after the applications are migrated to AWS. What is the MOST operationally efficient solution that meets these requirements? “ Use AWS OpsWorks to create a stack and add layers with Chef recipes.

Question 148

“A company uses AWS Organizations to manage its AWS accounts. A SysOps administrator must create a backup strategy for all Amazon EC2 instances across all the company’s AWS accounts. Which solution will meet these requirements in the MOST operationally efficient way? “ Use AWS Backup in the management account to deploy policies for all accounts and resources

Question 149

“A SysOps administrator is reviewing VPC Flow Logs to troubleshoot connectivity issues in a VPC. While reviewing the logs, the SysOps administrator notices that rejected traffic is not listed. What should the SysOps administrator do to ensure that all traffic is logged? “ Create a new flow log that has a filter setting to capture all traffic.

“A company is expanding its use of AWS services across its portfolios. The company wants to provision AWS accounts for each team to ensure a separation of business processes for security, compliance, and billing. Account creation and bootstrapping should be completed in a scalable and efficient way so new accounts are created with a defined baseline and governance guardrails in place. A SysOps administrator needs to design a provisioning process that saves time and resources. “ Use AWS Control Tower to create a template in Account Factory and use the template to provision new accounts.

Question 150

“A SysOps administrator noticed that the cache hit ratio for an Amazon CloudFront distribution is less than 10%. Which collection of configuration changes will increase the cache hit ratio for the distribution? (Choose two.) “ “ Ensure that only required cookies, query strings, and headers are forwarded in the Cache Behavior Settings Increase the CloudFront time to live (TTL) settings in the Cache Behavior Settings. “
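
The reason forwarding fewer cookies, query strings, and headers helps is that every forwarded attribute becomes part of the cache key, multiplying the number of distinct cache entries for the same object. A small local simulation (hypothetical request data, unbounded cache) makes the effect visible:

```python
def simulate_hit_ratio(requests, forwarded_keys):
    """Count hits against an unbounded cache keyed by path plus forwarded attributes."""
    cache, hits = set(), 0
    for req in requests:
        key = (req["path"],) + tuple(req.get(k, "") for k in forwarded_keys)
        if key in cache:
            hits += 1
        else:
            cache.add(key)
    return hits / len(requests)

# 100 requests for the same object from 5 browser variants.
requests = [{"path": "/img/logo.png", "User-Agent": f"browser-{i % 5}"} for i in range(100)]
fragmented = simulate_hit_ratio(requests, ["User-Agent"])  # User-Agent forwarded: 5 cache entries
consolidated = simulate_hit_ratio(requests, [])            # nothing forwarded: 1 cache entry
print(fragmented, consolidated)  # 0.95 0.99
```

Longer TTLs compound the same effect by keeping each entry valid for more requests before revalidation.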

Question 151

“A SysOps administrator is attempting to download patches from the internet into an instance in a private subnet. An internet gateway exists for the VPC, and a NAT gateway has been deployed on the public subnet; however, the instance has no internet connectivity. The resources deployed into the private subnet must be inaccessible directly from the public internet.

Public Subnet (10.0.1.0/24) Route Table

Destination    Target
10.0.0.0/16    local
0.0.0.0/0      IGW

Private Subnet (10.0.2.0/24) Route Table

Destination    Target
10.0.0.0/16    local

What should be added to the private subnet’s route table in order to address this issue, given the information provided? “ A route with destination 0.0.0.0/0 and the NAT gateway as the target.
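
VPC route tables resolve each destination by longest-prefix match, so adding 0.0.0.0/0 toward the NAT gateway sends internet-bound traffic out while the more specific 10.0.0.0/16 local route still handles intra-VPC traffic. A sketch of that resolution logic using the standard library (route targets are illustrative labels):

```python
import ipaddress

def resolve_route(route_table, destination_ip):
    """Pick the most specific matching route, as a VPC route table does."""
    ip = ipaddress.ip_address(destination_ip)
    matches = [(net, target) for net, target in route_table
               if ip in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

private_routes = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "nat-gateway")]
print(resolve_route(private_routes, "10.0.1.25"))      # local - intra-VPC traffic
print(resolve_route(private_routes, "93.184.216.34"))  # nat-gateway - internet-bound
```

Because the NAT gateway sits in the public subnet and performs source NAT, inbound connections from the internet still cannot reach the private instances.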

Question 152

“A company is undergoing an external audit of its systems, which run wholly on AWS. A SysOps administrator must supply documentation of Payment Card Industry Data Security Standard (PCI DSS) compliance for the infrastructure managed by AWS. Which set of actions should the SysOps administrator take to meet this requirement? “ Download the applicable reports from the AWS Artifact portal and supply these to the auditors.

Question 153

“A company has an initiative to reduce costs associated with Amazon EC2 and AWS Lambda. Which action should a SysOps administrator take to meet these requirements? “ “ Use AWS Compute Optimizer and take action on the provided recommendations.”

Question 154

“A company wants to use only IPv6 for all its Amazon EC2 instances. The EC2 instances must not be accessible from the internet, but the EC2 instances must be able to access the internet. The company creates a dual-stack VPC and IPv6-only subnets. How should a SysOps administrator configure the VPC to meet these requirements?” Create and attach an egress-only internet gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the egress-only internet gateway. Attach the custom route table to the IPv6-only subnets

Question 155

“A company has an existing web application that runs on two Amazon EC2 instances behind an Application Load Balancer (ALB) across two Availability Zones. The application uses an Amazon RDS Multi-AZ DB Instance. Amazon Route 53 record sets route requests for dynamic content to the load balancer and requests for static content to an Amazon S3 bucket. Site visitors are reporting extremely long loading times. Which actions should be taken to improve the performance of the website? (Choose two.)” “Add Amazon CloudFront caching for static content Implement Amazon EC2 Auto Scaling for the web servers”

Question 156

“A company is running an application on premises and wants to use AWS for data backup. All of the data must be available locally. The backup application can write only to block-based storage that is compatible with the Portable Operating System Interface (POSIX). Which backup solution will meet these requirements? “ “ Use AWS Storage Gateway, and configure it to use gateway-stored volumes.”

Question 157

“A global company handles a large amount of personally identifiable information (PII) through an internal web portal. The company’s application runs in a corporate data center that is connected to AWS through an AWS Direct Connect connection. The application stores the PII in Amazon S3. According to a compliance requirement, traffic from the web portal to Amazon S3 must not travel across the internet. What should a SysOps administrator do to meet the compliance requirement? “ Provision an interface VPC endpoint for Amazon S3. Modify the application to use the interface endpoint

Question 158

“A SysOps administrator notices a scale-up event for an Amazon EC2 Auto Scaling group. Amazon CloudWatch shows a spike in the RequestCount metric for the associated Application Load Balancer. The administrator would like to know the IP addresses for the source of the requests. Where can the administrator find this information? “ “ Elastic Load Balancer access logs”

Question 159

“A company’s SysOps administrator deploys a public Network Load Balancer (NLB) in front of the company’s web application. The web application does not use any Elastic IP addresses. Users must access the web application by using the company’s domain name. The SysOps administrator needs to configure Amazon Route 53 to route traffic to the NLB. Which solution will meet these requirements MOST cost-effectively? “ Create a Route 53 alias record for the NLB.

Question 160

“A company runs an encrypted Amazon RDS for Oracle DB instance. The company wants to make regular backups available in another AWS Region. What is the MOST operationally efficient solution that meets these requirements? “ “ Modify the DB instance. Enable cross-Region automated backups.”

Question 161

“A company is rolling out a new version of its website. Management wants to deploy the new website in a limited rollout to 20% of the company’s customers. The company uses Amazon Route 53 for its website’s DNS solution. Which configuration will meet these requirements?” Create a weighted routing policy. Within the policy, configure a weight of 80 for the record pointing to the original resource. Configure a weight of 20 for the record pointing to the new resource.
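
With weighted routing, each record receives traffic in proportion to its weight divided by the sum of all weights, so 80 and 20 yield an 80/20 split. A one-function sketch of that arithmetic:

```python
def traffic_share(weights):
    """Fraction of queries routed to each record: weight / sum(weights)."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

shares = traffic_share({"original": 80, "new": 20})
print(shares)  # {'original': 0.8, 'new': 0.2}
```

Only the ratio matters: weights of 8 and 2, or 4 and 1, would produce the same 80/20 rollout.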

Question 162

“A SysOps administrator created an AWS CloudFormation template that provisions Amazon EC2 instances, an Elastic Load Balancer (ELB), and an Amazon RDS DB instance. During stack creation, the creation of the EC2 instances and the creation of the ELB are successful. However, the creation of the DB instance fails. What is the default behavior of CloudFormation in this scenario? “ “ CloudFormation will roll back the stack but will not delete the stack. “

Question 163

“A SysOps administrator needs to automate the invocation of an AWS Lambda function. The Lambda function must run at the end of each day to generate a report on data that is stored in an Amazon S3 bucket. What is the MOST operationally efficient solution that meets these requirements? “ Create an Amazon EventBridge (Amazon CloudWatch Events) rule that has a schedule and the Lambda function as a target.

Question 164

“A company is releasing a new static website hosted on Amazon S3. The static website hosting feature was enabled on the bucket and content was uploaded; however, upon navigating to the site, the following error message is received:

403 Forbidden - Access Denied

What change should be made to fix this error? “ Add a bucket policy that grants everyone read access to the bucket objects.

Question 165

“A company is storing media content in an Amazon S3 bucket and uses Amazon CloudFront to distribute the content to its users. Due to licensing terms, the company is not authorized to distribute the content in some countries. A SysOps administrator must restrict access to certain countries. What is the MOST operationally efficient solution that meets these requirements? “ Enable the geo restriction feature in the CloudFront distribution to prevent access from unauthorized countries

Question 166

“A SysOps administrator created an Amazon VPC with an IPv6 CIDR block, which requires access to the internet. However, access from the internet towards the VPC is prohibited. After adding and configuring the required components to the VPC, the administrator is unable to connect to any of the domains that reside on the internet. What additional route destination rule should the administrator add to the route tables? “ “ Route ::/0 traffic to an egress-only internet gateway”

Question 167

“A company hosts several write-intensive applications. These applications use a MySQL database that runs on a single Amazon EC2 instance. The company asks a SysOps administrator to implement a highly available database solution that is ideal for multi-tenant workloads. Which solution should the SysOps administrator implement to meet these requirements? “ Migrate the database to an Amazon Aurora multi-master DB cluster

Question 168

“A company has a memory-intensive application that runs on a fleet of Amazon EC2 instances behind an Elastic Load Balancer (ELB). The instances run in an Auto Scaling group. A SysOps administrator must ensure that the application can scale based on the number of users that connect to the application. Which solution will meet these requirements? “ Create a scaling policy that will scale the application based on the ActiveConnectionCount Amazon CloudWatch metric that is generated from the ELB.

Question 169

“A SysOps administrator creates a new VPC that includes a public subnet and a private subnet. The SysOps administrator successfully launches 11 Amazon EC2 instances in the private subnet. The SysOps administrator attempts to launch one more EC2 instance in the same subnet. However, the SysOps administrator receives an error message that states that not enough free IP addresses are available.”

Create a new private subnet to hold the required EC2 instances.
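
The arithmetic behind the error: AWS reserves the first four addresses and the last address of every subnet, so a /28 (16 addresses) leaves exactly 11 usable host addresses, which is why the twelfth launch fails. A stdlib sketch (the /28 CIDR here is inferred from the 11-instance limit, not stated in the question):

```python
import ipaddress

AWS_RESERVED_PER_SUBNET = 5  # network, VPC router, DNS, future use, broadcast

def usable_ips(cidr):
    """Host addresses available for instances in a VPC subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

print(usable_ips("10.0.2.0/28"))  # 11 - full after 11 instances
print(usable_ips("10.0.3.0/24"))  # 251
```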

Question 170

“A company needs to automatically monitor an AWS account for potential unauthorized AWS Management Console logins from multiple geographic locations.”

Configure Amazon GuardDuty to monitor the UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B finding.

Question 171

“A company has an Amazon RDS DB instance. The company wants to implement a caching service while maintaining high availability. Which combination of actions will meet these requirements? (Choose two.) “

  • Create an Amazon ElastiCache for Redis data store.

  • Enable Multi-AZ for the data store.

Question 172

“A company monitors its account activity using AWS CloudTrail, and is concerned that some log files are being tampered with after the logs have been delivered to the account’s Amazon S3 bucket. Moving forward, how can the SysOps administrator confirm that the log files have not been modified after being delivered to the S3 bucket? “ Enable log file integrity validation and use digest files to verify the hash value of the log file.
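
CloudTrail digest files record a SHA-256 hash for each delivered log file; recomputing the hash later and comparing it to the recorded value detects any post-delivery modification. A minimal sketch of that comparison (the real feature also chains and signs the digest files, which is omitted here; the log content is hypothetical):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

delivered_log = b'{"Records": [{"eventName": "ConsoleLogin"}]}'
recorded_hash = sha256_hex(delivered_log)  # what the digest file would store at delivery

tampered_log = delivered_log.replace(b"ConsoleLogin", b"DeleteBucket")
print(sha256_hex(delivered_log) == recorded_hash)  # True  - untouched
print(sha256_hex(tampered_log) == recorded_hash)   # False - modified after delivery
```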

Question 173

“A SysOps administrator is reviewing AWS Trusted Advisor warnings and encounters a warning for an S3 bucket policy that has open access permissions. While discussing the issue with the bucket owner, the administrator realizes the S3 bucket is an origin for an Amazon CloudFront web distribution. Which action should the administrator take to ensure that users access objects in Amazon S3 by using only CloudFront URLs? “ Create an origin access identity and grant it permissions to read objects in the S3 bucket

Question 174

“A SysOps administrator is reviewing AWS Trusted Advisor recommendations. The SysOps administrator notices that all the application servers for a finance application are listed in the Low Utilization Amazon EC2 Instances check. The application runs on three instances across three Availability Zones. The SysOps administrator must reduce the cost of running the application without affecting the application’s availability or design. Which solution will meet these requirements? “ Apply rightsizing recommendations from AWS Cost Explorer to reduce the instance size

Question 175

“A company hosts its website in the us-east-1 Region. The company is preparing to deploy its website into the eu-central-1 Region. Website visitors who are located in Europe should access the website that is hosted in eu-central-1. All other visitors access the website that is hosted in us-east-1. The company uses Amazon Route 53 to manage the website’s DNS records. Which routing policy should a SysOps administrator apply to the Route 53 record set to meet these requirements? “ Geolocation routing policy

Question 176

“An organization with a large IT department has decided to migrate to AWS. With different job functions in the IT department, it is not desirable to give all users access to all AWS resources. Currently the organization handles access via LDAP group membership. What is the BEST method to allow access using current LDAP credentials? “ “ Federate the LDAP directory with IAM using SAML. Create different IAM roles to correspond to different LDAP groups to limit permissions.”

Question 177

“A SysOps administrator has created an Amazon EC2 instance using an AWS CloudFormation template in the us-east-1 Region. The administrator finds that this template has failed to create an EC2 instance in the us-west-2 Region. What is one cause for this failure? “ The Amazon Machine Image (AMI) ID referenced in the CloudFormation template could not be found in the us-west-2 Region.

“A user has launched two EBS-backed EC2 instances in the us-east-1a Availability Zone. The user wants to change the Availability Zone of one of the instances. How can the user change it? “ Create an AMI of the running instance and launch the instance in a separate Availability Zone.

Question 178

“A SysOps administrator is investigating a company’s web application for performance problems. The application runs on Amazon EC2 instances that are in an Auto Scaling group. The application receives large traffic increases at random times throughout the day. During periods of rapid traffic increases, the Auto Scaling group is not adding capacity fast enough. As a result, users are experiencing poor performance. The company wants to minimize costs without adversely affecting the user experience when web traffic surges quickly. The company needs a solution that adds more capacity to the Auto Scaling group for larger traffic increases than for smaller traffic increases. How should the SysOps administrator configure the Auto Scaling group to meet these requirements? “ Create a step scaling policy with settings to make larger adjustments in capacity when the system is under heavy load
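
The defining property of step scaling is that the size of the capacity adjustment depends on how far the metric has breached the alarm threshold. A sketch of those semantics (thresholds and step sizes here are hypothetical, not from the question):

```python
def step_adjustment(metric, threshold, steps):
    """steps: list of (lower_bound, upper_bound, capacity_to_add),
    where bounds are offsets above the alarm threshold and upper_bound
    of None means unbounded."""
    breach = metric - threshold
    for lower, upper, add in steps:
        if lower <= breach and (upper is None or breach < upper):
            return add
    return 0

steps = [(0, 100, 2), (100, 300, 4), (300, None, 8)]
print(step_adjustment(150, 100, steps))  # breach 50  -> add 2 instances
print(step_adjustment(350, 100, steps))  # breach 250 -> add 4
print(step_adjustment(900, 100, steps))  # breach 800 -> add 8
```

A simple scaling policy would add the same fixed capacity regardless of breach size, which is exactly why it lags behind sudden large surges.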

Question 179

“A company has a compliance requirement that no security groups can allow SSH ports to be open to all IP addresses. A SysOps administrator must implement a solution that will notify the company’s SysOps team when a security group rule violates this requirement. The solution also must remediate the security group rule automatically. Which solution will meet these requirements? “ Activate the AWS Config restricted-ssh managed rule. Add automatic remediation to the AWS Config rule by using the AWS Systems Manager Automation AWS-DisablePublicAccessForSecurityGroup runbook. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to notify the SysOps team when the rule is noncompliant

Question 180

“A company has an application that runs only on Amazon EC2 Spot Instances. The instances run in an Amazon EC2 Auto Scaling group with scheduled scaling actions. However, the capacity does not always increase at the scheduled times, and instances terminate many times a day. A SysOps administrator must ensure that the instances launch on time and have fewer interruptions. Which action will meet these requirements? “ Specify the capacity-optimized allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.

Question 181

“A company plans to deploy a database on an Amazon Aurora MySQL DB cluster. The database will store data for a demonstration environment. The data must be reset on a daily basis. What is the MOST operationally efficient solution that meets these requirements? “ Enable the Backtrack feature during the creation of the DB cluster. Specify a target backtrack window of 48 hours. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function on a daily basis. Configure the function to perform a backtrack operation.

Question 182

“A SysOps administrator is setting up an automated process to recover an Amazon EC2 instance in the event of an underlying hardware failure. The recovered instance must have the same private IP address and the same Elastic IP address that the original instance had. The SysOps team must receive an email notification when the recovery process is initiated. Which solution will meet these requirements? “ “ Create an Amazon CloudWatch alarm for the EC2 instance, and specify the StatusCheckFailed_System metric. Add an EC2 action to the alarm to recover the instance. Add an alarm notification to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic.”

Question 183

“A company has a public website that recently experienced problems. Some links led to missing webpages, and other links rendered incorrect webpages. The application infrastructure was running properly, and all the provisioned resources were healthy. Application logs and dashboards did not show any errors, and no monitoring alarms were raised. Systems administrators were not aware of any problems until end users reported the issues. The company needs to proactively monitor the website for such issues in the future and must implement a solution as soon as possible. Which solution will meet these requirements with the LEAST operational overhead? “ Create an Amazon CloudWatch Synthetics canary. Use the CloudWatch Synthetics Recorder plugin to generate the script for the canary run. Configure the canary in line with requirements. Create an alarm to provide alerts when issues are detected

Question 184

“A SysOps administrator is responsible for a company’s security groups. The company wants to maintain a documented trail of any changes that are made to the security groups. The SysOps administrator must receive notification whenever the security groups change. Which solution will meet these requirements? “ Set up AWS Config to record security group changes. Specify an Amazon S3 bucket as the location for configuration snapshots and history files. Create an Amazon Simple Notification Service (Amazon SNS) topic for notifications about configuration changes. Subscribe the SysOps administrator’s email address to the SNS topic

Question 185

“An ecommerce company has built a web application that uses an Amazon Aurora DB cluster. The DB cluster includes memory optimized instance types with both a writer node and a reader node. Traffic volume changes throughout the day. During sudden traffic surges, Amazon CloudWatch metrics for the DB cluster indicate high RAM consumption and an increase in select latency. A SysOps administrator must implement a configuration change to improve the performance of the DB cluster. The change must minimize downtime and must not result in the loss of data. Which change will meet these requirements? “ Add an Aurora Replica to the DB cluster

Question 186

“A company has a simple web application that runs on a set of Amazon EC2 instances behind an Elastic Load Balancer in the eu-west-2 Region. Amazon Route 53 holds a DNS record for the application with a simple routing policy. Users from all over the world access the application through their web browsers. The company needs to create additional copies of the application in the us-east-1 Region and in the ap-south-1 Region. The company must direct users to the Region that provides the fastest response times when the users load the application. What should a SysOps administrator do to meet these requirements? “ “ In each new Region, create a new Elastic Load Balancer and a new set of EC2 instances to run a copy of the application. Transition to a latency routing policy.”

Question 187

“A company creates a new member account by using AWS Organizations. A SysOps administrator needs to add AWS Business Support to the new account. Which combination of steps must the SysOps administrator take to meet this requirement? (Choose two.) “

    - Sign in to the new account by using IAM credentials. Change the support plan.

    - Create an IAM user that has administrator privileges in the new account.

Question 188

“A SysOps administrator creates two VPCs, VPC1 and VPC2, in a company’s AWS account The SysOps administrator deploys a Linux Amazon EC2 instance in VPC1 and deploys an Amazon RDS for MySQL DB instance in VPC2. The DB instance is deployed in a private subnet. An application that runs on the EC2 instance needs to connect to the database. What should the SysOps administrator do to give the EC2 instance the ability to connect to the database? “ Configure VPC peering between the two VPCs.

Question 189

“A company uses an Amazon S3 bucket to store data files. The S3 bucket contains hundreds of objects. The company needs to replace a tag on all the objects in the S3 bucket with another tag. What is the MOST operationally efficient way to meet this requirement? “ Use S3 Batch Operations. Specify the operation to replace all object tags.

Question 190

“A company needs to take an inventory of applications that are running on multiple Amazon EC2 instances. The company has configured users and roles with the appropriate permissions for AWS Systems Manager. An updated version of Systems Manager Agent has been installed and is running on every instance. While configuring an inventory collection, a SysOps administrator discovers that not all the instances in a single subnet are managed by Systems Manager. What must the SysOps administrator do to fix this issue? “ Ensure that all the EC2 instances have an instance profile with Systems Manager access

Question 191

“A company stores sensitive data in an Amazon S3 bucket. The company must log all access attempts to the S3 bucket. The company’s risk team must receive immediate notification about any delete events. Which solution will meet these requirements? “ Enable S3 server access logging for audit logs. Set up an Amazon Simple Notification Service (Amazon SNS) notification for the S3 bucket. Select DeleteObject for the event type for the alert system

Question 192

“A SysOps administrator receives an alert from Amazon GuardDuty about suspicious network activity on an Amazon EC2 instance. The GuardDuty finding lists a new external IP address as a traffic destination. The SysOps administrator does not recognize the external IP address. The SysOps administrator must block traffic to the external IP address that GuardDuty identified. Which solution will meet this requirement? “ Create a network ACL. Add an outbound deny rule for traffic to the external IP address.

Question 193

“A company’s reporting job that used to run in 15 minutes is now taking an hour to run. An application generates the reports. The application runs on Amazon EC2 instances and extracts data from an Amazon RDS for MySQL database. A SysOps administrator checks the Amazon CloudWatch dashboard for the RDS instance and notices that the Read IOPS metrics are high, even when the reports are not running. The SysOps administrator needs to improve the performance and the availability of the RDS instance. Which solution will meet these requirements? “ Deploy an RDS read replica. Update the reporting job to query the reader endpoint.

Question 194

“A company’s SysOps administrator regularly checks the AWS Personal Health Dashboard in each of the company’s accounts. The accounts are part of an organization in AWS Organizations. The company recently added 10 more accounts to the organization. The SysOps administrator must consolidate the alerts from each account’s Personal Health Dashboard. Which solution will meet this requirement with the LEAST amount of effort? “ Enable organizational view in AWS Health

Question 195

“A company runs an application on Amazon EC2 instances. The EC2 instances are in an Auto Scaling group and run behind an Application Load Balancer (ALB). The application experiences errors when total requests exceed 100 requests per second. A SysOps administrator must collect information about total requests for a 2-week period to determine when requests exceeded this threshold. What should the SysOps administrator do to collect this data? “ “ Use the ALB’s RequestCount metric. Configure a time range of 2 weeks and a period of 1 minute. Examine the chart to determine peak traffic times and volumes”
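
The conversion that makes this work: with RequestCount summed over a 1-minute period, a minute whose total exceeds 100 × 60 = 6,000 requests averaged more than 100 requests per second. A sketch of scanning the retrieved datapoints (the sample counts are hypothetical):

```python
THRESHOLD_RPS = 100
PERIOD_SECONDS = 60

def minutes_over_threshold(request_counts_per_minute):
    """Indexes of 1-minute periods whose average rate exceeded the threshold."""
    limit = THRESHOLD_RPS * PERIOD_SECONDS  # 6,000 requests per minute
    return [i for i, count in enumerate(request_counts_per_minute) if count > limit]

samples = [4500, 6200, 5900, 7100, 6000]
print(minutes_over_threshold(samples))  # [1, 3]
```

CloudWatch retains 1-minute datapoints for 15 days, so a 2-week lookback at this granularity is available without any extra setup.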

Question 196

“A company recently migrated its application to a VPC on AWS. An AWS Site-to-Site VPN connection connects the company’s on-premises network to the VPC. The application retrieves customer data from another system that resides on premises. The application uses an on-premises DNS server to resolve domain records. After the migration, the application is not able to connect to the customer data because of name resolution errors. Which solution will give the application the ability to resolve the internal domain names? “ Create an Amazon Route 53 Resolver outbound endpoint. Configure the outbound endpoint to forward DNS queries against the on-premises domain to the on-premises DNS server.

Question 197

“A company’s web application is available through an Amazon CloudFront distribution and directly through an internet-facing Application Load Balancer (ALB). A SysOps administrator must make the application accessible only through the CloudFront distribution and not directly through the ALB. The SysOps administrator must make this change without changing the application code. Which solution will meet these requirements? “ Add a custom HTTP header to the origin settings for the distribution. In the ALB listener, add a rule to forward requests that contain the matching custom header and the header’s value. Add a default rule to return a fixed response code of 403.

Question 198

“A company runs several workloads on AWS. The company identifies five AWS Trusted Advisor service quota metrics to monitor in a specific AWS Region. The company wants to receive email notification each time resource usage exceeds 60% of one of the service quotas. Which solution will meet these requirements? “ “ Create five Amazon CloudWatch alarms, one for each Trusted Advisor service quota metric. Configure an Amazon Simple Notification Service (Amazon SNS) topic for email notification each time that usage exceeds 60% of one of the service quotas”
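
Each of the five alarms evaluates the same condition: fire when current usage exceeds 60% of the quota value. A one-line sketch of that alarm condition (quota and usage numbers are hypothetical):

```python
def alarm_state(usage, quota, threshold_pct=60):
    """Mimics a CloudWatch alarm on a Trusted Advisor service quota metric."""
    return "ALARM" if usage / quota * 100 > threshold_pct else "OK"

print(alarm_state(130, 200))  # 65% of quota -> ALARM, SNS email fires
print(alarm_state(100, 200))  # 50% of quota -> OK
```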

Question 199

“A company needs to implement a managed file system to host Windows file shares for users on premises. Resources in the AWS Cloud also need access to the data on these file shares. A SysOps administrator needs to present the user file shares on premises and make the user file shares available on AWS with minimum latency. What should the SysOps administrator do to meet these requirements? “ “ Set up an Amazon FSx File Gateway”

Question 200

“A company is hosting applications on Amazon EC2 instances. The company is hosting a database on an Amazon RDS for PostgreSQL DB instance. The company requires all connections to the DB instance to be encrypted. What should a SysOps administrator do to meet this requirement? “ Enforce SSL connections to the database by using a custom parameter group

Question 201

“A company recently purchased Savings Plans. The company wants to receive email notification when the company’s utilization drops below 90% for a given day. Which solution will meet this requirement? “ Use AWS Budgets to create a Savings Plans budget to track the daily utilization of the Savings Plans. Configure an Amazon Simple Notification Service (Amazon SNS) topic for email notification when the utilization drops below 90% for a given day

Question 202

“A company uses an Amazon Simple Queue Service (Amazon SQS) standard queue with its application. The application sends messages to the queue with unique message bodies. The company decides to switch to an SQS FIFO queue. What must the company do to migrate to an SQS FIFO queue? “ Create a new SQS FIFO queue. Turn on content-based deduplication on the new FIFO queue. Update the application to include a message group ID in the messages
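As a CLI sketch (queue name and account ID are placeholders), the migration steps look like:

```shell
# FIFO queue names must end in .fifo; content-based deduplication hashes the
# message body, which works here because the bodies are unique.
aws sqs create-queue --queue-name app-queue.fifo \
  --attributes '{"FifoQueue":"true","ContentBasedDeduplication":"true"}'

# The updated application must supply a message group ID on every send.
aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/app-queue.fifo \
  --message-body '{"orderId":42}' \
  --message-group-id orders
```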

Question 203

“A company’s SysOps administrator must ensure that all Amazon EC2 Windows instances that are launched in an AWS account have a third-party agent installed. The third-party agent has an .msi package. The company uses AWS Systems Manager for patching, and the Windows instances are tagged appropriately. The third-party agent requires periodic updates as new versions are released. The SysOps administrator must deploy these updates automatically. Which combination of steps will meet these requirements with the LEAST operational effort? (Choose two.) “

  • “Create a Systems Manager Distributor package for the third-party agent

  • Create a Systems Manager State Manager association to run the AWS-ConfigureAWSPackage document. Populate the details of the third-party agent package. Specify instance tags based on the appropriate tag value for Windows with a schedule of 1 day.

Question 204

“A company runs hundreds of Amazon EC2 instances in a single AWS Region. Each EC2 instance has two attached 1 GiB General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volumes. A critical workload is using all the available IOPS capacity on the EBS volumes. According to company policy, the company cannot change instance types or EBS volume types without completing lengthy acceptance tests to validate that the company’s applications will function properly. A SysOps administrator needs to increase the I/O performance of the EBS volumes as quickly as possible. Which action should the SysOps administrator take to meet these requirements? “ Increase the size of the 1 GiB EBS volumes

Question 205

“A company needs to deploy a new workload on AWS. The company must encrypt all data at rest and must rotate the encryption keys once each year. The workload uses an Amazon RDS for MySQL Multi-AZ database for data storage. Which configuration approach will meet these requirements? “ Create a new AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key rotation. Enable RDS encryption on the database at creation time by using the KMS key

Question 206

“A company has an application that is deployed to two AWS Regions in an active-passive configuration. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) in each Region. The instances are in an Amazon EC2 Auto Scaling group in each Region. The application uses an Amazon Route 53 hosted zone for DNS. A SysOps administrator needs to configure automatic failover to the secondary Region. What should the SysOps administrator do to meet these requirements?” Configure Route 53 alias records that point to each ALB. Choose a failover routing policy. Set Evaluate Target Health to Yes.

Question 207

“A company is implementing a monitoring solution that is based on machine learning. The monitoring solution consumes Amazon EventBridge (Amazon CloudWatch Events) events that are generated by Amazon EC2 Auto Scaling. The monitoring solution provides detection of anomalous behavior such as unanticipated scaling events and is configured as an EventBridge (CloudWatch Events) API destination. During initial testing, the company discovers that the monitoring solution is not receiving events. However, Amazon CloudWatch is showing that the EventBridge (CloudWatch Events) rule is being invoked. A SysOps administrator must implement a solution to retrieve client error details to help resolve this issue. Which solution will meet these requirements with the LEAST operational effort?”

Add an Amazon Simple Queue Service (Amazon SQS) standard queue as a dead-letter queue for the target. Process the messages in the dead-letter queue to retrieve error details

Question 208

“A company is storing backups in an Amazon S3 bucket. The backups must not be deleted for at least 3 months after the backups are created. What should a SysOps administrator do to meet this requirement? “

Enable S3 Object Lock on a new S3 bucket in compliance mode. Place all backups in the new S3 bucket with a retention period of 3 months
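A command sketch (bucket name is a placeholder); note that Object Lock can only be enabled when the bucket is created:

```shell
# Create the bucket with Object Lock enabled.
aws s3api create-bucket --bucket backup-bucket-example \
  --object-lock-enabled-for-bucket

# Default retention: compliance mode, 90 days (~3 months). Compliance mode
# prevents deletion by any user, including the root user, until expiry.
aws s3api put-object-lock-configuration --bucket backup-bucket-example \
  --object-lock-configuration \
  '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":90}}}'
```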

Question 209

“A SysOps administrator needs to track the costs of data transfer between AWS Regions. The SysOps administrator must implement a solution to send alerts to an email distribution list when transfer costs reach 75% of a specific threshold. What should the SysOps administrator do to meet these requirements? “

Use AWS Budgets to create a cost budget for data transfer costs. Set an alert at 75% of the budgeted amount. Configure the budget to send a notification to the email distribution list when costs reach 75% of the threshold

Question 210

“A company needs to archive all audit logs for 10 years. The company must protect the logs from any future edits. Which solution will meet these requirements? “

Store the data in an Amazon S3 Glacier vault. Configure a vault lock policy for write-once, read-many (WORM) access.

Question 211

“A company’s AWS Lambda function is experiencing performance issues. The Lambda function performs many CPU-intensive operations. The Lambda function is not running fast enough and is creating bottlenecks in the system. What should a SysOps administrator do to resolve this issue? “

Increase the amount of memory for the Lambda function

Question 212

“A company hosts a web application on an Amazon EC2 instance. The web server logs are published to Amazon CloudWatch Logs. The log events have the same structure and include the HTTP response codes that are associated with the user requests. The company needs to monitor the number of times that the web server returns an HTTP 404 response. What is the MOST operationally efficient solution that meets these requirements? “

Create a CloudWatch Logs metric filter that counts the number of times that the web server returns an HTTP 404 response.
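A hedged sketch (the log group name and the space-delimited log layout are assumptions; adjust the pattern to the actual access-log format):

```shell
# Count every event whose status-code field equals 404 as a value of 1
# on a custom metric; defaultValue=0 keeps the metric continuous.
aws logs put-metric-filter \
  --log-group-name /webserver/access \
  --filter-name http-404-count \
  --filter-pattern '[ip, id, user, timestamp, request, status_code=404, size]' \
  --metric-transformations \
  metricName=Http404Count,metricNamespace=WebServer,metricValue=1,defaultValue=0
```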

Question 213

“A company is attempting to manage its costs in the AWS Cloud. A SysOps administrator needs specific company-defined tags that are assigned to resources to appear on the billing report. What should the SysOps administrator do to meet this requirement? “

Activate the tags as user-defined cost allocation tags.

Question 214

“A company is expanding globally and needs to back up data on Amazon Elastic Block Store (Amazon EBS) volumes to a different AWS Region. Most of the EBS volumes that store the data are encrypted, but some of the EBS volumes are unencrypted. The company needs the backup data from all the EBS volumes to be encrypted. Which solution will meet these requirements with the LEAST management overhead? “

Configure a lifecycle policy in Amazon Data Lifecycle Manager (Amazon DLM) to create the EBS volume snapshots with cross-Region backups enabled. Encrypt the snapshot copies by using AWS Key Management Service (AWS KMS).

Question 215

“A SysOps administrator creates an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses AWS Fargate. The cluster is deployed successfully. The SysOps administrator needs to manage the cluster by using the kubectl command line tool. Which of the following must be configured on the SysOps administrator’s machine so that kubectl can communicate with the cluster API server? “

The kubeconfig file

Question 216

“A company wants to collect data from an application to use for analytics. For the first 90 days, the data will be infrequently accessed but must remain highly available. During this time, the company’s analytics team requires access to the data in milliseconds. However, after 90 days, the company must retain the data for the long term at a lower cost. The retrieval time after 90 days must be less than 5 hours. Which solution will meet these requirements MOST cost-effectively? “

Store the data in S3 Standard-Infrequent Access (S3 Standard-IA) for the first 90 days. Set up an S3 Lifecycle rule to move the data to S3 Glacier Flexible Retrieval after 90 days.

Question 217

“A company’s application currently uses an IAM role that allows all access to all AWS services. A SysOps administrator must ensure that the company’s IAM policies allow only the permissions that the application requires. How can the SysOps administrator create a policy to meet this requirement? “

Turn on AWS CloudTrail. Generate a policy by using AWS Identity and Access Management Access Analyzer.

Question 218

“A company is deploying a third-party unit testing solution that is delivered as an Amazon EC2 Amazon Machine Image (AMI). All system configuration data is stored in Amazon DynamoDB. The testing results are stored in Amazon S3. A minimum of three EC2 instances are required to operate the product. The company’s testing team wants to use an additional three EC2 instances when the Spot Instance prices are at a certain threshold. A SysOps administrator must implement a highly available solution that provides this functionality. Which solution will meet these requirements with the LEAST operational overhead? “

Define an Amazon EC2 Auto Scaling group by using a launch template. Use the provided AMI in the launch template. Configure three On-Demand Instances and three Spot instances. Configure a maximum Spot Instance price in the launch template

Question 219

“A SysOps administrator creates an AWS CloudFormation template to define an application stack that can be deployed in multiple AWS Regions. The SysOps administrator also creates an Amazon CloudWatch dashboard by using the AWS Management Console. Each deployment of the application requires its own CloudWatch dashboard. How can the SysOps administrator automate the creation of the CloudWatch dashboard each time the application is deployed? “

Export the existing CloudWatch dashboard as JSON. Update the CloudFormation template to define an AWS::CloudWatch::Dashboard resource. Include the exported JSON in the resource’s DashboardBody property.
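A minimal CloudFormation sketch; the widget JSON below stands in for the exported dashboard body:

```yaml
Resources:
  DeploymentDashboard:
    Type: AWS::CloudWatch::Dashboard
    Properties:
      DashboardName: !Sub "app-dashboard-${AWS::Region}"
      # Paste the exported dashboard JSON here; !Sub lets the body
      # pick up the deployment Region.
      DashboardBody: !Sub |
        {
          "widgets": [
            {
              "type": "metric",
              "properties": {
                "metrics": [["AWS/EC2", "CPUUtilization"]],
                "region": "${AWS::Region}"
              }
            }
          ]
        }
```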

Question 220

“A company updates its security policy to clarify cloud hosting arrangements for regulated workloads. Workloads that are identified as sensitive must run on hardware that is not shared with other customers or with other AWS accounts within the company. Which solution will ensure compliance with this policy? “

Deploy workloads only to Dedicated Hosts.

Question 221

“A user is measuring the CPU utilization of a private data center machine every minute. The machine provides the aggregate of data every hour, such as Sum of data, Min value, Max value, and Number of Data points. The user wants to send these values to CloudWatch. How can the user achieve this?”

Send the data using the put-metric-data command with the statistic-values parameter
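For example (namespace, metric name, and values are illustrative), one hour of pre-aggregated data becomes a single call:

```shell
# 60 one-minute samples aggregated into one statistic set.
aws cloudwatch put-metric-data \
  --namespace DataCenter \
  --metric-name CPUUtilization \
  --unit Percent \
  --statistic-values SampleCount=60,Sum=3480,Minimum=22,Maximum=91 \
  --timestamp 2024-01-15T10:00:00Z
```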

Question 222

“A SysOps administrator wants to monitor the free disk space that is available on a set of Amazon EC2 instances that have Amazon Elastic Block Store (Amazon EBS) volumes attached. The SysOps administrator wants to receive a notification when the used disk space of the EBS volumes exceeds a threshold value, but only when the DiskReadOps metric also exceeds a threshold value. The SysOps administrator has set up an Amazon Simple Notification Service (Amazon SNS) topic. How can the SysOps administrator receive notification only when both metrics exceed their threshold values? “

Install the Amazon CloudWatch agent on the EC2 instances. Create a metric alarm for the disk space and a metric alarm for the DiskReadOps metric. Create a composite alarm that includes the two metric alarms to publish a notification to the SNS topic.
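Assuming the two metric alarms already exist under the names shown (placeholders), the composite alarm can be sketched as:

```shell
# Fires only when BOTH underlying alarms are in ALARM state.
aws cloudwatch put-composite-alarm \
  --alarm-name disk-space-and-read-ops \
  --alarm-rule 'ALARM("disk-used-percent-high") AND ALARM("disk-read-ops-high")' \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:ops-alerts
```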

Question 223

“A company updates its security policy to prohibit the public exposure of any data in Amazon S3 buckets in the company’s account. What should a SysOps administrator do to meet this requirement? “

Turn on S3 Block Public Access from the account level.

Question 224

“A company’s SysOps administrator needs to change the AWS Support plan for one of the company’s AWS accounts. The account has multi-factor authentication (MFA) activated, and the MFA device is lost. What should the SysOps administrator do to sign in? “

Sign in as a root user by using email and phone verification. Set up a new MFA device. Change the root user password.

Question 225

“A company is creating a new multi-account architecture. A SysOps administrator must implement a login solution to centrally manage user access and permissions across all AWS accounts. The solution must be integrated with AWS Organizations and must be connected to a third-party Security Assertion Markup Language (SAML) 2.0 identity provider (IdP). What should the SysOps administrator do to meet these requirements? “

Enable and configure AWS Single Sign-On with the third-party IdP

Question 226

“A company is managing many accounts by using a single organization in AWS Organizations. The organization has all features enabled. The company wants to turn on AWS Config in all the accounts of the organization and in all AWS Regions. What should a SysOps administrator do to meet these requirements in the MOST operationally efficient way? “

Use AWS CloudFormation Stack Sets to deploy stack instances that turn on AWS Config in all accounts and in all Regions.

Question 227

“A SysOps administrator needs to delete an AWS CloudFormation stack that is no longer in use. The CloudFormation stack is in the DELETE_FAILED state. The SysOps administrator has validated the permissions that are required to delete the CloudFormation stack. Which of the following are possible causes of the DELETE_FAILED state? (Choose two.) “

  • There are additional resources associated with a security group in the stack

  • There are Amazon S3 buckets that still contain objects in the stack

Question 228

“A SysOps administrator needs to configure a solution that will deliver digital content to a set of authorized users through Amazon CloudFront. Unauthorized users must be restricted from access. Which solution will meet these requirements? “

Store the digital content in an Amazon S3 bucket that has public access blocked. Use an origin access identity (OAI) to deliver the content through CloudFront. Restrict S3 bucket access with signed URLs in CloudFront.

Question 229

“A SysOps administrator must ensure that a company’s Amazon EC2 instances auto scale as expected. The SysOps administrator configures an Amazon EC2 Auto Scaling lifecycle hook to send an event to Amazon EventBridge (Amazon CloudWatch Events), which then invokes an AWS Lambda function to configure the EC2 instances. When the configuration is complete, the Lambda function calls the complete-lifecycle-action event to put the EC2 instances into service. In testing, the SysOps administrator discovers that the Lambda function is not invoked when the EC2 instances auto scale. What should the SysOps administrator do to resolve this issue? “

Add a permission to the Lambda function so that it can be invoked by the EventBridge (CloudWatch Events) rule.

Question 230

“A company has mandated the use of multi-factor authentication (MFA) for all IAM users, and requires users to make all API calls using the CLI. However, users are not prompted to enter MFA tokens, and are able to run CLI commands without MFA. In an attempt to enforce MFA, the company attached an IAM policy to all users that denies API calls that have not been authenticated with MFA. What additional step must be taken to ensure that API calls are authenticated using MFA? “

Require users to use temporary credentials from the get-session-token command to sign API calls.
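A sketch of the user workflow (the MFA serial, token code, and credential values are placeholders):

```shell
# Exchange the long-term keys plus a current MFA code for temporary credentials.
aws sts get-session-token \
  --serial-number arn:aws:iam::111122223333:mfa/jane \
  --token-code 123456 \
  --duration-seconds 3600

# Export the returned temporary credentials; subsequent CLI calls are then
# MFA-authenticated and satisfy the deny policy's MFA condition.
export AWS_ACCESS_KEY_ID=ASIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrEXAMPLE
export AWS_SESSION_TOKEN=AQoDYEXAMPLE
```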

Question 231

“A SysOps administrator is configuring AWS Client VPN to connect users on a corporate network to AWS resources that are running in a VPC. According to compliance requirements, only traffic that is destined for the VPC can travel across the VPN tunnel. How should the SysOps administrator configure Client VPN to meet these requirements? “

On the Client VPN endpoint, turn on the split-tunnel option

Question 232

“A web application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Auto Scaling group across multiple Availability Zones. A SysOps administrator notices that some of these EC2 instances show up as healthy in the Auto Scaling group but show up as unhealthy in the ALB target group. What is a possible reason for this issue? “

The target group health check is configured with an incorrect port or path

Question 233

“A SysOps administrator notices a scale up event for an Amazon EC2 Auto Scaling group. Amazon CloudWatch shows a spike in the RequestCount metric for the associated Application Load Balancer. The administrator would like to know the IP addresses for the source of the requests. Where can the administrator find this information? “

Elastic Load Balancer access logs

Question 234

“A company plans to migrate several of its high performance computing (HPC) virtual machines (VMs) to Amazon EC2 instances on AWS. A SysOps administrator must identify a placement group for this deployment. The strategy must minimize network latency and must maximize network throughput between the HPC VMs. Which strategy should the SysOps administrator choose to meet these requirements? “

Deploy the instances in a cluster placement group in one Availability Zone.

Question 235

“An errant process is known to use an entire processor and run at 100%. A SysOps administrator wants to automate restarting an Amazon EC2 instance when the problem occurs for more than 2 minutes. How can this be accomplished? “

Create an Amazon CloudWatch alarm for the EC2 instance with detailed monitoring. Add an action to restart the instance

Question 236

“A company maintains a large set of sensitive data in an Amazon S3 bucket. The company’s security team asks a SysOps administrator to help verify that all current objects in the S3 bucket are encrypted. What is the MOST operationally efficient solution that meets these requirements? “

Create an S3 Inventory configuration on the S3 bucket. Include the appropriate status fields.

Question 237

“Users are periodically experiencing slow response times from a relational database. The database runs on a burstable Amazon EC2 instance with a 350 GB General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. A SysOps administrator monitors the EC2 instance in Amazon CloudWatch and observes that the VolumeReadOps metric drops to less than 10% of its peak value during the periods of slow response. What should the SysOps administrator do to ensure consistently high performance? “

Activate unlimited mode on the EC2 instance

Question 238

“A SysOps administrator is optimizing the cost of a workload. The workload is running in multiple AWS Regions and is using AWS Lambda with Amazon EC2 On-Demand Instances for the compute. The overall usage is predictable. The amount of compute that is consumed in each Region varies, depending on the users’ locations. Which approach should the SysOps administrator use to optimize this workload?”

Purchase Compute Savings Plans based on the usage during the past 30 days.

Question 239

“A software company runs a workload on Amazon EC2 instances behind an Application Load Balancer (ALB). A SysOps administrator needs to define a custom health check for the EC2 instances. What is the MOST operationally efficient solution? “

Configure the health check on the ALB and ensure that the Health Check Path setting is correct.

Question 240

“A SysOps administrator is required to monitor free space on Amazon EBS volumes attached to Microsoft Windows-based Amazon EC2 instances within a company’s account. The administrator must be alerted to potential issues. What should the administrator do to receive email alerts before low storage space affects EC2 instance performance? “

Use the Amazon CloudWatch agent to send disk space metrics, then set up CloudWatch alarms using an Amazon SNS topic.

Question 241

“A company applies user-defined tags to resources that are associated with the company’s AWS workloads. Twenty days after applying the tags, the company notices that it cannot use the tags to filter views in the AWS Cost Explorer console. What is the reason for this issue? “

The company has not activated the user-defined tags for cost allocation.

Question 242

“A company has a critical serverless application that uses multiple AWS Lambda functions. Each Lambda function generates 1 GB of log data daily in its own Amazon CloudWatch Logs log group. The company’s security team asks for a count of application errors, grouped by type, across all of the log groups. What should a SysOps administrator do to meet this requirement? “

Perform a CloudWatch Logs Insights query that uses the stats command and count function.

Question 243

“A company with multiple AWS accounts needs to obtain recommendations for AWS Lambda functions and identify optimal resource configurations for each Lambda function. How should a SysOps administrator provide these recommendations? “

Enable AWS Compute Optimizer and export the Lambda function recommendations

Question 244

“A company uses AWS CloudFormation templates to deploy cloud infrastructure. An analysis of all the company’s templates shows that the company has declared the same components in multiple templates. A SysOps administrator needs to create dedicated templates that have their own parameters and conditions for these common components. Which solution will meet this requirement? “

Develop CloudFormation nested stacks.

Question 245

“A SysOps administrator is building a process for sharing Amazon RDS database snapshots between different accounts associated with different business units within the same company. All data must be encrypted at rest. How should the administrator implement this process?”

Update the key policy to grant permission to the AWS KMS encryption key used to encrypt the snapshot with all relevant accounts, then share the snapshot with those accounts

Question 246

“A SysOps administrator configures an Amazon S3 gateway endpoint in a VPC. The private subnets inside the VPC do not have outbound internet access. A user logs in to an Amazon EC2 instance in one of the private subnets and cannot upload a file to an Amazon S3 bucket in the same AWS Region. Which solution will solve this problem? “

Update the S3 bucket policy to allow s3:PutObject access from the private subnet CIDR block

Question 247

“A user has created an Auto Scaling group using CLI. The user wants to enable CloudWatch detailed monitoring for that group. How can the user configure this?”

By default, detailed monitoring is enabled for Auto Scaling groups that are created by using the CLI.

Question 248

“A SysOps administrator is helping a development team deploy an application to AWS. The AWS CloudFormation template includes an Amazon Linux EC2 instance, an Amazon Aurora DB cluster, and a hardcoded database password that must be rotated every 90 days. What is the MOST secure way to manage the database password? “

Use the AWS::SecretsManager::Secret resource with the GenerateSecretString property to automatically generate a password. Use the AWS::SecretsManager::RotationSchedule resource to define a rotation schedule for the password. Configure the application to retrieve the secret from AWS Secrets Manager to access the database
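A minimal CloudFormation sketch (the Aurora cluster resource `AuroraCluster` is assumed to be defined elsewhere in the template):

```yaml
Resources:
  DbSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 32
        ExcludeCharacters: '"@/\'
  DbSecretAttachment:
    Type: AWS::SecretsManager::SecretTargetAttachment
    Properties:
      SecretId: !Ref DbSecret
      TargetId: !Ref AuroraCluster   # assumed Aurora DB cluster resource
      TargetType: AWS::RDS::DBCluster
  DbSecretRotation:
    Type: AWS::SecretsManager::RotationSchedule
    DependsOn: DbSecretAttachment
    Properties:
      SecretId: !Ref DbSecret
      HostedRotationLambda:
        RotationType: MySQLSingleUser
      RotationRules:
        AutomaticallyAfterDays: 90
```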

Question 249

“A company’s SysOps administrator maintains a highly available environment. The environment includes Amazon EC2 instances and an Amazon RDS Multi-AZ database. The EC2 instances are in an Auto Scaling group behind an Application Load Balancer. Recently, the company conducted a failover test. The SysOps administrator needs to decrease the failover time of the RDS database by at least 10%. Which solution will meet this requirement? “

Create an RDS proxy. Point the application to the proxy endpoint.

Question 250

“A company’s VPC has connectivity to an on-premises data center through an AWS Site-to-Site VPN. The company needs Amazon EC2 instances in the VPC to send DNS queries for example.com to the DNS servers in the data center. Which solution will meet these requirements? “

Create an Amazon Route 53 Resolver outbound endpoint. Create a forwarding rule on the resolver that sends all queries for example.com to the on-premises DNS servers. Associate this rule with the VPC.
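A CLI sketch (subnet, security group, endpoint, VPC, and target IP values are placeholders):

```shell
# Outbound endpoint in two private subnets of the VPC.
aws route53resolver create-resolver-endpoint \
  --name onprem-outbound --direction OUTBOUND \
  --creator-request-id outbound-001 \
  --security-group-ids sg-0123456789abcdef0 \
  --ip-addresses SubnetId=subnet-aaaa1111 SubnetId=subnet-bbbb2222

# Forward example.com queries to the on-premises DNS servers.
aws route53resolver create-resolver-rule \
  --name example-com-forward --rule-type FORWARD \
  --creator-request-id rule-001 \
  --domain-name example.com \
  --resolver-endpoint-id rslvr-out-exampleid \
  --target-ips Ip=10.10.0.2,Port=53 Ip=10.10.0.3,Port=53

# The rule takes effect only for VPCs it is associated with.
aws route53resolver associate-resolver-rule \
  --resolver-rule-id rslvr-rr-exampleid \
  --vpc-id vpc-0123456789abcdef0
```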

Question 251

“A SysOps administrator is tasked with analyzing database performance. The database runs on a single Amazon RDS DB instance. The SysOps administrator finds that, during times of peak traffic, resources on the database are overutilized due to the amount of read traffic. Which actions should the SysOps administrator take to improve RDS performance? (Choose two.) “

  • Add a read replica

  • Modify the application to use Amazon ElastiCache for Memcached

Question 252

“A company’s SysOps administrator has created an Amazon EC2 instance with custom software that will be used as a template for all new EC2 instances across multiple AWS accounts. The Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the EC2 instance are encrypted with AWS managed keys. The SysOps administrator creates an Amazon Machine Image (AMI) of the custom EC2 instance and plans to share the AMI with the company’s other AWS accounts. The company requires that all AMIs are encrypted with AWS Key Management Service (AWS KMS) keys and that only authorized AWS accounts can access the shared AMIs. Which solution will securely share the AMI with the other AWS accounts? “

In the account where the AMI was created, create a customer managed KMS key. Modify the key policy to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Create a copy of the AMI, and specify the KMS key. Modify the permissions on the copied AMI to specify the AWS account numbers that the AMI will be shared with.

Question 253

“A company is migrating its production file server to AWS. All data that is stored on the file server must remain accessible if an Availability Zone becomes unavailable or when system maintenance is performed. Users must be able to interact with the file server through the SMB protocol. Users also must have the ability to manage file permissions by using Windows ACLs. Which solution will meet these requirements? “

Create an Amazon FSx for Windows File Server Multi-AZ file system.

Question 254

“A SysOps administrator needs to create alerts that are based on the read and write metrics of Amazon Elastic Block Store (Amazon EBS) volumes that are attached to an Amazon EC2 instance. The SysOps administrator creates and enables Amazon CloudWatch alarms for the DiskReadBytes metric and the DiskWriteBytes metric. A custom monitoring tool that is installed on the EC2 instance with the same alarm configuration indicates that the volume metrics have exceeded the threshold. However, the CloudWatch alarms were not in ALARM state. Which action will ensure that the CloudWatch alarms function correctly?”

Reconfigure the CloudWatch alarms to use the VolumeReadBytes metric and the VolumeWriteBytes metric for the EBS volumes

Question 255

“A company recently moved its server infrastructure to Amazon EC2 instances. The company wants to use Amazon CloudWatch metrics to track instance memory utilization and available disk space. What should a SysOps administrator do to meet these requirements? “

Install and configure the CloudWatch agent on all the instances. Attach an IAM role to allow the instances to write logs to CloudWatch

Question 256

“A company recently deployed MySQL on an Amazon EC2 instance with a default boot volume. The company intends to restore a 1.75 TB database. A SysOps administrator needs to provision the correct Amazon Elastic Block Store (Amazon EBS) volume. The database will require read performance of up to 10,000 IOPS and is not expected to grow in size. Which solution will provide the required performance at the LOWEST cost?”

Deploy a 2 TB General Purpose SSD (gp3) volume. Set the IOPS to 10,000.
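As a sketch (the Availability Zone is a placeholder), gp3 lets IOPS be provisioned independently of volume size:

```shell
# 2 TiB gp3 volume with 10,000 provisioned IOPS (gp3 baseline is 3,000).
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --volume-type gp3 \
  --size 2048 \
  --iops 10000
```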

Question 257

“A SysOps administrator is setting up a fleet of Amazon EC2 instances in an Auto Scaling group for an application. The fleet should have 50% CPU available at all times to accommodate bursts of traffic. The load will increase significantly between the hours of 09:00 and 17:00, 7 days a week. How should the SysOps administrator configure the scaling of the EC2 instances to meet these requirements? “

Create a target tracking scaling policy that runs when the CPU utilization is higher than 50%. Create a scheduled scaling policy that ensures that the fleet is available at 09:00. Create a second scheduled scaling policy that scales in the fleet at 17:00.

Question 258

“A company is running Amazon EC2 On-Demand Instances in an Auto Scaling group. The instances process messages from an Amazon Simple Queue Service (Amazon SQS) queue. The Auto Scaling group is set to scale based on the number of messages in the queue. Messages can take up to 12 hours to process completely. A SysOps administrator must ensure that instances are not interrupted during message processing. What should the SysOps administrator do to meet these requirements? “

Enable instance scale-in protection for the specific instance in the Auto Scaling group at the start of message processing by calling the Amazon EC2 Auto Scaling API from the processing script. Disable instance scale-in protection after message processing is complete by calling the Amazon EC2 Auto Scaling API from the processing script.

“An organization is trying to create various IAM users. Which of the below mentioned options is not a valid IAM username?” john#cloud
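The scale-in protection toggling in Question 258 can be sketched with the AWS CLI (the group name is a placeholder; the instance ID comes from instance metadata):

```shell
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# At the start of message processing: protect this instance from scale-in.
aws autoscaling set-instance-protection \
  --auto-scaling-group-name message-workers \
  --instance-ids "$INSTANCE_ID" \
  --protected-from-scale-in

# After processing completes: allow scale-in again.
aws autoscaling set-instance-protection \
  --auto-scaling-group-name message-workers \
  --instance-ids "$INSTANCE_ID" \
  --no-protected-from-scale-in
```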

Question 259

“A company is developing a mobile shopping web app. The company needs an environment that is configured to encrypt all resources in transit and at rest. A security engineer must develop a solution that will encrypt traffic in transit to the company’s Application Load Balancer and Amazon API Gateway resources. The solution also must encrypt traffic at rest for Amazon S3 storage. What should the security engineer do to meet these requirements?”

Use AWS Certificate Manager (ACM) for encryption in transit. Use AWS Key Management Service for encryption at rest.

Question 260

“A SysOps administrator needs to collect the content of log files from a custom application that is deployed across hundreds of Amazon EC2 instances running Ubuntu. The log files need to be stored in Amazon CloudWatch Logs. How should the SysOps administrator collect the application log files with the LOWEST operational overhead? “

Store a CloudWatch agent configuration in the AWS Systems Manager Parameter Store. Install the CloudWatch agent on each EC2 instance by using Systems Manager. Configure each agent to collect the application log files.
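A sketch of the two steps (parameter name, tag, log path, and log group are placeholders):

```shell
# Store the agent configuration once in Parameter Store.
aws ssm put-parameter --name AmazonCloudWatch-app-logs --type String \
  --value '{"logs":{"logs_collected":{"files":{"collect_list":[{"file_path":"/var/log/myapp/app.log","log_group_name":"/myapp/application"}]}}}}'

# Configure the agent fleet-wide with the AmazonCloudWatch-ManageAgent
# Systems Manager document, targeting instances by tag and pointing the
# agent at the stored parameter.
aws ssm send-command \
  --document-name AmazonCloudWatch-ManageAgent \
  --targets Key=tag:App,Values=myapp \
  --parameters action=configure,mode=ec2,optionalConfigurationSource=ssm,optionalConfigurationLocation=AmazonCloudWatch-app-logs
```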

Question 261

“A user wants to upload a complete folder to Amazon S3 using the S3 Management Console. How can the user perform this activity?”

Use the Enable Enhanced Uploader option from the S3 console while uploading objects

Question 262

“A SysOps administrator is creating a simple, public-facing website running on Amazon EC2. The SysOps administrator created the EC2 instance in an existing public subnet and assigned an Elastic IP address to the instance. Next, the SysOps administrator created and applied a new security group to the instance to allow incoming HTTP traffic from 0.0.0.0/0. Finally, the SysOps administrator created a new network ACL and applied it to the subnet to allow incoming HTTP traffic from 0.0.0.0/0. However, the website cannot be reached from the internet. What is the cause of this issue? “

The SysOps administrator did not create an outbound rule that allows ephemeral port return traffic in the new network ACL.
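Because network ACLs are stateless, the return traffic needs its own outbound rule. A CloudFormation sketch of the missing entry (logical IDs are hypothetical):

```yaml
EphemeralReturnTraffic:
  Type: AWS::EC2::NetworkAclEntry
  Properties:
    NetworkAclId: !Ref WebNetworkAcl   # hypothetical logical ID of the new network ACL
    RuleNumber: 100
    Egress: true                       # outbound rule
    Protocol: 6                        # TCP
    RuleAction: allow
    CidrBlock: 0.0.0.0/0
    PortRange:
      From: 1024                       # ephemeral port range used by clients
      To: 65535
```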

Question 263

“A company has an application that uses an Amazon Elastic File System (Amazon EFS) file system. A recent incident that involved an application logic error corrupted several files. The company wants to improve its ability to back up and recover the EFS file system. The company must be able to recover individual files rapidly. Which solution meets these requirements MOST cost-effectively? “

Enable AWS Backup in Amazon EFS to back up the file system to a backup vault. Use a partial restore job to retrieve individual files.

Question 264

“A company needs to view a list of security groups that are open to the internet on port 3389. What should a SysOps administrator do to meet this requirement? “

Use AWS Trusted Advisor to find security groups that allow unrestricted access on port 3389.

Question 265

“A company website contains a web tier and a database tier on AWS. The web tier consists of Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones. The database tier runs on an Amazon RDS for MySQL Multi-AZ DB instance. The database subnet network ACLs are restricted to only the web subnets that need access to the database. The web subnets use the default network ACL with the default rules. The company’s operations team has added a third subnet to the Auto Scaling group configuration. After an Auto Scaling event occurs, some users report that they intermittently receive an error message. The error message states that the server cannot connect to the database. The operations team has confirmed that the route tables are correct and that the required ports are open on all security groups. Which combination of actions should a SysOps administrator take so that the web servers can communicate with the DB instance? (Choose two.) “

  • On the network ACLs for the database subnets, create an inbound Allow rule of type MySQL/Aurora (3306). Specify the source as the third web subnet.

  • On the network ACLs for the database subnets, create an outbound Allow rule of type TCP with the ephemeral port range and the destination as the third web subnet.

Question 266

“A SysOps administrator has been able to consolidate multiple, secure websites onto a single server, and each site is running on a different port. The administrator now wants to start a duplicate server in a second Availability Zone and put both behind a load balancer for high availability. What would be the command line necessary to deploy one of the sites’ certificates to the load balancer? “

aws elb set-load-balancer-listener-ssl-certificate --load-balancer-name my-load-balancer --load-balancer-port 443 --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/new-server-cert

Question 267

A user is creating an AWS CloudFormation stack. Which of the below mentioned limitations does not hold true for CloudFormation?

One account by default is limited to 100 templates

Question 268

“An AWS CloudFormation template creates an Amazon RDS instance. This template is used to build up development environments as needed and then delete the stack when the environment is no longer required. The RDS-persisted data must be retained for further use, even after the CloudFormation stack is deleted. How can this be achieved in a reliable and efficient way? “

Use the Snapshot Deletion Policy in the CloudFormation template definition of the RDS instance
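The deletion policy is a resource-level attribute on the RDS resource, not a property. An illustrative fragment (engine, sizing, and the password parameter are placeholders):

```yaml
Resources:
  DevDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot            # take a final snapshot when the stack is deleted
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: "{{resolve:ssm-secure:/dev/db/password:1}}"  # assumed SSM parameter
```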

Question 269

“An organization has created a Queue named modularqueue with SQS. The organization is not performing any operations such as SendMessage, ReceiveMessage, DeleteMessage, GetQueueAttributes, SetQueueAttributes, AddPermission, and RemovePermission on the queue. What can happen in this scenario?”

AWS SQS can delete the queue after 30 days without notification.

Question 270

“A company uses Amazon S3 to aggregate raw video footage from various media teams across the US. The company recently expanded into new geographies in Europe and Australia. The technical teams located in Europe and Australia reported delays when uploading large video files into the destination S3 bucket in the United States. What are the MOST cost-effective ways to increase upload speeds into the S3 bucket? (Choose two.)”

  • Use Amazon S3 Transfer Acceleration for file uploads into the destination S3 bucket

  • Use multipart uploads for file uploads into the destination S3 bucket from the branch offices in Europe and Australia.

Question 271

“A SysOps administrator needs to provision a new fleet of Amazon EC2 Spot Instances in an Amazon EC2 Auto Scaling group. The Auto Scaling group will use a wide range of instance types. The configured fleet must come from pools that have the most availability for the number of instances that are launched. Which solution will meet these requirements?”

Launch the Spot Instances by using the capacity optimized strategy

Question 272

“A SysOps administrator creates a custom Amazon Machine Image (AMI) in the eu-west-2 Region and uses the AMI to launch Amazon EC2 instances. The SysOps administrator needs to use the same AMI to launch EC2 instances in two other Regions: us-east-1 and us-east-2. What must the SysOps administrator do to use the custom AMI in the additional Regions?”

Copy the AMI to the additional Regions.

Question 273

“A company has many accounts in an organization in AWS Organizations. The company must automate resource provisioning from the organization’s management account to the member accounts. Which solution will meet this requirement? “

Create an AWS CloudFormation stack set. Deploy the stack set to all member accounts

Question 274

“A company is building an interactive application for personal finance. The application stores financial data in Amazon S3, and the data must be encrypted. The company does not want to provide its own encryption keys. However, the company wants to maintain an audit trail that shows when an encryption key was used and who used the key. Which solution will meet these requirements? “

Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS) to encrypt the user data on Amazon S3.
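On the S3 side, SSE-KMS becomes the bucket's default-encryption rule. Expressed as the JSON you would pass to `put-bucket-encryption`, it might look like this sketch; the key ARN is a placeholder:

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"
      },
      "BucketKeyEnabled": true
    }
  ]
}
```

Because the key is managed by AWS KMS, every use of it is recorded in AWS CloudTrail, which provides the audit trail of when the key was used and by whom.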

Question 275

“A company has an AWS CloudFormation template that creates an Amazon S3 bucket. A user authenticates to the corporate AWS account with their Active Directory credentials and attempts to deploy the CloudFormation template. However, the stack creation fails. Which factors could cause this failure? (Choose two.) “

  • The user’s IAM policy does not allow the cloudformation:CreateStack action.

  • The user’s IAM policy does not allow the s3:CreateBucket action.

Question 276

“An Amazon RDS for PostgreSQL DB cluster has automated backups turned on with a 7-day retention period. A SysOps administrator needs to create a new RDS DB cluster by using data that is no more than 24 hours old from the original DB cluster. Which solutions will meet these requirements with the LEAST operational overhead? (Choose two.) “

  • Identify the most recent automated snapshot. Restore the snapshot to a new RDS DB cluster

  • Create a read replica instance in the original RDS DB cluster. Promote the read replica to a standalone DB cluster.

Question 277

“A company is managing a website with a global user base hosted on Amazon EC2 with an Application Load Balancer (ALB). To reduce the load on the web servers, a SysOps administrator configures an Amazon CloudFront distribution with the ALB as the origin. After a week of monitoring the solution, the administrator notices that requests are still being served by the ALB and there is no change in the web server load. What are possible causes for this problem? (Choose two.) “

  • The DNS is still pointing to the ALB instead of the CloudFront distribution.

  • The default, minimum, and maximum Time to Live (TTL) are set to 0 seconds on the CloudFront distribution.

Question 278

“A SysOps administrator needs to configure the Amazon Route 53 hosted zone for example.com and www.example.com to point to an Application Load Balancer (ALB). Which combination of actions should the SysOps administrator take to meet these requirements? (Choose two.) “

  • Configure an alias record for example.com to point to the CNAME of the ALB.

  • Configure an alias record for www.example.com to point to the Route 53 example.com record

Question 279

“A company has a hybrid environment. The company has set up an AWS Direct Connect connection between the company’s on-premises data center and a workload that runs in a VPC. The company uses Amazon Route 53 for DNS on AWS. The company uses a private hosted zone to manage DNS names for a set of services that are hosted on AWS. The company wants the on-premises servers to use Route 53 for DNS resolution of the private hosted zone. Which solution will meet these requirements? “

Create a Route 53 inbound endpoint. Ensure that security groups and routing allow the traffic from the on-premises data center. Configure the DNS server on the on-premises network to conditionally forward DNS queries for the private hosted zone’s domain name to the IP addresses of the inbound endpoint.
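The inbound endpoint itself could be declared roughly as follows in CloudFormation (the subnet and security group references are assumed names):

```yaml
InboundResolverEndpoint:
  Type: AWS::Route53Resolver::ResolverEndpoint
  Properties:
    Direction: INBOUND
    Name: onprem-dns-inbound            # illustrative name
    SecurityGroupIds:
      - !Ref ResolverSecurityGroup      # must allow DNS (TCP/UDP 53) from on premises
    IpAddresses:
      - SubnetId: !Ref PrivateSubnetA   # the endpoint gets an IP address in each subnet
      - SubnetId: !Ref PrivateSubnetB
```

The on-premises DNS server then conditionally forwards queries for the private hosted zone's domain to the endpoint's IP addresses.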

Question 280

“A SysOps administrator is evaluating Amazon Route 53 DNS options to address concerns about high availability for an on-premises website. The website consists of two servers: a primary active server and a secondary passive server. Route 53 should route traffic to the primary server if the associated health check returns 2xx or 3xx HTTP codes. All other traffic should be directed to the secondary passive server. The failover record type, set ID, and routing policy have been set appropriately for both primary and secondary servers. Which next step should be taken to configure Route 53?”

Create an A record for each server. Associate the records with the Route 53 HTTP health check.
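Expressed as a Route 53 change batch, the two failover A records might look like this sketch (the domain, IP addresses, and health check ID are placeholders):

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Failover": "PRIMARY",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.10" }],
        "HealthCheckId": "11111111-2222-3333-4444-555555555555"
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "secondary",
        "Failover": "SECONDARY",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.20" }]
      }
    }
  ]
}
```

Only the primary record carries the HTTP health check; when that check fails, Route 53 answers with the secondary record.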

Question 281

“A company is building a web application on AWS. The company is using Amazon CloudFront with a domain name of www.example.com. All traffic to CloudFront must be encrypted in transit. The company already has provisioned an SSL certificate for www.example.com in AWS Certificate Manager (ACM). Which combination of steps should a SysOps administrator take to encrypt the traffic in transit? (Choose two.) “

  • For each cache behavior in the CloudFront distribution, modify the Viewer Protocol Policy setting to redirect HTTP to HTTPS.

  • Enter the alternate domain name (CNAME) of www.example.com for the CloudFront distribution. Select the custom SSL certificate.

Question 282

“A company runs an application on hundreds of Amazon EC2 instances in three Availability Zones. The application calls a third-party API over the public internet. A SysOps administrator must provide the third party with a list of static IP addresses so that the third party can allow traffic from the application. Which solution will meet these requirements? “

Add a NAT gateway in the public subnet of each Availability Zone. Make the NAT gateway the default route of all private subnets in those Availability Zones.

Question 283

“A company manages its multi-account environment by using AWS Organizations. The company needs to automate the creation of daily incremental backups of any Amazon Elastic Block Store (Amazon EBS) volume that is marked with a Lifecycle: Production tag in one of its primary AWS accounts. The company wants to prevent users from using Amazon EC2 * permissions to delete any of these production snapshots. What should a SysOps administrator do to meet these requirements? “

Associate a service control policy (SCP) with the account to deny users the ability to delete EBS snapshots. Create an Amazon EventBridge rule with a 24-hour cron schedule. Configure EBS Create Snapshot as the target. Target all EBS volumes with the specified tags
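The SCP half of this answer can be a single Deny statement. The sketch below scopes the deny to the tag from the question using the `ec2:ResourceTag` condition key; the statement ID is illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyProductionSnapshotDeletion",
      "Effect": "Deny",
      "Action": "ec2:DeleteSnapshot",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/Lifecycle": "Production"
        }
      }
    }
  ]
}
```

Because an explicit Deny in an SCP overrides any IAM Allow, even a user with ec2:* permissions cannot delete the tagged snapshots.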

Question 284

“A company hosts a Windows-based file server on a fleet of Amazon EC2 instances across multiple Availability Zones. The current setup does not allow application servers to access files simultaneously from the EC2 fleet. Which solution will allow this access in the MOST operationally efficient way? “

Create an Amazon FSx for Windows File Server Multi-AZ file system. Copy the files to the Amazon FSx file system. Adjust the connections from the application servers to use the share that the Amazon FSx file system exposes.

Question 285

“A company has deployed an application on Amazon EC2 instances in a single VPC. The company has placed the EC2 instances in a private subnet in the VPC. The EC2 instances need access to Amazon S3 buckets that are in the same AWS Region as the EC2 instances. A SysOps administrator must provide the EC2 instances with access to the S3 buckets without requiring any changes to the EC2 instances or the application. The EC2 instances must not have access to the internet. Which solution will meet these requirements? “

Create an S3 gateway endpoint that uses the default gateway endpoint policy. Associate the private subnet with the gateway endpoint.
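Note that a gateway endpoint is associated with the private subnet by way of that subnet's route table. A CloudFormation sketch (logical IDs are assumed):

```yaml
S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref AppVpc                   # assumed VPC logical ID
    ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
    VpcEndpointType: Gateway
    RouteTableIds:
      - !Ref PrivateRouteTable           # route table used by the private subnet
```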

Question 286

“A company has a public web application that experiences rapid traffic increases after advertisements appear on local television. The application runs on Amazon EC2 instances that are in an Auto Scaling group. The Auto Scaling group is not keeping up with the traffic surges after an advertisement runs. The company often needs to scale out to 100 EC2 instances during the traffic surges. The instance startup times are lengthy because of a boot process that creates machine-specific data caches that are unique to each instance. The exact timing of when the advertisements will appear on television is not known. A SysOps administrator must implement a solution so that the application can function properly during the traffic surges. Which solution will meet these requirements? “

Create a warm pool. Keep enough instances in the Stopped state to meet the increased demand.

Question 287

“A company hosts an internal application on Amazon EC2 On-Demand Instances behind an Application Load Balancer (ALB). The instances are in an Amazon EC2 Auto Scaling group. Employees use the application to provide product prices to potential customers. The Auto Scaling group is configured with a dynamic scaling policy and tracks average CPU utilization of the instances. Employees have noticed that sometimes the application becomes slow or unresponsive. A SysOps administrator finds that some instances are experiencing a high CPU load. The Auto Scaling group cannot scale out because the company is reaching the EC2 instance service quota. The SysOps administrator needs to implement a solution that provides a notification when the company reaches 70% or more of the EC2 instance service quota. Which solution will meet these requirements in the MOST operationally efficient manner? “

Use the Service Quotas console to create an Amazon CloudWatch alarm for the EC2 instances. Configure the alarm with quota utilization equal to or greater than 70%. Configure the alarm to publish an Amazon Simple Notification Service (Amazon SNS) notification when the alarm enters ALARM state.

Question 288

“A company has a policy that all Amazon EC2 instance logs must be published to Amazon CloudWatch Logs. A SysOps administrator is troubleshooting an EC2 instance that is running Amazon Linux 2. The EC2 instance is not publishing logs to CloudWatch Logs. The Amazon CloudWatch agent is running on the EC2 instance, and the agent configuration file is correct. What should the SysOps administrator do to resolve the issue? “

Ensure that the IAM role that is attached to the EC2 instance has permissions in CloudWatch Logs for the CreateLogGroup, CreateLogStream, PutLogEvents, and DescribeLogStreams actions.
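Those four actions correspond to a policy statement along the lines of the following sketch, which roughly mirrors what AWS's managed CloudWatchAgentServerPolicy grants for logs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```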

Question 289

“A company runs a workload on an Amazon EC2 instance. The workload needs a temporary cache that contains data that changes frequently. The workload does not need to retain the cache across instance restarts. Which storage option will provide the HIGHEST performance for the cache? “

EC2 instance store

Question 290

“A company runs multiple workloads across an organization in AWS Organizations. The company’s finance team needs detailed dashboards to track cost changes and provide detailed cost metrics. The finance team needs to track trends as granular as every hour. What should a SysOps administrator do to meet these requirements in the MOST operationally efficient way? “

Generate an AWS Cost and Usage Report. Store the report in Amazon S3. Use Amazon Athena to query the data. Use Amazon QuickSight to develop dashboards based on the data in the AWS Cost and Usage Report.

Question 291

“A company has a core application that must run 24 hours a day, 7 days a week. The application uses Amazon EC2, AWS Fargate, and AWS Lambda. The company uses a combination of operating systems across different AWS Regions. The company needs to maximize cost savings while committing to a pricing model that offers flexibility to make changes. What should the company do to meet these requirements?”

Purchase a Compute Savings Plan that is based on Savings Plans recommendations

Question 292

“A company’s architecture team must receive immediate email notification whenever new Amazon EC2 instances are launched in the company’s main AWS production account. What should a SysOps administrator do to meet this requirement? “

Create an Amazon Simple Notification Service (Amazon SNS) topic and a subscription that uses the email protocol. Enter the architecture team’s email address as the subscriber. Create an Amazon EventBridge rule that reacts when EC2 instances are launched. Specify the SNS topic as the rule’s target.
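One way to sketch the rule's event pattern is to match the `running` state transition, which fires when an instance launches:

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["running"]
  }
}
```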

Question 293

“A SysOps administrator needs to update an AWS account name. What should the SysOps administrator do to accomplish this goal? “

Sign in as the AWS account root user to make the change.

Question 294

“A team of developers is using several Amazon S3 buckets as centralized repositories. Users across the world upload large sets of files to these repositories. The development team’s applications later process these files. A SysOps administrator sets up a new S3 bucket, DOC-EXAMPLE-BUCKET, to support a new workload. The new S3 bucket also receives regular uploads of large sets of files from users worldwide. When the new S3 bucket is put into production, the upload performance from certain geographic areas is lower than the upload performance that the existing S3 buckets provide. What should the SysOps administrator do to remediate this issue?”

Enable S3 Transfer Acceleration for the new S3 bucket. Verify that the developers are using the DOC-EXAMPLE-BUCKET.s3-accelerate.amazonaws.com endpoint name in their API calls.

Question 295

“A user has launched a Windows based EC2 instance. However, the instance has some issues and the user wants to check the log. When the user checks the Instance console output from the AWS console, what will it display?”

The last three system events’ log errors

Dumps Base Exam

September 6, 2023 What Changed with AWS Certified SysOps Administrator – Associate SOA-C02 Exam?

AWS Certified SysOps Administrator – Associate SOA-C02 is an Amazon certification exam that helps organizations identify and develop talent with critical skills for implementing cloud initiatives. As of March 28, 2023, the exam format has been updated, and it will not include exam labs until further notice. Currently, the exam consists of two question types: multiple choice and multiple response. The practice questions below pair each question with its answer, which can help you identify your strengths and weaknesses, build confidence, and reduce test anxiety before exam day.

AWS Certified SysOps Administrator – Associate SOA-C02 Exam Free Dumps


  1. A SysOps administrator creates an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses AWS Fargate. The cluster is deployed successfully. The SysOps administrator needs to manage the cluster by using the kubectl command line tool.

Which of the following must be configured on the SysOps administrator’s machine so that kubectl can communicate with the cluster API server?

The kubeconfig file (correct)
The kube-proxy Amazon EKS add-on
The Fargate profile
The eks-connector.yaml file

Explanation:

The kubeconfig file is a configuration file used to store cluster authentication information, which is required to make requests to the Amazon EKS cluster API server. The kubeconfig file will need to be configured on the SysOps administrator’s machine in order for kubectl to be able to communicate with the cluster API server.

https://aws.amazon.com/blogs/developer/running-a-kubernetes-job-in-amazon-eks-on-aws-fargate-using-aws-stepfunctions/

  1. A SysOps administrator needs to configure automatic rotation for Amazon RDS database credentials.

The credentials must rotate every 30 days. The solution must integrate with Amazon RDS.

Which solution will meet these requirements with the LEAST operational overhead?

Store the credentials in AWS Systems Manager Parameter Store as a secure string. Configure automatic rotation with a rotation interval of 30 days.
Store the credentials in AWS Secrets Manager. Configure automatic rotation with a rotation interval of 30 days. (correct)
Store the credentials in a file in an Amazon S3 bucket. Deploy an AWS Lambda function to automatically rotate the credentials every 30 days.
Store the credentials in AWS Secrets Manager. Deploy an AWS Lambda function to automatically rotate the credentials every 30 days.

Explanation:

Storing the credentials in AWS Secrets Manager and configuring automatic rotation with a rotation interval of 30 days is the most efficient way to meet the requirements with the least operational overhead. AWS Secrets Manager automatically rotates the credentials at the specified interval, so there is no need for an additional AWS Lambda function or manual rotation. Additionally, Secrets Manager is integrated with Amazon RDS, so the credentials can be easily used with the RDS database.
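In CloudFormation terms, the 30-day rotation is a single resource. In the sketch below, the secret and rotation function references are assumed names; for Amazon RDS, Secrets Manager can also supply a hosted rotation function instead of a custom Lambda function:

```yaml
DbCredentialRotation:
  Type: AWS::SecretsManager::RotationSchedule
  Properties:
    SecretId: !Ref DbCredentialSecret          # assumed secret resource
    RotationLambdaARN: !GetAtt RotationFn.Arn  # assumed rotation function
    RotationRules:
      AutomaticallyAfterDays: 30
```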

  1. A company has an application that runs only on Amazon EC2 Spot Instances. The instances run in an Amazon EC2 Auto Scaling group with scheduled scaling actions.

However, the capacity does not always increase at the scheduled times, and instances terminate many times a day. A SysOps administrator must ensure that the instances launch on time and have fewer interruptions.

Which action will meet these requirements?

Specify the capacity-optimized allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group. (correct)
Specify the capacity-optimized allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.
Specify the lowest-price allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.
Specify the lowest-price allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.

Explanation:

Specifying the capacity-optimized allocation strategy for Spot Instances and adding more instance types to the Auto Scaling group is the best action to meet the requirements. Increasing the size of the instances in the Auto Scaling group will not necessarily help with the launch time or reduce interruptions, as the Spot Instances could still be interrupted even with larger instance sizes.

  1. A company stores its data in an Amazon S3 bucket. The company is required to classify the data and find any sensitive personal information in its S3 files.

Which solution will meet these requirements?

Create an AWS Config rule to discover sensitive personal information in the S3 files and mark them as noncompliant.
Create an S3 event-driven artificial intelligence/machine learning (AI/ML) pipeline to classify sensitive personal information by using Amazon Rekognition.
Enable Amazon GuardDuty. Configure S3 protection to monitor all data inside Amazon S3.
Enable Amazon Macie. Create a discovery job that uses the managed data identifier. (correct)

Explanation:

Amazon Macie is a security service designed to help organizations find, classify, and protect sensitive data stored in Amazon S3. Amazon Macie uses machine learning to automatically discover, classify, and protect sensitive data in Amazon S3. Creating a discovery job with the managed data identifier will allow Macie to identify sensitive personal information in the S3 files and classify it accordingly. Enabling AWS Config and Amazon GuardDuty will not help with this requirement as they are not designed to automatically classify and protect data.

  1. A company has an application that customers use to search for records on a website. The application’s data is stored in an Amazon Aurora DB cluster. The application’s usage varies by season and by day of the week.

The website’s popularity is increasing, and the website is experiencing slower performance because of increased load on the DB cluster during periods of peak activity. The application logs show that the performance issues occur when users are searching for information. The same search is rarely performed multiple times.

A SysOps administrator must improve the performance of the platform by using a solution that maximizes resource efficiency.

Which solution will meet these requirements?

Deploy an Amazon ElastiCache for Redis cluster in front of the DB cluster. Modify the application to check the cache before the application issues new queries to the database. Add the results of any queries to the cache.
Deploy an Aurora Replica for the DB cluster. Modify the application to use the reader endpoint for search operations. Use Aurora Auto Scaling to scale the number of replicas based on load. (correct)
Use Provisioned IOPS on the storage volumes that support the DB cluster to improve performance sufficiently to support the peak load on the application.
Increase the instance size in the DB cluster to a size that is sufficient to support the peak load on the application. Use Aurora Auto Scaling to scale the instance size based on load.

Explanation:

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/aurora-replicas-adding.html

  1. The security team is concerned because the number of AWS Identity and Access Management (IAM) policies being used in the environment is increasing. The team tasked a SysOps administrator to report on the current number of IAM policies in use and the total available IAM policies.

Which AWS service should the administrator use to check how current IAM policy usage compares to current service limits?

AWS Trusted Advisor (correct)
Amazon Inspector
AWS Config
AWS Organizations


  1. A company has a stateless application that is hosted on a fleet of 10 Amazon EC2 On-Demand Instances in an Auto Scaling group. A minimum of 6 instances are needed to meet service requirements.

Which action will maintain uptime for the application MOST cost-effectively?

Use a Spot Fleet with an On-Demand capacity of 6 instances. (correct)
Update the Auto Scaling group with a minimum of 6 On-Demand Instances and a maximum of 10 On-Demand Instances.
Update the Auto Scaling group with a minimum of 1 On-Demand Instance and a maximum of 6 On-Demand Instances.
Use a Spot Fleet with a target capacity of 6 instances.


  1. A SysOps administrator has launched a large general purpose Amazon EC2 instance to regularly process large data files. The instance has an attached 1 TB General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. The instance also is EBS-optimized. To save costs, the SysOps administrator stops the instance each evening and restarts the instance each morning.

When data processing is active, Amazon CloudWatch metrics on the instance show a consistent 3,000 VolumeReadOps. The SysOps administrator must improve the I/O performance while ensuring data integrity.

Which action will meet these requirements?

Change the instance type to a large, burstable, general purpose instance.
Change the instance type to an extra large general purpose instance.
Increase the EBS volume to a 2 TB General Purpose SSD (gp2) volume. (correct)
Move the data that resides on the EBS volume to the instance store.


  1. With the threat of ransomware viruses encrypting and holding company data hostage, which action should be taken to protect an Amazon S3 bucket?

Deny Post, Put, and Delete on the bucket.
Enable server-side encryption on the bucket. (correct)
Enable Amazon S3 versioning on the bucket.
Enable snapshots on the bucket.


  1. A SysOps administrator is evaluating Amazon Route 53 DNS options to address concerns about high availability for an on-premises website. The website consists of two servers: a primary active server and a secondary passive server. Route 53 should route traffic to the primary server if the associated health check returns 2xx or 3xx HTTP codes. All other traffic should be directed to the secondary passive server. The failover record type, set ID, and routing policy have been set appropriately for both primary and secondary servers.

Which next step should be taken to configure Route 53?

Create an A record for each server. Associate the records with the Route 53 HTTP health check. (correct)
Create an A record for each server. Associate the records with the Route 53 TCP health check.
Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 HTTP health check.
Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 TCP health check.


  1. A SysOps administrator noticed that a large number of Elastic IP addresses are being created on the company’s AWS account, but they are not being associated with Amazon EC2 instances, and are incurring Elastic IP address charges in the monthly bill.

How can the administrator identify who is creating the Elastic IP addresses?

Attach a cost-allocation tag to each requested Elastic IP address with the IAM user name of the developer who creates it.
Query AWS CloudTrail logs by using Amazon Athena to search for Elastic IP address events. (correct)
Create a CloudWatch alarm on the EIPCreated metric and send an Amazon SNS notification when the alarm triggers.
Use Amazon Inspector to get a report of all Elastic IP addresses created in the last 30 days.
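Assuming a CloudTrail table has already been created in Athena (the table name `cloudtrail_logs` here is hypothetical), the search could look like:

```sql
-- Elastic IP addresses are created by the AllocateAddress API call
SELECT useridentity.arn,
       eventtime,
       awsregion
FROM cloudtrail_logs
WHERE eventname = 'AllocateAddress'
ORDER BY eventtime DESC;
```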


  1. A company has an Amazon CloudFront distribution that uses an Amazon S3 bucket as its origin. During a review of the access logs, the company determines that some requests are going directly to the S3 bucket by using the website hosting endpoint. A SysOps administrator must secure the S3 bucket to allow requests only from CloudFront.

What should the SysOps administrator do to meet this requirement?

Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Remove access to and from other principals in the S3 bucket policy. Update the S3 bucket policy to allow access only from the OAI. (correct)
Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
Update the S3 bucket policy to allow access only from the CloudFront distribution. Remove access to and from other principals in the S3 bucket policy. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.

Question was not answered
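
A minimal sketch of the bucket policy the correct option describes: only the CloudFront OAI principal may read objects. The bucket name and OAI ID below are made-up placeholders.

```python
import json

def oai_bucket_policy(bucket: str, oai_id: str) -> dict:
    """Build an S3 bucket policy that grants read access only to a CloudFront OAI."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCloudFrontOAIReadOnly",
                "Effect": "Allow",
                "Principal": {
                    # OAIs are addressed through this special CloudFront principal ARN.
                    "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
                },
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }

print(json.dumps(oai_bucket_policy("example-bucket", "E1EXAMPLE"), indent=2))
```

With all other principals removed from the policy, direct requests to the website hosting endpoint are denied while CloudFront requests succeed.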

  1. A SysOps administrator must create an IAM policy for a developer who needs access to specific AWS services.

Based on the requirements, the SysOps administrator creates the following policy:

Which actions does this policy allow? (Select TWO.)

Create an AWS Storage Gateway.correct
Create an IAM role for an AWS Lambda function.
Delete an Amazon Simple Queue Service (Amazon SQS) queue.
Describe AWS load balancers.correct
Invoke an AWS Lambda function.correct

Question was not answered

  1. A company is trying to connect two applications. One application runs in an on-premises data center that has a hostname of host1.onprem.private. The other application runs on an Amazon EC2 instance that has a hostname of host1.awscloud.private. An AWS Site-to-Site VPN connection is in place between the on-premises network and AWS.

The application that runs in the data center tries to connect to the application that runs on the EC2 instance, but DNS resolution fails. A SysOps administrator must implement DNS resolution between on-premises and AWS resources.

Which solution allows the on-premises application to resolve the EC2 instance hostname?

Set up an Amazon Route 53 inbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the inbound resolver endpoint.
Set up an Amazon Route 53 inbound resolver endpoint. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the inbound resolver endpoint.correct
Set up an Amazon Route 53 outbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the outbound resolver endpoint.
Set up an Amazon Route 53 outbound resolver endpoint. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the outbound resolver endpoint.

Question was not answered

  1. While setting up an AWS managed VPN connection, a SysOps administrator creates a customer gateway resource in AWS. The customer gateway device resides in a data center with a NAT gateway in front of it.

What address should be used to create the customer gateway resource?

The private IP address of the customer gateway device
The MAC address of the NAT device in front of the customer gateway device
The public IP address of the customer gateway device
The public IP address of the NAT device in front of the customer gateway devicecorrect

Question was not answered

  1. A large company is using AWS Organizations to manage its multi-account AWS environment. According to company policy, all users should have read-level access to a particular Amazon S3 bucket in a central account. The S3 bucket data should not be available outside the organization. A SysOps administrator must set up the permissions and add a bucket policy to the S3 bucket.

Which parameters should be specified to accomplish this in the MOST efficient manner?

Specify "*" as the principal and aws:PrincipalOrgID as a condition.correct
Specify all account numbers as the principal.
Specify aws:PrincipalOrgID as the principal.
Specify the organization's management account as the principal.

Question was not answered Explanation:

https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/
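
The pattern from the linked blog post can be sketched as a bucket policy: allow any principal ("*") but only when the caller belongs to the organization, enforced via the aws:PrincipalOrgID condition key. The organization ID below is a placeholder.

```python
import json

def org_read_policy(bucket: str, org_id: str) -> dict:
    """Build an S3 bucket policy granting read access to every principal in one AWS Organization."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowReadWithinOrg",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
                # The condition scopes the wildcard principal to the organization.
                "Condition": {"StringEquals": {"aws:PrincipalOrgID": org_id}},
            }
        ],
    }

print(json.dumps(org_read_policy("central-bucket", "o-exampleorgid"), indent=2))
```

This is why it is the most efficient option: one statement covers every current and future account in the organization, with no per-account principal list to maintain.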

  1. A SysOps administrator is attempting to download patches from the internet into an instance in a private subnet. An internet gateway exists for the VPC, and a NAT gateway has been deployed on the public subnet; however, the instance has no internet connectivity.

The resources deployed into the private subnet must be inaccessible directly from the public internet.

What should be added to the private subnet’s route table in order to address this issue, given the information provided?

0.0.0.0/0 IGW
0.0.0.0/0 NATcorrect
10.0.1.0/24 IGW
10.0.1.0/24 NAT

Question was not answered
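
A toy model of route-table evaluation shows why the 0.0.0.0/0 route to the NAT gateway fixes this: the most specific matching prefix wins, so VPC-local traffic stays on the local route while internet-bound traffic falls through to the NAT gateway. The 10.0.0.0/16 VPC CIDR here is an assumption for illustration.

```python
import ipaddress

# Assumed example route table for the private subnet after the fix.
ROUTES = {
    "10.0.0.0/16": "local",        # implicit VPC-local route
    "0.0.0.0/0": "nat-gateway",    # the entry the answer adds
}

def next_hop(dest_ip: str) -> str:
    """Pick the target of the longest (most specific) matching prefix."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, target)
    return best[1]

print(next_hop("10.0.1.25"))      # stays inside the VPC
print(next_hop("93.184.216.34"))  # leaves through the NAT gateway
```

Pointing the default route at an internet gateway instead would require public IPs and expose the subnet, which the requirement forbids.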

  1. A SysOps administrator applies the following policy to an AWS CloudFormation stack:

What is the result of this policy?

Users that assume an IAM role with a logical ID that begins with "Production" are prevented from running the update-stack command.
Users can update all resources in the stack except for resources that have a logical ID that begins with "Production".correct
Users can update all resources in the stack except for resources that have an attribute that begins with "Production".
Users in an IAM group with a logical ID that begins with "Production" are prevented from running the update-stack command.

Question was not answered

  1. A company’s IT department noticed an increase in the spend of their developer AWS account. There are over 50 developers using the account, and the finance team wants to determine the service costs incurred by each developer.

What should a SysOps administrator do to collect this information? (Select TWO.)

Activate the createdBy tag in the account.correct
Analyze the usage with Amazon CloudWatch dashboards.
Analyze the usage with Cost Explorer.correct
Configure AWS Trusted Advisor to track resource usage.
Create a billing alarm in AWS Budgets.

Question was not answered

  1. A company website contains a web tier and a database tier on AWS. The web tier consists of Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones. The database tier runs on an Amazon RDS for MySQL Multi-AZ DB instance. The database subnet network ACLs are restricted to only the web subnets that need access to the database. The web subnets use the default network ACL with the default rules.

The company’s operations team has added a third subnet to the Auto Scaling group configuration. After an Auto Scaling event occurs, some users report that they intermittently receive an error message. The error message states that the server cannot connect to the database. The operations team has confirmed that the route tables are correct and that the required ports are open on all security groups.

Which combination of actions should a SysOps administrator take so that the web servers can communicate with the DB instance? (Select TWO.)

On the default ACL, create inbound Allow rules of type TCP with the ephemeral port range and the source as the database subnets.
On the default ACL, create outbound Allow rules of type MySQL/Aurora (3306). Specify the destinations as the database subnets.
On the network ACLs for the database subnets, create an inbound Allow rule of type MySQL/Aurora (3306). Specify the source as the third web subnet.correct
On the network ACLs for the database subnets, create an outbound Allow rule of type TCP with the ephemeral port range and the destination as the third web subnet.correct
On the network ACLs for the database subnets, create an outbound Allow rule of type MySQL/Aurora (3306). Specify the destination as the third web subnet.

Question was not answered

  1. A company is running an application on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances are launched by an Auto Scaling group and are automatically registered in a target group. A SysOps administrator must set up a notification to alert application owners when targets fail health checks.

What should the SysOps administrator do to meet these requirements?

Create an Amazon CloudWatch alarm on the UnHealthyHostCount metric. Configure an action to send an Amazon Simple Notification Service (Amazon SNS) notification when the metric is greater than 0.correct
Configure an Amazon EC2 Auto Scaling custom lifecycle action to send an Amazon Simple Notification Service (Amazon SNS) notification when an instance is in the Pending:Wait state.
Update the Auto Scaling group. Configure an activity notification to send an Amazon Simple Notification Service (Amazon SNS) notification for the Unhealthy event type.
Update the ALB health check to send an Amazon Simple Notification Service (Amazon SNS) notification when an instance is unhealthy.

Question was not answered

  1. A company wants to build a solution for its business-critical Amazon RDS for MySQL database. The database requires high availability across different geographic locations. A SysOps administrator must build a solution to handle a disaster recovery (DR) scenario with the lowest recovery time objective (RTO) and recovery point objective (RPO).

Which solution meets these requirements?

Create automated snapshots of the database on a schedule. Copy the snapshots to the DR Region.
Create a cross-Region read replica for the database.correct
Create a Multi-AZ read replica for the database.
Schedule AWS Lambda functions to create snapshots of the source database and to copy the snapshots to a DR Region.

Question was not answered

  1. A SysOps administrator is using Amazon EC2 instances to host an application. The SysOps administrator needs to grant permissions for the application to access an Amazon DynamoDB table.

Which solution will meet this requirement?

Create access keys to access the DynamoDB table. Assign the access keys to the EC2 instance profile.
Create an EC2 key pair to access the DynamoDB table. Assign the key pair to the EC2 instance profile.
Create an IAM user to access the DynamoDB table. Assign the IAM user to the EC2 instance profile.
Create an IAM role to access the DynamoDB table. Assign the IAM role to the EC2 instance profile.correct

Question was not answered
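
The correct option has two pieces worth seeing side by side: a trust policy that lets the EC2 service assume the role, and a permissions policy scoped to the table. A minimal sketch, in which the region, account ID, table name, and the chosen DynamoDB actions are illustrative placeholders:

```python
import json

# Trust policy: who may assume the role (the EC2 service, via the instance profile).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Permissions policy: what the role may do (access one DynamoDB table).
table_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/ExampleTable",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The application then picks up temporary credentials automatically from the instance metadata, with no long-lived access keys on the instance.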

  1. A company has a web application with a database tier that consists of an Amazon EC2 instance that runs MySQL. A SysOps administrator needs to minimize potential data loss and the time that is required to recover in the event of a database failure.

What is the MOST operationally efficient solution that meets these requirements?

Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric to invoke an AWS Lambda function that stops and starts the EC2 instance.
Create an Amazon RDS for MySQL Multi-AZ DB instance. Use a MySQL native backup that is stored in Amazon S3 to restore the data to the new database. Update the connection string in the web application.correct
Create an Amazon RDS for MySQL Single-AZ DB instance with a read replica. Use a MySQL native backup that is stored in Amazon S3 to restore the data to the new database. Update the connection string in the web application.
Use Amazon Data Lifecycle Manager (Amazon DLM) to take a snapshot of the Amazon Elastic Block Store (Amazon EBS) volume every hour. In the event of an EC2 instance failure, restore the EBS volume from a snapshot.

Question was not answered

  1. A company migrated an I/O intensive application to an Amazon EC2 general purpose instance. The EC2 instance has a single General Purpose SSD Amazon Elastic Block Store (Amazon EBS) volume attached.

Application users report that certain actions that require intensive reading and writing to the disk are taking much longer than normal or are failing completely. After reviewing the performance metrics of the EBS volume, a SysOps administrator notices that the VolumeQueueLength metric is consistently high during the same times in which the users are reporting issues. The SysOps administrator needs to resolve this problem to restore full performance to the application.

Which action will meet these requirements?

Modify the instance type to be storage optimized.
Modify the volume properties by deselecting Auto-Enabled IO.
Modify the volume properties to increase the IOPS.correct
Modify the instance to enable enhanced networking.

Question was not answered

  1. A SysOps administrator is trying to set up an Amazon Route 53 domain name to route traffic to a website hosted on Amazon S3. The domain name of the website is www.anycompany.com and the S3 bucket name is anycompany-static. After the record set is set up in Route 53, the domain name www.anycompany.com does not seem to work, and the static website is not displayed in the browser.

Which of the following is a cause of this?

The S3 bucket must be configured with Amazon CloudFront first.
The Route 53 record set must have an IAM role that allows access to the S3 bucket.
The Route 53 record set must be in the same region as the S3 bucket.
The S3 bucket name must match the record set name in Route 53.correct

Question was not answered

  1. An Amazon EC2 instance needs to be reachable from the internet.

The EC2 instance is in a subnet with the following route table: “image” Which entry must a SysOps administrator add to the route table to meet this requirement?

A route for 0.0.0.0/0 that points to a NAT gateway
A route for 0.0.0.0/0 that points to an egress-only internet gateway
A route for 0.0.0.0/0 that points to an internet gatewaycorrect
A route for 0.0.0.0/0 that points to an elastic network interface

Question was not answered

  1. A SysOps administrator has enabled AWS CloudTrail in an AWS account. If CloudTrail is disabled, it must be re-enabled immediately.

What should the SysOps administrator do to meet these requirements WITHOUT writing custom code?

Add the AWS account to AWS Organizations. Enable CloudTrail in the management account.
Create an AWS Config rule that is invoked when the CloudTrail configuration changes. Apply the AWS-ConfigureCloudTrailLogging automatic remediation action.correct
Create an AWS Config rule that is invoked when the CloudTrail configuration changes. Configure the rule to invoke an AWS Lambda function to enable CloudTrail.
Create an Amazon EventBridge (Amazon CloudWatch Events) hourly rule with a schedule pattern to run an AWS Systems Manager Automation document to enable CloudTrail.

Question was not answered

  1. A company has a stateless application that runs on four Amazon EC2 instances. The application requires four instances at all times to support all traffic. A SysOps administrator must design a highly available, fault-tolerant architecture that continually supports all traffic if one Availability Zone becomes unavailable.

Which configuration meets these requirements?

Deploy two Auto Scaling groups in two Availability Zones with a minimum capacity of two instances in each group.
Deploy an Auto Scaling group across two Availability Zones with a minimum capacity of four instances.
Deploy an Auto Scaling group across three Availability Zones with a minimum capacity of four instances.
Deploy an Auto Scaling group across three Availability Zones with a minimum capacity of six instances.correct

Question was not answered
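
The sizing here is just arithmetic: to keep the required instance count even after losing one Availability Zone, place ceil(required / (azs - 1)) instances in each AZ. A quick back-of-the-envelope check:

```python
import math

def min_capacity(required: int, azs: int) -> int:
    """Minimum total instances so that `required` survive the loss of one AZ."""
    per_az = math.ceil(required / (azs - 1))
    return per_az * azs

print(min_capacity(4, 3))  # 6: two per AZ, four remain if one AZ fails
print(min_capacity(4, 2))  # 8: four per AZ, four remain if one AZ fails
```

Four instances spread across three AZs would drop to two or three during an AZ outage until Auto Scaling replaces them, so that configuration is highly available but not continuously fault tolerant.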

  1. A company’s backend infrastructure contains an Amazon EC2 instance in a private subnet. The private subnet has a route to the internet through a NAT gateway in a public subnet. The instance must allow connectivity to a secure web server on the internet to retrieve data at regular intervals.

The client software times out with an error message that indicates that the client software could not establish the TCP connection.

What should a SysOps administrator do to resolve this error?

Add an inbound rule to the security group for the EC2 instance with the following parameters: Type - HTTP, Source - 0.0.0.0/0.
Add an inbound rule to the security group for the EC2 instance with the following parameters: Type - HTTPS, Source - 0.0.0.0/0.
Add an outbound rule to the security group for the EC2 instance with the following parameters: Type - HTTP, Destination - 0.0.0.0/0.
Add an outbound rule to the security group for the EC2 instance with the following parameters: Type - HTTPS, Destination - 0.0.0.0/0.correct

Question was not answered

  1. A software development company has multiple developers who work on the same product. Each developer must have their own development environment, and these development environments must be identical. Each development environment consists of Amazon EC2 instances and an Amazon RDS DB instance. The development environments should be created only when necessary, and they must be terminated each night to minimize costs.

What is the MOST operationally efficient solution that meets these requirements?

Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly cron job on each development instance to stop all running processes to reduce CPU utilization to nearly zero.
Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to delete the AWS CloudFormation stacks.correct
Provide developers with CLI commands so that they can provision their own development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to terminate all EC2 instances and the DB instance.
Provide developers with CLI commands so that they can provision their own development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to cause AWS CloudFormation to delete all of the development environment resources.

Question was not answered

  1. A company runs a stateless application that is hosted on an Amazon EC2 instance. Users are reporting performance issues. A SysOps administrator reviews the Amazon CloudWatch metrics for the application and notices that the instance’s CPU utilization frequently reaches 90% during business hours.

What is the MOST operationally efficient solution that will improve the application’s responsiveness?

Configure CloudWatch logging on the EC2 instance. Configure a CloudWatch alarm for CPU utilization to alert the SysOps administrator when CPU utilization goes above 90%.
Configure an AWS Client VPN connection to allow the application users to connect directly to the EC2 instance private IP address to reduce latency.
Create an Auto Scaling group, and assign it to an Application Load Balancer. Configure a target tracking scaling policy that is based on the average CPU utilization of the Auto Scaling group.correct
Create a CloudWatch alarm that activates when the EC2 instance's CPU utilization goes above 80%. Configure the alarm to invoke an AWS Lambda function that vertically scales the instance.

Question was not answered

  1. A company is testing Amazon Elasticsearch Service (Amazon ES) as a solution for analyzing system logs from a fleet of Amazon EC2 instances. During the test phase, the domain operates on a single-node cluster. A SysOps administrator needs to transition the test domain into a highly available production-grade deployment.

Which Amazon ES configuration should the SysOps administrator use to meet this requirement?

Use a cluster of four data nodes across two AWS Regions. Deploy four dedicated master nodes in each Region.
Use a cluster of six data nodes across three Availability Zones. Use three dedicated master nodes.correct
Use a cluster of six data nodes across three Availability Zones. Use six dedicated master nodes.
Use a cluster of eight data nodes across two Availability Zones. Deploy four master nodes in a failover AWS Region.

Question was not answered

  1. A company recently acquired another corporation and all of that corporation’s AWS accounts. A financial analyst needs the cost data from these accounts. A SysOps administrator uses Cost Explorer to generate cost and usage reports. The SysOps administrator notices that “No Tagkey” represents 20% of the monthly cost.

What should the SysOps administrator do to tag the “No Tagkey” resources?

Add the accounts to AWS Organizations. Use a service control policy (SCP) to tag all the untagged resources.
Use an AWS Config rule to find the untagged resources. Set the remediation action to terminate the resources.
Use Cost Explorer to find and tag all the untagged resources.
Use Tag Editor to find and tag all the untagged resources.correct

Question was not answered Explanation:

“You can add tags to resources when you create the resource. You can use the resource’s service console or API to add, change, or remove those tags one resource at a time. To add tags to―or edit or delete tags of―multiple resources at once, use Tag Editor. With Tag Editor, you search for the resources that you want to tag, and then manage tags for the resources in your search results.” https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html

  1. A company is using Amazon Elastic File System (Amazon EFS) to share a file system among several Amazon EC2 instances. As usage increases, users report that file retrieval from the EFS file system is slower than normal.

Which action should a SysOps administrator take to improve the performance of the file system?

Configure the file system for Provisioned Throughput.correct
Enable encryption in transit on the file system.
Identify any unused files in the file system, and remove the unused files.
Resize the Amazon Elastic Block Store (Amazon EBS) volume of each of the EC2 instances.

Question was not answered

  1. A SysOps administrator is helping a development team deploy an application to AWS. The AWS CloudFormation template includes an Amazon Linux EC2 instance, an Amazon Aurora DB cluster, and a hard-coded database password that must be rotated every 90 days.

What is the MOST secure way to manage the database password?

Use the AWS::SecretsManager::Secret resource with the GenerateSecretString property to automatically generate a password. Use the AWS::SecretsManager::RotationSchedule resource to define a rotation schedule for the password. Configure the application to retrieve the secret from AWS Secrets Manager to access the database.correct
Use the AWS::SecretsManager::Secret resource with the SecretString property. Accept a password as a CloudFormation parameter. Use the AllowedPattern property of the CloudFormation parameter to require a minimum length, uppercase and lowercase letters, and special characters. Configure the application to retrieve the secret from AWS Secrets Manager to access the database.
Use the AWS::SSM::Parameter resource. Accept input as a CloudFormation parameter to store the parameter as a SecureString. Configure the application to retrieve the parameter from AWS Systems Manager Parameter Store to access the database.
Use the AWS::SSM::Parameter resource. Accept input as a CloudFormation parameter to store the parameter as a String. Configure the application to retrieve the parameter from AWS Systems Manager Parameter Store to access the database.

Question was not answered
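
The two CloudFormation resources named in the correct option fit together as below. A minimal sketch, expressed as the template's JSON: the rotation Lambda ARN is a placeholder, and a real template would define or import that function.

```python
import json

# AWS::SecretsManager::Secret generates the password; the RotationSchedule
# rotates it every 90 days, matching the requirement in the question.
template_fragment = {
    "DBSecret": {
        "Type": "AWS::SecretsManager::Secret",
        "Properties": {
            "GenerateSecretString": {
                "SecretStringTemplate": json.dumps({"username": "admin"}),
                "GenerateStringKey": "password",
                "PasswordLength": 32,
                "ExcludeCharacters": "\"@/\\",
            }
        },
    },
    "DBSecretRotation": {
        "Type": "AWS::SecretsManager::RotationSchedule",
        "Properties": {
            "SecretId": {"Ref": "DBSecret"},
            # Placeholder ARN for the rotation function.
            "RotationLambdaARN": "arn:aws:lambda:us-east-1:111122223333:function:rotate-db-secret",
            "RotationRules": {"AutomaticallyAfterDays": 90},
        },
    },
}

print(json.dumps(template_fragment, indent=2))
```

Because the password never appears in the template or a parameter, it never lands in version control or stack events.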

  1. An application team uses an Amazon Aurora MySQL DB cluster with one Aurora Replica. The application team notices that the application read performance degrades when user connections exceed 200. The number of user connections is typically consistent around 180, with occasional sudden increases above 200 connections. The application team wants the application to automatically scale as user demand increases or decreases.

Which solution will meet these requirements?

Migrate to a new Aurora multi-master DB cluster. Modify the application database connection string.
Modify the DB cluster by changing to serverless mode whenever user connections exceed 200.
Create an auto scaling policy with a target metric of 195 DatabaseConnections.correct
Modify the DB cluster by increasing the Aurora Replica instance size.

Question was not answered

  1. A company’s SysOps administrator has created an Amazon EC2 instance with custom software that will be used as a template for all new EC2 instances across multiple AWS accounts. The Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the EC2 instance are encrypted with AWS managed keys.

The SysOps administrator creates an Amazon Machine Image (AMI) of the custom EC2 instance and plans to share the AMI with the company’s other AWS accounts. The company requires that all AMIs are encrypted with AWS Key Management Service (AWS KMS) keys and that only authorized AWS accounts can access the shared AMIs.

Which solution will securely share the AMI with the other AWS accounts?

In the account where the AMI was created, create a customer master key (CMK). Modify the key policy to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Modify the AMI permissions to specify the AWS account numbers that the AMI will be shared with.
In the account where the AMI was created, create a customer master key (CMK). Modify the key policy to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Create a copy of the AMI, and specify the CMK. Modify the permissions on the copied AMI to specify the AWS account numbers that the AMI will be shared with.correct
In the account where the AMI was created, create a customer master key (CMK). Modify the key policy to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Create a copy of the AMI, and specify the CMK. Modify the permissions on the copied AMI to make it public.
In the account where the AMI was created, modify the key policy of the AWS managed key to provide kms:DescribeKey, kms:ReEncrypt*, kms:CreateGrant, and kms:Decrypt permissions to the AWS accounts that the AMI will be shared with. Modify the AMI permissions to specify the AWS account numbers that the AMI will be shared with.

Question was not answered Explanation:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html
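
The key-policy statement the correct option describes can be sketched as below: grant the target account the four KMS permissions needed to use the CMK that encrypts the shared AMI's snapshots. The account ID is a placeholder.

```python
import json

def cross_account_kms_statement(account_id: str) -> dict:
    """Build the key-policy statement granting a target account use of the CMK."""
    return {
        "Sid": "AllowUseOfKeyForSharedAMI",
        "Effect": "Allow",
        # Granting the account root delegates access to that account's IAM policies.
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
        "Action": [
            "kms:DescribeKey",
            "kms:ReEncrypt*",
            "kms:CreateGrant",
            "kms:Decrypt",
        ],
        "Resource": "*",
    }

print(json.dumps(cross_account_kms_statement("444455556666"), indent=2))
```

The copy step matters because snapshots encrypted with AWS managed keys cannot be shared: the key policy of an AWS managed key cannot be modified, so the AMI must be re-encrypted under a customer managed CMK first.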

  1. A SysOps administrator is provisioning an Amazon Elastic File System (Amazon EFS) file system to provide shared storage across multiple Amazon EC2 instances. The instances all exist in the same VPC across multiple Availability Zones. There are two instances in each Availability Zone. The SysOps administrator must make the file system accessible to each instance with the lowest possible latency.

Which solution will meet these requirements?

Create a mount target for the EFS file system in the VPC. Use the mount target to mount the file system on each of the instances.
Create a mount target for the EFS file system in one Availability Zone of the VPC. Use the mount target to mount the file system on the instances in that Availability Zone. Share the directory with the other instances.
Create a mount target for each instance. Use each mount target to mount the EFS file system on each respective instance.
Create a mount target in each Availability Zone of the VPC. Use the mount target to mount the EFS file system on the instances in the respective Availability Zone.correct

Question was not answered Explanation:

A mount target provides an IP address for an NFSv4 endpoint at which you can mount an Amazon EFS file system. You mount your file system using its Domain Name Service (DNS) name, which resolves to the IP address of the EFS mount target in the same Availability Zone as your EC2 instance. You can create one mount target in each Availability Zone in an AWS Region. If there are multiple subnets in an Availability Zone in your VPC, you create a mount target in one of the subnets. Then all EC2 instances in that Availability Zone share that mount target. https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html

  1. A SysOps administrator has used AWS CloudFormation to deploy a serverless application into a production VPC. The application consists of an AWS Lambda function, an Amazon DynamoDB table, and an Amazon API Gateway API. The SysOps administrator must delete the AWS CloudFormation stack without deleting the DynamoDB table.

Which action should the SysOps administrator take before deleting the AWS CloudFormation stack?

Add a Retain deletion policy to the DynamoDB resource in the AWS CloudFormation stack.correct
Add a Snapshot deletion policy to the DynamoDB resource in the AWS CloudFormation stack.
Enable termination protection on the AWS CloudFormation stack.
Update the application's IAM policy with a Deny statement for the dynamodb:DeleteTable action.

Question was not answered
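
A minimal sketch of the Retain deletion policy from the correct option, expressed as the template's JSON: DeletionPolicy sits at the resource level, not inside Properties, so the table survives stack deletion. The resource name and table schema are placeholders.

```python
import json

resource = {
    "AppTable": {
        "Type": "AWS::DynamoDB::Table",
        # Resource-level attribute: keep the table when the stack is deleted.
        "DeletionPolicy": "Retain",
        "Properties": {
            "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
            "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
            "BillingMode": "PAY_PER_REQUEST",
        },
    }
}

print(json.dumps(resource, indent=2))
```

A Snapshot policy is not available for DynamoDB tables, and termination protection blocks the required stack deletion entirely, which is why Retain is the fit here.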

  1. A SysOps administrator is troubleshooting an AWS CloudFormation template whereby multiple Amazon EC2 instances are being created.

The template is working in us-east-1, but it is failing in us-west-2 with the error code:

How should the administrator ensure that the AWS CloudFormation template is working in every region?

Copy the source region's Amazon Machine Image (AMI) to the destination region and assign it the same ID.
Edit the AWS CloudFormation template to specify the region code as part of the fully qualified AMI ID.
Edit the AWS CloudFormation template to offer a drop-down list of all AMIs to the user by using the AWS::EC2::AMI::ImageId control.
Modify the AWS CloudFormation template by including the AMI IDs in the "Mappings" section. Refer to the proper mapping within the template for the proper AMI ID.correct

Question was not answered
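
The Mappings approach can be sketched as the template fragment below: a per-Region AMI map, looked up with Fn::FindInMap keyed on the AWS::Region pseudo parameter. The AMI IDs are placeholders; AMI IDs are region-specific, which is why the template fails when reused unchanged in another Region.

```python
import json

template_fragment = {
    "Mappings": {
        "RegionMap": {
            "us-east-1": {"AMI": "ami-0123456789abcdef0"},
            "us-west-2": {"AMI": "ami-0fedcba9876543210"},
        }
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Resolve the AMI ID for whichever Region the stack runs in.
                "ImageId": {
                    "Fn::FindInMap": ["RegionMap", {"Ref": "AWS::Region"}, "AMI"]
                }
            },
        }
    },
}

print(json.dumps(template_fragment, indent=2))
```

With the map in place, the same template deploys in every listed Region without edits.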

  1. A company runs its infrastructure on Amazon EC2 instances that run in an Auto Scaling group. Recently, the company promoted faulty code to the entire EC2 fleet. This faulty code caused the Auto Scaling group to scale the instances before any of the application logs could be retrieved.

What should a SysOps administrator do to retain the application logs after instances are terminated?

A. Configure an Auto Scaling lifecycle hook to create a snapshot of the ephemeral storage upon termination of the instances.

B. Create a new Amazon Machine Image (AMI) that has the Amazon CloudWatch agent installed and configured to send logs to Amazon CloudWatch Logs. Update the launch template to use the new AMI.

C. Create a new Amazon Machine Image (AMI) that has a custom script configured to send logs to AWS CloudTrail. Update the launch template to use the new AMI.

D. Install the Amazon CloudWatch agent on the Amazon Machine Image (AMI) that is defined in the launch template. Configure the CloudWatch agent to back up the logs to ephemeral storage.

  1. A company has a critical serverless application that uses multiple AWS Lambda functions. Each Lambda function generates 1 GB of log data daily in its own Amazon CloudWatch Logs log group. The company's security team asks for a count of application errors, grouped by type, across all of the log groups.

What should a SysOps administrator do to meet this requirement?

Perform a CloudWatch Logs Insights query that uses the stats command and count function.correct
Perform a CloudWatch Logs search that uses the groupby keyword and count function.
Perform an Amazon Athena query that uses the SELECT and GROUP BY keywords.
Perform an Amazon RDS query that uses the SELECT and GROUP BY keywords.

Question was not answered
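
A sketch of a CloudWatch Logs Insights query using the stats command and count function, as the correct option describes. The field name errorType is an assumption about what the Lambda functions log; the query is run with all of the application's log groups selected.

```python
# Logs Insights query: count errors, grouped by an assumed "errorType" field.
QUERY = """
filter @message like /ERROR/
| stats count(*) as errorCount by errorType
| sort errorCount desc
"""

print(QUERY.strip())
```

Logs Insights can query multiple log groups in a single run, which is what makes it the fit here versus exporting the logs to Athena or a database first.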

  1. A company monitors its account activity using AWS CloudTrail and is concerned that some log files are being tampered with after the logs have been delivered to the account's Amazon S3 bucket.

Moving forward, how can the SysOps administrator confirm that the log files have not been modified after being delivered to the S3 bucket?

Stream the CloudTrail logs to Amazon CloudWatch Logs to store logs at a secondary location.
Enable log file integrity validation and use digest files to verify the hash value of the log file. (Correct)
Replicate the S3 log bucket across regions, and encrypt log files with S3 managed keys.
Enable S3 server access logging to track requests made to the log bucket for security audits.

Explanation:

When you enable log file integrity validation, CloudTrail creates a hash for every log file that it delivers. Every hour, CloudTrail also creates and delivers a file that references the log files for the last hour and contains a hash of each. This file is called a digest file. CloudTrail signs each digest file using the private key of a public and private key pair. After delivery, you can use the public key to validate the digest file. CloudTrail uses different key pairs for each AWS Region.

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
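In practice the digest check is run for you by the AWS CLI command `aws cloudtrail validate-logs`. The hashing idea itself is simple; this sketch mimics it locally, with field names modeled on (but abridged from) the real digest format:

```python
import hashlib

# Hypothetical log file content and a matching digest entry. Real digest
# files list each log file's S3 key, hash value, and hash algorithm.
log_file_bytes = b'{"Records": []}'
digest_entry = {
    "s3Object": "AWSLogs/111122223333/CloudTrail/trail.json.gz",  # placeholder key
    "hashValue": hashlib.sha256(log_file_bytes).hexdigest(),
    "hashAlgorithm": "SHA-256",
}

def verify(log_bytes: bytes, entry: dict) -> bool:
    """Return True if the log file's hash matches the digest record."""
    return hashlib.sha256(log_bytes).hexdigest() == entry["hashValue"]

print(verify(log_file_bytes, digest_entry))  # True: file unmodified
print(verify(b"tampered", digest_entry))     # False: tampering detected
```

Because each digest file is also signed with a CloudTrail private key, an attacker cannot simply regenerate matching hashes after modifying a log file.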

  1. A team of on-call engineers frequently needs to connect to Amazon EC2 instances in a private subnet to troubleshoot and run commands. The instances use either the latest AWS-provided Windows Amazon Machine Images (AMIs) or Amazon Linux AMIs.

The team has an existing IAM role for authorization. A SysOps administrator must provide the team with access to the instances by granting IAM permissions to this role.

Which solution will meet this requirement?

Add a statement to the IAM role policy to allow the ssm:StartSession action on the instances. Instruct the team to use AWS Systems Manager Session Manager to connect to the instances by using the assumed IAM role. (Correct)
Associate an Elastic IP address and a security group with each instance. Add the engineers' IP addresses to the security group inbound rules. Add a statement to the IAM role policy to allow the ec2:AuthorizeSecurityGroupIngress action so that the team can connect to the instances.
Create a bastion host with an EC2 instance, and associate the bastion host with the VPC. Add a statement to the IAM role policy to allow the ec2:CreateVpnConnection action on the bastion host. Instruct the team to use the bastion host endpoint to connect to the instances.
Create an internet-facing Network Load Balancer. Use two listeners. Forward port 22 to a target group of Linux instances. Forward port 3389 to a target group of Windows instances. Add a statement to the IAM role policy to allow the ec2:CreateRoute action so that the team can connect to the instances.


  1. A company has an AWS CloudFormation template that creates an Amazon S3 bucket. A user authenticates to the corporate AWS account with their Active Directory credentials and attempts to deploy the CloudFormation template. However, the stack creation fails.

Which factors could cause this failure? (Select TWO.)

The user's IAM policy does not allow the cloudformation:CreateStack action. (Correct)
The user's IAM policy does not allow the cloudformation:CreateStackSet action.
The user's IAM policy does not allow the s3:CreateBucket action. (Correct)
The user's IAM policy explicitly denies the s3:ListBucket action.
The user's IAM policy explicitly denies the s3:PutObject action.


  1. A company runs a web application on three Amazon EC2 instances behind an Application Load Balancer (ALB). The company notices that random periods of increased traffic cause a degradation in the application’s performance. A SysOps administrator must scale the application to meet the increased traffic.

Which solution meets these requirements?

Create an Amazon CloudWatch alarm to monitor application latency and increase the size of each EC2 instance if the desired threshold is reached.
Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor application latency and add an EC2 instance to the ALB if the desired threshold is reached.
Deploy the application to an Auto Scaling group of EC2 instances with a target tracking scaling policy. Attach the ALB to the Auto Scaling group. (Correct)
Deploy the application to an Auto Scaling group of EC2 instances with a scheduled scaling policy. Attach the ALB to the Auto Scaling group.

Explanation:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
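With boto3, the target tracking policy from the correct answer could be configured with parameters along these lines (a sketch only; the group name and the 50% CPU target are assumed example values):

```python
# Parameters for autoscaling.put_scaling_policy (boto3). The Auto Scaling
# group adds or removes instances to keep average CPU near the target,
# which absorbs random traffic spikes without manual intervention.
policy_params = {
    "AutoScalingGroupName": "web-asg",          # hypothetical group name
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # assumed target: keep average CPU near 50%
    },
}
print(policy_params["TargetTrackingConfiguration"]["TargetValue"])  # 50.0
```

The call itself would be `boto3.client("autoscaling").put_scaling_policy(**policy_params)` once the ALB target group is attached to the group.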

  1. A company has a new requirement stating that all resources in AWS must be tagged according to a set policy.

Which AWS service should be used to enforce and continually identify all resources that are not in compliance with the policy?

AWS CloudTrail
Amazon Inspector
AWS Config (Correct)
AWS Systems Manager


  1. A SysOps administrator is setting up an automated process to recover an Amazon EC2 instance in the event of an underlying hardware failure. The recovered instance must have the same private IP address and the same Elastic IP address that the original instance had. The SysOps team must receive an email notification when the recovery process is initiated.

Which solution will meet these requirements?

Create an Amazon CloudWatch alarm for the EC2 instance, and specify the StatusCheckFailed_Instance metric. Add an EC2 action to the alarm to recover the instance. Add an alarm notification to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic.
Create an Amazon CloudWatch alarm for the EC2 instance, and specify the StatusCheckFailed_System metric. Add an EC2 action to the alarm to recover the instance. Add an alarm notification to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic. (Correct)
Create an Auto Scaling group across three different subnets in the same Availability Zone with a minimum, maximum, and desired size of 1. Configure the Auto Scaling group to use a launch template that specifies the private IP address and the Elastic IP address. Add an activity notification for the Auto Scaling group to send an email message to the SysOps team through Amazon Simple Email Service (Amazon SES).
Create an Auto Scaling group across three Availability Zones with a minimum, maximum, and desired size of 1. Configure the Auto Scaling group to use a launch template that specifies the private IP address and the Elastic IP address. Add an activity notification for the Auto Scaling group to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic.

Explanation:

You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair. Terminated instances cannot be recovered. A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata. If the impaired instance has a public IPv4 address, the instance retains the public IPv4 address after recovery. If the impaired instance is in a placement group, the recovered instance runs in the placement group. When the StatusCheckFailed_System alarm is triggered and the recover action is initiated, you will be notified by the Amazon SNS topic that you selected when you created the alarm and associated the recover action.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
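As a sketch, the alarm from the correct answer could be created with boto3's put_metric_alarm using parameters like these (the instance ID, Region, and SNS topic ARN are placeholders):

```python
# Parameters for cloudwatch.put_metric_alarm (boto3). The recover action is
# expressed as the arn:aws:automate:<region>:ec2:recover action ARN; the SNS
# action delivers the email notification to the subscribed team.
alarm_params = {
    "AlarmName": "recover-on-system-check-failure",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Maximum",
    "Period": 60,                 # evaluate the status check every minute
    "EvaluationPeriods": 2,       # assumed: fail twice before recovering
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": [
        "arn:aws:automate:us-east-1:ec2:recover",            # EC2 recover action
        "arn:aws:sns:us-east-1:111122223333:sysops-alerts",  # placeholder SNS topic
    ],
}
print(len(alarm_params["AlarmActions"]))  # 2
```

Listing both actions on the same alarm is what satisfies the "recover and notify" requirement in one place.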

  1. A SysOps administrator must create a solution that immediately notifies software developers if an AWS Lambda function experiences an error.

Which solution will meet this requirement?

Create an Amazon Simple Notification Service (Amazon SNS) topic with an email subscription for each developer. Create an Amazon CloudWatch alarm by using the Errors metric and the Lambda function name as a dimension. Configure the alarm to send a notification to the SNS topic when the alarm state reaches ALARM. (Correct)
Create an Amazon Simple Notification Service (Amazon SNS) topic with a mobile subscription for each developer. Create an Amazon EventBridge (Amazon CloudWatch Events) alarm by using LambdaError as the event pattern and the SNS topic name as a resource. Configure the alarm to send a notification to the SNS topic when the alarm state reaches ALARM.
Verify each developer email address in Amazon Simple Email Service (Amazon SES). Create an Amazon CloudWatch rule by using the LambdaError metric and developer email addresses as dimensions. Configure the rule to send an email through Amazon SES when the rule state reaches ALARM.
Verify each developer mobile phone number in Amazon Simple Email Service (Amazon SES). Create an Amazon EventBridge (Amazon CloudWatch Events) rule by using Errors as the event pattern and the Lambda function name as a resource. Configure the rule to send a push notification through Amazon SES when the rule state reaches ALARM.


  1. A SysOps administrator developed a Python script that uses the AWS SDK to conduct several maintenance tasks. The script needs to run automatically every night.

What is the MOST operationally efficient solution that meets this requirement?

Convert the Python script to an AWS Lambda function. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke the function every night. (Correct)
Convert the Python script to an AWS Lambda function. Use AWS CloudTrail to invoke the function every night.
Deploy the Python script to an Amazon EC2 instance. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule the instance to start and stop every night.
Deploy the Python script to an Amazon EC2 instance. Use AWS Systems Manager to schedule the instance to start and stop every night.


  1. A SysOps administrator must create a solution that automatically shuts down any Amazon EC2 instances that have less than 10% average CPU utilization for 60 minutes or more.

Which solution will meet this requirement in the MOST operationally efficient manner?

Implement a cron job on each EC2 instance to run once every 60 minutes and calculate the current CPU utilization. Initiate an instance shutdown if CPU utilization is less than 10%.
Implement an Amazon CloudWatch alarm for each EC2 instance to monitor average CPU utilization. Set the period at 1 hour, and set the threshold at 10%. Configure an EC2 action on the alarm to stop the instance. (Correct)
Install the unified Amazon CloudWatch agent on each EC2 instance, and enable the Basic level predefined metric set. Log CPU utilization every 60 minutes, and initiate an instance shutdown if CPU utilization is less than 10%.
Use AWS Systems Manager Run Command to get CPU utilization from each EC2 instance every 60 minutes. Initiate an instance shutdown if CPU utilization is less than 10%.

Explanation:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html

  1. A company uses AWS CloudFormation templates to deploy cloud infrastructure. An analysis of all the company’s templates shows that the company has declared the same components in multiple templates. A SysOps administrator needs to create dedicated templates that have their own parameters and conditions for these common components.

Which solution will meet this requirement?

Develop a CloudFormation change set.
Develop CloudFormation macros.
Develop CloudFormation nested stacks. (Correct)
Develop CloudFormation stack sets.


  1. A company has deployed AWS Security Hub and AWS Config in a newly implemented organization in AWS Organizations. A SysOps administrator must implement a solution to restrict all member accounts in the organization from deploying Amazon EC2 resources in the ap-southeast-2 Region. The solution must be implemented from a single point and must govern all current and future accounts. The use of root credentials also must be restricted in member accounts.

Which AWS feature should the SysOps administrator use to meet these requirements?

AWS Config aggregator
IAM user permissions boundaries
AWS Organizations service control policies (SCPs) (Correct)
AWS Security Hub conformance packs



  1. A SysOps administrator has used AWS CloudFormation to deploy a serverless application into a production VPC. The application consists of an AWS Lambda function, an Amazon DynamoDB table, and an Amazon API Gateway API. The SysOps administrator must delete the AWS CloudFormation stack without deleting the DynamoDB table.

Which action should the SysOps administrator take before deleting the AWS CloudFormation stack?

Add a Retain deletion policy to the DynamoDB resource in the AWS CloudFormation stack. (Correct)
Add a Snapshot deletion policy to the DynamoDB resource in the AWS CloudFormation stack.
Enable termination protection on the AWS CloudFormation stack.
Update the application's IAM policy with a Deny statement for the dynamodb:DeleteTable action.



  1. A SysOps administrator needs to give users the ability to upload objects to an Amazon S3 bucket. The SysOps administrator creates a presigned URL and provides the URL to a user, but the user cannot upload an object to the S3 bucket. The presigned URL has not expired, and no bucket policy is applied to the S3 bucket.

Which of the following could be the cause of this problem?

The user has not properly configured the AWS CLI with their access key and secret access key.
The SysOps administrator does not have the necessary permissions to upload the object to the S3 bucket. (Correct)
The SysOps administrator must apply a bucket policy to the S3 bucket to allow the user to upload the object.
The object already has been uploaded through the use of the presigned URL, so the presigned URL is no longer valid.


  1. A SysOps administrator is responsible for a legacy, CPU-heavy application. The application can only be scaled vertically. Currently, the application is deployed on a single t2.large Amazon EC2 instance. The system shows 90% CPU usage and significant performance latency after a few minutes.

What change should be made to alleviate the performance problem?

Change the Amazon EBS volume to Provisioned IOPS.
Upgrade to a compute-optimized instance. (Correct)
Add additional t2.large instances to the application.
Purchase Reserved Instances.


  1. A company is running a website on Amazon EC2 instances that are in an Auto Scaling group. When the website traffic increases, additional instances take several minutes to become available because of a long-running user data script that installs software. A SysOps administrator must decrease the time that is required for new instances to become available.

Which action should the SysOps administrator take to meet this requirement?

Reduce the scaling thresholds so that instances are added before traffic increases.
Purchase Reserved Instances to cover 100% of the maximum capacity of the Auto Scaling group.
Update the Auto Scaling group to launch instances that have a storage optimized instance type.
Use EC2 Image Builder to prepare an Amazon Machine Image (AMI) that has pre-installed software. (Correct)

Explanation:

EC2 Image Builder provides an automated pipeline to keep the AMI up to date. When instances boot from an AMI that already has the software pre-installed, the long-running user data script is no longer needed during boot, so new instances become available faster. https://aws.amazon.com/image-builder/

  1. A SysOps administrator is notified that an Amazon EC2 instance has stopped responding. The AWS Management Console indicates that the system status checks are failing.

What should the administrator do first to resolve this issue?

Reboot the EC2 instance so it can be launched on a new host.
Stop and then start the EC2 instance so that it can be launched on a new host. (Correct)
Terminate the EC2 instance and relaunch it.
View the AWS CloudTrail log to investigate what changed on the EC2 instance.

Explanation:

https://aws.amazon.com/premiumsupport/knowledge-center/ec2-windows-system-status-check-fail/

  1. A SysOps administrator has enabled AWS CloudTrail in an AWS account. If CloudTrail is disabled, it must be re-enabled immediately. What should the SysOps administrator do to meet these requirements WITHOUT writing custom code?

Add the AWS account to AWS Organizations. Enable CloudTrail in the management account.
Create an AWS Config rule that is invoked when CloudTrail configuration changes. Apply the AWS-ConfigureCloudTrailLogging automatic remediation action. (Correct)
Create an AWS Config rule that is invoked when CloudTrail configuration changes. Configure the rule to invoke an AWS Lambda function to enable CloudTrail.
Create an Amazon EventBridge (Amazon CloudWatch Events) hourly rule with a schedule pattern to run an AWS Systems Manager Automation document to enable CloudTrail.


  1. A recent audit found that most resources belonging to the development team were in violation of patch compliance standards. The resources were properly tagged.

Which service should be used to quickly remediate the issue and bring the resources back into compliance?

AWS Config
Amazon Inspector
AWS Trusted Advisor
AWS Systems Manager (Correct)


  1. An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A SysOps administrator must ensure that the application can read, write, and delete messages from the SQS queues.

Which solution will meet these requirements in the MOST secure manner?

Create an IAM user with an IAM policy that allows the sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage permissions on the appropriate queues. Embed the IAM user's credentials in the application's configuration.
Create an IAM user with an IAM policy that allows the sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage permissions on the appropriate queues. Export the IAM user's access key and secret access key as environment variables on the EC2 instance.
Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows sqs:* permissions on the appropriate queues.
Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage permissions on the appropriate queues. (Correct)


  1. A development team recently deployed a new version of a web application to production. After the release, penetration testing revealed a cross-site scripting vulnerability that could expose user data.

Which AWS service will mitigate this issue?

AWS Shield Standard
AWS WAF (Correct)
Elastic Load Balancing
Amazon Cognito

Explanation:

https://www.imperva.com/learn/application-security/cross-site-scripting-xss-attacks/

  1. A company uses an AWS CloudFormation template to provision an Amazon EC2 instance and an Amazon RDS DB instance. A SysOps administrator must update the template to ensure that the DB instance is created before the EC2 instance is launched.

What should the SysOps administrator do to meet this requirement?

Add a wait condition to the template. Update the EC2 instance user data script to send a signal after the EC2 instance is started.
Add the DependsOn attribute to the EC2 instance resource, and provide the logical name of the RDS resource. (Correct)
Change the order of the resources in the template so that the RDS resource is listed before the EC2 instance resource.
Create multiple templates. Use AWS CloudFormation StackSets to wait for one stack to complete before the second stack is created.

Explanation:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-dependson.html

Syntax: the DependsOn attribute can take a single string or a list of strings: "DependsOn" : [ String, ... ]
Example: the following template contains an AWS::EC2::Instance resource with a DependsOn attribute that specifies myDB, an AWS::RDS::DBInstance. When CloudFormation creates this stack, it first creates myDB, then creates Ec2Instance.
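A minimal sketch of that template, expressed here as a Python dict for brevity (the property values are placeholders, not a deployable configuration):

```python
# CloudFormation template fragment matching the explanation above: the EC2
# instance declares DependsOn on the DB resource, so myDB is created first.
template = {
    "Resources": {
        "myDB": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {                       # placeholder DB properties
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.micro",
                "AllocatedStorage": "20",
            },
        },
        "Ec2Instance": {
            "Type": "AWS::EC2::Instance",
            "DependsOn": "myDB",                  # forces creation ordering
            "Properties": {                       # placeholder EC2 properties
                "ImageId": "ami-0123456789abcdef0",
                "InstanceType": "t3.micro",
            },
        },
    },
}
print(template["Resources"]["Ec2Instance"]["DependsOn"])  # myDB
```

Without DependsOn, CloudFormation creates resources in parallel whenever no implicit reference (such as !Ref) links them, so ordering is not guaranteed.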

  1. A company has an existing web application that runs on two Amazon EC2 instances behind an Application Load Balancer (ALB) across two Availability Zones. The application uses an Amazon RDS Multi-AZ DB instance. Amazon Route 53 record sets route requests for dynamic content to the load balancer and requests for static content to an Amazon S3 bucket. Site visitors are reporting extremely long loading times.

Which actions should be taken to improve the performance of the website? (Select TWO.)

Add Amazon CloudFront caching for static content. (Correct)
Change the load balancer listener from HTTPS to TCP.
Enable Amazon Route 53 latency-based routing.
Implement Amazon EC2 Auto Scaling for the web servers. (Correct)
Move the static content from Amazon S3 to the web servers.


  1. A company is running an application on premises and wants to use AWS for data backup. All of the data must be available locally. The backup application can write only to block-based storage that is compatible with the Portable Operating System Interface (POSIX).

Which backup solution will meet these requirements?

Configure the backup software to use Amazon S3 as the target for the data backups.
Configure the backup software to use Amazon S3 Glacier as the target for the data backups.
Use AWS Storage Gateway, and configure it to use gateway-cached volumes.
Use AWS Storage Gateway, and configure it to use gateway-stored volumes. (Correct)

Explanation:

https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html

  1. An organization created an Amazon Elastic File System (Amazon EFS) volume with a file system ID of fs-85ba4Kc, and it is actively used by 10 Amazon EC2 hosts. The organization has become concerned that the file system is not encrypted.

How can this be resolved?

Enable encryption on each host's connection to the Amazon EFS volume. Each connection must be recreated for encryption to take effect.
Enable encryption on the existing EFS volume by using the AWS Command Line Interface.
Enable encryption on each host's local drive. Restart each host to encrypt the drive.
Enable encryption on a newly created volume, and copy all data from the original volume. Reconnect each host to the new volume. (Correct)

Explanation:

https://docs.aws.amazon.com/efs/latest/ug/encryption.html

Amazon EFS supports two forms of encryption for file systems: encryption of data in transit and encryption at rest. You can enable encryption of data at rest only when creating an Amazon EFS file system. You can enable encryption of data in transit when you mount the file system.

  1. While setting up an AWS managed VPN connection, a SysOps administrator creates a customer gateway resource in AWS. The customer gateway device resides in a data center with a NAT gateway in front of it.

What address should be used to create the customer gateway resource?

The private IP address of the customer gateway device
The MAC address of the NAT device in front of the customer gateway device
The public IP address of the customer gateway device
The public IP address of the NAT device in front of the customer gateway device (Correct)


  1. An errant process is known to use an entire processor and run at 100%. A SysOps administrator wants to automate restarting the instance once the problem occurs for more than 2 minutes.

How can this be accomplished?

Create an Amazon CloudWatch alarm for the Amazon EC2 instance with basic monitoring. Enable an action to restart the instance.
Create a CloudWatch alarm for the EC2 instance with detailed monitoring. Enable an action to restart the instance. (Correct)
Create an AWS Lambda function to restart the EC2 instance, triggered on a scheduled basis every 2 minutes.
Create a Lambda function to restart the EC2 instance, triggered by EC2 health checks.


  1. A SysOps administrator notices a scale-up event for an Amazon EC2 Auto Scaling group. Amazon CloudWatch shows a spike in the RequestCount metric for the associated Application Load Balancer. The administrator would like to know the IP addresses for the source of the requests.

Where can the administrator find this information?

Auto Scaling logs
AWS CloudTrail logs
EC2 instance logs
Elastic Load Balancer access logs (Correct)

Explanation:

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
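Each ALB access log entry is space-delimited, with the client address and port as the fourth field. A quick sketch of extracting the client IP (the log line below is an abridged, invented example, not a full-length entry):

```python
# Abridged ALB access log entry: type, timestamp, elb, client:port,
# target:port, processing times, status codes, bytes, request line.
entry = ('http 2024-01-01T00:00:10.000000Z app/my-alb/abc123 '
         '203.0.113.10:54321 10.0.1.5:80 0.001 0.005 0.000 200 200 120 512 '
         '"GET http://example.com:80/ HTTP/1.1"')

# Field 4 (index 3) is client:port; strip the port with rsplit.
client_ip = entry.split()[3].rsplit(":", 1)[0]
print(client_ip)  # 203.0.113.10
```

Access logging must be enabled on the load balancer first; the logs are then delivered to an S3 bucket, where they can be analyzed directly or queried with a tool such as Amazon Athena.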

  1. An organization with a large IT department has decided to migrate to AWS. With different job functions in the IT department, it is not desirable to give all users access to all AWS resources. Currently, the organization handles access via LDAP group membership.

What is the BEST method to allow access using current LDAP credentials?

Create an AWS Directory Service Simple AD. Replicate the on-premises LDAP directory to Simple AD.
Create a Lambda function to read LDAP groups and automate the creation of IAM users.
Use AWS CloudFormation to create IAM roles. Deploy Direct Connect to allow access to the on-premises LDAP server.
Federate the LDAP directory with IAM using SAML. Create different IAM roles to correspond to different LDAP groups to limit permissions. (Correct)


  1. An Amazon S3 Inventory report reveals that more than 1 million objects in an S3 bucket are not encrypted. These objects must be encrypted, and all future objects must be encrypted at the time they are written.

Which combination of actions should a SysOps administrator take to meet these requirements? (Select TWO.)

Create an AWS Config rule that runs evaluations against configuration changes to the S3 bucket. When an unencrypted object is found, run an AWS Systems Manager Automation document to encrypt the object in place.
Edit the properties of the S3 bucket to enable default server-side encryption. (Correct)
Filter the S3 Inventory report by using S3 Select to find all objects that are not encrypted. Create an S3 Batch Operations job to copy each object in place with encryption enabled. (Correct)
Filter the S3 Inventory report by using S3 Select to find all objects that are not encrypted. Send each object name as a message to an Amazon Simple Queue Service (Amazon SQS) queue. Use the SQS queue to invoke an AWS Lambda function to tag each object with a key of "Encryption" and a value of "SSE-KMS".
Use S3 Event Notifications to invoke an AWS Lambda function on all new object-created events for the S3 bucket. Configure the Lambda function to check whether the object is encrypted and to run an AWS Systems Manager Automation document to encrypt the object in place when an unencrypted object is found.

Explanation:

https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/
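Enabling default server-side encryption on the bucket is a one-time call; with boto3's put_bucket_encryption the parameters would look roughly like this (the bucket name is a placeholder, and SSE-S3/AES256 is assumed rather than SSE-KMS):

```python
# Parameters for s3.put_bucket_encryption (boto3). After this is applied,
# every new object written without its own encryption header is encrypted
# with the bucket's default algorithm at write time.
encryption_params = {
    "Bucket": "example-bucket",  # placeholder bucket name
    "ServerSideEncryptionConfiguration": {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
}

rule = encryption_params["ServerSideEncryptionConfiguration"]["Rules"][0]
print(rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])  # AES256
```

Default encryption covers future writes only, which is why the existing 1 million objects still need the S3 Batch Operations copy-in-place job.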

  1. A company is using an AWS KMS customer master key (CMK) with imported key material. The company references the CMK by its alias in its Java application to encrypt data. The CMK must be rotated every 6 months.

What is the process to rotate the key?

Enable automatic key rotation for the CMK, and specify a period of 6 months.
Create a new CMK with new imported material, and update the key alias to point to the new CMK. (Correct)
Delete the current key material, and import new material into the existing CMK.
Import a copy of the existing key material into a new CMK as a backup, and set the rotation schedule for 6 months.


  1. A company is running a serverless application on AWS Lambda. The application stores data in an Amazon RDS for MySQL DB instance. Usage has steadily increased, and recently there have been numerous “too many connections” errors when the Lambda function attempts to connect to the database. The company already has configured the database to use the maximum max_connections value that is possible.

What should a SysOps administrator do to resolve these errors?

Create a read replica of the database. Use Amazon Route 53 to create a weighted DNS record that contains both databases.
Use Amazon RDS Proxy to create a proxy. Update the connection string in the Lambda function. (Correct)
Increase the value in the max_connect_errors parameter in the parameter group that the database uses.
Update the Lambda function's reserved concurrency to a higher value.

Explanation:

https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/

RDS Proxy acts as an intermediary between your application and an RDS database. RDS Proxy establishes and manages the necessary connection pools to your database so that your application creates fewer database connections. Your Lambda functions interact with RDS Proxy instead of your database instance. It handles the connection pooling necessary for scaling many simultaneous connections created by concurrent Lambda functions. This allows your Lambda applications to reuse existing connections, rather than creating new connections for every function invocation.

See the “Database proxy for Amazon RDS” section in the link above to see how RDS Proxy helps Lambda handle a large number of connections to RDS for MySQL.
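A toy sketch of why connection pooling resolves the errors: with a pool (as RDS Proxy provides), repeated invocations reuse one connection instead of each opening its own. The Pool class below is purely illustrative, not an RDS Proxy API:

```python
# Minimal connection-pool illustration. Without pooling, 100 invocations
# would each open a fresh database connection; with pooling, released
# connections are handed back out, so far fewer are ever created.
class Pool:
    def __init__(self):
        self.created = 0   # total connections ever opened
        self.idle = []     # released connections available for reuse

    def acquire(self):
        if self.idle:
            return self.idle.pop()       # reuse an existing connection
        self.created += 1                # otherwise open a new one
        return f"conn-{self.created}"

    def release(self, conn):
        self.idle.append(conn)

pool = Pool()
# 100 sequential invocations, each acquiring and releasing one connection:
for _ in range(100):
    conn = pool.acquire()
    pool.release(conn)
print(pool.created)  # 1 -- one connection reused, instead of 100 without pooling
```

Concurrent Lambda invocations still need more than one connection at a time, but the proxy caps and shares them so the database's max_connections limit is no longer hit by invocation churn.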

  1. A company stores files in 50 Amazon S3 buckets in the same AWS Region. The company wants to connect to the S3 buckets securely over a private connection from its Amazon EC2 instances. The company needs a solution that incurs no additional cost.

Which solution will meet these requirements?

Create a gateway VPC endpoint for each S3 bucket. Attach the gateway VPC endpoints to each subnet inside the VPC.
Create an interface VPC endpoint for each S3 bucket. Attach the interface VPC endpoints to each subnet inside the VPC.
Create one gateway VPC endpoint for all the S3 buckets. Add the gateway VPC endpoint to the VPC route table. (Correct)
Create one interface VPC endpoint for all the S3 buckets. Add the interface VPC endpoint to the VPC route table.


  1. A company uses AWS CloudFormation to deploy its application infrastructure. Recently, a user accidentally changed a property of a database in a CloudFormation template and performed a stack update that caused an interruption to the application. A SysOps administrator must determine how to modify the deployment process to allow the DevOps team to continue to deploy the infrastructure but protect against accidental modifications to specific resources.

Which solution will meet these requirements?

Set up an AWS Config rule to alert based on changes to any CloudFormation stack. An AWS Lambda function can then describe the stack to determine whether any protected resources were modified and cancel the operation.
Set up an Amazon CloudWatch Events event with a rule to trigger based on any CloudFormation API call. An AWS Lambda function can then describe the stack to determine whether any protected resources were modified and cancel the operation.
Launch the CloudFormation templates using a stack policy with an explicit allow for all resources and an explicit deny of the protected resources with an action of Update.correct
Attach an IAM policy to the DevOps team role that prevents a CloudFormation stack from updating, with a condition based on the specific Amazon Resource Names (ARNs) of the protected resources.

Question was not answered
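
A stack policy is the CloudFormation mechanism for protecting specific stack resources during updates. A minimal sketch follows; the logical resource ID `ProductionDatabase` is a hypothetical name for the protected database resource:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "LogicalResourceId/ProductionDatabase"
    }
  ]
}
```

With this policy attached to the stack, updates to all other resources proceed normally, while any update that would modify the protected resource fails unless the policy is explicitly overridden.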

  1. A company’s financial department needs to view the cost details of each project in an AWS account. A SysOps administrator must perform the initial configuration that is required to view the cost for each project in Cost Explorer.

Which solution will meet this requirement?

Activate cost allocation tags. Add a project tag to the appropriate resources.correct
Configure consolidated billing. Create AWS Cost and Usage Reports.
Use AWS Budgets. Create AWS Budgets reports.
Use cost categories to define custom groups that are based on AWS cost and usage dimensions.

Question was not answered

  1. A company is managing multiple AWS accounts in AWS Organizations. The company is reviewing the internal security of its AWS environment. The company’s security administrator has their own AWS account and wants to review the VPC configuration of the developer AWS accounts.

Which solution will meet these requirements in the MOST secure manner?

Create an IAM policy in each developer account that has read-only access related to VPC resources. Assign the policy to an IAM user. Share the user credentials with the security administrator.
Create an IAM policy in each developer account that has administrator access to all Amazon EC2 actions, including VPC actions. Assign the policy to an IAM user. Share the user credentials with the security administrator.
Create an IAM policy in each developer account that has administrator access related to VPC resources. Assign the policy to a cross-account IAM role. Ask the security administrator to assume the role from their account.
Create an IAM policy in each developer account that has read-only access related to VPC resources. Assign the policy to a cross-account IAM role. Ask the security administrator to assume the role from their account.correct

Question was not answered
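
The cross-account role in the marked answer has two halves: a read-only permissions policy and a trust policy that lets the security administrator's account assume the role. A minimal trust policy sketch follows; the account ID 111122223333 is a placeholder for the security administrator's account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

No long-lived credentials are shared; the security administrator calls sts:AssumeRole from their own account to obtain temporary credentials, which is why this option is the most secure.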

  1. An application runs on multiple Amazon EC2 instances in an Auto Scaling group. The Auto Scaling group is configured to use the latest version of a launch template. A SysOps administrator must devise a solution that centrally manages the application logs and retains the logs for no more than 90 days.

Which solution will meet these requirements?

Launch an Amazon Machine Image (AMI) that is preconfigured with the Amazon CloudWatch Logs agent to send logs to an Amazon S3 bucket. Apply a 90-day S3 Lifecycle policy on the S3 bucket to expire the application logs.
Launch an Amazon Machine Image (AMI) that is preconfigured with the Amazon CloudWatch Logs agent to send logs to a log group. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule to perform an instance refresh every 90 days.
Update the launch template user data to install and configure the Amazon CloudWatch Logs agent to send logs to a log group. Configure the retention period on the log group to be 90 days.correct
Update the launch template user data to install and configure the Amazon CloudWatch Logs agent to send logs to a log group. Set the log rotation configuration of the EC2 instances to 90 days.

Question was not answered
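
The marked answer can be sketched as a CloudWatch agent configuration fragment; the file path and log group name here are hypothetical. Recent agent versions support a `retention_in_days` setting that applies the retention policy when the agent creates the log group:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/app/app.log",
            "log_group_name": "my-app-logs",
            "log_stream_name": "{instance_id}",
            "retention_in_days": 90
          }
        ]
      }
    }
  }
}
```

Alternatively, the retention can be set once on the log group itself from the console or CLI; CloudWatch Logs then deletes events older than 90 days automatically.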

  1. A company has a mobile app that uses Amazon S3 to store images. The images are popular for a week, and then the number of access requests decreases over time. The images must be highly available and must be immediately accessible upon request. A SysOps administrator must reduce S3 storage costs for the company.

Which solution will meet these requirements MOST cost-effectively?

Create an S3 Lifecycle policy to transition the images to S3 Glacier after 7 days
Create an S3 Lifecycle policy to transition the images to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days
Create an S3 Lifecycle policy to transition the images to S3 Standard after 7 days
Create an S3 Lifecycle policy to transition the images to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 dayscorrect

Question was not answered

  1. A SysOps administrator receives notification that an application that is running on Amazon EC2 instances has failed to authenticate to an Amazon RDS database. To troubleshoot, the SysOps administrator needs to investigate AWS Secrets Manager password rotation

Which Amazon CloudWatch log will provide insight into the password rotation?

AWS CloudTrail logs
EC2 instance application logs
AWS Lambda function logscorrect
RDS database logs

Question was not answered

  1. An AWS Lambda function is intermittently failing several times a day. A SysOps administrator must find out how often this error has occurred in the last 7 days.

Which action will meet this requirement in the MOST operationally efficient manner?

Use Amazon Athena to query the Amazon CloudWatch logs that are associated with the Lambda function
Use Amazon Athena to query the AWS CloudTrail logs that are associated with the Lambda function
Use Amazon CloudWatch Logs Insights to query the associated Lambda function logscorrect
Use Amazon Elasticsearch Service (Amazon ES) to stream the Amazon CloudWatch logs for the Lambda function

Question was not answered
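
A CloudWatch Logs Insights query along these lines counts failures per day when run against the function's log group with a 7-day time range. It is a sketch that assumes the function writes the word ERROR to its logs:

```
filter @message like /ERROR/
| stats count(*) as errorCount by bin(1d)
```

Logs Insights queries the existing log group directly, so no export, streaming, or extra infrastructure is needed, which is what makes this option the most operationally efficient.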

  1. A company has an internal web application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone. A SysOps administrator must make the application highly available.

Which action should the SysOps administrator take to meet this requirement?

Increase the maximum number of instances in the Auto Scaling group to meet the capacity that is required at peak usage.
Increase the minimum number of instances in the Auto Scaling group to meet the capacity that is required at peak usage.
Update the Auto Scaling group to launch new instances in a second Availability Zone in the same AWS Region.correct
Update the Auto Scaling group to launch new instances in an Availability Zone in a second AWS Region.

Question was not answered Explanation:

“An Auto Scaling group can contain EC2 instances in one or more Availability Zones within the same Region. However, Auto Scaling groups cannot span multiple Regions”. As stated in https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.htm
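
In a CloudFormation template, spanning the group across two Availability Zones is a matter of listing subnets from both zones. This is a sketch; the subnet IDs and the resource names `WebLaunchTemplate` and `WebTargetGroup` are hypothetical:

```yaml
WebAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "2"
    MaxSize: "6"
    VPCZoneIdentifier:
      - subnet-0aaa1111aaaa1111a   # subnet in Availability Zone a
      - subnet-0bbb2222bbbb2222b   # subnet in Availability Zone b
    LaunchTemplate:
      LaunchTemplateId: !Ref WebLaunchTemplate
      Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
    TargetGroupARNs:
      - !Ref WebTargetGroup
```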

  1. A SysOps administrator is building a process for sharing Amazon RDS database snapshots between different accounts associated with different business units within the same company. All data must be encrypted at rest.

How should the administrator implement this process?

Write a script to download the encrypted snapshot, decrypt it using the AWS KMS encryption key used to encrypt the snapshot, then create a new volume in each account.
Update the key policy to grant permission to the AWS KMS encryption key used to encrypt the snapshot with all relevant accounts, then share the snapshot with those accounts.correct
Create an Amazon EC2 instance based on the snapshot, then save the instance's Amazon EBS volume as a snapshot and share it with the other accounts. Require each account owner to create a new volume from that snapshot and encrypt it.
Create a new unencrypted RDS instance from the encrypted snapshot, connect to the instance using SSH/RDP, export the database contents into a file, then share this file with the other accounts.

Question was not answered
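
Sharing an encrypted snapshot requires a customer managed KMS key (snapshots encrypted with the default aws/rds key cannot be shared). A key policy statement along these lines grants the other account use of the key; the account ID 444455556666 is a placeholder for a target account:

```json
{
  "Sid": "AllowUseOfTheKeyByTheTargetAccount",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::444455556666:root" },
  "Action": [
    "kms:Decrypt",
    "kms:DescribeKey",
    "kms:CreateGrant"
  ],
  "Resource": "*"
}
```

After the key policy is updated, the snapshot is shared with the same accounts, which can then copy it into their own accounts, re-encrypting with their own keys.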

  1. A SysOps administrator has an AWS CloudFormation template of the company’s existing infrastructure in us-west-2. The administrator attempts to use the template to launch a new stack in eu-west-1, but the stack only partially deploys, receives an error message, and then rolls back.

Why would this template fail to deploy? (Select TWO.)

The template referenced an IAM user that is not available in eu-west-1.
The template referenced an Amazon Machine Image (AMI) that is not available in eu-west-1.correct
The template did not have the proper level of permissions to deploy the resources.
The template requested services that do not exist in eu-west-1.correct
CloudFormation templates can be used only to update existing services.

Question was not answered

  1. A development team recently deployed a new version of a web application to production. After the release, penetration testing revealed a cross-site scripting vulnerability that could expose user data.

Which AWS service will mitigate this issue?

AWS Shield Standard
AWS WAFcorrect
Elastic Load Balancing
Amazon Cognito

Question was not answered

  1. A company is using an Amazon DynamoDB table for data. A SysOps administrator must configure replication of the table to another AWS Region for disaster recovery.

What should the SysOps administrator do to meet this requirement?

Enable DynamoDB Accelerator (DAX).
Enable DynamoDB Streams, and add a global secondary index (GSI).
Enable DynamoDB Streams, and add a global table Region.correct
Enable point-in-time recovery.

Question was not answered

  1. A company hosts a web portal on Amazon EC2 instances. The web portal uses an Elastic Load Balancer (ELB) and Amazon Route 53 for its public DNS service. The ELB and the EC2 instances are deployed by way of a single AWS CloudFormation stack in the us-east-1 Region. The web portal must be highly available across multiple Regions.

Which configuration will meet these requirements?

Deploy a copy of the stack in the us-west-2 Region. Create a single start of authority (SOA) record in Route 53 that includes the IP address from each ELB. Configure the SOA record with health checks. Use the ELB in us-east-1 as the primary record and the ELB in us-west-2 as the secondary record.
Deploy a copy of the stack in the us-west-2 Region. Create an additional A record in Route 53 that includes the ELB in us-west-2 as an alias target. Configure the A records with a failover routing policy and health checks. Use the ELB in us-east-1 as the primary record and the ELB in us-west-2 as the secondary record.correct
Deploy a new group of EC2 instances in the us-west-2 Region. Associate the new EC2 instances with the existing ELB, and configure load balancer health checks on all EC2 instances. Configure the ELB to update Route 53 when EC2 instances in us-west-2 fail health checks.
Deploy a new group of EC2 instances in the us-west-2 Region. Configure EC2 health checks on all EC2 instances in each Region. Configure a peering connection between the VPCs. Use the VPC in us-east-1 as the primary record and the VPC in us-west-2 as the secondary record.

Question was not answered Explanation:

When you create a hosted zone, Route 53 automatically creates a name server (NS) record and a start of authority (SOA) record for the zone.

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html#migrate-dns-create-hosted-zone

https://en.wikipedia.org/wiki/SOA_record
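
A failover alias record for the primary Region looks roughly like this in a ChangeResourceRecordSets request; the domain name, hosted zone ID, and ELB DNS name are placeholders. The secondary record mirrors it with "Failover": "SECONDARY" and the us-west-2 ELB as the alias target:

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "portal.example.com",
        "Type": "A",
        "SetIdentifier": "us-east-1-primary",
        "Failover": "PRIMARY",
        "AliasTarget": {
          "HostedZoneId": "Z1H1FL5HABSF5",
          "DNSName": "my-elb.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
```

With EvaluateTargetHealth enabled, Route 53 routes traffic to the secondary record when the primary ELB's targets are unhealthy.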

  1. A SysOps administrator must set up notifications for whenever combined billing exceeds a certain threshold for all AWS accounts within a company. The administrator has set up AWS Organizations and enabled Consolidated Billing.

Which additional steps must the administrator perform to set up the billing alerts?

In the payer account: Enable billing alerts in the Billing and Cost Management console; publish an Amazon SNS message when the billing alert triggers.
In each account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in Amazon CloudWatch; publish an SNS message when the alarm triggers.
In the payer account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in the Billing and Cost Management console to publish an SNS message when the alarm triggers.
In the payer account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in Amazon CloudWatch; publish an SNS message when the alarm triggers.correct

Question was not answered
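
The marked answer can be sketched as a CloudWatch alarm in CloudFormation. Billing metric data is published only in us-east-1, only in the payer account, and only after billing alerts are enabled; the threshold and the SNS topic name `BillingAlertsTopic` are assumptions:

```yaml
BillingAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Combined estimated charges exceeded the threshold
    Namespace: AWS/Billing
    MetricName: EstimatedCharges
    Dimensions:
      - Name: Currency
        Value: USD
    Statistic: Maximum
    Period: 21600
    EvaluationPeriods: 1
    Threshold: 1000
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref BillingAlertsTopic
```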

  1. A company has multiple AWS Site-to-Site VPN connections between a VPC and its branch offices. The company manages an Amazon Elasticsearch Service (Amazon ES) domain that is configured with public access. The Amazon ES domain has an open domain access policy. A SysOps administrator needs to ensure that Amazon ES can be accessed only from the branch offices while preserving existing data.

Which solution will meet these requirements?

Configure an identity-based access policy on Amazon ES. Add an allow statement to the policy that includes the Amazon Resource Name (ARN) for each branch office VPN connection.
Configure an IP-based domain access policy on Amazon ES. Add an allow statement to the policy that includes the private IP CIDR blocks from each branch office network.correct
Deploy a new Amazon ES domain in private subnets in a VPC, and import a snapshot from the old domain. Create a security group that allows inbound traffic from the branch office CIDR blocks.
Reconfigure the Amazon ES domain in private subnets in a VPC. Create a security group that allows inbound traffic from the branch office CIDR blocks.

Question was not answered

  1. A large company is using AWS Organizations to manage its multi-account AWS environment. According to company policy, all users should have read-level access to a particular Amazon S3 bucket in a central account. The S3 bucket data should not be available outside the organization. A SysOps administrator must set up the permissions and add a bucket policy to the S3 bucket.

Which parameters should be specified to accomplish this in the MOST efficient manner?

Specify '*' as the principal and aws:PrincipalOrgID as a condition.correct
Specify all account numbers as the principal.
Specify PrincipalOrgID as the principal.
Specify the organization's management account as the principal.

Question was not answered
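
The aws:PrincipalOrgID global condition key is a condition, not a principal: the policy allows any principal ("*") and then restricts access to principals that belong to the organization. A sketch of such a bucket policy; the bucket name and organization ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOrgWideRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::central-shared-bucket",
        "arn:aws:s3:::central-shared-bucket/*"
      ],
      "Condition": {
        "StringEquals": { "aws:PrincipalOrgID": "o-exampleorgid" }
      }
    }
  ]
}
```

This single condition covers every current and future account in the organization, which is why it is more efficient than listing account numbers.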

  1. A SysOps administrator is troubleshooting connection timeouts to an Amazon EC2 instance that has a public IP address. The instance has a private IP address of 172.31.16.139. When the SysOps administrator tries to ping the instance’s public IP address from the remote IP address 203.0.113.12, the response is “request timed out.”

The flow logs contain the following information:

What is one cause of the problem?

Inbound security group deny rule
Outbound security group deny rule
Network ACL inbound rules
Network ACL outbound rulescorrect

Question was not answered

  1. A company has multiple Amazon EC2 instances that run a resource-intensive application in a development environment. A SysOps administrator is implementing a solution to stop these EC2 instances when they are not in use.

Which solution will meet this requirement?

Assess AWS CloudTrail logs to verify that there is no EC2 API activity. Invoke an AWS Lambda function to stop the EC2 instances.
Create an Amazon CloudWatch alarm to stop the EC2 instances when the average CPU utilization is lower than 5% for a 30-minute period.correct
Create an Amazon CloudWatch metric to stop the EC2 instances when the VolumeReadBytes metric is lower than 500 for a 30-minute period.
Use AWS Config to invoke an AWS Lambda function to stop the EC2 instances based on resource configuration changes.

Question was not answered Explanation:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html#AddingStopActions
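
The marked answer can be sketched in CloudFormation as an alarm that uses the built-in EC2 stop action. The instance ID and thresholds are placeholders; six evaluation periods of 300 seconds give the 30-minute window from the answer:

```yaml
IdleInstanceAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Stop the instance when CPU stays below 5% for 30 minutes
    Namespace: AWS/EC2
    MetricName: CPUUtilization
    Dimensions:
      - Name: InstanceId
        Value: i-0123456789abcdef0
    Statistic: Average
    Period: 300
    EvaluationPeriods: 6
    Threshold: 5
    ComparisonOperator: LessThanThreshold
    AlarmActions:
      - !Sub arn:aws:automate:${AWS::Region}:ec2:stop
```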

  1. A SysOps administrator needs to configure a solution that will deliver digital content to a set of authorized users through Amazon CloudFront. Unauthorized users must be restricted from access.

Which solution will meet these requirements?

Store the digital content in an Amazon S3 bucket that does not have public access blocked. Use signed URLs to access the S3 bucket through CloudFront.
Store the digital content in an Amazon S3 bucket that has public access blocked. Use an origin access identity (OAI) to deliver the content through CloudFront. Restrict S3 bucket access with signed URLs in CloudFront.correct
Store the digital content in an Amazon S3 bucket that has public access blocked. Use an origin access identity (OAI) to deliver the content through CloudFront. Enable field-level encryption.
Store the digital content in an Amazon S3 bucket that does not have public access blocked. Use signed cookies for restricted delivery of the content through CloudFront.

Question was not answered

  1. A company has attached the following policy to an IAM user:

Which of the following actions are allowed for the IAM user?

Amazon RDS DescribeDBInstances action in the us-east-1 Region
Amazon S3 PutObject operation in a bucket named testbucket
Amazon EC2 DescribeInstances action in the us-east-1 Regioncorrect
Amazon EC2 AttachNetworkInterface action in the eu-west-1 Region

Question was not answered

  1. A company runs a web application on three Amazon EC2 instances behind an Application Load Balancer (ALB). The company notices that random periods of increased traffic cause a degradation in the application’s performance. A SysOps administrator must scale the application to meet the increased traffic.

Which solution meets these requirements?

Create an Amazon CloudWatch alarm to monitor application latency and increase the size of each EC2 instance if the desired threshold is reached.wrong
Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor application latency and add an EC2 instance to the ALB if the desired threshold is reached.
Deploy the application to an Auto Scaling group of EC2 instances with a target tracking scaling policy. Attach the ALB to the Auto Scaling group.correct
Deploy the application to an Auto Scaling group of EC2 instances with a scheduled scaling policy. Attach the ALB to the Auto Scaling group.
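
A target tracking policy is configured with a JSON document like the one below, passed to `aws autoscaling put-scaling-policy` with `--policy-type TargetTrackingScaling`. The 50% average CPU target is an assumption; a request-count-per-target metric would also fit this scenario:

```json
{
  "TargetValue": 50.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  }
}
```

Target tracking adds and removes instances automatically to hold the metric near the target, which handles the random traffic spikes without manual intervention.
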
  1. A company’s public website is hosted in an Amazon S3 bucket in the us-east-1 Region behind an Amazon CloudFront distribution. The company wants to ensure that the website is protected from DDoS attacks. A SysOps administrator needs to deploy a solution that gives the company the ability to maintain control over the rate limit at which DDoS protections are applied.

Which solution will meet these requirements?

Deploy a global-scoped AWS WAF web ACL with an allow default action. Configure an AWS WAF rate-based rule to block matching traffic. Associate the web ACL with the CloudFront distribution.correct
Deploy an AWS WAF web ACL with an allow default action in us-east-1. Configure an AWS WAF rate-based rule to block matching traffic. Associate the web ACL with the S3 bucket.
Deploy a global-scoped AWS WAF web ACL with a block default action. Configure an AWS WAF rate-based rule to allow matching traffic. Associate the web ACL with the CloudFront distribution.
Deploy an AWS WAF web ACL with a block default action in us-east-1. Configure an AWS WAF rate-
based rule to allow matching traffic. Associate the web ACL with the S3 bucket.

Question was not answered
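
An AWS WAF (WAFv2) rate-based rule for a CloudFront-scoped web ACL looks roughly like this; the rule name and the limit of 2,000 requests per 5-minute window per IP are assumptions:

```json
{
  "Name": "RateLimitRule",
  "Priority": 0,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 2000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "RateLimitRule"
  }
}
```

Because the company controls the Limit value, it maintains control over the rate at which DDoS protections are applied, which is the distinguishing requirement in this question.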

  1. A company hosts an internal application on Amazon EC2 instances. All application data and requests route through an AWS Site-to-Site VPN connection between the on-premises network and AWS. The company must monitor the application for changes that allow network access outside of the corporate network. Any change that exposes the application externally must be restricted automatically.

Which solution meets these requirements in the MOST operationally efficient manner?

Create an AWS Lambda function that updates security groups that are associated with the elastic network interface to remove inbound rules with noncorporate CIDR ranges. Turn on VPC Flow Logs, and send the logs to Amazon CloudWatch Logs. Create an Amazon CloudWatch alarm that matches traffic from noncorporate CIDR ranges, and publish a message to an Amazon Simple Notification Service (Amazon SNS) topic with the Lambda function as a target.
Create a scheduled Amazon EventBridge (Amazon CloudWatch Events) rule that targets an AWS Systems Manager Automation document to check for public IP addresses on the EC2 instances. If public IP addresses are found on the EC2 instances, initiate another Systems Manager Automation document to terminate the instances.
Configure AWS Config and a custom rule to monitor whether a security group allows inbound requests from noncorporate CIDR ranges. Create an AWS Systems Manager Automation document to remove any noncorporate CIDR ranges from the application security groups.correct
Configure AWS Config and the managed rule for monitoring public IP associations with the EC2 instances by tag. Tag the EC2 instances with an identifier. Create an AWS Systems Manager Automation document to remove the public IP association from the EC2 instances.

Question was not answered

Official Practice Question Set

Official Practice Question Set AWS Certified SysOps Administrator - Associate

Q 1

  • A company uses an AWS CloudFormation stack to create an Auto Scaling group of Amazon EC2 instances. The company needs to address a security vulnerability in the Amazon Machine Image (AMI) that the EC2 instances use. The company needs to update to the latest operating system to address the security concerns. The company’s application runs on the EC2 instances and must stay online at all times. At least one EC2 instance must stay operational during the change operation.

A SysOps administrator is updating the AMI ID in the CloudFormation template.

How can the SysOps administrator apply the AMI update to meet these requirements?

A. Use a direct update of the stack.

B. Run a change set for the stack.

C. Use the AutoScalingRollingUpdate policy.

D. Use the AutoScalingReplacingUpdate policy.

Correct Answer: C

Q 2

  • A company’s customers report that they are receiving HTTP 404 errors on the company’s website. The company’s infrastructure includes several Amazon EC2 instances. The logs are sent from each instance to a consolidated log group in Amazon CloudWatch Logs. A SysOps administrator needs to create a notification to indicate when the HTTP 404 errors pass a certain threshold.

Which steps are required to meet this requirement? (Select TWO.)

A Configure a log stream with log file compression

Incorrect. Compression is not necessary to monitor the HTTP 404 errors.

For more information about log compression in CloudWatch Logs, see Collect metrics, logs, and traces with the CloudWatch agent.

B Configure CloudWatch Logs Insights to generate metrics on HTTP 404 errors

Incorrect. With CloudWatch Logs Insights, you can search and analyze your log file data. CloudWatch Logs Insights does not generate CloudWatch metrics natively.

For more information about CloudWatch Logs Insights, see Analyzing log data with CloudWatch Logs Insights.

C Create metric filters with a filter pattern that identifies the HTTP 404 errors.

Correct. You can create a count of 404 errors and exclude other 4xx errors with a filter pattern on 404 errors.

For more information about CloudWatch Logs filters, see Creating metrics from log events using filters.

D Create an Amazon CloudWatch alarm based on the count of HTTP 404 errors.

Correct. You can set an alarm to notify operators when the 404 filter metric exceeds a threshold.

For more information about CloudWatch alarms, see Using Amazon CloudWatch alarms.

E Create an Amazon EventBridge event based on the CloudWatch Logs Insights threshold.

Incorrect. With CloudWatch Logs Insights, you can search and analyze your log file data. CloudWatch Logs Insights does not generate EventBridge events.

For more information about CloudWatch Logs Insights, see Analyzing log data with CloudWatch Logs Insights.

Correct Answer: C & D
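
The two correct steps can be sketched together in CloudFormation. The log group name, threshold, SNS topic, and filter pattern are assumptions — the pattern shown expects a space-delimited access-log format with the status code in the sixth field:

```yaml
Http404MetricFilter:
  Type: AWS::Logs::MetricFilter
  Properties:
    LogGroupName: web-app-logs
    FilterPattern: '[ip, identity, user, timestamp, request, status_code=404, bytes]'
    MetricTransformations:
      - MetricName: Http404Count
        MetricNamespace: WebApp
        MetricValue: "1"

Http404Alarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: WebApp
    MetricName: Http404Count
    Statistic: Sum
    Period: 300
    EvaluationPeriods: 1
    Threshold: 100
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref OpsNotificationTopic
```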

Q 3

  • A SysOps administrator sets up Traffic Mirroring on an Amazon EC2 instance. The SysOps administrator examines the logs from the traffic mirror and discovers that a number of packets have been truncated.

What should the SysOps administrator do to resolve this problem?

A Trace the truncated packets before they enter the AWS network

B Increase the maximum transmission unit (MTU) size for the mirror source

C Increase the maximum transmission unit (MTU) size for the mirror target (<- this)

D Use a Nitro instance type for the EC2 instance that is being mirrored

Correct Answer: C

  • Explanation

A. Incorrect. Truncated packets are dropped when the packets enter the AWS network and are not captured by the mirror target.

For more information about truncated packets, see Traffic mirror target concepts.

B. Incorrect. The packets are truncated because the source sent a larger packet than the mirror target can read. The increase of MTU size for the mirror source would not fix this issue.

For more information about truncated packets, see Traffic mirror target concepts.

C. Correct. Traffic Mirroring provides the ability to create a copy of a packet flow to examine the contents of a packet. This feature is useful for threat monitoring, content inspection, and troubleshooting.

A packet is truncated to the MTU value when both of the following are true:

The traffic mirror target is a standalone instance.
The traffic packet size from the mirror source is greater than the MTU size for the traffic mirror target.

For more information about Traffic Mirroring, see What is Traffic Mirroring?

For more information about truncated packets, see Traffic mirror target concepts.

D. Incorrect. Traffic Mirroring is available on current generations of EC2 instances as well as on Nitro instances. The source instance is sending mirrored traffic according to the logs. Therefore, the source instance is compatible with Traffic Mirroring.

For more information about Traffic Mirroring compatible instance types, see Amazon VPC Traffic Mirroring is now supported on select non-Nitro instance types.

Q 4

  • A company created a new application that uses a Spot Fleet for a variety of instance families across multiple Availability Zones.

What should a SysOps administrator do to ensure that the Spot Fleet is configured for cost optimization?

A Apply Spot pricing and Reserved Instances purchasing to the same instances

B Ensure instance capacity by specifying the desired target capacity. Specify how much of that capacity must be On-Demand Instances

C Use the lowestPrice allocation strategy in combination with the InstancePoolsToUseCount parameter in the Spot Fleet request

D Launch instances up to the Spot Fleet target capacity or the maximum acceptable payment amount.

Correct Answer: C

  • Explanation:

A - Incorrect. Instances can be launched either as Spot Instances or as Reserved Instances, but not as both.

For more information about Reserved Instances, see Reserved Instances.

B - Incorrect. The request for On-Demand capacity in a Spot Fleet request ensures that there is always instance capacity. However, the question asks for a solution that focuses on cost optimization and not instance capacity.

C - Correct. With this solution, a Spot Fleet automatically deploys the lowest price combination of instance types and Availability Zones based on the current Spot price across the number of Spot pools specified. You can use this combination to avoid the most expensive Spot Instances.

For more information about Spot Instance allocation strategies, see Spot Fleet configuration strategies.

D - Incorrect. This solution is the default behavior of a Spot Fleet. A Spot Fleet stops the launch of instances when it has reached the target capacity or the maximum amount a user will pay. This solution improves the likelihood of getting Spot Instances, but it does not optimize the cost.

Q 5

  • A SysOps administrator needs to deploy a duplicate AWS development environment into a new AWS Region. The environment will be periodically redeployed as part of the maintenance process. The SysOps administrator deploys the current working template into the new Region. The template’s Amazon EC2 instances fail to deploy, and AWS CloudFormation automatically rolls back the deployment.

After reviewing the logs, the SysOps administrator discovers that the EC2 instances are not deploying because of an Amazon Machine Image (AMI) ID value that is not valid in the new Region. The SysOps administrator must correct the CloudFormation template so that the instances deploy correctly.

What should the SysOps administrator do to meet this requirement with the LEAST operational overhead?

A Create a template that is specific to each Region

B Create a Mappings section in the template to reference the correct AMI ID value for both Regions (<- this)

C Remove the EC2 instances from the template. Manually deploy the EC2 instances.

D Create a parameter field that requests the AMI ID value. Pass the AMI ID value to the resource section for EC2 deployment.

Correct Answer: B

  • Explanation:

A - Incorrect. The creation and maintenance of a template that is specific to each Region would require additional work. Additionally, changes that are made to one template would not be reflected in infrastructure in other Regions.

B - Correct. The optional Mappings section matches a key to a corresponding set of named values. For example, if you set values based on a Region, you can create a mapping that uses the Region name as a key. This mapping contains the values you want to specify for each specific Region.

For more information about the Mappings section in CloudFormation, see Mappings.

C - Incorrect. This solution is not automated and would not minimize operational overhead. This solution is a manual process that the SysOps administrator would have to perform.

D - Incorrect. This solution would require additional work every time the template is launched. The SysOps administrator would have to look up the correct AMI ID and enter it into the parameter field before the template could deploy.

For more information about parameters in CloudFormation, see Parameters.
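
The Mappings approach from option B looks roughly like this in template YAML; the AMI IDs are placeholders for the Region-specific images:

```yaml
Mappings:
  RegionMap:
    us-west-2:
      AmiId: ami-0aaaaaaaaaaaaaaaa
    eu-west-1:
      AmiId: ami-0bbbbbbbbbbbbbbbb

Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AmiId]
      InstanceType: t3.micro
```

The !FindInMap call resolves the correct AMI ID at deploy time from the pseudo parameter AWS::Region, so one template works in both Regions with no per-deployment input.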

Q 6

  • A SysOps administrator needs to block malicious traffic that is reaching a company’s web servers. The malicious traffic is distributed over many IP addresses. The malicious traffic consists of a significantly higher number of connections than the company typically receives from legitimate users. The SysOps administrator needs to protect the web servers by using a solution that maximizes operational efficiency.

Which solution meets these requirements?

A Create a security group for the web servers. Add deny rules for malicious sources

B Set the network ACL for the subnet that is assigned to the web servers. Add deny rules for the malicious sources.

C Use Amazon CloudFront to cache all pages and remove the traffic from the web servers

D Place the web servers behind AWS WAF. Establish the rate limit to create a deny list (<- this)

Correct Answer: D

  • Explanation

A - Incorrect. Security groups allow specific IP address ranges. Security groups block any traffic that is not specifically allowed. It is not possible to deny specific IP address ranges with security groups.

For more information about security groups, see Control traffic to your AWS resources using security groups.

B - Incorrect. This solution would require extra operational effort to establish and maintain the deny rules.

For more information about network ACLs, see Control traffic to subnets using network ACLs.

C - Incorrect. It is not reasonable to expect all contents to be cacheable.

For more information about CloudFront, see What is Amazon CloudFront?.

D - Correct. AWS WAF is a web application firewall that helps protect web applications or APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by offering you the ability to create security rules that block common attack patterns, such as SQL injection or cross-site scripting. You also can create rules that filter out specific traffic patterns that you define.

For more information about filtering IP addresses to protect your applications, see Working with IP match conditions.
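As a rough illustration of option D, the following Python dict mirrors the shape of a WAFv2 rate-based rule that you could pass in the Rules list of a `CreateWebACL` call. The rule name and the request limit are assumptions chosen for the sketch, not values from the question.

```python
# Sketch of a WAFv2 rate-based rule: block any source IP that exceeds
# 2,000 requests in a 5-minute window. Name and limit are illustrative.
rate_limit_rule = {
    "Name": "BlockHighVolumeIPs",        # hypothetical rule name
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,               # max requests per 5-minute window
            "AggregateKeyType": "IP",    # count requests per source IP
        }
    },
    "Action": {"Block": {}},             # deny requests over the limit
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockHighVolumeIPs",
    },
}
```

Because WAF tracks the rate per source IP automatically, the deny list maintains itself as the distributed attack shifts between addresses, which is what makes this option operationally efficient.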

Q 7

  • A photo sharing site delivers content worldwide from a library on Amazon S3 by using Amazon CloudFront. Some users try to access photos that do not exist. Other users try to access photos that the users are not authorized to view.

Which Amazon CloudWatch metric should a SysOps administrator monitor to better understand the extent of this issue?

A GetRequests S3 metric

B 4xxErrorRate CloudFront metric

C 5xxErrorRate CloudFront metric

D PostRequests S3 metric

Correct Answer: B

A - Incorrect. The GetRequests metric shows access activity. The GetRequests metric does not show errors.

For more information about S3 metrics in CloudWatch, see Metrics and dimensions.

B - Correct. The 4xx errors include 404 errors for objects that do not exist and 403 errors for objects that users are not authorized to view. The 4xx errors are front-end errors that pertain to access to the object.

For more information about HTTP error codes, see List of Error Codes.

For more information about S3 metrics in CloudWatch, see Metrics and dimensions.

C - Incorrect. The 5xx errors are server-side errors, not access or authorization errors.

For more information about HTTP error codes, see List of Error Codes.

For more information about S3 metrics in CloudWatch, see Metrics and dimensions.

D - Incorrect. The PostRequests metric shows access activity. The PostRequests metric does not show errors.

For more information about S3 metrics in CloudWatch, see Metrics and dimensions.
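The 4xxErrorRate metric is defined as the percentage of all viewer requests whose HTTP status code is 4xx. A minimal sketch of that arithmetic, with a hypothetical request mix:

```python
def error_rate_4xx(status_counts: dict) -> float:
    """Percentage of requests with a 4xx status code, mirroring how the
    CloudFront 4xxErrorRate metric is defined (4xx responses / all requests)."""
    total = sum(status_counts.values())
    client_errors = sum(n for code, n in status_counts.items() if code.startswith("4"))
    return 100.0 * client_errors / total if total else 0.0

# Hypothetical mix: missing photos (404) and unauthorized views (403)
counts = {"200": 700, "403": 120, "404": 180}
print(error_rate_4xx(counts))  # → 30.0
```

A rising 4xxErrorRate therefore captures both failure modes in the scenario in a single metric.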

Q 8

  • A company uses Amazon CloudFront to speed up the distribution of static and dynamic web content to users. Users are spread across different geographical regions around the world. The company wants to reduce the resource consumption on the content source servers.

Which solution will reduce the load on the origin?

A Use Lambda@Edge

B Use CloudFront Origin Shield

C Configure custom headers

D Use AWS Shield Advanced

Correct Answer: B

  • Explanation

A - Incorrect. You can use Lambda@Edge to run code at global AWS edge locations without the need to provision or manage servers. However, Lambda@Edge by itself would not reduce the load on the origin servers.

For more information about Lambda@Edge, see Using AWS Lambda with CloudFront Lambda@Edge.

B - Correct. Origin Shield is an additional layer in the CloudFront caching infrastructure that helps minimize an origin’s load, improve its availability, and reduce its operating costs. CloudFront provides a reduced load on the origin because requests that CloudFront can serve from the cache do not go to the origin.

For more information about Origin Shield, see Using Amazon CloudFront Origin Shield.

C - Incorrect. Custom headers validate that requests to an origin were sent from CloudFront. You can use custom headers to direct traffic, but custom headers do not reduce the load on a single origin.

For more information about custom headers, see Adding custom headers to origin requests.

D - Incorrect. Shield Advanced is a managed DDoS protection service for applications that run on AWS. Shield Advanced does not help reduce the burden of the origin servers.

For more information about Shield Advanced, see AWS Shield FAQs.
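Origin Shield is turned on per origin in the distribution configuration. The sketch below shows the relevant portion of an origin definition as a Python dict; the origin ID, domain name, and Region are assumptions (you would pick the Origin Shield Region closest to your origin).

```python
# Sketch of the origin portion of a CloudFront distribution config with
# Origin Shield enabled. Values are placeholders, not from the question.
origin = {
    "Id": "s3-photo-library",                        # hypothetical origin id
    "DomainName": "example-bucket.s3.amazonaws.com",  # hypothetical origin
    "OriginShield": {
        "Enabled": True,
        "OriginShieldRegion": "us-east-1",  # Region nearest the origin
    },
}
```

With Origin Shield enabled, regional edge caches fetch through a single caching layer, so far fewer duplicate requests reach the origin.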

Q 9

  • A company is preparing to launch a new website. The website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company launched the instances by using a custom Amazon Machine Image (AMI) from an EC2 instance that runs without problems in a different VPC in the same AWS Region.

After deployment of the new EC2 instances, external testing failed to connect to the website by using HTTPS. The website is reachable by using SSH through a bastion host instance that runs in the same VPC.

Which steps should a SysOps administrator take to find the problem? (Select TWO.)

A Ensure that the security group that is assigned to the instances is open to port 443 from the ALB's security group.

B Ensure that the ALB is assigned to a subnet with 0.0.0.0/0 routed to an internet gateway.

C Ensure that the instances have a public IP address or an Elastic IP address that is assigned to each instance's elastic network interface.

D Ensure that the instances are located in a subnet with 0.0.0.0/0 access routed to an internet gateway.

E Ensure that both the system status and instance reachability checks for the instances are succeeding.

Correct Answer: A & B

A - Correct. The ALB must have port 443 open on its assigned security groups if the external user wants to connect over HTTPS. The instance’s security group also must be open to port 443 to complete the connection. The subnets that include the ALB must have access to the internet gateway.

For more information about how to troubleshoot network connectivity, see How do I troubleshoot instance connection timeout errors in Amazon VPC?

For more information about security groups, see Control traffic to your AWS resources using security groups.

For more information about subnets and route tables, see VPCs and subnets.

B - Correct. The ALB must have port 443 open on its assigned security groups if the external user wants to connect over HTTPS. The instance’s security group also must be open to port 443 to complete the connection. The subnets that include the ALB must have access to the internet gateway; a route table entry that sends 0.0.0.0/0 traffic to the internet gateway allows external users to connect.

For more information about how to troubleshoot network connectivity, see How do I troubleshoot instance connection timeout errors in Amazon VPC?

For more information about security groups, see Control traffic to your AWS resources using security groups.

For more information about subnets and route tables, see VPCs and subnets.

C - Incorrect. Because the instances are behind an ALB, external users are not connecting directly to the instances by using a public IP address or an Elastic IP address. The ALB will direct traffic to the instances by using private—or internal—IP addresses.

D - Incorrect. Because the instances are behind an ALB, external users are not connecting directly to the instances. Therefore, there is no need for the instances to route through an internet gateway. The ALB will direct traffic to the instances by using private—or internal—IP addresses.

E - Incorrect. The instances are reachable by using private access through an SSH bastion host. This indicates that they are operational.
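For option A, the instances' security group should allow port 443 only from the ALB's security group rather than from a CIDR range. The dict below mirrors the parameters you would pass to the boto3 `authorize_security_group_ingress` call; both group IDs are placeholders.

```python
# Sketch: open port 443 on the instances' security group only to traffic
# coming from the ALB's security group. Group IDs are hypothetical.
ingress_params = {
    "GroupId": "sg-0instances1234",        # instances' security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            # Reference the ALB's security group instead of a CIDR range,
            # so only the load balancer can reach the instances on 443.
            "UserIdGroupPairs": [{"GroupId": "sg-0alb5678"}],
        }
    ],
}
# ec2_client.authorize_security_group_ingress(**ingress_params)  # not executed here
```

Referencing the ALB's security group keeps the rule valid even as the load balancer's IP addresses change.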

Q 10

  • A company has a key management service in its on-premises data center. The key management service uses an RSA asymmetric encryption algorithm. The company needs to integrate its system with a highly available, secure service in AWS to replace the on-premises service. The keys must be stored in dedicated hardware security modules that are validated by a third party. The company must control these hardware security modules.

Which solution will meet these requirements?

A Import the current keys from the on-premises environment into an AWS CloudHSM cluster.

B Revoke the current keys. Generate new asymmetric keys in AWS Key Management Service (AWS KMS).

C Generate a customer key from AWS Key Management Service (AWS KMS). Import the key into the on-premises environment.

D Launch an Amazon EC2 instance. Install an application that generates RSA keys. Import the existing keys into the application.

Correct Answer: A

  • Explanation

A - Correct. With CloudHSM, you can manage your own encryption keys by using FIPS 140-2 Level 3 validated hardware security modules (HSMs). While AWS manages the HSM appliance, CloudHSM does not have access to your keys. You control and manage your own keys. You can cluster together CloudHSM devices for a highly available environment.

For more information about CloudHSM clusters, see AWS CloudHSM Clusters.

B - Incorrect. AWS KMS does not allow the storage of asymmetric keys in a dedicated environment under your control. For more information about AWS KMS and asymmetric keys, see AWS Key Management Service supports asymmetric keys.

C - Incorrect. AWS KMS supports asymmetric keys. However, you cannot store the asymmetric keys in a dedicated environment under your control.

For more information about AWS KMS, see What is AWS Key Management Service?

D - Incorrect. A single EC2 instance does not meet the requirements for high availability.

For more information about EC2 instances, see Amazon EC2.

Q 11

  • A company’s disaster recovery policy states that Amazon Elastic Block Store (Amazon EBS) snapshots must be created daily. The EBS snapshots must be retained for 6 months and then must be deleted. A SysOps administrator must automate this process.

Which solution will meet these requirements with the LEAST operational overhead?

A Use Amazon EventBridge rules with AWS Config.

B Use Amazon Data Lifecycle Manager (Amazon DLM).

C Use an Amazon S3 Lifecycle rule.

D Use AWS CodePipeline to create snapshots. Use an AWS CloudFormation template to retain and delete snapshots.

Correct Answer: B

  • Explanation

A - Incorrect. AWS Config and EventBridge are governance tools that assess, audit, evaluate, and remediate the configurations of AWS resources. However, AWS Config and EventBridge cannot automate the removal of EBS snapshots.

For more information about AWS Config, see What is AWS Config?

For more information about EventBridge, see What is Amazon EventBridge?

B - Correct. Amazon DLM can automate the creation, retention, and deletion of EBS snapshots and EBS-backed Amazon Machine Images (AMIs).

For more information about Amazon DLM, see Amazon Data Lifecycle Manager.

C - Incorrect. S3 Lifecycle rules define actions for Amazon S3 to take during an object’s lifetime. EBS snapshots are stored in Amazon S3. However, the EBS snapshots are not accessible through standard S3 APIs. The S3 Lifecycle rules cannot access EBS snapshots.

For more information about S3 Lifecycle rules, see Managing your storage lifecycle.

For more information about the deletion of EBS snapshots, see Delete an Amazon EBS snapshot.

D - Incorrect. CodePipeline is a continuous integration and continuous delivery (CI/CD) tool. CloudFormation automates the creation of AWS resource stacks. However, CodePipeline and CloudFormation cannot automate the removal of EBS snapshots.

For more information about CodePipeline, see What is AWS CodePipeline?

For more information about CloudFormation, see What is AWS CloudFormation?
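An Amazon DLM lifecycle policy that matches the scenario might look like the sketch below, which mirrors the PolicyDetails structure accepted by `CreateLifecyclePolicy`: a daily snapshot schedule with age-based retention of 6 months. The target tag and the schedule time are assumptions.

```python
# Sketch of DLM PolicyDetails: snapshot tagged EBS volumes every 24 hours
# and delete snapshots older than 6 months. Tag and time are illustrative.
policy_details = {
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "Daily"}],  # hypothetical tag
    "Schedules": [
        {
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            # Age-based retention: snapshots are deleted after 6 months
            "RetainRule": {"Interval": 6, "IntervalUnit": "MONTHS"},
        }
    ],
}
```

Once the policy is created, both the daily creation and the 6-month deletion run without further operator involvement, which is why this option has the least operational overhead.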

Q 12

  • A company maintains its AWS accounts with AWS Organizations and uses AWS Firewall Manager. The company wants to change the administrator account of Firewall Manager to a different account within the organization.

What should a SysOps administrator do to accomplish this task?

A Remove the current administrator account from the organization that administers the firewall. Add the new administrator to the organization that administers the firewall.

B Modify the service control policy (SCP) to deny access to Firewall Manager on the current account. Modify the SCP to allow access to Firewall Manager on the new administrator account.

C Use the AWS Systems Manager console with the Organizations management account to switch Firewall Manager from the original administrator account to the new Firewall Manager administrator account number.

D Log in to the current Firewall Manager administrator account. Use the revoke feature of Firewall Manager. Sign in to the AWS Management Console by using the Organizations management account. Enter the new Firewall Manager administrator account number.

Correct Answer: D

  • Explanation

A - Incorrect. You use the Firewall Manager console to remove the original Firewall Manager administrator account and to add the new Firewall Manager administrator account.

For more information about Firewall Manager, see AWS Firewall Manager.

For more information about Firewall Manager and Organizations, see AWS Firewall Manager and AWS Organizations.

B - Incorrect. SCPs allow or deny access to services from Organizations member accounts. SCPs do not control which account is the administrator account for Firewall Manager.

For more information about Firewall Manager, see AWS Firewall Manager.

For more information about Firewall Manager and Organizations, see AWS Firewall Manager and AWS Organizations.

C - Incorrect. This modification cannot be performed in the Systems Manager console.

For more information about how to change the Firewall Manager administrator account, see Changing the default administrator account.

For more information about Firewall Manager and Organizations, see AWS Firewall Manager and AWS Organizations.

D - Correct. You can designate only one account in an organization as a Firewall Manager administrator account. To create a new Firewall Manager administrator account, you must revoke the original administrator account first.

For more information about how to change the Firewall Manager administrator account, see Changing the default administrator account.

For more information about Firewall Manager and Organizations, see AWS Firewall Manager and AWS Organizations.

Q 13

  • A SysOps administrator needs to monitor a fleet of Amazon EC2 Linux instances by using Amazon CloudWatch. The SysOps administrator must not install any agents.

Which metrics can the SysOps administrator measure by using CloudWatch? (Select TWO.)

A CPU utilization.

B Memory utilization.

C Network packets in. 

D Network packets dropped.

E CPU ready time.

Correct Answer: A & C

  • Explanation

A - Correct. CloudWatch collects data about the performance of EC2 instances. CPUUtilization is one of the metrics that CloudWatch collects without the CloudWatch agent. With the agent, additional metrics are available.

For more information about CloudWatch metrics, see List the available CloudWatch metrics for your instances.

For more information about the CloudWatch agent, see Collect metrics, logs, and traces with the CloudWatch agent.

B - Incorrect. CloudWatch collects data about the performance of EC2 instances. However, the CloudWatch agent must be installed for CloudWatch to collect metrics about the memory utilization of an instance.

For more information about how to use CloudWatch metrics, see Using Amazon CloudWatch metrics.

For more information about the CloudWatch agent, see Collect metrics, logs, and traces with the CloudWatch agent.

C - Correct. CloudWatch collects data about the performance of EC2 instances. NetworkPacketsIn is one of the metrics that CloudWatch collects without the CloudWatch agent. With the agent, additional metrics are available.

For more information about CloudWatch metrics, see List the available CloudWatch metrics for your instances.

For more information about the CloudWatch agent, see Collect metrics, logs, and traces with the CloudWatch agent.

D - Incorrect. CloudWatch collects data about the performance of EC2 instances. However, metrics about network packet loss are not directly available from CloudWatch and would require you to install an additional tool on the instances.

For more information about how to use CloudWatch metrics, see Use Amazon CloudWatch metrics.

E - Incorrect. CloudWatch collects data about the performance of EC2 instances. CPU ready time is not a collectable metric available from CloudWatch.

For more information about how to use CloudWatch metrics, see Use Amazon CloudWatch metrics.

Q 14

  • A company is hosting a service-oriented architecture across multiple Amazon EC2 instances. Each instance hosts a different application. The services read and write messages to Amazon Simple Queue Service (Amazon SQS) queues for cross-service communication. The company uses Amazon CloudWatch for monitoring.

The company has configured a CloudWatch alarm to alert system operators when the value of the ApproximateNumberOfMessagesVisible metric is more than 50. A system operator just received an alert that the alarm has entered the ALARM state.

What could be the cause of the alarm?

A The visibility timeout for the SQS queue is set to a value that is too long in duration.

B The applications that are receiving the messages from the SQS queue are purging the messages from the queue after processing the messages.

C The delivery delay for the SQS queue is set to a value that is too long in duration.

D The applications that are receiving the messages from the SQS queue are not deleting the messages from the queue after processing them.

Correct Answer: D

A - Incorrect. A duration that is too long for the visibility timeout would reduce the value that is tracked by the ApproximateNumberOfMessagesVisible metric. Therefore, it would not cause the issue of the ApproximateNumberOfMessagesVisible value being too high.

For more information about visibility timeout, see Amazon SQS visibility timeout.

B - Incorrect. A purge of the SQS queue would delete all messages in the queue. Therefore, the queue would not form a backlog.

For more information about how to purge SQS queues, see Purging messages from an Amazon SQS queue (console).

C - Incorrect. A duration that is too long for the delivery delay would not cause the SQS queue to form a backlog. It would artificially reduce the number of visible messages in the queue.

For more information about the delivery delay parameter for Amazon SQS, see Amazon SQS delay queues.

D - Correct. Amazon SQS does not automatically delete a message after retrieving it, in case the message was not successfully processed. To delete a message, the consumer must send a separate request that acknowledges the message has been successfully received and processed. A message must be received before it can be deleted.

If you fail to delete a message after successfully processing it, the message would be placed back into the queue even though the message already has been processed. This process eventually could cause a backlog of messages in the queue because no message is ever deleted from the queue.

For more information about SQS queues, see Receive and delete a message (console).
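The backlog mechanism can be shown with a minimal pure-Python simulation (no AWS calls): a received message is only hidden for the visibility timeout, and unless it is explicitly deleted it becomes visible again, so skipping the delete step keeps the visible-message count high.

```python
# Minimal model of SQS receive/delete semantics. "Receiving" hides messages;
# a message that is never deleted reappears when its visibility timeout
# expires, so the visible backlog never drains.
def process_queue(messages, delete_after_processing):
    queue = list(messages)
    for _ in range(3):                  # three processing passes
        in_flight = list(queue)         # receive: messages become invisible
        queue.clear()
        for msg in in_flight:
            pass                        # stand-in for real processing work
            if not delete_after_processing:
                queue.append(msg)       # visibility timeout expires; message returns
    return len(queue)                   # roughly ApproximateNumberOfMessagesVisible

print(process_queue(range(60), delete_after_processing=True))   # → 0
print(process_queue(range(60), delete_after_processing=False))  # → 60
```

With deletes, the queue drains; without them, the same 60 messages stay visible no matter how many times they are processed, which is exactly the condition that pushes ApproximateNumberOfMessagesVisible above the alarm threshold.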

Q 15

  • A company has deployed a new application that runs on Amazon EC2 instances. The company’s security team wants the application team to verify that all common vulnerabilities and exposures are addressed regularly throughout the application’s life span.

How can the application team meet this requirement?

A Perform regular assessments with Amazon Inspector.

B Perform regular assessments with AWS Trusted Advisor.

C Integrate the AWS Health Dashboard with Amazon EventBridge events to get security notifications.

D Grant the security team access to AWS Artifact.

Correct Answer: A

  • Explanation

A - Correct. Amazon Inspector discovers potential security issues by using security rules to analyze AWS resources. Amazon Inspector also integrates with AWS Security Hub to provide a view of your security posture across multiple AWS accounts.

For more information about Amazon Inspector, see Amazon Inspector Classic Assessment Templates and Assessment Runs.

B - Incorrect. Trusted Advisor provides recommendations that help you follow AWS best practices. Trusted Advisor does not check for vulnerabilities on the instance itself.

For more information about Trusted Advisor, see AWS Trusted Advisor.

C - Incorrect. AWS Health Dashboard provides visibility into both public events and account-specific events. These events can be upcoming maintenance issues for a service in a Region or account-specific events, such as a deprecated resource in your account. However, AWS Health Dashboard does not check for vulnerabilities and exposures.

For more information about AWS Health Dashboard, see AWS Health Dashboard.

For more information about AWS Health Dashboard integration with Amazon EventBridge, see Monitoring AWS Health Events with Amazon EventBridge.

D - Incorrect. AWS Artifact provides access to compliance documentation. AWS Artifact does not check for vulnerabilities and exposures.

For more information about AWS Artifact, see AWS Artifact.

Q 16

  • An IT director needs a monthly breakdown of cloud computing expenditures for each department in a company. The company uses AWS Organizations to manage the AWS accounts and has multiple AWS accounts in each organization.

Which combination of steps will provide this financial information? (Select TWO.)

A Use AWS Systems Manager Fleet Manager to identify resources that are not tagged in each account. Apply a tag that is named Department to any untagged resources.

B Activate a cost allocation tag that is named Department in the AWS Billing and Cost Management console in the Organizations management account. Use a tag policy to mandate a Department tag on new resources.

C Use the AWS Resource Groups Tag Editor to identify resources that are not tagged in each account. Apply a tag that is named Department to any untagged resources.

D Activate a cost allocation tag that is named Department within the AWS Billing and Cost Management console in each account in the organization.

E Create an AWS Config rule across all accounts in the organization to mark resources that lack a Department tag as noncompliant.

Correct Answer: B & C

  • Explanation

A - Incorrect. Fleet Manager helps you remotely view the performance of your fleet of servers that run on AWS. Fleet Manager will not help identify resources that are not tagged in each account.

For more information about Fleet Manager, see AWS Systems Manager Fleet Manager.

B - Correct. You must activate a tag in the Billing and Cost Management console before viewing the expense by cost allocation tag. You should mandate the use of tags to ensure that the resources are tagged correctly.

For more information about tag policies, see Tag policies.

C - Correct. With Resource Groups, you can create, maintain, and view a collection of resources that share common tags. Tag Editor manages tags across services and AWS Regions. Tag Editor can perform a global search and can edit a large number of tags at one time.

For more information about resource groups and tagging, see Using Tag Editor.

D - Incorrect. Cost allocation tags are managed at the management account level, not separately on each AWS account.

For more information about tagging Organizations resources, see Tagging AWS Organizations resources.

E - Incorrect. AWS Config rules can identify resources that are noncompliant, but AWS Config rules do not tag the resources. Even if you tag resources as noncompliant, this step does not provide any financial information. Therefore, you cannot receive financial reporting as required by the scenario.

For more information about AWS Config rules, see Tagging Your AWS Config Resources.
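A tag policy that mandates the Department tag could be sketched as follows; this mirrors the Organizations tag policy syntax, with the allowed values and enforced resource types chosen as assumptions for illustration.

```python
# Sketch of an AWS Organizations tag policy: standardize the "Department"
# tag key, restrict its values, and enforce it on EC2 instances.
# Values and enforced resource types are illustrative.
tag_policy = {
    "tags": {
        "department": {
            "tag_key": {"@@assign": "Department"},
            "tag_value": {"@@assign": ["Finance", "Marketing", "Engineering"]},
            "enforced_for": {"@@assign": ["ec2:instance"]},
        }
    }
}
```

Combined with activating Department as a cost allocation tag, this policy keeps new resources tagged consistently so the monthly cost breakdown stays accurate.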

Q 17

  • A SysOps administrator must ensure that AWS CloudFormation deployment changes are properly tracked for governance.

Which AWS service should the SysOps administrator use to meet this requirement?

A AWS Artifact

B AWS Config

C Amazon Inspector

D AWS Trusted Advisor

Correct Answer: B

  • Explanation

A - Incorrect. AWS Artifact keeps compliance-related reports and agreements. AWS Artifact does not track CloudFormation changes.

For more information about AWS Artifact, see What is AWS Artifact?

B - Correct. AWS Config can track changes to CloudFormation stacks. A CloudFormation stack is a collection of AWS resources that you can manage as a single unit. With AWS Config, you can review the historical configuration of your CloudFormation stacks and review all changes that occurred to them.

For more information about how AWS Config can track changes to CloudFormation deployments, see cloudformation-stack-drift-detection-check.

C - Incorrect. Amazon Inspector is used for security compliance of instances and applications, but it does not track CloudFormation changes.

For more information about Amazon Inspector, see What is Amazon Inspector?

D - Incorrect. Trusted Advisor provides real-time guidance to help users follow AWS best practices to provision their resources. However, Trusted Advisor does not provide guidance about CloudFormation deployments.

For more information about Trusted Advisor, see AWS Trusted Advisor.

Q 18

  • A company is running a website that stores data in an Amazon RDS for MySQL DB instance. The company expects the data that is stored in the database to grow significantly during the next 6 months.

A SysOps administrator can see that the DB instance will run out of storage space in that time period. The current DB instance uses General Purpose SSD (gp2) volumes for storage.

Which action can the SysOps administrator take to scale the storage for the DB instance?

A Launch an RDS read replica.

B Enable storage autoscaling for the DB instance.

C Turn on the Multi-AZ feature for the DB instance.

D Change the DB instance storage type to standard magnetic.

Correct Answer: B

A - Incorrect. An RDS read replica will not scale the storage for the DB instance. RDS read replicas offload read requests for data.

For more information about RDS read replicas, see Working with DB instance read replicas.

B - Correct. With RDS storage autoscaling, you can set the desired maximum storage limit. Autoscaling will manage the storage size. RDS storage autoscaling monitors actual storage consumption and then scales capacity automatically when actual utilization approaches the provisioned storage capacity.

For more information about storage autoscaling, see Managing capacity automatically with Amazon RDS storage autoscaling.

C - Incorrect. A Multi-AZ deployment for the DB instance will launch another DB instance in another subnet to make the database highly available. This deployment does not scale the storage.

For more information about Multi-AZ deployments, see Configuring and managing a Multi-AZ deployment.

D - Incorrect. A change in the type of storage could affect performance, but it will not scale the amount of data that can be stored on the DB instance.

For more information about RDS storage options, see Amazon RDS DB instance storage.
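Storage autoscaling is enabled by setting a maximum storage ceiling on the DB instance. The dict below mirrors the arguments you would pass to the boto3 `modify_db_instance` call; the instance identifier and sizes are assumptions.

```python
# Sketch of the modify_db_instance arguments that turn on RDS storage
# autoscaling. Setting MaxAllocatedStorage above the current allocation
# lets RDS grow the volume automatically. Values are placeholders.
autoscaling_params = {
    "DBInstanceIdentifier": "website-db",  # hypothetical instance name
    "MaxAllocatedStorage": 1000,           # ceiling in GiB for automatic growth
    "ApplyImmediately": True,
}
# rds_client.modify_db_instance(**autoscaling_params)  # not executed here
```

After this change, RDS monitors free space and scales capacity up to the ceiling with no manual intervention.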

Q 19

  • A company has an application that runs on a fleet of Amazon EC2 instances that run Microsoft Windows. The company needs to apply patches to the operating system each month. The company uses AWS Systems Manager Patch Manager to apply the patches on a schedule. When the fleet is being patched, users of the application report delayed service responses.

What should the company do to MINIMIZE the impact on users during patch deployment?

A Change the number of instances patched at any one time to 100%

B Create a snapshot of each instance in the fleet by using a Systems Manager Automation runbook before the start of the patch process.

C Configure the maintenance window to patch 10% of the instances in the patch group at a time.

D Create a patched Amazon Machine Image (AMI). Configure the maintenance windows option to deploy the patched AMI on only 10% of the fleet at a time.

Correct Answer: C

  • Explanation

A - Incorrect. With this approach, all the instances are patched at the same time. If a reboot is necessary, all the instances will reboot at the same time. This solution would have a greater impact on the users.

For more information about how to apply rate controls during patching, see Creating a maintenance window for patching (console).

B - Incorrect. The creation of a snapshot is a good safeguard. However, a snapshot does not reduce the risk of an outage while patches are applied.

For more information about snapshots, see Amazon EC2 backup and recovery with snapshots and AMIs.

C - Correct. A rate control concurrency of 10% ensures that only 1 out of 10 instances will get patched at a time. This process will leave enough capacity to run most workloads without interruption. You can set rate control as an absolute number or a percentage.

For more information about how to apply rate controls during patching, see Creating a maintenance window for patching (console).

D - Incorrect. Patch Manager applies patches. Patch Manager does not deploy AMIs.

For more information about AMIs, see Instances and AMIs.
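The arithmetic behind the 10% rate control can be sketched directly: with a percentage concurrency, the maintenance window patches a fixed fraction of the fleet per batch (rounded down, with a minimum of one instance), so most capacity stays in service at any moment.

```python
import math

# Rough model of maintenance-window rate control with a percentage
# concurrency: instances patched per batch, and batches needed.
def patch_batches(fleet_size: int, concurrency_pct: int = 10):
    per_batch = max(1, math.floor(fleet_size * concurrency_pct / 100))
    batches = math.ceil(fleet_size / per_batch)
    return per_batch, batches

print(patch_batches(50))  # → (5, 10): 5 instances at a time, 10 batches
```

For a 50-instance fleet, 90% of capacity remains serving users throughout the patch window, which is why the rolling 10% approach minimizes user impact.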

Q 20

  • A SysOps administrator attached the following IAM policy to a developer’s IAM user account:
```
{

    "Version": "2012-10-17",

    "Statement": {

        "Effect": "Allow",

        "Action": "dynamodb:GetItem",

        "Resource": "*",

        "Condition": {

            "DateGreaterThan": {

                "aws:CurrentTime": "2020-07-01T00:00:00Z"

            },

            "DateLessThan": {

                "aws:CurrentTime": "2020-12-31T23:59:59Z"

            },

            "StringEquals": {

                "aws:SourceVpc": "vpc-111bbb22"

            }

        }

    }

}
```

Which permission will the developer have for using GetItem?

A Access is allowed on or between July 1, 2020, and December 31, 2020 (UTC), and if the request is initiated from vpc-111bbb22

B Access is allowed on or between July 1, 2020, and December 31, 2020 (UTC), or if the request is initiated from vpc-111bbb22

C Access is allowed on or between July 1, 2020, and December 31, 2020 (UTC), and if the request uses a VPC endpoint in vpc-111bbb22

D Access is allowed on or between July 1, 2020, and December 31, 2020 (UTC), or if the request uses a VPC endpoint in vpc-111bbb22

Correct Answer: A

  • Explanation

A - Correct. This IAM policy includes multiple conditions. One condition allows access to actions based on date and time. Another condition requires the API call to originate from a specific VPC. This policy grants the permissions necessary to complete this action from the AWS API or AWS CLI only.

For more information about how to create a condition with multiple keys or values, see Conditions with multiple context keys or values.

B - Incorrect. All conditions must evaluate as true for the condition block to match. This response incorrectly states that satisfying either condition alone causes the IAM policy to grant access.

C - Incorrect. The aws:SourceVpc condition is related to the VPC from where the API call initiates.

D - Incorrect. The aws:SourceVpc condition is related to the VPC from where the API call initiates. Additionally, multiple conditions all need to evaluate as true for the condition block to match.
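The AND semantics of the condition block can be modeled in a few lines of plain Python (no AWS calls): the request must fall inside the date window and originate from the named VPC for `GetItem` to be allowed.

```python
from datetime import datetime, timezone

# Minimal model of the policy's condition block: every condition operator
# must match (logical AND) for access to be allowed.
START = datetime(2020, 7, 1, tzinfo=timezone.utc)            # DateGreaterThan
END = datetime(2020, 12, 31, 23, 59, 59, tzinfo=timezone.utc)  # DateLessThan

def get_item_allowed(current_time: datetime, source_vpc: str) -> bool:
    in_window = START < current_time < END       # both date conditions
    from_vpc = source_vpc == "vpc-111bbb22"      # StringEquals on aws:SourceVpc
    return in_window and from_vpc                # all conditions must hold

aug = datetime(2020, 8, 15, tzinfo=timezone.utc)
print(get_item_allowed(aug, "vpc-111bbb22"))  # → True
print(get_item_allowed(aug, "vpc-999xyz00"))  # → False (wrong VPC)
```

A request in the right window from the wrong VPC, or from the right VPC outside the window, is denied, which is why answer A (the AND reading) is correct.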

Tuts Node Ultimate AWS 2022 Questions

EC2 Quiz

Question 1

How do you change the EC2 instance type in the AWS console?

1. By doing a right-click and select "Instance settings" then select "Change instance type"
2. By stopping the EC2 instance, then doing a right-click and select "Instance settings" then select "Change instance type". Finally, start the EC2 instance

Correct Answer

  2. By stopping the EC2 instance, then doing a right-click and select “Instance settings” then select “Change instance type”. Finally, start the EC2 instance

Question 2

You would like to make sure your EC2 instances have the highest performance while talking to each other as you are performing big data analysis. Which placement group should you choose?

1. Cluster
2. Spread
3. Partition

Correct Answer

  1. Cluster

Question 3

You have an EC2 instance where Termination Protection is enabled and Shutdown Behavior is set to Terminate. From within the EC2 instance, you shut down the OS using shutdown. What will happen?

1. The EC2 instance will not shut down
2. The EC2 instance will get terminated
3. The EC2 instance will be in a "stopped" state

Correct Answer

  2. The EC2 instance will get terminated

Question 4

You’re trying to launch an EC2 instance and you’re getting the following error InstanceLimitExceeded. What can you do to resolve this issue?

1. Launch the EC2 instance in a different AZ because it's a vCPU limit on a per-AZ level
2. Launch the EC2 instance in a different AWS Region because it's a vCPU limit on a per-region level
3. Change the AMI used to launch the EC2 instance as AMIs are regional
4. AWS does not have enough on-demand capacity regarding the particular AZ

Correct Answer

  2. Launch the EC2 instance in a different AWS Region because it’s a vCPU limit on a per-region level

Explanation

When you launch an EC2 instance and you get the error InstanceLimitExceeded, you have reached the maximum number of vCPUs allowed in that AWS Region. Either launch the EC2 instance in a different AWS Region or contact AWS Support to request a limit increase for that Region.

Question 5

You are getting an error InsufficientInstanceCapacity while trying to launch an EC2 instance. What’s the problem?

1. You need to request a service limit increase in the AWS Support page for the AZ you're launching the instance into
2. AWS does not have enough on-demand capacity regarding the particular AWS Region
3. AWS does not have enough on-demand capacity regarding the particular AZ
4. You need to request a service limit increase in the AWS Support page for the Region you're launching the instance into

Correct Answer

  3. AWS does not have enough on-demand capacity regarding the particular AZ

Explanation

https://aws.amazon.com/premiumsupport/knowledge-center/ec2-insufficient-capacity-errors/

Question 6

After launching an EC2 instance, its state goes from pending to terminated immediately. What is NOT a reason for this error?

1. You've reached your EBS volume limit
2. An EBS snapshot is corrupted
3. The root EBS volume is encrypted and you do not have the permissions to the KMS key for decryption
4. You've reached the instance limit per region assigned to your account

Correct Answer

  4. You’ve reached the instance limit per region assigned to your account

Question 7

You plan on running an open-source MongoDB database year-round on EC2. Which instance launch mode should you choose?

1. On-Demand Instance
2. Reserved Instance
3. Spot Instance

Correct Answer

  2. Reserved Instance

Question 8

You’re trying to SSH into your EC2 instance and you are facing the following error Connection timed out. Which of the following is NOT a reason for this error?

1. Your .pem file on your Linux machine doesn't have 400 permissions
2. EC2 instance doesn't have a public IPv4
3. Route Tables is missing routes
4. Security Group or NACL is not configured correctly

Correct Answer

  1. Your .pem file on your Linux machine doesn’t have 400 permissions

Question 9

Your t2.small EC2 instance constantly runs out of CPU credits and therefore the performance is degraded. What is NOT a solution for this problem?

1. Upgrade the EC2 instance type to t2.medium or higher
2. Purchase CPU credits for your EC2 instances
3. Turn on t2 Unlimited
4. Upgrade to non-t* type of EC2 instance

Correct Answer

  2. Purchase CPU credits for your EC2 instances
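The credit arithmetic behind this question can be sketched in a few lines of Python. The earn rates in the table below are assumptions for illustration; check the EC2 documentation for the authoritative per-type values:

```python
# CPU-credit arithmetic for burstable (t2/t3) instances.
# NOTE: earn rates here are illustrative assumptions, not authoritative.
EARN_RATE_PER_HOUR = {"t2.micro": 6, "t2.small": 12, "t2.medium": 24}

def net_credits(instance_type, vcpus, avg_cpu_percent, hours):
    """Credits earned minus credits spent over `hours`.

    One CPU credit = one vCPU at 100% utilization for one minute,
    so a vCPU averaging p% for an hour spends 60 * p/100 credits.
    """
    earned = EARN_RATE_PER_HOUR[instance_type] * hours
    spent = vcpus * 60 * (avg_cpu_percent / 100) * hours
    return earned - spent

# A t2.small (1 vCPU) averaging 30% CPU spends 18 credits/hour but only
# earns 12, so it drains its balance and eventually throttles.
print(net_credits("t2.small", 1, 30, 1))  # → -6.0
```

This is why the fixes are structural (bigger instance, t2 Unlimited, or a non-burstable family): there is no mechanism to buy extra credits outright.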

Question 10

You have installed Unified CloudWatch Agent on an EC2 instance to collect custom metrics from your EC2 instance. You want to know individual processes running on your EC2 instance and their system utilization. What would you use?

1. Configure Unified CloudWatch Agent with StatsD protocol
2. Configure Unified CloudWatch Agent with collectd protocol
3. Configure Unified CloudWatch Agent with procstat plugin

Correct Answer

  3. Configure Unified CloudWatch Agent with procstat plugin
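A minimal sketch of the relevant section of the agent configuration file, assuming you want per-process CPU and resident memory for a process named nginx (the process name and measurement list are illustrative):

```json
{
  "metrics": {
    "metrics_collected": {
      "procstat": [
        {
          "exe": "nginx",
          "measurement": ["cpu_usage", "memory_rss"]
        }
      ]
    }
  }
}
```

The procstat plugin matches processes by `exe`, `pattern`, or `pid_file` and publishes the selected measurements as custom metrics per matched process.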

Question 11

You want to stop your EC2 instance and at the same time, you don’t want to lose the memory state, processes, etc. What would you do?

1. Terminate
2. Stop
3. Hibernate
4. Reboot

Correct Answer

  3. Hibernate

Question 12

You have an application that is known to perform memory leaks on EC2 instances and therefore you would like to monitor the EC2 instance’s RAM using CloudWatch. How can you achieve this?

1. Enable EC2 detailed monitoring
2. Push RAM as a custom metric using the Unified CloudWatch Agent
3. Use EC2 basic monitoring

Correct Answer

  2. Push RAM as a custom metric using the Unified CloudWatch Agent

AMI Quiz

Question 1

You are launching an EC2 instance in us-east-1 using AWS Lambda in us-east-1 using this Python script snippet:

python:

ec2.create_instances(ImageId='ami-0dc2d3e4c0f9ebd18', MinCount=1, MaxCount=1)

It works well, so you decide to deploy your AWS Lambda function in us-west-1 as well. There, the function does not work and fails with InvalidAMIID.NotFound error. What’s the problem?

1. The new Lambda function is missing IAM permissions
2. AMI is region locked and the same AMI ID can not be used across regions
3. The AMI needs to first be shared with another region. The same AMI ID can then be used

Correct Answer

  2. AMI is region locked and the same AMI ID can not be used across regions

Question 2

What are the steps required to migrate an EC2 instance to another AZ?

1. By doing right-click and select "Move", then select the desired AZ
2. Create an AMI from the EC2 instance, then use this AMI to create a new EC2 instance in the desired subnet/AZ

Correct Answer

  2. Create an AMI from the EC2 instance, then use this AMI to create a new EC2 instance in the desired subnet/AZ

Question 3

You have an AMI that has an encrypted EBS Snapshot. You want to share this AMI with another AWS account. You have shared the AMI with the desired AWS account, but the other AWS account can’t use it. How would you solve this problem?

1. The other AWS account needs to logout and login again to refresh its credentials
2. You can't share an AMI that has an encrypted EBS Snapshot
3. You need to share the KMS CMK used to encrypt the AMI with the other AWS account

Correct Answer

  3. You need to share the KMS CMK used to encrypt the AMI with the other AWS account

Question 4

Your company has a critical application that’s hosted on 100s of EC2 instances. The security team has created an AMI that’s updated and has all the security patches installed. The DevOps team must create the EC2 instances from the AMI approved by the security team, but there’s no IAM policy to prevent them from using another AMI. What AWS service would you use to ensure that all the EC2 instances are launched using the approved AMI?

1. Amazon Inspector
2. Amazon GuardDuty
3. AWS Config
4. AWS Security Hub

Correct Answer

  3. AWS Config
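AWS Config ships a managed rule for exactly this check. A CloudFormation sketch of the rule, with a placeholder AMI ID, might look like the following (resource name and AMI ID are illustrative):

```yaml
# Flags any EC2 instance not launched from an AMI in the approved list.
ApprovedAmiRule:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: ec2-approved-ami-check
    Source:
      Owner: AWS
      SourceIdentifier: APPROVED_AMIS_BY_ID
    InputParameters:
      amiIds: ami-0123456789abcdef0   # comma-separated list of approved AMIs
    Scope:
      ComplianceResourceTypes:
        - AWS::EC2::Instance
```

Note that Config detects and reports non-compliant launches; pairing the rule with a remediation action (or an IAM guardrail) is what actually prevents drift.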

SSM & OpsWorks Quiz

Question 1

You have launched RHEL Linux EC2 instances, and you have attached the IAM role that allows them full access to SSM. Yet, you do not see them in the SSM Console. What’s likely the issue?

1. The SSM Service is down
2. The SSM Agent isn’t installed or running on the EC2 instances
3. You first need to register these instances using the AWS CLI
4. RHEL Linux EC2 instances aren’t compatible with SSM

Correct Answer

  2. The SSM Agent isn’t installed or running on the EC2 instances

Question 2

After discovering a security issue, you would like to apply OS patching across all your EC2 instances. What’s the best way of achieving this?

1. Use AWS Lambda
2. Use SSM
3. Use an automated script to SSH into all EC2 instances and apply the patch

Correct Answer

  2. Use SSM

Question 3

You would like to externally maintain the configuration values of your main database, to be picked up at runtime by your application. What’s the best place to store them to maintain control and version history?

1. SSM Parameter Store
2. Amazon S3
3. EBS
4. DynamoDB

Correct Answer

  1. SSM Parameter Store

Question 4

You have a fleet of EC2 instances and you want to apply a patch to all of them without SSHing into each EC2 instance. What’s the easiest way to patch this fleet of EC2 instances?

1. AWS Lambda
2. Amazon CloudWatch Events
3. SSM Resource Groups
4. SSM Run Command

Correct Answer

  4. SSM Run Command

Question 5

What would you use to automate patching of your managed instances (EC2, on-premises)?

1. SSM Run Command
2. SSM Patch Manager
3. SSM Inventory
4. SSM Automation

Correct Answer

  2. SSM Patch Manager

Question 6

Your operations team has deep knowledge of Chef recipes and would like to use them to manage your ever-growing fleet of EC2 instances. What do you recommend?

1. Use SSM and enable Chef Compatibility
2. Run Chef on an EC2 instance
3. Use AWS OpsWorks
4. Enable the Chef API for CloudWatch

Correct Answer

  3. Use AWS OpsWorks

EC2 High Availability and Scalability Quiz

Question 1

Scaling an EC2 instance from r4.large to r4.4xlarge is called:

1. Horizontal Scaling
2. Vertical Scaling

Correct Answer

  2. Vertical Scaling

Question 2

Running an application on an Auto Scaling Group that scales the number of EC2 instances in and out is called:

1. Horizontal Scaling
2. Vertical Scaling

Correct Answer

  1. Horizontal Scaling

Question 3

You would like to route incoming requests to different EC2 instances based on the hostname that was passed in an HTTP request. You should use:

1. Application Load Balancer
2. Network Load Balancer
3. Classic Load Balancer
4. NGINX Load Balancer

Correct Answer

  1. Application Load Balancer

Question 4

You have an application running on EC2 instances, all in an Auto Scaling Group. Your Auto Scaling Group is directly connected to your ELB and the health checks are linked. It is meant to scale up when the average CPU utilization goes over 60%. You’ve seen a very unequal load on your EC2 instances, some peaking at 100% while others are down at 10%. Therefore your ASG has not been scaling up to deal with the increased demand. What’s the issue?

1. Your ASG has suspended the scaling process
2. Your Load Balancer has Sticky Sessions enabled
3. Your CloudWatch metric is not working properly

Correct Answer

  2. Your Load Balancer has Sticky Sessions enabled

Question 5

You would like to expose a fixed static IP to your end-users for compliance purposes, so they can write firewall rules that will be stable and approved by regulators. Which Load Balancer should you use?

1. Application Load Balancer with Elastic IP attached to it
2. Classic Load Balancer
3. Network Load Balancer

Correct Answer

  3. Network Load Balancer

Question 6

Which of the following is NOT a valid target while you create a Target Group for your Application Load Balancer?

1. Private IPv4
2. Lambda Functions
3. Public IPv4
4. EC2 Instances

Correct Answer

  3. Public IPv4

Question 7

You have a Network Load Balancer that distributes traffic across a set of EC2 instances in us-east-1. You have 2 EC2 instances in us-east-1b AZ and 5 EC2 instances in us-east-1e AZ. You have noticed that the CPU utilization is higher in the EC2 instances in us-east-1b AZ. After more investigation, you noticed that the traffic is equally distributed across the two AZs. How would you solve this problem?

1. Enable Cross-Zone Load Balancing
2. Enable Sticky Sessions
3. Enable Access Logs

Correct Answer

  1. Enable Cross-Zone Load Balancing

Question 8

You want to create a custom application-based cookie in your Application Load Balancer. Which of the following can you use as a cookie name?

1. AWSALBAPP
2. AWSALBTG
3. APPUSERC
4. AWSALB

Correct Answer

  3. APPUSERC

Question 9

Some of your users’ requests are completely being lost due to the metric SpilloverCount being greater than 0. This is now happening daily. Your application is running on EC2 managed by an ASG. What should you do to prevent this issue from happening?

1. Pre-warm your Load Balancer
2. Monitor for BackendConnectionErrors and scale the ASG based on that metric
3. Enable ALB Access Logs and scale based on CloudWatch Logs
4. Monitor for SurgeQueueLength and scale the ASG based on that metric

Correct Answer

  4. Monitor for SurgeQueueLength and scale the ASG based on that metric

Question 10

You have a RAM-intensive application whose RAM usage increases with the number of client requests it receives. This application is behind an Elastic Load Balancer and managed by an ASG. How do you handle scaling for this application?

1. Scale based on CPU Utilization CloudWatch metric
2. Scale based on Number of Request Per Instance CloudWatch metric
3. Scale based on RAM Utilization CloudWatch metric
4. Scale based on Network In CloudWatch metric

Correct Answer

  2. Scale based on Number of Request Per Instance CloudWatch metric

Question 11

You have an Application Load Balancer backed by a set of EC2 instances. You have de-registered an EC2 instance, but the in-flight requests haven’t been completed. What would you configure in the ALB to give time to the in-flight requests to complete successfully?

1. Deregistration Delay
2. Cross-Zone Load Balancing
3. ELB Health Checks
4. Sticky Sessions

Correct Answer

  1. Deregistration Delay

Question 12

You have a fleet of EC2 instances behind an ALB. Each EC2 instance needs time to warm up before it can receive its full share of requests. How would you linearly increase the traffic to each EC2 instance?

1. Configure Connection Draining on your Targets
2. Use AWS Lambda to linearly increase the traffic to your EC2 instances
3. Configure Slow Start Mode in your Target Group
4. Check ALB Health Checks

Correct Answer

  3. Configure Slow Start Mode in your Target Group

Question 13

Which of the following is NOT a supported request routing algorithm in Elastic Load Balancer?

1. Least Outstanding Requests
2. BGP
3. Flow Hash
4. Round Robin

Correct Answer

  2. BGP

Question 14

A company has an ASG where random EC2 instances suddenly crashed in the past month. They can’t troubleshoot why the EC2 instances crash as the ASG terminates the unhealthy EC2 instances and replaces them with new EC2 instances. What will you do to troubleshoot the issue and prevent unhealthy EC2 instances from being terminated by the ASG?

1. Use AWS Lambda to pause the EC2 instance before terminating
2. Use ASG Lifecycle Hooks to pause the EC2 instance in the Terminating state for troubleshooting
3. Use CloudWatch Logs to troubleshoot the issue

Correct Answer

  2. Use ASG Lifecycle Hooks to pause the EC2 instance in the Terminating state for troubleshooting

Question 15

The following are valid Auto Scaling Group Health Checks, EXCEPT:

1. EC2 Status Checks
2. Route 53 Health Checks
3. ELB Health Checks
4. Custom Health Checks

Correct Answer

  2. Route 53 Health Checks

Elastic Beanstalk Quiz

Question 1

Your application has complex runtime and OS dependencies and is taking a really long time to be deployed on Elastic Beanstalk. What should you do to improve the deployment time?

1. Use a faster Internet connection
2. Package and deploy a golden AMI
3. Use AWS Lambda to resolve dependencies quicker

Correct Answer

  2. Package and deploy a golden AMI

Question 2

You have two message queueing applications hosted on on-premises servers that you want to migrate to AWS. You want a fully managed AWS service to quickly host your applications, so you have decided to use Elastic Beanstalk. Which environment tier would you use that’s suitable for your applications?

1. Worker Environment Tier
2. Web Server Environment Tier

Correct Answer

  1. Worker Environment Tier

CloudFormation Quiz

Question 1

You would like to link the success of your CloudFormation template to the success of installing and properly configuring launched EC2 instances. How can you achieve this?

1. Use DependsOn
2. Use cfn-init to let CloudFormation know of the success status
3. Use WaitCondition and cfn-signal to let CloudFormation know of the success status

Correct Answer

  3. Use WaitCondition and cfn-signal to let CloudFormation know of the success status

Question 2

Your CloudFormation stacks fail with Failed to receive a signal from 1 out of 2 instances. What’s the likely problem?

1. One EC2 instance failed to send the cfn-signal before the timeout
2. One EC2 instance sent a cfn-signal indicating its failure
3. The CloudFormation Security Group blocked a signal from reaching it

Correct Answer

  1. One EC2 instance failed to send the cfn-signal before the timeout

Question 3

You would like to troubleshoot why an EC2 instance keeps on failing to correctly bootstrap itself using the cfn-init signal. Each time, it fails, and then sends the cfn-signal to CloudFormation. Therefore, CloudFormation fails and deletes the newly created resources. What should you do?

1. Remove the cfn-signal part and SSH into the instance
2. Set OnFailure=DO_NOTHING
3. Increase the WaitCondition timeout
4. Suspend the CloudFormation processes

Correct Answer

  2. Set OnFailure=DO_NOTHING

Explanation

You can configure the CloudFormation stack to do nothing when there’s a failure so you can troubleshoot the error. To do so, configure the “OnFailure” option while creating the stack.

Question 4

You have put a great deal of expertise into configuring an ELB properly to comply with your organizational policies using CloudFormation. You would like your snippet of code to be re-used by the teams who need to provision an ELB. What should you do?

1. Package the code as a CloudFormation dependency and have people resolve it using a package manager
2. Upload the CloudFormation template on S3 and tell people to use it as a CloudFormation Nested Stack
3. Upload the CloudFormation template on GitHub and tell people to copy and paste your code into their stacks

Correct Answer

  2. Upload the CloudFormation template on S3 and tell people to use it as a CloudFormation Nested Stack

Question 5

What CloudFormation feature helps you analyze the upcoming changes on a CloudFormation stack update without actually executing them?

1. ChangeSets
2. cfn-init
3. cfn-signal
4. Nested Stacks

Correct Answer

  1. ChangeSets

Question 6

You are using CloudFormation to deploy test environments on the fly, and these environments include an RDS database. You want the database to go away on stack deletion, but the company policy is to keep the data for further analysis if needed. What should you do?

1. Enable Termination Protection on the stack
2. Add a DeletionPolicy=Retain to the RDS resource
3. Add a DeletionPolicy=Snapshot to the RDS resource
4. Add a DeletionPolicy=Delete to the RDS resource

Correct Answer

  3. Add a DeletionPolicy=Snapshot to the RDS resource

Question 7

How can you prevent Stacks from being accidentally deleted?

1. Apply an IAM policy to all users to prevent CloudFormation stack deletion API
2. Use Termination Protection
3. Protect the CloudFormation templates with passwords stored in SSM Parameter Store

Correct Answer

  2. Use Termination Protection

Question 8

A SysOps Administrator created an AWS CloudFormation template for the first time. The stack failed with a status of ROLLBACK_COMPLETE. The Administrator identified and resolved the template issue that caused the failure. How should the Administrator continue with the stack deployment?

1. Delete the failed stack and create a new stack
2. Execute a ChangeSet on the failed stack
3. Perform an update-stack action on the failed stack
4. Run a validate-template command

Correct Answer

  1. Delete the failed stack and create a new stack

Question 9

You work for a company that uses AWS Organizations to manage multiple AWS accounts. You want to create a CloudFormation stack in multiple AWS accounts in multiple AWS Regions. What is the easiest way to achieve this?

1. CloudFormation ChangeSets
2. AWS Organizations
3. AWS CLI
4. CloudFormation StackSets

Correct Answer

  4. CloudFormation StackSets

Question 10

What CloudFormation feature can you use to detect changes made to your stack resources outside CloudFormation?

1. ChangeSets
2. CloudFormation Drift
3. StackSet
4. Pseudo Parameters

Correct Answer

  2. CloudFormation Drift

Question 11

You have 2 CloudFormation templates. Template A contains the networking components (VPC, Subnets, SGs, …) and template B contains the application infrastructure (EC2 instances, ALB, EBS, …). You want to attach the SGs in template A to the EC2 instances in template B. How would you achieve this task?

1. Export the SGs Ids in the Outputs section from template A, then import exported values in template B using Fn::ImportValue
2. You can’t do this. You have to merge template A and template B in one template
3. Write a Lambda function that takes Security Groups IDs from template A and injects them into template B

Correct Answer

  1. Export the SGs Ids in the Outputs section from template A, then import exported values in template B using Fn::ImportValue

Question 12

Which of the following are NOT valid CloudFormation Pseudo Parameters?

1. AWS::AccountId
2. AWS::Region
3. AWS::AccountName
4. AWS::StackId

Correct Answer

  3. AWS::AccountName

Question 13

You have a CloudFormation stack that has a WaitCondition and an EC2 instance that sends several signals to the WaitCondition using cfn-signal. The timeout expired and the stack creation failed because CloudFormation hasn’t received any signals. You have reviewed and found that CloudFormation helper scripts are installed and working properly, also the EC2 instance has a connection to the Internet and can reach the CloudFormation service successfully. How do you further troubleshoot the issue?

1. Delete the stack, then re-create it again
2. You should use cfn-init to send the signals instead of cfn-signal
3. Connect to the EC2 instance and view the log files /var/log/cloud-init.log and /var/log/cfn-init.log

Correct Answer

  3. Connect to the EC2 instance and view the log files /var/log/cloud-init.log and /var/log/cfn-init.log

Question 14

You have created a CloudFormation stack that has a lot of resources (ASG, ALB, EC2, RDS DB, S3 buckets, …). One of your teammates doesn’t know that the ALB has been created as part of the CloudFormation stack, so he deleted the ALB and created a new ALB. Later on, you attempted to update the stack, the update failed and the stack can’t be rolled back with the following error UPDATE_ROLLBACK_FAILED. What would you do to resolve this issue?

1. Delete the stack, then re-create the stack again
2. Use CloudFormation Drift to resolve the issue
3. You can fix the errors manually (re-create the deleted ALB with the same configuration) or you can skip the ALB while updating the stack
4. Contact AWS Support

Correct Answer

  3. You can fix the errors manually (re-create the deleted ALB with the same configuration) or you can skip the ALB while updating the stack
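The signaling pattern from Question 1 looks roughly like this in a template. A CreationPolicy with a ResourceSignal is the common way to pair cfn-signal with an EC2 resource (a standalone WaitCondition resource works similarly); the AMI ID, instance type, and cfn-signal path (Amazon Linux) are assumptions:

```yaml
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M   # fail the stack if no signal arrives within 15 minutes
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          yum install -y aws-cfn-bootstrap
          # ... bootstrap and configure the instance here ...
          # Report the bootstrap exit code back to CloudFormation
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
            --resource WebServer --region ${AWS::Region}
```

Until the signal arrives (or the timeout expires), the resource stays in CREATE_IN_PROGRESS, which is what ties stack success to instance configuration success.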
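For Question 6, the DeletionPolicy attribute sits directly on the resource, not in Properties. A minimal sketch with placeholder engine and sizing values:

```yaml
Resources:
  TestDatabase:
    Type: AWS::RDS::DBInstance
    # On stack deletion, take a final snapshot instead of dropping the data
    DeletionPolicy: Snapshot
    Properties:
      Engine: mysql               # placeholder values below
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: change-me-placeholder
```

Retain would keep the live (billed) database running; Snapshot satisfies both requirements: the instance goes away with the stack while the data survives as a snapshot.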
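The cross-stack wiring from Question 11 can be sketched as two fragments; the export name, logical IDs, and AMI ID are illustrative (export names must be unique within an account and Region):

```yaml
# Template A: export the security group ID
Outputs:
  WebSgId:
    Value: !Ref WebSecurityGroup
    Export:
      Name: network-WebSgId

# Template B: import the exported value into the instance definition
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder
      SecurityGroupIds:
        - !ImportValue network-WebSgId
```

One caveat worth remembering for the exam: a stack that exports a value cannot be deleted, and the export cannot be changed, while another stack imports it.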

EC2 Storage & Data Management EBS and EFS Quiz

Question 1

Which of the following EBS volume types can NOT be used as a boot volume?

1. st1 / sc1
2. gp2 / gp3
3. io1 / io2

Correct Answer

  1. st1 / sc1

Question 2

Your 8 GB gp2 volume is frequently bursting and quickly running out of IOPS. What can you do to increase its performance?

1. Purchase more bursting credits
2. Enable unlimited burst mode
3. Increase the volume size

Correct Answer

  3. Increase the volume size

Question 3

You would like to increase your EBS drive size while attached to an EC2 instance. How can you do this with minimal operational overhead?

1. Stop the EC2 instance, snapshot the volume and create a new volume with a greater size from the snapshot. Attach the new volume to the EC2 instance and start it
2. Do not stop the EC2 instance, change the volume size on the fly, and after the resizing is done, from the OS, repartition your drive to take advantage of the greater capacity
3. Do not stop the EC2 instance, change the volume size on the fly, and after the resizing is done, do not do anything as the OS will automatically leverage the increased size

Correct Answer

  2. Do not stop the EC2 instance, change the volume size on the fly, and after the resizing is done, from the OS, repartition your drive to take advantage of the greater capacity

Question 4

How do you encrypt an unencrypted volume that is attached to an EC2 instance?

1. Do it straight from AWS Console, while the EC2 instance is running
2. Do it straight from AWS Console, after stopping the EC2 instance
3. Stop the EC2 instance, take a snapshot of the EBS volume, create a new EBS volume from the snapshot and tick “encrypt”. Attach the encrypted volume to the EC2 instance

Correct Answer

  3. Stop the EC2 instance, take a snapshot of the EBS volume, create a new EBS volume from the snapshot and tick “encrypt”. Attach the encrypted volume to the EC2 instance

Question 5

You have a set of EBS snapshots that you want to be fully initialized at creation time for maximum performance without the need to run any commands in the EC2 instance OS. What can you use to achieve this?

1. Enable Fast Snapshot Restore on the EBS snapshot
2. Use AWS Lambda to initialize the EBS snapshot
3. Use Amazon Data Lifecycle Manager

Correct Answer

  1. Enable Fast Snapshot Restore on the EBS snapshot

Question 6

You have an io1 EBS volume that’s attached to an EC2 instance. Later on, you want another EC2 instance to access the same data on the EBS volume. What is the best approach to achieve this?

1. Create a new EBS volume, then copy the data to the new EBS volume and attach it to the new EC2 instance
2. Enable EBS Multi-Attach
3. Do nothing, io1 EBS volumes can be attached to multiple EC2 instances

Correct Answer

  2. Enable EBS Multi-Attach

Question 7

You have an 8 TB EBS volume. Currently, the size of the data stored on the EBS volume is 2 TB. What can you do to decrease the EBS volume size to decrease your costs?

1. From AWS Console, resize your EBS volume and decrease its size
2. From inside your EC2 instance, repartition your OS drives, then resize your EBS volume and decrease its size
3. You can’t decrease the size of your EBS volumes. Create a new smaller EBS volume, then migrate your data to it
4. Do nothing, as AWS automatically increases/decreases your EBS volumes based on your usage

Correct Answer

  3. You can’t decrease the size of your EBS volumes. Create a new smaller EBS volume, then migrate your data to it

Question 8

You would like to have the same data being accessible as an NFS drive across all Availability Zones on all your EC2 instances. What do you recommend?

1. Mount an S3 bucket
2. Mount an EFS file system
3. Mount an EBS volume
4. Mount an Instance Store

Correct Answer

  2. Mount an EFS file system

Question 9

You have an EFS file system that’s shared across 1000s of EC2 instances used for big data analysis. The EFS file system has a lot of data that is not frequently used, and you want to continuously move this data to the EFS-IA storage tier to reduce costs. What’s the most effective way to do this?

1. Create a scheduled CloudWatch Event that runs daily and invokes an AWS Lambda that checks files in the EFS Standard and moves them to EFS-IA
2. Use Amazon Data Lifecycle Manager
3. Do nothing, AWS automatically moves infrequently accessed files from EFS Standard to EFS-IA
4. Configure EFS Lifecycle Management

Correct Answer

  4. Configure EFS Lifecycle Management

Question 10

Which of the following can NOT be used to secure access to files/data stored on an EFS file system?

1. AWS IAM & Security Groups
2. Amazon Cognito
3. NFS-Level Permissions (users, groups)
4. EFS Access Points

Correct Answer

  2. Amazon Cognito

Question 11

How can you encrypt an unencrypted EFS file system?

1. From AWS Console, select your EFS file system and enable encryption
2. Contact AWS Support and request an EFS Encryption
3. Create a new encrypted EFS file system, then migrate data using DataSync from the un-encrypted EFS file system
4. Do nothing, AWS automatically encrypts all EFS file systems

Correct Answer

  3. Create a new encrypted EFS file system, then migrate data using DataSync from the un-encrypted EFS file system
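Question 2's answer follows directly from the gp2 baseline formula: 3 IOPS per GiB, with a floor of 100 and a cap of 16,000 IOPS. A small sketch of the arithmetic:

```python
def gp2_baseline_iops(size_gib):
    """Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
    floored at 100 and capped at 16,000."""
    return min(16_000, max(100, 3 * size_gib))

# An 8 GiB volume sits at the 100 IOPS floor, so it leans heavily on
# burst credits; growing the volume raises the baseline linearly.
print(gp2_baseline_iops(8))      # → 100
print(gp2_baseline_iops(1000))   # → 3000
print(gp2_baseline_iops(6000))   # → 16000
```

This is why increasing the volume size is the fix: burst credits cannot be purchased, and gp2 has no unlimited-burst mode (that concept belongs to burstable EC2 instances, not EBS).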

S3 Fundamentals Quiz

Question 1

I tried creating an S3 bucket named “dev” but it didn’t work. This is a new AWS Account and I have no buckets at all. What is the cause?

1. I’m missing IAM permissions to create an S3 bucket
2. Bucket names must be globally unique and “dev” is already taken

Correct Answer

  2. Bucket names must be globally unique and “dev” is already taken

Question 2

You have enabled versioning in your S3 bucket which already contains a lot of files. Which version will the existing files have?

1. 1
2. 0
3. -1
4. null

Correct Answer

  4. null

Question 3

Your client wants to make sure that file encryption is happening in S3, but he wants to fully manage the encryption keys and never store them in AWS. You recommend him to use ………

1. SSE-S3
2. SSE-KMS
3. SSE-C
4. Client-Side Encryption

Correct Answer

  3. SSE-C

Question 4

You have a website that loads files from an S3 bucket. When you try the URL of the files directly in your Chrome browser it works, but when the website you’re visiting tries to load these files it doesn’t. What’s the problem?

1. The bucket policy is wrong
2. CORS is wrong
3. The IAM policy is wrong
4. Encryption is wrong

Correct Answer

  2. CORS is wrong
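The fix for Question 4 is a CORS configuration on the bucket allowing the website's origin. A minimal sketch in the JSON shape used by the S3 console's CORS editor (the origin is a placeholder; the AWS CLI's put-bucket-cors wraps this array in a CORSRules key):

```json
[
  {
    "AllowedOrigins": ["https://www.example.com"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```

Direct browser navigation works without CORS because it is a same-origin request from the browser's point of view; only cross-origin fetches made by another site's pages need these headers.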

S3 Storage and Data Management for SysOps Quiz

Question 1

You have enabled versioning and want to be extra careful when it comes to deleting files on an S3 bucket. What should you enable to prevent accidental permanent deletions? 1

Use a bucket policy 2

Enable MFA Delete 3

Encrypt the files 4

Disable versioning Correct Answer 2

Enable MFA Delete Question 2

You would like all your files in an S3 bucket to be encrypted by default. What is the optimal way of achieving this? 1

Enable Default Encryption 2

Use a bucket policy that forces HTTPS connections 3

Enable Versioning Correct Answer 1

Enable Default Encryption Question 3

You suspect that some of your employees try to access files in an S3 bucket that they don’t have access to. How can you verify this is indeed the case without them noticing? 1

Restrict their IAM policies and look at CloudTrail logs 2

Use a bucket policy 3

Enable S3 Access Logs and analyze them using Athena Correct Answer 3

Enable S3 Access Logs and analyze them using Athena Question 4

You want the content of an S3 bucket to be fully available in different AWS Regions. That will help your team perform data analysis at the lowest latency and cost possible. What S3 feature should you use? 1

Amazon CloudFront Distributions 2

S3 Versioning 3

S3 Cross-Region Replication 4

S3 Static Website Hosting Correct Answer 3

S3 Cross-Region Replication Question 5

You are looking to provide temporary URLs to a growing list of federated users to allow them to perform a file upload on your S3 bucket to a specific location. What should you use? 1

S3 Pre-signed URL 2

S3 Bucket Policies 3

IAM Users 4

S3 CORS Correct Answer 1

S3 Pre-signed URL Question 6

You are looking to generate reports on the replication and encryption status of your objects in the S3 bucket. What should you use? 1

S3 Access Logs 2

S3 Analytics 3

S3 Inventory Correct Answer 3

S3 Inventory Question 7

How can you automate the transition of S3 objects between their different tiers? 1

AWS Lambda 2

S3 Lifecycle Rules 3

CloudWatch Events Correct Answer 2

S3 Lifecycle Rules Question 8

You are looking to get recommendations for S3 Lifecycle Rules. How can you analyze the optimal number of days to move objects between different storage tiers? 1

S3 Inventory 2

S3 Lifecycle Rules Advisor 3

S3 Analytics Correct Answer 3

S3 Analytics Question 9

What must be done before enabling S3 Replication between two S3 buckets? 1

Both buckets must have S3 Versioning enabled 2

Both buckets must have S3 Access Logs enabled 3

Both buckets must be owned by the same AWS Account Correct Answer 1

Both buckets must have S3 Versioning enabled Question 10

How to enforce your users to upload only encrypted objects in your S3 bucket? 1

Use AWS Lambda to encrypt every file before uploading it into the S3 bucket 2

Enable S3 Default Encryption or use a Bucket Policy to deny any object uploads without encryption using policy condition s3:x-amz-server-side-encryption 3

Enable S3 Versioning Correct Answer 2

Enable S3 Default Encryption or use a Bucket Policy to deny any object uploads without encryption using policy condition s3:x-amz-server-side-encryption Question 11

Question 11

You have an S3 bucket on which you want to enable MFA Delete to prevent accidental deletions of objects in the bucket. You use the AWS CLI to enable MFA Delete, but an error occurs. What do you think is the reason for this error?

1. S3 Default Encryption must be enabled
2. You can't use the AWS CLI to enable MFA Delete; use the AWS Console
3. S3 Versioning must be enabled

Correct Answer: 3. S3 Versioning must be enabled

Question 12

Which of the following S3 bucket policy conditions can you use to restrict access to specific VPC Endpoints?

1. aws:SourceVpc
2. aws:SourceVpce
3. aws:SourceIp

Correct Answer: 2. aws:SourceVpce

Question 13

You host a shared large dataset in an S3 bucket. Hundreds of applications use your S3 bucket, which results in your S3 bucket policy becoming very complex and time-consuming to manage. What alternative can you use to simplify access to your shared dataset for the different applications?

1. S3 Access Logs
2. S3 Access Points
3. S3 Inventory
4. AWS IAM

Correct Answer: 2. S3 Access Points

Question 14

You have an S3 bucket with thousands of objects stored in it. You want to perform an update to each object in the bucket. What is the most effective approach?

1. Use S3 Inventory to get a list of all objects in your S3 bucket, then use S3 Select to process each file
2. Use S3 Inventory to get a list of all objects in your S3 bucket. Create an EC2 instance, upload the list to the EC2 instance, and process the objects there
3. Use S3 Batch Operations

Correct Answer: 3. Use S3 Batch Operations

Question 15

While you're uploading large files to an S3 bucket using Multi-part Upload, a lot of unfinished parts end up stored in the S3 bucket due to network issues. You are not using the unfinished parts and they cost you money. What is the best approach to remove these unfinished parts?

1. Use AWS Lambda to loop over each old/unfinished part and delete them
2. Use an S3 Lifecycle Policy to automate the deletion of old/unfinished parts
3. Request AWS Support to help you delete old/unfinished parts

Correct Answer: 2. Use an S3 Lifecycle Policy to automate the deletion of old/unfinished parts
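The lifecycle rule for cleaning up unfinished parts can be sketched as the configuration below. The rule ID and the 7-day window are illustrative values, not recommendations from the source.

```python
import json

# A rule with AbortIncompleteMultipartUpload tells S3 to abort (and free the
# storage of) multipart uploads that were never completed within the given
# number of days after they were initiated.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "abort-stale-multipart-uploads",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix: apply to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

print(json.dumps(lifecycle_configuration, indent=2))
```

With boto3, a dict shaped like this could be passed as the LifecycleConfiguration argument of put_bucket_lifecycle_configuration.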

Question 16

How can you be notified when an object is uploaded to your S3 bucket?

1. S3 Select
2. S3 Inventory
3. S3 Event Notifications
4. S3 Analytics

Correct Answer: 3. S3 Event Notifications

Question 17

You have a large dataset stored on-premises that you want to upload to an S3 bucket. The dataset is divided into 10 GB files. You have good bandwidth, but your Internet connection isn't stable. What is the best way to upload this dataset to S3 quickly while avoiding problems with the Internet connection?

1. Use S3 Multi-part Upload & S3 Transfer Acceleration
2. Use S3 Select & S3 Transfer Acceleration
3. Use Multi-part Upload only

Correct Answer: 1. Use S3 Multi-part Upload & S3 Transfer Acceleration

Question 18

You have an S3 bucket with S3 Versioning enabled. This S3 bucket has a lot of objects, and you would like to remove old object versions to reduce costs. What is the best approach to automate the deletion of these old object versions?

1. Use S3 Lifecycle Rules - Transition Actions
2. Use S3 Lifecycle Rules - Expiration Actions
3. Use S3 Access Logs

Correct Answer: 2. Use S3 Lifecycle Rules - Expiration Actions
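An expiration action for old versions can be sketched as the rule below. The 30-day window and rule ID are illustrative placeholders: NoncurrentVersionExpiration deletes versions once they have been noncurrent for the given number of days.

```python
import json

# Lifecycle rule deleting object versions 30 days after they stop being
# the current version. Requires S3 Versioning on the bucket.
lifecycle_rules = {
    "Rules": [
        {
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }
    ]
}

print(json.dumps(lifecycle_rules, indent=2))
```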

Question 19

Which of the following is NOT a Glacier Deep Archive retrieval mode?

1. Standard (12 hours)
2. Bulk (48 hours)
3. Expedited (1-5 minutes)

Correct Answer: 3. Expedited (1-5 minutes)

Question 20

You have 3 S3 buckets: one source bucket A, and two destination buckets B and C in different AWS Regions. You want to replicate objects from bucket A to both buckets B and C. How would you achieve this?

1. Configure replication from bucket A to bucket B, then from bucket B to bucket C
2. Configure replication from bucket A to bucket B, then from bucket A to bucket C
3. Configure replication from bucket A to bucket C, then from bucket C to bucket B

Correct Answer: 2. Configure replication from bucket A to bucket B, then from bucket A to bucket C
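Two independent replication rules on the source bucket, one per destination, can be sketched as below. All names and ARNs are made up for illustration, and versioning must be enabled on all three buckets first.

```python
import json

# Replication configuration on source bucket A: one rule per destination.
# "Role" is the IAM role S3 assumes to replicate objects on your behalf.
replication_configuration = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-to-b",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::bucket-b"},
        },
        {
            "ID": "replicate-to-c",
            "Status": "Enabled",
            "Priority": 2,
            "Filter": {"Prefix": ""},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::bucket-c"},
        },
    ],
}

print(json.dumps(replication_configuration, indent=2))
```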

Question 21

Which of the following is NOT a Glacier retrieval mode?

1. Expedited (1-5 minutes)
2. Standard (3-5 hours)
3. Instant (10 seconds)
4. Bulk (5-12 hours)

Correct Answer: 3. Instant (10 seconds)

Question 22

For compliance reasons, your company has a policy mandating that database backups be retained for 4 years. It shouldn't be possible to erase them. What do you recommend?

1. S3 with Bucket Policies
2. EFS network drives with restrictive Linux permissions
3. Glacier Vaults with Vault Lock Policies

Correct Answer: 3. Glacier Vaults with Vault Lock Policies

Question 23

Which of the following is a serverless data analysis service that allows you to query data in S3?

1. S3 Analytics
2. Athena
3. Redshift
4. RDS

Correct Answer: 2. Athena

Question 24

You want to restore an archive from S3 Glacier, so you made a request and got a restore link. You tried the restore link after a few days, but it's not working. What is the problem here?

1. Restore links have an expiration date; you have to request another one
2. The generated restore link is corrupted
3. There's a problem with your Internet connection

Correct Answer: 1. Restore links have an expiration date; you have to request another one

Advanced Storage Section

Question 1

You need to move hundreds of Terabytes into Amazon S3, then process the data using a fleet of EC2 instances. You have a 1 Gbit/s broadband connection. You would like to move the data faster and possibly process it while in transit. What do you recommend?

1. Use your network
2. Use AWS Data Migration
3. Use Snowball Edge

Correct Answer: 3. Use Snowball Edge

Question 2

You want to expose virtually infinite storage for your tape backups. You want to keep the same backup software you're using and want an iSCSI-compatible interface. What do you use?

1. AWS Snowball
2. AWS Storage Gateway - File Gateway
3. AWS Storage Gateway - Volume Gateway
4. AWS Storage Gateway - Tape Gateway

Correct Answer: 4. AWS Storage Gateway - Tape Gateway

Question 3

You have hundreds of Terabytes that you want to migrate to AWS S3 as soon as possible. You tried to use your network bandwidth, and it will take around 3 weeks to complete the upload process. What is the recommended approach in this situation?

1. AWS Snowball Edge
2. AWS Storage Gateway - Volume Gateway
3. S3 Multi-part Upload
4. AWS Data Migration Service

Correct Answer: 1. AWS Snowball Edge

Question 4

You have a large dataset stored in S3 that you want to access from on-premises servers using the NFS or SMB protocol. You also want to authenticate access to these files through your on-premises Microsoft AD. What would you use?

1. AWS Storage Gateway - Volume Gateway
2. AWS Storage Gateway - File Gateway
3. AWS Storage Gateway - Tape Gateway
4. AWS Data Migration Service

Correct Answer: 2. AWS Storage Gateway - File Gateway

Question 5

You're having issues activating your Storage Gateway VM. You have checked that the time is correct and that the Storage Gateway VM is synchronizing its time with an NTP server, but it's still not working. What else can you check to resolve this issue?

1. Ensure that the Storage Gateway VM has port 80 opened
2. Ensure that the Storage Gateway VM has enough local storage

Correct Answer: 1. Ensure that the Storage Gateway VM has port 80 opened

Question 6

You're planning to migrate your company's infrastructure from on-premises to the AWS Cloud. You have an on-premises Microsoft Windows File Server that you want to migrate. What is the most suitable AWS service?

1. AWS Storage Gateway - File Gateway
2. Amazon FSx for Windows (File Server)
3. AWS Managed Microsoft AD

Correct Answer: 2. Amazon FSx for Windows (File Server)

Question 7

You're migrating your on-premises data hosted on a Windows File Server to AWS. You want a highly available and durable AWS service, so that in case of an AZ failure there is no impact on your data. What should you choose?

1. Amazon S3
2. Two FSx for Windows Single-AZ file systems with DFS Replication set up
3. FSx for Windows with Multi-AZ
4. Amazon RDS

Correct Answer: 3. FSx for Windows with Multi-AZ

CloudFront

Question 1

You have a static website hosted on an S3 Bucket. You have created a CloudFront Distribution that points to your S3 Bucket to better serve your requests and improve performance. After a while, you noticed that users can still access your website directly from the S3 Bucket. You want to enforce that users access the website only through CloudFront. How would you achieve that?

1. Send an email to your clients and tell them not to use the S3 endpoint
2. Configure your CloudFront Distribution and create an Origin Access Identity, then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution OAI user
3. Use S3 Access Points to redirect clients to CloudFront

Correct Answer: 2. Configure your CloudFront Distribution and create an Origin Access Identity, then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution OAI user
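The bucket policy side of the OAI setup can be sketched as below. The bucket name and OAI ID are placeholders; the policy allows only the OAI principal to read objects, so all traffic has to come through CloudFront.

```python
import json

# Hypothetical Origin Access Identity ID created on the distribution.
OAI_ID = "E2EXAMPLEOAI"

# Allow only the CloudFront OAI principal to read objects from the bucket.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-static-site/*",
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that newer CloudFront setups use Origin Access Control (OAC) with a service-principal-based policy instead of the legacy OAI user shown here.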

Question 2

You have a CloudFront Distribution that serves your website hosted on a fleet of EC2 instances behind an Application Load Balancer. All your clients are from the United States, but you found that some malicious requests come from other countries. What is the easiest and most cost-effective way to allow users from the US and block other countries?

1. Use NACLs and Security Groups to block certain countries
2. Create an AWS WAF web ACL, associate it with your CloudFront Distribution, then configure AWS WAF to block all countries except the US
3. Use CloudFront Geo Restriction

Correct Answer: 3. Use CloudFront Geo Restriction

Question 3

You are looking to analyze the global traffic patterns of your website, which is hosted in S3 and distributed by CloudFront. What should you use?

1. CloudFront Access Logs with Athena
2. CloudFront Trails with Athena
3. S3 Access Logs with Athena

Correct Answer: 1. CloudFront Access Logs with Athena

Question 4

Amazon CloudFront generates a set of reports about your CloudFront Distribution activity. Which of the following is NOT a valid report?

1. Access Logs Report
2. Cache Statistics Report
3. Popular Objects Report
4. Top Referrers Report

Correct Answer: 1. Access Logs Report

Question 5

You have a React Single Page Application hosted on an S3 Bucket and served through a CloudFront Distribution. You have made an update to your React application and pushed it to S3, but the old version is still cached at CloudFront, and clients still see the old version. You want the new update to be propagated immediately. What would you do?

1. Delete and create a new CloudFront Distribution
2. Use CloudFront Invalidation
3. Tell your clients to clear their browser cache or use Incognito Mode

Correct Answer: 2. Use CloudFront Invalidation

Question 6

When creating a new CloudFront Distribution, what provides the most caching efficiency while making sure users get Cache Behavior based on the "color" attribute in a cookie?

1. HTTP Methods
2. Headers
3. Cookies
4. Query String Parameters

Correct Answer: 3. Cookies

Databases for SysOps

Question 1

You manage many RDS DB instances and you want to be notified when there is a change in DB instance state, DB Parameter Groups, DB Security Groups, DB Snapshots, etc. What should you use?

1. CloudWatch Events
2. RDS Events & Event Subscriptions
3. Amazon EventBridge

Correct Answer: 2. RDS Events & Event Subscriptions

Question 2

You have a MySQL RDS DB instance encrypted using a KMS CMK. You have taken a manual snapshot that you want to share with another AWS Account. You have shared the encrypted DB snapshot, but the other account can't access it. What are you missing?

1. You have to share the KMS CMK used to encrypt the DB snapshot with the other AWS Account
2. You shared the snapshot with an incorrect AWS Account
3. You can't share an encrypted DB snapshot

Correct Answer: 1. You have to share the KMS CMK used to encrypt the DB snapshot with the other AWS Account

Question 3

A company has an application composed of API Gateway, a set of Lambda functions, and a PostgreSQL RDS DB instance. The PostgreSQL RDS DB instance is hosted in a private subnet, and the Lambda functions are given access to run inside the VPC so they can access the database. As your traffic grows, you start getting TooManyConnections errors from your PostgreSQL RDS DB instance. What is the best way to resolve this error?

1. Restart the PostgreSQL RDS DB instance
2. Contact AWS Support and request an increase of your AWS Lambda limit
3. Place an RDS Proxy in front of the PostgreSQL RDS DB instance

Correct Answer: 3. Place an RDS Proxy in front of the PostgreSQL RDS DB instance

Explanation: RDS Proxy handles cleaning up idle connections and managing connection pools, instead of you having to handle this manually in your Lambda functions and on the RDS database side.

Question 4

You have migrated a MySQL database from on-premises to RDS. You have a lot of applications and developers interacting with your database. Each developer has an IAM user in the company's AWS account. What is a suitable approach to give developers access to the MySQL RDS DB instance instead of creating a DB user for each one?

1. Enable IAM Database Authentication
2. Use Amazon Cognito
3. By default, IAM users have access to your RDS database

Correct Answer: 1. Enable IAM Database Authentication

Question 5

You have an un-encrypted RDS DB instance and you want to create Read Replicas. Can you configure the RDS Read Replicas to be encrypted?

1. Yes
2. No

Correct Answer: 2. No

Question 6

For your RDS database, you can have up to ......... Read Replicas.

1. 3
2. 7
3. 5

Correct Answer: 3. 5

Question 7

You have an application that stores its data in an RDS database. You're expecting your application to receive a large number of writes in the next 24 hours, so you have enabled Storage Auto Scaling for your RDS database to handle them. Your RDS database increased its storage at 1 P.M., but when it tries to scale again at 3 P.M. it fails. What do you expect is the reason for this?

1. You can only scale up your RDS storage once within 24 hours
2. You can only scale up your RDS storage once within 6 hours
3. You can only scale up your RDS storage once within 48 hours

Correct Answer: 2. You can only scale up your RDS storage once within 6 hours

Question 8

An analytics application is currently performing its queries against your main production database. These queries slow down the database, which impacts the main user experience. What should you do to improve the situation?

1. Add Read Replicas
2. Enable Multi-AZ
3. Run the analytics queries at night
4. Increase the RDS instance size

Correct Answer: 1. Add Read Replicas

Question 9

You want to enforce SSL connections to your PostgreSQL RDS database. What should you do?

1. Configure your Security Group
2. Configure Parameter Groups
3. Set up a proxy in front of your database and provide SSL termination

Correct Answer: 2. Configure Parameter Groups

Question 10

Sometimes your RDS database experiences failures, and you would like it to recover automatically when these failures happen. What should you use?

1. Use a Network Load Balancer in front of RDS
2. Add Read Replicas
3. Enable Multi-AZ

Correct Answer: 3. Enable Multi-AZ

Question 11

Your RDS backups are impacting your production database when they run. What can you do to improve the performance of your production database while backups are taken?

1. Enable Multi-AZ
2. Add Read Replicas
3. Configure backups to be asynchronous and use lower I/O

Correct Answer: 1. Enable Multi-AZ

Question 12

Which option of the Performance Insights dashboard should you use to figure out which SQL queries are affecting the performance of your database the most?

1. Waits
2. Users
3. SQL Statements
4. Hosts

Correct Answer: 3. SQL Statements

Question 13

You have an Aurora DB Cluster, and Aurora Read Replicas keep coming up as you have enabled Auto Scaling. Your developers are complaining that they cannot keep track of all the Read Replica endpoints, and their applications therefore do not use them. What do you recommend?

1. Create an AWS Lambda cron job that leverages the RDS API to update SSM Parameter Store with a full connection string
2. Use an Aurora Reader Endpoint
3. Disable Auto Scaling

Correct Answer: 2. Use an Aurora Reader Endpoint

Question 14

An Aurora DB Cluster has 5 Read Replicas: two db.r3.xlarge instances, two db.r3.4xlarge instances, and one db.r4.16xlarge. All the Read Replicas have the same priority, Tier 0. When there's a failover in the DB Cluster, which Read Replica will be promoted?

1. db.r4.16xlarge
2. db.r3.xlarge
3. db.r3.4xlarge

Correct Answer: 1. db.r4.16xlarge

Question 15

You have a production Aurora DB Cluster. You want to create a test environment that uses the same data as this prod DB Cluster. What is the most cost-effective way to do this?

1. Create a manual snapshot of the production DB Cluster, then use this snapshot to create a new DB Cluster
2. Create a new DB Cluster and use AWS Database Migration Service (DMS) to migrate data from the production DB Cluster
3. Use Aurora Database Cloning to create a new DB Cluster (clone) of the production DB Cluster

Correct Answer: 3. Use Aurora Database Cloning to create a new DB Cluster (clone) of the production DB Cluster

Question 16

Which Aurora feature enables you to rewind the DB Cluster back and forth in time without creating a new DB Cluster?

1. Aurora Database Cloning
2. Aurora Serverless
3. Aurora Backtracking
4. Aurora Backup and Restore

Correct Answer: 3. Aurora Backtracking

Question 17

How many Aurora Read Replicas can you have in a single Aurora DB Cluster?

1. 15
2. 5
3. 10

Correct Answer: 1. 15

Question 18

You have an Aurora DB Cluster with automatic backups enabled and a retention period of 10 days. You're using this Aurora DB Cluster for testing purposes, so you want to disable automatic backups to reduce costs. What should you do?

1. Use the AWS CLI, as this can't be done from the AWS Console
2. Take a snapshot of your Aurora DB Cluster, terminate the old DB Cluster, then create a new DB Cluster with Automatic Backups disabled
3. You can't disable Aurora DB Cluster Automatic Backups

Correct Answer: 3. You can't disable Aurora DB Cluster Automatic Backups

Question 19

You have an ElastiCache Redis Cluster that serves a popular application. You have noticed that a large number of requests go to the database because many items are removed from the cache before they expire. To solve the problem, you scaled up your ElastiCache Redis Cluster to a larger node type to increase memory. What should you do to be notified if this issue happens again?

1. Create a CloudWatch Alarm based on the Evictions metric
2. Create a CloudWatch Alarm based on the CPUUtilization metric
3. Create a CloudWatch Alarm based on the SwapUsage metric

Correct Answer: 1. Create a CloudWatch Alarm based on the Evictions metric
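The alarm on the Evictions metric can be sketched as the parameter set below. The cluster ID, SNS topic ARN, period, and threshold are all placeholder values; with boto3, a dict like this could be passed as keyword arguments to cloudwatch.put_metric_alarm(**alarm).

```python
# Hypothetical alarm parameters: fire when any evictions occur in a
# 5-minute window for the cache cluster "my-redis-cluster".
alarm = {
    "AlarmName": "redis-high-evictions",
    "Namespace": "AWS/ElastiCache",
    "MetricName": "Evictions",
    "Dimensions": [{"Name": "CacheClusterId", "Value": "my-redis-cluster"}],
    "Statistic": "Sum",
    "Period": 300,               # 5-minute evaluation windows
    "EvaluationPeriods": 1,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}

print(alarm["AlarmName"])
```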

Question 20

What is the maximum number of Read Replicas you can add to an ElastiCache Redis Cluster with Cluster Mode disabled?

1. 15
2. 10
3. 5

Correct Answer: 3. 5

Question 21

Which type of ElastiCache Redis scaling can you perform while the cluster is still serving requests?

1. Online Scaling
2. Offline Scaling

Correct Answer: 1. Online Scaling

Question 22

Your application is hosted on a fleet of EC2 instances that connect to your ElastiCache Memcached Cluster, and you often add and remove nodes. What is the best way of making sure your EC2 instances correctly connect to the nodes?

1. Create a CloudWatch Event that triggers an AWS Lambda function which retrieves the list of cache node endpoints and updates a text file
2. Create a CloudWatch Event that triggers an SNS topic to notify you when there's a change in the cache nodes
3. Use Memcached Auto Discovery

Correct Answer: 3. Use Memcached Auto Discovery

Monitoring, Auditing and Performance

Question 1

You have an RDS DB instance that's configured to push its database logs to CloudWatch. You want to create a CloudWatch Alarm if an Error is found in the logs. How would you do that?

1. Create a CloudWatch Logs Metric Filter that filters the logs for the keyword Error, then create a CloudWatch Alarm based on that Metric Filter
2. Create a scheduled CloudWatch Event that triggers an AWS Lambda function every hour to scan the logs and notify you through an SNS topic
3. Create an AWS Config Rule that monitors for Error in your database logs and notifies you through an SNS topic

Correct Answer: 1. Create a CloudWatch Logs Metric Filter that filters the logs for the keyword Error, then create a CloudWatch Alarm based on that Metric Filter
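Conceptually, a metric filter counts the log events in each batch that match a pattern, and that count feeds the metric the alarm watches. The behavior can be simulated in plain Python; the log lines below are made up for illustration, and real filter patterns support richer syntax than a substring test.

```python
# Simulate a CloudWatch Logs Metric Filter with the term pattern "Error":
# count matching events in a batch, as the filter would before publishing
# the count as a metric data point.
log_events = [
    "2024-01-01 12:00:01 INFO  connection established",
    "2024-01-01 12:00:05 Error could not open relation",
    "2024-01-01 12:00:09 INFO  checkpoint complete",
    "2024-01-01 12:00:12 Error deadlock detected",
]

# Plain substring match stands in for the real filter-pattern engine.
metric_value = sum(1 for event in log_events if "Error" in event)

print(metric_value)  # → 2
```

A CloudWatch Alarm on the resulting metric (e.g., Sum > 0 over a period) then notifies you whenever errors appear in the logs.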

Question 2

How would you monitor your EC2 instance's memory usage in CloudWatch?

1. Enable EC2 Detailed Monitoring
2. Use the Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch
3. By default, the EC2 instance pushes memory usage to CloudWatch

Correct Answer: 2. Use the Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch

Question 3

You would like to evaluate the compliance of your resources' configurations over time. Which AWS service will you choose?

1. Amazon CloudWatch
2. AWS CloudTrail
3. AWS Config

Correct Answer: 3. AWS Config

Question 4

Someone changed the configuration of a resource and made it non-compliant. Which AWS service can you use to find out who made the change?

1. Amazon CloudWatch
2. AWS CloudTrail
3. AWS Config

Correct Answer: 2. AWS CloudTrail

Question 5

You have made a configuration change and would like to evaluate its impact on the performance of your application. Which AWS service do you use?

1. Amazon CloudWatch
2. AWS CloudTrail
3. AWS Config

Correct Answer: 1. Amazon CloudWatch

Question 6

You would like to test out a complex CloudWatch Alarm that responds to globally increased traffic on your application. You are in a test environment. How can you test this alarm efficiently and in a cost-effective manner?

1. Set up a global EC2 fleet and increase the request rate to your application until you reach the Alarm state
2. Use the set-alarm-state CLI command
3. Change the alarm thresholds temporarily

Correct Answer: 2. Use the set-alarm-state CLI command

Question 7

You would like to ensure that, over time, none of your EC2 instances expose port 84, as it is known to have vulnerabilities with the OS you are using. What can you do to monitor this?

1. CloudWatch Metrics
2. CloudTrail Trails
3. AWS Config Rules
4. Create an AWS Lambda cron job

Correct Answer: 3. AWS Config Rules

Question 8

You have enabled AWS Config to monitor whether any Security Group allows unrestricted SSH access to any of your EC2 instances. Which AWS Config feature can you use to automatically re-configure your Security Groups to the correct state?

1. AWS Config Remediations
2. AWS Config Rules
3. AWS Config Notifications

Correct Answer: 1. AWS Config Remediations

Question 9

How do you ensure that the CloudTrail logs stored in an S3 bucket haven't been modified or deleted by other users?

1. Use CloudTrail Insights
2. Use CloudTrail Log File Integrity Validation
3. Use CloudTrail Events

Correct Answer: 2. Use CloudTrail Log File Integrity Validation

Question 10

You have CloudTrail enabled for your AWS Account in all AWS Regions. What should you use to detect unusual activity in your AWS Account?

1. CloudTrail Events
2. CloudTrail Log File Integrity Validation
3. CloudTrail Insights

Correct Answer: 3. CloudTrail Insights

Question 11

Which AWS services help notify you when you are approaching your service quota thresholds?

1. AWS GuardDuty & AWS CloudTrail
2. AWS Service Quotas & AWS Trusted Advisor
3. AWS Config & CloudWatch Alarms

Correct Answer: 2. AWS Service Quotas & AWS Trusted Advisor

AWS Account Management

Question 1

You’re using AWS Service Catalog to make it easy for your users to provision AWS resources. You have created a portfolio and a set of products. How should you standardize tags across provisioned products? 1

AWS Organization - Tag Policies 2

AWS Service Catalog - TagOptions Library 3

AWS Resource Groups & Tag Editor Correct Answer 2

AWS Service Catalog - TagOptions Library Question 2

What can you use to standardize tags across resources in all AWS Accounts inside your AWS Organization? 1

AWS Organization - Tag Policies 2

AWS Service Catalog - TagOptions Library 3

AWS Resource Groups & Tag Editor Correct Answer 1

AWS Organization - Tag Policies Question 3

You manage a set of AWS Accounts using AWS Organization which has Consolidated Billing feature enabled and Reserved Instance Discount Sharing turned on. You have an AWS Account that purchased a set of reserved EC2 instances that the owner doesn’t want to share with the AWS Organization. What should you do? 1

This can’t be done as the AWS Account is already part of the AWS Organization 2

Remove the AWS Account from the AWS Organization, turn off sharing, then add to the AWS Organization again 3

Disable Reserved Instance Discount Sharing at the AWS account level Correct Answer 3

Disable Reserved Instance Discount Sharing at the AWS account level Question 4

You have 5 AWS Accounts that you manage using AWS Organizations. You want to restrict access to certain AWS services in each account. How should you do that? 1

Using AWS Organizations SCP 2

Using IAM Roles 3

Using AWS Config Correct Answer 1

Using AWS Organizations SCP Question 5

You want to be notified in your company’s Slack channel when there’s a scheduled AWS maintenance on EC2 that affects your EC2 instances. What should you do? 1

From AWS Personal Health Dashboard, select Notifications Tab, then select your Slack channel as your Notifications Destination 2

Create a CloudWatch Event that will be triggered by AWS Personal Health, then select your Slack channel as a target for your CW Event 3

Create a CloudWatch Event triggered by AWS Personal Health and then create a Lambda function as a target for your CW Event. Use this Lambda function to send a message to your Slack channel Correct Answer 3

Create a CloudWatch Event triggered by AWS Personal Health and then create a Lambda function as a target for your CW Event. Use this Lambda function to send a message to your Slack channel Question 6

AWS EC2 experiences an outage and you would like to get a list of all your resources that are affected. What should you use? 1

AWS Organizations 2

AWS Service Health Dashboard 3

AWS Personal Health Dashboard 4

AWS Budgets Correct Answer 3

AWS Personal Health Dashboard Question 7

Before going ahead with an AWS service, your manager asks you to find out if Amazon ElasticSearch Service in ap-northeast-1 has had outages over the past year. Where can you find this information? 1

AWS Advisor 2

AWS Service Health Dashboard 3

AWS Personal Health Dashboard 4

AWS Config Correct Answer 2

AWS Service Health Dashboard Question 8

Question 8

You have strong regulatory requirements to only allow fully audited AWS services in production. You still want to allow your teams to experiment in a development environment while services are being audited. How can you best set this up?

1. Create an AWS Organization with two OUs, Prod and Dev, then apply an SCP on the Prod OU
2. Provide the Dev team with a completely independent AWS account
3. Apply a global IAM policy on your Prod account
4. Create an AWS Config Rule

Correct Answer: 1. Create an AWS Organization with two OUs, Prod and Dev, then apply an SCP on the Prod OU

Question 9

Your data scientists need a self-service portal to provision their big data analysis environments. They are not AWS experts and would like something simple yet controlled. What do you advise?

1. Buy them the AWS CloudFormation Master Class Course
2. Set up an AWS Organizations OU for your data scientists and give them a monthly allowance where they can create anything they want
3. Create a Service Catalog for your data scientists in which you upload the products they should have access to

Correct Answer: 3. Create a Service Catalog for your data scientists in which you upload the products they should have access to

Question 10

For accounting reasons, you need to separate costs into categories in AWS, such as Environment. How do you achieve this?

1. Ask for a monthly AWS Data Export and run an Excel macro to aggregate your costs
2. Use Cost Allocation Tags
3. Use Billing Tags
4. Create multiple AWS accounts

Correct Answer: 2. Use Cost Allocation Tags

Question 11

You need recommendations for the types of reserved EC2 instances you should buy to optimize your AWS costs. You also want access to a report detailing how well utilized your reserved EC2 instances are. What do you recommend?

1. Set up a billing alarm
2. Use AWS Config
3. Use AWS Cost Explorer
4. Use AWS Budgets

Correct Answer: 3. Use AWS Cost Explorer

Question 12

What should you use to be notified when your AWS usage costs exceed a certain threshold?

1. AWS Budgets
2. AWS Cost Explorer
3. AWS Cost Allocation Tags

Correct Answer: 1. AWS Budgets

Question 13

You want to analyze your AWS resource usage and costs so you can make decisions to decrease costs. Your analytics team wants this data each day so they can run queries on it. What should you use?

1. Use AWS Cost Explorer
2. Use the AWS Cost and Usage Reports service and configure it to deliver reports daily to an S3 bucket
3. Use AWS Cost Allocation Tags

Correct Answer: 2. Use the AWS Cost and Usage Reports service and configure it to deliver reports daily to an S3 bucket

Question 14

Which AWS service helps you improve performance and reduce costs by using Machine Learning to analyze your resources' configurations and their utilization?

1. AWS Trusted Advisor
2. AWS Cost Explorer
3. AWS Compute Optimizer

Correct Answer: 3. AWS Compute Optimizer

Disaster Recovery

Question 1

You have an un-encrypted EFS file system that contains a lot of data. You want to encrypt this EFS file system, so you created a new encrypted EFS file system and want to migrate the data to it. Which AWS service should you use?

1. AWS Snowball
2. AWS DataSync
3. AWS Database Migration Service (AWS DMS)

Correct Answer: 2. AWS DataSync

Question 2

What is the best approach to automate and manage cross-region backups for AWS RDS DB instances?

1. CloudWatch Scheduled Event with AWS Lambda
2. EC2 instance with a cron job
3. AWS Backup

Correct Answer: 3. AWS Backup

Security and Compliance for SysOps

Question 1

You would like to use a dedicated hardware module to manage your encryption keys and have full control over them. What do you recommend?

1. AWS KMS
2. AWS CloudHSM
3. AWS GuardDuty

Correct Answer: 2. AWS CloudHSM

Question 2

When you enable Automatic Rotation on your KMS CMK, the backing key is rotated every .........

1. 1 year
2. 3 years
3. 6 months

Correct Answer: 1. 1 year

Question 3

You've created a Customer-managed CMK in KMS that you use to encrypt both S3 buckets and EBS snapshots. Your company policy mandates that your encryption keys be rotated every 3 months. What should you do?

1. Use AWS Managed Keys, as they're automatically rotated by AWS every 3 months
2. Re-configure your KMS CMK and enable Automatic Rotation, selecting 3 months as the rotation period
3. Rotate the KMS CMK manually. Create a new KMS CMK and use Key Aliases to reference the new KMS CMK. Keep the old KMS CMK so you can decrypt the old data

Correct Answer: 3. Rotate the KMS CMK manually. Create a new KMS CMK and use Key Aliases to reference the new KMS CMK. Keep the old KMS CMK so you can decrypt the old data
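The manual rotation flow can be sketched with a dict standing in for KMS (the key IDs and alias name are made up). Applications always encrypt through the alias; repointing the alias to a new key rotates encryption going forward, while the old key is kept, not deleted, so existing ciphertexts can still be decrypted.

```python
# Simulated KMS state: an alias mapped to the current key, plus the set of
# keys retained for decrypting previously encrypted data.
aliases = {"alias/app-data": "key-old-1111"}
retained_keys = set(aliases.values())

def rotate(alias: str, new_key_id: str) -> None:
    """Point the alias at a new key; keep the old key for decryption."""
    retained_keys.add(new_key_id)
    aliases[alias] = new_key_id  # analogous to `aws kms update-alias`

rotate("alias/app-data", "key-new-2222")

print(aliases["alias/app-data"])        # new encryptions use the new key
print("key-old-1111" in retained_keys)  # old key retained for old data
```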

Question 4

What should you use to control access to your KMS CMKs?

1. KMS IAM Policy
2. KMS Key Policies
3. Amazon GuardDuty
4. KMS Access Control List (KMS ACL)

Correct Answer: 2. KMS Key Policies

Question 5

Which AWS service analyzes your AWS account and gives recommendations for cost optimization, performance, security, fault tolerance, and service limits?

1. AWS Cost and Usage Reports
2. AWS Security Hub
3. AWS Trusted Advisor
4. Amazon GuardDuty

Correct Answer: 3. AWS Trusted Advisor

Question 6

Amazon GuardDuty scans the following data, EXCEPT:

1. DNS Logs
2. VPC Flow Logs
3. CloudTrail Logs
4. CloudWatch Logs

Correct Answer: 4. CloudWatch Logs

Question 7

Which of the following AWS services are you prohibited from running security assessments against?

1. Route 53
2. EC2
3. RDS
4. API Gateway

Correct Answer: 1. Route 53

Question 8

You have a website hosted on a fleet of EC2 instances fronted by an Application Load Balancer. What should you use to protect your website from common web application attacks (e.g., SQL Injection)?

1. AWS Shield
2. AWS Security Hub
3. AWS WAF
4. Amazon GuardDuty

Correct Answer: 3. AWS WAF

Question 9

According to the AWS Shared Responsibility Model, what are you responsible for in RDS?

1. Security Group Rules
2. OS Patching
3. Database Patching
4. Underlying Hardware Security

Correct Answer: 1. Security Group Rules

Question 10

Your user-facing website is a high-risk target for DDoS attacks and you would like to get 24/7 support in case they happen, as well as AWS bill reimbursement for the costs incurred during an attack. What AWS service should you use?

1. AWS WAF
2. AWS Shield Advanced
3. AWS Shield Standard
4. AWS DDoS OpsTeam

Correct Answer: 2. AWS Shield Advanced

Question 11

You would like to analyze OS vulnerabilities from within your EC2 instances. You need these analyses to occur weekly and to provide you with concrete recommendations in case vulnerabilities are found. Which AWS service should you use?

1. AWS Trusted Advisor
2. AWS Config
3. Amazon Inspector
4. Amazon GuardDuty

Correct Answer: 3. Amazon Inspector

Question 12

Your development team tends to commit their AWS credentials in public repositories and is afraid of misuse of your AWS account. Which AWS service can notify you of suspicious account activity?

1. Amazon Inspector
2. Amazon GuardDuty
3. AWS Trusted Advisor
4. AWS Enhanced Protection

Correct Answer: 2. Amazon GuardDuty

Question 13

To support an ongoing audit, you need to provide auditors with documents that prove that AWS has achieved certain compliance certifications. How can you do this?

1. Contact AWS Support
2. Amazon GuardDuty
3. Amazon Inspector
4. AWS Artifact

Correct Answer: 4. AWS Artifact

Question 14

AWS Certificate Manager helps you easily provision, manage, and deploy SSL/TLS certificates. It’s integrated with the following AWS services EXCEPT:

1. EC2
2. Elastic Load Balancer
3. CloudFront
4. API Gateway

Correct Answer: 1. EC2

Question 15

What is the most suitable solution for storing RDS DB passwords that also provides automatic rotation?

1. AWS SSM Parameter Store
2. AWS KMS
3. AWS Secrets Manager

Correct Answer: 3. AWS Secrets Manager

Identity

Question 1

During an IT audit, it was highlighted that some of your users do not have MFA enabled. Where can you obtain a report detailing which users do not have MFA enabled?

1. STS
2. AWS IAM Credential Report
3. AWS Trusted Advisor
4. Amazon GuardDuty

Correct Answer: 2. AWS IAM Credential Report

Question 2

You have a mobile application and you would like to give your users access to their own personal space in Amazon S3. How do you achieve that?

1. Generate IAM user credentials for each of your application’s users
2. Use a bucket policy to make your bucket public
3. Use Amazon Cognito Identity Federation
4. Use SAML Identity Federation

Correct Answer: 3. Use Amazon Cognito Identity Federation

Question 3

How can you find the AWS resources that are accessible from outside your AWS account?

1. Using IAM Access Advisor
2. Using IAM Credentials Report
3. Using IAM Access Analyzer

Correct Answer: 3. Using IAM Access Analyzer

Question 4

You want to provide unauthenticated guest access to your web application developed and hosted using AWS Amplify. What should you use?

1. Amazon Cognito User Pools
2. Amazon Cognito Identity Pools
3. AWS IAM Users

Correct Answer: 2. Amazon Cognito Identity Pools

Networking - Route 53

Question 1

You have purchased “mycoolcompany.com” on Route 53 Registrar and would like it to point to “lb1-1234.us-east-2.elb.amazonaws.com”. Which Route 53 record type is IMPOSSIBLE to set up for this?

1. CNAME
2. ALIAS

Correct Answer: 1. CNAME

Question 2

You have deployed a new Elastic Beanstalk environment and would like to direct 5% of your production traffic to this new environment, to monitor CloudWatch metrics and ensure no issues exist. Which Route 53 routing policy allows you to do so?

1. Simple
2. Weighted
3. Failover
4. Latency

Correct Answer: 2. Weighted

Question 3

After updating a Route 53 record to point “myapp.mydomain.com” from an old Load Balancer to a new load balancer, it looks like users are still redirected to the old load balancer. Why?

1. Because of the Alias record!
2. Because of the CNAME record!
3. Because of the TTL!
4. Because of the health checks!

Correct Answer: 3. Because of the TTL!

Question 4

You want your users to get the best possible user experience, minimizing the response time from your servers to your users. Which Route 53 routing policy should help?

1. Multi-Value
2. Weighted
3. Geolocation
4. Latency

Correct Answer: 4. Latency

Question 5

You have a legal requirement that people in any country but France should not be able to access your website. Which Route 53 routing policy helps you achieve this?

1. Latency
2. Geolocation
3. Multi-Value
4. Simple

Correct Answer: 2. Geolocation

Question 6

You have purchased a domain on GoDaddy and would like to use it with Route 53. What should you do?

1. Request a domain transfer
2. Create a private hosted zone and update the 3rd party registrar NS records to use Route 53 name servers
3. Create a public hosted zone and update Route 53 NS records to use 3rd party registrar name servers
4. Create a public hosted zone and update the 3rd party registrar NS records to use Route 53 name servers

Correct Answer: 4. Create a public hosted zone and update the 3rd party registrar NS records to use Route 53 name servers

Question 7

You have purchased “mycoolcompany.com” from Route 53 Registrar. You have also created an S3 bucket “mycoolapplication”, uploaded your static website to it, and enabled Static Website Hosting. You’re trying to create an APEX Alias Route 53 record to point to your S3 bucket but you can’t. What are you missing here?

1. Your S3 bucket must have the same name as your Route 53 record, “mycoolcompany.com”
2. Nothing, just refresh the AWS Console, and all good!
3. You can’t create an APEX Alias record for an S3 bucket with Static Website Hosting enabled

Correct Answer: 1. Your S3 bucket must have the same name as your Route 53 record, “mycoolcompany.com”

Networking VPC Quiz

Question 1

What does the CIDR 10.0.4.0/28 correspond to?

1. 10.0.4.0 to 10.0.4.15
2. 10.0.4.0 to 10.0.32.0
3. 10.0.4.0 to 10.0.4.28
4. 10.0.0.0 to 10.0.16.0

Correct Answer: 1. 10.0.4.0 to 10.0.4.15

Explanation

/28 means 16 IPs (2^(32-28) = 2^4), so only the last four bits can change.
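The arithmetic can be checked with Python's standard ipaddress module:

```python
import ipaddress

# /28 leaves 32 - 28 = 4 host bits, i.e. 2**4 = 16 addresses.
net = ipaddress.ip_network("10.0.4.0/28")
print(net.num_addresses)      # 16
print(net[0], "->", net[-1])  # 10.0.4.0 -> 10.0.4.15
```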

Question 2

You have a corporate network of size 10.0.0.0/8 and a satellite office of size 192.168.0.0/16. Which CIDR is acceptable for your AWS VPC if you plan on connecting your networks later on?

1. 172.16.0.0/12
2. 172.16.0.0/16
3. 10.0.16.0/16
4. 192.168.4.0/18

Correct Answer: 2. 172.16.0.0/16

Explanation

The CIDRs must not overlap, and the maximum VPC CIDR size in AWS is /16.

Question 3

You plan on creating a subnet and want it to have capacity for at least 28 EC2 instances. What’s the minimum size you need for your subnet?

1. /28
2. /27
3. /26
4. /25

Correct Answer: 3. /26

Explanation

AWS reserves 5 IP addresses in every subnet, so a /27 provides only 27 usable addresses (32 - 5), while a /26 provides 59 (64 - 5).
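The sizing rule can be sketched as a small helper; the 5-address reservation is the AWS-specific part:

```python
# AWS reserves 5 IP addresses per subnet (network, VPC router, DNS,
# "future use", and the broadcast address), so the usable capacity of a
# subnet is 2**(32 - prefix) - 5.
def usable_ips(prefix: int) -> int:
    return 2 ** (32 - prefix) - 5

for prefix in (28, 27, 26, 25):
    print(f"/{prefix}: {usable_ips(prefix)} usable")
# /28 gives 11 and /27 gives 27 (both short of 28 instances);
# /26 gives 59, so /26 is the minimum size that fits.
```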

Question 4

You have attached an Internet Gateway to your VPC, but your EC2 instances still don’t have access to the Internet. What is NOT a possible issue?

1. Route Tables are missing entries
2. The EC2 instance doesn’t have a public IP
3. The Security Group doesn’t allow traffic in
4. The NACL doesn’t allow network traffic out

Correct Answer: 3. The Security Group doesn’t allow traffic in

Explanation

Security groups are stateful: if traffic can go out, then the response traffic can come back in.

Question 5

You would like to provide Internet access to your EC2 instances in private subnets with IPv4 while making sure this solution requires the least amount of administration and scales seamlessly. What should you use?

1. NAT Instance with the Source/Destination Check flag off
2. NAT Gateway
3. Egress-Only Internet Gateway

Correct Answer: 2. NAT Gateway

Question 6

VPC Peering has been established between VPC A and VPC B, and the route tables have been updated for VPC A. But your EC2 instances cannot communicate. What is the likely issue?

1. Check the NACL
2. Check the EC2 instance Security Groups
3. Check the Route Tables in VPC B
4. Check if DNS Resolution is enabled

Correct Answer: 3. Check the Route Tables in VPC B

Question 7

You have established a Direct Connect connection between your Corporate Data Center and VPC A in your AWS account. You need to access VPC B in another AWS Region from your Corporate Data Center as well. What should you do?

1. Enable VPC Peering
2. Use a Customer Gateway
3. Set up a NAT Gateway
4. Use a Direct Connect Gateway

Correct Answer: 4. Use a Direct Connect Gateway

Question 8

When using VPC Endpoints, what are the only two AWS services that have a Gateway Endpoint available?

1. Amazon S3 & DynamoDB
2. Amazon S3 & Amazon SQS
3. Amazon SQS & DynamoDB

Correct Answer: 1. Amazon S3 & DynamoDB

Explanation

These two services have a VPC Gateway Endpoint (remember this); all the other services have an Interface Endpoint (powered by AWS PrivateLink, which uses a private IP).

Question 9

AWS reserves 5 IP addresses each time you create a new subnet in a VPC. When you create a subnet with CIDR 10.0.0.0/24, the following IP addresses are reserved, EXCEPT:

1. 10.0.0.1
2. 10.0.0.2
3. 10.0.0.3
4. 10.0.0.4

Correct Answer: 4. 10.0.0.4
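For any subnet, the five reserved addresses are the first four plus the last, which a quick check with the ipaddress module makes concrete:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")

# First four addresses: network (.0), VPC router (.1), DNS (.2),
# "future use" (.3); the last one (.255) is the broadcast address.
reserved = [str(subnet[i]) for i in range(4)] + [str(subnet[-1])]
print(reserved)
print("10.0.0.4" in reserved)  # False - the first free host address
```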

Question 10

You have created a new VPC with 4 subnets in it. You begin to launch a set of EC2 instances inside these subnets but notice that the EC2 instances don’t get assigned public hostnames and DNS resolution isn’t working. What should you do to resolve this issue?

1. Enable DNS Resolution and DNS Hostnames in your VPC
2. Check the route tables attached to your subnets
3. Make sure that your Internet Gateway is working properly

Correct Answer: 1. Enable DNS Resolution and DNS Hostnames in your VPC

Question 11

You have 3 VPCs: A, B, and C. You want to establish a VPC Peering connection between all 3 VPCs. What should you do?

1. VPC Peering supports Transitive Peering, so you need to establish 2 VPC Peering connections (A-B, B-C)
2. Establish 3 VPC Peering connections (A-B, A-C, B-C)

Correct Answer: 2. Establish 3 VPC Peering connections (A-B, A-C, B-C)

Question 12

How can you capture information about IP traffic inside your VPCs?

1. Enable VPC Traffic Mirroring
2. Enable CloudWatch Traffic Logs
3. Enable VPC Flow Logs

Correct Answer: 3. Enable VPC Flow Logs

Question 13

You want a 500 Mbps Direct Connect connection from your corporate data center to AWS. You would create a …………… connection.

1. Hosted
2. Dedicated

Correct Answer: 1. Hosted

Question 14

You have an internal web application hosted in a private subnet in your VPC that you want to be used by other customers. You don’t want to expose the application to the Internet or open your whole VPC to other customers. What should you do?

1. Use NAT Gateway
2. Use VPC Endpoint Services
3. Use VPC Peering

Correct Answer: 2. Use VPC Endpoint Services

Other Services

Question 1

How can you restrict access to your Amazon Elasticsearch Service domain to your company’s CIDR block?

1. Using IAM Policies
2. Using IP-Based Policies
3. Using AWS Shield
4. Using AWS WAF

Correct Answer: 2. Using IP-Based Policies

Question 2

Which of the following is NOT a supported Kibana authentication type for your Amazon Elasticsearch Service domain?

1. IAM Users and Roles
2. HTTP Basic Authentication
3. SAML
4. Amazon Cognito

Correct Answer: 1. IAM Users and Roles

Question 3

What is the best way to find errors, exceptions, and request behavior in your AWS serverless application?

1. AWS Security Hub
2. AWS Amplify
3. AWS X-Ray
4. CloudWatch Logs

Correct Answer: 3. AWS X-Ray

Practice Test Stephane Maarek

Instructions

About this practice exam:

  • question order and answer order are randomized
  • you can only review the answers after finishing the exam, due to how Udemy works
  • it consists of 65 questions, the duration is 130 minutes, and the passing score is 720

======

In case of an issue with a question:

  • ask a question in the Q&A
  • please take a screenshot of the question (because they’re randomized) and attach it
  • we will get back to you as soon as possible and fix the issue

Good luck, and happy learning!

Question 1

  • A media company runs its business on Amazon EC2 instances backed by Amazon S3 storage. The company is apprehensive about the consistent increase in costs incurred from S3 buckets. The company wants to make some decisions regarding data retention, storage, and deletion based on S3 usage and cost reports. As a SysOps Administrator, you have been hired to develop a solution to track the costs incurred by each S3 bucket in the AWS account.

How will you configure this requirement?

1. Configure AWS Budgets to see the cost against each S3 bucket in the AWS account

2. Use AWS Simple Monthly Calculator to check the cost against each S3 bucket in your AWS account

3. Use AWS Trusted Advisor's rich set of best practice checks to configure cost utilization for individual S3 buckets. Trusted Advisor also provides recommendations based on the findings derived from analyzing your AWS cloud architecture

4. Add a common tag to each bucket. Activate the tag as a cost allocation tag. Use the AWS Cost Explorer to create a cost report for the tag

Correct Answer 4. Add a common tag to each bucket. Activate the tag as a cost allocation tag. Use the AWS Cost Explorer to create a cost report for the tag

Explanation

Correct option:

Add a common tag to each bucket. Activate the tag as a cost allocation tag. Use the AWS Cost Explorer to create a cost report for the tag

Before you begin, your AWS Identity and Access Management (IAM) policy must have permission to: Access the Billing and Cost Management console, Perform the actions s3:GetBucketTagging and s3:PutBucketTagging.

Start by adding a common tag to each bucket. Activate the tag as a cost allocation tag. Use the AWS Cost Explorer to create a cost report for the tag. After you create the cost report, you can use it to review the cost of each bucket that has the cost allocation tag that you created.

You can set up a daily or hourly AWS Cost and Usage report to get more Amazon S3 billing details. However, these reports won't show you who made requests to your buckets. To get more information on certain Amazon S3 billing items, you must enable logging ahead of time. Then, you'll have logs that contain Amazon S3 request details.

Incorrect options:

Configure AWS Budgets to see the cost against each S3 bucket in the AWS account - AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your metrics drop below the threshold you define. It cannot showcase the cost of each S3 bucket.

Use AWS Simple Monthly Calculator to check the cost against each S3 bucket in your AWS account - The AWS Simple Monthly Calculator is an easy-to-use online tool that enables you to estimate the monthly cost of AWS services for your use case based on your expected usage. This useful tool helps estimate the cost of resources, but the current use case is not about estimations but being able to understand which bucket is incurring the maximum cost.

Use AWS Trusted Advisor's rich set of best practice checks to configure cost utilization for individual S3 buckets. Trusted Advisor also provides recommendations based on the findings derived from analyzing your AWS cloud architecture - AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories. For Amazon S3 buckets, Trusted Advisor offers the following checks:

1) Checks buckets in Amazon S3 that have open access permissions
2) Checks the logging configuration of Amazon S3 buckets (whether it is enabled and for what duration)
3) Checks for Amazon S3 buckets that do not have versioning enabled

However, Trusted Advisor cannot generate reports for the costs incurred on S3 buckets.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/s3-find-bucket-cost/

https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/

https://aws.amazon.com/getting-started/hands-on/control-your-costs-free-tier-budgets/

Question 2

  • A startup uses Amazon S3 buckets for storing their customer data. The company has defined different retention periods for different objects present in their Amazon S3 buckets, based on the compliance requirements. But, the retention rules do not seem to work as expected.

Which of the following points are important to remember when configuring retention periods for objects in Amazon S3 buckets (Select two)?

Multi Select

1. When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version

2. You cannot place a retention period on an object version through a bucket default setting

3. When you use bucket default settings, you specify a Retain Until Date for the object version

4. Different versions of a single object can have different retention modes and periods

5. The bucket default settings will override any explicit retention mode or period you request on an object version

Correct Answer

  1. When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version
  4. Different versions of a single object can have different retention modes and periods

Explanation

Correct options:

When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version - You can place a retention period on an object version either explicitly or through a bucket default setting. When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version. Amazon S3 stores the Retain Until Date setting in the object version's metadata and protects the object version until the retention period expires.

Different versions of a single object can have different retention modes and periods - Like all other Object Lock settings, retention periods apply to individual object versions. Different versions of a single object can have different retention modes and periods.

For example, suppose that you have an object that is 15 days into a 30-day retention period, and you PUT an object into Amazon S3 with the same name and a 60-day retention period. In this case, your PUT succeeds, and Amazon S3 creates a new version of the object with a 60-day retention period. The older version maintains its original retention period and becomes deletable in 15 days.
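That worked example is pure date arithmetic; the sketch below uses hypothetical dates to make the two independent Retain Until Dates explicit:

```python
from datetime import date, timedelta

today = date(2024, 1, 16)                # hypothetical current date
first_put = today - timedelta(days=15)   # original version is 15 days into its period

# Each version carries its own Retain Until Date in its metadata.
v1_retain_until = first_put + timedelta(days=30)  # original 30-day period
v2_retain_until = today + timedelta(days=60)      # new version, 60-day period

print((v1_retain_until - today).days)  # 15 -> old version deletable in 15 days
print((v2_retain_until - today).days)  # 60 -> new version protected for 60 days
```

The new PUT never shortens or extends the old version's protection; the two retention clocks run independently.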

Incorrect options:

You cannot place a retention period on an object version through a bucket default setting - You can place a retention period on an object version either explicitly or through a bucket default setting.

When you use bucket default settings, you specify a Retain Until Date for the object version - When you use bucket default settings, you don't specify a Retain Until Date. Instead, you specify a duration, in either days or years, for which every object version placed in the bucket should be protected.

The bucket default settings will override any explicit retention mode or period you request on an object version - If your request to place an object version in a bucket contains an explicit retention mode and period, those settings override any bucket default settings for that object version.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html

Question 3

  • After a developer mistakenly shut down a test instance, the Team Lead decided to configure termination protection on all instances. As a systems administrator, you have been tasked to review the termination policy and check its viability for the given requirements.

Which of the following choices are correct about Amazon EC2 instance’s termination policy (Select two)?

Multi Select

1. The DisableApiTermination attribute prevents you from terminating an instance by initiating shutdown from the instance

2. The DisableApiTermination attribute does not prevent you from terminating an instance by initiating shutdown from Amazon EC2 console

3. You can't enable termination protection for Spot Instances

4. To prevent instances that are part of an Auto Scaling group from terminating on scale in, use instance protection

5. The DisableApiTermination attribute prevents Amazon EC2 Auto Scaling from terminating an instance

Correct Answer

  3. You can’t enable termination protection for Spot Instances
  4. To prevent instances that are part of an Auto Scaling group from terminating on scale in, use instance protection

Explanation

Correct options:

You can't enable termination protection for Spot Instances - You can't enable termination protection for Spot Instances—a Spot Instance is terminated when the Spot price exceeds the amount you're willing to pay for Spot Instances. However, you can prepare your application to handle Spot Instance interruptions.

To prevent instances that are part of an Auto Scaling group from terminating on scale in, use instance protection - The DisableApiTermination attribute does not prevent Amazon EC2 Auto Scaling from terminating an instance. For instances in an Auto Scaling group, use the following Amazon EC2 Auto Scaling features instead of Amazon EC2 termination protection:

    - To prevent instances that are part of an Auto Scaling group from terminating on scale in, use instance protection.

    - To prevent Amazon EC2 Auto Scaling from terminating unhealthy instances, suspend the ReplaceUnhealthy process.

    - To specify which instances Amazon EC2 Auto Scaling should terminate first, choose a termination policy.

Incorrect options:

The DisableApiTermination attribute prevents you from terminating an instance by initiating shutdown from the instance - This is false. The DisableApiTermination attribute does not prevent you from terminating an instance by initiating shutdown from the instance

The DisableApiTermination attribute does not prevent you from terminating an instance by initiating shutdown from Amazon EC2 console - By default, you can terminate your instance using the Amazon EC2 console, command line interface, or API. To prevent your instance from being accidentally terminated using Amazon EC2, you can enable termination protection for the instance.

The DisableApiTermination attribute prevents Amazon EC2 Auto Scaling from terminating an instance - The DisableApiTermination attribute does not prevent Amazon EC2 Auto Scaling from terminating an instance.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/terminating-instances.html#Using_ChangingDisableAPITermination

Question 4

A company is moving its on-premises technology infrastructure to AWS Cloud. Compliance rules and regulatory guidelines require the company to use its own software, which needs socket-level configurations. As the company is new to AWS Cloud, they have reached out to you for guidance on this requirement.

As an AWS Certified SysOps Administrator, which option will you suggest for the given requirement?

1. Opt for On-Demand instances that are highly available and require no prior planning

2. Opt for Reserved Instances that allow you to plan and help install the necessary software

3. Opt for Amazon EC2 Dedicated Host

4. Opt for Amazon EC2 Dedicated Instance

Correct Answer: 3. Opt for Amazon EC2 Dedicated Host

Explanation

Correct option:

Opt for Amazon EC2 Dedicated Host

An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses, including Windows Server, Microsoft SQL Server, and SUSE Linux Enterprise Server. Hence, it is the right choice for the current requirement.

Differences between Dedicated Hosts and Dedicated Instances: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html

Incorrect options:

Opt for Amazon EC2 Dedicated Instance - Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Dedicated Instances that belong to different AWS accounts are physically isolated at a hardware level, even if those accounts are linked to a single payer account. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances.

Opt for On-Demand instances that are highly available and require no prior planning

Opt for Reserved Instances that allow you to plan and help install the necessary software

You cannot install your own software that needs socket level programming on On-Demand or Reserved Instances.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html

Question 5

Security and Compliance is a Shared Responsibility between AWS and the customer. As part of this Shared Responsibility, the customer is also responsible for securing the resources that he has procured under his AWS account.

Which of the following is the responsibility of the customer?

1. For Amazon S3 service, managing the operating system and platform is customer responsibility

2. AWS is responsible for patching and fixing flaws within the infrastructure, for patching the guest Operating Systems and applications of the customers

3. AWS is responsible for training their customers and their employees as part of Customer Specific training

4. For Amazon EC2 service, managing guest operating system (including updates and security patches), application software and Security Groups is the responsibility of the customer

Correct Answer: 4. For Amazon EC2 service, managing guest operating system (including updates and security patches), application software and Security Groups is the responsibility of the customer

Explanation

Correct option:

For Amazon EC2 service, managing guest operating system (including updates and security patches), application software and Security Groups is the responsibility of the customer

Customer responsibility will be determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

AWS Shared Responsibility Model: via - https://aws.amazon.com/compliance/shared-responsibility-model/

Incorrect options:

For Amazon S3 service, managing the operating system and platform is customer responsibility - For abstracted services, such as Amazon S3, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

AWS is responsible for patching and fixing flaws within the infrastructure, for patching the guest Operating Systems and applications of the customers - As part of Patch management, AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.

AWS is responsible for training their customers and their employees as part of Customer Specific training - As part of Awareness & Training, AWS trains AWS employees, but a customer must train their own employees.

Reference:

https://aws.amazon.com/compliance/shared-responsibility-model/

Question 6

As a SysOps Administrator, you have been tasked to generate a report on all API calls made for Elastic Load Balancer from the AWS Management Console.

Which feature/service will you use to fetch this data?

1. CloudWatch metrics

2. Load Balancer Access logs

3. CloudTrail logs

4. Load Balancer Request tracing

Correct Answer: 3. CloudTrail logs

Explanation

Correct option:

CloudTrail logs - Elastic Load Balancing is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Elastic Load Balancing. CloudTrail captures all API calls for Elastic Load Balancing as events. The calls captured include calls from the AWS Management Console and code calls to the Elastic Load Balancing API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Elastic Load Balancing. If you don’t configure a trail, you can still view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can determine the request that was made to Elastic Load Balancing, the IP address from which the request was made, who made the request, when it was made, and additional details.

Incorrect options:

CloudWatch metrics - You can use Amazon CloudWatch to retrieve statistics about data points for your load balancers and targets as an ordered set of time-series data, known as metrics. You can use these metrics to verify that your system is performing as expected.

Load Balancer Access logs - You can use access logs to capture detailed information about the requests made to your load balancer and store them as log files in Amazon S3. You can use these access logs to analyze traffic patterns and to troubleshoot issues with your targets.

Load Balancer Request tracing - You can use request tracing to track HTTP requests. The load balancer adds a header with a trace identifier to each request it receives.

Reference:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-cloudtrail-logs.html

Question 7

A Systems Administrator has just configured an internet facing Load Balancer for traffic distribution across the EC2 instances placed in different Availability Zones. The clients, however, are unable to connect to the Load Balancer.

What is the most plausible reason for this issue?

1. It is an internal server error

2. A security group or network ACL is not allowing traffic from the client

3. The target returned the error code of 200 indicating an error on the server side

4. The target was incorrectly configured as a Lambda function and not an EC2 instance

Correct Answer: 2. A security group or network ACL is not allowing traffic from the client

Explanation

Correct option:

A security group or network ACL is not allowing traffic from the client

If the load balancer is not responding to client requests, check for the following issues:

Your internet-facing load balancer may be attached to a private subnet - You must specify public subnets for your load balancer. A public subnet has a route to the Internet Gateway for your virtual private cloud (VPC).

A security group or network ACL does not allow traffic - The security group for the load balancer and any network ACLs for the load balancer subnets must allow inbound traffic from the clients and outbound traffic to the clients on the listener ports.

Incorrect options:

It is an internal server error - HTTP 500 is the error code for an internal server error, generated by the load balancer and sent back to the requesting client. However, in the given use case, the client is unable to connect to the load balancer itself.

The target returned the error code of 200 indicating an error on the server side - By default, the success code is 200, so returning HTTP 200 indicates success, not an error.

The target was incorrectly configured as a Lambda function and not an EC2 instance - An ELB can be configured to have a Lambda Function as its target. This should not result in any access issues or errors.

Reference:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html

Question 8

A Systems Administrator is configuring an Application Load Balancer (ALB) that fronts Amazon EC2 instances.

Which of the following options would you identify as correct for configuring the ALB? (Select two)

1. The targets of a target group in an ALB should all belong to the same Availability Zone

2. Before you start using your Application Load Balancer, you must add one or more listeners

3. A target can be registered with only one target group at any given time

4. When you create a listener, you define actions and conditions for the default rule

5. You configure target groups of an ALB by attaching them to the listeners

Correct Answers: 2 and 5

Explanation

Correct options:

Before you start using your Application Load Balancer, you must add one or more listeners - A listener checks for connection requests from clients, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to its registered targets. Each rule consists of a priority, one or more actions, and one or more conditions. When the conditions for a rule are met, then its actions are performed. You must define a default rule for each listener, and you can optionally define additional rules.

You configure target groups of an ALB by attaching them to the listeners - Each target group is used to route requests to one or more registered targets. When you create each listener rule, you specify a target group and conditions. When a rule condition is met, traffic is forwarded to the corresponding target group. You can create different target groups for different types of requests.

Load Balancer basic components: via - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html

Incorrect options:

The targets of a target group in an ALB should all belong to the same Availability Zone - A load balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This increases the availability of your application.

A target can be registered with only one target group at any given time - Each target group routes requests to one or more registered targets, such as EC2 instances, using the protocol and port number that you specify. You can register a target with multiple target groups.

When you create a listener, you define actions and conditions for the default rule - When you create a listener, you define actions for the default rule. Default rules can’t have conditions. If the conditions for none of a listener’s rules are met, then the action for the default rule is performed.
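Tying the points above together, a hedged CloudFormation-style sketch of a listener whose default rule simply forwards to a target group (the logical names `MyALB` and `MyTargetGroup` are hypothetical) might look like this:

```json
{
  "HTTPListener": {
    "Type": "AWS::ElasticLoadBalancingV2::Listener",
    "Properties": {
      "LoadBalancerArn": { "Ref": "MyALB" },
      "Port": 80,
      "Protocol": "HTTP",
      "DefaultActions": [
        { "Type": "forward", "TargetGroupArn": { "Ref": "MyTargetGroup" } }
      ]
    }
  }
}
```

Note that the default action carries no conditions, consistent with the rule that default rules can't have conditions; additional listener rules with conditions would be defined as separate rule resources.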

Reference:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html

Question 9

A company wants to migrate a part of its on-premises infrastructure to AWS Cloud. As a starting point, the company is looking at moving their daily workflow files to AWS Cloud, such that the files are accessible from the on-premises systems as well as AWS Cloud. To reduce the management overhead, the company wants a fully managed service.

Which service/tool is the right choice for this requirement?

1. File Gateway of AWS Storage Gateway

2. Volume Gateway of AWS Storage Gateway

3. Amazon Simple Storage Service (Amazon S3)

4. Amazon Elastic Block Store (Amazon EBS)

Correct Answer: 1 - File Gateway of AWS Storage Gateway

Explanation

Correct option:

File Gateway of AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Storage Gateway provides a standard set of storage protocols such as iSCSI, SMB, and NFS, which allow you to use AWS storage without rewriting your existing applications. It provides low-latency performance by caching frequently accessed data on-premises, while storing data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing data.

File Gateway presents a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols. File Gateway allows your existing file-based applications or devices to use secure and durable cloud storage without needing to be modified. With S3 File Gateway, your configured S3 buckets will be available as Network File System (NFS) mount points or Server Message Block (SMB) file shares. Your applications read and write files and directories over NFS or SMB, interfacing to the gateway as a file server. In turn, the gateway translates these file operations into object requests on your S3 buckets.

Incorrect options:

Volume Gateway of AWS Storage Gateway - Volume Gateway provides an iSCSI target, which enables you to create block storage volumes and mount them as iSCSI devices from your on-premises or EC2 application servers. The Volume Gateway runs in either a cached or stored mode. Volume Gateway cannot be used for file storage.

Amazon Simple Storage Service (Amazon S3) - Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. Amazon S3 provides a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere. Using this service, you can easily build applications that make use of cloud-native storage. The given use case needs a hybrid storage facility since the data will be accessed from the on-premises servers and applications on AWS Cloud. Hence, S3 is not the right choice.

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance, block-storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. The given use case needs a hybrid storage facility since the data will be accessed from the on-premises servers and applications on AWS Cloud. Hence, EBS is not the right choice.

Reference:

https://aws.amazon.com/storagegateway/

Question 10

A SysOps Administrator was asked to enable versioning on an Amazon S3 bucket after a few objects were accidentally deleted by the development team.

Which of the following represent valid scenarios when a developer deletes an object in the versioning-enabled bucket? (Select two)

1. A delete marker is set on the deleted object, but the actual object is not deleted

2. GET requests can retrieve delete marker objects

3. A delete marker has a key, version ID and Access Control List (ACL) associated with it

4. GET requests do not retrieve delete marker objects

5. The delete marker has the same data associated with it, as the actual object

Correct Answers: 1 and 4

Explanation

Correct options:

A delete marker is set on the deleted object, but the actual object is not deleted - A delete marker in Amazon S3 is a placeholder (or marker) for a versioned object that was named in a simple DELETE request. Because the object is in a versioning-enabled bucket, the object is not deleted. But the delete marker makes Amazon S3 behave as if it is deleted. A delete marker has a key name (or key) and version ID like any other object. It does not have data associated with it. It is not associated with an access control list (ACL) value.

GET requests do not retrieve delete marker objects - The only way to list delete markers (and other versions of an object) is by using the versions subresource in a GET Bucket versions request. A simple GET does not retrieve delete marker objects.

What Delete Markers are: via - https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html
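The delete-marker behavior described above can be sketched with a small in-memory model. This is an illustration of the semantics only, not the real Amazon S3 API:

```python
import uuid

class VersionedBucket:
    """Toy model of S3 versioning semantics (illustration only)."""

    def __init__(self):
        self.versions = []  # every version ever written, in order

    def put(self, key, body):
        vid = uuid.uuid4().hex
        self.versions.append({"key": key, "version_id": vid,
                              "body": body, "is_delete_marker": False})
        return vid

    def delete(self, key):
        # A simple DELETE adds a delete marker; no data is removed.
        vid = uuid.uuid4().hex
        self.versions.append({"key": key, "version_id": vid,
                              "body": None, "is_delete_marker": True})
        return vid

    def get(self, key):
        # A simple GET returns the latest version, or 404 if it is a delete marker.
        matching = [v for v in self.versions if v["key"] == key]
        if not matching or matching[-1]["is_delete_marker"]:
            raise KeyError("404 Not Found")
        return matching[-1]["body"]

    def list_versions(self, key):
        # The versions subresource lists every version, including delete markers.
        return [v for v in self.versions if v["key"] == key]

bucket = VersionedBucket()
bucket.put("report.csv", b"v1 contents")
bucket.delete("report.csv")                      # adds a delete marker
print(len(bucket.list_versions("report.csv")))   # -> 2 (object + marker)
```

After the delete, a simple GET on the key fails with a 404-style error, while listing versions still shows both the original object and the marker, which carries no data.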

Incorrect options:

GET requests can retrieve delete marker objects

A delete marker has a key, version ID and Access Control List (ACL) associated with it

The delete marker has the same data associated with it, as the actual object

These three options contradict the explanation provided above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html

Question 11

An analytics company generates reports for various client applications, some of which have critical data. As per the company’s compliance guidelines, data has to be encrypted during data exchange, for all channels of communication. An Amazon S3 bucket is configured as a website endpoint and this is now being added as a custom origin for CloudFront.

How will you secure this channel, as per the company's requirements?

1. Configure CloudFront to mandate viewers to use HTTPS to request objects from S3. Configure the S3 bucket to support HTTPS communication only. This will force CloudFront to use HTTPS for communication between CloudFront and S3

2. Configure CloudFront to mandate viewers to use HTTPS to request objects from S3. However, CloudFront and S3 will use HTTP to communicate with each other

3. Communication between CloudFront and Amazon S3 is always on HTTP protocol since the network used for communication is internal to AWS and is inherently secure

4. CloudFront always forwards requests to S3 by using the protocol that viewers used to submit the requests. So, we only need to configure CloudFront to mandate the use of HTTPS for users

Correct Answer: 2 - Configure CloudFront to mandate viewers to use HTTPS to request objects from S3. However, CloudFront and S3 will use HTTP to communicate with each other

Explanation

Correct option:

Configure CloudFront to mandate viewers to use HTTPS to request objects from S3. CloudFront and S3 will use HTTP to communicate with each other

If your Amazon S3 bucket is configured as a website endpoint, you can’t configure CloudFront to use HTTPS to communicate with your origin because Amazon S3 doesn’t support HTTPS connections in that configuration.

HTTPS for Communication Between CloudFront and Your Amazon S3 Origin: via - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-cloudfront-to-s3-origin.html

Incorrect options:

Configure CloudFront to mandate viewers to use HTTPS to request objects from S3. Configure the S3 bucket to support HTTPS communication only. This will force CloudFront to use HTTPS for communication between CloudFront and S3 - As discussed above, HTTPS between CloudFront and Amazon S3 is not supported when the S3 bucket is configured as a website endpoint.

Communication between CloudFront and Amazon S3 is always on HTTP protocol since the network used for communication is internal to AWS and is inherently secure - When your origin is an Amazon S3 bucket, your options for using HTTPS for communications with CloudFront depend on how you’re using the bucket. If your Amazon S3 bucket is configured as a website endpoint, you can’t configure CloudFront to use HTTPS to communicate with your origin.

When your origin is an Amazon S3 bucket that supports HTTPS communication, CloudFront always forwards requests to S3 by using the protocol that viewers used to submit the requests.

CloudFront always forwards requests to S3 by using the protocol that viewers used to submit the requests. So, we only need to configure CloudFront to mandate the use of HTTPS for users - This option has been added as a distractor. As mentioned earlier, if your Amazon S3 bucket is configured as a website endpoint, you can’t configure CloudFront to use HTTPS while communicating with S3.
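For reference, a hedged CloudFront distribution sketch for an S3 website-endpoint origin (the bucket name and Region are placeholders): viewers are redirected to HTTPS, while the origin connection is pinned to HTTP because the website endpoint does not accept HTTPS:

```json
{
  "Origins": [
    {
      "Id": "s3-website-origin",
      "DomainName": "example-bucket.s3-website-us-east-1.amazonaws.com",
      "CustomOriginConfig": {
        "OriginProtocolPolicy": "http-only"
      }
    }
  ],
  "DefaultCacheBehavior": {
    "TargetOriginId": "s3-website-origin",
    "ViewerProtocolPolicy": "redirect-to-https"
  }
}
```

The `ViewerProtocolPolicy` setting satisfies the compliance requirement on the viewer side, which is the only side of the channel that can be encrypted in this configuration.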

Reference:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-cloudfront-to-s3-origin.html

Question 12

A junior developer is tasked with creating the necessary configurations for AWS CloudFormation, which is used extensively in a project. After declaring the necessary stack policy, the developer realized that the users still do not have access to stack resources. The stack policy created by the developer looks like this:

{
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal" : "*",
      "Resource" : "*"
    },
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal" : "*",
      "Resource" : "LogicalResourceId/ProductionDatabase"
    }
  ]
}

Why are the users unable to access the stack resources even after giving access permissions to all?

1. A stack policy applies only during stack updates, it doesn’t provide access controls. The developer needs to provide access through IAM policies

2. The stack policy is invalid and hence the users are not granted any permissions. The developer needs to fix the syntactical errors in the policy

3. Stack policies do not allow wildcard character value (*) for the Principal element of the policy

4. Stack policies are associated with a particular IAM role or an IAM user. Hence, they only work for the users you have explicitly attached the policy to

Correct Answer: 1 - A stack policy applies only during stack updates, it doesn’t provide access controls. The developer needs to provide access through IAM policies

Explanation

Correct option:

A stack policy applies only during stack updates, it doesn’t provide access controls. The developer needs to provide access through IAM policies - When you create a stack, all update actions are allowed on all resources. By default, anyone with stack update permissions can update all of the resources in the stack. You can prevent stack resources from being unintentionally updated or deleted during a stack update by using a stack policy. A stack policy is a JSON document that defines the update actions that can be performed on designated resources.

After you set a stack policy, all of the resources in the stack are protected by default. To allow updates on specific resources, you specify an explicit Allow statement for those resources in your stack policy. You can define only one stack policy per stack, but, you can protect multiple resources within a single policy.

A stack policy applies only during stack updates. It doesn’t provide access controls like an AWS Identity and Access Management (IAM) policy. Use a stack policy only as a fail-safe mechanism to prevent accidental updates to specific stack resources. To control access to AWS resources or actions, use IAM.
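The access the developer actually needs to grant would come from an IAM policy along these lines. The account ID, Region, and stack name are placeholders, and the action list is a minimal sketch rather than a complete permission set:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:DescribeStacks",
        "cloudformation:UpdateStack"
      ],
      "Resource": "arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/*"
    }
  ]
}
```

With this IAM policy attached, users can update the stack, and the stack policy then acts purely as a guardrail that denies updates to the protected `ProductionDatabase` resource.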

Incorrect options:

The stack policy is invalid and hence the users are not granted any permissions. The developer needs to fix the syntactical errors in the policy - This statement is incorrect and given only as a distractor.

Stack policies do not allow wildcard character value (*) for the Principal element of the policy - The Principal element specifies the entity that the policy applies to. This element is required while creating a policy but supports only the wildcard (*), which means that the policy applies to all principals.

Stack policies are associated with a particular IAM role or an IAM user. Hence, they only work for the users you have explicitly attached the policy to - A stack policy applies to all AWS CloudFormation users who attempt to update the stack. You can’t associate different stack policies with different users.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html

Question 13

A large IT company uses several AWS accounts for the different lines of business. Quite often, the systems administrator is faced with the problem of sharing Customer Master Keys (CMKs) across multiple AWS accounts for accessing AWS resources spread across these accounts.

How will you implement a solution to address this issue?

1. The key policy for the CMK must give the external account (or users and roles in the external account) permission to use the CMK. IAM policies in the external account must delegate the key policy permissions to its users and roles

2. Use AWS KMS service-linked roles to share access across AWS accounts

3. AWS Owned CMK can be used across AWS accounts. Configure an AWS Owned CMK and use it across accounts that need to share the key material

4. Declare a key policy for the CMK to give the external account permission to use the CMK. This key policy should be embedded with the first request of every transaction

Correct Answer: 1 - The key policy for the CMK must give the external account (or users and roles in the external account) permission to use the CMK. IAM policies in the external account must delegate the key policy permissions to its users and roles

Explanation

Correct option:

The key policy for the CMK must give the external account (or users and roles in the external account) permission to use the CMK. IAM policies in the external account must delegate the key policy permissions to its users and roles

You can allow IAM users or roles in one AWS account to use a customer master key (CMK) in a different AWS account. You can add these permissions when you create the CMK or change the permissions for an existing CMK.

To permit the usage of a CMK to users and roles in another account, you must use two different types of policies:

The key policy for the CMK must give the external account (or users and roles in the external account) permission to use the CMK. The key policy is in the account that owns the CMK.

IAM policies in the external account must delegate the key policy permissions to its users and roles. These policies are set in the external account and give permissions to users and roles in that account.
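A key policy statement in the account that owns the CMK, granting use of the key to an external account, could look like the following sketch (account ID 444455556666 is a placeholder of the kind used in AWS documentation examples, and the action list is illustrative):

```json
{
  "Sid": "Allow use of the key by the external account",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::444455556666:root" },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}
```

The administrator of the external account must then attach IAM policies to its users and roles allowing the same KMS actions, with the CMK's full ARN as the Resource.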

Incorrect options:

AWS Owned CMK can be used across AWS accounts. Configure an AWS Owned CMK and use it across accounts that need to share the key material - AWS owned CMKs are a collection of CMKs that an AWS service owns and manages for use in multiple AWS accounts. However, you cannot view, use, track, or audit them.

Use AWS KMS service-linked roles to share access across AWS accounts - AWS Key Management Service uses AWS Identity and Access Management (IAM) service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to AWS KMS. The service-linked roles are defined by AWS KMS and include all the permissions that the service requires to call other AWS services on your behalf. You cannot use AWS KMS service-linked roles to share access across AWS accounts.

Declare a key policy for the CMK to give the external account permission to use the CMK. This key policy should be embedded with the first request of every transaction - A key policy cannot be directly shared across accounts, and it is attached to the CMK rather than embedded in requests.

References:

https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html

https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk

Question 14

An e-commerce company is running its server infrastructure on Amazon EC2 instance store-backed instances. For better performance, the company has decided to move their applications to another Amazon EC2 instance store-backed instance with a different instance type.

How will you configure a solution for this requirement?

1. You can’t resize an instance store-backed instance. Instead, you choose a new compatible instance and move your application to the new instance

2. You can’t resize an instance store-backed instance. Instead, configure an EBS volume to be the root device for the instance and migrate using the EBS volume

3. Create an image of your instance, and then launch a new instance from this image with the instance type that you need. Take any Elastic IP address that you’ve associated with your original instance and associate it with the new instance for uninterrupted service to your application

4. Create an image of your instance, and then launch a new instance from this image with the instance type that you need. Any public IP address associated with the instance can be moved with the instance for uninterrupted access of services

Correct Answer: 3 - Create an image of your instance, and then launch a new instance from this image with the instance type that you need. Take any Elastic IP address that you’ve associated with your original instance and associate it with the new instance for uninterrupted service to your application

Explanation

Correct option:

Create an image of your instance, and then launch a new instance from this image with the instance type that you need. Take any Elastic IP address that you’ve associated with your original instance and associate it with the new instance for uninterrupted service to your application

When you want to move your application from one instance store-backed instance to an instance store-backed instance with a different instance type, you must migrate it by creating an image from your instance, and then launching a new instance from this image with the instance type that you need. To ensure that your users can continue to use the applications that you’re hosting on your instance uninterrupted, you must take any Elastic IP address that you’ve associated with your original instance and associate it with the new instance. Then you can terminate the original instance.

Complete steps to migrate an instance store-backed instance: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html

Incorrect options:

You can’t resize an instance store-backed instance. Instead, you choose a new compatible instance and move your application to the new instance - An instance store-backed EC2 instance can be resized by migrating it through an image, as explained above; you do not simply move the application to a new instance manually.

You can’t resize an instance store-backed instance. Instead, configure an EBS volume to be the root device for the instance and migrate using the EBS volume - This statement is incorrect.

Create an image of your instance, and then launch a new instance from this image with the instance type that you need. Any public IP address associated with the instance can be moved with the instance for uninterrupted access of services - A public IP address cannot be moved to another instance; it is released when the instance is stopped or terminated. You need an Elastic IP address to keep the service uninterrupted for users, since Elastic IPs can be remapped across instances.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html

Question 15

A developer is tasked with cleaning up obsolete resources. When the developer tried to delete an AWS CloudFormation stack, the deletion process returned without an error or a success message, and the stack was not deleted.

What is the reason for this behavior and how will you fix it?

1. The AWS user who initiated the stack deletion does not have enough permissions

2. Some resources must be empty before they can be deleted. Such resources will not be deleted if they are not empty and stack deletion fails without any error

3. If you attempt to delete a stack with termination protection enabled, the deletion fails and the stack - including its status - remains unchanged

4. Dependent resources should be deleted first, before deleting the rest of the resources in the stack. If this order is not followed, then stack deletion fails without an error

Correct Answer: 3 - If you attempt to delete a stack with termination protection enabled, the deletion fails and the stack - including its status - remains unchanged

Explanation

Correct option:

If you attempt to delete a stack with termination protection enabled, the deletion fails and the stack - including its status - remains unchanged

You cannot delete stacks that have termination protection enabled. If you attempt to delete a stack with termination protection enabled, the deletion fails and the stack - including its status - remains unchanged. Disable termination protection on the stack, then perform the delete operation again.

This includes nested stacks whose root stacks have termination protection enabled. Disable termination protection on the root stack, then perform the delete operation again. It is strongly recommended that you do not delete nested stacks directly, but only delete them as part of deleting the root stack and all its resources.

Complete steps for Stack deletion: via - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html

Incorrect options:

The AWS user who initiated the stack deletion does not have enough permissions - If the user does not have enough permissions to delete the stack, an error explaining the same is displayed and the stack will be in the DELETE_FAILED state.

Some resources must be empty before they can be deleted. Such resources will not be deleted if they are not empty and stack deletion fails without any error - Some resources must be empty before they can be deleted. For example, you must delete all objects in an Amazon S3 bucket or remove all instances in an Amazon EC2 security group before you can delete the bucket or security group. Otherwise, stack deletion fails and the stack will be in the DELETE_FAILED state.

Dependent resources should be deleted first, before deleting the rest of the resources in the stack. If this order is not followed, then stack deletion fails without an error - Any error during stack deletion will result in the stack being in the DELETE_FAILED state.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html

Question 16

An organization that started with a single AWS account has gradually moved to a multi-account setup. The organization also has multiple AWS environments in each account, which were being managed at the account level. Backups are a big part of this management task. The organization is looking at moving to a centralized backup management process that consolidates and automates Cross-Region backup tasks across AWS accounts.

Which of the solutions below is the right choice for this requirement?

1. Configure AWS Systems Manager Maintenance Windows to schedule backup tasks as per the company’s policies. Tag the resources to help identify them by the AWS environment they run in. Use Amazon CloudWatch dashboards hosted by Systems Manager to get an overall view of the status of all resources under the AWS account

2. Use Amazon EventBridge to create a workflow for scheduled backup of all AWS resources under an account. Amazon S3 lifecycle policies, Amazon EC2 instance backups, and Amazon RDS backups can be used to create the events for the EventBridge. The same workflow can be scheduled to work on production and non-production environments, based on the tags created

3. Create a backup plan in AWS Backup. Assign tags to resources based on the environment (Production, Development, Testing). Create one backup policy for production environments and one backup policy for non-production environments. Schedule the backup plan based on the organization’s backup policies

4. Use Amazon Data Lifecycle Manager to manage the creation and deletion of all the AWS resources under an account. Tag all the resources that need to be backed up and use lifecycle policies to customize the backup management to cater to the needs of the organization

Correct Answer: 3 - Create a backup plan in AWS Backup. Assign tags to resources based on the environment (Production, Development, Testing). Create one backup policy for production environments and one backup policy for non-production environments. Schedule the backup plan based on the organization’s backup policies

Explanation

Correct option:

Create a backup plan in AWS Backup. Assign tags to resources based on the environment (Production, Development, Testing). Create one backup policy for production environments and one backup policy for non-production environments. Schedule the backup plan based on the organization’s backup policies

AWS Backup is a fully managed and cost-effective backup service that simplifies and automates data backup across AWS services including Amazon EBS, Amazon EC2, Amazon RDS, Amazon Aurora, Amazon DynamoDB, Amazon EFS, and AWS Storage Gateway. In addition, AWS Backup leverages AWS Organizations to implement and maintain a central view of backup policy across resources in a multi-account AWS environment. Customers simply tag and associate their AWS resources with backup policies managed by AWS Backup for Cross-Region data replication.

The following post shows how to centrally manage backup tasks across AWS accounts in your organization by deploying backup policies with AWS Backup.

Example AWS Backup Architecture: via - https://aws.amazon.com/blogs/storage/centralized-cross-account-management-with-cross-region-copy-using-aws-backup/
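As a rough sketch, a backup plan passed to the AWS Backup CreateBackupPlan API with a daily rule and a Cross-Region copy action might look like the following; the plan and vault names, schedule, and retention values are assumptions for illustration:

```json
{
  "BackupPlan": {
    "BackupPlanName": "production-daily",
    "Rules": [
      {
        "RuleName": "daily-with-cross-region-copy",
        "TargetBackupVaultName": "production-vault",
        "ScheduleExpression": "cron(0 5 ? * * *)",
        "Lifecycle": { "DeleteAfterDays": 35 },
        "CopyActions": [
          {
            "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",
            "Lifecycle": { "DeleteAfterDays": 35 }
          }
        ]
      }
    ]
  }
}
```

Resources are then associated with the plan through a backup selection, for example by matching an Environment=Production tag, so that tagged resources across the organization's accounts fall under the same policy.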

Incorrect options:

Configure AWS Systems Manager Maintenance Windows to schedule backup tasks as per the company’s policies. Tag the resources to help identify them by the AWS environment they run in. Use Amazon CloudWatch dashboards hosted by Systems Manager to get an overall view of the status of all resources under the AWS account

AWS Systems Manager Maintenance Windows let you define a schedule for when to perform potentially disruptive actions on your instances such as patching an operating system, updating drivers, or installing software or patches. Although a useful service, it is not suited for the given requirements.

Use Amazon EventBridge to create a workflow for scheduled backup of all AWS resources under an account. Amazon S3 lifecycle policies, Amazon EC2 instance backups, and Amazon RDS backups can be used to create the events for the EventBridge. The same workflow can be scheduled to work on production and non-production environments, based on the tags created - Amazon EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications, integrated Software-as-a-Service (SaaS) applications, and AWS services. It is possible to build a backup solution using EventBridge, but it will not be an optimized one, since AWS offers services with better features for centrally managing backups.

Use Amazon Data Lifecycle Manager to manage the creation and deletion of all the AWS resources under an account. Tag all the resources that need to be backed up and use lifecycle policies to customize the backup management to cater to the needs of the organization - DLM provides a simple way to manage the lifecycle of EBS resources, such as volume snapshots. You should use DLM when you want to automate the creation, retention, and deletion of EBS snapshots; it does not provide centralized backup management across AWS services and accounts.

References:

https://aws.amazon.com/blogs/storage/centralized-cross-account-management-with-cross-region-copy-using-aws-backup/

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-maintenance.html

https://aws.amazon.com/eventbridge/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html

Question 17

The development team at an IT company is looking at moving its web applications to Amazon EC2 instances. The team is weighing its options for EBS volumes and instance store-backed instances for these applications with varied workloads.

Which of the following would you identify as correct regarding instance store and EBS volumes? (Select three)

1. Use separate Amazon EBS volumes for the operating system and your data, even though the root volume persistence feature is available

2. Data stored in the instance store is preserved when you stop or terminate your instance. However, data is lost when you hibernate the instance. Configure EBS volumes or have a backup plan to avoid losing critical data to this behavior

3. EBS snapshots only capture data that has been written to your Amazon EBS volume, which might exclude any data that has been locally cached by your application or operating system

4. By default, data on a non-root EBS volume is preserved even if the instance is shut down or terminated

5. EBS encryption does not support boot volumes

6. Snapshots of EBS volumes, stored on Amazon S3, can be accessed using Amazon S3 APIs

Correct Answer: 1, 3, and 4

Explanation

Correct options:

Use separate Amazon EBS volumes for the operating system and your data, even though root volume persistence feature is available

As a best practice, AWS recommends the use of separate Amazon EBS volumes for the operating system and your data. This ensures that the volume with your data persists even after instance termination or any issues with the operating system.

EBS snapshots only capture data that has been written to your Amazon EBS volume, which might exclude any data that has been locally cached by your application or operating system

Snapshots only capture data that has been written to your Amazon EBS volume, which might exclude any data that has been locally cached by your application or OS. To ensure consistent snapshots on volumes attached to an instance, AWS recommends detaching the volume cleanly, issuing the snapshot command, and then reattaching the volume. For Amazon EBS volumes that serve as root devices, AWS recommends shutting down the machine to take a clean snapshot.

By default, data on a non-root EBS volume is preserved even if the instance is shutdown or terminated

By default, when you attach a non-root EBS volume to an instance, its DeleteOnTermination attribute is set to false. Therefore, the default is to preserve these volumes. After the instance terminates, you can take a snapshot of the preserved volume or attach it to another instance. Note that you continue to incur charges for the preserved volume until you delete it.
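
As a sketch of where this default lives, the DeleteOnTermination flag sits in the block device mapping supplied when launching an instance. The device names and sizes below are illustrative placeholders:

```python
# Sketch: the BlockDeviceMappings structure passed to EC2 RunInstances.
# Device names and sizes are illustrative; the defaults mirror the text:
# attached non-root EBS volumes are preserved on termination unless you
# opt in to deletion, while root volumes typically delete by default.

def block_device_mappings(preserve_data_volume=True):
    return [
        {   # root volume: deleted on termination by default
            "DeviceName": "/dev/xvda",
            "Ebs": {"VolumeSize": 8, "DeleteOnTermination": True},
        },
        {   # data volume: set DeleteOnTermination explicitly
            "DeviceName": "/dev/xvdf",
            "Ebs": {"VolumeSize": 100,
                    "DeleteOnTermination": not preserve_data_volume},
        },
    ]

mappings = block_device_mappings()
print(mappings[1]["Ebs"]["DeleteOnTermination"])  # False: data volume survives
```

Passing a structure like this to RunInstances (or editing the attribute later with ModifyInstanceAttribute) makes the intended lifecycle of each volume explicit rather than relying on defaults.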

Incorrect options:

Data stored in the instance store is preserved when you stop or terminate your instance. However, data is lost when you hibernate the instance. Configure EBS volumes or have a backup plan to avoid losing critical data to this behavior - This is incorrect. Data stored in the instance store is lost when you stop, hibernate, or terminate the instance.

EBS encryption does not support boot volumes - EBS volumes used as root devices can be encrypted without any issue.

Snapshots of EBS volumes, stored on Amazon S3, can be accessed using Amazon S3 APIs - This is incorrect. Snapshots are only available through the Amazon EC2 API.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-best-practices.html

https://aws.amazon.com/ebs/faqs/

Question 18

A retail company has a complex AWS VPC architecture that is getting difficult to maintain. The company has decided to configure VPC flow logs to track the network traffic and analyze various traffic flow scenarios. The systems administration team has configured VPC flow logs for one of the VPCs but is not able to see any logs. After initial analysis, the team has tracked down the error. It says Access error, and the team’s administrator wants to change the IAM role defined in the flow log configuration.

What is the correct way of configuring a solution for this issue so that the VPC flow logs can be operational?

1. The error indicates that the IAM role does not have a trust relationship with the flow logs service. Change the trust relationship from the flow log configuration

2. The flow log is still in the process of being created. It sometimes takes almost 10 minutes for the logs to start

3. The error indicates the IAM role is not correctly configured. After you’ve created a flow log, you cannot change its configuration. Instead, you need to delete the flow log and create a new one with the required configuration

4. The error indicates an internal error has occurred in the flow logs service. Raise a service request with AWS

Correct Answer: 3

Explanation

Correct option:

The error indicates the IAM role is not correctly configured. After you’ve created a flow log, you cannot change its configuration. Instead, you need to delete the flow log and create a new one with the required configuration

Access error can be caused by one of the following reasons:

The IAM role for your flow log does not have sufficient permissions to publish flow log records to the CloudWatch log group

The IAM role does not have a trust relationship with the flow logs service

The trust relationship does not specify the flow logs service as the principal

After you’ve created a flow log, you cannot change its configuration or the flow log record format. For example, you can’t associate a different IAM role with the flow log or add or remove fields in the flow log record. Instead, you can delete the flow log and create a new one with the required configuration.

Incorrect options:

The error indicates that the IAM role does not have a trust relationship with the flow logs service. Change the trust relationship from flow log configuration - As discussed above, the VPC flow log configuration cannot be changed once created.

The flow log is still in the process of being created. It sometimes takes almost 10 minutes to start the logs - This scenario is possible when you have just configured the flow logs. However, the status of the flow logs will not be in an error state.

The error indicates an internal error has occurred in the flow logs service. Raise a service request with AWS - This is a made-up option, given only as a distractor.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-troubleshooting.html

https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html#flow-log-records

Question 19

A data analytics company runs its technology operations on AWS Cloud using different VPC configurations for each of its applications. A systems administrator wants to configure the Network Access Control List (ACL) and Security Group (SG) of VPC1 to allow access for AWS resources in VPC2.

Which is the best way of configuring this requirement?

1. Network ACLs and Security Groups share a parent-child relationship. If resources in VPC2 are given inbound and outbound permissions on the Network ACLs of VPC1, the resources will get the necessary permissions on the associated security groups too

2. By default, Security Groups allow outbound traffic. Hence, only the inbound traffic configuration of the security groups has to be changed to allow requests from resources in VPC2 to access instances in VPC1. If the subnet is not associated with any Network ACL, you will not need any configuration changes

3. Based on the inbound and outbound traffic configurations on the Network ACL of VPC1, you can create similar deny rules on the Security Groups of the instances in VPC1 to deny all traffic other than the traffic originating from resources in VPC2

4. The Security Groups of instances in VPC1 should be configured to allow inbound traffic from resources in VPC2. By default, Network ACLs allow all inbound and outbound traffic. So, a default Network ACL on VPC1 will not need any configuration changes

Correct Answer: 4

Explanation

Correct option:

The Security Groups of instances on VPC1 should be configured to allow inbound traffic from resources in VPC2. By default, Network ACLs allow all inbound and outbound traffic. So, a default Network ACLs on VPC1 will not need any configuration changes - A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups.

A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.

Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
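
The practical consequence is easiest to see in a toy model: with a stateful security group, return traffic for an allowed request flows without any extra rule, while a stateless network ACL drops the response unless an explicit rule covers it. This is a simplified illustration, not the AWS rule format:

```python
# Toy model of the stateful/stateless difference described above.
# A security group tracks connections, so return traffic is allowed
# implicitly; a network ACL evaluates every packet against its rules.

def sg_allows(inbound_rules, packet, is_response=False):
    # Stateful: responses to tracked connections bypass rule evaluation.
    if is_response:
        return True
    return any(r == (packet["port"], packet["source"]) for r in inbound_rules)

def nacl_allows(rules, packet, is_response=False):
    # Stateless: responses are checked like any other packet, so the
    # return-traffic (ephemeral port) rules must be added explicitly.
    return any(r == (packet["port"], packet["source"]) for r in rules)

inbound = [(443, "10.1.0.0/16")]          # allow HTTPS from VPC2's CIDR
response = {"port": 443, "source": "10.1.0.0/16"}

print(sg_allows(inbound, response, is_response=True))   # True
print(nacl_allows([], response, is_response=True))      # False
```

This is why, for the scenario above, only the security group inbound rules need editing: the default network ACL already allows all traffic in both directions, and the security group's statefulness handles the responses.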

Incorrect options:

Network ACLs and Security Groups share a parent-child relationship. If resources in VPC2 are given inbound and outbound permissions on Network ACLs of VPC1, the resources will get necessary permissions on the associated security groups too - This is an incorrect statement. Security Groups act at the instance level and Network ACLs are at the subnet level. They are different levels of security provided by AWS and do not form any hierarchy.

By default, Security Groups allow outbound traffic. Hence, only the inbound traffic configuration of the security groups has to be changed to allow requests from resources in VPC2 to access instances in VPC1. If the subnet is not associated with any Network ACL, you will not need any configuration changes - Each subnet in your VPC must be associated with a network ACL. If you don’t explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL. Hence, a subnet always has a network ACL associated with it.

Based on the inbound and outbound traffic configurations on Network ACL of VPC1, you can create similar deny rules on Security Groups of the instances in VPC1 to deny all traffic, other than the one originating from resources in VPC2 - Security Groups and Network ACLs are mutually exclusive and do not share permissions. Also, Security Groups can only be used to specify allow rules, and not deny rules.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

Question 20

A banking service uses Amazon EC2 instances and Amazon RDS databases to run its core business functionalities. The Chief Technology Officer (CTO) of the company has requested granular OS level metrics from the database service for benchmarking.

As a SysOps Administrator, how will you provide this information?

1. Enable Enhanced Monitoring for your RDS DB instance

2. Subscribe to Amazon RDS events to be notified when changes occur with a DB instance and its connected resources

3. Subscribe to CloudWatch metrics that track CPU utilization of the instances the RDS database is hosted on

4. Enable Performance Insights to expand on the existing Amazon RDS monitoring features to illustrate your database’s performance

Correct Answer: 1

Explanation

Correct option:

Enable Enhanced Monitoring for your RDS DB instance - Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console. Also, you can consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice.

By default, Enhanced Monitoring metrics are stored for 30 days in CloudWatch Logs, which are different from typical CloudWatch metrics. Enhanced Monitoring for RDS provides OS metrics such as Free Memory, Active Memory, Swap Free, Processes Running, and File System Used.

You can use these metrics to understand the environment’s performance; they are ingested by Amazon CloudWatch Logs as log entries. You can use CloudWatch to create alarms based on metrics, and these alarms can run actions. You can also publish metrics from within your infrastructure, device, or application into CloudWatch as custom metrics. By using Enhanced Monitoring and CloudWatch together, you can automate tasks by creating a custom metric from the Enhanced Monitoring data that CloudWatch Logs ingests from RDS. Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU.
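
For instance, Enhanced Monitoring delivers its OS metrics as JSON log events (in the RDSOSMetrics log group), which you can post-process into custom metrics. The payload below is a trimmed, made-up sample whose field names follow the documented memory block; treat it as an illustration, not a verbatim API response:

```python
import json

# Illustrative Enhanced Monitoring log event (values are made up).
sample_event = json.dumps({
    "instanceID": "db-instance-1",
    "memory": {"total": 8166464, "free": 1024512, "active": 5120000},
})

def free_memory_pct(event_json):
    """Derive a custom 'free memory %' metric from one log event."""
    metrics = json.loads(event_json)
    mem = metrics["memory"]
    return round(100.0 * mem["free"] / mem["total"], 1)

print(free_memory_pct(sample_event))  # 12.5
```

A Lambda subscribed to the log group could run a function like this on each event and publish the result via CloudWatch PutMetricData as a custom metric.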

Incorrect options:

Subscribe to Amazon RDS events to be notified when changes occur with a DB instance and its connected resources - Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB snapshot, DB parameter group, or DB security group. Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon RDS event occurs. This option is not relevant for the given use-case.

Subscribe to CloudWatch metrics that track CPU utilization of the instances the RDS is hosted on - CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences between the measurements, because the hypervisor layer performs a small amount of work. The differences can be greater if your DB instances use smaller instance classes because then there are likely more virtual machines (VMs) that are managed by the hypervisor layer on a single physical instance. Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU.

Enable Performance Insights to expand on the existing Amazon RDS monitoring features to illustrate your database’s performance - Performance Insights collects metric data from the database engine to monitor the actual load on a database. Performance Insights will not help in gathering granular OS level metrics.

Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/custom-cloudwatch-metrics-rds/

Question 21

Your application has complex runtime and OS dependencies and is taking a long time to be deployed on Elastic Beanstalk. You cannot sacrifice application availability.

What should you do to improve the deployment time? (Select two)

1. Create a Golden AMI with your application

2. Create a new Beanstalk environment for each application and apply blue/green deployment patterns

3. Use rolling with additional batch

4. Upgrade the EC2 instance type

5. Use all at once deployment pattern

Correct Answer: 1 and 2

Explanation

Correct options:

Create a Golden AMI with your application

A Golden AMI is an AMI that you standardize through configuration, consistent security patching, and hardening. It also contains agents you approve for logging, security, performance monitoring, etc. For the given use-case, you can have the complex runtime and OS dependencies already set up via the golden AMI.

Golden AMI Pipeline: via - https://aws.amazon.com/blogs/awsmarketplace/announcing-the-golden-ami-pipeline/

Create a new Beanstalk environment for each application and apply blue/green deployment patterns

Elastic Beanstalk provides several deployment policies and settings.

via - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

Since AWS Elastic Beanstalk performs an in-place update when you update your application versions, your application can become unavailable to users for a short period of time. You can avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly.

via - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html

A blue/green deployment is also required when you want to update an environment to an incompatible platform version.

Incorrect options:

Use rolling with additional batch - With the rolling update, your application is deployed to your environment one batch of instances at a time. Most bandwidth is retained throughout the deployment. This also avoids downtime and minimizes reduced availability, at a cost of a longer deployment time. Since the use-case mandates a short deployment time, this option is ruled out.

Upgrade the EC2 instance type - An upgraded instance type may only marginally improve the deployment time.

Use all at once deployment pattern - With all at once deployment, Elastic Beanstalk deploys the new application version to each instance. Then, the web proxy or application server might need to restart. As a result, your application might be unavailable to users (or have low availability) for a short time. Since the use-case mandates high availability, this option is ruled out.

References:

https://aws.amazon.com/blogs/awsmarketplace/announcing-the-golden-ami-pipeline/

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

Question 22

You are deploying an application and use the cfn-init and cfn-signal scripts to ensure the application is properly deployed before signaling the success of your stack deployment to CloudFormation. Right now, every time you deploy, CloudFormation completes successfully, even though the instance is still executing the cfn-init script.

As a SysOps Administrator, which of the following would you identify as the root cause behind the issue?

1. You forgot the Wait Condition

2. You did not disable Rollbacks

3. You forgot to include the cfn-signal command in your user data

4. You forgot to include a deletion policy

Correct Answer: 1

Explanation

Correct option:

You forgot the Wait Condition

The cfn-init helper script reads template metadata from the AWS::CloudFormation::Init key and acts accordingly to:

Fetch and parse metadata from AWS CloudFormation

Install packages

Write files to disk

Enable/disable and start/stop services

The cfn-signal helper script signals AWS CloudFormation to indicate whether Amazon EC2 instances have been successfully created or updated. If you install and configure software applications on instances, you can signal AWS CloudFormation when those software applications are ready.

You can use the wait condition handle to make AWS CloudFormation pause the creation of a stack and wait for a signal before it continues to create the stack. For example, you might want to download and configure applications on an Amazon EC2 instance before considering the creation of that Amazon EC2 instance complete.

AWS CloudFormation creates a wait condition just like any other resource. When AWS CloudFormation creates a wait condition, it reports the wait condition’s status as CREATE_IN_PROGRESS and waits until it receives the requisite number of success signals or the wait condition’s timeout period has expired. If AWS CloudFormation receives the requisite number of success signals before the timeout period expires, it continues creating the stack; otherwise, it sets the wait condition’s status to CREATE_FAILED and rolls the stack back.
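
A minimal template sketch of this pattern is shown below; the AMI ID and timeout value are placeholders. The instance runs cfn-init, then cfn-signal posts the result to the handle, and the WaitCondition keeps the stack in CREATE_IN_PROGRESS until the signal arrives or the timeout expires:

```yaml
Resources:
  WebServerHandle:
    Type: AWS::CloudFormation::WaitConditionHandle

  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} \
            --resource WebServer --region ${AWS::Region}
          /opt/aws/bin/cfn-signal -e $? "${WebServerHandle}"

  WebServerWait:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: WebServer
    Properties:
      Handle: !Ref WebServerHandle
      Timeout: "900"    # seconds; the maximum is 43200 (12 hours)
      Count: 1
```

Without the WaitCondition resource, CloudFormation considers the instance created as soon as EC2 launches it, which is exactly the premature-success behavior described in the question.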

Incorrect options:

You did not disable Rollbacks - Enabling/disabling rollbacks has no impact on the ability to track the status of the cfn-init script.

You forgot to include the cfn-signal command in your user data - The scenario states that cfn-signal is already in use. Moreover, a missing cfn-signal would not explain the premature success: without a wait condition, CloudFormation does not wait for any signal before marking the stack complete.

You forgot to include a deletion policy - This is again a distractor as a deletion policy has nothing to do with tracking the status of the cfn-init script.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-waitcondition.html

Question 23

As part of the best practices for DevOps, all your infrastructure is deployed using CloudFormation. This includes EBS volumes. When the CloudFormation stacks are deleted, it is mandatory to keep a snapshot of the EBS volumes for backup and compliance purposes.

How can you achieve this using CloudFormation?

1. Enable termination protection

2. Use cfn helper scripts and Wait Conditions upon stack deletion

3. Use DeletionPolicy=Snapshot

4. Reference the EBS volume as a stack output

Correct Answer: 3

Explanation

Correct option:

Use DeletionPolicy=Snapshot

To control how AWS CloudFormation handles the EBS volume when the stack is deleted, set a deletion policy for your volume. You can choose to retain the volume, to delete the volume, or to create a snapshot of the volume.

Here is the sample YAML:

NewVolume:
  Type: AWS::EC2::Volume
  Properties:
    Size: 100
    Encrypted: true
    AvailabilityZone: !GetAtt Ec2Instance.AvailabilityZone
    Tags:
      - Key: MyTag
        Value: TagValue
  DeletionPolicy: Snapshot

via - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volume.html

Incorrect options:

Enable termination protection - You can prevent a stack from being accidentally deleted by enabling termination protection on the stack. If a user attempts to delete a stack with termination protection enabled, the deletion fails and the stack, including its status, remains unchanged. You can enable termination protection on a stack when you create it. Termination protection on stacks is disabled by default. You can set termination protection on a stack with any status except DELETE_IN_PROGRESS or DELETE_COMPLETE. However, termination protection does not create a snapshot of the EBS volumes when the stack is eventually deleted.

Use cfn helper scripts and Wait Conditions upon stack deletion - The cfn helper scripts such as cfn-init, cfn-signal, etc help in installing packages or to indicate whether Amazon EC2 instances have been successfully created or updated. You cannot use these scripts to mandatorily keep a snapshot of the EBS volume.

Reference the EBS volume as a stack output - The optional Outputs section for a CloudFormation stack declares output values that you can import into other stacks (to create cross-stack references), return in response (to describe stack calls), or view on the AWS CloudFormation console. You should note that a stack that is referenced by another stack cannot be deleted and it cannot modify or remove the exported value. Just by referencing the EBS volume as a stack output, you will not be able to enforce the snapshot of the EBS volume.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volume.html

https://aws.amazon.com/blogs/aws/aws-cloudformation-update-yaml-cross-stack-references-simplified-substitution/

Question 24

You are provisioning an internal full LAMP stack using CloudFormation, and the EC2 instance gets configured automatically using the cfn helper scripts, such as cfn-init and cfn-signal. The stack creation fails as CloudFormation fails to receive a signal from your EC2 instance.

What are the possible reasons for this? (Select two)

1. The subnet where the application is deployed does not have a network route to the CloudFormation service through a NAT Gateway or Internet Gateway

2. The EC2 instance does not have a proper IAM role allowing it to signal success to CloudFormation

3. The cfn-signal script does not get executed before the timeout of the wait condition

4. AWS is experiencing an Insufficient Capacity error for the instance type you requested

5. The cfn-init script failed

Correct Answer: 1 and 3

Explanation

Correct options:

The subnet where the application is deployed does not have a network route to the CloudFormation service through a NAT Gateway or Internet Gateway - As the use-case mentions an internal full LAMP stack, this implies that the stack is to be deployed in a private subnet. Now this private subnet must have a network route to the CloudFormation service through a NAT Gateway or Internet Gateway.

You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.

An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It, therefore, imposes no availability risks or bandwidth constraints on your network traffic. An internet gateway is attached to the VPC itself; a subnet becomes public when its route table has a route to the internet gateway.

The cfn-signal script does not get executed before the timeout of the wait condition

The Timeout property determines how long AWS CloudFormation waits for the requisite number of success signals. Timeout is a minimum-bound property, meaning the timeout occurs no sooner than the time you specify, but can occur shortly thereafter. The maximum time that you can specify is 43200 seconds (12 hours). For the given scenario, the stack creation can fail as CloudFormation may fail to receive a signal from your EC2 instance if the Timeout property is set to a low value.

Incorrect options:

The EC2 instance does not have a proper IAM role allowing it to signal success to CloudFormation - You do not need an IAM role to use cfn-signal, as it signals the wait condition handle via a pre-signed URL.

AWS is experiencing an Insufficient Capacity error for the instance type you requested - In case of Insufficient Capacity, the instance would not have been created and the CloudFormation stack would have failed altogether.

The cfn-init script failed - The cfn-init script failure should still be followed by the cfn-signal script, which would have sent a signal to CloudFormation nonetheless.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-waitcondition.html

Question 25

You are developing a new CloudFormation stack and writing some very complex cfn-init code. The code fails and you would like to debug why. When reading the documentation, you see all the logs are in the file /var/cfn/cfn-init-output.log, which will give you more information as to why the instance provisioning is failing. But you realize that you can’t gain access to this file as the CloudFormation stack always terminates the EC2 instance when the creation fails.

What can you do to access these log files, while not changing the way your EC2 instance works and ensuring you can debug your instance over 24 hours?

1. Install the CloudWatch Logs agent, create a new IAM role and assign it to the EC2 instance, and send the logs directly to CloudWatch Logs

2. Set OnFailure=DO_NOTHING

3. Increase the Wait Timeout to 2 hours

4. Enable VPC Flow Logs and intercept the cfn-init log file

Correct Answer: 2

Explanation

Correct option:

Set OnFailure=DO_NOTHING

You can use the OnFailure property of the CloudFormation CreateStack call for this use-case. The OnFailure property determines what action will be taken if stack creation fails. This must be one of DO_NOTHING, ROLLBACK, or DELETE. You can specify either OnFailure or DisableRollback, but not both.

Using the OnFailure property, you can prevent the termination of the EC2 instances created by the CloudFormation stack.
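
As a sketch of how the parameter fits into a CreateStack call, the helper below assembles the request and enforces the documented rule that OnFailure and DisableRollback are mutually exclusive; the stack name and template URL are placeholders:

```python
# Sketch of CreateStack parameters. The validation mirrors the documented
# rule that OnFailure and DisableRollback cannot both be specified.

VALID_ON_FAILURE = {"DO_NOTHING", "ROLLBACK", "DELETE"}

def create_stack_params(stack_name, template_url,
                        on_failure=None, disable_rollback=None):
    if on_failure is not None and disable_rollback is not None:
        raise ValueError("Specify either OnFailure or DisableRollback, not both")
    if on_failure is not None and on_failure not in VALID_ON_FAILURE:
        raise ValueError(f"OnFailure must be one of {sorted(VALID_ON_FAILURE)}")
    params = {"StackName": stack_name, "TemplateURL": template_url}
    if on_failure is not None:
        params["OnFailure"] = on_failure
    if disable_rollback is not None:
        params["DisableRollback"] = disable_rollback
    return params

# Keep the failed instance around for debugging:
print(create_stack_params("debug-stack", "https://example.com/tpl.yaml",
                          on_failure="DO_NOTHING")["OnFailure"])  # DO_NOTHING
```

With OnFailure set to DO_NOTHING, a failed creation leaves the EC2 instance (and its local cfn-init log file) in place for inspection instead of rolling it back and terminating it.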

Incorrect options:

Install the CloudWatch logs agent, create a new IAM role and assign it to the EC2 instance, and send the logs directly to CloudWatch Logs

Enable VPC Flow Logs and intercept the cfn-init log file

As the use-case mentions that there should be no changes to the way the EC2 instance works, installing and configuring the CloudWatch Logs agent is ruled out. VPC Flow Logs are also unsuitable: they capture IP traffic metadata for network interfaces, not file contents, so they cannot be used to read the cfn-init log file.

Increase the Wait Timeout to 2 hours - The wait timeout works with cfn-signal, however, the given issue is related to cfn-init wherein some underlying code is failing. Therefore increasing wait timeout is not a valid solution for this scenario.

References:

https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html

https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-prevent-rollback-failure/

Question 26

Your 8 TB gp2 drive is reaching its peak performance of 10,000 IOPS while being almost fully utilized.

How can you increase the performance while keeping the costs at the same level?

1. Convert the gp2 drive to io1 and increase the PIOPS

2. Create two 4 TB gp2 drives and mount them in RAID 0 on the EC2 instance

3. Create two 4 TB gp2 drives and mount them in RAID 1 on the EC2 instance

4. Enable burst mode on the gp2 drive

Correct Answer: 2

Explanation

Correct option:

Create two 4 TB gp2 drives and mount them in RAID 0 on the EC2 instance

With Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level.

For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together. So for the given use-case, to increase the performance, you should use RAID 0.

via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
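
The arithmetic behind the answer can be checked quickly. gp2 baseline performance scales at 3 IOPS per GiB, capped per volume; using the 10,000 IOPS cap from the question (newer gp2 volumes cap at 16,000), striping two volumes in RAID 0 doubles the available ceiling:

```python
# Back-of-the-envelope gp2 IOPS math for the RAID 0 option.
# Baseline: 3 IOPS per GiB, floor of 100, capped per volume.
# The 10,000 IOPS cap matches the figure given in the question.

def gp2_iops(size_gib, per_volume_cap=10_000):
    return min(max(3 * size_gib, 100), per_volume_cap)

def raid0_iops(volume_sizes_gib, per_volume_cap=10_000):
    # RAID 0 stripes I/O across volumes, so the per-volume ceilings add up.
    return sum(gp2_iops(s, per_volume_cap) for s in volume_sizes_gib)

print(gp2_iops(8192))            # one 8 TiB volume: capped at 10000
print(raid0_iops([4096, 4096]))  # two 4 TiB volumes in RAID 0: 20000
```

Since gp2 is billed per GB-month regardless of how many volumes the capacity is split across, the two-volume RAID 0 layout doubles the IOPS ceiling at the same storage cost.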

Incorrect options:

Convert the gp2 drive to io1 and increase the PIOPS - Changing the gp2 drive to io1 entails more costs as the pricing is $0.10 per GB-month of provisioned storage for gp2 and $0.125 per GB-month of provisioned storage for io1. So this option is ruled out.

Create two 4 TB gp2 drives and mount them in RAID 1 on the EC2 instance - You should use RAID 1 when fault tolerance is more important than I/O performance.

Enable burst mode on the gp2 drive - gp2 volumes can burst to 3,000 IOPS for extended periods of time. This option is a distractor as you do not need to enable the burst mode for gp2 volumes as it’s available by default.

via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html

References:

https://aws.amazon.com/ebs/pricing/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html

https://aws.amazon.com/blogs/database/understanding-burst-vs-baseline-performance-with-amazon-rds-and-gp2/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html

Question 27

Which of the following services allows for an in-place switch from unencrypted to encrypted without impacting existing operations?

1. S3

2. RDS

3. EBS

4. EFS

Correct Answer: 1

Explanation

Correct option:

S3

Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. You can set default encryption on a bucket so that all new objects are encrypted when they are stored in the bucket. The objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS). When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk and decrypts it when you download the objects.

There is no change to the encryption of the objects that existed in the bucket before default encryption was enabled.

So for the given use-case, you can continue to use the same S3 buckets without impacting operations.
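As a sketch of how this is configured (the bucket name in the CLI comment is hypothetical), the encryption configuration that the PutBucketEncryption API accepts can be built and inspected with the standard library:

```python
import json

# Sketch of the server-side encryption configuration that the S3
# PutBucketEncryption API accepts (bucket name below is hypothetical).
sse_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                # SSE-S3; use "aws:kms" plus a KMSMasterKeyID for SSE-KMS
                "SSEAlgorithm": "AES256"
            }
        }
    ]
}

# With the AWS CLI this would be passed as:
#   aws s3api put-bucket-encryption --bucket my-bucket \
#       --server-side-encryption-configuration '<this JSON>'
print(json.dumps(sse_config))
```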

Incorrect options:

RDS - You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created.

However, because you can encrypt a copy of an unencrypted snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance.

EBS - There is no direct way to encrypt an existing unencrypted volume or snapshot; you can only encrypt the data by creating a new, encrypted volume or snapshot (for example, an encrypted copy of an unencrypted snapshot). If you have enabled encryption by default, Amazon EBS encrypts the resulting new volume or snapshot using your default key for EBS encryption.

EFS - You can enable encryption of data at rest when creating an Amazon EFS file system. Once the file system is created, you cannot modify the file system to be unencrypted or vice-versa.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

Question 28

How can you enforce encryption on all the files uploaded into your example S3 bucket?

1. Using the “Default Encryption” setting in AWS S3

2. Use the following S3 bucket policy:

{ "Statement": [ { "Action": "s3:*", "Effect": "Deny", "Principal": "*", "Resource": "arn:aws:s3:::bucketname/*", "Condition": { "Bool": { "aws:SecureTransport": false } } } ] }

3. Use the following S3 bucket policy:

{ "Statement": [ { "Action": "s3:*", "Effect": "Deny", "Principal": "*", "Resource": "arn:aws:s3:::bucketname/*", "Condition": { "Bool": { "aws:SecureTransport": true } } } ] }

4. Use an encrypted CloudFront distribution in front of your S3 bucket

Correct Answer: Using the “Default Encryption” setting in AWS S3

Explanation

Correct option:

Using the “Default Encryption” setting in AWS S3

Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. You can set default encryption on a bucket so that all new objects are encrypted when they are stored in the bucket. The objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS).

When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk and decrypts it when you download the objects.
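As an aside, a bucket policy can be written to reject unencrypted uploads, but the condition has to test the x-amz-server-side-encryption header rather than aws:SecureTransport. Below is a sketch of such a deny policy (the bucket name is hypothetical), along the lines of the AWS security blog post in the references:

```python
import json

# Sketch: deny PutObject requests that do not request SSE-S3 encryption
# (bucket name is hypothetical). This is the header-based alternative to
# the SecureTransport policies shown in the options above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::bucketname/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```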

Incorrect options:

Use the following S3 bucket policy:

{ "Statement": [ { "Action": "s3:*", "Effect": "Deny", "Principal": "*", "Resource": "arn:aws:s3:::bucketname/*", "Condition": { "Bool": { "aws:SecureTransport": false } } } ] }

The above bucket policy only denies access to HTTP requests for any action on the S3 bucket bucketname. It cannot help enforce SSE-S3 encryption on S3. So it’s not the right fit for the given use-case.

Use the following S3 bucket policy:

{ "Statement": [ { "Action": "s3:*", "Effect": "Deny", "Principal": "*", "Resource": "arn:aws:s3:::bucketname/*", "Condition": { "Bool": { "aws:SecureTransport": true } } } ] }

The above bucket policy only denies access to HTTPS requests for any action on the S3 bucket bucketname. It cannot help enforce SSE-S3 encryption on S3. So it’s not the right fit for the given use-case.

Use an encrypted CloudFront distribution in front of your S3 bucket - This option is a distractor as you cannot enforce SSE-S3 encryption on S3 by using in-transit or at-rest encryption for CloudFront.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html

https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/data-protection-summary.html

Question 29

You are in the S3 console and have deleted all the files in a bucket, so the bucket appears empty.

You have tried to delete the bucket afterward and it fails with an error saying the bucket is not empty.

What’s the issue?

1. S3 is eventually consistent. Wait two minutes and retry, it will work then
2. S3 versioning is enabled and delete markers are still present in the bucket
3. An S3 bucket policy is set up and it prevents bucket deletion
4. Some files are in Glacier

Correct Answer: S3 versioning is enabled and delete markers are still present in the bucket

Explanation

Correct option:

S3 versioning is enabled and delete markers are still present in the bucket

S3 Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. When you enable versioning for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of the objects.

If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version.

If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.

So for the given use-case, delete markers are still present in the bucket, which is why you get the error saying the bucket is not empty.

Incorrect options:

S3 is eventually consistent. Wait two minutes and retry, it will work then - This is a made-up option. Amazon S3 provides strong read-after-write consistency for PUTs and DELETEs of objects in your Amazon S3 bucket in all AWS Regions.

An S3 bucket policy is set up and it prevents bucket deletion - You could set up a bucket policy to prevent bucket deletion, but it would not present an error that says the bucket is not empty.

Some files are in Glacier - You store your data in Amazon S3 Glacier as archives, which may be further grouped into vaults; Glacier files are not stored in S3 buckets. So when deleting the bucket, Glacier would not cause an error saying the bucket is not empty.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel

https://aws.amazon.com/glacier/faqs/

Question 30

After enabling S3 MFA-Delete, for which actions do you need MFA? (Select two)

1. Permanently delete an object version
2. Suspending versioning
3. Enabling Versioning
4. Listing deleted versions
5. Uploading a new object version

Correct Answers: Permanently delete an object version; Suspending versioning

Explanation

Correct options:

Permanently delete an object version

Suspending versioning

You may add another layer of security by configuring a bucket to enable MFA (multi-factor authentication) Delete, which requires additional authentication for either of the following operations:

Change the versioning state of your bucket

Permanently delete an object version

MFA Delete requires two forms of authentication together:

Your security credentials

The concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device

If a bucket’s versioning configuration is MFA Delete–enabled, the bucket owner must include the x-amz-mfa request header in requests to permanently delete an object version or change the versioning state of the bucket. Requests that include x-amz-mfa must use HTTPS.
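As a sketch, the header value is exactly that concatenation (the device serial and token code below are made up):

```python
# The x-amz-mfa header value is "<device serial> <6-digit code>".
# The serial ARN and code below are made up for illustration.
def mfa_header(serial_number: str, token_code: str) -> str:
    return f"{serial_number} {token_code}"

header = mfa_header("arn:aws:iam::123456789012:mfa/root-account-mfa-device", "123456")
print(header)

# The AWS CLI exposes the same value through the --mfa option, e.g.:
#   aws s3api delete-object --bucket my-bucket --key report.pdf \
#       --version-id <id> --mfa "<serial> <code>"
```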

Incorrect options:

Enabling Versioning - You do not need MFA to enable versioning for a bucket.

Listing deleted versions - You do not need MFA to list deleted versions.

Uploading a new object version - You do not need MFA to upload a new object version.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete

Question 31

How should MFA-Delete be enabled on an S3 bucket?

1. Using the root account and the AWS Console
2. Using the root account and the AWS CLI
3. Using an admin IAM user and the AWS Console
4. Using an admin IAM user and the AWS CLI

Correct Answer: Using the root account and the AWS CLI

Explanation

Correct option:

Using the root account and the AWS CLI

MFA Delete represents another layer of security wherein you can configure a bucket to enable MFA (multi-factor authentication) Delete, which requires additional authentication for either of the following operations:

Change the versioning state of your bucket

Permanently delete an object version

You should note that only the bucket owner (root account) can enable MFA Delete, and it can be enabled only via the AWS CLI. However, the bucket owner, the AWS account that created the bucket (root account), and all authorized IAM users can enable versioning.

via - https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete

Incorrect options:

Using the root account and the AWS Console

Using an admin IAM user and the AWS Console

Using an admin IAM user and the AWS CLI

These three options contradict the explanation above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete

Question 32

You suspect that some of your employees are trying to access files in S3 that they do not have permission to access.

How can you verify this is indeed the case without them noticing?

1. Restrict their IAM policies and look at CloudTrail logs
2. Enable S3 Access Logs and analyze them using Athena
3. Use a bucket policy
4. Use AWS Config to define compliance rules on these users

Correct Answer: Enable S3 Access Logs and analyze them using Athena

Explanation

Correct option:

Enable S3 Access Logs and analyze them using Athena

By default, Amazon Simple Storage Service (Amazon S3) doesn’t collect server access logs. When you enable logging, Amazon S3 delivers access logs for a source bucket to a target bucket that you choose. The target bucket must be in the same AWS Region as the source bucket and must not have a default retention period configuration.

Server access logging provides detailed records for the requests that are made to an S3 bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and an error code, if relevant.

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and customers pay only for the queries they run. You can use Athena to process logs, perform ad-hoc analysis, and run interactive queries.

For the given use-case, you can enable S3 access logs and then use Athena to analyze the access patterns for specific employees.
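Each server access log record is a space-delimited line. As a rough sketch (the log line below is fabricated and truncated; real records carry more trailing fields, and Athena would normally parse them with a regex-based SerDe), the fields relevant to an access audit can be extracted like this:

```python
import re

# Fabricated, truncated sample of an S3 server access log record.
record = ('79a5 bucketname [06/Feb/2019:00:00:38 +0000] 192.0.2.3 '
          'arn:aws:iam::123456789012:user/alice 3E57427F3EXAMPLE '
          'REST.GET.OBJECT secret-report.pdf "GET /secret-report.pdf HTTP/1.1" 403')

# Pull out requester, operation, key, and HTTP status -- enough to spot
# repeated 403s (Access Denied) from a specific employee.
pattern = re.compile(
    r'^\S+ (?P<bucket>\S+) \[(?P<time>[^\]]+)\] (?P<ip>\S+) (?P<requester>\S+) '
    r'\S+ (?P<operation>\S+) (?P<key>\S+) "(?P<request>[^"]+)" (?P<status>\d+)'
)
m = pattern.match(record)
print(m.group("requester"), m.group("operation"), m.group("status"))
```

Repeated 403 statuses for a particular requester ARN are exactly the signal this use-case is after.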

Incorrect options:

Restrict their IAM policies and look at CloudTrail logs - Restricting their IAM policies would deny them access to S3, which the employees would notice; the use-case requires verifying the access attempts without alerting them.

Use a bucket policy - You cannot use a bucket policy to log S3 access information.

Use AWS Config to define compliance rules on these users - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. You can use Config to answer questions such as - “What did my AWS resource look like at xyz point in time?”. You cannot use AWS Config to log S3 access information.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html

Question 33

As a service provider, you generate a daily report that you need to share with a dynamically changing list of over 10,000 customers. These reports sit in S3, and you would like to automate sharing them so that customers have on-demand access once their identity is verified.

You plan to use Cognito, API Gateway and AWS Lambda to address this use-case. On the S3 side, what should you do?

1. Provide each of your customers an AWS user and tell them to use the CLI
2. Generate pre-signed URLs for your reports
3. Create a bucket policy so that the S3 files are only accessible from CloudFront and force SSL mutual authentication there
4. Make the S3 bucket public and password protect each S3 file. Share the password with each customer

Correct Answer: Generate pre-signed URLs for your reports

Explanation

Correct option:

Generate pre-signed URLs for your reports

A presigned URL gives you access to the object identified in the URL, provided that the creator of the presigned URL has permissions to access that object.

All objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a presigned URL, using their own security credentials, to grant time-limited permission to download the objects.

When you create a presigned URL for your object, you must provide your security credentials and specify a bucket name, an object key, the HTTP method (GET to download the object), and an expiration date and time. The presigned URLs are valid only for the specified duration.

Anyone who receives the presigned URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a presigned URL.
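Under the hood a presigned URL is just a SigV4 query-string signature. In practice you would let an SDK or the CLI generate it (for example boto3's generate_presigned_url or aws s3 presign), but a standard-library sketch of the mechanics looks like this (the credentials, bucket, and key are made up, and the timestamp is pinned for reproducibility):

```python
import datetime
import hashlib
import hmac
import urllib.parse

# Made-up credentials and object; a real URL would use your own.
ACCESS_KEY, SECRET_KEY = "AKIDEXAMPLE", "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
REGION, BUCKET, KEY = "us-east-1", "reports-bucket", "daily-report.csv"

def presign_get(expires: int = 3600) -> str:
    """Sketch of SigV4 query-string presigning for a GET of one object."""
    host = f"{BUCKET}.s3.amazonaws.com"
    now = datetime.datetime(2020, 1, 1, 0, 0, 0)  # fixed for reproducibility
    amz_date, datestamp = now.strftime("%Y%m%dT%H%M%SZ"), now.strftime("%Y%m%d")
    scope = f"{datestamp}/{REGION}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{ACCESS_KEY}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),       # URL stops working after this
        "X-Amz-SignedHeaders": "host",
    }
    query = urllib.parse.urlencode(sorted(params.items()))
    canonical = f"GET\n/{KEY}\n{query}\nhost:{host}\n\nhost\nUNSIGNED-PAYLOAD"
    to_sign = (f"AWS4-HMAC-SHA256\n{amz_date}\n{scope}\n"
               f"{hashlib.sha256(canonical.encode()).hexdigest()}")
    key = f"AWS4{SECRET_KEY}".encode()
    for part in (datestamp, REGION, "s3", "aws4_request"):
        key = hmac.new(key, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(key, to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{KEY}?{query}&X-Amz-Signature={signature}"

url = presign_get()
print(url)
```

The point of the sketch is that the recipient needs nothing but the URL: the time-limited permission travels in the query string, which is why this scales to thousands of customers.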

Incorrect options:

Provide each of your customers an AWS user and tell them to use the CLI - This is not practicable considering that there are 10,000 customers.

Create a bucket policy so that the S3 files are only accessible from CloudFront and force SSL mutual authentication there - Mutual Transport Layer Security (TLS) authentication is supported for Amazon API Gateway and not for CloudFront. This is a new method for client-to-server authentication that can be used with API Gateway’s existing authorization options.

Make the S3 bucket public and password protect each S3 file. Share the password with each customer - This is a distractor as there is no way to password protect files on S3.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html

Question 34

In order to improve the read performance of the files stored in S3, you have decided to serve them through a CloudFront distribution. As part of this deployment, you would like to ensure that only CloudFront is allowed to access the S3 bucket files.

How can you achieve that?

1. Using an Origin Access Identity and a bucket policy
2. Attaching an IAM role to CloudFront and defining a bucket policy to only allow this role
3. Encrypt all your files using a KMS key that only CloudFront can access
4. Attaching a security group to S3 and CloudFront and only allow incoming traffic from CloudFront using the security group rules

Correct Answer: Using an Origin Access Identity and a bucket policy

Explanation

Correct option:

Using an Origin Access Identity and a bucket policy

To restrict access to content that you serve from Amazon S3 buckets, you need to follow these steps:

Create a special CloudFront user called an origin access identity (OAI) and associate it with your distribution.

Configure your S3 bucket permissions so that CloudFront can use the OAI to access the files in your bucket and serve them to your users. Make sure that users can’t use a direct URL to the S3 bucket to access a file there.

via - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

After you take these steps, users can only access your files through CloudFront, not directly from the S3 bucket.
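The bucket-permissions step typically takes the form of a bucket policy that grants s3:GetObject to the OAI's principal and nothing else; a sketch (the OAI ID and bucket name are made up):

```python
import json

# Sketch of a bucket policy granting read access only to a CloudFront
# origin access identity (the OAI ID and bucket name are made up).
oai_id = "E1EXAMPLE2OAI"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucketname/*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Because the only Allow principal is the OAI, direct URLs to the bucket fall back to the default deny.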

Incorrect options:

Attaching an IAM role to CloudFront and defining a bucket policy to only allow this role - This is a distractor as you cannot associate an IAM role to CloudFront.

Encrypt all your files using a KMS key that only CloudFront can access - Although you could enable SSE-KMS on S3 and serve the content through CloudFront using Lambda@Edge, this solution does not address the given use-case. You can ensure that only CloudFront is allowed to access the S3 bucket files by using an Origin Access Identity and a bucket policy.

via - https://aws.amazon.com/blogs/networking-and-content-delivery/serving-sse-kms-encrypted-content-from-s3-using-cloudfront/

Attaching a security group to S3 and CloudFront and only allow incoming traffic from CloudFront using the security group rules - This is a distractor as you cannot attach a security group to S3 or CloudFront.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

https://aws.amazon.com/blogs/networking-and-content-delivery/serving-sse-kms-encrypted-content-from-s3-using-cloudfront/

Question 35

You distribute a monthly raw data extract of your public forum’s discussions that is about 10 TB each month. Currently, the archive is distributed from an EFS file system that is mounted on all your EC2 instances, and customers retrieve the file through your load balancer. This solution is costing you a lot of money and forces you to scale tremendously on the 1st of each month, when everyone tries to retrieve the file at the same time.

What can you do to improve the situation?

1. Enable static file caching on the ALB
2. Store the files in S3 and distribute them using a CloudFront distribution instead
3. Store the files on instance stores instead, so you don’t need to use EFS anymore
4. Enable enhanced networking between EC2 and ALB

Correct Answer: Store the files in S3 and distribute them using a CloudFront distribution instead

Explanation

Correct option:

Store the files in S3 and distribute them using a CloudFront distribution instead

S3 is more cost-effective than EFS. For example, per GB storage cost for S3 is $0.023/month whereas per GB storage cost for EFS is $0.3/month. Further, storing your static content with S3 provides a lot of advantages. But to help optimize your application’s performance and security while effectively managing cost, AWS recommends that you also set up Amazon CloudFront to work with your S3 bucket to serve and protect the content. CloudFront is a content delivery network (CDN) service that delivers static and dynamic web content, video streams, and APIs around the world, securely and at scale. By design, delivering data out of CloudFront can be more cost-effective than delivering it from S3 directly to your users.

Incorrect options:

Enable static file caching on the ALB - This is a distractor as there is no such thing as static file caching on the ALB.

Store the files on instance stores instead, so you don’t need to use EFS anymore - You cannot use Instance Stores since they are physically attached to their own EC2 instances. Instance Store is not a shared storage like EFS, so this option is ruled out.

Enable enhanced networking between EC2 and ALB - Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported EC2 instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. There is no such thing as enhanced networking between EC2 and ALB.

References:

https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html

Question 36

Your website is hosted on S3 and exposed through a CloudFront distribution and some users are said to experience a lot of 501 errors.

How can you analyze these errors and come up with a solution?

1. Analyze the CloudFront access logs using Athena
2. Analyze the CloudFront access logs using Inspector
3. Enable S3 access logs and analyze using Athena
4. Enable S3 access logs and analyze using Inspector

Correct Answer: Analyze the CloudFront access logs using Athena

Explanation

Correct option:

Analyze the CloudFront access logs using Athena

You can configure CloudFront to create log files that contain detailed information about every user request that CloudFront receives. These are called standard logs, also known as access logs. These standard logs are available for both web and RTMP distributions. If you enable standard logs, you can also specify the Amazon S3 bucket that you want CloudFront to save files in.

via - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html

AWS recommends that you use the logs to understand the nature of the requests for your content, not as a complete accounting of all requests. CloudFront delivers access logs on a best-effort basis. The log entry for a particular request might be delivered long after the request was actually processed and, in rare cases, a log entry might not be delivered at all. When a log entry is omitted from access logs, the number of entries in the access logs won’t match the usage that appears in the AWS usage and billing reports.

Incorrect options:

Analyze the CloudFront access logs using Inspector

Enable S3 access logs and analyze using Inspector

Amazon Inspector is an automated security assessment service that helps you test the network accessibility of your Amazon EC2 instances and the security state of your applications running on the instances.

Inspector cannot be used to analyze CloudFront access logs or S3 access logs, so both these options are incorrect.

Enable S3 access logs and analyze using Athena - The S3 access logs will not provide details about the user IP and other crucial information, as the requests are proxied through CloudFront. Additionally, results are cached in CloudFront and the S3 access logs won’t contain a lot of information, so this option is incorrect.

Reference:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html

Question 37

You host a forum for law questions and, per your country’s law, you must store weekly archives of all conversations (about 1 TB each) for 7 years. These archives must not be tampered with in any way, and you must prove you have set up enough controls around your data protection.

What should you do?

1. Store the archives in S3 and set up a bucket policy, enable versioning and MFA-Delete
2. Store the archives in Glacier and set up a Vault Lock Policy for WORM access
3. Store the archives in EBS and use Linux file system protection on the files
4. Store the archives in AWS Artifact and enable compliance monitoring

Correct Answer: Store the archives in Glacier and set up a Vault Lock Policy for WORM access

Explanation

Correct option:

Store the archives in Glacier and set up a Vault Lock Policy for WORM access

You store your data in Amazon S3 Glacier as archives. Archives may be further grouped into vaults.

S3 Glacier Vault Lock allows you to easily deploy and enforce compliance controls for individual S3 Glacier vaults with a vault lock policy. You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits. Once locked, the policy can no longer be changed.

For the given use-case, as you need to ensure that the archives are not tampered, so you need to store the archives in Glacier and set up a Vault Lock Policy for WORM access.

via - https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-lock.html
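A vault lock policy enforcing WORM for the 7-year retention could be sketched as follows (the account ID and vault name are made up; 2,555 days is roughly 7 years):

```python
import json

# Sketch of a Glacier vault lock policy: deny deleting any archive
# younger than 7 years (account ID and vault name are made up).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "deny-deletes-for-7-years",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/forum-archives",
            "Condition": {
                "NumericLessThan": {"glacier:ArchiveAgeInDays": "2555"}
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Once this policy is locked, not even the account root can shorten the retention, which is what makes it usable as evidence of compliance controls.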

Incorrect options:

Store the archives in S3 and set up a bucket policy, enable versioning and MFA-Delete - Even if you store the archives in a versioned S3 bucket, someone could overwrite an archive and create a new version of it, so this option does not strictly meet the requirement that the data cannot be tampered with. MFA-Delete would only protect against permanent deletion of an object version.

Store the archives in EBS and use Linux file system protection on the files - Linux file system protection would not be able to enforce compliance controls for the archives in EBS.

Store the archives in AWS Artifact and enable compliance monitoring - AWS Artifact is a self-service audit artifact retrieval portal that provides our customers with on-demand access to AWS’ compliance documentation and AWS agreements. You cannot use AWS Artifact to enforce compliance controls for the archives.

Reference:

https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-lock.html

Question 38

Your data center generates tens of terabytes of data daily and has a cumulative historic data volume of 5 PB. The data center is running short of both the storage and the bandwidth infrastructure needed to store or transfer this data. Later you would like to analyze this data using Redshift or Athena; however, you must first clean it using a proprietary process running on EC2.

What’s the optimal way of moving this data to the cloud?

1. Use S3 transfer acceleration
2. Use Snowball Edge
3. Use Volume Gateway
4. Use AWS Data Migration

Correct Answer: Use Snowball Edge

Explanation

Correct option:

Use Snowball Edge

AWS Snowball, a part of the AWS Snow Family, is a data migration and edge computing device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large scale data transfer. Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments.

Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases.

For the given use-case, you can use multiple Snowball Edge devices to migrate the entire data to AWS Cloud.
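A back-of-the-envelope estimate of the device count (ignoring the data that keeps arriving during the transfer, and using decimal units for simplicity):

```python
import math

# Rough device-count estimate: 5 PB of historic data across 80 TB-usable
# Snowball Edge Storage Optimized devices (decimal units for simplicity).
historic_tb = 5 * 1000          # 5 PB expressed in TB
usable_per_device_tb = 80
devices = math.ceil(historic_tb / usable_per_device_tb)
print(devices)  # 63 devices, plus more for the tens of TB generated daily
```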

Exam Alert:

The original Snowball devices were transitioned out of service, and Snowball Edge Storage Optimized devices are now the primary devices used for data transfer. You may still see the Snowball device on the exam; just remember that the original Snowball device had 80 TB of storage space.

Incorrect options:

Use S3 transfer acceleration - Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. As the data center does not have sufficient bandwidth infrastructure, this option is ruled out.

Use Volume Gateway - You can configure the AWS Storage Gateway service as a Volume Gateway to present cloud-based iSCSI block storage volumes to your on-premises applications. The Volume Gateway provides either a local cache or full volumes on-premises while also storing full copies of your volumes in the AWS cloud. Volume Gateway also provides Amazon EBS Snapshots of your data for backup, disaster recovery, and migration. It’s easy to get started with the Volume Gateway: Deploy it as a virtual machine or hardware appliance, give it local disk resources, connect it to your applications, and start using your hybrid cloud storage for block data. As the data center does not have sufficient bandwidth infrastructure, this option is ruled out.

How Storage Gateway Works: via - https://aws.amazon.com/storagegateway/

Use AWS Data Migration - AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases. It is meant for database migration, not bulk file transfer, and the data center does not have sufficient bandwidth infrastructure, so this option is ruled out.

References:

https://aws.amazon.com/snowball/

https://aws.amazon.com/dms/

https://aws.amazon.com/storagegateway/

Question 39

You have tape backup processes and you would like to start migrating to the cloud to leverage the S3 storage capacity while keeping the same processes and iSCSI-compatible backup software you purchased a 10-year license for.

What do you recommend your company should be using?

1. File Gateway
2. Volume Gateway
3. Tape Gateway
4. Snowball

Correct Answer: Tape Gateway

Explanation

Correct option:

Tape Gateway

Tape Gateway enables you to replace using physical tapes on-premises with virtual tapes in AWS without changing existing backup workflows. Tape Gateway supports all leading backup applications and caches virtual tapes on-premises for low-latency data access. Tape Gateway encrypts data between the gateway and AWS for secure data transfer and compresses data and transitions virtual tapes between Amazon S3 and Amazon S3 Glacier, or Amazon S3 Glacier Deep Archive, to minimize storage costs.

How Storage Gateway Works: via - https://aws.amazon.com/storagegateway/

How Tape Gateway Works: via - https://aws.amazon.com/storagegateway/vtl/

Incorrect options:

File Gateway - File Gateway provides a seamless way to connect to the cloud in order to store application data files and backup images as durable objects in Amazon S3 cloud storage. File Gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. It can be used for on-premises applications, and for Amazon EC2-based applications that need file protocol access to S3 object storage.

File Gateway cannot be used to facilitate tape backup processes.

Volume Gateway - You can configure the AWS Storage Gateway service as a Volume Gateway to present cloud-based iSCSI block storage volumes to your on-premises applications. The Volume Gateway provides either a local cache or full volumes on-premises while also storing full copies of your volumes in the AWS cloud. Volume Gateway also provides Amazon EBS Snapshots of your data for backup, disaster recovery, and migration. It’s easy to get started with the Volume Gateway: Deploy it as a virtual machine or hardware appliance, give it local disk resources, connect it to your applications, and start using your hybrid cloud storage for block data.

Volume Gateway cannot be used to facilitate tape backup processes.

Snowball - AWS Snowball, a part of the AWS Snow Family, is a data migration and edge computing device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large scale data transfer. Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments.

Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases.

Snowball cannot be used to facilitate tape backup processes.

References:

https://aws.amazon.com/storagegateway/vtl/

https://aws.amazon.com/storagegateway/

https://aws.amazon.com/snowball/

Question 40

You would like to replace your on-premises NFS v3 drive with something that will leverage the huge capacity of Amazon S3. You would like to ensure files that are commonly used are locally cached on-premises.

What should you use?

1. EFS

2. File Gateway

3. EBS Drives

4. Volume Gateway

Correct Answer: 2 (File Gateway)

Explanation

Correct option:

File Gateway - File Gateway provides a seamless way to connect to the cloud in order to store application data files and backup images as durable objects in Amazon S3 cloud storage. File Gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. It can be used for on-premises applications, and for Amazon EC2-based applications that need file protocol access to S3 object storage.

How Storage Gateway Works: via - https://aws.amazon.com/storagegateway/

How File Gateway Works: via - https://aws.amazon.com/storagegateway/file/

Incorrect options:

EFS - Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon S3 is an object storage service. EFS cannot leverage S3 for storage.

EBS Drives - Amazon Elastic Block Store (EBS) is an easy to use, high-performance, block-storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. EBS cannot leverage S3 for storage.

Volume Gateway - You can configure the AWS Storage Gateway service as a Volume Gateway to present cloud-based iSCSI block storage volumes to your on-premises applications. The Volume Gateway provides either a local cache or full volumes on-premises while also storing full copies of your volumes in the AWS cloud. Volume Gateway also provides Amazon EBS Snapshots of your data for backup, disaster recovery, and migration. It’s easy to get started with the Volume Gateway: Deploy it as a virtual machine or hardware appliance, give it local disk resources, connect it to your applications, and start using your hybrid cloud storage for block data. Since Volume Gateway presents cloud-backed iSCSI block storage, it cannot be used to replace an on-premises NFS v3 drive, so this option is incorrect.

How Volume Gateway Works: via - https://aws.amazon.com/storagegateway/volume/

References:

https://aws.amazon.com/storagegateway/file/

https://aws.amazon.com/storagegateway/volume/

https://aws.amazon.com/storagegateway/

Question 41

You just released a new mobile game and users have the chance to interact with each other. In order to publish a profile picture, your company has made the architectural decision to have users directly upload their images into a designated S3 bucket.

How can you provide write access to the mobile application users effectively?

1. Create an AWS Lambda function that will create an IAM User for each new user, and store their API keys in the mobile app database

2. Create one IAM user and publish the access keys as part of the mobile application

3. Federate the users with SAML so they can use Single Sign-On (SSO) to access S3

4. Federate the users with Cognito so they can assume a role to access S3

Correct Answer: 4 (Federate the users with Cognito so they can assume a role to access S3)

Explanation

Correct option:

Federate the users with Cognito so they can assume a role to access S3

Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0. Application-specific user authentication can be provided via a Cognito User Pool and then users can access AWS services such as S3 using a Cognito Identity Pool. Here Cognito is the best technology choice for managing mobile user accounts.

via - https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-integrating-user-pools-with-identity-pools.html

Amazon Cognito Features: via - https://aws.amazon.com/cognito/details/

Exam Alert:

Please review the following note to understand the differences between Cognito User Pools and Cognito Identity Pools: via - https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

Incorrect options:

Create an AWS Lambda function that will create an IAM User for each new user, and store their API keys in the mobile app database - Creating an IAM user for each new user of the mobile app is not practical, so this option is ruled out.

Create one IAM user and publish the access keys as part of the mobile application - This is bad security practice. You should never distribute IAM user access keys inside a client application. The best solution is to use a Cognito user pool for authentication and then access AWS services using an identity pool.

Federate the users with SAML so they can use Single Sign-On (SSO) to access S3 - The scenario does not mention that the users belong to a specific organization, therefore you cannot use SAML to facilitate SSO to access S3.

via - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html

References:

https://aws.amazon.com/cognito/details/

https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html

https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-integrating-user-pools-with-identity-pools.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html

Question 42

Your home-cooking website stores its recipes and comments from users in a Multi-AZ RDS database, which is located in a private subnet. As of yesterday, it seems that your users are unable to access the website and see an error message “512 - Cannot connect to the database”.

What could be the reason why the website cannot connect to the database anymore? (Select three)

1. DB Security Group inbound rules have changed

2. Network ACL inbound rules have changed

3. Security Group outbound rules have changed

4. Network ACL outbound rules have changed

5. The primary database’s private IP has changed

6. A read replica has been created recently

Correct Answers: 1, 2, 4 (DB Security Group inbound rules have changed; Network ACL inbound rules have changed; Network ACL outbound rules have changed)

Explanation

Correct options:

DB Security Group inbound rules have changed

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you can specify one or more security groups; otherwise, we use the default security group. You can add rules to each security group that allows traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. When we decide whether to allow traffic to reach an instance, we evaluate all the rules from all the security groups that are associated with the instance.

The following are the characteristics of security group rules:

By default, security groups allow all outbound traffic.

Security group rules are always permissive; you can’t create rules that deny access.

Security groups are stateful: responses to allowed inbound traffic are automatically allowed to flow out, regardless of outbound rules.

For the given use-case, if the DB Security Group inbound rules have changed, then the website may not be able to connect to the database.
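The allow-only, any-rule-matches behavior of security groups can be sketched as a toy Python model (the rule shape and names are illustrative, not an AWS API):

```python
# Toy model of security-group evaluation (illustrative only, not an AWS API).
# Rules are allow-only; traffic is permitted if ANY rule across all attached
# groups matches. There is no way to express a deny rule.

def sg_allows(rules, port, source):
    """Return True if any allow rule matches the port and source."""
    return any(r["port"] == port and r["source"] in (source, "0.0.0.0/0")
               for r in rules)

db_sg = [{"port": 3306, "source": "10.0.1.0/24"}]  # allow MySQL from web tier

print(sg_allows(db_sg, 3306, "10.0.1.0/24"))    # True: inbound rule matches
print(sg_allows(db_sg, 3306, "192.168.0.0/16"))  # False: no rule matches
```

If an inbound rule like the one above is removed or narrowed, the website's connections to the database simply stop matching and are dropped.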

Network ACL inbound rules have changed

Network ACL outbound rules have changed

A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

The following are the basic things that you need to know about network ACLs:

The default NACL allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.

You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.

Each subnet in your VPC must be associated with a network ACL. If you don’t explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.

You can associate a network ACL with multiple subnets. However, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed.

A network ACL contains a numbered list of rules. AWS evaluates the rules in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. AWS recommends that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.

A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.

Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).

For the given use-case, if the inbound or the outbound NACL rules have changed, then the website may not be able to connect to the database.
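The numbered-rule evaluation described above can be sketched as a toy Python model (rule shapes are illustrative, not an AWS API):

```python
# Toy model of network ACL evaluation (illustrative only, not an AWS API).
# Rules are evaluated in ascending rule-number order; the first match decides
# allow or deny. Unlike security groups, NACLs are stateless, so return
# traffic must be explicitly allowed by the rules in the other direction.

def nacl_decision(rules, port):
    """Return the action of the lowest-numbered matching rule ('deny' if none)."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        if rule["port"] == port or rule["port"] == "*":
            return rule["action"]
    return "deny"  # implicit deny if nothing matches

inbound = [
    {"number": 100, "port": 3306, "action": "allow"},
    {"number": 200, "port": "*", "action": "deny"},
]

print(nacl_decision(inbound, 3306))  # 'allow' (rule 100 wins)
print(nacl_decision(inbound, 22))    # 'deny'  (falls through to rule 200)
```

Because both directions are evaluated independently, changing either the inbound or the outbound rule set can break an otherwise working database connection.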

Incorrect options:

Security Group outbound rules have changed - Because security groups are stateful, replies to allowed inbound connections are permitted regardless of any outbound rules, so this option is incorrect.

The primary database’s private IP has changed - The private IP associated with an RDS database can change due to things such as multi-AZ failover, so you should use the DNS endpoint to connect to the database. Even if the private IP of the database changes, the DNS endpoint would not change on its own. So this option is incorrect.

A read replica has been created recently - Adding a read replica does not change the primary database’s DNS name, so this option is ruled out.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

Question 43

The Big Data team at an insurance company is performing a nightly ETL on top of your production RDS database to compute a view and then extract it into their data lake in Amazon S3. This query has been performing reasonably well in your website’s infancy but now that it has grown in popularity, the query is running for a much longer period and affects the user experience while they browse your website.

How can you improve the situation in the short and long term?

1. Enable RDS Multi-AZ

2. Create an RDS Read Replica for the ETL team

3. Upgrade the RDS instance type

4. Use Athena to query RDS

Correct Answer: 2 (Create an RDS Read Replica for the ETL team)

Explanation

Correct option:

Create an RDS Read Replica for the ETL team

Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engines’ native asynchronous replication to update the read replica whenever there is a change to the source DB instance.

For the given use-case, you can use one or more read replicas for the given source DB instance as the source for the ETL process to populate the data lake on S3.

via - https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
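To make the offloading concrete, here is a toy Python sketch (not an AWS API; class names are illustrative) of a primary instance with an asynchronous read replica serving the ETL reads:

```python
# Toy sketch (not an AWS API) of why a read replica helps here: writes go to
# the primary, the replica applies them asynchronously, and the heavy ETL
# reads hit the replica so they no longer compete with website traffic.

class Primary:
    def __init__(self):
        self.rows, self.replicas = [], []
    def write(self, row):
        self.rows.append(row)
        for r in self.replicas:          # engine-native async replication
            r.apply(row)

class ReadReplica:
    def __init__(self, primary):
        self.rows = list(primary.rows)   # seeded from a snapshot of the source
        primary.replicas.append(self)
    def apply(self, row):
        self.rows.append(row)
    def etl_extract(self):
        return list(self.rows)           # ETL reads served off the replica

primary = Primary()
replica = ReadReplica(primary)
primary.write({"policy": 1})
print(replica.etl_extract())  # [{'policy': 1}] - read never touches the primary
```

The website keeps writing to the primary while the nightly ETL extracts from the replica, so the long-running query no longer affects user experience.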

Incorrect options:

Enable RDS Multi-AZ - Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). You cannot use Multi-AZ to improve the ETL process as it cannot use the standby instance as a source for the ETL process.

Exam Alert:

Please review the key differences between Read Replicas and Multi-AZ: via - https://aws.amazon.com/rds/features/multi-az/

Upgrade the RDS instance type - Upgrading the RDS instance type may help a little, but the problem will resurface as traffic increases further. A better solution is to use a Read Replica as the source for the ETL process to populate the data lake on S3.

Use Athena to query RDS - Although Athena can query data from RDS using its federated query feature, the problem would persist as the entire ETL load would still fall on the main database. A better solution is to use a Read Replica as the source for the ETL process to populate the data lake on S3.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html

https://aws.amazon.com/rds/features/multi-az/

https://aws.amazon.com/blogs/big-data/query-any-data-source-with-amazon-athenas-new-federated-query/

Question 44

Your RDS database can sometimes become unresponsive and fail health checks, and you need your application to fail over automatically and safely without losing any committed transactions.

Which option would you choose?

1. Create an RDS read replica in the same region and an AWS lambda function to promote that replica as the main database when the main RDS database is down

2. Enable RDS Multi-AZ

3. Create an RDS read replica in a different region and an AWS lambda function to promote that replica as the main database when the main RDS database is down

4. Setup a CloudWatch alarm for DB RAM going over 90% and reboot the database then

Correct Answer: 2 (Enable RDS Multi-AZ)

Explanation

Correct option:

Enable RDS Multi-AZ

RDS provides high availability and failover support for DB instances using Multi-AZ deployments. In a Multi-AZ deployment, RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.

The failover happens only in the following conditions:

The primary DB instance fails

An Availability Zone outage

The DB instance server type is changed

The operating system of the DB instance is undergoing software patching.

A manual failover of the DB instance can be initiated using Reboot with failover.

via - https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
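The key property, that the DNS endpoint survives the failover, can be illustrated with a toy Python model (class and host names are hypothetical, not an AWS API):

```python
# Toy illustration (not an AWS API): Multi-AZ failover promotes the standby
# behind the SAME DNS endpoint, so the application connection string never
# changes even though the underlying instance (and its private IP) does.

class MultiAZDatabase:
    def __init__(self, endpoint, primary, standby):
        self.endpoint = endpoint                      # stable DNS name used by the app
        self.primary, self.standby = primary, standby

    def failover(self):
        # RDS flips the endpoint to the standby; committed data is already
        # there because Multi-AZ replication is synchronous.
        self.primary, self.standby = self.standby, self.primary

db = MultiAZDatabase("mydb.example.us-east-1.rds.amazonaws.com",
                     primary="az-1a-host", standby="az-1b-host")
before = db.endpoint
db.failover()
print(db.endpoint == before)  # True: the app keeps connecting to the same name
print(db.primary)             # az-1b-host now serves writes
```

This is why no Lambda-based promotion logic is needed: the application keeps using one endpoint and RDS handles the switch.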

Incorrect options:

Create an RDS read replica in the same region and an AWS lambda function to promote that replica as the main database when the main RDS database is down

Create an RDS read replica in a different region and an AWS lambda function to promote that replica as the main database when the main RDS database is down

RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Whether you create the Read Replica in the same AWS Region as the primary DB or in a different one, you cannot use it to build a high-availability solution that transparently fails over to a standby while keeping the same DB endpoint. Using AWS Lambda to promote the Read Replica as the main database when the primary RDS database is down would cause the DB endpoint to change.

Therefore, both of these options are incorrect.

Exam Alert:

Please review the key differences between Read Replicas and Multi-AZ: via - https://aws.amazon.com/rds/features/multi-az/

Setup a CloudWatch alarm for DB RAM going over 90% and reboot the database then - This is akin to applying a band-aid. Because the root cause persists, you will face the same DB performance issues again as soon as the instance is back up and running.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

https://aws.amazon.com/rds/features/multi-az/

Question 45

You have a production Postgres RDS database, and a custom rule set up in AWS Config shows that some connections established to your database are not encrypted.

How can you ensure all connections to RDS are encrypted?

1. Edit the security group rules

2. Review the DB parameter groups

3. Enable SSL connections from the RDS Console

4. Patch the database with the SSL/TLS Postgres Addon

Correct Answer: 2 (Review the DB parameter groups)

Explanation

Correct option:

Review the DB parameter groups

You can allow only SSL connections to your RDS for PostgreSQL database instance by enabling the rds.force_ssl parameter (“0” by default) through the parameter groups page on the RDS Console or through the CLI.

via - https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.SSL
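The effect of the parameter can be illustrated with a toy Python check (illustrative only, not how RDS is implemented):

```python
# Toy model (not an AWS API) of the rds.force_ssl parameter: when set to 1,
# the PostgreSQL instance rejects any connection that was not negotiated
# over SSL, which is exactly what the AWS Config rule is checking for.

def connection_accepted(uses_ssl, force_ssl):
    """force_ssl mirrors the rds.force_ssl parameter (0 by default)."""
    return uses_ssl or force_ssl == 0

print(connection_accepted(uses_ssl=False, force_ssl=0))  # True: default allows plaintext
print(connection_accepted(uses_ssl=False, force_ssl=1))  # False: rejected
print(connection_accepted(uses_ssl=True, force_ssl=1))   # True: SSL required and used
```

Flipping rds.force_ssl to 1 in the DB parameter group is therefore the change that makes the AWS Config rule pass.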

Incorrect options:

Edit the security group rules - Security groups can be used to allow connections from selected sources based on inbound rules. However, you need to use parameter groups to enforce SSL.

Enable SSL connections from the RDS Console - This is a made-up option and has been added as a distractor.

Patch the database with the SSL/TLS Postgres Addon - You cannot install patches on RDS databases as these are completely managed by AWS, so this option is incorrect.

Reference:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.SSL

Question 46

A retail company has realized that their Amazon EBS volume backed EC2 instance is consistently over-utilized and needs an upgrade. A developer has connected with you to understand the key parameters to be considered when changing the instance type.

As a SysOps Administrator, which of the following would you identify as correct regarding the instance types for the given use-case? (Select three)

1. Resizing of an instance is only possible if the root device for your instance is an EBS volume

2. The new instance retains its public, private IPv4 addresses, any Elastic IP addresses, and any IPv6 addresses that were associated with the old instance

3. You must stop your Amazon EBS–backed instance before you can change its instance type. AWS moves the instance to new hardware; however, the instance ID does not change

4. If your instance is in an Auto Scaling group, the Amazon EC2 Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance

5. There is no downtime on the instance if you choose an instance of a compatible type since AWS starts the new instance and shifts the applications from the current instance

6. Resizing of an instance is possible if the root device is either an EBS volume or an instance store volume. However, instance store volumes take longer to start on the new instance, since cache data is lost on these instances

Correct Answers: 1, 3, 4

Explanation

Correct options:

Resizing of an instance is only possible if the root device for your instance is an EBS volume - If the root device for your instance is an EBS volume, you can change the size of the instance simply by changing its instance type, which is known as resizing it. If the root device for your instance is an instance store volume, you must migrate your application to a new instance with the instance type that you need.

You must stop your Amazon EBS–backed instance before you can change its instance type. AWS moves the instance to new hardware; however, the instance ID does not change - You must stop your Amazon EBS–backed instance before you can change its instance type. When you stop and start an instance, AWS moves the instance to new hardware; however, the instance ID does not change.

If your instance is in an Auto Scaling group, the Amazon EC2 Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance - If your instance is in an Auto Scaling group, the Amazon EC2 Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance. To prevent this, you can suspend the scaling processes for the group while you’re resizing your instance.
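The stop, resize, start sequence and the stable instance ID can be sketched as a toy Python state machine (illustrative only, not an AWS API):

```python
# Toy state machine (not an AWS API) for resizing an EBS-backed instance:
# the instance must be stopped first, the type change moves it to new
# hardware, and the instance ID survives the whole operation.

class Ec2Instance:
    def __init__(self, instance_id, instance_type):
        self.id, self.type, self.state = instance_id, instance_type, "running"
    def stop(self):
        self.state = "stopped"
    def change_type(self, new_type):
        if self.state != "stopped":
            raise RuntimeError("stop the EBS-backed instance before resizing")
        self.type = new_type        # moved to new hardware, same instance ID
    def start(self):
        self.state = "running"

i = Ec2Instance("i-0abc123", "t3.medium")
i.stop()
i.change_type("m5.large")
i.start()
print(i.id, i.type)  # i-0abc123 m5.large - ID unchanged after resize
```

Attempting the type change while the instance is still running fails, which mirrors the requirement to stop the instance first.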

Incorrect options:

The new instance retains its public, private IPv4 addresses, any Elastic IP addresses, and any IPv6 addresses that were associated with the old instance - If your instance has a public IPv4 address, AWS releases the address and gives it a new public IPv4 address. The instance retains its private IPv4 addresses, any Elastic IP addresses, and any IPv6 addresses.

There is no downtime on the instance if you choose an instance of a compatible type since AWS starts the new instance and shifts the applications from current instance - AWS suggests that you plan for downtime while your instance is stopped. Stopping and resizing an instance may take a few minutes, and restarting your instance may take a variable amount of time depending on your application’s startup scripts.

Resizing of an instance is possible if the root device is either an EBS volume or an instance store volume. However, instance store volumes take longer to start on the new instance, since cache data is lost on these instances - As discussed above, resizing an instance is possible only if the root device for the instance is an EBS volume.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html

Question 47

A large IT company manages several projects on AWS Cloud and has decided to use AWS X-Ray to trace application workflows. The company uses a plethora of AWS services like API Gateway, Amazon EC2 instances, Amazon S3 storage service, Elastic Load Balancers and AWS Lambda functions.

Which of the following should the company keep in mind while using AWS X-Ray for the AWS services they use?

1. Application Load Balancers do not send data to X-Ray

2. AWS X-Ray does not integrate with Amazon S3 and you need to use CloudTrail for tracking requests on S3

3. AWS X-Ray cannot be used to trace your AWS Lambda functions since they are not integrated

4. You cannot use X-Ray to trace or analyze user requests to your Amazon API Gateway APIs

Correct Answer: 1 (Application Load Balancers do not send data to X-Ray)

Explanation

Correct option:

Application Load Balancers do not send data to X-Ray - Elastic Load Balancing Application Load Balancers add a trace ID to incoming HTTP requests in a header named X-Amzn-Trace-Id. Load balancers do not send data to X-Ray and do not appear as a node on your service map.
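The trace header's Root field has the form Root=1-(8 hex digits of the request epoch)-(24 hex digits of a random ID), and may carry further ';'-separated fields. A minimal stdlib parser can be sketched as follows (the sample header value below is made up):

```python
# Minimal parser for the X-Amzn-Trace-Id header that ALBs add to requests.
# Sketch only: field names other than Root are passed through untouched.

def parse_trace_header(value):
    fields = dict(part.split("=", 1) for part in value.split(";") if "=" in part)
    version, epoch_hex, unique = fields["Root"].split("-")
    return {"version": version,
            "epoch": int(epoch_hex, 16),   # request start time (Unix seconds)
            "unique_id": unique,
            **{k: v for k, v in fields.items() if k != "Root"}}

hdr = "Root=1-67891233-abcdef012345678912345678"  # made-up sample value
parsed = parse_trace_header(hdr)
print(parsed["version"])    # '1'
print(parsed["epoch"])      # 1737036339
print(parsed["unique_id"])  # 'abcdef012345678912345678'
```

Downstream services can log or forward this header so requests remain correlatable even though the load balancer itself never appears as an X-Ray node.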

Incorrect options:

AWS X-Ray does not integrate with Amazon S3 and you need to use CloudTrail for tracking requests on S3 - AWS X-Ray integrates with Amazon S3 to trace upstream requests to update your application’s S3 buckets.

AWS X-Ray cannot be used to trace your AWS Lambda functions since they are not integrated - You can use AWS X-Ray to trace your AWS Lambda functions. Lambda runs the X-Ray daemon and records a segment with details about the function invocation and execution.

You cannot use X-Ray to trace or analyze user requests to your Amazon API Gateway APIs - You can use X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services. API Gateway supports X-Ray tracing for all API Gateway endpoint types: Regional, edge-optimized, and private. You can use X-Ray with Amazon API Gateway in all AWS Regions where X-Ray is available.

Reference:

https://docs.aws.amazon.com/xray/latest/devguide/xray-services-elb.html

Question 48

A systems administrator has configured Amazon EC2 instances in an Auto Scaling Group (ASG) for two separate development teams. However, only one group of instances has the CloudWatch agent installed, while the other does not. The administrator has not manually installed the agent on either group of instances.

Which of the following would you identify as a root cause behind this issue?

1. CloudWatch agent can be configured to be loaded on the EC2 instances while configuring the ASG. The developer could have unintentionally checked this flag on one of the ASGs he created

2. The architecture of the InstanceType mentioned in your launch configuration does not match the image architecture. So, the ASG was created with errors, resulting in skipping CloudWatch agent. A thorough check is needed for such ASGs, more services could have been skipped

3. If your AMI contains a CloudWatch agent, it’s automatically installed on EC2 instances when you create an EC2 Auto Scaling group. The developer needs to choose the AMI that has CloudWatch agent pre-configured on it

4. The instance architecture might not have been compatible with the AMI chosen. The incompatibility results in various errors, one of which is, some of the AWS services will not be installed as expected

Correct Answer: 3 (If your AMI contains a CloudWatch agent, it’s automatically installed on EC2 instances when you create an EC2 Auto Scaling group)

Explanation

Correct option:

If your AMI contains a CloudWatch agent, it’s automatically installed on EC2 instances when you create an EC2 Auto Scaling group. The developer needs to choose the AMI that has CloudWatch agent pre-configured on it

If your AMI contains a CloudWatch agent, it’s automatically installed on EC2 instances when you create an EC2 Auto Scaling group. With the stock Amazon Linux AMI, you need to install it yourself (AWS recommends installing it via yum).

Incorrect options:

CloudWatch agent can be configured to be loaded on the EC2 instances while configuring the ASG. The developer could have unintentionally checked this flag on one of the ASGs he created - This is incorrect and added only as a distractor.

The architecture of the InstanceType mentioned in your launch configuration does not match the image architecture. So, the ASG was created with errors, resulting in skipping CloudWatch agent. A thorough check is needed for such ASGs, more services could have been skipped - This is incorrect. Either the ASG is created successfully or it fails completely; partial installation of services does not take place.

The instance architecture might not have been compatible with the AMI chosen. The incompatibility results in various errors, one of which is, some of the AWS services will not be installed as expected - If there are compatibility issues, the ASG will not be able to spin up instances and will throw an error that explains the incompatibility.

Reference:

https://aws.amazon.com/ec2/autoscaling/faqs/

Question 49

An automobile company manages its AWS resource creation and maintenance process through AWS CloudFormation. The company has successfully used CloudFormation so far, and wishes to continue using the service. However, while moving to CloudFormation, the company only moved critical resources and left the other resources to be managed manually. To leverage the ease of creation and maintenance that CloudFormation offers, the company wants to move the rest of the resources to CloudFormation.

Which of the following options is the recommended way to configure this requirement?

1. Use Parameters section of CloudFormation template to input the required resources

2. You can bring an existing resource into AWS CloudFormation management using resource import

3. You can use Mappings part of CloudFormation template to input the needed resources

4. Drift detection is the mechanism by which you add resources to the stack of Cloudformation resources already created

Correct Answer: 2 (You can bring an existing resource into AWS CloudFormation management using resource import)

Explanation

Correct option:

You can bring an existing resource into AWS CloudFormation management using resource import

If you created an AWS resource outside of AWS CloudFormation management, you can bring this existing resource into AWS CloudFormation management using resource import. You can manage your resources using AWS CloudFormation regardless of where they were created without having to delete and re-create them as part of a stack.

During an import operation, you create a change set that imports your existing resources into a stack or creates a new stack from your existing resources. You provide the following during import.

A template that describes the entire stack, including both the original stack resources and the resources you're importing. Each resource to import must have a DeletionPolicy attribute.

Identifiers for the resources to import. You provide two values to identify each target resource.

a) An identifier property. This is a resource property that can be used to identify each resource type. For example, an AWS::S3::Bucket resource can be identified using its BucketName.

b) An identifier value. This is the target resource’s actual property value. For example, the actual value for the BucketName property might be MyS3Bucket.
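Putting the two identifier pieces together, a minimal import-ready template fragment might look like the sketch below (the logical ID and bucket name are hypothetical; every imported resource must carry a DeletionPolicy attribute):

```yaml
# Illustrative fragment of a template used in a resource-import change set.
# "ImportedBucket" and "MyS3Bucket" are made-up names for this example.
Resources:
  ImportedBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain        # required on every resource being imported
    Properties:
      BucketName: MyS3Bucket      # identifier value for the AWS::S3::Bucket type
```

During the import operation, CloudFormation matches the BucketName identifier against the existing bucket and brings it under stack management without recreating it.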

Incorrect options:

Use Parameters section of CloudFormation template to input the required resources - Parameters are a way to provide inputs to your AWS CloudFormation template. They are useful when you want to reuse your templates and when some inputs cannot be determined ahead of time. They aren’t useful for importing resources into CloudFormation.

You can use Mappings part of CloudFormation template to input the needed resources - Mappings are fixed variables within your CloudFormation Template. They’re very handy to differentiate between different environments (dev vs prod), regions (AWS regions), AMI types, etc. They aren’t useful for importing resources into CloudFormation.

Drift detection is the mechanism by which you add resources to the stack of Cloudformation resources already created - Performing a drift detection operation on a stack determines whether the stack has drifted from its expected template configuration, and returns detailed information about the drift status of each resource in the stack that supports drift detection. It is not useful for importing resources into CloudFormation.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html

Question 50

As a SysOps Administrator, you have been asked to fix the network performance issues for a fleet of Amazon EC2 instances of a company.

Which of the following use-cases represents the right fit for using enhanced networking?

1. To reach speeds up to 2,500 Gbps between EC2 instances

2. To support throughput near or exceeding 20K packets per second (PPS) on the VIF driver

3. To configure multi-attach for an EBS volume that can be attached to a maximum of 16 EC2 instances in a single Availability Zone

4. To configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances

Correct Answer: 2 (To support throughput near or exceeding 20K packets per second (PPS) on the VIF driver)

Explanation

Correct option:

To support throughput near or exceeding 20K packets per second (PPS) on the VIF driver - Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.

Consider using enhanced networking for the following scenarios:

If your packets-per-second rate reaches its ceiling, you have likely hit the upper threshold of the virtual network interface driver; consider moving to enhanced networking.

If your throughput is near or exceeding 20K packets per second (PPS) on the VIF driver, it's a best practice to use enhanced networking.

All current generation instance types support enhanced networking, except for T2 instances.

Incorrect options:

To reach speeds up to 2,500 Gbps between EC2 instances - If you need to reach speeds up to 25 Gbps between instances, launch instances in a cluster placement group along with ENA compatible instances. If you need to reach speeds up to 10 Gbps between instances, launch your instances into a cluster placement group with the enhanced networking instance type. This option has been added as a distractor, as it is not possible to support speeds up to 2,500 Gbps between EC2 instances.

To configure multi-attach for an EBS volume that can be attached to a maximum of 16 EC2 instances in a single Availability Zone - An EBS (io1 or io2) volume, when configured with the new Multi-Attach option, can be attached to a maximum of 16 EC2 instances in a single Availability Zone. Additionally, each Nitro-based EC2 instance can support the attachment of multiple Multi-Attach enabled EBS volumes. Multi-Attach capability makes it easier to achieve higher availability for applications that provide write-ordering to maintain storage consistency. You do not need to use enhanced networking to configure this option.

To configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances - AWS Direct Connect is a networking service that provides an alternative to using the internet to connect your on-premises resources to AWS Cloud. In many circumstances, private network connections can reduce costs, increase bandwidth, and provide a more consistent network experience than internet-based connections. You cannot use enhanced networking to configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html

https://aws.amazon.com/premiumsupport/knowledge-center/enable-configure-enhanced-networking/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html

Question 51

As a SysOps Administrator, you have created two configuration files for the CloudWatch Agent. The first configuration file collects a set of metrics and logs from all servers, and the second collects metrics from certain applications. You have given both files the same name but stored them in different file paths.

What is the outcome when the CloudWatch Agent is started with the first configuration file and then the second configuration file is appended to it?

1. Second configuration file parameters are added to the Agent already running with the first configuration file parameters
2. Two different Agents are started with different configurations, collecting the metrics and logs listed in either of the configuration files
3. The append command overwrites the information from the first configuration file instead of appending to it
4. A CloudWatch Agent can have only one configuration file and all required parameters are defined in this file alone

Correct Answer: 3 - The append command overwrites the information from the first configuration file instead of appending to it

Explanation

Correct option:

The append command overwrites the information from the first configuration file instead of appending to it

You can set up the CloudWatch agent to use multiple configuration files. For example, you can use a common configuration file that collects a set of metrics and logs that you always want to collect from all servers in your infrastructure. You can then use additional configuration files that collect metrics from certain applications or in certain situations.

To set this up, first create the configuration files that you want to use. Any configuration files that will be used together on the same server must have different file names. You can store the configuration files on servers or in Parameter Store.

Start the CloudWatch agent using the fetch-config option and specify the first configuration file. To append the second configuration file to the running agent, use the same command but with the append-config option. All metrics and logs listed in either configuration file are collected.

Any configuration files appended to the configuration must have different file names from each other and from the initial configuration file. If you use append-config with a configuration file with the same file name as a configuration file that the agent is already using, the append command overwrites the information from the first configuration file instead of appending to it. This is true even if the two configuration files with the same file name are on different file paths.
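The fetch-then-append flow above looks like the following on an EC2 Linux instance (the `/opt/configs/...` file paths are hypothetical examples; note the two file names differ):

```shell
# Start the agent with the first (common) configuration file
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -s -c file:/opt/configs/common-config.json

# Append the second (application-specific) configuration file to the running agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a append-config -m ec2 -s -c file:/opt/configs/app-config.json
```

If `app-config.json` were instead named `common-config.json` (even in a different directory), the append step would overwrite the first configuration rather than merge with it.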

Incorrect options:

Second configuration file parameters are added to the Agent already running with the first configuration file parameters

Two different Agents are started with different configurations, collecting the metrics and logs listed in either of the configuration files

A CloudWatch Agent can have only one configuration file and all required parameters are defined in this file alone

These three options contradict the explanation provided above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-common-scenarios.html

Question 52

A SysOps administrator has deployed multiple applications on a fleet of Amazon EC2 instances.

What is the right way to configure scheduled events for these EC2 instances?

1. Use CloudWatch Alarm to configure scheduled events on Amazon EC2 instances
2. Use Amazon EventBridge to configure scheduled events on Amazon EC2 instances
3. Use CloudWatch Agent to configure scheduled events on Amazon EC2 instances
4. Scheduled events are managed by AWS; you cannot configure scheduled events for your instances

Correct Answer: 4 - Scheduled events are managed by AWS; you cannot configure scheduled events for your instances

Explanation

Correct option:

Scheduled events are managed by AWS, you cannot configure scheduled events for your instances - AWS can schedule events for your instances, such as a reboot, stop/start, or retirement. These events do not occur frequently. If one of your instances will be affected by a scheduled event, AWS sends an email to the email address that’s associated with your AWS account before the scheduled event. The email provides details about the event, including the start and end date. Depending on the event, you might be able to take action to control the timing of the event. AWS also sends an AWS Health event, which you can monitor and manage using Amazon CloudWatch Events.

Scheduled events are managed by AWS; you cannot schedule events for your instances. You can view the events scheduled by AWS, customize scheduled event notifications to include or remove tags from the email notification, and perform actions when an instance is scheduled to reboot, retire, or stop.

Incorrect options:

Use CloudWatch Alarm to configure scheduled events on Amazon EC2 instances - CloudWatch alarms can be used to watch a single metric over a time period you specify, and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS) topic or Amazon EC2 Auto Scaling policy. CloudWatch Alarm cannot be used to configure scheduled events on Amazon EC2 instances.

Use Amazon EventBridge to configure scheduled events on Amazon EC2 instances - Amazon EventBridge helps automate your AWS services and respond automatically to system events. Events from AWS services are delivered to EventBridge in near real-time, and you can specify automated actions to take when an event matches a rule you write. EventBridge cannot be used to configure scheduled events on Amazon EC2 instances.

Use CloudWatch Agent to configure scheduled events on Amazon EC2 instances - CloudWatch Agent helps collect logs and system-level metrics from both hosts and guests on your EC2 instances and on-premises servers. CloudWatch Agent cannot be used to configure scheduled events on Amazon EC2 instances.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html

Question 53

A systems administrator is configuring an Amazon EC2 status check alarm to publish a notification to an SNS topic when the instance fails either the instance status check or the system status check.

Which CloudWatch metric is the right choice for this configuration?

1. StatusCheckFailed
2. CombinedStatusCheckFailed
3. StatusCheckFailed_Instance
4. StatusCheckFailed_System

Correct Answer: 1 - StatusCheckFailed

Explanation

Correct option:

StatusCheckFailed - The AWS/EC2 namespace includes a few status check metrics. By default, status check metrics are available at a 1-minute frequency at no charge. For a newly-launched instance, status check metric data is only available after the instance has completed the initialization state (within a few minutes of the instance entering the running state).

StatusCheckFailed - Reports whether the instance has passed both the instance status check and the system status check in the last minute. This metric can be either 0 (passed) or 1 (failed). By default, this metric is available at a 1-minute frequency at no charge.

List of EC2 status check metrics: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html#status-check-metrics
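An alarm on this metric can be sketched as a CloudFormation resource; a minimal example, assuming a hypothetical instance ID and an SNS topic (`MyAlarmTopic`) defined elsewhere in the template:

```yaml
StatusCheckAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Notify when either the instance or system status check fails
    Namespace: AWS/EC2
    MetricName: StatusCheckFailed        # covers both instance and system checks
    Dimensions:
      - Name: InstanceId
        Value: i-0123456789abcdef0       # hypothetical instance ID
    Statistic: Maximum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 1                         # metric is 0 (passed) or 1 (failed)
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref MyAlarmTopic                # hypothetical SNS topic
```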

Incorrect options:

CombinedStatusCheckFailed - This is a made-up option, given only as a distractor.

StatusCheckFailed_Instance - Reports whether the instance has passed the instance status check in the last minute. This metric can be either 0 (passed) or 1 (failed).

StatusCheckFailed_System - Reports whether the instance has passed the system status check in the last minute. This metric can be either 0 (passed) or 1 (failed).

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html#status-check-metrics

Question 54

As a SysOps Administrator, you have been asked to calculate the total network usage for all the EC2 instances of a company and determine which instance used the most bandwidth within a date range.

Which Amazon CloudWatch metric(s) will help you get the needed data?

1. DataTransfer-Out-Bytes
2. NetworkIn and NetworkOut
3. DiskReadBytes and DiskWriteBytes
4. NetworkTotalBytes

Correct Answer: 2 - NetworkIn and NetworkOut

Explanation

Correct option:

NetworkIn and NetworkOut - You can determine which instance is causing high network usage using the Amazon CloudWatch NetworkIn and NetworkOut metrics. You can aggregate the data points from these metrics to calculate the network usage for your instance.

NetworkIn - The number of bytes received by the instance on all network interfaces. This metric identifies the volume of incoming network traffic to a single instance.

The number reported is the number of bytes received during the period. If you are using basic (five-minute) monitoring and the statistic is Sum, you can divide this number by 300 to find Bytes/second. If you have detailed (one-minute) monitoring and the statistic is Sum, divide it by 60. Units of this metric are Bytes.

NetworkOut - The number of bytes sent out by the instance on all network interfaces. This metric identifies the volume of outgoing network traffic from a single instance.

The number reported is the number of bytes sent during the period. If you are using basic (five-minute) monitoring and the statistic is Sum, you can divide this number by 300 to find Bytes/second. If you have detailed (one-minute) monitoring and the statistic is Sum, divide it by 60. Units of this metric are Bytes.
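The divide-by-period arithmetic above can be sketched in a few lines (the sample Sum values are made up for illustration):

```python
def bytes_per_second(sum_bytes, period_seconds):
    """Convert a CloudWatch Sum statistic over a period into an average rate in bytes/second."""
    return sum_bytes / period_seconds

# Basic (five-minute) monitoring: divide the Sum by 300
assert bytes_per_second(6_000_000, 300) == 20_000.0

# Detailed (one-minute) monitoring: divide the Sum by 60
assert bytes_per_second(1_200_000, 60) == 20_000.0
```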

Incorrect options:

DataTransfer-Out-Bytes - DataTransfer-Out-Bytes metric is used in AWS Cost Explorer reports and is not useful for the current scenario.

DiskReadBytes and DiskWriteBytes - DiskReadBytes is the bytes read from all instance store volumes available to the instance. This metric is used to determine the volume of the data the application reads from the hard disk of the instance. This can be used to determine the speed of the application.

DiskWriteBytes is the bytes written to all instance store volumes available to the instance. This metric is used to determine the volume of the data the application writes onto the hard disk of the instance. This can be used to determine the speed of the application.

NetworkTotalBytes - This is a made-up option, given only as a distractor.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html

Question 55

A team has accidentally deleted the AMI of Amazon EC2 instances belonging to the test environment. The team had configured backups via EBS snapshots for these instances.

Which of the following options would you suggest to recover/rebuild the accidentally deleted AMI? (Select two)

1. AWS Support retains backups of AMIs. Write to the support team to get help for recovering the lost AMI
2. Create a new AMI from Amazon EBS snapshots that were created as backups
3. Create a new AMI from Amazon EC2 instances that were launched before the deletion of AMI
4. Recover the AMI from the current Amazon EC2 instances that were launched before the deletion of AMI
5. Recover the AMI from Amazon EBS snapshots that were created as backups before the deletion of AMI

Correct Answers: 2 and 3 - Create a new AMI from Amazon EBS snapshots that were created as backups; Create a new AMI from Amazon EC2 instances that were launched before the deletion of AMI

Explanation

Correct options:

Create a new AMI from Amazon EBS snapshots that were created as backups

Create a new AMI from Amazon EC2 instances that were launched before the deletion of AMI

It isn’t possible to restore or recover a deleted or deregistered AMI. However, you can create a new, identical AMI using one of the following:

Amazon Elastic Block Store (Amazon EBS) snapshots that were created as backups: When you delete or deregister an Amazon EBS-backed AMI, any snapshots created for the volume of the instance during the AMI creation process are retained. If you accidentally delete the AMI, you can launch an identical AMI using one of the retained snapshots.

Amazon Elastic Compute Cloud (Amazon EC2) instances that were launched from the deleted AMI: If you deleted the AMI and the snapshots are also deleted, then you can recover the AMI from any existing EC2 instances launched using the deleted AMI. Unless you have selected the No reboot option on the instance, performing this step will reboot the instance.

Incorrect options:

AWS Support retains backups of AMIs. Write to the support team to get help for recovering the lost AMI - For security and privacy reasons, AWS Support doesn’t have visibility or access to customer data. If you don’t have backups of your deleted AMI, AWS Support can’t recover it for you.

Recover the AMI from the current Amazon EC2 instances that were launched before the deletion of AMI

Recover the AMI from Amazon EBS snapshots that were created as backups before the deletion of AMI

As discussed above, it is not possible to restore or recover a deleted or deregistered AMI. The only option is to create a new, identical AMI as discussed above.

Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/recover-ami-accidentally-deleted-ec2/

Question 56

A team needs to create an AMI from their Amazon EC2 instances for use in another environment.

What is the right way to create an application-consistent AMI from existing EC2 instances?

1. Create the AMI with No reboot option enabled
2. Create an EBS-backed AMI for application consistency
3. Create the AMI by disabling the No reboot option
4. Create the AMI with Delete on termination enabled

Correct Answer: 3 - Create the AMI by disabling the No reboot option

Explanation

Correct option:

Create the AMI by disabling the No reboot option - On the Create image page, a No reboot flag is present. By default, Amazon EC2 shuts down the instance, takes snapshots of any attached volumes, creates and registers the AMI, and then reboots the instance. When the No reboot option is selected, the instance is not shut down while creating the AMI. This option is not selected by default.

If you select No reboot, the AMI will be crash-consistent (all the volumes are snapshotted at the same time), but not application-consistent (all the operating system buffers are not flushed to disk before the snapshots are created).

Incorrect options:

Create the AMI with No reboot option enabled - If the No reboot flag is selected, the instance is not shut down while creating the AMI. This means the operating system buffers are not flushed before the AMI is created, so data integrity could be an issue with AMIs created this way. Such AMIs are crash-consistent but not application-consistent.

Create an EBS-backed AMI for application consistency - When you create a new instance from an EBS-backed AMI, you are using persistent storage. No reboot flag should still be unchecked to ensure that everything on the instance is stopped and in a consistent state during the creation process.

Create the AMI with Delete on termination enabled - If you select Delete on termination, when you terminate the instance created from this AMI, the EBS volume is deleted. If you clear Delete on termination, when you terminate the instance, the EBS volume is not deleted. This option has been added as a distractor.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html

Question 57

An organization has multiple AWS accounts to manage different lines of business. A user from the Finance account has to access reports stored in Amazon S3 buckets of two other AWS accounts (belonging to the HR and Audit departments) and copy these reports back to the S3 bucket in the Finance account. The user has requested the necessary permissions from the systems administrator to perform this task.

As a SysOps Administrator, how will you configure a solution for this requirement?

1. Create resource-based policies in the HR, Audit accounts that will allow the requester from the Finance account to access the respective S3 buckets
2. Create resource-level permissions in the HR, Audit accounts to allow access to respective S3 buckets for the user in the Finance account
3. Create an identity-based IAM policy in the Finance account that allows the user to make a request to the S3 buckets in the HR and Audit accounts. Also, create resource-based IAM policies in the HR, Audit accounts that will allow the requester from the Finance account to access the respective S3 buckets
4. Create IAM roles in the HR, Audit accounts, which can be assumed by the user from the Finance account when the user needs to access the S3 buckets of the accounts

Correct Answer: 3 - Create an identity-based IAM policy in the Finance account that allows the user to make a request to the S3 buckets in the HR and Audit accounts. Also, create resource-based IAM policies in the HR, Audit accounts that will allow the requester from the Finance account to access the respective S3 buckets

Explanation

Correct option:

Create identity-based IAM policy in the Finance account that allows the user to make a request to the S3 buckets in the HR and Audit accounts. Also, create resource-based IAM policies in the HR, Audit accounts that will allow the requester from the Finance account to access the respective S3 buckets

Identity-based policies are attached to an IAM user, group, or role. These policies let you specify what that identity can do (its permissions).

Resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys.

Identity-based policies and resource-based policies are both permissions policies and are evaluated together. For a request to which only permissions policies apply, AWS first checks all policies for a Deny. If one exists, then the request is denied. Then AWS checks for each Allow. If at least one policy statement allows the action in the request, the request is allowed. It doesn’t matter whether the Allow is in the identity-based policy or the resource-based policy.

For requests made from one account to another, the requester in Account A must have an identity-based policy that allows them to make a request to the resource in Account B. Also, the resource-based policy in Account B must allow the requester in Account A to access the resource. There must be policies in both accounts that allow the operation, otherwise, the request fails.

Comparing IAM policies: via - https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html
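A minimal sketch of the two policies working together (the account ID 111122223333, the user name finance-user, and the bucket name hr-reports are all hypothetical). First, the identity-based policy attached to the user in the Finance account:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::hr-reports", "arn:aws:s3:::hr-reports/*"]
  }]
}
```

Second, the matching resource-based bucket policy on the HR account's bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:user/finance-user" },
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::hr-reports", "arn:aws:s3:::hr-reports/*"]
  }]
}
```

The cross-account request succeeds only when both policies allow it; remove either one and the request fails.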

Incorrect options:

Create resource-based policies in the HR, Audit accounts that will allow the requester from the Finance account to access the respective S3 buckets - Creating resource-based policy alone will be sufficient when the request is made within a single AWS account.

Create resource-level permissions in the HR, Audit accounts to allow access to respective S3 buckets for the user in the Finance account - Resource-based policies differ from resource-level permissions. You can attach resource-based policies directly to a resource, as described in this topic. Resource-level permissions refer to the ability to use ARNs to specify individual resources in a policy. Resource-based policies are supported only by some AWS services.

Create IAM roles in the HR, Audit accounts, which can be assumed by the user from the Finance account when the user needs to access the S3 buckets of the accounts - Cross-account access with a resource-based policy has some advantages over cross-account access with a role. With a resource that is accessed through a resource-based policy, the principal still works in the trusted account and does not have to give up his or her permissions to receive the role permissions. In other words, the principal continues to have access to resources in the trusted account at the same time as he or she has access to the resource in the trusting account. This is useful for tasks such as copying information to or from the shared resource in the other account.

We chose resource-based policy, so the user from the Finance account will continue to have access to resources in his own account while also getting permissions on resources from other accounts.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html

Question 58

A retail company wants to get out of the business of owning and maintaining its own IT infrastructure. As part of this digital transformation, the company wants to archive about 5 PB of data in its on-premises data center to durable long-term storage.

As a SysOps Administrator, what is your recommendation to migrate this data in the MOST cost-optimal way?

1. Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into AWS Glacier
2. Set up AWS Direct Connect between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier
3. Set up a Site-to-Site VPN connection between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier
4. Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

Correct Answer: 4 - Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

Explanation

Correct option:

Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases. The data stored on the Snowball Edge device can be copied into the S3 bucket and later transitioned into AWS Glacier via a lifecycle policy. You can’t directly copy data from Snowball Edge devices into AWS Glacier.
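As a rough sizing sanity check, assuming the 80 TB usable capacity quoted above and ignoring fill inefficiency, the device count works out as:

```python
import math

DATA_TB = 5 * 1000   # 5 PB expressed in TB (decimal units)
DEVICE_TB = 80       # usable HDD storage per Snowball Edge Storage Optimized device

# 5000 / 80 = 62.5, so round up to the next whole device
devices_needed = math.ceil(DATA_TB / DEVICE_TB)
print(devices_needed)  # → 63
```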

Incorrect options:

Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into AWS Glacier - As mentioned earlier, you can’t directly copy data from Snowball Edge devices into AWS Glacier. Hence, this option is incorrect.

Set up AWS Direct Connect between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier - AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. Direct Connect involves significant monetary investment and takes more than a month to set up, so it is not the right fit for this use-case, which requires only a one-time data transfer.

Set up a Site-to-Site VPN connection between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier - AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). VPN connections are a good solution if you have an immediate need and low to modest bandwidth requirements. Because of the high data volume in this use-case, Site-to-Site VPN is not the correct choice.

Reference:

https://aws.amazon.com/snowball/

Question 59

The Chief Technology Officer (CTO) of a healthcare company realized that he does not have access to an Amazon S3 bucket present in the company’s own AWS account. The CTO is the root user for the AWS account and has created other AWS users using the root user account.

What is the reason for this behavior and how can you fix this?

1. If an IAM user, with full access to IAM and Amazon S3, assigns a bucket policy to an Amazon S3 bucket and doesn’t specify the AWS account root user as a principal, the root user is denied access to that bucket
2. Root user always has access to all the resources of the account. The Amazon S3 bucket could be from another AWS account and the S3 bucket has been shared with the root user and hence appears in his list of S3 buckets
3. An Amazon S3 bucket policy that specifies a wildcard (*) in the principal element is sometimes declared void by AWS to avoid the risk of complete public exposure. Such S3 bucket policies are in invalid status and have random behavior
4. Root user has access to all the resources in his AWS account. Contact AWS support to resolve the access issue

Correct Answer: 1 - If an IAM user, with full access to IAM and Amazon S3, assigns a bucket policy to an Amazon S3 bucket and doesn’t specify the AWS account root user as a principal, the root user is denied access to that bucket

Explanation

Correct option:

If an IAM user, with full access to IAM and Amazon S3, assigns a bucket policy to an Amazon S3 bucket and doesn’t specify the AWS account root user as a principal, the root user is denied access to that bucket

Sometimes, you might have an IAM user with full access to IAM and Amazon S3. If the IAM user assigns a bucket policy to an Amazon S3 bucket and doesn’t specify the AWS account root user as a principal, the root user is denied access to that bucket. However, as the root user, you can still access the bucket. To do that, modify the bucket policy to allow root user access from the Amazon S3 console or the AWS CLI. Use the following principal, replacing 123456789012 with the ID of the AWS account.

"Principal": { "AWS": "arn:aws:iam::123456789012:root" }

Incorrect options:

Root user always has access to all the resources of the account. The Amazon S3 bucket could be from another AWS account and the S3 bucket has been shared with the root user and hence appears in his list of S3 buckets

An Amazon S3 bucket policy that specifies a wildcard (*) in the principal element, sometimes is declared void by AWS to avoid the risk of complete public exposure. Such S3 buckets policies are in invalid status and have random behavior

Root user has access to all the resources in his AWS account. Contact AWS support to resolve the access issue

These three options contradict the explanation above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_iam-s3.html

Question 60

The development team at a retail company manages the deployment and scaling of their web application through AWS Elastic Beanstalk. After configuring the Elastic Beanstalk environment, the team realized that Beanstalk is not handling the scaling activities the way they expected, which has impacted the application’s ability to respond to variations in traffic.

How should the environment be configured to get the best of Beanstalk’s auto-scaling capabilities?

1. The IAM Role attached to the Auto Scaling group might not have enough permissions to scale instances on-demand
2. The Auto Scaling group in your Elastic Beanstalk environment uses the number of logged-in users as the criteria to trigger auto-scaling action. These alarms must be configured based on the parameters appropriate for your application
3. By default, the Auto Scaling group created from Beanstalk uses Elastic Load Balancing health checks. Configure the Beanstalk to use Amazon EC2 status checks
4. The Auto Scaling group in your Elastic Beanstalk environment uses two default Amazon CloudWatch alarms to trigger scaling operations. These alarms must be configured based on the parameters appropriate for your application

Correct Answer: 4 - The Auto Scaling group in your Elastic Beanstalk environment uses two default Amazon CloudWatch alarms to trigger scaling operations. These alarms must be configured based on the parameters appropriate for your application

Explanation

Correct option:

The Auto Scaling group in your Elastic Beanstalk environment uses two default Amazon CloudWatch alarms to trigger scaling operations. These alarms must be configured based on the parameters appropriate for your application

The Auto Scaling group in your Elastic Beanstalk environment uses two Amazon CloudWatch alarms to trigger scaling operations. Default Auto Scaling triggers are configured to scale when the average outbound network traffic (NetworkOut) from each instance is higher than 6 MB or lower than 2 MB over a period of five minutes.

For more efficient Amazon EC2 Auto Scaling, configure triggers that are appropriate for your application, instance type, and service requirements. You can scale based on several statistics including latency, disk I/O, CPU utilization, and request count.
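As an illustrative sketch of what "configure triggers that are appropriate for your application" can look like, the trigger parameters live in the `aws:autoscaling:trigger` option namespace, which can be supplied as environment option settings. The CPU-based thresholds below are assumed example values, not figures from this question.

```python
# Illustrative option settings for the aws:autoscaling:trigger namespace.
# A list in this shape could be passed to an Elastic Beanstalk environment
# (e.g. via update_environment(OptionSettings=...)) or mirrored in an
# .ebextensions config file. The threshold values are assumptions.

def cpu_trigger_settings(upper: int = 70, lower: int = 30) -> list[dict]:
    """Build option settings that scale on average CPU utilization
    instead of the default NetworkOut metric."""
    ns = "aws:autoscaling:trigger"
    values = {
        "MeasureName": "CPUUtilization",  # default trigger metric is NetworkOut
        "Statistic": "Average",
        "Unit": "Percent",
        "UpperThreshold": str(upper),     # scale out above this value
        "LowerThreshold": str(lower),     # scale in below this value
        "BreachDuration": "5",            # minutes the alarm must breach
    }
    return [{"Namespace": ns, "OptionName": k, "Value": v} for k, v in values.items()]

settings = cpu_trigger_settings()
```

The same key/value pairs can equally be expressed in an `.ebextensions` `option_settings` block; the point is that the two default alarms are replaced by triggers tuned to the application.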

Incorrect options:

The IAM Role attached to the Auto Scaling group might not have enough permissions to scale instances on-demand - The Auto Scaling group will not be able to spin up Amazon EC2 instances if the IAM Role associated with Beanstalk does not have enough permissions. Since the current use-case talks about scaling not happening at the expected rate, this should not be the issue.

By default, Auto Scaling group created from Beanstalk uses Elastic Load Balancing health checks. Configure the Beanstalk to use Amazon EC2 status checks - This statement is incorrect. By default, the Auto Scaling group created from Beanstalk uses Amazon EC2 status checks.

The Auto Scaling group in your Elastic Beanstalk environment uses the number of logged-in users, as the criteria to trigger auto-scaling action. These alarms must be configured based on the parameters appropriate for your application - The default scaling criteria has already been discussed above (and it is not the number of logged-in users).

Reference:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.alarms.html

Question 61

A company initially used a manual process to create and manage different IAM roles needed for the organization. As the company expanded and lines of business grew, different AWS accounts were created to manage the AWS resources as well as the users. The manual process has resulted in errors with IAM roles getting created with insufficient permissions. The company is looking at automating the process of creating and managing the necessary IAM roles for multiple AWS accounts. The company already uses AWS Organizations to manage multiple AWS accounts.

As a SysOps Administrator, can you suggest an effective way to automate this process?

1. Create CloudFormation templates and reuse them to create necessary IAM roles in each of the AWS accounts

2. Use CloudFormation StackSets with AWS Organizations to deploy and manage IAM roles to multiple AWS accounts simultaneously

3. Use AWS Directory Service with AWS Organizations to automatically associate necessary IAM roles with the Microsoft Active Directory users

4. Use AWS Resource Access Manager that integrates with AWS Organizations to deploy and manage shared resources across AWS accounts

Correct Answer: 2

Explanation

Correct option:

Use CloudFormation StackSets with AWS Organizations to deploy and manage IAM roles to multiple AWS accounts simultaneously

CloudFormation StackSets allow you to roll out CloudFormation stacks over multiple AWS accounts and in multiple Regions with just a couple of clicks. When AWS launched StackSets, grouping accounts was primarily for billing purposes. Since the launch of AWS Organizations, you can centrally manage multiple AWS accounts across diverse business needs including billing, access control, compliance, security and resource sharing.

You can now centrally orchestrate any AWS CloudFormation enabled service across multiple AWS accounts and regions. For example, you can deploy your centralized AWS Identity and Access Management (IAM) roles, provision Amazon Elastic Compute Cloud (EC2) instances or AWS Lambda functions across AWS Regions and accounts in your organization. CloudFormation StackSets simplify the configuration of cross-accounts permissions and allow for automatic creation and deletion of resources when accounts are joining or are removed from your Organization.

You can get started by enabling data sharing between CloudFormation and Organizations from the StackSets console. Once done, you will be able to use StackSets in the Organizations master account to deploy stacks to all accounts in your organization or in specific organizational units (OUs). A new service managed permission model is available with these StackSets. Choosing Service managed permissions allows StackSets to automatically configure the necessary IAM permissions required to deploy your stack to the accounts in your organization.
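As a minimal sketch of the service-managed permission model described above, the request below shows the fields a StackSet creation call takes (`PermissionModel`, `AutoDeployment`); the stack set name and template URL are hypothetical placeholders.

```python
# Sketch of the request body for creating a service-managed stack set that
# auto-deploys to accounts as they join an organizational unit. The name and
# template URL are hypothetical placeholders.

def stack_set_request(name: str, template_url: str) -> dict:
    return {
        "StackSetName": name,
        "TemplateURL": template_url,
        "PermissionModel": "SERVICE_MANAGED",      # StackSets configures cross-account IAM for you
        "AutoDeployment": {
            "Enabled": True,                       # deploy to accounts joining the target OUs
            "RetainStacksOnAccountRemoval": False, # clean up when an account leaves
        },
        "Capabilities": ["CAPABILITY_NAMED_IAM"],  # required when the template creates IAM roles
    }

req = stack_set_request(
    "org-iam-roles",
    "https://example-bucket.s3.amazonaws.com/iam-roles.yaml",
)
# boto3.client("cloudformation").create_stack_set(**req) would submit this request
```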

How to use AWS CloudFormation StackSets for Multiple Accounts in an AWS Organization: via - https://aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/

Incorrect options:

Create CloudFormation templates and reuse them to create necessary IAM roles in each of the AWS accounts - CloudFormation templates can ease the current manual process that the company is using. However, it’s not a completely automated process that the company needs.

Use AWS Directory Service with AWS Organizations to automatically associate necessary IAM roles with the Microsoft Active Directory users - AWS Directory Service for Microsoft Active Directory, or AWS Managed Microsoft AD, lets you run Microsoft Active Directory (AD) as a managed service. AWS Directory Service makes it easy to set up and run directories in the AWS Cloud or connect your AWS resources with an existing on-premises Microsoft Active Directory. It is not meant for the automatic creation of IAM roles across AWS accounts.

Use AWS Resource Access Manager that integrates with AWS Organizations to deploy and manage shared resources across AWS accounts - AWS Resource Access Manager (AWS RAM) enables you to share specified AWS resources that you own with other AWS accounts. It’s a centralized service that provides a consistent experience for sharing different types of AWS resources across multiple accounts. This service enables you to share resources across AWS accounts. It’s not meant for re-creating the same resource definitions in different AWS accounts.

References:

https://aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/

https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-ram.html

Question 62

An e-commerce company uses AWS Elastic Beanstalk to create test environments comprising an Amazon EC2 instance and an RDS instance whenever a new product or line-of-service is launched. The company is currently testing one such environment but wants to decouple the database from the environment to run some analysis and reports later in another environment. Since testing is in progress for a high-stakes product, the company wants to avoid downtime and database sync issues.

As a SysOps Administrator, which solution will you recommend to the company?

1. Since it is a test environment, take a snapshot of the database and terminate the current environment. Create a new one without attaching an RDS instance directly to it (from the snapshot)

2. Use an Elastic Beanstalk Immutable deployment to make the entire architecture completely reliable. You can terminate the first environment whenever you are confident of the second environment working correctly

3. Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple the RDS DB instance from environment A. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the decoupled RDS DB instance

4. Decoupling an RDS instance that is part of a running Elastic Beanstalk environment is not currently supported by AWS. You will need to terminate the current environment after taking the snapshot of the database and create a new one with RDS configured outside the environment

Correct Answer: 3

Explanation

Correct option:

Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple the RDS DB instance from environment A. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the decoupled RDS DB instance - Attaching an RDS DB instance to an Elastic Beanstalk environment is ideal for development and testing environments. However, it’s not recommended for production environments because the lifecycle of the database instance is tied to the lifecycle of your application environment. If you terminate the environment, then you lose your data because the RDS DB instance is deleted by the environment.

Since the current use case mentions not having downtime on the database, we can follow these steps for resolution:

1. Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple an RDS DB instance from environment A. Create an RDS DB snapshot and enable deletion protection on the DB instance to safeguard your RDS DB instance from deletion.

2. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the RDS DB instance. Your new Elastic Beanstalk environment (environment B) must not include an RDS DB instance in the same Elastic Beanstalk application.

Step-by-step instructions to configure the above solution: via - https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/
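As a sketch of the safeguard half of step 1 (snapshot the database, then turn on deletion protection before decoupling), the pairs below map onto RDS API calls of the same names; the DB and snapshot identifiers are hypothetical.

```python
# Sketch of the safeguard calls before decoupling: snapshot the DB, then enable
# deletion protection so terminating environment A cannot delete the instance.
# Identifier names are hypothetical placeholders.

def safeguard_calls(db_id: str, snapshot_id: str) -> list[tuple[str, dict]]:
    return [
        ("create_db_snapshot", {
            "DBInstanceIdentifier": db_id,
            "DBSnapshotIdentifier": snapshot_id,   # recovery point before the change
        }),
        ("modify_db_instance", {
            "DBInstanceIdentifier": db_id,
            "DeletionProtection": True,            # blocks deletion of the decoupled DB
            "ApplyImmediately": True,
        }),
    ]

calls = safeguard_calls("test-env-db", "pre-decouple-snapshot")
# each (method, kwargs) pair corresponds to a boto3.client("rds") method call
```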

Incorrect options:

Since it is a test environment, take a snapshot of the database and terminate the current environment. Create a new one without attaching an RDS instance directly to it (from the snapshot) - It is mentioned in the problem statement that the company is looking at a solution with no downtime. Hence, this is an incorrect option.

Use an Elastic Beanstalk Immutable deployment to make the entire architecture completely reliable. You can terminate the first environment whenever you are confident of the second environment working correctly - Immutable deployments perform an immutable update to launch a full set of new instances running the new version of the application in a separate Auto Scaling group, alongside the instances running the old version. Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new instances don’t pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched. This solution is overkill for the test environment, even if the company is looking at a no-downtime option.

Decoupling an RDS instance that is part of a running Elastic Beanstalk environment is not currently supported by AWS. You will need to terminate the current environment after taking the snapshot of the database and create a new one with RDS configured outside the environment - This is a made-up option and given only as a distractor.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

Question 63

An IT company runs its server infrastructure on Amazon EC2 instances configured in an Auto Scaling Group (ASG) fronted by an Elastic Load Balancer (ELB). For ease of deployment and flexibility in scaling, this AWS architecture is maintained via an Elastic Beanstalk environment. The Technology Lead of a project has requested to automate the replacement of unhealthy Amazon EC2 instances in the Elastic Beanstalk environment.

How will you configure a solution for this requirement?

1. To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from EC2 to ELB by using a configuration file of your Beanstalk environment

2. Modify the Auto Scaling Group from Amazon EC2 console directly to change the health check type to ELB

3. To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from ELB to EC2 by using a configuration file of your Beanstalk environment

4. Modify the Auto Scaling Group from Amazon EC2 console directly to change the health check type to EC2

Correct Answer: 1

Explanation

Correct option:

To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from EC2 to ELB by using a configuration file of your Beanstalk environment

By default, the health check configuration of your Auto Scaling group is set as an EC2 type that performs a status check of EC2 instances. To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from EC2 to ELB by using a configuration file.

The following are some important points to remember:

Status checks cover only an EC2 instance's health, and not the health of your application, server, or any Docker containers running on the instance.

If your application crashes, the load balancer removes the unhealthy instances from its target. However, your Auto Scaling group doesn't automatically replace the unhealthy instances marked by the load balancer.

By changing the health check type of your Auto Scaling group from EC2 to ELB, you enable the Auto Scaling group to automatically replace the unhealthy instances when the health check fails.

Complete list of steps to configure the above: via - https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-instance-automation/
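The configuration file referenced above is an `.ebextensions` YAML file that overrides properties of the environment's Auto Scaling group. As an illustrative sketch (not the exact file), the same mapping expressed in Python:

```python
# Sketch of the .ebextensions content that switches the health check type from
# EC2 to ELB, expressed here as a Python dict for illustration. The grace
# period value is an assumed example.
import json

ebextension = {
    "Resources": {
        "AWSEBAutoScalingGroup": {   # logical ID Elastic Beanstalk uses for its ASG
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "HealthCheckType": "ELB",        # replace instances the load balancer marks unhealthy
                "HealthCheckGracePeriod": 300,   # seconds to wait before health checks begin
            },
        }
    }
}

print(json.dumps(ebextension, indent=2))
```

Saved as YAML under `.ebextensions/` in the application source bundle, this change persists across environment updates, unlike a direct console edit.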

Incorrect options:

To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from ELB to EC2 by using a configuration file of your Beanstalk environment - As mentioned earlier, the health check type of your instance’s Auto Scaling group should be changed from EC2 to ELB.

Modify the Auto Scaling Group from Amazon EC2 console directly to change the health check type to ELB

Modify the Auto Scaling Group from Amazon EC2 console directly to change the health check type to EC2

You should configure your Amazon EC2 instances in an Elastic Beanstalk environment by using Elastic Beanstalk configuration files (.ebextensions). Configuration changes made to your Elastic Beanstalk environment won’t persist if you use the following configuration methods:

Configuring an Elastic Beanstalk resource directly from the console of a specific AWS service.

Installing a package, creating a file, or running a command directly from your Amazon EC2 instance.

Both these options contradict the above explanation and therefore these two options are incorrect.

Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-configuration-files/

Question 64

An e-commerce company runs its web application on Amazon EC2 instances backed by Amazon Elastic Block Store (Amazon EBS) volumes. An Amazon S3 bucket is used for storing sharable data. A developer has attached an Amazon EBS volume to an Amazon EC2 instance, but it’s still in the “attaching” state after 10-15 minutes.

As a SysOps Administrator, what solution will you suggest to fix this issue with the EBS volume?

1. Check that the device name you specified when you attempted to attach the EBS volume isn’t already in use. Attempt to attach the volume to the instance, again, but use a different device name

2. The EBS volume could be encrypted and the custom KMS key used to encrypt the snapshot is missing. The custom KMS key needs to be added to the volume configuration

3. Each EBS volume receives an initial I/O credit balance, an error in accumulating the credit balance can stop the volume from attaching properly to the instance. Restart the instance to fix the error

4. The attaching status indicates that the underlying hardware related to your EBS volume has failed. This issue cannot be fixed. Raise a service request on AWS and request for a new volume. You are not charged for volumes that are in error state

Correct Answer: 1

Explanation

Correct option:

Check that the device name you specified when you attempted to attach the EBS volume isn’t already in use. Attempt to attach the volume to the instance, again, but use a different device name

Check that the device name you specified when you attempted to attach the EBS volume isn’t already in use. If the specified device name is already being used by the block device driver of the EC2 instance, the operation fails.

When attaching an EBS volume to an Amazon EC2 instance, you can specify a device name for the volume (by default, one is filled in for you). The block device driver of the EC2 instance mounts the volume and assigns a name. The volume name can be different from the name that you assign.

If you specify a device name that’s not in use by Amazon EC2, but is used by the block device driver within the EC2 instance, the attachment of the Amazon EBS volume fails. Instead, the EBS volume is stuck in the attaching state. This is usually due to one of the following reasons:

The block device driver is remapping the specified device name: On an HVM EC2 instance, /dev/sda1 remaps to /dev/xvda. If you attempt to attach a secondary Amazon EBS volume to /dev/xvda, the secondary EBS volume can't successfully attach to the instance. This can cause the EBS volume to be stuck in the attaching state.

The block device driver didn't release the device name: If a user has initiated a forced detach of an Amazon EBS volume, the block device driver of the Amazon EC2 instance might not immediately release the device name for reuse. Attempting to use that device name when attaching a volume causes the volume to be stuck in the attaching state. You must either choose a different device name or reboot the instance.

You can resolve most issues with volumes stuck in the attaching state by following these steps: Force detach the volume and attempt to attach the volume to the instance, again, but use a different device name. The instance must be in running state for this to work.

If the above does not solve the problem, you can reboot the instance or stop and start the instance to migrate it to new underlying hardware. Keep in mind that instance store data is lost when you stop and start an instance.
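As a sketch of the resolution steps (force-detach, then reattach under a different device name), the pairs below map onto EC2 API calls of the same names; the volume and instance IDs are hypothetical placeholders.

```python
# Sketch of the fix for a volume stuck in "attaching": force-detach it, then
# reattach it using a different device name. IDs are hypothetical placeholders.

def reattach_plan(volume_id: str, instance_id: str,
                  new_device: str = "/dev/sdf") -> list[tuple[str, dict]]:
    return [
        ("detach_volume", {
            "VolumeId": volume_id,
            "Force": True,               # force detach since the attach is stuck
        }),
        ("attach_volume", {
            "VolumeId": volume_id,
            "InstanceId": instance_id,
            "Device": new_device,        # avoid the device name that was stuck
        }),
    ]

plan = reattach_plan("vol-0123456789abcdef0", "i-0123456789abcdef0")
# each (method, kwargs) pair corresponds to a boto3.client("ec2") method call;
# the instance must be in the running state for the reattach to succeed
```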

Incorrect options:

The EBS volume could be encrypted and the custom KMS key used to encrypt the snapshot is missing. The custom KMS key needs to be added to the volume configuration - A missing KMS key will not lead to attaching state of the volume.

Each EBS volume receives an initial I/O credit balance, an error in accumulating the credit balance can stop the volume from attaching properly to the instance. Restart the instance to fix the error - This is a made-up option and has been added as a distractor.

The attaching status indicates that the underlying hardware related to your EBS volume has failed. This issue cannot be fixed. Raise a service request on AWS and request for a new volume. You are not charged for volumes that are in error state - When the underlying hardware related to your EBS volume has failed, the EBS volume will have a status of error. The data associated with the volume is unrecoverable and Amazon EBS processes the volume as lost. AWS doesn’t bill for volumes with a status of error.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/ebs-stuck-attaching/

https://docs.amazonaws.cn/en_us/AWSEC2/latest/WindowsGuide/ebs-volume-types.html

Question 65

A hospitality company runs their applications on its on-premises infrastructure but stores the critical customer data on AWS Cloud using AWS Storage Gateway. At a recent audit, the company has been asked if the customer data is secure while in-transit and at rest in the Cloud.

What is the correct answer to the auditor’s question? And what should the company change to meet the security requirements?

1. AWS Storage Gateway uses SSL/TLS (Secure Socket Layers/Transport Layer Security) to encrypt data that is transferred between your gateway appliance and AWS storage. File and Volume Gateway data stored on Amazon S3 is encrypted. Tape Gateway data cannot be encrypted at-rest

2. AWS Storage Gateway uses IPsec to encrypt data that is transferred between your gateway appliance and AWS storage. File and Volume Gateway data stored on Amazon S3 is encrypted. Tape Gateway data cannot be encrypted at-rest

3. AWS Storage Gateway uses SSL/TLS (Secure Socket Layers/Transport Layer Security) to encrypt data that is transferred between your gateway appliance and AWS storage. By default, Storage Gateway uses Amazon S3-Managed Encryption Keys to server-side encrypt all data it stores in Amazon S3

4. AWS Storage Gateway uses IPsec to encrypt data that is transferred between your gateway appliance and AWS storage. All three Gateway types store data in encrypted form at-rest

Correct Answer: 3

Explanation

Correct option:

AWS Storage Gateway uses SSL/TLS (Secure Socket Layers/Transport Layer Security) to encrypt data that is transferred between your gateway appliance and AWS storage. By default, Storage Gateway uses Amazon S3-Managed Encryption Keys to server-side encrypt all data it stores in Amazon S3

AWS Storage Gateway uses SSL/TLS (Secure Socket Layers/Transport Layer Security) to encrypt data that is transferred between your gateway appliance and AWS storage. By default, Storage Gateway uses Amazon S3-Managed Encryption Keys (SSE-S3) to server-side encrypt all data it stores in Amazon S3. You have an option to use the Storage Gateway API to configure your gateway to encrypt data stored in the cloud using server-side encryption with AWS Key Management Service (SSE-KMS) customer master keys (CMKs).

File, Volume and Tape Gateway data is stored in Amazon S3 buckets by AWS Storage Gateway. Tape Gateway also supports backing up data to Amazon S3 Glacier in addition to standard storage.

Encrypting a file share: For a file share, you can configure your gateway to encrypt your objects with AWS KMS–managed keys by using SSE-KMS.

Encrypting a volume: For cached and stored volumes, you can configure your gateway to encrypt volume data stored in the cloud with AWS KMS–managed keys by using the Storage Gateway API.

Encrypting a tape: For a virtual tape, you can configure your gateway to encrypt tape data stored in the cloud with AWS KMS–managed keys by using the Storage Gateway API.
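As a sketch of the file-share case, switching from the default SSE-S3 to SSE-KMS comes down to two parameters on the Storage Gateway API (`KMSEncrypted` and `KMSKey`); the gateway and key ARNs below are hypothetical placeholders.

```python
# Sketch of the share-level settings that switch at-rest encryption from the
# default SSE-S3 to SSE-KMS via the Storage Gateway API. ARNs are hypothetical.

def kms_share_settings(gateway_arn: str, key_arn: str) -> dict:
    return {
        "GatewayARN": gateway_arn,
        "KMSEncrypted": True,   # False (the default) means SSE-S3
        "KMSKey": key_arn,      # KMS key ARN, required when KMSEncrypted is True
    }

settings = kms_share_settings(
    "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    "arn:aws:kms:us-east-1:123456789012:key/hypothetical-key-id",
)
# these fields appear in the file-share creation/update calls of the
# Storage Gateway API; volumes and tapes take the same two parameters
```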

Incorrect options:

AWS Storage Gateway uses IPsec to encrypt data that is transferred between your gateway appliance and AWS storage. File and Volume Gateway data stored on Amazon S3 is encrypted. Tape Gateway data cannot be encrypted at-rest

AWS Storage Gateway uses IPsec to encrypt data that is transferred between your gateway appliance and AWS storage. All three Gateway types store data in encrypted form at-rest

There is no such thing as using IPSec for encrypting in-transit data between your gateway appliance and AWS storage. You need to use SSL/TLS for this. So both these options are incorrect.

AWS Storage Gateway uses SSL/TLS (Secure Socket Layers/Transport Layer Security) to encrypt data that is transferred between your gateway appliance and AWS storage. File and Volume Gateway data stored on Amazon S3 is encrypted. Tape Gateway data cannot be encrypted at-rest - For a virtual tape, you can configure your gateway to encrypt tape data stored in the cloud with AWS KMS–managed keys by using the Storage Gateway API. So this option is incorrect.

Reference:

https://docs.aws.amazon.com/storagegateway/latest/userguide/encryption.html

Certstest

QUESTION 1

  • (Exam Topic 1) A company hosts its website in the us-east-1 Region. The company is preparing to deploy its website into the eu-central-1 Region. Website visitors who are located in Europe should access the website that is hosted in eu-central-1. All other visitors access the website that is hosted in us-east-1. The company uses Amazon Route 53 to manage the website’s DNS records. Which routing policy should a SysOps administrator apply to the Route 53 record set to meet these requirements?

    A. Geolocation routing policy
    B. Geoproximity routing policy
    C. Latency routing policy
    D. Multivalue answer routing policy

Correct Answer: A

“Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB load balancer in the Frankfurt region.” This could be confused with geoproximity routing: “Geoproximity routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources. You can also optionally choose to route more traffic or less to a given resource by specifying a value, known as a bias. A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource.” That use case is not what the question requires. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
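As a sketch, the scenario implies two geolocation record sets: one matching Europe (continent code EU) and a default record for everyone else. The domain name and IP values below are hypothetical placeholders.

```python
# Sketch of the two Route 53 geolocation record sets implied by this answer:
# Europe -> eu-central-1 endpoint, default -> us-east-1 endpoint.
# Domain name and record values are hypothetical placeholders.

def geo_record(name: str, location: dict, set_id: str, value: str) -> dict:
    return {
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,        # distinguishes records with the same name/type
        "GeoLocation": location,
        "TTL": 300,
        "ResourceRecords": [{"Value": value}],
    }

records = [
    geo_record("www.example.com", {"ContinentCode": "EU"}, "europe", "203.0.113.10"),
    geo_record("www.example.com", {"CountryCode": "*"}, "default", "198.51.100.10"),
]
# the "*" country code is the catch-all default location record, which handles
# visitors whose location does not match any other geolocation record
```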

QUESTION 2

  • (Exam Topic 1) A company uses Amazon Elasticsearch Service (Amazon ES) to analyze sales and customer usage data. Members of the company’s geographically dispersed sales team are traveling. They need to log in to Kibana by using their existing corporate credentials that are stored in Active Directory. The company has deployed Active Directory Federation Services (AD FS) to enable authentication to cloud services. Which solution will meet these requirements?

    A. Configure Active Directory as an authentication provider in Amazon ES. Add the Active Directory server’s domain name to Amazon ES. Configure Kibana to use Amazon ES authentication.
    B. Deploy an Amazon Cognito user pool. Configure Active Directory as an external identity provider for the user pool. Enable Amazon Cognito authentication for Kibana on Amazon ES.
    C. Enable Active Directory user authentication in Kibana. Create an IP-based custom domain access policy in Amazon ES that includes the Active Directory server’s IP address.
    D. Establish a trust relationship with Kibana on the Active Directory server. Enable Active Directory user authentication in Kibana. Add the Active Directory server’s IP address to Kibana.

Correct Answer: B https://aws.amazon.com/blogs/security/how-to-enable-secure-access-to-kibana-using-aws-single-sign-on/ https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html
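As a sketch of the correct answer's final step, enabling Cognito authentication for Kibana is a `CognitoOptions` setting on the Amazon ES domain configuration; all of the pool IDs and the role ARN below are hypothetical placeholders.

```python
# Sketch of the domain-config change that enables Amazon Cognito authentication
# for Kibana on an Amazon ES domain. IDs and ARNs are hypothetical placeholders.

def cognito_kibana_config(domain: str, user_pool: str,
                          identity_pool: str, role_arn: str) -> dict:
    return {
        "DomainName": domain,
        "CognitoOptions": {
            "Enabled": True,
            "UserPoolId": user_pool,        # the pool federated with AD FS
            "IdentityPoolId": identity_pool,
            "RoleArn": role_arn,            # lets Amazon ES configure the pools
        },
    }

cfg = cognito_kibana_config(
    "sales-analytics",
    "us-east-1_hypothetical",
    "us-east-1:11111111-2222-3333-4444-555555555555",
    "arn:aws:iam::123456789012:role/CognitoAccessForAmazonES",
)
```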

QUESTION 3

  • (Exam Topic 1) A SysOps administrator must ensure that a company’s Amazon EC2 instances auto scale as expected. The SysOps administrator configures an Amazon EC2 Auto Scaling lifecycle hook to send an event to Amazon EventBridge (Amazon CloudWatch Events), which then invokes an AWS Lambda function to configure the EC2 instances. When the configuration is complete, the Lambda function calls the CompleteLifecycleAction API to put the EC2 instances into service. In testing, the SysOps administrator discovers that the Lambda function is not invoked when the EC2 instances auto scale. What should the SysOps administrator do to resolve this issue?

    A. Add a permission to the Lambda function so that it can be invoked by the EventBridge (CloudWatch Events) rule.
    B. Change the lifecycle hook action to CONTINUE if the lifecycle hook experiences a failure or timeout.
    C. Configure a retry policy in the EventBridge (CloudWatch Events) rule to retry the Lambda function invocation upon failure.
    D. Update the Lambda function execution role so that it has permission to call the CompleteLifecycleAction API.

Correct Answer: D
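As a sketch of the correct answer, the Lambda execution role needs an allow statement for `autoscaling:CompleteLifecycleAction`, and the function then completes the hook with a `CONTINUE` result. The hook, group, and instance names below are hypothetical placeholders.

```python
# Sketch: the IAM statement the Lambda execution role needs, plus the arguments
# the function would pass when completing the lifecycle action. Names are
# hypothetical placeholders.

role_statement = {
    "Effect": "Allow",
    "Action": "autoscaling:CompleteLifecycleAction",
    "Resource": "*",   # or scope to the specific Auto Scaling group ARN
}

def complete_action(hook: str, asg: str, instance_id: str) -> dict:
    return {
        "LifecycleHookName": hook,
        "AutoScalingGroupName": asg,
        "LifecycleActionResult": "CONTINUE",  # put the instance into service
        "InstanceId": instance_id,
    }

kwargs = complete_action("launch-hook", "web-asg", "i-0123456789abcdef0")
# without the role_statement permission, this call from the Lambda function
# fails and instances never transition into service
```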

QUESTION 4

  • (Exam Topic 1) An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A SysOps administrator must ensure that the application can read, write, and delete messages from the SQS queues. Which solution will meet these requirements in the MOST secure manner?

    A. Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Embed the IAM user’s credentials in the application’s configuration.
    B. Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Export the IAM user’s access key and secret access key as environment variables on the EC2 instance.
    C. Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows sqs:* permissions to the appropriate queues.
    D. Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues.

Correct Answer: D
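As a sketch, the least-privilege policy from the correct answer would look like the document below, scoped to a single queue ARN rather than a wildcard; the queue ARN is a hypothetical placeholder.

```python
# Sketch of the least-privilege IAM policy attached to the instance role:
# only the three SQS actions the application needs, scoped to one queue.
# The queue ARN is a hypothetical placeholder.
import json

def sqs_queue_policy(queue_arn: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "sqs:SendMessage",
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
            ],
            "Resource": queue_arn,   # the appropriate queue, not "*"
        }],
    }
    return json.dumps(policy)

doc = sqs_queue_policy("arn:aws:sqs:us-east-1:123456789012:orders-queue")
```

Attaching this to an instance role (rather than embedding IAM user keys) means credentials are rotated automatically and never stored on the instance.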

QUESTION 5

  • (Exam Topic 1) A company has multiple AWS Site-to-Site VPN connections between a VPC and its branch offices. The company manages an Amazon Elasticsearch Service (Amazon ES) domain that is configured with public access. The Amazon ES domain has an open domain access policy. A SysOps administrator needs to ensure that Amazon ES can be accessed only from the branch offices while preserving existing data. Which solution will meet these requirements?

    A. Configure an identity-based access policy on Amazon ES. Add an allow statement to the policy that includes the Amazon Resource Name (ARN) for each branch office VPN connection.
    B. Configure an IP-based domain access policy on Amazon ES. Add an allow statement to the policy that includes the private IP CIDR blocks from each branch office network.
    C. Deploy a new Amazon ES domain in private subnets in a VPC, and import a snapshot from the old domain. Create a security group that allows inbound traffic from the branch office CIDR blocks.
    D. Reconfigure the Amazon ES domain in private subnets in a VPC. Create a security group that allows inbound traffic from the branch office CIDR blocks.

Correct Answer: B
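As a sketch of the IP-based domain access policy in the correct answer, the condition key `aws:SourceIp` restricts the open policy to the branch-office CIDR blocks; the domain ARN and CIDR values are hypothetical placeholders.

```python
# Sketch of an IP-based Amazon ES domain access policy that allows requests
# only from branch-office CIDR blocks. The ARN and CIDRs are hypothetical.
import json

def ip_access_policy(domain_arn: str, cidrs: list[str]) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "es:*",
            "Resource": f"{domain_arn}/*",
            # the condition narrows the otherwise-open policy to these networks
            "Condition": {"IpAddress": {"aws:SourceIp": cidrs}},
        }],
    }
    return json.dumps(policy)

doc = ip_access_policy(
    "arn:aws:es:us-east-1:123456789012:domain/sales-analytics",
    ["10.10.0.0/16", "10.20.0.0/16"],
)
```

Because this only updates the access policy, the existing domain and its data are preserved, unlike the options that rebuild the domain in a VPC.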

QUESTION 6

  • (Exam Topic 1) A SysOps administrator has created an Amazon EC2 instance using an AWS CloudFormation template in the us-east-1 Region. The administrator finds that this template has failed to create an EC2 instance in the us-west-2 Region. What is one cause for this failure?

    A. Resource tags defined in the CloudFormation template are specific to the us-east-1 Region.
    B. The Amazon Machine Image (AMI) ID referenced in the CloudFormation template could not be found in the us-west-2 Region.
    C. The cfn-init script did not run during resource provisioning in the us-west-2 Region.
    D. The IAM user was not created in the specified Region.

Correct Answer: B One possible cause for the failure of the CloudFormation template to create an EC2 instance in the us-west-2 Region is that the Amazon Machine Image (AMI) ID referenced in the template could not be found in the us-west-2 Region. AMI IDs are Region-specific, so an AMI that exists in us-east-1 is not available under the same ID in us-west-2. The other options (resource tags defined in the CloudFormation template being specific to the us-east-1 Region, the cfn-init script not running during resource provisioning in the us-west-2 Region, and the IAM user not being created in the specified Region) are not valid causes for this failure.
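AMI IDs are Region-specific, which is the root cause here. A portable template maps each Region to its own AMI ID, in the spirit of a CloudFormation Mappings section looked up with Fn::FindInMap; the sketch below uses placeholder AMI IDs:

```python
# Region-keyed AMI map, analogous to a CloudFormation Mappings section.
# The AMI IDs below are placeholders, not real images.
region_ami_map = {
    "us-east-1": {"AMI": "ami-0aaaaaaaaaaaaaaaa"},
    "us-west-2": {"AMI": "ami-0bbbbbbbbbbbbbbbb"},
}

def find_in_map(region: str) -> str:
    """Rough stand-in for Fn::FindInMap: look up the AMI for a Region."""
    try:
        return region_ami_map[region]["AMI"]
    except KeyError:
        # This is the failure mode the question describes: no AMI exists
        # for the Region in which the stack is being created.
        raise ValueError(f"No AMI mapped for region {region!r}") from None

print(find_in_map("us-west-2"))
```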

QUESTION 7

  • (Exam Topic 1) A company has a public website that recently experienced problems. Some links led to missing webpages, and other links rendered incorrect webpages. The application infrastructure was running properly, and all the provisioned resources were healthy. Application logs and dashboards did not show any errors, and no monitoring alarms were raised. Systems administrators were not aware of any problems until end users reported the issues. The company needs to proactively monitor the website for such issues in the future and must implement a solution as soon as possible. Which solution will meet these requirements with the LEAST operational overhead?

    A. Rewrite the application to surface a custom error to the application log when issues occur. Automatically parse logs for errors. Create an Amazon CloudWatch alarm to provide alerts when issues are detected. B. Create an AWS Lambda function to test the website. Configure the Lambda function to emit an Amazon CloudWatch custom metric when errors are detected. Configure a CloudWatch alarm to provide alerts when issues are detected. C. Create an Amazon CloudWatch Synthetics canary. Use the CloudWatch Synthetics Recorder plugin to generate the script for the canary run. Configure the canary in line with requirements. Create an alarm to provide alerts when issues are detected.

Correct Answer: C A CloudWatch Synthetics canary proactively exercises the website's pages and links on a schedule without any application changes, which is the least operational overhead; the other options require writing and maintaining custom code.

QUESTION 8

  • (Exam Topic 1) A company hosts a database on an Amazon RDS Multi-AZ DB instance. The database is not encrypted. The company’s new security policy requires all AWS resources to be encrypted at rest and in transit. What should a SysOps administrator do to encrypt the database?

    A. Configure encryption on the existing DB instance. B. Take a snapshot of the DB instance. Encrypt the snapshot. Restore the snapshot to the same DB instance. C. Encrypt the standby replica in a secondary Availability Zone. Promote the standby replica to the primary DB instance. D. Take a snapshot of the DB instance. Copy and encrypt the snapshot. Create a new DB instance by restoring the encrypted copy.

Correct Answer: D An unencrypted DB instance cannot be encrypted in place, and an unencrypted snapshot cannot be encrypted directly. The snapshot must be copied with encryption enabled, and a new DB instance must be restored from the encrypted copy.

QUESTION 9

  • (Exam Topic 1) A company has an Amazon CloudFront distribution that uses an Amazon S3 bucket as its origin. During a review of the access logs, the company determines that some requests are going directly to the S3 bucket by using the website hosting endpoint. A SysOps administrator must secure the S3 bucket to allow requests only from CloudFront. What should the SysOps administrator do to meet this requirement?

    A. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Remove access to and from other principals in the S3 bucket policy. Update the S3 bucket policy to allow access only from the OAI. B. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin. C. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin. D. Update the S3 bucket policy to allow access only from the CloudFront distribution. Remove access to and from other principals in the S3 bucket policy. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.

Correct Answer: C An origin access identity works only with the S3 REST API endpoint, not the website hosting endpoint, so the distribution needs a new S3 origin, website hosting must be disabled, and the bucket policy must allow access only from the OAI.

QUESTION 10

  • (Exam Topic 1) A SysOps administrator is investigating why a user has been unable to use RDP to connect over the internet from their home computer to a bastion server running on an Amazon EC2 Windows instance. Which of the following are possible causes of this issue? (Choose two.)

    A. A network ACL associated with the bastion’s subnet is blocking the network traffic. B. The instance does not have a private IP address. C. The route table associated with the bastion’s subnet does not have a route to the internet gateway. D. The security group for the instance does not have an inbound rule on port 22. E. The security group for the instance does not have an outbound rule on port 3389.

Correct Answer: A C

QUESTION 11

  • (Exam Topic 1) A SysOps administrator recently configured Amazon S3 Cross-Region Replication on an S3 bucket. Which of the following does this feature replicate to the destination S3 bucket by default?

    A. Objects in the source S3 bucket for which the bucket owner does not have permissions B. Objects that are stored in S3 Glacier C. Objects that existed before replication was configured D. Object metadata

Correct Answer: D By default, S3 Cross-Region Replication copies newly written objects together with their metadata. It does not replicate objects that existed before replication was configured, objects for which the bucket owner lacks permissions, or objects that have been archived to S3 Glacier.

QUESTION 12

  • (Exam Topic 1) A SysOps administrator is required to monitor free space on Amazon EBS volumes attached to Microsoft Windows-based Amazon EC2 instances within a company’s account. The administrator must be alerted to potential issues. What should the administrator do to receive email alerts before low storage space affects EC2 instance performance?

    A. Use built-in Amazon CloudWatch metrics, and configure CloudWatch alarms and an Amazon SNS topic for email notifications. B. Use AWS CloudTrail logs and configure the trail to send notifications to an Amazon SNS topic. C. Use the Amazon CloudWatch agent to send disk space metrics, then set up CloudWatch alarms using an Amazon SNS topic. D. Use AWS Trusted Advisor and enable email notification alerts for EC2 disk space.

Correct Answer: C
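A minimal sketch of the CloudWatch agent configuration that option C implies for a Windows instance, publishing disk free space and memory counters as custom metrics (counter names follow the agent's Windows conventions; treat the exact metric set as an assumption to verify against the agent documentation):

```python
import json

# Sketch of a CloudWatch agent config for a Windows instance. Disk space
# and memory are not built-in EC2 metrics, so the agent must collect them.
agent_config = {
    "metrics": {
        "namespace": "CWAgent",
        "metrics_collected": {
            "LogicalDisk": {
                "measurement": ["% Free Space"],  # free disk space per volume
                "resources": ["*"],               # all attached volumes
            },
            "Memory": {
                "measurement": ["% Committed Bytes In Use"],  # memory usage
            },
        },
    }
}

print(json.dumps(agent_config, indent=2))
```

CloudWatch alarms on these custom metrics can then publish to an SNS topic with an email subscription.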

QUESTION 13

  • (Exam Topic 1) A SysOps administrator is creating an Amazon EC2 Auto Scaling group in a new AWS account. After adding some instances, the SysOps administrator notices that the group has not reached the minimum number of instances. The SysOps administrator receives the following error message (exhibit omitted). Which action will resolve this issue?

    A. Adjust the account spending limits for Amazon EC2 on the AWS Billing and Cost Management console. B. Modify the EC2 quota for that AWS Region in the EC2 Settings section of the EC2 console. C. Request a quota increase for the instance type family by using Service Quotas on the AWS Management Console. D. Use the Rebalance action in the Auto Scaling group on the AWS Management Console.

Correct Answer: C

QUESTION 14

  • (Exam Topic 1) A company has a stateless application that runs on four Amazon EC2 instances. The application requires four instances at all times to support all traffic. A SysOps administrator must design a highly available, fault-tolerant architecture that continually supports all traffic if one Availability Zone becomes unavailable. Which configuration meets these requirements?

    A. Deploy two Auto Scaling groups in two Availability Zones with a minimum capacity of two instances in each group. B. Deploy an Auto Scaling group across two Availability Zones with a minimum capacity of four instances. C. Deploy an Auto Scaling group across three Availability Zones with a minimum capacity of four instances. D. Deploy an Auto Scaling group across three Availability Zones with a minimum capacity of six instances.

Correct Answer: D With six instances spread across three Availability Zones (two in each zone), the loss of any single zone still leaves four running instances, so all traffic remains supported. With only four instances across two or three zones, a zone failure drops capacity below the required four until replacements launch.
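The capacity arithmetic behind these options can be checked with a short helper, assuming instances are spread as evenly as possible across zones:

```python
import math

def surviving_capacity(total_instances: int, az_count: int) -> int:
    """Worst-case capacity left after one Availability Zone is lost,
    assuming instances are spread as evenly as possible across AZs."""
    # The zone that fails may hold the larger share of an uneven spread.
    worst_case_loss = math.ceil(total_instances / az_count)
    return total_instances - worst_case_loss

# Six instances across three AZs still leave four after losing one AZ.
assert surviving_capacity(6, 3) == 4
# Four instances across three AZs can drop to two, below the requirement.
assert surviving_capacity(4, 3) == 2
# Four instances across two AZs also drop to two.
assert surviving_capacity(4, 2) == 2
```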

QUESTION 15

  • (Exam Topic 1) A company’s financial department needs to view the cost details of each project in an AWS account. A SysOps administrator must perform the initial configuration that is required to view the cost for each project in Cost Explorer. Which solution will meet this requirement?

    A. Activate cost allocation tags. Add a project tag to the appropriate resources. B. Configure consolidated billing. Create AWS Cost and Usage Reports. C. Use AWS Budgets. Create AWS Budgets reports. D. Use cost categories to define custom groups that are based on AWS cost and usage dimensions.

Correct Answer: A

QUESTION 16

  • (Exam Topic 1) A company’s SysOps administrator deploys a public Network Load Balancer (NLB) in front of the company’s web application. The web application does not use any Elastic IP addresses. Users must access the web application by using the company’s domain name. The SysOps administrator needs to configure Amazon Route 53 to route traffic to the NLB. Which solution will meet these requirements MOST cost-effectively?

    A. Create a Route 53 AAAA record for the NLB. B. Create a Route 53 alias record for the NLB. C. Create a Route 53 CAA record for the NLB. D. Create a Route 53 CNAME record for the NLB.

Correct Answer: B
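A sketch of the Route 53 change batch an alias record uses; the domain name, NLB DNS name, and hosted zone ID are placeholders. Alias records can sit at the zone apex, and alias queries to AWS resources are not billed, unlike CNAME lookups:

```python
# Shape of a Route 53 ChangeResourceRecordSets change batch that creates an
# alias A record pointing at an NLB. All names and IDs are placeholders.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    # Hosted zone ID of the load balancer, not the domain's zone.
                    "HostedZoneId": "Z00000000000000000000",
                    "DNSName": "my-nlb-0123456789abcdef.elb.us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }
    ]
}

record = change_batch["Changes"][0]["ResourceRecordSet"]
print(record["Type"], "->", record["AliasTarget"]["DNSName"])
```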

QUESTION 17

  • (Exam Topic 1) A large company is using AWS Organizations to manage hundreds of AWS accounts across multiple AWS Regions. The company has turned on AWS Config throughout the organization. The company requires all Amazon S3 buckets to block public read access. A SysOps administrator must generate a monthly report that shows all the S3 buckets and whether they comply with this requirement. Which combination of steps should the SysOps administrator take to collect this data? (Select TWO.)

    A. Create an AWS Config aggregator in an aggregator account. Use the organization as the source. Retrieve the compliance data from the aggregator. B. Create an AWS Config aggregator in each account. Use an S3 bucket in an aggregator account as the destination. Retrieve the compliance data from the S3 bucket. C. Edit the AWS Config policy in AWS Organizations. Use the organization’s management account to turn on the s3-bucket-public-read-prohibited rule for the entire organization. D. Use the AWS Config compliance report from the organization’s management account. Filter the results by resource, and select Amazon S3. E. Use the AWS Config API to apply the s3-bucket-public-read-prohibited rule in all accounts for all available Regions.

Correct Answer: A E The s3-bucket-public-read-prohibited managed rule must be enabled in every account and Region, and an AWS Config aggregator that uses the organization as its source collects the resulting compliance data in one place for the monthly report.
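For reference, an organization-sourced AWS Config aggregator is created with parameters shaped like the following (a sketch of the put_configuration_aggregator request; the aggregator name and role ARN are placeholders):

```python
# Sketch of the request AWS Config's PutConfigurationAggregator API takes
# when the organization is the source. The role must allow Config to read
# organization details; the ARN below is a placeholder.
aggregator_request = {
    "ConfigurationAggregatorName": "org-s3-compliance",
    "OrganizationAggregationSource": {
        "RoleArn": "arn:aws:iam::123456789012:role/ConfigAggregatorRole",
        "AllAwsRegions": True,  # aggregate compliance data from every Region
    },
}

print(aggregator_request["ConfigurationAggregatorName"])
```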

QUESTION 18

  • (Exam Topic 1) A company’s SysOps administrator deploys four new Amazon EC2 instances by using the standard Amazon Linux 2 Amazon Machine Image (AMI). The company needs to be able to use AWS Systems Manager to manage the instances. The SysOps administrator notices that the instances do not appear in the Systems Manager console. What must the SysOps administrator do to resolve this issue?

    A. Connect to each instance by using SSH. Install Systems Manager Agent on each instance. Configure Systems Manager Agent to start automatically when the instances start up. B. Use AWS Certificate Manager (ACM) to create a TLS certificate. Import the certificate into each instance. Configure Systems Manager Agent to use the TLS certificate for secure communications. C. Connect to each instance by using SSH. Create an ssm-user account. Add the ssm-user account to the /etc/sudoers.d directory. D. Attach an IAM instance profile to the instances. Ensure that the instance profile contains the AmazonSSMManagedInstanceCore policy.

Correct Answer: D

QUESTION 19

  • (Exam Topic 1) A SysOps administrator is trying to set up an Amazon Route 53 domain name to route traffic to a website hosted on Amazon S3. The domain name of the website is www.anycompany.com and the S3 bucket name is anycompany-static. After the record set is set up in Route 53, the domain name www.anycompany.com does not seem to work, and the static website is not displayed in the browser. Which of the following is a cause of this?

    A. The S3 bucket must be configured with Amazon CloudFront first. B. The Route 53 record set must have an IAM role that allows access to the S3 bucket. C. The Route 53 record set must be in the same region as the S3 bucket. D. The S3 bucket name must match the record set name in Route 53.

Correct Answer: D

QUESTION 20

  • (Exam Topic 1) A company needs to view a list of security groups that are open to the internet on port 3389. What should a SysOps administrator do to meet this requirement?

    A. Configure Amazon GuardDuty to scan security groups and report unrestricted access on port 3389. B. Configure a service control policy (SCP) to identify security groups that allow unrestricted access on port 3389. C. Use AWS Identity and Access Management Access Analyzer to find any instances that have unrestricted access on port 3389. D. Use AWS Trusted Advisor to find security groups that allow unrestricted access on port 3389.

Correct Answer: D

QUESTION 21

  • (Exam Topic 1) A gaming application is deployed on four Amazon EC2 instances in a default VPC. The SysOps administrator has noticed consistently high latency in responses as data is transferred among the four instances. There is no way for the administrator to alter the application code. The MOST effective way to reduce latency is to relaunch the EC2 instances in:

    A. a dedicated VPC. B. a single subnet inside the VPC. C. a placement group. D. a single Availability Zone.

Correct Answer: C

QUESTION 22

  • (Exam Topic 1) A company runs an application on Amazon EC2 instances. The EC2 instances are in an Auto Scaling group and run behind an Application Load Balancer (ALB). The application experiences errors when total requests exceed 100 requests per second. A SysOps administrator must collect information about total requests for a 2-week period to determine when requests exceeded this threshold. What should the SysOps administrator do to collect this data?

    A. Use the ALB’s RequestCount metric. Configure a time range of 2 weeks and a period of 1 minute. Examine the chart to determine peak traffic times and volumes. B. Use Amazon CloudWatch metric math to generate a sum of request counts for all the EC2 instances over a 2-week period. Sort by a 1-minute interval. C. Create Amazon CloudWatch custom metrics on the EC2 launch configuration templates to create aggregated request metrics across all the EC2 instances. D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule. Configure an EC2 event matching pattern that creates a metric that is based on EC2 requests. Display the data in a graph.

Correct Answer: A Using the ALB’s RequestCount metric will allow the SysOps administrator to collect information about total requests for a 2-week period and determine when requests exceeded the threshold of 100 requests per second. Configuring a time range of 2 weeks and a period of 1 minute will ensure that the data can be accurately examined to determine peak traffic times and volumes.
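RequestCount is a per-period sum, so with a 1-minute period the 100 requests-per-second threshold corresponds to 6,000 requests per datapoint. A small helper illustrates the scan (a sketch with made-up datapoints):

```python
# RequestCount datapoints are sums over the period, so a rate threshold in
# requests/second must be converted to a per-datapoint count before comparing.
def breaches(per_minute_counts, threshold_rps=100, period_seconds=60):
    """Return the indices of datapoints whose average request rate
    exceeds the threshold."""
    limit = threshold_rps * period_seconds  # 100 rps * 60 s = 6000 per minute
    return [i for i, count in enumerate(per_minute_counts) if count > limit]

# Example: minute 1 averages 200 req/s; minutes 0 and 2 stay under 100 req/s.
assert breaches([3000, 12000, 5999]) == [1]
```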

QUESTION 23

  • (Exam Topic 1) A company uses AWS CloudFormation templates to deploy cloud infrastructure. An analysis of all the company’s templates shows that the company has declared the same components in multiple templates. A SysOps administrator needs to create dedicated templates that have their own parameters and conditions for these common components. Which solution will meet this requirement?

    A. Develop a CloudFormation change set. B. Develop CloudFormation macros. C. Develop CloudFormation nested stacks. D. Develop CloudFormation stack sets.

Correct Answer: C
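A nested stack is declared in the parent template as an AWS::CloudFormation::Stack resource that points at the shared component template and passes its own parameters; the template URL and parameter below are placeholders:

```python
# Sketch of a parent-template resource that pulls in a shared component as a
# nested stack. The TemplateURL and parameter values are placeholders.
nested_stack_resource = {
    "NetworkStack": {
        "Type": "AWS::CloudFormation::Stack",
        "Properties": {
            # Shared component template, maintained once and reused everywhere.
            "TemplateURL": "https://s3.amazonaws.com/templates-bucket/network.yaml",
            # Each parent stack can pass its own parameter values.
            "Parameters": {"VpcCidr": "10.0.0.0/16"},
        },
    }
}

print(nested_stack_resource["NetworkStack"]["Type"])
```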

QUESTION 24

  • (Exam Topic 1) A database is running on an Amazon RDS Multi-AZ DB instance. A recent security audit found the database to be out of compliance because it was not encrypted. Which approach will resolve the encryption requirement?

    A. Log in to the RDS console and select the encryption box to encrypt the database. B. Create a new encrypted Amazon EBS volume and attach it to the instance. C. Encrypt the standby replica in the secondary Availability Zone and promote it to the primary instance. D. Take a snapshot of the RDS instance, copy and encrypt the snapshot, and then restore to the new RDS instance.

Correct Answer: D

QUESTION 25

  • (Exam Topic 1) A company has an internal web application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone. A SysOps administrator must make the application highly available. Which action should the SysOps administrator take to meet this requirement?

    A. Increase the maximum number of instances in the Auto Scaling group to meet the capacity that is required at peak usage. B. Increase the minimum number of instances in the Auto Scaling group to meet the capacity that is required at peak usage. C. Update the Auto Scaling group to launch new instances in a second Availability Zone in the same AWS Region. D. Update the Auto Scaling group to launch new instances in an Availability Zone in a second AWS Region.

Correct Answer: C

QUESTION 26

  • (Exam Topic 1) A company is running a serverless application on AWS Lambda. The application stores data in an Amazon RDS for MySQL DB instance. Usage has steadily increased, and recently there have been numerous “too many connections” errors when the Lambda function attempts to connect to the database. The company already has configured the database to use the maximum max_connections value that is possible. What should a SysOps administrator do to resolve these errors?

    A. Create a read replica of the database. Use Amazon Route 53 to create a weighted DNS record that contains both databases. B. Use Amazon RDS Proxy to create a proxy. Update the connection string in the Lambda function. C. Increase the value in the max_connect_errors parameter in the parameter group that the database uses. D. Update the Lambda function’s reserved concurrency to a higher value.

Correct Answer: B https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/ RDS Proxy acts as an intermediary between your application and an RDS database. RDS Proxy establishes and manages the necessary connection pools to your database so that your application creates fewer database connections. Your Lambda functions interact with RDS Proxy instead of your database instance. It handles the connection pooling necessary for scaling many simultaneous connections created by concurrent Lambda functions. This allows your Lambda applications to reuse existing connections, rather than creating new connections for every function invocation. See the “Database proxy for Amazon RDS” section in the linked post for how RDS Proxy helps Lambda handle a large number of connections to RDS for MySQL.

QUESTION 27

  • (Exam Topic 1) A SysOps administrator needs to delete an AWS CloudFormation stack that is no longer in use. The CloudFormation stack is in the DELETE_FAILED state. The SysOps administrator has validated the permissions that are required to delete the CloudFormation stack. What are possible causes of the DELETE_FAILED state? (Select TWO.)

    A. The configured timeout to delete the stack was too low for the delete operation to complete. B. The stack contains nested stacks that must be manually deleted first. C. The stack was deployed with the --disable-rollback option. D. There are additional resources associated with a security group in the stack. E. There are Amazon S3 buckets that still contain objects in the stack.

Correct Answer: D E

QUESTION 28

  • (Exam Topic 1) A company is using an Amazon DynamoDB table for data. A SysOps administrator must configure replication of the table to another AWS Region for disaster recovery. What should the SysOps administrator do to meet this requirement?

    A. Enable DynamoDB Accelerator (DAX). B. Enable DynamoDB Streams, and add a global secondary index (GSI). C. Enable DynamoDB Streams, and add a global table Region. D. Enable point-in-time recovery.

Correct Answer: C

QUESTION 29

  • (Exam Topic 1) A company has a VPC with public and private subnets. An Amazon EC2 based application resides in the private subnets and needs to process raw .csv files stored in an Amazon S3 bucket. A SysOps administrator has set up the correct IAM role with the required permissions for the application to access the S3 bucket, but the application is unable to communicate with the S3 bucket. Which action will solve this problem while adhering to least privilege access?

    A. Add a bucket policy to the S3 bucket permitting access from the IAM role. B. Attach an S3 gateway endpoint to the VPC. Configure the route table for the private subnet. C. Configure the route table to allow the instances on the private subnet access through the internet gateway. D. Create a NAT gateway in a private subnet and configure the route table for the private subnets.

Correct Answer: B Technology to use is a VPC endpoint - “A VPC endpoint enables private connections between your VPC and supported AWS services and VPC endpoint services powered by AWS PrivateLink. AWS PrivateLink is a technology that enables you to privately access services by using private IP addresses. Traffic between your VPC and the other service does not leave the Amazon network.” S3 is an example of a gateway endpoint. We want to see services in AWS while not leaving the VPC.
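Beyond attaching the gateway endpoint, the bucket policy can be tightened so that only traffic arriving through that endpoint is allowed; the bucket name and endpoint ID below are placeholders:

```python
# Sketch of a bucket policy that denies any S3 access not arriving through a
# specific gateway VPC endpoint, using the aws:sourceVpce condition key.
# Bucket name and endpoint ID are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::raw-csv-bucket",
                "arn:aws:s3:::raw-csv-bucket/*",
            ],
            "Condition": {
                # Deny everything that did NOT come through the endpoint.
                "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}
```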

QUESTION 30

  • (Exam Topic 1) A company has an existing web application that runs on two Amazon EC2 instances behind an Application Load Balancer (ALB) across two Availability Zones. The application uses an Amazon RDS Multi-AZ DB instance. Amazon Route 53 record sets route requests for dynamic content to the load balancer and requests for static content to an Amazon S3 bucket. Site visitors are reporting extremely long loading times. Which actions should be taken to improve the performance of the website? (Select TWO.)

    A. Add Amazon CloudFront caching for static content. B. Change the load balancer listener from HTTPS to TCP. C. Enable Amazon Route 53 latency-based routing. D. Implement Amazon EC2 Auto Scaling for the web servers. E. Move the static content from Amazon S3 to the web servers.

Correct Answer: A D

QUESTION 31

  • (Exam Topic 1) A company is trying to connect two applications. One application runs in an on-premises data center that has a hostname of host1.onprem.private. The other application runs on an Amazon EC2 instance that has a hostname of host1.awscloud.private. An AWS Site-to-Site VPN connection is in place between the on-premises network and AWS. The application that runs in the data center tries to connect to the application that runs on the EC2 instance, but DNS resolution fails. A SysOps administrator must implement DNS resolution between on-premises and AWS resources. Which solution allows the on-premises application to resolve the EC2 instance hostname?

    A. Set up an Amazon Route 53 inbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the inbound resolver endpoint. B. Set up an Amazon Route 53 inbound resolver endpoint. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the inbound resolver endpoint. C. Set up an Amazon Route 53 outbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the outbound resolver endpoint. D. Set up an Amazon Route 53 outbound resolver endpoint. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the outbound resolver endpoint.

Correct Answer: B To resolve AWS-side names from on premises, the on-premises DNS resolver must forward awscloud.private queries to a Route 53 Resolver inbound endpoint associated with the VPC of the EC2 instance; outbound endpoints forward queries in the other direction.

QUESTION 32

  • (Exam Topic 1) A SysOps administrator is provisioning an Amazon Elastic File System (Amazon EFS) file system to provide shared storage across multiple Amazon EC2 instances. The instances all exist in the same VPC across multiple Availability Zones. There are two instances in each Availability Zone. The SysOps administrator must make the file system accessible to each instance with the lowest possible latency. Which solution will meet these requirements?

    A. Create a mount target for the EFS file system in the VPC. Use the mount target to mount the file system on each of the instances. B. Create a mount target for the EFS file system in one Availability Zone of the VPC. Use the mount target to mount the file system on the instances in that Availability Zone. Share the directory with the other instances. C. Create a mount target for each instance. Use each mount target to mount the EFS file system on each respective instance. D. Create a mount target in each Availability Zone of the VPC. Use the mount target to mount the EFS file system on the instances in the respective Availability Zone.

Correct Answer: D A mount target provides an IP address for an NFSv4 endpoint at which you can mount an Amazon EFS file system. You mount your file system using its Domain Name Service (DNS) name, which resolves to the IP address of the EFS mount target in the same Availability Zone as your EC2 instance. You can create one mount target in each Availability Zone in an AWS Region. If there are multiple subnets in an Availability Zone in your VPC, you create a mount target in one of the subnets. Then all EC2 instances in that Availability Zone share that mount target. https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html
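The per-AZ mount-target model can be pictured as a simple lookup: each instance mounts through the target in its own Availability Zone (placeholder IPs; in practice the file system's DNS name resolves to the same-AZ target automatically):

```python
# Illustration of one mount target per Availability Zone. The AZ names and
# private IPs are placeholders.
mount_targets_by_az = {
    "us-east-1a": "10.0.1.25",
    "us-east-1b": "10.0.2.25",
    "us-east-1c": "10.0.3.25",
}

def mount_target_for(instance_az: str) -> str:
    """Pick the lowest-latency (same-AZ) mount target for an instance."""
    return mount_targets_by_az[instance_az]

# Both instances in us-east-1b share the us-east-1b mount target.
assert mount_target_for("us-east-1b") == "10.0.2.25"
```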

QUESTION 33

  • (Exam Topic 1) A company is planning to host its stateful web-based applications on AWS. A SysOps administrator is using an Auto Scaling group of Amazon EC2 instances. The web applications will run 24 hours a day, 7 days a week throughout the year. The company must be able to change the instance type within the same instance family later in the year based on the traffic and usage patterns. Which EC2 instance purchasing option will meet these requirements MOST cost-effectively?

    A. Convertible Reserved Instances B. On-Demand instances C. Spot instances D. Standard Reserved instances

Correct Answer: A https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-convertible-exchange.html

QUESTION 34

  • (Exam Topic 1) A SysOps administrator is setting up an automated process to recover an Amazon EC2 instance in the event of an underlying hardware failure. The recovered instance must have the same private IP address and the same Elastic IP address that the original instance had. The SysOps team must receive an email notification when the recovery process is initiated. Which solution will meet these requirements?

    A. Create an Amazon CloudWatch alarm for the EC2 instance, and specify the StatusCheckFailed_Instance metric. Add an EC2 action to the alarm to recover the instance. Add an alarm notification to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic. B. Create an Amazon CloudWatch alarm for the EC2 instance, and specify the StatusCheckFailed_System metric. Add an EC2 action to the alarm to recover the instance. Add an alarm notification to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic. C. Create an Auto Scaling group across three different subnets in the same Availability Zone with a minimum, maximum, and desired size of 1. Configure the Auto Scaling group to use a launch template that specifies the private IP address and the Elastic IP address. Add an activity notification for the Auto Scaling group to send an email message to the SysOps team through Amazon Simple Email Service (Amazon SES). D. Create an Auto Scaling group across three Availability Zones with a minimum, maximum, and desired size of 1. Configure the Auto Scaling group to use a launch template that specifies the private IP address and the Elastic IP address. Add an activity notification for the Auto Scaling group to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the SysOps team email address to the SNS topic.

Correct Answer: B You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair. Terminated instances cannot be recovered. A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata. If the impaired instance has a public IPv4 address, the instance retains the public IPv4 address after recovery. If the impaired instance is in a placement group, the recovered instance runs in the placement group. When the StatusCheckFailed_System alarm is triggered, and the recover action is initiated, you will be notified by the Amazon SNS topic that you selected when you created the alarm and associated the recover action. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
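A sketch of the PutMetricAlarm parameters the explanation describes; the Region, instance ID, and SNS topic ARN are placeholders:

```python
# Sketch of CloudWatch PutMetricAlarm parameters: watch the system status
# check, recover the instance, and notify an SNS topic for the email alert.
# Instance ID, Region, and topic ARN are placeholders.
alarm_params = {
    "AlarmName": "system-check-recover",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": [
        "arn:aws:automate:us-east-1:ec2:recover",           # EC2 recover action
        "arn:aws:sns:us-east-1:123456789012:sysops-alerts", # email via SNS
    ],
}

print(alarm_params["MetricName"])
```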

QUESTION 35

  • (Exam Topic 1) A company is creating a new multi-account architecture. A Sysops administrator must implement a login solution to centrally manage user access and permissions across all AWS accounts. The solution must be integrated with AWS Organizations and must be connected to a third-party Security Assertion Markup Language (SAML) 2.0 identity provider (IdP). What should the SysOps administrator do to meet these requirements?

    A. Configure an Amazon Cognito user pool. Integrate the user pool with the third-party IdP. B. Enable and configure AWS Single Sign-On with the third-party IdP. C. Federate the third-party IdP with AWS Identity and Access Management (IAM) for each AWS account in the organization. D. Integrate the third-party IdP directly with AWS Organizations.

Correct Answer: B AWS Single Sign-On (now AWS IAM Identity Center) integrates with AWS Organizations and supports external SAML 2.0 identity providers, so user access and permissions can be managed centrally across all accounts.

QUESTION 36

  • (Exam Topic 1) A SysOps administrator must create a solution that automatically shuts down any Amazon EC2 instances that have less than 10% average CPU utilization for 60 minutes or more. Which solution will meet this requirement in the MOST operationally efficient manner?

    A. Implement a cron job on each EC2 instance to run once every 60 minutes and calculate the current CPU utilization. Initiate an instance shutdown if CPU utilization is less than 10%. B. Implement an Amazon CloudWatch alarm for each EC2 instance to monitor average CPU utilization. Set the period at 1 hour, and set the threshold at 10%. Configure an EC2 action on the alarm to stop the instance. C. Install the unified Amazon CloudWatch agent on each EC2 instance, and enable the Basic level predefined metric set. Log CPU utilization every 60 minutes, and initiate an instance shutdown if CPU utilization is less than 10%. D. Use AWS Systems Manager Run Command to get CPU utilization from each EC2 instance every 60 minutes. Initiate an instance shutdown if CPU utilization is less than 10%.

Correct Answer: B https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
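The alarm in option B can be sketched as PutMetricAlarm parameters (the Region in the action ARN and the instance ID are placeholders):

```python
# Sketch of a CloudWatch alarm that stops an instance when its average CPU
# utilization stays below 10% for one hour. Instance ID and Region are
# placeholders.
low_cpu_alarm = {
    "AlarmName": "stop-idle-instance",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 3600,          # one hour, matching the 60-minute requirement
    "EvaluationPeriods": 1,
    "Threshold": 10.0,
    "ComparisonOperator": "LessThanThreshold",
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:stop"],  # EC2 stop action
}

print(low_cpu_alarm["AlarmName"])
```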

QUESTION 37

  • (Exam Topic 1) A global company handles a large amount of personally identifiable information (PII) through an internal web portal. The company’s application runs in a corporate data center that is connected to AWS through an AWS Direct Connect connection. The application stores the PII in Amazon S3. According to a compliance requirement, traffic from the web portal to Amazon S3 must not travel across the internet. What should a SysOps administrator do to meet the compliance requirement?

    A. Provision an interface VPC endpoint for Amazon S3. Modify the application to use the interface endpoint. B. Configure AWS Network Firewall to redirect traffic to the internal S3 address. C. Modify the application to use the S3 path-style endpoint. D. Set up a range of VPC network ACLs to redirect traffic to the Internal S3 address.

Correct Answer: A Interface VPC endpoints for Amazon S3 can be reached from on premises over AWS Direct Connect, which keeps the traffic off the internet. AWS Network Firewall cannot redirect traffic to an "internal S3 address."
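For reference, an interface VPC endpoint for Amazon S3 (option A) can be declared in CloudFormation roughly as follows; the VPC and subnet IDs are placeholders:

```yaml
S3InterfaceEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcEndpointType: Interface
    ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
    VpcId: vpc-0123456789abcdef0            # placeholder VPC ID
    SubnetIds:
      - subnet-0123456789abcdef0            # placeholder subnet ID
    PrivateDnsEnabled: false                # S3 interface endpoints use endpoint-specific DNS names
```

The on-premises application would then address the bucket through the endpoint's DNS name, so requests traverse Direct Connect and the endpoint's elastic network interface rather than the internet.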

QUESTION 38

  • (Exam Topic 1) A company recently migrated its server infrastructure to Amazon EC2 instances. The company wants to use Amazon CloudWatch metrics to track instance memory utilization and available disk space. What should a SysOps administrator do to meet these requirements?

    A. Configure CloudWatch from the AWS Management Console for all the instances that require monitoring by CloudWatch. AWS automatically installs and configures the agents for the specified instances. B. Install and configure the CloudWatch agent on all the instances. Attach an IAM role to allow the instances to write logs to CloudWatch. C. Install and configure the CloudWatch agent on all the instances. Attach an IAM user to allow the instances to write logs to CloudWatch. D. Install and configure the CloudWatch agent on all the instances. Attach the necessary security groups to allow the instances to write logs to CloudWatch.

Correct Answer: B

QUESTION 39

  • (Exam Topic 1) A company runs a website from Sydney, Australia. Users in the United States (US) and Europe are reporting that images and videos are taking a long time to load. However, local testing in Australia indicates no performance issues. The website has a large amount of static content in the form of images and videos that are stored in Amazon S3. Which solution will result in the MOST improvement in the user experience for users in the US and Europe?

    A. Configure AWS PrivateLink for Amazon S3. B. Configure S3 Transfer Acceleration. C. Create an Amazon CloudFront distribution. Distribute the static content to the CloudFront edge locations. D. Create an Amazon API Gateway API in each AWS Region. Cache the content locally.

Correct Answer: C

QUESTION 40

  • (Exam Topic 1) A SysOps administrator is reviewing AWS Trusted Advisor warnings and encounters a warning for an S3 bucket policy that has open access permissions. While discussing the issue with the bucket owner, the administrator realizes the S3 bucket is an origin for an Amazon CloudFront web distribution. Which action should the administrator take to ensure that users access objects in Amazon S3 by using only CloudFront URLs?

    A. Encrypt the S3 bucket content with Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3). B. Create an origin access identity and grant it permissions to read objects in the S3 bucket. C. Assign an IAM user to the CloudFront distribution and grant the user permissions in the S3 bucket policy. D. Assign an IAM role to the CloudFront distribution and grant the role permissions in the S3 bucket policy.

Correct Answer: B https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3
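Once the origin access identity (OAI) exists, the bucket policy grants it read access and the bucket's public access is removed, so objects are reachable only through CloudFront. A sketch of such a policy, with placeholder OAI ID and bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-origin-bucket/*"
    }
  ]
}
```

With this in place, direct S3 URLs return 403 while CloudFront URLs continue to work, which also resolves the Trusted Advisor open-access warning.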

QUESTION 41

  • (Exam Topic 1) A SysOps administrator must set up notifications for whenever combined billing exceeds a certain threshold for all AWS accounts within a company. The administrator has set up AWS Organizations and enabled Consolidated Billing. Which additional steps must the administrator perform to set up the billing alerts?

    A. In the payer account: Enable billing alerts in the Billing and Cost Management console; publish an Amazon SNS message when the billing alert triggers. B. In each account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in Amazon CloudWatch; publish an SNS message when the alarm triggers. C. In the payer account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in the Billing and Cost Management console to publish an SNS message when the alarm triggers. D. In the payer account: Enable billing alerts in the Billing and Cost Management console; set up a billing alarm in Amazon CloudWatch; publish an SNS message when the alarm triggers.

Correct Answer: D

QUESTION 42

  • (Exam Topic 1) A large company is using AWS Organizations to manage its multi-account AWS environment. According to company policy, all users should have read-level access to a particular Amazon S3 bucket in a central account. The S3 bucket data should not be available outside the organization. A SysOps administrator must set up the permissions and add a bucket policy to the S3 bucket. Which parameters should be specified to accomplish this in the MOST efficient manner?

    A. Specify "*" as the principal and aws:PrincipalOrgID as a condition. B. Specify all account numbers as the principal. C. Specify aws:PrincipalOrgID as the principal. D. Specify the organization’s management account as the principal.

Correct Answer: A https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-p
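A bucket policy illustrating option A might look like the following sketch; the bucket name and organization ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadFromOrganization",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-central-bucket",
        "arn:aws:s3:::example-central-bucket/*"
      ],
      "Condition": {
        "StringEquals": { "aws:PrincipalOrgID": "o-exampleorgid" }
      }
    }
  ]
}
```

The `aws:PrincipalOrgID` condition key matches any principal in the organization, so the policy does not need to enumerate account numbers and automatically covers accounts added later.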

QUESTION 43

  • (Exam Topic 1) A SysOps administrator runs a web application that uses a microservices approach, whereby different responsibilities of the application have been divided into separate microservices, each running on a different Amazon EC2 instance. The administrator has been tasked with reconfiguring the infrastructure to support this approach. How can the administrator accomplish this with the LEAST administrative overhead?

    A. Use Amazon CloudFront to log the URL and forward the request. B. Use Amazon CloudFront to rewrite the header based on the microservice and forward the request. C. Use an Application Load Balancer (ALB) and do path-based routing. D. Use a Network Load Balancer (NLB) and do path-based routing.

Correct Answer: C https://aws.amazon.com/premiumsupport/knowledge-center/elb-achieve-path-based-routing-alb/
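Path-based routing on an ALB is configured as listener rules. A minimal CloudFormation sketch, assuming a listener and target group declared elsewhere in the template (the names `HttpListener` and `OrdersTargetGroup` are placeholders):

```yaml
OrdersListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpListener          # placeholder listener
    Priority: 10
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values:
            - /orders/*                     # requests under /orders go to this microservice
    Actions:
      - Type: forward
        TargetGroupArn: !Ref OrdersTargetGroup
```

One rule per microservice path lets a single ALB front all the EC2-hosted microservices; a Network Load Balancer operates at layer 4 and cannot inspect URL paths.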

QUESTION 44

  • (Exam Topic 1) A company has a stateless application that is hosted on a fleet of 10 Amazon EC2 On-Demand Instances in an Auto Scaling group. A minimum of 6 instances are needed to meet service requirements. Which action will maintain uptime for the application MOST cost-effectively?

    A. Use a Spot Fleet with an On-Demand capacity of 6 instances. B. Update the Auto Scaling group with a minimum of 6 On-Demand Instances and a maximum of 10 On-Demand Instances. C. Update the Auto Scaling group with a minimum of 1 On-Demand Instance and a maximum of 6 On-Demand Instances. D. Use a Spot Fleet with a target capacity of 6 instances.

Correct Answer: A
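Option A can be expressed in a Spot Fleet request configuration, where `TargetCapacity` covers all 10 instances and `OnDemandTargetCapacity` guarantees the 6-instance On-Demand baseline. All IDs and the role ARN below are placeholders:

```json
{
  "SpotFleetRequestConfig": {
    "TargetCapacity": 10,
    "OnDemandTargetCapacity": 6,
    "AllocationStrategy": "lowestPrice",
    "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
    "LaunchSpecifications": [
      {
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "m5.large",
        "SubnetId": "subnet-0123456789abcdef0"
      }
    ]
  }
}
```

The 6 On-Demand instances satisfy the service minimum even if all Spot capacity is reclaimed, while the remaining 4 instances run as Spot at a steep discount.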

QUESTION 45

  • (Exam Topic 1) While setting up an AWS managed VPN connection, a SysOps administrator creates a customer gateway resource in AWS. The customer gateway device resides in a data center with a NAT gateway in front of it. What address should be used to create the customer gateway resource?

    A. The private IP address of the customer gateway device B. The MAC address of the NAT device in front of the customer gateway device C. The public IP address of the customer gateway device D. The public IP address of the NAT device in front of the customer gateway device

Correct Answer: D

QUESTION 46

  • (Exam Topic 1) An environment consists of 100 Amazon EC2 Windows instances. The Amazon CloudWatch agent is deployed and running on all EC2 instances with a baseline configuration file to capture log files. There is a new requirement to capture the DHCP log files that exist on 50 of the instances. What is the MOST operationally efficient way to meet this new requirement?

    A. Create an additional CloudWatch agent configuration file to capture the DHCP logs. Use AWS Systems Manager Run Command to restart the CloudWatch agent on each EC2 instance with the append-config option to apply the additional configuration file. B. Log in to each EC2 instance with administrator rights. Create a PowerShell script to push the needed baseline log files and DHCP log files to CloudWatch. C. Run the CloudWatch agent configuration file wizard on each EC2 instance. Verify that the baseline log files are included, and add the DHCP log files during the wizard creation process. D. Run the CloudWatch agent configuration file wizard on each EC2 instance and select the advanced detail level. This will capture the operating system log files.

Correct Answer: A

QUESTION 47

  • (Exam Topic 1) A SysOps administrator launches an Amazon EC2 Linux instance in a public subnet. When the instance is running, the SysOps administrator obtains the public IP address and attempts to remotely connect to the instance multiple times. However, the SysOps administrator always receives a timeout error. Which action will allow the SysOps administrator to remotely connect to the instance?

    A. Add a route table entry in the public subnet for the SysOps administrator’s IP address. B. Add an outbound network ACL rule to allow TCP port 22 for the SysOps administrator’s IP address. C. Modify the instance security group to allow inbound SSH traffic from the SysOps administrator’s IP address. D. Modify the instance security group to allow outbound SSH traffic to the SysOps administrator’s IP address.

Correct Answer: C

QUESTION 48

  • (Exam Topic 1) A SysOps administrator has successfully deployed a VPC with an AWS CloudFormation template. The SysOps administrator wants to deploy the same template across multiple accounts that are managed through AWS Organizations. Which solution will meet this requirement with the LEAST operational overhead?

    A. Assume the OrganizationAccountAccessRole IAM role from the management account. Deploy the template in each of the accounts. B. Create an AWS Lambda function to assume a role in each account. Deploy the template by using the AWS CloudFormation CreateStack API call. C. Create an AWS Lambda function to query for a list of accounts. Deploy the template by using the AWS CloudFormation CreateStack API call. D. Use AWS CloudFormation StackSets from the management account to deploy the template in each of the accounts.

Correct Answer: D AWS CloudFormation StackSets extends the capability of stacks by enabling you to create, update, or delete stacks across multiple accounts and AWS Regions

QUESTION 49

  • (Exam Topic 1) A SysOps administrator is reviewing AWS Trusted Advisor recommendations. The SysOps administrator notices that all the application servers for a finance application are listed in the Low Utilization Amazon EC2 Instances check. The application runs on three instances across three Availability Zones. The SysOps administrator must reduce the cost of running the application without affecting the application’s availability or design. Which solution will meet these requirements?

    A. Reduce the number of application servers. B. Apply rightsizing recommendations from AWS Cost Explorer to reduce the instance size. C. Provision an Application Load Balancer in front of the instances. D. Scale up the instance size of the application servers.

Correct Answer: B Rightsizing the underutilized instances reduces cost while keeping three instances across three Availability Zones, so the application’s availability and design are unchanged. Adding an Application Load Balancer would not reduce cost.

QUESTION 50

  • (Exam Topic 1) An application runs on multiple Amazon EC2 instances in an Auto Scaling group. The Auto Scaling group is configured to use the latest version of a launch template. A SysOps administrator must devise a solution that centrally manages the application logs and retains the logs for no more than 90 days. Which solution will meet these requirements?

    A. Launch an Amazon Machine Image (AMI) that is preconfigured with the Amazon CloudWatch Logs agent to send logs to an Amazon S3 bucket. Apply a 90-day S3 Lifecycle policy on the S3 bucket to expire the application logs. B. Launch an Amazon Machine Image (AMI) that is preconfigured with the Amazon CloudWatch Logs agent to send logs to a log group. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule to perform an instance refresh every 90 days. C. Update the launch template user data to install and configure the Amazon CloudWatch Logs agent to send logs to a log group. Configure the retention period on the log group to be 90 days. D. Update the launch template user data to install and configure the Amazon CloudWatch Logs agent to send logs to a log group. Set the log rotation configuration of the EC2 instances to 90 days.

Correct Answer: C

QUESTION 51

  • (Exam Topic 1) A company’s SysOps administrator attempts to restore an Amazon Elastic Block Store (Amazon EBS) snapshot. However, the snapshot is missing because another system administrator accidentally deleted the snapshot. The company needs the ability to recover snapshots for a specified period of time after snapshots are deleted. Which solution will provide this functionality?

    A. Turn on deletion protection on individual EBS snapshots that need to be kept. B. Create an IAM policy that denies the deletion of EBS snapshots by using a condition statement for the snapshot age. Apply the policy to all users. C. Create a Recycle Bin retention rule for EBS snapshots for the desired retention period. D. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function to copy EBS snapshots to Amazon S3 Glacier.

Correct Answer: C Recycle Bin for Amazon EBS snapshots retains deleted snapshots for a configurable period, during which they can be recovered.

QUESTION 52

  • (Exam Topic 1) A company has a critical serverless application that uses multiple AWS Lambda functions. Each Lambda function generates 1 GB of log data daily in its own Amazon CloudWatch Logs log group. The company’s security team asks for a count of application errors, grouped by type, across all of the log groups. What should a SysOps administrator do to meet this requirement?

    A. Perform a CloudWatch Logs Insights query that uses the stats command and count function. B. Perform a CloudWatch Logs search that uses the groupby keyword and count function. C. Perform an Amazon Athena query that uses the SELECT and GROUP BY keywords. D. Perform an Amazon RDS query that uses the SELECT and GROUP BY keywords.

Correct Answer: A
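A CloudWatch Logs Insights query using the `stats` command and `count` function could look like the sketch below. The `parse` pattern and the `errorType` field are hypothetical and depend entirely on the application's log format:

```
filter @message like /ERROR/
| parse @message "errorType: *" as errorType
| stats count(*) as errorCount by errorType
```

Logs Insights can run one query across multiple selected log groups, which is what makes it suitable for counting errors across all of the functions' log groups without exporting the data.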

QUESTION 53

  • (Exam Topic 1) A company is testing Amazon Elasticsearch Service (Amazon ES) as a solution for analyzing system logs from a fleet of Amazon EC2 instances. During the test phase, the domain operates on a single-node cluster. A SysOps administrator needs to transition the test domain into a highly available production-grade deployment. Which Amazon ES configuration should the SysOps administrator use to meet this requirement?

    A. Use a cluster of four data nodes across two AWS Region B. Deploy four dedicated master nodes in each Region. C. Use a cluster of six data nodes across three Availability Zone D. Use three dedicated master nodes. E. Use a cluster of six data nodes across three Availability Zone F. Use six dedicated master nodes. G. Use a cluster of eight data nodes across two Availability Zone H. Deploy four master nodes in a failover AWS Region.

Correct Answer: B

QUESTION 54

  • (Exam Topic 1) A company is releasing a new static website hosted on Amazon S3. The static website hosting feature was enabled on the bucket and content was uploaded; however, upon navigating to the site, the following error message is received: 403 Forbidden - Access Denied. What change should be made to fix this error?

    A. Add a bucket policy that grants everyone read access to the bucket. B. Add a bucket policy that grants everyone read access to the bucket objects. C. Remove the default bucket policy that denies read access to the bucket. D. Configure cross-origin resource sharing (CORS) on the bucket.

Correct Answer: B
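Granting everyone read access to the bucket objects (not the bucket itself) is done with a policy like the following sketch; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-website-bucket/*"
    }
  ]
}
```

Note that the `Resource` ends in `/*`, which scopes the grant to objects; website visitors need `s3:GetObject` on the objects but no permissions on the bucket resource itself.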

QUESTION 55

  • (Exam Topic 1) A company has an Auto Scaling group of Amazon EC2 instances that scale based on average CPU utilization. The Auto Scaling group events log indicates an InsufficientInstanceCapacity error. Which actions should a SysOps administrator take to remediate this issue? (Select TWO.)

    A. Change the instance type that the company is using. B. Configure the Auto Scaling group in different Availability Zones. C. Configure the Auto Scaling group to use different Amazon Elastic Block Store (Amazon EBS) volume sizes. D. Increase the maximum size of the Auto Scaling group. E. Request an increase in the instance service quota.

Correct Answer: A B

QUESTION 56

  • (Exam Topic 1) A company uses an Amazon Elastic File System (Amazon EFS) file system to share files across many Linux Amazon EC2 instances. A SysOps administrator notices that the file system’s PercentIOLimit metric is consistently at 100% for 15 minutes or longer. The SysOps administrator also notices that the application that reads and writes to that file system is performing poorly. The application requires high throughput and IOPS while accessing the file system. What should the SysOps administrator do to remediate the consistently high PercentIOLimit metric?

    A. Create a new EFS file system that uses Max I/O performance mode. Use AWS DataSync to migrate data to the new EFS file system. B. Create an EFS lifecycle policy to transition future files to the Infrequent Access (IA) storage class to improve performance. Use AWS DataSync to migrate existing data to IA storage. C. Modify the existing EFS file system and activate Max I/O performance mode. D. Modify the existing EFS file system and activate Provisioned Throughput mode.

Correct Answer: A To support a wide variety of cloud storage workloads, Amazon EFS offers two performance modes, General Purpose mode and Max I/O mode. You choose a file system’s performance mode when you create it, and it cannot be changed. If the PercentIOLimit percentage returned was at or near 100 percent for a significant amount of time during the test, your application should use the Max I/O performance mode. https://docs.aws.amazon.com/efs/latest/ug/performance.html

QUESTION 57

  • (Exam Topic 1) A company creates a new member account by using AWS Organizations. A SysOps administrator needs to add AWS Business Support to the new account Which combination of steps must the SysOps administrator take to meet this requirement? (Select TWO.)

    A. Sign in to the new account by using IAM credentials. Change the support plan. B. Sign in to the new account by using root user credentials. Change the support plan. C. Use the AWS Support API to change the support plan. D. Reset the password of the account root user. E. Create an IAM user that has administrator privileges in the new account.

Correct Answer: B D When AWS Organizations creates a member account, no password is set for the account’s root user. The SysOps administrator must first reset the password of the account root user, and then sign in to the new account with the root user credentials to change the support plan. Reference: [1] https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_access.html#orgs_ma

QUESTION 58

  • (Exam Topic 1) A SysOps administrator needs to create alerts that are based on the read and write metrics of Amazon Elastic Block Store (Amazon EBS) volumes that are attached to an Amazon EC2 instance. The SysOps administrator creates and enables Amazon CloudWatch alarms for the DiskReadBytes metric and the DiskWriteBytes metric. A custom monitoring tool that is installed on the EC2 instance with the same alarm configuration indicates that the volume metrics have exceeded the threshold. However, the CloudWatch alarms were not in ALARM state. Which action will ensure that the CloudWatch alarms function correctly?

    A. Install and configure the CloudWatch agent on the EC2 instance to capture the desired metrics. B. Install and configure AWS Systems Manager Agent on the EC2 instance to capture the desired metrics. C. Reconfigure the CloudWatch alarms to use the VolumeReadBytes metric and the VolumeWriteBytes metric for the EBS volumes. D. Reconfigure the CloudWatch alarms to use the VolumeReadBytes metric and the VolumeWriteBytes metric for the EC2 instance.

Correct Answer: C DiskReadBytes and DiskWriteBytes report instance store volume activity only; EBS volumes publish the VolumeReadBytes and VolumeWriteBytes metrics, so the alarms must be reconfigured to use those metrics for the EBS volumes.

QUESTION 59

  • (Exam Topic 1) An organization with a large IT department has decided to migrate to AWS. With different job functions in the IT department, it is not desirable to give all users access to all AWS resources. Currently the organization handles access via LDAP group membership. What is the BEST method to allow access using current LDAP credentials?

    A. Create an AWS Directory Service Simple AD. Replicate the on-premises LDAP directory to Simple AD. B. Create a Lambda function to read LDAP groups and automate the creation of IAM users. C. Use AWS CloudFormation to create IAM roles. Deploy Direct Connect to allow access to the on-premises LDAP server. D. Federate the LDAP directory with IAM by using SAML. Create different IAM roles to correspond to different LDAP groups to limit permissions.

Correct Answer: D

QUESTION 60

  • (Exam Topic 1) An existing, deployed solution uses Amazon EC2 instances with Amazon EBS General Purpose SSD volumes, an Amazon RDS PostgreSQL database, an Amazon EFS file system, and static objects stored in an Amazon S3 bucket. The Security team now mandates that at-rest encryption be turned on immediately for all aspects of the application, without creating new resources and without any downtime. To satisfy the requirements, which one of these services can the SysOps administrator enable at-rest encryption on?

    A. EBS General Purpose SSD volumes B. RDS PostgreSQL database C. Amazon EFS file systems D. S3 objects within a bucket

Correct Answer: D https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryption.html
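Default encryption (SSE-S3) can be enabled on an existing bucket with no downtime. The server-side encryption configuration passed to `aws s3api put-bucket-encryption` is a short JSON fragment:

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```

By contrast, EBS volumes, RDS instances, and EFS file systems can only be encrypted at creation time, which is why they would require new resources (and, for RDS, downtime) to encrypt.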

QUESTION 61

  • (Exam Topic 1) A company is using an Amazon Aurora MySQL DB cluster that has point-in-time recovery, backtracking, and automatic backup enabled. A SysOps administrator needs to be able to roll back the DB cluster to a specific recovery point within the previous 72 hours. Restores must be completed in the same production DB cluster. Which solution will meet these requirements?

    A. Create an Aurora Replica. Promote the replica to replace the primary DB instance. B. Create an AWS Lambda function to restore an automatic backup to the existing DB cluster. C. Use backtracking to rewind the existing DB cluster to the desired recovery point. D. Use point-in-time recovery to restore the existing DB cluster to the desired recovery point.

Correct Answer: C “The limit for a backtrack window is 72 hours…..Backtracking is only available for DB clusters that were created with the Backtrack feature enabled….Backtracking “rewinds” the DB cluster to the time you specify. Backtracking is not a replacement for backing up your DB cluster so that you can restore it to a point in time….You can backtrack a DB cluster quickly. Restoring a DB cluster to a point in time launches a new DB cluster and restores it from backup data or a DB cluster snapshot, which can take hours.” https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Backtrack.html

QUESTION 62

  • (Exam Topic 2) If your AWS Management Console browser does not show that you are logged in to an AWS account, close the browser and relaunch the console by using the AWS Management Console shortcut from the VM desktop. If the copy-paste functionality is not working in your environment, refer to the instructions file on the VM desktop and use Ctrl+C, Ctrl+V or Command-C , Command-V. Configure Amazon EventBridge to meet the following requirements.
    1. use the us-east-2 Region for all resources,
    1. Unless specified below, use the default configuration settings.
    1. Use your own resource naming unless a resource name is specified below.
    1. Ensure all Amazon EC2 events in the default event bus are replayable for the past 90 days.
    1. Create a rule named RunFunction to send the exact message every 15 minutes to an existing AWS Lambda function named LogEventFunction.
    1. Create a rule named SpotWarning to send a notification to a new standard Amazon SNS topic named TopicEvents whenever an Amazon EC2 Spot Instance is interrupted. Do NOT create any topic subscriptions. The notification must match the following structure: Input Path:


{“instance” : “$.detail.instance-id”}

Input template: "The EC2 Spot Instance has been interrupted on account."

Solution: Here are the steps to configure Amazon EventBridge to meet the above requirements:

    1. Log in to the AWS Management Console by using the AWS Management Console shortcut from the VM desktop. Make sure that you are logged in to the desired AWS account.
    1. Go to the EventBridge service in the us-east-2 Region.
    1. Navigate to the "Archives" page and create an archive on the default event bus with a retention period of 90 days and an event pattern that matches Amazon EC2 events. This makes all Amazon EC2 events in the default event bus replayable for the past 90 days.
    1. Navigate to the "Rules" page and create a new rule named "RunFunction" on the default event bus.
    1. For the rule type, select "Schedule" and set the schedule to run every 15 minutes.
    1. For the target, select the existing AWS Lambda function named "LogEventFunction".
    1. Create another rule named "SpotWarning" on the default event bus.
    1. In the "Event pattern" section, select "EC2" as the event source and filter on Spot Instance interruption events.
    1. For the target, select "SNS topic" and create a new standard Amazon SNS topic named "TopicEvents". Do not create any topic subscriptions.
    1. Configure the target's input transformer: set the input path to {"instance" : "$.detail.instance-id"} and the input template to "The EC2 Spot Instance has been interrupted on account."

Note: You can use the AWS Management Console, AWS CLI, or SDKs to create and manage EventBridge resources. Refer to the AWS EventBridge documentation for more information: https://aws.amazon.com/eventbridge/
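For the SpotWarning rule, an event pattern matching EC2 Spot interruption notices could look like the following sketch (EC2 publishes these events with the detail-type shown):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Spot Instance Interruption Warning"]
}
```

The input transformer then extracts `$.detail.instance-id` from the matched event and substitutes it into the notification template before publishing to the TopicEvents SNS topic.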

Does this meet the goal?

A.
Yes
B.
No

Correct Answer: A

QUESTION 63

  • (Exam Topic 1) A SysOps administrator is reviewing VPC Flow Logs to troubleshoot connectivity issues in a VPC. While reviewing the logs, the SysOps administrator notices that rejected traffic is not listed. What should the SysOps administrator do to ensure that all traffic is logged?

    A. Create a new flow log that has a filter setting to capture all traffic. B. Create a new flow log. Set the log record format to a custom format. Select the proper fields to include in the log. C. Edit the existing flow log. Change the filter setting to capture all traffic. D. Edit the existing flow log. Set the log record format to a custom format. Select the proper fields to include in the log.

Correct Answer: A

QUESTION 64

  • (Exam Topic 1) A new application runs on Amazon EC2 instances and accesses data in an Amazon RDS database instance. When fully deployed in production, the application fails. The database can be queried from a console on a bastion host. When looking at the web server logs, the following error is repeated multiple times: "Error Establishing a Database Connection". Which of the following may be causes of the connectivity problems? (Select TWO.)

    A. The security group for the database does not have the appropriate egress rule from the database to the web server. B. The certificate used by the web server is not trusted by the RDS instance. C. The security group for the database does not have the appropriate ingress rule from the web server to the database. D. The port used by the application developer does not match the port specified in the RDS configuration. E. The database is still being created and is not available for connectivity.

Correct Answer: C D

QUESTION 65

  • (Exam Topic 1) A SysOps administrator is troubleshooting an AWS CloudFormation template whereby multiple Amazon EC2 instances are being created. The template is working in us-east-1, but it is failing in us-west-2 with the error code:

    • AMI [ami-12345678] does not exist

How should the administrator ensure that the AWS Cloud Formation template is working in every region?

A.
Copy the source Region's Amazon Machine Image (AMI) to the destination Region and assign it the same ID.
B.
Edit the AWS CloudFormation template to specify the Region code as part of the fully qualified AMI ID.
C.
Edit the AWS CloudFormation template to offer a drop-down list of all AMIs to the user by using the AWS::EC2::AMI::ImageID control.
D.
Modify the AWS CloudFormation template by including the AMI IDs in the "Mappings" section. Refer to the proper mapping within the template for the proper AMI ID.

Correct Answer: D AMI IDs are unique to each AWS Region and cannot be assigned on copy, so the template must look up the Region-specific AMI ID, typically through the Mappings section with Fn::FindInMap.
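A Region-to-AMI mapping with a Fn::FindInMap lookup can be sketched as follows; the AMI IDs are placeholders that would be replaced with the real per-Region IDs of the same image:

```yaml
Mappings:
  RegionMap:
    us-east-1:
      AMI: ami-0123456789abcdef0   # placeholder: image ID in us-east-1
    us-west-2:
      AMI: ami-0fedcba9876543210   # placeholder: image ID in us-west-2

Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
      InstanceType: t3.micro
```

Because `AWS::Region` resolves at deploy time, the same template picks the correct AMI in whichever Region it is launched.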

QUESTION 66

  • (Exam Topic 1) A company runs hundreds of Amazon EC2 instances in a single AWS Region. Each EC2 instance has two attached 1 GiB General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volumes. A critical workload is using all the available IOPS capacity on the EBS volumes. According to company policy, the company cannot change instance types or EBS volume types without completing lengthy acceptance tests to validate that the company’s applications will function properly. A SysOps administrator needs to increase the I/O performance of the EBS volumes as quickly as possible. Which action should the SysOps administrator take to meet these requirements?

    A. Increase the size of the 1 GiB EBS volumes. B. Add two additional elastic network interfaces on each EC2 instance. C. Turn on Transfer Acceleration on the EBS volumes in the Region. D. Add all the EC2 instances to a cluster placement group.

Correct Answer: A Increasing the size of the 1 GiB EBS volumes will increase the IOPS capacity of the volumes, which will improve the I/O performance of the EBS volumes. This option does not require any changes to the instance types or EBS volume types, so it can be done quickly without the need for lengthy acceptance tests to validate that the company’s applications will function properly. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/requesting-ebs-volume-modifications.html
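The arithmetic behind option A can be illustrated with a small helper (not part of any AWS SDK): gp2 volumes have a baseline of 3 IOPS per GiB, with a floor of 100 IOPS and a cap of 16,000 IOPS, so growing a volume raises its IOPS without changing the volume type.

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Return the baseline IOPS for a gp2 EBS volume of the given size.

    gp2 provisions 3 IOPS per GiB, with a minimum of 100 IOPS
    and a maximum of 16,000 IOPS.
    """
    return min(max(3 * size_gib, 100), 16_000)


print(gp2_baseline_iops(1))      # 1 GiB volume sits at the 100 IOPS floor
print(gp2_baseline_iops(334))    # 334 GiB -> 1,002 IOPS
print(gp2_baseline_iops(6000))   # large volumes are capped at 16,000 IOPS
```

A 1 GiB gp2 volume only ever gets the 100 IOPS floor, which is why resizing the volumes is the quickest way to raise I/O performance without a volume-type change.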

QUESTION 67

  • (Exam Topic 1) A SysOps administrator is using AWS Systems Manager Patch Manager to patch a fleet of Amazon EC2 instances. The SysOps administrator has configured a patch baseline and a maintenance window. The SysOps administrator also has used an instance tag to identify which instances to patch. The SysOps administrator must give Systems Manager the ability to access the EC2 instances. Which additional action must the SysOps administrator perform to meet this requirement?

    A. Add an inbound rule to the instances’ security group. B. Attach an IAM instance profile with access to Systems Manager to the instances. C. Create a Systems Manager activation. Then activate the fleet of instances. D. Manually specify the instances to patch instead of using tag-based selection.

Correct Answer: B Systems Manager requires the EC2 instances to have an IAM instance profile that allows SSM Agent to communicate with the Systems Manager service; no inbound rules are needed because the agent initiates outbound connections.

QUESTION 68

  • (Exam Topic 1) A company wants to use only IPv6 for all its Amazon EC2 instances. The EC2 instances must not be accessible from the internet, but the EC2 instances must be able to access the internet. The company creates a dual-stack VPC and IPv6-only subnets. How should a SysOps administrator configure the VPC to meet these requirements?

    A. Create and attach a NAT gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the NAT gateway. Attach the custom route table to the IPv6-only subnets. B. Create and attach an internet gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the internet gateway. Attach the custom route table to the IPv6-only subnets. C. Create and attach an egress-only internet gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the egress-only internet gateway. Attach the custom route table to the IPv6-only subnets. D. Create and attach an internet gateway and a NAT gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the internet gateway and all IPv4 traffic to the NAT gateway. Attach the custom route table to the IPv6-only subnets.

Correct Answer: C

QUESTION 69

  • (Exam Topic 1) A SysOps administrator needs to secure the credentials for an Amazon RDS database that is created by an AWS CloudFormation template. The solution must encrypt the credentials and must support automatic rotation. Which solution will meet these requirements?

    A. Create an AWS::SecretsManager::Secret resource in the CloudFormation template. Reference the credentials in the AWS::RDS::DBInstance resource by using the resolve:secretsmanager dynamic reference.

    B. Create an AWS::SecretsManager::Secret resource in the CloudFormation template. Reference the credentials in the AWS::RDS::DBInstance resource by using the resolve:ssm-secure dynamic reference.

    C. Create an AWS::SSM::Parameter resource in the CloudFormation template. Reference the credentials in the AWS::RDS::DBInstance resource by using the resolve:ssm dynamic reference.

    D. Create parameters for the database credentials in the CloudFormation template. Use the Ref intrinsic function to provide the credentials to the AWS::RDS::DBInstance resource.

Correct Answer: A
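Answer A can be sketched as the following template fragment. The logical IDs and property values are illustrative, and the rotation resources (AWS::SecretsManager::SecretTargetAttachment and AWS::SecretsManager::RotationSchedule) are omitted for brevity:

```yaml
Resources:
  DBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        ExcludeCharacters: '"@/\'
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      # Dynamic references pull the credentials from the secret at deploy time
      MasterUsername: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:password}}'
```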

QUESTION 70

  • (Exam Topic 1) An Amazon EC2 instance needs to be reachable from the internet. The EC2 instance is in a subnet with the following route table:

[Image: the subnet’s route table]

Which entry must a SysOps administrator add to the route table to meet this requirement?

A. A route for 0.0.0.0/0 that points to a NAT gateway

B. A route for 0.0.0.0/0 that points to an egress-only internet gateway

C. A route for 0.0.0.0/0 that points to an internet gateway

D. A route for 0.0.0.0/0 that points to an elastic network interface

Correct Answer: C

QUESTION 71

  • (Exam Topic 1) A company hosts a web application on an Amazon EC2 instance. The web server logs are published to Amazon CloudWatch Logs. The log events have the same structure and include the HTTP response codes that are associated with the user requests. The company needs to monitor the number of times that the web server returns an HTTP 404 response. What is the MOST operationally efficient solution that meets these requirements?

    A. Create a CloudWatch Logs metric filter that counts the number of times that the web server returns an HTTP 404 response. B. Create a CloudWatch Logs subscription filter that counts the number of times that the web server returns an HTTP 404 response. C. Create an AWS Lambda function that runs a CloudWatch Logs Insights query that counts the number of 404 codes in the log events during the past hour. D. Create a script that runs a CloudWatch Logs Insights query that counts the number of 404 codes in the log events during the past hour.

Correct Answer: A

A metric filter is the most operationally efficient solution. It searches incoming log events for specific terms, phrases, or values (here, the HTTP 404 status code) and publishes a CloudWatch metric based on the number of matches, so the count is available in near real time and can drive alarms and dashboards. The other options (a subscription filter, a Lambda function, or a scheduled script) all require additional components to build and operate, and the query-based options only report hourly rather than continuously.
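As a local illustration of what such a metric filter counts, the sketch below applies a pattern like `[ip, user, timestamp, request, status=404, size]` to space-delimited log lines. The log format and the sample entries are assumptions made for this illustration, not real web-server output:

```python
# Local sketch of what a CloudWatch Logs metric filter pattern such as
# [ip, user, timestamp, request, status=404, size] counts. The log
# format and sample lines are invented for this illustration.

def count_status(log_lines, status="404", field_index=4):
    """Count lines whose space-delimited status field matches."""
    count = 0
    for line in log_lines:
        fields = line.split()
        if len(fields) > field_index and fields[field_index] == status:
            count += 1
    return count

sample = [
    "10.0.0.1 alice 2024-01-01T00:00:00Z /index.html 200 512",
    "10.0.0.2 bob   2024-01-01T00:00:01Z /missing    404 0",
    "10.0.0.3 carol 2024-01-01T00:00:02Z /old-page   404 0",
]

print(count_status(sample))  # 2 matching events feed the custom metric
```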

QUESTION 72

  • (Exam Topic 1) An application accesses data through a file system interface. The application runs on Amazon EC2 instances in multiple Availability Zones, all of which must share the same data. While the amount of data is currently small, the company anticipates that it will grow to tens of terabytes over the lifetime of the application. What is the MOST scalable storage solution to fulfill this requirement?

    A. Connect a large Amazon EBS volume to multiple instances and schedule snapshots. B. Deploy Amazon EFS in the VPC and create mount targets in multiple subnets. C. Launch an EC2 instance and share data using SMB/CIFS or NFS. D. Deploy an AWS Storage Gateway cached volume on Amazon EC2.

Correct Answer: B

QUESTION 73

  • (Exam Topic 1) A company uses AWS Organizations. A SysOps administrator wants to use AWS Compute Optimizer and AWS tag policies in the management account to govern all member accounts in the billing family. The SysOps administrator navigates to the AWS Organizations console but cannot activate tag policies through the management account. What could be the reason for this issue?

    A. All features have not been enabled in the organization. B. Consolidated billing has not been enabled. C. The member accounts do not have tags enabled for cost allocation. D. The member accounts have not manually enabled trusted access for Compute Optimizer.

Correct Answer: A

Tag policies are available only in organizations that have all features enabled; an organization created for consolidated billing only cannot activate them.

QUESTION 74

  • (Exam Topic 1) A company is running a flash sale on its website. The website is hosted on burstable performance Amazon EC2 instances in an Auto Scaling group. The Auto Scaling group is configured to launch instances when the CPU utilization is above 70%. A couple of hours into the sale, users report slow load times and error messages for refused connections. A SysOps administrator reviews Amazon CloudWatch metrics and notices that the CPU utilization is at 20% across the entire fleet of instances. The SysOps administrator must restore the website’s functionality without making changes to the network infrastructure. Which solution will meet these requirements?

    A. Activate unlimited mode for the instances in the Auto Scaling group. B. Implement an Amazon CloudFront distribution to offload the traffic from the Auto Scaling group. C. Move the website to a different AWS Region that is closer to the users. D. Reduce the desired size of the Auto Scaling group to artificially increase CPU average utilization.

Correct Answer: A

Burstable performance (T-family) instances earn CPU credits and are throttled to a baseline level of CPU once those credits are exhausted. That explains the symptoms: the fleet is pinned at about 20% CPU (the baseline), so requests are refused while the 70% scaling threshold is never reached. Activating unlimited mode lets the instances burst above the baseline for an additional charge, restoring functionality without any change to the network infrastructure.

QUESTION 75

  • (Exam Topic 1) A data storage company provides a service that gives users the ability to upload and download files as needed. The files are stored in Amazon S3 Standard and must be immediately retrievable for 1 year. Users access files frequently during the first 30 days after the files are stored. Users rarely access files after 30 days. The company’s SysOps administrator must use S3 Lifecycle policies to implement a solution that maintains object availability and minimizes cost. Which solution will meet these requirements?

    A. Move objects to S3 Glacier after 30 days. B. Move objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days. C. Move objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days. D. Move objects to S3 Standard-Infrequent Access (S3 Standard-IA) immediately.

Correct Answer: C

https://aws.amazon.com/s3/storage-classes/

QUESTION 76

  • (Exam Topic 1) A SysOps administrator has created a VPC that contains a public subnet and a private subnet. Amazon EC2 instances that were launched in the private subnet cannot access the internet. The default network ACL is active on all subnets in the VPC, and all security groups allow all outbound traffic: Which solution will provide the EC2 instances in the private subnet with access to the internet?

    A. Create a NAT gateway in the public subnet. Create a route from the private subnet to the NAT gateway.

    B. Create a NAT gateway in the public subnet. Create a route from the public subnet to the NAT gateway.

    C. Create a NAT gateway in the private subnet. Create a route from the public subnet to the NAT gateway.

    D. Create a NAT gateway in the private subnet. Create a route from the private subnet to the NAT gateway.

Correct Answer: A

NAT Gateway resides in public subnet, and traffic should be routed from private subnet to NAT Gateway: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
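In CloudFormation terms, answer A might look like this fragment (logical IDs and subnet references are illustrative):

```yaml
Resources:
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEip.AllocationId
      SubnetId: !Ref PublicSubnet          # the NAT gateway lives in the public subnet
  PrivateDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable # route table associated with the private subnet
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway
```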

QUESTION 77

  • (Exam Topic 1) A SysOps administrator created an Amazon VPC with an IPv6 CIDR block, which requires access to the internet. However, access from the internet toward the VPC is prohibited. After adding and configuring the required components to the VPC, the administrator is unable to connect to any of the domains that reside on the internet. What additional route destination rule should the administrator add to the route tables?

    A. Route ::/0 traffic to a NAT gateway B. Route ::/0 traffic to an internet gateway C. Route 0.0.0.0/0 traffic to an egress-only internet gateway D. Route ::/0 traffic to an egress-only internet gateway

Correct Answer: D

https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
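A sketch of answer D as a CloudFormation fragment (logical IDs are illustrative):

```yaml
Resources:
  EgressOnlyIgw:
    Type: AWS::EC2::EgressOnlyInternetGateway
    Properties:
      VpcId: !Ref Vpc
  Ipv6DefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationIpv6CidrBlock: '::/0'     # all IPv6 traffic
      EgressOnlyInternetGatewayId: !Ref EgressOnlyIgw
```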

QUESTION 78

  • (Exam Topic 1) A company has an initiative to reduce costs associated with Amazon EC2 and AWS Lambda. Which action should a SysOps administrator take to meet these requirements?

    A. Analyze the AWS Cost and Usage Report by using Amazon Athena to identity cost savings. B. Create an AWS Budgets alert to alarm when account spend reaches 80% of the budget. C. Purchase Reserved Instances through the Amazon EC2 console. D. Use AWS Compute Optimizer and take action on the provided recommendations.

Correct Answer: D

QUESTION 79

  • (Exam Topic 1) A company’s reporting job that used to run in 15 minutes is now taking an hour to run. An application generates the reports. The application runs on Amazon EC2 instances and extracts data from an Amazon RDS for MySQL database. A SysOps administrator checks the Amazon CloudWatch dashboard for the RDS instance and notices that the Read IOPS metrics are high, even when the reports are not running. The SysOps administrator needs to improve the performance and the availability of the RDS instance. Which solution will meet these requirements?

    A. Configure an Amazon ElastiCache cluster in front of the RDS instance. Update the reporting job to query the ElastiCache cluster.

    B. Deploy an RDS read replica. Update the reporting job to query the reader endpoint.

    C. Create an Amazon CloudFront distribution. Set the RDS instance as the origin. Update the reporting job to query the CloudFront distribution.

    D. Increase the size of the RDS instance.

Correct Answer: B

Using an RDS read replica will improve the performance and availability of the RDS instance by offloading read queries to the replica. This will also ensure that the reporting job completes in a timely manner and does not affect the performance of other queries that might be running on the RDS instance. Additionally, updating the reporting job to query the reader endpoint will ensure that all read queries are directed to the read replica. Reference: [1] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
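The application-side change in answer B amounts to sending read-only statements to the reader endpoint. A minimal sketch, with hypothetical endpoint hostnames:

```python
# Sketch: send read-only SQL to the read replica's reader endpoint and
# everything else to the primary. Both hostnames are hypothetical.

WRITER_ENDPOINT = "mydb.abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mydb-replica.abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Pick an endpoint based on whether the statement is read-only."""
    read_only = sql.lstrip().upper().startswith(("SELECT", "SHOW"))
    return READER_ENDPOINT if read_only else WRITER_ENDPOINT
```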

QUESTION 80

  • (Exam Topic 1) A company uses AWS Organizations to manage multiple AWS accounts with consolidated billing enabled. Organization member account owners want the benefits of Reserved Instances (RIs) but do not want to share RIs with other accounts. Which solution will meet these requirements?

    A. Purchase RIs in individual member accounts. Disable RI discount sharing in the management account.

    B. Purchase RIs in individual member accounts. Disable RI discount sharing in the member accounts.

    C. Purchase RIs in the management account. Disable RI discount sharing in the management account.

    D. Purchase RIs in the management account. Disable RI discount sharing in the member accounts.

Correct Answer: A

https://aws.amazon.com/premiumsupport/knowledge-center/ec2-ri-consolidated-billing/

RI discounts apply to accounts in an organization’s consolidated billing family depending upon whether RI sharing is turned on or off for the accounts. By default, RI sharing for all accounts in an organization is turned on. The management account of an organization can change this setting by turning off RI sharing for an account. The capacity reservation for an RI applies only to the account the RI was purchased on, no matter whether RI sharing is turned on or off.

QUESTION 81

  • (Exam Topic 1) A user working in the Amazon EC2 console increased the size of an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 Windows instance. The change is not reflected in the file system. What should a SysOps administrator do to resolve this issue?

    A. Extend the file system with operating system-level tools to use the new storage capacity.

    B. Reattach the EBS volume to the EC2 instance.

    C. Reboot the EC2 instance that is attached to the EBS volume.

    D. Take a snapshot of the EBS volume. Replace the original volume with a volume that is created from the snapshot.

Correct Answer: A

Resizing an EBS volume only changes the block device. The partition and file system must then be extended with operating system tools (for example, Disk Management or PowerShell on Windows) before the new capacity is visible.

QUESTION 82

  • (Exam Topic 1) A SysOps administrator is tasked with deploying a company’s infrastructure as code. The SysOps administrator wants to write a single template that can be reused for multiple environments. How should the SysOps administrator use AWS CloudFormation to create a solution?

    A. Use Amazon EC2 user data in a CloudFormation template B. Use nested stacks to provision resources C. Use parameters in a CloudFormation template D. Use stack policies to provision resources

Correct Answer: C

Reuse templates to replicate stacks in multiple environments After you have your stacks and resources set up, you can reuse your templates to replicate your infrastructure in multiple environments. For example, you can create environments for development, testing, and production so that you can test changes before implementing them into production. To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#reuse
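A minimal sketch of such a reusable template, using a parameter and a mapping to vary the instance type by environment (names and values are illustrative):

```yaml
Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, prod]
    Default: dev
  ImageId:
    Type: AWS::EC2::Image::Id
Mappings:
  EnvConfig:
    dev:
      InstanceType: t3.micro   # lower-cost type for development
    prod:
      InstanceType: m5.large
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageId
      InstanceType: !FindInMap [EnvConfig, !Ref Environment, InstanceType]
```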

QUESTION 83

  • (Exam Topic 1) A company monitors its account activity using AWS CloudTrail and is concerned that some log files are being tampered with after the logs have been delivered to the account’s Amazon S3 bucket. Moving forward, how can the SysOps administrator confirm that the log files have not been modified after being delivered to the S3 bucket?

    A. Stream the CloudTrail logs to Amazon CloudWatch Logs to store logs at a secondary location. B. Enable log file integrity validation and use digest files to verify the hash value of the log file. C. Replicate the S3 log bucket across regions, and encrypt log files with S3 managed keys. D. Enable S3 server access logging to track requests made to the log bucket for security audits.

Correct Answer: B

When you enable log file integrity validation, CloudTrail creates a hash for every log file that it delivers. Every hour, CloudTrail also creates and delivers a file that references the log files for the last hour and contains a hash of each. This file is called a digest file. CloudTrail signs each digest file using the private key of a public and private key pair. After delivery, you can use the public key to validate the digest file. CloudTrail uses different key pairs for each AWS region https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
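The hash-comparison step behind digest files can be illustrated locally; the real service additionally signs each digest file with a region-specific key pair, which this sketch omits:

```python
# Local illustration of the hash comparison behind CloudTrail digest
# files. Only the hashing step is shown; signing is omitted.
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest, as used for log file hashes."""
    return hashlib.sha256(data).hexdigest()

log_file = b'{"Records": []}'           # stand-in for a delivered log file
recorded_digest = sha256_hex(log_file)  # recorded at delivery time

# Recomputing later detects any modification to the file.
assert sha256_hex(log_file) == recorded_digest
assert sha256_hex(log_file + b"tamper") != recorded_digest
```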

QUESTION 84

  • (Exam Topic 1) A company is expanding its fleet of Amazon EC2 instances before an expected increase of traffic. When a SysOps administrator attempts to add more instances, an InstanceLimitExceeded error is returned. What should the SysOps administrator do to resolve this error?

    A. Add an additional CIDR block to the VPC. B. Launch the EC2 instances in a different Availability Zone. C. Launch new EC2 instances in another VPC. D. Use Service Quotas to request an EC2 quota increase.

Correct Answer: D

QUESTION 85

  • (Exam Topic 1) A SysOps administrator notices a scale-up event for an Amazon EC2 Auto Scaling group. Amazon CloudWatch shows a spike in the RequestCount metric for the associated Application Load Balancer. The administrator would like to know the IP addresses for the source of the requests. Where can the administrator find this information?

    A. Auto Scaling logs B. AWS CloudTrail logs C. EC2 instance logs D. Elastic Load Balancer access logs

Correct Answer: D

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
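In the documented access-log format the fourth space-delimited field is client:port, so extracting the source IP looks roughly like this (the sample entry is abbreviated and invented, not a complete real log line):

```python
# Sketch: pull the client IP out of an ALB access log entry. In the
# documented format the fourth space-delimited field is client:port.
# The entry below is abbreviated and invented for illustration.

def client_ip(log_entry: str) -> str:
    """Return the client IP from an access log entry."""
    return log_entry.split()[3].rsplit(":", 1)[0]

entry = ("http 2024-01-01T00:00:00.000000Z app/my-alb/abc123 "
         "192.0.2.10:43210 10.0.0.5:80 0.000 0.001 0.000 200 200")

print(client_ip(entry))  # prints "192.0.2.10"
```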

AWSStatic Exam Prep Sample Questions

Question 1

  • A company hosts a web application on an Amazon EC2 instance. Users report that the web application is occasionally unresponsive. Amazon CloudWatch metrics indicate that the CPU utilization is 100% during these times. A SysOps administrator must implement a solution to monitor for this issue. Which solution will meet this requirement?

A. Create a CloudWatch alarm that monitors AWS CloudTrail events for the EC2 instance.

B. Create a CloudWatch alarm that monitors CloudWatch metrics for EC2 instance CPU utilization.

C. Create an Amazon Simple Notification Service (Amazon SNS) topic to monitor CloudWatch metrics for EC2 instance CPU utilization.

D. Create a recurring assessment check on the EC2 instance by using Amazon Inspector to detect deviations in CPU utilization.

Correct Answer: B

Explanation

Amazon CloudWatch provides you with data and actionable insights to monitor your applications. Amazon EC2 sends metrics to CloudWatch. The CPUUtilization metric represents the percentage of allocated EC2 compute units that are currently in use on an instance. You can create a CloudWatch alarm that monitors CPUUtilization for one of your instances. For example, you might want to receive an email notification when the average CPUUtilization over a 5-minute period is greater than 75%.
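The evaluation an alarm performs can be sketched locally: average the datapoints in the period and compare against the threshold. The datapoints below are invented for illustration:

```python
# Local sketch of the evaluation a CloudWatch alarm performs: average
# the CPUUtilization datapoints over the period and compare against a
# threshold. The datapoints are made up for this illustration.

def alarm_state(datapoints, threshold=75.0):
    """Return the alarm state for one evaluation period."""
    average = sum(datapoints) / len(datapoints)
    return "ALARM" if average > threshold else "OK"

five_minutes_of_cpu = [90.0, 100.0, 95.0, 100.0, 88.0]  # one point per minute

print(alarm_state(five_minutes_of_cpu))  # prints "ALARM"
```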

Question 2

  • A company has an application that uses Amazon ElastiCache for Memcached to cache query responses to improve latency. However, the application’s users are reporting slow response times. A SysOps administrator notices that the Amazon CloudWatch metrics for Memcached evictions are high. Which actions should the SysOps administrator take to fix this issue? (Select TWO.)

    A. Flush the contents of ElastiCache for Memcached.

    B. Increase the ConnectionOverhead parameter value.

    C. Increase the number of nodes in the cluster.

    D. Increase the size of the nodes in the cluster.

    E. Decrease the number of nodes in the cluster.

Correct Answer: C, D

Explanation

The Evictions metric for Amazon ElastiCache for Memcached represents the number of non- expired items that the cache evicted to provide space for new items. If you are experiencing evictions with your cluster, it is usually a sign that you need to scale up (use a node that has a larger memory footprint) or scale out (add additional nodes to the cluster) to accommodate the additional data

Question 3

  • A company needs to ensure that an AWS Lambda function can access resources in a VPC in the company’s account. The Lambda function requires access to third-party APIs that can be accessed only over the internet. Which action should a SysOps administrator take to meet these requirements?

    A. Attach an Elastic IP address to the Lambda function and configure a route to the internet gateway of the VPC.

    B. Connect the Lambda function to a private subnet that has a route to the virtual private gateway of the VPC.

    C. Connect the Lambda function to a public subnet that has a route to the internet gateway of the VPC.

    D. Connect the Lambda function to a private subnet that has a route to a NAT gateway deployed in a public subnet of the VPC.

Correct Answer: D

Explanation

By default, AWS Lambda runs your functions in a secure VPC with access to AWS services and the internet. Lambda owns this VPC, which is not connected to your account’s default VPC. When you connect a Lambda function to a VPC in your account to access private resources, the function cannot access the internet unless your VPC provides access. Internet access from a private subnet requires network address translation (NAT). To give your function access to the internet, route outbound traffic to a NAT gateway in a public subnet

Question 4

  • A company runs an application on a large fleet of Amazon EC2 instances to process financial transactions. The EC2 instances share data by using an Amazon Elastic File System (Amazon EFS) file system. The company wants to deploy the application to a new Availability Zone and has created new subnets and a mount target in the new Availability Zone. When a SysOps administrator launches new EC2 instances in the new subnets, the EC2 instances are unable to mount the file system. What is a reason for this issue?

    A. The EFS mount target has been created in a private subnet.

    B. The IAM role that is associated with the EC2 instances does not allow the efs:MountFileSystem action.

    C. The route tables have not been configured to route traffic to a VPC endpoint for Amazon EFS in the new Availability Zone.

    D. The security group for the mount target does not allow inbound NFS connections from the security group used by the EC2 instances.

Correct Answer: D

Explanation

The security groups that you associate with a mount target must allow inbound access for the TCP protocol on the NFS port from the security group used by the instances
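The fix for answer D is an ingress rule on the mount target’s security group allowing NFS (TCP port 2049) from the instances’ security group. A CloudFormation sketch with illustrative logical IDs:

```yaml
Resources:
  MountTargetIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref MountTargetSecurityGroup     # SG on the EFS mount target
      IpProtocol: tcp
      FromPort: 2049                             # NFS
      ToPort: 2049
      SourceSecurityGroupId: !Ref InstanceSecurityGroup
```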

Question 5

  • A company uses AWS Organizations to create and manage many AWS accounts. The company wants to deploy new IAM roles in each account. Which action should the SysOps administrator take to deploy the new roles in each of the organization’s accounts?

    A. Create a service control policy (SCP) in the organization to add the new IAM roles to each account.

    B. Deploy an AWS CloudFormation change set to the organization with a template to create the new IAM roles.

    C. Use AWS CloudFormation StackSets to deploy a template to each account to create the new IAM roles.

    D. Use AWS Config to create an organization rule to add the new IAM roles to each account.

Correct Answer: C

Explanation

With AWS CloudFormation StackSets, you can create, update, or delete stacks across multiple accounts and AWS Regions with a single operation. A user in the AWS Organizations management account can create a stack set with service-managed permissions that deploys stack instances to accounts in the organization or in specific organizational units (OUs). For example, you can use AWS CloudFormation StackSets to deploy your centralized IAM roles to all accounts in your organization.

Question 6

  • A company runs several production workloads on Amazon EC2 instances. A SysOps administrator discovered that a production EC2 instance failed a system health check. The SysOps administrator recovered the instance manually. The SysOps administrator wants to automate the recovery task of EC2 instances and receive notifications whenever a system health check fails. Detailed monitoring is activated for all of the company’s production EC2 instances. Which of the following is the MOST operationally efficient solution that meets these requirements?

    A. For each production EC2 instance, create an Amazon CloudWatch alarm for Status Check Failed: System. Set the alarm action to recover the EC2 instance. Configure the alarm notification to be published to an Amazon Simple Notification Service (Amazon SNS) topic.

    B. On each production EC2 instance, create a script that monitors the system health by sending a heartbeat notification every minute to a central monitoring server. If an EC2 instance fails to send a heartbeat, run a script on the monitoring server to stop and start the EC2 instance and to publish a notification to an Amazon Simple Notification Service (Amazon SNS) topic.

    C. On each production EC2 instance, create a script that sends network pings to a highly available endpoint by way of a cron job. If the script detects a network response timeout, invoke a command to reboot the EC2 instance.

    D. On each production EC2 instance, configure an Amazon CloudWatch agent to collect and send logs to a log group in Amazon CloudWatch Logs. Create a CloudWatch alarm that is based on a metric filter that tracks errors. Configure the alarm to invoke an AWS Lambda function to reboot the EC2 instance and send a notification email.

Correct Answer: A

Explanation

You can use Amazon CloudWatch alarm actions to create alarms that automatically stop, terminate, reboot, or recover your Amazon EC2 instances. For example, if an instance becomes impaired due to hardware or software issues on the physical host, loss of network connectivity, or loss of system power, you can automatically initiate a recovery action to migrate the instance to new hardware. You also can configure a message to be published to an Amazon Simple Notification Service (Amazon SNS) topic to receive a notification of the recovery action

Question 7

  • The company uses AWS Organizations to manage its accounts. For the production account, a SysOps administrator must ensure that all data is backed up daily for all current and future Amazon EC2 instances and Amazon Elastic File System (Amazon EFS) file systems. Backups must be retained for 30 days. Which solution will meet these requirements with the LEAST amount of effort?

    A. Create a backup plan in AWS Backup. Assign resources by resource ID, selecting all existing EC2 and EFS resources that are running in the account. Edit the backup plan daily to include any new resources. Schedule the backup plan to run every day with a lifecycle policy to expire backups after 30 days.

    B. Create a backup plan in AWS Backup. Assign resources by tags. Ensure that all existing EC2 and EFS resources are tagged correctly. Apply a service control policy (SCP) for the production account OU that prevents instance and file system creation unless the correct tags are applied. Schedule the backup plan to run every day with a lifecycle policy to expire backups after 30 days.

    C. Create a lifecycle policy in Amazon Data Lifecycle Manager (Amazon DLM). Assign all resources by resource ID, selecting all existing EC2 and EFS resources that are running in the account. Edit the lifecycle policy daily to include any new resources. Schedule the lifecycle policy to create snapshots every day with a retention period of 30 days.

    D. Create a lifecycle policy in Amazon Data Lifecycle Manager (Amazon DLM). Assign all resources by tags. Ensure that all existing EC2 and EFS resources are tagged correctly. Apply a service control policy (SCP) that prevents resource creation unless the correct tags are applied. Schedule the lifecycle policy to create snapshots every day with a retention period of 30 days.

Correct Answer: B

Explanation

AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services. The use of tags to assign resources is a simple and scalable way to back up multiple resources. Any resources with the tags that you specify are assigned to the backup plan. A tag policy is a type of service control policy (SCP) in AWS Organizations that can help you standardize and enforce tags across resources in your organization’s accounts
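A backup plan along these lines might look like the following JSON (the vault name and schedule are illustrative; the tag-based resource assignment is configured separately as a backup selection):

```json
{
  "BackupPlanName": "daily-30-day-retention",
  "Rules": [
    {
      "RuleName": "DailyBackups",
      "TargetBackupVaultName": "Default",
      "ScheduleExpression": "cron(0 5 * * ? *)",
      "Lifecycle": { "DeleteAfterDays": 30 }
    }
  ]
}
```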

Question 8

  • A company is using AWS CloudTrail and wants to ensure that SysOps administrators can easily verify that the log files have not been deleted or changed. Which action should a SysOps administrator take to meet this requirement?

    A. Grant administrators access to the AWS Key Management Service (AWS KMS) key used to encrypt the log files.

    B. Enable CloudTrail log file integrity validation when the trail is created or updated.

    C. Turn on Amazon S3 server access logging for the bucket storing the log files.

    D. Configure the S3 bucket to replicate the log files to another bucket.

Correct Answer: B

Explanation

You can validate the integrity of AWS CloudTrail log files and detect whether the log files were unchanged, modified, or deleted since CloudTrail delivered them to your Amazon S3 bucket. With a validated log file, you can assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. The CloudTrail log file integrity validation process also informs you if a log file has been deleted or changed. You gain the insight to assert positively that log files either were delivered or were not delivered to your account during a given period of time. You can activate log file integrity validation with the CloudTrail console when you create or update a trail.

Question 9

  • A company is running a custom database on an Amazon EC2 instance. The database stores its data on an Amazon Elastic Block Store (Amazon EBS) volume. A SysOps administrator must set up a backup strategy for the EBS volume. What should the SysOps administrator do to meet this requirement?

    A. Create an Amazon CloudWatch alarm for the VolumeIdleTime metric with an action to take a snapshot of the EBS volume.

    B. Create a pipeline in AWS Data Pipeline to take a snapshot of the EBS volume on a recurring schedule.

    C. Create an Amazon Data Lifecycle Manager (Amazon DLM) policy to take a snapshot of the EBS volume on a recurring schedule.

    D. Create an AWS DataSync task to take a snapshot of the EBS volume on a recurring schedule.

Correct Answer: C

Explanation

You can use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of Amazon Elastic Block Store (Amazon EBS) snapshots. You can create a lifecycle policy that includes specific tags to back up EBS volumes on a specified schedule and for a specified retention period. For example, you can take a snapshot of an EBS volume every day and keep the snapshots for 30 days.
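The policy details for such a lifecycle policy might look roughly like this (the tag key/value and times are illustrative; a retain count of 30 on a daily schedule keeps about 30 days of snapshots):

```json
{
  "ResourceTypes": ["VOLUME"],
  "TargetTags": [{ "Key": "backup", "Value": "daily" }],
  "Schedules": [
    {
      "Name": "DailySnapshots",
      "CreateRule": { "Interval": 24, "IntervalUnit": "HOURS", "Times": ["05:00"] },
      "RetainRule": { "Count": 30 }
    }
  ]
}
```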

Question 10

  • A company runs a large number of Amazon EC2 instances for internal departments. The company needs to track the costs of its existing AWS resources by department. What should a SysOps administrator do to meet this requirement?

    A. Activate all of the AWS generated cost allocation tags for the account.

    B. Apply user-defined tags to the instances through Tag Editor. Activate these tags for cost allocation.

    C. Schedule an AWS Lambda function to run the AWS Pricing Calculator for EC2 usage on a recurring schedule.

    D. Use the AWS Trusted Advisor dashboard to export EC2 cost reports

Correct Answer: B

Explanation

User-defined tags are tags that you define, create, and apply to resources manually. You can use Tag Editor to search for all resources and apply tags to them. Use cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses the tags to organize your resource costs to make it easier for you to categorize and track your AWS costs. For example, to track costs by department, you can use a tag that is named “Department” with the value equal to the department name.

Negron AWS Study Guide

Chapter 1 AWS Fundamentals

Review Questions

  1. Which of the following injects an additional piece of information into the authentication process?

    A. Defining a secret access key B. Using AWS CloudShell C. Implementing MFA D. Defining an access key ID

C Implementing multifactor authentication (MFA) injects an additional piece of information into the authentication process. MFA can be implemented using software or hardware tools and will add protection to your root account and users that goes beyond a simple username and password. Use MFA for all accounts and users if possible.

  2. Which of the following are required to implement CLI programmatic access? (Choose two.)

    A. Defining a secret access key B. Using SSH Keygen C. Implementing MFA D. Defining an access key ID

A, D Configuring programmatic access for the CLI will require four pieces of information: access key ID, secret access key, default region name, and default output format.
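Those four values are what aws configure writes to the two profile files on the local machine. A sketch with placeholder values (the key strings below are fake, shown only to illustrate the file layout):

```ini
# ~/.aws/credentials  (placeholder values, not real keys)
[default]
aws_access_key_id     = AKIAEXAMPLEKEYID
aws_secret_access_key = wJalrExampleSecretAccessKey

# ~/.aws/config
[default]
region = us-east-1
output = json
```

The access key ID and secret access key land in the credentials file; the default region and output format land in the config file.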

  3. Which of the following are best practices for AWS account protection? (Choose three.)

    A. Defining an account-level password B. Using AWS CloudShell C. Implementing MFA for all users D. Enabling AWS Security Hub E. Using Session Manager for EC2 instances F. Using service-linked roles

A, C, D Defining an account-level password, enabling MFA for all users, and enabling AWS Security Hub are fundamental to protecting your AWS account. Using CloudShell, Session Manager, and/or service-linked roles provides a form of security and protection but are not as fundamental.

  4. Which of the following is a best practice for cross-AWS account access?

    A. Using AWS Organizations B. Using IAM groups C. Implementing MFA for all users D. Using IAM roles

D IAM roles are essential to provide cross-account access as well as enabling AWS services to interact with each other. Learn and understand roles and the mechanics of role policy creation to maintain a strong security posture.

  5. Which of the following saves you from provisioning keys to operate AWS services in a programmatic way?

    A. The AWS Management Console B. AWS CloudShell C. Session Manager D. IAM groups

B AWS CloudShell provides a mechanism for operators to use the AWS CLI without having to provision access keys on a local machine. This adds a layer of security while saving time and effort when executing simple, one-line administrative CLI commands.

  6. Which of the following saves you from configuring SSH or RDP resources to operate EC2 instances?

    A. The AWS Management Console B. AWS CloudShell C. Session Manager D. IAM groups

C Systems Manager Session Manager provides you with a way to connect to Amazon EC2 instances that does not require the configuration of SSH or RDP resources to operate a particular instance. This is a significantly more secure way to manage EC2 instances.

  7. Which of the following represents the URL to log into the AWS Management Console as an IAM user? (Choose two.)

    A. https://aws.amazon.com/console/ B. https://accountID.signin.aws.amazon.com/console C. https://signin.aws.amazon.com/signin D. https://signin.aws.amazon.com/signin/console E. https://account_alias.signin.aws.amazon.com/console

B, E For console access, IAM users need to use the URL as follows: https://accountID.signin.aws.amazon.com/console or https://account_alias.signin.aws.amazon.com/console.

  8. Which of the following brings AWS services to the edge of a 5G network?

    A. Edge location B. Local zone C. Outpost D. Wavelength zone

D Wavelength zones bring AWS services to the edge of a 5G network, reducing the latency to connect to your application from a mobile device. Application traffic can reach application servers running in wavelength zones without leaving the mobile provider’s network. They provide single-digit millisecond latencies to mobile devices by reducing the extra network hops that may be needed without such a resource.

  9. Which of the following is an extension of a region where you can run low-latency applications using AWS services?

    A. Edge location B. Local zone C. Outpost D. Wavelength zone

B A local zone is an extension of a region where you can run low-latency applications using AWS services in proximity to end users. Local zones deliver single-digit millisecond latencies to users for use cases like media, entertainment, and real-time gaming, among others.

  10. Which of the following brings AWS services, infrastructure, and operating models to your datacenter, co-location space, or physical facility?

    A. Direct Connect location B. Local zone C. Outpost D. Wavelength zone

C Outpost is designed to support applications that need to remain in your datacenter due to low-latency requirements or local data processing needs. It brings AWS services, infrastructure, and operating models to your datacenter, co-location space, or physical facility.

  11. Which of the following is the resource used by AWS to deliver reliable and low-latency performance globally?

    A. Region B. Local zone C. Edge location D. Wavelength zone

C Edge locations are the resource used by AWS to deliver reliable and low-latency performance globally. Edge locations are how AWS attains high performance in countries and territories where a region does not exist. The global edge network connects thousands of Tier 1, 2, and 3 telecom carriers globally and delivers hundreds of terabits of capacity. Edge locations are connected with regions using the AWS backbone, which is a fully redundant, multiple 100 Gigabit Ethernet (GbE) parallel fiber infrastructure. The AWS edge network consists of over 400 edge locations.

  12. Which of the following represents a logical group of AWS datacenters?

    A. Region B. Local zone C. Edge location D. Availability zone

D An availability zone is a logical group of datacenters. These groups are isolated and physically separate. Each of them includes independent power, cooling, physical security, and interconnectivity using high bandwidth and low-latency links. All traffic between availability zones is encrypted. Also, each availability zone is implemented separately from other availability zones but within 60 miles of each other.

  13. Which of the following CLI commands will guide you through the process of managing AWS resources?

    A. aws configure wizard B. aws configure sso C. aws configure import --csv file://path/to/creds.csv D. aws configure

A The AWS CLI v2 wizards feature is an improved version of the --cli-auto-prompt command-line option. Wizards guide you through the process of managing AWS resources. You can access the wizards feature from the command line with: aws configure wizard

  14. Which AWS services have CLI wizards available? (Choose three.)

    A. Amazon EC2 B. AWS Lambda functions C. Amazon DynamoDB D. AWS IAM E. Amazon RDS F. Amazon S3

B, C, D Wizards will query existing resources and prompt you for data in the process of setting up the service invoked. As of this writing, wizards are available for the configure, dynamodb, iam, and lambda commands. For example, the command aws dynamodb wizard new-table will guide you in creating a DynamoDB table. Also, note that the configure command does not take a wizard name; it is invoked simply as aws configure wizard.

  15. Which of the following CLI commands creates an S3 bucket?

    A. aws s3 ls s3://my-bucket B. aws s3 cp file s3://my-bucket/file C. aws s3 ls D. aws s3 mb s3://my-bucket

D The CLI command to create an Amazon S3 bucket is aws s3 mb s3://my-bucket. You can type aws s3 help for details.

  16. Which of the following CLI commands copies the content of a local directory to an S3 bucket?

    A. aws s3 cp s3://bucket1/file s3://bucket2/file B. aws s3 cp file s3://my-bucket/file C. aws s3 sync my-directory s3://my-bucket/ D. aws s3 mb s3://my-

C The CLI command to copy the contents of a directory to an Amazon S3 bucket is aws s3 sync my-directory s3://my-bucket/. You can type aws s3 help for details.

  17. Which of the following CLI options provide filtering of the output? (Choose two.)

    A. --query B. --filter C. --search D. --dry-run E. --cli-auto-prompt

A, B The --query option can be used to limit the results displayed from a CLI command. The query is expected to be structured according to the JMESPath specification, which defines the syntax for searching a JSON document. The --filter option can also be used to manage the results displayed. However, with the --filter option, the output is restricted on the server side, whereas --query filters the results on the client side. The --dry-run option is used to verify that you have the required permissions to make the request and gives you an error if you are not authorized. The --dry-run option does not make the request.
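Because --query runs on the client, the full response has already crossed the wire before it is filtered. The effect can be sketched in Python against a trimmed, made-up describe-instances-style response (the hand-written traversal below mimics what the JMESPath expression passed to --query would do):

```python
# A trimmed, hypothetical response shaped like `aws ec2 describe-instances` output.
response = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0a1", "State": {"Name": "running"}},
            {"InstanceId": "i-0b2", "State": {"Name": "stopped"}},
        ]},
        {"Instances": [
            {"InstanceId": "i-0c3", "State": {"Name": "running"}},
        ]},
    ]
}

def running_instance_ids(resp):
    """Client-side filtering, as --query would do after the full response arrives."""
    return [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
        if inst["State"]["Name"] == "running"
    ]

print(running_instance_ids(response))  # the stopped instance was still transmitted
```

By contrast, a server-side filter (for example, --filters Name=instance-state-name,Values=running on describe-instances) means the stopped instance is never returned at all.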

  18. Which of the following support options give you access to the AWS Health API? (Choose two.)

    A. Basic B. Developer C. Business D. Enterprise E. AWS IQ

C, D The AWS Health API is available directly as part of an AWS Business Support or AWS Enterprise Support plan. It allows for chat integration and ingesting events into Slack, Microsoft Teams, and Amazon Chime. It also allows integration with dozens of AWS partners such as DataDog and Splunk, among many others.

  19. What is the AWS default quota value for EC2-VPC Elastic IPs?

    A. 50 B. 5 C. 5,000 D. 500

B The AWS default quota value for EC2-VPC Elastic IPs is 5 and will need an adjustment if you need more IPs. You can use the Service Quotas page on your management console to make a request to support and have the limit increased if needed.

  20. Which of the following URLs are useful for the purpose of pricing a solution using AWS? (Choose three.)

    A. https://calculator.s3.amazonaws.com B. https://aws.amazon.com/free C. https://aws.amazon.com/migration-evaluator D. https://calculator.aws E. https://tco.aws.amazon.com

B, C, D For details about service pricing and usage limits included in the AWS Free Tier, you can visit https://aws.amazon.com/free. The logic for the AWS TCO calculator now resides in the Migration Evaluator at https://aws.amazon.com/migration-evaluator. The AWS Pricing Calculator is available at https://calculator.aws. The older Simple Monthly Calculator and TCO calculator have been deprecated.

Chapter 2: Account Creation, Security, and Compliance

Review Questions

  1. You have been asked by your manager to gather the AWS reports for an upcoming SOC 3 audit. Which tool would you open to find the report? A. AWS Audit Manager B. Amazon Reports C. AWS License Manager D. AWS Artifact

  2. You are setting up a directory service for your small but growing company. There are currently about 5,000 objects, but the plan is to double the number of employees over the next three years. You have been directed to use Microsoft Active Directory and the company is cloud native. Which option would be the most cost-effective and lowest management overhead solution for your organization at this time? A. AWS Directory Services for Microsoft Active Directory Standard Edition B. Microsoft Active Directory on EC2 deployed in two availability zones (AZs) C. AWS Directory Services for Microsoft Active Directory Enterprise Edition D. AWS Simple AD

  3. What are the two types of behavior guardrails on AWS Control Tower? A. Preventive and detective B. Preventive and audit C. Infrastructure and code D. Enabled and Disabled

  4. A service control policy (SCP) was created in your organization that will allow all users of the Admin role permission to schedule KMS key deletion (“Action”: “kms:*”). However, when administrators attempt to actually schedule key deletion, they report error messages. Why might this error be occurring? A. Users must have the explicit permission “Action”: “kms:ScheduleKeyDeletion” in order to schedule key deletion. B. KMS keys cannot be deleted but only disabled. C. Administrators must approve the email sent to their primary email address as a second-factor authentication when attempting to delete KMS keys. D. Service control policies do not grant permissions, so allowing an action in an SCP has no effect.

  5. Inline IAM policies are best used when: A. Inline policies are not recommended. B. Customer-managed policies must be kept secure. C. An appropriate AWS-managed policy does not exist. D. Resource-based policies must be tightly integrated with identity-based policies.

  6. What are the three required elements of an identity-based IAM policy? (Choose three.) A. Action B. Effect C. Principal D. Resource

  7. Which of the following is a way in which AWS License Manager can track Bring Your Own Licenses (BYOLs) consumed by launched instances? A. By using a Lambda function to compare the AMI of each instance to an AWS Launch Manager license configuration. B. By associating a license configuration with an AMI. C. By creating a rule in AWS Config that matches a license configuration in AWS License Manager. D. The AWS Systems Manager Agent (SSM) automatically reports license usage to AWS Systems Manager. License Manager integrates with AWS Systems Manager to collect license usage data.

  8. Your client with over 50,000 directory objects has an on-premises Active Directory domain running Windows Server 2016. They need to have users access Amazon WorkDocs and Amazon WorkMail using single sign-on. No directory data should be cached in the cloud, but the directory service must be highly available. Which solution best solves the customer’s requirements? A. Active Directory Connector B. Amazon Managed Microsoft AD C. AWS Cognito D. Simple Active Directory

  9. When creating an AWS Organizations member account in your own organization, you notice that you do not have permissions for some actions. The error states that “AWS Account Management trusted access is not enabled. Enable it to view this content.” Which of the following will grant the required permissions?


A. In AWS Organizations, navigate to the Policies page and create an SCP that grants administrators of the member account full account management permissions (ALLOW = admin:*). B. Navigate to AWS Organizations Services and enable trusted access on AWS Account Management. C. Navigate to IAM and add the email address used to create the member account to the Admin role. D. The account was created using an email address rather than a role. Only root accounts can be created with an email address. The member account must be deleted and re-created using an admin role of the parent organizational unit (OU).

  10. Your customer wants to manage licenses across multiple accounts in order to better manage compliance. However, they have not been able to manage license usage in any accounts except the one that License Manager was set up in. What would solve this customer’s problem? A. AWS License Manager is account-specific and must be set up separately in each account. B. Enable AWS Organizations and link AWS License Manager. C. From each account to be managed, assign the service-linked roles to the main account where License Manager is configured. D. Install the SSM agent on instances in each account to be managed. Assign the SSM agents the service-linked roles in the account where AWS License Manager is configured.

  11. By default, Control Tower creates two accounts. These are: A. Audit and Log Archive B. Security and Log C. Management and Sandbox D. Security and Management

  12. Which two of the following policy types might be attached to an S3 bucket to grant permissions to a specified principal? (Choose two.) A. Access control lists (ACLs) B. Identity-based policies C. Permission boundaries D. Resource-based policies E. Organizations service control policies (SCPs)

  13. Which of the following are characteristics of permission boundaries? (Choose two.) A. Permission boundaries apply only to users and roles. B. Permission boundaries define permission limits but do not grant permissions. C. Permission boundaries grant permissions. D. Permission boundary policy statements contain only DENY effects.

  14. Which of the following are supported forms of multifactor authentication in AWS IAM? (Choose three.) A. Google Authenticator B. Hardware MFA device C. SMS D. U2F security key

  15. AWS Control Tower provides a feature that allows account provisioning following preapproved templates. What is this feature called? A. Account Factory B. Account Vending Machine C. AWS Config D. CloudFormation

  16. AWS Managed Microsoft AD directories are deployed in what architecture? A. In the customer VPC and in the customer’s datacenter with Direct Connect between them in an active-active configuration B. In two availability zones in a region and connected to the customer VPC C. In two availability zones within the customer VPC D. In two regions in an active-active configuration and connected to the customer VPC using PrivateLink

  17. To add an existing account to AWS Organizations, the administrator must do which of the following? A. Add the account to AWS Organizations using the account ID, access key, and secret access key. B. Invite the account owner to join using the account owner’s email address. C. Existing accounts cannot be added but can only be created using Account Factory. D. Log into the account as root user and accept the invitation.

  18. The best explanation of when an IMPLICIT DENY occurs is: A. A resource-based policy and identity-based policy conflict. B. A user attempts to access restricted resources as described in a service control policy (SCP). C. IAM attempts to parse a policy but encounters a 500 error. D. When no deny statement or allow statement exists.

  19. The principle of trust between two unrelated networks is known as: A. Distributed computing B. Federation C. Hybrid computing D. Interoperability

  20. Which of the following are valid AWS IAM policy types? (Choose three.) A. Access control lists B. Identity-based policies C. Permission boundaries D. Service-based policies E. System access policies

Answers

Chapter 2: Account Creation, Security, and Compliance

  1. D Remember that compliance in the cloud is two parts: AWS and what the customer builds using AWS services. AWS Artifact is used to retrieve security and compliance reports and some online agreements related to AWS’s part of the equation. AWS Artifact is accessed through the management console. AWS Audit Manager could easily be confused as the correct answer. However, AWS Audit Manager is used for continual monitoring of compliance, whereas AWS Artifact is used simply to pull reports of AWS compliance.
  2. A Any time there are several services doing something similar, know the use cases for each. Directory services are an example. AWS Simple AD will not work because the requirements specify using Microsoft AD. Deploying on EC2 in two AZs seems like a plausible solution, but a managed service is typically a better option for the customer based on price and management overhead unless the question gives some detail that makes the managed service not feasible. That leaves Microsoft AD Standard and Enterprise Editions. In this case you do not need to know the details of Microsoft AD capacity planning. Simple AD is not recommended for more than 5,000 users, whereas AWS Directory Service for Microsoft AD is. Remember that objects do not equal users and there will generally be more objects than users. That strongly suggests Standard Edition would suffice and that Enterprise Edition would most likely be more expensive for capacity that is not yet needed.
  3. A Control Tower and its guardrails are a critical part of governing an enterprise. Be sure to know AWS Control Tower and AWS Organizations and how they work together. Know the various types of policies and when to use each. The structure of guardrails and their terminology can be confusing. Behavioral controls either detect noncompliance or prevent noncompliance.
  4. D While all of these seem plausible, ultimately SCPs are not the correct place to grant permissions. Be sure to know what each policy type does (and doesn’t do).
  5. A While inline policies are available as an option, they are not recommended. Inline policies can be difficult to troubleshoot, and there are almost always better options.
  6. A, B, D Remember EAR (Effect, Action, Resource). A fourth common policy option (but not required) is Conditions.
  7. B Recall that licenses from AWS Marketplace will automatically be tracked. What we are considering here would be Bring Your Own Licenses (BYOLs), which License Manager does not automatically know how to associate with an instance. The easiest solution is to associate the license with the AMI. In this way, any instance launched from that AMI will be associated with that license and tracked.
  8. A The key criterion in the question is that no data can be cached in the cloud. Options B, C, and D all store data in the cloud. AD Connector does not.
  9. B Option A is not correct because SCPs cannot grant permissions. Option C is also incorrect since you want to follow the principle of least privilege and granting administrator rights necessarily would violate that principle. Option D is incorrect because creating an account can be done when logged in as either an IAM user or root user, or by assuming a role. However, remember that using the root user account is not best practice when a role can be used instead.
  10. B The service-linked roles for License Manager cannot be manually assigned—they are automatically assigned when setting up the service. The correct answer is that AWS Organizations is required for AWS License Manager to work across accounts to discover compute resources.
  11. A The two accounts created by Control Tower are Audit and Log Archive.
  12. A, D ACLs and resource-based policies can both be attached to a resource such as S3 and grant permissions to a specified principal in the same or another account. Permission boundaries can only deny, not grant, permissions. An identity-based policy is attached to the user and not to the resource. SCPs, like permission boundaries, define the limits of permissions but do not actually grant permissions themselves.
  13. A, B Remember that permission boundaries set the limit of what permissions can be held. They neither grant nor deny permission on their own.
  14. A, B, D AWS no longer supports SMS as an MFA factor for user accounts in IAM. Note that MFA is still a supported option in Cognito user pools.
  15. A Account Vending Machine was an older term and not used now, though you may still see reference to “vending machine” when speaking of Account Factory. The correct term for the feature is Account Factory.
  16. C AWS Managed Microsoft AD resides in a VPC rather than on-premises. PrivateLink does not support AWS Directory Service. Multiregion replication for AWS Managed Microsoft AD is handled by the service using native Active Directory replication.
  17. B Option A is wrong since accounts themselves would not have keys, and sharing of an account’s admin keys would not make sense from a security perspective. Option D would not be practical in many cases where the owner of the inviting account is not also the owner of the invited account. Accounts can be both created and invited in AWS Organizations, so option C is also incorrect.
  18. D You will want to be very comfortable with policy evaluation logic. The use of an implicit deny secures resources for which no permission is explicitly given nor denied. See the documentation on identity and resource-based policy evaluation, especially the section on the difference between explicit and implicit denies (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html).
  19. B Distributed computing is, at its most fundamental, just computing between two or more computers via messaging, usually along a network. It does not imply trust. Hybrid computing refers to a combination of cloud and on-premises resources. Again, no trust is implied. Interoperability is the ability of one computer or application to talk to another. Standards and protocols provide us with interoperability but do not imply trust.
  20. A, B, C You will want to be very familiar with the six basic policy types and when each is used. Review the IAM User Guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html).

Chapter 3: AWS Cost Management

Review Questions

  1. Which of the following tag types are available for use under cost allocation tags? (Choose two.) A. User-defined tags B. Organization-defined tags C. AWS-generated tags D. AWS support–generated tags E. Cost center tags
  2. How long can it take for user-defined cost allocation tags to appear in the AWS Billing dashboard? A. 30 minutes B. 60 minutes C. 24 hours D. 8 hours
  3. The chief financial officer of your organization has asked you to provide a final Cost and Usage Report for the AWS spend in your development account. How can you determine if the report is finalized before sending it over? A. Check that the Cost and Usage Report dashboard is including finalized data. B. Check that the Cost and Usage Report has the prefix of Final-Report within the Amazon S3 bucket. C. Check that the Cost and Usage Report is present as all reports are final reports. D. Check that the Cost and Usage Report has the column Bill/InvoiceID.
  4. What is the purpose of a manifest collection found in the Amazon S3 bucket with your AWS Cost and Usage Reports? A. The manifest collection provides mapping information for AWS Cost Explorer to import AWS Cost and Usage Report data. B. The manifest collection provides connectivity details for AWS analytic services to work with the AWS Cost and Usage Report data. C. The manifest collection indicates the order in which multipart AWS Cost and Usage Reports must be structured and viewed. D. The manifest collection indicates the naming convention and prefix data for AWS Cost and Usage Reports stored in an Amazon S3 bucket.
  5. The annual security audit has just been released and the security team has asked that you prevent any AWS Cost and Usage Reports from being created outside of the primary AWS Organizations management account. What can you do to ensure member accounts cannot create AWS Cost and Usage Reports? A. Apply a service control policy to restrict IAM users within member accounts from configuring AWS Cost and Usage Reports. B. Apply an IAM policy within member accounts to prevent configuring AWS Cost and Usage Reports. C. Apply a service control policy to restrict all Cost and Usage Report use for management accounts. D. Apply a managed IAM policy within member accounts to only allow management accounts access to AWS Cost and Usage Reports.
  6. The finance department has asked which options are available to perform analytics on the AWS Cost and Usage Reports provided for each AWS member account. Which of the following options are provided within AWS Cost and Usage Reports? (Choose three.) A. Amazon Athena B. Amazon Redshift C. Amazon Artifact D. Amazon Comprehend E. Amazon QuickSight F. Amazon PinPoint
  7. The compliance officer for the organization has asked you to confirm the maximum length of historical data that AWS Cost Explorer provides. Which of the following options will you provide to the compliance officer? A. 24 months B. 36 months C. 18 months D. 12 months
  8. You have been tasked with determining the top five cost-accruing AWS Services within your development AWS accounts over the last six months. Which AWS service will provide the fastest visualization of this data? A. AWS Cost Explorer B. AWS Cost and Usage Reports C. Amazon Athena D. Amazon Billing Dashboard
  9. You have been tasked with securing the organization’s AWS developer accounts from having access to AWS Cost Explorer while retaining AWS Cost Explorer access for nondeveloper accounts. Which of the following is the best option to accomplish this goal? A. Disable AWS Cost Explorer access at the management account level. B. Deny AWS Cost Explorer access using a service control policy for developer AWS accounts. C. Activate IAM access and configure IAM policies for each AWS member account requiring access to AWS Cost Explorer. D. AWS Cost Explorer is disabled for member accounts by default, so no changes are necessary to accomplish this goal.
  10. Your AWS account is a member of AWS Organizations and has access to AWS Cost Explorer. Which of the following options best describes the Cost Explorer data that you can view? A. As a member of AWS Organizations, the account has access to view all AWS Cost Explorer data for the organization. B. Only AWS Cost Explorer data from the time the AWS account joined the AWS Organization. C. Only AWS Cost Explorer data from before the AWS account joined the AWS Organization. D. As a member of AWS Organizations, the account has access to view AWS Cost Explorer data for all accounts in the same organizational unit.
  11. Which of the following Savings Plans options are available from AWS to reduce AWS service cost? (Choose three.) A. SageMaker Savings Plans B. Network Savings Plans C. Compute Savings Plans D. EC2 Instance Savings Plans E. Lambda Savings Plans F. EMR Savings Plans
  12. Your organization is looking to cost-optimize a project that runs in North America and in Australia. The project is very heavily using Amazon EC2 instances with Microsoft workloads. The organization has a requirement that if the project moves to another region the cost optimization plan will still be valid. Which option satisfies the need of the organization? A. Compute Savings Plans B. EC2 Instance Savings Plans C. SageMaker Savings Plans D. EC2 Reserved Instances
  13. Your organization has heavy utilization requirements for machine learning in an upcoming project. You have been tasked with selecting the best cost optimization option to allow changes between regions, but also between inference or training workload types. Which of the following cost optimization options is the best fit? A. SageMaker free tier B. Compute Savings Plans C. SageMaker Savings Plans D. SageMaker Reserved Instances
  14. The finance department has tasked you with determining how much of the On-Demand spend within an AWS account is not covered by a Compute Savings Plan. Which Savings Plan monitoring report will provide the details you are looking for without requiring customization or detailed exports of Savings Plans data? A. Savings Plans utilization reports B. Savings Plans inventory reports C. AWS Cost and Usage Reports D. Savings Plans coverage reports E. AWS Billing Dashboard
  15. Your senior administrator has asked you to set up notifications for the AWS Budgets configuration in the organization’s developer accounts. Which of the following notification options are available for AWS Budgets? (Choose two.) A. Posting to the Personal Health Dashboard B. Amazon SNS topics C. AWS Management Console notifications D. Direct integration with ServiceNow E. Direct email recipients
  16. You are developing a cost optimization policy that automatically disables a development organization member AWS account from launching EC2 instances when the account budget goes above 90 percent of forecasted costs. Which of the following AWS Budgets actions will accomplish this goal? A. Configure a service control policy to deny Amazon EC2 instances from launching, which is applied once the budget reaches 90 percent. B. Configure an IAM policy to deny the developer admin accounts from launching Amazon EC2 instances. C. Configure an action using Amazon SNS to send an SMS message to the developers warning them not to launch Amazon EC2 instances once the budget reaches 90 percent. D. Configure an action using the Trusted Advisor APIs to record budget overages and notify the AWS Support concierge to disable Amazon EC2 instance launching.
  17. The finance department for your company is asking why they are not receiving the forecast Budget alerts you set up for a new account last week. You configured AWS Budget alarms for forecasted amounts at multiple intervals, but on evaluation there is no forecast data. Which of the following options may be a potential cause for not seeing any forecasting data? A. AWS Budgets requires a minimum of two weeks of historical billing data to be able to forecast budget spend. B. AWS Budgets requires a connection to the AWS Cost and Usage Reports and the account is too new to produce these reports. C. AWS Budgets requires a minimum of five weeks of historical billing data to be able to forecast budget spend. D. AWS Budgets requires a connection to Trusted Advisor and the account is too new to generate Trusted Advisor checks on spending.
  18. Which of the following options is a cost benefit of using AWS Managed services such as Amazon RDS, AWS Fargate, or Amazon EFS? A. Managed services are the responsibility of AWS and only require payment for the use of the service, not the configuration of the applications. B. Managed services reduce the IT operational overhead as you can focus on developing applications, not running or designing an infrastructure. C. Managed services reduce the need for highly available architectural designs as this is now the responsibility of AWS. D. Managed services include all operating system licensing costs that AWS pays and manages, essentially eliminating the need to calculate licensing costs.
  19. Your organization has asked for a modification of the cost optimization plan to include more cost-effective scaling solutions for their most popular web application. You have settled on the use of Amazon EC2 Spot instances to help scale during peak hours. Which of the following options do you use to implement this change? A. Modify the Auto Scaling configuration to relaunch all Amazon EC2 instances as Spot instances. B. Create a new Auto Scaling group that utilizes Amazon EC2 Spot instances as the primary launch template. C. Create a new launch template to replace On-Demand instances with Spot instances and apply the launch template to your existing Auto Scaling group. D. Create a new launch template to include the use of Amazon EC2 Spot instances when scaling and apply the launch template to your existing Auto Scaling group.
  20. Which of the following options represent a potential drawback of using Amazon EC2 Spot instances for cost optimization? A. Amazon EC2 Spot instances can be reclaimed with a two-minute warning at any time. B. Amazon EC2 Spot instances need to have bids refreshed daily. C. Amazon EC2 Spot instances cannot be used with Auto Scaling. D. Amazon EC2 Spot instances are only useful for short-term projects.

Answers

  1. A, C AWS cost allocation tags support two different types: user-defined and AWS-generated. User-defined tags can incorporate useful information such as cost center, project, or department. AWS-generated tags are automatically defined, created, and applied to services.
  2. C When creating user-defined tags for use in AWS cost allocation tags, it can take up to 24 hours for the new tags to show up in the cost allocation reports.
  3. D AWS Cost and Usage Reports (CURs) only enter the finalized state after all pending refunds, credits, or AWS account support fees are updated for that month. When the bill is finalized, the CUR will have a column named Bill/InvoiceID in the CSV file. This indicates the bill has been finalized by AWS and will not change.
  4. B AWS Cost and Usage Report exports into an Amazon S3 bucket to produce a manifest collection that holds files to help set up all the resources you need for Amazon Athena, Amazon Redshift, or Amazon QuickSight to analyze the report data. The manifest collection is only created when the option to use these analytics services is selected when configuring the AWS Cost and Usage Reports.
  5. A The only way to prevent AWS member accounts from creating AWS Cost and Usage Reports is to apply a service control policy (SCP) restricting access to the AWS Cost and Usage Report. Be mindful, however, that SCPs are not retroactive for preexisting accounts.
  6. A, B, E AWS Cost and Usage Reports can be configured to include a manifest collection for use with Amazon Athena, Amazon Redshift, and Amazon QuickSight. These additional analytics services provide a deeper look into cost and usage within AWS member accounts.
  7. D AWS Cost Explorer provides current month, prior 12 months, and the ability to forecast the next 12 months of AWS cost and usage using the same dataset as the AWS Cost and Usage reports.
  8. A AWS Cost Explorer provides several preconfigured views to display cost and usage information trends within AWS accounts. One of the preconfigured reports is the top five cost-accruing services, and you can modify the time period dimension of the visualization to meet the six-month requirement.
  9. C AWS Cost Explorer has two options to restrict access. The first is at the AWS Organizational level, but this provides an on or off approach for the entire organization. It will not limit to specific accounts. To limit specific member accounts, the IAM Access setting must be turned on in AWS Cost Explorer and an IAM policy allowing access to Cost Explorer must be created and applied.
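The IAM Access approach described above pairs the Cost Explorer setting with an identity-based policy attached to the user. A minimal sketch of such a policy as JSON follows; the `ce:` prefix is the Cost Explorer API namespace, but treat the exact wildcard action list as an assumption to scope down for a real account.

```python
import json

# Illustrative identity-based policy for granting a user read access to Cost
# Explorer once the IAM Access setting is enabled. The wildcard actions are
# an assumption; narrow them to specific ce: actions in practice.
cost_explorer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCostExplorerRead",
            "Effect": "Allow",
            "Action": ["ce:Get*", "ce:Describe*", "ce:List*"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(cost_explorer_policy, indent=2))
```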
  10. B When a stand-alone AWS account joins an AWS Organization as a member account, the only Cost Explorer data available is from the time the account joined. Any data prior to joining is unavailable for viewing. Once an AWS account is no longer a member account of the organization, the account can view prior stand-alone data once again.
  11. A, C, D AWS offers three Savings Plan models to decrease the cost of AWS services by up to 72 percent compared to On-Demand pricing. The Savings Plan models currently available are SageMaker Savings Plans, Compute Savings Plans, and EC2 Instance Savings Plans.
  12. A Compute Savings Plans offer flexibility to apply savings to EC2 instances and compute resources across any region, tenancy, or instance type. As the project changes and moves to a new region, the Compute Savings Plans will still apply. Compute Savings Plans are not specific to an operating system, which makes them the best choice for Microsoft workloads.
  13. C Having the ability to move between regions and change between inference and training workloads is a feature of SageMaker Savings Plans. SageMaker Savings Plans allow region, instance size, and instance type changes as well as changing between workload types without losing the cost savings benefits.
  14. D Savings Plans coverage reports are the best option as the reports include a high-level predefined metric showing On-Demand spend not covered by Savings Plans. This provides a fast method for viewing coverage gaps without needing to manually export detailed information from inventory or utilization reports.
  15. B, E When configuring AWS Budgets for notifications, you can select from emailing up to 10 recipients directly from the budget configuration. You can also use Amazon SNS to send SMS messages or take other actions through event triggers with Lambda.
  16. A AWS Budgets actions can, automatically or with manual approval, take remediation steps to avoid budget overages based on forecasted results being over 90 percent. The best way to limit launching Amazon EC2 instances is to create a service control policy (SCP) and apply it to the development organizational unit or specific development AWS account, preventing the ec2:RunInstances operation as the remediation step.
  17. C AWS Budgets requires a minimum of five weeks of historical billing data to provide forecasted spending data. As this data is not available, AWS Budgets is unable to send alerts for forecasted amounts.
  18. B One of the largest benefits of using AWS Managed services to reduce cost comes in the form of removing IT overhead for administration. Managed services allow IT administrators to focus on application maintenance and support as the need to update operating systems, design highly available databases, or configure scalable systems is no longer required for the AWS Managed services. All these actions are the responsibility of AWS.
  19. D To address this configuration change you must create a new launch template and add Amazon EC2 Spot instances within the scaling policy, as we do not want to replace all EC2 instances, only those needed when scaling to meet peak utilization.
  20. A One of the potential drawbacks of using Amazon EC2 Spot instances is the possibility of AWS interrupting or reclaiming the EC2 Spot instance due to capacity limits, high demand for EC2 Spot instances, or the Spot bid price maximum being exceeded. In the event of a reclaim, AWS provides a two-minute warning, which administrators can use to stop or hibernate the Spot instance or run a script to move data to permanent storage.
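The budget-action remediation described in the answers above, a service control policy denying ec2:RunInstances, can be sketched as a plain JSON document. This is the minimal shape only, not a production policy; real SCPs often add conditions or resource scoping.

```python
import json

# Sketch of the SCP attached by an AWS Budgets action: deny new EC2 instance
# launches in the development OU or account once the action fires.
deny_run_instances_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyEc2Launches",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "*",
        }
    ],
}

print(json.dumps(deny_run_instances_scp, indent=2))
```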

Chapter 4: Automated Security Services and Compliance

Review Questions

  1. Which of the following AWS support plans offer access to all Trusted Advisor security checks within an AWS account? (Choose two.) A. Enterprise Support B. Developer Support C. Basic Support D. Business Support E. AWS Forums support

  2. Which AWS service is required for Security Hub to evaluate security findings and then perform automated remediation actions? A. Amazon CloudWatch B. AWS Config C. Amazon EventBridge D. AWS Step Functions

  3. The chief information officer has asked you to provide reports and warnings for any S3 buckets that have DeleteBucket and DeleteObject actions taken on them in the production environment. Which AWS service will you use to accomplish this task? A. Amazon Macie B. Amazon Detective C. Amazon Inspector D. Amazon GuardDuty

  4. You have been instructed by the security team to locate and evaluate any vulnerabilities in any available EC2 instances that may have reachable TCP and UDP ports from the VPC edges. You have decided to use Inspector for this task. Which of the following finding types will provide the vulnerability information required? A. IAM finding type B. Network reachability finding type C. Package vulnerability finding type D. CVE vulnerability finding type

  5. You are attempting to encrypt 1,024 KB of data using AWS Key Management Service, and you are receiving errors when sending the data to the service. Which of the following is a potential cause for not being able to encrypt the data directly with AWS KMS? A. KMS only allows 4 KB of data to be encrypted or decrypted directly without the use of envelope encryption. B. The plaintext data encryption key is using the wrong IAM policy. C. Large datasets can only be encrypted when directly accessed from EBS. D. The AWS KMS key permissions are not configured to allow encryption of datasets above 4 KB in size.

  6. The application security team has asked that you assist in procuring certificates and setting up caching for use in their web applications running in AWS. The infrastructure requires the use of CloudFront, and the applications are spread throughout the US-WEST, AP-SOUTHEAST, and EU-WEST regions. You decide to use AWS Certificate Manager for all web certificates, but when you attempt to locate the certificate for CloudFront you do not see it in the list of available certificates. Which of the following will allow CloudFront to see the certificate for use? A. Certificate Manager is a regional service, and you must ensure that it is enabled and configured in the US-WEST, AP-SOUTHEAST, and EU-WEST regions. B. You cannot use Certificate Manager certificates with CloudFront. You must purchase the certificate from a third party and import it directly within CloudFront. C. CloudFront is a regional service and you will need a different certificate from Certificate Manager for each region. D. The web certificates must be present in the US-EAST region with AWS Certificate Manager or imported for use by CloudFront before they are selectable when configuring CloudFront.

  7. The organization that you work for recently took on a contract to store sensitive proprietary data within S3 buckets for later use by analytical applications. Your CISO is concerned about potential data breaches from developers storing this sensitive data in S3 buckets that are not specified for this project. Which of the following options will provide a report for any sensitive information stored in S3 buckets that should not have this data? A. Use GuardDuty to identify the network traffic that is storing the data in the S3 buckets and report back to Security Hub. B. Use Inspector to evaluate all S3 buckets for sensitive data vulnerabilities and produce reports on which buckets fall out of compliance. C. Use Macie with custom data identifiers to define the criteria to match data stored in S3 buckets and provide reports on which objects and buckets hold noncompliant data. D. Use Detective to scan each object uploaded to S3 by developer accounts and produce a report of findings stored in Security Hub.

  8. The application development group has reached out stating that they are having issues with Secrets Manager when attempting to retrieve a secret for their database connection. The team is attempting to connect using an EC2 instance that was created via an administrator account. When the application attempts to retrieve the secret, the team is presented with an “Unauthorized” error message. Which of the following actions should you check to resolve the issue? A. IAM permissions for the secret. Ensure that the EC2 instance has permissions to access Secrets Manager and the secret. B. AWS Secrets Manager API location. Check to ensure that the web application is using the proper HTTPS API endpoint instead of an HTTP endpoint. C. Verify that the latest version of the AWS SDK is being used by the application. Ensure that the connection string is formatted in JSON. D. Ensure that the web application is retrieving the secret using the AWSCURRENT value and not the AWSPREVIOUS value.

  9. You just received an urgent phone call from several panicked application owners stating that their website is down. You also receive several alerts from your monitoring software that the application has become unresponsive. On investigation you realize that your web application is under a TCP SYN flood attack and you do not have any protection in place to stop this attack. Which services can the application utilize to create a TCP SYN proxy to help mitigate this type of attack in the future and receive help from a specialized team if the attack occurs again? (Choose three.) A. Amazon CloudFront B. Amazon Elastic Load Balancer C. AWS Shield Advanced D. Amazon Inspector E. Amazon Route 53 F. AWS Security Hub

  10. You have been tasked by the CISO to protect all web applications in the production AWS account from SQL injection attacks and cross-site scripting. Which AWS service will you use to accomplish this goal? A. Amazon VPC security groups B. AWS Web Application Firewall C. AWS Network Firewall D. AWS Shield

  11. The web development team has asked you to deploy a method of limiting scanner and crawler traffic coming into the production web applications. Which AWS WAF feature will accomplish this task? A. AWS WAF CAPTCHA B. AWS WAF Account Takeover Prevention C. AWS WAF Bot Control D. AWS WAF client application integration

  12. Your organization has recently acquired another web development company and is in the process of combining AWS account resources. AWS Organizations was chosen to bring in and manage the AWS accounts from the acquisition, but management of the AWS accounts to ensure compliance against security group standards and AWS WAF rule groups is becoming increasingly difficult. Which of the following AWS services will solve this problem? A. AWS Firewall Manager B. AWS Detective C. AWS Web Application Firewall D. AWS Security Hub

  13. Your organization is using Detective and GuardDuty to visualize and investigate potential security issues and findings. What is the first phase of investigation when you receive a notification about a suspected high-risk activity? A. Scoping B. Response C. Triage D. Remediate

  14. Your company has been using AWS WAF for all production web applications for a little over a year. During this time, you have created several custom AWS WAF rule groups that you want to share with other SysOps administrators across your global organization. Which of the following methods allows you to share the rule groups with other AWS accounts? A. Share the rule group using AWS WAF client application integration and exporting the rule sets to a CSV file. B. Share the rule group by entering the AWS account number of the destination account when creating a rule group. C. Share the rule group by selecting Share on the WebACL edit screen in the AWS WAF console. D. Share the rule group using PutPermissionPolicy and the AWS WAF API.

  15. Which of the following AWS services is used to view a global level of aggregated threats over the last day and at an account level a list of DDoS events detected over the last year? A. AWS Shield B. AWS GuardDuty C. AWS Detective D. AWS Security Hub

  16. The security team for your organization has asked for a detailed list of API calls for Secrets Manager used within your organization. The team is looking to validate when a select few secrets were last rotated as part of a recent incident review. What Secrets Manager logs will you pull for the security team? A. Provide the security team with access to CloudWatch and filter metrics based on Secrets Manager. B. Provide the security team with access to Detective and filter findings based on last rotation. C. Provide the security team with access to Security Hub and filter results from Secrets Manager. D. Provide the security team with the CloudTrail logs within the region where Secrets Manager is being used.

  17. You have been contacted by the security team because they are receiving too many findings from Macie in Security Hub. The security team has asked if it is possible to change the frequency of findings being sent into Security Hub from Macie. Which of the following frequencies are supported by Macie? (Choose three.) A. 15 minutes B. 5 minutes C. 1 hour D. 3 hours E. 6 hours F. 30 minutes

  18. You are helping the web development team with creating certificates for their new web applications. The team wants to be able to protect all subdomains for their application under a single certificate from AWS Certificate Manager. Which of the following domain name types will need to be used when requesting the certificates with Certificate Manager? A. Root domain name B. Wildcard domain name C. Bare domain name D. Apex domain name

  19. You have been tasked with generating a new key in AWS Key Management Service for use with a new application hosted on EC2. The key will be used to encrypt and decrypt data within the production AWS account only. Which key type will you select when making the key? A. PKI key type B. Asymmetric key type C. Custom key type D. Symmetric key type

  20. Your organization has decided to standardize on the use of Inspector for all vulnerability scanning of AWS accounts assigned to production, development, and user acceptance testing. You have received reports that several newly deployed EC2 instances are not being checked for CVE vulnerabilities by Inspector, but they are receiving network reachability findings. Which of the following is a possible cause? A. The Systems Manager agent is not running or installed on the EC2 instances. B. Inspector will only scan AWS Elastic Beanstalk applications and containers. C. A security group is blocking access to the CVE item database. D. Inspector is not enabled to scan for CVE vulnerabilities within the same region as the EC2 instances.

Answers

  1. A, D AWS Trusted Advisor offers basic security checks to all AWS accounts but only Enterprise and Business support plans can access all available checks within Trusted Advisor.
  2. C Security Hub allows you to automate remediation actions using EventBridge. Security Hub will automatically send all new findings and updates to existing findings to EventBridge as events. EventBridge events can then be used to perform remedial actions using AWS Lambda or notifications using SNS.
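The event flow described above can be wired up with an EventBridge rule whose pattern matches Security Hub findings. The sketch below shows such a pattern as it might be written; "Security Hub Findings - Imported" is the detail type Security Hub uses for new and updated findings, and the optional severity narrowing assumes the ASFF field names, so verify both against the current documentation.

```python
import json

# Sketch of an EventBridge event pattern matching findings that Security Hub
# forwards as events. The severity filter is optional narrowing; field names
# inside "detail" follow the ASFF finding format.
finding_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["HIGH", "CRITICAL"]}
        }
    },
}

print(json.dumps(finding_pattern, indent=2))
```

A rule with this pattern would then target a Lambda function for remediation or an SNS topic for notification, as the answer describes.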
  3. D GuardDuty S3 protection uses CloudTrail management events and S3 data events to monitor against threats on S3 resources. GuardDuty will generate findings for actions on an S3 bucket such as DeleteBucket and DeleteObject and post these findings in the GuardDuty console.
  4. B There are two finding types used in Inspector: package vulnerability findings and network reachability findings. The network reachability findings look for TCP and UDP ports that are open for resources outside of the VPC edge locations, like the Internet gateway or VPC peering connections. This type of access is considered overly permissive, and Inspector will provide detailed information in a finding about the EC2 instances involved, the ports discovered, and the security groups or access control lists (ACLs) involved.
  5. A The Key Management Service can only encrypt and decrypt up to 4 KB datasets when being directly sent to the service. To encrypt larger datasets, you must use envelope encryption to retrieve a plaintext data key and an encrypted data key that are used to encrypt the file, and then package the file with envelope encryption using the encrypted data key. The encryption process is handled outside of KMS and only uses the AWS KMS API to retrieve the data keys.
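The control flow of envelope encryption can be sketched in plain Python. Everything cryptographic below is a stand-in (the reversal and XOR are placeholders, not real ciphers) so the example only illustrates the shape of the process: a data key is generated, the data key is wrapped by the 4 KB-limited service call, and the bulk data is encrypted client-side with the plaintext key.

```python
import os

KMS_DIRECT_LIMIT = 4096  # the KMS Encrypt API accepts at most 4 KB of plaintext

def fake_kms_encrypt(plaintext: bytes) -> bytes:
    # Stand-in for the KMS Encrypt call; real KMS rejects payloads over 4 KB.
    if len(plaintext) > KMS_DIRECT_LIMIT:
        raise ValueError("plaintext larger than 4096 bytes")
    return plaintext[::-1]  # placeholder "ciphertext", not real cryptography

def envelope_encrypt(data: bytes) -> tuple[bytes, bytes]:
    # GenerateDataKey hands back a plaintext data key plus the same key
    # encrypted under the KMS key; only the encrypted copy is stored.
    data_key = os.urandom(32)
    encrypted_data_key = fake_kms_encrypt(data_key)  # 32 bytes, under the limit
    # Placeholder XOR cipher stands in for client-side AES with the data key.
    ciphertext = bytes(b ^ data_key[i % 32] for i, b in enumerate(data))
    return ciphertext, encrypted_data_key

large = os.urandom(1024 * 1024)  # 1 MB payload, far over the direct-encrypt limit
ciphertext, wrapped_key = envelope_encrypt(large)
```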
  6. D In order to use AWS Certificate Manager certificates in CloudFront, you must either import the certificate in the US-EAST (N. Virginia) region or provision a certificate using Certificate Manager before it can be used. CloudFront-associated Certificate Manager certificates in this region are distributed to all geographic locations configured for CloudFront distribution.
  7. C Macie with custom data identifiers is a great solution to identify any sensitive information stored in objects across nondesignated S3 buckets. Using the custom data identifiers, you can define criteria using regex expressions to match values that hold proprietary data. Once this information is found, you can have Macie generate reports or send alerts using EventBridge.
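A custom data identifier is built around a regular expression like the one below. The "PROJ-1234-SECRET" format is invented for illustration; in Macie you would supply a regex of this kind along with optional keywords and a maximum match distance.

```python
import re

# Hypothetical pattern for a Macie custom data identifier: internal project
# IDs shaped like "PROJ-1234-SECRET" (format invented for illustration).
PROPRIETARY_ID = re.compile(r"\bPROJ-\d{4}-SECRET\b")

sample = "report.csv contains PROJ-8821-SECRET in column 3"
print(PROPRIETARY_ID.findall(sample))  # → ['PROJ-8821-SECRET']
```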
  8. A Secrets Manager requires proper permissions to be set on users, groups, and roles before a secret can be retrieved. In this scenario, an error stating unauthorized indicates that the EC2 instance or application does not have access to retrieve the secret from Secrets Manager. Check the IAM identity-based permissions for the user or check the resource-based policy to verify that the EC2 instance role has permission to access the secret.
  9. A, C, E The application servers are under a TCP SYN flood attack. To stop this type of attack, you need to challenge any new connection requests to your web application and only serve legitimate users. Route 53 and CloudFront have built-in TCP SYN proxy capabilities to remediate this problem. When using Shield Advanced in conjunction with CloudFront and Route 53, you can use the Shield Response Team (SRT) to assist in mitigation of this type of issue if it were to occur again.
  10. B The AWS Web Application Firewall (AWS WAF) is a layer 7 firewall used to protect your web applications from DDoS attacks, SQL injection attacks, and cross-site scripting attacks. You can also allow, block, or count web requests coming into an application based on criteria that you set, such as IP addresses, geographic locations, and HTTP headers.
  11. C AWS WAF offers several optional components to enhance the network and application protection. In this scenario the web application team has asked you to limit bot traffic coming from scanners and crawlers. When using AWS WAF, you can enable the optional Bot Control feature to use managed rule groups to identify common bots, verify desirable bots, and detect high-confidence signatures of bots. You can also monitor, block, or rate-limit bots like crawlers and scanners while allowing beneficial bots like search engines to continue.
  12. A AWS Firewall Manager requires the use of AWS Organizations where you can define an organization Firewall Manager administrator account to apply rule groups and policies to every AWS Organization member. The requirement is to manage compliance against security groups and AWS WAF rule groups, which can be accomplished by using AWS WAF policies and security group policies within Firewall Manager. When setting the criteria for each of the policies, you can define an automated action to remediate any security groups that deviate from the standard policy security groups. You can do the same for AWS WAF rules and rule groups to ensure that every member account has the same rule groups applied and available for applications.
  13. C Detective has three phases of investigation when an alert or notification is received from a potential high-risk or suspected malicious activity. The first phase is Triage, which is when you determine whether a report is a false positive or needs further investigation. The next phase is Scoping, where you determine the extent of the activity and the underlying cause. The final phase is Response, where you remediate the action either by resolving the security threat or by marking the threat as a false positive.
  14. D Within AWS WAF you can share custom rule groups by using the AWS WAF API and the PutPermissionPolicy API call. You can only attach one policy in each PutPermissionPolicy request, and the policy must include an effect, an action, and a principal. You also must ensure that you are sharing the rule group from the account and user that is the owner of the rule group.
  15. A The AWS Shield console provides a global aggregated view of threats over the last day, three days, and the last two weeks. The summary view in the AWS Shield Management Console displays the DDoS events detected by Shield for resources that are eligible for protection by Shield Advanced. Alternatively, you can use the AWS Shield API operation DescribeAttackStatistics to retrieve the account-level details.
  16. D Secrets Manager has two logging methods that can be used to evaluate the behavior of secrets: CloudTrail and CloudWatch. In this case, the security team wants to check the details of when a secret was rotated, and this is stored in CloudTrail within the region where Secrets Manager is used. The security team can access the S3 bucket where the CloudTrail logs are stored to do further analysis using third-party software if needed. CloudTrail will provide event details when a secret is deleted, versioned, or rotated. CloudWatch only provides details on the number of requests against the AWS Secrets Manager API and can be useful in identifying applications that are calling the service too frequently.
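The CloudTrail check described above can be narrowed to rotation activity with a LookupEvents filter on the Secrets Manager RotateSecret event name. A sketch of the request parameters, as you might pass to the lookup-events CLI command or the equivalent SDK call:

```python
# Sketch of a CloudTrail LookupEvents filter surfacing secret rotations.
# RotateSecret is the Secrets Manager API action recorded by CloudTrail.
rotation_lookup = {
    "LookupAttributes": [
        {"AttributeKey": "EventName", "AttributeValue": "RotateSecret"},
    ],
}
```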
  17. A, C, E Macie allows customizable frequencies for when findings are published to Security Hub. You can update the publication setting to fit the needs of the security team by adjusting the findings publication from the default of 15 minutes to either every one hour or every six hours. If you modify the publication timings within one region, you will need to modify every other region where Macie is in use as well.
  18. B When requesting certificates that will need to cover all subdomains of a domain name, you need to use the wildcard domain type. This means that the certificate will show a domain name of *.domainname.com, where the * indicates that all names in the leftmost position of the domain name will be covered under the certificate. The wildcard name will appear in the Subject field and the Subject Alternative Name extension of the AWS Certificate Manager certificate.
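The one-level wildcard rule described above can be sketched as a small matching function: "*.example.com" covers exactly one label in the leftmost position, so subdomains of subdomains and the bare domain are not covered.

```python
# Sketch of ACM-style wildcard coverage: the * stands for a single leftmost
# label only. This models the documented rule, not ACM's actual internals.
def wildcard_covers(cert_name: str, hostname: str) -> bool:
    if not cert_name.startswith("*."):
        return cert_name == hostname
    base = cert_name[2:]                       # e.g. "example.com"
    head, sep, rest = hostname.partition(".")  # split off the leftmost label
    return bool(sep) and head != "" and rest == base

print(wildcard_covers("*.example.com", "www.example.com"))   # → True
print(wildcard_covers("*.example.com", "a.b.example.com"))   # → False
print(wildcard_covers("*.example.com", "example.com"))       # → False
```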
  19. D Key Management Service offers two choices when generating keys: symmetric and asymmetric. The symmetric key type is used when encrypting or decrypting data within an AWS account and requires direct calls to the AWS KMS service. This is a great option to use when an EC2 instance is required to process encryption using the AWS KMS key as it will call the AWS KMS API to accomplish encryption and decryption actions.
  20. A Inspector requires the use of the Systems Manager (SSM) agent to scan EC2 instances for vulnerabilities against the CVE item database. Network reachability scanning is available even when the SSM agent is not installed or running, which means that the EC2 instances not reporting CVE data must have issues with the SSM agent installation or it is not running.

Chapter 5: Compute

Review Questions

  1. Creating which of the following is the first step in setting up EC2 Auto Scaling? A. Auto Scaling Group B. Launch configuration C. Launch template D. Target groups
  2. What is a key metric for monitoring Lambda performance? A. 500 errors B. CPU Utilization C. Network In D. Throttles
  3. What are the three placement group types for EC2? A. Cluster, partition, and spread B. Cluster, shared tenancy, and isolated C. Hardware virtualized, para-virtualized, and bare metal D. Tight, loosely coupled, and normalized
  4. An EC2 instance only incurs cost when in which state? A. Active B. Engaged C. Online D. Running
  5. By default, AWS Compute Optimizer looks at how many days of data to make its recommendations? A. 7 days B. 14 days C. 30 days D. Three months
  6. You have been asked to create a load balancer for a third-party virtual appliance that uses the GENEVE protocol. Which of the following would be the best solution? A. Application load balancer B. Classic Load Balancer C. Gateway load balancer D. Network load balancer
  7. What is the best choice of load balancer optimized for HTTPS network traffic? A. Application load balancer B. Classic Load Balancer C. Gateway load balancer D. Network load balancer
  8. You discover that your client has been using several Classic Load Balancers since they created their AWS account in 2019. What would be your best recommendation to the customer? A. Convert the Classic Load Balancers to network load balancers using a gateway load balancer to ensure traffic is correctly routed during the transition. B. Convert the Classic Load Balancers to network or application load balancers. C. Retain the Classic Load Balancers and provision elastic load balancing auto scaling to automatically add more load balancers to meet demand. D. Retain the Classic Load Balancers and submit a request to increase the throughput quotas on the Classic Load Balancers.
  9. In auto scaling, what does the desired capacity refer to? A. The average capacity that the customer expects to need over the next billing cycle B. The capacity that the customer expects to need over the next billing cycle C. The initial capacity of the Auto Scaling Group that the system will attempt to maintain D. The lowest capacity of the Auto Scaling Group at which the workload is still able to perform
  10. What is the minimum billing for an EC2 instance? A. 1 hour B. 1 second C. 24 hours D. 60 seconds
  11. Your customer has asked you to create an isolated EC2 compute environment with cryptographic attestation to process healthcare data. Which feature best meets this requirement? A. Amazon Elastic Inference B. Instance store volumes C. Nitro enclaves D. Partitioned placement groups
  12. You have been asked to load-balance a workload of TCP traffic. Which of the following is the best solution for your client? A. Application load balancer B. Classic Load Balancer C. Gateway load balancer D. Network load balancer
  13. What are the three important capacity limits of auto scaling? (Choose three.) A. Desired capacity B. Maximum size C. Minimum size D. Optimal capacity
  14. You have noticed that during peak demand your Lambda function is being throttled. You suspect you may be exceeding your concurrency quota. Which of the following is the best metric for determining if the concurrency limits need to be increased? A. Dead letter errors B. Errors C. Invocations D. ConcurrencyQuota
  15. You have a workload composed of several EC2 instances. You wish to keep the average CPU utilization of the workload at or near 60 percent. Which of the following will most efficiently keep your workload as close as possible to the desired utilization? A. Manual scaling B. Simple scaling C. Step scaling D. Target tracking
  16. AWS Lambda scales automatically to meet demand. By default, Lambda will scale up to the soft concurrency limit. What is the concurrency limit? A. 100 B. 1,000 C. 10,000 D. 1,000,000
  17. Your workload spikes every Thursday evening while batch processing runs, and processes are frequently throttled as soon as processing begins. Which of the following scaling methods will most effectively solve this problem? A. Predictive scaling B. Simple scaling C. Step scaling D. Target tracking
  18. EC2 instances have a life cycle. Which of the following are the four principal states of an instance? A. Pending, running, shutting down, terminated B. Preparing, engaged, shutting down, terminated C. Provisioning, provisioned, deprovisioning, decommissioned D. Starting, running, terminating, stopped
  19. Your client uses a variety of compute resources—EC2, Lambda, and Fargate—with frequent changes in instance sizes and operating systems. Which pricing model would you recommend to them in order to optimize cost? A. Dedicated instances B. Reserved instances C. Savings plan D. Spot instances
  20. Which of the following is a burstable EC2 instance type? A. C B. D C. M D. T

Answers

  1. C Launch templates are now recommended over launch configurations. Target groups are for load balancers. Auto Scaling Groups are set up after the launch configuration is defined.
  2. D The error codes in the 500 range indicate a server problem. However, these are HTTP error codes and are typically captured at the application level. CPUUtilization and NetworkIn are very common EC2 metrics but are not applicable to Lambda. Throttles is a critical metric for Lambda and indicates invocations exceeding concurrency. Consider provisioned concurrency, a quota increase, or both.
  3. A The placement groups for EC2 are cluster, partition, and spread placement. Each of these has a use case that you should be familiar with.
  4. D Be familiar with the life cycle of an EC2 instance. In addition to billing, the life cycle determines persistence of data. Be sure to understand how data persists on EBS-backed instances versus instance store–backed and what the EC2 life cycle looks like using an instance store.
  5. B By default, Compute Optimizer looks at the past 14 days of data to make a recommendation. Enhanced Infrastructure Metrics is a paid feature that looks at up to three months of data.
  6. C The gateway load balancer is specifically designed for use with third-party appliances such as next-generation firewalls (NGFWs) and web application firewalls (WAFs) that use the GENEVE protocol on port 6081.
  7. A The application load balancer is optimized for HTTP and HTTPS traffic.
  8. B Classic Load Balancers are deprecated, and AWS recommends migrating to Network or Application Load Balancers, which bring performance and cost optimizations. Elastic Load Balancers (ELBs) of all types are managed services that scale automatically, so they do not need Auto Scaling and their throughput does not require manual quota adjustments. A Gateway Load Balancer (GLB) is intended for third-party appliances using the GENEVE protocol and is not a migration target for a Classic Load Balancer.
  9. C There are three limits that are set for an Auto Scaling Group: the minimum, desired, and maximum capacities. The minimum is the smallest acceptable group size. The maximum is as large as the group will be allowed to scale. The desired is the initial size of the group. Auto scaling then attempts to maintain that size. When demand causes the group to scale out, Auto Scaling will then scale in at the end of the event back to the desired capacity.
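The relationship among the three limits can be sketched as a simple clamp: whatever size a scaling policy requests, the group's effective desired capacity always stays between the minimum and maximum. A minimal illustration (not AWS code, just the arithmetic the service enforces):

```python
def effective_capacity(requested: int, minimum: int, maximum: int) -> int:
    """Clamp a requested desired capacity to the group's min/max limits."""
    return max(minimum, min(requested, maximum))

print(effective_capacity(4, 2, 10))   # within limits: 4
print(effective_capacity(15, 2, 10))  # capped at the maximum: 10
print(effective_capacity(1, 2, 10))   # raised to the minimum: 2
```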
  10. D EC2 is billed in 1-second increments after the first 60 seconds. The minimum you would be billed for is 60 seconds.
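The 60-second minimum can be expressed directly. This sketch assumes per-second billing after the first minute, as applies to Linux on-demand instances:

```python
def billed_seconds(runtime_seconds: int) -> int:
    """EC2 per-second billing with a 60-second minimum charge."""
    return max(60, runtime_seconds)

print(billed_seconds(10))   # a 10-second run still bills 60 seconds
print(billed_seconds(300))  # a 5-minute run bills exactly 300 seconds
```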
  11. C Nitro enclaves leverage the Nitro hypervisor components to provide an isolated and hardened compute environment on EC2. A feature of Nitro enclaves is cryptographic attestation, which allows for identification of the enclave and assures that only authorized code is run in the environment. Instance store refers to the boot device, which is either EBS-backed or an ephemeral instance store. Amazon Elastic Inference provides low-cost GPU acceleration on EC2 and SageMaker instances. A partitioned placement group is one of three placement group options.
  12. D Application load balancers handle the HTTP/HTTPS protocols. A gateway load balancer handles the GENEVE protocol. Classic Load Balancers are no longer recommended. Network load balancers handle the TCP protocol.
  13. A, B, C The three limits are desired, maximum, and minimum.
  14. D Each of these metrics can indicate resource configuration issues, but the ConcurrentExecutions metric reports the number of concurrent executions. When concurrent executions reach the account quota or the function's reserved concurrency, further requests are throttled.
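A rough way to act on this metric is to compare the observed peak against the quota. The sample datapoints and the 80 percent threshold below are illustrative assumptions, not AWS defaults:

```python
def needs_quota_increase(concurrent_samples, quota, threshold=0.8):
    """Flag when peak ConcurrentExecutions nears the account quota."""
    return max(concurrent_samples) >= threshold * quota

# Hypothetical datapoints against the default 1,000 soft limit:
print(needs_quota_increase([120, 450, 910], quota=1000))  # True
print(needs_quota_increase([120, 450, 610], quota=1000))  # False
```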
  15. D Step scaling is usually an improvement over simple scaling since step scaling can respond to events as they happen without needing to wait for health check replacements and cooldown periods. Target tracking is normally recommended over step scaling since it can stay closer to the desired target value than step scaling. While manual scaling has its place, it is not the best option for automatically scaling this type of workload.
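Target tracking behaves roughly like a proportional controller: capacity scales with the ratio of the current metric to the target. The formula below is a simplification of that behavior for illustration, not the exact AWS algorithm:

```python
import math

def target_tracking_capacity(current_capacity, current_metric, target):
    """Approximate capacity needed to bring the metric back to target."""
    return max(1, math.ceil(current_capacity * current_metric / target))

# Four instances averaging 90% CPU against a 60% target scale out to six;
# six instances averaging 30% CPU scale back in to three.
print(target_tracking_capacity(4, 90.0, 60.0))
print(target_tracking_capacity(6, 30.0, 60.0))
```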
  16. B The soft concurrency limit of AWS Lambda is 1,000. If more concurrent invocations are required, a request can be made to increase the quota.
  17. A While simple, step, and target tracking scaling will scale out the workload, they only begin scaling after the metric indicates a problem. Predictive scaling anticipates the event based on historical data and scales out ahead of the Thursday evening batch processing so that throttling is avoided.
  18. A The states of an EBS-backed EC2 instance are pending, running (rebooting), shutting down (stopping, stopped), and terminated.
  19. C Dedicated instances are typically more expensive for comparable compute power and would not be recommended for cost optimization. Reserved instances and Spot instances offer significant savings. However, savings plans allow for flexibility across EC2, Lambda, and Fargate as well as many dimensions, such as operating system, instance size and family, and tenancy.
  20. D C instances are compute optimized, D instances are dense-storage optimized, and M and T are both general-purpose instance families. However, T instance types are the only instance types that are burstable.

Chapter 6: Storage, Migration, and Transfer

Review Questions

  1. Which of the following Amazon Simple Storage Service (S3) features allows an organization to scale delivery of temporary HTML pages featuring downloadable podcasts? A. Amazon S3 multipart upload B. Amazon S3 Transfer Acceleration C. Amazon S3 static website hosting D. Amazon S3 cross-region replication

  2. You work for an organization that requires that any content available in an S3 bucket replicate to two or more additional regions for security and compliance. The solutions architect implemented cross-region replication (CRR) on the source S3 bucket, and all new files uploaded to the bucket are replicating correctly. However, there have been reports that the destination buckets are missing files found in the origin buckets. As the SysOps administrator, you must assist in solving this problem. Which of the following is the best solution for ensuring new and existing files replicate to the destination buckets? A. Create a script using the AWS SDK and Amazon S3 REST API to copy the existing files from the origin bucket into the destination bucket before enabling cross-region replication. B. Ensure that the IAM permissions and bucket policy configurations allow full bucket replication from the origin bucket. C. Turn on the full bucket replication feature in the properties for the origin bucket. D. Use Amazon S3 Batch Replication to backfill the existing data of the origin bucket into the new destination buckets.

  3. You receive reports from several clients that their uploads into the project Amazon S3 buckets have been slow and unreliable. The clients report the file size to average over 3 GB in size per upload and they have been uploading the files while working on-site in a remote location. Which of the following solutions will best address the upload speed and reliability issue when uploading objects into Amazon S3? (Choose two.)

A. Enable S3 Transfer Acceleration for the destination bucket and direct the clients to upload files to the https://s3-accelerate.amazonaws.com endpoint. B. Enable Amazon CloudFront for the project and direct the clients to upload directly to the Amazon CloudFront edge location closest to the remote location. C. Instruct the client that the file size is too large and to reduce the file to 1 GB or smaller to avoid reliability and upload performance issues. D. Instruct the client to only use the aws s3 cp CLI commands with the dual-stack (IPv6) endpoints to ensure the fastest speeds and reliability. E. Instruct the client to use multipart uploads for large file uploads into the destination buckets.

  4. Your organization has recently decided to not renew the maintenance contract for the data storage array used for long-term storage of financial records. The organization must maintain all financial records for a minimum of 10 years, and the records must be available for audit within 48 hours. Which of the following AWS storage services is the most cost-effective option to replace the long-term storage needs? A. Amazon S3 B. Amazon S3 Glacier Deep Archive C. Amazon FSx D. Amazon EFS

  5. You are designing a storage backup solution that includes the use of Amazon S3 buckets for general file storage and project files. The organization has asked for you to cost-optimize the storage solution and look for ways to cut costs while maintaining durability of the backed-up files. The files only require access once or twice per quarter as part of a standard audit process. Beyond the audit period, the files do not require frequent access. However, when the auditors request the files, they must be immediately available. Which of the following options provides the most cost-effective solution while maintaining the access required for auditors? A. Use the Amazon S3 Standard-IA storage class. B. Use the Amazon S3 Glacier Flexible Retrieval storage class. C. Use the Amazon S3 One Zone-IA storage class. D. Use the Amazon S3 Glacier Instant Retrieval storage class.

  6. Your organization heavily invests in the use of Amazon EC2 instances to support web applications and critical business applications in the cloud. It has come to your attention that backups for all Amazon EC2 instances are manual and use Amazon EBS Snapshots at a random occurrence whenever the storage administrators think of doing it. As the new SysOps administrator for the organization, you suggest automating the Amazon EBS Snapshot process for these critical systems to enforce a disaster recovery action plan. Which of the following options will accomplish the goal of automating EBS Snapshots and avoiding data loss? A. Use AWS Systems Manager to automate the EBS volume snapshots each time the EC2 instance restarts. B. Create a script to create a new AMI of each Amazon EC2 instance on a defined schedule. C. Use Amazon Data Lifecycle Manager to automate the snapshot process. D. Create a script to create EBS Snapshots of each EBS volume present in the AWS account on a defined schedule.

  7. You have received reports from several application owners that data reads are slow for an application hosted on an Amazon EC2 instance. You suspect that the EBS volumes are receiving too many read requests for data and performance is suffering. Which of the following CloudWatch metrics can you use to verify read performance for the attached EBS volumes? (Choose two.) A. VolumeQueueLength B. VolumeWriteBytes C. VolumeReadOps D. VolumeWriteOps E. BurstBalance

  8. You oversee monitoring of performance for several production data conversion systems running on Amazon EC2 instances. Recently the data engineers reported below normal write and read speeds coming from several application servers. Each application server is a t3.large using gp2 EBS volumes for the operating system volume and st1 volumes for the data processing volumes. You are concerned that the volumes are throttling. Which Amazon CloudWatch EBS volume metric will confirm EBS volume throttling? A. VolumeQueueLength B. VolumeWriteOps C. VolumeReadBytes D. BurstBalance

  9. You recently deployed an Amazon EFS filesystem to handle user home directories for several Linux-based applications hosted across four separate availability zones in your VPC. When attempting to mount the EFS system on each EC2 instance, you find that you are only able to create a mount for EC2 instances in the us-east-1a availability zone. What could be the possible issue preventing other EC2 instances from mounting the new EFS storage system? A. When configuring the Amazon EFS storage system, only one Amazon EC2 instance is selected as a target. B. The Amazon EFS storage system does not have the proper security groups in place. C. When configuring the Amazon EFS storage system, the option for One Zone storage class is in use. D. The Amazon EFS storage system does not have the proper EC2 instance role policies configured to connect to multiple Amazon EC2 instances.

  10. Your organization has successfully implemented and is actively using the new Amazon EFS configuration for six months to share media processing files between production and editing. The production team has recently expanded globally and will be sharing the workload between the Washington, DC and London locations. The London location is worried about latency when connecting to the Washington, DC file share when processing production files and has asked you to evaluate a potential solution. Which of the following options solves the latency issue for the London office? A. Configure Amazon EFS replication between the US-EAST-1 and EU-WEST-2 regions. B. Configure bastion hosts locally in the EU-WEST-2 region for the London office and have all production staff use this for processing. C. Configure an AWS Direct Connect connection to the local London office to reduce latency when connecting to the Amazon EFS file share. D. Configure Amazon EFS mount targets locally in a new VPC located in EU-WEST-2 to locally mount production processing systems to the Amazon EFS mount target.

  11. You are assisting the high-performance computing (HPC) department with migration of their data and systems into AWS. The director of the lab has asked for recommendations on a scalable, secure, and fault-tolerant storage solution in AWS to support the parallel filesystems. The department does not have the funds to manage the storage system and is looking for the lowest-cost solution. Which of the following storage solutions would you recommend to the director? A. Amazon EBS B. Amazon S3 C. Amazon FSx for Lustre D. Amazon FSx for NetApp ONTAP

  12. Most of the applications and systems within your organization are now running in the AWS Cloud. The director of IT is concerned that the large investment in the current SAN technologies will go to waste now that most data is moving into AWS. The director has asked for a way to run a hybrid infrastructure where on-premises servers can still connect using the iSCSI protocol, and data is available in both AWS and the on-premises SAN. Which AWS storage solution would you recommend? A. Amazon EBS with Windows File Server cluster B. Amazon FSx for NetApp ONTAP with AWS Storage Gateway File Gateway C. Amazon FSx for Windows File Server D. Amazon EFS with AWS Direct Connect and VPN

  13. You receive a task to create a disaster recovery and business continuity plan for your organization’s storage hosted on Amazon S3. The IT director requires that any solution must support point-in-time recovery (PITR) for immediate restoration of any data. Which of the following solutions would you recommend? A. Amazon S3 Batch Replication B. Amazon S3 versioning C. Amazon Data Lifecycle Manager D. AWS Backup

  14. You have chosen AWS Backup as the primary component of the disaster and recovery strategy within your organization. You are using AWS Backup to perform daily backups of four Amazon S3 buckets, but the backup jobs are reporting a failure. On further investigation it appears that the AWS Backup job is failing because it cannot access the Amazon S3 buckets to back up files. What is the cause of this error? A. The backup plan is using an IAM role that does not include the necessary in-line IAM policies. B. The Amazon S3 buckets are not located in the same AWS region where the backup plan is running. C. The AWS Backup plan does not have PITR enabled for the Amazon S3 buckets. D. The backup plan within AWS Backup requires an AWS PrivateLink configured to back up Amazon S3 buckets.

  15. You receive a task to assist with a major datacenter consolidation project that includes moving several petabytes of archived data into the AWS Cloud. The organization uses a tape backup system to maintain weekly and monthly incremental backups of all on-premises servers and storage systems. The CIO is concerned that moving to another backup solution would cause too much training overhead and potential outages as people sort out the new operational processes. Which AWS solution would you recommend to the CIO? A. Amazon S3 Glacier Deep Archive B. AWS Storage Gateway Volume Gateway C. AWS Storage Gateway Tape Gateway D. AWS Backup

  16. Your organization is in the process of migrating hundreds of gigabytes of application data into Amazon S3. You must help to develop a solution that allows on-premises application servers to directly interface with the data now hosted in Amazon S3 using the NFS protocol. Which of the following solutions accomplishes this goal? A. Implement AWS Storage Gateway File Gateway. B. Implement Amazon EFS. C. Implement AWS Storage Gateway File Gateway with Amazon FSx for Windows File Share. D. Implement AWS DataSync to synchronize files between Amazon S3 and on-premises.

  17. You are the SysOps administrator for an organization that has recently started a new data analytics division. The new director of analytics has asked you to configure a data lake within AWS. The director has also asked to have a local copy of the datasets, which incrementally updates if they want to perform localized analytics from the existing on-premises hardware. Which solution would you recommend to the director? A. Implement AWS Backup to incrementally back up and restore data from Amazon S3 onto on-premises storage. B. Implement Amazon EFS to share data volumes with on-premises. C. Implement Amazon FSx for Windows File Server to enable the SMB protocol and share storage with local servers. D. Implement AWS DataSync to copy data between on-premises and Amazon S3.

  18. Your organization has tasked you with migrating data from on-premises and a third-party cloud provider into AWS. The CIO is concerned that data between the third-party cloud and AWS will not be in sync, resulting in application downtime while the migration is underway. Which AWS service would you use to accomplish this migration task? A. AWS DataSync B. AWS Transfer Family C. AWS Storage Gateway D. AWS Backup

  19. Your organization is undergoing an application modernization effort and focusing on decommissioning and consolidating applications on-premises into a new AWS environment. Several applications require the use of Secure Shell File Transfer Protocol (SFTP) to move files between the application server and the customer. To reduce cost and assist with consolidation, you want to move all SFTP servers into AWS. Which AWS service provides the most scalable and cost-effective solution? A. Amazon EFS B. AWS Transfer Family C. AWS DataSync D. Amazon EC2

  20. You work as a SysOps administrator for a data collection and processing organization. Every day a series of FTPS servers receive client information and store local copies for processing by another server. Once a day a data clerk accesses the server to categorize the data into different data classifications, including identifying PII data for processing. During a staff meeting, the manager for the application mentioned that the clerk responsible for completing the data classification is no longer able to complete the work. After some discussion, you offered to develop an automated solution to the problem and migrate the service into AWS. Which of the following solutions meets the needs for this migration without needing additional administrative overhead? A. Rearchitect the application to use Amazon S3 and apply tags for data classification at upload. B. Create a new Amazon EC2 FTPS instance and use AWS Lambda functions to process data classification. C. Implement AWS Transfer Family for FTPS and configure a data classification managed workflow and tagging strategy. D. Migrate the current FTPS service to an Amazon EC2 instance and use AWS Lambda functions to apply data classification tags to uploaded data.

Answers

  1. C Amazon S3 static website hosting can serve the temporary HTML pages with a download link to the S3 object for the podcasts that are temporarily available. The result is a scalable temporary solution for hosting the podcast and the website, without the need for compute resources.

  2. D Amazon S3 cross-region replication and same-region replication have a limitation when working with existing objects and buckets. Cross-region replication takes care of any new object uploads into the bucket by replicating the changes to the destination bucket(s); however, objects that existed before replication was enabled are not copied. To accomplish this task, the best option is to use Amazon S3 Batch Replication to backfill the destination buckets with the existing objects from the origin bucket. This is a one-time replication, required for each destination bucket, to bring it to parity with the existing source bucket.
  3. A, E When uploading objects into an Amazon S3 bucket, larger files can be sensitive to network disruptions and slower Internet speeds. Enabling S3 Transfer Acceleration for the buckets and directing the clients to use the transfer acceleration endpoints will send uploads to the nearest edge location, which drastically increases upload speed and reduces latency. Using multipart uploads for larger files also increases upload speed, as smaller chunks of the overall file upload in parallel. Multipart uploads are also a terrific way to handle network connectivity issues, since you can pause an upload or restart from the last part in progress. This option allows the file size to remain the same while increasing performance and reliability of the upload process. When paired with S3 Transfer Acceleration, you can achieve the best customer experience.
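Multipart sizing follows two S3 limits: every part except the last must be at least 5 MiB, and an upload can have at most 10,000 parts. The sketch below shows one way to choose a part size for a given object; the 8 MiB starting point is an arbitrary assumption, not an S3 requirement:

```python
MIB = 1024 * 1024
MIN_PART_SIZE = 5 * MIB   # S3 minimum for all parts except the last
MAX_PARTS = 10_000        # S3 maximum part count per upload

def choose_part_size(object_size: int, preferred: int = 8 * MIB) -> int:
    """Smallest part size >= preferred that stays within 10,000 parts."""
    size = max(preferred, MIN_PART_SIZE)
    while object_size > size * MAX_PARTS:
        size *= 2
    return size

# A 3 GiB upload fits comfortably into 8 MiB parts (384 of them):
print(choose_part_size(3 * 1024 * MIB) // MIB)  # 8
```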
  4. B In this scenario your organization is looking for a cost-effective solution for its long-term archival and storage needs. It decided to forgo the maintenance contract on the on-premises storage and needs a suitable replacement that still makes files available within 48 hours to meet audit compliance requirements. The 10-year retention requirement, the 48-hour recovery window, and the need for the most cost-effective solution all point to Amazon S3 Glacier Deep Archive. Data retrieval from Amazon S3 Glacier Deep Archive completes within 48 hours using the bulk retrieval option, or, if the organization needs data faster, it can pay more for the standard retrieval option, which retrieves files within 12 hours. Amazon S3 Glacier Deep Archive offers long-term cold storage for use cases such as storing financial documents, healthcare records, or other data subject to compliance requirements dictating how long files must be available.
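The decision hinges on retrieval windows: Deep Archive bulk retrievals complete within about 48 hours and standard retrievals within about 12 hours. A sketch of picking the cheapest option that meets a deadline (the tier names and ordering are illustrative labels, not API values):

```python
# (tier, typical max retrieval hours), ordered cheapest first, per the
# published S3 Glacier Deep Archive retrieval options.
RETRIEVAL_OPTIONS = [("bulk", 48), ("standard", 12)]

def cheapest_retrieval(deadline_hours: int):
    """Pick the cheapest Deep Archive retrieval option meeting a deadline."""
    for tier, hours in RETRIEVAL_OPTIONS:
        if hours <= deadline_hours:
            return tier
    return None  # Deep Archive cannot meet this deadline

print(cheapest_retrieval(48))  # bulk satisfies a 48-hour audit window
print(cheapest_retrieval(24))  # standard is needed for a 24-hour window
```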
  5. D In this scenario the organization wants to reduce storage backup costs while maintaining instant access to any files recovered from the system, along with the same durability and benefits of Amazon S3. The Amazon S3 Glacier Instant Retrieval storage class meets these needs: it matches the durability of Amazon S3 Standard while reducing overall storage costs, and it returns requested files in milliseconds, offering retrieval times similar to Amazon S3 Standard. The S3 Standard-IA and S3 One Zone-IA options are not the most cost-effective choices for long-term storage, and One Zone-IA also reduces resilience by storing data in a single availability zone.
  6. C In this scenario the organization is looking for a cost-effective way to back up the EBS volumes in its environment, with concern over automation and the potential for data loss from the previous manual process. To best address automation and recovery of EBS Snapshots, use Amazon Data Lifecycle Manager. Amazon Data Lifecycle Manager automates the creation, retention, and deletion of EBS Snapshots and EBS-backed AMIs, allowing a scheduled backup process that supports disaster recovery and avoids potential data loss. The other options introduce lag into the backup process, which risks missing data during a restoration, whereas Data Lifecycle Manager makes incremental backups that reduce cost and limit the data lost during restoration.
  7. A, C Identifying EBS read performance starts with evaluating the Amazon CloudWatch VolumeQueueLength metric, which indicates a bottleneck on either the guest operating system or the network link to the EBS volume. The next metric to check is VolumeReadOps, which shows how many read operations the EBS volume is receiving; it helps identify an I/O size or throughput issue between the guest operating system and the EBS volume. Too many read requests on a standard, non-provisioned IOPS volume can lead to queuing and slow read performance.
  8. D In this scenario the data engineers are reporting below-normal write and read speeds, which is a strong indicator that the volume is throttling. The EBS volumes in this deployment are gp2 and st1 volume types, which both use a burst bucket of I/O credits to maintain performance above the volume's baseline. A depleted BurstBalance metric for the volume confirms that the burst credits are exhausted, which results in low performance for the EBS volume.
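gp2 burst behavior is simple arithmetic: the volume earns credits at its baseline rate (3 IOPS per GiB, with a floor of 100) into a 5.4-million-credit bucket and spends them at the actual I/O rate. A sketch of estimating when BurstBalance reaches zero under a sustained load (gp2 only; st1 uses throughput credits instead of IOPS):

```python
CREDIT_BUCKET = 5_400_000  # gp2 I/O credit bucket size

def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline: 3 IOPS per GiB, floor of 100, cap of 16,000."""
    return min(max(100, 3 * size_gib), 16_000)

def seconds_until_throttle(size_gib: int, workload_iops: int) -> float:
    """Seconds a full credit bucket sustains a workload above baseline."""
    drain = workload_iops - gp2_baseline_iops(size_gib)
    if drain <= 0:
        return float("inf")  # at or below baseline, credits never deplete
    return CREDIT_BUCKET / drain

# A 100 GiB gp2 volume (300 baseline IOPS) under a sustained 3,000 IOPS
# load exhausts its burst balance in 2,000 seconds (about 33 minutes):
print(seconds_until_throttle(100, 3_000))
```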
  9. C In this scenario only EC2 instances in a single availability zone can mount the Amazon EFS storage system. It is likely that the Amazon EFS filesystem is using the One Zone storage class, which limits Amazon EFS to a single mount target in one availability zone, compared to multiple mount targets when using Amazon EFS Standard. The fact that the instances in us-east-1a can connect means the filesystem itself is working; there are simply no mount targets available in the other availability zones of the VPC, so neither Amazon EC2 instance roles nor security group configurations will resolve this issue.
  10. A In this scenario the teams access the same files across the Washington, DC and London offices for production processing. The files must stay in sync, and the London office is concerned about latency when connecting to the Washington, DC (US-EAST-1) services. To reduce latency and allow the London office to work on files in the Amazon EFS share the same way the Washington, DC office does, you can enable Amazon EFS replication. Amazon EFS replication automatically and transparently replicates the data and metadata in the Amazon EFS filesystem to the new destination. The files stay in sync, so the London team can work directly on the files while the Washington, DC team sees the changes as they replicate.
  11. C In this scenario the director has asked for a managed service, as the department cannot fund storage management, and for a storage solution that supports parallel filesystems. The entire department focuses on HPC, which makes it a candidate for Amazon FSx for Lustre to manage its parallel filesystems. Using Amazon EBS or Amazon S3 would increase complexity and cost and would be an unmanaged solution. Amazon FSx for NetApp ONTAP is not a consideration because it is not the Amazon FSx filesystem designed for parallel workloads.
  12. B The key indicator in this scenario is that the director wants to maintain the current SAN infrastructure in a hybrid deployment while connecting servers using iSCSI. This immediately eliminates Amazon EBS with a Windows File Server cluster and Amazon FSx for Windows File Server, since their connection protocol is SMB. Amazon EFS with AWS Direct Connect and VPN would use NFS and be a costly solution. The Amazon FSx for NetApp ONTAP solution with AWS Storage Gateway File Gateway enables local access to data for on-premises servers. This means the SAN stays in use and data replicates to the FSx filesystem, with an eventual full-time migration to Amazon FSx for NetApp ONTAP once the maintenance contract and lifespan of the current SAN expire.
  13. D Data for the organization is already located in Amazon S3, which rules out Amazon Data Lifecycle Manager, as it applies to EBS Snapshots and EBS-backed AMIs. Amazon S3 Batch Replication is not the correct solution, as it only replicates existing objects into another destination, usually across regions. Amazon S3 versioning is a recommended best practice when combined with a life-cycle expiration period, and it is a requirement when using AWS Backup; however, restoring to a specific point in time with S3 versioning alone would cause increased administrative overhead. AWS Backup is the correct solution because it natively supports PITR for Amazon S3 and Amazon RDS backups while providing a centrally managed location to configure and restore the backups when needed.
  14. A In this scenario AWS Backup cannot successfully complete a scheduled daily backup and reports an error stating it cannot access the buckets. The first thing to check is that all IAM permissions are correct for the role assigned to the backup plan. The IAM role must have two in-line IAM policies attached to enable backup and restore operations in the S3 buckets. It does not matter where the Amazon S3 buckets are located or whether AWS PrivateLink is enabled and in use, and enabling PITR for the backup plan associated with the Amazon S3 bucket would not help, because the IAM permissions are the issue.
  15. C The key indicator in this scenario is the need to continue using a tape backup, or tape-like backup, system to avoid retraining individuals on a completely new system. AWS Storage Gateway Tape Gateway allows the creation of virtual tape libraries that function just like the on-premises system, which alleviates the need for retraining and only requires configuration of the virtual tape libraries and the Tape Gateway. Amazon S3 Glacier Deep Archive is a component of a good backup strategy with AWS Storage Gateway Tape Gateway, but it is not the complete solution. AWS Storage Gateway Volume Gateway and AWS Backup do not address the needs of the CIO and would require a new backup process and training for the teams.
  16. A The solution to this problem requires the NFS file-sharing protocol and the ability to keep data in Amazon S3 while on-premises servers still have access to it. Implementing AWS DataSync is not an option, as it only synchronizes data and requires on-premises data stores to remain present during the migration. Amazon EFS is not an option, as it would require additional copying of data between Amazon S3 and the EFS filesystem. AWS Storage Gateway File Gateway with Amazon FSx for Windows File Share uses the SMB protocol instead of NFS, which does not meet the requirements of this solution. The remaining option is implementing AWS Storage Gateway File Gateway, which enables NFS file shares while maintaining the data within Amazon S3.
  16. D This scenario is a common use case for AWS DataSync. The director wishes to use the power of AWS to perform analytics using a data lake, but also wants to have a localized copy of the data. You can configure AWS DataSync to transfer entire on-premises datasets into Amazon S3 while synchronizing data between on-premises and the Amazon S3 bucket. AWS Backup does not accomplish the synchronization requirement, and Amazon FSx for Windows File Server does not meet the needs of synchronization and assumes that all servers are Microsoft Windows-based. Amazon EFS is not a correct solution because it does not provide a local copy of the data.
  17. A In this scenario the CIO wants to keep data synchronized between the third-party cloud provider, on-premises, and the new AWS environment during a migration. The CIO is also concerned that downtime may occur due to copy methods. This is a common use case for AWS DataSync, which provides connections to existing storage systems, like third-party cloud providers and on-premises systems, to migrate data to AWS storage services like Amazon S3, Amazon EFS, or Amazon FSx. AWS Transfer Family is not the proper solution as the scenario does not mention a migration of SFTP, FTP, or FTPS systems. AWS Storage Gateway and AWS Backup do not accomplish the data synchronization required for this migration without additional scripting or customization of implementation.
  18. B In this scenario the organization is looking for a scalable and cost-effective solution to migrate SFTP services from on-premises to the AWS Cloud. This automatically eliminates the option of using Amazon EFS and AWS DataSync as they do not offer a method of enabling SFTP. Amazon EC2 is a potential option but would require custom configuration of an SFTP server on Amazon EC2, including the need for configuring scaling using Auto Scaling. This increases the overall cost and complexity of the solution. The most cost-effective and scalable option is to use AWS Transfer Family, which is a managed service that lets you configure an SFTP service that scales to meet customer demand; AWS manages the underlying infrastructure.
  19. C In this scenario it is important that a solution minimize the amount of administrative overhead, which means reducing complexity and the need for manual interactions. Rearchitecting the application and migrating the current FTPS service both require extensive administrative overhead or manual interaction. Creating a new FTPS service using Amazon EC2 and AWS Lambda would accomplish the task, but it requires additional administration to maintain the Amazon EC2 instance and AWS Lambda components. The most scalable and administrative overhead–neutral solution is to use AWS Transfer Family configured for FTPS and utilize managed workflows to classify incoming data and scan for potential PII. Using the managed workflows, you can configure different actions depending on the data classification and eliminate the need for manual data entry or tagging.

Chapter 7: Databases

Review Questions

  1. Your customer has asked you to improve the performance of their RDS instance. Their database is consistently under a heavy load due to very large analysis and reporting workloads. Which of the following would be the best solution? A. Create a CloudFront distribution. B. Implement RDS Read Replicas. C. Migrate to Provisioned IOPS SSDs (io1). D. Scale up the primary RDS instance.
  2. Which of the following database engines are supported by Amazon Relational Database Service (RDS)? A. MySQL, DynamoDB, MariaDB, Oracle, PostgreSQL, SQL Server B. MySQL, SQLite, Oracle, PostgreSQL, Amazon Aurora C. MySQL, Oracle, PostgreSQL, SQL Server, MariaDB, Amazon Aurora D. MySQL, Oracle, PostgreSQL, SQL Server, MariaDB, SQLite
  3. ElastiCache supports which two in-memory cache options? (Choose two.) A. Apache Ignite B. DAX C. Ehcache D. Memcached E. Redis
  4. The two strategies for cache loading include which of the following? (Choose two.) A. Arbitrary acquisition B. First-in, first-out (FIFO) C. Lazy loading D. Least effort load E. Write-through
  5. Your customer has asked you to migrate a SQL Server database to AWS. This database will handle heavy read traffic globally but has customizations at the operating system level. Which of the following would be the best solution for your customer? A. Amazon EC2 running SQL Server B. Amazon Redshift for SQL Server C. Amazon Relational Database Service (RDS) D. Amazon Relational Database Service (RDS) Custom
  6. Which of the following RDS features can best give visibility into load and bottleneck issues on a MariaDB RDS instance? A. CloudWatch Performance Alerts B. CloudWatch C. CloudTrail D. Performance Insights
  7. To improve the security of an RDS instance connected to EC2 instances, where should the RDS instance be placed? A. In a DB subnet group and connected to your EC2 instances using the DB DNS name B. In a subnet and connected to your EC2 instance using a bastion host C. In the same subnet as the EC2 instances using IPv6 routing inside the subnet D. Inside its own DB VPC connected to your EC2 instances using a PrivateLink
  8. Automated backups in RDS can be disabled by changing what setting? A. Changing the Automated Database Backup setting from Enabled to Disabled B. Setting the retention period to 0 C. Deselecting the Enable Automated Backup setting and typing confirm in the dialog D. Deselecting the Enable Instance Snapshots setting
  9. You have 40 RDS instances in the us-east-1 region, each with 10 unique databases. You attempt to create another RDS instance with 20 additional databases. The creation fails. To correct this issue so that you can add the databases, you must do which of the following? A. Create a read replica in a second AZ to free up resources on the primary instance. B. Enable Big Tables on the instance. C. Request an increase of your RDS instance database quota. D. You cannot create more than 10 databases on a SQL Server instance.
  10. You are running Amazon ElastiCache for Redis and your engine requires a specific configuration that is not available by default. What would you modify to achieve your goal? A. ElastiCache is a managed service and the engine cannot be modified B. Parameter Group C. Redis.conf D. Use ElastiCache Custom
  11. You have been asked to implement a caching solution for an RDS database. This solution will need to support complex data objects. The solution must be highly available and have persistence. Which solution will you use? A. CloudFront Distribution B. ElastiCache for Memcached C. ElastiCache for Redis D. RDS Read Replica
  12. RDS Proxy supports most database engines except which one? A. Aurora B. Aurora Serverless C. Oracle D. SQL Server
  13. Which of the following are indicators that an RDS proxy might be considered? (Choose three.) A. Disk full errors B. Many short-lived connections C. Out-of-memory errors D. RDS instances with less than 2 GB of memory E. Too many connections errors
  14. Which of the following features is best suited to monitoring the operating system of the DB instance in real time? A. Amazon EventBridge B. CloudWatch logs C. Enhanced Monitoring D. Performance Insights
  15. What is the principal metric found in Performance Insights? A. CPU Utilization B. DB Activity C. DB Load D. Memory Utilization
  16. Which of the following statements about RDS Read Replicas is true? (Choose two.) A. A replica can be promoted to replace the primary DB instance. B. Read replicas are used as read-only copies of the primary DB instance. C. Read replicas should be created in a different VPC from the primary DB instance. D. The read replica and primary DB instance replicate synchronously.
  17. You have been asked to recommend a solution that will provide point-in-time recovery for a client’s RDS database. Which of the following will most efficiently achieve this goal? (Choose two.) A. Add tags to RDS instances and create a backup plan in AWS Backup. B. Enable automated RDS backups and set the backup retention period to 0. C. Enable automated RDS backups by setting the backup retention period to non-0. D. Ship RDS transaction logs to CloudWatch logs. E. Use AWS Lambda to automate snapshots every five minutes.
  18. When an RDS read replica is promoted, what happens? A. A final snapshot is taken of the source DB instance and the source is terminated. B. The read replica is immediately available as a stand-alone DB instance. C. The read replica is rebooted and is then available as a stand-alone DB instance. D. The source DB instance is marked for termination.
  19. Your research team has just discovered the cure for cancer, and you have been asked to share the research database with the world. To share this valuable data, what would you do? A. Place the database in a public subnet. B. Set all security groups on the database to a source of 0.0.0.0/0. C. Set the Publicly Accessible property to Yes. D. Share the RDS DB snapshot.
  20. Your client has a business-critical PostgreSQL database in the us-west-2 region running on Amazon Aurora. The client wants to ensure the database is available even in the event of a regional event. Additionally, the database replication must have low latency and minimal impact on write operations. Which service would best support the client’s requirements? A. Amazon Aurora Global Database B. Amazon Aurora global tables C. Amazon RDS Read Replicas D. Amazon RDS Proxy

Answers

  1. B CloudFront is used for static content such as images and documents. Provisioned IOPS is used to provide high throughput for RDS instances but is not sufficient to improve performance under high read loads. Scaling the database instance may improve read performance but is a costly solution. The preferred method for improving read performance is to implement read replicas. ElastiCache is another possible tool to improve read performance.
  2. C DynamoDB is a NoSQL key-value database and not a relational database. SQLite is not supported. Amazon Aurora is a relational database engine that is drop-in compatible with MySQL and PostgreSQL.
  3. D, E Apache Ignite is an open source in-memory distributed database management system. DAX stands for DynamoDB Accelerator and is an in-memory cache for DynamoDB. Ehcache is an open source cache.
  4. C, E Only options C and E are valid caching strategies.
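The two valid strategies can be sketched in a few lines of Python. This is an illustrative model only: the dicts stand in for ElastiCache and the backing database, and all names are hypothetical, not an AWS API.

```python
# Illustrative sketch of the two cache-loading strategies.
db = {"user:1": "alice"}   # stand-in for the backing database
cache = {}                 # stand-in for ElastiCache

def get_lazy(key):
    """Lazy loading: check the cache first; on a miss, read from the
    database and populate the cache so the next read is a hit."""
    if key in cache:
        return cache[key]          # cache hit
    value = db.get(key)            # cache miss -> go to the database
    if value is not None:
        cache[key] = value         # populate for subsequent reads
    return value

def put_write_through(key, value):
    """Write-through: every write updates the database AND the cache,
    so cached data is never stale for keys written this way."""
    db[key] = value
    cache[key] = value
```

Lazy loading only caches what is actually read (at the cost of a miss penalty), while write-through keeps the cache current at the cost of writing data that may never be read.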
  5. D Amazon Redshift is a relational database used for data warehousing. Redshift does not run SQL Server. SQL Server can be run on EC2 and is an option when a customer needs to customize the database engine or underlying operating system. However, EC2 is an unmanaged service. RDS, on the other hand, is a managed service, which is normally preferred. As a managed service, RDS gives no access to the database engine or operating system of the underlying instance. RDS Custom is a service that allows customization of the database engine and underlying operating system but is still a managed service. RDS Custom is the best solution given the scenario.
  6. D CloudTrail is used to log API activity in the account and does not capture performance data. CloudWatch captures performance metrics above the hypervisor but is not the best at showing bottlenecks. CloudWatch Performance Alerts is not a service. Performance Insights detailed metrics provide visibility into load on the database and can identify bottlenecks.
  7. A Placing a database instance in a separate VPC would not be an efficient way to improve security. A bastion host is designed to pass SSH traffic and not handle communications between an EC2 instance and the database layer. Placing the database in the same subnet as the EC2 instances violates the principle of security in layers. Additionally, using IPv6 for routing does not add security. The correct answer is to use a DB subnet group. It is highly recommended that you use the DNS name since this protects the connection when an underlying RDS instance fails and is automatically replaced.
  8. B Disabling the automated backups in RDS is done by setting the retention period to 0. This isn’t particularly intuitive, so it’s a good one to try to remember. You would rarely want to disable automated backups on important production data, but it is something you may want to do on test and development databases to save cost. Remember that snapshots and automated backups operate differently.
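The retention-period rule can be expressed as a small helper, which makes the non-intuitive "0 means disabled" behavior explicit. The function name is illustrative; the 0–35 day range reflects RDS's documented retention limits.

```python
def automated_backups_enabled(retention_days: int) -> bool:
    """RDS automated backups are controlled solely by the backup
    retention period: 0 disables them, 1-35 (days) enables them."""
    if not 0 <= retention_days <= 35:
        raise ValueError("RDS backup retention must be 0-35 days")
    return retention_days > 0
```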
  9. C The question is not about your knowledge of a specific database engine. Rather, what is important to notice here is that you are creating an instance. The number of databases per instance is an RDBMS concern. The number of RDS instances in a region is an AWS quota concern. You won’t be expected to know quota limits, but you should be able to recognize when quotas might be causing the problem.
  10. B There is an RDS Custom that allows engine- and OS-level customization of RDS, but ElastiCache does not have an equivalent capability. Redis.conf is not accessible in ElastiCache. Using Parameter Groups in ElastiCache provides the ability to make modifications, which can be applied to one or more ElastiCache clusters.
  11. C CloudFront is used for static content such as images and files. Read replicas can help off-load read requests from the primary database instance, but replicas are not a true caching solution. Memcached does not support complex data objects, high availability, or persistence. In general, Redis is going to be more feature-rich and robust than Memcached.
  12. C Amazon RDS Proxy is not supported on RDS for Oracle.
  13. B, C, E Indicators that an RDS proxy might benefit a workload include "too many connections" errors, out-of-memory errors, and many short-lived connections that are repeatedly opened and closed, since the proxy pools and reuses connections. The amount of memory that an RDS instance has is not necessarily an indicator since a small instance with low load may handle that load easily without a proxy. Disk full errors can occur due to runaway log activity but are not necessarily an indicator of connection overload.
  14. C CloudWatch logs are able to collect data from the operating system but are not designed, alone, to present that data in real time. Amazon EventBridge is used to connect applications using event-driven architectures. Performance Insights looks for patterns in aggregate DB metrics over time. Enhanced Monitoring gives real-time visibility into DB instance operating system metrics.
  15. C DB Activity is not an available metric. CPU and memory utilization can be found in other monitoring tools. DB Load is an aggregated metric capturing average active sessions and giving insight into the load being placed on a database instance over time.
  16. A, B Replication between the primary and read replicas is asynchronous. Creating read replicas in VPCs outside of the primary instance’s VPC can create classless inter-domain routing (CIDR) block conflicts.
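The CIDR-conflict concern is easy to check mechanically with the standard library. A minimal sketch (the function name is illustrative) using Python's `ipaddress` module:

```python
import ipaddress

def cidrs_conflict(cidr_a: str, cidr_b: str) -> bool:
    """Return True when two CIDR blocks overlap -- the condition that
    causes routing conflicts between a primary's VPC and a replica's VPC."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))
```

For example, `10.0.0.0/16` overlaps `10.0.128.0/17` (one contains the other) but not `10.1.0.0/16`, so the latter pair of VPCs could be peered cleanly.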
  17. A, C Setting RDS automated backup retention to 0 disables backups. CloudWatch logs are best used for storage and monitoring of system and application logs. Using AWS Lambda to create snapshots is unnecessary.
  18. C When a read replica is promoted, no action is taken against the original primary DB instance (source). No final snapshot is taken of the source.
  19. D Changing the subnet placement or security groups is a networking action that would expose the database to bad actors but not give access to other researchers. The Publicly Accessible property enables or disables Internet connectivity to the database but does not grant permissions. RDS snapshots can be shared unencrypted with other accounts.
  20. A DynamoDB global tables are used for cross-region replication of DynamoDB tables. There are no Amazon Aurora global tables. The correct answer is an Amazon Aurora Global Database, which is a single Aurora database spanning multiple regions. RDS Read Replicas are used for cross-region disaster recovery, and a replica can be promoted to become the primary. An RDS proxy is used to pool and share database connections.

Chapter 8: Monitoring, Logging, and Remediation

Review Questions

  1. You suspect that an application client is nonperformant because it is making more calls than normal to a REST-based API on your application estate. What AWS tool would you use to verify this information and validate any changes you make to correct this issue? A. AWS Config B. Amazon CloudWatch C. AWS CloudTrail D. AWS NetReporter

  2. You have a number of metrics collecting via CloudWatch on your fleet of EC2 instances. However, you want to gather additional metrics on a number of instances that do not seem to be performing as well as the majority of running instances. How can you gather additional metrics not available through CloudWatch’s stock configuration? A. Turn on detailed monitoring. B. Install the CloudWatch Logs Agent. C. Create a new VPC flow log. D. Turn on detailed statistics in CloudWatch.

  3. Which of the following statements about a CloudTrail trail with regard to regions is true? (Choose two.) A. A trail applies to all your AWS regions by default. B. A trail collects both management and data events. C. A trail can apply only to a single region. D. A trail applies to a single region by default.

  4. Which of the following is not an example of a management event? A. An AttachRolePolicy IAM operation B. An AWS CloudTrail CreateTrail API operation C. Activity on an S3 bucket via a PutObject event D. A CreateSubnet API operation for an EC2 instance

  5. How are management events different from data events? (Choose two.) A. Data events are typically much higher volume than management events. B. Data events are typically lower volume than management events. C. Data events are disabled by default when creating a trail, whereas management events are enabled by default. D. Management events include Lambda execution activity, whereas data events do not.

  6. Which of the following options for a trail would capture events related to actions such as RunInstances or TerminateInstances? (Choose two.) A. All B. Read-Only C. Write-Only D. None

  7. Which of the following is not a valid Amazon CloudWatch alarm state? A. OK B. INSUFFICIENT_DATA C. ALARM D. INVALID_DATA

  8. You have a CloudWatch alarm with a period of 2 minutes. The evaluation period is set to 10 minutes, and Datapoints To Alarm is set to 3. How many metrics would need to be outside the defined threshold for the alarm to move into an ALARM state? (Choose two.) A. Three out-of-threshold metrics out of five within 10 minutes B. Three out-of-threshold metrics out of five within 2 minutes C. Two out-of-threshold metrics out of five within 5 minutes D. Three out-of-threshold metrics out of eight within 16 minutes

  9. Which of the following settings are allowed for dealing with missing data points within Amazon CloudWatch? (Choose two.) A. notBreaching B. invalid C. missing D. notValid

  10. Which of the following does AWS Config not provide? A. Remediation for out-of-compliance events B. Definition of states that resources should be in C. Notifications when a resource changes its state D. Definition of compliance baselines for your system

  11. Which of the following would you use to ensure that your S3 buckets never allow public access? (Choose two.) A. AWS Config B. Amazon CloudWatch C. AWS Lambda D. AWS CloudTrail

  12. Which of the following is not part of an AWS Config configuration item (CI)? A. An AWS CloudTrail event ID B. A mapping of relationships between the resource and other AWS resources C. The set of IAM policies related to the resource D. The version of the configuration item

  13. You have a number of instances based on AMIs with AWS Systems Manager agent installed, but none are able to communicate to the SSM service. What is likely the source of this issue? A. You need to create an IAM group and assign that group to each instance you want communicating with AWS Systems Manager. B. You need to create an IAM role and have each instance assume that role to communicate with the AWS Systems Manager service. C. You need to add the AWSSystemsManager policy to each instance running an SSM agent. D. You need to use a Linux-based AMI on each instance to ensure it can communicate with the SSM service.
  14. Which of the following are supported notation formats for documents in AWS Systems Manager? (Choose two.) A. YAML B. JSON C. CSV D. Text

  15. You are responsible for a fleet of EC2 instances and have heard that a recently released patch has known issues with Rails, which your instances are all running. How would you prevent the patch from being deployed to the instances, given that they are all running the SSM agent? A. Remove the patch from the automation pipeline. B. Remove the patch from the patch baseline. C. Add the patch as an exclusion to the patch baseline. D. Add the patch as an exclusion to the automation pipeline.

  16. You have a command document written in JSON for your instances running a Windows AMI and communicating with the AWS Systems Manager Service. You now have inherited several Linux-based instances and want to use the same command document. What do you need to do to use this document with the Linux instances? A. Convert the document from JSON to YAML and reload it. B. Copy the document and assign the copy to the Linux-based instances. C. You cannot use a document written for Windows-based instances with Linux-based instances. D. Nothing; documents will work across platform operating systems.

  17. You need to ensure that a compliance script is executed on all of your managed instances every morning at 1 a.m. How would you accomplish this task? A. Create a new Execute command and use Systems Manager to set it up on your instances. B. Create a new Run command and use Systems Manager to set it up on your instances. C. Create a new compliance policy document and ensure that all instances’ agents reference the document. D. Create a new action document and ensure that all instances’ agents reference the document.

  18. You want to centrally collect and refer to applications, AWS components, network configuration information, etc. installed on multiple EC2 instances that you manage. Which of the following should be adopted to meet this requirement? A. Install the Systems Manager Agent on your EC2 instance. Log in with Session Manager, and create and execute a script that collects inventory information. B. Install the Systems Manager Agent on your EC2 instance. Use Systems Manager Inventory to collect inventory information. C. Use AWS Config to collect EC2 inventory information. D. SSH into your EC2 instance. Create and run a script that collects inventory information.

  19. You are using an EC2 instance to host a web application. You have configured a CloudWatch alarm for this EC2 instance’s CPU utilization metric, which uses SNS to send notifications when it is under heavy load. When you check the alarm status, it says INSUFFICIENT_DATA. Which of the following are common causes of this message? (Choose two.) A. The same metric is used in other alarms. B. Detailed monitoring has not been enabled for the metric. C. The CloudWatch alarm has just started. D. S3 bucket for logs does not exist. E. The metric is unavailable.

  20. A user wants to connect to a Windows instance using Remote Desktop, but the Ops team wants to encourage using Systems Manager features. Which statement is true? A. Native RDP is supported and you can enable it in the session properties so that the user can view the desktop instead of the PowerShell prompt. B. Map a local port to the RDP port on the instance and start a session. The user can then use remote desktop through port forwarding. C. RDP is not supported by SSM. You must open port 3389 in the instance security group. D. RDP is not supported by SSM. Use Apache Guacamole over port 80 instead.

Answers

  1. C AWS CloudTrail provides insight into API calls, and a client interacting with a REST API is exactly that.
  2. B The Amazon CloudWatch Logs Agent, when installed on an instance, provides metrics, such as memory and disk usage, that are not available in any other manner, including through the basic CloudWatch capabilities.
  3. B, D CloudTrail trails apply to a single region by default (option D) but can be applied to all regions (meaning options A and C are both false). They also collect both management and data events (option B).
  4. C Management events in CloudTrail relate to security, registering devices, configuring security rules, routing, and setting up logging. In the options, this would include A, B, and D. Option A is a security operation, B sets up logging, and D configures networking. Option C, on the other hand, operates on data in an S3 bucket and is a data event rather than a management event.
  5. A, C Because data events capture the movement, creation, and removal of data, they are typically much higher volume than management events (option A). Data events are also disabled by default (option C), making them different from management events.
  6. A, C The RunInstances and TerminateInstances events are considered write events. This is easiest to remember because they are not read events, and AWS provides only two options: read and write. Collecting these events, then, would require a trail be set to Write-Only or All (which collects all events).
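CloudTrail marks each event as read or write (its readOnly attribute), and read-only API calls typically begin with Describe, Get, or List. A rough sketch of that split; the prefix list here is illustrative, not exhaustive:

```python
# Prefixes that typically denote read-only API calls in CloudTrail.
READ_PREFIXES = ("Describe", "Get", "List", "Lookup")

def event_kind(api_name: str) -> str:
    """Classify an API call name as 'read' or 'write' in the spirit of
    CloudTrail's read/write event split (simplified heuristic)."""
    return "read" if api_name.startswith(READ_PREFIXES) else "write"
```

Under this split, RunInstances and TerminateInstances classify as write events, which is why a Write-Only or All trail is needed to capture them.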
  7. D CloudWatch alarms have three states: OK, ALARM, and INSUFFICIENT_DATA. INVALID_DATA is not a valid alarm state.
  8. A, C In this scenario, there would need to be three out-of-threshold data points within the evaluation period of 10 minutes to trigger an alarm. This means that both options A and C would trigger an alarm. Note that it is possible that the scenario in option D would trigger an alarm, depending on when the out-of-threshold metrics occurred (inside 10 minutes), but it is not clear from the answer, so options A and C are better answers.
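The "M out of N" evaluation can be modeled in a few lines of Python. This is a simplified sketch of the behavior, with illustrative names, not CloudWatch's actual implementation:

```python
def alarm_fires(datapoints, threshold, datapoints_to_alarm, evaluation_periods):
    """CloudWatch-style 'M out of N' check: the alarm fires when at least
    `datapoints_to_alarm` of the most recent `evaluation_periods`
    datapoints breach the threshold."""
    window = datapoints[-evaluation_periods:]       # last N datapoints
    breaching = sum(1 for v in window if v > threshold)
    return breaching >= datapoints_to_alarm

# Period = 2 min, evaluation period = 10 min -> N = 5 datapoints; M = 3.
cpu = [40, 95, 93, 20, 91]   # three of the five datapoints breach 90
```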
  9. A, C There are four possible settings for handling missing data points: notBreaching (A), breaching, ignore, and missing (C).
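The effect of the four settings can be sketched as a mapping from raw datapoints (None standing in for a missing one) to breaching/not-breaching values. This is a simplified model: in the real service, "ignore" retains the previous alarm state rather than simply dropping the datapoint.

```python
def effective_datapoints(datapoints, threshold, treat_missing="missing"):
    """Apply a CloudWatch-style treat-missing-data setting.
    Simplified: 'ignore' and 'missing' both drop the datapoint here."""
    out = []
    for v in datapoints:
        if v is None:                         # a missing datapoint
            if treat_missing == "breaching":
                out.append(True)              # count as breaching
            elif treat_missing == "notBreaching":
                out.append(False)             # count as within threshold
            # "ignore" / "missing": drop the datapoint entirely
        else:
            out.append(v > threshold)
    return out
```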
  10. A AWS Config does not provide remediation mechanisms. You can write code to remediate situations that cause notifications via AWS Config, but the remediation capability is not a standard part of Config itself.
  11. A, C AWS Config will notify you if a bucket has been granted public access (provided you have set that baseline up in AWS Config). You would then need to remediate that access, and that would require AWS Lambda (option C).
  12. C Configuration items do not include IAM-related information (option C). They do include event IDs (option A), configuration data about the resource, basic information about the resource such as tags, a map of resource relationships (option B), and metadata about the CI, including the version of the CI itself (option D).
  13. B Any instance running an SSM agent will need to assume an IAM role for connecting to the AWS Systems Manager service (option B). There is no such policy as AWSSystemsManager (option C).
  14. A, B AWS Systems Manager supports documents in JSON and YAML.
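A minimal command document in JSON might look like the following sketch (schemaVersion 2.2 with a single aws:runShellScript step; the step name and command are illustrative):

```json
{
  "schemaVersion": "2.2",
  "description": "Sample command document: print the host name",
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "printHostname",
      "inputs": {
        "runCommand": ["hostname"]
      }
    }
  ]
}
```

The same document could be expressed in YAML without any change in behavior; the two notations are interchangeable in Systems Manager.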
  15. B A patch baseline stores the patches that will be automatically deployed to your instances. If you want to avoid a certain patch, simply remove it from the baseline.
  16. D AWS Systems Manager documents can be used cross- platform without any changes (option D).
  17. B The Run command allows you to execute scripts and other commands on instances. In this case, a Run command could execute the compliance script needed.
  18. B When the Systems Manager Agent is installed on EC2 instances, Systems Manager Inventory can collect metadata types such as network configuration, AWS components, and other useful information. The agent can be installed across several EC2 instances with the same configuration to collect the inventory information required for your organization.
  19. B, E The most common causes of the INSUFFICIENT_DATA message for CloudWatch alarms are conditions in which the alarm does not have enough data to evaluate its state, such as the metric not being published because detailed monitoring is disabled, or the metric otherwise being unavailable.
  20. A It is true that native RDP is supported and can be enabled in the session properties so that the user can view the desktop instead of the PowerShell prompt.

Chapter 9: Networking

Review Questions

  1. The three common types of endpoints are: A. Gateway endpoints, gateway load balancer endpoints, and interface endpoints B. Gateway load balancer endpoints, elastic endpoints, and static endpoints C. VPC endpoints, elastic endpoints, and PrivateLink D. VPC endpoints, interface endpoints, and elastic endpoints
  2. Which of the following is the best option for creating a hub-and-spoke network to connect on-premises resources and multiple VPCs? A. AWS DirectConnect B. AWS Gateway Endpoint C. AWS Transit Gateway D. AWS VPC Peering
  3. Which of the following is the best use of a gateway endpoint? A. Connect a VPC to S3 without traversing the Internet. B. Connect an on-premises network to a VPC. C. Connect resources and services using PrivateLink. D. Connect two VPCs.
  4. You wish to use deny rules to restrict traffic to your resources. Which of the following will allow you to implement deny rules? A. Internet gateway B. Network access control lists (NACL) C. Route table rules D. Security groups
  5. You are experiencing connectivity errors with IPv4 traffic within your VPC. Which of the following will most efficiently help diagnose the issue? A. CloudWatch B. Reachability Analyzer C. Traffic Mirroring D. VPC flow logs
  6. Your customer wants you to connect their on-premises network to their VPC. The customer is a budget-conscious startup with a small volume of network traffic. They expect to grow at a slow but steady pace over the next year. Which solution will best achieve their goal? A. AWS Transit Gateway B. DirectConnect C. VPC peering D. VPN
  7. To direct network traffic from an EC2 instance to the public cloud, the destination field of the route table should be which of the following? A. 0.0.0.0/0 B. 0.0.0.0/32 C. Internet gateway D. Local
  8. Which of the following represents the largest CIDR block that can be used in a VPC subnet? A. 10.0.0.0/32 B. 172.16.0.0/16 C. 172.16.0.0/0 D. 192.168.0.0/28
  9. Which of the following is not a valid private IP address? A. 10.0.0.5 B. 75.10.150.5 C. 172.16.0.5 D. 172.28.10.10
  10. You want to route traffic to the Internet from resources in an IPv6 subnet without allowing external resources to initiate contact. Which of the following best solves this problem? A. DirectConnect B. Egress-only Internet gateway C. Internet gateway D. Transit gateway
  11. You wish to capture a subset of your VPC traffic in order to diagnose an issue. Which of the following will allow you to capture only the traffic you want and route it to a specified monitoring appliance? A. AWS CloudWatch B. AWS Systems Manager Distributor C. Traffic Mirroring D. VPC flow logs
  12. You wish to allow administrators to securely connect to hosts in a private subnet in your VPC. Which of the following will best solve this problem? A. Bastion host B. Client VPN C. NAT gateway D. Transit gateway
  13. Instances in a private subnet must be able to securely initiate software updates with services on the Internet. Which of the following will accomplish this goal? A. Bastion host B. DirectConnect C. NAT instance D. NAT gateway
  14. Your employer has tasked you with implementing a cost-effective firewall to protect your whole VPC. The solution must be able to process a high volume of traffic with both stateful and stateless rules. Your environment spans two regions (us-east-2 and ap-southeast-1) with three availability zones in each region. Which of the following will you choose? A. AWS Network Firewall B. Network access control lists (ACLs) C. Security groups D. Web application firewall (WAF)
  15. Your team needs to securely SSH into your fleet. The solution must be highly available and cost-effective, and have auditable logs of the SSH activity. Which of the following will you choose? A. Bastion host B. NAT gateway C. Systems Manager D. Transit gateway
  16. What is the number of IP addresses reserved by AWS in every subnet? A. 3 B. 5 C. 7 D. 10
  17. Your employer has a global network with many network providers. You have been tasked with connecting these networks and managing policies centrally. Which of the following will best accomplish this? A. AWS Control Tower B. AWS Transit Gateway C. Cloud WAN D. Route 53
  18. You have been asked to establish a mechanism for managing IP address blocks (CIDR) across your large, global enterprise. This system must centrally manage both private and public IP spaces. Which of the following AWS services best meets these requirements? A. AWS Control Tower B. CloudFormation C. IP Address Manager D. Route 53
  19. You are securing resources in your VPC. You wish to allow only specific ports and you require stateful connections. Which of the following best fulfills these requirements? A. NAT gateway B. Network access control lists (NACLs) C. Security groups D. Web application firewall (WAF)
  20. The security group provides a firewall at what layer in the VPC? A. Availability zone B. Internet gateway C. Network interface D. Subnet

Answers

  1. A The three types of endpoint are gateway endpoints, gateway load balancer endpoints, and interface endpoints.
  2. C AWS Transit Gateway allows several VPCs and the on-premises network to be connected to a central gateway in a hub-and-spoke architecture. VPC peering can connect resources across multiple VPCs, but it cannot, alone, connect a VPC to an on-premises network. DirectConnect is an important service for connecting the VPC to an on-premises network but cannot connect VPCs to each other. A gateway endpoint is used to connect directly to S3 or DynamoDB without routing through the public Internet.
  3. A There are many gateways and endpoints and you will want to know when to use each. Using a gateway endpoint within a VPC enables traffic to be sent to S3 without traversing the public Internet. The gateway endpoint does not use PrivateLink and does not allow access to S3 from an on-premises network, peered VPCs, or through a transit gateway.
  4. B Security groups and NACLs are very similar and it will be important to remember the differences. One of the key differences is that NACLs offer deny rules whereas security groups only use allow rules. Route tables do not use rules.
  5. B CloudWatch and VPC flow logs can be very helpful in diagnosing errors. Traffic Mirroring creates a copy of inbound and outbound traffic that can be used to route traffic to appliances for threat monitoring and troubleshooting. However, the most efficient tool for diagnosing most connectivity issues is the Reachability Analyzer.
  6. D AWS Transit Gateway and VPC peering are used to connect VPCs. VPNs and DirectConnect are used to connect an on-premises network to a VPC. The customer is budget-conscious and has light network traffic. DirectConnect is designed for high traffic and is significantly more expensive than a VPN. This makes the VPN the best solution for this customer at this time.
  7. A 0.0.0.0/0 is the shorthand for any public destination. The Internet gateway is the target but would not be in the destination field. It would be in the target field and be in the form of igw-id. Local represents the local network CIDR. 0.0.0.0/32 is a single IP address.
  8. B CIDR blocks can range from /28 containing 16 IP addresses to /16 containing 65,536 IP addresses.
  9. B The valid RFC 1918 address ranges are 10.0.0.0–10.255.255.255, 172.16.0.0–172.31.255.255, and 192.168.0.0–192.168.255.255.
  10. B An Egress-only Internet gateway allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating IPv6 connections with your instances. External traffic can initiate communications through an Internet gateway. DirectConnect is used to connect the VPC to an on-premises network. A transit gateway serves as a hub to connect on-premises networks and VPCs.
  11. C Distributor is used to package and publish software to nodes on your network. CloudWatch is used for application and infrastructure monitoring. It can be used to give indications of system health and performance but is not used to capture and route network traffic. VPC flow logs and Traffic Mirroring may seem similar but are fundamentally different. Flow logs collect information about network traffic whereas Traffic Mirroring actually captures the network data and is able to route a copy to another location.
  12. A The NAT gateway and bastion host are often confused. A NAT gateway allows communication out whereas a bastion host allows communication in.
  13. D The NAT gateway and bastion host are often confused. A NAT gateway allows communication out whereas a bastion host allows communication in. A NAT instance is still available but is an old method and is no longer recommended.
  14. A The WAF operates at the endpoint level to protect resources like the application load balancer and CloudFront. A WAF is only stateless. Security groups operate at the instance (ENI) level and are only stateful. Network ACLs operate at the subnet level and are only stateless. Only the AWS Network Firewall has both stateful and stateless capabilities, and it operates at the VPC level.
  15. C Transit gateways and NAT gateways are not used for SSH connectivity. A bastion host is normally used for secure SSH connections, but Systems Manager offers a more cost-effective, highly available, and auditable solution.
  16. B AWS reserves five IP addresses in each subnet’s CIDR block. In a 10.0.0.0/24 subnet these are the network address (10.0.0.0), the VPC router (10.0.0.1), the DNS server (10.0.0.2), an address reserved for future use (10.0.0.3), and the network broadcast address (10.0.0.255, the last address in the block).
  17. C AWS Transit Gateway is used to connect various networks together and can span the globe. This gateway, however, lacks the automation, ability to segment, and configuration management found in Cloud WAN. AWS Control Tower and Route 53 are not able to meet the requirements.
  18. C AWS IP Address Manager (IPAM) is designed to centrally manage IP addresses (CIDR) globally. AWS Control Tower is an important service for centrally managing large enterprise environments. However, it is not capable of managing CIDR blocks. CloudFormation is for provisioning and managing AWS and third-party resources through code. Route 53 manages DNS.
  19. C A NAT gateway is used by resources in a private subnet to initiate communication with the Internet. A WAF monitors and protects HTTP(S) requests. NACLs and security groups are very similar and you will need to know the differences. The security group is stateful and the NACL is stateless. Additionally, the question only asks for traffic to be allowed with no requirement for deny rules. NACLs allow deny rules. Given the choice between a security group and a NACL, the security group is the preferred method if all else is equal.
  20. C The security group provides protection at the network interface level.
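
Several numeric facts in the answers above (the /28-to-/16 subnet size range in answer 8, the RFC 1918 ranges in answer 9, and the five reserved addresses per subnet in answer 16) can be verified offline with Python's standard `ipaddress` module; a quick sketch:

```python
import ipaddress

# Subnet sizing: a VPC subnet CIDR can range from /28 to /16
smallest = ipaddress.ip_network("10.0.0.0/28").num_addresses  # 16 addresses
largest = ipaddress.ip_network("10.0.0.0/16").num_addresses   # 65,536 addresses

# AWS reserves 5 addresses in every subnet, so usable = total - 5
subnet = ipaddress.ip_network("10.0.0.0/24")
usable = subnet.num_addresses - 5  # 256 - 5 = 251

# RFC 1918 check: 75.10.150.5 is the only public address here
for addr in ("10.0.0.5", "75.10.150.5", "172.16.0.5", "172.28.10.10"):
    print(addr, ipaddress.ip_address(addr).is_private)
```

Note that `is_private` covers exactly the three RFC 1918 blocks for these addresses, which is why 172.28.10.10 (inside 172.16.0.0/12) reports as private while 75.10.150.5 does not.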

Chapter 10: Content Delivery

Review Questions

  1. Which of the following services can be used to perform DNS routing and health checks? A. Amazon EC2 with DNS and BIND installed B. Amazon Route 53 C. Amazon CloudFront D. Amazon ElastiCache
  2. Which of the following is not a record type supported by Route 53? A. NAPTR B. NS C. SPF D. TEXT
  3. You are setting up a new website for a client and have their website loaded into an S3 bucket. They want to ensure that the site responds to the company name—ourgreatcompany.com—both with and without the www part of the address. What types of record do you need to create? A. CNAME B. Alias C. MX D. SRV
  4. You are setting up DNS for an application running on an EC2 host in your network. The application exposes its API through an IPv6 address. What type of record set will you need to create for access to this API? A. AAAA B. A C. Alias D. MX
  5. You have a Lambda-based serverless application. You have several Lambda@Edge functions triggered by a CloudFront distribution and need to set up DNS. What type of record will you need to use? A. CNAME B. A C. Alias D. AAAA
  6. You have an application running in a VPC with an existing DNS record. You have a backup of the application running as a warm standby in another VPC in a different region. If traffic stops flowing to the primary application, you want traffic to be routed to the backup. What type of routing policy should you use? A. Simple routing B. Failover routing C. Latency routing D. Multivalue answer
  7. You have an application deployment with endpoints in multiple countries. The application needs to have fast response times and in the event of a failure you cannot modify the client code to redirect traffic. Which service can help you implement a solution? A. Amazon ElastiCache B. Route 53 C. Amazon CloudFront D. AWS Global Accelerator
  8. You have an application running with copies in three different regions: US-EAST-1, US-WEST-1, and AP-EAST-1. You want to ensure your application’s users always receive a response from the copy of the application with the lowest network traffic response time. Which DNS routing policy should you use? A. Simple routing B. Failover routing C. Latency routing D. Multivalue answer
  9. You are working for a startup that wants to test a production-ready version of their shopping cart and perform a trickle test with a small set of actual production traffic. Which DNS routing policy can help you implement this test? A. Simple routing B. Failover routing C. Latency routing D. Weighted routing policy
  10. You are responsible for a marketing website running in AWS. You have a requirement from the marketing team to provide an alternate version of the site intended for A/B testing with the current site. However, they only want a small portion of traffic sent to the new version of the site as they evaluate the changes they’ve made. Which DNS routing policy should you use? A. Multivalue answer B. Failover routing C. Weighted routing D. Geolocation routing
  11. A startup has deployed their website to Japan, Australia, and the United States. They want to make sure users get the results from the closest endpoint. Which DNS routing policy can help implement the solution? A. Failover routing B. Weighted routing C. Geolocation routing D. IP-based routing
  12. A startup has deployed a CloudFront distribution with the site hosted in Amazon S3. They want to prevent users from accessing the S3 bucket directly. How can this protection be accomplished? A. Set the TTL value for the cache to 0. B. Enable Origin Access Identity (OAI) for the distribution. C. Enable Origin Shield for the distribution. D. Define a custom behavior for the largest objects.
  13. Which of the following settings need to be configured in a VPC to use private DNS via the Route 53 Resolver? (Choose two.) A. An Internet gateway needs to exist. B. The enableDnsHostnames attribute needs to be set to true. C. The NACLs for the VPC must include port 53. D. The enableDnsSupport attribute must be set to true. E. The autoassignIP attribute must be set to true.
  14. Which of the following must you configure to control how traffic is routed from around the world to your applications using Amazon Route 53 Traffic Flow? (Choose two.) A. Traffic record B. Traffic policy C. Policy record D. Policy route
  15. A startup will launch their new online game title in the US-EAST-1 region. However, players can be anywhere in the world. Which services will allow the startup to optimize the performance of their online game to a global audience? (Choose two.) A. AWS Global Accelerator B. AWS Direct Connect C. AWS Local Zone D. Amazon CloudFront E. AWS Edge Locations
  16. Which of the following is not a type of health check offered by Amazon Route 53? A. Endpoint B. Other health checks C. CloudTrail D. CloudWatch
  17. What happens in Amazon Route 53 if an unhealthy response comes back from a health check? (Choose two.) A. Responses are no longer sent to the failing host. B. When the host comes back online, responses are automatically sent back to the host. C. All responses to the failing host are retried until a response is received. D. A CloudWatch alarm is automatically triggered and sent out via notification.
  18. A startup has deployed a CloudFront distribution to a global audience and wants to maximize the number of requests that are served from the CloudFront distribution cache. What can be done to improve the cache hit ratio? A. Set the TTL value for the cache to 0. B. Enable Origin Access Identity (OAI) for the distribution. C. Enable Origin Shield for the distribution. D. Define a custom behavior for the largest objects.
  19. Why might you use a geoproximity routing policy rather than a geolocation routing policy? A. You want to increase the size of traffic in a certain region over time. B. You want to ensure that all U.S. users are directed to U.S.-based hosts. C. You want to route users geographically to ensure compliance issues are met based on requestor location. D. You are concerned about network latency more than requestor location.
  20. You are seeing intermittent issues with a website you maintain that uses Amazon Route 53, a fleet of EC2 instances, and a redundant MySQL database. Even though the hosts are not always responding, traffic is being sent to those hosts. What could cause traffic to go to these hosts? (Choose two.) A. You need to use a failover routing policy to take advantage of health checks on hosts. B. You need to turn on health checks in Amazon Route 53. C. The hosts are failing a health check but not enough times in a row to be taken out of service by Amazon Route 53. D. The hosts should be put behind an application load balancer (ALB).

Answers

  1. B Route 53 is a DNS web service and can be used to perform domain registration, DNS routing, and health checks.
  2. D Route 53 supports text records, but the record type is TXT, not TEXT, so TEXT (option D) is not a valid record type. Route 53 does support NAPTR, NS, and SPF records.
  3. B You will need an alias record to map the apex record (ourgreatcompany.com) to the S3 bucket. You can then use another alias record to map the subdomain, www.ourgreatcompany.com, to the same S3 bucket website.
  4. A You would need an AAAA record set because this is an IPv6 address.
  5. C Whenever you need to associate a domain name with an AWS service such as CloudFront, S3, an ELB, or a VPC endpoint, you use an alias record.
  6. B This is the main use case for a failover routing policy. If traffic cannot reach a primary instance or service, Route 53 will “fail over” routing to a backup or secondary instance. Please remember that a health check needs to be defined for failover routing to work as expected.
  7. D The anycast IP addresses provisioned by AWS Global Accelerator will allow you to reach a healthy endpoint without having to switch IP addressing, modify the client code, or be concerned about DNS caching.
  8. C Latency routing policies return responses to users based on network latency.
  9. D Using weighted routing, you can choose what proportion of traffic will go to one endpoint versus another. For a trickle test, you can use a weight of 95 for the main production systems and a weight of 5 for the new version to be tested.
  10. C This is a good use case for weighted routing. You can send (for example) 10 percent of traffic to the new site and the remaining traffic to the existing site using weighted values.
  11. C Geolocation routing will deliver the closest endpoint to users based on the region or country where they are located.
  12. B Enabling Origin Access Identity or Origin Access Control will limit access to the S3 bucket to the CloudFront distribution exclusively.
  13. B, D If you choose to use a Route 53 private hosted zone for custom DNS names in one or more VPCs, you must set both the enableDnsHostnames and enableDnsSupport attributes to true for all VPCs associated with the private hosted zone.
  14. B, C For Amazon Route 53 Traffic Flow to work, you’ll need both a traffic policy (option B) and a policy record (option C). The traffic policy contains the rules that define how traffic should flow, and a policy record connects that traffic policy to a DNS name.
  15. A, D AWS Global Accelerator can distribute your application traffic globally with significant performance improvements. CloudFront is a content delivery network that can be used to distribute video, images, and audio globally at high transfer rates.
  16. C You can set up health checks in Amazon Route 53 to check an endpoint, other health checks already set up, or alarms in CloudWatch. You cannot directly monitor via CloudTrail (option C), although you could monitor CloudWatch alarms triggered by CloudTrail events.
  17. A, B Amazon Route 53 will stop sending requests to hosts that are failing health checks and will re-send requests when a host responds as healthy again (options A and B).
  18. C Enabling Origin Shield for the distribution will improve the cache hit ratio.
  19. A A geoproximity policy, like a geolocation policy, routes users to the closest geographical region. This means that options B and C are incorrect, as they are common to both types of routing policy. Option D would imply the use of latency-based routing, leaving only option A. This is the purpose of a geoproximity policy: You can apply a bias to adjust traffic to a region.
  20. B, C Health checks are not always turned on in Amazon Route 53 (and generally are not by default), so that’s the first thing to check (option B). All policies can use health checks, so option A is incorrect, and an ALB is not required to use health checks, making option D incorrect as well. It takes three successive failures of a health check by default to take a host out of commission, so option C is also a possible answer.
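
The 95/5 trickle test described in answer 9 behaves like weighted random selection across endpoints. A rough Python simulation of the proportions (an illustration only, not how Route 53 is implemented internally):

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the example is deterministic
endpoints = ["production", "new-version"]
weights = [95, 5]  # the weighted routing values from answer 9 above

# Simulate 10,000 DNS responses drawn according to the weights
responses = random.choices(endpoints, weights=weights, k=10_000)
counts = Counter(responses)
new_share = counts["new-version"] / 10_000
print(counts)
print(new_share)  # close to 0.05
```

Over many requests the new version receives close to 5 percent of the traffic, which is the behavior the weighted routing policy relies on.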

Chapter 11: Deployment, Provisioning, and Automation

Review Questions

  1. A company is having issues with an application’s messaging layer becoming saturated at certain times of day. AWS has been adopted, and the messaging layer will be implemented using AWS. Which service will provide a systems operator with the maximum message rate to preserve messages when an application is unresponsive? A. Use SQS FIFO queues and an EC2 fleet with an EC2 Auto Scaling Group. B. Use SNS topics and an EC2 fleet with an EC2 Auto Scaling Group. C. Use SQS standard queues and an EC2 fleet with an EC2 Auto Scaling Group. D. Use SNS FIFO topics and an EC2 fleet with an EC2 Auto Scaling Group.
  2. A company wants to analyze the click sequence of their website users. The website is very busy and receives traffic of 10,000 requests per second. Which service provides a near-real-time solution to capturing the data? A. Kinesis Data Streams B. Kinesis Data Firehose C. Kinesis Data Analytics D. Kinesis Video Stream
  3. A company wants to build a click sequence capture, analysis, and store solution. Which combination of services would provide the highest throughput? A. SNS to capture the data, Lambda to process, and RDS to store it B. SQS to capture the data and a fleet of EC2 instances in an Auto Scaling Group C. Kinesis Data Streams to capture the data, Athena to process the data, Kinesis Data Firehose to store the data in S3 D. Kinesis Data Streams to capture the data, Kinesis Data Analytics to process the data, and Kinesis Data Firehose to store the data in S3
  4. A team of systems operators needs to be immediately notified if an EC2 instance is started or stopped. Which service provides the simplest solution? A. Use event notifications and SQS. B. Use event notifications and Step Functions. C. Use event notifications and SNS. D. Use event notifications and Kinesis Data Streams.
  5. A company’s application is implemented in AWS using microservices and Lambda functions. Which service can be used to coordinate the execution and workflow of multiple Lambda functions? A. SQS B. SNS C. Step Functions D. Simple Workflow Service (SWF)
  6. Which of the following can be used to launch an Amazon Aurora MySQL cluster? (Choose two.) A. AWS Organizations B. AWS CloudFormation C. The AWS Management Console D. Amazon Concierge
  7. What does the AWSTemplateFormatVersion section of a CloudFormation template indicate? A. The date that the template was originally written B. The date that the template was last processed C. The capabilities of the template based on the version available at the indicated date D. The date that the template was last updated
  8. What is the only required component in a template? A. Parameters B. Metadata C. Resources D. Outputs
  9. A SysOps administrator is troubleshooting a large CloudFormation stack. It is taking over 2 hours to roll back the entire stack before another test can be performed. How can the SysOps administrator accelerate the test/repair cycle time? A. Build a second CloudFormation template to tear down all resources that can then be run as needed. B. Enable the CleanupResources option within the template. C. Disable Automatic Rollback On Error. D. Enable Automatic Rollback On Error.
  10. A SysOps administrator launches a fleet of EC2 instances and then initiates scripts on each instance. However, the next steps in a CloudFormation stack are failing because they depend on resources that those scripts configure. How can the SysOps admin coordinate the creation of resources? A. This is not possible using CloudFormation. B. The admin needs a separate CloudFormation stack that can run manually after the scripts on instances complete. C. The admin needs a separate CloudFormation stack, and must set the initial stack to call the second stack. D. The admin must use the WaitCondition resource to block further execution until the scripts on the instances complete.
  11. Which of the following is not allowed as a data type for a parameter? A. List B. Comma-delimited list C. Array D. Number
  12. A SysOps administrator wants to accept custom CIDR blocks as inputs to a CloudFormation stack. What validation can be used to ensure the CIDR block is correctly formatted as an input parameter? A. AllowedValues B. MinLength C. ValueMask D. AllowedPattern
  13. The URL to a web application created by a CloudFormation stack is to be provided. What element of a CloudFormation template can be used to accomplish this? A. Parameter B. Output C. Transform D. Resources
  14. A CloudFormation stack needs to obtain a URL to use by an API call by the application being created. What CloudFormation template elements can be used to do this? A. Parameter B. Output C. Transform D. Resources
  15. Which of the following are supported deployment models in Elastic Beanstalk? (Choose two.) A. Rolling with additional batches deployment B. Rolling with incremental updates deployment C. Mutable deployment D. Immutable deployment
  16. Why might you choose to use a rolling with additional batches deployment? (Choose two.) A. You don’t want the application to completely stop when updates are made. B. You want the cheapest possible deployment model. C. Your goal is to always maintain maximum capacity in terms of running instances. D. You never want two versions of an application running at one time.
  17. A SysOps admin is deploying a critical production application that must always be up and running. Any new instances are required to be healthy before accepting traffic. Which deployment model should the SysOps admin use? A. Rolling with additional batches deployment B. All-at-once deployment C. Rolling deployment D. Immutable deployment
  18. Which of the following would be required to set up a blue/green deployment? (Choose two.) A. Amazon Route 53 B. Elastic Beanstalk C. Multiple application environments D. Amazon RDS
  19. Which of the following is true about a default Elastic Beanstalk deployment? A. All instances created are private. B. A custom private VPC is created. C. All database instances are private. D. The created application endpoint is publicly available.
  20. Which of the following does Elastic Beanstalk store in S3? (Choose two.) A. Server log files B. Database swap files C. Application files D. Elastic Beanstalk log files

Answers

  1. C Standard queuing offers higher throughput than FIFO queuing. SNS does not preserve messages.
  2. A This is a classic use case for Kinesis Data Streams.
  3. D This is the only option that provides the speed and data retention with the highest throughput.
  4. C SNS can deliver notifications through channels such as an email message, a text message, or a call to an HTTP endpoint. Most AWS services include event notification features.
  5. C Coordinating the execution of Lambda functions is one of the classic use cases for Step Functions.
  6. B, C CloudFormation and the AWS Management Console are the only two options that can launch an Aurora MySQL cluster.
  7. C AWSTemplateFormatVersion indicates the version of the template—and therefore what its capabilities are—by indicating the date associated with that version.
  8. C CloudFormation templates allow for all the provided answers, but they require only a Resources component to be present.
  9. C CloudFormation provides an Automatic Rollback On Error option that will cause all AWS resources created to be deleted if the entire stack doesn’t complete successfully. Its default value is enabled. You can disable it to troubleshoot the point where the stack failed and not repeat deployment and rollback of resources that are working.
  10. D The admin can use CloudFormation’s WaitCondition to act as a block of action until a signal is received from the application (in this case, when the instance scripts complete).
  11. C Parameters can be lists, comma-delimited lists, numbers, and strings. They cannot be arrays.
  12. D CIDR blocks use specific decimal and hexadecimal number patterns, and therefore the admin should use AllowedPattern to ensure they are properly supplied.
  13. B The URL to a web application created by a stack is an output value. One way to think of this is to see that the value cannot be created until the stack runs.
  14. A This is an input value, as it is something user-supplied and required by the template at runtime. Parameters are the way to import data to a stack.
  15. A, D Elastic Beanstalk supports a number of deployment models, including rolling with additional batches and immutable (options A and D). The other two options are made-up terms.
  16. A, C Both the rolling deployment and the rolling deployment with additional batches deployment models allow you to ensure your application is always running (option A). But you would then use the additional batches option to ensure you maintain maximum capacity throughout the process (option C).
  17. D An immutable deployment is often slower and more expensive than the other models, but it ensures both the health and maximum confidence in a new deployment.
  18. A, C Blue/green deployments require multiple environments (option C) that can run side by side as well as Route 53 (or something similar) for weighted routing policies. Although you can use Elastic Beanstalk, it is not required, and Amazon RDS is unrelated.
  19. D Elastic Beanstalk automatically creates a publicly available endpoint for your application in a default deployment.
  20. A, C Elastic Beanstalk will store application files and server log files in S3.
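
A CloudFormation `AllowedPattern` (answer 12) is just a regular expression, so a candidate pattern can be checked locally before it goes into a template. The pattern below is a simplified, hypothetical example for IPv4 CIDR input, not the exact pattern AWS documents:

```python
import re

# Hypothetical AllowedPattern for an IPv4 CIDR block such as 10.0.0.0/16
cidr_pattern = r"(\d{1,3}\.){3}\d{1,3}/\d{1,2}"

# fullmatch requires the whole input to match, like AllowedPattern does
results = {
    value: bool(re.fullmatch(cidr_pattern, value))
    for value in ("10.0.0.0/16", "192.168.1.0/24", "10.0.0.0", "not-a-cidr")
}
print(results)  # only the first two values match
```

A production pattern would also need to reject octets above 255 and prefix lengths above 32; the sketch only demonstrates the validation mechanism.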

Dumps