CertoMetrics - 9% OFF Special Discount Offer

Coupon code: SALE2026

Amazon AWS Certified Solutions Architect - Professional (SAP-C02)

Get full access to the updated question bank and pass on your first attempt.

Vendor

Amazon

Certification

Professional Certifications

Content

771 Qs

Status

Verified

Updated

5 days ago

Test the Practice Engine

Experience our real exam environment with free demo questions

Launch Free Demo
Best Value Bundle

Premium Bundle

Complete Success Suite

$79 (regular price $128)

Save $49 Instantly

  • ✓
    Full PDF + Interactive Engine: Everything you need to pass
  • ✓
    All Advanced Question Types: Drag & Drop, Hotspots, Case Studies
  • ✓
    Priority 24/7 Expert Support: Direct line to certification leads
  • ✓
    90 Days Free Priority Updates: Stay current as exams change

Success Metric

98.4% Pass Rate

Verified by 15k+ Students
Secure Checkout
Popular

Standard Simulation

Practice Engine

$69

One-Time Payment

  • Web-Based (Zero Install)
  • Real Testing Environment Virtual & Practice Modes
  • Interactive Engine Drag & Drop, Hotspots
  • 60 Days Free Updates

Compatible with All Devices

Verified Secure Checkout

Basic Tier

PDF Study Guide

$59

Digital Access

  • ✓ Exam Questions (PDF)
  • ✓ Mobile Friendly
  • ✓ 60 Days Updates
Download Free Sample PDF

Verified 10-Question Preview

Secure Checkout

Verified Community

The CertoMetrics Standard.

Recommend the #1 platform for verified Amazon certification resources.

Success Network

Help a Colleague Succeed.

Invite a peer to get their own updated SAP-C02 prep kit.

Exam Overview

The AWS Certified Solutions Architect - Professional (SAP-C02) certification is the pinnacle for architects demonstrating advanced expertise in designing and deploying dynamic, scalable, highly available, fault-tolerant, and cost-effective solutions on AWS. Achieving this certification validates your ability to navigate complex architectural challenges across multiple accounts, regions, and hybrid environments, integrating a broad range of AWS services with best practices. It signifies a deep understanding of migration strategies, security controls, and operational excellence at an enterprise scale. This credential not only elevates your professional standing but also positions you as a critical leader capable of driving significant cloud transformation initiatives and delivering robust, future-proof architectures.

Questions

65

Passing Score

750/1000

Duration

180 Minutes

Difficulty

Expert

Level

Professional

Skills Measured

Designing complex, secure, and highly available solutions for organizational complexity.
Architecting new solutions using a broad range of AWS services and advanced design patterns.
Implementing robust migration strategies and modernizing existing applications on AWS.
Optimizing solutions for cost control, performance, and operational efficiency at scale.
Designing for continuous improvement, governance, and compliance in existing AWS environments.

Career Path

Target Roles

Senior Solutions Architect Cloud Architect Principal Cloud Engineer

Common Questions

Is the material up to date?

Yes. We update our question bank weekly to match the latest Amazon standards. You get free updates for 90 days.

What format do I get?

You get instant access to both the **PDF** (for reading) and our **Premium Test Engine** (for exam simulation).

Is there a guarantee?

Absolutely. If you fail the SAP-C02 exam using our materials, we offer a full money-back guarantee.

When do I get the download?

Instantly. The download link is available in your dashboard immediately after payment is confirmed.

Free Study Guide Samples

Previewing updated SAP-C02 bank (100 Questions).

QUESTION 1

A company uses Amazon EC2 instances to run business-critical applications. Software that is running on the EC2 instances recently caused Amazon GuardDuty to generate the PenTest:S3/KaliLinux finding for some of the company's environments. The company wants to prevent this software from running again. The company is using AWS Organizations to manage its AWS accounts.

What should a solutions architect do to meet these requirements?

A
Configure Amazon Inspector to check the EC2 instances for the forbidden software and to send an Amazon Simple Notification Service (Amazon SNS) notification when the software is identified. Create an AWS Lambda function that stops the EC2 instances and notifies the company. Subscribe the Lambda function to the SNS topic.
B
Create a centralized Amazon EventBridge (Amazon CloudWatch Events) bus to receive GuardDuty events from all accounts. Configure an EventBridge (CloudWatch Events) rule to invoke an AWS Lambda function when the GuardDuty event is generated. Configure the Lambda function to stop the EC2 instances and notify the company.
C
Configure an SCP to prevent the software from being installed. Apply the SCP to the root OU for the organization.
D
Create a library of approved EC2 AMIs. Create a catalog in AWS Service Catalog to deploy the AMIs for the organization. Update IAM policies to allow EC2 instances to be created only with Service Catalog AMIs.

Correct Option: B

✅ Create a centralized Amazon EventBridge (Amazon CloudWatch Events) bus to receive GuardDuty events from all accounts. Configure an EventBridge (CloudWatch Events) rule to invoke an AWS Lambda function when the GuardDuty event is generated. Configure the Lambda function to stop the EC2 instances and notify the company.
Description: Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior across your AWS accounts and workloads, generating findings for suspicious activities like the installation of forbidden software. Amazon EventBridge (formerly CloudWatch Events) is a serverless event bus service that enables applications to communicate through events. It can receive events from various AWS services, your own applications, and SaaS applications. AWS Lambda is a serverless compute service that executes code in response to events, allowing for automated remediation actions.

Why this fits: This solution leverages GuardDuty's continuous threat detection capabilities to identify the installation of forbidden software. By integrating with a centralized EventBridge bus, events from GuardDuty across all accounts in an AWS Organization can be aggregated. An EventBridge rule can then trigger an AWS Lambda function, providing an automated, real-time response to stop the compromised EC2 instances and notify relevant personnel. This approach is highly scalable, automated, and aligns with best practices for security incident response in a multi-account environment, focusing on detecting and remediating threats post-deployment.

Example: In a large organization, GuardDuty detects an instance of an unauthorized cryptocurrency mining application being installed and executed on an EC2 instance in a development account. This generates a GuardDuty finding, which is routed to a central EventBridge bus in the security master account. An EventBridge rule, configured to match this specific finding type, invokes a Lambda function. The Lambda function extracts the instance ID, calls the ec2:StopInstances API to isolate the compromised instance, and then sends an Amazon SNS notification to the security operations team with the details of the incident.
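The remediation step in this answer can be sketched as a small Lambda handler, assuming the GuardDuty finding arrives via EventBridge with its standard detail fields (`type`, `resource.instanceDetails.instanceId`). The clients are injected so the logic can be exercised without AWS credentials; in Lambda you would pass `boto3.client("ec2")` and `boto3.client("sns")`. The SNS topic ARN is hypothetical.

```python
# Hedged sketch of the automated response: stop the instance named in a
# GuardDuty finding, then notify the security team. Not the vendor's code.
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"  # hypothetical

def handle_guardduty_finding(event, ec2_client, sns_client, topic_arn=SNS_TOPIC_ARN):
    """Stop the EC2 instance referenced in a GuardDuty finding and notify."""
    detail = event["detail"]
    finding_type = detail["type"]
    instance_id = detail["resource"]["instanceDetails"]["instanceId"]

    # Isolate the instance, then alert the security operations team.
    ec2_client.stop_instances(InstanceIds=[instance_id])
    sns_client.publish(
        TopicArn=topic_arn,
        Subject=f"GuardDuty finding remediated: {finding_type}",
        Message=f"Stopped instance {instance_id} after finding {finding_type}.",
    )
    return instance_id
```

In production this function would be subscribed to the centralized EventBridge bus via a rule matching the GuardDuty finding type.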

QUESTION 2

A company that develops consumer electronics with offices in Europe and Asia has 60 TB of software images stored on premises in Europe. The company wants to transfer the images to an Amazon S3 bucket in the ap-northeast-1 Region. New software images are created daily and must be encrypted in transit. The company needs a solution that does not require custom development to automatically transfer all existing and new software images to Amazon S3.

Which solution will meet these requirements?

A
Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket.
B
Configure Amazon Kinesis Data Firehose to transfer the images using S3 Transfer Acceleration.
C
Use an AWS Snowball device to transfer the images with the S3 bucket as the target.
D
Transfer the images over a Site-to-Site VPN connection using the S3 API with multipart upload.

Correct Option: A

✅ Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket.
Description: AWS DataSync is a secure, online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS storage services, as well as between different AWS storage services. It utilizes a software agent deployed in your on-premises environment (e.g., on a virtual machine or hardware server) to access your existing file systems (NFS, SMB, HDFS) or object storage and efficiently transfer data to Amazon S3, Amazon EFS, or Amazon FSx.

Why this fits: For transferring 60 TB of software images from an on-premises source to Amazon S3, DataSync is the most suitable managed service. It is specifically designed for high-performance, large-scale online data migrations. DataSync optimizes network utilization, handles network interruptions, provides in-flight encryption, and performs data integrity verification, ensuring reliable and efficient transfers. Because tasks can run on a schedule, the daily new images are picked up automatically without custom development. This approach is superior to manual scripting over a VPN or to services not designed for bulk file transfers, offering automation, speed, and robustness for significant datasets.

Example: An organization needs to migrate 300 TB of image files from an on-premises NFS share to an S3 bucket for archival and new cloud-native processing workflows. They would deploy a DataSync agent on a VM in their data center, configure the NFS share as a source location and the S3 bucket as a destination, then create a DataSync task to automatically and securely transfer all existing images. Subsequent changes could also be synchronized.
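As a rough sketch of this setup, the builder below assembles the parameters for DataSync's `create_task` call: an initial full copy plus scheduled incremental runs that pick up newly created images (DataSync encrypts data in transit with TLS by default). The task name and location ARNs are hypothetical; only the request shape is shown, so no AWS call is made.

```python
def build_datasync_task_request(source_location_arn, dest_location_arn,
                                schedule="rate(1 day)"):
    """Assemble kwargs for datasync.create_task (sketch, not vendor code)."""
    return {
        "SourceLocationArn": source_location_arn,
        "DestinationLocationArn": dest_location_arn,
        "Name": "software-image-sync",  # hypothetical task name
        "Schedule": {"ScheduleExpression": schedule},
        "Options": {
            "TransferMode": "CHANGED",               # only copy new/changed files
            "VerifyMode": "ONLY_FILES_TRANSFERRED",  # integrity-check what moved
        },
    }

# In a real run: boto3.client("datasync").create_task(**build_datasync_task_request(...))
```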



QUESTION 3

An entertainment company recently launched a new game. To ensure a good experience for players during the launch period, the company deployed a static quantity of 12 r6g.16xlarge (memory optimized) Amazon EC2 instances behind a Network Load Balancer. The company's operations team used the Amazon CloudWatch agent and a custom metric to include memory utilization in its monitoring strategy.

Analysis of the CloudWatch metrics from the launch period showed consumption at about one quarter of the CPU and memory that the company expected. Initial demand for the game has decreased and has become more variable. The company decides to use an Auto Scaling group that monitors the CPU and memory consumption to dynamically scale the instance fleet. A solutions architect needs to configure the Auto Scaling group to meet demand in the most cost-effective way.

Which solution will meet these requirements?

A
Configure the Auto Scaling group to deploy c6g.4xlarge (compute optimized) instances. Configure a minimum capacity of 3, a desired capacity of 3, and a maximum capacity of 12.
B
Configure the Auto Scaling group to deploy m6g.4xlarge (general purpose) instances. Configure a minimum capacity of 3, a desired capacity of 3, and a maximum capacity of 12.
C
Configure the Auto Scaling group to deploy r6g.4xlarge (memory optimized) instances. Configure a minimum capacity of 3, a desired capacity of 3, and a maximum capacity of 12.
D
Configure the Auto Scaling group to deploy r6g.8xlarge (memory optimized) instances. Configure a minimum capacity of 2, a desired capacity of 2, and a maximum capacity of 6.

Correct Option: C

✅ Configure the Auto Scaling group to deploy r6g.4xlarge (memory optimized) instances. Configure a minimum capacity of 3, a desired capacity of 3, and a maximum capacity of 12.
Description: R6g instances are memory-optimized instances powered by AWS Graviton2 processors, designed for memory-intensive workloads. They offer a high ratio of memory to vCPU, making them suitable for high-performance databases, distributed web scale in-memory caches, and big data analytics that process large datasets in memory. An Auto Scaling group (ASG) maintains application availability and allows you to scale your Amazon EC2 instances up or down automatically according to conditions you define. Configuring a minimum, desired, and maximum capacity ensures resilience and elasticity.

Why this fits: The choice of r6g.4xlarge instances is optimal for workloads that are heavily reliant on memory performance, such as relational databases, data warehousing, or in-memory caches. The 'r' series denotes memory optimization, directly addressing the common bottleneck in such applications. The Graviton2 processors (g suffix) provide improved price-performance over comparable x86-based instances. The Auto Scaling group configuration with a minimum capacity of 3 ensures high availability and fault tolerance across multiple Availability Zones, as workloads requiring Professional-level solutions typically demand resilience. A desired capacity of 3 maintains optimal performance for baseline operations, and a maximum capacity of 12 allows for significant scaling out during peak loads, distributing the workload and maintaining responsiveness without incurring the higher cost or potential resource underutilization of larger individual instances (like an 8xlarge) when not needed.

Example: A company runs a large-scale, mission-critical e-commerce database that experiences significant fluctuations in traffic. By deploying the database on r6g.4xlarge instances within an Auto Scaling group with a minimum of 3, desired of 3, and maximum of 12, the system ensures it always has at least three highly available database nodes. During flash sales or peak shopping seasons, the ASG can automatically scale out to up to 12 instances to handle the increased query load and maintain low latency, while scaling back down when demand subsides to optimize costs.
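The capacity settings and the two scaling triggers from this answer can be sketched as payloads for the Auto Scaling API (`create_auto_scaling_group` and `put_scaling_policy`). The target values and policy names are illustrative assumptions; the memory metric uses the CloudWatch agent's default `mem_used_percent` in the `CWAgent` namespace, matching the custom metric the scenario describes.

```python
def asg_capacity(min_size=3, desired=3, max_size=12):
    """Capacity block matching option C's minimum/desired/maximum of 3/3/12."""
    return {"MinSize": min_size, "DesiredCapacity": desired, "MaxSize": max_size}

# CPU scaling uses a predefined target-tracking metric; memory requires the
# custom CloudWatch agent metric. Target values are illustrative only.
CPU_POLICY = {
    "PolicyName": "cpu-target-tracking",   # hypothetical name
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
}
MEMORY_POLICY = {
    "PolicyName": "memory-target-tracking",  # hypothetical name
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "CustomizedMetricSpecification": {
            "Namespace": "CWAgent",           # CloudWatch agent default namespace
            "MetricName": "mem_used_percent",
            "Statistic": "Average",
        },
        "TargetValue": 60.0,
    },
}
```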

QUESTION 4

A company has implemented a new security requirement. According to the new requirement, the company must scan all traffic from corporate AWS instances in the company's VPC for violations of the company's security policies. As a result of these scans, the company can block access to and from specific IP addresses.

To meet the new requirement, the company deploys a set of Amazon EC2 instances in private subnets to serve as transparent proxies. The company installs approved proxy server software on these EC2 instances. The company modifies the route tables on all subnets to use the corresponding EC2 instances with proxy software as the default route. The company also creates security groups that are compliant with the security policies and assigns these security groups to the EC2 instances.

Despite these configurations, the traffic from the EC2 instances in the private subnets is not being properly forwarded to the internet.

What should a solutions architect do to resolve this issue?

A
Disable source/destination checks on the EC2 instances that run the proxy software.
B
Add a rule to the security group that is assigned to the proxy EC2 instances to allow all traffic between instances that have this security group. Assign this security group to all EC2 instances in the VPC.
C
Change the VPC's DHCP options set. Set the DNS server options to point to the addresses of the proxy EC2 instances.
D
Assign one additional elastic network interface to each proxy EC2 instance. Ensure that one of these network interfaces has a route to the private subnets. Ensure that the other network interface has a route to the internet.

Correct Option: A

✅ Disable source/destination checks on the EC2 instances that run the proxy software.
Description: By default, every EC2 instance performs source/destination checks. This means the instance must be the source or the destination of any network traffic it sends or receives. If an instance needs to act as a router, firewall, NAT device, or a proxy, it must be able to send and receive traffic where it is neither the source nor the final destination.

Why this fits: When an EC2 instance acts as a proxy, it receives network traffic on behalf of other instances and then forwards that traffic to its ultimate destination. Without disabling source/destination checks, the AWS networking infrastructure would drop packets because the proxy instance is sending traffic with source IPs that are not its own, or receiving traffic for destination IPs that are not its own. Disabling this check explicitly permits the instance to forward traffic for other entities, which is fundamental to a proxy's operation.

Example: Consider an EC2 instance configured as a web proxy for internal applications. Client applications within the VPC send HTTP/HTTPS requests to the proxy instance. The proxy then processes these requests and forwards them to external web servers on the internet. For the proxy to successfully forward these requests (where the original source IP is the client and the destination IP is the external web server), its source/destination checks must be disabled.
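This fix amounts to a single `modify_instance_attribute` call. A sketch with an injected client (so the behavior can be shown without credentials); the instance ID is hypothetical:

```python
def disable_source_dest_check(ec2_client, instance_id):
    """Allow an instance to forward traffic it neither originated nor
    terminates, as required for NAT, proxy, or router instances."""
    ec2_client.modify_instance_attribute(
        InstanceId=instance_id,
        SourceDestCheck={"Value": False},
    )
    return instance_id

# In production: disable_source_dest_check(boto3.client("ec2"), "i-0proxy123")
```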

QUESTION 5

A company is planning to migrate an Amazon RDS for Oracle database to an RDS for PostgreSQL DB instance in another AWS account. A solutions architect needs to design a migration strategy that will require no downtime and that will minimize the amount of time necessary to complete the migration. The migration strategy must replicate all existing data and any new data that is created during the migration. The target database must be identical to the source database at completion of the migration process.

All applications currently use an Amazon Route 53 CNAME record as their endpoint for communication with the RDS for Oracle DB instance. The RDS for Oracle DB instance is in a private subnet.

Which of the steps should the solutions architect take to meet these requirements? (Select THREE.)

A
Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from the source database to the target database.
B
Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for PostgreSQL DB instance in the target account with the schema and initial data from the source database.
C
Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
D
Temporarily allow the source DB instance to be publicly accessible to provide connectivity from the VPC in the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
E
Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
F
Use AWS Database Migration Service (AWS DMS) in the target account to perform a change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.

Correct Option: A, C, E

✅ Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from the source database to the target database.
Description: AWS Schema Conversion Tool (AWS SCT) is a service that helps convert your existing database schema and custom code to a format compatible with your target database. It can also be used to migrate schema between homogeneous databases, especially if you plan to optimize or refactor parts of the schema for the target environment.

Why this fits: Before migrating data, the target database instance (an RDS for PostgreSQL in this case) needs to be created. AWS SCT is the appropriate tool for migrating or converting the database schema and any associated code objects (like stored procedures, functions, views, etc.) from the source database to the newly created target RDS instance. This ensures the target database has the correct structure to receive the data.

Example: An organization migrating an on-premises PostgreSQL database to Amazon RDS for PostgreSQL in a different account would first provision the RDS instance and then use AWS SCT to assess the source schema for compatibility and apply it to the new RDS instance.



✅ Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
Description: Establishing secure network connectivity between AWS accounts is crucial for cross-account database migration. This typically involves using VPC Peering or AWS Transit Gateway. Security groups act as virtual firewalls to control inbound and outbound traffic for EC2 instances, RDS instances, and other network interfaces.

Why this fits: For AWS Database Migration Service (DMS) or any other migration tool to access the source database in one account and write to the target database in another account, secure and private network connectivity must be established. VPC Peering or Transit Gateway provides this private link. Additionally, security groups on both the source and target RDS instances must be configured to allow inbound traffic on the database port (e.g., 5432 for PostgreSQL) from the IP ranges or security groups associated with the migration services or application components in the target VPC.

Example: Two AWS accounts (Account A for source, Account B for target) are connected via VPC peering. The security group for the source PostgreSQL DB in Account A allows inbound traffic on port 5432 from the CIDR block of the VPC in Account B. Similarly, the security group for the target PostgreSQL DB in Account B allows traffic from the DMS replication instance's security group.



✅ Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
Description: AWS Database Migration Service (DMS) is a cloud service that helps migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. A common migration strategy is "full load plus change data capture (CDC)," where DMS first loads all existing data from the source to the target (full load), and then continuously replicates ongoing changes (CDC) to keep the target in sync. A CNAME record is a type of DNS record that maps an alias name to a canonical domain name.

Why this fits: For migrating an active production database with minimal downtime, AWS DMS performing a full load followed by CDC is the standard and recommended approach. This allows applications to continue writing to the source database while the migration is in progress. Once the target database is fully synchronized and validated, a cutover can be performed by changing a CNAME (or other DNS alias) record to point to the new target database endpoint. This redirects application traffic to the migrated database seamlessly.

Example: An e-commerce application relies on an on-premises PostgreSQL database. To migrate it to RDS for PostgreSQL, an AWS DMS task is configured for full load and CDC. During the full load, users can still access the application. Once CDC has caught up and the target database is validated, the DNS CNAME record db.example.com is updated from the on-premises database endpoint to the new RDS endpoint, effectively switching the application to the cloud database.
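The full-load-plus-CDC task and the final DNS cutover can be sketched as request payloads for DMS's `create_replication_task` and Route 53's `change_resource_record_sets`. The task identifier and the catch-all table-mapping rule are illustrative assumptions; only the request shapes are built, so nothing calls AWS.

```python
import json

def dms_task_request(source_endpoint_arn, target_endpoint_arn, instance_arn):
    """kwargs for dms.create_replication_task: full load, then ongoing CDC."""
    table_mappings = {"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]}
    return {
        "ReplicationTaskIdentifier": "oracle-to-postgres",  # hypothetical
        "SourceEndpointArn": source_endpoint_arn,
        "TargetEndpointArn": target_endpoint_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load-and-cdc",
        "TableMappings": json.dumps(table_mappings),
    }

def cname_cutover_batch(record_name, new_endpoint, ttl=60):
    """ChangeBatch for route53.change_resource_record_sets: repoint the
    application CNAME at the target PostgreSQL endpoint after validation."""
    return {"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": record_name,
            "Type": "CNAME",
            "TTL": ttl,
            "ResourceRecords": [{"Value": new_endpoint}],
        },
    }]}
```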

QUESTION 6

A company needs to improve the reliability of its ticketing application. The application runs on an Amazon Elastic Container Service (Amazon ECS) cluster. The company uses Amazon CloudFront to serve the application. A single ECS service of the ECS cluster is the CloudFront distribution's origin.

The application allows only a specific number of active users to enter a ticket purchasing flow. These users are identified by an encrypted attribute in their JSON Web Token (JWT). All other users are redirected to a waiting room module until there is available capacity for purchasing.

The application is experiencing high loads. The waiting room module is working as designed, but load on the waiting room is disrupting the application's availability. This disruption is negatively affecting the application's ticket sale transactions.

Which solution will provide the MOST reliability for ticket sale transactions during periods of high load?

A
Create a separate service in the ECS cluster for the waiting room. use a separate scaling configuration. Ensure that the ticketing service uses the JWT information and appropriately forwards requests to the waiting room service.
B
Move the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Split the waiting room module into a pod that is separate from the ticketing pod. Make the ticketing pod part of a StatefulSet. Ensure that the ticketing pod uses the JWT information and appropriately forwards requests to the waiting room pod.
C
Create a separate service in the ECS cluster for the waiting room. Use a separate scaling configuration. Create a CloudFront function that inspects the JWT information and appropriately forwards requests to the ticketing service or the waiting room service.
D
Move the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Split the waiting room module into a pod that is separate from the ticketing pod. Use AWS App Mesh by provisioning the App Mesh controller for Kubernetes. Enable mTLS authentication and service-to-service authentication for communication between the ticketing pod and the waiting room pod. Ensure that the ticketing pod uses the JWT information and appropriately forwards requests to the waiting room pod.

Correct Option: C

✅ Create a separate service in the ECS cluster for the waiting room. Use a separate scaling configuration. Create a CloudFront function that inspects the JWT information and appropriately forwards requests to the ticketing service or the waiting room service.
Description: This solution leverages Amazon Elastic Container Service (ECS) for microservices deployment and AWS CloudFront with CloudFront Functions for intelligent, low-latency edge routing. By creating a separate ECS service for the waiting room, it can be scaled independently of the main ticketing service, handling fluctuating demand without impacting core functionality. CloudFront Functions, executing at AWS edge locations, inspect incoming requests and their JSON Web Tokens (JWTs) to make real-time routing decisions, directing users to either the waiting room or the ticketing service based on the JWT's claims (e.g., user status, queue position).

Why this fits: This approach is highly scalable and efficient.

  1. Microservices Architecture: Separating the waiting room into its own ECS service allows independent scaling, deployment, and management, which is crucial for handling large, unpredictable spikes in traffic often associated with waiting rooms.
  2. Edge Routing with CloudFront Functions: CloudFront Functions provide extremely low-latency execution at the AWS edge network, close to the users. Inspecting the JWT at this stage enables intelligent routing decisions before the request even reaches the origin servers. This significantly reduces the load on backend services and provides a faster, more responsive user experience. JWTs can carry session information, user authentication status, or even a 'waiting room token' that the function can use to decide the destination.
  3. Decoupling Routing Logic: By placing the routing logic in a CloudFront function, the ticketing service is relieved of this responsibility, allowing it to focus solely on its core function. This reduces complexity and potential bottlenecks in the backend application.

Example: During a popular concert ticket sale, millions of users might simultaneously attempt to access the ticketing website. A CloudFront function can be configured to inspect the JWT of each incoming request. If the JWT indicates the user has not yet passed through the waiting room or is currently in a queue, the CloudFront function redirects them to the waiting room ECS service. If the JWT indicates they are authorized to proceed, they are directed to the main ticketing ECS service. This ensures that the ticketing service is only hit by requests from users who are ready to purchase, while the waiting room absorbs the initial surge.

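CloudFront Functions themselves are authored in JavaScript, but the routing decision they would implement can be sketched language-neutrally. The snippet below decodes a JWT payload and routes on a claim; the claim name `allowed_to_purchase` and the service names are hypothetical, and a real edge function must also validate the token's signature rather than trusting the decoded payload.

```python
import base64
import json

def choose_origin(jwt_token, ticketing="ticketing-svc", waiting="waiting-room-svc"):
    """Decode the JWT payload (no signature check in this sketch) and pick
    the destination service based on a capacity claim."""
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return ticketing if claims.get("allowed_to_purchase") else waiting
```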
QUESTION 7

A software company is using three AWS accounts for each of its 10 development teams. The company has developed an AWS CloudFormation standard VPC template that includes three NAT gateways. The template is added to each account for each team. The company is concerned that network costs will increase each time a new development team is added. A solutions architect must maintain the reliability of the company's solutions and minimize operational complexity.

What should the solutions architect do to reduce the network costs while meeting these requirements?

A
Create a single VPC with three NAT gateways in a shared services account. Configure each account VPC with a default route through a transit gateway to the NAT gateway in the shared services account VPC. Remove all NAT gateways from the standard VPC template.
B
Create a single VPC with three NAT gateways in a shared services account. Configure each account VPC with a default route through a VPC peering connection to the NAT gateway in the shared services account VPC. Remove all NAT gateways from the standard VPC template.
C
Remove two NAT gateways from the standard VPC template. Rely on the NAT gateway SLA to cover reliability for the remaining NAT gateway.
D
Create a single VPC with three NAT gateways in a shared services account. Configure a Site-to-Site VPN connection from each account to the shared services account. Remove all NAT gateways from the standard VPC template.

Correct Option: B

โœ… Create a single VPC with three NAT gateways in a shared services account. Configure each account VPC with a default route through a VPC peering connection to the NAT gateway in the shared services account VPC. Remove all NAT gateways from the standard VPC template.
Description: This solution centralizes egress internet traffic through a dedicated "shared services" VPC. Within this shared VPC, multiple NAT gateways (three, suggesting high availability across Availability Zones) are deployed. Individual application VPCs in other accounts connect to this shared services VPC using VPC peering connections. A default route (0.0.0.0/0) is configured in the routing tables of the application VPCs to direct all internet-bound traffic through their respective VPC peering connection to the shared services VPC, where it then exits via the centralized NAT gateways. This approach allows for the removal of NAT gateways from individual application VPC templates, reducing cost and management overhead.

Why this fits: This approach effectively centralizes the management and cost of NAT gateways. Instead of provisioning and paying for NAT gateways in every single VPC, they are consolidated into a single, highly available shared services VPC. VPC peering provides a direct, private, and cost-effective network connection between VPCs, enabling the spoke VPCs to route their internet-bound traffic to the central NAT gateways without traversing the public internet. This significantly reduces the complexity of managing network egress across many accounts and VPCs, and typically lowers overall operational costs by eliminating redundant NAT gateway deployments.

Example: An organization has 50 application VPCs across different accounts, all needing internet access for software updates or third-party API calls. Instead of each VPC having its own NAT Gateway (involving 50 sets of NAT Gateways and associated public IP/EIPs), a single "Network Shared Services" VPC is created. This VPC has three NAT Gateways (one in each AZ for high availability). Each of the 50 application VPCs establishes a VPC peering connection to the Network Shared Services VPC. A default route in each application VPC's subnet routing table points to the peering connection, sending all outbound traffic to the shared VPC's NAT Gateways. This saves significant operational overhead and costs compared to managing 50 independent NAT Gateway setups.
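The cost argument above can be made concrete with some back-of-the-envelope arithmetic. The hourly rate below is an assumption for illustration only; actual NAT gateway pricing varies by Region and excludes per-GB data processing charges.

```python
# Illustrative cost comparison for centralizing NAT gateways.
HOURLY_RATE = 0.045      # assumed USD per NAT-gateway-hour (illustrative, not real pricing)
HOURS_PER_MONTH = 730    # conventional monthly hour count

def monthly_nat_cost(gateway_count: int) -> float:
    """Monthly hourly charges for a fleet of NAT gateways."""
    return gateway_count * HOURLY_RATE * HOURS_PER_MONTH

# 50 application VPCs with 3 NAT gateways each (one per AZ)
decentralized = monthly_nat_cost(50 * 3)
# 3 NAT gateways in one shared services VPC
centralized = monthly_nat_cost(3)

print(f"Decentralized: ${decentralized:,.2f}/month")
print(f"Centralized:   ${centralized:,.2f}/month")
print(f"Savings:       ${decentralized - centralized:,.2f}/month")
```

Even at an assumed rate, consolidating 150 gateways down to 3 removes roughly 98% of the hourly NAT gateway charge, before counting the reduced operational overhead.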

QUESTION 8

A company wants to migrate virtual Microsoft workloads from an on-premises data center to AWS. The company has successfully tested a few sample workloads on AWS. The company also has created an AWS Site-to-Site VPN connection to a VPC. A solutions architect needs to generate a total cost of ownership (TCO) report for the migration of all the workloads from the data center.

Simple Network Management Protocol (SNMP) has been enabled on each VM in the data center. The company cannot add more VMs in the data center and cannot install additional software on the VMs. The discovery data must be automatically imported into AWS Migration Hub.

Which solution will meet these requirements?

A
Use the AWS Application Migration Service agentless service and the AWS Migration Hub Strategy Recommendations to generate the TCO report.
B
Launch a Windows Amazon EC2 instance. Install the Migration Evaluator agentless collector on the EC2 instance. Configure Migration Evaluator to generate the TCO report.
C
Launch a Windows Amazon EC2 instance. Install the Migration Evaluator agentless collector on the EC2 instance. Configure Migration Hub to generate the TCO report.
D
Use the AWS Migration Readiness Assessment tool inside the VPC. Configure Migration Evaluator to generate the TCO report.

Correct Option: B

โœ… Launch a Windows Amazon EC2 instance. Install the Migration Evaluator agentless collector on the EC2 instance. Configure Migration Evaluator to generate the TCO report.
Description: AWS Migration Evaluator (formerly TSO Logic) is a complimentary service that provides data-driven business cases for cloud migration. It operates by collecting performance and utilization data from an existing on-premises environment using an agentless collector to estimate the Total Cost of Ownership (TCO) of migrating workloads to AWS.

Why this fits: To utilize Migration Evaluator for generating a TCO report, an agentless collector application needs to be deployed within the customer's environment. This collector is typically installed on a dedicated Windows server. While it can be installed on an on-premises server, deploying it on a Windows Amazon EC2 instance (especially for environments already connected to AWS or for specific network architectures) is a valid and common approach. The collector gathers data such as CPU, RAM, storage, and network utilization, which is then uploaded to the Migration Evaluator service. It is Migration Evaluator itself that processes this data and generates the comprehensive TCO report.

Example: A company aims to understand the financial implications of moving its data centers to AWS. They provision a Windows Server EC2 instance, install the Migration Evaluator collector, and configure it to scan their on-premises VMware environment. After a defined data collection period, Migration Evaluator analyzes the gathered performance metrics and generates a detailed TCO report comparing their current expenditures with potential AWS costs, including recommended instance types and savings.
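To give a feel for what a TCO analysis does with the collected utilization data, the sketch below shows rightsizing arithmetic of the kind such a tool performs: sizing to observed peak utilization plus headroom rather than to provisioned capacity. The server names, utilization figures, and headroom factor are invented for illustration; they are not Migration Evaluator output or its actual algorithm.

```python
import math

# Hypothetical collected data: (server name, provisioned vCPUs, peak CPU utilization)
servers = [
    ("web-01", 8, 0.25),
    ("db-01", 16, 0.625),
    ("app-01", 8, 0.5),
]

def rightsized_vcpus(provisioned: int, peak_util: float, headroom: float = 0.25) -> int:
    """Size to the observed peak plus headroom, not to provisioned capacity."""
    return max(1, math.ceil(provisioned * peak_util * (1 + headroom)))

for name, vcpus, util in servers:
    print(f"{name}: provisioned {vcpus} vCPUs -> rightsized {rightsized_vcpus(vcpus, util)} vCPUs")
```

Rightsizing like this is why a data-driven TCO report usually comes in well below a naive like-for-like mapping of on-premises capacity to EC2 instance types.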



QUESTION 9

A company has automated the nightly retraining of its machine learning models by using AWS Step Functions. The workflow consists of multiple steps that use AWS Lambda. Each step can fail for various reasons, and any failure causes a failure of the overall workflow.

A review reveals that the retraining has failed multiple nights in a row without the company noticing the failure. A solutions architect needs to improve the workflow so that notifications are sent for all types of failures in the retraining process.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

A
Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team's mailing list.
B
Create a task named "Email" that forwards the input arguments to the SNS topic.
C
Add a Catch field to all Task, Map, and Parallel states that have a statement of "ErrorEquals": ["States.All"] and "Next": "Email".
D
Add a new email address to Amazon Simple Email Service (Amazon SES). Verify the email address.
E
Create a task named "Email" that forwards the input arguments to the SES email address.
F
Add a Catch field to all Task, Map, and Parallel states that have a statement of "ErrorEquals": ["States.Runtime"] and "Next": "Email".

Correct Option: A,B,C

โœ… Choice A: Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team's mailing list.
Description: Amazon SNS is a fully managed messaging service that supports various subscription types, including "Email." When a message is published to an SNS topic, it can be delivered to all subscribed email addresses or mailing lists. This provides a scalable and flexible way to send notifications to multiple recipients.

Why this fits: This choice establishes the fundamental notification channel. By creating an SNS topic with an email subscription to the team's mailing list, any message published to this topic will automatically trigger an email notification to the intended recipients. This is a standard and highly effective pattern for broadcasting event-driven notifications in AWS.

Example: An administrator creates an SNS topic named WorkflowErrorNotifications and adds devops-team@example.com as an email subscription. Any message sent to WorkflowErrorNotifications will now be delivered via email to the team.



โœ… Choice B: Create a task named "Email" that forwards the input arguments to the SNS topic.
Description: In AWS Step Functions, a "Task" state can be configured to invoke various AWS service API actions directly. This includes the sns:Publish API action, which sends a message to a specified SNS topic. The Task state can pass its input arguments (which would contain error details in this context) as the message payload to the SNS topic.

Why this fits: This choice defines the action within the Step Functions state machine that will trigger the email notification. By creating a dedicated "Email" Task state, the workflow clearly identifies the step responsible for sending the notification. This task will integrate with the SNS topic created in Choice A, effectively bridging the Step Functions error handling with the email notification system.

Example: A Step Functions Task state named Email is configured to call arn:aws:states:::sns:publish. Its Parameters would include TopicArn pointing to the WorkflowErrorNotifications topic and "Message.$": "$" to forward the state's input (containing error details) as the email content.



โœ… Choice C: Add a Catch field to all Task, Map, and Parallel states that have a statement of "ErrorEquals": ["States.All"] and "Next": "Email".
Description: AWS Step Functions states, such as Task, Map, and Parallel states, support a Catch field for defining error handling logic. The ErrorEquals field specifies which errors to catch, where States.All is a built-in error name that matches any unhandled error. The Next field then dictates the state to transition to if the specified error is caught.

Why this fits: This choice implements comprehensive error handling across the critical processing states of the Step Functions workflow. By adding a Catch field with ErrorEquals: ["States.All"] to all relevant states, the state machine is configured to gracefully capture any type of failure that occurs within these states. Setting Next: "Email" ensures that upon catching an error, the state machine transitions to the "Email" Task state (from Choice B), thereby initiating the email notification process. This ensures that all unhandled errors trigger an alert.

Example: A Task state responsible for ProcessData includes a Catch array: [{ "ErrorEquals": ["States.All"], "Next": "Email", "ResultPath": "$.errorInfo" }]. If ProcessData fails, the execution transitions to the Email state, with error details available in $.errorInfo.
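The three choices combine into a state machine definition along the lines of the sketch below, built here as a Python dict for readability. The Lambda function and SNS topic ARNs are placeholders, and note that AWS documentation spells the wildcard error name `States.ALL` (all caps).

```python
import json

# Minimal Amazon States Language sketch combining choices A-C:
# a processing Task whose Catch routes any error to an "Email" task
# that publishes the error payload to an SNS topic.
state_machine = {
    "StartAt": "ProcessData",
    "States": {
        "ProcessData": {
            "Type": "Task",
            # Placeholder ARN for the retraining Lambda function
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:retrain",
            "Catch": [
                {
                    "ErrorEquals": ["States.ALL"],  # wildcard: matches any error
                    "ResultPath": "$.errorInfo",    # append error details to the input
                    "Next": "Email",
                }
            ],
            "End": True,
        },
        "Email": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                # Placeholder topic ARN (choice A's topic)
                "TopicArn": "arn:aws:sns:us-east-1:111111111111:WorkflowErrorNotifications",
                "Message.$": "$",  # forward the full input, including $.errorInfo
            },
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```

In a real workflow the same `Catch` block would be repeated on every Task, Map, and Parallel state so that any failure, anywhere, lands in the `Email` state.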

QUESTION 10

A company wants to migrate its website to AWS. The website uses microservices and runs on containers that are deployed in an on-premises, self-managed Kubernetes cluster. All the manifests that define the deployments for the containers in the Kubernetes deployment are in source control.

All data for the website is stored in a PostgreSQL database. An open source container image repository runs alongside the on-premises environment.

A solutions architect needs to determine the architecture that the company will use for the website on AWS.

Which solution will meet these requirements with the LEAST effort to migrate?

A
Create an AWS App Runner service. Connect the App Runner service to the open source container image repository. Deploy the manifests from on premises to the App Runner service. Create an Amazon RDS for PostgreSQL database.
B
Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that has managed node groups. Copy the application containers to a new Amazon Elastic Container Registry (Amazon ECR) repository. Deploy the manifests from on premises to the EKS cluster. Create an Amazon Aurora PostgreSQL DB cluster.
C
Create an Amazon Elastic Container Service (Amazon ECS) cluster that has an Amazon EC2 capacity pool. Copy the application containers to a new Amazon Elastic Container Registry (Amazon ECR) repository. Register each container image as a new task definition. Configure ECS services for each task definition to match the original Kubernetes deployments. Create an Amazon Aurora PostgreSQL DB cluster.
D
Rebuild the on-premises Kubernetes cluster by hosting the cluster on Amazon EC2 instances. Migrate the open source container image repository to the EC2 instances. Deploy the manifests from on premises to the new cluster on AWS. Deploy an open source PostgreSQL database on the new cluster.

Correct Option: B

โœ… Choice B: Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that has managed node groups. Copy the application containers to a new Amazon Elastic Container Registry (Amazon ECR) repository. Deploy the manifests from on premises to the EKS cluster. Create an Amazon Aurora PostgreSQL DB cluster.
Description: Amazon EKS is a managed service that makes it easy to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane. It integrates with various AWS services for networking, storage, and security. Amazon ECR is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon Aurora PostgreSQL is a fully managed, PostgreSQL-compatible relational database built for the cloud, offering superior performance, high availability, and scalability compared to standard PostgreSQL installations.

Why this fits: This choice represents the most direct and operationally efficient migration path for an existing on-premises Kubernetes workload.

  • Amazon EKS: By using Amazon EKS, the existing Kubernetes manifests (YAML files defining deployments, services, etc.) can be deployed with minimal to no changes, significantly reducing migration effort and risk. Managed node groups further simplify operations by handling patching, updating, and scaling of EC2 instances for the worker nodes. This directly addresses the need to migrate an "on-premises Kubernetes cluster."
  • Amazon ECR: Copying the application containers to Amazon ECR provides a secure, highly available, and integrated container image repository within AWS, which is essential for deploying applications to EKS.
  • Amazon Aurora PostgreSQL DB cluster: Replacing an on-premises PostgreSQL database with Amazon Aurora PostgreSQL provides a robust, highly available, and scalable managed database solution. Aurora's compatibility with PostgreSQL means application code changes are typically minimal, if any, while benefiting from AWS's managed service advantages like automated backups, patching, and replication.

Consider why other options are less optimal:

  • Choice A (App Runner): AWS App Runner is a good solution for quickly deploying web applications and APIs, but it's not designed for directly consuming and managing existing Kubernetes manifests. It would require significant refactoring of the application's deployment strategy.
  • Choice C (ECS): While Amazon ECS is a powerful container orchestration service, migrating from Kubernetes to ECS typically involves converting Kubernetes manifests to ECS task definitions and services. This conversion introduces additional refactoring effort and complexity compared to using EKS, which natively supports Kubernetes.
  • Choice D (Rebuild on EC2): Rebuilding a Kubernetes cluster on EC2 instances yourself (Kubeadm, kops, etc.) would mean managing the Kubernetes control plane, worker nodes, and all underlying infrastructure. This significantly increases operational overhead and complexity compared to using a managed service like EKS. Deploying an "open source PostgreSQL database on the new cluster" would also mean self-managing the database, losing the benefits of a managed service like Aurora or RDS.

Example: An e-commerce company wants to migrate its existing microservices application, currently running on an on-premises Kubernetes cluster with a PostgreSQL database, to AWS. Their development team uses kubectl and standard Kubernetes YAML files for deployments. They would provision an Amazon EKS cluster, push their Docker images to Amazon ECR, and then use kubectl to apply their existing deployment manifests directly to the EKS cluster. For the database, they would create an Amazon Aurora PostgreSQL DB cluster, migrate their data, and update their application's database connection strings.
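In practice, the only edit the existing manifests typically need is repointing image references from the on-premises registry to the new ECR repository. The sketch below shows that rewrite on a minimal Deployment manifest; both registry hostnames and the service name are hypothetical.

```python
# Hypothetical registries -- substitute your own values.
ON_PREM_REGISTRY = "registry.corp.example.com"
ECR_REGISTRY = "111111111111.dkr.ecr.us-east-1.amazonaws.com"

# A minimal Kubernetes Deployment manifest, as a Python dict.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "catalog-service"},
    "spec": {
        "replicas": 3,
        "template": {
            "spec": {
                "containers": [
                    {"name": "catalog", "image": f"{ON_PREM_REGISTRY}/catalog:1.4.2"}
                ]
            }
        },
    },
}

def retarget_images(manifest: dict, old: str, new: str) -> dict:
    """Rewrite image registry references; everything else deploys unchanged."""
    for container in manifest["spec"]["template"]["spec"]["containers"]:
        container["image"] = container["image"].replace(old, new, 1)
    return manifest

retarget_images(deployment, ON_PREM_REGISTRY, ECR_REGISTRY)
print(deployment["spec"]["template"]["spec"]["containers"][0]["image"])
```

After pushing the images to ECR and applying this one-line change per container, the manifests can be applied to the EKS cluster with the same `kubectl apply` workflow the team already uses.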

QUESTION 11

A company wants to migrate its on-premises application to AWS. The database for the application stores structured product data and temporary user session data. The company needs to decouple the product data from the user session data. The company also needs to implement replication in another AWS Region for disaster recovery.

Which solution will meet these requirements with the HIGHEST performance?

A
Create an Amazon RDS DB instance with separate schemas to host the product data and the user session data. Configure a read replica for the DB instance in another Region.
B
Create an Amazon RDS DB instance to host the product data. Configure a read replica for the DB instance in another Region. Create a global datastore in Amazon ElastiCache for Memcached to host the user session data.
C
Create two Amazon DynamoDB global tables. Use one global table to host the product data. Use the other global table to host the user session data. Use DynamoDB Accelerator (DAX) for caching.
D
Create an Amazon RDS DB instance to host the product data. Configure a read replica for the DB instance in another Region. Create an Amazon DynamoDB global table to host the user session data.

Premium Solution Locked

Unlock all 771 answers & explanations

QUESTION 12

A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company hosts some applications in a VPC in the company's shared services account. The company has attached a transit gateway to the VPC in the shared services account.

The company is developing a new capability and has created a development environment that requires access to the applications that are in the shared services account. The company intends to delete and recreate resources frequently in the development account. The company also wants to give a development team the ability to recreate the team's connection to the shared services account as required.

Which solution will meet these requirements?

A
Create a transit gateway in the development account. Create a transit gateway peering request to the shared services account. Configure the shared services transit gateway to automatically accept peering connections.
B
Turn on automatic acceptance for the transit gateway in the shared services account. Use AWS Resource Access Manager (AWS RAM) to share the transit gateway resource in the shared services account with the development account. Accept the resource in the development account. Create a transit gateway attachment in the development account.
C
Turn on automatic acceptance for the transit gateway in the shared services account. Create a VPC endpoint. Use the endpoint policy to grant permissions on the VPC endpoint for the development account. Configure the endpoint service to automatically accept connection requests. Provide the endpoint details to the development team.
D
Create an Amazon EventBridge rule to invoke an AWS Lambda function that accepts the transit gateway attachment when the development account makes an attachment request. Use AWS Network Manager to share the transit gateway in the shared services account with the development account. Accept the transit gateway in the development account.


QUESTION 13

A company has a new application that needs to run on five Amazon EC2 instances in a single AWS Region. The application requires high-throughput, low-latency network connections between all of the EC2 instances where the application will run. There is no requirement for the application to be fault tolerant.

Which solution will meet these requirements?

A
Launch five new EC2 instances into a cluster placement group. Ensure that the EC2 instance type supports enhanced networking.
B
Launch five new EC2 instances into an Auto Scaling group in the same Availability Zone. Attach an extra elastic network interface to each EC2 instance.
C
Launch five new EC2 instances into a partition placement group. Ensure that the EC2 instance type supports enhanced networking.
D
Launch five new EC2 instances into a spread placement group. Attach an extra elastic network interface to each EC2 instance.


QUESTION 14

A company needs to implement disaster recovery for a critical application that runs in a single AWS Region. The application's users interact with a web frontend that is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application writes to an Amazon RDS for MySQL DB instance. The application also outputs processed documents that are stored in an Amazon S3 bucket.

The company's finance team directly queries the database to run reports. During busy periods, these queries consume resources and negatively affect application performance.

A solutions architect must design a solution that will provide resiliency during a disaster. The solution must minimize data loss and must resolve the performance problems that result from the finance team's queries.

Which solution will meet these requirements?

A
Migrate the database to Amazon DynamoDB and use DynamoDB global tables. Instruct the finance team to query a global table in a separate Region. Create an AWS Lambda function to periodically synchronize the contents of the original S3 bucket to a new S3 bucket in the separate Region. Launch EC2 instances and create an ALB in the separate Region. Configure the application to point to the new S3 bucket.
B
Launch additional EC2 instances that host the application in a separate Region. Add the additional instances to the existing ALB. In the separate Region, create a read replica of the RDS DB instance. Instruct the finance team to run queries against the read replica. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, promote the read replica to a standalone DB instance. Configure the application to point to the new S3 bucket and to the newly promoted read replica.
C
Create a read replica of the RDS DB instance in a separate Region. Instruct the finance team to run queries against the read replica. Create AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, promote the read replica to a standalone DB instance. Launch EC2 instances from the AMIs and create an ALB to present the application to end users. Configure the application to point to the new S3 bucket.
D
Create hourly snapshots of the RDS DB instance. Copy the snapshots to a separate Region. Add an Amazon ElastiCache cluster in front of the existing RDS database. Create AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, restore the database from the latest RDS snapshot. Launch EC2 instances from the AMIs and create an ALB to present the application to end users. Configure the application to point to the new S3 bucket.


QUESTION 15

A company is using IoT devices on its manufacturing equipment. Data from the devices travels to the AWS Cloud through a connection to AWS IoT Core. An Amazon Kinesis data stream sends the data from AWS IoT Core to the company's processing application. The processing application stores data in Amazon S3.

A new requirement states that the company also must send the raw data to a third-party system by using an HTTP API.

Which solution will meet these requirements with the LEAST amount of development work?

A
Create a custom AWS Lambda function to consume records from the Kinesis data stream. Configure the Lambda function to call the third-party HTTP API.
B
Create an S3 event notification with Amazon EventBridge (Amazon CloudWatch Events) as the event destination. Create an EventBridge (CloudWatch Events) API destination for the third-party HTTP API.
C
Create an Amazon Kinesis Data Firehose delivery stream. Configure an HTTP endpoint destination that targets the third-party HTTP API. Configure the Kinesis data stream to send data to the Kinesis Data Firehose delivery stream.
D
Create an S3 event notification with an Amazon Simple Queue Service (Amazon SQS) queue as the event destination. Configure the SQS queue to invoke a custom AWS Lambda function. Configure the Lambda function to call the third-party HTTP API.


QUESTION 16

A company runs a video-on-demand (VOD) content streaming application on AWS. The application includes an Amazon CloudFront distribution that uses the default cache behavior. The distribution has a single origin that points to an Amazon S3 bucket that contains the video files.

The company wants to improve the application's reliability. The company creates a second S3 bucket and configures S3 Cross-Region Replication (CRR) between the S3 buckets. The company must implement high availability for the CloudFront deployment and must ensure that failover begins within 1 second.

Which change to the current architecture will meet these requirements with the LEAST operational overhead?

A
Create a second CloudFront distribution that uses the second S3 bucket as a single origin. Create an origin group. Add both distributions to the origin group. Set the original distribution as the primary distribution. Set the new distribution as the secondary distribution. Create an Amazon Route 53 health check to monitor the health of the primary distribution and secondary distribution every second.
B
Create a new origin in the existing CloudFront distribution. Specify the second S3 bucket as the new origin. Create an origin group. Add the original origin as the primary origin. Add the new origin as the secondary origin. Set the origin response timeout value to 1. Set the origin connection attempts value to 1.
C
Create a new origin in the existing CloudFront distribution. Specify the second S3 bucket as the new origin. Create an origin group. Add the original origin as the primary origin. Add the new origin as the secondary origin. Update the default cache behavior to use the origin group. Set the origin connection timeout value to 1. Set the origin connection attempts value to 1.
D
Create a new origin in the existing CloudFront distribution. Specify the second S3 bucket as the new origin. Create an AWS Lambda function to monitor the health of the original origin. Program the Lambda function to update the CloudFront distribution and promote the secondary origin to primary if a health check fails. Create an Amazon EventBridge scheduled rule to invoke the Lambda function every second.


QUESTION 17

A weather forecasting company is migrating an application that stores data on premises in a PostgreSQL database. The company wants to migrate the database to Amazon Aurora PostgreSQL. The database size grows at an average rate of 5 GB daily and is currently 50 TB. The data center has an internet connection with 50 Mbps of available bandwidth. The migration to AWS must be completed as soon as possible within the next 21 days.

Which data transfer strategy meets these requirements with the LEAST amount of application downtime?

A
Take the application offline. Create a local backup of the database. Transmit the database backup file over the existing connection to an Amazon S3 bucket. Use native database tools to restore the backup onto the new database and to set up replication to capture any changes since the backup. Modify the database connection string, and bring the application online.
B
Install the Server Migration Connector VM in the local data center. Use the AWS Server Migration Service (AWS SMS) console to replicate the on-premises database to the new database. Modify DNS records to point to the new database.
C
Create a local backup of the database, and copy the backup file onto an AWS Snowcone device. Activate the AWS DataSync agent on the device, and configure the agent to copy the backup and ongoing changes to an Amazon S3 bucket. Use AWS Backup to restore the backup onto the new database and to apply the changes. Modify DNS records to point to the new database.
D
Use AWS Database Migration Service (AWS DMS) to launch a replication instance in a connected VPC. Use the AWS Schema Conversion Tool to extract the data locally and to move the data to an AWS Snowball Edge Storage Optimized device. Ship the device to AWS, and use an AWS DMS task to complete the transfer to the target database. For the migration type, choose the option to migrate existing data and replicate ongoing changes. Modify DNS records to point to the new database.


QUESTION 18

A company wants to record key performance indicators (KPIs) from its application as part of a strategy to convert to a user-based licensing schema. The application is a multi-tier application with a web-based UI. The company saves all log files to Amazon CloudWatch by using the CloudWatch agent. All logins to the application are saved in a log file.

As part of the new license schema, the company needs to find out how many unique users each client has on a daily basis, weekly basis, and monthly basis.

Which solution will provide this information with the LEAST change to the application?

A
Configure an Amazon CloudWatch Logs metric filter that saves each successful login as a metric. Configure the user name and client name as dimensions for the metric.
B
Change the application logic to make each successful login generate a call to the AWS SDK to increment a custom metric that records user name and client name dimensions in CloudWatch.
C
Configure the CloudWatch agent to extract successful login metrics from the logs. Additionally, configure the CloudWatch agent to save the successful login metrics as a custom metric that uses the user name and client name as dimensions for the metric.
D
Configure an AWS Lambda function to consume an Amazon CloudWatch Logs stream of the application logs. Additionally, configure the Lambda function to increment a custom CloudWatch metric that uses the user name and client name as dimensions for the metric.


QUESTION 19

A company uses AWS Organizations for a multi-account setup in the AWS Cloud. The company's finance team has a data processing application that uses AWS Lambda and Amazon DynamoDB. The company's marketing team wants to access the data that is stored in the DynamoDB table.

The DynamoDB table contains data. The marketing team can have access to only specific attributes of data in the DynamoDB table. The finance team and the marketing team have separate AWS accounts.

What should a solutions architect do to provide the marketing team with the appropriate access to the DynamoDB table?

A
Create an SCP to grant the marketing team's AWS account access to the specific attributes of the DynamoDB table. Attach the SCP to the OU of the finance team.
B
Create an IAM role in the finance team's account by using IAM policy conditions for specific DynamoDB attributes (fine-grained access control). Establish trust with the marketing team's account. In the marketing team's account, create an IAM role that has permissions to assume the IAM role in the finance team's account.
C
Create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control). Attach the policy to the DynamoDB table. In the marketing team's account, create an IAM role that has permissions to access the DynamoDB table in the finance team's account.
D
Create an IAM role in the finance account to access the DynamoDB table. Use an IAM permissions boundary to limit the access to the specific attributes. In the marketing team's account, create an IAM role that has permissions to assume the IAM role in the finance team's account.


QUESTION 20

A company runs an ecommerce web application on AWS. The web application is hosted as a static website on Amazon S3 with Amazon CloudFront for content delivery. An Amazon API Gateway API invokes AWS Lambda functions to handle user requests and order processing for the web application. The Lambda functions store data in an Amazon RDS for MySQL DB cluster that uses On-Demand Instances. The DB cluster usage has been consistent in the past 12 months.

Recently, the website has experienced SQL injection and web exploit attempts. Customers also report that order processing time increases during periods of peak usage, when the Lambda functions often have cold starts. As the company grows, the company needs to ensure scalability and low-latency access during traffic peaks. The company also must optimize the database costs and add protection against the SQL injection and web exploit attempts.

Which solution will meet these requirements?

A
Configure the Lambda functions to have an increased timeout value during peak periods. Use RDS Reserved Instances for the database. Use CloudFront and subscribe to AWS Shield Advanced to protect against the SQL injection and web exploit attempts.
B
Increase the memory of the Lambda functions. Transition to Amazon Redshift for the database. Integrate Amazon Inspector with CloudFront to protect against the SQL injection and web exploit attempts.
C
Use Lambda functions with provisioned concurrency for compute during peak periods. Transition to Amazon Aurora Serverless for the database. Use CloudFront and subscribe to AWS Shield Advanced to protect against the SQL injection and web exploit attempts.
D
Use Lambda functions with provisioned concurrency for compute during peak periods. Use RDS Reserved Instances for the database. Integrate AWS WAF with CloudFront to protect against the SQL injection and web exploit attempts.
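Several of these options hinge on AWS WAF blocking SQL injection at the CloudFront layer. As a minimal sketch (not the exam's official answer), the rule below shows the shape of a web ACL rule using WAF's SQLi match statement, in the structure the WAFV2 API expects. The rule name and field choices are illustrative assumptions.

```python
import json

# Illustrative WAF web ACL rule: block requests whose query arguments
# contain SQL injection patterns. Names like "BlockSqli" are placeholders.
sqli_rule = {
    "Name": "BlockSqli",
    "Priority": 0,
    "Statement": {
        "SqliMatchStatement": {
            "FieldToMatch": {"AllQueryArguments": {}},
            "TextTransformations": [
                {"Priority": 0, "Type": "URL_DECODE"}
            ],
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockSqli",
    },
}

print(json.dumps(sqli_rule["Statement"], indent=2))
```

A web ACL built from rules like this can be associated with a CloudFront distribution, which is what distinguishes the WAF-based option from Shield Advanced (DDoS protection) or Amazon Inspector (workload vulnerability scanning).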


QUESTION 21

A data analytics company has an Amazon Redshift cluster that consists of several reserved nodes. The cluster is experiencing unexpected bursts of usage because a team of employees is compiling a deep audit analysis report. The queries to generate the report are complex read queries and are CPU intensive.

Business requirements dictate that the cluster must be able to service read and write queries at all times. A solutions architect must devise a solution that accommodates the bursts of usage.

Which solution meets these requirements MOST cost-effectively?

A
Provision an Amazon EMR cluster. Offload the complex data processing tasks.
B
Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.
C
Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.
D
Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.


QUESTION 22

A company has multiple lines of business (LOBs) that roll up to the parent company. The company has asked its solutions architect to develop a solution with the following requirements:

โœ‘ Produce a single AWS invoice for all of the AWS accounts used by its LOBs.

โœ‘ The costs for each LOB account should be broken out on the invoice.

โœ‘ Provide the ability to restrict services and features in the LOB accounts, as defined by the company's governance policy.

โœ‘ Each LOB account should be delegated full administrator permissions, regardless of the governance policy.

Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

A
Use AWS Organizations to create an organization in the parent account for each LOB. Then, invite each LOB account to the appropriate organization.
B
Use AWS Organizations to create a single organization in the parent account. Then, invite each LOB's AWS account to join the organization.
C
Implement service quotas to define the services and features that are permitted and apply the quotas to each LOB as appropriate.
D
Create an SCP that allows only approved services and features, then apply the policy to the LOB accounts.
E
Enable consolidated billing in the parent account's billing console and link the LOB accounts.
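The SCP option relies on an allowlist-style service control policy. As a hedged sketch (the service list below is an assumed placeholder, not part of the question), an allowlist SCP looks like an ordinary IAM policy document:

```python
import json

# Illustrative allowlist SCP: only the listed services are permitted in the
# LOB accounts. In allowlist mode the default FullAWSAccess policy is
# detached, so anything not listed here is implicitly denied.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*", "rds:*", "cloudwatch:*"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Note the interaction with the "full administrator" requirement: each LOB account can still hold an IAM administrator role, but an SCP caps the effective permissions of every principal in the account, which is why SCPs (not service quotas) express a governance policy.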


QUESTION 23

A company is creating a solution that can move 400 employees into a remote working environment in the event of an unexpected disaster. The user desktops have a mix of Windows and Linux operating systems. Multiple types of software, such as web browsers and mail clients, are installed on each desktop.

A solutions architect needs to implement a solution that can integrate with the company's on-premises Active Directory to allow employees to use their existing identity credentials. The solution must provide multi-factor authentication (MFA) and must replicate the user experience from the existing desktops.

Which solution will meet these requirements?

A
Use Amazon WorkSpaces for the cloud desktop service. Set up a VPN connection to the on-premises network. Create an AD Connector, and connect to the on-premises Active Directory. Activate MFA for Amazon WorkSpaces by using the AWS Management Console.
B
Use Amazon AppStream 2.0 as an application streaming service. Configure Desktop View for the employees. Set up a VPN connection to the on-premises network. Set up Active Directory Federation Services (AD FS) on premises. Connect the VPC network to AD FS through the VPN connection.
C
Use Amazon WorkSpaces for the cloud desktop service. Set up a VPN connection to the on-premises network. Create an AD Connector, and connect to the on-premises Active Directory. Configure a RADIUS server for MFA.
D
Use Amazon AppStream 2.0 as an application streaming service. Set up Active Directory Federation Services on premises. Configure MFA to grant users access on AppStream 2.0.


QUESTION 24

A company runs a software-as-a-service (SaaS) application on AWS. The application consists of AWS Lambda functions and an Amazon RDS for MySQL Multi- AZ database. During market events, the application has a much higher workload than normal. Users notice slow response times during the peak periods because of many database connections. The company needs to improve the scalable performance and availability of the database.

Which solution meets these requirements?

A
Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold.
B
Migrate the database to Amazon Aurora, and add a read replica. Add a database connection pool outside of the Lambda handler function.
C
Migrate the database to Amazon Aurora, and add a read replica. Use Amazon Route 53 weighted records.
D
Migrate the database to Amazon Aurora, and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.


QUESTION 25

A company has developed an application that is running Windows Server on VMware vSphere VMs that the company hosts on premises. The application data is stored in a proprietary format that must be read through the application. The company manually provisioned the servers and the application.

As part of its disaster recovery plan, the company wants the ability to host its application on AWS temporarily if the company's on-premises environment becomes unavailable. The company wants the application to return to on-premises hosting after a disaster recovery event is complete. The RPO is 5 minutes.

Which solution meets these requirements with the LEAST amount of operational overhead?

A
Configure AWS DataSync. Replicate the data to Amazon Elastic Block Store (Amazon EBS) volumes. When the on-premises environment is unavailable, use AWS CloudFormation templates to provision Amazon EC2 instances and attach the EBS volumes.
B
Configure AWS Elastic Disaster Recovery. Replicate the data to replication Amazon EC2 instances that are attached to Amazon Elastic Block Store (Amazon EBS) volumes. When the on-premises environment is unavailable, use Elastic Disaster Recovery to launch EC2 instances that use the replicated volumes.
C
Provision an AWS Storage Gateway file gateway. Replicate the data to an Amazon S3 bucket. When the on-premises environment is unavailable, use AWS Backup to restore the data to Amazon Elastic Block Store (Amazon EBS) volumes and launch Amazon EC2 instances from these EBS volumes.
D
Provision an Amazon FSx for Windows File Server file system on AWS. Replicate the data to the file system. When the on-premises environment is unavailable, use AWS CloudFormation templates to provision Amazon EC2 instances and use AWS::CloudFormation::Init commands to mount the Amazon FSx file shares.


QUESTION 26

A company is running its solution on AWS in a manually created VPC. The company is using AWS CloudFormation to provision other parts of the infrastructure. According to a new requirement, the company must manage all infrastructure in an automatic way.

What should the company do to meet this new requirement with the LEAST effort?

A
Create a new AWS Cloud Development Kit (AWS CDK) stack that strictly provisions the existing VPC resources and configuration. Use AWS CDK to import the VPC into the stack and to manage the VPC.
B
Create a CloudFormation stack set that creates the VPC. Use the stack set to import the VPC into the stack.
C
Create a new CloudFormation template that strictly provisions the existing VPC resources and configuration. From the CloudFormation console, create a new stack by importing the existing resources.
D
Create a new CloudFormation template that creates the VPC. Use the AWS Serverless Application Model (AWS SAM) CLI to import the VPC.
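The resource-import option depends on a template that describes the existing VPC exactly; CloudFormation also requires a `DeletionPolicy` on every resource being imported. A minimal sketch (the CIDR block below is an assumed placeholder, not from the question):

```python
import json

# Skeleton of a CloudFormation template prepared for a resource import.
# The properties must match the existing VPC, and each imported resource
# must carry a DeletionPolicy.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExistingVpc": {
            "Type": "AWS::EC2::VPC",
            "DeletionPolicy": "Retain",  # required for import
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        }
    },
}

print(json.dumps(template, indent=2))
```

The import is then performed from the CloudFormation console (or `create-change-set` with `--change-set-type IMPORT`), supplying the physical ID of the existing VPC; no resource is created or replaced.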


QUESTION 27

A company has more than 10,000 sensors that send data to an on-premises Apache Kafka server by using the Message Queuing Telemetry Transport (MQTT) protocol. The on-premises Kafka server transforms the data and then stores the results as objects in an Amazon S3 bucket.

Recently, the Kafka server crashed. The company lost sensor data while the server was being restored. A solutions architect must create a new design on AWS that is highly available and scalable to prevent a similar occurrence.

Which solution will meet these requirements?

A
Launch two Amazon EC2 instances to host the Kafka server in an active/standby configuration across two Availability Zones. Create a domain name in Amazon Route 53. Create a Route 53 failover policy. Route the sensors to send the data to the domain name.
B
Migrate the on-premises Kafka server to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create a Network Load Balancer (NLB) that points to the Amazon MSK broker. Enable NLB health checks. Route the sensors to send the data to the NLB.
C
Deploy AWS IoT Core, and connect it to an Amazon Kinesis Data Firehose delivery stream. Use an AWS Lambda function to handle data transformation. Route the sensors to send the data to AWS IoT Core.
D
Deploy AWS IoT Core, and launch an Amazon EC2 instance to host the Kafka server. Configure AWS IoT Core to send the data to the EC2 instance. Route the sensors to send the data to AWS IoT Core.


QUESTION 28

A company is running a three-tier web application in an on-premises data center. The frontend is served by an Apache web server, the middle tier is a monolithic Java application, and the storage tier is a PostgreSQL database.

During a recent marketing promotion, customers could not place orders through the application because the application crashed. An analysis showed that all three tiers were overloaded. The application became unresponsive, and the database reached its capacity limit because of read operations. The company already has several similar promotions scheduled in the near future.

A solutions architect must develop a plan for migration to AWS to resolve these issues. The solution must maximize scalability and must minimize operational effort.

Which combination of steps will meet these requirements? (Select THREE.)

A
Refactor the frontend so that static assets can be hosted on Amazon S3. Use Amazon CloudFront to serve the frontend to customers. Connect the frontend to the Java application.
B
Rehost the Apache web server of the frontend on Amazon EC2 instances that are in an Auto Scaling group. Use a load balancer in front of the Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) to host the static assets that the Apache web server needs.
C
Rehost the Java application in an AWS Elastic Beanstalk environment that includes auto scaling.
D
Refactor the Java application. Develop a Docker container to run the Java application. Use AWS Fargate to host the container.
E
Use AWS Database Migration Service (AWS DMS) to replatform the PostgreSQL database to an Amazon Aurora PostgreSQL database. Use Aurora Auto Scaling for read replicas.
F
Rehost the PostgreSQL database on an Amazon EC2 instance that has twice as much memory as the on-premises server.


QUESTION 29

A company operates a fleet of servers on premises and operates a fleet of Amazon EC2 instances in its organization in AWS Organizations. The companyโ€™s AWS accounts contain hundreds of VPCs. The company wants to connect its AWS accounts to its on-premises network. AWS Site-to-Site VPN connections are already established to a single AWS account. The company wants to control which VPCs can communicate with other VPCs.

Which combination of steps will achieve this level of control with LEAST operational effort? (Select THREE)

A
Create a transit gateway in an AWS account. Share the transit gateway across accounts by using AWS Resource Access Manager (AWS RAM).
B
Configure attachments to all VPCs and VPNs.
C
Set up transit gateway route tables. Associate the VPCs and VPNs with the route tables.
D
Configure VPC peering between the VPCs.
E
Configure attachments between the VPCs and VPNs.
F
Set up route tables on the VPCs and VPNs.


QUESTION 30

A company provides a centralized Amazon EC2 application hosted in a single shared VPC. The centralized application must be accessible from client applications running in the VPCs of other business units. The centralized application front end is configured with a Network Load Balancer (NLB) for scalability.

Up to 10 business unit VPCs will need to be connected to the shared VPC. Some of the business unit VPC CIDR blocks overlap with the shared VPC, and some overlap with each other. Network connectivity to the centralized application in the shared VPC should be allowed from authorized business unit VPCs only.

Which network configuration should a solutions architect use to provide connectivity from the client applications in the business unit VPCs to the centralized application in the shared VPC?

A
Create an AWS Transit Gateway. Attach the shared VPC and the authorized business unit VPCs to the transit gateway. Create a single transit gateway route table and associate it with all of the attached VPCs. Allow automatic propagation of routes from the attachments into the route table. Configure VPC routing tables to send traffic to the transit gateway.
B
Create a VPC endpoint service using the centralized application NLB and enable the option to require endpoint acceptance. Create a VPC endpoint in each of the business unit VPCs using the service name of the endpoint service. Accept authorized endpoint requests from the endpoint service console.
C
Create a VPC peering connection from each business unit VPC to the shared VPC. Accept the VPC peering connections from the shared VPC console. Configure VPC routing tables to send traffic to the VPC peering connection.
D
Configure a virtual private gateway for the shared VPC and create customer gateways for each of the authorized business unit VPCs. Establish a Site-to-Site VPN connection from the business unit VPCs to the shared VPC. Configure VPC routing tables to send traffic to the VPN connection.


QUESTION 31

A company uses AWS Organizations. The company runs two firewall appliances in a centralized networking account. Each firewall appliance runs on a manually configured highly available Amazon EC2 instance. A transit gateway connects the VPC from the centralized networking account to VPCs of member accounts. Each firewall appliance uses a static private IP address that is then used to route traffic from the member accounts to the internet.

During a recent incident, a badly configured script caused the termination of both firewall appliances. During the rebuild of the firewall appliances, the company wrote a new script to configure the firewall appliances at startup.

The company wants to modernize the deployment of the firewall appliances. The firewall appliances need the ability to scale horizontally to handle increased traffic when the network expands. The company must continue to use the firewall appliances to comply with company policy. The provider of the firewall appliances has confirmed that the latest version of the firewall code will work with all AWS services.

Which combination of steps should the solutions architect recommend to meet these requirements MOST cost-effectively? (Select THREE.)

A
Deploy a Gateway Load Balancer in the networking account. Set up an endpoint service that uses AWS PrivateLink.
B
Deploy a Network Load Balancer in the centralized networking account. Set up an endpoint service that uses AWS PrivateLink.
C
Create an Auto Scaling group and a launch template that uses the new script as user data to configure the firewall appliances. Create a target group that uses the instance target type.
D
Create an Auto Scaling group. Configure an AWS Launch Wizard deployment that uses the new script as user data to configure the firewall appliances. Create a target group that uses the IP target type.
E
Create VPC endpoints in each member account. Update the route tables to point to the VPC endpoints.
F
Create VPC endpoints in the centralized networking account. Update the route tables in each member account to point to the VPC endpoints.


QUESTION 32

A company runs an intranet application on premises. The company wants to configure a cloud backup of the application. The company has selected AWS Elastic Disaster Recovery for this solution.

The company requires that replication traffic does not travel through the public internet. The application also must not be accessible from the internet. The company does not want this solution to consume all available network bandwidth because other applications require bandwidth.

Which combination of steps will meet these requirements? (Choose three.)

A
Create a VPC that has at least two private subnets, two NAT gateways, and a virtual private gateway.
B
Create a VPC that has at least two public subnets, a virtual private gateway, and an internet gateway.
C
Create an AWS Site-to-Site VPN connection between the on-premises network and the target AWS network.
D
Create an AWS Direct Connect connection and a Direct Connect gateway between the on-premises network and the target AWS network.
E
During configuration of the replication servers, select the option to use private IP addresses for data replication.
F
During configuration of the launch settings for the target servers, select the option to ensure that the Recovery instanceโ€™s private IP address matches the source server's private IP address.


QUESTION 33

A company plans to deploy a new private intranet service on Amazon EC2 instances inside a VPC. An AWS Site-to-Site VPN connects the VPC to the company's on-premises network. The new service must communicate with existing on-premises services. The on-premises services are accessible through the use of hostnames that reside in the company.example DNS zone. This DNS zone is wholly hosted on premises and is available only on the company's private network.

A solutions architect must ensure that the new service can resolve hostnames on the company.example domain to integrate with existing services.

Which solution meets these requirements?

A
Create an empty private hosted zone in Amazon Route 53 for company.example. Add an additional NS record to the company's on-premises company.example zone that points to the authoritative name servers for the new private zone in Route 53.
B
Turn on DNS hostnames for the VPC. Configure a new outbound resolver endpoint with Amazon Route 53 Resolver. Create a Resolver rule to forward requests for company.example to the on-premises name servers.
C
Turn on DNS hostnames for the VPC. Configure a new inbound resolver endpoint with Amazon Route 53 Resolver. Configure the on-premises DNS server to forward requests for company.example to the new resolver.
D
Use AWS Systems Manager to configure a run document that will install a hosts file that contains any required hostnames. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to run the document when an instance is entering the running state.
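The outbound-endpoint option boils down to a single Resolver forwarding rule. As a sketch shaped like the parameters the Route 53 Resolver `CreateResolverRule` API takes (the endpoint ID and on-premises name server IPs below are assumed placeholders):

```python
# Illustrative Route 53 Resolver forwarding rule: queries for
# company.example leave the VPC via an outbound endpoint and are forwarded
# to the on-premises name servers.
resolver_rule = {
    "RuleType": "FORWARD",
    "DomainName": "company.example",
    "ResolverEndpointId": "rslvr-out-EXAMPLE",  # outbound endpoint (placeholder ID)
    "TargetIps": [
        {"Ip": "10.10.0.10", "Port": 53},  # on-premises DNS (placeholder)
        {"Ip": "10.10.0.11", "Port": 53},
    ],
}
```

Once the rule is associated with the VPC, EC2 instances keep using the default VPC resolver; only company.example queries are forwarded on-premises over the VPN.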


QUESTION 34

A medical company is running a REST API on a set of Amazon EC2 instances. The EC2 instances run in an Auto Scaling group behind an Application Load Balancer (ALB). The ALB runs in three public subnets, and the EC2 instances run in three private subnets. The company has deployed an Amazon CloudFront distribution that has the ALB as the only origin.

Which solution should a solutions architect recommend to enhance the origin security?

A
Store a random string in AWS Secrets Manager. Create an AWS Lambda function for automatic secret rotation. Configure CloudFront to inject the random string as a custom HTTP header for the origin request. Create an AWS WAF web ACL rule with a string match rule for the custom header. Associate the web ACL with the ALB.
B
Create an AWS WAF web ACL rule with an IP match condition of the CloudFront service IP address ranges. Associate the web ACL with the ALB. Move the ALB into the three private subnets.
C
Store a random string in AWS Systems Manager Parameter Store. Configure Parameter Store automatic rotation for the string. Configure CloudFront to inject the random string as a custom HTTP header for the origin request. Inspect the value of the custom HTTP header, and block access in the ALB.
D
Configure AWS Shield Advanced. Create a security group policy to allow connections from CloudFront service IP address ranges. Add the policy to AWS Shield Advanced, and attach the policy to the ALB.
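The custom-header option pairs a secret header that CloudFront injects toward the origin with a WAF rule on the ALB that blocks any request lacking it. A hedged sketch of that rule (the header name and value below are assumed placeholders; in practice the value comes from Secrets Manager and is rotated):

```python
# Illustrative WAF rule for the origin ALB: block any request that does NOT
# carry the exact secret header CloudFront adds to origin requests.
header_rule = {
    "Name": "RequireCloudFrontHeader",
    "Priority": 0,
    "Statement": {
        "NotStatement": {
            "Statement": {
                "ByteMatchStatement": {
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-token"}},
                    "SearchString": "s3cr3t-from-secrets-manager",  # placeholder
                    "PositionalConstraint": "EXACTLY",
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                }
            }
        }
    },
    "Action": {"Block": {}},
}
```

This defeats callers who bypass CloudFront and hit the ALB directly, without relying on CloudFront IP ranges (which change) or moving the ALB into private subnets.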


QUESTION 35

A company is serving files to its customers through an SFTP server that is accessible over the internet. The SFTP server is running on a single Amazon EC2 instance with an Elastic IP address attached. Customers connect to the SFTP server through the Elastic IP address and use SSH for authentication. The EC2 instance also has an attached security group that allows access from all customer IP addresses.

A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to customers who access files. The solution must not change the way customers connect.

Which solution will meet these requirements?

A
Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B
Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C
Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the EFS file system.
D
Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.


QUESTION 36

A company used AWS CloudFormation to create all new infrastructure in its AWS member accounts. The resources rarely change and are properly sized for the expected load. The monthly AWS bill is consistent.

Occasionally, a developer creates a new resource for testing and forgets to remove the resource when the test is complete. Most of these tests last a few days before the resources are no longer needed.

The company wants to automate the process of finding unused resources. A solutions architect needs to design a solution that determines whether the cost in the AWS bill is increasing. The solution must help identify resources that cause an increase in cost and must automatically notify the company's operations team.

Which solution will meet these requirements?

A
Turn on billing alerts. Use AWS Cost Explorer to determine the costs for the past month. Create an Amazon CloudWatch alarm for total estimated charges. Specify a cost threshold that is higher than the costs that Cost Explorer determined. Add a notification to alert the operations team if the alarm threshold is breached.
B
Turn on billing alerts. Use AWS Cost Explorer to determine the average monthly costs for the past 3 months. Create an Amazon CloudWatch alarm for total estimated charges. Specify a cost threshold that is higher than the costs that Cost Explorer determined. Add a notification to alert the operations team if the alarm threshold is breached.
C
Use AWS Cost Anomaly Detection to create a cost monitor that has a monitor type of Linked account. Create a subscription to send daily AWS cost summaries to the operations team. Specify a threshold for cost variance.
D
Use AWS Cost Anomaly Detection to create a cost monitor that has a monitor type of AWS services. Create a subscription to send daily AWS cost summaries to the operations team. Specify a threshold for cost variance.


QUESTION 37

A company is using multiple AWS accounts. The company has a shared service account and several other accounts for different projects.

A team has a VPC in a project account. The team wants to connect this VPC to a corporate network through an AWS Direct Connect gateway that exists in the shared services account. The team wants to automatically perform a virtual private gateway association with the Direct Connect gateway by using an already- tested AWS Lambda function while deploying its VPC networking stack. The Lambda function code can assume a role by using AWS Security Token Service (AWS STS). The team is using AWS CloudFormation to deploy its infrastructure.

Which combination of steps will meet these requirements? (Choose three.)

A
Deploy the Lambda function to the project account. Update the Lambda functionโ€™s IAM role with the directconnect:* permission.
B
Create a cross-account IAM role in the shared services account that grants the Lambda function the directconnect:* permission. Add the sts:AssumeRole permission to the IAM role that is associated with the Lambda function in the shared services account.
C
Add a custom resource to the CloudFormation networking stack that references the Lambda function in the project account.
D
Deploy the Lambda function that is performing the association to the shared services account. Update the Lambda functionโ€™s IAM role with the directconnect:* permission.
E
Create a cross-account IAM role in the shared services account that grants the sts:AssumeRole permission to the Lambda function with the directconnect:* permission acting as a resource. Add the sts:AssumeRole permission with this cross-account IAM role as a resource to the IAM role that belongs to the Lambda function in the project account.
F
Add a custom resource to the CloudFormation networking stack that references the Lambda function in the shared services account.
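The cross-account options come down to two policy documents: the trust policy on the role in the shared services account, and the permissions that let the project-account Lambda function assume it. A hedged sketch (the account ID and role names below are invented placeholders):

```python
# Trust policy on the role in the shared services account: it trusts the
# Lambda function's execution role from the project account.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::111111111111:role/vpc-stack-lambda-role"  # placeholder
        },
        "Action": "sts:AssumeRole",
    }],
}

# Permissions attached to that same shared-services role: the Direct
# Connect actions needed to associate the virtual private gateway.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "directconnect:*",
        "Resource": "*",
    }],
}
```

The Lambda function in the project account then calls `sts:AssumeRole` on this role (its own execution role must allow that call with the cross-account role ARN as the resource) before making the Direct Connect gateway association.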


QUESTION 38

A company has multiple lines of business (LOBs) that roll up to the parent company. The company has asked its solutions architect to develop a solution with the following requirements:

โœ‘ Produce a single AWS invoice for all of the AWS accounts used by its LOBs.

โœ‘ The costs for each LOB account should be broken out on the invoice.

โœ‘ Provide the ability to restrict services and features in the LOB accounts, as defined by the company's governance policy.

โœ‘ Each LOB account should be delegated full administrator permissions, regardless of the governance policy.

Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

A
Use AWS Organizations to create an organization in the parent account for each LOB. Then, invite each LOB account to the appropriate organization.
B
Use AWS Organizations to create a single organization in the parent account. Then, invite each LOB's AWS account to join the organization.
C
Implement service quotas to define the services and features that are permitted and apply the quotas to each LOB as appropriate.
D
Create an SCP that allows only approved services and features, then apply the policy to the LOB accounts.
E
Enable consolidated billing in the parent account's billing console and link the LOB accounts.


QUESTION 39

A flood monitoring agency has deployed more than 10,000 water-level monitoring sensors. Sensors send continuous data updates, and each update is less than 1 MB in size. The agency has a fleet of on-premises application servers. These servers receive updates from the sensors, convert the raw data into a human-readable format, and write the results to an on-premises relational database server. Data analysts then use simple SQL queries to monitor the data.

The agency wants to increase overall application availability and reduce the effort that is required to perform maintenance tasks. These maintenance tasks, which include updates and patches to the application servers, cause downtime. While an application server is down, data is lost from sensors because the remaining servers cannot handle the entire workload.

The agency wants a solution that optimizes operational overhead and costs. A solutions architect recommends the use of AWS IoT Core to collect the sensor data.

What else should the solutions architect recommend to meet these requirements?

A
Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to .csv format, and ingest it into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
B
Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to Apache Parquet format, and save it to an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
C
Send the sensor data to an Amazon Kinesis Data Analytics application to convert the data to .csv format and store it in an Amazon S3 bucket. Import the data into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data from the DB instance.
D
Send the sensor data to an Amazon Kinesis Data Analytics application to convert the data to Apache Parquet format and store it in an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
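
For illustration, the Lambda transformation step that options A and B describe follows the Kinesis Data Firehose record contract (base64-encoded `data`, per-record `recordId` and `result`). The sensor payload fields below (`id`, `level`, `ts`) are assumptions, not part of the question; this is a minimal sketch, not a production implementation.

```python
import base64
import json

def lambda_handler(event, context):
    """Kinesis Data Firehose transformation: decode each record, flatten
    the raw sensor payload, and return it re-encoded for delivery."""
    output = []
    for record in event["records"]:
        raw = base64.b64decode(record["data"]).decode("utf-8")
        sensor = json.loads(raw)  # assumed JSON payload from the sensor
        transformed = {
            "sensor_id": sensor.get("id"),
            "water_level_m": sensor.get("level"),
            "timestamp": sensor.get("ts"),
        }
        output.append({
            "recordId": record["recordId"],
            # "ProcessingFailed" would route the record to the error prefix
            "result": "Ok",
            "data": base64.b64encode(
                (json.dumps(transformed) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```

For option B specifically, the Parquet conversion itself would typically be delegated to Firehose record format conversion or a library such as pyarrow inside the function.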


QUESTION 40

A software as a service (SaaS) company has developed a multi-tenant environment. The company uses Amazon DynamoDB tables that the tenants share for the storage layer. The company uses AWS Lambda functions for the application services.

The company wants to offer a tiered subscription model that is based on resource consumption by each tenant. Each tenant is identified by a unique tenant ID that is sent as part of each request to the Lambda functions. The company has created an AWS Cost and Usage Report (AWS CUR) in an AWS account. The company wants to allocate the DynamoDB costs to each tenant to match that tenant's resource consumption.

Which solution will provide a granular view of the DynamoDB cost for each tenant with the LEAST operational effort?

A
Associate a new tag that is named tenant ID with each table in DynamoDB. Activate the tag as a cost allocation tag in the AWS Billing and Cost Management console. Deploy new Lambda function code to log the tenant ID in Amazon CloudWatch Logs. Use the AWS CUR to separate DynamoDB consumption cost for each tenant ID.
B
Configure the Lambda functions to log the tenant ID and the number of RCUs and WCUs consumed from DynamoDB for each transaction to Amazon CloudWatch Logs. Deploy another Lambda function to calculate the tenant costs by using the logged capacity units and the overall DynamoDB cost from the AWS Cost Explorer API. Create an Amazon EventBridge rule to invoke the calculation Lambda function on a schedule.
C
Create a new partition key that associates DynamoDB items with individual tenants. Deploy a Lambda function to populate the new column as part of each transaction. Deploy another Lambda function to calculate the tenant costs by using Amazon Athena to calculate the number of tenant items from DynamoDB and the overall DynamoDB cost from the AWS CUR. Create an Amazon EventBridge rule to invoke the calculation Lambda function on a schedule.
D
Deploy a Lambda function to log the tenant ID, the size of each response, and the duration of the transaction call as custom metrics to Amazon CloudWatch Logs. Use CloudWatch Logs Insights to query the custom metrics for each tenant. Use AWS Pricing Calculator to obtain the overall DynamoDB costs and to calculate the tenant costs.
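
The tagging mechanics behind option A can be sketched with boto3: tag the table, then activate the tag as a cost allocation tag via the Cost Explorer API. The tag key, table ARN, and tenant value are illustrative; activation requires billing permissions in the management account.

```python
TAG_KEY = "TenantID"  # illustrative cost allocation tag key

def activation_payload(tag_key):
    """Payload that marks a user-defined tag as an active cost
    allocation tag in Billing and Cost Management."""
    return [{"TagKey": tag_key, "Status": "Active"}]

def tag_table_for_cost_allocation(table_arn, tenant_id):
    """Tag a DynamoDB table and activate the tag for cost allocation."""
    import boto3  # imported lazily; needs AWS credentials at runtime
    boto3.client("dynamodb").tag_resource(
        ResourceArn=table_arn,
        Tags=[{"Key": TAG_KEY, "Value": tenant_id}],
    )
    boto3.client("ce").update_cost_allocation_tags_status(
        CostAllocationTagsStatus=activation_payload(TAG_KEY)
    )
```

Note that because the tables are shared by all tenants in this scenario, a per-table tag alone cannot attribute consumption per tenant, which is central to evaluating the options.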


QUESTION 41

A company needs to migrate numerous Windows applications that are hosted on VMware from its on-premises data center to the AWS Cloud. All the applications are written in C++. The company is not allowed to install agents on the servers.

What should a solutions architect do to meet these requirements?

A
Use AWS Application Migration Service (CloudEndure Migration) for each on-premises VM. Perform an initial data synchronization from on premises to AWS. Launch a test instance, and test the instance on AWS. Launch a cutover instance on AWS, and finalize the cutover.
B
Run the AWS App2Container tool. Push the App2Container container images to Amazon Elastic Container Registry (Amazon ECR). Deploy the container images as services by using Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type.
C
Use AWS Server Migration Service (AWS SMS) to install and configure the Server Migration Connector in VMware vCenter. Import the server catalog from vCenter into AWS SMS. Create a new replication job that contains all the on-premises VMs for migration. Start the replication job.
D
Install and configure the AWS Agentless Discovery Connector in VMware vCenter to transmit data to AWS Application Discovery Service. In the Agentless Discovery Connector, select the vCenter VMs that are the source of the migration. Start the data collection.


QUESTION 42

A company has an application that has a web frontend. The application runs in the company's on-premises data center and requires access to file storage for critical data.

The application runs on three Linux VMs for redundancy. The architecture includes a load balancer with HTTP request-based routing. The company needs to migrate the application to AWS as quickly as possible. The architecture on AWS must be highly available.

Which solution will meet these requirements with the FEWEST changes to the architecture?

A
Migrate the application to Amazon Elastic Container Service (Amazon ECS) containers that use the Fargate launch type in three Availability Zones. Use Amazon S3 to provide file storage for all three containers. Use a Network Load Balancer to direct traffic to the containers.
B
Migrate the application to Amazon EC2 instances in three Availability Zones. Use Amazon Elastic File System (Amazon EFS) for file storage. Mount the file storage on all three EC2 instances. Use an Application Load Balancer to direct traffic to the EC2 instances.
C
Migrate the application to Amazon Elastic Kubernetes Service (Amazon EKS) containers that use the Fargate launch type in three Availability Zones. Use Amazon FSx for Lustre to provide file storage for all three containers. Use a Network Load Balancer to direct traffic to the containers.
D
Migrate the application to Amazon EC2 instances in three AWS Regions. Use Amazon Elastic Block Store (Amazon EBS) for file storage. Enable Cross-Region Replication (CRR) for all three EC2 instances. Use an Application Load Balancer to direct traffic to the EC2 instances.


QUESTION 43

A company wants to run a custom network analysis software package to inspect traffic as traffic leaves and enters a VPC. The company has deployed the solution by using AWS CloudFormation on three Amazon EC2 instances in an Auto Scaling group. All network routing has been established to direct traffic to the EC2 instances.

Whenever the analysis software stops working, the Auto Scaling group replaces an instance. The network routes are not updated when the instance replacement occurs.

Which combination of steps will resolve this issue? (Select THREE.)

A
Create alarms based on EC2 status check metrics that will cause the Auto Scaling group to replace the failed instance.
B
Update the CloudFormation template to install the Amazon CloudWatch agent on the EC2 instances. Configure the CloudWatch agent to send process metrics for the application.
C
Update the CloudFormation template to install AWS Systems Manager Agent on the EC2 instances. Configure Systems Manager Agent to send process metrics for the application.
D
Create an alarm for the custom metric in Amazon CloudWatch for the failure scenarios. Configure the alarm to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
E
Create an AWS Lambda function that responds to the Amazon Simple Notification Service (Amazon SNS) message to take the instance out of service. Update the network routes to point to the replacement instance.
F
In the CloudFormation template, write a condition that updates the network routes when a replacement instance is launched.
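
The route-repointing step that option E describes can be sketched as a small Lambda function. The SNS message shape (an Auto Scaling notification carrying `EC2InstanceId`) and the route table ID are assumptions for illustration; the EC2 client is passed in so the routing logic is testable without AWS credentials.

```python
import json

def extract_instance_id(sns_event):
    """Pull the replacement instance ID out of an SNS-delivered
    Auto Scaling notification (message shape assumed)."""
    message = json.loads(sns_event["Records"][0]["Sns"]["Message"])
    return message["EC2InstanceId"]

def repoint_route(ec2, route_table_id, instance_id,
                  destination_cidr="0.0.0.0/0"):
    """Replace the inspected route so it targets the new appliance
    instance. `ec2` is a boto3 EC2 client (injected for testability)."""
    ec2.replace_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock=destination_cidr,
        InstanceId=instance_id,
    )

def lambda_handler(event, context):
    import boto3  # imported lazily; needs AWS credentials at runtime
    instance_id = extract_instance_id(event)
    # Route table ID is illustrative.
    repoint_route(boto3.client("ec2"), "rtb-0123example", instance_id)
```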


QUESTION 44

A company runs an application in the cloud that consists of a database and a website. Users can post data to the website, have the data processed, and have the data sent back to them in an email. Data is stored in a MySQL database running on an Amazon EC2 instance. The database is running in a VPC with two private subnets. The website is running on Apache Tomcat in a single EC2 instance in a different VPC with one public subnet. There is a single VPC peering connection between the database and website VPC.

The website has suffered several outages during the last month due to high traffic.

Which actions should a solutions architect take to increase the reliability of the application? (Select THREE.)

A
Place the Tomcat server in an Auto Scaling group with multiple EC2 instances behind an Application Load Balancer.
B
Provision an additional VPC peering connection.
C
Migrate the MySQL database to Amazon Aurora with one Aurora Replica.
D
Provision two NAT gateways in the database VPC.
E
Move the Tomcat server to the database VPC.
F
Create an additional public subnet in a different Availability Zone in the website VPC.


QUESTION 45

A company is rearchitecting its applications to run on AWS. The company's infrastructure includes multiple Amazon EC2 instances. The company's development team needs different levels of access. The company wants to implement a policy that requires all Windows EC2 instances to be joined to an Active Directory domain on AWS. The company also wants to implement enhanced security processes such as multi-factor authentication (MFA). The company wants to use managed AWS services wherever possible.

Which solution will meet these requirements?

A
Create an AWS Directory Service for Microsoft Active Directory implementation. Launch an Amazon WorkSpace. Connect to and use the WorkSpace for domain security configuration tasks.
B
Create an AWS Directory Service for Microsoft Active Directory implementation. Launch an EC2 instance. Connect to and use the EC2 instance for domain security configuration tasks.
C
Create an AWS Directory Service Simple AD implementation. Launch an EC2 instance. Connect to and use the EC2 instance for domain security configuration tasks.
D
Create an AWS Directory Service Simple AD implementation. Launch an Amazon WorkSpace. Connect to and use the WorkSpace for domain security configuration tasks.


QUESTION 46

A company wants to record key performance indicators (KPIs) from its application as part of a strategy to convert to a user-based licensing schema. The application is a multi-tier application with a web-based UI. The company saves all log files to Amazon CloudWatch by using the CloudWatch agent. All logins to the application are saved in a log file.

As part of the new license schema, the company needs to find out how many unique users each client has on a daily basis, weekly basis, and monthly basis.

Which solution will provide this information with the LEAST change to the application?

A
Configure an Amazon CloudWatch Logs metric filter that saves each successful login as a metric. Configure the user name and client name as dimensions for the metric.
B
Change the application logic to make each successful login generate a call to the AWS SDK to increment a custom metric that records user name and client name dimensions in CloudWatch.
C
Configure the CloudWatch agent to extract successful login metrics from the logs. Additionally, configure the CloudWatch agent to save the successful login metrics as a custom metric that uses the user name and client name as dimensions for the metric.
D
Configure an AWS Lambda function to consume an Amazon CloudWatch Logs stream of the application logs. Additionally, configure the Lambda function to increment a custom metric in CloudWatch that uses the user name and client name as dimensions for the metric.
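
The metric filter approach in option A can be sketched with boto3's `put_metric_filter`, whose `metricTransformations` entries support a `dimensions` map from dimension name to JSON log field. The log group name, filter pattern, and log fields (`$.user`, `$.client`) are assumptions about the log format.

```python
def login_metric_transformation(namespace="AppKPIs"):
    """metricTransformations entry that counts each successful login,
    dimensioned by user name and client name (log fields assumed)."""
    return [{
        "metricName": "SuccessfulLogins",
        "metricNamespace": namespace,
        "metricValue": "1",
        "dimensions": {"UserName": "$.user", "ClientName": "$.client"},
    }]

def create_login_filter(log_group):
    """Attach the metric filter to the application log group."""
    import boto3  # imported lazily; needs AWS credentials at runtime
    boto3.client("logs").put_metric_filter(
        logGroupName=log_group,
        filterName="successful-logins",
        filterPattern='{ $.event = "login" && $.status = "success" }',
        metricTransformations=login_metric_transformation(),
    )
```

One trade-off worth knowing when weighing the options: high-cardinality dimensions such as user name can generate a large number of distinct custom metrics.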


QUESTION 47

A software development company has multiple engineers who are working remotely. The company is running Active Directory Domain Services (AD DS) on an Amazon EC2 instance. The company's security policy states that all internal, nonpublic services that are deployed in a VPC must be accessible through a VPN. Multi-factor authentication (MFA) must be used for access to a VPN.

What should a solutions architect do to meet these requirements?

A
Create an AWS Site-to-Site VPN connection. Configure integration between the VPN and AD DS. Use an Amazon WorkSpaces client with MFA support enabled to establish a VPN connection.
B
Create an AWS Client VPN endpoint. Create an AD Connector directory for integration with AD DS. Enable MFA for AD Connector. Use AWS Client VPN to establish a VPN connection.
C
Create multiple AWS Site-to-Site VPN connections by using AWS VPN CloudHub. Configure integration between AWS VPN CloudHub and AD DS. Use AWS Copilot to establish a VPN connection.
D
Create an Amazon WorkLink endpoint. Configure integration between Amazon WorkLink and AD DS. Enable MFA in Amazon WorkLink. Use AWS Client VPN to establish a VPN connection.


QUESTION 48

A company is migrating infrastructure for its massive multiplayer game to AWS. The game's application features a leaderboard where players can see rankings in real time. The leaderboard requires microsecond reads and single-digit-millisecond write latencies. The datasets are single-digit terabytes in size and must be available to accept writes in less than a minute if a primary node failure occurs.

The company needs a solution in which data can persist for further analytical processing through a data pipeline.

Which solution will meet these requirements with the LEAST operational overhead?

A
Create an Amazon ElastiCache for Redis cluster with cluster mode enabled. Configure the application to interact with the primary node.
B
Create an Amazon RDS database with a read replica. Configure the application to point writes to the writer endpoint. Configure the application to point reads to the reader endpoint.
C
Create an Amazon MemoryDB for Redis cluster in Multi-AZ mode. Configure the application to interact with the primary node.
D
Create multiple Redis nodes on Amazon EC2 instances that are spread across multiple Availability Zones. Configure backups to Amazon S3.


QUESTION 49

A solutions architect at a large company needs to set up network security for outbound traffic to the internet from all AWS accounts within an organization in AWS Organizations. The organization has more than 100 AWS accounts, and the accounts route to each other by using a centralized AWS Transit Gateway. Each account has both an internet gateway and a NAT gateway for outbound traffic to the internet. The company deploys resources only into a single AWS Region.

The company needs the ability to add centrally managed rule-based filtering on all outbound traffic to the internet for all AWS accounts in the organization. The peak load of outbound traffic will not exceed 25 Gbps in each Availability Zone.

Which solution meets these requirements?

A
Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Create an Auto Scaling group of Amazon EC2 instances that run an open-source internet proxy for rule-based filtering across all Availability Zones in the Region. Modify all default routes to point to the proxy's Auto Scaling group.
B
Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Use an AWS Network Firewall firewall for rule-based filtering. Create Network Firewall endpoints in each Availability Zone. Modify all default routes to point to the Network Firewall endpoints.
C
Create an AWS Network Firewall firewall for rule-based filtering in each AWS account. Modify all default routes to point to the Network Firewall firewalls in each account.
D
In each AWS account, create an Auto Scaling group of network-optimized Amazon EC2 instances that run an open-source internet proxy for rule-based filtering. Modify all default routes to point to the proxy's Auto Scaling group.

Correct Answer: B
Reference: https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/


QUESTION 50

A company runs an application in an on-premises data center. The application gives users the ability to upload media files. The files persist in a file server. The web application has many users. The application server is overutilized, which causes data uploads to fail occasionally. The company frequently adds new storage to the file server. The company wants to resolve these challenges by migrating the application to AWS.

Users from across the United States and Canada access the application. Only authenticated users should have the ability to access the application to upload files. The company will consider a solution that refactors the application, and the company needs to accelerate application development.

Which solution will meet these requirements with the LEAST operational overhead?

A
Use AWS Application Migration Service to migrate the application server to Amazon EC2 instances. Create an Auto Scaling group for the EC2 instances. Use an Application Load Balancer to distribute the requests. Modify the application to use Amazon S3 to persist the files. Use Amazon Cognito to authenticate users.
B
Use AWS Application Migration Service to migrate the application server to Amazon EC2 instances. Create an Auto Scaling group for the EC2 instances. Use an Application Load Balancer to distribute the requests. Set up AWS IAM Identity Center (AWS Single Sign-On) to give users the ability to sign in to the application. Modify the application to use Amazon S3 to persist the files.
C
Create a static website for uploads of media files. Store the static assets in Amazon S3. Use AWS AppSync to create an API. Use AWS Lambda resolvers to upload the media files to Amazon S3. Use Amazon Cognito to authenticate users.
D
Use AWS Amplify to create a static website for uploads of media files. Use Amplify Hosting to serve the website through Amazon CloudFront. Use Amazon S3 to store the uploaded media files. Use Amazon Cognito to authenticate users.


QUESTION 51

A solutions architect has launched multiple Amazon EC2 instances in a placement group within a single Availability Zone. Because of additional load on the system, the solutions architect attempts to add new instances to the placement group. However, the solutions architect receives an insufficient capacity error.

What should the solutions architect do to troubleshoot this issue?

A
Use a spread placement group. Set a minimum of eight instances for each Availability Zone.
B
Stop and start all the instances in the placement group. Try the launch again.
C
Create a new placement group. Merge the new placement group with the original placement group.
D
Launch the additional instances as Dedicated Hosts in the placement groups.


QUESTION 52

A company has an on-premises data center and is using Kubernetes to develop a new solution on AWS. The company uses Amazon Elastic Kubernetes Service (Amazon EKS) clusters for its development and test environments.

The EKS control plane and data plane for production workloads must reside on premises. The company needs an AWS managed solution for Kubernetes management.

Which solution will meet these requirements with the LEAST operational overhead?

A
Install an AWS Outposts server in the on-premises data center. Deploy Amazon EKS by using a local cluster configuration on the Outposts server for the production workloads.
B
Install Amazon EKS Anywhere on the company's hardware in the on-premises data center. Deploy the production workloads on an EKS Anywhere cluster.
C
Install an AWS Outposts server in the on-premises data center. Deploy Amazon EKS by using an extended cluster configuration on the Outposts server for the production workloads.
D
Install an AWS Outposts server in the on-premises data center. Install Amazon EKS Anywhere on the Outposts server. Deploy the production workloads on an EKS Anywhere cluster.


QUESTION 53

A news company wants to implement an AWS Lambda function that calls an external API to receive new press releases every 10 minutes. The API provider is planning to use an IP address allow list to protect the API, so the news company needs to provide any public IP addresses that access the API. The company's current architecture includes a VPC with an internet gateway and a NAT gateway. A solutions architect must implement a static IP address for the Lambda function.

Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

A
Use the Elastic IP address that is associated with the NAT gateway for the IP address allow list.
B
Assign an Elastic IP address to the Lambda function. Use the Lambda function's Elastic IP address for the IP address allow list.
C
Configure the Lambda function to launch in the private subnet of the VPC.
D
Configure the Lambda function to launch in the public subnet of the VPC.
E
Create a transit gateway. Attach the VPC and the Lambda function to the transit gateway.


QUESTION 54

A company is deploying a new web-based application and needs a storage solution for the Linux application servers. The company wants to create a single location for updates to application data for all instances. The active dataset will be up to 100 GB in size. A solutions architect has determined that peak operations will occur for 3 hours daily and will require a total of 225 MiBps of read throughput.

The solutions architect must design a Multi-AZ solution that makes a copy of the data available in another AWS Region for disaster recovery (DR). The DR copy has an RPO of less than 1 hour.

Which solution will meet these requirements?

A
Deploy a new Amazon Elastic File System (Amazon EFS) Multi-AZ file system. Configure the file system for 75 MiBps of provisioned throughput. Implement replication to a file system in the DR Region.
B
Deploy a new Amazon FSx for Lustre file system. Configure Bursting Throughput mode for the file system. Use AWS Backup to back up the file system to the DR Region.
C
Deploy a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume with 225 MiBps of throughput. Enable Multi-Attach for the EBS volume. Use AWS Elastic Disaster Recovery to replicate the EBS volume to the DR Region.
D
Deploy an Amazon FSx for OpenZFS file system in both the production Region and the DR Region. Create an AWS DataSync scheduled task to replicate the data from the production file system to the DR file system every 10 minutes.


QUESTION 55

A company wants to move its three-stage web application to the AWS Cloud. The three stages are development, test, and production. Each stage must use its own dedicated VPC. The company wants to access the stages through IPsec connections from the company's main office location.

Which of the steps should a solutions architect implement in the network design to meet these requirements? (Select THREE.)

A
Create a dedicated networking VPC that includes a virtual private gateway.
B
Create a transit gateway. Attach all the VPCs to the transit gateway.
C
Create security groups in each VPC to control access to and from the application.
D
Create a customer gateway. Create a VPN connection. Attach the VPN connection to the transit gateway by specifying the customer gateway.
E
Create a customer gateway. Create a VPN connection. Attach the VPN to the virtual private gateway by specifying the customer gateway.
F
Create security groups for the transit gateway to control network access to the application resources.


QUESTION 56

A company has a Windows-based desktop application that is packaged and deployed to the users' Windows machines. The company recently acquired another company that has employees who primarily use machines with a Linux operating system. The acquiring company has decided to migrate and rehost the Windows-based desktop application to AWS.

All employees must be authenticated before they use the application. The acquiring company uses Active Directory on premises but wants a simplified way to manage access to the application on AWS for all the employees.

Which solution will rehost the application on AWS with the LEAST development effort?

A
Set up and provision an Amazon WorkSpaces virtual desktop for every employee. Implement authentication by using Amazon Cognito identity pools. Instruct employees to run the application from their provisioned WorkSpaces virtual desktops.
B
Create an Auto Scaling group of Windows-based Amazon EC2 instances. Join each EC2 instance to the company's Active Directory domain. Implement authentication by using the Active Directory that is running on premises. Instruct employees to run the application by using a Windows remote desktop.
C
Use an Amazon AppStream 2.0 image builder to create an image that includes the application and the required configurations. Provision an AppStream 2.0 On-Demand fleet with dynamic Fleet Auto Scaling policies for running the image. Implement authentication by using AppStream 2.0 user pools. Instruct the employees to access the application by starting browser-based AppStream 2.0 streaming sessions.
D
Refactor and containerize the application to run as a web-based application. Run the application in Amazon Elastic Container Service (Amazon ECS) on AWS Fargate with step scaling policies. Implement authentication by using Amazon Cognito user pools. Instruct the employees to run the application from their browsers.


QUESTION 57

A company has a static web application that gives users the ability to upload short videos. The website experiences variable traffic with peaks during certain months. The application runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The application stores the videos on Amazon EBS volumes.

The company wants to rearchitect the application to use AWS managed services where possible. The company also wants to stop using the third-party software for video categorization.

Which solution meets these requirements?

A
Deploy the application on Amazon ECS with the EC2 launch type. Configure the application to upload and store videos in Amazon EFS and post messages to an Amazon SQS queue. Configure an AWS Step Functions state machine to process SQS queue messages and invoke the Amazon Rekognition Video API to categorize the videos.
B
Host the application on an AWS Batch environment that uploads and stores videos in Amazon EFS. Configure the application to post messages to an Amazon SQS queue. Configure an AWS Lambda function to process SQS queue messages and invoke the Amazon Rekognition Video API to categorize the videos.
C
Host the static website on an Amazon S3 bucket. Store uploaded videos in a separate S3 bucket. Configure S3 Event Notifications to publish events to an Amazon SQS queue when new objects are uploaded. Configure an AWS Lambda function that processes SQS queue messages and calls the Amazon Rekognition Video API to categorize the videos.
D
Deploy the application in an AWS Elastic Beanstalk environment that uses EC2 instances in an Auto Scaling group. Configure the application to upload and store videos in an EBS volume and post messages to an SQS queue. Configure an AWS Lambda function to process SQS queue messages and invoke the Amazon Rekognition Video API to categorize the videos.
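
The S3-event-via-SQS processing step that options C and D describe can be sketched as a Lambda function that unwraps the SQS message body, extracts the S3 object reference, and starts asynchronous video label detection. The event shape follows the standard S3 notification format; the parsing helper is pure so it can be exercised without AWS.

```python
import json

def s3_objects_from_sqs(sqs_event):
    """Extract (bucket, key) pairs from S3 event notifications that
    arrive wrapped in SQS messages."""
    objects = []
    for message in sqs_event["Records"]:
        body = json.loads(message["body"])
        for rec in body.get("Records", []):
            s3 = rec["s3"]
            objects.append((s3["bucket"]["name"], s3["object"]["key"]))
    return objects

def lambda_handler(event, context):
    import boto3  # imported lazily; needs AWS credentials at runtime
    rekognition = boto3.client("rekognition")
    for bucket, key in s3_objects_from_sqs(event):
        # Start asynchronous label detection on the uploaded video;
        # completion would typically be published to an SNS topic.
        rekognition.start_label_detection(
            Video={"S3Object": {"Bucket": bucket, "Name": key}}
        )
```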


QUESTION 58

A video processing company has a fleet of Amazon EC2 Spot Instances. The company uses an Auto Scaling group to launch the EC2 instances. The fleet runs a custom processing service that requires a high amount of CPU for a short amount of time to modify a proprietary video format.

The EC2 instances are configured by a user data script that runs the required service at launch and downloads the required video file from Amazon S3. The launch template uses burstable instance types in unlimited mode. The processing of each request takes an average of 20 minutes to complete.

A solutions architect must review the existing architecture to determine whether the company is using resources properly.

What should the solutions architect recommend to reduce the company's operational costs?

A
Replace the EC2 instances with an Amazon Elastic Transcoder pipeline. Invoke the pipeline by using Amazon S3 Event Notifications.
B
Create a new version of the launch template. Edit the configuration options to change to burstable instance types in standard mode. Change the Auto Scaling group to use the new launch template version.
C
Create an AWS Batch job that uses the launch template that the Auto Scaling group uses. Configure the job to use compute optimized instances on a Dedicated Host.
D
Copy the custom application into a container image. Upload the container image to Amazon Elastic Container Registry (Amazon ECR). Create an AWS Lambda function to run the custom container image.


QUESTION 59

A company needs to gather data from an experiment in a remote location that does not have internet connectivity. During the experiment, sensors that are connected to a local network will generate 6 TB of data in a proprietary format over the course of 1 week. The sensors can be configured to upload their data files to an FTP server periodically, but the sensors do not have their own FTP server. The sensors also do not support other protocols. The company needs to collect the data centrally and move the data to object storage in the AWS Cloud as soon as possible after the experiment.

Which solution will meet these requirements?

A
Order an AWS Snowball Edge Compute Optimized device. Connect the device to the local network. Configure AWS DataSync with a target bucket name, and unload the data over NFS to the device. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
B
Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Create a shell script that periodically downloads data from each sensor. After the experiment, return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.
C
Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Install and configure an FTP server on the EC2 instance. Configure the sensors to upload data to the EC2 instance. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
D
Order an AWS Snowcone device. Connect the device to the local network. Configure the device to use Amazon FSx. Configure the sensors to upload data to the device. Configure AWS DataSync on the device to synchronize the uploaded data with an Amazon S3 bucket. Return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.


QUESTION 60

A company is migrating a document processing workload to AWS. The company has updated many applications to use the Amazon S3 API to store, retrieve, and modify documents. A processing server generates the documents at a rate of approximately five documents every second. After the document processing is finished, customers can download the documents directly from Amazon S3.

During the migration, the company discovered that it could not immediately update the processing server that generates many documents to support the S3 API. The server runs on Linux and requires fast local access to the files that the server generates and modifies. When the server finishes processing, the files must be available to customers for download within 30 minutes.

Which solution will meet these requirements with the LEAST operational overhead?

A
Configure AWS DataSync to connect to an Amazon EC2 instance. Configure a DataSync task to synchronize the generated files to and from Amazon S3.
B
Configure Amazon FSx for Lustre with an import and export policy. Link the new file system to an S3 bucket. Install the Lustre client and mount the document store to an Amazon EC2 instance.
C
Refactor the application as an AWS Lambda function. Use the AWS SDK for Java to generate, modify, and access the files that the company stores directly in Amazon S3.
D
Set up an Amazon S3 File Gateway. Configure a file share linked to the document store. Mount the file share on an Amazon EC2 instance by using SFTP. Initiate a RefreshCache API call to update the S3 File Gateway when changes occur in Amazon S3.


QUESTION 61

A solutions architect needs to copy data from an Amazon S3 bucket in an AWS account to a new S3 bucket in a new AWS account. The solutions architect must implement a solution that uses the AWS CLI.

Which combination of steps will successfully copy the data? (Select THREE.)

A
Create a bucket policy to allow the source bucket to list its contents and to put objects and set object ACLs in the destination bucket. Attach the bucket policy to the destination bucket.
B
Create a bucket policy to allow a user in the destination account to list the source bucket's contents and read the source bucket's objects. Attach the bucket policy to the source bucket.
C
Create an IAM policy in the source account. Configure the policy to allow a user in the source account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
D
Create an IAM policy in the destination account. Configure the policy to allow a user in the destination account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
E
Run the aws s3 sync command as a user in the source account. Specify the source and destination buckets to copy the data.
F
Run the aws s3 sync command as a user in the destination account. Specify the source and destination buckets to copy the data.
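The bucket-policy side of this setup can be made concrete with a minimal sketch (the account ID, user name, and bucket names are hypothetical): the policy on the source bucket grants a destination-account user list and read access, after which that user can run `aws s3 sync` against the two buckets.

```python
import json

# Hypothetical values for illustration only
DEST_ACCOUNT_ID = "222222222222"
SOURCE_BUCKET = "example-source-bucket"

# Bucket policy attached to the SOURCE bucket: allow a user in the
# destination account to list the bucket and read its objects.
source_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDestinationAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{DEST_ACCOUNT_ID}:user/copy-user"},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                f"arn:aws:s3:::{SOURCE_BUCKET}",          # bucket-level (ListBucket)
                f"arn:aws:s3:::{SOURCE_BUCKET}/*",        # object-level (GetObject)
            ],
        }
    ],
}

print(json.dumps(source_bucket_policy, indent=2))
```

With this policy in place, the destination-account user would run something like `aws s3 sync s3://example-source-bucket s3://example-dest-bucket`, so the copied objects are owned by the destination account.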


QUESTION 62

A solutions architect needs to improve the performance of a batch job. The batch job loads a large dataset of user transactions into an Amazon DynamoDB table from a relational database. The batch job is the only application that writes to the DynamoDB table.

The existing solution involves an Amazon Elastic Container Service (Amazon ECS) task that pulls the data from the relational database and pushes the data into the DynamoDB table. The DynamoDB table has a composite primary key, with UserID as the partition key and Transaction Timestamp as the sort key. Each table item is less than 1 KB in size. The DynamoDB table is using on-demand capacity. The WriteThrottleEvents metric shows that database writes are being throttled for the first 30 minutes of each batch run.

Which combination of actions should the solutions architect take to improve performance of the batch job? (Select THREE.)

A
Change the order that data will be pulled from the database to be by Transaction Timestamp.
B
Change the order that data will be pulled from the database to be by UserID.
C
Switch the table to use provisioned capacity. At the start of each batch job, set the write capacity to match the job's expected peak throughput in transactions per second. Set the write capacity to 1 at the end of the job.
D
Switch the table to use provisioned capacity. At the start of each batch job, set the write capacity to match the job's expected peak throughput in kilobytes (KB) per second. Set the write capacity to 1 at the end of the job.
E
Review the DynamoDB service quotas of the account. Ensure that the maximum item size is greater than the required 1 KB in size.
F
Review the DynamoDB service quotas of the account. Ensure that the maximum write capacity for each table is greater than the write capacity that the batch job requires.
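Options C and D differ in how write capacity is sized. DynamoDB meters standard writes in 1 KB units (one WCU covers one write of up to 1 KB), so a rough sizing helper looks like the following; the throughput numbers are illustrative only.

```python
import math

def required_wcu(writes_per_second: int, item_size_bytes: int) -> int:
    """One WCU covers one standard write of up to 1 KB; larger items
    consume ceil(size / 1 KB) WCUs per write."""
    wcu_per_item = math.ceil(item_size_bytes / 1024)
    return writes_per_second * wcu_per_item

# Items under 1 KB: provisioned write capacity equals the write rate.
print(required_wcu(1000, 900))   # 1000
# A hypothetical 2.5 KB item would cost 3 WCUs per write.
print(required_wcu(1000, 2560))  # 3000
```

Because every item here is under 1 KB, the needed capacity is simply the peak write rate in transactions per second, which is why sizing in KB per second would be the wrong unit.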


QUESTION 63

A company has a payment gateway that processes millions of daily transactions on AWS. The solution uses Amazon ECS with a single Amazon EC2 instance that is not configured for auto scaling and an Amazon Aurora PostgreSQL database. All the solution's resources are deployed in the same Availability Zone. The company uses Amazon Route 53 to manage its domain name resolution. The company needs to implement a new strategy to make the application more highly available.

Which solution will meet this requirement with the LEAST operational overhead?

A
Set up an Amazon RDS Proxy in front of the Aurora database. Modify the Aurora database to a Multi-AZ DB cluster by adding a read replica in a second Availability Zone.
B
Configure Amazon ECS services to distribute tasks across multiple Availability Zones. Create a cross-Region read replica of the Aurora database in a second AWS Region. Create a script to perform a manual failover process.
C
Configure Amazon ECS services on AWS Fargate to distribute tasks across multiple Availability Zones. Modify the Aurora database to a Multi-AZ DB cluster by adding a read replica in a second Availability Zone.
D
Deploy the gateway application into a second AWS Region. Migrate the Aurora database to an Aurora global database. Configure Route 53 for active-active gateway request routing.


QUESTION 64

A company has migrated an application from on premises to AWS. The application frontend is a static website that runs on two Amazon EC2 instances behind an Application Load Balancer (ALB). The application backend is a Python application that runs on three EC2 instances behind another ALB. The EC2 instances are large, general purpose On-Demand Instances that were sized to meet the on-premises specifications for peak usage of the application.

The application averages hundreds of thousands of requests each month. However, the application is used mainly during lunchtime and receives minimal traffic during the rest of the day.

A solutions architect needs to optimize the infrastructure cost of the application without negatively affecting the application availability.

Which combination of steps will meet these requirements? (Select TWO.)

A
Change all the EC2 instances to compute optimized instances that have the same number of cores as the existing EC2 instances.
B
Move the application frontend to a static website that is hosted on Amazon S3.
C
Deploy the application frontend by using AWS Elastic Beanstalk. Use the same instance type for the nodes.
D
Change all the backend EC2 instances to Spot Instances.
E
Deploy the backend Python application to general purpose burstable EC2 instances that have the same number of cores as the existing EC2 instances.


QUESTION 65

A company has an online learning platform that teaches data science. The platform uses the AWS Cloud to provision on-demand lab environments for its students. Each student receives a dedicated AWS account for a short time. Students need access to ml.p2.xlarge instances to run a single Amazon SageMaker machine learning training job and to deploy the inference endpoint.

Account provisioning is automated. The accounts are members of an organization in AWS Organizations with all features enabled. The accounts must be provisioned in the ap-southeast-2 Region.

The default resource usage quotas are not sufficient for the accounts. A solutions architect must enhance the account provisioning process to include automated quota increases.

Which solution will meet these requirements?

A
Create a quota request template in the us-east-1 Region in the organization's management account. Enable template association. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge training job usage. Set the desired quota to 1. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge endpoint usage. Set the desired quota to 1.
B
Create a quota request template in the us-east-1 Region in the organization's management account. Enable template association. Add a quota for SageMaker in ap-southeast-2 for ml.p2.xlarge training warm pool usage. Set the desired quota to 2.
C
Create a quota request template in ap-southeast-2 in the organization's management account. Enable template association. Add a quota for SageMaker in the us-east-1 Region for ml.p2.xlarge training job usage. Set the desired quota to 1. Add a quota for SageMaker in us-east-1 for ml.p2.xlarge endpoint usage. Set the desired quota to 1.
D
Create a quota request template in ap-southeast-2 in the organization's management account. Enable template association. Add a quota for SageMaker in the us-east-1 Region for ml.p2.xlarge training warm pool usage. Set the desired quota to 2.


QUESTION 66

A company is running a compute workload by using Amazon EC2 Spot Instances in an Auto Scaling group. The launch template uses two placement groups and one instance type.

Recently, a monitoring system reported Auto Scaling instance launch failures that correlated with longer wait times for system users. The company needs to improve the overall reliability of the workload.

Which solution will meet these requirements?

A
Create a launch configuration that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch configuration.
B
Create a launch configuration that uses a larger instance type. Configure the Auto Scaling group to use the launch configuration and the launch template.
C
Create a new launch template version that increases the number of placement groups to 3. Configure the Auto Scaling group to use the new launch template version.
D
Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.


QUESTION 67

A global media company is planning a multi-Region deployment of an application. Amazon DynamoDB global tables will back the deployment to keep the user experience consistent across the two continents where users are concentrated. Each deployment will have a public Application Load Balancer (ALB). The company manages public DNS internally. The company wants to make the application available through an apex domain.

Which solution will meet these requirements with the LEAST effort?

A
Migrate public DNS to Amazon Route 53. Create CNAME records for the apex domain to point to the ALB. Use a geolocation routing policy to route traffic based on user location.
B
Place a Network Load Balancer (NLB) in front of the ALB. Migrate public DNS to Amazon Route 53. Create a CNAME record for the apex domain to point to the NLB's static IP address. Use a geolocation routing policy to route traffic based on user location.
C
Create an AWS Global Accelerator accelerator with multiple endpoint groups that target endpoints in appropriate AWS Regions. Use the accelerator's static IP address to create a record in public DNS for the apex domain.
D
Create an Amazon API Gateway API that is backed by AWS Lambda in one of the AWS Regions. Configure a Lambda function to route traffic to application deployments by using the round robin method. Create CNAME records for the apex domain to point to the API's URL.


QUESTION 68

A company that is developing a mobile game is making game assets available in two AWS Regions. Game assets are served from a set of Amazon EC2 instances behind an Application Load Balancer (ALB) in each Region. The company requires game assets to be fetched from the closest Region. If game assets become unavailable in the closest Region, they should be fetched from the other Region.

What should a solutions architect do to meet these requirements?

A
Create an Amazon CloudFront distribution. Create an origin group with one origin for each ALB. Set one of the origins as primary.
B
Create an Amazon Route 53 health check for each ALB. Create a Route 53 failover routing record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
C
Create two Amazon CloudFront distributions, each with one ALB as the origin. Create an Amazon Route 53 failover routing record pointing to the two CloudFront distributions. Set the Evaluate Target Health value to Yes.
D
Create an Amazon Route 53 health check for each ALB. Create a Route 53 latency alias record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.


QUESTION 69

A company migrated its antivirus solution for 10,000 Amazon EC2 instances to a new software as a service (SaaS) solution. Fewer than 5% of the instances reported in with the new SaaS agent. The company suspects that either the new agent failed to load or the new agent's configuration was altered. The company needs to implement a solution to ensure that all instances consistently run the most recent agent version with a predefined configuration.

Which solution will meet these requirements with the LEAST administrative overhead?

A
Create an AWS Lambda function that is invoked on a schedule. Store a machine list in Amazon S3. Configure the Lambda function to log in to every machine, download and install the most recent version of the agent, and configure the agent.
B
Implement an AWS Config rule with auto remediation that uses AWS Lambda for noncompliant events. Develop a Lambda function to access machines and download and install the most recent agent version. Schedule the Lambda function to invoke daily.
C
Create an AWS Systems Manager document that defines the agent installation and configuration process. Configure AWS Systems Manager State Manager to associate the document with EC2 instances. Apply the desired state on a daily schedule.
D
Log in to EC2 instances by using AWS Systems Manager Session Manager. Update the EC2 user data script to download and install the most recent agent and configure the agent. Reboot all EC2 instances to ensure that the script applies successfully.


QUESTION 70

A company is running a compute workload by using Amazon EC2 Spot Instances that are in an Auto Scaling group. The launch template uses two placement groups and a single instance type.

Recently, a monitoring system reported Auto Scaling instance launch failures that correlated with longer wait times for system users. The company needs to improve the overall reliability of the workload.

Which solution will meet this requirement?

A
Replace the launch template with a launch configuration to use an Auto Scaling group that uses attribute-based instance type selection.
B
Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.
C
Update the launch template and the Auto Scaling group to increase the number of placement groups.
D
Update the launch template to use a larger instance type.


QUESTION 71

A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.

Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times.

Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Choose two.)

A
Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.
B
Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
C
Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
D
Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
E
Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.


QUESTION 72

A company needs a hybrid DNS architecture. The architecture must include the company's on-premises network and a VPC. An AWS Site-to-Site VPN connection connects the VPC to the on-premises network. The company already hosts the onprem.mydc.com domain name on premises. The company wants to host the myvpc.example.com domain name in the company's AWS account and resolve to the VPC. The company also needs the on-premises devices to resolve DNS queries to the myvpc.example.com domain.

Which combination of steps will meet these requirements? (Select THREE.)

A
Create an Amazon Route 53 private hosted zone for the myvpc.example.com reserved domain. Associate the reserved domain with the VPC.
B
Create an Amazon Route 53 public hosted zone for the myvpc.example.com reserved domain. Associate the reserved domain with the VPC.
C
Use Amazon Route 53 Resolver to create an inbound endpoint in the AWS Region of the VPC.
D
Use Amazon Route 53 Resolver to create an outbound endpoint in the AWS Region of the VPC.
E
Use Amazon Route 53 Resolver to create a forwarding rule for the Route 53 private hosted zone domain and IP addresses of the outbound endpoint.
F
Configure the on-premises DNS resolvers with a conditional forwarding rule for DNS queries for the Amazon Route 53 private hosted zone domain and IP addresses of the inbound endpoint.
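The conditional-forwarding step in option F can be pictured as simple suffix matching: queries at or under the private hosted zone are sent to the inbound endpoint IPs inside the VPC, and everything else stays with the local resolvers. A toy sketch of that decision, with hypothetical IP addresses:

```python
def forward_targets(qname: str, rules: dict, local: list) -> list:
    """Return the resolver IPs a conditional forwarder would use for qname."""
    for suffix, resolver_ips in rules.items():
        # Match the zone apex itself or any name beneath it.
        if qname == suffix or qname.endswith("." + suffix):
            return resolver_ips
    return local

# Hypothetical inbound-endpoint IPs in the VPC.
rules = {"myvpc.example.com": ["10.0.1.10", "10.0.2.10"]}
on_prem_resolvers = ["192.168.0.2"]

print(forward_targets("app.myvpc.example.com", rules, on_prem_resolvers))
print(forward_targets("host.onprem.mydc.com", rules, on_prem_resolvers))
```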


QUESTION 73

A solutions architect wants to cost-optimize and appropriately size Amazon EC2 instances in a single AWS account. The solutions architect wants to ensure that instances are optimized based on CPU, memory, and network metrics.

Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

A
Purchase AWS Business Support or AWS Enterprise Support for the account.
B
Turn on AWS Trusted Advisor and review any "Low Utilization Amazon EC2 Instances" recommendations.
C
Install the Amazon CloudWatch agent and configure memory metric collection on the EC2 instances.
D
Configure AWS Compute Optimizer in the AWS account to receive findings and optimization recommendations.
E
Create an EC2 Instance Savings Plan for the AWS Regions, instance families, and operating systems of interest.


QUESTION 74

A company has many services running in its on-premises data center. The data center is connected to AWS using AWS Direct Connect (DX) and an IPSec VPN. The service data is sensitive and connectivity cannot traverse the internet. The company wants to expand into a new market segment and begin offering its services to other companies that are using AWS.

Which solution will meet these requirements?

A
Create a VPC Endpoint Service that accepts TCP traffic, host it behind a Network Load Balancer, and make the service available over DX.
B
Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic, host it behind an Application Load Balancer, and make the service available over DX.
C
Attach an internet gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
D
Attach a NAT gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.


QUESTION 75

A solutions architect must implement a least privilege permissions policy that allows the centralized Lambda functions to update resources in each of the company's accounts.

Which combination of steps will meet these requirements? (Select TWO.)

A
In the centralized account, create an IAM role that has the Lambda service as a trusted entity. Add an inline policy to assume the roles of the other accounts.
B
In the centralized account, create an IAM role that has the roles of the other accounts as trusted entities. Provide minimal required permissions.
C
In the other accounts, create an IAM role that has minimal required permissions. Add the centralized account's Lambda IAM role as a trusted entity.
D
In the other accounts, create an IAM role that has permissions to assume the role of the centralized account. Add the Lambda service as a trusted entity.
E
In the other accounts, create an IAM role that has minimal required permissions. Add the Lambda service as a trusted entity.
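The trust direction these options hinge on can be made concrete with a sketch of the trust policy for the role in each member account (the role name and account ID are hypothetical): it names the centralized account's Lambda execution role as the only principal allowed to assume it.

```python
import json

# Hypothetical ARN of the Lambda execution role in the centralized account.
CENTRAL_LAMBDA_ROLE = "arn:aws:iam::111111111111:role/central-lambda-role"

# Trust policy for the role created in EACH member account:
# only the centralized Lambda role may assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": CENTRAL_LAMBDA_ROLE},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The permissions policy attached to that same role then grants only the minimal actions the Lambda functions need, keeping both the trust and the permissions scoped to least privilege.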


QUESTION 76

A company is building a solution in the AWS Cloud. Thousands of devices will connect to the solution and send data. Each device needs to be able to send and receive data in real time over the MQTT protocol. Each device must authenticate by using a unique X.509 certificate.

Which solution will meet these requirements with the LEAST overhead?

A
Set up AWS IoT Core. For each device, create a corresponding Amazon MQ queue and provision a certificate. Connect each device to Amazon MQ.
B
Create a Network Load Balancer (NLB) and configure it with an AWS Lambda authorizer. Run an MQTT broker on Amazon EC2 instances in an Auto Scaling group. Set the Auto Scaling group as the target for the NLB. Connect each device to the NLB.
C
Set up AWS IoT Core. For each device, create a corresponding AWS IoT thing and provision a certificate. Connect each device to AWS IoT Core.
D
Set up an Amazon API Gateway HTTP API and a Network Load Balancer (NLB). Create integration between API Gateway and the NLB targets. Configure a mutual TLS certificate authorizer on the HTTP API. Run an MQTT broker on an Amazon EC2 instance that the NLB targets. Connect each device to the NLB.


QUESTION 77

A solutions architect must implement a multi-Region architecture for an Amazon RDS for PostgreSQL database that supports a web application. The database launches from an AWS CloudFormation template that includes AWS services and features that are present in both the primary and secondary Regions.

The database is configured for automated backups, and it has an RTO of 15 minutes and an RPO of 2 hours. The web application is configured to use an Amazon Route 53 record to route traffic to the database.

Which combination of steps will result in a highly available architecture that meets all the requirements? (Choose two.)

A
Create a cross-Region read replica of the database in the secondary Region. Configure an AWS Lambda function in the secondary Region to promote the read replica during a failover event.
B
In the primary Region, create a health check on the database that will invoke an AWS Lambda function when a failure is detected. Program the Lambda function to recreate the database from the latest database snapshot in the secondary Region and update the Route 53 host records for the database.
C
Create an AWS Lambda function to copy the latest automated backup to the secondary Region every 2 hours.
D
Create a failover routing policy in Route 53 for the database DNS record. Set the primary and secondary endpoints to the endpoints in each Region.
E
Create a hot standby database in the secondary Region. Use an AWS Lambda function to restore the secondary database to the latest RDS automatic backup in the event that the primary database fails.


QUESTION 78

A company is running a three-tier web application in an on-premises data center. The frontend is a PHP application that is served by an Apache web server. The middle tier is a monolithic Java SE application. The storage tier is a 60 TB PostgreSQL database. The three-tier web application recently crashed and became unresponsive. The database also reached capacity because of read operations. The company wants to migrate to AWS to resolve these issues and improve scalability.

Which combination of steps will meet these requirements with the LEAST development effort? (Select THREE.)

A
Configure an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer to host the web server. Use Amazon EFS for the frontend static assets.
B
Host the static single-page application on Amazon S3. Use an Amazon CloudFront distribution to serve the application.
C
Create a Docker container to run the Java SE application. Use AWS Fargate to host the container.
D
Create an AWS Elastic Beanstalk environment for Java to host the Java SE application.
E
Migrate the PostgreSQL database to an Amazon EC2 instance that is larger than the on-premises PostgreSQL database.
F
Use AWS DMS to replatform the PostgreSQL database to an Amazon Aurora PostgreSQL database. Use Aurora Auto Scaling for read replicas.


QUESTION 79

A solutions architect must create a business case for migration of a company's on-premises data center to the AWS Cloud. The solutions architect will use a configuration management database (CMDB) export of all the company's servers to create the case.

Which solution will meet these requirements MOST cost-effectively?

A
Use AWS Well-Architected Tool to import the CMDB data to perform an analysis and generate recommendations.
B
Use Migration Evaluator to perform an analysis. Use the data import template to upload the data from the CMDB export.
C
Implement resource matching rules. Use the CMDB export and the AWS Price List Bulk API to query CMDB data against AWS services in bulk.
D
Use AWS Application Discovery Service to import the CMDB data to perform an analysis.


QUESTION 80

A company deploys workloads in multiple AWS accounts. Each account has a VPC with VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file is compressed with gzip compression. The company must retain the log files indefinitely.

A security engineer occasionally analyzes the logs by using Amazon Athena to query the VPC flow logs. The query performance is degrading over time as the number of ingested logs is growing. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.

Which solution will meet these requirements with the LARGEST performance improvement?

A
Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.
B
Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.
C
Update the VPC flow log configuration to store the files in Apache Parquet format. Specify hourly partitions for the log files.
D
Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.
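Hive-compatible hourly partitioning gives Athena S3 prefixes it can prune, so a query over one hour scans only that hour's Parquet files. A sketch of the resulting key layout (the helper and its exact field order are illustrative, assuming Hive-compatible delivery):

```python
from datetime import datetime, timezone

def flow_log_prefix(account_id: str, region: str, ts: datetime) -> str:
    """Illustrative Hive-compatible hourly partition prefix for VPC flow
    logs delivered in Parquet with hourly partitions enabled."""
    return (
        f"AWSLogs/aws-account-id={account_id}/aws-service=vpcflowlogs/"
        f"aws-region={region}/year={ts:%Y}/month={ts:%m}/day={ts:%d}/hour={ts:%H}/"
    )

ts = datetime(2024, 5, 1, 13, tzinfo=timezone.utc)
print(flow_log_prefix("111111111111", "us-east-1", ts))
```

Parquet's columnar layout and compression also shrink the stored bytes relative to gzip-compressed text, which addresses the storage requirement at the same time.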


QUESTION 81

A company runs an ecommerce website that sells sporting goods. The company deployed the website by using an Application Load Balancer (ALB), Amazon ECS on Amazon EC2 instances that are in an Auto Scaling group, and an Amazon RDS for MySQL database. During sales events, users report experiencing high latency when they search for items. CPU utilization for the RDS for MySQL database is 95%. The company needs to implement a temporary solution to reduce the latency of searches during sales events before the application can be refactored.

Which solution will meet these requirements with the LEAST development effort?

A
Use Amazon CloudFront to cache HTTP query responses. Create one AWS Systems Manager Automation runbook to increase the instance size of the RDS database. Create a second runbook to return the instance size to its previous value. Schedule the first runbook to run one day before sales events.
B
Use Amazon CloudFront to cache HTTP query responses. Schedule the EC2 Auto Scaling group to increase the number of EC2 instances one day before sales events.
C
Use Amazon ElastiCache to cache database query responses. Refactor the website to use ElastiCache. Schedule the EC2 Auto Scaling group to increase the number of EC2 instances.
D
Migrate the RDS for MySQL database to Amazon DynamoDB. Use Amazon ElastiCache to cache database query responses. Refactor the website to use DynamoDB and ElastiCache.


QUESTION 82

A company runs a new application as a static website in Amazon S3. The company has deployed the application to a production AWS account and uses Amazon CloudFront to deliver the website. The website calls an Amazon API Gateway REST API. An AWS Lambda function backs each API method.

The company wants to create a CSV report every 2 weeks to show each API Lambda functionโ€™s recommended configured memory, recommended cost, and the price difference between current configurations and the recommendations. The company will store the reports in an S3 bucket.

Which solution will meet these requirements with the LEAST development time?

A
Create a Lambda function that extracts metrics data for each API Lambda function from Amazon CloudWatch Logs for the 2-week period. Collate the data into tabular format. Store the data as a .csv file in an S3 bucket. Create an Amazon EventBridge rule to schedule the Lambda function to run every 2 weeks.
B
Opt in to AWS Compute Optimizer. Create a Lambda function that calls the ExportLambdaFunctionRecommendations API operation. Export the .csv file to an S3 bucket. Create an Amazon EventBridge rule to schedule the Lambda function to run every 2 weeks.
C
Opt in to AWS Compute Optimizer. Set up enhanced infrastructure metrics. Within the Compute Optimizer console, schedule a job to export the Lambda recommendations to a .csv file. Store the file in an S3 bucket every 2 weeks.
D
Purchase the AWS Business Support plan for the production account. Opt in to AWS Compute Optimizer for AWS Trusted Advisor checks. In the Trusted Advisor console, schedule a job to export the cost optimization checks to a .csv file. Store the file in an S3 bucket every 2 weeks.
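Whichever mechanism produces the recommendations, the report itself is plain CSV with one row per function. A sketch of assembling such a file with the standard csv module (the function names and cost figures are made up; real data would come from Compute Optimizer's ExportLambdaFunctionRecommendations output):

```python
import csv
import io

# Made-up recommendation rows: (function, current MB, recommended MB,
# current cost USD, recommended cost USD).
rows = [
    ("checkout-api", 512, 1024, 4.10, 3.25),
    ("catalog-api", 1024, 512, 6.80, 5.90),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(
    ["function", "current_memory_mb", "recommended_memory_mb",
     "current_cost_usd", "recommended_cost_usd", "difference_usd"]
)
for name, cur_mb, rec_mb, cur_cost, rec_cost in rows:
    # The price difference column the report calls for.
    writer.writerow([name, cur_mb, rec_mb, cur_cost, rec_cost,
                     round(cur_cost - rec_cost, 2)])

report_csv = buf.getvalue()
print(report_csv)
```

In a Lambda function, the resulting string would be written to the S3 bucket with a PutObject call on the EventBridge schedule.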


QUESTION 83

A company wants to establish a dedicated connection between its on-premises infrastructure and AWS. The company is setting up a 1 Gbps AWS Direct Connect connection to its account VPC. The architecture includes a transit gateway and a Direct Connect gateway to connect multiple VPCs and the on-premises infrastructure.

The company must connect to VPC resources over a transit VIF by using the Direct Connect connection.

Which combination of steps will meet these requirements? (Choose two.)

A
Update the 1 Gbps Direct Connect connection to 10 Gbps.
B
Advertise the on-premises network prefixes over the transit VIF.
C
Advertise the VPC prefixes from the Direct Connect gateway to the on-premises network over the transit VIF.
D
Update the Direct Connect connection's MACsec encryption mode attribute to must_encrypt.
E
Associate a MACsec Connection Key Name/Connectivity Association Key (CKN/CAK) pair with the Direct Connect connection.

QUESTION 84

A company uses infrastructure as code (IaC) to provision Amazon EC2 instances. The company uses a launch template to implement an EC2 Auto Scaling group. After a recent update that required instance reboots, the Auto Scaling group terminated the instances and launched new, unpatched instances. The company must ensure that the Auto Scaling group launches instances that have the latest security patches.

Which combination of solutions will meet this requirement? (Select TWO.)

A
Configure the Auto Scaling group termination policy to use the OldestLaunchTemplate setting.
B
Create a new Auto Scaling group before the next patch maintenance window.
C
Deploy an Application Load Balancer (ALB) in front of the Auto Scaling group.
D
Use AWS Systems Manager to automatically produce patched AMIs. Update the Auto Scaling group launch template. Initiate an instance refresh for the Auto Scaling group.
E
Deploy a Network Load Balancer (NLB) in front of the Auto Scaling group.
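To make the instance-refresh step in option D concrete, the sketch below just assembles the request parameters for EC2 Auto Scaling's StartInstanceRefresh call; the group name and preference values are placeholder assumptions.

```python
# Parameters for EC2 Auto Scaling's StartInstanceRefresh API, issued after
# the launch template has been pointed at a freshly patched AMI. The group
# name and tuning values are placeholders.
def instance_refresh_params(asg_name):
    return {
        "AutoScalingGroupName": asg_name,
        "Strategy": "Rolling",
        "Preferences": {
            "MinHealthyPercentage": 90,  # keep 90% in service during rollout
            "InstanceWarmup": 300,       # seconds before a new instance counts
        },
    }

params = instance_refresh_params("web-asg")
# A real deployment would pass this to boto3:
# boto3.client("autoscaling").start_instance_refresh(**params)
```

A rolling refresh replaces instances gradually, which is how the new, patched launch template version reaches the fleet without taking the group offline.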

QUESTION 85

A company is using Amazon SageMaker AI Notebook Instances and SageMaker APIs to train machine learning (ML) models. The SageMaker AI Notebook Instances are deployed in a VPC that does not have access to or from the internet. Datasets for ML model training are stored in an Amazon S3 bucket. Interface VPC endpoints provide access to Amazon S3 and the SageMaker APIs.

Occasionally, data scientists require access to a private Git repository to update application packages that they use as part of their workflow. The company must provide access to the Git repository while ensuring that the SageMaker AI Notebook Instances remain isolated from the internet.

Which solution meets these requirements with the LEAST operational overhead?

A
Add the Git repository as a resource for SageMaker by referencing the remote URL. Configure AWS Secrets Manager to use Git credentials to access the repository.
B
Add the Git repository as a resource for SageMaker by referencing the remote URL. Add the username to the URL that is required to access the repository.
C
Create a NAT gateway in the VPC. Configure VPC routes to allow access to the internet. Configure network ACL rules that allow the SageMaker AI Notebook Instances access to only the Git repository URL.
D
Create a NAT gateway in the VPC. Configure VPC routes to allow access to the internet with a network ACL that allows access to only the Git repository URL.

QUESTION 86

A company is migrating applications from on premises to the AWS Cloud. These applications power the company's internal web forms. These web forms collect data for specific events several times each quarter. The web forms use simple SQL statements to save the data to a local relational database.

Data collection occurs for each event, and the on-premises servers are idle most of the time. The company needs to minimize the amount of idle infrastructure that supports the web forms.

Which solution will meet these requirements?

A
Use Amazon EC2 Image Builder to create AMIs for the legacy servers. Use the AMIs to provision EC2 instances to recreate the applications in the AWS Cloud. Place an Application Load Balancer (ALB) in front of the EC2 instances. Use Amazon Route 53 to point the DNS names of the web forms to the ALB.
B
Create one Amazon DynamoDB table to store data for all the data input. Use the application form name as the table key to distinguish data items. Create an Amazon Kinesis data stream to receive the data input and store the input in DynamoDB. Use Amazon Route 53 to point the DNS names of the web forms to the Kinesis data stream's endpoint.
C
Create Docker images for each server of the legacy web form applications. Create an Amazon Elastic Container Service (Amazon ECS) cluster on AWS Fargate. Place an Application Load Balancer in front of the ECS cluster. Use Fargate task storage to store the web form data.
D
Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding API Gateway endpoint.

QUESTION 87

A company uses AWS Organizations to manage its AWS accounts. A solutions architect must design a solution in which only administrator roles are allowed to use IAM actions. However, the solutions architect does not have access to all the AWS accounts throughout the company.

Which solution meets these requirements with the LEAST operational overhead?

A
Create an SCP that applies to all the AWS accounts to allow IAM actions only for administrator roles. Apply the SCP to the root OU.
B
Configure AWS CloudTrail to invoke an AWS Lambda function for each event that is related to IAM actions. Configure the function to deny the action if the user who invoked the action is not an administrator.
C
Create an SCP that applies to all the AWS accounts to deny IAM actions for all users except for those with administrator roles. Apply the SCP to the root OU.
D
Set an IAM permissions boundary that allows IAM actions. Attach the permissions boundary to every administrator role across all the AWS accounts.
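For reference, the deny-based SCP that option C describes has roughly the shape below. This is a sketch under assumptions: the role name in the ARN pattern is a placeholder, and note that SCPs only filter permissions, they never grant them.

```python
import json

# Sketch of a deny-based service control policy (SCP). It denies all IAM
# actions unless the calling principal's ARN matches the administrator
# role pattern. "AdministratorRole" is a placeholder name.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyIAMExceptAdmins",
        "Effect": "Deny",
        "Action": "iam:*",
        "Resource": "*",
        "Condition": {
            "StringNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/AdministratorRole"
            }
        }
    }]
}

scp_json = json.dumps(scp, indent=2)
```

Attaching such a policy to the root OU applies it to every account in the organization, which is why no per-account access is needed.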

QUESTION 88

A company is creating a new organization in AWS Organizations to manage multiple AWS accounts and to consolidate billing. The company needs a visual dashboard to display AWS spend across the organization.

Which solution will meet this requirement?

A
Configure AWS Cost Explorer in all member accounts to export cost data to an Amazon S3 bucket. Create a CloudWatch dashboard.
B
Use AWS Glue Data Catalog to connect to AWS Billing and Cost Management. Create a new AWS account and deploy the open source Cloud Intelligence Dashboards solution.
C
Set up Cost Optimization Hub in a centralized management account. Configure a standard data export to a new DynamoDB table.
D
Deploy Amazon QuickSight in the organization's management account. Configure AWS Cost and Usage Reports (CUR) to export data to a new Amazon S3 bucket. Configure QuickSight dashboards to visualize the CUR data.

QUESTION 89

A company hosts an ecommerce platform on an Amazon EC2 instance and uses an on-premises MySQL database that requires consistent data reliability. The company stores static web assets in an on-premises NFS file system. The company uses a VPN connection to connect the EC2 instance with the on-premises resources.

Batch processing jobs on the EC2 instance are causing bottlenecks during periods of high traffic. As a result, critical workloads are failing. A solutions architect must design a solution that improves the reliability and scalability of the platform. The solution must ensure that batches are not interrupted.

Which solution will meet these requirements with the LEAST operational overhead?

A
Migrate the platform to AWS Elastic Beanstalk. Migrate the on-premises MySQL database to an Amazon RDS Multi-AZ database. Use AWS DataSync to migrate the static assets from the NFS file system to an Amazon S3 bucket. Use EC2 Spot Instances to run the batch processing jobs.
B
Migrate the platform to Amazon EKS on AWS Fargate. Migrate the on-premises MySQL database to an Amazon DynamoDB table. Use AWS DataSync to migrate the static assets from the NFS file system to an Amazon EFS file system. Use a mix of EC2 Spot Instances and EC2 Reserved Instances to run the batch processing jobs.
C
Migrate the platform to Amazon ECS on AWS Fargate. Migrate the on-premises MySQL database to an Amazon RDS Multi-AZ database. Use AWS DataSync to migrate the static assets from the NFS file system to an Amazon S3 bucket. Use AWS Batch with Fargate to run batch processing jobs.
D
Keep the platform on Amazon EC2 and implement EC2 Auto Scaling. Migrate the on-premises MySQL database to an Amazon Aurora database. Use AWS DataSync to migrate the static assets from the NFS file system to an Amazon EFS file system. Use AWS Lambda functions to run the batch processing jobs.

QUESTION 90

A media storage application uploads user photos to Amazon S3 for processing by AWS Lambda functions. Application state is stored in Amazon DynamoDB tables. Users are reporting that some uploaded photos are not being processed properly. The application developers trace the logs and find that Lambda is experiencing photo processing issues when thousands of users upload photos simultaneously. The issues are the result of Lambda concurrency limits and the performance of DynamoDB when data is saved.

Which combination of actions should a solutions architect take to increase the performance and reliability of the application? (Select TWO.)

A
Evaluate and adjust the RCUs for the DynamoDB tables.
B
Evaluate and adjust the WCUs for the DynamoDB tables.
C
Add an Amazon ElastiCache layer to increase the performance of Lambda functions.
D
Add an Amazon Simple Queue Service (Amazon SQS) queue and reprocessing logic between Amazon S3 and the functions.
E
Use S3 Transfer Acceleration to provide lower latency to users.
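Option D's decoupling pattern can be sketched as a Lambda-style handler that consumes an SQS batch of S3 event notifications and reports per-message failures, so SQS redelivers only the records that failed. This is a minimal sketch; process_photo is a hypothetical stand-in for the real processing step.

```python
import json

def process_photo(bucket, key):
    """Hypothetical stand-in for the real photo-processing step."""
    if not key:
        raise ValueError("empty object key")
    return f"processed s3://{bucket}/{key}"

def handler(event, context=None):
    """SQS-triggered handler using partial batch responses: only the
    messages that raise an error are returned for SQS to redeliver."""
    failures = []
    for record in event["Records"]:
        try:
            s3_event = json.loads(record["body"])
            for rec in s3_event["Records"]:
                process_photo(rec["s3"]["bucket"]["name"],
                              rec["s3"]["object"]["key"])
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

The queue absorbs upload spikes that would otherwise hit Lambda concurrency limits, and the failure list gives the reprocessing logic the question asks about.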

QUESTION 91

A company is planning a migration from an on-premises data center to the AWS Cloud. The company plans to use multiple AWS accounts that are managed in an organization in AWS Organizations. The company will create a small number of accounts initially and will add accounts as needed. A solutions architect must design a solution that turns on AWS CloudTrail in all AWS accounts.

What is the MOST operationally efficient solution that meets these requirements?

A
Create an AWS Lambda function that creates a new CloudTrail trail in all AWS accounts in the organization. Invoke the Lambda function daily by using a scheduled action in Amazon EventBridge (Amazon CloudWatch Events).
B
Create a new CloudTrail trail in the organization's management account. Configure the trail to log all events for all AWS accounts in the organization.
C
Create a new CloudTrail trail in all AWS accounts in the organization. Create new trails whenever a new account is created. Define an SCP that prevents deletion or modification of trails. Apply the SCP to the root OU.
D
Create an AWS Systems Manager Automation runbook that creates a CloudTrail trail in all AWS accounts in the organization. Invoke the automation by using Systems Manager State Manager.
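The organization-trail approach in option B comes down to one CloudTrail API call from the management account. The sketch below only assembles the request parameters (the bucket name is a placeholder) rather than invoking boto3.

```python
# Parameters for CloudTrail's CreateTrail call from the management account.
# IsOrganizationTrail is the key field: it makes one trail log events for
# every current and future member account. The bucket name is a placeholder.
def org_trail_params(trail_name, bucket):
    return {
        "Name": trail_name,
        "S3BucketName": bucket,
        "IsOrganizationTrail": True,   # cover all accounts in the organization
        "IsMultiRegionTrail": True,    # cover all Regions
        "EnableLogFileValidation": True,
    }

params = org_trail_params("org-trail", "example-cloudtrail-bucket")
# A real deployment would pass this to boto3:
# boto3.client("cloudtrail").create_trail(**params)
```

Because new accounts inherit the organization trail automatically, no per-account automation is needed as accounts are added.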

QUESTION 92

A company is running an application on Amazon EC2 instances. The application uses a MongoDB database with a replica set as its data tier on premises. A solutions architect must migrate the on-premises MongoDB database to Amazon DocumentDB.

Which solution will meet these requirements?

A
Create a fleet of EC2 instances. Install MongoDB Community Edition. Create a database. Configure continuous synchronous replication.
B
Create an AWS DMS replication instance. Create a source endpoint for the on-premises MongoDB database by using change data capture (CDC). Create a target endpoint for the Amazon DocumentDB database. Create and run a DMS migration task.
C
Create an AWS Application Migration Service replication template. Install the AWS Replication Agent on the MongoDB database. Set the Amazon DocumentDB database service as the target endpoint.
D
Create a source endpoint for the on-premises MongoDB database by using AWS Glue crawlers. Configure continuous asynchronous replication.

QUESTION 93

A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company hosts some applications in a VPC in the company's shared services account. The company has attached a transit gateway to the VPC in the shared services account.

The company is developing a new capability and has created a development environment that requires access to the applications that are in the shared services account.

The company intends to delete and recreate resources frequently in the development account. The company also wants to give a development team the ability to recreate the team's connection to the shared services account as required.

Which solution will meet these requirements?

A
Create a transit gateway in the development account. Create a transit gateway peering request to the shared services account. Configure the shared services transit gateway to automatically accept peering connections.
B
Turn on automatic acceptance for the transit gateway in the shared services account. Use AWS Resource Access Manager (AWS RAM) to share the transit gateway resource in the shared services account with the development account. Accept the resource in the development account. Create a transit gateway attachment in the development account.
C
Turn on automatic acceptance for the transit gateway in the shared services account. Create a VPC endpoint. Use the endpoint policy to grant permissions on the VPC endpoint for the development account. Configure the endpoint service to automatically accept connection requests. Provide the endpoint details to the development team.
D
Create an Amazon EventBridge rule to invoke an AWS Lambda function that accepts the transit gateway attachment when the development account makes an attachment request. Use AWS Network Manager to share the transit gateway in the shared services account with the development account. Accept the transit gateway in the development account.

QUESTION 94

A company wants to manage the costs associated with a group of 20 applications that are infrequently used, but are still business-critical, by migrating to AWS. The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while standardizing by using a single deployment methodology.

Most of the applications are part of month-end processing routines with a small number of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in Java that accesses multiple data sources and often runs for several hours.

Which is the MOST cost-effective solution?

A
Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of critical jobs.
B
Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.
C
Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms.
D
Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the GroupMaxSize parameter of the Auto Scaling group.

QUESTION 95

A company wants to use Amazon WorkSpaces in combination with thin client devices to replace aging desktops. Employees use the desktops to access applications that work with clinical trial data. Corporate security policy states that access to the applications must be restricted to only company branch office locations. The company is considering adding an additional branch office in the next 6 months.

Which solution meets these requirements with the MOST operational efficiency?

A
Create an IP access control group rule with the list of public addresses from the branch offices. Associate the IP access control group with the WorkSpaces directory.
B
Use AWS Firewall Manager to create a web ACL rule with an IPSet with the list of public addresses from the branch office locations. Associate the web ACL with the WorkSpaces directory.
C
Use AWS Certificate Manager (ACM) to issue trusted device certificates to the machines deployed in the branch office locations. Enable restricted access on the WorkSpaces directory.
D
Create a custom WorkSpace image with Windows Firewall configured to restrict access to the public addresses of the branch offices. Use the image to deploy the WorkSpaces.

QUESTION 96

A company has deployed production workloads on Amazon EC2 On-Demand Instances and Amazon RDS for PostgreSQL. The company has the AWS Business Support plan. A solutions architect must optimize the cost of the workloads without negatively affecting the availability or compute capacity.

Which solution will meet these requirements?

A
Use AWS Cost and Usage Reports. Use AWS Lambda to terminate underutilized instances. Purchase Compute Savings Plans.
B
Use AWS Budgets to track spending. Configure AWS Trusted Advisor cost optimization checks to rightsize instances. Purchase Reserved Instances.
C
Opt in to AWS Compute Optimizer. Use Compute Optimizer and AWS Trusted Advisor to identify underutilized instances. Implement recommendations and purchase a Compute Savings Plan.
D
Use AWS Cost Explorer recommendations to rightsize underutilized instances. Replace On-Demand Instances with Spot Instances.

QUESTION 97

A company has a new requirement to store all database backups in an isolated AWS account. The company is using AWS Organizations and has created a central write-once, read-many (WORM) account for the backups.

The company has 40 Amazon RDS for MySQL databases in its production account. The databases are encrypted with the default RDS AWS KMS key. RDS automated backups of the databases occur daily and have a retention period of 30 days.

Which solution will successfully copy the database backups to the central account?

A
Enable Organizations trusted access and backup policies for AWS Backup. Configure the central account as the delegated administrator for AWS Backup. Create IAM policies and backup policies. Enable cross-account management. Create a backup vault in the central account. Create a KMS key for the backup vault and share the key with the production account. In the production account, restore the databases from a snapshot and apply the shared KMS key to the new DB instances. Create a backup plan in the central account to back up the databases to the backup vault.
B
Enable Organizations trusted access and backup policies for AWS Backup. Configure the central account as the delegated administrator for AWS Backup. Create IAM policies and backup policies. Enable cross-account management. In the production account, share the default RDS KMS key with the central account. Create a backup vault in the central account. Apply the shared default RDS KMS key to the backup vault. Create a backup plan in the central account to back up the databases to the backup vault.
C
Create an Amazon EventBridge rule to invoke an AWS Lambda function every day. Program the Lambda function to decrypt the snapshots and to initiate a copy request of all unencrypted snapshots to the central account. After the copy job is complete, create a new KMS key. Use the new KMS key to encrypt the database snapshots in the central account.
D
Create an Amazon EventBridge rule to invoke an AWS Lambda function every day. In the production account, share the default RDS KMS key with the central account. Program the Lambda function to decrypt the snapshots and to initiate a copy request of all unencrypted snapshots to the central account. After the copy job is complete, encrypt the database snapshots with the shared default RDS KMS key in the central account.

QUESTION 98

During an audit, a security team discovered that a development team was putting IAM user secret access keys in their code and then committing it to an AWS CodeCommit repository. The security team wants to automatically find and remediate instances of this security vulnerability.

Which solution will ensure that the credentials are appropriately secured automatically?

A
Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS Secrets Manager to rotate the credentials.
B
Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate new credentials and store them in AWS KMS.
C
Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to disable the credentials and notify the user.
D
Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found, disable them in AWS IAM and notify the user.
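The scanning step that option D relies on can be approximated with two regular expressions, one for AWS access key IDs and one for candidate secret keys. The patterns below are heuristics for illustration; a production scanner should use a dedicated tool such as git-secrets.

```python
import re

# Heuristic patterns: access key IDs have a fixed AKIA/ASIA prefix, while
# secret keys are only guessable as 40-character base64-like strings, so
# expect false positives from the second pattern.
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")
SECRET_KEY_RE = re.compile(
    r"(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])")

def find_credentials(text):
    """Return suspected access key IDs and secret keys found in source text."""
    return {
        "access_key_ids": ACCESS_KEY_RE.findall(text),
        "secret_key_candidates": SECRET_KEY_RE.findall(text),
    }
```

A Lambda function wired to the repository trigger would run a scan like this over each new commit's diff and disable any matching keys through IAM.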

QUESTION 99

A company wants to migrate its website to AWS. The website uses containers that are deployed in an on-premises, self-managed Kubernetes cluster. All data for the website is stored in an on-premises PostgreSQL database.

The company has decided to migrate the on-premises Kubernetes cluster to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster will use EKS managed node groups with a static number of nodes. The company will also migrate the on-premises database to an Amazon RDS for PostgreSQL database.

A solutions architect needs to estimate the total cost of ownership (TCO) for this workload before the migration.

Which solution will provide the required TCO information?

A
Request access to Migration Evaluator. Run the Migration Evaluator Collector and import the data. Configure a scenario. Export a Quick Insights report from Migration Evaluator.
B
Launch AWS Database Migration Service (AWS DMS) for the on-premises database. Generate an assessment report. Create an estimate in AWS Pricing Calculator for the costs of the EKS migration.
C
Initialize AWS Application Migration Service. Add the on-premises servers as source servers. Launch a test instance. Output a TCO report from Application Migration Service.
D
Access the AWS Cloud Economics Center webpage to assess the AWS Cloud Value Framework. Create an AWS Cost and Usage Report from the Cloud Value Framework.

QUESTION 100

A company is deploying a new web-based application and needs a storage solution for the Linux application servers. The company wants to create a single location for updates to application data for all instances. The active dataset will be up to 100 GB. Peak operations require 225 MiBps of read throughput. The solutions architect must design a Multi-AZ solution that makes a copy of the data available in another AWS Region for disaster recovery (DR) with an RPO of less than 1 hour.

Which solution will meet these requirements?

A
Deploy a new Amazon EFS Multi-AZ file system. Configure the file system for 75 MiBps of provisioned throughput. Implement replication to a file system in the DR Region.
B
Deploy a new Amazon FSx for Lustre file system. Configure Bursting Throughput mode. Use AWS Backup to back up the file system to the DR Region.
C
Deploy a General Purpose SSD (gp3) Amazon EBS volume with 225 MiBps. Enable Multi-Attach. Use AWS Elastic Disaster Recovery.
D
Deploy an Amazon FSx for OpenZFS file system in both Regions. Create an AWS DataSync scheduled task to replicate the data every 10 minutes.
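Behind the numbers in option A is EFS throughput metering: Amazon EFS meters read operations at one-third the rate of other operations, so a file system provisioned at 75 MiBps can sustain roughly three times that for reads. The arithmetic is just:

```python
# EFS meters reads at one-third the rate of other operations, so effective
# read throughput is about 3x the provisioned figure.
provisioned_mibps = 75
read_multiplier = 3
effective_read_mibps = provisioned_mibps * read_multiplier
```

This is why a 75 MiBps provisioned file system can satisfy a 225 MiBps peak read requirement.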
