CertoMetrics - 9% OFF Special Discount Offer
Coupon code: SALE2026

AWS Certified Advanced Networking - Specialty (ANS-C01)

Get full access to the updated question bank and pass on your first attempt.

Vendor

Amazon

Certification

Specialty Certifications

Content

265 Qs

Status

Verified

Updated

10 hours ago

Test the Practice Engine

Experience our real exam environment with 10 free questions

Launch Free Demo
Best Value Bundle

Premium Bundle

Complete Success Suite

$69 (regular price $108)

Save $39 Instantly

  • Full PDF + Interactive Engine: everything you need to pass
  • All Advanced Question Types: Drag & Drop, Hotspots, Case Studies
  • Priority 24/7 Expert Support: direct line to certification leads
  • 90 Days Free Priority Updates: stay current as exams change

Success Metric

98.4% Pass Rate

Verified by 15k+ Students
Secure Checkout
Popular

Standard Simulation

Practice Engine

$59

One-Time Payment

  • Web-Based (Zero Install)
  • Real Testing Environment: Virtual & Practice Modes
  • Interactive Engine: Drag & Drop, Hotspots
  • 60 Days Free Updates

Compatible with All Devices

Verified Secure Checkout

Basic Tier

PDF Study Guide

$49

Digital Access

  • Exam Questions (PDF)
  • Mobile Friendly
  • 60 Days Updates
Standard Checkout

Verified Community

The CertoMetrics Standard.

Recommend the #1 platform for verified Amazon certification resources.

Success Network

Help a Colleague Succeed.

Invite a peer to get their own updated ANS-C01 prep kit.

Exam Overview

The AWS Certified Advanced Networking - Specialty (ANS-C01) certification validates a candidate's advanced technical skills and experience in designing and implementing complex AWS and hybrid networking solutions. Achieving this credential signifies a deep understanding of core AWS networking services, advanced routing, security best practices, and network performance and cost optimization. It demonstrates proficiency in handling intricate network architectures and in ensuring high availability, scalability, and security for mission-critical applications. The certification is invaluable for professionals aiming to lead sophisticated cloud networking projects: it marks them as experts capable of navigating the most challenging network requirements within the AWS ecosystem, and it significantly enhances career opportunities in cloud infrastructure roles.

Questions

65

Passing Score

750/1000

Duration

170 Minutes

Difficulty

Expert

Level

Specialty

Skills Measured

Designing and Implementing Complex AWS Network Architectures
Implementing Hybrid Connectivity Solutions
Securing AWS Network Infrastructure
Optimizing Network Performance and Cost
Monitoring and Troubleshooting Network Operations

Career Path

Target Roles

  • Cloud Network Engineer
  • Senior Cloud Architect
  • Network Operations Specialist

Common Questions

Is the material up to date?

Yes. We update our question bank weekly to match the latest Amazon standards. You get free updates for 90 days.

What format do I get?

You get instant access to both the PDF (for reading) and our Premium Test Engine (for exam simulation).

Is there a guarantee?

Absolutely. If you fail the ANS-C01 exam using our materials, we offer a full money-back guarantee.

When do I get the download?

Instantly. The download link is available in your dashboard immediately after payment is confirmed.

Free Practice Demo

Test the quality of our questions before purchasing access.


Question 1 (multiple response)

A company uses a hybrid architecture and has an AWS Direct Connect connection between its on-premises data center and AWS. The company has production applications that run in the on-premises data center. The company also has production applications that run in a VPC. The applications that run in the on-premises data center need to communicate with the applications that run in the VPC. The company is using corp.example.com as the domain name for the on-premises resources and is using an Amazon Route 53 private hosted zone for aws.example.com to host the VPC resources.

The company is using an open-source recursive DNS resolver in a VPC subnet and is using a DNS resolver in the on-premises data center. The company's on-premises DNS resolver has a forwarder that directs requests for the aws.example.com domain name to the DNS resolver in the VPC. The DNS resolver in the VPC has a forwarder that directs requests for the corp.example.com domain name to the DNS resolver in the on-premises data center. The company has decided to replace the open-source recursive DNS resolver with Amazon Route 53 Resolver endpoints.

Which combination of steps should a network engineer take to make this replacement? (Select THREE.)

Explanation:
Option C (Correct) Reasoning: To replace the open-source resolver, both an Inbound Endpoint (for on-premises to AWS DNS queries) and an Outbound Endpoint (for AWS to on-premises DNS queries) are essential components of Route 53 Resolver for hybrid connectivity.



Option B (Correct) Reasoning: On-premises DNS servers must be configured to forward queries for AWS resources (aws.example.com) to the IP addresses of the newly created Route 53 Resolver Inbound Endpoint, allowing resolution via Route 53 private hosted zones.



Option E (Correct) Reasoning: A Route 53 Resolver rule is required within AWS to direct queries for on-premises resources (corp.example.com) to the on-premises DNS resolver's IP address, utilizing the Outbound Endpoint to cross the network boundary.

Why the other choices are incorrect:

  • Option A is incorrect: A Resolver rule for aws.example.com pointing to an outbound endpoint makes no sense; aws.example.com is resolved in AWS via the inbound endpoint.
  • Option D is incorrect: A Resolver rule is typically for queries originating from the VPC and pointing towards another DNS server, not for directing aws.example.com queries to its own inbound endpoint within AWS itself.
  • Option F is incorrect: On-premises DNS forwards aws.example.com queries to the inbound endpoint, not the outbound endpoint. The outbound endpoint handles traffic from AWS to on-premises.
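
The forwarding topology described above amounts to a suffix match on the query name. Here is an illustrative Python sketch of that logic, not AWS configuration; the resolver and endpoint IP addresses are hypothetical placeholders:

```python
# Sketch of the conditional-forwarding logic that the Route 53 Resolver
# endpoints take over. Domain suffixes are from the scenario; all IP
# addresses are hypothetical.

ONPREM_RESOLVER = "10.0.0.2"    # on-premises DNS server (hypothetical IP)
INBOUND_ENDPOINT = "10.1.0.10"  # Route 53 Resolver inbound endpoint (hypothetical IP)
VPC_PLUS_TWO = "10.1.0.2"       # AmazonProvidedDNS inside the VPC

def next_hop(query_name: str, origin: str) -> str:
    """Return where a DNS query is forwarded; origin is 'onprem' or 'vpc'."""
    name = query_name.rstrip(".").lower()
    if origin == "onprem" and name.endswith("aws.example.com"):
        # On-prem forwarder sends AWS-hosted names to the inbound endpoint.
        return INBOUND_ENDPOINT
    if origin == "vpc" and name.endswith("corp.example.com"):
        # A Resolver rule (via the outbound endpoint) targets the on-prem server.
        return ONPREM_RESOLVER
    # Everything else resolves locally.
    return VPC_PLUS_TWO if origin == "vpc" else ONPREM_RESOLVER

print(next_hop("app.aws.example.com", "onprem"))  # inbound endpoint IP
print(next_hop("ldap.corp.example.com", "vpc"))   # on-prem resolver IP
```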


Question 2 (multiple choice)

A company is building an API-based application on AWS and is using a microservices architecture for the design. The company is using a multi-account AWS environment that includes a separate AWS account for each microservice development team. Each team hosts its microservice in its own VPC that contains Amazon EC2 instances behind a Network Load Balancer (NLB).

A network engineer needs to use Amazon API Gateway in a shared services account to create an HTTP API to expose these microservices to external applications. The network engineer must ensure that access to the microservices can occur only over a private network. Additionally, the company must be able to control which entities from its internal network can connect to the microservices. In the future, the company will create more microservices that the company must be able to integrate with the application.

What is the MOST secure solution that meets these requirements?

Explanation:
Option A (Correct) Reasoning: This solution creates a private path: API Gateway integrates with an ALB via a VPC link. The ALB in the shared services account connects to each microservice's NLB using AWS PrivateLink endpoints. This provides secure, private, cross-account access, scalable integration, and granular control over network traffic.

Why the other choices are incorrect:

  • Option B is incorrect: While Transit Gateway connects VPCs, adding NLB IPs as ALB targets across a transit gateway is less secure, less manageable, and less scalable for this scenario than PrivateLink. PrivateLink provides explicit service exposure and consumption.
  • Option C is incorrect: HTTP-based integration without a VPC Link generally implies public access, violating the "private network only" requirement. API Gateway needs a VPC Link for secure private integration across VPCs.
  • Option D is incorrect: Creating a separate VPC Link for each microservice is inefficient and not how VPC Links are designed for this scale. A VPC Link targets ALBs/NLBs by ARN, not HTTP endpoints, and typically targets an ALB that then routes internally.

Question 3 (multiple choice)

A company is planning to create a service that requires encryption in transit. The traffic must not be decrypted between the client and the backend of the service. The company will implement the service by using the gRPC protocol over TCP port 443. The service will scale up to thousands of simultaneous connections. The backend of the service will be hosted on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with the Kubernetes Cluster Autoscaler and the Horizontal Pod Autoscaler configured. The company needs to use mutual TLS for two-way authentication between the client and the backend.

Which solution will meet these requirements?

Explanation:
Option A (Correct) Reasoning: A Network Load Balancer (NLB) with a TCP listener on port 443 provides Layer 4 passthrough, forwarding encrypted gRPC traffic directly to the backend Pods without decryption. This ensures end-to-end encryption and allows mutual TLS to be handled by the backend service. The AWS Load Balancer Controller correctly targets EKS Pod IPs.

Why the other choices are incorrect:

  • Option B is incorrect: An Application Load Balancer (ALB) with an HTTPS listener terminates TLS at the ALB, violating the requirement that traffic must not be decrypted between the client and the backend.
  • Option C is incorrect: An ALB terminates TLS (violating the decryption rule) and targets EKS node Auto Scaling groups, which is not the optimal method for directly targeting Pods in EKS via the controller.
  • Option D is incorrect: An NLB with a TLS listener terminates TLS at the load balancer, violating the requirement that traffic must not be decrypted between the client and the backend. Targeting EKS nodes' Auto Scaling groups is also suboptimal.


Question 4 (multiple choice)

A company has developed an application on AWS that will track inventory levels of vending machines and initiate the restocking process automatically. The company plans to integrate this application with vending machines and deploy the vending machines in several markets around the world. The application resides in a VPC in the us-east-1 Region. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster behind an Application Load Balancer (ALB). The communication from the vending machines to the application happens over HTTPS.

The company is planning to use an AWS Global Accelerator accelerator and configure static IP addresses of the accelerator in the vending machines for application endpoint access. The application must be accessible only through the accelerator and not through a direct connection over the internet to the ALB endpoint.

Which solution will meet these requirements?

Explanation:
Option A (Correct) Reasoning: Placing the ALB in a private subnet, without specific routes to an Internet Gateway from that subnet, prevents direct internet access. Global Accelerator then uses internal AWS routing to reach the private ALB. Allowing 0.0.0.0/0 in the security group is acceptable because the subnet's routing prevents arbitrary internet access, ensuring only Global Accelerator traffic reaches the ALB.

Why the other choices are incorrect:

  • Option B is incorrect: While it correctly places the ALB in a private subnet, it omits the Internet Gateway. Even if no routes from the ALB's subnet point to it, an IGW must still be attached to the VPC for Global Accelerator to reach the private ALB.
  • Option C is incorrect: Configuring the ALB in a public subnet directly exposes it to the internet, violating the requirement that the application should not be accessible via a direct internet connection.
  • Option D is incorrect: Adding routes in the private subnet's route table to point to the Internet Gateway effectively makes the ALB directly internet-routable, failing the requirement to prevent direct internet access.

Question 5 (multiple choice)

A company uses a 4 Gbps AWS Direct Connect dedicated connection with a link aggregation group (LAG) bundle to connect to five VPCs that are deployed in the us-east-1 Region. Each VPC serves a different business unit and uses its own private VIF for connectivity to the on-premises environment. Users are reporting slowness when they access resources that are hosted on AWS.

A network engineer finds that there are sudden increases in throughput and that the Direct Connect connection becomes saturated at the same time for about an hour each business day. The company wants to know which business unit is causing the sudden increase in throughput. The network engineer must find out this information and implement a solution to resolve the problem.

Which solution will meet these requirements?

Explanation:
Option A (Correct) Reasoning: VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress are the correct CloudWatch metrics to monitor individual VIF throughput, precisely identifying which business unit saturates the connection. Creating a new 10 Gbps dedicated connection effectively addresses the bandwidth constraint, allowing a phased traffic migration to minimize disruption and resolve the slowness.

Why the other choices are incorrect:

  • Option B is incorrect: While identification metrics are correct, simply "upgrading" a dedicated connection in-place is not standard practice; creating a new connection for increased bandwidth is more robust and common for dedicated Direct Connect.
  • Option C is incorrect: ConnectionBpsIngress and ConnectionPpsEgress are connection-level metrics, not VIF-specific, failing to pinpoint the source business unit. A 5 Gbps hosted connection is an insufficient upgrade and changes connection type, which is not ideal.
  • Option D is incorrect: This option incorrectly uses ConnectionBpsIngress and ConnectionPpsEgress metrics, which do not provide per-VIF data, making it impossible to identify the problematic business unit.
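
The per-VIF analysis in Option A boils down to comparing each VIF's peak throughput against the LAG's capacity. A minimal Python sketch of that comparison; the sample figures below are made up for illustration, not real VirtualInterfaceBpsEgress data:

```python
# Sketch: given per-VIF egress throughput samples (bits per second), flag
# the virtual interface driving saturation of a 4 Gbps LAG. The VIF names
# and numbers are illustrative.

LAG_CAPACITY_BPS = 4_000_000_000  # 4 Gbps dedicated connection (LAG)

samples = {
    "vif-bu1": [300e6, 320e6, 310e6],
    "vif-bu2": [3.2e9, 3.4e9, 3.3e9],  # the business-hour spike
    "vif-bu3": [150e6, 140e6, 160e6],
}

def saturating_vifs(samples, capacity, threshold=0.8):
    """Return VIFs whose peak throughput exceeds `threshold` of LAG capacity."""
    return [vif for vif, series in samples.items()
            if max(series) > threshold * capacity]

print(saturating_vifs(samples, LAG_CAPACITY_BPS))  # ['vif-bu2']
```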

Question 6 (multiple response)

A retail company is running its service on AWS. The company’s architecture includes Application Load Balancers (ALBs) in public subnets. The ALB target groups are configured to send traffic to backend Amazon EC2 instances in private subnets. These backend EC2 instances can call externally hosted services over the internet by using a NAT gateway.

The company has noticed in its billing that NAT gateway usage has increased significantly. A network engineer needs to find out the source of this increased usage.

Which options can the network engineer use to investigate the traffic through the NAT gateway? (Choose two.)

Explanation:
Option A (Correct) Reasoning: Enabling VPC Flow Logs on the NAT Gateway's Elastic Network Interface captures all IP traffic details (source, destination, bytes). Publishing these logs to CloudWatch Logs allows using CloudWatch Logs Insights for powerful, real-time querying and analysis to identify the internal sources driving the increased NAT Gateway usage.



Option D (Correct) Reasoning: Similar to Option A, VPC Flow Logs on the NAT Gateway's ENI provide crucial traffic information. Publishing logs to Amazon S3 and leveraging Amazon Athena for queries offers a scalable, cost-effective method to analyze large volumes of log data, pinpointing specific instances or applications contributing to the usage increase.

Why the other choices are incorrect:

  • Option B is incorrect: "NAT Gateway access logs" is not an AWS feature. VPC Flow Logs are the standard mechanism for monitoring traffic through NAT Gateways.
  • Option C is incorrect: Traffic Mirroring is overly complex and resource-intensive for investigating aggregate usage patterns. It provides deep packet inspection but is not the practical or recommended approach for simply identifying sources of increased NAT Gateway data transfer; Flow Logs are designed for this.
  • Option E is incorrect: As with Option B, "NAT Gateway access logs" do not exist as a feature in AWS.
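
The Flow Logs approach in Options A and D reduces to grouping bytes by source address. A minimal Python sketch of that aggregation over default version-2 flow log records (addresses are illustrative; in practice you would run the equivalent query in CloudWatch Logs Insights or Athena):

```python
# Sketch: aggregate VPC Flow Log bytes per internal source address to find
# which instances drive NAT gateway traffic. Records follow the default
# version-2 flow log format; all values below are illustrative.

from collections import defaultdict

records = [
    "2 123456789012 eni-0a1b2c3d 10.0.1.15 203.0.113.9 44321 443 6 120 845000 1620000000 1620000060 ACCEPT OK",
    "2 123456789012 eni-0a1b2c3d 10.0.1.22 198.51.100.7 50200 443 6 9000 73000000 1620000000 1620000060 ACCEPT OK",
    "2 123456789012 eni-0a1b2c3d 10.0.1.15 203.0.113.9 44322 443 6 300 2100000 1620000060 1620000120 ACCEPT OK",
]

def bytes_by_source(lines):
    """Sum the bytes field (index 9) per source address (index 3), largest first."""
    totals = defaultdict(int)
    for line in lines:
        fields = line.split()
        totals[fields[3]] += int(fields[9])
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

print(bytes_by_source(records))  # 10.0.1.22 is the top talker
```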

Question 7 (multiple choice)

An ecommerce company is hosting a web application on Amazon EC2 instances to handle continuously changing customer demand. The EC2 instances are part of an Auto Scaling group. The company wants to implement a solution to distribute traffic from customers to the EC2 instances. The company must encrypt all traffic at all stages between the customers and the application servers. No decryption at intermediate points is allowed.

Which solution will meet these requirements?

Explanation:
Option C (Correct) Reasoning: A Network Load Balancer with a TCP listener forwards traffic directly to the backend instances at Layer 4 without inspecting or terminating the SSL/TLS connection. The encryption remains end-to-end from the customer to the EC2 instance, satisfying the "no decryption at intermediate points" requirement.

Why the other choices are incorrect:

  • Option A is incorrect: An Application Load Balancer with an HTTPS listener decrypts traffic at the ALB itself, violating the "no decryption at intermediate points" rule.
  • Option B is incorrect: Amazon CloudFront acts as a reverse proxy and terminates the SSL/TLS connection from the client, decrypting the traffic at CloudFront before re-encrypting (if configured) to the origin.
  • Option D is incorrect: A Gateway Load Balancer is designed for traffic inspection appliances and operates at Layer 3, not suitable for direct web application traffic distribution with end-to-end SSL/TLS without intermediate decryption for the described scenario.


Question 8 (multiple response)

A company has expanded its network to the AWS Cloud by using a hybrid architecture with multiple AWS accounts. The company has set up a shared AWS account for the connection to its on-premises data centers and the company offices. The workloads consist of private web-based services for internal use. These services run in different AWS accounts. Office-based employees consume these services by using a DNS name in an on-premises DNS zone that is named example.internal.

The process to register a new service that runs on AWS requires a manual and complicated change request to the internal DNS. The process involves many teams.

The company wants to update the DNS registration process by giving the service creators access that will allow them to register their DNS records. A network engineer must design a solution that will achieve this goal. The solution must maximize cost-effectiveness and must require the least possible number of configuration changes.

Which combination of steps should the network engineer take to meet these requirements? (Choose three.)

Explanation:
Option A (Correct) Reasoning: This allows service creators to manage their specific DNS records (e.g., serviceA.account1.aws.example.internal) directly, delegating DNS management to the service teams as required by the problem.



Option B (Correct) Reasoning: An Inbound Endpoint in the shared account allows on-premises DNS servers to forward queries for AWS-hosted private domains (aws.example.internal) into the AWS Cloud, enabling hybrid resolution.



Option F (Correct) Reasoning: Creating private hosted zones (e.g., account1.aws.example.internal) in the shared account and associating them with both the service VPC (in account1) and the shared account VPC enables centralized resolution. Service creators in account1 would then be granted cross-account IAM permissions to manage records within their respective account1.aws.example.internal PHZ in the shared account, fulfilling the delegation goal.

Why the other choices are incorrect:

  • Option C is incorrect: Creating a Resolver rule for onprem.example.internal is for AWS resources to resolve on-premises DNS, not the other way around, which is the problem focus.
  • Option D is incorrect: While creating aws.example.internal is a logical step, it alone doesn't address the multi-account delegation or on-premises resolution of specific service subdomains.
  • Option E is incorrect: Using custom BIND servers on EC2 instances is less cost-effective and involves more configuration and operational overhead compared to using managed Route 53 Resolver services.

Question 9 (multiple choice)

An international company provides early warning about tsunamis. The company plans to use IoT devices to monitor sea waves around the world. The data that is collected by the IoT devices must reach the company’s infrastructure on AWS as quickly as possible. The company is using three operation centers around the world. Each operation center is connected to AWS through its own AWS Direct Connect connection. Each operation center is connected to the internet through at least two upstream internet service providers.

The company has its own provider-independent (PI) address space. The IoT devices use TCP protocols for reliable transmission of the data they collect. The IoT devices have both landline and mobile internet connectivity. The infrastructure and the solution will be deployed in multiple AWS Regions. The company will use Amazon Route 53 for DNS services.

A network engineer needs to design connectivity between the IoT devices and the services that run in the AWS Cloud.

Which solution will meet these requirements with the HIGHEST availability?

Explanation:
Option B (Correct) Reasoning: Route 53 latency-based routing directs IoT devices to the AWS Region with the lowest latency, fulfilling "as quickly as possible." Crucially, "Evaluate Target Health to Yes" ensures only healthy endpoints receive traffic, providing the highest availability and automatic failover across regions using the specified DNS service.

Why the other choices are incorrect:

  • Option A is incorrect: CloudFront is primarily a CDN. While it can accelerate API calls, it adds a hop and isn't the most direct or highest availability solution for raw, real-time IoT data ingestion compared to dedicated routing.
  • Option C is incorrect: AWS Global Accelerator offers high availability and performance via Anycast IPs and the AWS backbone. While strong, Route 53 latency-based routing with health checks is a direct, robust, and explicitly DNS-centric solution that meets the requirements without introducing a separate Anycast IP layer, leveraging the stated Route 53 usage.
  • Option D is incorrect: Using the same PI addresses for multiple distinct Regions without a service like Global Accelerator or complex BGP configurations for Anycast routing is problematic and does not guarantee optimal routing or highest availability.
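
The behavior behind Option B can be sketched as "lowest latency among healthy endpoints": with Evaluate Target Health set to Yes, an unhealthy region is skipped and the next-nearest healthy one answers. The latency figures and health states below are illustrative:

```python
# Sketch of latency-based routing with health evaluation. Route 53 returns
# the lowest-latency healthy regional endpoint; values are made up.

endpoints = [
    {"region": "us-east-1",      "latency_ms": 35, "healthy": False},  # failed health check
    {"region": "eu-west-1",      "latency_ms": 48, "healthy": True},
    {"region": "ap-southeast-1", "latency_ms": 90, "healthy": True},
]

def resolve(endpoints):
    """Pick the healthy endpoint with the lowest latency, or None if all fail."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        return None  # Route 53 would then answer as if all records were healthy
    return min(healthy, key=lambda e: e["latency_ms"])["region"]

print(resolve(endpoints))  # eu-west-1: nearest *healthy* region wins
```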

Question 10 (multiple response)

A government contractor is designing a multi-account environment with multiple VPCs for a customer. A network security policy requires all traffic between any two VPCs to be transparently inspected by a third-party appliance.

The customer wants a solution that features AWS Transit Gateway. The setup must be highly available across multiple Availability Zones, and the solution needs to support automated failover. Furthermore, asymmetric routing is not supported by the inspection appliances.

Which combination of steps is part of a solution that meets these requirements? (Choose two.)

Explanation:
Option B (Correct) Reasoning: This step correctly describes deploying inspection appliances with a Gateway Load Balancer (GWLB). GWLB is purpose-built for transparent inline inspection with Transit Gateway, supporting high availability and automated failover across multiple AZs. Routing the inspection VPC's TGW subnet to the GWLB endpoint ensures traffic is directed through the appliances.



Option C (Correct) Reasoning: This outlines the essential Transit Gateway (TGW) routing configuration for centralized inspection. Two TGW route tables—one for application VPCs with a default route to the inspection VPC and one for the inspection VPC—are crucial. Propagating application routes back to the inspection route table facilitates return traffic. Enabling appliance mode on the inspection VPC attachment prevents asymmetric routing, which is critical as appliances don't support it.

Why the other choices are incorrect:

  • Option A is incorrect: Network Load Balancer (NLB) is not designed for transparent inline inspection with Transit Gateway. It lacks the necessary integration (like GWLB endpoints) to insert third-party appliances into the TGW traffic path, making it unsuitable for this requirement.
  • Option D is incorrect: This option's TGW routing is flawed. Propagating all VPC attachments into the application route table would allow application VPCs to route directly to each other, bypassing inspection. The static default route is also incorrectly placed in the inspection route table instead of the application route table.
  • Option E is incorrect: Using a single TGW route table prevents centralized inspection. With a single route table, if application VPC routes are propagated, traffic will route directly between application VPCs, bypassing any appliance and failing to meet the security policy requirement.
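
Why appliance mode prevents asymmetric routing can be illustrated with a toy flow hash: the attachment selects an Availability Zone per flow, and the hash is computed over a direction-normalized 5-tuple, so forward and return packets of the same flow always land in the same AZ. The AZ names and addresses below are hypothetical, and the hash is a stand-in for AWS's internal algorithm:

```python
# Toy model of Transit Gateway appliance mode: a flow-symmetric hash picks
# the Availability Zone, so both directions traverse the same appliance.

import hashlib

AZS = ["use1-az1", "use1-az2"]  # hypothetical inspection-VPC AZs

def flow_az(src, dst, sport, dport, proto):
    # Sort the two (addr, port) pairs so A->B and B->A hash identically.
    a, b = sorted([(src, sport), (dst, dport)])
    key = f"{a}{b}{proto}".encode()
    return AZS[int(hashlib.sha256(key).hexdigest(), 16) % len(AZS)]

fwd = flow_az("10.1.0.5", "10.2.0.9", 40000, 443, "tcp")
rev = flow_az("10.2.0.9", "10.1.0.5", 443, 40000, "tcp")
assert fwd == rev  # forward and return traffic hit the same inspection AZ
print(fwd)
```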

Customer Reviews

5 / 5 (15,000+ verified reviews)

5 stars: 100% · 4 stars: 0% · 3 stars: 0% · 2 stars: 0% · 1 star: 0%

Global Community Feedback

DM

David M.

Verified Student

"The practice engine is incredible. It feels exactly like the real testing environment and helped me build so much confidence."

SJ

Sarah J.

Premium Member

"The PDF is very well organized and the explanations for the answers are actually helpful, not just random text."

MC

Michael C.

Verified Buyer

"I was skeptical, but the content is high quality and definitely worth the price. I passed on my first try!"

Need Assistance?

Our expert support team is available to assist you with any inquiries about our exam materials.

Contact Support
Average response: < 24 Hours

Get Exam Updates

Subscribe to receive instant notifications on new questions and exclusive flash sales.

* Join 5,000+ students getting weekly updates
