
CertoMetrics - 9% OFF Special Discount Offer

Coupon code: SALE2026

Amazon AWS Certified Generative AI Developer - Professional (AIP-C01)

Get full access to the updated question bank and pass on your first attempt.

Vendor

Amazon

Certification

Professional Certifications

Content

86 Qs

Status

Verified

Updated

1 day ago

Test the Practice Engine

Experience our real exam environment with free demo questions

Launch Free Demo
Best Value Bundle

Premium Bundle

Complete Success Suite

$59 (was $103)

Save $44 Instantly

  • ✓ Full PDF + Interactive Engine: Everything you need to pass
  • ✓ All Advanced Question Types: Drag & Drop, Hotspots, Case Studies
  • ✓ Priority 24/7 Expert Support: Direct line to certification leads
  • ✓ 90 Days Free Priority Updates: Stay current as exams change

Success Metric

98.4% Pass Rate

Verified by 15k+ Students
Secure Checkout
Popular

Standard Simulation

Practice Engine

$54

One-Time Payment

  • Web-Based (Zero Install)
  • Real Testing Environment Virtual & Practice Modes
  • Interactive Engine Drag & Drop, Hotspots
  • 60 Days Free Updates

Compatible with All Devices

Verified Secure Checkout

Basic Tier

PDF Study Guide

$49

Digital Access

  • ✓ Exam Questions (PDF)
  • ✓ Mobile Friendly
  • ✓ 60 Days Updates
Download Free Sample PDF

Verified 10-Question Preview

Secure Checkout

Verified Community

The CertoMetrics Standard.

Recommend the #1 platform for verified Amazon certification resources.

Success Network

Help a Colleague Succeed.

Invite a peer to get their own updated AIP-C01 prep kit.

Exam Overview

The AWS Certified Generative AI Developer - Professional certification is a pinnacle achievement for engineers specializing in cutting-edge AI. This credential validates deep expertise in designing, developing, deploying, and optimizing generative AI solutions on AWS. Earning this certification signifies a candidate's advanced ability to leverage foundational models, fine-tune them for specific use cases, and integrate them into enterprise applications with robust security and scalability. It positions professionals as leaders capable of driving innovation, accelerating product development, and solving complex business challenges using the transformative power of generative AI. This certification is crucial for those looking to distinguish themselves in the rapidly evolving field of artificial intelligence, unlocking new career opportunities and demonstrating unparalleled proficiency in AWS's generative AI ecosystem.

Questions

65

Passing Score

750/1000

Duration

170 Minutes

Difficulty

Expert

Level

Professional

Skills Measured

Designing and implementing generative AI solutions on AWS.
Developing, fine-tuning, and deploying large language models (LLMs) and diffusion models.
Integrating generative AI models with AWS services for data processing, storage, and inference.
Optimizing generative AI applications for performance, cost, and security.
Evaluating and monitoring generative AI models and managing their lifecycle.

Career Path

Target Roles

Generative AI Engineer, Machine Learning Architect, AI/ML Solutions Developer

Common Questions

Is the material up to date?

Yes. We update our question bank weekly to match the latest Amazon standards. You get free updates for 90 days.

What format do I get?

You get instant access to both the **PDF** (for reading) and our **Premium Test Engine** (for exam simulation).

Is there a guarantee?

Absolutely. If you fail the AIP-C01 exam using our materials, we offer a full money-back guarantee.

When do I get the download?

Instantly. The download link is available in your dashboard immediately after payment is confirmed.

Free Study Guide Samples

Previewing updated AIP-C01 bank (18 Questions).

QUESTION 1

A healthcare company uses Amazon Bedrock to deploy an application that generates summaries of clinical documents. The application experiences inconsistent response quality with occasional factual hallucinations. Monthly costs exceed the company's projections by 40%. A GenAI developer must implement a near real-time monitoring solution to detect hallucinations, identify abnormal token consumption, and provide early warnings of cost anomalies. The solution must require minimal custom development work and maintenance overhead.

Which solution will meet these requirements?

A
Configure Amazon CloudWatch alarms to monitor InputTokenCount and OutputTokenCount metrics to detect anomalies. Store model invocation logs in an Amazon S3 bucket. Use AWS Glue and Amazon Athena to identify potential hallucinations.
B
Run Amazon Bedrock evaluation jobs that use LLM-based judgments to detect hallucinations. Configure Amazon CloudWatch to track token usage. Create an AWS Lambda function to process CloudWatch metrics. Configure the Lambda function to send usage pattern notifications.
C
Configure Amazon Bedrock to store model invocation logs in an Amazon S3 bucket. Enable text output logging. Configure Amazon Bedrock guardrails to run contextual grounding checks to detect hallucinations. Create Amazon CloudWatch anomaly detection alarms for token usage metrics.
D
Use AWS CloudTrail to log all Amazon Bedrock API calls. Create a custom dashboard in Amazon QuickSight to visualize token usage patterns. Use Amazon SageMaker Model Monitor to detect quality drift in generated summaries.

Correct Option: C

✅ Reasoning: Bedrock Guardrails offer near real-time contextual grounding checks to detect hallucinations with minimal custom work. Storing logs enables text output analysis. CloudWatch anomaly detection on token usage metrics efficiently identifies abnormal consumption, providing early cost warnings with low maintenance overhead. This directly meets all specified requirements.

❌ Why the other choices are incorrect:

  • Option A is incorrect: Using AWS Glue and Amazon Athena for hallucination detection requires significant custom NLP development and is not near real-time, violating the minimal custom development constraint.
  • Option B is incorrect: Bedrock evaluation jobs are for offline assessment, not real-time production monitoring. A Lambda function adds custom development and maintenance that CloudWatch Anomaly Detection handles natively.
  • Option D is incorrect: AWS CloudTrail logs API calls, not detailed token usage for quality analysis. SageMaker Model Monitor requires extensive custom development for GenAI hallucination detection, violating the minimal effort constraint.
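The CloudWatch half of option C can be sketched as a boto3 request builder. Everything not stated in the question (the alarm name, 5-minute period, and a band width of 2 standard deviations) is an illustrative assumption; the `AWS/Bedrock` namespace and `InputTokenCount`/`OutputTokenCount` metric names come from the question itself.

```python
# Sketch of option C's cost-anomaly alarm: CloudWatch anomaly detection on a
# Bedrock token metric via ANOMALY_DETECTION_BAND metric math.

def build_token_anomaly_alarm(metric_name: str, band_width: float = 2.0) -> dict:
    """Return put_metric_alarm kwargs that alarm when the metric breaks out
    of its learned anomaly detection band."""
    return {
        "AlarmName": f"bedrock-{metric_name}-anomaly",  # hypothetical name
        "ComparisonOperator": "GreaterThanUpperThreshold",
        "EvaluationPeriods": 3,
        "ThresholdMetricId": "band",
        "Metrics": [
            {
                "Id": "m1",
                "MetricStat": {
                    "Metric": {"Namespace": "AWS/Bedrock", "MetricName": metric_name},
                    "Period": 300,
                    "Stat": "Sum",
                },
            },
            {"Id": "band", "Expression": f"ANOMALY_DETECTION_BAND(m1, {band_width})"},
        ],
    }

# Usage (requires AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**build_token_anomaly_alarm("OutputTokenCount"))
```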
QUESTION 2

A company is using AWS Lambda and REST APIs to build a reasoning agent to automate support workflows. The system must preserve memory across interactions, share the relevant agent state, and support event-driven invocation and synchronous invocation. The system must also enforce access control and session-based permissions.

Which combination of steps provides the MOST scalable solution? (Choose two.)

A
Use Amazon Bedrock AgentCore to manage memory and session-aware reasoning. Deploy the agent with built-in identity support, event handling, and observability.
B
Register the Lambda functions and the REST APIs as actions by using Amazon API Gateway and Amazon EventBridge. Enable Amazon Bedrock AgentCore to invoke the Lambda functions and the REST APIs without custom orchestration code.
C
Use Amazon Bedrock Agents for reasoning and conversation management. Use AWS Step Functions and Amazon SQS queues for orchestration. Store the agent state in Amazon DynamoDB to maintain memory between steps.
D
Deploy the reasoning logic as a container on Amazon ECS behind Amazon API Gateway. Use Amazon Aurora to store memory data and identity data.
E
Build a custom RAG pipeline by using Amazon Kendra and Amazon Bedrock. Use AWS Lambda to orchestrate tool invocations. Store the agent state in Amazon S3.

Correct Option: A,B

✅ Reasoning: Amazon Bedrock AgentCore natively manages memory and session-aware reasoning, fulfilling requirements for state preservation. Its built-in identity support, event handling, and observability make it a highly scalable, managed service for access control and various invocation patterns.


✅ Reasoning: Registering Lambda functions and REST APIs via Amazon API Gateway and Amazon EventBridge as actions for Amazon Bedrock AgentCore enables it to directly invoke these tools. This eliminates custom orchestration, simplifying integration and ensuring a scalable, managed solution for leveraging existing logic.

❌ Why the other choices are incorrect:

  • Option C is incorrect: Using AWS Step Functions and Amazon SQS for orchestration and DynamoDB for state introduces custom complexity and operational overhead, duplicating or complicating functionalities Bedrock AgentCore provides natively for agent orchestration and memory.
  • Option D is incorrect: Deploying reasoning logic on Amazon ECS with Amazon Aurora for memory and identity requires significant custom development and operational management, making it less scalable and more complex than leveraging a fully managed service like Bedrock AgentCore.

  • Option E is incorrect: Building a custom RAG pipeline with Lambda orchestration and S3 for agent state involves extensive custom coding for tool invocation and state management, which is less scalable and increases maintenance compared to Bedrock AgentCore's integrated features.
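The session-memory idea behind the correct options can be illustrated with the Bedrock agent runtime's `invoke_agent` call, which takes a `sessionId` so follow-up calls share state. The IDs below are placeholders, and this is a minimal sketch, not the full AgentCore setup.

```python
# Sketch of session-scoped agent invocation: reusing the same sessionId across
# calls is what preserves conversational memory between interactions.

def build_invoke_agent_request(agent_id: str, alias_id: str,
                               session_id: str, text: str) -> dict:
    """Return invoke_agent kwargs for the bedrock-agent-runtime client."""
    return {
        "agentId": agent_id,
        "agentAliasId": alias_id,
        "sessionId": session_id,  # same id across calls -> shared agent state
        "inputText": text,
    }

# Usage (requires AWS credentials and a deployed agent):
#   import boto3
#   rt = boto3.client("bedrock-agent-runtime")
#   rt.invoke_agent(**build_invoke_agent_request("AGENTID", "ALIASID", "case-42",
#                                                "Summarize ticket 42"))
```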


QUESTION 3

An ecommerce company is developing a generative AI (GenAI) solution that uses Amazon Bedrock with Anthropic Claude to recommend products to customers. Customers report that some of the recommended products are not available for sale on the website or are not relevant to the customer. Customers also report that the solution takes a long time to generate some recommendations.

The company investigates the issues and finds that most interactions between customers and the product recommendation solution are unique. The company confirms that the solution recommends products that are not in the company's product catalog. The company must resolve these issues.

Which solution will meet this requirement?

A
Increase grounding within Amazon Bedrock Guardrails. Enable Automated Reasoning checks. Set up provisioned throughput.
B
Use prompt engineering to restrict the model responses to relevant products. Use streaming techniques such as the InvokeModelWithResponseStream action to reduce perceived latency for the customers.
C
Create an Amazon Bedrock knowledge base. Implement Retrieval Augmented Generation (RAG). Set the performanceConfig latency parameter to optimized.
D
Store product catalog data in Amazon OpenSearch Service. Validate the model's product recommendations against the product catalog. Use Amazon DynamoDB to implement response caching.

Correct Option: C

✅ Reasoning: An Amazon Bedrock knowledge base combined with Retrieval Augmented Generation (RAG) is the definitive solution to ground the LLM with the company's specific product catalog. This prevents the model from hallucinating or recommending unavailable/irrelevant products. The latency optimization directly addresses the slow recommendation generation.

❌ Why the other choices are incorrect:

  • Option A is incorrect: Amazon Bedrock Guardrails focus on safety policies (e.g., hate speech, violence), not factual grounding to an external product catalog. Automated Reasoning checks are not a standard Bedrock feature for this type of factual grounding.

  • Option B is incorrect: Relying solely on prompt engineering for a large, dynamic product catalog with unique interactions is prone to hallucinations and insufficient to ensure factual accuracy. While streaming reduces perceived latency, it doesn't solve the core grounding issue.
  • Option D is incorrect: Storing data in OpenSearch is a component, but validating recommendations after generation is reactive and inefficient. Caching responses with DynamoDB is ineffective for "unique" interactions, and it doesn't proactively prevent incorrect recommendations.
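The knowledge-base-backed RAG call in option C maps to the RetrieveAndGenerate API on the Bedrock agent runtime. The sketch below only builds the request; the knowledge base ID and model ARN are placeholders.

```python
# Minimal sketch of option C: ground generation in a Bedrock knowledge base
# so recommendations come from the indexed product catalog.

def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Return retrieve_and_generate kwargs for knowledge-base-backed RAG."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

# Usage (requires AWS credentials and an existing knowledge base):
#   import boto3
#   boto3.client("bedrock-agent-runtime").retrieve_and_generate(
#       **build_rag_request("Suggest a laptop stand", "KBID12345", "MODEL_ARN"))
```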


QUESTION 4

A company is building an AI advisory application by using Amazon Bedrock. The application will provide recommendations to customers. The company needs the application to explain its reasoning process and cite specific sources for data. The application must retrieve information from company data sources and show step-by-step reasoning for recommendations. The application must also link data claims to source documents and maintain response latency under 3 seconds.

Which solution will meet these requirements with the LEAST operational overhead?

A
Use Amazon Bedrock Knowledge Bases with source attribution enabled. Use the Anthropic Claude Messages API with RAG to set high-relevance thresholds for source documents. Store reasoning and citations in Amazon S3 for auditing purposes.
B
Use Amazon Bedrock with Anthropic Claude models and extended thinking. Configure a 4,000-token thinking budget. Store reasoning traces and citations in Amazon DynamoDB for auditing purposes.
C
Configure Amazon SageMaker AI with a custom Anthropic Claude model. Use the model's reasoning parameter and AWS Lambda to process responses. Add source citations from a separate Amazon RDS database.
D
Use Amazon Bedrock with Anthropic Claude models and chain-of-thought reasoning. Configure custom retrieval tracking with the Amazon Bedrock Knowledge Bases API. Use Amazon CloudWatch to monitor response latency metrics.

Correct Option: A

✅ Reasoning: Amazon Bedrock Knowledge Bases with source attribution enabled directly addresses the requirements for retrieving information from company data sources, citing specific sources, and linking claims to source documents with the least operational overhead due to its fully managed nature. Anthropic Claude models are effective for reasoning. Storing audit logs in S3 is a standard, low-overhead practice.

❌ Why the other choices are incorrect:

  • Option B is incorrect: While using Claude models, it lacks a managed solution like Knowledge Bases for RAG and automated source attribution, leading to higher operational overhead for implementation.
  • Option C is incorrect: Using Amazon SageMaker AI with a custom Claude model significantly increases operational overhead compared to using Bedrock's fully managed service. Custom citation logic via RDS and Lambda also adds complexity.
  • Option D is incorrect: While using Bedrock Knowledge Bases is correct, "custom retrieval tracking" is unnecessary; Bedrock Knowledge Bases natively provide source attribution. Monitoring with CloudWatch doesn't address the core functional requirements.
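Source attribution in practice means walking the citations returned by RetrieveAndGenerate. The response shape used below (citations, retrievedReferences, s3Location) follows the documented structure, but treat it as an assumption to verify against the current API reference.

```python
# Illustration of linking claims to source documents: collect the S3 URIs of
# every document a RetrieveAndGenerate-style response cites.

def extract_citation_uris(response: dict) -> list[str]:
    """Return source document URIs found in a response's citations."""
    uris = []
    for citation in response.get("citations", []):
        for ref in citation.get("retrievedReferences", []):
            uri = ref.get("location", {}).get("s3Location", {}).get("uri")
            if uri:
                uris.append(uri)
    return uris

# Example with a response-shaped dict:
sample = {"citations": [{"retrievedReferences": [
    {"location": {"s3Location": {"uri": "s3://reports/q3.pdf"}}}]}]}
print(extract_citation_uris(sample))  # ['s3://reports/q3.pdf']
```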
QUESTION 5

A media company must use Amazon Bedrock to implement a robust governance process for AI-generated content. The company needs to manage hundreds of prompt templates. Multiple teams use the templates across multiple AWS Regions to generate content. The solution must provide version control with approval workflows that include notifications for pending reviews. The solution must also provide detailed audit trails that document prompt activities and consistent prompt parameterization to enforce quality standards.

Which solution will meet these requirements?

A
Configure Amazon Bedrock Studio prompt templates. Use Amazon CloudWatch to create dashboards that display prompt usage metrics. Store the approval status of content in Amazon DynamoDB. Use AWS Lambda functions to enforce approvals.
B
Use Amazon Bedrock Prompt Management to implement version control. Configure AWS CloudTrail for audit logging. Use IAM policies to control approval permissions. Create parameterized prompt templates by specifying variables.
C
Use AWS Step Functions to create an approval workflow. Store prompts as documents in Amazon S3. Use tags to implement version control. Use Amazon EventBridge to send notifications.
D
Deploy Amazon SageMaker Canvas with prompt templates that are stored in Amazon S3. Use AWS CloudFormation to implement version control. Use AWS Config to enforce approval policies.

Correct Option: B

✅ Reasoning: Amazon Bedrock Prompt Management natively provides version control and parameterized prompt templates. AWS CloudTrail ensures detailed audit logging of prompt activities. IAM policies are the standard mechanism to control approval permissions for prompt versions. This combination directly addresses all specified requirements for robust governance.

❌ Why the other choices are incorrect:

  • Option A is incorrect: Amazon Bedrock Studio offers basic templates but lacks advanced versioning or built-in approval workflows. DynamoDB, Lambda, and CloudWatch would require extensive custom development to meet the governance requirements.
  • Option C is incorrect: Storing prompts in S3 with tags for version control is not a robust, scalable solution for hundreds of templates. AWS Step Functions provides workflows but not integrated prompt management capabilities, making it a custom, less efficient approach.
  • Option D is incorrect: Amazon SageMaker Canvas is a low-code ML platform, not designed for enterprise-level Bedrock prompt governance. AWS CloudFormation manages infrastructure, not content versioning. AWS Config enforces compliance rules, not content approval policies for prompts.
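"Consistent prompt parameterization" simply means one governed template with variables filled per call. Bedrock Prompt Management uses `{{variable}}` placeholders; this local sketch mimics that convention with a hypothetical template.

```python
import re

# One governed template (hypothetical example), variables supplied per call.
TEMPLATE = "Write a {{tone}} summary of the attached {{doc_type}} in under {{words}} words."

def render_prompt(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders; raises KeyError if a placeholder
    has no value, which surfaces quality-standard violations early."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

print(render_prompt(TEMPLATE, {"tone": "neutral", "doc_type": "press release", "words": 80}))
```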
QUESTION 6

A company uses an organization in AWS Organizations with all features enabled to manage multiple AWS accounts. Employees use Amazon Bedrock across multiple accounts. The company must prevent specific topics and proprietary information from being included in prompts to Amazon Bedrock models. The company must ensure that employees can use only approved Amazon Bedrock models. The company centrally manages IAM roles for employees.

Which combination of solutions will meet these requirements? (Choose two.)

A
Create an IAM permissions boundary for each employee's IAM role. Configure the permissions boundary to require an approved Amazon Bedrock guardrail identifier to invoke Amazon Bedrock models. Create an SCP that allows employees to use only approved models.
B
Create an SCP that allows employees to use only approved models. Configure the SCP to require employees to specify a guardrail identifier in calls to invoke an approved model.
C
Create an SCP that prevents an employee from invoking a model if a centrally deployed guardrail identifier is not specified in a call to the model. Create a permissions boundary on each employee's IAM role that allows each employee to invoke only approved models.
D
Use AWS CloudFormation to create a custom Amazon Bedrock guardrail that has a block filtering policy. Use stack sets to deploy the guardrail to each account in the organization.
E
Use AWS CloudFormation to create a custom Amazon Bedrock guardrail that has a mask filtering policy. Use stack sets to deploy the guardrail to each account in the organization.

Correct Option: B,D

✅ Reasoning: An SCP restricting model use to approved ARNs addresses "only approved models". Requiring a guardrail identifier in the SCP ensures the company's content filtering policy is always applied.


✅ Reasoning: Creating a custom Bedrock guardrail with a block filtering policy directly addresses preventing specific topics/proprietary information. Deploying it via CloudFormation StackSets ensures central, consistent deployment across all accounts.

❌ Why the other choices are incorrect:

  • Option A is incorrect: While permissions boundaries can enforce guardrail usage, they don't define the guardrail's content (block policy). Managing PBs on each employee role is less centralized for content filtering than a single, deployed guardrail.
  • Option C is incorrect: This option uses an SCP for guardrail enforcement but a permissions boundary for model restriction. An SCP is generally more effective for centralized model restriction across an organization than individual PBs. Crucially, it still lacks the step of creating the guardrail's actual content filtering policy.
  • Option E is incorrect: A mask filtering policy replaces sensitive content with asterisks. The requirement is to "prevent specific topics and proprietary information from being included", which implies blocking the content, not just masking it. A block filtering policy (as in D) is required.
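Option B's SCP can be sketched as policy JSON (expressed here as a Python dict). The `bedrock:GuardrailIdentifier` condition key and the model ARN pattern are assumptions to verify against the current IAM service authorization reference before use.

```python
import json

# Placeholder ARN pattern for the approved model list.
APPROVED_MODEL_ARN = "arn:aws:bedrock:*::foundation-model/anthropic.claude-*"

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny invoking any model that is not on the approved list.
            "Sid": "ApprovedModelsOnly",
            "Effect": "Deny",
            "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
            "NotResource": [APPROVED_MODEL_ARN],
        },
        {   # Deny invocations that do not reference any guardrail at all.
            "Sid": "RequireGuardrail",
            "Effect": "Deny",
            "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
            "Resource": "*",
            "Condition": {"Null": {"bedrock:GuardrailIdentifier": "true"}},
        },
    ],
}

print(json.dumps(scp, indent=2))
```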
QUESTION 7

An insurance company uses existing Amazon SageMaker AI infrastructure to support a web-based application that allows customers to predict what their insurance premiums will be. The company stores customer data that is used to train the SageMaker AI model in an Amazon S3 bucket. The dataset is growing rapidly. The company wants a solution to continuously re-train the model. The solution must automatically re-train and re-deploy the model to the application when an employee uploads a new customer data file to the S3 bucket.

Which solution will meet these requirements?

A
Use AWS Glue to run an ETL job on each uploaded file. Configure the ETL job to use the AWS SDK to invoke the SageMaker AI model endpoint. Use real-time inference with the endpoint to re-deploy the model after it is re-trained on the updated customer dataset.
B
Create an AWS Lambda function and webhook handlers to generate an event when an employee uploads a new file. Configure SageMaker Pipelines to re-deploy the model after it is re-trained on the updated customer dataset. Use Amazon EventBridge to create an event bus. Set the Lambda function event as the source and SageMaker Pipelines as the target.
C
Create an AWS Step Functions Express workflow with AWS SDK integrations to retrieve the customer data from the S3 bucket when an employee uploads a new file to the S3 bucket. Use a SageMaker Data Wrangler flow to export the data from the S3 bucket to SageMaker Autopilot. Use SageMaker Autopilot to re-deploy the model after it has been re-trained on the updated customer dataset.
D
Create an AWS Step Functions Standard workflow. Configure the first state to call an AWS Lambda function to respond when an employee uploads a new file to the S3 bucket. Use a pipeline in SageMaker Pipelines to re-deploy the model after it has been re-trained on the updated customer dataset. Use the next state in the workflow to run the pipeline when the first state receives a response.

Correct Option: D

✅ Reasoning: An AWS Step Functions Standard workflow is ideal for orchestrating long-running, stateful processes like ML pipelines. S3 can trigger a Lambda function, which then initiates the Step Functions workflow. SageMaker Pipelines are purpose-built for MLOps, handling automated re-training and re-deployment efficiently. This architecture provides robust, automated, and observable continuous integration/continuous delivery (CI/CD) for machine learning models.

❌ Why the other choices are incorrect:

  • Option A is incorrect: Invoking a SageMaker AI model endpoint uses real-time inference and does not trigger model re-training or re-deployment.
  • Option B is incorrect: While viable, "webhook handlers" are unnecessarily complex for direct S3-to-Lambda integration. Step Functions offers more explicit state management and orchestration capabilities for complex MLOps workflows compared to purely EventBridge-driven execution.
  • Option C is incorrect: AWS Step Functions Express workflows are for short-duration, high-event-rate tasks, not long-running ML training and deployment. SageMaker Autopilot might fit, but Express Workflows are not suitable for this orchestration.
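The trigger step of option D can be sketched as an S3-notified Lambda handler that starts the Standard workflow. The state machine ARN is a placeholder and error handling is omitted; the S3 event shape is the standard notification record.

```python
import json

STATE_MACHINE_ARN = "arn:aws:states:us-east-1:111122223333:stateMachine:retrain-model"  # placeholder

def build_execution_input(s3_event: dict) -> dict:
    """Pull the uploaded object's location out of an S3 notification event."""
    record = s3_event["Records"][0]["s3"]
    return {"bucket": record["bucket"]["name"], "key": record["object"]["key"]}

def handler(event, context):
    import boto3  # imported here so build_execution_input stays testable offline
    sfn = boto3.client("stepfunctions")
    # Kick off the Standard workflow that runs the SageMaker pipeline.
    return sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps(build_execution_input(event)),
    )
```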


QUESTION 8

A GenAI developer is building a Retrieval Augmented Generation (RAG)-based customer support application that uses Amazon Bedrock foundation models (FMs). The application needs to process 50 GB of historical customer conversations that are stored in an Amazon S3 bucket as JSON files. The application must use the processed data as its retrieval corpus. The application's data processing workflow must extract relevant data from customer support documents, remove customer personally identifiable information (PII), and generate embeddings for vector storage. The processing workflow must be cost-effective and must finish within 4 hours.

Which solution will meet these requirements with the LEAST operational overhead?

A
Use AWS Lambda and Amazon Comprehend to process files in parallel, remove PII, and call Amazon Bedrock APIs to generate vectors. Configure Lambda concurrency limits and memory settings to optimize throughput.
B
Create an AWS Glue ETL job to run PII detection scripts on the data. Use Amazon SageMaker Processing to run the HuggingFaceProcessor to generate embeddings by using a pre-trained model. Store the embeddings in Amazon OpenSearch Service.
C
Deploy an Amazon EMR cluster that runs Apache Spark with user-defined functions (UDFs) that call Amazon Comprehend to detect PII. Use Amazon Bedrock APIs to generate vectors. Store outputs in Amazon Aurora PostgreSQL with the pgvector extension.
D
Implement a data processing pipeline that uses AWS Step Functions to orchestrate a workload that uses Amazon Comprehend to detect PII and Amazon Bedrock to generate embeddings. Directly integrate the workflow with Amazon OpenSearch Serverless to store vectors and provide similarity search capabilities.

Correct Option: D

✅ Reasoning: AWS Step Functions orchestrates a serverless workflow using Amazon Comprehend for PII detection and Amazon Bedrock for generating embeddings. Integrating directly with Amazon OpenSearch Serverless provides a fully managed, scalable, and low-operational-overhead vector store. This combination effectively processes 50 GB within 4 hours while minimizing management burden.

❌ Why the other choices are incorrect:

  • Option A is incorrect: While serverless, managing Lambda concurrency and memory settings for 50GB within 4 hours can incur significant optimization effort and operational overhead.
  • Option B is incorrect: "PII detection scripts" implies custom code, adding operational overhead. Amazon SageMaker Processing requires managing SageMaker job resources.
  • Option C is incorrect: Deploying and managing an Amazon EMR cluster, including Spark UDFs, represents high operational overhead, directly contradicting the requirement.
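The PII-removal step works on character offsets: Comprehend's `detect_pii_entities` returns entities with `BeginOffset`/`EndOffset`/`Type`, and the text is masked before embedding. The entity list below mimics that response shape; the call itself is left as a comment.

```python
# Pure masking step for the pipeline's PII removal: replace each detected
# span with [TYPE], working right-to-left so earlier offsets stay valid.

def redact_pii(text: str, entities: list[dict]) -> str:
    """Mask Comprehend-style PII spans (BeginOffset/EndOffset/Type)."""
    for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[:e["BeginOffset"]] + f"[{e['Type']}]" + text[e["EndOffset"]:]
    return text

# In the real workflow the entities would come from (requires credentials):
#   boto3.client("comprehend").detect_pii_entities(Text=text, LanguageCode="en")
detected = [{"Type": "NAME", "BeginOffset": 7, "EndOffset": 10}]
print(redact_pii("Ticket Ann reported login failure", detected))
# Ticket [NAME] reported login failure
```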
QUESTION 9

A financial services company is creating a Retrieval Augmented Generation (RAG) application that uses Amazon Bedrock to generate summaries of market activities. The application relies on a vector database that stores a small proprietary dataset that has a low index count. The application must perform similarity searches. The Amazon Bedrock model's responses must maximize accuracy and maintain high performance.

The company needs to configure the vector database and integrate it with the application.

Which solution will meet these requirements?

A
Launch an Amazon MemoryDB cluster and configure the index by using the Flat algorithm. Configure a horizontal scaling policy based on performance metrics.
B
Launch an Amazon MemoryDB cluster and configure the index by using the Hierarchical Navigable Small World (HNSW) algorithm. Configure a vertical policy based on performance metrics.
C
Launch an Amazon Aurora PostgreSQL cluster and configure the index by using the Inverted File with Flat Compression (IVFFlat) algorithm. Configure the instance class to scale to a larger size when the load increases.
D
Launch an Amazon DocumentDB cluster that has an Inverted File with Flat Compression (IVFFlat) index and a high probe value. Configure connections to the cluster as a replica set. Distribute reads to replica instances.

Correct Option: B

✅ Reasoning: Amazon MemoryDB, using Redis Stack, is excellent for high-performance, in-memory vector storage. The HNSW algorithm provides an optimal balance of very high accuracy (recall) and superior performance for similarity searches, crucial for RAG applications. Vertical scaling directly boosts computational resources for complex vector operations.

❌ Why the other choices are incorrect:

  • Option A is incorrect: The Flat algorithm guarantees 100% accuracy but is computationally exhaustive and can hinder "high performance" compared to HNSW, even for small datasets, as query load increases. Horizontal scaling is less direct for single-index performance.
  • Option C is incorrect: While Aurora PostgreSQL can host vector data with pgvector, the IVFFlat algorithm typically provides lower recall than HNSW, thus not "maximizing accuracy" as effectively for this scenario.
  • Option D is incorrect: Amazon DocumentDB is a document database not natively designed or optimized for efficient vector similarity search, making it an unsuitable choice for a RAG vector database.
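For intuition on the Flat-vs-HNSW tradeoff: a Flat index answers a query by exhaustively scoring every stored vector, exactly like the toy scan below, so it is 100% accurate but its cost grows linearly with index size, whereas HNSW answers approximately via a layered graph in sub-linear time. The tiny two-vector index is illustrative only.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def flat_search(query: list[float], index: dict, k: int = 1) -> list[str]:
    """Exact top-k by scanning every stored vector (what a Flat index does)."""
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = {"bond-note": [0.9, 0.1], "equity-note": [0.1, 0.9]}
print(flat_search([0.8, 0.2], index))  # ['bond-note']
```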
QUESTION 10

A company uses Amazon Bedrock to build a Retrieval Augmented Generation (RAG) system. The RAG system uses an Amazon Bedrock knowledge base that is based on an Amazon S3 bucket as the data source for emergency news video content. The system retrieves transcripts, archived reports, and related documents from the S3 bucket.

The RAG system uses state-of-the-art embedding models and a high-performing retrieval setup. However, users report slow responses and irrelevant results, which cause decreased user satisfaction. The company notices that vector searches are evaluating too many documents across too many content types and over long periods of time.

The company determines that the underlying models will not benefit from additional fine-tuning. The company must improve retrieval accuracy by applying smarter constraints. The company wants a solution that requires minimal changes to the existing architecture.

Which solution will meet these requirements?

A
Enhance embeddings by using a domain-adapted model that is specifically trained on emergency news content for improved vector similarity.
B
Migrate to Amazon OpenSearch Service. Use vector fields and metadata filters to define the scope of results retrieval.
C
Enable metadata-aware filtering within the Amazon Bedrock knowledge base by indexing S3 object metadata.
D
Migrate to an Amazon Q Business index to perform structured metadata filtering and document categorization during retrieval.

Correct Option: C

✅ Reasoning: Amazon Bedrock knowledge bases inherently support metadata-aware filtering. By indexing S3 object metadata (e.g., content type, timestamp, origin), the retrieval process can apply specific filters before or during the vector search. This directly allows for "smarter constraints" to scope down results, reducing the number of documents evaluated, improving relevance, and speeding up responses with minimal architectural change.

❌ Why the other choices are incorrect:

  • Option A is incorrect: Enhancing embeddings improves the quality of semantic similarity but does not address the root cause of too many documents being evaluated. It doesn't provide a mechanism to apply constraints to limit the search scope.
  • Option B is incorrect: Migrating to Amazon OpenSearch Service from an existing Bedrock knowledge base constitutes a significant architectural change. This violates the explicit requirement for "minimal changes to the existing architecture."
  • Option D is incorrect: Migrating to an Amazon Q Business index involves a substantial architectural shift. This solution directly contradicts the requirement for implementing a solution with "minimal changes" to the current RAG architecture.
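Option C's metadata filter can be sketched as a Retrieve request. The filter structure ("equals" on a key/value pair inside the vector search configuration) follows Bedrock's documented filter operators, but the metadata key name and result count here are assumptions.

```python
# Sketch of a metadata-constrained knowledge base query: scope the vector
# search to one content type before similarity scoring.

def build_filtered_retrieve(query: str, kb_id: str, content_type: str) -> dict:
    """Return retrieve kwargs that filter the vector search by S3 metadata."""
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": query},
        "retrievalConfiguration": {
            "vectorSearchConfiguration": {
                "numberOfResults": 5,  # illustrative choice
                "filter": {"equals": {"key": "content_type", "value": content_type}},
            }
        },
    }

# Usage (requires AWS credentials and an existing knowledge base):
#   import boto3
#   boto3.client("bedrock-agent-runtime").retrieve(
#       **build_filtered_retrieve("flood coverage updates", "KBID12345", "transcript"))
```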


QUESTION 11

An enterprise application uses an Amazon Bedrock foundation model (FM) to process and analyze 50 to 200 pages of technical documents. Users are experiencing inconsistent responses and receiving truncated outputs when processing documents that exceed the FM's context window limits.

Which solution will resolve this problem?

A
Configure fixed-size chunking at 4,000 tokens for each chunk with 20% overlap. Use application-level logic to link multiple chunks sequentially until the FM's maximum context window of 200,000 tokens is reached before making inference calls.
B
Use hierarchical chunking with parent chunks of 8,000 tokens and child chunks of 2,000 tokens. Use Amazon Bedrock Knowledge Bases built-in retrieval to automatically select relevant parent chunks based on query context. Configure overlap tokens to maintain semantic continuity.
C
Use semantic chunking with a breakpoint percentile threshold of 95% and a buffer size of 3 sentences. Use the Amazon Bedrock Retrieve And Generate API call to dynamically select the most relevant chunks based on embedding similarity scores.
D
Create a pre-processing AWS Lambda function that analyzes document token count by using the FM's tokenizer. Configure the Lambda function to split documents into equal segments that fit within 80% of the context window. Configure the Lambda function to process each segment independently before aggregating the results.

Premium Solution Locked

Unlock all 86 answers & explanations

QUESTION 12

A financial services company needs to build a document analysis system that uses Amazon Bedrock to process quarterly reports. The system must analyze financial data, perform sentiment analysis, and validate compliance across batches of reports. Each batch contains 5 reports. Each report requires multiple foundation model (FM) calls. The solution must finish the analysis within 10 seconds for each batch. Current sequential processing takes 45 seconds for each batch.

Which solution will meet these requirements?

A
Use AWS Lambda functions with provisioned concurrency to process each analysis type sequentially. Configure the Lambda function timeouts to 10 seconds. Configure automatic retries with exponential backoff.
B
Use AWS Step Functions with a Parallel state to invoke separate AWS Lambda functions for each analysis type simultaneously. Configure Amazon Bedrock client timeouts. Use Amazon CloudWatch metrics to track execution time and model inference latency.
C
Create an Amazon SQS queue to buffer analysis requests. Deploy multiple AWS Lambda functions with reserved concurrency. Configure each Lambda function to process different aspects of each report sequentially and then combine the results.
D
Deploy an Amazon ECS cluster that runs containers that process each report sequentially. Use a load balancer to distribute batch workloads. Configure an auto-scaling policy based on CPU utilization to handle demand fluctuations.
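
The scenario above hinges on replacing sequential processing with parallel fan-out, which is what a Step Functions Parallel state does across separate Lambda functions. As an illustration only, the same fan-out idea is sketched locally below with a thread pool; the three analysis functions are hypothetical stand-ins for the FM calls.

```python
# Sketch: fanning out independent analyses in parallel, the same idea a
# Step Functions Parallel state applies across separate Lambda functions.
# The three analysis functions are hypothetical stand-ins for FM calls.
from concurrent.futures import ThreadPoolExecutor

def analyze_financials(report: str) -> str:
    return f"financials:{report}"

def analyze_sentiment(report: str) -> str:
    return f"sentiment:{report}"

def validate_compliance(report: str) -> str:
    return f"compliance:{report}"

ANALYSES = [analyze_financials, analyze_sentiment, validate_compliance]

def process_report(report: str) -> dict:
    """Run all analysis types for one report concurrently, then merge."""
    with ThreadPoolExecutor(max_workers=len(ANALYSES)) as pool:
        futures = {fn.__name__: pool.submit(fn, report) for fn in ANALYSES}
        return {name: future.result() for name, future in futures.items()}

result = process_report("Q3-report")
```

If each analysis takes roughly the same time, running them concurrently bounds batch latency by the slowest branch rather than the sum of all branches, which is how a 45-second sequential workload can fit a 10-second budget.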

QUESTION 13

A company is building a generative AI (GenAI) application that produces content based on a variety of internal and external data sources. The company wants to ensure that the generated output is fully traceable. The application must support data source registration and enable metadata tagging to attribute content to its original source. The application must also maintain audit logs of data access and usage throughout the pipeline.

Which solution will meet these requirements?

A
Use AWS Lake Formation to catalog data sources and control access. Apply metadata tags directly in Amazon S3. Use AWS CloudTrail to monitor API activity.
B
Use AWS Glue Data Catalog to register and tag data sources. Use Amazon CloudWatch Logs to monitor access patterns and application behavior.
C
Store data in Amazon S3 and use object tagging for attribution. Use AWS Glue Data Catalog to manage schema information. Use AWS CloudTrail to log access to S3 buckets.
D
Use AWS Glue Data Catalog to register all data sources. Apply metadata tags to attribute data sources. Use AWS CloudTrail to log access and activity across services.
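
The options above all revolve around registering data sources in the AWS Glue Data Catalog and attaching attribution metadata. As a hedged illustration, the sketch below builds a `TableInput` payload in the shape Glue's `create_table` API accepts; the database/table names and the parameter keys (`source_system`) are hypothetical.

```python
# Sketch: registering a data source in the AWS Glue Data Catalog with
# attribution metadata. Names and parameter keys ("source_system") are
# hypothetical; the payload shape follows the Glue create_table API's
# TableInput structure, whose Parameters map holds free-form metadata.

def build_table_input(name: str, s3_location: str, source_system: str) -> dict:
    return {
        "Name": name,
        "Parameters": {  # free-form key/value tags used here for attribution
            "source_system": source_system,
            "classification": "json",
        },
        "StorageDescriptor": {
            "Location": s3_location,
            "Columns": [{"Name": "doc_id", "Type": "string"}],
        },
    }

table_input = build_table_input(
    "external_news", "s3://genai-sources/external_news/", "newswire-feed"
)

# With boto3 (not executed here):
# boto3.client("glue").create_table(
#     DatabaseName="genai_sources",  # hypothetical database
#     TableInput=table_input,
# )
```

With sources registered this way, AWS CloudTrail can then log Glue and S3 API activity to provide the audit trail the requirements describe.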

QUESTION 14

A company configures a landing zone in AWS Control Tower. The company handles sensitive data that must remain within the European Union. The company must use only the eu-central-1 Region. The company uses SCPs to enforce data residency policies. GenAI developers at the company are assigned IAM roles that have full permissions for Amazon Bedrock.

The company must ensure that GenAI developers can use the Amazon Nova Pro model through Amazon Bedrock only by using cross-Region inference (CRI) and only in eu-central-1. The company enables model access for the GenAI developer IAM roles in Amazon Bedrock. However, when a GenAI developer attempts to invoke the model through the Amazon Bedrock Chat/Text playground, the GenAI developer receives the following error.

User: arn:aws:sts::123456789012:assumed-role/AssumedDevRole/DevUserName

Action: bedrock:InvokeModelWithResponseStream

On resource(s): arn:aws:bedrock:eu-west-3::foundation-model/amazon.nova-pro-v1:0

Context: a service control policy explicitly denies the action

The company needs a solution to resolve the error. The solution must retain the company's existing governance controls and must provide precise access control. The solution must comply with the company's existing data residency policies.

Which combination of solutions will meet these requirements? (Choose two.)

A
Add an AdministratorAccess policy to the GenAI developer IAM role.
B
Extend the existing SCPs to enable CRI for the eu.amazon.nova-pro-v1:0 inference profile.
C
Enable Amazon Bedrock model access for Amazon Nova Pro in the eu-west-3 Region.
D
Validate that the GenAI developer IAM roles have permissions to invoke Amazon Nova Pro through the eu.amazon.nova-pro-v1:0 inference profile in all European Union AWS Regions that can serve the model.
E
Extend the existing SCP to enable CRI for the eu.* inference profile.
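
To make the SCP angle concrete, here is an illustrative sketch of a deny-by-default statement, expressed as a Python dict, that blocks Bedrock invocation except through the EU cross-Region inference profile and the EU foundation-model ARNs that CRI can route to. The exact statement wording is an assumption for illustration; real SCPs interact with every other policy in the organization and should be validated against AWS Organizations documentation before use.

```python
# Sketch: an SCP-style statement that denies Bedrock model invocation
# except via the EU cross-Region inference profile. Illustrative only;
# CRI also requires access to destination-Region foundation-model ARNs,
# which is why the NotResource list includes both ARN patterns.
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBedrockOutsideEUCrossRegionInference",
            "Effect": "Deny",
            "Action": "bedrock:InvokeModel*",
            "NotResource": [
                # the EU cross-Region inference profile (the entry point)
                "arn:aws:bedrock:eu-central-1:*:inference-profile/eu.amazon.nova-pro-v1:0",
                # destination-Region model ARNs that CRI may route to
                "arn:aws:bedrock:eu-*::foundation-model/amazon.nova-pro-v1:0",
            ],
        }
    ],
}

policy_json = json.dumps(scp, indent=2)
```

The error in the scenario arises because CRI routed the request to a foundation model in eu-west-3, which the existing SCP denies; extending the SCP to cover the CRI resources resolves it without loosening governance.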

QUESTION 15

A company is designing an API for a generative AI (GenAI) application that uses a foundation model (FM) that is hosted on a managed model service. The API must stream responses to reduce latency, enforce token limits to manage compute resource usage, and implement retry logic to handle model timeouts and partial responses.

Which solution will meet these requirements with the LEAST operational overhead?

A
Integrate an Amazon API Gateway HTTP API with an AWS Lambda function to invoke Amazon Bedrock. Use Lambda response streaming to stream responses. Enforce token limits within the Lambda function. Implement retry logic for model timeouts by using Lambda and API Gateway timeout configurations.
B
Connect an Amazon API Gateway HTTP API directly to Amazon Bedrock. Simulate streaming by using client-side polling. Enforce token limits on the frontend. Configure retry behavior by using API Gateway integration settings.
C
Connect an Amazon API Gateway WebSocket API to an Amazon ECS service that hosts a containerized inference server. Stream responses by using the WebSocket protocol. Enforce token limits within Amazon ECS. Handle model timeouts by using ECS task lifecycle hooks and restart policies.
D
Integrate an Amazon API Gateway REST API with an AWS Lambda function that invokes Amazon Bedrock. Use Lambda response streaming to stream responses. Enforce token limits within the Lambda function. Implement retry logic by using Lambda and API Gateway timeout configurations.
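
Two of the requirements above, enforcing token limits and retrying on timeouts, can be sketched independently of any particular option. In the sketch below, `flaky_invoke` is a hypothetical stub standing in for a Bedrock `invoke_model_with_response_stream` call, and the `max_tokens` field name is illustrative (the exact key varies by model provider).

```python
# Sketch: capping output tokens in the inference request and retrying
# transient timeouts with exponential backoff. `flaky_invoke` is a
# hypothetical stub for a Bedrock streaming invocation; the "max_tokens"
# key is illustrative and varies by model provider.
import time

MAX_OUTPUT_TOKENS = 1024

def build_request(prompt: str) -> dict:
    """Cap output tokens at the API level so the model cannot overrun."""
    return {"prompt": prompt, "max_tokens": MAX_OUTPUT_TOKENS}

def invoke_with_retry(invoke, request: dict, attempts: int = 3,
                      base_delay: float = 0.01):
    """Retry timeouts with exponential backoff; re-raise on final failure."""
    for attempt in range(attempts):
        try:
            return invoke(request)
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky_invoke(request: dict) -> str:
    calls["n"] += 1
    if calls["n"] < 3:  # fail twice, then succeed
        raise TimeoutError("model timeout")
    return f"ok:{request['max_tokens']}"

result = invoke_with_retry(flaky_invoke, build_request("summarize the report"))
```

Placing this logic in a single Lambda function behind API Gateway keeps the retry and limit policy in one place, which is the operational-overhead consideration the question is probing.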

QUESTION 16

A company is developing a new AI-powered application that needs to integrate with various specialized tools. These tools currently run as Model Context Protocol (MCP) servers on the local machines of developers and do not maintain states between invocations. The company plans to deploy each MCP server as an AWS Lambda function to support the company's production application.

The solution must be accessible to both internal applications and authorized third-party partners. The solution must use strict authentication and authorization controls.

Which additional steps will meet these requirements with the LEAST operational overhead?

A
Create a custom Lambda invocation transport by using the Lambda Invoke API. Implement IAM authentication and grant InvokeFunction permissions to authorized users and roles.
B
Expose the Lambda functions through Amazon API Gateway REST API endpoints. Implement API keys for authentication. Configure the applications that need to access the MCP servers to use standard HTTP requests instead of the MCP protocol.
C
Create Lambda function URLs and enable a custom Streamable HTTP transport and SigV4. Implement AWS IAM authentication. Grant InvokeFunctionUrl permissions to authorized users and roles.
D
Expose the Lambda function through Amazon API Gateway HTTP API endpoints with the Streamable HTTP transport. Use Amazon Cognito to implement OAuth authentication. Configure API Gateway to validate OAuth tokens.

QUESTION 17

A company provides a service that helps users from around the world discover new restaurants. The service has 50 million monthly active users. The company wants to implement a semantic search solution across a database that contains 20 million restaurants and 200 million reviews. The company currently stores the data in a PostgreSQL database.

The solution must support complex natural language queries and return results for at least 95% of queries within 500 ms. The solution must maintain data freshness for restaurant details that update hourly. The solution must also scale cost-effectively during peak usage periods.

Which solution will meet these requirements with the LEAST development effort?

A
Migrate the restaurant data to Amazon OpenSearch Service. Implement keyword-based search rules that use custom analyzers and relevance tuning to find restaurants based on attributes such as cuisine type, features, and location. Create Amazon API Gateway HTTP API endpoints to transform user queries into structured search parameters.
B
Migrate the restaurant data to Amazon OpenSearch Service. Use a foundation model (FM) in Amazon Bedrock to generate vector embeddings from restaurant descriptions, reviews, and menu items. When users submit natural language queries, convert the queries to embeddings by using the same FM. Perform k-nearest neighbors (k-NN) searches to find semantically similar results.
C
Keep the restaurant data in PostgreSQL and install the pgvector extension. Use a foundation model (FM) in Amazon Bedrock to generate vector embeddings from restaurant data. Store the vector embeddings directly in PostgreSQL. Create an AWS Lambda function to convert natural language queries to vector representations by using the same FM. Configure the Lambda function to perform similarity searches within the database.
D
Migrate restaurant data to an Amazon Bedrock knowledge base by using a custom ingestion pipeline. Configure the knowledge base to automatically generate embeddings from restaurant information. Use the Amazon Bedrock Retrieve API with built-in vector search capabilities to query the knowledge base directly by using natural language input.
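
Options B through D all rest on the same core operation: k-nearest-neighbors (k-NN) search by embedding similarity. The toy sketch below shows that operation in pure Python; the 3-dimensional "embeddings" and restaurant names are invented for illustration, whereas a real system would generate high-dimensional vectors with an embedding FM.

```python
# Sketch: k-nearest-neighbors search by cosine similarity, the core
# operation behind the vector-search options above. The 3-dimensional
# "embeddings" are toy values; a real system would use an embedding FM.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

RESTAURANTS = {
    "taqueria": [0.9, 0.1, 0.0],
    "ramen_bar": [0.1, 0.9, 0.2],
    "bistro": [0.2, 0.3, 0.9],
}

def knn(query_vec, k=2):
    """Return the k restaurant names most similar to the query vector."""
    ranked = sorted(
        RESTAURANTS,
        key=lambda name: cosine(query_vec, RESTAURANTS[name]),
        reverse=True,
    )
    return ranked[:k]

top = knn([0.85, 0.15, 0.05])
```

At the scale in the question (20 million restaurants, 95% of queries under 500 ms), a brute-force scan like this would not suffice; purpose-built approximate k-NN indexes, such as those in OpenSearch or pgvector, exist to make this lookup fast.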

QUESTION 18

A company is building a legal research AI assistant that uses Amazon Bedrock with an Anthropic Claude foundation model (FM). The AI assistant must retrieve highly relevant case law documents to augment the FM's responses. The AI assistant must identify semantic relationships between legal concepts, specific legal terminology, and citations. The AI assistant must perform quickly and return precise results.

Which solution will meet these requirements?

A
Configure an Amazon Bedrock knowledge base to use a default vector search configuration. Use Amazon Bedrock to expand queries to improve retrieval for legal documents based on specific terminology and citations.
B
Use Amazon OpenSearch Service to deploy a hybrid search architecture that combines vector search with keyword search. Apply an Amazon Bedrock reranker model to optimize result relevance.
C
Enable the Amazon Kendra query suggestion feature for end users. Use Amazon Bedrock to perform post-processing of search results to identify semantic similarity in the documents and to produce precise results.
D
Use Amazon OpenSearch Service with vector search and Amazon Bedrock Titan embeddings to index and search legal documents. Use custom AWS Lambda functions to merge results with keyword-based filters that are stored in an Amazon RDS database.
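
Hybrid search, as mentioned in the options above, merges a keyword ranking with a vector ranking before any reranking step. One common fusion method is reciprocal rank fusion (RRF), sketched below over hypothetical ranked lists of case-law documents; the document IDs and the conventional constant k=60 are illustrative.

```python
# Sketch: reciprocal rank fusion (RRF), a common way hybrid search
# merges a keyword ranking with a vector ranking before reranking.
# The two ranked lists of case IDs are hypothetical retrieval results.

def rrf(rankings, k=60):
    """Score each doc by the sum of 1 / (k + rank) over all rankings,
    then return docs ordered by fused score, highest first."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["case_A", "case_B", "case_C"]  # e.g., BM25-style ranking
vector_hits = ["case_B", "case_D", "case_A"]   # e.g., embedding ranking

fused = rrf([keyword_hits, vector_hits])
```

Documents that appear near the top of both rankings (here, case_B) rise above documents that score well in only one, which is why hybrid fusion tends to capture both exact legal terminology and semantic relationships.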
