Amazon AWS Certified Generative AI Developer - Professional (AIP-C01)
Get full access to the updated question bank and pass on your first attempt.
Vendor
Amazon
Certification
Professional Certifications
Content
86 Qs
Status
Verified
Updated
1 day ago
Test the Practice Engine
Experience our real exam environment with free demo questions
Premium Bundle
Complete Success Suite
Save $44 Instantly
- ✓ Full PDF + Interactive Engine: Everything you need to pass
- ✓ All Advanced Question Types: Drag & Drop, Hotspots, Case Studies
- ✓ Priority 24/7 Expert Support: Direct line to certification leads
- ✓ 90 Days Free Priority Updates: Stay current as exams change
Success Metric
98.4% Pass Rate
Standard Simulation
Practice Engine
One-Time Payment
- Web-Based (Zero Install)
- Real Testing Environment: Virtual & Practice Modes
- Interactive Engine: Drag & Drop, Hotspots
- 60 Days Free Updates
Compatible with All Devices
Basic Tier
PDF Study Guide
Digital Access
- ✓ Exam Questions (PDF)
- ✓ Mobile Friendly
- ✓ 60 Days Updates
Verified 10-Question Preview
Verified Community
The CertoMetrics Standard.
Recommend the #1 platform for verified Amazon certification resources.
Success Network
Help a Colleague Succeed.
Invite a peer to get their own updated AIP-C01 prep kit.
Exam Overview
The AWS Certified Generative AI Developer - Professional certification is a pinnacle achievement for engineers specializing in cutting-edge AI. This credential validates deep expertise in designing, developing, deploying, and optimizing generative AI solutions on AWS. Earning this certification signifies a candidate's advanced ability to leverage foundation models, fine-tune them for specific use cases, and integrate them into enterprise applications with robust security and scalability. It positions professionals as leaders capable of driving innovation, accelerating product development, and solving complex business challenges using generative AI. This certification is valuable for those looking to distinguish themselves in the rapidly evolving field of artificial intelligence, unlocking new career opportunities and demonstrating strong proficiency in AWS's generative AI ecosystem.
Questions
65
Passing Score
750/1000
Duration
170 Minutes
Difficulty
Expert
Level
Professional
Skills Measured
Career Path
Target Roles
Common Questions
Is the material up to date?
Yes. We update our question bank weekly to match the latest Amazon standards. You get free updates for 90 days.
What format do I get?
You get instant access to both the **PDF** (for reading) and our **Premium Test Engine** (for exam simulation).
Is there a guarantee?
Absolutely. If you fail the AIP-C01 exam using our materials, we offer a full money-back guarantee.
When do I get the download?
Instantly. The download link is available in your dashboard immediately after payment is confirmed.
Free Study Guide Samples
Previewing updated AIP-C01 bank (18 Questions).
A healthcare company uses Amazon Bedrock to deploy an application that generates summaries of clinical documents. The application experiences inconsistent response quality with occasional factual hallucinations. Monthly costs exceed the company's projections by 40%. A GenAI developer must implement a near real-time monitoring solution to detect hallucinations, identify abnormal token consumption, and provide early warnings of cost anomalies. The solution must require minimal custom development work and maintenance overhead.
Which solution will meet these requirements?
Correct Option: C
Reasoning: Bedrock Guardrails offer near real-time contextual grounding checks to detect hallucinations with minimal custom work. Storing logs enables text output analysis. CloudWatch anomaly detection on token usage metrics efficiently identifies abnormal consumption, providing early cost warnings with low maintenance overhead. This directly meets all specified requirements.
Why the other choices are incorrect:
- Option A is incorrect: Using AWS Glue and Amazon Athena for hallucination detection requires significant custom NLP development and is not near real-time, violating the minimal custom development constraint.
- Option B is incorrect: Bedrock evaluation jobs are for offline assessment, not real-time production monitoring. A Lambda function adds custom development and maintenance that CloudWatch Anomaly Detection handles natively.
- Option D is incorrect: AWS CloudTrail logs API calls, not detailed token usage for quality analysis. SageMaker Model Monitor requires extensive custom development for GenAI hallucination detection, violating the minimal effort constraint.
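As a rough sketch of how the guardrail from the correct option attaches to an inference call, the snippet below builds a request payload in the shape of the Bedrock Converse API. The model ID, guardrail ID, and version are hypothetical placeholders; in practice you would pass this payload to boto3's `bedrock-runtime` client.

```python
# Sketch: request payload for Amazon Bedrock's Converse API with a guardrail
# attached. IDs below are hypothetical placeholders; verify field names against
# the current SDK before use.

def build_converse_request(prompt: str, guardrail_id: str, guardrail_version: str) -> dict:
    """Assemble a Converse-style request that applies a Bedrock guardrail.

    A guardrail configured with contextual grounding checks scores each
    response for grounding and relevance and can block likely hallucinations.
    """
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 512},  # cap token spend per call
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
        },
    }

request = build_converse_request(
    "Summarize the attached clinical note.", "gr-1234abcd", "1"
)
print(request["guardrailConfig"])
```

Capping `maxTokens` per call also gives CloudWatch a bounded token-usage metric to run anomaly detection against.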
A company is using AWS Lambda and REST APIs to build a reasoning agent to automate support workflows. The system must preserve memory across interactions, share the relevant agent state, and support event-driven invocation and synchronous invocation. The system must also enforce access control and session-based permissions.
Which combination of steps provides the MOST scalable solution? (Choose two.)
Correct Option: A,B
Reasoning: Amazon Bedrock AgentCore natively manages memory and session-aware state, fulfilling the requirements for state preservation. Its built-in identity support, event handling, and observability make it a highly scalable, managed service for access control and various invocation patterns.
Reasoning: Registering Lambda functions and REST APIs via Amazon API Gateway and Amazon EventBridge as actions for Amazon Bedrock AgentCore enables it to directly invoke these tools. This eliminates custom orchestration, simplifying integration and ensuring a scalable, managed solution for leveraging existing logic.
Why the other choices are incorrect:
- Option C is incorrect: Using AWS Step Functions and Amazon SQS for orchestration and DynamoDB for state introduces custom complexity and operational overhead, duplicating or complicating functionalities Bedrock AgentCore provides natively for agent orchestration and memory.
- Option D is incorrect: Deploying agent logic on Amazon ECS with Amazon Aurora for memory and identity requires significant custom development and operational management, making it less scalable and more complex than leveraging a fully managed service like Bedrock AgentCore.
- Option E is incorrect: Building a custom RAG pipeline with Lambda orchestration and S3 for agent state involves extensive custom coding for tool invocation and state management, which is less scalable and increases maintenance compared to Bedrock AgentCore's integrated features.
An ecommerce company is developing a generative AI (GenAI) solution that uses Amazon Bedrock with Anthropic Claude to recommend products to customers. Customers report that some of the recommended products are not available for sale on the website or are not relevant to the customer. Customers also report that the solution takes a long time to generate some recommendations.
The company investigates the issues and finds that most interactions between customers and the product recommendation solution are unique. The company confirms that the solution recommends products that are not in the company's product catalog. The company must resolve these issues.
Which solution will meet this requirement?
Correct Option: C
Reasoning: An Amazon Bedrock knowledge base combined with Retrieval Augmented Generation (RAG) is the definitive solution to ground the LLM with the company's specific product catalog. This prevents the model from hallucinating or recommending unavailable/irrelevant products. The latency optimization directly addresses the slow recommendation generation.
Why the other choices are incorrect:
- Option A is incorrect: Amazon Bedrock Guardrails focus on safety policies (e.g., hate speech, violence), not factual grounding to an external product catalog. Automated catalog-validation checks are not a standard Bedrock feature for this type of factual grounding.
- Option B is incorrect: Relying solely on prompt engineering for a large, dynamic product catalog with unique interactions is prone to hallucinations and insufficient to ensure factual accuracy. While streaming reduces perceived latency, it doesn't solve the core grounding issue.
- Option D is incorrect: Storing data in OpenSearch is a component, but validating recommendations after generation is reactive and inefficient. Caching responses with DynamoDB is ineffective for "unique" interactions, and it doesn't proactively prevent incorrect recommendations.
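The grounding idea behind the correct option can be illustrated with a minimal RAG sketch: retrieve catalog entries relevant to the query, then constrain the model's answer to that retrieved context. Keyword overlap below is a toy stand-in for the vector search a Bedrock knowledge base would actually perform, and the catalog entries are invented examples.

```python
# Minimal RAG illustration: retrieve relevant catalog entries, then build a
# prompt that restricts recommendations to them, preventing out-of-catalog
# suggestions. Word overlap stands in for real embedding similarity.

CATALOG = [
    "Trail running shoes, waterproof, sizes 6-13",
    "Yoga mat, non-slip, 6 mm thickness",
    "Insulated water bottle, 750 ml, stainless steel",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for
    vector similarity) and return the top k."""
    q = set(query.lower().split())
    def score(doc: str) -> int:
        return len(q & set(doc.lower().replace(",", "").split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_grounded_prompt(query: str, context: list[str]) -> str:
    """Instruct the model to recommend only products present in the retrieved
    context, which keeps recommendations inside the catalog."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        f"Recommend products for: {query}\n"
        f"Only recommend items from this catalog excerpt:\n{joined}"
    )

context = retrieve("waterproof running shoes", CATALOG)
print(build_grounded_prompt("waterproof running shoes", context))
```

Because the model only sees (and is told to stay within) retrieved catalog text, hallucinated or unavailable products are filtered out before generation rather than validated after it.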
A company is building an AI advisory application by using Amazon Bedrock. The application will provide recommendations to customers. The company needs the application to explain its reasoning process and cite specific sources for data. The application must retrieve information from company data sources and show step-by-step reasoning for recommendations. The application must also link data claims to source documents and maintain response latency under 3 seconds.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Option: A
Reasoning: Amazon Bedrock Knowledge Bases with source attribution enabled directly addresses the requirements for retrieving information from company data sources, citing specific sources, and linking claims to source documents with the least operational overhead due to its fully managed nature. Anthropic Claude models are effective for step-by-step reasoning. Storing audit logs in S3 is a standard, low-overhead practice.
Why the other choices are incorrect:
- Option B is incorrect: While using Claude models, it lacks a managed solution like Knowledge Bases for RAG and automated source attribution, leading to higher operational overhead for implementation.
- Option C is incorrect: Using Amazon SageMaker AI with a custom Claude model significantly increases operational overhead compared to using Bedrock's fully managed service. Custom citation logic via RDS and Lambda also adds complexity.
- Option D is incorrect: While using Bedrock Knowledge Bases is correct, "custom retrieval tracking" is unnecessary; Bedrock Knowledge Bases natively provide source attribution. Monitoring with CloudWatch doesn't address the core functional requirements.
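To show how the native source attribution mentioned above surfaces in practice, the sketch below parses citations from a response shaped like the Knowledge Bases RetrieveAndGenerate output. The sample dict approximates the documented response structure, and the S3 URI is an invented example; verify field names against the current Bedrock Agents runtime SDK.

```python
# Sketch: extracting source attributions from a RetrieveAndGenerate-style
# response so each claim can be linked back to its source document. The
# sample response approximates the documented API shape.

SAMPLE_RESPONSE = {
    "output": {"text": "We recommend rebalancing quarterly."},
    "citations": [
        {
            "retrievedReferences": [
                {
                    "content": {"text": "Quarterly rebalancing reduced drift."},
                    "location": {"s3Location": {"uri": "s3://company-docs/policy.pdf"}},
                }
            ]
        }
    ],
}

def extract_sources(response: dict) -> list[str]:
    """Collect the S3 URIs backing each cited passage, enabling the
    application to link recommendations to source documents."""
    uris = []
    for citation in response.get("citations", []):
        for ref in citation.get("retrievedReferences", []):
            uri = ref.get("location", {}).get("s3Location", {}).get("uri")
            if uri:
                uris.append(uri)
    return uris

print(extract_sources(SAMPLE_RESPONSE))  # → ['s3://company-docs/policy.pdf']
```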
A media company must use Amazon Bedrock to implement a robust governance process for AI-generated content. The company needs to manage hundreds of prompt templates. Multiple teams use the templates across multiple AWS Regions to generate content. The solution must provide version control with approval workflows that include notifications for pending reviews. The solution must also provide detailed audit trails that document prompt activities and consistent prompt parameterization to enforce quality standards.
Which solution will meet these requirements?
Correct Option: B
Reasoning: Amazon Bedrock Prompt Management natively provides version control and parameterized prompt templates. AWS CloudTrail ensures detailed audit logging of prompt activities. IAM policies are the standard mechanism to control approval permissions for prompt versions. This combination directly addresses all specified requirements for robust governance.
Why the other choices are incorrect:
- Option A is incorrect: Amazon Bedrock Studio offers basic templates but lacks advanced versioning or built-in approval workflows. DynamoDB, Lambda, and CloudWatch would require extensive custom development to meet the governance requirements.
- Option C is incorrect: Storing prompts in S3 with tags for version control is not a robust, scalable solution for hundreds of templates. AWS Step Functions provides workflows but not integrated prompt management capabilities, making it a custom, less efficient approach.
- Option D is incorrect: Amazon SageMaker Canvas is a low-code ML platform, not designed for enterprise-level Bedrock prompt governance. AWS CloudFormation manages infrastructure, not content versioning. AWS Config enforces compliance rules, not content approval policies for prompts.
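Consistent prompt parameterization, one of the governance requirements above, can be sketched with plain string templating: a versioned template with named placeholders enforces a uniform structure across teams. Bedrock Prompt Management provides this as a managed feature; the versions and template text below are hypothetical.

```python
# Illustration of versioned, parameterized prompt templates. In the managed
# solution, Bedrock Prompt Management stores the versions and IAM gates who
# may approve or use them; this sketch shows only the parameterization idea.

from string import Template

PROMPT_TEMPLATES = {
    # version -> template; approved versions would be gated by IAM in practice
    "v1": Template("Write a $tone summary of the article titled '$title'."),
    "v2": Template("Summarize '$title' in a $tone voice, under $max_words words."),
}

def render_prompt(version: str, **params: str) -> str:
    """Render an approved template version. A missing placeholder raises
    KeyError, surfacing inconsistent parameterization early."""
    return PROMPT_TEMPLATES[version].substitute(**params)

print(render_prompt("v2", title="Q3 Outlook", tone="neutral", max_words="150"))
```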
A company uses an organization in AWS Organizations with all features enabled to manage multiple AWS accounts. Employees use Amazon Bedrock across multiple accounts. The company must prevent specific topics and proprietary information from being included in prompts to Amazon Bedrock models. The company must ensure that employees can use only approved Amazon Bedrock models. The company centrally manages IAM roles for employees.
Which combination of solutions will meet these requirements? (Choose two.)
Correct Option: B,D
Reasoning: An SCP restricting model use to approved ARNs addresses "only approved models". Requiring a guardrail identifier in the SCP ensures the company's content filtering policy is always applied.
Reasoning: Creating a custom Bedrock guardrail with a block filtering policy directly addresses preventing specific topics/proprietary information. Deploying it via CloudFormation StackSets ensures central, consistent deployment across all accounts.
Why the other choices are incorrect:
- Option A is incorrect: While permissions boundaries can enforce guardrail usage, they don't define the guardrail's content (block policy). Managing PBs on each employee role is less centralized for content filtering than a single, deployed guardrail.
- Option C is incorrect: This option uses an SCP for guardrail enforcement but a permissions boundary for model restriction. An SCP is generally more effective for centralized model restriction across an organization than individual PBs. Crucially, it still lacks the step of creating the guardrail's actual content filtering policy.
- Option E is incorrect: A mask filtering policy replaces sensitive content with asterisks. The requirement is to "prevent specific topics and proprietary information from being included", which implies blocking the content, not just masking it. A block filtering policy (as in D) is required.
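As a rough sketch of how the two SCP controls from the correct options might be combined, the policy below denies invocation of any model outside an approved ARN pattern and denies calls that do not carry the company guardrail. The model and guardrail ARNs and account ID are placeholders, and the `bedrock:GuardrailIdentifier` condition key usage should be verified against current AWS documentation.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnapprovedModels",
      "Effect": "Deny",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "NotResource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-3-sonnet-*"
    },
    {
      "Sid": "RequireCompanyGuardrail",
      "Effect": "Deny",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": "*",
      "Condition": {
        "StringNotLike": {
          "bedrock:GuardrailIdentifier": "arn:aws:bedrock:eu-central-1:123456789012:guardrail/EXAMPLE*"
        }
      }
    }
  ]
}
```

Attached at the organization root or OU, the policy applies to every member account, while the guardrail itself (with its block filtering policy) is deployed per account via StackSets.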
An insurance company uses existing Amazon SageMaker AI infrastructure to support a web-based application that allows customers to predict what their insurance premiums will be. The company stores customer data that is used to train the SageMaker AI model in an Amazon S3 bucket. The dataset is growing rapidly. The company wants a solution to continuously re-train the model. The solution must automatically re-train and re-deploy the model to the application when an employee uploads a new customer data file to the S3 bucket.
Which solution will meet these requirements?
Correct Option: D
Reasoning: An AWS Step Functions Standard workflow is ideal for orchestrating long-running, stateful processes like ML pipelines. S3 can trigger a Lambda function, which then initiates the Step Functions workflow. SageMaker Pipelines are purpose-built for MLOps, handling automated re-training and re-deployment efficiently. This architecture provides robust, automated, and observable continuous integration/continuous delivery (CI/CD) for machine learning models.
Why the other choices are incorrect:
- Option A is incorrect: Invoking a SageMaker AI model endpoint uses real-time inference and does not trigger model re-training or re-deployment.
- Option B is incorrect: While viable, "webhook handlers" are unnecessarily complex for direct S3-to-Lambda integration. Step Functions offers more explicit state management and orchestration capabilities for complex MLOps workflows compared to purely EventBridge-driven execution.
- Option C is incorrect: AWS Step Functions Express workflows are for short-duration, high-event-rate tasks, not long-running ML training and deployment. SageMaker Autopilot might fit, but Express Workflows are not suitable for this orchestration.
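The glue step in the correct option, a Lambda function that turns an S3 upload event into a Step Functions execution, might look like the sketch below. The state machine ARN is a placeholder, and the client is injected as a parameter so the handler can be exercised locally; in Lambda you would pass `boto3.client("stepfunctions")`.

```python
# Sketch: Lambda handler that starts the re-training Step Functions workflow
# when a new data file lands in S3. ARN is a hypothetical placeholder; the
# injected client lets this run without AWS access.

import json

STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:RetrainPipeline"

def handler(event: dict, sfn_client) -> dict:
    """Extract the uploaded object's bucket/key from the S3 event and start
    one workflow execution per new data file."""
    record = event["Records"][0]
    payload = {
        "bucket": record["s3"]["bucket"]["name"],
        "key": record["s3"]["object"]["key"],
    }
    return sfn_client.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps(payload),
    )

class FakeSfnClient:
    """Stand-in client that records the call instead of hitting AWS."""
    def start_execution(self, **kwargs):
        self.last_call = kwargs
        return {"executionArn": STATE_MACHINE_ARN + ":demo"}

event = {"Records": [{"s3": {"bucket": {"name": "training-data"},
                             "object": {"key": "customers/2024-06.csv"}}}]}
client = FakeSfnClient()
handler(event, client)
print(client.last_call["input"])
```

The Standard workflow it starts would then run the SageMaker Pipeline's training and deployment steps, which can take hours, well beyond what an Express workflow or the Lambda itself could host.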
A GenAI developer is building a Retrieval Augmented Generation (RAG)-based customer support application that uses Amazon Bedrock foundation models (FMs). The application needs to process 50 GB of historical customer conversations that are stored in an Amazon S3 bucket as JSON files. The application must use the processed data as its retrieval corpus. The application's data processing workflow must extract relevant data from customer support documents, remove customer personally identifiable information (PII), and generate embeddings for vector storage. The processing workflow must be cost-effective and must finish within 4 hours.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Option: D
Reasoning: AWS Step Functions orchestrates a serverless workflow using Amazon Comprehend for PII detection and Amazon Bedrock for generating embeddings. Integrating directly with Amazon OpenSearch Serverless provides a fully managed, scalable, and low-operational-overhead vector store. This combination effectively processes 50 GB within 4 hours while minimizing management burden.
Why the other choices are incorrect:
- Option A is incorrect: While serverless, managing Lambda concurrency and memory settings for 50GB within 4 hours can incur significant optimization effort and operational overhead.
- Option B is incorrect: "PII detection scripts" implies custom code, adding operational overhead. Amazon SageMaker Processing requires managing SageMaker job resources.
- Option C is incorrect: Deploying and managing an Amazon EMR cluster, including Spark UDFs, represents high operational overhead, directly contradicting the requirement.
A financial services company is creating a Retrieval Augmented Generation (RAG) application that uses Amazon Bedrock to generate summaries of market activities. The application relies on a vector database that stores a small proprietary dataset that has a low index count. The application must perform similarity searches. The Amazon Bedrock model's responses must maximize accuracy and maintain high performance.
The company needs to configure the vector database and integrate it with the application.
Which solution will meet these requirements?
Correct Option: B
Reasoning: Amazon MemoryDB, using Redis Stack, is excellent for high-performance, in-memory vector storage. The HNSW algorithm provides an optimal balance of very high accuracy (recall) and superior performance for similarity searches, crucial for RAG applications. Vertical scaling directly boosts computational resources for complex vector operations.
Why the other choices are incorrect:
- Option A is incorrect: The Flat algorithm guarantees 100% accuracy but is computationally exhaustive and can hinder "high performance" compared to HNSW, even for small datasets, as query load increases. Horizontal scaling is less direct for single-index performance.
- Option C is incorrect: While Aurora PostgreSQL can host vector data with pgvector, the IVFFlat algorithm typically provides lower recall than HNSW, thus not "maximizing accuracy" as effectively for this scenario.
- Option D is incorrect: Amazon DocumentDB is a document database not natively designed or optimized for efficient vector similarity search, making it an unsuitable choice for a RAG vector database.
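The Flat-versus-HNSW trade-off above comes down to search strategy: exact (Flat) search compares the query against every stored vector, while HNSW navigates a graph index to skip most comparisons at a small cost in recall. The toy brute-force search below, with invented document vectors, shows the O(n) exact approach.

```python
# Illustration of exact (Flat) vector search: every stored vector is scored
# against the query, guaranteeing perfect recall at O(n) cost per query.
# HNSW instead walks a layered proximity graph to approximate this result
# while touching far fewer vectors.

import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flat_search(query: list[float], index: dict[str, list[float]], k: int = 1):
    """Brute-force search: score all entries, return the k most similar."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy 3-dimensional "embeddings" for three market documents.
index = {
    "bond-report": [0.9, 0.1, 0.0],
    "equity-note": [0.1, 0.9, 0.2],
    "fx-summary": [0.0, 0.2, 0.9],
}
print(flat_search([0.85, 0.15, 0.05], index))  # → ['bond-report']
```

For a small, low-index-count dataset the O(n) cost is tolerable per query, but it grows linearly with both corpus size and query load, which is why HNSW wins on sustained performance.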
A company uses Amazon Bedrock to build a Retrieval Augmented Generation (RAG) system. The RAG system uses an Amazon Bedrock knowledge base that is based on an Amazon S3 bucket as the data source for emergency news video content. The system retrieves transcripts, archived reports, and related documents from the S3 bucket.
The RAG system uses state-of-the-art embedding models and a high-performing retrieval setup. However, users report slow responses and irrelevant results, which cause decreased user satisfaction. The company notices that vector searches are evaluating too many documents across too many content types and over long periods of time.
The company determines that the underlying models will not benefit from additional fine tuning. The company must improve retrieval accuracy by applying smarter constraints. The company wants a solution that requires minimal changes to the existing architecture.
Which solution will meet these requirements?
Correct Option: C
Reasoning: Amazon Bedrock knowledge bases inherently support metadata-aware filtering. By indexing S3 object metadata (e.g., content type, timestamp, origin), the retrieval process can apply specific filters before or during the vector search. This directly allows for "smarter constraints" to scope down results, reducing the number of documents evaluated, improving relevance, and speeding up responses with minimal architectural change.
Why the other choices are incorrect:
- Option A is incorrect: Enhancing embeddings improves the quality of semantic similarity but does not address the root cause of too many documents being evaluated. It doesn't provide a mechanism to apply constraints to limit the search scope.
- Option B is incorrect: Migrating to Amazon OpenSearch Service from an existing Bedrock knowledge base constitutes a significant architectural change. This violates the explicit requirement for "minimal changes to the existing architecture."
- Option D is incorrect: Migrating to an Amazon Q Business index involves a substantial architectural shift. This solution directly contradicts the requirement for implementing a solution with "minimal changes" to the current RAG architecture.
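To make the metadata-filtering constraint concrete, the sketch below builds a retrieval configuration in the shape used by the Knowledge Bases Retrieve API. The filter structure follows the documented `vectorSearchConfiguration`, but the metadata keys (`contentType`, `year`) are hypothetical and would come from metadata attached to the S3 source documents.

```python
# Sketch: a metadata filter that scopes the vector search to one content type
# and a recent time range before similarity scoring, shrinking the candidate
# set. Field shape follows the Knowledge Bases Retrieve API; metadata keys
# are hypothetical examples.

def build_retrieval_config(content_type: str, min_year: int, k: int = 5) -> dict:
    """Build a retrievalConfiguration that combines two metadata constraints
    with andAll, so only matching documents are considered for similarity."""
    return {
        "vectorSearchConfiguration": {
            "numberOfResults": k,
            "filter": {
                "andAll": [
                    {"equals": {"key": "contentType", "value": content_type}},
                    {"greaterThan": {"key": "year", "value": min_year}},
                ]
            },
        }
    }

config = build_retrieval_config("transcript", 2022)
print(config["vectorSearchConfiguration"]["filter"])
```

Because the filter is applied within the existing knowledge base retrieval call, no new data store or pipeline is introduced, which is what keeps the architectural change minimal.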
An enterprise application uses an Amazon Bedrock foundation model (FM) to process and analyze 50 to 200 pages of technical documents. Users are experiencing inconsistent responses and receiving truncated outputs when processing documents that exceed the FM's context window limits.
Which solution will resolve this problem?
Premium Solution Locked
Unlock all 86 answers & explanations
A financial services company needs to build a document analysis system that uses Amazon Bedrock to process quarterly reports. The system must analyze financial data, perform sentiment analysis, and validate compliance across batches of reports. Each batch contains 5 reports. Each report requires multiple foundation model (FM) calls. The solution must finish the analysis within 10 seconds for each batch. Current sequential processing takes 45 seconds for each batch.
Which solution will meet these requirements?
Premium Solution Locked
Unlock all 86 answers & explanations
A company is building a generative AI (GenAI) application that produces content based on a variety of internal and external data sources. The company wants to ensure that the generated output is fully traceable. The application must support data source registration and enable metadata tagging to attribute content to its original source. The application must also maintain audit logs of data access and usage throughout the pipeline.
Which solution will meet these requirements?
Premium Solution Locked
Unlock all 86 answers & explanations
A company configures a landing zone in AWS Control Tower. The company handles sensitive data that must remain within the European Union. The company must use only the eu-central-1 Region. The company uses SCPs to enforce data residency policies. GenAI developers at the company are assigned IAM roles that have full permissions for Amazon Bedrock.
The company must ensure that GenAI developers can use the Amazon Nova Pro model through Amazon Bedrock only by using cross-Region inference (CRI) and only in eu-central-1. The company enables model access for the GenAI developer IAM roles in Amazon Bedrock. However, when a GenAI developer attempts to invoke the model through the Amazon Bedrock Chat/Text playground, the GenAI developer receives the following error.
User: arn:aws:sts::123456789012:assumed-role/AssumedDevRole/DevUserName
Action: bedrock:InvokeModelWithResponseStream
On resource(s): arn:aws:bedrock:eu-west-3::foundation-model/amazon.nova-pro-v1:0
Context: a service control policy explicitly denies the action
The company needs a solution to resolve the error. The solution must retain the company's existing governance controls and must provide precise access control. The solution must comply with the company's existing data residency policies.
Which combination of solutions will meet these requirements? (Choose two.)
Premium Solution Locked
Unlock all 86 answers & explanations
A company is designing an API for a generative AI (GenAI) application that uses a foundation model (FM) that is hosted on a managed model service. The API must stream responses to reduce latency, enforce token limits to manage compute resource usage, and implement retry logic to handle model timeouts and partial responses.
Which solution will meet these requirements with the LEAST operational overhead?
Premium Solution Locked
Unlock all 86 answers & explanations
A company is developing a new AI-powered application that needs to integrate with various specialized tools. These tools currently run as Model Context Protocol (MCP) servers on the local machines of developers and do not maintain states between invocations. The company plans to deploy each MCP server as an AWS Lambda function to support the company's production application.
The solution must be accessible to both internal applications and authorized third-party partners. The solution must use strict authentication and authorization controls.
Which additional steps will meet these requirements with the LEAST operational overhead?
Premium Solution Locked
Unlock all 86 answers & explanations
A company provides a service that helps users from around the world discover new restaurants. The service has 50 million monthly active users. The company wants to implement a semantic search solution across a database that contains 20 million restaurants and 200 million reviews. The company currently stores the data in a PostgreSQL database.
The solution must support complex natural language queries and return results for at least 95% of queries within 500 ms. The solution must maintain data freshness for restaurant details that update hourly. The solution must also scale cost-effectively during peak usage periods.
Which solution will meet these requirements with the LEAST development effort?
Premium Solution Locked
Unlock all 86 answers & explanations
A company is building a legal research AI assistant that uses Amazon Bedrock with an Anthropic Claude foundation model (FM). The AI assistant must retrieve highly relevant case law documents to augment the FM's responses. The AI assistant must identify semantic relationships between legal concepts, specific legal terminology, and citations. The AI assistant must perform quickly and return precise results.
Which solution will meet these requirements?
Premium Solution Locked
Unlock all 86 answers & explanations
Full Question Bank Locked
You have reached the end of the free study guide preview. Upgrade now to unlock all 86 questions and the full simulation engine.
Certification Path
Related Certifications
Customer Reviews
Global Community Feedback
David M.
"The practice engine is incredible. It feels exactly like the real testing environment and helped me build so much confidence."
Sarah J.
"The PDF is very well organized and the explanations for the answers are actually helpful, not just random text."
Michael C.
"I was skeptical, but the content is high quality and definitely worth the price. I passed on my first try!"