- Unified gateway for 1600+ models while keeping AgentCore’s runtime, gateway, and memory services intact
- Production telemetry with traces, logs, and metrics for every AgentCore invocation via Portkey headers and metadata @integrations/agents/strands.mdx#150-215
- Reliability controls (fallbacks, load balancing, timeouts) that shield your agents from provider failures @integrations/agents/strands.mdx#229-303
- Centralized governance over provider keys, spend, and access policies using Portkey API keys across AgentCore environments @integrations/agents/strands.mdx#490-519
AgentCore Developer Guide: review AWS's toolkit for packaging and deploying runtimes, gateway tools, and memory services.
## Quick start

1. **Provision prerequisites**
   - Install your preferred framework (for example, `openai-agents`, `strands-agents`, `langgraph`, or `google-adk`)
   - Add Portkey's SDK: `pip install portkey-ai`
   - Unpack the AgentCore starter toolkit (provides `bedrock_agentcore.runtime` helpers for local testing and packaging)
2. **Store credentials securely**
   - Create or reuse a Portkey API key with your desired routing config @integrations/agents/strands.mdx#84-142
   - Store the key (and optional Portkey Config ID) in AWS Secrets Manager, then reference it from your AgentCore runtime environment variables (for example, `PORTKEY_API_KEY`); a retrieval sketch follows these steps
3. **Wire Portkey into your agent**
   Wrap your agent runnable with `BedrockAgentCoreApp` and point the underlying OpenAI-compatible client at Portkey (see the wiring sketch after these steps).
4. **Package & deploy**
   - Follow the AgentCore toolkit instructions to zip your runtime and upload it to Amazon S3
   - Create an AgentCore Runtime application that references the bundle and environment variables
   - Trigger the agent from the AgentCore console, API, or Gateway tools; all LLM traffic now flows through Portkey
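A minimal retrieval sketch for step 2, assuming the key is stored as a JSON secret; the secret name and field below are placeholders:

```python
import json
import os

import boto3

def load_portkey_key(secret_id: str = "agentcore/portkey") -> str:
    """Fetch the Portkey API key stored in Secrets Manager (step 2)."""
    client = boto3.client("secretsmanager")
    secret = client.get_secret_value(SecretId=secret_id)
    return json.loads(secret["SecretString"])["PORTKEY_API_KEY"]

# Expose the key the way the rest of the runtime expects it.
os.environ.setdefault("PORTKEY_API_KEY", load_portkey_key())
```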
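And a wiring sketch for step 3, assuming the starter toolkit's `bedrock_agentcore.runtime` helpers and an OpenAI-compatible client; the model slug is an example from your Portkey provider setup, not a fixed value:

```python
import os

from bedrock_agentcore.runtime import BedrockAgentCoreApp
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

app = BedrockAgentCoreApp()

# Any OpenAI-compatible client works: point base_url at Portkey's gateway
# and authenticate with the Portkey API key instead of raw provider keys.
client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="not-needed",  # provider auth is resolved inside Portkey
    default_headers=createHeaders(api_key=os.environ["PORTKEY_API_KEY"]),
)

@app.entrypoint
def invoke(payload):
    """AgentCore runtime entrypoint; every LLM call routes through Portkey."""
    response = client.chat.completions.create(
        model="@openai-prod/gpt-4o",  # example provider slug
        messages=[{"role": "user", "content": payload.get("prompt", "")}],
    )
    return {"result": response.choices[0].message.content}

if __name__ == "__main__":
    app.run()
```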
> [!TIP]
> AgentCore bundles tools, memory, and runtime services. Portkey only replaces the LLM transport, so you can keep using AgentCore Gateway, Memory, and Identity features while benefiting from Portkey's routing and analytics.
## Integration patterns
| Scenario | Recommended approach | Notes |
|---|---|---|
| Entire AgentCore app should use Portkey | Register a global Portkey client (as shown above) so every LLM call flows through Portkey @integrations/agents/openai-agents.mdx#124-160 | |
| Some requests should use native Bedrock models | Keep the global client pointing at Bedrock and wrap specific runs with a custom Portkey-backed model provider @integrations/agents/openai-agents.mdx#165-211 | |
| Different agents inside the runtime need different providers | Instantiate per-agent model objects with bespoke Portkey headers/configs @integrations/agents/openai-agents.mdx#213-244 | See the sketch below the table |
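For the third pattern, here is a sketch of per-agent clients, assuming saved Portkey Configs; the config IDs and metadata labels are placeholders:

```python
import os

from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

def portkey_client(**overrides) -> OpenAI:
    """Build an OpenAI-compatible client with agent-specific Portkey headers."""
    return OpenAI(
        base_url=PORTKEY_GATEWAY_URL,
        api_key="not-needed",  # auth travels in the Portkey headers
        default_headers=createHeaders(
            api_key=os.environ["PORTKEY_API_KEY"], **overrides
        ),
    )

# Research agent: routed by a config that prefers a long-context model.
research_client = portkey_client(config="cfg-research-placeholder")

# Summarizer agent: a cheaper route plus its own trace metadata.
summary_client = portkey_client(
    config="cfg-summary-placeholder",
    metadata={"agent": "summarizer"},
)
```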
## Production features to enable
### Observability
Attach trace IDs and metadata directly from your AgentCore entrypoint so Portkey groups every tool call, LLM exchange, and retry under a single execution record. @integrations/agents/strands.mdx#150-215
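A sketch of what that looks like using the OpenAI SDK's per-request headers; the agent name, environment label, and model slug are placeholders:

```python
import os

from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(base_url=PORTKEY_GATEWAY_URL, api_key="not-needed")

def traced_completion(session_id: str, prompt: str):
    # A stable trace_id groups every LLM call made while serving one
    # AgentCore invocation; metadata keys are free-form labels.
    headers = createHeaders(
        api_key=os.environ["PORTKEY_API_KEY"],
        trace_id=session_id,
        metadata={"agent": "support-bot", "environment": "production"},
    )
    return client.chat.completions.create(
        model="@openai-prod/gpt-4o",  # example slug
        messages=[{"role": "user", "content": prompt}],
        extra_headers=headers,  # the OpenAI SDK forwards these per request
    )
```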
### Reliability controls

Apply Portkey Configs for fallbacks, retries, load balancing, or conditional routing to keep AgentCore agents resilient to provider hiccups. You can attach the config globally via the API key or per-request via `createHeaders`. @integrations/agents/strands.mdx#229-303
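A sketch of an inline fallback config, assuming the strategy/targets shape of Portkey Configs; the provider slugs and model names are illustrative, and a saved Config ID string works in place of the dict:

```python
import os

from portkey_ai import createHeaders

# Fall back across two providers, with retries; mirrors the strategy/targets
# shape of Portkey Configs. Slugs and models below are placeholders.
reliability_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "@openai-prod", "override_params": {"model": "gpt-4o"}},
        {"provider": "@anthropic-prod", "override_params": {"model": "claude-sonnet-4"}},
    ],
    "retry": {"attempts": 3},
}

headers = createHeaders(
    api_key=os.environ["PORTKEY_API_KEY"],
    config=reliability_config,  # or pass a saved Config ID string instead
)
```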
### Model interoperability
Switch providers without touching your AgentCore business logic by swapping the Portkey config or provider slug (`@openai-prod`, `@anthropic-prod`, `@gemini-fast`, etc.). The agent definition stays unchanged. @integrations/agents/openai-agents.mdx#757-802
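As a sketch, swapping providers is a one-string change on the Portkey-backed client; both slugs below are examples from a hypothetical provider setup:

```python
import os

from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="not-needed",
    default_headers=createHeaders(api_key=os.environ["PORTKEY_API_KEY"]),
)

prompt = [{"role": "user", "content": "Summarize today's escalations"}]

# Same agent logic, different provider: only the model slug changes.
reply_a = client.chat.completions.create(model="@openai-prod/gpt-4o", messages=prompt)
reply_b = client.chat.completions.create(model="@anthropic-prod/claude-sonnet-4", messages=prompt)
```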
### Governance & access control
Distribute Portkey API keys (not raw provider keys) to AgentCore teams, enforce spend budgets, and audit usage across every invocation emitted by the runtime. @integrations/agents/strands.mdx#490-529

## Compatibility checklist
- ✅ Agent frameworks: Strands, OpenAI Agents (Python/TypeScript), LangGraph, CrewAI, Pydantic AI, Google ADK, or anything else that can target an OpenAI-compatible client
- ✅ AgentCore services: Runtime, Gateway, Memory, Identity all continue to work; Portkey only handles LLM transport
- ✅ MCP / A2A tools: Tool invocations remain unchanged; Portkey runs alongside AgentCore Gateway tool definitions
- ✅ Foundation models: Route to Amazon Bedrock, OpenAI, Anthropic, Google Gemini, Mistral, Cohere, or on-prem models by updating your Portkey config; no redeploy required
> [!NOTE]
> For best performance, deploy your Portkey gateway in the same AWS Region as your AgentCore runtime (for example, use `customHost` pointing at a private Portkey data plane) to minimize cross-region latency.
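One way to do that, assuming a self-hosted Portkey data plane reachable at a private hostname (the URL below is a placeholder):

```python
import os

from openai import OpenAI
from portkey_ai import createHeaders

# Swap the hosted gateway URL for your in-Region data plane endpoint.
client = OpenAI(
    base_url="https://portkey.internal.example.com/v1",  # placeholder host
    api_key="not-needed",
    default_headers=createHeaders(api_key=os.environ["PORTKEY_API_KEY"]),
)
```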
## Next steps
- Monitor test invocations in the Portkey dashboard to validate tracing, metadata, and costs
- Attach Portkey guardrails (PII redaction, schema validation, content filters) if your AgentCore agents need compliance controls
- Expand beyond a single model by adding fallbacks or conditional routing rules in Portkey Configs
- Coordinate with AWS AgentCore Gateway to expose Portkey-observed tools for deeper analytics across both platforms

