
Amplitude

Official

Search, access, and get insights on your Amplitude data

Data & analytics · 44 tools · Auth: OAuth

Tools (44)

get_charts

Retrieve full chart objects by their IDs using the chart service directly WHEN TO USE: - You want to retrieve a full chart definition. - Useful if you want to base an ad hoc query dataset analysis on an existing chart. INSTRUCTIONS: - Use the search tool to find the IDs of charts you want to retrieve, then call this tool with the IDs.

save_chart_edits

Save temporary chart edits as permanent charts WHEN TO USE: - You have chart edit IDs from query_dataset and want to save them as permanent charts - You need to add charts to dashboards or notebooks (which require saved chart IDs) WORKFLOW: 1. Use query_dataset to create ad-hoc analyses (returns editId) 2. Use save_chart_edits to convert editIds into permanent chartIds 3. Use chartIds in create_dashboard or create_notebook IMPORTANT: - All AI-generated charts are saved as unpublished in your personal space - Charts require human review before publishing to shared spaces - Use bulk saving to reduce tool calls when creating multiple charts
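The editId → chartId → dashboard workflow above can be sketched as a small orchestration helper. This is a hypothetical sketch: the `client` object and its method names stand in for the MCP tool calls (query_dataset, save_chart_edits, create_dashboard), and the response shapes are assumed from the descriptions in this catalog.

```python
def save_and_dashboard(client, analyses, dashboard_name):
    """Run ad-hoc analyses, persist them, and place them on a new dashboard."""
    # 1. query_dataset returns a temporary editId per analysis
    edit_ids = [client.query_dataset(a)["editId"] for a in analyses]
    # 2. one bulk save_chart_edits call converts editIds to permanent chartIds
    chart_ids = [c["chartId"] for c in client.save_chart_edits(edit_ids)]
    # 3. only permanent chartIds may be placed on a dashboard (375 is an allowed height)
    rows = [{"height": 375, "items": [{"type": "chart", "chartId": cid, "width": 12}]}
            for cid in chart_ids]
    return client.create_dashboard(name=dashboard_name, rows=rows)
```

Passing an editId straight to create_dashboard is the failure mode this helper avoids: the dashboard step only ever sees the saved chartIds.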

get_cohorts

Get detailed information about specific cohorts by their IDs. WHEN TO USE: - You want to retrieve full cohort definitions after finding them via search. - You need detailed cohort information including definition, metadata, and audience details. INSTRUCTIONS: - Use the search tool to find the IDs of cohorts you want to retrieve, then call this tool with the IDs. - This returns full cohort objects with all details, unlike the search tool which returns summary information.

create_cohort

Create a new cohort with the provided definition and configuration. WHEN TO USE: - You need to create a new audience segment based on user behavior or properties - You want to save a cohort definition for reuse in charts, experiments, or other analyses - You need to create a cohort from specific conditions like events, user properties, or funnels LEARNING FROM EXISTING COHORTS: - Before creating a cohort, use the "search" MCP tool to find relevant cohorts by name/description, then use get_cohorts with those IDs to analyze existing cohort definitions and structure - Study the structure and patterns of existing cohort definitions to understand the correct payload format - Pay attention to how different condition types (event, user_property, other_cohort, etc.) are structured - Learn from the andClauses/orClauses patterns and how they combine different conditions - Use existing cohorts as templates for similar use cases to ensure proper schema compliance

EXAMPLES:

- Create an event-based cohort (users who performed a specific event >= 1 time in the past 30 days):

  {
    "app_id": "365742",
    "name": "xuan-simple-event-cohort",
    "definition": {
      "version": 3,
      "andClauses": [{
        "negated": false,
        "orClauses": [{
          "type": "event",                              // Condition type: event-based
          "metric": null,                               // No specific metric aggregation
          "offset": 0,                                  // No time offset from the base time range
          "group_by": [],                               // No grouping/segmentation by event properties
          "interval": 1,                                // Time granularity: 1 = DAY (daily buckets)
          "operator": ">=",                             // Event count operator: greater than or equal
          "time_type": "rolling",                       // Rolling time window (last N days from now)
          "time_value": 30,                             // Time range: 30 units of the interval (30 days)
          "type_value": "xuan-test-httpapi-event-type", // The specific event name to match
          "operator_value": 1,                          // Minimum event count threshold: >= 1 occurrence
          "exclude_current_interval": false             // Include events from the current day
        }]
      }],
      "cohortType": "UNIQUES",                          // Count unique users (not event occurrences)
      "countGroup": {"name": "User", "is_computed": false}, // Group by User entities
      "referenceFrameTimeParams": {}                    // No additional time frame parameters
    },
    "type": "redshift",                                 // Cohort computation engine type (optional - defaults to "redshift")
    "published": true                                   // Make cohort discoverable to others
  }
  // EXPLANATION: This creates a cohort of users who performed the event "xuan-test-httpapi-event-type"
  // at least once in the last 30 days. The interval=1 means we evaluate this on a daily basis,
  // so the system looks at each day in the past 30 days to see if the user performed the event.

- Create a complex cohort with multiple conditions (organizations in another cohort OR new active, AND performed an event):

  {
    "app_id": "365742",
    "name": "xuan-test",
    "definition": {
      "version": 3,
      "andClauses": [{                                  // First AND condition group
        "negated": false,
        "orClauses": [{                                 // First OR condition: existing cohort membership
          "type": "other_cohort",                       // Condition type: reference to another cohort
          "offset": 0,                                  // No time offset
          "interval": 1,                                // Time granularity: 1 = DAY (daily evaluation)
          "time_type": "rolling",                       // Rolling time window (last N days)
          "time_value": 365,                            // Time range: 365 days (1 year lookback)
          "cohort_keys": ["rs4d2xg5"],                  // Reference to cohort ID "rs4d2xg5"
          "exclude_current_interval": false             // Include current day in evaluation
        }, {                                            // Second OR condition: new/active users
          "type": "new_active",                         // Condition type: new or active user status
          "offset": 0,                                  // No time offset
          "interval": 1,                                // Time granularity: 1 = DAY (daily buckets)
          "time_type": "absolute",                      // Absolute time range (specific dates)
          "time_value": [1760572800, 1761955199],       // Unix timestamps: specific date range
          "type_value": "new",                          // Filter for "new" users (vs "active")
          "exclude_current_interval": false             // Include current day
        }]
      }, {                                              // Second AND condition group
        "negated": false,
        "orClauses": [{                                 // Event-based condition
          "type": "event",                              // Condition type: event-based
          "metric": null,                               // No specific metric aggregation
          "offset": 0,                                  // No time offset
          "group_by": [],                               // No event property grouping
          "interval": 1,                                // Time granularity: 1 = DAY (daily buckets)
          "operator": ">=",                             // Event count operator: greater than or equal
          "time_type": "rolling",                       // Rolling time window (last N days)
          "time_value": 30,                             // Time range: 30 days
          "type_value": "test event",                   // Event name to match
          "operator_value": 1,                          // Minimum event count: >= 1 occurrence
          "exclude_current_interval": false             // Include current day
        }]
      }],
      "cohortType": "UNIQUES",                          // Count unique organizations (not occurrences)
      "countGroup": {"name": "org id", "is_computed": false}, // Group by Organization entities
      "referenceFrameTimeParams": {}                    // No additional time frame parameters
    },
    "type": "redshift",                                 // Cohort computation engine type (optional - defaults to "redshift")
    "published": true                                   // Make cohort discoverable to others
  }
  // EXPLANATION: This creates a complex cohort using boolean logic:
  // (Organizations in cohort "rs4d2xg5" in the last 365 days OR new users in the specified date range)
  // AND (Organizations that performed "test event" >= 1 time in the last 30 days)
  //
  // The interval=1 in all conditions means daily granularity:
  // - Cohort membership is checked daily over 365 days
  // - New user status is evaluated daily within the absolute date range
  // - Event occurrences are counted daily over the last 30 days
  //
  // Note: This cohort counts organizations (org id) rather than individual users.
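The event-based cohort pattern can also be generated programmatically. A minimal Python sketch that builds the same version-3 payload shape as the first example above (the helper name and default arguments are illustrative, not part of the API):

```python
def event_cohort_payload(app_id, name, event_type, days=30, min_count=1):
    """Build a version-3 cohort definition for users who performed
    `event_type` at least `min_count` times in the last `days` days."""
    return {
        "app_id": app_id,
        "name": name,
        "definition": {
            "version": 3,
            "andClauses": [{
                "negated": False,
                "orClauses": [{
                    "type": "event",            # event-based condition
                    "metric": None,             # no metric aggregation
                    "offset": 0,
                    "group_by": [],
                    "interval": 1,              # 1 = DAY (daily buckets)
                    "operator": ">=",
                    "time_type": "rolling",     # rolling window of the last N days
                    "time_value": days,
                    "type_value": event_type,   # the event name to match
                    "operator_value": min_count,
                    "exclude_current_interval": False,
                }],
            }],
            "cohortType": "UNIQUES",            # count unique users
            "countGroup": {"name": "User", "is_computed": False},
            "referenceFrameTimeParams": {},
        },
        "published": True,
    }
```

Additional andClauses/orClauses entries can be appended to the returned dict for compound conditions, as in the second example above.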

get_context

Get information about the current user, organization, and list of accessible projects. WHEN TO USE: - "What projects do I have access to?" - "Show me my organization details" - "What is my user role?" RETURNS: - User details (email, name, role) - Organization info (name, plan, quota) - List of accessible projects (just names and IDs) DO NOT USE FOR: - Project-specific settings (timezone, currency, sessions) → use 'get_project_context' instead - Running analytics queries → use 'query_dataset' or 'query_amplitude_data' instead - Finding charts/dashboards/cohorts → use 'search' instead

get_project_context

Get project-specific settings and configuration for a specific project. WHEN TO USE: - "What timezone is this project using?" - "What are the currency settings?" - "How are sessions defined?" - "What is the week start day?" RETURNS: - Timezone and date settings (timezone, week start, quarter start) - Currency settings (locale, target currency) - Session definition (timeout, custom property) - Source projects (data lineage) - Project AI context (business guidelines) REQUIRES: projectId parameter DO NOT USE FOR: - Listing all projects → use 'get_context' instead - User/org info → use 'get_context' instead

get_dashboard

Get specific dashboards and all their charts WHEN TO USE: - You want to retrieve full dashboard definitions including chart IDs that you can query and analyze individually. INSTRUCTIONS: - Use the search tool to find the IDs of dashboards you want to retrieve, then call this tool with the IDs. - Very commonly you will want to query the charts after retrieving a dashboard.

create_dashboard

Create a comprehensive dashboard with charts, rich text, and custom layout WHEN TO USE: - After the user has searched existing content or explored some analysis in Amplitude - The user has explicitly requested to create a dashboard CRITICAL - CHART IDs MUST BE FROM SAVED CHARTS: - Only use chartIds from SAVED/PERMANENT charts - these are returned by save_chart_edits (in the chartId field) or create_chart - DO NOT use editIds from query_dataset - these are temporary IDs that cannot be added to dashboards - DO NOT use the editId from query_dataset responses - you must first call save_chart_edits to get a permanent chartId - The typical workflow is: query_dataset (returns editId) → save_chart_edits (converts editId to permanent chartId) → create_dashboard (uses chartId) - If you use an editId instead of a saved chartId, the dashboard creation will fail with "NotFoundError: No chart" INSTRUCTIONS: - Provide a descriptive name for the dashboard - Use rows array where each row contains items in left-to-right order - Each item specifies width (3-12 columns). If width is omitted, items auto-fill remaining space - Each row must specify height in pixels. Only heights of 375, 500, 625, 750 are allowed - Total width of items in a row must not exceed 12 columns - Max 4 items per row (ensures minimum 3-column width per item) - Use chartMetas to configure chart display options (view type, annotations, etc.) - Return a link to the new dashboard in the response - DO NOT include static analysis in dashboard text content. 
Dashboards are meant to be long-lived, so a point-in-time insight does not help - DO group similar charts together and include a header and some text describing how to interpret the charts effectively MARKDOWN FORMAT: - Rich text content uses standard markdown syntax - Supported: headers (# ## ###), bold (**text**), italic (*text*), lists (- or 1.), links ([text](url)), code blocks (```), inline code (`code`) - Example: "# Overview\n\nThis dashboard shows **key metrics** for user engagement." LAYOUT EXAMPLES: - Full-width item: { height: 500, items: [{ type: 'chart', chartId: '123', width: 12 }] } - Two side-by-side: { height: 375, items: [{ type: 'chart', chartId: '1', width: 6 }, { type: 'rich_text', content: '# Notes', width: 6 }] } - Three columns: { height: 500, items: [{ width: 4 }, { width: 4 }, { width: 4 }] } - Auto-fill: { height: 375, items: [{ type: 'chart', chartId: '1' }, { type: 'chart', chartId: '2' }] } (each gets 6 columns)
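The layout constraints above (allowed heights, 3-12 column item widths, max 4 items per row, 12-column total) can be checked client-side before calling the tool. A small sketch, assuming the rules exactly as stated:

```python
ALLOWED_HEIGHTS = {375, 500, 625, 750}  # the only permitted row heights (px)

def validate_rows(rows):
    """Raise ValueError if any row violates the dashboard layout constraints."""
    for i, row in enumerate(rows):
        if row["height"] not in ALLOWED_HEIGHTS:
            raise ValueError(f"row {i}: height must be one of {sorted(ALLOWED_HEIGHTS)}")
        items = row["items"]
        if len(items) > 4:
            raise ValueError(f"row {i}: max 4 items per row")
        widths = [it["width"] for it in items if "width" in it]
        if any(not 3 <= w <= 12 for w in widths):
            raise ValueError(f"row {i}: item widths must be 3-12 columns")
        if sum(widths) > 12:
            raise ValueError(f"row {i}: total width exceeds 12 columns")
    return True
```

Items that omit width are skipped here since they auto-fill remaining space.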

edit_dashboard

Edit a dashboard's metadata and layout with optimistic concurrency protection. WHEN TO USE: - You already have a dashboard ID and want to update its name/description and/or content rows. INSTRUCTIONS: - Always call get_dashboard first to retrieve the dashboard's current lastModified and rows. - Pass the retrieved lastModified in expectedLastModified. - Metadata fields are only updated when values are not null/undefined. - Use one structural edit at a time via edit. EXAMPLES: - Metadata only: {"dashboardId":"123","expectedLastModified":1700000000,"metadata":{"name":"Q1 Dashboard"}} - Replace all rows: {"dashboardId":"123","expectedLastModified":1700000000,"edit":{"type":"set_rows","rows":[{"height":500,"items":[{"type":"chart","chartId":"abc","width":12}]}]}} - Update a row: {"dashboardId":"123","expectedLastModified":1700000000,"edit":{"type":"update_row","rowIndex":0,"row":{"height":500,"items":[{"type":"rich_text","content":"# Notes","width":12}]}}} - Insert a row: {"dashboardId":"123","expectedLastModified":1700000000,"edit":{"type":"insert_row","index":1,"row":{"height":375,"items":[{"type":"chart","chartId":"def","width":12}]}}} - Remove a row: {"dashboardId":"123","expectedLastModified":1700000000,"edit":{"type":"remove_row","rowIndex":2}} NOTES: - The request fails with a conflict if expectedLastModified is stale. - Response is intentionally compact to minimize context usage.
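The get → edit → retry-on-conflict pattern above might look like this in client code. The `client` methods mirror the get_dashboard/edit_dashboard tools, and `ConflictError` is a hypothetical stand-in for whatever conflict error the client surfaces when expectedLastModified is stale:

```python
class ConflictError(Exception):
    """Hypothetical: raised when expectedLastModified is stale."""

def edit_with_retry(client, dashboard_id, edit, retries=2):
    """Optimistic-concurrency loop: re-fetch lastModified and retry on conflict."""
    for _ in range(retries + 1):
        # always read the current lastModified immediately before editing
        current = client.get_dashboard(dashboard_id)
        try:
            return client.edit_dashboard(
                dashboardId=dashboard_id,
                expectedLastModified=current["lastModified"],
                edit=edit,
            )
        except ConflictError:
            continue  # someone else saved in between; fetch fresh state and retry
    raise RuntimeError("dashboard kept changing; giving up")
```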

create_experiment

Create a new experiment across one or more projects. INSTRUCTIONS: - If the user has not specified projects, prompt them to decide which projects to use - Creates a feature A/B test with control and treatment variants - Creates the same experiment in each specified project - Returns the experiment IDs and URLs for viewing in Amplitude EXAMPLES: - Basic A/B test: Provide projectIds, key, and name - Multiple projects: Provide array of projectIds to create experiment in each - With custom variants: Provide projectIds, key, name, and variants array - With links: Provide links array with url and title for each link (e.g., PRs, tickets, docs) - With deployments: Provide deploymentIds array to associate specific deployments (API keys) NOTES: - Experiment keys must be unique within each project - Variants default to 'control' and 'treatment' if not specified - Use get_deployments to retrieve available deployment IDs

get_deployments

Retrieve all deployments for the current project. Deployments are API keys for flags and experiments.

get_experiments

Retrieve specific experiments by their IDs. WHEN TO USE: - You want to retrieve additional information for experiments like state, decisions, etc. INSTRUCTIONS: - Use the search tool to find the IDs of experiments you want to retrieve, then call this tool with the IDs.

get_flags

Retrieve specific feature flags by their IDs. WHEN TO USE: - You want to retrieve full flag definitions including variants, metadata, and configuration details. INSTRUCTIONS: - Use the search tool to find the IDs of flags you want to retrieve, then call this tool with the IDs.

create_flags

Create multiple feature flags in batch. INSTRUCTIONS: - Use get_context to get available project IDs - Always prompt the user if they haven't specified a project - Use get_deployments to find deployment IDs to associate with the flag EXAMPLES: - {"flags": [{"projectId": "123456", "key": "new-feature", "name": "New Feature"}]} NOTES: - Flag keys must be unique within each project - Flags are disabled by default - An enabled flag does not mean that all users will see it.

update_flag

Update a feature flag or experiment with comprehensive options including metadata, variants, testers, and deployments. Works for both feature flags and experiments — all experiments are also flags and share the same flag ID. INSTRUCTIONS: - Use the search tool with entityTypes: ['FLAG'] or entityTypes: ['EXPERIMENT'] to find IDs first - Use get_deployments to find deployment IDs if modifying deployments - Only include sections you want to update (flagConfig, variants, testers, deployments) - To update experiment-specific settings (metrics), use update_experiment instead EXAMPLES: - Update name: {"flagId": "abc123", "flagConfig": {"name": "New Name"}} - Enable flag/experiment: {"flagId": "abc123", "flagConfig": {"enabled": true}} - Add variant: {"flagId": "abc123", "variants": {"create": [{"variantKey": "variant-c"}]}} - Add tester: {"flagId": "abc123", "testers": {"add": [{"variantKey": "control", "userOrDeviceIds": ["user123"]}]}} - Add link: {"flagId": "abc123", "links": {"add": [{"url": "https://jira.example.com/PROJ-123", "title": "JIRA Ticket"}]}} NOTES: - At least one variant must remain after deletions - For testers to receive a variant the flag must be enabled - All experiments are also flags and have an associated flag ID — use this tool for metadata, variants, testers, and deployments on experiments

update_experiment

Set metrics on an experiment. For other changes (name, description, enabled, variants, testers, deployments, links), use update_flag — it works for both flags and experiments. INSTRUCTIONS: - Use the search tool with entityTypes: ['EXPERIMENT'] to find experiment IDs first - Use create_metric to create metrics before attaching them here - Metrics are replaced entirely — include all metrics you want on the experiment - Only one metric can have recommendation=true (the primary metric) EXAMPLES: - Set primary + guardrail: {"experimentId": "abc123", "metrics": [{"metricId": "met1", "metricIndex": 0, "recommendation": true, "analysisParams": {"metricGoalType": "success", "testDirection": "larger", "minDetectableEffect": 0.02}}, {"metricId": "met2", "metricIndex": 1, "analysisParams": {"metricGoalType": "guardrail", "testDirection": "smaller", "minDetectableEffect": 0.02}}]} - Replace all metrics: {"experimentId": "abc123", "metrics": [{"metricId": "met3", "metricIndex": 0, "recommendation": true, "analysisParams": {"metricGoalType": "success", "testDirection": "larger", "minDetectableEffect": 0.02}}]} - Clear all metrics: {"experimentId": "abc123", "metrics": []} NOTES: - This tool only works on experiments (not feature flags). - Use get_experiments to see current metrics before updating.

create_metric

Create a new metric in a project. WHEN TO USE: - You need to create a metric for use in experiments or dashboards. - You want to define a reusable KPI like "Daily Active Users" or "Error Rate". - The user wants a success or guardrail metric for an A/B test. INSTRUCTIONS: - Provide the projectId and a descriptive name. - Choose a metricType: "UNIQUES" (unique users), "TOTALS" (total event count), "FORMULA" (custom formula), "PROPSUM"/"PROPAVG" (property aggregation), "RETENTION" (retention metrics), or "CONVERSION" (funnel / multi-step conversion metrics). - For UNIQUES or TOTALS: provide a single event with optional filters. - For FORMULA: provide multiple events (max 6) and a formula string. Each event is referenced by letter (A, B, C...). Available formula functions: UNIQUES(A), TOTALS(A), PROPSUM(A), PROPAVG(A), PROPMIN(A), PROPMAX(A), PROPCOUNT(A), PROPCOUNTAVG(A), REVENUETOTAL(A). - For FORMULA with property aggregation functions (PROPSUM, PROPAVG, etc.): the event must include a group_by specifying which property to aggregate. Example: events=[{event_type: "Purchase", group_by: [{type: "event", value: "revenue"}]}], formula="PROPSUM(A)". - For PROPSUM/PROPAVG: provide a single event AND an aggregationProperty specifying which numeric property to aggregate. - For property min/max/count operations, use a FORMULA metric with PROPMIN(A), PROPMAX(A), PROPCOUNT(A), or PROPCOUNTAVG(A). - For RETENTION: provide a startEvent and returnEvent, plus returnOn and returnOnInterval. Set returnOnInterval from the unit the user mentions: "day" → 1, "week" → 7, "month" → 30. For one-day retention set returnOn=1 and returnOnInterval=1; for one-week retention set returnOn=1 and returnOnInterval=7; for one-month retention set returnOn=1 and returnOnInterval=30.
- For CONVERSION (funnel): provide funnelEvents (at least 2 ordered steps), funnelMode ("ordered" = this order, "unordered" = any order, "sequential" = exact order), conversionSeconds (max time to complete funnel, e.g. 86400 for 1 day), and funnelPathsToCollect ("UNIQUES" for unique users completing the funnel, "TOTALS" for total funnel conversion counts). Optional funnelConstantProperties holds properties constant across steps (holding constant). Optional funnelComputeProperty + funnelComputePropFunction ("SUM" or "AVG") aggregate a numeric property on the last funnel step (e.g. sum revenue on the final purchase step). - For CONVERSION (funnel): if funnelComputeProperty is provided, funnelComputePropFunction must also be provided. - For CONVERSION (funnel): if funnelConstantProperties is provided, make sure that event property or user property exists for every event in the funnel. - Set isExperimentMetric=true when the metric is intended for use in an experiment. This enables experiment-specific formula validation. - ALWAYS use the search or get_events tool first to discover valid event names before calling this tool. Do not guess event names. 
EXAMPLES: - Unique users for an event: metricType="UNIQUES", event={event_type: "Button Clicked"} - Error rate formula: metricType="FORMULA", events=[{event_type: "API Call"}, {event_type: "API Error"}], formula="TOTALS(B)/TOTALS(A)" - Revenue per user formula: metricType="FORMULA", events=[{event_type: "Purchase", group_by: [{type: "event", value: "revenue"}]}, {event_type: "_active"}], formula="PROPSUM(A)/UNIQUES(B)" - Total event count: metricType="TOTALS", event={event_type: "Purchase Completed"} - Sum of a property: metricType="PROPSUM", event={event_type: "Purchase Completed"}, aggregationProperty={type: "event", value: "revenue"} - Average of a property: metricType="PROPAVG", event={event_type: "Page Viewed"}, aggregationProperty={type: "event", value: "load_time"} - Count of a property: metricType="FORMULA", events=[{event_type: "Purchase", group_by: [{type: "event", value: "item_id"}]}], formula="PROPCOUNT(A)" - Retention metrics: metricType="RETENTION", startEvent={event_type: "Purchase"}, returnEvent={event_type: "Purchase"}, returnOn=7, returnOnInterval=7 - Funnel conversion (unique users): metricType="CONVERSION", funnelEvents=[{event_type: "View Item"}, {event_type: "Add to Cart"}, {event_type: "Purchase"}], funnelMode="ordered", conversionSeconds=604800, funnelPathsToCollect="UNIQUES" - Funnel conversion (totals): same as above with funnelPathsToCollect="TOTALS" - Funnel with holding constant: add funnelConstantProperties=[{type: "event", value: "country", group_type: ""}] - Funnel with sum of last-step property: add funnelComputeProperty={type: "event", value: "revenue", group_type: ""}, funnelComputePropFunction="SUM" NOTES: - The returned metric ID can be used with create_experiment's projectMetrics parameter. - Metrics with duplicate names or identical definitions within the same project will be rejected. - FORMULA metrics are validated against the backend before creation. 
The validation checks formula syntax, event references, and metric compatibility. - PROPSUM/PROPAVG aggregation properties must be numeric.
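Since a FORMULA metric references its events by letter, a quick client-side sanity check can catch undefined references before submission. A sketch assuming the letter convention described above; this is not Amplitude's actual backend validation, which also checks syntax and metric compatibility:

```python
import re
import string

def validate_formula(formula, events):
    """Check that a FORMULA metric string only references provided events (A, B, C...)."""
    if len(events) > 6:
        raise ValueError("FORMULA metrics support at most 6 events")
    # the Nth event is addressed by the Nth uppercase letter
    allowed = set(string.ascii_uppercase[:len(events)])
    # formula functions take a single event letter, e.g. TOTALS(B)
    referenced = set(re.findall(r"\(([A-Z])\)", formula))
    unknown = referenced - allowed
    if unknown:
        raise ValueError(f"formula references undefined events: {sorted(unknown)}")
    return True
```

For the error-rate example above, `validate_formula("TOTALS(B)/TOTALS(A)", events)` passes with two events and fails with one.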

create_notebook

Create a new notebook WHEN TO USE: - The user wants to create a new notebook - The user wants to generate an interactive report with specific content from Amplitude data CRITICAL - CHART IDs MUST BE FROM SAVED CHARTS: - Only use chartIds from SAVED/PERMANENT charts - these are returned by save_chart_edits (in the chartId field) or create_chart - DO NOT use editIds from query_dataset - these are temporary IDs that cannot be added to notebooks - DO NOT use the editId from query_dataset responses - you must first call save_chart_edits to get a permanent chartId - The typical workflow is: query_dataset (returns editId) → save_chart_edits (converts editId to permanent chartId) → create_notebook (uses chartId) - If you use an editId instead of a saved chartId, the notebook creation will fail with "NotFoundError: No chart" INSTRUCTIONS: - Provide a name for the notebook - Use rows array where each row contains items in left-to-right order - Each item specifies width (3-12 columns). If width is omitted, items auto-fill remaining space - Total width of items in a row must not exceed 12 columns - Max 4 items per row (ensures minimum 3-column width per item) - The tool will create the notebook and return the new notebook ID and details - Return a link to the new notebook in the response MARKDOWN FORMAT: - Rich text content uses standard markdown syntax - Supported: headers (# ## ###), bold (**text**), italic (*text*), lists (- or 1.), links ([text](url)), code blocks (```), inline code (`code`) - Example: "# Analysis Summary\n\nKey findings show **significant growth** in user engagement." 
LAYOUT EXAMPLES: - Full-width item: { items: [{ type: 'chart', chartId: '123', width: 12 }] } - Two side-by-side: { items: [{ type: 'chart', chartId: '1', width: 6 }, { type: 'rich_text', content: '# Notes', width: 6 }] } - Three columns: { items: [{ width: 4 }, { width: 4 }, { width: 4 }] } - Auto-fill: { items: [{ type: 'chart', chartId: '1' }, { type: 'chart', chartId: '2' }] } (each gets 6 columns)
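The auto-fill rule (items without a width evenly split the remaining columns of the 12-column grid) can be sketched as follows, assuming an even split as in the auto-fill example above:

```python
def effective_widths(items):
    """Resolve item widths in a row: explicit widths are kept, and items
    without a width evenly split the remaining columns (12 total)."""
    fixed = sum(it.get("width", 0) for it in items)          # columns already claimed
    flexible = [it for it in items if "width" not in it]     # items that auto-fill
    remaining = 12 - fixed
    share = remaining // len(flexible) if flexible else 0
    return [it.get("width", share) for it in items]
```

So two widthless charts resolve to 6 columns each, matching the auto-fill example.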

edit_notebook

Edit a notebook's metadata and layout with optimistic concurrency protection. WHEN TO USE: - You already have a notebook ID and want to update its name and/or content rows. INSTRUCTIONS: - Always call get_notebook first to retrieve the notebook's current lastModifiedAt and rows. - Pass the retrieved lastModifiedAt in expectedLastModifiedAt. - Metadata fields are only updated when values are not null/undefined. - Use one structural edit at a time via edit. EXAMPLES: - Metadata only: {"notebookId":"123","expectedLastModifiedAt":1700000000,"metadata":{"name":"Q1 Notebook"}} - Replace all rows: {"notebookId":"123","expectedLastModifiedAt":1700000000,"edit":{"type":"set_rows","rows":[{"items":[{"type":"chart","chartId":"abc","width":12}]}]}} - Update a row: {"notebookId":"123","expectedLastModifiedAt":1700000000,"edit":{"type":"update_row","rowIndex":0,"row":{"items":[{"type":"rich_text","content":"# Notes","width":12}]}}} - Insert a row: {"notebookId":"123","expectedLastModifiedAt":1700000000,"edit":{"type":"insert_row","index":1,"row":{"items":[{"type":"chart","chartId":"def","width":12}]}}} - Remove a row: {"notebookId":"123","expectedLastModifiedAt":1700000000,"edit":{"type":"remove_row","rowIndex":2}} NOTES: - The request fails with a conflict if expectedLastModifiedAt is stale. - Response is intentionally compact to minimize context usage.

query_amplitude_data

Query Amplitude analytics data. Works in two modes — call Mode 1 first, then Mode 2. ## MODE 1 — Discover (pass query, no definition) Returns the project's matching events, properties, and chart schema so you can build a valid definition. ## MODE 2 — Execute (pass definition) Executes the query and returns CSV/JSON data. ## WORKFLOW 1. Decompose the user's question into: chart type, events needed, properties for filters/group-by 2. Call Mode 1 with eventSearchTerms and propertySearchTerms extracted from the question. You can make multiple parallel Mode 1 calls with different search terms when the query involves multiple distinct concepts (e.g. separate calls for each funnel step). 3. Review the Mode 1 results. Each matched event includes a `volume` field (unique users in the last 30 days) — use this to decide which events are active. If no events were found or all have zero volume, ask the user for clarification. If matches look uncertain, make additional parallel Mode 1 calls with refined search terms. 4. Build a definition using the returned events, properties, and schema. Check the event's own `properties` array for filter/group-by candidates — it may have properties not found in `matchedProperties`. 5. Call Mode 2 with the definition → get data 6. Check Mode 2 results before calling render_chart: - If results are good call render_chart with the EXACT SAME definition you used in Mode 2 - Data is non-empty AND chart type is renderable (eventsSegmentation, funnels, retention, sessions) → call `render_chart` with the same definition - Data is empty or all zeros → tell the user, suggest alternatives. Do NOT call render_chart. - Chart type is dataTableV2 → present the data as a table. render_chart does not support dataTableV2. - A group_by is all "(none)" → the property doesn't exist on that event, try a different property. Do NOT render. ## SYSTEM EVENTS — USE THESE FIRST FOR COMMON QUERIES Before searching for events, check if a system event fits the query. 
These are built into every Amplitude project and do NOT appear in search results: - `_active`: Any active event — **always use this for DAU/WAU/MAU** (with metric: "uniques"). Do NOT search for "active users" — just use _active directly. - `_new`: First-ever event by a user — **always use this for new user counts and as the startEvent for retention charts**. Do NOT search for "new user" events. - `_all`: Any event tracked — use for total event volume across all event types. - `_any_revenue_event`: Any revenue event — use for revenue analysis. **Common patterns that need system events, NOT search:** - "DAU/WAU/MAU" or "active users" → event: _active, metric: uniques - "new users" or "signups" (generic) → event: _new, metric: uniques - "user retention" → startEvent: _new, retentionEvents: [_active] - "total event volume" → event: _all, metric: totals If the user asks about a SPECIFIC event (e.g., "purchase funnel", "button clicks"), then use Mode 1 discover to find the right events. But for generic activity/user metrics, use the system events above — the Mode 1 search results will be irrelevant noise. ## AMPLITUDE CONCEPTS ### Events User actions tracked in Amplitude (e.g., "Button Clicked", "Purchase Completed", "Page Viewed"). Each event has properties (metadata attached to that action). ### Properties Metadata on events or users used for filtering and grouping: - Event properties: attached to specific events (e.g., "button_name" on "Button Clicked", "product_id" on "Purchase") - User properties: attributes of users (e.g., "country", "plan_type", "signup_date") - Group properties: attributes of accounts/companies (e.g., "company_size", "industry") ### Segments User filters applied to the analysis. 
Use conditions to define who is included: - All users: [{"conditions": []}] - Property filter: [{"conditions": [{"type": "property", "prop": "country", "op": "is", "values": ["US"], "prop_type": "user", "group_type": "User"}]}] ## CHART TYPES ### eventsSegmentation (most common) Trends, counts, and aggregations over time. Use for: - User counts: DAU, WAU, MAU (metric: "uniques", event: "_active") - Event totals: how many times X happened (metric: "totals") - Property sums/averages: revenue, session length (metric: "sums" or "value_avg") - Frequency distributions: how many users did X once, twice, etc. (metric: "frequency") Example: "How many active users last 30 days?" → eventsSegmentation with _active + uniques ### funnels Multi-step conversion analysis. Use for: - Conversion rates through a sequence of steps (e.g., View → Add to Cart → Purchase) - Drop-off analysis between steps - Time-to-convert metrics Each step is an event. CRITICAL RULES: - Steps MUST be ordered from highest volume to lowest (broad → narrow) - Step 1 should be the most common action, last step the rarest - Every event MUST have volume > 0 — check the volume in Mode 1 results - If a funnel has inverted steps (step 2 volume > step 1), it will produce misleading results - Mode 1 returns events sorted by volume with funnelGuidance when chartType="funnels" Example: "What's the signup to purchase conversion?" → funnels with [Sign Up, Purchase] (Sign Up volume > Purchase volume) ### retention User return behavior over time. Use for: - N-day retention: what % of users come back on day 1, 7, 30 - Cohort retention curves - Churn analysis Requires a start event (e.g., "Sign Up") and a return event (e.g., "_active"). Example: "What's our 7-day retention?" → retention with startEvent=_new, retentionEvents=[_active] ### sessions Session-level engagement metrics. 
Use for: - Average session length, total sessions, sessions per user - Session duration distributions Example: "Average session length by platform" → sessions with sessionType="average" ### dataTableV2 (DATA ONLY — cannot be rendered as a chart) Tabular multi-metric breakdowns. Use for: - Side-by-side metric comparisons - Multi-dimensional tables with multiple group-by dimensions - Period-over-period analysis Example: "Compare DAU and revenue by country" → dataTableV2 NOTE: render_chart does NOT support dataTableV2. Present the data directly to the user as a table. Tell the user that Amplitude does not support rendering dataTableV2 as a visual chart. ## RENDERABLE VS DATA-ONLY CHART TYPES - render_chart supports: eventsSegmentation, funnels, retention, sessions - Data-only (no render_chart): dataTableV2 — return data from Mode 2 and present as a table - When the user asks for a chart and the type is dataTableV2, tell them you can get the data but cannot render a visual chart for this type ## DO NOT USE FOR - Finding existing charts/dashboards → use 'search' - Querying an existing chart by ID → use 'query_chart' - Rendering a chart UI or generating a chart edit URL → use 'render_chart' RESPONSE FORMAT: Returns {isCsvResponse: bool, csvResponse or jsonResponse, definition}. Only ONE response type present. 
Check the isCsvResponse flag to determine which response format to parse.

CSV Response Structure (when isCsvResponse is true):
- Header rows: the top rows contain metadata including chart name, description, events, formulas, and other chart configuration details
- Data header row: a single row containing column labels for the data points below (typically dates or time periods)
- Data rows: each row contains:
  * Label columns: the first few columns contain row labels identifying the data series
  * Value columns: numerical data organized under the corresponding date/time columns from the data header row
- Parse by: skip the metadata rows, identify the data header row, then extract labels from the first columns and values from the remaining columns
- Cells in the CSV response are delimited by commas; cell values may carry a leading space inside the quotes (as in the example below)

The example below measures uniques of the custom event "Valuable Tweaking" over 3 days (2025-08-23, 2025-08-24, 2025-08-25) for all users. The data points are 614, 1769, and 4132 for the 3 days respectively.
data:
" Example chart name"
" Formula"," UNIQUES(A)"
" A:"," [Custom] 'Valuable Tweaking'"
" Segment"," 2025-08-23"," 2025-08-24"," 2025-08-25"
" All Non-Amplitude Users","614","1769","4132"

definition:
{
  "app": "APP_ID",
  "params": {
    "countGroup": "User",
    "end": 1756166399,
    "events": [
      { "event_type": "ce:'Valuable Tweaking'", "filters": [], "group_by": [] }
    ],
    "groupBy": [],
    "interval": 1,
    "metric": "uniques",
    "segments": [],
    "start": 1755907200
  },
  "type": "eventsSegmentation"
}

JSON Response Structure (when isCsvResponse is false):
- timeSeries: array of arrays, each inner array holding one data point per time period with a "value" property
- overallSeries: array of arrays, each holding the overall data point (across the entire range) under the "value" property
- seriesMetadata: array of objects containing metadata for each series
- xValuesForTimeSeries: array of strings representing the x-axis values (dates) for the time series
- Use the dataset definition to resolve referenced events, properties, and segments.

The example below is the JSON response for the same query as the CSV example above.
{
  "timeSeries": [[{"value": 614}, {"value": 1769}, {"value": 4132}]],
  "overallSeries": [[{"value": 5642}]],
  "seriesMetadata": [{"segmentIndex": 0, "formulaIndex": 0, "formula": "UNIQUES(A)"}],
  "xValuesForTimeSeries": ["2025-08-23T00:00:00", "2025-08-24T00:00:00", "2025-08-25T00:00:00"]
}
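The parsing rules above can be sketched as follows. This is a minimal illustration tied to the exact shapes in the examples; in particular, locating the data header row by the "Segment" label is an assumption drawn from the example, not a documented guarantee.

```python
import csv
import io

def parse_csv_response(csv_text):
    """Skip metadata rows, locate the data header row, and return
    (dates, {series_label: [values]}) per the CSV structure described above."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    # Assumption: the data header row's first cell is "Segment", as in the example.
    header_idx = next(i for i, r in enumerate(rows) if r and r[0].strip() == "Segment")
    dates = [cell.strip() for cell in rows[header_idx][1:]]
    series = {}
    for row in rows[header_idx + 1:]:
        if row:
            # First column is the series label; remaining columns are values.
            series[row[0].strip()] = [int(cell) for cell in row[1:]]
    return dates, series

def parse_json_response(payload, series_index=0):
    """Pair xValuesForTimeSeries with one series' time-series values."""
    values = [point["value"] for point in payload["timeSeries"][series_index]]
    return list(zip(payload["xValuesForTimeSeries"], values))
```

Fed the example above, parse_csv_response yields the three dates plus {"All Non-Amplitude Users": [614, 1769, 4132]}, and parse_json_response pairs each ISO date with its value.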

render_chart

Renders a chart UI and returns a chart edit URL using the chart definition from query_amplitude_data (use that tool first before calling this one). Creates a visual, interactive Amplitude chart with a link to Amplitude. ## SUPPORTED CHART TYPES — ONLY these can be rendered: eventsSegmentation, funnels, retention, sessions. - dataTableV2 CANNOT be rendered. Do NOT call this tool with dataTableV2. Use query_amplitude_data Mode 2 to get data and present it as a table instead. - If the user asks to visualize a dataTableV2 query, tell them this chart type can only return data, not a visual chart. ## WHEN TO USE - You have ALREADY called query_amplitude_data Mode 2 and confirmed the data is non-empty - The chart type is one of: eventsSegmentation, funnels, retention, sessions - Pass the EXACT SAME definition you used in Mode 2 ## WHEN NOT TO USE — DO NOT CALL THIS WITHOUT GETTING A CORRECT CHART DEFINITION FROM QUERY_AMPLITUDE_DATA - You have NOT yet validated the definition via query_amplitude_data — call that first - The query_amplitude_data response showed all-zero or empty data — do NOT render, tell the user instead - Events from Mode 1 discovery with volume: 0 will produce empty charts - A group_by property returned all "(none)" values — the property doesn't exist on that event, try a different one - You are unsure whether events exist in the project — use query_amplitude_data to check first ## WHAT TO DO INSTEAD OF RENDERING AN EMPTY CHART - If events have zero volume: tell the user which events you found, their volumes, and ask which to use - If a property group_by is all (none): explain the property isn't populated on that event, suggest alternatives from the discovery results (e.g. 
"client name" instead of "platform" for MCP events) - If the project has no matching events: suggest the user try a different project, or use search to find where those events live ## INSTRUCTIONS - Pass the same chart definition you validated with query_amplitude_data - If projectId is omitted, definition.app is used - This tool executes the query, creates a chart edit, and returns renderable chart data RESPONSE FORMAT: Returns {isCsvResponse: bool, csvResponse or jsonResponse, definition}. Only ONE response type present. The response structure, CSV/JSON parsing rules, and examples are identical to those documented for query_amplitude_data above.
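The pre-render checks described above (zero-volume events, inverted funnel steps) can be sketched as a quick validation pass. The (event name, volume) pairs are hypothetical stand-ins for Mode 1 discovery results, and the function is an illustration rather than part of the tool's API.

```python
def pre_render_warnings(chart_type, steps):
    """Return warnings for conditions that would yield empty or misleading
    charts. `steps` is an ordered list of (event_name, volume) pairs taken
    from Mode 1 discovery results."""
    warnings = []
    # Any zero-volume event means the rendered chart would be empty.
    for name, volume in steps:
        if volume == 0:
            warnings.append(f"'{name}' has zero volume; the chart would be empty")
    # Funnels must run from highest volume to lowest (broad to narrow).
    if chart_type == "funnels":
        for (prev_name, prev_vol), (curr_name, curr_vol) in zip(steps, steps[1:]):
            if curr_vol > prev_vol:
                warnings.append(
                    f"inverted funnel: '{curr_name}' ({curr_vol}) outranks "
                    f"'{prev_name}' ({prev_vol}); reorder steps from highest "
                    "volume to lowest"
                )
    return warnings
```

Only call render_chart when the warning list is empty; otherwise report the warnings to the user, per the guidance above.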

query_chart

Query a single chart given its ID. RULES: - Users want to know references for analyses in order to validate the data. - ALWAYS REFERENCE CHARTS TO THE USER BY THEIR LINK WHEN QUERIED AND USED IN ANALYSES. WHEN TO USE: - You want to query a chart to get its data. - Only one chart or chart edit can be queried in a single request. INSTRUCTIONS: - Identify the IDs of the charts you want to query from the conversation context (e.g., from URLs) or use the search tool to find them. - Provide saved charts via `chartId` parameter and chart edits (links ending in `/chart/new/<edit_id>` or `/chart/<chart_id>/edit/<edit_id>`) via the `chartEditId` parameter. - Only one chart or chart edit ID is allowed per request; if you have both, prefer the chart edit ID. - Use this tool to query one chart or chart edit. - Results will include data for the chart and errors if it fails. RESPONSE FORMAT: Returns {isCsvResponse: bool, csvResponse or jsonResponse, definition}. Only ONE response type present. The response structure, CSV/JSON parsing rules, and examples are identical to those documented for query_amplitude_data above.
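The chartId/chartEditId routing rules above can be sketched with a small URL classifier. The regexes cover only the two edit-link shapes named above plus a plain /chart/<id> link; they are an illustration, not Amplitude's canonical URL grammar.

```python
import re

def classify_chart_link(url):
    """Decide whether a chart link should be passed as chartId or chartEditId."""
    # New-chart edit links: .../chart/new/<edit_id>
    match = re.search(r"/chart/new/([^/?#]+)", url)
    if match:
        return {"chartEditId": match.group(1)}
    # Edits of saved charts: .../chart/<chart_id>/edit/<edit_id>
    match = re.search(r"/chart/([^/?#]+)/edit/([^/?#]+)", url)
    if match:
        # Prefer the edit ID when both IDs are present, as instructed above.
        return {"chartEditId": match.group(2)}
    # Plain saved-chart links: .../chart/<chart_id>
    match = re.search(r"/chart/([^/?#]+)", url)
    if match:
        return {"chartId": match.group(1)}
    return None
```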

query_charts

Query up to 3 charts concurrently given their IDs. RULES: - Users want to know references for analyses in order to validate the data. - ALWAYS REFERENCE CHARTS TO THE USER BY THEIR LINK WHEN QUERIED AND USED IN ANALYSES. WHEN TO USE: - You want to query multiple charts to get their data efficiently. - Maximum of 3 charts can be queried in a single request. INSTRUCTIONS: - Identify the IDs of the charts you want to query from the conversation context (e.g., from URLs) or use the search tool to find them. - Provide saved charts via `chartIds` parameter and chart edits (links ending in `/chart/new/<edit_id>` or `/chart/<chart_id>/edit/<edit_id>`) via the `chartEditIds` parameter. - Chart edit IDs take precedence over chart IDs when both are available for a given chart. - Use this tool to query up to 3 charts + chart edits (combined total). - Results will include data for each successfully queried chart and errors for any failed charts. RESPONSE FORMAT: Returns {isCsvResponse: bool, csvResponse or jsonResponse, definition}. Only ONE response type present. 
The response structure, CSV/JSON parsing rules, and examples are identical to those documented for query_amplitude_data above; each chart in the batch returns its result in that format.

query_experiment

Query an experiment analysis. CRITICAL: Do NOT pass metricIds unless the user explicitly requests specific metrics or requests analysis on secondary metrics. Omit metricIds to analyze the primary metric only (cleaner, more focused results). RULES: - Users want to know references for analyses in order to validate the data. - ALWAYS REFERENCE EXPERIMENTS TO THE USER BY THEIR LINK WHEN QUERIED AND USED IN ANALYSES. WHEN TO USE: - You want to query an experiment for analysis. INSTRUCTIONS: - Use the search tool to find the ID of the experiment you want to query. - You may want to use the get_experiments tool to get more context about the experiment (e.g., state, variants). - Use this tool to query the experiment analysis. EXAMPLE: groupBy: [{"type": "user", "value": "device type", "group_type": "User"}]

search

Search for dashboards, charts, notebooks, experiments, and other content in Amplitude. INSTRUCTIONS: - Use this as your primary tool to discover and explore available analytics content before diving into specific analyses. - If you are not sure what to search for, use the default search query. - Do not specify appIds/projectIds in the input unless the user explicitly asks to search within a specific app/project. - When searching for taxonomy entities like events, properties, etc., use higher limits (e.g. 100-200) to get more results, as there are many such entities to search through. - When searching for events, use the get_event_properties tool to get event properties on an individual event. DO NOT USE FOR: - AI agent results, agent analyses, or agent runs → use 'get_agent_results' instead - Getting full dashboard definitions with chart details → use 'get_dashboard' with the IDs from search results - Running queries or analysis → use 'query_dataset' or 'query_chart' ADDITIONAL INFORMATION: - Results are personalized to the user you are making the request on behalf of. - Results do not include the full object definition. You will need to use other tools to get the full object definition when needed. - Best practice is to query for a single entity type, unless the user's request is open ended. - The response includes an isOfficial flag in contentMeta to identify content that has been marked as official by the organization.

get_from_url

Retrieve objects from Amplitude URLs WHEN TO USE: - CRITICAL: Only use this tool if the user shares an Amplitude URL that starts with https://app.amplitude.com! - You have an Amplitude URL and want to get the full object definition - User shares a link to a dashboard, chart, notebook, experiment, etc. INSTRUCTIONS: - Provide the full Amplitude URL (e.g., https://app.amplitude.com/analytics/myorg/chart/456) - The tool will parse the URL, validate the organization, and return the full object - Works with charts, dashboards, notebooks, experiments, flags, cohorts, and metrics

get_session_replays

Search session replays for a project using event count filters. WHEN TO USE: - You want to find session replays that match certain event count filters and user properties (e.g., plan type, cohort membership, email). INSTRUCTIONS: - Provide the projectId and one or more eventCountFilters. - Searches the last 30 days with a limit of 10 replays by default. - Optionally provide startTime and/or endTime to narrow the search window (ISO 8601 string or Unix timestamp in milliseconds). Defaults to the last 30 days if omitted. - Optionally include groupBys and limit. IMPORTANT: Provide clickable replay links when available. Some sessions may not have a URL if identifiers are missing — omit those gracefully. IMPORTANT: Before specifying any event_type other than "_all", first call get_labeled_events or get_events to confirm the event exists in the project. Do not guess or infer event names from context. EXAMPLES: - Filter by user property (e.g. plan type): eventCountFilters: [ { "count": "1", "operator": "greater or equal", "event": { "event_type": "_all", "filters": [ { "group_type": "User", "subprop_key": "gp:plan", "subprop_op": "is", "subprop_type": "user", "subprop_value": ["enterprise2", "growth"] } ], "group_by": [] } } ] - Filter by local project-scoped property (e.g. 
platform): eventCountFilters: [ { "count": "1", "operator": "greater or equal", "event": { "event_type": "_all", "filters": [ { "group_type": "User", "subprop_key": "platform", "subprop_op": "is", "subprop_type": "user", "subprop_value": ["Web"] } ], "group_by": [] } } ] - Exclude internal users by email: eventCountFilters: [ { "count": "1", "operator": "greater or equal", "event": { "event_type": "_all", "filters": [ { "group_type": "User", "subprop_key": "gp:email", "subprop_op": "does not contain", "subprop_type": "user", "subprop_value": ["amplitude.com"] } ], "group_by": [] } } ] - Filter by a specific event (>= 1 occurrence): eventCountFilters: [ { "count": "1", "operator": "greater or equal", "event": {"event_type": "Purchase", "filters": [], "group_by": []} } ] - Filter by event with property (>= 3 Page Viewed on pricing page): eventCountFilters: [ { "count": "3", "operator": "greater or equal", "event": { "event_type": "Page Viewed", "filters": [ { "group_type": "User", "subprop_key": "page", "subprop_op": "is", "subprop_type": "event", "subprop_value": ["pricing"] } ], "group_by": [] } } ] - Combine user property filter with event filter: eventCountFilters: [ { "count": "1", "operator": "greater or equal", "event": { "event_type": "_all", "filters": [{"group_type":"User","subprop_key":"gp:plan","subprop_op":"is","subprop_type":"user","subprop_value":["enterprise2"]}], "group_by": [] } }, { "count": "1", "operator": "greater or equal", "event": {"event_type": "Checkout Started", "filters": [], "group_by": []} } ] NOTES: - Use event_type "_all" to filter on user properties (applies across any event). - Regular tracked events use just the event name (e.g. "Purchase", "Checkout Started"). Amplitude built-in events use bracket notation (e.g. "[Amplitude] Start Session"). Amplitude custom events created in the UI use the "ce:" prefix. Use "_all" to match any event. - User properties use the "gp:" prefix for globally propagated properties (e.g. 
"gp:plan", "gp:email") — these are reliably set across the org. Local project-scoped properties use plain keys (e.g. "platform" for Web/iOS/Android). - This tool returns the complete result in a single call. Do not call it again with the same parameters. - If results are empty, there are no matching sessions in the search window. Do not retry. - If this tool returns an error, report it to the user and stop.

list_session_replays

List session replays for a project using the Amplitude public Session Replay API. WHEN TO USE: - You want a paginated list of session replays within a time window without complex event-count filtering. - Use get_session_replays for event-count or user-property filtering; use this tool for simple time-range listing or when you need a replay_id to pass to get_session_replay_events. INSTRUCTIONS: - Provide projectId. Optionally narrow results with start_time, end_time, page_size, and sort_order. - Use the returned next_page_token to fetch subsequent pages. - replay_id format is "device_id/session_id" — pass it directly to get_session_replay_events to retrieve rrweb events. NOTES: - If no start_time/end_time is provided, the last 48 hours is used automatically (end_time = now, start_time = 48 hours ago). - Keep sort_order consistent across paginated requests for the same query. - Replays older than their retention period will not appear.
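The next_page_token pagination loop described above can be sketched as follows. `call_list_session_replays` is a stand-in for however your MCP client invokes the tool, and the request parameter name for the token is an assumption; only the response fields (`replays`, `next_page_token`) come from the description above.

```python
def fetch_all_replays(call_list_session_replays, project_id, page_size=100):
    """Follow next_page_token until the listing is exhausted."""
    replays, token = [], None
    while True:
        params = {"projectId": project_id, "page_size": page_size}
        if token:
            # Assumed request key; pass back the token from the previous page.
            params["next_page_token"] = token
        page = call_list_session_replays(params)
        replays.extend(page.get("replays", []))
        token = page.get("next_page_token")
        if not token:  # no further pages
            return replays
```

Keep sort_order and the time window identical across the loop's requests, per the notes above.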

get_session_replay_events

Retrieve and process rrweb events from a session replay recording. WHEN TO USE: - You have a replay_id (from list_session_replays or get_session_replays) and want to understand what the user did during that session. - Useful for debugging user issues, understanding UX flows, or analyzing error reproduction steps. INSTRUCTIONS: - Provide projectId and replay_id in "device_id/session_id" format (as returned by list_session_replays). - Returns a processed interaction timeline: page navigations, clicks, text inputs, and significant scrolls. - Use event_limit to cap the number of events returned (default 500; reduce if context is tight). OUTPUT: - interactions: ordered list of user actions with timestamps (ms since Unix epoch) - navigation: { url, viewport } - click / double_click: { nodeId, x, y } - input: { nodeId, value } — text inputs and checkbox state changes - scroll: { nodeId, x, y, delta_y } — only large position jumps (≥200px) - viewport_resize: { width, height } NOTES: - rrweb nodeId values reference DOM nodes from the session's full snapshot; without the snapshot they cannot be resolved to CSS selectors or element text. - Large or long sessions may be truncated at event_limit; the response indicates if truncation occurred.
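A timeline like the OUTPUT list above can be condensed into one line per action for reporting back to the user. This sketch assumes each interaction object carries a `type` field with its payload keys inline; that envelope shape is an assumption based on the OUTPUT list, and the sample data is made up.

```python
def summarize_interactions(interactions):
    """Condense a get_session_replay_events timeline into readable lines."""
    lines = []
    for item in interactions:
        kind = item.get("type")
        if kind == "navigation":
            lines.append(f"navigated to {item['url']}")
        elif kind in ("click", "double_click"):
            lines.append(f"{kind} at ({item['x']}, {item['y']}) on node {item['nodeId']}")
        elif kind == "input":
            lines.append(f"entered text in node {item['nodeId']}")
        elif kind == "scroll":
            lines.append(f"scrolled to y={item['y']}")
        elif kind == "viewport_resize":
            lines.append(f"resized viewport to {item['width']}x{item['height']}")
    return lines
```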

get_properties

Retrieve properties from a project's taxonomy. Use propertyType to select which kind of properties to fetch.

PROPERTY TYPES:

| propertyType | What it returns | Key params |
|---|---|---|
| event | Properties for a specific event type | eventType (required) |
| user | User-level properties | sources, name |
| derived | Computed/formula properties | derivedPropertyType, names |
| group | Group properties (e.g., company_name, plan_tier). NOT group types themselves — use get_group_types for that. | groupTypes |
| lookup | CSV lookup table properties | configurationFilter, lookupTableName |
| channel | Traffic source channel properties | names |
| persisted | Event-to-user persisted properties | names |

INSTRUCTIONS: - Use the search tool or get_events first to find exact event/property names before calling this tool. - All property types except 'event' support limit/cursor pagination. - If the user has not specified a project, prompt them to decide. Don't decide for them. EXAMPLES: - Event properties: { "propertyType": "event", "projectId": "123", "eventType": "Button Clicked" } - User properties: { "propertyType": "user", "projectId": "123", "sources": ["CUSTOMER"] } - Derived properties: { "propertyType": "derived", "projectId": "123", "derivedPropertyType": "event" } - Group properties: { "propertyType": "group", "projectId": "123", "groupTypes": ["company"] }

get_events

Retrieve events from a project with strict filtering by event types, limit, and cursor pagination. WHEN TO USE: - You can generally rely on the search tool to find the event you are looking for. - Use this tool to get full event objects which include the event category and whether or not the event is active. - Use this tool to paginate through ALL events when the search tool may not return the event you are looking for. INSTRUCTIONS: - Get the project ID from the context tool. - Use the search tool first to try to find the event you're looking for. - If the search tool does not return the event you are looking for, use this tool without specifying eventTypes to paginate through all events. - If you know the event types you want to get, use this tool with the eventTypes parameter to get more information about the event.

get_transformations

Retrieve data transformations from a project. Transformations are data cleaning operations that merge events, merge properties, or map property values. WHEN TO USE: - Use this tool to see what transformations are configured for a project. - Use this tool to understand how events or properties are being merged or remapped. - Use this tool to audit data cleaning rules applied to a project. WHAT IT RETURNS: - A list of transformations with their type, name, description, and configuration details. - Transform types include: merge (merge events), merge_events_derived_prop (merge events with derived property), merge_event_properties, merge_user_properties, map_event_property_values, map_user_property_values. INSTRUCTIONS: - Get the project ID from the context tool. - Optionally filter by transform type to narrow results. - Use pagination to retrieve large lists of transformations.

get_feedback_comments

Get raw customer feedback comments for a project with optional filtering and pagination. WHEN TO USE: - You want to retrieve raw feedback comments from various sources (surveys, support tickets, app reviews). - You need to see the actual customer feedback text and metadata. - You want to analyze comments with optional filters before they are grouped into insights. - You need to search for specific feedback or paginate through results. NOT FOR: AI/LLM agent chat transcripts, conversation logs, agent response quality (use get_agent_analytics_conversation, search_agent_analytics_conversations instead). TRIGGER PHRASES: "feedback comments", "customer comments", "survey responses", "support tickets", "app reviews", "user reviews", "customer feedback text", "product feedback", "what are customers saying", "feedback page", "AI feedback page" INSTRUCTIONS: - Provide the projectId (appId) to retrieve comments. - To get available sourceId values, first call get_feedback_sources with the projectId. - Provide sourceId to filter comments from a specific feedback source. - Use search to filter by body text. Pass an array for OR matching across terms. - Use page and pageSize parameters for pagination (defaults: page=1, pageSize=20). - Returns a list of comments with metadata including totalCount for pagination. - Each comment includes the feedback text, source information, and associated metadata.

get_feedback_insights

Get customer feedback insights (processed and grouped themes) for a project with optional filtering and pagination. WHEN TO USE: - You want to see grouped themes extracted from customer feedback (feature requests, complaints, bugs, etc.). - You want to understand what customers are asking for or complaining about. - You need to drill into feedback themes before looking at individual comments. NOT FOR: AI/LLM agent error categories, session topics, agent quality scores (use query_agent_analytics_metrics, query_agent_analytics_sessions instead). TRIGGER PHRASES: "feedback insights", "feedback themes", "feature requests from customers", "customer complaints", "pain points", "loved features", "what are users asking for", "product feedback", "feedback page", "AI feedback page", "AI feedback insights" WHAT ARE INSIGHTS: - Insights are derived from mentions in feedback comments. Insights represent processed and grouped themes extracted from feedback comments. - The 8 types of insights are: feature requests (request), complaints (complaint), loved features (lovedFeature), brands mentioned (competitor), bugs (bug), feature mentions (mentionedFeature), pain points (painPoint), and key takeaways (takeaway). SORTING AND FILTERING: - Results are sorted by popularity (mention count), NOT by creation date. The tool cannot sort by when insights were created. - dateStart and dateEnd filter to insights with feedback comments within the date range. DRILLING DEEPER: - Present the deep dive details to the user to help them understand the insight better. - If a user asks for more details on a specific insight, use get_feedback_mentions with the insightId to see the actual user feedback. - Do NOT call get_feedback_mentions for every insight - only when explicitly requested. DIFFERENCE FROM GET_FEEDBACK_COMMENTS TOOL: - Insights are processed and grouped themes, while comments are raw individual feedback from a feedback source. 
- Use insights to understand themes and trends, use comments to see individual feedback.

get_feedback_mentions

Get customer feedback mentions (individual feedback comments associated with an insight or trend). WHEN TO USE: - ONLY after a user asks for more details or wants to see the actual feedback behind a specific insight or trend. - Do NOT call this tool for every insight/trend - only when the user specifically requests to drill down. NOT FOR: AI/LLM agent session data or conversation turns (use get_agent_analytics_conversation instead). TRIGGER PHRASES: "feedback mentions", "feedback details", "drill into feedback", "see the actual feedback", "comments behind insight", "feedback behind trend" PREREQUISITE: You MUST call get_feedback_insights or get_feedback_trends first to obtain an insightId or trendId. WHAT ARE MENTIONS: - Mentions are the individual user feedback comments that contributed to a specific insight or trend. - Each mention represents a piece of feedback from a user (survey response, support ticket, app review, etc). REQUIRED PARAMETERS: - Either insightId OR trendId must be provided. - insightId: The ID of the insight (from get_feedback_insights response). - trendId: The ID of the trend (from get_feedback_trends response). - All filter parameters (dateStart, dateEnd, sourceIds, ampId) should match what was used in the original query to ensure consistent results.

get_feedback_sources

Get customer feedback sources (connected feedback integrations) for a project. WHEN TO USE: - You want to retrieve the list of feedback sources/integrations configured for a project. - You need to understand which feedback channels (e.g., surveys, support tickets, app reviews) are connected. - You need sourceId values to filter feedback comments by specific sources. - You want to see metadata about feedback sources before analyzing comments. NOT FOR: AI/LLM agent session data, agent performance, conversation transcripts (use query_agent_analytics_sessions, query_agent_analytics_metrics instead). TRIGGER PHRASES: "feedback sources", "feedback integrations", "survey sources", "support ticket sources", "app review sources", "connected feedback channels", "feedback page", "AI feedback page", "customer feedback", "product feedback", "user feedback from surveys/reviews" INSTRUCTIONS: - Provide the projectId (appId) to retrieve all configured feedback sources. - Returns a list of sources with their sourceId, sourceType, name, and configuration details. - Use the sourceId from results to filter comments in get_feedback_comments. - Each source represents a connected integration like surveys, support tickets, or app store reviews. TYPICAL WORKFLOW: 1. Call get_feedback_sources to discover available sources and their sourceId values. 2. Call get_feedback_comments with specific sourceId to retrieve comments from that source.
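The two-step workflow above can be sketched as a helper that discovers sources and then pulls comments per matching source. `call_tool(name, params)` stands in for your MCP client, and the response envelope keys ("sources", "comments") are assumptions for illustration.

```python
def comments_for_source_type(call_tool, project_id, source_type):
    """Step 1: discover feedback sources; step 2: fetch comments for each
    source whose sourceType matches (e.g. surveys vs app reviews)."""
    response = call_tool("get_feedback_sources", {"projectId": project_id})
    matching = [s["sourceId"] for s in response["sources"]
                if s.get("sourceType") == source_type]
    comments = []
    for source_id in matching:
        page = call_tool("get_feedback_comments",
                         {"projectId": project_id, "sourceId": source_id})
        comments.extend(page["comments"])
    return comments
```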

get_feedback_trends

Get saved customer feedback trends for a project. Trends are user-defined groupings of product feedback that are tracked over time. WHEN TO USE: - When the user wants to see what feedback themes are being tracked. - When the user asks about tracked trends, saved trends, or monitored themes. - Before creating a new trend, to check what already exists. NOT FOR: AI/LLM agent quality trends, cost trends, session volume trends (use query_agent_analytics_metrics with timeseries metrics instead). TRIGGER PHRASES: "feedback trends", "tracked feedback themes", "saved feedback trends", "monitored themes", "feedback over time", "feedback page", "AI feedback page" WHAT ARE TRENDS: - Trends are saved themes derived from insights that are tracked over time as new feedback comes in. - Each trend has a name, category (same types as insights), and associated mentions (feedback comments). - Trends provide a mention timeline showing how the volume of feedback changes over time. TYPICAL WORKFLOW: 1. Call get_feedback_trends to see existing saved trends. 2. Call get_feedback_mentions with a trendId to see the feedback behind a specific trend.

get_group_types

List available group types for a project. Group types are the categories of groups (e.g., "Company", "Team", "Account") — NOT the properties/attributes of those groups. WHEN TO USE: - Use this tool to discover what group types exist (e.g., "Company", "Team"). - Use get_properties with propertyType "group" instead if the user is generally asking for group properties (e.g., company_name, plan_tier). NOTES: - Returns group types from the latest staging version of the default branch.

get_workspace_settings

Retrieve Data workspace settings for a project. These settings control governance and data quality configuration. Do not use this to determine general Amplitude settings or context information. Use the get_context tool instead. AVAILABLE SETTINGS: - eventNC: Event naming convention (e.g., camelCase, snake_case, custom regex) - propertyNC: Property naming convention - approvalWF: Approval workflow (None, Required). Used to indicate whether main branches are protected and require approval. - descriptionRequired: Whether descriptions are required for events/properties - drsDefaultRole: Default role for data role overrides - snowplowVendorName: Custom Snowplow vendor name - copyToProjects: Cross-project copy configuration - sessionReplayEventGIFsDisabled: Whether session replay GIFs are disabled WHEN TO USE: - When you need to understand governance settings for a project. - When checking naming conventions before creating events or properties. - When checking whether main branch protection is enabled - when enabled, the approvalWF setting will be "Required", and the user will not be able to directly make changes to the main branch (the default). INSTRUCTIONS: - Get the project ID from the context tool. - Returns all configured settings. Settings not yet configured will not appear in the response.
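As a sketch of how a client might consume these settings, assuming they arrive as a flat JSON object keyed by the setting names above (the helper function and its name are hypothetical, not part of the tool):

```python
def main_branch_protected(settings: dict) -> bool:
    """True when approvalWF is "Required", i.e. changes to the main branch
    need approval. Settings not yet configured are absent from the
    response, so fall back to "None"."""
    return settings.get("approvalWF", "None") == "Required"

# A project with the approval workflow enabled and camelCase event names
settings = {"approvalWF": "Required", "eventNC": "camelCase"}
assert main_branch_protected(settings)
assert not main_branch_protected({})  # setting never configured: unprotected
```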

get_users

Retrieve user-level data and associated session replays for users who performed a specific event. WHEN TO USE: - You want to inspect individual users who performed a specific event - You need Amplitude user IDs to use with the get_user_profile or get_user_timeline tools - You want session replays correlated to specific users who performed an event INSTRUCTIONS: - IMPORTANT: Call get_events first to confirm the exact event name exists in the project. Do not guess event names. - Use the search tool to search for events, charts, and properties to build the necessary event filter. - Provide projectId, event type, and optional event filters - Use limits to control how many users and session replays to return - Session replays are automatically fetched for returned users when available FILTER FORMAT: Each filter object MUST include all four fields: - subprop_type: "event" (event property) or "user" (user property) - subprop_key: property key name (e.g., "button_name", "gp:plan") - subprop_op: operator (e.g., "is", "is not", "contains", "does not contain", "greater", "less") - subprop_value: array of string values (e.g., ["Submit"]) Do NOT pass empty objects ({}) as filters — either pass a complete filter object or an empty array [].
TIME RANGE FORMAT: - Use ISO 8601 format: "2025-01-15T00:00:00Z" - The start date must be before the end date COMMON EXAMPLES: - Active users (default time range): {"projectId": "12345", "event": {"event_type": "_active", "filters": []}} - Button clicks with filters: {"projectId": "12345", "event": {"event_type": "Button Clicked", "filters": [{"subprop_type": "event", "subprop_key": "button_name", "subprop_op": "is", "subprop_value": ["Submit"]}]}} - With explicit time range: {"projectId": "12345", "event": {"event_type": "Page Viewed", "filters": []}, "timeRange": {"start": "2025-01-01T00:00:00Z", "end": "2025-01-31T23:59:59Z"}} - Custom limits: {"projectId": "12345", "event": {"event_type": "Purchase", "filters": []}, "limits": {"users": 20, "sessionReplays": 5}} NOTES: - Time range defaults to "Last 30 Days" if not specified - The Amplitude user IDs returned can be used with other tools like get_user_profile or get_user_timeline - Session replays are fetched per-user via the session-replay-lookup API and included in the response - If session replay lookup fails, the error is surfaced in metadata and user data is still returned
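The filter format above is strict enough that a small validation helper is useful before calling the tool — a minimal sketch (the `valid_filter` helper is hypothetical; the payload mirrors the "Button clicks with filters" example above):

```python
def valid_filter(f: dict) -> bool:
    """Check a get_users filter object for the four required fields
    described above. subprop_value must be an array of strings."""
    required = {"subprop_type", "subprop_key", "subprop_op", "subprop_value"}
    return (
        required <= f.keys()
        and f["subprop_type"] in ("event", "user")
        and isinstance(f["subprop_value"], list)
        and all(isinstance(v, str) for v in f["subprop_value"])
    )

# Payload mirroring the "Button clicks with filters" example
payload = {
    "projectId": "12345",
    "event": {
        "event_type": "Button Clicked",
        "filters": [{
            "subprop_type": "event",
            "subprop_key": "button_name",
            "subprop_op": "is",
            "subprop_value": ["Submit"],
        }],
    },
}
assert all(valid_filter(f) for f in payload["event"]["filters"])
assert not valid_filter({})  # empty objects are rejected, as warned above
```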

get_agent_results

Retrieve results from AI agents that have analyzed your dashboards or session replays. This is NOT for searching dashboards, charts, or notebooks — use 'search' for that. This is specifically for retrieving the AI-generated insights and analyses that agents produced. This tool handles both searching for agent analyses and fetching full results in a single call. MODES: 1. **Search mode** (no session_id): Search agent analyses with filters. Returns preview summaries. 2. **Direct fetch mode** (session_id provided): Fetch full artifact data for a specific session. 3. **Auto-expand**: If search returns exactly 1 session, full artifacts are included automatically. SEARCH STRATEGY (important — follow this order): 1. If you have a specific session_id, use direct fetch mode. 2. For dashboard analyses, pass agent_params with dashboard_id (cheap exact-match). 3. For session replay insights, use agent_params with category and/or impact (cheap exact-match). 4. Use "query" ONLY for natural language / fuzzy search when exact filters aren't sufficient. WHEN TO USE: - "What are my dashboard agents?" → agent_type: dashboard_explorer - "Show my agent results" → agent_type: dashboard_explorer (or session_replay_explorer) - "What agents have I run?" → agent_type: dashboard_explorer - "What analyses exist for dashboard xyz?" → agent_type: dashboard_explorer, agent_params: { dashboard_id: "xyz" } - "Show me high-impact session replay insights" → agent_type: session_replay_explorer, agent_params: { impact: "High" } - "Friction hotspots in the checkout flow" → agent_type: session_replay_explorer, agent_params: { category: "Friction Hotspots" }, query: "checkout flow" - "What did the AI find about rage clicks?" → agent_type: session_replay_explorer, query: "rage clicks" - "Any dashboard insights about revenue?" 
→ agent_type: dashboard_explorer, query: "revenue" - "Show me insights from last week" → agent_type: session_replay_explorer, created_after: "2026-03-16T00:00:00Z" - "What did I analyze recently?" → agent_type: dashboard_explorer DO NOT USE FOR: - Finding or listing dashboards, charts, or notebooks → use 'search' instead - Running new analyses or queries → use 'query_dataset' or 'query_amplitude_data' or 'query_chart' instead - Creating dashboards → use 'create_dashboard' instead RETURNS: - Search mode: List of analysis sessions with preview summaries, URLs, and pagination info - Direct fetch mode: Full artifact data for the session - Links to view each session in the Amplitude UI are included
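The two modes can be illustrated as request payloads built from the parameters named above — a sketch only (exact parameter casing and the "SESSION_ID" placeholder should be confirmed against the tool schema):

```python
# Search mode: cheap exact-match filters first, fuzzy "query" only as a fallback
search_request = {
    "agent_type": "session_replay_explorer",
    "agent_params": {"category": "Friction Hotspots", "impact": "High"},
    "query": "checkout flow",  # natural-language refinement on top of the filters
}

# Direct fetch mode: a session_id returns full artifact data for that session
fetch_request = {"session_id": "SESSION_ID"}

assert "session_id" not in search_request   # presence of session_id selects the mode
assert "session_id" in fetch_request
```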

query_dataset

Run analytics queries to answer data questions about users, events, funnels, and retention. # WHEN TO USE - Answer questions like: - "How many active users did we have last week?" - "Show me a funnel from sign up to purchase" - "What is the retention rate for new users?" - "How many users completed checkout yesterday?" - Any question asking for metrics, counts, trends, funnels, or retention analysis # DO NOT USE FOR: - Finding existing charts/dashboards → use 'search' instead - To get the valid chart definition to pass → use 'get_chart_definition_params' first - To validate the chart definition before passing it to query_dataset → use 'verify_chart_definition' first - BOTH THESE TOOLS SHOULD BE USED AND ARE INEXPENSIVE TO USE BEFORE PASSING THE CHART DEFINITION TO query_dataset - If you need more information about existing charts in the project as an example → use 'get_charts' instead - Project settings (timezone, currency) → use 'get_project_context' instead # STRATEGIES 1. Use the 'search' tool to find if there are charts with properties that relate to the data you want to query. 2. If you are unsure about the properties or schema, don't modify existing properties; use the next two tools to get detailed information: 3. Use the 'get_chart_definition_params' tool to understand the valid chart definition to pass. 4. Use the 'verify_chart_definition' tool to validate the chart definition before passing it to query_dataset. 5. Use the 'get_charts' tool to find examples of existing charts in the project to understand the events, properties, and dataset schema generally. 6. Optionally use the 'search' tool again to find additional events, user properties, etc. needed for the query. 7. Optionally use the 'get_event_properties' tool to get properties on individual events. 8. Use this tool to run the ad hoc analysis. # GENERAL GUIDELINES - Don't assume or guess properties, events, or schema. 
Use the tools provided to you to understand the data before running a dataset query. - When running into query failures, try searching for existing charts to understand the data taxonomy and dataset schema. - When you receive a 400 error response, the schema is likely incorrect or the events/properties do not exist. - ALWAYS include a descriptive "name" field in the definition object. This name will be displayed as the chart title. Examples: "Active Users Last 7 Days", "Sign Up to Purchase Funnel", "New User Retention". # AMPLITUDE-WIDE META EVENT TYPES Special system events available for analysis. Events are passed in the "event_type" field: - "_active": Any active event (events not marked as inactive); useful for tracking 'active users' metrics like DAU and MAU - "_all": Any event being tracked in Amplitude - "_new": Events triggered by new users within the time interval. Useful for tracking 'new users'. - "_any_revenue_event": Any revenue-generating event. Useful for tracking 'revenue'. - "$popularEvents": Top events by volume (dynamically computed). Useful for more meta taxonomy analyses like 'what are the most common events'. # PROPERTY TYPES: - Amplitude core properties are built-in and use standard names like "country", "platform", "device_id", "user_id" - Custom properties are organization-defined and are typically prefixed with "gp:" - If you are unsure which properties exist, use search/get_charts/get_event_properties before querying RESPONSE FORMAT: Returns {isCsvResponse: bool, csvResponse or jsonResponse, definition}. Only ONE response type present. 
Check the isCsvResponse flag to determine which response format to parse. CSV Response Structure (when isCsvResponse is true): - Header rows: The top rows contain metadata including chart name, description, events, formulas, and other chart configuration details - Data header row: A single row containing column labels for the data points below (typically includes dates or time periods) - Data rows: Each row contains: * Label columns: First few columns contain row labels identifying the data series * Value columns: Numerical data organized under the corresponding date/time columns from the data header row - Parse by: Skip metadata rows, identify the data header row, then extract labels from first columns and values from remaining columns - Cells in the CSV response are delimited by commas and may be prepended with a leading space character. The example below measures uniques of the custom event "Valuable Tweaking" over 3 days (2025-08-23, 2025-08-24, 2025-08-25) for all users. The data points are 614, 1769, and 4132 for the 3 days respectively. 
data:
" Example chart name"
" Formula"," UNIQUES(A)"
" A:"," [Custom] 'Valuable Tweaking'"
" Segment"," 2025-08-23"," 2025-08-24"," 2025-08-25"
" All Non-Amplitude Users","614","1769","4132"

definition:
{
  "app": "APP_ID",
  "params": {
    "countGroup": "User",
    "end": 1756166399,
    "events": [
      { "event_type": "ce:'Valuable Tweaking'", "filters": [], "group_by": [] }
    ],
    "groupBy": [],
    "interval": 1,
    "metric": "uniques",
    "segments": [],
    "start": 1755907200
  },
  "type": "eventsSegmentation"
}

JSON Response Structure (when isCsvResponse is false): - Parse using the following structure: - timeSeries: Array of arrays, each containing one data point per time period with a "value" property - overallSeries: Array of arrays, each containing the overall data point (across the entire range) under the "value" property - seriesMetadata: Array of objects containing metadata for each series - xValuesForTimeSeries: Array of strings representing the x-axis values (dates) for the time series - Use the dataset definition to parse referenced events, properties, and segments. The example below is the JSON response for the same query as the CSV example above.
{
  "timeSeries": [[{"value": 614}, {"value": 1769}, {"value": 4132}]],
  "overallSeries": [[{"value": 5642}]],
  "seriesMetadata": [{"segmentIndex": 0, "formulaIndex": 0, "formula": "UNIQUES(A)"}],
  "xValuesForTimeSeries": ["2025-08-23T00:00:00", "2025-08-24T00:00:00", "2025-08-25T00:00:00"]
}
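The JSON response shape can be consumed with a few lines of dictionary traversal — a minimal sketch using the sample data from the worked example above (no Amplitude-specific client is assumed; the `parsed` structure is an illustrative choice):

```python
# Walk the documented JSON response shape; the data is the worked example
# (uniques of "Valuable Tweaking" over three days).
response = {
    "timeSeries": [[{"value": 614}, {"value": 1769}, {"value": 4132}]],
    "overallSeries": [[{"value": 5642}]],
    "seriesMetadata": [{"segmentIndex": 0, "formulaIndex": 0, "formula": "UNIQUES(A)"}],
    "xValuesForTimeSeries": ["2025-08-23T00:00:00", "2025-08-24T00:00:00", "2025-08-25T00:00:00"],
}

# Pair each series with its metadata, then each value with its x-axis date
parsed = {}
for series, meta in zip(response["timeSeries"], response["seriesMetadata"]):
    points = dict(zip(response["xValuesForTimeSeries"], (p["value"] for p in series)))
    parsed[meta["formula"]] = points

assert parsed["UNIQUES(A)"]["2025-08-25T00:00:00"] == 4132
assert response["overallSeries"][0][0]["value"] == 5642  # total across the range
```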

get_chart_definition_params

Get the parameter schema, valid enum values, and a working example for a specific chart type. WHEN TO USE: - Before calling query_dataset, to understand the correct parameter schema for a chart type. - When you need to know valid enum values (e.g., funnel modes, segmentation metrics). - When you need a working example definition to use as a template. INSTRUCTIONS: - Call this tool with the chart type you want to build a definition for. - Use the returned schema to construct a valid definition object. - Pass the constructed definition to query_dataset. - If the chart type is not yet supported, construct the definition based on existing chart examples from search/get_charts. SUPPORTED CHART TYPES: composition, eventsSegmentation, funnels, retention, sessions, stickiness
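As a sketch of what a constructed definition might look like, with field names taken from the worked example in query_dataset's response format — the app ID, time range, and the placement of the "name" field are illustrative assumptions to confirm against the schema this tool returns:

```python
# Sketch of an eventsSegmentation definition to pass to query_dataset.
# "APP_ID" and the Unix timestamps are placeholders to replace with real
# values from your project context.
definition = {
    "name": "Active Users Last 3 Days",  # descriptive chart title, required by query_dataset
    "type": "eventsSegmentation",
    "app": "APP_ID",
    "params": {
        "countGroup": "User",
        "metric": "uniques",     # valid metric values come from this tool's enum list
        "interval": 1,           # daily buckets
        "start": 1755907200,     # Unix timestamps, not ISO strings
        "end": 1756166399,
        "events": [
            # "_active" is the built-in "any active event" meta event type
            {"event_type": "_active", "filters": [], "group_by": []},
        ],
        "groupBy": [],
        "segments": [],
    },
}

assert definition["type"] in {
    "composition", "eventsSegmentation", "funnels",
    "retention", "sessions", "stickiness",
}
```

Passing this through verify_chart_definition before query_dataset catches any remaining schema mistakes cheaply.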

verify_chart_definition

Validate and auto-correct a chart definition before passing it to query_dataset. WHEN TO USE: - After constructing a chart definition, to verify it is valid before querying. - When you want to catch and auto-fix common mistakes (wrong enum values, wrong field names). - The tool will auto-coerce known LLM mistakes and return the corrected definition. - Validates event types and properties exist in the project taxonomy. WHAT IT DOES: - Validates required fields (type, app, params) and chart-type-specific parameters. - Auto-coerces known mistakes: wrong funnel mode names (this_order → ordered), conversionWindow objects → conversionSeconds, ISO date strings → Unix timestamps, string events → {event_type, filters, group_by}. - Validates that referenced event types exist in the project. - Validates that referenced properties exist on their event types. - Returns the corrected definition ready to pass to query_dataset. SUPPORTED CHART TYPES: composition, eventsSegmentation, funnels, retention, sessions, stickiness Unsupported types pass through with a warning (not an error) — you can still send them to query_dataset.
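The auto-coercions listed above can be approximated in a few lines — an illustrative sketch only, since the real tool runs server-side and may differ in detail (the helper names are hypothetical):

```python
from datetime import datetime

# Illustrative approximations of verify_chart_definition's auto-coercions.

def coerce_funnel_mode(mode: str) -> str:
    """Map a common wrong funnel-mode name to the valid enum value."""
    return {"this_order": "ordered"}.get(mode, mode)

def coerce_timestamp(value):
    """Turn an ISO 8601 date string into a Unix timestamp; pass ints through."""
    if isinstance(value, str):
        return int(datetime.fromisoformat(value.replace("Z", "+00:00")).timestamp())
    return value

def coerce_event(event):
    """Expand a bare event-name string into the full event object shape."""
    if isinstance(event, str):
        return {"event_type": event, "filters": [], "group_by": []}
    return event

assert coerce_funnel_mode("this_order") == "ordered"
assert coerce_timestamp("2025-01-01T00:00:00Z") == 1735689600
assert coerce_event("Sign Up") == {"event_type": "Sign Up", "filters": [], "group_by": []}
```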
