The AI that investigates your stack, from inside it

The more data you have, the harder it is to find what matters. groundcover collects more data than any other observability platform. Agent mode is built for exactly that.

  • BYOC-native AI investigation

    The agent runs on Amazon Bedrock inside your own AWS account. No prompts, logs, or traces leave your environment, eliminating the compliance conversation before it starts.

  • Deeper data than any competitor

    Built on eBPF, the agent sees kernel-level telemetry automatically: service dependencies, database connections, and traffic patterns, with no manual instrumentation required.

  • Part of the investigation, not a detour

    The agent lives alongside every page in the product. @mention it mid-investigation and it picks up from your current context, with no tool-switching and no lost thread.

The only BYOC-native AI agent for observability

  • AI adoption inside engineering teams is blocked by compliance.
The standard answer is to give a third party your API key and let it
fetch your production logs. That creates two actors handling your most
sensitive data. groundcover's answer is architectural: the AI runs inside
your account.
  • The agent runs on Amazon Bedrock in your own AWS account, on your
quota. Prompts, logs, traces, and results never leave your infrastructure.
Bedrock quota is provisioned automatically during onboarding. Nothing
to configure, no security review required for a tool you already own.
  • You pay Bedrock token costs directly to AWS, at cost, with no
groundcover markup. Set quota budgets per user or team so usage stays
predictable and controllable, mirroring the model engineering teams
already know from tools like Cursor.
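The per-user budget model described above can be sketched in a few lines. This is a hypothetical illustration only; the class and names below are invented for clarity and are not groundcover's implementation:

```python
from collections import defaultdict


class TokenBudget:
    """Hypothetical sketch: each user gets a token quota, and requests
    that would exceed it are blocked before any model call is made."""

    def __init__(self, default_quota: int):
        self.default_quota = default_quota
        self.quotas: dict[str, int] = {}          # per-user overrides
        self.used: dict[str, int] = defaultdict(int)

    def set_quota(self, user: str, tokens: int) -> None:
        self.quotas[user] = tokens

    def try_spend(self, user: str, tokens: int) -> bool:
        quota = self.quotas.get(user, self.default_quota)
        if self.used[user] + tokens > quota:
            return False                          # over budget: block the call
        self.used[user] += tokens
        return True


budget = TokenBudget(default_quota=100_000)
budget.set_quota("alice", 50_000)
print(budget.try_spend("alice", 40_000))  # True: within Alice's 50k quota
print(budget.try_spend("alice", 20_000))  # False: would exceed it
```

The point of the model is that spend is capped before it happens, which is what makes per-team usage predictable rather than merely observable after the bill arrives.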

Answers questions OTel alone cannot

  • groundcover deploys an eBPF sensor at the kernel, providing deep
automatic telemetry that doesn't require developers to instrument
anything. groundcover sees the complete picture of your infrastructure,
not just the services someone remembered to trace.
  • Every signal is enriched with a cross-signal identifier at ingest.
Agent mode connects the dots across logs, traces, metrics, and events
automatically, inferring service purpose, dependencies, and topology
from the data alone, without anyone building a map manually.
  • Ask questions that manual instrumentation makes impossible: how many
databases are running, which services changed behavior in the last hour,
what a given workload is talking to. eBPF has always provided this depth.
Agent mode makes it accessible to any engineer, not just the ones who 
know where to look.
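The kind of inference described above, deriving dependencies and database inventory from observed connections, can be sketched as follows. The workload names, ports, and event shape here are invented for illustration; they are not groundcover's schema:

```python
from collections import defaultdict

# Hypothetical kernel-level connection events an eBPF sensor might observe:
# (source workload, destination workload, destination port).
DB_PORTS = {5432: "postgres", 3306: "mysql", 6379: "redis"}

connections = [
    ("checkout", "payments", 8080),
    ("checkout", "orders-db", 5432),
    ("payments", "cache", 6379),
    ("orders", "orders-db", 5432),
]

deps: dict[str, set[str]] = defaultdict(set)
databases: set[tuple[str, str]] = set()

for src, dst, port in connections:
    deps[src].add(dst)                      # build the dependency map
    if port in DB_PORTS:
        databases.add((dst, DB_PORTS[port]))  # classify databases by port

print(sorted(deps["checkout"]))  # ['orders-db', 'payments']
print(sorted(databases))         # [('cache', 'redis'), ('orders-db', 'postgres')]
```

Nothing in this sketch required instrumenting `checkout` or `payments`: the topology falls out of connection data alone, which is why questions like "how many databases are running" become answerable.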

An AI that lives in your investigation

  • Agent mode is accessible from any page in the product, already aware of
your current context. Spotted something unusual on the Traces page?
@mention Agent mode and it continues the investigation from exactly where
you are, with no context lost and no tool to switch to.
  • Agent mode produces first-class groundcover assets: dashboards,
monitors, gcQL queries, and OTTL pipelines. Everything Agent mode builds
uses the same schema as the rest of the platform so outputs are
immediately usable, modifiable, and observable. Every tool call is visible
in the relevant product page.
  • Open multiple Agent mode tabs to run parallel investigations, matching how
engineering teams actually work incidents. One thread on a latency spike,
another on a deployment that looks off. Agent mode is part of the investigation,
not a detour from it.

Open-ended investigation, powered by gcQL

  • Most AI agents only activate when an alert fires. groundcover's agent
supports open-ended investigation: questions without a pre-existing
monitor, incident ticket, or known failure state. That covers the majority
of day-to-day engineering work, not just on-call firefighting.
  • Agent mode uses gcQL, groundcover's unified query language, to query logs,
metrics, traces, and events through a single interface. It runs complex
queries in parallel and pushes processing to the backend rather than
pulling raw data into the context window, making responses faster and
more accurate. Every query it runs is visible and reusable, not a black box.
  • Background jobs run during off-hours: auto-generating service topology
maps, producing daily incident summaries, and surfacing suggested
configuration changes, all queued for human review before anything
is applied. When an engineer starts their day, relevant context is
already waiting.
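The difference between pulling raw data into the context window and pushing processing to the backend can be sketched in miniature. The "backend" below is just a list standing in for groundcover's datastore, and gcQL itself is not reproduced; this is purely an assumption-laden illustration of the size argument:

```python
import json
from collections import Counter

# A mock backend holding 1,000 error logs across two services.
logs = [{"service": "api", "level": "error"}] * 900 + \
       [{"service": "worker", "level": "error"}] * 100

# Naive approach: ship every raw row into the model's context window.
raw_payload = json.dumps(logs)

# Push-down approach: the backend aggregates; only the summary travels.
summary = Counter(row["service"] for row in logs)
summary_payload = json.dumps(summary)

# The aggregated answer is orders of magnitude smaller than the raw rows.
print(len(raw_payload) > 100 * len(summary_payload))  # True
```

A smaller payload is not just cheaper: it leaves the context window free for reasoning, which is the mechanism behind "faster and more accurate" responses.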

The first AI agent that keeps all your production data 100% in-house

  • Runs on Amazon Bedrock inside your own AWS account with no data transfer to audit, no third party to trust, and no compliance conversation to have
  • Compliant with GDPR, CCPA, and the strictest enterprise data residency requirements by architecture, not by policy
  • No AI surcharges. Pay your own Bedrock token costs directly, with full quota controls per user and team

FAQs

How is this different from other AI tools in observability?

Most AI tools in observability work by connecting to your data via API and sending it to an external LLM, meaning your production logs, traces, and service credentials pass through at least two third parties. groundcover's agent runs on Amazon Bedrock inside your own AWS account. The AI processes data where the data lives. Nothing leaves. This is not a configuration option; it is how the product is built. The result: no compliance conversation, no security review, no third party to trust.

Why can this agent answer questions other AI tools cannot?

An AI agent is only as good as the data it can see. Tools built on OpenTelemetry can only answer questions about services that were manually instrumented, which is never the complete picture. groundcover deploys an eBPF sensor at the kernel, capturing automatic telemetry across every service, database connection, and network call without any developer instrumentation. This lets the agent answer questions that manual instrumentation makes impossible: which services are talking to each other, what databases are running, what changed in the last hour. The data advantage is structural, and no competitor can replicate it without rebuilding their instrumentation layer from scratch.

Can I use the agent for open-ended questions, not just incidents?

Yes. Most AI agents in observability are incident-triggered and only activate when an alert fires. groundcover's agent supports open-ended investigation: questions without a pre-existing monitor, incident ticket, or known failure state. This covers the majority of day-to-day engineering work, from exploring unusual patterns and validating a deployment to understanding why a specific service is slow for one customer but not others. The agent is designed for daily use, not just on-call firefighting.

How does the agent learn my service topology?

Every signal groundcover collects, including logs, traces, metrics, and events, is enriched with a cross-signal identifier at ingest. The agent walks through them and connects the dots automatically. It sees container images, environment variables, DNS usage, traffic patterns, and inter-service connections. From that data alone it infers what each service does, who it talks to, and what normal behavior looks like. No one needs to document the topology manually. The agent builds it from data it already has and refines it over time. One customer recently described spending weeks building a manual topology map so their SRE agent could function. groundcover generates that automatically.

What is gcQL and why does it matter for the agent?

gcQL is groundcover's unified query language, a single interface for querying logs, metrics, traces, events, entities, and monitors. Most observability platforms accumulate a different query model for each data type, which means an AI agent has to know which interface to use, translate between them, and reassemble the results. groundcover's agent learns one language, not seven. Every query it runs is visible in the relevant product page and can be modified, saved as a monitor, or turned into a dashboard widget. The agent teaches you to do what it did. Nothing is a black box.

What does the agent actually produce?

The agent produces first-class groundcover assets: dashboards, monitors, gcQL queries, and OTTL pipelines. Everything the agent builds uses the same schema as the rest of the platform, so outputs are immediately usable and not exports that need to be reformatted or re-entered. Every tool call is visible in the relevant product page. The agent can also run background jobs during off-hours, auto-generating service topology maps, daily incident summaries, and suggested configuration changes, all surfaced for review before anything is applied.

How much does the AI agent cost?

The groundcover AI Agent is included in Pro, Enterprise, and OnPrem plans with no AI surcharge. You pay your own Amazon Bedrock token costs directly, at cost, with no groundcover markup. Quota budgets can be set per user or team to keep usage predictable. Visit our pricing page for more information.

Observability
for what comes next.

Start in minutes. No migrations. No data leaving your infrastructure. No surprises on the bill.