Case Study 02 — Datadog
Carlos Diaz
Staff Product Designer

Redesigning Search Suggestions

A unified suggestion experience across 80+ search surfaces — making autocomplete clear, helpful, and reliable everywhere.

68% — Typing reduction
9.3% — Click-through rate
1M+ — Suggestions selected
326ms — Load time
Project overview video
01 — Context

Search powers nearly every workflow at Datadog — Logs, APM, RUM, Dashboards, Monitors. But over time, 80+ product teams had each built their own version.

The result: 80+ search implementations with different behaviors, different suggestion UIs, and wildly different levels of usefulness. Recents showed up in some products but not others. Metadata was repeated or missing. Nothing was grouped or prioritized.

The DASH 2025 survey confirmed what we suspected: 21% of users described autocomplete as "overwhelming." Over 54% had given up on suggestions entirely, relying on recent searches or trial-and-error instead.

How do you make search suggestions feel clear, helpful, and trustworthy — across every product surface?

Role — Design Lead
Timeline — Q2–Q3 2025 (6 months)
Shipped to — Monitor Log Query Editor
Teams — Logs, Monitors, RUM, APM
02 — The Problem

I audited 80+ search UIs across 12+ products. The experience wasn't just inconsistent — it was actively working against users.

78%

No grouping or structure

78% of audited UIs showed flat lists with no categories or hierarchy. Users had to read every suggestion to find what they needed — killing speed on a tool built for speed.

21%

Users didn't trust suggestions

One in five users called autocomplete "overwhelming." When you can't predict what the system will show you, you stop relying on it. Many defaulted to trial-and-error.

54%

Bypassed autocomplete entirely

Over half of users relied on recent searches as their primary navigation. The suggestion system was there — they just didn't use it, because it hadn't earned their confidence.

Before-state screenshot
03 — Process

Six months from internal audit to shipped MVP — grounded in real data, built through cross-team collaboration.

Apr – May

Audit + Research

Catalogued 80+ search implementations across 12 products. Cross-referenced with DASH 2025 survey data (300+ responses) and session recordings. Built a taxonomy of suggestion types, backend sources, and interaction patterns.

Jun – Jul

Systems Design

Worked with Monitors, Logs, and RUM teams to map suggestion types against backend readiness. Key constraint: the design had to work identically on both the existing static engine and the newer ML-powered predictions — same UI, different brains.
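The "same UI, different brains" constraint can be sketched as a single interface that the panel depends on, with either engine plugged in behind it. This is an illustrative sketch only — the names (`SuggestionEngine`, `renderPanel`) and the stubbed engines are hypothetical, not Datadog's actual API.

```typescript
// One contract for the panel; two interchangeable engines behind it.
interface SuggestionEngine {
  kind: "static" | "predictive";
  suggest(query: string): string[];
}

// Existing static engine: simple prefix match over a fixed vocabulary (stub).
const staticEngine: SuggestionEngine = {
  kind: "static",
  suggest: (q) => ["service:", "status:", "source:"].filter((s) => s.startsWith(q)),
};

// Newer ML engine: stubbed here; in reality it would return ranked predictions.
const predictiveEngine: SuggestionEngine = {
  kind: "predictive",
  suggest: (q) => (q ? [`${q} (predicted)`] : []),
};

// The panel only ever sees the interface, so either engine plugs in unchanged.
function renderPanel(engine: SuggestionEngine, query: string): string[] {
  return engine.suggest(query);
}
```

Because the panel never branches on `kind`, swapping the static engine for the predictive one is a one-line change at the call site.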

Aug – Sep

Prototype + Ship

Built and tested floating panel, footer actions, info cards, and history mode in Figma and code prototypes. Ran async design reviews with 4 product teams. Shipped MVP into the Monitor Log Query Editor with zero frontend regressions.

04 — Solution

One component. Every search surface. Both backends. Five design decisions that made it work.

Anchored to the Cursor

Old dropdowns were pinned to the search bar. On long or multiline queries, suggestions appeared far from where you were actually typing. The new floating panel tracks the active input position — key, value, or free text — so your eyes never have to jump.

Floating panel
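The first step in tracking the active input position is knowing which token the caret is inside. A minimal sketch, with a hypothetical `activeToken` helper (the real implementation would also map that token's offset to on-screen coordinates):

```typescript
// Find the whitespace-delimited token under the caret so the panel
// can anchor to it rather than to the search bar as a whole.
function activeToken(
  query: string,
  caret: number
): { start: number; end: number; text: string } {
  let start = caret;
  while (start > 0 && !/\s/.test(query[start - 1])) start--; // walk left to token start
  let end = caret;
  while (end < query.length && !/\s/.test(query[end])) end++; // walk right to token end
  return { start, end, text: query.slice(start, end) };
}
```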

Opinionated Limits

More suggestions isn't better — it's noisier. We capped results at 10 for static engines and 5 for predictive. This forced every product team to rank their suggestions by relevance instead of dumping everything into a list. The dropdown became scannable at a glance.

Suggestion limits
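The cap-and-rank rule above is simple enough to express in a few lines. This is a sketch under assumed names (`Suggestion`, `rankAndCap`, a per-team `relevance` score), not the shipped code:

```typescript
// Hypothetical shape for a suggestion supplied by a product team.
interface Suggestion {
  label: string;
  relevance: number; // 0..1, assigned by the owning team
}

type EngineKind = "static" | "predictive";

// Caps from the case study: 10 results for static engines, 5 for predictive.
const MAX_RESULTS: Record<EngineKind, number> = { static: 10, predictive: 5 };

function rankAndCap(items: Suggestion[], engine: EngineKind): Suggestion[] {
  return [...items]
    .sort((a, b) => b.relevance - a.relevance) // teams must rank, not dump
    .slice(0, MAX_RESULTS[engine]);
}
```

The hard slice is the point: with only 10 (or 5) slots, an unranked dump of suggestions is visibly worse than a curated list, which pushes the ranking work back onto each product team.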

History Mode

54% of users already relied on recent searches as their main navigation method. Instead of fighting that behavior, we designed for it. History Mode surfaces full query history — not just the last few entries — as a dedicated, searchable view.

History mode

Contextual Footer

Different users need different help at different times. A persistent footer adapts its CTA to context: new users see "Syntax Help," returning users see their history toggle, and products with NLQ support show "Try Natural Language." One component, many entry points.

Footer actions
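The footer's CTA selection reduces to a small decision function. A sketch with assumed names and an assumed priority order (new-user help first, then NLQ, then history), neither of which is specified in the case study:

```typescript
// Hypothetical context flags the footer component might receive.
interface FooterContext {
  isNewUser: boolean;
  supportsNLQ: boolean; // product exposes natural-language querying
}

function footerCta(ctx: FooterContext): string {
  if (ctx.isNewUser) return "Syntax Help";
  if (ctx.supportsNLQ) return "Try Natural Language";
  return "View History"; // returning users default to the history toggle
}
```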

Info Cards

Suggestions are just strings until you know what they mean. Hovering now surfaces field type, description, and usage frequency. Users can also rename or edit attributes directly from the card — turning the search bar into a lightweight data governance tool.

Info cards
05 — Outcomes

Shipped to the Monitor Log Query Editor. Four more teams committed to adoption within weeks.

68%

Typing Reduction

Users needed less than a third of the keystrokes to complete queries. Most finished in under 7 seconds.

9.3%

Click-Through Rate

Over 1M suggestions selected. Recent searches and saved filters drove nearly half of all interactions.

326ms

Load Time

Down from inconsistent response times across products. Fast enough to feel like the suggestions were already there.

5 teams

Adoption Pipeline

RUM, Dashboards, APM, and Security all committed to adopting the shared component in Q4.

Metadata Engagement

Editable field definitions were used 3× more often than the old facet panel. Info cards were hovered in 3–7% of rich queries.


Platform Standard

The component is now the required starting point for any new search surface built at Datadog.
