A unified suggestion experience across 80+ search surfaces — making autocomplete clear, helpful, and reliable everywhere.
Search powers nearly every workflow at Datadog — Logs, APM, RUM, Dashboards, Monitors. But over time, 80+ product teams had each built their own version.
The result: 80+ search implementations with different behaviors, different suggestion UIs, and wildly different levels of usefulness. Recents showed up in some products but not others. Metadata was repeated or missing. Nothing was grouped or prioritized.
The DASH 2025 survey confirmed what we suspected: 21% of users described autocomplete as "overwhelming." Over 54% had given up on suggestions entirely, relying on recent searches or trial-and-error instead.
How do you make search suggestions feel clear, helpful, and trustworthy — across every product surface?
I audited 80+ search UIs across 12+ products. The experience wasn't just inconsistent — it was actively working against users.
78% of audited UIs showed flat lists with no categories or hierarchy. Users had to read every suggestion to find what they needed — killing speed on a tool built for speed.
One in five users called autocomplete "overwhelming." When you can't predict what the system will show you, you stop relying on it. Many defaulted to trial-and-error.
Over half of users relied on recent searches as their primary navigation. The suggestion system was there — they just didn't use it because it wasn't earning confidence.
Six months from internal audit to shipped MVP — grounded in real data, built through cross-team collaboration.
Catalogued 80+ search implementations across 12 products. Cross-referenced with DASH 2025 survey data (300+ responses) and session recordings. Built a taxonomy of suggestion types, backend sources, and interaction patterns.
Worked with Monitors, Logs, and RUM teams to map suggestion types against backend readiness. Key constraint: the design had to work identically on both the existing static engine and the newer ML-powered predictions — same UI, different brains.
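The "same UI, different brains" constraint can be sketched as a single rendering path over an engine interface. This is an illustrative sketch only; `SuggestionEngine`, `StaticEngine`, and `PredictiveEngine` are hypothetical names, not Datadog's actual API.

```typescript
// Hypothetical sketch: one suggestion panel, two interchangeable backends.
interface Suggestion {
  label: string;
  source: "static" | "predictive";
}

interface SuggestionEngine {
  suggest(query: string): Suggestion[];
}

// Stand-in for the existing static engine: prefix matching over a fixed vocabulary.
class StaticEngine implements SuggestionEngine {
  constructor(private vocabulary: string[]) {}
  suggest(query: string): Suggestion[] {
    return this.vocabulary
      .filter((v) => v.startsWith(query))
      .map((label) => ({ label, source: "static" as const }));
  }
}

// Stand-in for the newer ML-powered engine: results come from a predictor function.
class PredictiveEngine implements SuggestionEngine {
  constructor(private predict: (q: string) => string[]) {}
  suggest(query: string): Suggestion[] {
    return this.predict(query).map((label) => ({ label, source: "predictive" as const }));
  }
}

// The panel renders identically no matter which engine it is handed.
function renderPanel(engine: SuggestionEngine, query: string): string[] {
  return engine.suggest(query).map((s) => s.label);
}
```

Because the panel only depends on the interface, a product team can swap engines without touching the UI layer.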
Built and tested floating panel, footer actions, info cards, and history mode in Figma and code prototypes. Ran async design reviews with 4 product teams. Shipped MVP into the Monitor Log Query Editor with zero frontend regressions.
One component. Every search surface. Both backends. Five design decisions that made it work.
Old dropdowns were pinned to the search bar. On long or multiline queries, suggestions appeared far from where you were actually typing. The new floating panel tracks the active input position — key, value, or free text — so your eyes never have to jump.
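The core of caret tracking is resolving which token the cursor is inside, then anchoring the panel to that token's position rather than the input's edge. A minimal sketch, assuming whitespace-separated tokens (real query grammars are richer); `tokenAtCaret` is a hypothetical helper, not the shipped code.

```typescript
// Hypothetical sketch: find the token under the caret so the panel can
// anchor to it instead of to the search bar.
interface ActiveToken {
  text: string;
  start: number; // character offset where the token begins
  end: number;   // character offset just past the token
}

function tokenAtCaret(query: string, caret: number): ActiveToken {
  // Walk left and right from the caret until whitespace or a boundary.
  let start = caret;
  while (start > 0 && !/\s/.test(query[start - 1])) start--;
  let end = caret;
  while (end < query.length && !/\s/.test(query[end])) end++;
  return { text: query.slice(start, end), start, end };
}
```

The returned offsets map to pixel coordinates via the input's text metrics, so the panel follows the caret across multiline queries.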
More suggestions isn't better — it's noisier. We capped results at 10 for static engines and 5 for predictive. This forced every product team to rank their suggestions by relevance instead of dumping everything into a list. The dropdown became scannable at a glance.
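The cap described above amounts to "rank first, truncate second." A minimal sketch with hypothetical names (`capSuggestions`, `relevance`); how each team scores relevance is up to them, the component only enforces the limit.

```typescript
// Hypothetical sketch of the cap: rank by relevance, then truncate per engine type.
type EngineKind = "static" | "predictive";

const SUGGESTION_CAP: Record<EngineKind, number> = {
  static: 10,     // static engines may return up to 10 results
  predictive: 5,  // predictive engines are capped tighter
};

interface Ranked {
  label: string;
  relevance: number; // higher is more relevant; scoring is owned by the product team
}

function capSuggestions(results: Ranked[], kind: EngineKind): string[] {
  return [...results]
    .sort((a, b) => b.relevance - a.relevance)
    .slice(0, SUGGESTION_CAP[kind])
    .map((r) => r.label);
}
```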
54% of users already relied on recent searches as their main navigation method. Instead of fighting that behavior, we designed for it. History Mode surfaces full query history — not just the last few entries — as a dedicated, searchable view.
Different users need different help at different times. A persistent footer adapts its CTA to context: new users see "Syntax Help," returning users see their history toggle, and products with NLQ support show "Try Natural Language." One component, many entry points.
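The adaptive footer reduces to a small decision function over user context. This sketch is illustrative; the field names and the precedence between history and natural language are assumptions, not the documented behavior.

```typescript
// Hypothetical sketch: the footer picks one CTA from the current context.
interface FooterContext {
  isNewUser: boolean;   // no query history yet
  hasHistory: boolean;  // has prior searches to toggle into
  supportsNLQ: boolean; // product has natural-language query support
}

type FooterCta = "Syntax Help" | "Try Natural Language" | "View History";

function footerCta(ctx: FooterContext): FooterCta {
  if (ctx.isNewUser) return "Syntax Help";            // onboarding help first
  if (ctx.supportsNLQ) return "Try Natural Language"; // assumed precedence over history
  return "View History";                              // returning users get the toggle
}
```

Keeping this as one pure function means every product surface renders the same footer component and only supplies its own context.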
Suggestions are just strings until you know what they mean. Hovering now surfaces field type, description, and usage frequency. Users can also rename or edit attributes directly from the card — turning the search bar into a lightweight data governance tool.
Shipped to the Monitor Log Query Editor. Four more teams committed to adoption within weeks.
Users completed queries with fewer than a third of the keystrokes they needed before. Most finished in under 7 seconds.
Over 1M suggestions selected. Recent searches and saved filters drove nearly half of all interactions.
Down from inconsistent response times across products. Fast enough to feel like the suggestions were already there.
RUM, Dashboards, APM, and Security all committed to adopting the shared component in Q4.
Editable field definitions were used 3× more often than the old facet panel. Info cards were hovered in 3–7% of rich queries.
The component is now the required starting point for any new search surface built at Datadog.