Inspectlet

Partner
Integration
  • Technology Partner - Integration
Categories
  • Analytics
  • Heatmaps
  • Session replay
Type of Integration
  • 1st party

Connect Convert experiments with Inspectlet session recordings for deeper, behavior-rich A/B test insights

The Convert + Inspectlet integration connects your A/B test results with real user behavior by tagging Inspectlet session recordings with the exact experiments and variations each visitor saw. Adding a lightweight JavaScript snippet on top of your existing Convert and Inspectlet tracking codes lets you filter and segment recordings by experiment and variation within minutes. Experiment metadata appears directly inside Inspectlet’s dashboard, so CRO and growth teams can jump straight from test results to the relevant replays. The integration is opt-in and performance-safe, giving marketers full control over when and where experiment data is sent to Inspectlet.

Key capabilities

  • Tag Inspectlet session recordings with Convert experiment and variation names for each recorded visitor
  • Filter, search, and segment Inspectlet sessions by specific experiments and variations
  • Surface experiment and variation metadata directly in Inspectlet’s session recordings dashboard
  • Enable the integration via a small JavaScript snippet using Convert’s Project Global JS or on-site placement
  • Loop through active Convert experiments and send their names to Inspectlet using its tagging API
  • Keep tracking lightweight by sending only experiment and variation metadata, not heavy data payloads
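The capabilities above can be sketched as a small snippet placed in Convert’s Project Global JS. This is a hedged illustration, not the official integration code: it assumes Inspectlet’s documented session-tagging call (`__insp.push(['tagSession', {...}])`) and an assumed shape for Convert’s active-experiment data (`experiments` keyed by ID, each with `experiment_name` and `variation_name`) — verify both property names against your own Convert and Inspectlet accounts before use.

```javascript
// Hypothetical sketch: tag the current Inspectlet session with the
// Convert experiments and variations this visitor is bucketed into.
// Assumed data shape: { "1001": { experiment_name: "...", variation_name: "..." } }
function tagInspectletWithConvert(convertData, insp) {
  var experiments = (convertData && convertData.experiments) || {};
  var tags = {};
  Object.keys(experiments).forEach(function (id) {
    var exp = experiments[id];
    // One tag per experiment: experiment name -> variation name.
    // Only lightweight name strings are sent, no heavy payloads.
    tags[exp.experiment_name] = exp.variation_name;
  });
  if (Object.keys(tags).length > 0) {
    insp.push(['tagSession', tags]); // Inspectlet's session-tagging API
  }
  return tags;
}

// On a live page this would run against the real globals, e.g.:
//   tagInspectletWithConvert(window.convert.currentData,
//                            window.__insp || (window.__insp = []));
// (window.convert.currentData is an assumed location for Convert's data.)

// Stubbed demonstration, so the logic can be exercised outside a browser:
var fakeConvert = {
  experiments: {
    '1001': { experiment_name: 'Homepage Hero Test', variation_name: 'Variation B' }
  }
};
var fakeInsp = []; // stands in for window.__insp, which is an array-like queue
tagInspectletWithConvert(fakeConvert, fakeInsp);
```

Tagging with names rather than IDs is a deliberate choice here: names make the filters in Inspectlet’s dashboard human-readable, at the cost of splitting history if an experiment is renamed mid-test.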

Benefits

  • See how users in each variation actually behave by watching precisely tagged session replays
  • Combine quantitative A/B test outcomes with qualitative behavior insights in one workflow
  • Validate and refine hypotheses faster by tying test performance to real user interactions
  • Troubleshoot underperforming variations quickly by jumping straight to relevant recordings
  • Enhance CRO processes with precise, variation-level segmentation of session recordings without complex setup

Convert and Inspectlet

Inspectlet is a session recording and behavior analytics platform that helps teams understand how visitors interact with their websites through replays, heatmaps, and detailed engagement insights.

Together, Convert and Inspectlet let experimentation teams connect the “what” of A/B test performance with the “why” behind user behavior. By tagging Inspectlet session recordings with Convert experiment and variation data, teams can filter and analyze replays by test experience, uncover friction, validate hypotheses, and turn test results into clear, actionable insights with minimal setup.

Use Cases

Diagnose Why a Winning Variant Still Feels Risky

Problem: An A/B test shows a statistically significant lift, but stakeholders worry the new experience might introduce UX issues or long-term friction that numbers alone don’t reveal.
Solution: Convert tags each visitor’s experiment and variation in Inspectlet, letting teams filter replays to only the winning variant and visually inspect how users navigate, scroll, and interact.
Outcome: Teams validate that the winning experience is both performant and usable, increasing confidence in rollout decisions and reducing the risk of shipping a problematic variant at scale.

Uncover Hidden Friction in Losing Variations

Problem: A variation underperforms, but funnel metrics don’t explain why users drop off or where exactly they struggle within the new experience.
Solution: Using Convert’s experiment and variation tags in Inspectlet, analysts filter recordings to only the losing variation and watch real user sessions to spot confusion, rage clicks, or dead ends.
Outcome: Clear qualitative evidence reveals specific UX issues to fix, enabling rapid iteration on new test variants that directly address observed friction and improve conversion rates.

Validate Hypotheses for Complex UX Experiments

Problem: For multi-step flows or complex UI changes, quantitative test results don’t fully confirm whether the original UX hypothesis about user behavior was correct.
Solution: Convert passes experiment and variation names into Inspectlet as session tags, allowing teams to watch only those users exposed to a given hypothesis and see if they behave as expected.
Outcome: Marketers refine or confirm their hypotheses with behavior-level evidence, designing smarter follow-up experiments that target the real drivers behind user decisions.

Speed Up Debugging of Broken or Buggy Variants

Problem: A new variation suddenly tanks performance, and it’s unclear whether the cause is a UX misstep, a JavaScript error, or a specific browser/device issue.
Solution: With Convert’s variation data visible in Inspectlet, teams instantly filter to affected experiment sessions and watch replays to pinpoint where forms fail, elements don’t load, or buttons misfire.
Outcome: Root causes are identified in hours instead of days, allowing quick hotfixes or rollbacks and minimizing revenue loss from broken or malfunctioning test experiences.

Segment Session Replays by Audience and Experience

Problem: Different audience segments respond differently to the same test, but it’s hard to understand how behavior varies across segments and variations using only analytics reports.
Solution: By combining Convert’s experiment/variation tags with Inspectlet’s filters, teams isolate replays for specific tests and user segments to compare how each group interacts with each variant.
Outcome: Deeper insight into segment-specific behavior informs more targeted personalization experiments and helps prioritize which experiences to roll out to which audiences.

Turn Flat or Inconclusive Tests into Actionable Insights

Problem: Many experiments end with no clear winner, leaving teams unsure what to change next and risking test fatigue and stalled optimization programs.
Solution: Convert sends experiment and variation metadata into Inspectlet, enabling teams to review sessions for both variants and identify subtle behavior differences that metrics missed.
Outcome: Even “flat” tests yield learnings about navigation patterns, content engagement, and micro-friction, fueling better-designed follow-up experiments and continuous CRO momentum.
