New Relic

Partner Integration
  • Technology Partner - Integration
Categories
  • Analytics
  • Session replay
  • Website performance
Type of Integration
  • 1st party

Connect A/B test data with performance monitoring using Convert + New Relic

The Convert + New Relic integration is built to connect your experimentation program with your performance monitoring stack. It sends experiment and variation data from Convert into New Relic so teams can analyze tests alongside real-time performance and reliability metrics.

Because experiment exposure is passed as custom attributes, every user session in New Relic can be tied back to the specific A/B test experience the visitor saw. This makes it easy to segment, troubleshoot, and report on performance and business outcomes by variation.

Implementation is lightweight and uses your existing Convert tracking code and New Relic JavaScript snippet, plus a small custom script. All analysis then happens inside New Relic, using the dashboards, queries, and reports your teams already rely on.
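
As a rough illustration, that custom script could look like the sketch below. The attribute names match the “Exp Name” and “Var Name” attributes referenced later on this page, and newrelic.setCustomAttribute is the New Relic Browser agent call named in the capabilities that follow; the shape of Convert’s client-side data (window.convert.currentData.experiments and its field names) is an assumption here, so check Convert’s documentation for the exact object your tracking code exposes.

```javascript
// Minimal sketch of the small custom script, assuming the Convert tracking
// code exposes the visitor's active experiments on
// window.convert.currentData.experiments (field names below are assumptions)
// and the New Relic Browser agent is available as window.newrelic.
(function tagSessionWithConvertExposure() {
  var convertObj = window.convert;
  var nr = window.newrelic;
  if (!convertObj || !convertObj.currentData || !nr) {
    return; // one of the snippets has not loaded yet
  }

  var experiments = convertObj.currentData.experiments || {};
  Object.keys(experiments).forEach(function (expId) {
    var exposure = experiments[expId] || {};

    // Attach the exposure to every event New Relic records for this page view.
    // If several experiments run at once, the last one wins here; a production
    // script would use per-experiment attribute names instead.
    nr.setCustomAttribute('Exp Name', exposure.experience_name || expId);
    nr.setCustomAttribute('Var Name', exposure.variation_name || '');
  });
})();
```

The script should run after both snippets have loaded, for example from project-level JavaScript in Convert or a deferred script tag.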

Key capabilities

  • Send current experiment and variation names from Convert into New Relic as custom attributes/events (see the event sketch after this list)
  • Work with existing Convert and New Relic JavaScript snippets using a small additional script
  • Use New Relic’s native setCustomAttribute method for clean, flexible attribute naming
  • Surface Convert experiment and variation attributes in New Relic Insights and Data Explorer
  • Filter, query, and build dashboards in New Relic based on experiment exposure data
  • Correlate CRO test results with backend, frontend, and user experience performance metrics
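
Building on the script sketched above, exposure can also be reported on the event side, as the “attributes/events” wording in the first capability suggests. A minimal sketch, assuming the same window.newrelic global: newrelic.addPageAction records a PageAction event that carries its own attributes, so each exposure also shows up as a discrete, queryable event. The helper name and event name below are illustrative, not part of either product’s required setup.

```javascript
// Sketch: record each Convert exposure as a discrete PageAction event so it
// can be explored and queried as its own event type in New Relic.
// 'convert_exposure' and the helper name are illustrative choices.
function reportConvertExposure(expName, varName) {
  if (window.newrelic && typeof window.newrelic.addPageAction === 'function') {
    window.newrelic.addPageAction('convert_exposure', {
      'Exp Name': expName,
      'Var Name': varName
    });
  }
}

// Example call, e.g. from a variation's custom JavaScript in Convert:
// reportConvertExposure('Homepage Hero Test', 'Variation B');
```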

Benefits

  • Attribute performance and UX changes directly to specific A/B test variations
  • Understand how winning and losing variations impact site speed, stability, and application health
  • Build richer New Relic segments by experiment and variation to uncover hidden performance or conversion patterns
  • Align marketing, product, and engineering around a single source of truth for experimentation and performance data
  • Communicate the full impact of CRO tests on both user experience and business KPIs using New Relic dashboards
  • Reduce manual data stitching by keeping experimentation and performance analysis in one place

Convert and New Relic

New Relic is a digital intelligence and observability platform that helps teams monitor, troubleshoot, and optimize the performance of their applications and infrastructure. It provides real-time visibility into frontend and backend performance, user experience, and key business metrics.

Together, Convert and New Relic connect experimentation data with performance monitoring. Convert sends experiment and variation exposure into New Relic as custom attributes, enabling teams to correlate A/B test outcomes with application health, uncover performance bottlenecks tied to specific variations, and make optimization decisions based on unified experimentation and performance data.

Use Cases

Tie Conversion Uplift to Real Performance Metrics

Problem: Marketing sees a lift in conversions from an A/B test, but engineering can’t tell whether the winning variation is also increasing load times or error rates, risking long‑term UX degradation.
Solution: Convert passes experiment and variation names into New Relic as custom attributes. Teams filter performance dashboards by “Exp Name” and “Var Name” to compare latency, errors, and throughput per variation.
Outcome: You ship only winners that are both high‑converting and performant. CRO, product, and engineering align on decisions backed by a single view of business KPIs and technical health.

Detect Performance Regressions Caused by Test Variations

Problem: New UI or feature tests occasionally introduce heavy scripts or third‑party tags. Performance dips appear in New Relic, but it’s unclear which experiment or variation is responsible.
Solution: Using Convert’s integration, each session in New Relic is tagged with the active experiment and variation. Engineers segment slow transactions and JS errors by variation to pinpoint problematic experiences.
Outcome: Teams quickly isolate and roll back underperforming variants before they impact a large audience. Performance regressions are tied directly to specific tests, reducing MTTR and protecting Core Web Vitals.
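
One way to make that error segmentation concrete, sketched here as an assumption rather than the integration’s prescribed setup, is to attach the active experiment context to reported browser errors via the New Relic agent’s noticeError call, which accepts an optional custom-attributes object. The getConvertContext helper and the Convert field names it reads are hypothetical.

```javascript
// Sketch: tag JavaScript errors with the active Convert experiment so error
// events in New Relic can be faceted by variation. The helper below and the
// Convert field names it reads are assumptions for this example.
function getConvertContext() {
  var c = window.convert;
  if (!c || !c.currentData || !c.currentData.experiments) return {};
  var ids = Object.keys(c.currentData.experiments);
  if (!ids.length) return {};
  var exposure = c.currentData.experiments[ids[0]] || {};
  return {
    'Exp Name': exposure.experience_name || ids[0],
    'Var Name': exposure.variation_name || ''
  };
}

window.addEventListener('error', function (event) {
  if (window.newrelic && event.error) {
    // The second argument adds custom attributes to the recorded error event.
    window.newrelic.noticeError(event.error, getConvertContext());
  }
});
```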

Quantify Revenue Impact of Performance Improvements

Problem: Engineering invests in performance optimizations, but it’s hard to prove how much faster pages actually influence conversions and revenue across different experiments.
Solution: Convert sends experiment context into New Relic, where teams correlate variation‑level conversion metrics with performance indicators like page load time and Apdex scores.
Outcome: You can demonstrate that a faster variation not only improves UX metrics but also lifts revenue. This evidence justifies further investment in performance‑driven experimentation roadmaps.

Troubleshoot Region- or Device-Specific Test Failures

Problem: A/B tests look healthy in analytics, but certain regions, browsers, or devices report spikes in errors or timeouts. Without experiment context, debugging is slow and guesswork‑heavy.
Solution: With Convert attributes in New Relic, teams cross‑filter by geography, device, browser, and “Var Name” to see which combinations break. Custom attributes highlight exactly which variant misbehaves for whom.
Outcome: Localized issues are identified and fixed quickly, avoiding blanket test shutdowns. You maintain experimentation velocity while protecting critical segments from broken or degraded experiences.

Align Experiment Dashboards Across Marketing and DevOps

Problem: Marketing tracks test results in Convert, while DevOps monitors uptime and errors in New Relic. Each team has partial context, leading to misaligned priorities and fragmented reporting.
Solution: Convert’s experiment and variation data becomes a shared dimension in New Relic Insights. Both teams build unified dashboards that show conversions, performance, and reliability per variation.
Outcome: Stakeholders share a common, real‑time view of how each experiment affects both business and technical KPIs. Decisions on rollouts, canary releases, and feature flags become faster and less contentious.

Run Safer High-Risk Experiments with Observability Guardrails

Problem: Product wants to test bold changes (checkout flows, pricing, personalization), but engineering worries about stability and can’t easily monitor risk at the variation level.
Solution: By tagging sessions with Convert experiment attributes in New Relic, teams set alerts and SLOs scoped to specific variations, watching error rates, response times, and resource usage per test.
Outcome: High‑impact experiments run with clear guardrails. Risky variants are automatically flagged or throttled when they breach performance thresholds, enabling aggressive testing without compromising reliability.
