We’ve all seen it: an important metric in your analytics tool suddenly changes its behaviour. The Average Order Value drops drastically, there are fewer add-to-carts than usual, or the form-submit rate falls below a critical level. This is the point where our customers or stakeholders - most often slightly in panic - reach out and request answers or, even better, solutions. 

Now the problem here is that, in most cases, there is no single clear cause for why a metric develops in a certain way. Instead, there can be multiple things influencing the evolution of a specific KPI. This is why I think it is very important to check a variety of things when troubleshooting your data. To do so, I established a clear and very practical approach to assessing these problems. In this blog post I want to introduce you to this approach and explain how and why it has worked for me in the past. 

Step 1: Exclude the obvious

Before deep diving into the world of analytics, my first step is to make sure that there is no critical technical issue that should be addressed immediately to prevent further loss of data or, even worse, revenue. So the first thing to do is: 

  • Check if the website or important functionalities/applications are down. 
  • Briefly debug the analytics implementation to check if data collection still works.
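A first-pass availability check can even be scripted. The sketch below uses Python’s standard library to ping a few critical pages; the endpoint URLs are purely hypothetical placeholders, and a real check would cover whatever pages and applications matter for your shop:

```python
# Minimal availability check for the most critical pages.
# The endpoint list is a hypothetical example; swap in your own URLs.
from urllib.request import urlopen
from urllib.error import URLError

CRITICAL_ENDPOINTS = [
    "https://www.example-shop.com/",
    "https://www.example-shop.com/cart",
    "https://www.example-shop.com/checkout",
]

def check_endpoint(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx/3xx status.

    HTTPError (4xx/5xx) is a subclass of URLError, so error pages and
    unreachable hosts both come back as False.
    """
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except URLError:
        return False

def health_report(urls) -> dict:
    """Map each URL to True (reachable) or False (down/unreachable)."""
    return {url: check_endpoint(url) for url in urls}
```

This stays deliberately shallow - it only answers “is the site up at all?”, which is exactly the scope of Step 1.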

The important thing for me here is to really only focus on the most important and therefore most obvious things. At this point in time, I try not to randomly start debugging specific functionalities on the website or details in my implementation. This can cost a lot of time if it’s not done in a structured way. 

Step 2: Clearly define the problem

Depending on how and by whom the issue in the numbers was detected, the problem statement might be very high-level: “Our Average Order Value (AOV) dropped below $50 last Wednesday.” This is just one example of what a problem statement could look like. As a data analyst, the first thing I’ve done here in the past is to specify the issue by adding information like: 

  • Since when has the problem occurred? 
  • Is the AOV the only metric impacted? (Actually, in my example it wasn’t. Alongside the decreasing AOV, we could see the Units per Transaction, or UPT, decrease as well.) 
  • Is the evolution of the metric still an issue when compared with last week / last month / the same period last year? (This way you can make sure that seasonal effects are not impacting your metrics.)
  • Is there a specific point in time when the drop occurred, or did it happen slowly, step by step? 
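The period comparisons above are easy to sketch in pandas. The toy dataset and column names below are assumptions, not a fixed schema - the point is just to compare the AOV of the current period against the same period a year earlier to rule out seasonality:

```python
# Toy daily-orders data for two short periods, one year apart.
# Column names ("revenue", "orders") are illustrative assumptions.
import pandas as pd

orders = pd.DataFrame({
    "date": pd.to_datetime([
        "2025-01-08", "2025-01-09", "2024-01-08", "2024-01-09",
    ]),
    "revenue": [480.0, 450.0, 620.0, 640.0],
    "orders": [10, 10, 10, 10],
})

def aov(frame: pd.DataFrame) -> float:
    """Average Order Value = total revenue / total orders."""
    return frame["revenue"].sum() / frame["orders"].sum()

this_year = orders[orders["date"].dt.year == 2025]
last_year = orders[orders["date"].dt.year == 2024]

print(f"AOV this period: {aov(this_year):.2f}")
print(f"AOV same period last year: {aov(last_year):.2f}")
```

If both periods show a similar dip, you are likely looking at seasonality rather than a genuine anomaly.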

By answering these questions you will get a more complete picture of the issue. This will be an important basis for our next step. 

Step 3: Generate hypotheses for potential causes

Now, before I start creating dashboards and analyzing the numbers, I open Confluence (or any other tool you prefer for documentation). There I write down every reason that comes to mind why the AOV and the UPT might be on a downward trend. Here are some example areas to think about when looking for the cause of your issue: 

Tracking Issues:

  • Check if analytics tags, pixels, or tracking codes were recently changed.
  • Compare data from different tools (e.g., Google Analytics vs. CRM).
  • Validate real-time events and debug tracking setup.
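One way to operationalize the cross-tool comparison is to merge daily order counts from two exports and flag the days where they diverge. The inline numbers and the 5% threshold below are assumptions for illustration:

```python
# Compare daily order counts from two systems, e.g. an analytics export
# vs. a CRM export. Data and the 5% threshold are illustrative assumptions.
import pandas as pd

analytics = pd.DataFrame({
    "date": ["2025-01-06", "2025-01-07", "2025-01-08"],
    "orders_analytics": [200, 210, 150],
})
crm = pd.DataFrame({
    "date": ["2025-01-06", "2025-01-07", "2025-01-08"],
    "orders_crm": [202, 208, 205],
})

merged = analytics.merge(crm, on="date")

# Relative gap between the two systems per day
merged["gap"] = (
    (merged["orders_analytics"] - merged["orders_crm"]).abs()
    / merged["orders_crm"]
)

# Days where the tools disagree by more than 5% hint at a tracking issue
suspicious = merged[merged["gap"] > 0.05]
print(suspicious[["date", "gap"]])
```

Small, constant gaps between tools are normal (different session and attribution logic); what you are looking for is a gap that suddenly widens.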

User Behavior Changes:

  • Segment data by user type, geography, device, and traffic source.
  • Check if specific pages or flows have changed performance.
  • Look for UX/UI changes that may have influenced behavior.

Marketing & External Factors:

  • Analyze traffic sources: Did paid campaigns end or shift budgets?
  • Check for seasonality, holidays, or viral events affecting demand.
  • Investigate competitor activity.

Of course, the list is far from complete and there might be many other things to consider depending on the specific use case. 

As I mentioned at the beginning of this post, I always try to check multiple things in order to find the root cause(s) of an issue. At this point, I also like to consult ChatGPT to provide me with some hypotheses to check in Adobe Analytics. My prompt looks like this:

“I am working for a big retailer in the food industry. They have seen a decrease in their Average Order Value and the Units per Transaction. I need some hypotheses about what led to this decrease in order to verify it in Adobe Analytics.”

In my example, ChatGPT provided the following output:

1. Price Increases or Inflation Effects

  • Customers may be purchasing fewer items due to higher prices.
  • Price-sensitive customers could be trading down to cheaper alternatives, reducing the total order value.

2. Product Assortment & Availability Issues

  • Out-of-stock or supply chain disruptions reducing multi-item purchases.
  • Less variety or discontinued popular bulk-buy items leading to smaller transactions.

3. Promotional Strategy Changes

  • A reduction in discounts, bundle offers, or loyalty program incentives could lead to fewer impulse buys or smaller basket sizes.
  • End of Marketing Campaigns for more expensive products.

4. Cart and Checkout Bugs

  • Technical issues with the cart system might prevent customers from adding multiple items or lead to incorrect cart values.
  • Cart abandonment due to broken “save for later” features or issues with items disappearing from carts after login or during checkout.

5. Personalization Algorithms Malfunction

  • If recommendation engines (cross-sell, upsell) aren’t working as expected, customers may miss out on product suggestions that could increase their order size.
  • Personalized offers or dynamic pricing based on customer behavior may not be triggering correctly.

This list of hypotheses provides you with a variety of topics to look at in order to track down your causes. The next thing to do is to think about how you can verify these hypotheses with the data available in your analytics tool. 

Step 4: Verify your hypotheses with data

Before diving into my analytics data, I first define some strategies to test my hypotheses with the data I have. In my example, this could look like the following: 

1. Price Increases or Inflation Effects

  • Check if the average price per product has changed in general. 
  • Check if there has been a change in the average price per product within a specific category. 
  • Check if the top products sold have changed over time. 
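The price and product-mix checks above translate into two small groupby operations. The order-line export below is a made-up example with assumed column names; in practice you would pull this from your analytics tool:

```python
# Toy order-line export; "period" splits the data into before/after the drop.
# All columns and values are illustrative assumptions.
import pandas as pd

lines = pd.DataFrame({
    "period":  ["before"] * 3 + ["after"] * 3,
    "product": ["apples", "steak", "steak", "apples", "apples", "steak"],
    "revenue": [3.0, 25.0, 27.0, 3.2, 3.1, 34.0],
    "units":   [1, 1, 1, 1, 1, 1],
})

# Average selling price per product in each period
sums = lines.groupby(["period", "product"])[["revenue", "units"]].sum()
sums["avg_price"] = sums["revenue"] / sums["units"]
print(sums["avg_price"])

# Top product by units sold per period: did the product mix shift?
units = lines.groupby(["period", "product"])["units"].sum()
top_per_period = units.groupby(level="period").idxmax()
print(top_per_period)
```

In this toy data the price of the expensive item rose while the cheap item took over the top spot - either effect alone would already push the AOV down.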

2. Product Assortment & Availability Issues

  • Check if previously top-sold products have been purchased less. → If yes, verify if there is a problem with availability (might need verification from 3rd party systems like ERP). 

3. Promotional Strategy Changes

  • Check if the AOV & UPT decrease can be attributed to a specific Marketing Channel. → If yes, verify if the end of specific campaigns collided with the drop in AOV & UPT.
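Attributing the drop to a channel is again a groupby-and-compare exercise. The channel names and numbers below are assumptions chosen so one channel clearly carries the drop:

```python
# Does the AOV drop concentrate in one marketing channel?
# Channels, revenues, and order counts are illustrative assumptions.
import pandas as pd

orders = pd.DataFrame({
    "period":  ["before", "before", "after", "after"],
    "channel": ["email", "paid_search", "email", "paid_search"],
    "revenue": [6000.0, 9000.0, 5900.0, 4500.0],
    "orders":  [100, 150, 100, 150],
})

agg = orders.groupby(["period", "channel"])[["revenue", "orders"]].sum()
agg["aov"] = agg["revenue"] / agg["orders"]

# Pivot so each channel's before/after AOV sits side by side
pivot = agg["aov"].unstack("period")
pivot["change"] = pivot["after"] - pivot["before"]
print(pivot)
```

If one channel accounts for nearly all of the change, the next step is to line up that channel’s campaign calendar against the date of the drop.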

4. Cart and Checkout Bugs

  • First, check on the website whether there is an overall issue with the cart functionality.
  • Then re-check in the data whether the number of products added has decreased for specific products as well.
  • Check the numbers and abandonment rates within the checkout process and compare them with the weeks prior to the issue.
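The funnel comparison in the last bullet can be sketched like this; the step names and counts are made-up assumptions standing in for your real checkout metrics:

```python
# Checkout-funnel comparison: abandonment rate per step, week before
# vs. week after the drop. Step counts are illustrative assumptions.
import pandas as pd

funnel = pd.DataFrame(
    {
        "cart_views":      [1000, 1000],
        "checkout_starts": [600, 590],
        "purchases":       [420, 300],
    },
    index=["week_before", "week_after"],
)

# Abandonment between starting checkout and completing a purchase
funnel["checkout_abandonment"] = (
    1 - funnel["purchases"] / funnel["checkout_starts"]
)
print(funnel["checkout_abandonment"].round(3))
```

A jump in abandonment at one specific step, while the earlier steps stay flat, points you straight at where in the flow to look for a bug.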

5. Personalization Algorithms Malfunction

  • Check the analytics data for your cross-sell measures → are there any visible indications of a bug in the display of recommendation activities?
  • If not, also re-check on the website whether the recommendations are displayed correctly.
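One quick data-side signal for a broken recommendation module: if recommendation impressions collapse while product page views stay flat, the module itself is the likely suspect. All numbers, column names, and the 50% coverage threshold below are illustrative assumptions:

```python
# Share of product page views that actually rendered recommendations.
# All values and the 50% threshold are illustrative assumptions.
import pandas as pd

daily = pd.DataFrame(
    {
        "product_page_views": [5000, 5100, 4950, 5050],
        "rec_impressions":    [4800, 4900, 120, 90],
    },
    index=pd.to_datetime(
        ["2025-01-06", "2025-01-07", "2025-01-08", "2025-01-09"]
    ),
)

daily["rec_coverage"] = daily["rec_impressions"] / daily["product_page_views"]

# Days where fewer than half of the product pages showed recommendations
broken_days = daily[daily["rec_coverage"] < 0.5].index
print(broken_days.strftime("%Y-%m-%d").tolist())
```

If the coverage drop lines up with the start of the AOV/UPT decline, you have a strong candidate for a root cause.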

As soon as you take a deeper look into your data, new ideas and directions to look into will of course come up. You might be successful and verify some of your hypotheses; others might turn out to be false. Either way, you will get closer to the solution step by step. 

Step 5: Document your findings

One other - probably very obvious - thing I would always recommend is to document your approach and all of your findings. Some analytics tools, like Adobe Analytics, allow you to add descriptions to single visualizations and tables. If this feature is not available, you can use Confluence or any other documentation tool for your notes. I always find it very helpful to have my results, and the steps of how I got there, written down so I can summarize and present them in the end. It is also very useful for other people following your reports to understand the numbers they see. 

In the end, this documentation can of course be a great source for creating a final summary or presentation about what you found and how it might be fixed. 

Conclusion

In the end, I can say that troubleshooting data anomalies in an analytics tool requires, in my opinion, a structured, hypothesis-driven approach. Rather than diving into raw data aimlessly, defining potential causes first and systematically testing them leads to faster and more accurate problem resolution.

This method not only saves time but also prevents misinterpretations that could lead to poor decision-making. Additionally, it helps you and your team maintain a proactive mindset, ensuring that anomalies are addressed efficiently while uncovering insights that could improve overall data quality and strategy.