
Attribution Analysis: 4 Techniques Every Marketer Should Know

Last updated: 2 Jun 2025

Content marketers are under more pressure than ever to prove the value of their work. You need to know which content drives results, what’s worth investing time and money in, and how your efforts contribute to wider company goals. 

That’s where attribution analysis comes in. It helps you pinpoint what is and isn’t working with your content so you can make stronger, smarter decisions. 

This guide covers four key attribution techniques every marketer should know: cross-sectional vs. longitudinal analysis, factorial analysis, A/B testing, and ROI modelling. Each one includes detailed examples and step-by-step tips you can apply using tools like Google Analytics 4 (GA4) and Looker Studio.

1. Cross-sectional vs. longitudinal analysis

Cross-sectional analysis looks at data at a specific point in time, helping you see what’s performing well right now.

Meanwhile, longitudinal analysis tracks changes in data over a longer period, making it easier to understand how and why things change over time.

Let’s take a look at both below.

Cross-sectional analysis: Assessing data in the moment

Cross-sectional analysis gives you a snapshot of your content’s performance at a specific moment. This makes it easier to shift resources toward what’s working best and ensures you don’t overlook your highest-performing content at any given time.

Cross-sectional analysis is especially useful when:

  • Testing (and comparing) new strategies
  • Performing A/B tests
  • Conducting competitor analysis
  • Making decisions without needing historical data

Consider this example:

Company X, a B2B SaaS provider, recently launched three new content clusters: AI-Driven Automation, Workflow Optimisation, and Compliance Best Practices.

After one quarter, the marketing team conducts a cross-sectional analysis to assess which clusters are performing best and how they compare to competitors.

Here are the results:

Content Cluster | Page Views | Demo Sign-Ups | Sign-Ups per Page View
AI-Driven Automation | 14,210 | 58 | 0.41%
Workflow Optimisation | 11,578 | 72 | 0.62%
Compliance Best Practices | 8,921 | 39 | 0.44%

Thanks to this analysis, the marketing team observes that:

  • AI-Driven Automation attracts the most traffic but has a lower conversion rate.
  • Workflow Optimisation drives the highest conversion rate (0.62%), even with moderate traffic.
  • Compliance Best Practices underperforms in both traffic and sign-ups.

The team also wants to see how Company X performs in these new sectors compared to its competitors.

Competitor | Top-Performing Cluster | Page Views | Top 3 Keywords (%)
Competitor A | AI-Powered Automation | 18,250 | 45.20%
Competitor B | Workflow and Productivity Guides | 12,410 | 32.10%
Competitor C | Compliance and Security Resources | 9,105 | 47.80%

After this comparison, the team notes the following key insights:

  • Competitor A dominates AI-Powered Automation with higher traffic and many top-ranking keywords.
  • Competitor B’s Workflow content performs slightly better than Company X’s, indicating an opportunity to refine messaging or improve distribution.
  • Competitor C’s Compliance content outperforms Company X’s, suggesting stronger authority or better targeting in that niche.

Based on this analysis, the team decides to decrease the number of AI-Driven Automation pages and instead focus on improving conversion and differentiating from competitors. Compliance Best Practices will also see reduced production while the team reassesses the value of this cluster. Finally, the team will increase the output of Workflow Optimisation pieces, doubling down on this high-converting cluster that faces only moderate competition from Competitor B.

Content Cluster | Original Allocation | New Allocation | Strategy Update
AI-Driven Automation | 40% | 35% (-5%) | Focus on conversion optimisation & differentiation.
Workflow Optimisation | 35% | 45% (+10%) | Double down on a high-converting cluster.
Compliance Best Practices | 25% | 20% (-5%) | Improve targeting or reassess investment.

Cross-sectional analysis also helps you evaluate the financial impact of your content.

For example:

Company Y is a B2B SaaS business in the business management space. Its content is positioned around three main categories: HR & Personnel, Productivity, and Team Management, each with several subcategories.

The marketing team’s most recent attribution report with categorisation shows the following:

Category | Subcategory | Total Cost ($) | Page Views | First Visits | Sign-Ups | Assisted Sign-Ups
HR & Personnel | New Hires | 47,478.95 | 87,905 | 55,240 | 74 | 103
Productivity | Scaling Growth | 13,623.98 | 16,455 | 11,684 | 41 | 42
HR & Personnel | Performance Reports | 5,780.00 | 13,257 | 10,485 | 16 | 16
HR & Personnel | Employee Well-Being | 4,958.18 | 12,109 | 9,754 | 6 | 6
Productivity | Outsourcing | 41,497.92 | 10,625 | 7,552 | 6 | 6
Team Management | Remote Employees | 1,499.63 | 8,627 | 6,365 | 0 | 0

The New Hires subcategory stands out with the most sign-up conversions. But the attribution data enables deeper evaluation and the ability to calculate cost per conversion:

Cost Per Conversion = Total Cost / Sign-Up Conversions

Performing this calculation with the example data shows the Scaling Growth subcategory generates conversions at almost half the cost of New Hires.

New Hires: $47,478.95 / 74 = $641.61 cost per conversion
Scaling Growth: $13,623.98 / 41 = $332.29 cost per conversion

Even though New Hires generates more conversions, Scaling Growth is more cost-efficient. Meanwhile, Outsourcing costs nearly as much as New Hires but delivers only six conversions.

These insights are invaluable when determining the actual impact of content. In this case, the marketing team would likely add more Scaling Growth content and less Outsourcing in their next content plan.
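These cost-per-conversion figures can be reproduced for every subcategory with a short script. A minimal sketch using the example data above (Remote Employees is excluded because zero conversions make the ratio undefined):

```python
# Cost per conversion = total cost / sign-up conversions,
# using the example attribution data above.
# Remote Employees is excluded: zero conversions make the ratio undefined.
subcategories = {
    "New Hires": (47_478.95, 74),
    "Scaling Growth": (13_623.98, 41),
    "Performance Reports": (5_780.00, 16),
    "Employee Well-Being": (4_958.18, 6),
    "Outsourcing": (41_497.92, 6),
}

cost_per_conversion = {
    name: round(cost / sign_ups, 2)
    for name, (cost, sign_ups) in subcategories.items()
}

# Rank subcategories from most to least cost-efficient.
for name, cpc in sorted(cost_per_conversion.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cpc:,.2f} per conversion")
```

Sorting by cost per conversion makes the efficiency gap obvious: Scaling Growth converts at roughly half the cost of New Hires, while Outsourcing is by far the least efficient.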

Longitudinal analysis: Tracking changes over time

Longitudinal analysis shows how a metric changes over time. This method helps you see patterns and trends, so you can understand the impact of:

  • Industry shifts
  • Algorithm updates
  • Audience behaviour changes
  • Market competition
  • Seasonal events or news

Consider this example: 

Company Z, a B2B SaaS brand in the business automation sector, has been steadily growing its traffic month after month. Its content focuses on three main clusters: Reporting, Customer Onboarding, and Employee Development.

Hoping to accelerate growth further, the marketing team looks at attribution data to see which topics are performing best. Here are the results:

Cluster | Page Views | Demo Sign-Ups | Sign-Ups per Page View
Reporting | 14,343 | 62 | 0.43%
Customer Onboarding | 12,211 | 57 | 0.46%
Employee Development | 8,119 | 33 | 0.41%

Of the three, Reporting performs best in terms of page views, while Customer Onboarding performs best in terms of sign-ups per page view. Based on this, the team decides to distribute content production as follows:

Cluster | Content Breakdown
Reporting | 35%
Customer Onboarding | 55%
Employee Development | 10%

When the team views the performance of each cluster month by month, they notice something interesting: Employee Development may be smaller now, but it’s growing rapidly. This trend suggests it might become a top performer soon. Based on this insight, the team updates their content plan accordingly:

Cluster | Original Breakdown | New Breakdown
Reporting | 35% | 30% (-5%)
Customer Onboarding | 55% | 40% (-15%)
Employee Development | 10% | 30% (+20%)

Had the marketing team relied solely on cross-sectional analysis, they may have missed this opportunity—and a potentially significant increase in conversions.
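The month-by-month view that surfaced Employee Development’s growth boils down to a month-over-month growth calculation. A minimal sketch, with hypothetical monthly sign-up figures chosen only to illustrate the pattern:

```python
# Hypothetical monthly demo sign-ups per cluster (illustrative only).
monthly_sign_ups = {
    "Reporting": [20, 21, 21],
    "Customer Onboarding": [19, 19, 19],
    "Employee Development": [6, 11, 16],
}

def avg_mom_growth(series):
    """Average month-over-month growth rate, as a fraction."""
    rates = [(b - a) / a for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

for cluster, series in monthly_sign_ups.items():
    print(f"{cluster}: {avg_mom_growth(series):+.0%} avg monthly growth")
```

Even though Employee Development has the lowest absolute numbers, its average monthly growth dwarfs the other clusters, which is exactly the signal a purely cross-sectional view would miss.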

2. Factorial analysis

Factorial analysis helps you understand how two or more variables (like content type and distribution channel) affect performance, either on their own or together.

This method is advantageous when you want to:

  • See how different parts of your strategy interact
  • Control for variables
  • Discover what really drives user behaviour

For example, suppose your attribution report provides these insights:

Cluster | Page Views | Demo Sign-Ups | Sign-Ups per Page View
Reporting | 14,343 | 62 | 0.43%
Customer Onboarding | 12,211 | 57 | 0.46%
Employee Development | 8,119 | 33 | 0.41%

At first glance, you can see that Reporting content has the most page views and Customer Onboarding content converts best per view.

These are good insights, but the data doesn’t tell you why. How do you know the differences in page views and demo sign-ups are related to the subject matter and not something else, like the author or the call to action (CTA)?

For example, let’s say two authors created this content. If that’s the case, the differences could be due to their writing styles and skill levels.

To understand whether performance is due to the topic or the writer, you compare the same topics by author. If the numbers stay mostly the same regardless of author, then the topic is likely the main factor. If the numbers change a lot between authors, then authorship may be influencing the results. This is called controlling for a variable—you hold one thing steady (e.g., the author) and see how it affects performance.

From here, you can:

  • Review editorial quality
  • Provide training to the weaker of the two writers
  • Adjust author assignments based on strengths
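Controlling for the author can be sketched by grouping per-article conversion rates by topic and author and comparing the spread within each topic. The rates and the spread threshold below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical per-article sign-ups-per-view rates: (topic, author, rate).
articles = [
    ("Reporting", "Author 1", 0.0044),
    ("Reporting", "Author 2", 0.0042),
    ("Customer Onboarding", "Author 1", 0.0061),
    ("Customer Onboarding", "Author 2", 0.0031),
]

# Group rates by topic, keyed by author, so we can compare within a topic.
by_topic = {}
for topic, author, rate in articles:
    by_topic.setdefault(topic, {})[author] = rate

# If rates barely move between authors, the topic likely drives performance;
# a large spread suggests authorship is influencing the results.
for topic, rates in by_topic.items():
    spread = max(rates.values()) - min(rates.values())
    verdict = "author may matter" if spread > 0.001 else "likely topic-driven"
    print(f"{topic}: spread {spread:.4f} -> {verdict}")
```

Here, Reporting performs similarly regardless of author, while Customer Onboarding varies widely, so for that topic you’d review editorial quality before crediting the subject matter.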

This is known as exploratory factor analysis, a way to reveal the “underlying structure” of your data.

💡 In Looker Studio, you can use the Scatter Chart to help uncover relationships between variables visually.

🔍 For an in-depth explanation of factor analysis and examples using R, see: An Introduction to Data Analytics Using R: Exploratory Factor Analysis.

You can also use factorial analysis to understand which combinations of factors offer the best results.

For example, suppose you’re using two CTAs in your Reporting, Customer Onboarding, and Employee Development content. You can create a table in Looker Studio with both CTAs to see how they work together and affect the data.

This is a 3 x 2 analysis: three content clusters and two CTA modals, creating six combinations.

The data suggests that Reporting content paired with Modal A leads to the most sign-ups per page view—even more than Customer Onboarding, which usually performs better overall.

Based on this, you might switch all Reporting pages to Modal A. You’d also want to investigate why this modal performs better: is it the colour scheme, the page position, or the language used? You could assess all of these through A/B testing, which we’ll explore below.
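The 3 x 2 comparison amounts to computing sign-ups per page view for every cluster-and-modal combination and picking the winner. A sketch with hypothetical figures:

```python
# Hypothetical 3 x 2 factorial data:
# (cluster, CTA modal) -> (page views, demo sign-ups).
combos = {
    ("Reporting", "Modal A"): (7200, 41),
    ("Reporting", "Modal B"): (7143, 21),
    ("Customer Onboarding", "Modal A"): (6100, 29),
    ("Customer Onboarding", "Modal B"): (6111, 28),
    ("Employee Development", "Modal A"): (4050, 17),
    ("Employee Development", "Modal B"): (4069, 16),
}

# Sign-ups per page view for each of the six combinations.
rates = {combo: sign_ups / views for combo, (views, sign_ups) in combos.items()}
best = max(rates, key=rates.get)
print(f"Best combination: {best} at {rates[best]:.2%} sign-ups per view")
```

With these illustrative numbers, Reporting paired with Modal A wins, mirroring the scenario described above.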

💡 Want to go further? Try these:

  • Interaction Effect: Examines whether one variable’s success depends on another—e.g., does the best CTA change based on content type?
  • Two-Way ANOVA (Analysis of Variance): A statistical test to measure how much each factor and their interaction affect performance.

3. A/B testing

    A/B testing involves trying out two versions of something to see which one performs better. It’s great for:

    • Landing pages
    • Product pages
    • Email campaigns
    • CTAs
    • Page titles
    • Headers
    • Meta descriptions
    • Meta titles
    • Blog design
    • Content length
    • Article structure
    • And so much more

    When combined with attribution data, A/B testing helps you understand not only what’s working, but also why.

    Here’s how to run a strong A/B test:

    Step One: Formulate a hypothesis

    A strong hypothesis will include a specific change and a specific outcome—for example, “Using call-out boxes to highlight product features in blog posts will increase demo sign-ups.”

    Your findings will either validate or reject your hypothesis.

    Step Two: Change only one thing

    With A/B testing, it’s important not to make more than one change at a time. If you test multiple things, it’s hard to know which one caused the result.

    Also, the more changes or versions you make, the larger the test and control populations you’ll require and the more time you’ll need to achieve statistical significance.

    In other words, simpler A/B tests = faster turnaround times.

    Step Three: Analyse the results

    Lastly, check the results against your hypothesis, which you’ll either validate or reject.

    Even if your hypothesis isn’t validated, look at other changes. For example, you may not see an increase in demo sign-ups, but you could see an increase in social media shares, which is an interesting result and merits its own A/B test. 

    If needed, break down results by audience, topic, or geography—but don’t over-segment. Doing so can reduce statistical significance.
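To judge whether an observed difference between two variants is statistically significant rather than noise, a two-proportion z-test is a common choice. A minimal, standard-library-only sketch (the visitor and conversion counts are hypothetical):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: variant A converts 120/2000, variant B 90/2000.
z, p = two_proportion_z_test(120, 2000, 90, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```

This is also why over-segmenting hurts: each smaller segment has fewer visitors, the standard error grows, and differences stop reaching significance.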

    A/B testing example: First-person language in introductions

    Company A is a large digital publisher that owns multiple brands. Recently, some of its “Best X” buying guides have dropped in search rankings. The content marketing team believes adding first-person language (e.g., “I’ve tested...” or “My experience...”) to the introductions could help.

    But with hundreds of articles in the library, making those changes everywhere would be expensive. So, the team decides to test the idea first.

    Here’s how they set up the A/B test:

    • Add first-person language to 40 new articles
    • Omit first-person language in 40 new articles
    • Refresh 40 older articles with first-person intros
    • Leave 40 other older articles as-is (control group)

    The team formulates five hypotheses, each predicting that first-person language in buying guide introductions will significantly increase:

    1. The number of keywords in the Top 3 position
    2. The number of page visits
    3. The conversion rate
    4. The average order value
    5. The total affiliate revenue

    Four weeks after publishing the content and making the updates, the marketing team takes another look at page rank, page visits, conversions, and eCommerce metrics. Here’s what they find:

    Type | First-Person Language | Top 3 Keywords | Page Visits | Conversion Rate | Avg. Order Value | Total Affiliate Rev.
    New | Yes | 33 | 14,353 | 9.48% | $35.35 | $4,330
    New | No | 23 | 11,322 | 5.79% | $31.39 | $2,059
    Refresh | Yes | 57 | 27,598 | 12.09% | $25.88 | $7,774
    Refresh | No | 49 | 20,394 | 11.03% | $28.89 | $5,200

    The results are clear.

    New content with first-person introductions:

    • Showed up in the top 3 search results more often
    • Got more visits
    • Converted better (9.5% vs. 5.8%)
    • Earned more per visitor ($35.35 vs. $31.39)
    • Brought in over twice as much affiliate revenue ($4,330 vs. $2,059)

    Refreshed content with first-person introductions also performed better overall, seeing:

    • More keywords ranking higher
    • Higher traffic and conversions
    • A slight dip in average order value, but still a big boost in revenue ($7,774 vs. $5,200)

    The only odd result was the slight drop in order value for the refreshed content. After digging in, the team found that about 30% of that difference came from those particular guides covering lower-priced products. The marketing team will have to perform additional tests to determine whether the reduced AOV with refreshed content is genuine (i.e., the first-person language actually influenced user behaviour) or an artefact (e.g., the average product price in those buying guides happens to be lower). Even so, this is a good test to run because it helps remove the product as a factor, a process you now know as factorial analysis.

    From here, Company A estimates that updating the 300 most important guides would cost around $4,500. These pieces currently generate around $32,000 in affiliate revenue. If the updated pieces perform roughly as well as in the test, that figure should rise to about $48,000 after four weeks—a total of $16,000 in additional affiliate revenue per month.

    To finalise their business case, the team calculates the expected ROI after one month:

    ROI = (Revenue - Cost) / Cost = ($16,000 - $4,500) / $4,500 ≈ 2.56

    Lastly, the team changes their writing guidelines so that all new buying guides use first-person language, and they’re gradually rolling it out to existing content, too. This change costs nothing but will ensure rank, traffic, conversions, AOV, and total revenue remain high.
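The business case above can be double-checked in a few lines, using the figures from the example:

```python
# Figures from the example: updating the 300 most important guides costs
# about $4,500 and is expected to lift monthly affiliate revenue from
# $32,000 to $48,000.
update_cost = 4_500
revenue_lift = 48_000 - 32_000  # $16,000 additional revenue per month

# ROI = (revenue - cost) / cost
roi = (revenue_lift - update_cost) / update_cost
print(f"Expected ROI after one month: {roi:.2f}")
```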

    4. ROI modelling 

    One of the biggest challenges in content attribution is accurately measuring the ROI for each dollar spent. At Eleven Writing, we’ve found a straightforward way to overcome this hurdle using your existing data from GA4. Here’s how to do it: 

    Step One: Get your conversion weighting in GA4

    When configured correctly, GA4’s Path Exploration report shows the percentage of users who take a specific action, such as signing up for a demo, and later become paying customers. That percentage becomes your conversion weighting.

    Step Two: Assign weightings to each conversion type

    To quantify the impact of different conversions, assign weightings based on how often they lead to a purchase.

    For example:

    • 0.5% of demo sign-ups become customers → weighting = 0.005
    • 0.8% of free trial sign-ups become customers → weighting = 0.008

    Step Three: Create a formula field in Looker Studio

    Import your GA4 data into Looker Studio, and then create a calculated field to apply your conversion weightings.

    The formula for our example conversions is:

    (SUM(CASE WHEN Conversion_Type = "Sign up for demo" THEN Conversions * 0.005 ELSE 0 END)) +
    (SUM(CASE WHEN Conversion_Type = "Sign up for free plan" THEN Conversions * 0.008 ELSE 0 END))

    Important: Ensure the Conversion_Type values match exactly as they appear in Looker Studio.

    Example scores

    Page A has 200 demo sign-ups and 15 free plan sign-ups → (200 x 0.005) + (15 x 0.008) = 1.12

    Page B has 280 demo sign-ups and 3 free plan sign-ups → (280 x 0.005) + (3 x 0.008) = 1.424

    Page | Demo Sign-Ups | Free Plan Sign-Ups | Formula
    Page A | 200 | 15 | (200 × 0.005) + (15 × 0.008)
    Page B | 280 | 3 | (280 × 0.005) + (3 × 0.008)

    Conclusion: Page B has a higher raw ROI score.
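The Looker Studio calculated field above is just a weighted sum, which is easy to mirror outside Looker Studio for spot-checking. A sketch using the example weightings:

```python
# Conversion weightings from the example: the share of each conversion
# type that later becomes a paying customer.
WEIGHTS = {"Sign up for demo": 0.005, "Sign up for free plan": 0.008}

def weighted_score(conversions):
    """Weighted conversion score: sum of count * weighting per type."""
    return sum(WEIGHTS[ctype] * count for ctype, count in conversions.items())

page_a = weighted_score({"Sign up for demo": 200, "Sign up for free plan": 15})
page_b = weighted_score({"Sign up for demo": 280, "Sign up for free plan": 3})
print(round(page_a, 3), round(page_b, 3))  # 1.12 1.424
```

As in the Looker Studio field, the conversion-type keys must match the values in your data exactly.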

    Step Four: Normalise ROI for comparison

    Since different pages have different visitor counts, normalising the scores allows for a fairer comparison.

    To normalise, divide the weighted conversion score by the total number of visitors. This scales all scores between 0 and 1, where:

    • 1 means every visitor converted.
    • 0 means no visitors converted.

    Let’s assume that Page A had 2,000 visitors and Page B had 4,000 visitors.

    Page | Formula | Score
    Page A | (200 / 2000 × 0.005) + (15 / 2000 × 0.008) | 5.6 × 10⁻⁴
    Page B | (280 / 4000 × 0.005) + (3 / 4000 × 0.008) | 3.6 × 10⁻⁴

    Conclusion: Page A has a better ROI per visitor, even though Page B had a higher overall conversion score.
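Normalisation is simply the weighted conversion score divided by total visitors, sketched below with the example figures:

```python
# Divide each page's weighted conversion score by its visitor count so that
# pages with different traffic levels can be compared fairly.
def normalised_score(score, visitors):
    return score / visitors

page_a = normalised_score(1.12, 2_000)   # ~5.6e-4
page_b = normalised_score(1.424, 4_000)  # ~3.56e-4
print(page_a > page_b)  # Page A converts better per visitor
```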

    Step Five: Factor in Lifetime Customer Value (LCV) and production costs

    To calculate the monetary value of each page’s performance, multiply the weighted conversion score by the Lifetime Customer Value (LCV).

    Assuming LCV = $5,000, you’d calculate:

    Page | ROI Calculation | Revenue Generated
    Page A | $5,000 × 1.12 | $5,600
    Page B | $5,000 × 1.424 | $7,120

    Then, calculate ROI using production costs (e.g., $500 per page):

    ROI = (Revenue - Cost) / Cost

    Page | ROI Formula | ROI per $1 Spent
    Page A | ($5,600 - $500) / $500 | 10.20x
    Page B | ($7,120 - $500) / $500 | 13.24x
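Step Five reduces to two operations per page, which can be sketched as follows (LCV and production cost taken from the example):

```python
# Revenue = weighted conversion score * lifetime customer value (LCV),
# then ROI = (revenue - production cost) / production cost.
LCV = 5_000        # lifetime customer value, from the example
PAGE_COST = 500    # production cost per page, from the example

def page_roi(weighted_score):
    revenue = weighted_score * LCV
    return (revenue - PAGE_COST) / PAGE_COST

print(round(page_roi(1.12), 2), round(page_roi(1.424), 2))  # 10.2 13.24
```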

    Final conclusion: Even though Page B generates more total revenue and a higher ROI per dollar spent, Page A delivers a stronger ROI per visitor.

    Conclusion

    Attribution analysis gives you the clarity you need to make better marketing decisions faster. Whether you’re experimenting with A/B tests or digging into ROI, these techniques give you a clearer picture of performance and potential.

    At Eleven Writing, we use these methods every day to help teams grow their content with confidence. If you'd like support applying these techniques to your own strategy, get in touch with us today.
