How the demographic cliff shapes Medicare Advantage success

Current Medicare demographic trends are rooted in the Baby Boom. GIs returning home after World War II and the prosperity of the post-war years produced a surge of births from the mid-1940s through the mid-1960s, which has translated into inflated numbers of Medicare-eligible seniors entering the market from 2010 to 2025. Healthcare payers have been able to ride that demographic wave, posting year-over-year growth throughout this period as more than four million new seniors became Medicare-eligible each year.

Moving forward, however, we will likely never again see so many people turning sixty-five each year. The number of incoming Medicare-eligible seniors has peaked and will begin to decline in the coming years, which will translate directly into fewer net-new Medicare Advantage enrollments each year.

Impact of the demographic cliff on customer acquisition

Smaller incoming applicant pools mean carriers will have to fight much harder for a limited supply of prospective Medicare Advantage members. They will be doing so in a highly saturated market: the number of competing carriers has grown, and that heightened competition also makes marketing more difficult.

Marketing challenges:

  • The volume of mail, television ads, and other marketing that individuals receive has tripled over the past decade, diluting the effectiveness of each message.
  • Media costs continue to rise above the rate of inflation. In other words, carriers are spending more on less efficient marketing.

The road ahead for Medicare Advantage 

The road ahead looks difficult for the Medicare Advantage market. Applicant pools are shrinking, competition has increased, marketing is more expensive yet less efficient, and the combination of decreasing member tenure, stagnant CMS payments, and rising claims costs is driving down member lifetime value (MLV).

Winning in the Medicare Advantage market over the next decade will require turning to go-to-market, product attractiveness, and clinical maturity as the leading drivers of growth. For more information on how to succeed in the face of these challenges, read our executive whitepaper The next decade of Medicare Advantage: 2025 and beyond.

Download our whitepaper, “The next decade of Medicare Advantage: 2025 and beyond”​

Learn how the next decade will reward Medicare Advantage leaders who embrace agility, analytics, and a member-first approach.

Article sources: American Community Survey, NCHS birth data, CMS.gov data, Marketbridge

5 ways to adapt your strategy for Millennial & Gen Z B2B buyers

Almost three-quarters (71%!) of B2B buyers are Millennials or Gen Z (Forrester).

Seems like only yesterday that pundits were yakking about the rise of millennials and how it would affect business culture. Those Millennials are now well into their careers and rapidly entering middle age. (I’m sorry, Millennials, but it’s true. You can switch to wearing taller socks but time marches on regardless.)

People born in or near the 2000s are the new kids in town, and this Gen Z wave is changing the game for B2B marketers once again.

The buying group is even bigger than you think.

Forrester predicts, “As the Millennial and Generation Z buyer cohorts increasingly drive purchases, they will rely on external sources — including their value network — to help make their decisions.”

A few related stats to mull:

  • 6sense reports that nearly three-quarters (72%) of buying teams now hire consultants or analysts to help with purchasing decisions.
  • Among younger buyers who responded to Forrester’s Buyers’ Journey Survey, 2024, 30% indicated that 10 or more people outside their organization are involved in purchase decisions.
  • Not surprisingly, word-of-mouth recommendations still carry the highest weight, with 73% of buyers ranking it as their most trusted source (Wynter).

So, what does that mean for B2B marketers?

Just as we've gotten our heads around using account-targeted campaign and media strategies to reach multiple members of the buying group, we must expand our understanding of the audience. We need to reach influencers outside the target organization as well, without becoming scattershot.

And where do you start?

1. Continue to invest in your social presence

Social media has become a top source of information for B2B buyers regardless of age (PR News). As more "social media natives" move into decision-making roles, its influence will only grow. My LinkedIn scroll is already replete with memes and personal stories, and yours probably is too. The divide between personal and professional social media is getting thin (LinkedIn). If you have the resources, savvy, and determination to support them, consider expanding your brand's presence on social channels traditionally thought of as more personal.

Even if you’re not actively publishing widely, you should be listening widely. Keep digital ears open across social platforms, online communities and industry forums. Conversations are happening in these channels and consideration sets are being formed –– whether you’re part of them or not.

2. Influence the influencers

“Influencers” are not just for aspirational lifestyle brands. They’re part of the value network for B2B buyers too. Identify who has credibility and clout, engage them, and look for opportunities to partner with them.

More and more of our clients are getting serious about their influencer strategy, and it’s about time. Chevron Lubricants has been effectively working with influencers for years, most recently with Bryan Furnace, a heavy equipment operator, content creator and the host of Equipment World’s weekly video show, The Dirt. He’s got the expertise, experience and street cred (worksite cred?) to discuss oil technology claims and benefits with authority. (Chevron’s work in this area recently won them a 2025 B2BMX Killer Content Award for “Best Influencer Marketing”. You can see their award-winning video series with Bryan here.)

3. Authenticity still matters

Consider how you might enable and encourage customers to share honest reviews about your services or solutions. It may feel risky, but it’s a strategy that pays off in increased visibility and credibility.

Reviews help you get found. Great reviews are social proof that speaks for itself. Not-so-great reviews give you the opportunity to authentically engage and repair. How you show up in moments of challenge has enormous influence on the perception of your brand. The “Service Recovery Paradox” has been observed for decades – that is, brands that respond to challenges transparently, quickly and with meaningful action may be perceived more favorably than if no problem had occurred in the first place (Wikipedia).

4. Be sharable

While the idea of a B2B campaign going viral may sound unlikely – at least before Workday’s delightful “Rock Star” spots – it’s a worthwhile ambition. Especially when you use “viral” to mean “gets shared among target audiences.” Sure, you could take a cheeky, entertaining (and costly!) approach like Workday did, but there are other ways to create experiences that are worthy of being shared amongst value networks and by influencers.

What is your brand expert on? What do you care deeply about? What causes or ideas do you want to be associated with? Answer the same questions about your target audiences. Draw your Venn diagram and start in the areas of overlap as a jumping off point for ideation. Maybe there’s content you can create, a learning opportunity you could sponsor, or a contest or event or handy-dandy calculator or tool.

5. Learn about – and from – your audience

Look around your organization. I bet there are at least a few Gen Zs, and I know it's bursting with Millennials. Tap into your own team for insight. How they make significant purchase decisions in their personal lives may reflect how they'd want to approach business buying: extensive online research, reaching out to friends and family for opinions, and almost certainly ducking the salesperson until they've already decided to buy. Ask yourself: how can you reduce friction in your processes and get ahead of theirs?

In B2B marketing, strengthening your brand and accelerating demand go hand-in-hand. (Yikes. Do I leave the corny rhyme? Yes, I do.) They should be thought of as deeply interconnected marketing motions serving the same ultimate goals – build interest, build trust, build results.

There you go. One new generation, three major shifts in the landscape, and five things B2B marketers should be thinking about now.

5 forces stalling Medicare Advantage growth in the next decade

Five forces are converging to stall growth of the Medicare Advantage industry over the next decade—halting momentum enjoyed since the 1990s. While inflation and immutable demographic trends are recurring characters in this story, trends in trust and product quality reveal that carrier actions also contribute to the five stagnation forces:

  • Increased member acquisition costs
  • Fewer age-ins
  • Declining member lifetime values
  • Declining innovation on both product and member services
  • A rapidly changing go-to-market landscape

1 – Increased Member Acquisition Costs: Marketing and Sales

Since Covid, inflation has left its mark across the marketing funnel. Rising cost-per-click competition means the paid search dollar doesn't go as far, especially during the Annual Enrollment Period peak. Postage rates have climbed at roughly 120% of the rate of core inflation, making key direct mail campaigns costlier. Meanwhile, in the upper funnel, cost-per-thousand (CPM) rates were increasing by 5% per year heading into 2024, making brand marketing more expensive.

On the sales front, expenses are climbing too. On-target earnings (OTEs) for captive field agents have risen 5-10% per year since Covid. Agent turnover has also increased, lowering sales productivity, since newer agents need more training and more "at bats" before they sell at full capacity.

The takeaway: Costs are not going down, but better audience targeting over the next decade can make acquisition budgets drive a greater impact.

2 – Fewer age-ins

All Baby Boomers will have turned 65 by 2030, spelling an end to the era of rapid growth in newly eligible seniors. Between 2022 and 2027, an estimated 4.2-4.3 million individuals will turn 65 each year. By 2040, the annual age-in population is projected to fall to roughly three-quarters of that, around 3.1 million.

Medicare Advantage penetration of the eligible population also appears to be plateauing around 55%. The net effect is that annual net-new Medicare Advantage enrollments will trend downward in the decade to come. (American Community Survey, NCHS birth data, CMS.gov data)

The takeaway: In order to sustain growth, the industry will have to nurture product attractiveness and clinical maturity. In other words, insurers will have to work harder to grow.

3 – Declining Member Lifetime Values

Member lifetime value (MLV) is a member's inbound cash flow minus outbound cash flow over the member's tenure with a single insurer. All three parts of that equation are moving in the wrong direction: members are disenrolling more frequently in the face of declining benefits (shorter tenure), CMS payments are stagnant (less inbound cash), and claims costs are mounting (more outbound cash). Those claims costs trace to Medicaid eligibility changes affecting Dual-eligible Special Needs Plan (DSNP) members, added dental and vision benefits, and preventive care forgone during Covid.
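
To make the arithmetic concrete, here is a minimal sketch of the MLV calculation described above. All per-member figures are assumptions for illustration, not industry benchmarks:

```python
# Illustrative MLV arithmetic; every figure below is an assumption, not a benchmark.
monthly_cms_payment = 1_050      # inbound: capitation per member per month (assumed)
monthly_claims_cost = 900        # outbound: claims per member per month (assumed)
monthly_admin_cost = 60          # outbound: servicing and admin per member per month (assumed)
expected_tenure_months = 48      # average months a member stays with the insurer (assumed)

mlv = (monthly_cms_payment - monthly_claims_cost - monthly_admin_cost) * expected_tenure_months
print(f"Member lifetime value ≈ ${mlv:,}")
# Shorter tenure, flat payments, or higher claims all shrink this number.
```

Shorter tenure, flat CMS payments, or rising claims each shrink the result, which is exactly the squeeze described above.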

The takeaway: CMS payments are not keeping pace with the costs of servicing the neediest members, and a more productive partnership with CMS will be key to remedying this shift.

4 – Declining Innovation on Product and Member Services

Product innovation – adding fitness programs or dental coverage to plan benefits, for example – characterized recent years of Medicare Advantage. Inflation, once again, acts as a headwind here, discouraging maintenance of those characteristic benefits and reducing the value of food cards. But benefits get abandoned for other reasons too: in-home nurse visits, for example, drew negative publicity amid accusations of widespread fraud and have consequently been deemphasized.

Member service innovation might temper member disillusionment, but it occurs in a narrow domain. Government regulations require so many communications that members largely ignore most insurer messages. Insurer mobile apps, another potential communication channel, suffer low adoption rates due to competition with separate provider apps and limited data portability and functionality.

The takeaway: Innovation can happen under constraints, but a return to quality can light the path forward through greater efficiency across acquisition, member retention, and care delivery.

5 – Rapidly Changing Go-to-Market Landscape

Unit economics and member preferences pose serious threats to the traditional channels for reaching the Medicare population. Linear TV is less captivating for seniors, direct mail is being hit by inflation, and effective paid search now requires a commitment to bidding wars.

Insurers are rethinking their distribution channels, too. Third-party call centers and aggregators rose to importance around 2010, as Medicare Advantage welcomed age-ins at unprecedented volumes. However, aggregators have been shown to generate customer confusion while attracting low-lifetime-value members, and third-party call centers may have eroded trust with their less-than-honest call tactics.

The takeaway: Winning over the next decade will take a few different forms. We present more data and additional considerations in our whitepaper, which you can download below.

Download our whitepaper, “The next decade of Medicare Advantage: 2025 and beyond”​

Learn how the next decade will reward Medicare Advantage leaders who embrace agility, analytics, and a member-first approach.

How advertisers should respond to low consumer confidence & uncertainty

As a consulting firm based outside of D.C., we are acutely attuned to politics, even though we don’t do any political or governmental work. Big political and governmental moves ripple across the economy to impact businesses across sectors. I haven’t seen much advice or discussion of what advertisers should consider given the declining consumer financial outlook and the potentially rising consumer activism, so I put together this post to distill my thoughts on how advertisers should prepare.

Declining Consumer Financial Outlook

U.S. consumer spending fell in January for the first time in nearly two years. That figure does not yet fully reflect the federal agency staff cuts in February or the mass layoffs forewarned for March. Those changes, along with upcoming tariffs, likely contributed to the drop in consumer confidence seen in February. As I write this, Target and Best Buy are warning that prices will increase soon.

Advertiser Considerations

  • Stay the course with brand investment. Long-term brand building still pays off if we enter a recession. Defending and even increasing Share of Voice, especially as competitors pull back, pays dividends. Businesses that achieved over 8% Excess Share of Voice (ESOV) saw annualized market share growth of 4.5% during a recession.
  • Consider cuts in lower funnel if demand is soft or non-existent. As consumer confidence wanes, fewer consumers will be shopping. For demand-based channels (such as search and affiliate), advertisers should closely watch efficiency and set gates for when to reduce budget as results soften.
  • Don’t throw out your marketing playbook or completely reinvent your mix. Work with marketing analytics and agencies to size the impact and reforecast what marketing can achieve. Develop tests and performance gates to re-evaluate metrics and KPIs.
  • Focus on impression and ad view quality. Rather than chasing quantity of impressions, advertisers should prioritize quality ad views for prioritized segments. This may mean significantly limiting where your ads are placed and investing in rigorous testing and analysis to ensure you prioritize high customer-lifetime-value audiences; the learnings, however, will continue to pay dividends long past the recession.

Rising Consumer Activism?

The “economic blackout” on Friday, February 28, received a lot of news coverage, though the results were mixed for retailers. On social media, more targeted retail boycotts are being shared. Other consumers are re-evaluating their shopping habits based on companies rolling back DEI initiatives.

Advertiser Considerations

  • Consumers may shop earlier to avoid boycotts. The retail shopping calendar may be out the window if consumer boycotts gain additional traction. Brands sold through big-box stores and Amazon in particular should evaluate how to drive conversions outside of historical sales-holiday periods.
  • Consumers may want to shop directly. If you’re already set up to sell directly to consumers, ensure that experience is optimized. Plan and prepare to pivot budget and efforts between sell-through and sell-to channels.
  • Monitor competition on boycott and blackout dates. Competition for the consumers who are still shopping will heat up. Expect cost-per metrics to increase, and look closely at whether these consumers are incremental and high value.

Preventing reactionary decisions to stay ahead

Though the governmental upheaval is unprecedented, seasoned advertisers have weathered low consumer confidence and poor financial outlooks before. To prevent reactionary decision-making, advertisers should prepare and set expectations in advance and develop stage gates or guardrails around performance. Monitoring competitor activity and continuing to test and learn what is working are also essential.

If you're unsure how to get started, get in touch! Simply fill out the form (put "develop performance stage gates" in the "anything else we should know" box) or send me an email: srenner@marketbridge.com.

Download our resource, “Accelerating growth through test-and-learn marketing culture”​

For an in-depth look at full-funnel marketing strategy, marketing imperatives, and key testing levers, download our 20-page paper.

The rise of AI for marketing & sales

AI is no longer a distant future—it’s here, transforming the way Sales and Marketing teams operate. From hyper-personalized customer interactions to intelligent automation and predictive insights, AI is accelerating efficiency and driving smarter decision-making at an unprecedented pace.

But while its impact is undeniable, adoption is still in its early stages. The question is: will you take the lead or risk falling behind? In 2025, leaders must move beyond exploration and take decisive action.

What to do in 2025? Take action to explore and embrace AI’s potential to help Sales & Marketing be more efficient and effective.


Our new research and recently released whitepaper, "The impact of AI on Go-to-Market strategies, programs, and investments," outline several key action items for Sales & Marketing leaders to embrace, as summarized by Forbes:

  1. Identify Areas of Emerging Growth. The increased demand for AI-enabled solutions creates opportunities for new revenue streams. It’s critical for GTM leaders to identify areas with the most potential for growth and invest accordingly.
  2. Recognize Changing Buyer Needs. As buyer behavior shifts, closely track changes in that behavior to guide strategies around customer targeting, promotion timing, support tactics and more.
  3. Reinvent New Routes to Market. Disruptive technology like AI will create new expectations from customers about how they want to engage with vendors. Organizations will need to rethink their strategies to meet those expectations and optimize their distribution channels.
  4. Reimagine the Jobs AI Won’t Do. With AI handling routine tasks, teams can refocus on higher-value activities that drive growth, encouraging a more strategic use of human resources. Successful companies will identify jobs with the highest potential and redesign AI-enabled workflows to support them.
  5. Take a Unique Approach to AI Solutions. It’s not enough to offer innovative AI solutions. To stand out in a crowded marketplace, companies must match their unique value propositions with AI, making it clear how they differ from competitors. That enables crisp positioning that makes the value clear to your audiences.
  6. Activate New High-Performing Sales Motions. Effective activation is the key to driving value from new AI strategies. Marketing and sales leaders will need to work together on creating demand generation campaigns, account-based marketing (ABM) programs and sales motions to build the pipeline.

As AI continues to reshape sales and marketing, now is the time for leaders to take proactive steps toward adoption. The opportunity to drive efficiency, enhance personalization, and unlock new revenue streams is too significant to ignore. Organizations that embrace AI’s potential will gain a competitive edge by identifying growth areas, adapting to evolving buyer behaviors, and reimagining go-to-market strategies.

As highlighted in our latest research, success will depend on integrating AI in ways that enhance—not replace—human expertise. The future of Sales and Marketing is AI-powered, and those who act now will be best positioned to lead the way.

Download our report, “The impact of AI on Go-to-Market strategies, programs, and investments”​

How GenAI is changing B2B buying dynamics (and why GEO is now key)

It’s well known that GenAI is transforming go-to-market strategies. “From content creation and product development to improving employee productivity, its use as a tool in sales and marketing to automate manual processes and personalize customer interactions is beginning to emerge.” (Forbes, 2024).

But AI isn't just driving a seismic shift in how marketers and sellers get things done. It's also fundamentally changing how B2B buyers get answers to key buying questions, how they find and consider potential providers, and how quickly they can research them.

Now, you might be thinking this adoption trend applies only to younger B2B decision makers (see trend #4). But you'd be wrong. Buyers' shift to generative AI (GenAI) over standard web search engines is fast becoming universal across all B2B buyers. Get this: just a few years after ChatGPT was first introduced, 89% of B2B buyers now use GenAI as one of their top sources of self-guided information in every phase of the buying process (source: Forrester, 2024 B2B Buyers' Journey Survey).

The question is, do your marketing efforts reflect this shift? Do you know how tools like ChatGPT, Claude, Perplexity, and Gemini represent your brand in the results they generate for relevant queries? Are you taking steps to ensure your brand is being found and is showing up in the right way?

What to do in 2025? Don’t get caught off guard—time to integrate Generative Engine Optimization (GEO) into your SEO strategy.

This change in buyer behavior is moving fast, so put simply, it’s (past) time to start getting more proactive when it comes to managing your brand for AI-generated search results. To do so, consider these five tips:

  1. Generative Engine Optimization (GEO) focuses on clear, direct answers within comprehensive and context-rich content to address user queries. Format your on-page content accordingly.
  2. When stuck, just ask AI. Utilize LLMs to review and critique your on-page optimizations as well as test or simulate user query response.
  3. While Search AI results can be tough to track, Google AI Overviews (via tools like SEMRush) can provide insight into how well other LLMs are indexing your work.
  4. Ensure GEO and AI search strategies work in tandem with brand building campaigns, as GEO relies on strong, authoritative brands, backlinks, and user engagement, just as much as traditional SEO.
  5. Review your existing organic strategy. Ask your agency: is GEO part of it, and how are you optimizing toward it?

The rise of generative AI is transforming how businesses connect with and influence their audiences. As buyer behavior evolves, so must our strategies, ensuring we adapt to new technologies and meet buyers where they are. Success in this new landscape requires proactive engagement, thoughtful innovation, and a commitment to staying ahead of the curve.

Exciting new MEP (Marketing Effectiveness Platform) features

MEP Version 0.8.5

Over the past two years, we have been working on building a business intelligence, scenario planning, and optimization SaaS platform for marketers. MEP provides decision-makers with a single place to understand multi-channel performance, and perform “what-if” analysis of spend by channel.

A tremendous amount of care has been put into building MEP to be more than just a shiny app (no pun intended). Each company (or business unit) has its own unique marketing mix and architecture, and each element of that architecture—channels, time granularity, cross-section (segmentation), upper- versus lower-funnel—has been parameterized in model objects. Model objects must validate against a JSON metadata file before they are displayed in the platform—and this provides real scalability.
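
As a rough illustration of what that validation gate might look like (the schema and field names below are invented for this example, not MEP's actual metadata format), a model object can be checked against a JSON Schema before it is surfaced:

```python
import json
from jsonschema import validate  # pip install jsonschema

# Hypothetical metadata schema for a model object (not MEP's real schema).
schema = {
    "type": "object",
    "required": ["model_id", "time_grain", "channels", "model_family"],
    "properties": {
        "model_id": {"type": "string"},
        "time_grain": {"enum": ["day", "week"]},
        "cross_section": {"type": "array", "items": {"type": "string"}},
        "channels": {"type": "array", "minItems": 1, "items": {"type": "string"}},
        "model_family": {"enum": ["bayesian", "gam"]},
    },
}

metadata = json.loads("""
{
  "model_id": "retail_weekly_v3",
  "time_grain": "week",
  "cross_section": ["dma"],
  "channels": ["paid_search", "display", "direct_mail"],
  "model_family": "bayesian"
}
""")

validate(instance=metadata, schema=schema)  # raises ValidationError if the object is malformed
print("Model object metadata is valid; safe to display in the platform.")
```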

For the first two years of its development, we were focused on critical infrastructure. We made steady progress below the water line, but there wasn’t a ton to show for it. Over that time, the team focused on integrating the front end with our big data back end (Databricks); user roles and permissions, to ensure that each user and client’s data were secure and private; building out our JSON metadata parameterization; adding support for all of the models and curve functions we use (Bayesian, GAM, etc.); and building out the critical tables and charts to understand marketing effectiveness.

Over the past three months, the ship has started to take shape over the water line, and it’s really impressive (it’s even more impressive knowing how robust the hull is—OK, I’ll stop torturing that analogy.)

Scenario Planning

We thought a lot about how to let managers visually plan different levels of marketing spend, and show what the results of these decisions would be. At first, we deployed a simple spreadsheet download/upload function. We thought this would be the most flexible option, but our users thought it was clunky (it was). So, we went back to the drawing board and came up with three different on-platform scenario planning options: Manual, Strategic, and Optimized.

Figure 1: Choosing a scenario type from a model. Note the richness of the metadata; this is evidence of the “underwater” work of the past two years.

Manual provides the user with ultimate power. In this approach, users interact directly with model dataframes in Databricks and then recalculate the scenario. This is particularly useful for our analysts, who are routinely running scenario after scenario with tiny changes in spend and mix in preparation for client deliverables.

Strategic is for business users who want to quickly get to "what if" answers. In the strategic pane, users can choose any input variable—spend, impressions, or controls—and change it, up or down, either by a percentage or by a fixed amount, for any time period. There is no upper limit on the number of changes, and if you make a mistake, you can delete it. Once you're happy with a scenario, you save it, give it a name, and then send it back to the Databricks cluster to run.

Figure 2: What would happen if we only spent 25% as much on consideration-focused advertising, over the entire modeling period?
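
In spirit, a strategic adjustment like the one in Figure 2 boils down to a filtered, scaled copy of the spend panel. The sketch below shows the idea with a toy pandas frame; the column names and the apply_adjustment helper are invented for illustration, not MEP's actual interface:

```python
import pandas as pd

# Toy weekly spend panel; in MEP this lives in Databricks, here it is a small pandas frame.
panel = pd.DataFrame({
    "week": pd.date_range("2024-01-07", periods=4, freq="W"),
    "channel": ["consideration_video"] * 4,
    "spend": [100_000.0, 120_000.0, 90_000.0, 110_000.0],
})

def apply_adjustment(df, channel, factor=None, delta=None, start=None, end=None):
    """Scale (factor) or shift (delta) spend for one channel over an optional date window."""
    out = df.copy()
    mask = out["channel"].eq(channel)
    if start is not None:
        mask &= out["week"].ge(start)
    if end is not None:
        mask &= out["week"].le(end)
    if factor is not None:
        out.loc[mask, "spend"] *= factor
    if delta is not None:
        out.loc[mask, "spend"] += delta
    return out

# "What if we only spent 25% as much on consideration-focused advertising?"
scenario = apply_adjustment(panel, "consideration_video", factor=0.25)
print(scenario)
```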

Optimized is just what it sounds like: a user can optimize for, say, total sales in a given period, and then add a series of constraints. Once they are satisfied, the scenario is sent back to Databricks for computation. This can take a while; these models aren't simple linear regressions, so we can't use matrix algebra to solve for an optimum. Instead, our awesome team (led by Sam Arrington) built a two-stage program that searches for a macro solution and then homes in on a local minimum or maximum. When the optimization is done, the user gets an email and can see the answer.
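
A stripped-down sketch of that two-stage idea, using a toy diminishing-returns response curve (the coefficients, budget, and curve shape are assumptions, not the production model):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
budget = 1_000_000
betas = np.array([40.0, 25.0, 60.0])  # assumed diminishing-returns coefficients per channel

def predicted_sales(spend):
    return float(np.sum(betas * np.log1p(spend)))  # toy response curve for illustration

# Stage 1: coarse random search over feasible allocations (the "macro" solution).
candidates = rng.dirichlet(np.ones(3), size=2_000) * budget
best = max(candidates, key=predicted_sales)

# Stage 2: local refinement around the best candidate, keeping the budget constraint.
result = minimize(
    lambda x: -predicted_sales(x),
    x0=best,
    method="SLSQP",
    bounds=[(0, budget)] * 3,
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - budget}],
)
print(result.x.round(0), -result.fun)
```

Stage one scatters candidate allocations across the budget; stage two polishes the best one with a gradient-based local optimizer under the same constraint.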

When doing this work, we realized that the days of simple “linear program” (think Excel Solver) optimization for marketing are over. We’ve entered a new phase, where advanced machine learning techniques are required, not optional. I don’t like using “AI” flippantly, but we have some of that in here, and it’s the only way this works as fast as it does. More to come on that in coming quarters.

Model Comparison

When we started down the path of scenario creation, we knew we needed an easy way to compare two models or outcomes. We went a little further than just allowing a user to compare two scenarios, however. We built a more robust method that allows a user to compare two of anything. The comparison looks both at overlapping channels and those that are only present in one of the objects—a full outer join, if you will. This allows a lot of flexibility—if you want to know how two different models look, you can do that, too. It’s basically a Swiss Army Knife for marketing data comparison, and will support many future use cases for MTA, testing, and basic reporting.

Figure 3: Model comparison provides a clean ledger between base models and scenarios, or between two scenarios.
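
The "full outer join" behavior is easy to picture with two small contribution tables; the channel names and numbers here are invented:

```python
import pandas as pd

base = pd.DataFrame({"channel": ["search", "display", "direct_mail"],
                     "contribution": [420, 310, 150]})
scenario = pd.DataFrame({"channel": ["search", "display", "ctv"],
                         "contribution": [510, 260, 90]})

# Full outer join: overlapping channels line up, channels unique to either side are kept.
ledger = base.merge(scenario, on="channel", how="outer", suffixes=("_base", "_scenario"))
ledger["delta"] = ledger["contribution_scenario"].fillna(0) - ledger["contribution_base"].fillna(0)
print(ledger)
```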

Multi-Stage Modeling

We spend a lot of time at Marketbridge making sure that upper-funnel tactics—like display, OOH, digital video, and social—get proper credit for their contributions to sales. To do this, we build multi-stage models, in which upper-funnel tactics enter as drivers both of end sales and of so-called capture channels—typically branded paid search and affiliate.

To make this happen, models must be “aware” of other models—concretely, a dependent variable of one model is also an input (independent variable) of another model. Behind the scenes, this means that model objects have been built with metadata that attaches them to one another via variables.
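
Conceptually, the linkage can be discovered from the metadata itself: if one model's dependent variable appears among another model's independent variables, the two are chained. A minimal sketch (the model and variable names are hypothetical):

```python
# Hypothetical metadata records showing how one model's output feeds another's inputs.
models = {
    "branded_search_model": {
        "dependent": "branded_search_clicks",
        "independent": ["tv_impressions", "digital_video_impressions", "seasonality"],
    },
    "sales_model": {
        "dependent": "applications",
        "independent": ["branded_search_clicks", "direct_mail_pieces", "pricing"],
    },
}

def find_links(models):
    """A variable that is one model's dependent and another's independent links the two."""
    links = []
    for upstream, up_meta in models.items():
        for downstream, down_meta in models.items():
            if upstream != downstream and up_meta["dependent"] in down_meta["independent"]:
                links.append((upstream, up_meta["dependent"], downstream))
    return links

print(find_links(models))
# [('branded_search_model', 'branded_search_clicks', 'sales_model')]
```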

At the same time, users should be able to visualize and link models together. In MEP, a user should be able to point a model’s output to another model’s input—potentially in an endless chain. We’ve added a neat visualization to make this happen.

Figure 4: Making a model “system” (for now, two stages only) is now visual.

Up Next: AI, APIs, MTA and Testing Integration, and Benchmarking

Our roadmap is really exciting for the rest of 2024 and into 2025. We’re working on a more integrated marketing measurement approach called ITMA (integrated testing, mix, and attribution) that takes the best elements of test-and-learn processes, econometric inference, and multi-touch attribution and integrates them into a single approach.

We are spending a lot of time building data connectors to the big publishers and platforms to get data into longitudinal human records (LHRs) and econometric data frames. Traditionally, the time to get data into a model has been the limiting factor for multi-channel attribution; our goal is to get this time down from months to hours. It’s a big job, with a lot of edge cases, but expect announcements on this in Q1.

AI is a big topic in effectiveness and attribution. Today, we use generative AI mainly in the code-building and model-construction phase. We have cut the time to write a function or method by around 80% using various AI copilots. The next big step will be integrating AI as “agentic search agents” looking for optimal fits and unexpected relationships.

Finally, benchmarking is a big ask from clients. What’s the typical CPA of branded paid search in healthcare? Is a ROAS of 120 good for a direct-to-consumer electronics company? What percentage of business should marketing be driving? Today, these answers are qualitative; we’ve done a lot of projects and “know” the answers, but we don’t have a quantitative database. The key to getting to this database is metadata and taxonomy. As I mentioned above, we’ve put a huge amount of effort into parameterization, so we will be starting a benchmarking service later in 2025 leveraging all of these data, at a channel and industry level.

That’s all for now on MEP. We’d love to talk to you about your marketing measurement and effectiveness challenges. Complete the form below to schedule a meeting!

Introducing Integrated Testing-Mix-Attribution

A brand new approach to marketing mix insights

Today's marketing leaders are looking for instant, accurate, and complete insights from their analytics stack. Unfortunately, no single tool–whether Testing, MMM, or MTA–can be that silver bullet on its own. The solution is to combine all three approaches into one unified system. We call this ITMA, or Integrated Testing-Mix-Attribution.

In ITMA, we use each inferential method for what it is good for, in a partially automated, integrated data science environment:

  • (T)esting is good for precisely understanding incrementality
  • M(M)M is good for understanding long-run, non-marketing, and inter-channel effects
  • MT(A) is good for fast reads with unlimited granularity

Figure 1: Testing results are integrated into MMM, which then feeds insight to real-time MTA reporting.

This approach provides significant benefits to the marketing leader:

  • Immediate Results: Because results are built at a record level for each new sale, marketing leaders can understand channel, campaign, and audience attribution in real time via business intelligence dashboards.
  • Consistent Answers: Because stimulus, response, control, and audience data all sit in one data lake, consistency is baked in.
  • Confidence Estimates: Mean estimates are always shipped with upper and lower bounds, at any percentile. There is no limit to channel granularity; more channels mean wider confidence intervals, which narrow again with more time or testing.
  • Total View of Causality: Integration of upper-funnel brand-focused marketing—and its impact on attitudes—is built in. Every channel comes with its immediate (within 90 days) and long-term impact, forming a complete picture of return.
  • Marketing Data Lake: ITMA is built on a Spark-based Delta Lake (e.g., Databricks) that can serve multiple use cases, including reporting, ad hoc analytics, and activation. Because all of the data are pristine, marketers can most likely replace multiple existing systems with one unified ledger—a marketing income statement for the CMO.

The nitty gritty: how does it work?

Marketbridge’s ITMA is built in Databricks, hosted at the cloud provider of your choice. This is not SaaS. Rather, it is a purpose-built, evolving service infrastructure that can be insourced as required.

Components include:

  • Databricks tables with common taxonomy and metadata
  • Data connectors to publishers, platforms, and marketing technologies
  • Reproducible data engineering workbooks
  • Version control and documentation in GitHub
  • The R-Shiny front-end MEP, which provides reporting, scenario analysis, and optimization
  • The R modeling library mbmmm, which provides econometric, longitudinal, and testing inference, optimization, and taxonomy standardization

Figure 2: Technical architecture of ITMA.

Marketing Data Lake

The ITMA rests on a marketing data lake: A complete view of marketing stimulus and response, along with associated audience and customer information. This data lake provides significant ancillary benefits beyond attribution and optimization; because it must undergo ongoing quality assurance (QA) testing and remediation, it can function as a marketing general ledger—a sorely missing component of many organizations.

Download our whitepaper, “The superpowered CDP: Building a go-to-market data lake”​

For a comprehensive exploration of the technical and use case review of a marketing data lake, download our paper.​

The basic table structure starts with a longitudinal human record (LHR): a record of each individual’s interactions both “in the wild” (third party) and on domain (first party). Where identity resolution is not available, a probability is attached to a record to provide a complete view of potential stimulus. This LHR is then enriched with aggregated data (for example, upper funnel advertising, brand tracking, or economic data). When customers convert, first party demographic data can be cross-walked, and third-party demographics can be appended via an identity resolution service of the client’s choosing (for example, Experian, Equifax, or LiveRamp).
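
For intuition, an LHR can be pictured as a long table with one row per person per touch or outcome, carrying a match probability where identity resolution is inferred rather than known. The layout below is illustrative only; the column names are invented, not Marketbridge's actual schema:

```python
import pandas as pd

# Illustrative LHR layout: one row per person per touch or outcome.
lhr = pd.DataFrame({
    "person_id": ["a17", "a17", "a17", "b42"],
    "event_ts": pd.to_datetime(["2024-03-01", "2024-03-04", "2024-03-09", "2024-03-02"]),
    "source": ["third_party", "third_party", "first_party", "third_party"],
    "channel": ["ctv", "paid_search", "application_start", "display"],
    "match_probability": [0.62, 1.00, 1.00, 0.48],  # < 1.0 where identity resolution is inferred
    "converted": [0, 0, 1, 0],
})
print(lhr.sort_values(["person_id", "event_ts"]))
```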

Because Databricks uses distributed storage and compute, query times are shortened from hours to seconds. When compute is not being used, clusters can be shut off, keeping costs reasonable.

Rapid Data Connectors

Because speed to insights is a primary objective of ITMA, shortening the time between marketing execution and ingestion into the data lake is critical. To accomplish this, APIs and direct linkages to data providers via cloud providers are the preferred methods of data transfer. This is most feasible for programmatic and digital marketing.

Reproducible Data Engineering

Marketing effectiveness measurement most often fails in the data transformation phase, before any analysis takes place. The “garbage in, garbage out” mantra is operative—small errors in grouping, summing, counting, and joining multiply and drive large errors downstream.

No black-box code or spreadsheet math is used to drive results. All code—whether custom for a given installation, Marketbridge libraries, or open-source libraries and packages—is available to inspect. Changes to code are preserved in perpetuity, ensuring auditability.

Download our whitepaper, “A roadmap for modern marketing analytics”​

Download our whitepaper to learn more about reproducible data engineering in the context of marketing analytics.

MEP Front End

The Marketbridge Marketing Effectiveness Platform (MEP) is a web-based decision support interface that allows marketers to understand channel-by-channel return, run hypothetical scenarios, and optimize their marketing mix for different objectives. It runs on the same open-code framework, using the same data, all the way back to the longitudinal human record.

mbmmm

mbmmm comprises a set of libraries and packages that power statistical inference, model validation, metadata, and data structures. It is totally flexible and extensible, with no tight couplings that will limit future flexibility.

Case Study

A health insurance carrier was juggling multiple marketing measurement and effectiveness methods, tools, and data structures. Each provided different answers—sometimes dramatically different from system to system. This resulted in low trust in analytics and slow, unconfident marketing decision-making.

Marketbridge worked with the marketing analytics team to replace a black-box MMM, a software-based MTA, and a fragmented testing approach with a single measurement and optimization process: ITMA. Over the course of nine months, technology, analytics, and change management workstreams were launched and ultimately integrated to provide marketing executives with a unified multi-channel performance system.

The core of the system was the Marketing Data Lake, built around each newly acquired customer. A complete graph of known and inferred touches prior to conversion allowed attribution, while crosswalks to first- and third-party data allowed almost unlimited audience profiling—critical in understanding how different kinds of customers made the journey from awareness to learning to shopping to submitting applications.

The data lake is fed into three core systems. First, an econometric model forecasting total applications and sales by day and by region was built. This model used the data lake as its main input, grouping and summing both stimulus and response to create a cross-sectional time-series data asset, updated daily. This econometric model—essentially an MMM (media mix model)—also estimated revenue, leads, and other KPIs, and included non-marketing variables like plan strength, the macroeconomy, seasonality, and pricing. Second, a testing “factory” was built and kicked off. Tests were planned in a systematic way, using a kanban board. Each test was appropriately scoped (with one learning objective); statistically powered for low-risk readouts; and scheduled and integrated with marketing execution teams.

Testing was championed at the highest level of leadership (CMO and Chief Commercial Officer) as an ongoing innovation driver; because of this, most short-run concerns about lost performance were overcome. Once tests concluded, standard readout templates allowed learning to be effectively catalogued and put into action. Finally, test results were fed back into the econometric model and the MTA as Bayesian priors.
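
The mechanics of "feeding a test result back as a prior" can be illustrated with a simple normal-normal update: the experiment supplies a tight prior on a channel's lift, and the model's own (noisier) estimate updates it via precision weighting. All numbers below are hypothetical:

```python
# Hypothetical numbers: a geo holdout test measured a lift coefficient for paid social.
test_mean, test_se = 0.042, 0.010   # lift per $1k spend, from the experiment (the prior)
mmm_mean, mmm_se = 0.055, 0.025     # the MMM's own, less certain estimate (the update)

# Precision-weighted (normal-normal) combination with known variances.
w_test, w_mmm = 1 / test_se**2, 1 / mmm_se**2
posterior_mean = (w_test * test_mean + w_mmm * mmm_mean) / (w_test + w_mmm)
posterior_se = (w_test + w_mmm) ** -0.5
print(round(posterior_mean, 4), round(posterior_se, 4))
```

In the production system this happens inside the Bayesian fitting routines rather than as a standalone calculation, but the precision-weighting intuition is the same.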

Download our whitepaper, “Accelerate growth through test-and-learn marketing culture”​

To learn more about the Marketbridge approach to test-and-learn marketing, download our whitepaper.

Finally, a multi-touch attribution (MTA) system used Markov chain modeling to estimate how each upstream touch or interaction—whether known or inferred—contributed to the ultimate outcome. Priors from the econometric model (MMM) and testing were also fed back into the multi-touch model to provide better estimates for long-run and latent effects. This system powered a daily dashboard showing attribution for each channel, down to a "micro-channel" level (e.g., branded paid search, specific affiliate partners, Meta social reels). Executives used this dashboard to tune campaigns quickly. As priors from MMM and testing were updated, inferences were likewise updated.
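
To show what Markov-chain attribution means in practice, here is a minimal removal-effect sketch on a handful of made-up journeys (the channel names and paths are invented; the production system works on the full longitudinal human record):

```python
from collections import defaultdict

# Made-up journeys: each path ends in "conv" (converted) or "null" (did not convert).
paths = [
    ["display", "search", "conv"],
    ["social", "null"],
    ["display", "social", "search", "conv"],
    ["search", "null"],
    ["social", "search", "conv"],
]

def transition_counts(paths):
    """Count first-order transitions, with an implicit 'start' state at the head of each path."""
    counts = defaultdict(lambda: defaultdict(int))
    for p in paths:
        states = ["start"] + p
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
    return counts

def conversion_prob(counts, removed=None):
    """P(reach 'conv' from 'start') in the absorbing chain, optionally with one channel removed."""
    states = set(counts) | {t for d in counts.values() for t in d}
    v = {s: 0.0 for s in states}
    v["conv"] = 1.0
    for _ in range(200):                 # fixed-point iteration; converges for absorbing chains
        for s in states:
            if s in ("conv", "null"):
                continue
            if s == removed:
                v[s] = 0.0               # removal: this channel's traffic is routed to "null"
                continue
            total = sum(counts[s].values())
            if total:
                v[s] = sum(n / total * v[t] for t, n in counts[s].items())
    return v["start"]

counts = transition_counts(paths)
base = conversion_prob(counts)
removal_effect = {ch: 1 - conversion_prob(counts, removed=ch) / base
                  for ch in ("display", "search", "social")}
total_effect = sum(removal_effect.values())
attribution = {ch: effect / total_effect for ch, effect in removal_effect.items()}
print(base, attribution)   # base conversion rate is 0.6 for these toy paths
```

Each channel's credit is proportional to how much the overall conversion probability falls when that channel is "removed" from the chain, which is the standard removal-effect heuristic.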

The system replaced a six-figure black-box MMM solution and several complex identity-graph-based attribution technologies, saving around $2.5 million per year, while adding near-real-time attribution and reducing confusion from conflicting ROAS and CPA estimates. The marketing data lake quickly drove additional use cases, including audience profiling, customer experience, and media auditing. Within one year, overall marketing-touched applications increased at a higher-than-fair-share rate, and early indications are that reinvestments in upper-funnel brand marketing are paying off in higher yield rates in previously weak markets.

What to expect

Embedded: We embed our world class marketing data science team inside your domain to build your system. No clean rooms, software licenses, or restrictive contracts. Because we act as our clients’ direct agents, there are no pass-through markups or side arrangements with other vendors or software providers.

Nine Months to Answers: The Marketbridge team sets up the ITMA system inside your domain in six months, and then spends three months in pilot mode, making sure everything works. Because we are consultants at heart, you get weekly updates from the start, where we work with your team to hook up data sources, instantiate tables, run models, and set up dashboards.

Don't Lose Your Marketing Brain: Because the infrastructure we build is open source, you don't run the risk of losing what's been built. While the mbmmm and MEP packages are Marketbridge IP, your team can keep using and extending them, whether we remain your provider or not, subject to a license requiring that they stay inside your walls. This de-risks marketing measurement, future-proofing your team against unforeseen technologies and marketing approaches.

We Stick Around to Keep the Innovation Going: Once ITMA is moved into production mode, the team shifts into “run” mode, providing weekly updates on marketing performance, making enhancements, and helping you move from good marketing performance to world-class.

New innovations are tackled using an agile approach. A backlog of tests, analytics features, and new data sources is maintained in a kanban board. We work with the client collaboratively to prioritize what to work on next. All new work is done using the same reproducible, white-box methods.

Learn more and get started

We would love to meet with you to understand the current state of marketing measurement and optimization at your company, and to plan an ITMA journey that will get you to better effectiveness in less than a year.

Complete the form below to schedule a meeting with our Chief Analytics Officer.


1 These more comprehensive econometric models are sometimes called “Commercial Mix Models” due to their larger scope. As the scope of explanatory statistical models increases, they become useful to other parts of the organization, like finance and sales.

Survey pricing methodologies: Gabor-Granger vs. Van Westendorp

Pricing your product or service just right can feel like solving a puzzle without the picture on the box to reference. Although it can be challenging, we find pricing is one of the biggest short-term levers to drive sales performance. In fact, McKinsey reports “pricing right is the fastest and most effective way for managers to increase profits…a price rise of 1 percent, if volumes remained stable, would generate an 8 percent increase in operating profits.”

So how do you find the sweet spot for your new or existing product or service – the price point at which you can generate revenue without scaring off potential customers? That's where survey pricing methodologies come into play. Two of the most popular direct survey-based pricing methodologies are the Gabor-Granger Model and the Van Westendorp Price Sensitivity Meter (PSM), although others exist. These methods are most helpful when you're looking for simple, straightforward answers about pricing. So how do they work, and when should you use each one? Let's break it down.

Gabor-Granger Model: Understanding Price Elasticity

Think of the Gabor-Granger Model as a straightforward way to find the maximum price customers are willing to pay.

Here’s how the Gabor-Granger Model works:

After giving respondents the product or service description, show them a series of prices and ask how likely they are to purchase the product at each price point. If they are willing to pay that price, they are offered a higher (randomly chosen) price. If they are not willing to pay that price, they are offered a lower (randomly chosen) price. The algorithm repeats until we find the highest price each respondent is willing to pay.

Based on their answers, you can pinpoint the price point that maximizes expected revenue while keeping purchase likelihood high.

One of the helpful features of the Gabor-Granger model is that it helps you measure price elasticity—essentially, how sensitive customers are to price changes. For example, if you lower the price, will you see a surge in demand? Or, if you raise the price a little, will you only lose a small percentage of buyers? This method helps you predict those scenarios with confidence.

Sample Gabor-Granger Output (for illustrative purposes only)
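
An output like the one above can be computed directly from each respondent's highest acceptable price. The sketch below uses made-up willingness-to-pay data to trace the demand curve, the revenue curve, and an arc elasticity around the revenue-maximizing price:

```python
import numpy as np

# Hypothetical highest acceptable price per respondent, found via the Gabor-Granger ladder.
max_wtp = np.array([12, 15, 9, 20, 14, 11, 18, 16, 10, 13])

price_points = np.arange(8, 22)                                    # candidate prices to evaluate
demand = np.array([(max_wtp >= p).mean() for p in price_points])   # share willing to buy
revenue = price_points * demand

best_idx = int(np.argmax(revenue))
print(f"Revenue-maximizing price ≈ {price_points[best_idx]}")

# Arc elasticity between the best price and the next price up: how sharply demand falls
# as price rises (values below -1 indicate elastic, price-sensitive demand).
dq = demand[best_idx + 1] - demand[best_idx]
dp = price_points[best_idx + 1] - price_points[best_idx]
q_mid = (demand[best_idx + 1] + demand[best_idx]) / 2
p_mid = (price_points[best_idx + 1] + price_points[best_idx]) / 2
elasticity = (dq / q_mid) / (dp / p_mid)
print(f"Arc elasticity around that price ≈ {elasticity:.2f}")
```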

When should you use it?

The Gabor-Granger model is great when you’re trying to:

  • Find revenue-optimizing price points.
  • Get a clear sense of what your customers are willing to pay, especially for established products.
  • Focus on just one product or service without considering competition.

What’s the catch?

While Gabor-Granger gives you clear pricing estimates, it has a couple of disadvantages:

  • Since you’re suggesting the price points, it doesn’t give you insight into what consumers naturally think is a fair price.
  • Because people know they’re being asked about pricing, they might understate their willingness to pay to try to get a better deal (“gaming the system”).
  • The model only considers your brand or product (without factoring in the competition).

Van Westendorp: Letting Consumers Set the Range

Now, let’s talk about the Van Westendorp model. Unlike Gabor-Granger, which asks respondents to react to predefined prices, Van Westendorp flips the script and lets respondents tell you what prices they think are too low, too high, and just right.

Here’s how the Van Westendorp model works:

Ask a series of questions that gauge perceptions of price. There are typically four questions, set in the context of “At what price would you consider the product to be…”:

  • “Priced so low that you would feel the quality couldn’t be very good?” – to determine the “too cheap” price.
  • “A bargain—a great buy for the money?” – to determine the “cheap” price.
  • “Starting to get expensive, so that it is not out of the question, but you would have to give some thought to buying it?” – to determine the “expensive” price.
  • “So expensive that you would not consider buying it?” – to determine the “too expensive” price.

From there, you can build a price sensitivity meter that shows the range of acceptable prices from “too cheap” to “too expensive.”

Sample Van Westendorp Output (for illustrative purposes only)
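
One common way to reduce the four response distributions to a chart like the one above is to plot cumulative shares across a price grid and look for the crossing points. The sketch below computes just the "too cheap" / "too expensive" crossing, often labeled the optimal price point (all responses are made up):

```python
import numpy as np

# Hypothetical answers to two of the four Van Westendorp questions (one value per respondent).
too_cheap = np.array([5, 8, 6, 7, 10, 9, 6, 8])
too_expensive = np.array([25, 28, 27, 30, 35, 29, 28, 32])

grid = np.linspace(too_cheap.min(), too_expensive.max(), 500)

# At each candidate price: share who would call it too cheap vs. too expensive.
pct_too_cheap = np.array([(too_cheap >= p).mean() for p in grid])          # falls as price rises
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in grid])  # rises as price rises

# Optimal price point (one common convention): where the two curves cross.
opp = grid[np.argmin(np.abs(pct_too_cheap - pct_too_expensive))]
print(f"Optimal price point ≈ {opp:.2f}")
```

The "bargain" and "expensive" curves are handled the same way and bound the rest of the acceptable price range.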

When should you use it?

The Van Westendorp method is ideal for:

  • Getting exploratory results on an acceptable range of prices.
  • Products or services that don’t fit neatly into an existing category.
  • Gaining a deeper understanding of your target demographic and what attitudes they have about price points (and where those attitudes may impact your strategy).

The biggest advantage of Van Westendorp is that you’re getting consumer-driven pricing insights. You’re not dictating the prices (like Gabor-Granger) — consumers are, which can help you understand not only what they might pay but what they want to pay. It’s especially useful when you’re not confident in how the market will react to a specific price point.

Any drawbacks?

While Van Westendorp gives you great insights into price perception, it has a few limitations:

  • It doesn’t offer a clear picture of potential revenue. Knowing that consumers think a price is fair is helpful, but it doesn’t tell you how much money you’re likely to make at that price.
  • Since the pricing preferences are broad, the data can be tricky to interpret if you need concrete numbers for financial forecasting.

So, Which Method Is Best for You?

Both models have their strengths, and in some instances, they can even complement each other.

  • Use the Gabor-Granger model when you’re trying to optimize revenue and you need a clear, calculated price point. This method helps when you’re dealing with products or services that already have a place in the market.
  • Use Van Westendorp when you’re not sure what price the market will accept. It’s a good method for new products or niche items that don’t easily fit into existing categories.

In some cases, we recommend clients use Van Westendorp to gather a baseline of acceptable price ranges and then fine-tune those price points with the Gabor-Granger model to optimize revenue.

Final Thoughts on Survey Pricing

Pricing can make or break your business, but with the right tools, you can turn guesswork into strategy. The Gabor-Granger and Van Westendorp models give you different but equally valuable insights into how much your customers are willing to pay. Whether you’re launching something new or refining an existing product, these survey pricing methodologies can help you strike the perfect balance between affordability and profitability.

Marketing mix modeling example

Marketing Mix Modeling (MMM) is a popular measurement technique to understand how different marketing channels and campaigns—as well as non-marketing factors like pricing, product assortment, distribution channel mix, competitive actions, and the macroeconomic environment—affect business outcomes. While there are many technical resources available online describing the statistical models used in MMMs and the pros and cons of each, a straightforward linear Marketing Mix Modeling example—focusing on the data required and the visual and data outputs emerging—is lacking. In this article, we will go through a complete Marketing Mix Modeling example from start to finish.

Use Cases for Marketing Mix Modeling

Marketing mix modeling (MMM) is a specific type of econometric modeling. Econometric modeling is the analysis of data over time and across categories to understand causality and forecast future results. MMMs, at their simplest, explain how marketing activity drives sales. They have many use cases, including estimating the long-run impact of advertising, optimizing marketing spend between channels, understanding audience responsiveness, evaluating campaign effectiveness, forecasting sales, conducting scenario analysis, and measuring overall performance—usually reported as return on advertising spend, or ROAS.

Marketing mix modeling is used across many industries. The most prevalent marketing mix modeling example is in the consumer packaged goods (CPG) industry. This industry sells mainly through retail distribution, making end customer-level data hard to come by—either for measurement or activation. This means that most marketing is "upper funnel"—video, print, or interactive without a specific call to action. This kind of marketing is ideal for modeling with MMM, as direct attribution is usually impossible.

Soup to Nuts Marketing Mix Modeling Example

Sourcing Data

Marketing mix modeling data can be divided into three basic categories. The vast majority of data are “x” or independent variables. These variables are hypothesized to drive dependent variables. Independent variables can be further sub-divided into control variables and stimulus variables.

Control variables cannot be influenced by the advertising, but still have the potential to explain outcomes. For example, the 10-year Treasury Rate is commonly used as a proxy for credit availability, which can impact consumer demand for more non-essential items. When the rate goes down, credit tends to be cheaper and looser, causing consumers to open their wallets. Conversely, the S&P 500 index is commonly used as a proxy for how wealthy consumers feel; if they have more money in their 401(K)s—even if it will not be available to them for decades—they tend to open their wallets.

Stimulus variables are at least partially controllable by the advertiser. Paid media—think television, digital display and social, paid search, and affiliate marketing—is completely under the control of marketing decisionmakers. Earned media is partially controlled; it takes time for PR and influencer marketing efforts to drive impressions, but companies can still make decisions to increase or decrease focus. Price is also partially controllable; for companies that use third-party distribution channels, setting price is more suggestive, and takes longer to take hold, but it is still a lever. Likewise, overall distribution channel mix is also a longer-term decision, but still has important impacts on marketing performance.

Response variables represent behavior that is affected by marketing. The most common response variable is sales, which can be measured both in dollars and units; either can be used in modeling. More advanced metrics like customer lifetime value (CLV) can also be used in lieu of gross sales.

Building a panel for econometric modeling should start at the lowest level of detail (records), supplementing with aggregated data when that is all that is available.

Figure 1: Building the panel using both record-level and aggregated data.

Intermediate response variables that point to constructs like brand equity can also be collected. Both survey-based metrics like brand awareness, comprehension, affinity, or net promoter score and behavioral data like share-of-search, Google trends, and pre-paywall pageviews can be used as intermediate proxies.

Cross-sectional (sometimes called panel) variables organize independent and dependent variables. Cross-sections can include two types of components: a time component (week or day) and optional category components (geography, audiences, cohorts, etc.). For the econometric time series, each model needs between one and three years' worth of data; a longer history makes the model more robust by uncovering seasonal trends and the impact of outside factors. The goal is to ensure the data spans a period with consistent marketing affecting consumer purchase decisions. Category component data can include geographic areas (e.g., county or DMA), audiences, cohorts (e.g., customer birth dates), or any other relevant grouping criteria. While not strictly necessary, including category components increases the degrees of freedom available and thus the precision and granularity of the estimates.

Once identified, all data sources are merged into a clean "panel" format. Each data set comes from one of three places: various parts of the organization (Finance, Sales, Consumer Insights, Digital, IT, Marketing), vendors that support business functions (CRM, media activation), or public sources (credit rates, the Consumer Price Index). Communication and alignment between these distinct groups are necessary to set data requirements and ensure consistency. This step—sometimes called extraction, transformation, and loading (ETL)—is typically the most time-consuming and error-prone part of the process.

Typically, this “data munging” process is first done in batch format. Input files are arranged in a staging area—sometimes something as simple as a hard disk, but increasingly a cloud-based storage and compute environment like Databricks. It is best practice to write the data transformation steps in a text-based language like SQL or Python and to store them in a version control system like GitHub. The ETL process can then evolve over time as files change, APIs replace flat files, and additional data sources are added.
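To make the panel-building step concrete, here is a minimal pandas sketch. The input files and column names (record-level orders with an order_value, plus a weekly, geo-level media spend file) are hypothetical; a real pipeline would add validation and live in version control as described above.

```python
import pandas as pd

# Hypothetical inputs: record-level sales and weekly, geo-level media spend.
sales = pd.read_csv("sales_records.csv", parse_dates=["order_date"])       # one row per order
media = pd.read_csv("media_spend_weekly.csv", parse_dates=["week_start"])  # one row per week x geo x channel

# Roll record-level sales up to the panel grain (week x geo).
sales["week_start"] = sales["order_date"].dt.to_period("W-SUN").dt.start_time
sales_panel = (sales.groupby(["week_start", "geo"], as_index=False)
                    .agg(revenue=("order_value", "sum"),
                         units=("order_id", "count")))

# Pivot media spend so each channel becomes its own stimulus column.
media_panel = (media.pivot_table(index=["week_start", "geo"],
                                 columns="channel", values="spend", aggfunc="sum")
                    .reset_index())

# Merge response and stimulus variables into one clean panel; missing spend means zero spend.
panel = sales_panel.merge(media_panel, on=["week_start", "geo"], how="left")
channel_cols = [c for c in media_panel.columns if c not in ("week_start", "geo")]
panel[channel_cols] = panel[channel_cols].fillna(0.0)
panel.to_csv("mmm_panel.csv", index=False)
```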

Exploratory Data Analysis and Quality Assurance

One of the most common mistakes beginner MMM modelers make is to jump immediately to regression modeling. However, most “bad” models turn out that way not because of the inferential modeling, but because of quality issues with the underlying data or an incomplete understanding of what the data represents.

Exploratory data analysis (EDA) is also too often treated as a throwaway step, rushed through to get to the “real” work. In fact, EDA should be approached with the same rigor one would apply to regression modeling. To achieve this, the analysts involved need a clear plan of attack, best documented as a series of archetype tables with the expected “golden number” results.

The goal of any data validation is to make sure the “golden number” tracks through the data transformation step. The challenge is that executive leadership views the data at a total budget or revenue level, while the raw data used to create the panel is far more granular (individual sales or marketing campaign activations). Inconsistent labeling and differences in timescale (when a marketing campaign is in flight vs. when it is paid for) can both throw off the reconciliation. A proper EDA confirms that the golden numbers in the analytics data set match the financial “common knowledge” executives will use to judge whether the model can be trusted.

For example, say one desired output table is total end-customer revenue by geography by month for 2024. The golden number would be sourced from an agreed-upon executive dashboard or report. Using the econometric time series data set, the analyst would then group by the required dimensions and sum (in this case, summing revenue by county by month where the year equals 2024).
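This reconciliation can be automated so it runs every time the panel is rebuilt. A minimal sketch, using the hypothetical panel and column names from the earlier ETL example (with "geo" standing in for county) and an invented placeholder for the golden number:

```python
import pandas as pd

panel = pd.read_csv("mmm_panel.csv", parse_dates=["week_start"])

# Golden number from the agreed-upon executive dashboard (illustrative placeholder).
GOLDEN_2024_REVENUE = 182_450_000

# Re-aggregate the analytics panel to the executive grain: revenue by geo by month, 2024.
check = (panel[panel["week_start"].dt.year == 2024]
         .assign(month=lambda d: d["week_start"].dt.to_period("M"))
         .groupby(["geo", "month"], as_index=False)["revenue"].sum())

total = check["revenue"].sum()
pct_gap = abs(total - GOLDEN_2024_REVENUE) / GOLDEN_2024_REVENUE
print(f"Panel total: {total:,.0f} | gap vs. golden number: {pct_gap:.2%}")
assert pct_gap < 0.01, "Panel does not reconcile with the executive golden number"
```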

Beyond validation, exploratory data analysis is also helpful when forming hypotheses. Time series plots, bar charts showing counts or sums of independent and dependent variables by year, quarter, or month, and scatterplots showing relationships between two variables are among the most common visualizations. Keeping a library of these charts in a single workbook (for example, an R Markdown or Jupyter notebook file) makes it fast to profile a new time series data set. Beyond validation and hypothesis generation, initial learnings from the EDA often help lend credibility to MMM results later during delivery.
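Two of the most common EDA views take only a few lines with pandas and matplotlib; the column names below are hypothetical, matching the earlier panel sketch.

```python
import pandas as pd
import matplotlib.pyplot as plt

panel = pd.read_csv("mmm_panel.csv", parse_dates=["week_start"])

# Time series of the response variable, summed across geos.
weekly = panel.groupby("week_start")["revenue"].sum()
weekly.plot(title="Weekly revenue")
plt.show()

# Scatterplot of one stimulus vs. the response to eyeball a relationship.
panel.plot.scatter(x="paid_search_spend", y="revenue", alpha=0.3,
                   title="Paid search spend vs. revenue")
plt.show()
```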

Modeling

Before starting the modeling process, it is important to select an appropriate model type, which depends on the nature of the underlying data and the sophistication of the analysis. In determining model type, there are three primary considerations:

  • Response shape construction
  • Multilevel modeling
  • Frequentist vs. Bayesian

These choices are largely independent: multilevel models can include any of the response shapes discussed, and any model can be estimated in either a frequentist or Bayesian framework.

Response Shapes

Response shapes can be as simple or as complex as the task requires. As complexity increases, we trade ease of training and interpretability for accuracy and flexibility. From simplest to most complex, the three common response shapes are linear, non-linear, and splines.

Marketing stimuli generally exhibit diminishing returns to scale. In Figure 2, the blue response curve has a higher coefficient, but the next dollar should be allocated to the green stimulus, since it is lower on its response curve and will deliver a greater return.

Figure 2: Diminishing returns curves in action.

Linear regression models are linear with respect to the regression parameters (i.e., the betas). This means we can still apply transformations that do not require additional parameters beyond the betas and thereby account for concepts such as diminishing returns; the most common are log, square root, and inverse transformations. While useful, these transformations often produce nonsensical results at very low or very high spend because of their lack of flexibility. For example, log-transforming both the stimulus and response variables implies constant elasticity—meaning that at any point on the curve (i.e., any amount of marketing spend), the percent change in the response variable divided by the percent change in spend is the same. In other words, increasing spend on a marketing channel from $1 to $1.01 results in the same percent increase in sales as going from $1M to $1.01M, which clearly does not represent reality. Nonetheless, linear models are straightforward and easy to interpret, making them suitable for scenarios where we do not expect a complex relationship between stimulus and response variables. They can also be helpful as “back of the envelope” starter models to understand whether a given set of independent variables impacts an outcome.
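A "back of the envelope" log-log model of this kind can be fit in a few lines with statsmodels; the panel file and column names (tv_spend, paid_search_spend, treasury_10y) are hypothetical, carried over from the earlier sketches.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("mmm_panel.csv")

# Log-transform stimulus and response (log1p guards against zero spend weeks).
panel["log_revenue"] = np.log1p(panel["revenue"])
panel["log_tv"] = np.log1p(panel["tv_spend"])
panel["log_search"] = np.log1p(panel["paid_search_spend"])

# Log-log specification: each stimulus coefficient is (approximately) a constant elasticity.
model = smf.ols("log_revenue ~ log_tv + log_search + treasury_10y", data=panel).fit()
print(model.summary())
```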

Non-linear models extend linear models by allowing parameters that enter the model non-linearly with respect to the response. This opens up flexible functional forms that can estimate more realistic diminishing-return curves, and such approaches are typically used to model the “S”-curves seen with upper-funnel advertising. An S-curve shape acknowledges that there is a minimum level of spend below which marketing is ineffective, after which the curve rapidly steepens and then eventually plateaus. While clearly valuable, non-linear models are harder to estimate and thus require more data and training time. This estimation difficulty also typically results in wider parameter confidence intervals.
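One common way to approximate a saturating response of this kind is a Hill-type function fit with SciPy; the spend and sales figures below are invented purely to illustrate the mechanics, not taken from any real model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hill-type saturation curve: near zero at low spend, steepens, then plateaus at `top`.
def hill(spend, top, slope, half_sat):
    return top * spend**slope / (half_sat**slope + spend**slope)

# Hypothetical weekly spend and incremental sales for a single channel.
spend = np.array([0, 10, 25, 50, 100, 200, 400, 800], dtype=float) * 1e3
sales = np.array([2, 15, 60, 180, 420, 610, 700, 720], dtype=float) * 1e3

params, _ = curve_fit(hill, spend, sales, p0=[7e5, 2.0, 1e5], maxfev=10_000)
top, slope, half_sat = params
print(f"Plateau ~ {top:,.0f}, slope ~ {slope:.2f}, half-saturation spend ~ {half_sat:,.0f}")
```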

Generalized additive models further extend modeling capabilities with splines. Splines offer even more flexibility by allowing different polynomial functions (or any other basis function) to be fitted to different segments of the data, ensuring that the model can adapt to varying rates of change in the response variable across different ranges of marketing spend. With this construction, they can theoretically model any smooth response shape including both S-curves and diminishing returns. However, as always there are downsides; without taking care in construction (e.g. applying sufficient regularization) splines often result in nonsensical response shapes (e.g. a response shape that looks like a sine wave) and their nonparametric nature reduces interpretability.
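As a sketch of the spline approach, the snippet below uses the pyGAM library (one option among several, not necessarily the tool behind any given MMM); the lam penalty is the regularization knob mentioned above, and the data and column names remain hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pygam import LinearGAM, s

panel = pd.read_csv("mmm_panel.csv")
X = panel[["tv_spend", "paid_search_spend"]].to_numpy()
y = panel["revenue"].to_numpy()

# One penalized spline per channel; lam controls how aggressively curves are smoothed.
gam = LinearGAM(s(0, n_splines=8, lam=10) + s(1, n_splines=8, lam=10)).fit(X, y)

# Inspect the fitted response shape for the first channel (TV) and sanity-check it.
grid = gam.generate_X_grid(term=0)
plt.plot(grid[:, 0], gam.partial_dependence(term=0, X=grid))
plt.title("Estimated TV response shape")
plt.show()
```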

Multilevel Modeling

Mixed models, multilevel models, and panel models all make use of cross-sectional variables; the terms are used interchangeably depending on the domain. Here, we will use the term multilevel models. The cross-sections are often geographies, which we use below as a concrete marketing mix modeling example, but the same statements apply to any level (e.g., audience or cohort).

Geographical cross-sections provide two dimensions (time and geo). For every point in time we have many samples (one for each geo), and for every geo we have many samples (one for each point in time).

This substantially increases the number of samples in the dataset, increasing the precision of our estimates. It also opens the door to new modeling techniques, the most common of which are:

  • Geo-level response curves: geo-level variation is leveraged to estimate individual response curves by geography, either through no pooling (a geo’s response curve is estimated using only data from that geo) or partial pooling (a geo’s response curve is essentially a weighted average of its no-pooling estimate and the average estimate across all geos); a minimal partial-pooling sketch follows at the end of this section
  • Controlling for unobserved time-varying effects: because there are multiple samples per point in time, dummies for each point in time can be added to the model (often called time fixed effects)

As always, there is no free lunch:

  • The size of the data inherently makes constructing the dataset more difficult and increases training time substantially
  • Data is not always available at the geographic level for all channels, requiring assumptions around how that spend should be distributed in creation of the panel
  • Geographic labels do not always align between different marketing platforms, making joining arduous and error prone
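As referenced above, here is a minimal partial-pooling sketch using a mixed-effects regression from statsmodels. The file and column names (mmm_panel.csv, geo, tv_spend, paid_search_spend, revenue) are hypothetical carry-overs from the earlier examples, and a production model would apply adstock and saturation transformations first.

```python
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("mmm_panel.csv")

# Random intercept and random TV slope by geography: each geo's TV response is
# partially pooled toward the population-level estimate.
model = smf.mixedlm("revenue ~ tv_spend + paid_search_spend",
                    data=panel,
                    groups=panel["geo"],
                    re_formula="~tv_spend")
result = model.fit()
print(result.summary())
```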

Frequentist vs. Bayesian

Frequentist regression (the statistical paradigm most taught and used historically) assumes nothing is known about the relationships of interest prior to collecting and analyzing data; estimates are driven solely by the data collected.

Bayesian regression, on the other hand, combines prior beliefs (i.e., the prior) with the data (i.e., the likelihood) to compute estimates (i.e., posterior estimates). Priors can conceptually be divided into three types:

  • Utility priors: Priors that regularize, helping with multicollinearity and estimation of complex non-linear response curves. As the complexity or granularity of the model increases, utility priors become more necessary.
  • Previous knowledge priors: Priors resulting from domain knowledge, benchmarks, or previous modeling exercises
  • Experimentation priors: Priors resulting from experimental results, most commonly geo-tests. These are particularly useful for channels with questions around causality (e.g., branded paid search).

Historically, frequentist regression became standard primarily for computational reasons. Parameter estimates and p-values can theoretically be computed by hand, or very efficiently with a computer, whereas Bayesian estimates are usually impossible to compute by hand and time-intensive on a computer. Increasing computational power has closed the gap; however, Bayesian regression models still take much longer to train and iterate on.
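To show how the different prior types enter a Bayesian specification, here is a stripped-down sketch using the PyMC library (one of several options). The prior means, the data scaling, and the column names are illustrative assumptions, not benchmarks.

```python
import pandas as pd
import pymc as pm

panel = pd.read_csv("mmm_panel.csv")
tv = panel["tv_spend"].to_numpy() / 1e5
search = panel["paid_search_spend"].to_numpy() / 1e5
y = panel["revenue"].to_numpy() / 1e5

with pm.Model() as mmm:
    # Experimentation / previous-knowledge prior: a geo-test suggested a TV effect
    # around 0.3 with moderate uncertainty (hypothetical figures).
    beta_tv = pm.Normal("beta_tv", mu=0.3, sigma=0.1)
    # Utility prior: weakly regularizing, non-negative effect for paid search.
    beta_search = pm.HalfNormal("beta_search", sigma=1.0)
    intercept = pm.Normal("intercept", mu=0.0, sigma=5.0)
    sigma = pm.HalfNormal("sigma", sigma=1.0)

    mu = intercept + beta_tv * tv + beta_search * search
    pm.Normal("sales", mu=mu, sigma=sigma, observed=y)

    idata = pm.sample(1000, tune=1000, chains=2, target_accept=0.9)
```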

Variable Selection

Almost any combination of independent variables could be added to a model, but arriving at the right structure is an iterative process that demands careful consideration. The wrong combination of variables, omitted variables, or an imbalance in the number of variables can all lead to problematic model structures. Including too many variables can make the model overly complex and difficult to interpret, while too few might result in an oversimplified model that lacks actionability.

Feature selection requires a mix of business judgment about which variables must be included and scientific methods for deciding variable importance. Any factor with even an indirect impact on sales could be of value from the business perspective; examples include the effect of a new campaign for a specific product or the impact of a change in sales channel.

How potential customers interact with marketing stimuli before they make a purchase decision should also be considered. Customers engage with several types of stimuli as they move through the marketing funnel, which consists of four stages: awareness, interest, engagement, and conversion. TV, for example, is typically an awareness-driving channel, while paid search is a conversion channel. These interactions between variables must be kept in mind before turning to a correlation matrix, which is typically the first quantitative step in identifying candidate variables. The matrix displays the correlations between all pairs of variables and can be enhanced with color shading to indicate high positive or high negative correlations, making it easier to spot potentially powerful independent variables. High correlations between variables suggest multicollinearity, a situation where two or more variables are highly correlated and provide redundant information. Multicollinearity can inflate the variance of coefficient estimates and make the model unstable, so identifying and addressing it is crucial in the early stages of model building.
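A shaded correlation matrix of this kind takes only a few lines with pandas and seaborn; the columns below are hypothetical, matching the earlier panel sketch.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

panel = pd.read_csv("mmm_panel.csv")
cols = ["revenue", "tv_spend", "paid_search_spend", "display_spend", "treasury_10y"]

# Color-shaded correlation matrix to spot strong positive or negative pairs quickly.
sns.heatmap(panel[cols].corr(), annot=True, fmt=".2f", cmap="coolwarm", center=0)
plt.title("Correlation matrix of candidate variables")
plt.show()
```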

Variable reduction is often necessary, especially when using non-Bayesian modeling approaches with many collinear stimulus variables. Collinearity occurs when changes in one advertising lever are accompanied by changes in others, and it can lead to counterintuitive results, such as variables that should have positive coefficients turning negative. To mitigate this issue, analysts employ techniques like correlation analysis, variance inflation factors (VIF), and principal component analysis (PCA) to reduce the number of variables while retaining the essential information.
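Variance inflation factors can be computed directly with statsmodels, as in the sketch below; the column names are the same hypothetical ones used throughout.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

panel = pd.read_csv("mmm_panel.csv")
stimulus = sm.add_constant(panel[["tv_spend", "paid_search_spend", "display_spend"]])

# A VIF above roughly 5-10 is a common rule of thumb flagging problematic collinearity
# (the value for the constant term can be ignored).
vif = pd.Series(
    [variance_inflation_factor(stimulus.values, i) for i in range(stimulus.shape[1])],
    index=stimulus.columns,
)
print(vif)
```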

Stepwise regression is a systematic technique for variable selection that can help in building a parsimonious model. This method involves adding and removing variables iteratively based on their statistical significance in explaining the response variable. Forward selection starts with no variables in the model, adding one variable at a time, while backward elimination starts with all candidate variables and removes the least significant ones step by step. Stepwise regression balances between these two approaches, adding and removing variables as needed to optimize the model’s performance.

Regularization techniques like Lasso (Least Absolute Shrinkage and Selection Operator) and Ridge Regression are essential for reducing overfitting. Overfitting occurs when a model is too complex and captures the noise in the data rather than the underlying relationship. Lasso adds a penalty equal to the absolute value of the magnitude of coefficients, effectively reducing some coefficients to zero and thus performing variable selection. Ridge Regression adds a penalty equal to the square of the magnitude of coefficients, shrinking all coefficients towards zero but never setting them exactly to zero. Elastic Net combines both Lasso and Ridge penalties, providing a balance that can be particularly useful in situations with highly correlated predictors. Each of these methodologies helps refine the model structure, ensuring it is both robust and interpretable. By iterating through these steps, analysts can develop a model that accurately captures the relationships between marketing activities and business outcomes, providing actionable insights for optimizing marketing strategies.
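The sketch below shows regularized variable selection with scikit-learn's Elastic Net (Lasso and Ridge are one-line swaps); as elsewhere, the panel file and column names are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

panel = pd.read_csv("mmm_panel.csv")
X = panel[["tv_spend", "paid_search_spend", "display_spend", "treasury_10y"]]
y = panel["revenue"]

# Elastic Net blends the Lasso (L1) and Ridge (L2) penalties; l1_ratio controls the mix,
# and cross-validation picks the penalty strength.
model = make_pipeline(StandardScaler(),
                      ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5))
model.fit(X, y)
enet = model.named_steps["elasticnetcv"]
print(dict(zip(X.columns, enet.coef_)))  # coefficients shrunk to zero drop out
```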

Evaluating the Model

Evaluating a Marketing Mix Model’s (MMM) performance involves two main components: model fit and prediction. Four key traits should always be considered:

  1. It accurately represents reality
  2. It predicts accurately for out-of-sample periods
  3. It measures the relationships between marketing variables and external factors
  4. It provides meaningful decision-making insights

These traits ensure that the model is both statistically sound and practically useful.

The first validation step for any model is to run diagnostic checks such as residual analysis. Residual analysis examines the residuals (the differences between observed and predicted values) for homoscedasticity (constant variance), autocorrelation (residuals are not correlated over time), and normality (residuals follow a normal distribution). Evaluating residuals over time also helps identify unobserved effects that may significantly bias results. These checks help ensure that the model assumptions hold and that the model provides a reliable representation of reality.
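Several standard residual diagnostics are available off the shelf in statsmodels; the sketch below assumes the hypothetical panel used throughout and a simple OLS fit standing in for the full MMM.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson, jarque_bera

panel = pd.read_csv("mmm_panel.csv")
fit = smf.ols("revenue ~ tv_spend + paid_search_spend + treasury_10y", data=panel).fit()
resid = fit.resid

# Autocorrelation: values near 2 suggest little serial correlation in residuals.
print("Durbin-Watson:", durbin_watson(resid))

# Heteroscedasticity: a small p-value suggests non-constant residual variance.
bp_stat, bp_pvalue, _, _ = het_breuschpagan(resid, fit.model.exog)
print("Breusch-Pagan p-value:", bp_pvalue)

# Normality: a small p-value suggests residuals deviate from a normal distribution.
jb_stat, jb_pvalue, _, _ = jarque_bera(resid)
print("Jarque-Bera p-value:", jb_pvalue)
```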

To validate whether a model has accurately inferred an underlying pattern from the actual data (model fit), we can use the R-squared metric, which measures the proportion of variance in the dependent variable that is predictable from the independent variables. R-squared can be misleading in complex models, so adjusted R-squared, which accounts for the number of predictors, is usually the better choice.

When trying to compare different models, one can use the Akaike Information Criterion (AIC) or Bayesian Information Criterion (BIC) with a lower value indicating a better model. These criteria penalize models with more parameters, thus discouraging overfitting.

Keep in mind that a model producing better fit metrics such as R-squared and BIC but failing to meet model assumptions is worse for inference than one with a lesser fit that meets those assumptions.

The gold standard for measuring a model’s ability to accurately predict unseen data is the out-of-sample mean absolute percentage error (MAPE). This metric assesses prediction accuracy by comparing predicted values to actual values in out-of-sample data. Out-of-sample testing involves splitting the data into a training set and a hold-out set: the model is trained on the training set and then tested on the hold-out set to evaluate its generalizability. Cross-validation techniques, such as k-fold cross-validation, extend out-of-sample testing. In k-fold cross-validation, the data is divided into k subsets, and the model is trained on k-1 subsets while the remaining subset is used for testing; this process is repeated k times, with each subset used exactly once for testing. With time series data, the folds are usually split so that training folds precede testing folds temporally. These methods help ensure that the model is not overfitted to the training data and can generalize to new data. Even so, care must be taken not to optimize purely for out-of-sample performance: a model that predicts well is not useful for inference if it does not properly uncover the relationships between stimulus and response.
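A minimal sketch of time-ordered cross-validation and out-of-sample MAPE using scikit-learn, again with a simple linear model standing in for the full MMM and the hypothetical panel columns from earlier:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import TimeSeriesSplit

panel = pd.read_csv("mmm_panel.csv", parse_dates=["week_start"]).sort_values("week_start")
X = panel[["tv_spend", "paid_search_spend", "treasury_10y"]].to_numpy()
y = panel["revenue"].to_numpy()

# Expanding-window splits: each training fold precedes its test fold in time.
mapes = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    mapes.append(mean_absolute_percentage_error(y[test_idx], preds))

print(f"Out-of-sample MAPE across folds: {np.mean(mapes):.1%} +/- {np.std(mapes):.1%}")
```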

Perhaps the most important validation is common sense, known as “face validity.” This involves ensuring that the model makes sense from a business perspective and aligns with known market behaviors and historical insights. For example, if the model suggests that a particular marketing channel has a huge impact on sales, but this contradicts known business practices or historical performance, then the model may need to be re-evaluated. Business validation involves discussing the model results with stakeholders who have domain expertise to confirm that the results are reasonable and actionable. This step is crucial because a model that is statistically sound but lacks practical relevance is of little use. Face validity checks ensure that the model’s insights are grounded in reality and can be used to inform strategic decision-making.

Presenting and Using the Results

Once a model has been built and validated, the last step is using it to inform, predict, and make decisions. This involves communicating the insights effectively, integrating the model into business intelligence tools, and potentially leveraging the model for optimization.

PowerPoint

Even though data scientists hate to admit it, PowerPoint is still the lingua franca of business. Almost every MMM project ends with at least one PowerPoint. Ideally, two decks will be created: a “walking” deck meant to be read, and a “presentation” deck meant to be discussed. Too often, this isn’t the case, and the dense walking deck is read out to executives; this is probably the number one pitfall when communicating MMM results.

Either deck will still have the same basic sections:

  • Background and Objectives: Data scientists and statisticians often overlook explaining what the goal of the entire MMM process is, and generally how econometric modeling works. While this section can be skipped over in later “runs” of the model, it is important to set the stage for executives, outlining how regression works, what it is good for (understanding holistic relationships between marketing channels and long-run strategy) and not good for (immediate decision-making and quick campaign reads), and how it will be used as it evolves.
  • Model Overview: This section explains the model type, the variables included, and the rationale behind their selection. While these slides can be very technical, it is typically best to move most of the technical background to an appendix that can be referenced if needed, and to instead focus on the 30,000-foot view in the main executive summary. Structural equation-type diagrams can be used to illustrate the model structure at a high level and the relationships between variables.
  • Data Insights: Exploratory data analysis, while not the main topic of an MMM, is typically used to validate and tie to “golden numbers,” ensuring executive buy-in that the data used to build the model itself are correct. In addition to validating golden numbers, interesting trends and insights that fall outside of the scope of the model itself can be explored in this section.
  • Model Outputs: This section is the “meat of the sandwich,” in which model outputs are concisely communicated. This section should communicate total marketing contribution; overall ROAS (return on advertising spend) and cost-per-acquisition (CPA); channel-by-channel contributions, again outputting both ROAS and CPA; marginal CPA and ROAS; channel-specific adstocks (how long a channel’s influence is felt in-market); and response curves by channel.
  • Predictions and Scenarios: This section helps stakeholders understand the “so what” of the analysis—the normative “what we should do.” Typically, forecasts and scenario analyses based on the model are created. The number of possible scenarios is effectively unlimited, so choosing which to highlight requires coordination between the team doing the work and the marketers making the big future decisions. Regardless of the specific scenarios picked, the presentation should highlight how different levels of marketing spend or other variables impact outcomes.

Business Intelligence (BI) Tools

Integrating MMM results into business intelligence (BI) tools allows for continuous monitoring and analysis. BI tools such as Tableau, Power BI, or QlikView can be used to create interactive dashboards that update in real time as new data becomes available. This integration is typically a later step for most companies, as it requires data engineering and technical work beyond simply displaying outputs in PowerPoint.

To make marketing mix models play nicely with BI tools, the output data structures need to be consistent. Typically, this means an output table that includes standard time and cross-sectional dimensions—the same as those used in the time series panel—along with key “facts”: contributions, cost-pers, and ROAS.
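One possible shape for such a table, sketched with pandas and purely illustrative numbers:

```python
import pandas as pd

# A long ("tidy") output table keyed on the same time and cross-sectional dimensions
# as the modeling panel, with one row per week x geo x channel. All values are invented.
mmm_output = pd.DataFrame({
    "week_start":   ["2024-06-03", "2024-06-03"],
    "geo":          ["Denver", "Denver"],
    "channel":      ["TV", "Paid Search"],
    "spend":        [120_000, 45_000],
    "contribution": [310_000, 150_000],
    "roas":         [2.6, 3.3],
    "cpa":          [41.0, 28.5],
})
mmm_output.to_csv("mmm_results_for_bi.csv", index=False)  # source file for the BI dashboard
```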

Decision Support

Because of their underlying architecture, MMMs are natural candidates for use in decision support and optimization. Marketing mix models include non-linear, diminishing returns curves by their nature; optimization is a matter of finding the ideal mix of curves that maximizes a certain objective function—for example, maximizing revenue—subject to a set of constraints.

By using elasticities derived from mix modeling, an implied efficient frontier can be described by maximizing return at each level of spend. While tempting, these optimal mixes should be treated with caution.

Figure 3: Optimization is about re-mixing marketing channels to achieve the best efficiency at a given level of spend.

The simplest way to support this optimization exercise is to use a built-in optimizer such as Microsoft Excel’s Solver. In this approach, each curve is expressed as an equation, aggregate outputs are assigned to cells, one cell is made the objective, and others are assigned as constraints—for example, TV spend must be less than 30% of the mix, total investment must be less than $100M, and so forth. More advanced approaches use optimization routines in R or Python to find the best mix.
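For teams working in Python rather than Excel, the same exercise can be sketched with SciPy's constrained optimizer. The response-curve parameters, budget, and constraints below are illustrative assumptions, not model outputs.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted diminishing-returns curves per channel:
# revenue_i(spend) = top_i * spend / (spend + half_sat_i)
curves = {
    "tv":     {"top": 900_000, "half_sat": 400_000},
    "search": {"top": 500_000, "half_sat": 150_000},
    "social": {"top": 300_000, "half_sat": 100_000},
}
channels = list(curves)

def total_revenue(spend_vector):
    return sum(curves[c]["top"] * s / (s + curves[c]["half_sat"])
               for c, s in zip(channels, spend_vector))

budget = 1_000_000
constraints = [{"type": "eq", "fun": lambda s: s.sum() - budget}]  # spend the full budget
bounds = [(0, 0.5 * budget) for _ in channels]                     # no channel above 50% of mix
x0 = np.full(len(channels), budget / len(channels))

# Minimizing negative revenue maximizes revenue subject to the constraints.
result = minimize(lambda s: -total_revenue(s), x0,
                  bounds=bounds, constraints=constraints, method="SLSQP")
print(dict(zip(channels, result.x.round(0))), "->", round(-result.fun))
```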

Conclusion: Marketing Mix Modeling Example

Marketing mix modeling is sometimes seen by marketers and non-technical business professionals as an intimidating, mythical black box, which can lead to low trust in results. This need not be the case: even non-technical managers can understand the steps taken to build an MMM and interpret its outputs. By walking through a clear marketing mix modeling example, we have illustrated how various marketing channels and external factors intertwine to affect business outcomes, and how data-driven decisions can enhance marketing effectiveness and drive success.

Download our whitepaper, “Measuring Marketing’s Effectiveness”​

Access our whitepaper for a deep dive into additional imperatives and methods for CMOs and analytics teams driving measurable marketing ROI.
