Search marketing is entering a structural reset in 2026.
Generative AI is no longer a feature layered onto the internet. It is becoming part of the infrastructure that determines how information is surfaced, trusted, and acted upon. As a result, AI does not simply enhance marketing, governance, or commerce. It reshapes who controls visibility, value, and decision-making across global systems.
In 2026, the implications are no longer theoretical. Governments are defining regulatory boundaries. Platforms are redefining search behavior. Paid ecosystems are shifting toward automation and model-driven execution.
The following predictions outline the forces that will shape search marketing and AI strategy this year, and what organizations must understand to remain visible, credible, and competitive.
AI global impacts: Governments, power, and infrastructure
The European Union continues to position itself as the global rule-setter for AI governance.
The EU’s AI Act and AI Pact reflect a clear belief: trust, transparency, and accountability are prerequisites for sustainable AI growth. Major AI companies, including OpenAI, Microsoft, Google, and Amazon, are engaging with EU-led frameworks to prepare for compliance. This signals that regulatory power, not just market dominance, will shape the AI landscape. Historically, legal precedents established in Europe have often influenced U.S. markets as well.
At the same time, the U.S. remains comparatively less regulated, allowing Silicon Valley to move quickly in an open innovation environment. This divergence raises two likely outcomes:
The U.S. market adopts more rigorous EU-style protocols over time
A structural uncoupling of AI experiences across regions
The latter appears increasingly plausible. We are already seeing fractured national approaches, including China’s accelerated investment in DeepSeek and broader AI autonomy initiatives.
According to RAND’s analysis of China’s AI industrial policy, Beijing is prioritizing full-stack AI capability, from chips and computing infrastructure to deployment across manufacturing, logistics, healthcare, and defense. The objective is not abstract AGI leadership, but economic integration and national resilience.
For global firms, this means AI adoption, including generative engine optimization and search strategy, cannot follow a one-size-fits-all model. Regional governance, infrastructure maturity, and platform dynamics will materially affect how AI systems operate. As AI search evolves, implementation complexity will increase rather than decrease.
Search marketing in 2026: Visibility without clicks and the next stage of organic search
Search in 2026 is not only about ranking—it’s about representation inside AI systems.
However, GEO (generative engine optimization) and SEO (search engine optimization) are not interchangeable, and equating them creates risk.
“You can’t ‘optimize’ an AI citation like a 2010 keyword. We have to pivot the conversation to what we can actually influence: showing up in the historical training data and winning the real-time RAG layer…”
Incorporating AI search is not a technical extension of SEO. It represents a distinct discipline. Extending legacy SEO playbooks to new AI-driven models, user behaviors, and answer-generation systems will produce diminishing returns.
A paradigm shift is required.
There is overlap between GEO and SEO, just as there has historically been overlap between SEO and SEM. However, they require different capabilities, measurement frameworks, and strategic approaches.
Digital PR + Gen AI
As AI-powered systems synthesize answers rather than present ranked lists, they increasingly rely on trusted, repeat-source signals to determine which brands are credible enough to cite, summarize, or recommend.
Research and practitioner insights indicate that brands consistently mentioned across high-authority publications, expert commentary, and reputable data sources are more likely to be included in AI-generated responses than those relying solely on owned content or technical SEO optimizations.
Links have always mattered in search. In the Gen AI era, digital PR may matter even more than on-domain content, depending on industry dynamics. A holistic organic strategy must now integrate digital PR as a core search lever rather than treating it as a supporting tactic.
Paid media & SEM: Advertising to algorithms
Paid media will not disappear, but it will transform fundamentally.
Google has already embraced automation through platforms such as Performance Max, Advantage+, and Gemini-powered workflows. Execution is increasingly algorithmic.
The immediate question is where humans fit. The answer is not in manual optimizations but in strategic oversight: humans create the strategy, analyze the data, and ensure the models stay within the parameters set to achieve campaign goals. With the push toward more automation and Google’s improved algorithms and learning phases, SEM managers are shifting from tactical execution to performance governance. This shift allows more focus on future strategy and less on daily manual optimizations.
In addition to what we are seeing on Google and similar platforms, OpenAI has announced that ChatGPT will introduce ads this year.
So what does this mean for OpenAI and for marketers? The high confidence users place in AI search will now be something brands can benefit from. This sets the stage for intensified competition, and the competitive dynamics between these ecosystems will influence how paid media evolves and where brands invest.
Final thoughts
The shift underway across search marketing in 2026 is not about adopting AI tools. It is about understanding how AI systems decide what, and who, gets surfaced.
With B2B buyers leveraging AI search at increasing rates, being findable is essential. For B2B organizations, where buying cycles are complex and trust is critical, being present inside AI search environments is no longer optional. It requires deliberate planning across organic, PR, and paid channels.
And with regional differences, divergent platform approaches, and ever-changing partnerships shaping how these systems operate, planning for change is more important than it has ever been.
The brands that succeed will not chase tactics. They will build systems that adapt to regulatory shifts, platform dynamics, and model behavior.
2026 will reward organizations that treat AI not as a feature of marketing, but as a structural force shaping it.
In the competitive world of go-to-market (GTM) strategy, speed to (quality) insights and data accessibility are make-or-break advantages. Yet traditional market research (surveys, interviews, focus groups) takes time. Enter a new alternative in the AI era: synthetic research.
Synthetic market research is the use of AI-generated or simulated data to produce responses, behaviors, or insights that approximate what real populations would produce. Broadly speaking, synthetic research creates or synthesizes data to mimic real-world outcomes instead of collecting it from human respondents.
At a technical level, synthetic research can take on one of two approaches:
Machine learning models create simulated versions of real data that preserve statistical patterns (synthetic data generation) (PMC 2023); or
Existing datasets and studies are aggregated to produce meta-insights across markets and buyer segments (research synthesis) (ResearchGate 2022)
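As a toy illustration of the first approach (synthetic data generation), the sketch below fits the mean and standard deviation of a small, entirely hypothetical "real" sample and draws synthetic respondents that preserve those statistics. Real generative approaches are far richer; every number here is invented.

```python
import random
import statistics

def synthesize(real_sample, n, seed=0):
    # Fit simple summary statistics of the real data...
    mu = statistics.mean(real_sample)
    sigma = statistics.stdev(real_sample)
    # ...then draw synthetic respondents that preserve those patterns.
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical "real" responses: renewal-increase thresholds (%) from 12 employers
real = [15, 20, 10, 25, 20, 15, 30, 20, 10, 25, 15, 20]
synthetic = synthesize(real, n=2000)

print(round(statistics.mean(real), 2), round(statistics.mean(synthetic), 2))
```

The synthetic sample mirrors the real one's aggregate statistics, but as the experiment later in this piece shows, aggregate fidelity is not the same as capturing why respondents answer the way they do.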
What are the use cases for synthetic research related to marketing?
Simulated buyer journeys: Using synthetic personas to map awareness, consideration, and conversion pathways to inform campaign design
Concept and message testing at scale: Generating thousands of “AI respondents” to evaluate positioning hypotheses within hours
Synthetic competitive benchmarking: Modeling how buyers might respond to competitor moves in pricing, packaging, or product features
Synthetic segmentation: Combining modeled behavioral data with real CRM and digital signals to project how markets may shift under new conditions
Should you use synthetic market research?
Yes and no. It has its uses but shouldn’t be treated as a panacea. Based on our expertise and an experiment (details below), human research is still the gold standard because it provides a level of depth, context, nuance, and confidence synthetic research cannot produce. Synthetic data and research should not be used alone, as a direct replacement for human research. The right approach often is a hybrid to capitalize on the benefits of both human and synthetic research.
“Synthetic data and research should not be used alone, as a direct replacement for human research. The right approach often is a hybrid to capitalize on the benefits of both human and synthetic research.”
To reach that conclusion, we conducted an experiment comparing responses from a survey we ran using traditional quant and qual methodologies with responses from ChatGPT.
The topic of our research: healthcare benefits, with employers as the surveyed audience.
Let’s dive into the results from the experiment first, followed by our rationale for why a hybrid approach is often the best solution.
What our AI vs traditional research experiment found
On the surface, ChatGPT wasn’t wrong – but it wasn’t accurate either.
It was able to mirror the high-level themes from traditional research: employers value strong benefits, react to rate increases at a specific level, and weigh network strength, employee feedback, and reputation – at impressive speeds.
But it lacked the detail that fuels activation; the nuance, context, and reliability that drive real insight and understanding can only come from engaging real people.
The differences
Depth & context: Without human anchoring, synthetic insights risk being disconnected from how and why respondents feel, think, and act
Over-simplification: AI struggles to detect nuance, generalizing data without context and smoothing out the niche findings that reflect real-world behaviors
Closed loop responses: With repetition of the same (stale) data over time, AI feeds on its own outputs, declining in consistency and accuracy
Trust & transparency: Much is still unknown or opaque about how insights are generated, casting doubt on recommendations sourced solely from AI
Examples from our experiment
Depth & context
When asked to prioritize factors influencing employer decisions, ChatGPT’s responses diverged from real-world data and failed to explain how humans actually make decisions.
ChatGPT not only downplayed the role of carrier brand relative to other factors, it also suggested switching carriers for a better price or coverage was a “no-brainer,” a decision made lightly
Our human-led study highlighted the significant role brand plays for Employers selecting Employee plans – likely as a proxy to demonstrate quality
Our human-led study also found switching carriers is a really big deal. Employers overwhelmingly prefer the status quo because of inertia, something ChatGPT failed to capture. Beyond cost and coverage, this emotional and operational barrier is a major roadblock, almost regardless of savings
Implication: Played out as a real-world recommendation, the AI-only result would steer the carrier client toward messaging and GTM strategies built around tactical plan components rather than brand building. That guidance likely would not resonate as well, resulting in wasted time and resources.
Over-simplification
When asked how Employers compare health plans for employees, ChatGPT provided surface-level insights, without detail on why respondents chose certain factors or how answers differed across respondent profiles.
Our human-led research surfaced nuanced priorities—like mental health services for younger workforces, or Occupational Therapy/Physical Therapy coverage for older ones
ChatGPT’s output stayed broad and predictable, citing priorities like deductibles and out-of-pocket costs, regardless of respondent profile and nuance
Implication: Respondent motivations, emotions, and context make research actionable – without those insights, recommendations can fall flat. Further, surfacing differences between respondents is key to crafting GTM strategies that resonate.
Closed loop responses (and trust & transparency)
When asked about the rate-renewal increase threshold at which employers would explore alternatives, ChatGPT provided a different answer each of the five times we asked, with a spread of 11 percentage points across iterations.
ChatGPT’s estimates ranged from 66% to 77% of respondents saying a renewal rate increase of 20% or less would be enough to prompt exploring alternatives
Our human-led research found the portion of respondents to be 61% – validated via extensive qualitative studies and secondary research
Implication: Such variance in results sparks skepticism and more questions. Being open about methods and analytical approaches helps to build confidence in the data and recommendations.
Despite these limitations, there are certainly benefits of synthetic research:
Speed and agility: Dramatically compresses time-to-insight. Instead of recruiting panels and waiting weeks for responses, marketers can simulate thousands of buyer reactions in hours
Lower cost: Traditional primary research is expensive with panel recruitment, respondent incentives, data tools, and analysis
Access to hard-to-reach audiences: B2B research often struggles to reach niche audiences. ChatGPT can “easily” generate responses for specific employer segments (size, industry, geography), assuming the underlying data already exists for it to synthesize
Scenario testing: AI’s flexibility makes it ideal for “what-if” modeling
Our take: Use synthetic research to complement traditional, human-led methods
Our experience conducting qualitative and quantitative research with real human beings (B2B and B2C), tells us decision making is complex, nuanced and sometimes contradictory (i.e., you want more choice, but more choice can be overwhelming). While saving time and money are extremely attractive benefits of synthetic research, we’re not yet convinced that AI can produce the same quality in the context of often complicated and interwoven results and insights.
That’s why, at this point in time, we recommend synthetic research be considered a complement to — but not a replacement for — real human-led research methods, particularly in early exploratory phases where speed matters more than precision.
AI can scan, summarize, and simulate at scale—surfacing patterns, stress-testing ideas, and informing early hypotheses. But it can’t interpret motivations, segment behaviors, or provide consistency with confidence.
A hybrid research model unites both worlds:
AI’s efficiency to explore, iterate, and model possibilities
Human expertise to interpret, validate, and ground findings in reality
Together, they create an insight engine that’s both fast and deeply rooted in human truth.
“…synthetic research should be considered as a complement to — but not a replacement for — real human-led research methods… Together, they create an insight engine that’s both fast and deeply rooted in human truth.”
How to apply a hybrid research approach
Use GenAI early to explore: Before fielding a study, use AI to identify themes and generate hypotheses. This sharpens your questions and focuses human research where it matters most.
Validate with human-led methods: Surveys and interviews provide the context, emotion, and nuance AI can’t replicate, turning directional findings into defensible insights.
Iterate continuously: Leverage primary research to enhance or refine the synthetic data set, improving the accuracy of future analyses.
In summary
GenAI expands what’s possible in research by accelerating learning, enabling scenario testing, and broadening early exploration. But the insights that stick, the ones that inspire action and confidence, still come from people. Consider synthetic market research a complement to — but not a replacement for — real human-led research methods. Used together, GTM leaders can create a new breed of insight engine: one that’s both fast and deeply rooted in human truth.
B2B marketing organizations should leverage a Marketing Income Statement built off account-level multi-touch attribution and brand models to better communicate overall marketing ROI.
Like a financial statement, the Marketing Income Statement uses standard dimensions that can be compared over time, and reports three different contributions: last touch, multi touch, and brand.
This article is for Analytics and Revenue executives tired of internal infighting over sales vs. marketing credit, and technical CMOs who want to better communicate with finance.
Counterfactual part-worths
So what to do? Some have proposed dumping the pipeline concept entirely. An even more ambitious proposal is to eliminate the marketing-sales distinction completely (there is, in fact, a trend toward a “Chief Revenue Officer” running the entire go-to-market function). However, these ideas probably go too far, even if they have some merit. Instead, a counterfactual attribution approach that uses “part worths” rather than an all-or-nothing “who drove what” rubric can put marketing and sales—and the channels within marketing—on level ground.
This approach acknowledges that any transaction is driven by multiple, complex factors over a long period of time, and that, at scale, each part of the go-to-market mix has a “counterfactual impact”—if that part of the mix had not been used, ultimate revenue would have been lower. This is the right way to think about ROI: the “R” for return should be the de-duplicated, incremental return. No one else should take credit for it.
Of course, a large B2B transaction can’t be “half-won” any more than a cat can be half-dead. However, over many hundreds or thousands of deals, we should be able to say that, for example, had we not done events, we would not have seen 3% of the deals we ended up closing. That is the basic idea behind a counterfactual / part-worth approach to assessing marketing ROI.
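To make the part-worth idea concrete, here is a toy simulation in which each channel adds a known amount to a deal's win probability, so removing a channel and comparing expected wins recovers its counterfactual share. The base rate, lift figures, and channel names are all invented; in practice these effects would be inferred statistically, not assumed.

```python
# Toy counterfactual part-worth calculation with made-up parameters
BASE_P = 0.20  # win probability with no marketing/sales touches
LIFT = {"events": 0.03, "paid_search": 0.05, "digital_pr": 0.02}

# 1,000 simulated deals, each with a set of channel touches
deals = [
    {"events", "paid_search"},
    {"paid_search"},
    {"events", "digital_pr"},
    {"events", "paid_search", "digital_pr"},
] * 250

def expected_wins(deals, removed=None):
    """Expected number of wins, optionally with one channel's touches removed."""
    total = 0.0
    for touches in deals:
        p = BASE_P + sum(LIFT[c] for c in touches if c != removed)
        total += min(p, 1.0)
    return total

base = expected_wins(deals)
for channel in LIFT:
    without = expected_wins(deals, removed=channel)
    share = (base - without) / base
    print(f"{channel}: {share:.1%} of expected wins are incremental")
```

On these invented numbers, removing events drops expected wins by roughly 8%, which is exactly the "had we not done events, we would not have seen those deals" logic, evaluated at scale rather than deal by deal.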
Fortunately, to get this done, the pipeline—the cause of our problem—can come to the rescue. A pipeline is essentially a database. The discrete item, whether lead or opportunity, exists in time, changing its stage (probability), while encountering various stimuli (marketing or sales).
More broadly, that item is a member of an account. That account contains many people (contacts) who may be a part of the buying group influencing that member. We can’t be sure, but we can build statistical models to infer these relationships.
All of this is to say that a B2B organization can build an Account Longitudinal Record—everything we know about leads, opportunities, contacts, and stimulus at an account, over time, arranged row by row. Think of this as an account’s “go-to-market fingerprint.” This can include both the things we know (deterministically, i.e., with a database key) and the things we think might have happened (probabilistic touches, for example, that an account watched an online video that was targeted using an account-based marketing approach).
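A minimal sketch of what one row of such a record might look like. The schema, field names, and sample events below are hypothetical, not a reference implementation; the point is the mix of deterministic and probabilistic linkages in one time-ordered structure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountEvent:
    """One row of a hypothetical Account Longitudinal Record: a single
    touch or pipeline change at an account, ordered by timestamp."""
    account_id: str
    timestamp: str            # ISO-8601 date
    event_type: str           # "touch", "lead_stage", "opp_stage", ...
    channel: Optional[str]    # marketing channel, or None for CRM-only rows
    detail: str
    match_prob: float = 1.0   # 1.0 = deterministic (database key);
                              # <1.0 = probabilistic, inferred linkage

record = [
    AccountEvent("acct-001", "2025-03-02", "touch", "digital_pr",
                 "likely viewed ABM-targeted video", match_prob=0.7),
    AccountEvent("acct-001", "2025-03-15", "lead_stage", "events",
                 "lead created: conference badge scan"),
    AccountEvent("acct-001", "2025-05-01", "opp_stage", None,
                 "opportunity moved to proposal"),
]

# The account's "fingerprint" separates what we know from what we infer
deterministic = [e for e in record if e.match_prob == 1.0]
print(len(record), len(deterministic))
```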
Account level multi-touch attribution
Fortunately, the techniques exist to understand how important different stimuli are in driving outcomes. Multi-touch attribution (MTA), a method first developed to understand how digital marketing touches drove an eventual ecommerce sale, can be expanded in scope to provide insight into B2B marketing effectiveness. The basic idea behind MTA is to understand how each touch in a chain of events contributes to ultimate success. To do this, “ones” (won deals, or opportunities, or sales accepted leads) and “zeros” (losses) are “backtracked” to understand what drove them. In each case, interventions at various points in the causal chain are assigned importances using statistical inference.
One critical point unique to B2B is the focal point of analysis: The account or buying group instead of the lead or opportunity. Many CMOs and CROs are arguing today on LinkedIn that “the pipeline is dead” and that “marketing should be done at the account level.” However, the reality is that martech and CRM systems still use a discrete object to track the progress of sales. It is unrealistic to think that this data structure is going to go away. So what to do?
The short answer is that leads should be thought of as indicators of engagement, instead of as objectives. For example, multiple leads might emerge out of an account, all of which might “partially convert” to an opportunity. The leads reflect the effectiveness of marketing being done, which might ultimately impact zero, one, or more opportunities.
By treating the lead-opportunity construct as a system rather than as a linear handoff, “CRM discipline” issues are also addressed. A common problem in tracing marketing’s impact on revenue is sales teams failing to convert leads, instead creating a new opportunity, and “losing the breadcrumb trail.” In a systems model, this problem becomes irrelevant. Instead, the lead and the opportunity are linked by inference, with the lead having predictive, driving power on the opportunity, but not a 1:1 linkage.
A simplified version of this attribution concept can be seen in Figure 1 below. In this example, revenue is attributed back through the mid- and lower-funnel. Marketing’s incremental impact is 11%, and sales’ is 22% (including some credit for ABM through the sales cycle).
Note that 67% of the win’s value is “base.” This might be the trickiest concept to communicate. Culturally, last-touch crediting remains king in B2B organizations, due to the sales-driven, make-or-break focus mandated by quarterly earnings reporting. In this model, whoever hands the lead off to sales gets the credit—and the blame, when the volume or quality of leads isn’t up to snuff. Typically, the entirety of a lead is credited to the last-touch marketing channel, and the entirety of the win is credited to the sales rep or partner who closes the deal. In reality, most of the deal flow is caused by neither the immediate actions of marketing nor those of sales—it would have happened anyway. However, over time and without care and feeding, this base will decay to nothing. In one sense, the “base” of a marketing attribution model is the purest definition of brand equity.
Figure 1: New revenue gets partially attributed. Note that brand equity is still a part of the base.
Measuring the value of upper-funnel and brand investments
The value of a B2B brand is huge. “No one ever got fired for buying IBM” has been a funny refrain since the 1970s (these days, it might be Microsoft or Adobe). B2B buying decisions are large, complex, and carry huge career implications for the executives making them. If a multi-million-dollar software purchase goes south, a senior executive is going to take the blame. The brand, then, is much more than a shiny logo and its associated feelings—it represents reputational security.
As in the consumer space, marketing has a role to play in driving B2B brand equity. Without “upper funnel” advertising, a B2B brand runs the risk of being seen as unserious, even if it is still known. CMOs intuitively know that a brand like SAP or Adobe needs to be in certain places—at the right conferences, the right airports, or the right sporting events. Presence signals financial health, much like a bird with brilliant feathers signals mating fitness: “I have so much extra energy, I can use it to impress you.”
Measuring brand equity and the investments that support it is hard. This is often where conversations break down between marketing and finance. CMOs spend a lot of energy playing the political games that allow them to spend money on sponsorships and brand-building activities. These investments are often supported by “vanity metrics”—visits to booths, number of executive eyeballs who saw the sign at the U.S. Open, etc.—but ultimately, most of these data are used as a drunk uses a lamppost: For support instead of illumination.
However, this does not have to be the case. A long-run brand value model can be run for B2B companies. This is an econometric model, similar to a traditional MMM (marketing mix model). It can be run at a coarser time granularity than a traditional model; quarterly works well. The basic idea is establishing two causal linkages: first, between upper-funnel or “non-lead-gen” marketing and brand equity; and second, between brand equity and revenue (or profit, or growth, depending on business objectives).
Figure 2: Measuring upper-funnel marketing by using brand equity as an intermediary. For B2B brands, having a “low risk” reputation is perhaps more important than being seen as innovative.
The trickiest part here is measuring brand equity over time. To make this work, a B2B brand really needs several years of data, measured quarterly, on the health of its brand. Brand health itself has to be measured indirectly. Social scientists use a tool called structural equation modeling to understand which “manifest” (i.e., measurable) variables best describe an underlying construct—in this case, the value of the brand. This may sound like black magic, but it is a well-established technique. The good news is that using multiple manifest variables to measure brand yields a more stable and reliable estimate that is more usable in regression modeling.
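The two causal linkages can be sketched as a pair of simple regressions. This is a minimal pure-Python illustration with eight quarters of invented data and none of the controls (seasonality, adstock/carryover, competitive activity) a production model would need; a real approach would also replace the single brand-equity series with a latent construct measured by several indicators.

```python
def ols(x, y):
    """Slope and intercept of simple least squares: y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Eight quarters of made-up data
spend  = [1.0, 1.2, 0.8, 1.5, 1.1, 1.4, 0.9, 1.3]  # upper-funnel spend, $M
equity = [50, 53, 48, 57, 52, 55, 49, 54]          # brand-tracker index
rev    = [100, 106, 96, 114, 104, 110, 98, 108]    # revenue, $M

_, b1 = ols(spend, equity)   # linkage 1: spend -> brand equity
_, b2 = ols(equity, rev)     # linkage 2: brand equity -> revenue

# Chained slopes: implied long-run revenue per extra $1M of upper-funnel spend
print(round(b1 * b2, 2))
```

Chaining the two slopes is what makes brand equity useful as an intermediary: upper-funnel spend never has to be tied to a specific deal to show up in revenue.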
Of course, marketing will not be the only, or even the largest, contributor to brand value. Product quality, competitive actions, word-of-mouth, and sales force interactions all work together to keep a brand strong. However, without marketing, brand struggles, and eventually, growth becomes impossible.
Combining the output from the account-based MTA with the output from the long-run brand regression model can provide what so far has been largely unattainable for B2B marketers: a complete understanding of marketing’s value, over both long- and short-time frames, across accounts, and pan-channel.
Communicating with a marketing income statement
All the data and statistics in the world are useless if the various stakeholders across the company do not have a clear understanding of marketing’s performance. Fortunately, financial reporting gives us a nice template that we can adapt for marketing—the income statement.
Income statements are useful because they tell us what was accomplished over a period. For example, in a typical quarter, I might have generated $50M of profit on a revenue of $500M. We can use the same basic concept for marketing.
There are some other principles that we can borrow from financial accounting. The first is that we can’t count anything twice, or, if we do, we need to be very clear about it. For example, if we are reporting marketing’s return by channel, then each channel added together can’t equal more than marketing’s total return.
The second is that we should use standard dimensions that don’t change (much) over time. In accounting, these dimensions are based on the structure of the General Ledger. The reason we do this in financial accounting is to compare trends to better understand what “good” looks like. Marketing reporting has a bad habit of constantly changing reporting dimensions, which makes it very hard to gain any real insight out of the numbers. So, our marketing income statement should be as stable as possible—and this has implications all the way down the marketing data stack. In fact, that account longitudinal record mentioned above functions much like the general ledger does in accounting.
The marketing income statement itself looks a lot like a financial income statement. In this example, each channel has three different kinds of contributions: last touch, multi touch, and brand. Brand contribution isn’t directly tied to leads or orders, but rather accrues to the “base”—the percentage of revenue that marketing doesn’t drive directly. In this case, marketing’s multi-touch effect is 21.5%—that is, counterfactually, if marketing had not been done, around 22% of sales wouldn’t have happened. The brand effect is also significant, at 4.8%, but that impact, while real, won’t be felt until future years and quarters. The total ROAS (return on ad spend) is $2.91, meaning that for every dollar spent on media, $2.91 of value is created.
Imagine that this is a view for a specific business unit, geography, and time period. Keeping this dimensionality consistent, along with the names and taxonomy of the channels themselves, provides marketers and executives with a clean, comparable performance diagnostic that can eliminate much of the drama around marketing effectiveness.
Figure 3: A B2B marketing income statement. In this case, each channel has a “last touch”, “multi touch”, and “brand” contribution. The brand contribution goes towards the base. For ease of illustration, it eats into the current period base, even though in reality it accrues to future periods.
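The roll-up behind a statement like Figure 3 is simple arithmetic. The sketch below uses invented per-channel figures chosen so the totals match the numbers quoted above (21.5% multi-touch contribution, 4.8% brand, $2.91 ROAS); the channel names and dollar values are illustrative only.

```python
revenue = 100.0  # $M for the period
spend = {"paid_search": 4.5, "events": 3.0, "digital_pr": 1.54}      # $M media
multi_touch_value = {"paid_search": 10.5, "events": 7.0, "digital_pr": 4.0}
brand_value = 4.8  # accrues to the base / future periods, not to a channel

# De-duplicated channel values sum to marketing's total (no double counting)
mt_pct = sum(multi_touch_value.values()) / revenue
total_value = sum(multi_touch_value.values()) + brand_value
roas = total_value / sum(spend.values())  # blended return on ad spend

print(f"multi-touch contribution: {mt_pct:.1%}")
print(f"blended ROAS: ${roas:.2f} per media dollar")
```

Because each channel's value is incremental and de-duplicated, the channel rows add up to the marketing total, mirroring the no-double-counting principle borrowed from financial accounting.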
Conclusion: Is measurement the key to ending the marketing vs. sales war?
At its root, sales and marketing don’t get along because they don’t believe each other. Or, more fairly, sales doesn’t believe marketing. Sales knows on one level that it needs marketing, but the actual dollars marketing drives are always in question in a zero-sum game of all-or-nothing credit.
This doesn’t have to be the case. By centering on a standard set of reports, with consistent dimensions, and taking into account both the multi-touch effects inside of accounts and the long-term impact on brand, sales and marketing can get to a point of common ground. Even if the data and analytics aren’t perfect right away, over time, consistency will win.
That consistency will also illuminate underperformance. Rather than playing the blame game when revenue targets are missed, the standard reporting mechanisms can show whether marketing, sales, or base is contributing less over time. Many B2B orgs don’t recognize the signs of underinvestment in the upper funnel until it’s too late. Better understanding the part-worths of every opportunity helps organizations work smarter and more efficiently together.
Buyers don’t think about whether they are interacting with Marketing or Sales—it’s just the brand. Yet, behind the scenes B2B organizations act as if these interactions are distinct, leading to both Marketing and Sales wanting to claim credit for the sale.
This article—the first in a two-part series—is for Marketing, Sales and Revenue executives who are frustrated by marketing and sales attribution infighting, and seek a better way to demonstrate the impact of marketing’s contribution to growth.
The pipeline construct in B2B marketing has a dual nature. On the one hand, it is a true reflection of how buying groups move through purchasing hardware, services, and software. Concretely, it is true that companies, departments inside companies, and individual decision-makers must first be made aware of a problem; then understand that a category of solutions to that problem exists; become aware of the vendors offering those solutions; at some point engage with the marketing and sales resources of one or more of those vendors; and finally transact. Once they are a customer, they continue to update their experience of the company, perhaps adding services based on other perceived needs.
However, in most cases, the buyers and influencers who make up the customer buying group are indifferent to or unaware of whether they are interacting with a vendor’s “marketing,” “sales,” or “customer success” teams. To them, there is a brand, and that brand either meets or exceeds expectations, or does not. They simply want the best product and service at the best price, with the lowest risk (no one got fired for buying Company A) and do not want to jump through hoops to do so.
The reflection of this customer-centric pipeline inside the typical vendor is distorted but still relevant. For a typical B2B vendor—think Microsoft, Caterpillar, Oracle, Goldman Sachs, GE, etc.—the pipeline is divided into discrete stages, each made of either leads or opportunities, with different values and forecasted close dates. Typically, a “lead” is marketing’s responsibility, and an “opportunity” is owned by sales—but it’s critical to note that to a customer, these categories are irrelevant. This “lead / opportunity” split is a legacy of how B2B marketing and sales have typically functioned: Marketing “generates demand” and sales “closes deals.” The best way to think about “generated demand” in a software system is as a “hand raiser”—someone who has poked their head above water and can now be pursued. That hand raiser “becomes” an opportunity when they have been nurtured and developed, and at that point, the opportunity will gain momentum and hopefully turn into real revenue. Of course, leads and opportunities are both abstractions and simplifications of what is really going on.
We all want to measure marketing ROI
ROI (return on investment) continues to be a hot topic for B2B marketers and sellers, for obvious reasons. An accurate ROI (one that is non-duplicative, counterfactual, and based on a financial outcome) is extremely useful, because it allows all investments to be traded off against one another, particularly on a marginal or “last dollar” basis. If my marginal ROI for paid social is 1.1, and my marginal ROI for events is 0.9, then I should increase my paid social budget and decrease my events budget. Critically, ROI as an outcome metric allows marketing to be traded off against any other investment—at least in theory.
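The “last dollar” tradeoff described above can be written down as a tiny reallocation rule. A minimal sketch, reusing the hypothetical 1.1 and 0.9 marginal ROI figures from the text (the channel names and step size are illustrative, not a prescription):

```python
# Hypothetical marginal ROI per channel (from the example in the text).
channels = {"paid_social": 1.1, "events": 0.9}

def reallocate(channels, step=10_000):
    """Shift one budget increment from the lowest- to the
    highest-marginal-ROI channel."""
    best = max(channels, key=channels.get)
    worst = min(channels, key=channels.get)
    return {best: +step, worst: -step}

print(reallocate(channels))
# -> {'paid_social': 10000, 'events': -10000}
```

In practice this rule would rerun as marginal ROI curves update, stopping when marginal ROIs equalize across channels.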
B2C companies are arguably closer to an ROI view of the marketing world. For large consumer brands like Coke, marketing mix models (MMMs) are constantly updated to provide ROAS (return on advertising spend) for various channels. The curves these models output are then used to remix dollars up, down, and across the funnel to maximize some objective—usually total revenue. However, MMMs are slow and prone to omitted variable bias—meaning that lurking, unknown variables, if left out of the model, can drive unrealistically rosy assessments of marketing’s performance.
B2B companies can’t generally use MMMs to measure marketing’s effectiveness (some try, and they “sort of” work, but that’s a topic for another day). The same structural dynamics that lead to a pipeline view of the world make MMMs—which depend on large volumes of frequent time series data, including daily sales and marketing spend by region—ill-suited for B2B, namely:
Long sales cycles (months to years, typically)
Large transaction sizes, few transactions (chunkiness)
Complex buyer groups
Poor data quality when humans are involved (events, field sales, partner channels)
The pipeline, rooted in a database view of the world, is both a cause of and a solution to measuring ROI for B2B firms. It is the cause of the problem when it is taken too literally—that is, that the “lead” is a real thing that someone “generated.”
At some point in the foggy history of corporate marketing, “marketing attributed sales” became a commonly used term. This probably happened when someone in sales asked someone in marketing what value they were providing, which, by corollary, meant how many leads were being handed off.
Now, we commonly speak of “marketing attributed sales” as those opportunities that started with a marketing-generated lead. This means, concretely, that some individual at a buying group filled out a form, and was then “nurtured” until ready for handoff to sales as a “sales qualified lead.” In some cases, sales has to accept the lead for it to “count”—as a “sales accepted lead.”
There are three problems with this way of looking at marketing value. First, it assumes that marketing generated all of the “value” of the lead, which overstates marketing’s impact. This isn’t the biggest problem, however. Second, all of the other value that marketing creates “under the water” is missed, because it’s not part of the marketing software / CRM software that has largely come to define the B2B marketing organization. Third, once a lead is “handed off,” marketing’s role is cut off, leading both to double-counting (marketing and sales both want credit for the deal) and to a somewhat toxic, adversarial “what have you done for me lately” stance between marketing and sales.
These dysfunctions have real negative impacts. Marketing’s insistence on taking full credit for leads—understandable given its typical fight to show value—drives a bias towards lower funnel behaviors that might not optimize long-run growth. The inability or unwillingness to understand how marketing drives value for all opportunities—known or unknown—makes assessing true ROI impossible. Finally, the “hand-off” concept itself creates an us-them duality that is nonsensical to a customer, and, once again, does not accurately capture marketing’s role in driving value.
Conclusion
Marketing and sales have a common goal: to drive revenue. Yet the most common marketing, sales and CRM tools today pit marketing and sales against one another to claim holistic credit for each sale. True B2B marketing ROI is achievable with the right measurement approach.
Thinking back, this time last year, the Go-to-Market (GTM) landscape was defined by two powerful forces: sustained efficiency pressures, and the revolutionary potential of artificial intelligence. All the research and advisory firms (Gartner, Forrester, etc.) issued clear directives for organizational and technological transformation. As 2025 concludes, it’s a good time to reflect on those predictions (among other things) to see how they held up through this very eventful year.
Economic predictions of a mixed year of nominal growth largely held true. Injections of tariff uncertainty and massive AI investments led to an unbalanced market (essentially the Magnificent Seven vs. the rest of the economy), and 2025 was a wild ride. Overall, AI dominated the GTM landscape, broadening the market’s understanding beyond generative AI to agentic integration across the GTM stack.
What we failed to anticipate was the stark, almost painful, misalignment between the pace of tech innovation and the inertia of the enterprise. Providers delivered tools at lightning speed, but customers couldn’t keep up. Much like the economy in general, AI predictions at least fared okay, and in most cases, pretty well.
Making the grade
To assess the veracity and reality of these shifts, I looked at consensus predictions against in-market performance and commentary from 2025, assigning a grade based on how close predicted outcomes came to reality. To simplify, I bucketed the analysis into three dimensions that determined revenue success:
GTM model alignment: Traditional GTM models will be replaced by blended Hybrid GTM approaches
Data-driven profitability: GTM must transition to data-driven, Intelligent Pricing strategies
AI workflow challenge: AI will successfully automate seller administrative tasks at scale
GTM model alignment
Blended GTM models were not fully adopted, but the economic climate necessitated the collapse of GTM silos and the adoption of more agile growth models, just as predictions focused on unity and efficiency anticipated (Forrester, McKinsey). The core prediction here, the transition to hybrid GTM models, was the strategic success of the year, even as implementation struggled.
Blending GTM models earned an A, signifying market adoption, but the underlying goal of Organizational Alignment (under the RevOps umbrella) fell short with a C+.
| Prediction | Result | Grade |
| --- | --- | --- |
| **Blended model dominance:** Pure GTM models (PLG/SLG) will be replaced by blended hybrid approaches | **New standard:** The industry largely moved away from pure models, embracing hybrid models that intelligently allocate resources—PLG for high-volume acquisition and Sales-Led for high-value expansion (Gartner) | A |
| **End-to-end customer experience (CX):** GTM CX accountability must seamlessly span Marketing, Sales, and Customer Success | **Execution gap:** While leaders acknowledged that GTM ownership must span the full customer journey, siloed budgets and conflicting internal metrics between Marketing (e.g., MQLs), Sales, and CS continued to impede seamless delivery (Forrester) | B |
| **RevOps unity nirvana:** GTM functions will achieve structural and cultural alignment under RevOps | **Talent and culture lag:** Technology consolidated successfully, but many organizations struggled to effectively integrate the skillsets, compensation models, and reporting structures required for a truly unified RevOps function (McKinsey, Consensus) | C+ |
GTM successfully moved toward a hybrid operating model but underestimated the difficulty of achieving true organizational unity and structural alignment required to execute it efficiently.
Data-driven profitability
Maximizing margin and improving sales economics were paramount, requiring innovative intelligence-based GTM levers (Bain & Company). The single greatest failure of 2025: the inability to capitalize on advanced profit levers due to data deficiencies.
The most ambitious prediction, Intelligent and Dynamic Pricing, fell short with a C grade, directly contrasting the success of the foundational prediction: Data and RevOps as the Foundation, which earned an A+.
| Prediction | Result | Grade |
| --- | --- | --- |
| **Dynamic pricing:** Pricing will transition from static to intelligent and dynamic | **Data infrastructure failure:** This highly ambitious prediction failed to reach scale. The poor quality and complexity of legacy data infrastructures prevented most companies from moving beyond static price increases (McKinsey) | C |
| **Importance of data:** A centralized data layer is the mandatory precondition for all GTM innovation | **Revealing an essential truth:** A centralized data layer and a strong RevOps function proved to be the single most reliable predictor of success in attempting other transformations, including hyper-personalization and campaign optimization (Gartner) | A+ |
| **Cost efficiency mandate:** GTM spending must be justified by clear ROI and operational leverage | **Cost control:** Ongoing internal and external pressure ensured operational leverage and efficiency were primary performance metrics for all GTM investments, from marketing spend to sales headcount (BCG, Consensus) | A |
The ambitious revenue-driving predictions were entirely contingent on the fundamental work of RevOps and data quality, reinforcing that basic technical integrity is the prerequisite for innovation. While the assumptions are correct and the direction clear, much like organizational adoption, Data has a long way to go to achieve its profitability promises.
AI workflow challenge
The most compelling prediction for 2025 was the transformative impact of AI (Deloitte, Gartner). The critical question was whether organizations could translate this promise into measurable, scaled success. The results here were split: AI Co-Pilots earned a resounding A, while the mandate to Scale AI Across the Enterprise lagged with a C.
| Prediction | Result | Grade |
| --- | --- | --- |
| **Sales support:** AI will seamlessly automate seller administrative tasks | **Juiced-up enablement:** Vendors succeeded with high-impact, easy(ish) integrations into CRM platforms; automating marketing content drafting, lead scoring, and seller outreach delivered immediate and significant GTM productivity gains (Deloitte) | A |
| **Enterprise AI deployment:** AI will successfully move from pilots to scaled enterprise production | **Organizational friction:** The majority of firms failed to fully redesign core workflows (such as complex multi-channel personalization engines) or the data architectures necessary to deploy AI at true enterprise scale, limiting ROI (BCG, Consensus) | C |
| **Technology consolidation:** Organizations will consolidate their sprawling tech stacks, eliminating redundant point solutions and integrating AI natively into core platforms | **More sprawl:** Instead of consolidating, GTM teams added AI to their existing complex ecosystems; vendor lock-in and the speed of new point solutions meant stacks became “AI-enhanced sprawl,” creating data-flow bottlenecks and limiting the ROI of enterprise AI initiatives (Bain) | C- |
| **SEO becomes GEO:** Content strategy must pivot from volume-based SEO to AI-optimized answers | **Successful but slow strategic pivot:** The shift toward AI-driven search demanded that Marketing transition content strategies from volume-based SEO to Generative Engine Optimization (GEO), a pivot many were slow to execute (Gartner) | B- |
2025 proved AI’s effectiveness as an augmentation tool (Sales Support), but it revealed significant bottlenecks in process management and change adoption necessary for enterprise-wide transformation, especially as it comes up against entrenched teams, processes and vendors.
Final evaluation
A clear narrative at the end of 2025 is that while investors and technology providers move forward with AI abandon and talk of a bubble has dissipated, GTM leaders are taking more cautious approaches and investing strategically. The year demonstrated that while AI and market shifts are accelerating, successful transformation is ultimately limited by an organization’s willingness to address difficult, systemic, and people-centric challenges (scale, pricing, CX).
GTM organizations are entering 2026 leaner and smarter, having successfully prioritized operational efficiency and technology consolidation. However, the clarity gained from 2025 confirmed that the biggest blockers aren’t technological advancements—they are systemic and people-centric.
The success for your 2026 growth roadmap hinges on closing the adoption gap, turning C grades into A grades. This means tackling the fundamental human challenges. As you navigate this next phase of GTM transformation, we’d love to connect to help bridge the gap between technology potential and revenue reality.
The way people discover information has forever changed.
Large Language Models like ChatGPT, Gemini, DeepSeek and Claude have gone from what was initially considered a novelty experience to a core part of the internet. According to a new study by Similarweb, Generative AI (GenAI) systems have progressed beyond just influencing how users start their journeys; they are now a core destination.
As we wrap up 2025, we’re seeing that it’s no longer a niche group of users leveraging AI; it’s a substantial share of netizens.
Another key insight: Traffic going to LLMs is rivaling social media numbers, with ChatGPT becoming the fifth most popular destination on the internet in the United States.
And those adoption numbers reinforce the need for brands to ensure they are represented in GenAI.
Norwest recently partnered with Marketbridge on its third annual B2B Sales & Marketing Benchmark Report (2025). The findings confirmed that marketers investing in AI optimization say it is having a huge influence on performance. When we asked which AI-enabled use case had the most impact on their efforts, AI search optimization ranked second, with content and copy generation topping the list.
And this makes sense: you go where your audience is.
The emerging risks
While the benefits are promising, there are also practical risks to weigh:
Uncertain ROI: AI referral traffic is growing, but performance varies. Not every content type benefits equally.
Operational overhead: Creating structured, machine-friendly content requires development time, quality assurance and continued monitoring.
Crawl volatility: AI tools are aggressive crawlers. This can increase server load and create unpredictable logs if not monitored.
Evolving standards: The AI ecosystem is still shifting. What works today may need adjustment within months.
These risks do not outweigh the opportunity, but they should be factored into infrastructure planning.
What does this mean for B2B brands and their content?
For B2B organizations looking to maintain visibility in this evolving landscape, understanding current LLM performance provides the best foundation for strategic action. When trying to improve performance in an LLM environment, auditing how and where your content currently appears in AI-generated responses gives brands actionable insights. Here are some key considerations:
For content owners
GenAI’s ability to discover new content hasn’t grown as quickly as its adoption. In fact, when LLMs search for sources, they use relatively simple technology and can miss significant parts of a brand’s message.
AI platforms prefer content that is well-structured and backed by clean code, so elements like schema markup, semantic HTML, and consistent authorship information matter when trying to gain GenAI visibility.
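As one concrete illustration of the “clean code” point, schema markup is typically emitted as a JSON-LD block in the page head. A minimal sketch of Article markup, generated in Python; the headline, author, and date values are placeholders, and which properties AI crawlers actually weight remains an open question:

```python
import json

# Illustrative Article markup; property names follow schema.org,
# but the specific values here are invented for the example.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Jane Analyst"},
    "datePublished": "2025-11-01",
}

# The JSON-LD string that would be embedded in a <script> tag on the page.
json_ld = json.dumps(article_schema, indent=2)
```

Keeping authorship and date fields consistent across pages is what gives crawlers the “consistent authorship information” the text calls for.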
For social media
LLMs often have preferred social media sources, so you may see an over-index on Reddit, YouTube or LinkedIn when analyzing their citations. If your brand maintains a social presence, leverage all available optimization options on each respective platform to improve visibility, including strategic use of hashtags, descriptive titles and detailed descriptions.
The more structured and contextually rich your social content, the more likely it is to be surfaced by GenAI.
For PR
Authoritative spaces like news sites and well-known publications are often included as referenced sources for many LLMs, though the weight can vary by industry and platform. Having content published about your brand in these prioritized sources not only creates credible touchpoints for users but can also influence LLM responses.
Strategic media placement is now serving a dual purpose: reaching human audiences and training AI systems on your brand narrative.
Taking action
GenAI has become a structural part of how users access information. Organizations that take steps now to make themselves more AI-friendly will be better positioned as buyer adoption continues to grow.
While the exact impact is still evolving, the opportunity is significant and justifies investment. Brands that treat AI optimization as a strategic priority will maintain visibility in a landscape where Search and Social are no longer the only gateways to information.
Takeaways from “Reclaiming quality in go-to-market: Imperatives for marketing and measurement”
Quality can differentiate brands, drive loyalty and increase revenue. We’ve been talking about this for a while on LinkedIn, on our blog, and at events.
Last week, we co-hosted a one-day conference with the Association of National Advertisers, “Reclaiming quality in go-to-market: Imperatives for marketing and measurement,” which gathered marketing leaders and practitioners to discuss why quality matters and how to move toward quality marketing and analytics. Below are five key takeaways for marketing leaders to evaluate and discuss internally.
Build audiences offline to better control who you’re targeting
“One of our biggest dangers is thinking about people as datasets,” said Chief Analytics Officer Andy Hasselwander during his quality marketing analytics session, but multiple sessions discussed why thinking about your target audience as big numbers (and not individuals) is a problem.
Multiple sources say 252,000 websites are created daily, and the number of viewable impressions and IP addresses vastly outnumbers the human beings on earth. According to Truth{set}, any two given data providers agree on which IP address matches a postal address at most 14% of the time. Privacy is not an excuse for bad data, but marketers are getting duped: they think they’re targeting one person but are reaching another. Marketing needs to start policing itself on quality—potentially by bringing the identity spine into the open rather than relying on black-box, outsourced providers.
If you want better results, leverage data and PII to build offline audiences and “stay in the PII as long as possible,” according to Mark Pilipczuk from The Industrial Arts. Segment your ICP within your own PII data and use a vendor’s database to build lookalike models for your target audience. The hashed audience is then uploaded to the publisher or ad platform (audience size drops due to match rates), and the files are delivered with partner cookies.
This offline, more targeted audience almost certainly will outperform the third-party interest and intent categories available in DSPs and ad platforms for customer lifetime value (CLV) and return on ad spend (ROAS).
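The hashed-audience upload mentioned above usually means normalizing each email address and hashing it with SHA-256 before it leaves your systems. A minimal sketch, with the caveat that exact normalization rules vary by ad platform and should be checked against each platform’s spec:

```python
import hashlib

def hash_email(email: str) -> str:
    """Lowercase and trim an email, then SHA-256 hash it for audience
    upload. Platforms generally expect normalized input, but their
    exact rules (e.g. around dots or plus-addressing) differ."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two made-up addresses; only the hashes, not raw PII, are uploaded.
audience = ["  Jane.Doe@Example.com ", "buyer@acme.com"]
hashed = [hash_email(e) for e in audience]
```

Because the platform only ever sees hashes, this is what lets teams “stay in the PII as long as possible” on their own side of the match.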
One lever advertisers can pull to improve impression quality is to ask their DSPs and ad platforms to provide data on refresh rates (higher is worse), sites with multiple advertisers in the same consumer view, and sites with high ad-to-content ratios.
Marketing and analytics should embrace uncertainty
Understanding what we know and what’s still uncertain supports good decision making. Yet many in both marketing and analytics are hesitant to admit when they don’t have a definitive answer.
In statistics and modeling, error bars show the variability of data. When MMM reports out a cost per acquisition (CPA) or ROAS, typically only the mean or median value is reported. But that’s where you get into trouble.
In the image below, the estimate for TikTok’s CPA is €41. If Instagram Reels’ estimated CPA is €60, the marketing team may decide to shift budget to TikTok. But then in the next readout (and with more data), TikTok’s CPA is €80. Now marketing is mad and doesn’t trust the MMM. In reality, though, TikTok’s CPA is still within the confidence interval—error bars would’ve helped marketing make a more informed decision.
Analytics teams must help educate their stakeholders about error bars and confidence intervals so the organization can make better decisions.
Delivering ROAS or CPA without error bars doesn’t breed confidence; it breeds distrust.
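A quick bootstrap simulation illustrates the point. The spend figure and daily conversion counts below are invented, but the mechanics show how an interval, not just a point estimate, can accompany every CPA readout:

```python
import random

def cpa_confidence_interval(spend, daily_conversions, n_boot=2000, seed=7):
    """Bootstrap a 95% interval for CPA from daily conversion counts,
    rather than reporting only the point estimate."""
    rng = random.Random(seed)
    days = len(daily_conversions)
    estimates = []
    for _ in range(n_boot):
        # Resample days with replacement and recompute CPA each time.
        resample = [rng.choice(daily_conversions) for _ in range(days)]
        total = sum(resample)
        if total:
            estimates.append(spend / total)
    estimates.sort()
    low = estimates[int(0.025 * len(estimates))]
    high = estimates[int(0.975 * len(estimates))]
    return spend / sum(daily_conversions), (low, high)

# Invented data: two weeks of daily conversions on 8,000 of ad spend.
daily = [12, 9, 15, 7, 11, 14, 8, 10, 13, 6, 9, 12, 11, 10]
point, (low, high) = cpa_confidence_interval(8_000, daily)
```

Reporting `point` together with `(low, high)` makes it visible when two successive readouts actually sit inside the same interval.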
Cultivate curiosity and come with a solution mindset
Humans are wired to collect data, but creating knowledge, driving insight and providing wisdom don’t happen automatically. So what makes a great analyst? Curiosity, clarity and capability, according to Sravanthi Konduri from Navy Federal Credit Union. We often equate degrees with skill, but accumulated experience and knowledge are what drive insight (and eventually wisdom).
Cultivating curiosity within the organization is another matter. Organizations with a growth mindset support an analytical environment and aren’t scared of data and learning. Analytics teams can fail because of analysis paralysis and wanting to have the perfect answer, rather than collaborating with internal stakeholders.
A solution mindset is key for marketing analytics teams to partner internally and offer alternative approaches and solutions. Analysts should think like the GM of a business unit: understand the problem, know where the question fits in and who would care, and deliver the answer in the context of why it matters.
How to combat too high and too low ad frequency
Marketers often worry about capping frequency for individual viewers to prevent waste. This has long been an issue, especially if you’re running a campaign across multiple channels and platforms. David Riva from The Trade Desk pushes for unified frequency control—rather than capping each individual placement, DSPs should support capping frequency across channels.
Another area of waste, according to Ray Van Iterson from the United States Postal Service, is the large group of people who see 1, 2 or 3 fewer impressions than needed to achieve the goal.
And the key question is: do you even know who those people are? Can you identify and target them differently or with additional inventory?
Understanding consumer journeys is essential and MTA isn’t dead
The announcement of cookie deprecation was overblown and yet many organizations stopped trying to understand individual consumer journeys. But reporting focused on campaigns, channels or business units is inherently biased.
Marketers should know what combinations of channels and which sequences lead to the best outcomes. This can be done for known, trackable touches as well as likely touches using probabilistic mapping. Adding up the small probability of seeing an ad on a given day in a specific DMA across an entire campaign can give a better picture of how channels work together.
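The “adding up small probabilities” approach is, at heart, an expected-value calculation. A toy sketch with invented daily exposure probabilities, assuming independence across days:

```python
def expected_impressions(daily_probs):
    """Expected exposures over a campaign: the sum of each day's
    (small) probability of seeing the ad."""
    return sum(daily_probs)

def prob_at_least_one(daily_probs):
    """Chance of at least one exposure, assuming independent days."""
    p_none = 1.0
    for p in daily_probs:
        p_none *= 1 - p
    return 1 - p_none

# Invented example: a 2% daily exposure chance over a 60-day flight.
probs = [0.02] * 60
expected = expected_impressions(probs)   # 1.2 exposures on average
reach = prob_at_least_one(probs)         # roughly a 70% chance of any exposure
```

Run per DMA and per channel, these expected exposures are what let channels be compared on how they work together rather than in isolation.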
That individual journey data also can identify if someone is less engaged than we expect. Marketers can then deploy a higher impact channel to achieve the goal.
Understanding the consumer journey at the most granular level of data possible arms marketers with an understanding of the impact of particular platforms or partners within a channel. Identifying the high engagement or high attention platforms and partners can provide optimization opportunities and drive better outcomes.
Want more insights?
If focusing on quality in marketing and analytics feels like a challenge, let’s talk. We’d love to hear about the roadblocks, share best practices and brainstorm solutions.
Complete the form below and we’ll connect to schedule time.
“Influ2 is so unique—it lets Marketing meet Sales where they need to be, with the right message, to the right buyer, at the right time.”
—Amy Grucela, SVP, Demand Strategy
Smaller teams. Tighter budgets. Higher stakes. Today’s GTM leaders are being asked to do more with less—and Marketbridge proves you don’t have to choose between precision and scale—or between data-driven results and meaningful human connection.
With Influ2, we built a smarter contact-level advertising framework—one that delivers real buyer signals, relevant ad journeys, and meaningful marketing moments that actually move pipeline for our clients.
The Challenge
We work across industries, verticals, and complex buying groups—but the challenge is always the same: smarter targeting in a world full of noise. Our goal is to help clients connect with the right people, show real impact, and make Sales’ job easier.
Traditional ABM tools were falling short. We needed something more precise to:
Target specific stakeholders across client accounts
Deliver contact-level engagement signals that sellers could act on
Launch campaigns built to engage complex buying groups
Seamlessly integrate with each client’s existing tech stack
“We needed to solve one of our clients’ biggest challenges—how to surgically target the right people,” says Amy Grucela, SVP, Demand Strategy at Marketbridge. “With Influ2, we know exactly who we’re reaching and who’s engaging.”
The Solution
We used Influ2 to build a repeatable, multi-layered approach to precision targeting—a scalable framework that flexes to meet each client’s needs without sacrificing efficiency or control.
Step 1: Strategy on Autopilot (Almost)
With Influ2, we transform strategy into action—from persona mapping to real-time sales signals—enabling us to move fast without losing focus.
With Influ2 we can:
Map buying committees and intent signals
Sync dynamic contact lists into Influ2
Launch persona and stage-based ad journeys
Send real-time engagement data to Sales
Step 2: Content that Actually Feels Relevant
Forget the one-size-fits-none approach. We use Influ2 to deliver content that feels relevant—tailored to the buyer’s role, their stage in the journey, and how they engage over time.
We follow a clear structure to meet each persona with the right message at the right moment:
Awareness: Problem-led messaging to drive familiarity
Consideration: Role-specific product and solution content
Decision: ROI-focused assets, case studies, and customer proof
“We design campaigns around specific personas and adapt the ad journey based on how each person engages over time,” says Bailey Creeden, Director, Media at Marketbridge. “It’s just like a nurture stream—but with ads.”
Step 3: Real-Time Sales Signals
When a CMO clicks three times in a week, Sales isn’t left guessing—they’re already reaching out. They get the full story in Salesforce, Slack, or HubSpot, so sellers always know who is active, what they clicked, and how to follow up.
“Sales gets notified instantly and knows exactly who to reach out to, what resonated, and how to steer the conversation,” says Maggie Forbush, Senior Specialist, Media at Marketbridge. “No other platform has given us that level of clarity.”
Step 4: Insights That Sharpen Your Strategy
With every campaign we run, Influ2 provides clear insights that clients actually want (yes, really). With the Influ2 Dashboard, we can analyze performance by account, campaign, and creative—and turn those insights into immediate strategy upgrades.
“The Influ2 dashboard is incredibly intuitive,” says Maggie Forbush, Senior Specialist, Media at Marketbridge. “It gives us clear contact, account, and campaign data our clients actually want—making it easy to show what’s working and where to go next.”
Key Takeaways
Today’s GTM teams don’t have time (or budget) to play the “maybe” game. You need to know who to talk to, when they’re ready, and what actually matters.
We cracked the code—turning strategy into action and giving Marketing and Sales the green light to move faster, work smarter, and stay perfectly in sync.
Here’s how we do it:
Start with strategy → Define your ICP and target buyers
Contact-level ad targeting → Reach real people, not just accounts
Lead with relevance → Match content to persona and stage
Enable Sales in real time → Deliver signals they can act on instantly
Optimize continuously → Use insights to scale what works
“Influ2 acts as the bridge between our campaigns and our clients’ revenue teams,” says Amy Grucela, SVP, Demand Strategy at Marketbridge. “It connects marketing efforts directly to what Sales needs to move deals forward.”
The Results
For one standout client campaign, we set out to prove that contact-level advertising could do more than just reach the right people—it could drive real momentum through the funnel.
With Influ2, we achieved powerful results:
100% of late-stage deals had at least one engaged contact
41% faster deal velocity when contacts engaged with Influ2
6x more accounts entered active pipeline
19 engaged contacts per deal on average
85% more accounts moved into active engagement
10% of all target accounts converted to pipeline
For us, this campaign validated our approach: when you know exactly who you’re reaching and how they’re engaging, Sales can follow up with confidence and timing that actually drives results.
“With Influ2, we’re not just running ads—we’re tracking impact and connecting the dots from click to close,” says Amy Grucela, SVP, Demand Strategy at Marketbridge. “It’s a complete view of how marketing drives revenue.”
The Conclusion
We elevated our go-to-market strategy with Influ2, replacing generic ABM tactics with a contact-level framework that delivered precision targeting, real-time buyer signals, and relevant ad journeys. By bridging the gap between marketing and sales, we scaled campaign impact without sacrificing personalization—leading to 6x more accounts in pipeline, a 41% increase in deal velocity, and a 94% boost in site engagement. With Influ2 powering our ad strategy, we turned insight into action and made contact-level advertising a growth engine for our clients.
There’s no shortage of buzz around AI, but what separates the leaders from the pack is not experimentation for experimentation’s sake. Rather, organizations that are successful in unlocking AI’s value for B2B sales are hyper-focused on where intelligence can move the needle and deliver results.
At the center of the issue is a recognition that, in theory, AI has the potential to reshape every aspect of the go-to-market (GTM) organization—from prospecting and pipeline management to customer support and pricing. In practice, however, leaders confront pressing realities: budgets for technology and transformation are finite, and teams already face limits on how much change they can absorb.
Instead of spreading resources thinly across the latest AI trends, success demands a disciplined focus on the highest impact opportunities, and constant attention to downstream organizational implications to turn AI investments into measurable results.
From possibility to impact: The critical importance of focus
For sales leaders, translating AI’s wide-ranging potential into practical, tangible outcomes starts with identifying the right problem to solve. The key is not asking, “where could we apply AI?” but rather, “where should we apply it first?”
The answer lies in framing AI opportunities through a clear set of guiding questions that connect business priorities, process pain points, and organizational readiness, such as:
What are the strategic GTM priorities over the next year (new logo, cross-sell, churn reduction, etc.)?
Where in the sales process do reps or managers lose the most time?
Which parts of our sales model create the most drag on performance today?
Where would enhanced insight or foresight most change seller and buyer behavior?
Do we have the data and organizational readiness to act here?
High-performing teams use these questions to cut through the noise and target a handful of use cases where AI can truly change the game. Instead of scattering bets across pilots, they invest in focused applications that drive measurable business value.
Consider five proven examples:
Dynamic lead scoring: Equipping teams to identify and act on accounts most likely to convert, streamlining prospecting for greater efficiency and increasing qualified pipeline coverage.
On-demand sales intelligence: Providing real-time access to relevant product, technical, industry, and client information, enabling sellers to navigate even the most complex conversations without pulling in additional specialist resources.
AI-enabled sales coaching: Leveraging analytics platforms and conversational intelligence to provide real-time, personalized coaching to reps—guiding call strategies, recommending best practices, and helping sales managers tailor development to each team member’s strengths and opportunities.
AI agents for inside sales: Deploying conversational AI avatars to qualify leads, book appointments, and handle routine inquiries before seamlessly passing high-potential prospects to human reps.
Pricing optimization: Adapting pricing in real time based on client behavior and market conditions, helping teams close deals faster and at better margins.
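To make the first of these use cases concrete, here is a minimal sketch of how dynamic lead scoring might work under the hood: each account is scored from recent engagement signals, and older signals decay so the ranking stays current. The signal names, weights, and half-life here are illustrative assumptions, not a specific vendor’s model.

```python
# Minimal dynamic lead scoring sketch. Signal names, weights, and the
# decay half-life are illustrative placeholders.
import math
from datetime import datetime, timedelta

SIGNAL_WEIGHTS = {
    "demo_request": 0.50,
    "pricing_page_view": 0.30,
    "email_reply": 0.15,
    "webinar_attendance": 0.05,
}
HALF_LIFE_DAYS = 14  # a signal loses half its influence every two weeks


def score_account(signals, now):
    """signals: list of (signal_name, timestamp) -> score in [0, 1)."""
    raw = 0.0
    for name, ts in signals:
        weight = SIGNAL_WEIGHTS.get(name, 0.0)
        age_days = max((now - ts).days, 0)
        decay = 0.5 ** (age_days / HALF_LIFE_DAYS)  # exponential recency decay
        raw += weight * decay
    return 1 - math.exp(-raw)  # squash the raw sum into [0, 1)


now = datetime(2025, 6, 1)
hot = [("demo_request", now - timedelta(days=1)),
       ("pricing_page_view", now - timedelta(days=2))]
stale = [("demo_request", now - timedelta(days=60))]
print(round(score_account(hot, now), 2), ">", round(score_account(stale, now), 2))
```

In practice the weights would be fit from historical conversion data rather than hand-set, but the shape of the system ― recency-weighted signals rolled up into a ranked list ― stays the same.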
New capabilities, new operating models
Done correctly, AI deployment should not simply tweak workflows; it should help inform the future shape of the sales organization itself. As automation handles more data analysis and tactical decisions, the burden of manual, repetitive tasks shrinks. Account executives shift toward relationship-building and strategic thinking. Operations and enablement teams shift from report builders and content archivers to stewards of data quality and insight. In aggregate, these shifts enable GTM organizations to deploy fully empowered teams designed for agility and impact.
For example:
AI-enabled account executives: At a global SaaS company, account executives use AI assistants embedded in their CRM. Rather than depending on a separate team of product specialists, they instantly access up-to-date case studies, technical specs, and dynamic pricing proposals ― strengthening credibility and accelerating sales cycles.
Operations as a strategic center of excellence: An industrial manufacturer consolidates its sales operations and analytics into a single “insights” team. This group goes well beyond reporting; they continually curate and upgrade the data that AI models rely on, so field reps always act on the clearest possible view of client needs.
AI agents for inside sales: A technology firm deploys conversational AI avatars to manage the initial stages of prospecting ― qualifying leads, booking appointments, and handling routine inquiries before seamlessly passing high-potential prospects to a human touch. This reallocation of effort allows business development reps to focus on high-value client engagement and strategic nurturing, while machines efficiently scale outreach and qualification.
These shifts let people do what machines can’t: listen, collaborate, and build trust ― faster and with more precision than ever before.
Common pitfalls
Even well-intentioned AI programs stumble when the basics aren’t in place. Two pitfalls in particular tend to limit momentum before value is ever realized.
Underestimating the data lift

AI doesn’t run on hope ― it runs on clean, connected data. Too many sales teams launch pilots only to discover their CRM is riddled with duplicates, gaps, and outdated records. Without sustained investment in data quality, governance, and integration, even the most advanced AI deployments stall.
Key imperatives:
Treat data stewardship as a core enablement function, not an afterthought.
Establish clear ownership for data quality across sales, marketing, and operations.
Start with one or two critical data domains (e.g., accounts, opportunities) before scaling.
Treating technology as the strategy

AI can sharpen decisions and automate repetitive tasks, but it cannot replace judgment, creativity, or trust-building. Leaders who treat AI as a silver bullet risk weakening customer relationships and demotivating teams. Technology should enable ― not dictate ― the sales strategy.
Key imperatives:
Position AI as a strategic enabler providing guidance and augmentation, not replacement.
Reinforce the uniquely human strengths GTM teams bring: teamwork, empathy, negotiation, creativity.
Set adoption expectations early and broadcast success stories throughout the change management cycle.
Principles for sales organizations in 2025
Prioritize ruthlessly: Anchor every initiative in business value, rather than novelty or hype.
Redesign deliberately: Let structure follow strategy, adapting roles to maximize new capabilities.
Invest in data: Treat data quality and integration as non-negotiables.
Retain a human core: Encourage teams to use AI as a catalyst for insight and creativity, not a substitute for them.
The future of B2B sales will be shaped by leaders prepared to invest with discipline, reimagine their structures, and blend technological horsepower with human-led strategy and ingenuity.
If you want help evaluating whether your organization is ready for AI, or which use cases to implement first, get in touch.
More people shopping for health insurance are now using AI for guidance. Many ask generative tools questions such as, “How do I enroll in an ACA plan?” or “What’s the best Medicare plan in my area?” The answers they get, whether right or wrong, depend on the information used to train these AI models.
If AI uses your brand’s content, you have more influence on what buyers do next. If it doesn’t, consumers may rely on competitors or old information. This matters most when people are researching and deciding which options they prefer.
Recent research found that even older adults are using AI. In a study across several countries, people aged 55 and older were most likely to use AI tools when learning about health insurance (Cognizant). This challenges old assumptions about digital habits and shows why healthcare payers need to make their content easy for AI to find.
Generative Engine Optimization, or GEO, helps your content appear and be trusted in AI-powered searches. For health insurance marketers, GEO is more than a technical detail. It’s a strategic imperative.
This blog explores how generative AI is changing the way people shop for health insurance, from ACA marketplace enrollees to Medicare Advantage members. You’ll also learn how marketers can use GEO to help their content get noticed and trusted in AI-driven conversations.
Inside the new AI-driven shopping journey
People of all ages are now using generative AI tools to research and choose health insurance. This change is especially noticeable in two groups:
ACA shoppers
First-time ACA shoppers often feel overwhelmed by the complexity of choosing and enrolling in a plan. Many begin with digital research, searching Google, browsing Reddit, and scanning social media for basic information on subsidies and how the ACA works. With new federal changes from the One Big Beautiful Bill Act and anticipated rate increases, more consumers will seek reliable ACA information wherever they can find it. As of this writing, Google AI (Gemini), Meta AI, and Claude are identified as the most frequently used platforms for ACA-related searches. Among the ten most-mentioned web pages for ACA-related AI inquiries, only three are from payer websites.
After learning the basics, consumers start comparing specific plans. This step can be challenging. Most marketplace or payer websites require a zip code before showing local plan options, and most generative AI platforms cannot access live health insurance plan databases or search by zip code unless connected to a backend system. This makes it harder for consumers to use AI tools to compare plans and can cause payers to lose visibility when shoppers look elsewhere for plan comparison.*
Medicare shoppers
As mentioned earlier, research shows that consumers aged 55 and older—including those eligible for Medicare—are increasingly turning to AI tools when researching health insurance. Why? Because they understand how complex plan choices can be, and they value AI’s ability to simplify and personalize the experience.
In our recent CX study (conducted before the rise of GEO), the top reason Medicare Advantage enrollees sought help during enrollment was to ensure they were selecting the right plan for their needs.
Conversational AI can provide users with a personalized Q&A experience, but only if it uses reliable and well-organized information. As with ACA plans, it is challenging to use AI models like ChatGPT or Gemini to compare options because plan-level data typically requires a specific zip code. However, some lead aggregators have found a way around this by creating standalone landing pages filled with local plan information that frequently appear in Gen AI recommendations. This is not surprising, since our cursory audit found these pages are easy to read, display STAR ratings, and allow AI tools to access plan information quickly.
What is Generative Engine Optimization (GEO)?
GEO is about making your content easy for AI models like Meta AI, Gemini, or Claude to find, trust, and use in their answers. While SEO is about ranking high in search results, GEO focuses on being included in AI-generated responses that are contextually relevant.
SEO vs. GEO: A quick comparison

| SEO | GEO |
| --- | --- |
| Ranks on search results | Appears in AI answers |
| Optimized for clicks | Optimized for citations |
| Keywords, backlinks | Accuracy, structure, credibility |
In health insurance, GEO means making content that clearly and confidently answers real consumer questions. For example, rather than just using the keyword “ACA subsidy qualifications,” your content should explain the rules, mention trusted sources like Healthcare.gov, and be organized so AI can easily use it.
GEO best practices for payer marketing teams
Here’s how to make your content AI-ready:
Publish Authoritative, Well-Cited Content
Cite studies and data from reputable sources such as CMS and Healthcare.gov.
Identify topics with few authoritative sources and create well-structured content to fill those gaps.
Attribute published content to a named author and link to an author bio page (Google Search Central).
Include statistics and expert quotes.
Distribute press releases through wire services since they are ingested into LLM training and retrieval pipelines (Mynewsdesk).
Blogs are still important for GEO. Even if organic traffic drops, blogs can help demonstrate your expertise, especially when they are part of a broader strategy that includes media coverage, guest posts, and mentions on other sites.
“GEO isn’t just about search. It’s a window into the effectiveness of your entire marketing ecosystem,” Niall Moran, Director of Technology PR, said during the Marketbridge GEO webinar.
Use Q&A and Conversational Formats
Structure content around real questions.
Example: “What does Medicare Part C cover?” followed by a clear answer.
AI works best with FAQ-style formats because question-answer pairs are the exact format AI systems are designed to retrieve (Lisa Lee, Salesforce).
Optimize for Snippets and Quick Answers
Use bullet points, tables, and summaries.
Example: A 5-step ACA enrollment guide or a Medicare comparison table.
Keep Information Up-to-Date and Monitor AI for Accuracy
Update content as needed with each policy change and enrollment season.
Monitor AI outputs for accuracy about your brand.
AI can sometimes generate inaccurate or misleading information, which can hurt your reputation. Being aware of this is the first step to managing the risk.
Expand Your Digital Footprint
Build “citation loops”—ensure your content is referenced across multiple high-authority platforms.
Publish thought leadership on your site.
Contribute to Reddit or Quora threads.
Appear in podcasts and share video content.
Make Your Website AI-Crawlable
Use schema markup and clean HTML.
Provide context on each page (e.g., “This guide is provided by a licensed expert…”).
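The schema markup suggestion above can be made concrete with schema.org’s FAQPage type, which encodes question-answer pairs as JSON-LD that crawlers and AI systems can parse. The sketch below generates that markup programmatically; the sample question and answer are placeholders, not official plan guidance.

```python
# Illustrative sketch: generating FAQPage schema markup (schema.org JSON-LD)
# for embedding in a page's <script type="application/ld+json"> tag.
import json


def faq_jsonld(pairs):
    """pairs: list of (question, answer) -> pretty-printed JSON-LD string."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)


markup = faq_jsonld([
    ("What does Medicare Part C cover?",
     "Medicare Part C (Medicare Advantage) bundles Part A and Part B "
     "coverage, and many plans include prescription drug coverage."),
])
print(f'<script type="application/ld+json">\n{markup}\n</script>')
```

Hand-writing the JSON-LD works just as well; the point is that each question-answer pair on the page is exposed in a structured form rather than buried in free text.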
Leverage AI Tools Internally
Use AI for keyword research, content drafting, measuring visibility, and testing how AI presents your info.
Deploy AI chatbots on your site for personalized guidance.
Embracing GEO in the consumer shopping journey
Generative AI is reshaping the way people shop for health insurance. Whether it’s a 28-year-old exploring ACA options or a 68-year-old comparing Medicare plans, Gen AI is often the first stop. GEO ensures your brand is part of that conversation.
Publishing clear, reliable content and structuring it so AI can easily understand and cite it helps position your brand as a go-to source. Payers that do this well will stand out by offering accurate information and guiding consumers toward informed decisions.
By delivering helpful content at key moments of the buyer journey, you ensure consumers receive the guidance they need when selecting a health plan.
Remember, generative AI is changing fast. Like SEO, GEO best practices will keep evolving. Still, consumers always look for reliable, high-quality information when choosing health insurance. GEO can help you deliver on that. Because AI reflects the values of its creators, your commitment to clarity, accuracy, and consumer empowerment can extend your brand’s reach and impact.