Scott Brinker

Interview by Brij Pahwa, Editorial Lead, e4m & MartechAI.com

In the past year, marketing teams have moved from talking about AI agents as “future tech” to deploying them in real workflows. At the same time, buyers are increasingly using AI assistants for early discovery and evaluation, creating a new layer between brands and human customers.

In this wide-ranging conversation with Scott Brinker, Analyst & Advisor, chiefmartec, we discuss what has changed since last year, where AI agents are genuinely delivering value today, what risks marketers invite when they “deploy and annoy,” why data and organizational readiness are still the biggest blockers, how the MarTech stack is evolving in 2026, and why he believes we are in an awkward transition between the old internet economy and whatever comes next.

What has changed the most in MarTech since we last spoke on this platform?

If I had to pick one change, it’s that this time last year, AI agents were still largely theoretical. We were seeing science fair project examples, but it seemed like we still had quite a ways to go before we could really trust agentic workflows to do reliable work in marketing. A year later, they are very real. The scope is still modest and incremental, but more marketing organizations are deploying agents in specific use cases.

The other big change is on the customer side. Buyers are evolving from classic Google search toward using AI assistants for early discovery and evaluation of brands and B2B vendors. These customer agents are becoming intermediaries between us and our human customers. Just like marketers learned how to please Google in the days of SEO, we are now trying to figure out how to please these AI agent intermediaries to make sure we ultimately reach and serve our customers.

You have cautioned that blindly unleashing AI agents can turn marketers from innovators into spammers. How do marketers embrace this ecosystem without falling into that trap?

There are two parts to this. The first is the internal challenge. We tend to overestimate what’s possible in the short term and underestimate what’s possible in the long term. AI is the fastest-changing technology we’ve faced yet.

When we did our MarTech for 2026 report and surveyed over 100 marketing ops and marketing technology leaders, the number one challenge they cited was data. The AI engines themselves are becoming commodities. Anyone can buy a seat or get API access. Differentiation comes from the data you feed into them, and getting that data from across the organization, clean, governed, and compliant, is hard work. AI has raised the stakes.

The second biggest challenge was org readiness. It’s one thing to have powerful technology. It’s another thing to rethink culture, training, and processes. AI can make processes faster, but it can also force you to ask whether you even need a process at all, or whether there’s a better way to approach it.

The second part is customer experience. The concern is moving from spray and pray to deploy and annoy. AI can personalize messages, but it also removes constraints on volume. The question shifts from how much you can do to how much you should do, and how to engage prospects respectfully through their eyes. If you’re not paying very close attention to what this looks like from the recipient side, you can misuse the technology.

If a CMO wants to deploy AI agents, what is the safest and most valuable customer-facing use case right now?

A very safe and valuable area is supporting inbound engagement, especially on websites and apps. This started with chatbots for customer service, and for a long time most weren’t very good. With LLMs that handle conversational interaction well, and with agents connected to backend data like knowledge bases and ticket histories, they’re getting quite good at answering and resolving customer queries.

This is valuable not just for customer support, but also before someone becomes a customer. Prospects can engage 24/7, wherever they are, and get answers tailored to what they’re asking. Because it’s inbound, the prospect initiates it and stays in control. It’s not an interruption; it’s serving a need.
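For readers who want to picture the mechanics, here is a minimal sketch of that pattern, assuming a toy knowledge base, naive keyword retrieval, and a placeholder call_llm function standing in for whatever model you use. It illustrates grounding answers in backend data, not any particular vendor’s implementation.

# Minimal sketch (Python): retrieve relevant knowledge-base content, then ask a
# model for a grounded answer. The knowledge base, the keyword retrieval, and
# call_llm are illustrative placeholders, not any vendor's API.

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str

KNOWLEDGE_BASE = [
    Article("Reset your password", "Go to Settings, then Security, and choose Reset password."),
    Article("Change your plan", "Plan changes take effect at the start of the next billing cycle."),
]

def retrieve(question: str, kb: list, k: int = 2) -> list:
    # Rank articles by crude keyword overlap; a real agent would use embeddings or search.
    words = set(question.lower().split())
    return sorted(kb, key=lambda a: -len(words & set(a.body.lower().split())))[:k]

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to whichever model provider (or local SLM) you actually use.
    raise NotImplementedError

def answer(question: str) -> str:
    # Build a prompt that constrains the model to the retrieved context.
    context = "\n\n".join(f"{a.title}: {a.body}" for a in retrieve(question, KNOWLEDGE_BASE))
    prompt = (
        "Answer the visitor's question using only the context below. "
        "If the context does not cover it, say so and offer to connect them with a person.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)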

Are we truly in an AI reasoning era, or is AI still just following scripts?

I think of it the way Ethan Mollick describes it, as the jagged frontier. There are things AI is remarkable at, including cases where it has reasoned its way to novel solutions, even for certain math problems that had no known solution.

But there are other things where you expect it to do well, like resolving a specific customer problem, and it might not be able to. One moment it can feel like a wise old sage, the next like a small child.

At the same time, without dismissing healthy skepticism, we are seeing more reasoning and more use cases than we would have expected a year ago. This keeps changing fast, and sometimes what fails today works very well six months later with new models.

Will smaller language models replace large language models, especially for enterprise use cases?

I don’t think of it as either-or; I think of it as both-and. The advantage of LLMs, particularly the commercial cloud ones, is that they are state-of-the-art frontier models. Companies spend billions training and improving them, and it’s hard to replicate that full scale of power in smaller local models.

That said, many use cases don’t require the absolute frontier model. SLMs can be cheaper and offer more control over training and guardrails, and there’s also a real security and privacy angle. Even with strong efforts from the frontier labs, putting data in the cloud carries non-zero risk. Local models that keep data inside your own boundaries can have an advantage.

A lot of it comes down to trade-offs, evaluated use case by use case.

Where do we stand on data sovereignty and the legality of copyrighted data being used to train LLMs?

There are multiple issues entangled here. Outside of LLMs, data sovereignty itself is tricky. There’s value when data can flow across national boundaries with governance, because it enables business and learning globally. But it’s also naive to assume there’s no risk. Nations can be cooperative and competitive at the same time. In an AI world where data becomes more valuable, we need to be intentional about governance and trade-offs.

On LLM training, at least in the U.S., the argument has leaned on fair use. If I as a human read websites and books and then apply what I’ve learned, that’s fair use. But when machines do it at unprecedented scale and with unprecedented recall, it’s not clear where fair use ends and where embedding and taking begin.

A number of cases have been leaning in a direction where AI companies have to pay for access. It also depends on who you ask. Some content creators want payment if their content is used. Others want AI engines to take and show their content because it drives discovery, which is part of the GEO (generative engine optimization) and AEO (answer engine optimization) conversation. Ultimately, it feels right that content owners should decide whether they want to make it freely available, make it available for payment, or not include it at all.

What happens to the internet economy when more discovery moves into AI assistants and agents?

One big question is whether AI engines will adopt advertising. If more human attention and engagement moves into AI assistants and agents, it makes sense for that to become a channel where more commercial transactions and advertising happen.

If revenue shifts there, that money doesn’t belong entirely to the AI lab. It also belongs to the people feeding the content those engines rely on. We’re in an awkward teenage space between the old and the new: the old economy is showing cracks, but the new economy hasn’t fully arrived yet.

In 2026, what should an ideal MarTech stack look like for a CMO?

Today and probably through most of 2026, we will see hybrid MarTech stacks. Most of the core SaaS products remain essential: marketing automation, customer engagement platforms, DAMs, CMSs, DXPs, CDPs, and other systems. AI capabilities are being embedded into these products, and even independent AI tools still rely on core systems of record for data and core systems of engagement to deliver experiences.

Over the next few years, I think we’ll see a shift in how stacks are organized. The fundamental challenge is getting data right across the organization, not just across marketing tools but across sales, service, digital products, and more. It’s hard to do when you have multiple systems of record that don’t fully integrate.

My best guess is we will see more underlying data platforms becoming the foundation around which MarTech operates. Platforms like Databricks, Snowflake, and Google BigQuery increasingly become that base, because that’s how you connect data, manage governance, and coordinate what you build on top of it. Most companies aren’t fully there yet, but more are investing in these platforms for the organization as a whole.

What will dominate the next year of conversation in MarTech?

My best guess is it will still be AI agents. They’ve moved from theoretical to deployed in production, but it’s still early. The tasks are relatively isolated and simple, but the agents are getting smarter. As they get better at higher-level understanding of tasks, that opens up more possibilities.

But it’s not just about technology speed. It’s also about organizations’ ability to adapt, and that takes time. Regardless of breakthroughs, it will be a step-by-step evolutionary journey to an AI future.

Final message for marketers heading into 2026?

Take a deep breath. It is going to be a very wild year ahead for all of us, and that’s OK. We’re on this journey together, and the more we can help each other along, the better. I’m excited to see where it takes us over the next 12 months.