IMG Courses — The #1 Place for SEO & Digital Marketing Courses
Edition #3 · March 22, 2026 · 12 min read

What You Call It Matters More Than What It Does

The label “AI-designed” drops purchase intent 29%. The same product, described as “designed by our team using AI tools,” lifts it 3.5%. Framing is part of the product. Here’s the data from this week, and what to do with it.

AI Labeling · Purchase Intent · Branded Search ROI · Email Deliverability · Agentic AI · Marketing Data
← Previous Edition When Your Instruments Change But Your Scan Doesn’t
Next Edition → The Culture of Completion: Why Your Course Isn't the Problem

“It was like we loaded a machine gun with company money and told our developers to have at it.”

That was one of my leaders at Ticketmaster, describing our early shift to the cloud.

AWS had sold us on the idea that moving to the cloud would cut costs. A lot of companies bought that story. A lot still do. And now the same pitch is being made about AI.

Here is what actually happened. Cloud was a real shift, and the companies that adapted outcompeted the ones that did not. But almost never because costs went down. The real advantage was agility: speed to market. You could ship a new feature a month, three months, six months before your competitor. The budget looked worse. The revenue made it back.

At Ticketmaster, we spent months doing everything exactly the way we had before, paying cloud prices to do it. We were eating the new cost without capturing the real advantage. It took a genuine mindset shift to understand what the new paradigm actually offered.

The label on the pitch said “cost savings.” The actual product was speed.

That is the pattern I am watching this week, in three different places. How you describe something changes how it performs. The metric that looks strongest is not always measuring what you think. And the real advantage in a new paradigm is almost never where the original sales pitch said it would be.

Here is what I am looking at this week.

This Week’s Finds

Deep Dive: The “AI-Designed” Label Problem

Sources: Science Says — “AI-Designed” Hurts Sales (Mar 17, 2026) · Yang & Tian, Journal of Retailing and Consumer Services (Dec 2025) ↗

−29%
Drop in purchase intent when a product is labeled “AI-designed” — versus a 3.5% rise when described as “designed by our team using AI tools.” Same product. Three words.

Why This Is the Most Important Story This Week

The Science Says study measured something most marketers have not tested: not whether AI produces good work, but what happens when you say so. A 5% lift in conversion rate is considered a major win in most paid campaigns. A 29% drop in intent from a three-word description is not a rounding error. It is a structural problem.

What Is Happening Psychologically

The label “AI-designed” triggers two things simultaneously. First, it signals a perceived lack of human judgment in the process. Second, it raises uncertainty about quality control. People do not distrust AI in principle. They distrust the idea that no human made a deliberate call along the way.

The second framing — “our team used AI tools to help design this” — repositions AI as a means rather than the author. A 3.5% rise in intent from that version suggests that honest disclosure can actually help, as long as human judgment stays visible in the framing. The disclosure is not the problem. The authorship claim is.

Three Things Worth Checking in Your Own Work

Key Takeaway

Disclose the process, not just the output. Say your team uses tools. Do not position the thing as the tool’s creation. Human judgment visible in the framing is the difference between a disclosure that helps and one that costs you a third of your purchase intent.

IMG’s Take

The labeling data from Science Says lands differently when you have been watching how marketers actually adopt AI over the past two years. The question we hear most in the community is not “Should we use AI?” It is “How do we make the AI less obvious?” and “Should we say we use AI?”

The data now has a practical answer: frame AI as a tool your team uses, never as the author of the work.

The Growth Memo brand tax piece adds a related layer. Both stories are examples of metrics that look right until you ask what they are actually measuring. A 1,299% ROAS on branded search and a “62% of CMOs can prove AI ROI” headline both pass the surface check. The underlying picture is more complicated in both cases.

That is the theme this week. What something looks like on the spec sheet and what it does in production are often different. The gap is usually where the real work is.

If you are an IMG member, this week’s forum thread is worth a look. The question of how to disclose AI use without hurting trust is one the community has been wrestling with. Drop what you are seeing in your own copy tests. Collective data is worth more than any single study.
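If you do run your own copy test, a standard two-proportion z-test is enough to check whether a framing difference is more than noise before you post it to the thread. The sketch below is illustrative: the function name and the traffic and conversion numbers are hypothetical, not from the Yang & Tian study.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for a copy test.

    conv_a / n_a: conversions and visitors for framing A,
    conv_b / n_b: conversions and visitors for framing B.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool both arms to estimate the shared conversion rate under H0.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 2,000 visitors per arm, the "AI-designed" page
# converts 71 of them, the "our team using AI tools" page converts 100.
z, p = two_proportion_z(71, 2000, 100, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With numbers in that range the test flags the gap as significant at the usual 5% level; with a few hundred visitors per arm it usually would not, which is exactly why pooling the community’s results beats any one member’s test.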

Join the IMG Community →

— The IMG Team

Sources cited in this edition
  1. AI labeling and purchase intent (−29%): Science Says — “AI-Designed” Hurts Sales (March 17, 2026) · Underlying study: Yang & Tian, “Designer-consumer similarity matters: The effect of AI-designed products on purchase intention,” Journal of Retailing and Consumer Services, Dec 2025 ↗
  2. Branded search ROAS and brand tax analysis: Growth Memo — The Brand Tax: How Google Profits From Demand You Already Own
  3. CMO vs IC AI ROI gap (62% vs 12%): Jasper — The State of AI in Marketing 2026 · Why CMOs and ICs See AI So Differently
  4. Gmail 102KB clipping and tracking pixel impact: AWeber — Why Gmail Is Clipping Your Emails — And What to Do About It
  5. Agentic AI fragility in production: TLDR AI · KDnuggets — OpenClaw Explained