“It was like we loaded a machine gun with company money and told our developers to have at it.”
That was one of my leaders at Ticketmaster, describing our early shift to the cloud.
AWS had sold us on the idea that moving to the cloud would cut costs. A lot of companies bought that story. A lot still do. And now the same pitch is being made about AI.
Here is what actually happened. Cloud was a real shift, and companies that adapted pulled ahead of those that did not. But almost never because costs went down. The real advantage was agility. Speed to market. You could ship a new feature a month, three months, six months before your competitor. The budget looked worse. The revenue made it back.
At Ticketmaster, we spent months doing everything exactly the way we had before, paying cloud prices to do it. We were eating the new cost without capturing the real advantage. It took a genuine mindset shift to understand what the new paradigm actually offered.
The label on the pitch said “cost savings.” The actual product was speed.
That is the pattern I am watching this week. In three different places. How you describe something changes how it performs. The metric that looks strongest is not always measuring what you think. And the real advantage in a new paradigm is almost never where the original sales pitch said it would be.
Here is what I am looking at this week.
This Week’s Finds
-
Story 1
Calling Something “AI-Designed” Cuts Purchase Intent by 29%
Science Says ↗
A study from Science Says tested how product labeling affects buying behavior. Products described as “AI-designed” saw purchase intent drop 29% compared to a control. The same product, described as “designed by our team using AI tools,” saw purchase intent rise 3.5%.
The output is identical. The framing is the whole difference. This applies beyond product design: course landing pages, ad copy, service descriptions, email sign-offs. If you are using AI in your process, the question is not whether to say so. It is how. “Tool used by experts” frames it differently than “made by AI.”
-
Story 2
The Brand Tax: Google Profits From Demand You Already Built
Growth Memo ↗
An analysis of 99 billion web sessions found that branded search campaigns — ads triggered when someone searches your own company name — show a 1,299% ROAS. That number looks like your best campaign. The problem: those clicks were already coming. Ad costs climbed 30%. Conversion rates fell 5.1%. Bounce rates hit 59%.
The brand campaign is collecting a fee on traffic that was already yours. That is a legitimate spend decision. But knowing what you are actually buying changes how you evaluate the number. The ROAS is real. What it measures is not what most people assume.
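One way to pressure-test a branded-search number is to compute incremental ROAS under an assumed organic-capture rate: the share of attributed revenue you would have collected anyway without the ad. Here is a minimal sketch in Python; the spend, revenue, and 90% capture figures are illustrative placeholders, not Growth Memo data, so substitute an estimate from a holdout or geo test.

```python
# Illustrative only: reported vs. incremental ROAS on branded search.
# All numbers are placeholder assumptions, not from the Growth Memo analysis.

ad_spend = 1_000.0              # branded-search spend
attributed_revenue = 12_990.0   # revenue the platform attributes (1,299% ROAS)
organic_capture = 0.90          # assumed share you would have earned anyway

reported_roas = attributed_revenue / ad_spend
incremental_roas = attributed_revenue * (1 - organic_capture) / ad_spend

print(f"Reported ROAS:    {reported_roas:.0%}")     # 1299%
print(f"Incremental ROAS: {incremental_roas:.0%}")  # 130%
```

Under that assumption, the same campaign drops from a 1,299% reported ROAS to roughly 130% incremental. The capture rate is the whole argument, which is why it has to come from a test, not a guess.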
-
Story 3
62% of CMOs Say They Can Prove AI ROI. Only 12% of Their Teams Agree.
Jasper Survey ↗
A Jasper survey of marketing teams found a striking gap: 62% of marketing leaders say they can demonstrate ROI from AI tools. Among individual contributors — the people actually running the campaigns — only 12% agree.
Both groups are telling the truth. Leaders see high-level outcomes and dashboard numbers. ICs see governance delays, failed tool integrations, and hours spent prompt-debugging. The gap is not measurement. It is proximity to the work. If you manage a team, their experience of AI tools is probably not the same as yours.
-
Story 4
Gmail Clips Emails Over 102KB — and Your Tracking Pixel Goes With It
AWeber ↗
Gmail automatically hides the content of any email that exceeds 102KB behind a “Message clipped [View entire message]” link. Most subscribers will not click it. More importantly: if your tracking pixel sits below the clip point, it does not fire. Which means opens do not register. Your Gmail open rate data is understated.
Heavy images and excess CSS are the most common causes. Keeping your HTML under 102KB fixes it. If your open rates on Gmail look unexpectedly low, check your template size before blaming the subject line.
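If you want a quick pre-send check, a few lines of Python cover it. A minimal sketch, assuming your template is a local HTML file; the file name and the 90% warning margin are placeholders. Treat the local size as a lower bound, since ESP link rewriting and tracking markup add weight before the email reaches Gmail.

```python
# Minimal sketch: check an email template against Gmail's ~102KB clip limit.
# The file name and warning margin are assumptions; adapt to your build step.
import os

GMAIL_CLIP_LIMIT = 102 * 1024  # Gmail clips HTML bodies larger than ~102KB

def check_template(path: str, margin: float = 0.9) -> None:
    size = os.path.getsize(path)
    pct = size / GMAIL_CLIP_LIMIT
    if size > GMAIL_CLIP_LIMIT:
        print(f"{path}: {size:,} bytes ({pct:.0%} of limit) -- will be clipped")
    elif size > GMAIL_CLIP_LIMIT * margin:
        print(f"{path}: {size:,} bytes ({pct:.0%} of limit) -- close to the limit")
    else:
        print(f"{path}: {size:,} bytes ({pct:.0%} of limit) -- OK")

check_template("newsletter.html")
```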
-
Story 5
Fully Agentic AI Systems Are Fragile in Production
TLDR AI ↗
KDnuggets ↗
OpenClaw published a documented pattern from production AI deployments: fully agentic systems — where the AI handles the entire workflow end to end — are fragile. The real-world solutions that hold up under load are structured workflows with targeted LLM steps at specific points, not systems that hand full autonomy to the AI.
End-to-end AI automation sounds clean in the spec. It falls apart in the edge cases. This is the software engineering version of the same lesson in stories 1 through 3: what works in theory and what holds in production are often two different things.
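To make the distinction concrete, here is a minimal sketch of the structured-workflow shape. The ticket-triage domain and the `call_llm` stub are invented for illustration, not taken from the OpenClaw write-up; the point is that ordinary code owns the control flow, the model is invoked only at the narrow steps that need judgment, and its output is validated before anything downstream depends on it.

```python
# Minimal sketch: a structured workflow with targeted LLM steps.
# call_llm() is a hypothetical stand-in for your model client, and the
# ticket-triage example is invented, not from the source article.

ALLOWED_QUEUES = {"billing", "bugs", "account", "other"}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model client here")

def handle_ticket(ticket_text: str) -> dict:
    # Step 1 (deterministic): cheap guardrails before any model call.
    if not ticket_text.strip():
        return {"queue": "other", "reply": None}

    # Step 2 (LLM, narrow task): classify into a fixed set of queues.
    queue = call_llm(
        f"Classify into one of {sorted(ALLOWED_QUEUES)}:\n{ticket_text}"
    ).strip().lower()

    # Step 3 (deterministic): validate the model's answer; fall back rather
    # than letting a malformed label propagate downstream.
    if queue not in ALLOWED_QUEUES:
        queue = "other"

    # Step 4 (LLM, narrow task): draft a reply for a human to review.
    reply = call_llm(f"Draft a short support reply for a {queue} ticket:\n{ticket_text}")
    return {"queue": queue, "reply": reply}
```

A fully agentic version would hand the ticket to the model with a goal and let it decide the steps. The structured version fails predictably: a bad classification becomes "other" instead of a wrong action.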
−29%
Drop in purchase intent when a product is labeled “AI-designed,” versus a 3.5% rise when it is described as “designed by our team using AI tools.” Same product. Different label.
Why This Is the Most Important Story This Week
The Science Says study measured something most marketers have not tested: not whether AI produces good work, but what happens when you say so. A 5% lift in conversion rate is considered a major win in most paid campaigns. A 29% drop in intent from a label change is not a rounding error. It is a structural problem.
What Is Happening Psychologically
The label “AI-designed” triggers two things simultaneously. First, it signals a perceived lack of human judgment in the process. Second, it raises uncertainty about quality control. People do not distrust AI in principle. They distrust the idea that no human made a deliberate call along the way.
The second framing — “our team used AI tools to help design this” — repositions AI as a means rather than the author. A 3.5% rise in intent from that version suggests that honest disclosure can actually help, as long as human judgment stays visible in the framing. The disclosure is not the problem. The authorship claim is.
Three Things Worth Checking in Your Own Work
- Audit your copy for AI-author framing. “AI-generated,” “AI-written,” or “AI-designed” are the specific phrases to check. Neutral disclosure (“we use tools to help us work faster”) is different from labeling the output as AI’s creation. The Science Says data suggests the distinction matters. (A minimal audit sketch follows this list.)
- Apply this to output quality, not just marketing copy. If your internal content process is fully AI-generated with minimal human review, the quality gap is not just with Google’s filters. People can often sense when human judgment is absent. The study data suggests the instinct is calibrated correctly.
- Remember that framing is part of the product. It always has been. “Handmade” means something. “Fresh” means something. “AI-generated” now means something too, and the association is not favorable. You can be transparent about your process without undermining what you made. The key is keeping human judgment visible in how you describe the work.
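For the first check, a grep-style pass over your copy is enough to surface the phrases worth a human look. A minimal sketch, assuming copy lives in HTML files under a `copy/` directory; the phrase list and file glob are starting-point assumptions, so extend both for your stack.

```python
# Minimal sketch: flag AI-author framing in copy files for human review.
# The phrase list and the copy/ directory are assumptions; adjust as needed.
import pathlib
import re

AI_AUTHOR_PHRASES = re.compile(
    r"\bAI[- ](generated|written|designed|created)\b", re.IGNORECASE
)

for path in pathlib.Path("copy/").rglob("*.html"):
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        if AI_AUTHOR_PHRASES.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```

The script only flags lines. Whether a given phrase is author-framing or neutral disclosure is still a judgment call.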
Key Takeaway
Disclose the process, not just the output. Say your team uses tools. Do not position the thing as the tool’s creation. Human judgment visible in the framing is the difference between a disclosure that helps and one that costs you nearly a third of your purchase intent.
IMG’s Take
The labeling data from Science Says lands differently when you have been watching how marketers actually adopt AI over the past two years. The question we hear most in the community is not “should we use AI?” It is “how do we make the AI less obvious?” and “should we say we use AI?”
The data now has a practical answer, and it is the takeaway above: frame AI as a tool your team uses, not as the author of the work.
The Growth Memo brand tax piece and the Jasper survey add a related layer. Each is a metric that looks right until you ask what it is actually measuring. A 1,299% ROAS on branded search and a “62% of CMOs can prove AI ROI” headline both pass the surface check. The underlying picture is more complicated in each case.
That is the theme this week. What something looks like on the spec sheet and what it does in production are often different. The gap is usually where the real work is.
If you are an IMG member, this week’s forum thread is worth a look. The question of how to disclose AI use without hurting trust is one the community has been wrestling with. Drop what you are seeing in your own copy tests. Collective data is worth more than any single study.
Join the IMG Community →