Creative Strategy as Infrastructure: How AI Agents Turn Expertise Into a Scalable System
Feb 7, 2026 · Haven Team
The Speed vs Expertise Tradeoff
There has long been a paradox in professional services: quality takes time, but the market demands speed. In the world of creative strategy, this has usually meant that "fast" work was shallow, and "deep" work was slow. Better search engines and improved productivity tools have meaningfully eased this seemingly inevitable tension. But the ultimate bottleneck, the human brain's capacity to synthesize large amounts of competing data into a cohesive plan, has remained stubbornly unchanged.
Curiously, successful efforts at balancing the speed/expertise trade-off suggest that its resolution is likely achieved through the marriage of human agency and technological execution. Nonetheless, the terms and dynamics of that partnership have always proven elusive. Until now, that is.
It should come as no surprise, if you've made it this far, that the forcing function responsible for turning velocity and understanding, entities once at odds, into teammates is Artificial Intelligence. I realize that's a vague claim. After all, no one can actually seem to agree on a standard definition of what Artificial Intelligence is. But my hope is that by the end of this article, you walk away not only with a better understanding of the general surface area Artificial Intelligence spans but also of the technological harnesses that leverage the frontier of AI to redefine how creative strategy is built.
The Era Before Agents: How We Got Here (and Where We Stalled)
It's hard to overstate how much changed when generative AI went mainstream in late 2022 and early 2023. Overnight, capabilities that once required large teams became accessible to much smaller ones. A creative strategist could draft ad copy variations, generate mood boards, summarize competitor messaging, and pull key insights from a dense market report, all in a day's work. For the first time, AI felt less like a back-office utility and more like something you could actually use.
What followed was an explosion of tools, each one surprisingly good at what it did. Savvy creative strategists started weaving them together into workflows that actually worked. Open the listening tool, pull the sentiment data. Feed the highlights into the copywriting assistant. Pull from the trend platform and distill it with ChatGPT or Gemini. Each of those interactions was quietly executing a specific capability, and the strategist was, without necessarily thinking about it, orchestrating a collection of powerful AI operations step by step.
Unfortunately, this is where the efficiency gains stalled. Each of these tools was completely siloed, operating in its own lane with no awareness of what the others were doing. Nothing one tool learned or surfaced could dynamically influence how another responded. Nothing could look at research results and decide what to do with them next, or recognize that sentiment data from Reddit suggested a shift in tone that should change the entire creative direction and then go act on it. The strategist was still the one connecting the dots, moving between tools, and making the calls. The tools made each step faster, but the number of steps stayed the same. The mental load stayed the same. And most importantly, the bottleneck stayed the same. We'd gotten faster, but we still had to reconcile large amounts of data into a broad, effective creative strategy ourselves.
Advent of Automated Reasoning
But this changed recently, and it wasn't just another incremental improvement in what AI could do. It was a change in what AI could be. The tools we talked about in the last section were powerful, but they were still fundamentally reactive. You prompted them, they responded. The real shift came when AI systems started not just answering questions but reasoning through problems autonomously, deciding what to research, what to act on, and how to connect the dots without being told at every step. That's when we started calling them agents. And more importantly, that's when the concept of agent skills entered the picture.
So what actually are agent skills? At their core, they're codified expertise. Not just instructions or guardrails, but deep, structured knowledge about how to approach a problem in a specific domain. Think of it as the difference between giving someone a checklist and giving them the judgment of someone who's been doing the job for twenty years. Agent skills are that judgment, packaged in a way that an AI system can actually use it.
To understand why that matters, look at what's already happening in other fields. In medicine, OpenEvidence has built something remarkable. It's a platform used by over 40% of physicians in the United States, and it doesn't just surface medical literature. It synthesizes findings across thousands of peer-reviewed studies, weighs the strength of evidence, and surfaces actionable clinical guidance grounded in sources like the New England Journal of Medicine and JAMA. A physician asks a question at the point of care and the system doesn't just retrieve an answer. It reasons through the evidence, highlights consensus, flags where there's disagreement, and translates all of it into a recommendation. That's not a search engine. That's decades of medical research methodology and clinical reasoning, codified into an agent skill and executed at a speed no human could match.
The same thing is happening in software engineering. Claude Code, Anthropic's agentic coding tool, doesn't just write code. It reads an entire codebase, understands how the pieces relate to each other, plans an implementation approach, writes the code, runs tests, debugs failures, and iterates until things work. All of that autonomously. One engineer at Rakuten described giving Claude Code a complex task inside a library with over twelve million lines of code. It worked autonomously for seven hours and delivered a complete implementation. The engineer's role wasn't to write code. It was to occasionally provide guidance. The deep knowledge of how software systems are built, how to navigate large codebases, how to reason about side effects and edge cases, all of that was baked into how the agent approached the problem.
These aren't one-off demos or research experiments. They're products that people are relying on in high-stakes environments, every single day. And the reason they work isn't just that the underlying AI models got smarter. It's that someone figured out how to take real, hard-won human expertise and turn it into something an agent could actually reason with. That's the step change. Not smarter AI. Smarter skills.
The Pillars of a Robust Agent
An agent that has deep expertise codified into it but can't consistently act on that expertise is just a very sophisticated chatbot. The infrastructure underneath is what turns potential into performance, and it's what separates agents that people actually trust from ones that feel impressive in a demo but fall apart when you depend on them.
There are a few pillars that we think of as non-negotiable when building a robust agent.
The first is orchestration. An agent doesn't just do one thing. It has to manage a sequence of tasks, decide which ones to run in parallel, handle outputs from one step and feed them into the next, and keep the whole process moving toward a coherent goal. Without orchestration, an agent is just a series of disconnected actions.
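To make that concrete, here is a minimal sketch of what orchestration can look like: each step consumes a shared context and adds its output for downstream steps to reason over. The step names and data are illustrative, not Haven's actual implementation.

```python
# Minimal orchestration sketch. Each step reads the shared context and
# adds its own output, so later steps can build on earlier ones.
def gather_sentiment(ctx):
    ctx["sentiment"] = "tone is shifting playful"  # stand-in for real listening data
    return ctx

def distill_trends(ctx):
    ctx["trends"] = ["short-form video", "UGC-style ads"]  # stand-in for trend research
    return ctx

def draft_strategy(ctx):
    # A downstream step that depends on the outputs of both upstream steps.
    ctx["strategy"] = f"Lean into {ctx['trends'][0]}; {ctx['sentiment']}."
    return ctx

def orchestrate(steps, ctx=None):
    """Run steps in order, threading one shared context through all of them."""
    ctx = ctx or {}
    for step in steps:
        ctx = step(ctx)
    return ctx

result = orchestrate([gather_sentiment, distill_trends, draft_strategy])
```

The point of the sketch is the threading: remove the shared context and you are back to the disconnected, siloed tools described earlier.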
The second is reliability. Things go wrong. APIs time out. Data sources return something malformed. A robust agent doesn't crash when that happens. It retries where it makes sense, degrades gracefully where it doesn't, and keeps moving. An agent that works 80% of the time is not something you can build a product on.
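A simple way to picture "retry where it makes sense, degrade gracefully where it doesn't" is a wrapper like the one below. The helper and the flaky data source are hypothetical stand-ins, not real APIs.

```python
import time

def with_retries(fn, attempts=3, fallback=None, delay=0.01):
    """Retry a flaky call with backoff, then degrade gracefully instead of crashing."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i < attempts - 1:
                time.sleep(delay * (2 ** i))  # exponential backoff between attempts
    return fallback  # graceful degradation: a partial result beats a crash

calls = {"n": 0}
def flaky_source():
    """Simulated data source that times out twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("API timed out")
    return {"sentiment": "positive"}

data = with_retries(flaky_source)            # succeeds on the third attempt
stale = with_retries(lambda: 1 / 0, fallback="cached result")  # never succeeds, falls back
```

The design choice worth noting is the `fallback` parameter: an agent that returns a cached or partial result keeps the overall workflow moving, which is exactly what "80% of the time is not good enough" demands.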
The third is speed. If an agent is going to fit into a real workflow, it can't take twenty minutes to do something that should take a human two minutes. It needs to take twenty minutes to do something that would take a human several hours. That means running independent tasks in parallel, caching results that don't need to be fetched again, and knowing when to go deep on a research thread and when to move on. Speed isn't just about raw compute. It's about the agent being smart about how it spends its time.
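Two of those speed levers, running independent tasks in parallel and caching results that don't need refetching, can be sketched in a few lines. The sources and fetch function are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=None)
def fetch(source):
    """Pretend network fetch; lru_cache means a source is never fetched twice."""
    return f"data from {source}"

def research(sources):
    # Independent research threads run in parallel instead of one after another.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fetch, sources))

results = research(["reddit", "tiktok", "trend-report"])
repeat = research(["reddit", "tiktok", "trend-report"])  # served from cache
```

With real I/O-bound fetches, the parallel version's wall-clock time approaches the slowest single fetch rather than the sum of all of them.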
The fourth is observability. When an agent is working autonomously, you need to understand what it did and why. Which sources did it pull from? Why did it prioritize one insight over another? Without that transparency, you can't trust the output, and you can't improve the system over time.
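In practice, observability often starts with a structured trace: every decision gets recorded with its reasoning, so a human can audit the run afterwards. This is a generic sketch, not Haven's logging format.

```python
import json
import time

trace = []

def log_step(name, **details):
    """Record what the agent did and why, so its output is auditable later."""
    trace.append({"ts": time.time(), "step": name, **details})

log_step("research", source="reddit", reason="highest recent comment volume")
log_step("prioritize", chosen="tone shift", over="hashtag trend",
         reason="stronger signal across multiple sources")

# The trace serializes to an audit log a human (or a debugger) can review.
audit_log = json.dumps(trace, indent=2)
```

The key detail is the `reason` field: logging *that* something happened answers "which sources did it pull from," but logging *why* is what lets you trust and improve the system over time.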
These pillars don't work in isolation. Orchestration without reliability just means a well planned workflow that breaks halfway through. Speed without observability means fast answers with no way to know if they're good. They have to work together, and building all of them well at the same time is what separates the agents that actually work from the ones that almost do.
Anatomy of a Creative Strategy Agent Skill
An interesting contrast emerges at this stage. Building an agent according to the four pillars we discussed is closer to a science. Codifying domain expertise into a set of skills that allow an agent to effectively act autonomously toward a certain objective, on the other hand, is more of an art. Think of it like onboarding a new employee. There is no singular blueprint for how to do it well. It takes creativity, adaptability, and a willingness to experiment. Writing good agent skills is no different.
With Haven, our goal was to build an elite end-to-end creative strategy agent. That meant authoring a library of skills that, together, captured the expertise an expert human creative strategist would possess. After a lot of deliberation and experimentation, we landed on a framework where each skill represents what we call internally a mini expert. Each mini expert is responsible for giving the agent deep knowledge of a single business domain. The result is that Haven behaves less like one generalist and more like a collection of specialized creative strategists, putting on a different hat depending on the type of business it is building a strategy for.
So what does creative strategy expertise in a given business domain actually look like? That turned out to be a harder question than we expected. There is no off-the-shelf template for it. It took a lot more thought, discussion, and trial and error before we arrived at a general framework that consistently produced human-level creative strategy. Without giving away too much of the secret sauce, here is what each of Haven's agent skills contains at a high level:
- a description of the business domain the skill is built for
- a heuristic for creative diversity and persona selection
- budget allocation guidelines
- a set of research goals the agent needs to satisfy before it puts the strategy together
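One way to picture those ingredients is as a structured skill definition. The fields below mirror the list above, but the class shape, names, and example values are hypothetical, not Haven's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSkill:
    """Hypothetical shape of a 'mini expert' skill, mirroring the list above."""
    domain: str                  # the business domain the skill is built for
    diversity_heuristic: str     # how to vary creatives and select personas
    budget_guidelines: dict      # e.g. channel -> suggested share of spend
    research_goals: list = field(default_factory=list)  # must be met before strategizing

    def ready_to_strategize(self, completed_goals):
        # The agent may only assemble a strategy once every research goal is satisfied.
        return set(self.research_goals) <= set(completed_goals)

skill = AgentSkill(
    domain="DTC skincare",
    diversity_heuristic="cover at least three distinct buyer personas",
    budget_guidelines={"paid_social": 0.6, "creators": 0.4},
    research_goals=["audience sentiment", "competitor messaging"],
)
```

The `research_goals` gate is the interesting part of this sketch: it encodes the discipline of an experienced strategist who refuses to draft a plan before the groundwork is done.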
The beauty of this design is that it doesn't just produce creative strategy on par with what a highly competent human would produce today. It also leaves room to evolve. As the social media landscape shifts, the skills can be updated to reflect it, which means Haven gets better over time without needing to be rebuilt from the ground up.
The New Human-AI Division of Labor
There is a narrative that follows every major shift in technology. AI is going to replace jobs. AI is going to take over. It's a compelling story, and it gets clicks, but it misses what's actually happening. What we're seeing with agents isn't replacement. It's a rebalancing. A fundamental rethinking of what humans do and what machines do, and more importantly, how the two work together.
Think about what Haven actually does. It doesn't replace a creative strategist. It takes the parts of the job that are time consuming, repetitive, and bottlenecked by how fast one person can research, analyze, and synthesize, and it handles them. The strategist's role shifts. Instead of spending hours scraping Reddit comment sections and cross-referencing trend data, they spend their time on the things that actually require human judgment. Defining the vision. Evaluating whether a strategy feels right for a brand. Making the kinds of calls that no amount of data can fully inform.
That shift unlocks something interesting. It's not just about efficiency. It's about what becomes possible when you free up that cognitive bandwidth. A single strategist can now operate at a scale and breadth that would have previously required an entire team. A junior practitioner can produce work that is informed by the depth of a seasoned professional. And an experienced strategist can take on more ambitious projects because the groundwork is no longer the bottleneck.
What makes this different from the last wave of AI tools is that it compounds. Every time a human expert refines a skill, adds a new heuristic, or corrects the way the agent approaches a problem, the entire system gets better. The expertise doesn't disappear when someone leaves a team or retires. It lives in the skill, and it scales with every business that uses it. Human knowledge feeds the agent. The agent amplifies that knowledge. And the cycle continues.
This is what the new division of labor actually looks like. It's not humans on one side and AI on the other. It's humans and AI operating as a system, each doing what they do best, and producing outcomes that neither could pull off alone.
What Comes Next
We're still in the early innings of what agent skills can do. The examples we've talked about, from OpenEvidence in medicine to Claude Code in software engineering to Haven in creative strategy, are just the beginning. They're proof that the model works, but they're not the ceiling. Not even close.
The question that excites us most isn't what agents can do right now. It's what becomes possible as more people start thinking seriously about how to codify their expertise. Every industry has knowledge that lives in the heads of its best practitioners. Knowledge built over decades of experience, hard to teach, impossible to scale the traditional way. Agent skills change that. They give that knowledge a way to exist beyond any one person and operate at a speed and scale that wasn't possible before.
Haven is our answer to that question in the world of creative strategy. We built it because we believed that the right combination of deep domain expertise and a robust agentic infrastructure could produce something genuinely useful, not just impressive. Something that a real creative strategist could hand work off to and actually trust. We think we're there. But more importantly, we think we've only just started to understand what's possible when you get this right.
If you're building something in your space and you're thinking about how AI fits into it, the answer probably isn't another tool. It's probably a skill. And if you want to talk about what that looks like, we'd love to hear from you.