Last week I shipped a feature at 3pm. By 5pm, three demo videos for it were sitting in my Downloads folder, ready to post. I didn't make any of them.
That's the operational reality at Marnix right now. I'm one person. My co-founder and CTO stepped out of Marnix a few weeks ago, and I'm being deliberate about who comes in next. In the meantime, this is what one founder plus a stack of pipelines actually looks like in 2026 — including a mistake I made a year ago that I'd undo if I could.
We raised over a year ago. The first thing I did with the money was hire — because that's what you do, right? You raise, you hire. It felt like the obvious next move. Everyone I knew who had raised had also hired.
It was the wrong call.
The people I hired were genuinely talented. The problem wasn't them. The problem was that I hired them before I had a revenue pipeline I was dead-set on, before the company really knew what it was. We pivoted not long after, and suddenly the work some of them were hired to do wasn't the work the company needed. The talent was there. The timing was off. With different timing, several of them would have been perfect fits.
Letting people go costs three things at once: money you raised to spend on growth, time you don't have, and the resource a founder has least of — bandwidth. The notice periods, the severance conversations, the role specs, the re-introductions, the team's mood — all of it lives in your head for weeks. You don't write a line of product code while you're doing it. The dollar cost shows up on a P&L; the bandwidth cost shows up nowhere except in everything you didn't ship that month.
The deeper lesson, and the one I'd hand to a friend who just closed a round: getting funded is a blessing and a trap. You lose equity, which you knew going in. The hidden cost is that the money pressures you into hiring before you should, because hiring is the visible signal that the money is "doing something." Sitting on cash while you build pipelines doesn't look like a startup. It IS a startup, in 2026, but it doesn't look like one. So founders hire to look like one. I did.
The rule I'd give myself a year ago: don't spend raised money on payroll until you have a revenue pipeline you're dead-set on. Pipeline first, headcount after. If you're not at that stage, the right use of the money is product, infrastructure, AI tooling, and patient runway — not hiring before you know what the role even is. If someone is dying to join your startup, that's great — but it has to be the perfect fit AND the right time. Either alone isn't enough.
Most takes on AI-native startups end at "small teams." That's the headline, not the insight. The actual insight is what changed about the work.
The old CEO job was: figure out what to do, then go do it (or hire someone to). The new job is: build the system that does it. Every function — marketing, product, design, support, sales — needs its own pipeline. Not "AI tools." Pipelines. Things you trigger and trust. The CEO's job is to make sure each function has built theirs, and to keep refining them when better models drop.
"Every function needs an AI orchestrator" is already on the verge of becoming a meaningless phrase, like "data-driven" was in 2018. So let me make it specific. At Marnix, when I ship a feature, a pipeline reads the changelog entry, generates three demo video variants in different styles, and drops them in my Downloads folder. I pick one and post it. That's an orchestrator. Not a person — a system.
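To make "a system, not a person" concrete, here's a minimal sketch of that orchestrator's shape. Everything in it is hypothetical: the style names, the `render_video` stub (a stand-in for whatever video-generation model the pipeline actually calls), and the file layout are illustrations of the fan-out pattern, not Marnix's real implementation.

```python
from pathlib import Path

# Hypothetical style presets -- the real pipeline's models and prompts
# aren't public; this shows only the orchestration shape.
STYLES = ["closeup", "wide-context", "animated-explainer"]

def render_video(entry: str, style: str) -> bytes:
    # Stand-in for a call out to a video-generation model.
    return f"[{style}] demo for: {entry}".encode()

def run_pipeline(changelog_entry: str, out_dir: Path) -> list[Path]:
    """Fan one changelog entry out into one video file per style."""
    out_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for style in STYLES:
        path = out_dir / f"demo-{style}.mp4"
        path.write_bytes(render_video(changelog_entry, style))
        paths.append(path)
    return paths  # a human picks one and posts it

files = run_pipeline("Shipped voice iteration on desktop", Path("downloads"))
print([p.name for p in files])
```

The human stays in the loop exactly once, at the end, choosing which variant to post. That single decision point is what makes it a pipeline you trigger and trust rather than a tool you babysit.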
1. Banner + brand assets. This is one my old co-founder Filippo taught me before he stepped out. I used to outsource banners and small design jobs on Upwork — cheap and good, but two days of latency before I could publish anything, which always meant publishing later than I wanted to, sometimes never. He showed me how to do the same job with AI tools — same quality, often better brand fit, ready before I finish writing the post. The banner above this one took about six minutes.
2. Changelog → demo videos. Every entry I ship to the changelog gets read by an agent that produces three demo videos in different framings — a close-up product capture, a wider-context shot, and an animated explainer. I pick one. Total of my time involved: 30 seconds. The changelog is at marnix.ai/changelog if you want to see what's actually being shipped.
3. Social cross-posting. TikTok, X, LinkedIn — when a video is ready, agents handle the cross-posting with the right framing for each platform. The TikTok caption is not the LinkedIn caption is not the X thread. The pipeline knows. I used to think this was the kind of thing only an in-house marketing team could do well. It turns out the platform-specific voice is a prompt, not a person.
4. Learning CTO. This one I'm proud of, and I'm aware of the name. It's a feature inside the Marnix product itself — an agent that watches what users build across the platform, learns what good products look like in different verticals (foodie apps look different from CRM dashboards look different from voice-huddle interfaces), and feeds those patterns back into the model context for the next user. It's a CTO that learns from the corpus rather than from one person's head. I'd been mulling something like this for months. Got it shipped a few weeks back.
5. Voice iteration on desktop. The thing I'm most proud of building solo this month. Voice iteration was the hardest engineering problem on the roadmap when we whiteboarded it — multi-turn state, transcription-aware re-prompting, the whole loop. I got it working a few weeks back. You talk to your live Marnix app and watch it change in front of you. It's the feature that makes the whole product worth using.
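The cross-posting pipeline's claim that "the platform-specific voice is a prompt, not a person" can be sketched in a few lines. The templates below are hypothetical placeholders, not the actual prompts the pipeline runs; the point is only that each platform's voice lives in a string, which makes it cheap to tune and trivial to extend.

```python
# Hypothetical prompt templates -- the platform-specific voice is data,
# not a hire. Tuning a platform's tone means editing one string.
PLATFORM_PROMPTS = {
    "tiktok":   "Write a punchy, first-person caption under 150 chars for: {update}",
    "x":        "Write a three-post thread, hook first, for: {update}",
    "linkedin": "Write a reflective, lesson-framed post for: {update}",
}

def caption_prompt(platform: str, update: str) -> str:
    """Build the model prompt that gives each platform its own voice."""
    return PLATFORM_PROMPTS[platform].format(update=update)

print(caption_prompt("tiktok", "voice iteration shipped"))
```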
There's a quieter design choice baked into all of this: Marnix is model-agnostic. We don't bet the product on any specific LLM. When Sonnet 5 ships — or whatever comes after — every Marnix-generated app gets smarter automatically. When a faster transcription model lands, every voice huddle gets snappier. Same for my internal pipelines: the demo-video agent runs on whatever model is best this month, and the prompt for it doesn't care which one.
That's the difference between "AI-powered" and "AI-native." AI-powered companies have to redo their architecture when better models drop. AI-native companies get a free upgrade across the whole stack. Your users get a better app on Tuesday because someone in San Francisco shipped a checkpoint on Monday. You did nothing.
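The model-agnostic design boils down to one layer of indirection: pipelines never call a named model directly. A minimal sketch, with hypothetical model names and a registry pattern that stands in for whatever routing Marnix actually uses:

```python
from typing import Callable

# Hypothetical model registry -- stand-ins for real model clients.
MODELS: dict[str, Callable[[str], str]] = {
    "model-a": lambda prompt: f"model-a: {prompt}",
    "model-b": lambda prompt: f"model-b: {prompt}",
}

CURRENT_BEST = "model-a"

def complete(prompt: str) -> str:
    """Pipelines call this; none of them name a model directly."""
    return MODELS[CURRENT_BEST](prompt)

def set_best(name: str) -> None:
    """The only code that changes when a better model ships."""
    global CURRENT_BEST
    CURRENT_BEST = name
```

When a better checkpoint lands, the upgrade is one `set_best` call: every prompt, every pipeline, every generated app picks it up without a rewrite. That single choke point is the "free upgrade across the whole stack."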
"Build pipelines for every function" is not a strategy. It's a slogan if you don't actually build them. Each of the five pipelines above took an evening or two of fiddling — picking models, writing the prompts, iterating on bad outputs, hooking them into the right tools. Some pipelines I built and threw away because the output wasn't worth the trigger overhead.
"Hire AI-literate people" is the lazy version of this thesis. Literacy is the entry fee, not the moat. The moat is the actual pipelines: who built them, how they're tuned, what they trigger on, what they write back to. That's the work. It is unsexy and goes uncredited and is the entire job.
Given everything above, the shape of what comes next is sharper than it was a year ago. I'm not making junior or senior payroll hires right now. The next person at Marnix is a co-founder, and they fit one of exactly two profiles: technical, or fluent enough with AI that they can run twenty agents in their function.
A marketing person who can't direct an agent fleet — not a fit. A marketing person who builds the ICP-discovery pipeline, the cold-outreach pipeline, the content-test pipeline, and orchestrates all three themselves — exact fit, even if they've never written a line of code. Same for product, ops, sales, design. The job title doesn't matter. The question is: can you run twenty agents and turn that into the work of twenty people?
The constraint of a small team is manpower. The advantage is flexibility. AI is how a small team buys manpower without spending the flexibility. The co-founder I want has already internalized that — and reads the five pipelines above and sees the six I haven't built yet, and wants to build them. The customer-research loop that watches what users do on day 1 and surfaces patterns by day 30. The on-call agent that reads incident metrics and pages me with a fix already drafted. The international SEO engine that ranks Marnix in Seoul, Tokyo, and Berlin without my having to think about it. The launch motion for hitting the indie-hacker and founder community at scale. The self-improving prompt evaluator. The ICP-discovery loop.
There's a year of pipeline-shaped work to do across product, GTM, and ops — and I want a partner who's excited about owning a slice of it.
If you read this and want to talk in person: I'll be in New York early this week, London a few days later, and Amsterdam at the end of the week, with a stretch of AI events along the way. If you're around any of those cities, find me. Otherwise: steven@skylark.dev. Tell me which pipeline you'd build first.
The next $100M company will be built by a founder you've never heard of, with a team of three or four, and a stack of pipelines nobody else has bothered to build. The work is moving from the visible (headcount, office space, slick deck) to the invisible (the systems, the agents, the trigger-graph). It's a worse story to pitch and a better one to live.
I'll write the next post about a specific pipeline in depth. Pick one from the five above and reply — I'll write the one with the most votes.
— Steven