Editor's Note: AI is entering enterprises, but the real question is not "whether to use agents," but whether these agents can understand the company itself.
Using the author's first 100 days at Ramp as a narrative thread, this article discusses a more fundamental issue: a high-speed company cannot rely solely on newcomers slowly reading documents, asking colleagues, and filling in context, nor can it let each AI tool operate in isolation. What's truly important is building a continuously updated "company brain" that consolidates meetings, documents, Slack discussions, customer feedback, and product decisions, allowing both newcomers and agents to start from the same contextual foundation.
When context is systematized, onboarding is no longer just a lengthy adaptation process, and AI is no longer just a collection of isolated tools. The value of enterprise AI may ultimately lie not in how many agents are deployed, but in whether a company can first establish a trustworthy, readable, and reusable knowledge foundation.
The following is the original text:
In a 4×100-meter relay race, victory is often not determined by the entire race but compressed into a 20-meter exchange zone. Runners must pass the baton at high speed: if the receiving runner starts too early, the baton drops; if they start too late, the passing runner has to slow down, and the entire team instantly loses its advantage. If the handoff itself isn't precise—if any aspect of hand position, angle, or timing is off—the result can also be a dropped baton.
A team can have the fastest individual runners yet still lose in those 20 meters. Speed matters, but the handoff matters too. What truly decides the outcome is whether both can be achieved simultaneously.
Every job handover I've seen is essentially a relay race, except one runner is still in the starting blocks. A new hire starts on Monday, beginning from zero; the organization, however, doesn't slow down and continues operating at its original pace. Thus, the newcomer can only rely on reading documents, lurking in Slack, repeatedly asking the same few questions, and spending three months figuring out how the organization works until they finally become "useful."
We usually treat this gap as a matter of time, as if given enough of it, newcomers will naturally catch up. But that's not the case. This gap must be solved systematically, or it will persist.
Context Is the Organization's Real Handoff System
It's been about 100 days since I joined Ramp. Before that, I spent five years at Plaid, where I knew every product, every customer story, and the background behind every decision. I could tell those stories without thinking. But at Ramp, I knew almost none of this.
And product marketing is, at its core, storytelling. If you don't know the characters, plot, and backstory, you can't truly tell the story well.
From day one, my goal was to build an AI-native product marketing organization. But to do this without context, I first had to expand my own knowledge base—the "context layer" that underpins all work.
Ramp is a company known for its speed. There's no room for "catching up slowly next quarter." The company releases, iterates, and advances every week. You either keep up, or you become a cost the organization has to carry.
Simultaneously, I was undergoing another layer of onboarding. Ramp is already fast, but AI evolves even faster, and I had to learn both a new company and a new way of working. I'm not an engineer; the last time I opened a terminal was in a university computer science class. In other words, I had to fill in the organizational context while adapting to an AI-powered workflow, and the two difficulties compounded.
What ultimately freed me from this pressure wasn't completing a specific article, product launch, or workflow, but treating "context" itself as the deliverable. If the context layer is built correctly, all subsequent work becomes lower cost.
So, I started building something truly scalable: a system that could help me get up to speed quickly, like a good wiki helps a researcher. By week three, it could draft content based on my notes; by week eight, it could summarize meetings I hadn't attended. Learning and catching up didn't disappear, but as the system filled out, their cost began to decrease day by day.
A personal version of this idea has been around for a while. Former Tesla AI lead and OpenAI founding member Karpathy wrote an article in April describing what he called a "personal LLM knowledge base": a folder storing raw inputs like papers, articles, transcripts, and personal notes; an LLM that generates a wiki from this material; and an editor like Obsidian as the front end. When the material accumulates to about 100 articles, the LLM can answer complex questions about the personal corpus without needing sophisticated retrieval techniques.
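That loop is simple enough to sketch. The version below is my own minimal illustration of the pattern Karpathy describes, not code from his post: gather every markdown note in a folder into one long context block, then hand the whole thing to any long-context LLM along with a question. At roughly 100 articles the corpus usually fits in context directly, which is exactly why no sophisticated retrieval is needed.

```python
from pathlib import Path


def build_corpus(vault_dir: str, max_chars: int = 400_000) -> str:
    """Concatenate every markdown note in the folder into one context block.

    At ~100 articles this typically fits a long-context model directly,
    so no retrieval pipeline is needed yet.
    """
    parts, total = [], 0
    for path in sorted(Path(vault_dir).rglob("*.md")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        chunk = f"## Source: {path.name}\n{text}\n"
        if total + len(chunk) > max_chars:
            break  # stay under the model's context budget
        parts.append(chunk)
        total += len(chunk)
    return "\n".join(parts)


def make_prompt(corpus: str, question: str) -> str:
    """Wrap the corpus and a question into a single prompt for the LLM."""
    return f"Answer using only the notes below.\n\n{corpus}\n\nQuestion: {question}"
```

The front end (Obsidian, in his setup) just edits the same folder the script reads from, so the corpus and the wiki never drift apart.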
His judgment: There's an opportunity here for a truly great new product, not just a collection of makeshift scripts.
The personal version exists today. But the company version does not. That's the problem.
Roughly, here are the pieces I built in my first 100 days. They're not yet polished, but together they form the "connective tissue" within the organization.
The core is an Obsidian vault, read from and written to by Claude. Meeting transcripts, documents, public viewpoints, and personal notes I encounter all go into this knowledge base. When I ask, "What exactly did Geoff and I decide about the homepage three weeks ago?" it searches this vault for answers, rather than relying on the model's generalized memory.
To continuously feed content into this vault, Granola defaults to recording every meeting and archives the transcript overnight. So, a meeting I missed on Monday is queryable by Wednesday. To help others in the company keep up, I chose to work openly—most of what I'm building appears first in #team-pmm or relevant launch project channels before entering Notion documents. The building process itself is a synchronization mechanism.
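The query side of this is deliberately boring: instead of trusting the model's generalized memory, find the notes that actually mention the topic and feed only those in as grounding. A sketch of that lookup, with my own file layout and function names rather than Ramp's internals:

```python
from pathlib import Path


def search_vault(vault_dir: str, terms: list[str], max_notes: int = 5) -> list[str]:
    """Return vault notes that mention every search term, newest first.

    These become the grounding context for a question like
    'what did we decide about the homepage three weeks ago?'.
    """
    hits = []
    for path in Path(vault_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        lower = text.lower()
        if all(t.lower() in lower for t in terms):
            # sort key: modification time, so recent decisions win
            hits.append((path.stat().st_mtime, path.name, text))
    hits.sort(reverse=True)
    return [f"## {name}\n{text}" for _, name, text in hits[:max_notes]]
```

Keyword matching over a few hundred notes is crude but sufficient at this scale; the point is that the answer is anchored to a dated transcript, not to whatever the model half-remembers.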
On top of this vault, there's a small library of named skills that agents can call on demand. One skill generates an agenda based on my last four meetings with a specific person; another scans Slack for a week's worth of product updates and turns them into article ideas. Each skill is roughly 200 lines of markdown, replacing a category of work that used to be manual.
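A skill, in this setup, is just a markdown file an agent loads by name. One way to sketch the registry side of that; the frontmatter fields here are an assumption for illustration, not Ramp's actual format:

```python
from pathlib import Path


def load_skills(skills_dir: str) -> dict[str, dict]:
    """Index markdown skill files by the 'name:' field in their frontmatter.

    An agent can then pull a single skill's body into context on demand
    instead of carrying every instruction all the time.
    """
    skills = {}
    for path in Path(skills_dir).glob("*.md"):
        lines = path.read_text(encoding="utf-8").splitlines()
        meta = {}
        if lines and lines[0].strip() == "---":
            for line in lines[1:]:
                if line.strip() == "---":
                    break  # end of frontmatter block
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
        name = meta.get("name", path.stem)
        skills[name] = {"meta": meta, "body": "\n".join(lines)}
    return skills
```

Because each skill is plain markdown, editing one is as cheap as editing a doc, which is what lets a ~200-line file replace a whole category of manual work.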
Additionally, I built a dynamic product roadmap based on Ramp's internal application platform. It reads from the same context layer, so it doesn't go stale because it was never a static document to begin with. There's also a morning digest sent to my private Slack messages at 8 a.m. daily: what shipped yesterday, where things are stuck, what needs my response. This is compiled while I sleep.
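The digest can be as simple as a nightly job that collects notes touched in the last 24 hours and groups their tagged lines under three headings. A minimal sketch under the same assumptions as above; the section tags are hypothetical, not Ramp's conventions:

```python
import datetime as dt
from pathlib import Path

SECTIONS = ("shipped", "stuck", "needs-response")  # hypothetical tags


def morning_digest(vault_dir: str, now: dt.datetime) -> str:
    """Group yesterday's tagged note lines into a short Slack-ready summary."""
    cutoff = (now - dt.timedelta(days=1)).timestamp()
    buckets = {s: [] for s in SECTIONS}
    for path in Path(vault_dir).rglob("*.md"):
        if path.stat().st_mtime < cutoff:
            continue  # only notes touched in the last 24 hours
        for line in path.read_text(encoding="utf-8").splitlines():
            stripped = line.lstrip()
            for tag in SECTIONS:
                if stripped.startswith(f"#{tag}"):
                    buckets[tag].append(stripped[len(tag) + 1:].strip())
    out = [f"Digest for {now:%Y-%m-%d}"]
    for tag in SECTIONS:
        out.append(f"\n*{tag}*")
        out.extend(f"- {item}" for item in buckets[tag] or ["(nothing)"])
    return "\n".join(out)
```

Run on a schedule and posted to a private channel, this is the "compiled while I sleep" piece: it reads the same vault everything else writes to, so it costs nothing extra to maintain.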
Individually, these things aren't groundbreaking. But together, they offer a working answer: If a company had the kind of wiki Karpathy described, what would it look like?
You can call it a wiki, a graph, a context layer, or a company brain. The name isn't important; the function is. It must be able to absorb all the signals the company already generates: meetings, Slack discussions, documents, code, transcripts, customer calls, and key decisions, and stay continuously updated without relying on manual maintenance. It must also be the first thing every new hire, every new agent, reads before starting work.
If a new employee starts tomorrow, what should they read on day one? If the real answer is a 2024 Notion document plus a stale Confluence link, that's essentially asking them to receive the baton from a standstill.
From Point Solutions to the Company Brain: AI's Real Gap
Today, the main way AI enters enterprises still relies on forward-deployed engineers. Whether it's OpenAI, Anthropic, or large consulting firms, they choose to build specific workflows on top of models.
This work is real and valuable. But it remains stuck in the "chatbot era" of enterprise AI: narrowly defined tools built around specific tasks, useful in isolation but not connected to a system that yields compounding returns.
The real "company brain" hasn't arrived yet. A customer service agent and an HR onboarding agent might have been built in different months by different teams. They don't know what was decided in the last all-hands meeting, how the company understands its market, or what judgment the head of sales offered at the last management offsite. Each agent is just a chatbot with a specific duty, but they don't share the same brain.
This is the biggest gap today. And outside of research labs, few are building products around this problem.
If you're building a team or starting a company in 2026, the order of operations is different from 2022. Write the context file first, then install the tools. Record every meeting. Build the wiki first, then the dashboard. Deliver skills, not slides. Have new hires read the wiki on day one and start contributing to it on day two. Hire and promote people who can keep the "company brain" running, and reuse the agents that actually read it.
Context is not a side project. It's the infrastructure that makes all AI investments truly pay off.
I'm currently building parts of this at Ramp: the wiki, the skill library, applications that read from the same context layer, and organizational mechanisms to keep feeding it content. It's still small and early. If you're also trying to build a company-level version elsewhere, I'd love to compare notes. More useful than one trustworthy brain is two brains in the same room.
Back to the relay race. The real condition for victory is not the cleanest handoff or the fastest leg, but both happening simultaneously in the same 20-meter stretch.
A new hire reads the company brain, then starts sprinting. A new agent reads the company brain, then starts working. A new customer connects to the company brain, then is up and running from day one.
When the term "ramp-up" loses its meaning, we'll know we've gotten it right.






