In the AI Era, How to Onboard Without Starting from Scratch

marsbit · Published 2026-05-17 · Updated 2026-05-17

Introduction

In the AI era, onboarding new employees often resembles a botched relay race baton handoff, where the organization maintains speed while the newcomer starts from zero. The author, after joining Ramp, argues the core problem is a lack of accessible, shared organizational "context"—the collective knowledge from meetings, documents, Slack discussions, and decisions. Instead of relying on slow, manual onboarding or isolated AI tools, the solution is building a continuously updated "company brain." This system acts as a central, AI-native knowledge base that absorbs all company signals. The author describes building a prototype using an Obsidian vault powered by Claude, fed by automated meeting transcripts and notes, and topped with reusable agent "skills." The current enterprise AI approach, deploying specific workflow agents, is likened to the "chatbot era"—useful but disconnected. The real gap is the absence of a shared brain that all agents and employees can access from day one. The future lies in making context layer infrastructure the priority: write context first, then install tools; record every meeting; build the wiki before the dashboard. When new hires, AI agents, and even customers can immediately access this living company brain, the costly "ramp-up" period becomes obsolete. True organizational speed is achieved when maximum velocity and seamless context transfer happen simultaneously.

Editor's Note: AI is entering enterprises, but the real question is not "whether to use agents," but whether these agents can understand the company itself.

Using the author's first 100 days at Ramp as a narrative thread, this article discusses a more fundamental issue: a high-speed company cannot rely solely on newcomers slowly reading documents, asking colleagues, and filling in context, nor can it let each AI tool operate in isolation. What's truly important is building a continuously updated "company brain" that consolidates meetings, documents, Slack discussions, customer feedback, and product decisions, allowing both newcomers and agents to start from the same contextual foundation.

When context is systematized, onboarding is no longer just a lengthy adaptation process, and AI is no longer just a collection of isolated tools. The value of enterprise AI may ultimately lie not in how many agents are deployed, but in whether a company can first establish a trustworthy, readable, and reusable knowledge foundation.

The following is the original text:

In a 4×100-meter relay race, victory is often not determined by the entire race but compressed into a 20-meter exchange zone. Runners must pass the baton at high speed: if the receiving runner starts too early, the baton drops; if they start too late, the passing runner has to slow down, and the entire team instantly loses its advantage. If the handoff itself isn't precise—if any aspect of hand position, angle, or timing is off—the result can also be a dropped baton.

A team can have the fastest individual runners yet still lose in those 20 meters. Speed matters, but the handoff matters too. What truly decides the outcome is whether both can be achieved simultaneously.

Every job handover I've seen is essentially a relay race, except one runner is still in the starting blocks. A new hire starts on Monday, beginning from zero; the organization, however, doesn't slow down and continues operating at its original pace. Thus, the newcomer can only rely on reading documents, lurking in Slack, repeatedly asking the same few questions, and spending three months figuring out how the organization works until they finally become "useful."

We usually treat this gap as a matter of time, as if given enough of it, newcomers will naturally catch up. But that's not the case. This gap must be solved systematically, or it will persist.

Context Is the Organization's Real Handoff System

It's been about 100 days since I joined Ramp. Before that, I spent five years at Plaid, familiar with every product, every customer story, and the background behind every decision. I could tell those stories without thinking. But at Ramp, I knew almost none of this.

And product marketing is, at its core, storytelling. If you don't know the characters, plot, and backstory, you can't truly tell the story well.

From day one, my goal was to build an AI-native product marketing organization. But to do this without context, I first had to expand my own knowledge base—the "context layer" that underpins all work.

Ramp is a company known for its speed. There's no room for "catching up slowly next quarter." The company releases, iterates, and advances every week. You either keep up, or you become an additional cost to the organization's operation.

Simultaneously, I was undergoing another layer of onboarding. Ramp is already fast, but AI evolves even faster, and I had to learn both a new company and a new way of working. I'm not an engineer; the last time I opened a terminal was in a university computer science class. In other words, I had to fill in the organizational context and adapt to a new AI-powered workflow at the same time, and the two difficulties compounded each other.

What ultimately freed me from this pressure wasn't completing a specific article, product launch, or workflow, but treating "context" itself as the deliverable. If the context layer is built correctly, all subsequent work becomes lower cost.

So, I started building something truly scalable: a system that could help me get up to speed quickly, like a good wiki helps a researcher. By week three, it could draft content based on my notes; by week eight, it could summarize meetings I hadn't attended. Learning and catching up didn't disappear, but as the system filled out, their cost began to decrease day by day.

A personal version of this idea has been around for a while. Former Tesla AI lead and OpenAI founding member Karpathy wrote an article in April describing what he called a "personal LLM knowledge base": a folder storing raw inputs like papers, articles, transcripts, and personal notes; an LLM that generates a wiki from this material; and an editor like Obsidian as the front end. When the material accumulates to about 100 articles, the LLM can answer complex questions about the personal corpus without needing sophisticated retrieval techniques.

His judgment: There's an opportunity here for a truly great new product, not just a collection of makeshift scripts.

The personal version exists today. But the company version does not. That's the problem.

Roughly, here are the pieces I built in my first 100 days. None of them is polished yet, but together they form the "connective tissue" within the organization.

The core is an Obsidian vault, read from and written to by Claude. Meeting transcripts, documents, public viewpoints, and personal notes I encounter all go into this knowledge base. When I ask, "What exactly did Geoff and I decide about the homepage three weeks ago?" it searches this vault for answers, rather than relying on the model's generalized memory.

To continuously feed content into this vault, Granola defaults to recording every meeting and archives the transcript overnight. So, a meeting I missed on Monday is queryable by Wednesday. To help others in the company keep up, I chose to work openly—most of what I'm building appears first in #team-pmm or relevant launch project channels before entering Notion documents. The building process itself is a synchronization mechanism.

On top of this vault, there's a small library of named skills that agents can call on demand. One skill generates an agenda based on my last four meetings with a specific person; another scans Slack for a week's worth of product updates and turns them into article ideas. Each skill is roughly 200 lines of markdown, replacing a category of work that used to be manual.

Additionally, I built a dynamic product roadmap based on Ramp's internal application platform. It reads from the same context layer, so it doesn't go stale because it was never a static document to begin with. There's also a morning digest sent to my private Slack messages at 8 a.m. daily: what shipped yesterday, where things are stuck, what needs my response. This is compiled while I sleep.
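The digest itself is just a formatting job over three feeds pulled from the context layer. The sketch below assumes that shape (the section names and input lists are illustrative, and the Slack delivery step is omitted); the real version is assembled overnight from the same vault the other tools read.

```python
def build_digest(shipped, blocked, needs_reply):
    """Format a morning digest from three lists of short update strings.

    Hypothetical digest shape: one bolded section per feed, empty
    feeds skipped, ready to post as a Slack message.
    """
    sections = [
        ("Shipped yesterday", shipped),
        ("Stuck", blocked),
        ("Needs your reply", needs_reply),
    ]
    lines = []
    for title, items in sections:
        if items:
            lines.append(f"*{title}*")
            lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)
```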

Individually, these things aren't groundbreaking. But together, they offer a working answer: If a company had the kind of wiki Karpathy described, what would it look like?

You can call it a wiki, a graph, a context layer, or a company brain. The name isn't important; the function is. It must be able to absorb all the signals the company already generates: meetings, Slack discussions, documents, code, transcripts, customer calls, and key decisions, and stay continuously updated without relying on manual maintenance. It must also be the first thing every new hire, every new agent, reads before starting work.

If a new employee starts tomorrow, what should they read on day one? If the real answer is a 2024 Notion document plus a stale Confluence link, that's essentially asking them to receive the baton from a standstill.

From Point Solutions to the Company Brain: AI's Real Gap

Today, the main way AI enters enterprises still relies on forward-deployed engineers. Whether it's OpenAI, Anthropic, or large consulting firms, they choose to build specific workflows on top of models.

This work is real and valuable. But it remains stuck in the "chatbot era" of enterprise AI: narrowly defined tools built around specific tasks, useful in isolation but not connected to a system that yields compounding returns.

The real "company brain" hasn't arrived yet. A customer service agent and an HR onboarding agent might have been built in different months by different teams. They don't know what was decided in the last all-hands meeting, how the company understands its market, or what judgment the head of sales offered at the last management offsite. Each agent is just a chatbot with a specific duty, but they don't share the same brain.

This is the biggest gap today. And outside of research labs, few are building products around this problem.

If you're building a team or starting a company in 2026, the order of operations is different from 2022. Write the context file first, then install the tools. Record every meeting. Build the wiki first, then the dashboard. Deliver skills, not slides. Have new hires read the wiki on day one and start contributing to it on day two. Hire and promote the people who keep the "company brain" running, and reuse the agents that actually read it.

Context is not a side project. It's the infrastructure that makes all AI investments truly pay off.

I'm currently building parts of this at Ramp: the wiki, the skill library, applications that read from the same context layer, and organizational mechanisms to keep feeding it content. It's still small and early. If you're also trying to build a company-level version elsewhere, I'd love to compare notes. More useful than one trustworthy brain is two brains in the same room.

Back to the relay race. The real condition for victory is not the cleanest handoff or the fastest leg, but both happening simultaneously in the same 20-meter stretch.

A new hire reads the company brain, then starts sprinting. A new agent reads the company brain, then starts working. A new customer connects to the company brain, then is up and running from day one.

When the term "ramp-up" loses its meaning, we'll know we've gotten it right.

Related Questions

Q: What is the main obstacle to AI's effective integration into enterprises, according to the article?

A: The main obstacle is not whether to use AI agents, but the lack of a centralized, continuously updated "company brain" or knowledge base. Companies currently lack a unified, trustworthy, and reusable knowledge foundation that captures the full organizational context (meetings, documents, Slack discussions, decisions). AI tools are deployed as isolated "chatbots" for specific tasks without sharing this common understanding, limiting their true value.

Q: What analogy does the author use to describe the problem of employee onboarding and knowledge transfer?

A: The author uses the analogy of a 4×100-meter relay race. The handoff zone between runners is compared to the knowledge transfer process when a new employee joins. If the handoff (context transfer) is not smooth and precise, which is akin to a new employee starting from a standstill, the entire team (company) loses momentum and efficiency, regardless of individual talent.

Q: What personal system did the author build at Ramp to solve their own "context gap"?

A: The author built a system centered on an Obsidian vault (knowledge base) read from and written to by Claude. It ingested meeting transcripts, documents, notes, and public communications. This was augmented by Granola for automatic meeting transcription, a library of named "skills" (agent instructions for specific tasks like agenda generation), a dynamic product roadmap, and a daily morning digest sent via Slack.

Q: What key shift in operational priority does the author suggest for companies in the AI era (e.g., in 2026 vs. 2022)?

A: The author suggests a fundamental shift: instead of installing tools first, companies should write the context file first, then install the tools. The priority is to build the foundational "wiki" or "company brain" that captures organizational context. Only after this knowledge infrastructure is established should tools and agents be deployed to read from and contribute to it.

Q: According to the author, when will we know the "company brain" approach is successful?

A: We will know it's successful when the term "ramp-up" (the lengthy period a new employee needs to become productive) loses its meaning. Success is achieved when new employees, new AI agents, and even new customers can read the company brain from day one and immediately begin contributing or operating effectively within the organizational context.
