Forward Deployed, Episode 2: Claude Code Skills and the Progressive Disclosure Problem

Welcome back to episode 2 of Forward Deployed. This week Noah and Lance dive deep into Claude Code skills, a deceptively simple feature that’s changing how we think about building AI agents. Noah walks through the Alephic CLI he’s been building and reveals a wild hook-based routing system using Cerebras at 3,000 tokens per second.

Plus: Why Andreessen thinks AI isn’t the Internet redux.

Key Topics Covered

  • Claude Code skills and the progressive disclosure problem (a minimal skill sketch follows this list)

  • How Noah built a hook to solve the 10% skill hit rate

  • Tier 1 vs Tier 2 action space: Why Manus and Anthropic converged on the same architecture independently

  • 3,000 tokens per second: Using Cerebras Llama 120B as an invisible routing layer for every user message

  • The MCP/skill/command convergence: Are they all just different flavors of the same primitive?

  • Vision feedback loop: Turning Gemini into a Pentagram creative director to critique Claude’s web designs

  • Andreessen’s “computers v2” thesis: Why AI isn’t the Internet redux, it’s the first von Neumann architecture replacement in 80 years

  • Git workflows with Claude Code: Why Lance and Noah don’t worry about merge conflicts anymore
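
For reference, a skill is little more than a directory containing a SKILL.md file: YAML front matter with a name and description, followed by the full instructions. Only the front matter is loaded up front; the body is pulled in when the model decides the skill applies, which is the progressive disclosure the episode keeps returning to. A minimal sketch, with illustrative names rather than Noah's actual Alephic CLI layout:

```
.claude/skills/
  web-design-review/
    SKILL.md        <- front matter always visible; body loaded on demand
    reference/      <- optional supporting files, read only when needed
```

```
---
name: web-design-review
description: Critique a generated web page for layout, typography, and visual hierarchy. Use when the user asks for design feedback.
---

# Web Design Review

1. Screenshot the page you are reviewing.
2. Compare it against the checklist in reference/checklist.md.
3. Return specific, actionable revisions rather than general praise.
```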

Timestamps

  • 00:11 – Welcome to episode 2 on Claude Code skills

  • 00:28 – What are Claude Code skills? Not much more than a folder full of prompts

  • 01:10 – Lance: Skills as “instructing a new hire” with subfolder instructions

  • 01:25 – Simon Willison and Jesse Vincent’s “superpowers” discovery

  • 04:45 – Noah demos the Alephic CLI skill directory structure

  • 07:32 – The hook-based skill search system using Cerebras (a minimal sketch follows the timestamps)

  • 08:19 – Lance reveals: YAML front matter always loads into system prompt

  • 09:32 – The 10% skill hit rate problem when you have 10+ skills

  • 10:08 – Cerebras Llama 120B running at 3,000 tokens per second for invisible routing

  • 13:17 – The universal pattern: Everyone’s trying to control context

  • 15:59 – Tier 1 vs Tier 2 action space: Manus and Anthropic converge independently

  • 21:29 – Noah’s big challenge: Getting models to consistently look for skills

  • 28:04 – Hit rate drops to 10% even with only 4–5 skills

  • 30:54 – Could progressive disclosure become built-in like chain of thought?

  • 34:06 – Lance on externalizing context to file systems

  • 35:15 – Vision feedback loop: Gemini as Pentagram creative director critiquing Claude’s designs

  • 37:57 – Andreessen: AI isn’t the Internet, it’s computers v2

  • 42:08 – Why Noah and Lance don’t worry about merge conflicts anymore
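
To make the routing hook discussed at 07:32 and 10:08 concrete: the idea is a script that runs on every user message, asks a very fast Cerebras-hosted model which skill (if any) matches, and emits a hint that gets added to Claude's context. Below is a minimal sketch of that idea, not Noah's implementation. It assumes Claude Code's UserPromptSubmit hook (which passes the message as JSON on stdin and treats stdout as extra context), Cerebras's OpenAI-compatible endpoint, and placeholder values for the model id, skills path, and CEREBRAS_API_KEY.

```python
#!/usr/bin/env python3
"""Sketch: route each user message to a skill with a fast Cerebras-hosted model."""
import json
import os
import sys
from pathlib import Path

from openai import OpenAI  # Cerebras exposes an OpenAI-compatible API

SKILLS_DIR = Path(".claude/skills")   # assumed project-level skills folder
MODEL = "llama-3.3-70b"               # placeholder; use whichever Cerebras model you run


def skill_descriptions() -> str:
    """Collect the YAML front matter (name + description) from each SKILL.md."""
    out = []
    for skill_md in SKILLS_DIR.glob("*/SKILL.md"):
        text = skill_md.read_text()
        if text.startswith("---"):
            front = text.split("---", 2)[1]
            out.append(f"{skill_md.parent.name}: {' '.join(front.split())}")
    return "\n".join(out)


def main() -> None:
    event = json.load(sys.stdin)              # hook input arrives as JSON on stdin
    prompt = event.get("prompt", "")
    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",
        api_key=os.environ["CEREBRAS_API_KEY"],
    )
    reply = client.chat.completions.create(
        model=MODEL,
        temperature=0,
        max_tokens=20,
        messages=[
            {
                "role": "system",
                "content": "You route requests to skills. Reply with exactly one "
                           "skill name from the list below, or NONE.\n\n" + skill_descriptions(),
            },
            {"role": "user", "content": prompt},
        ],
    )
    choice = (reply.choices[0].message.content or "").strip()
    if choice and choice != "NONE":
        # Stdout from a UserPromptSubmit hook is appended to Claude's context.
        print(f"A relevant skill exists: read {SKILLS_DIR}/{choice}/SKILL.md before answering.")


if __name__ == "__main__":
    main()
```

At 3,000 tokens per second the extra round trip is fast enough to run on every message without the user noticing, which is the whole point of using Cerebras as the routing layer.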

Links & References

Companies Mentioned

  • Anthropic – Claude Code creator

  • LangChain – Where Lance is a founding engineer

  • Alephic – Noah’s AI consulting company

  • Pentagram – Design agency Noah used as creative director persona

Development Tools

  • Obsidian – Note-taking use case for Claude Code

  • Git worktrees – How the team manages multi-branch development

About the Hosts

Noah Brier is co-founder of Alephic, an AI consulting company helping brands build custom AI systems. He writes about AI strategy and implementation.

Lance Martin is a founding engineer at LangChain, where he works on developer tools for building AI applications.

Connect with the Hosts

Subscribe for weekly episodes exploring how AI is actually being deployed in the real world.
