Pre-Work · Self-Paced · Complete Before Day 1
Before the program begins
All students complete six self-paced modules before their admissions interview. No time limit — most people finish in 2–3 weeks working evenings and weekends. This isn't busywork: your pre-work deliverables are reviewed by the admissions team, and the best projects are shared on Day 1.
Module 1
How LLMs Actually Work
Tokens, attention, context windows, hallucination. Explain it to a non-technical executive in 500 words.
Module 2
Prompt Engineering
Build a system prompt, chain-of-thought prompt, and structured JSON output prompt. Test and score each one.
Module 3
Evaluation Frameworks
Build a 10-case eval set for your prompt. Score it. Identify failure modes. Iterate and measure the delta.
Module 4
Domain Mapping
Map AI opportunities in your field. Identify one product to build. Write a full product brief before you arrive.
Module 5
First Deployed Tool
Use Lovable to build and deploy a live AI tool based on your product brief. Get a URL. Show something real.
Module 6
Admissions Prep
Prepare your 15-minute admissions presentation. Know your numbers. Have honest answers to hard questions.
Phase 1
Foundations
Weeks 1–2
How AI tools work in depth — not theory, but the intuition you need to build with them. The no-code stack. Direct API calls. Your first major deliverable.
Week 1
Prompting, Evaluation, and How Models Work
Monday
How AI Models Work + Your First Professional Prompt
Tools: Claude.ai · OpenAI Tokenizer
Transformer architecture conceptually — attention, tokens, context windows. Why models predict next tokens. Hallucination: what it is, why it happens, how to design around it. Live demo: instructor breaks a prompt on purpose and the class diagnoses why. Students write their first professional system prompt for their domain.
Wednesday
Prompt Engineering in Depth
Tools: Claude.ai · Anthropic Docs
Chain-of-thought prompting. Few-shot examples. Structured JSON output. Output format control. Students rebuild their Monday system prompt with all three techniques, then test 10 inputs and score each on accuracy, format, and usefulness.
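The three Wednesday techniques can sit in one prompt. A minimal sketch, assuming a hypothetical support-ticket triage domain — the field names and examples below are invented for illustration, not part of the curriculum:

```python
import json

# Hypothetical few-shot examples for a support-ticket triage prompt.
FEW_SHOT = [
    {"ticket": "App crashes on login", "category": "bug", "urgency": "high"},
    {"ticket": "How do I export my data?", "category": "question", "urgency": "low"},
]

def build_prompt(ticket: str) -> str:
    """Combine chain-of-thought, few-shot examples, and a JSON output contract."""
    examples = "\n".join(json.dumps(e) for e in FEW_SHOT)  # few-shot block
    return (
        "You are a support-ticket triage assistant.\n"
        "Think step by step about the ticket before answering.\n"  # chain-of-thought
        f"Examples:\n{examples}\n"
        'Respond ONLY with JSON: {"category": ..., "urgency": ...}\n'  # structured output
        f"Ticket: {ticket}"
    )

def score_format(reply: str) -> bool:
    """Format check for the scoring step: does the reply parse with both fields?"""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and {"category", "urgency"} <= set(data)
```

`score_format` covers only the format criterion; accuracy and usefulness still need the hand-scoring described above.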
Friday
Evaluation Fundamentals
Tools: Claude.ai · Google Sheets
Why evals are the most underrated skill in AI development. The difference between a demo that works and a product that works. Students build a 10-case eval set with hand-written ideal outputs, score their current prompt, and name their top failure mode.
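The Friday scoring loop is a few lines of bookkeeping. A sketch with an invented four-case eval set — real sets use your own domain inputs and at least ten cases:

```python
from collections import Counter

# Invented eval cases: input, hand-written ideal output, model's actual
# output, and a failure-mode label filled in when the case fails.
cases = [
    {"input": "q1", "ideal": "A", "actual": "A", "failure_mode": None},
    {"input": "q2", "ideal": "B", "actual": "B", "failure_mode": None},
    {"input": "q3", "ideal": "C", "actual": "C", "failure_mode": None},
    {"input": "q4", "ideal": "D", "actual": "??", "failure_mode": "wrong format"},
]

def run_eval(cases):
    """Score exact matches and surface the most common failure mode."""
    score = sum(c["actual"] == c["ideal"] for c in cases) / len(cases)
    failures = Counter(
        c["failure_mode"] for c in cases if c["actual"] != c["ideal"]
    )
    top_failure = failures.most_common(1)[0][0] if failures else None
    return score, top_failure
```

Rerunning `run_eval` after each prompt change gives you the delta Module 3 asks for.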
Week 2
The No-Code AI Stack + Direct API Calls
Major Deliverable
Monday
Cursor, Lovable, Bolt, and Replit
Tools: Cursor · Lovable · Bolt.new · Replit
The no-code AI landscape. When to use which tool. The mental model: you're the product manager, the AI is the engineer. Live demo: instructor builds a complete, deployed AI tool in Lovable in 30 minutes. Students build and deploy their own using their product brief.
Wednesday
Direct API Calls + Connecting Everything
Tools: Replit · Claude API · Make.com
What an API is and why it matters. Making a direct Claude API call from Replit. API cost awareness — calculating cost per call and extrapolating to 100 users. Connecting Lovable front ends to real API backends. Error handling so users see a useful message when something breaks.
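The cost math from this session reduces to simple arithmetic. A sketch using illustrative per-million-token prices, not any provider's published pricing:

```python
def cost_per_call(input_tokens, output_tokens, in_price, out_price):
    """Cost of one API call. Prices are USD per million tokens."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

def monthly_cost(users, calls_per_user_per_day, per_call_cost):
    """Extrapolate one call's cost to a user base over a 30-day month."""
    return users * calls_per_user_per_day * 30 * per_call_cost

# e.g. a 2,000-token-in / 500-token-out call at $3 / $15 per million tokens
per_call = cost_per_call(2000, 500, 3.0, 15.0)  # ≈ $0.0135
# 100 users making 10 calls a day
monthly = monthly_cost(100, 10, per_call)       # ≈ $405 / month
```

The point of the exercise: a cost that looks negligible per call becomes a real line item at 100 users.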
Friday
Major Deliverable — First Deployed AI Tool
Tools: All Week 1–2 tools
Students present their first deployed domain-specific AI tool. 3 minutes each: what it does, a live demo, what breaks and why, and what you'd fix first. Written feedback from instructor and cohort. This is the baseline everything else builds on.
✓ Deliverable: live deployed AI tool at a real URL
Week 2 Deliverable: A live, deployed, domain-specific AI tool. Real URL. Real domain. Real prompt engineering behind it.
Phase 2
Building
Weeks 3–5
AI that knows your domain. Agents that act autonomously. Workflows that run without you. Two more major deliverables.
Week 3
RAG — AI That Knows Your Documents
Monday
RAG Fundamentals + Pipeline Construction
Tools: Pinecone · Claude API · Replit
What RAG is and why it matters. The pipeline: query → embed → search → retrieve → inject → generate. Vector databases and semantic search. Students set up a Pinecone index, embed 10 domain documents, and run their first semantic queries.
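The six-step pipeline can be sketched end to end with a toy in-memory index. This stand-in uses bag-of-words counts instead of real embeddings and skips Pinecone entirely, so it only illustrates the shape of query → embed → search → retrieve → inject → generate:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word counts standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented sample documents; a real index holds your domain documents.
docs = [
    "refund policy allows returns within 30 days",
    "standard shipping takes 5 business days",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    """Search: rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_rag_prompt(query):
    """Inject retrieved context into the prompt sent to the model."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The real version swaps `embed` for an embedding API, `index` for a Pinecone index, and hands `build_rag_prompt`'s output to Claude for the generate step.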
Wednesday
Document Ingestion + Advanced Retrieval
Tools: Pinecone · Replit · Claude API
Chunking strategies and metadata. Ingesting 50+ real domain documents with a repeatable script. Hybrid search (semantic + keyword). Metadata filtering. Reranking. Students identify their top retrieval failure mode and implement the fix.
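A fixed-size chunking pass with overlap and per-chunk metadata fits in a few lines. The word-count sizes and field names below are arbitrary choices for illustration; real pipelines often chunk by tokens or by document structure instead:

```python
def chunk_words(text, source, size=200, overlap=50):
    """Split text into overlapping word-window chunks, tagged with metadata."""
    assert size > overlap, "overlap must be smaller than chunk size"
    words = text.split()
    step = size - overlap  # how far each window advances
    return [
        {"text": " ".join(words[i:i + size]), "source": source, "start_word": i}
        for i in range(0, len(words), step)
    ]
```

The overlap means a sentence straddling a chunk boundary still appears whole in at least one chunk, and the `source` and `start_word` metadata make citations and filtering possible later.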
Friday
RAG Quality and Evaluation
Tools: Pinecone · Claude API · Lovable
Measuring retrieval precision vs. generation groundedness. Building a RAG eval set. Students build the full Lovable front end: chat interface, citations, out-of-scope handling, streaming responses. URL exchanged by end of session for peer feedback.
Week 4
RAG in Production + User Testing
Major Deliverable
Monday
Production Hardening — Caching, Logging, Fallbacks
Tools: Lovable · Pinecone · Replit
Streaming responses so users see output in real time. Fallback messages for pipeline failures. Query logging to Google Sheets or Airtable. Semantic caching for common queries. Students implement all four and run a production readiness check.
Wednesday
User Testing
Tools: Deployed RAG product · Loom
The think-aloud protocol. In-class user testing pairs. Students identify their top 3 user friction points and prioritize the single highest-impact fix before the deliverable. External user sessions scheduled for this week.
Friday
Major Deliverable — RAG-Powered Domain Product
Tools: All RAG tools
Students present their deployed RAG product. 4 minutes: the data it knows, a live demo, user testing findings, and top failure mode. Instructor and LATAM team provide full written assessment by Saturday morning.
✓ Deliverable: deployed RAG product with real data, citations, and user testing incorporated
Week 4 Deliverable: A deployed, domain-specific knowledge product. Real documents. Real users tested it. Real citations in every answer.
Week 5
AI Agents + Workflow Automation
Major Deliverable
Monday
AI Agents — The Agentic Loop
Tools: Replit · Claude API · Tavily Search
The agentic loop: perceive, think, act, observe, repeat. Tool use in Claude. Single-tool agents. Multi-tool agents with domain-specific tools. Reliability and defensive design — human-in-the-loop checkpoints, confirmation steps for write actions, stress testing completion rates.
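The loop named above can be sketched with a scripted stand-in for the model and a single fake tool. Real agents swap in Claude tool-use calls and, per the defensive-design point, add confirmation checkpoints before any write action:

```python
# Toy agentic loop: think, act, observe, repeat.
# fake_model and the "search" tool are scripted stand-ins, not real APIs.
TOOLS = {"search": lambda query: f"result for {query}"}

def fake_model(observations):
    """Scripted 'think' step: search once, then finish with what it saw."""
    if not observations:
        return {"tool": "search", "input": "quarterly revenue"}
    return {"tool": "done", "input": observations[-1]}

def run_agent(model, max_steps=5):
    observations = []                            # what the agent has perceived
    for _ in range(max_steps):                   # repeat, with a step budget
        action = model(observations)             # think: pick the next action
        if action["tool"] == "done":
            return action["input"]               # final answer
        result = TOOLS[action["tool"]](action["input"])  # act
        observations.append(result)              # observe
    return None                                  # budget exhausted: fail safe
```

The `max_steps` cap is the simplest reliability measure: an agent that can loop forever will, eventually.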
Wednesday
Workflow Automation with Make.com
Tools: Make.com · Replit · Claude API · Airtable
Mapping a real business workflow. Event-driven automation: form submission → AI reasoning → route to owner → log to Airtable. Multi-agent orchestrator/subagent patterns for complex tasks. Students build the first version of their Week 5 automated workflow.
Friday
Major Deliverable — Deployed AI Agent
Tools: All Phase 2 tools
Students present their deployed AI agent. 5 minutes: the workflow it automates, a live end-to-end demo, stress test results (completion rate + top failure mode), and what they'd build next. LATAM team full assessment by Saturday evening.
✓ Deliverable: deployed agent automating a real domain workflow with stress-test metrics
Week 5 Deliverable: A working agent that automates a real workflow. Has completed a 20-run stress test. Knows its own failure rate.
Phase 3
Ship & Scale
Weeks 6–8
Production-ready. Real users. Real costs. A capstone that works in front of a live audience at Demo Day.
Week 6
Database Integration + Production Readiness
Monday
Supabase — Real Data Behind Your Product
Tools: Supabase · Lovable · Replit
Connecting AI to a real Postgres database. When AI should read vs. write. Row-level security so users only see their own data. Students wire a Supabase database to their capstone — reads inform responses, writes require user confirmation.
Wednesday
Real-Time Data + Architecture Review
Tools: Supabase · Replit · Lovable
Real-time subscriptions. Scheduled ingestion jobs. Students present their capstone architecture on the whiteboard — cohort identifies what will break under load and what has no fallback. Live data pipeline tested end to end.
Friday
Production Readiness Audit
Tools: Full production checklist
The production checklist: error handling, fallbacks, logging, security, cost instrumentation, input sanitization, rate limiting. Live audit of a volunteer capstone. Students complete their own audit, document every gap with priority, and fix the top three issues.
Week 7
Productizing, Go-To-Market, and Cost Management
Monday
Productizing — The Gap Between Prototype and Product
Tools: Capstone product · Lovable
Onboarding, empty states, user-friendly error messages. The stranger test: a stranger uses your tool without explanation — you watch and take notes without helping. Students fix their top 3 productization gaps and explicitly scope v1.
Wednesday
Go-To-Market + Pricing
Tools: Google Docs
Positioning your AI product. Pricing models: per-seat, usage-based, outcome-based. Finding your first 10 customers. The one-page GTM plan. 90-second pitches to the cohort — no deck, just talking. Students write their full GTM plan and get hard feedback from peers.
Friday
Cost Management + Demo Day Prep
Tools: Google Sheets · Capstone product
API cost modeling at scale. Model selection trade-offs. Prompt caching and compression. Students build a cost model showing margin at 10, 100, and 1,000 users. Demo Day format briefed. Full 7-minute script written and peer-reviewed.
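The margin model built in this session is plain arithmetic. A sketch where every number is an invented placeholder, not real program or pricing data:

```python
def monthly_margin(users, price_per_user, calls_per_user, cost_per_call, fixed_costs):
    """Monthly revenue minus API spend and fixed infrastructure at a given scale."""
    revenue = users * price_per_user
    api_spend = users * calls_per_user * cost_per_call
    return revenue - api_spend - fixed_costs

# Margin at 10, 100, and 1,000 users (placeholder pricing and usage):
# $29/user/month, 300 calls/user/month at $0.01/call, $100 fixed costs.
for n in (10, 100, 1000):
    print(n, monthly_margin(n, 29.0, 300, 0.01, 100.0))
```

The useful output is the shape of the curve: whether margin grows with users or whether per-user API spend eats the price.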
Week 8
Demo Day
Public Demo Day
Monday
Final Build Day
Tools: All capstone tools
No lectures. Full build time. Instructor and LATAM team on the floor for 1:1 support, plus office hours with the instructor — 5 minutes each, focused on the one thing to complete before Demo Day. Every capstone production-ready by end of day.
Wednesday
Full Dry Run
Tools: Presentation setup
Room set up exactly like Demo Day. Every student delivers the full 7-minute presentation. Timed strictly. Written feedback immediately after each. Second run for students with significant changes. Final logistics, Q&A prep, contingency plans.
Friday
DEMO DAY
Public · Austin tech community
7 minutes per student. Live audience of Austin operators, investors, and hiring partners. Every student presents a production-ready AI product with a live demo. Q&A from domain experts and investors. Documented outcomes shared publicly.
✓ Capstone: production-ready AI product presented to a live public audience
Week 8 Capstone: A production-ready AI product. Live audience. Real questions from real investors and operators.
Tools you'll use across the program
Claude API
Lovable
Cursor
Bolt.new
Replit
v0 by Vercel
Pinecone
Supabase
Make.com
Airtable
Tavily Search
ElevenLabs
Twilio
OpenAI API
LangChain
Google Sheets
Ready to start building?
Applications are open now.
Starts July 2026 in Austin, TX. Cohorts are small and selective — we review every application personally.
Apply Now → See tuition & payment options