Voice Coding With Push to Talk | Generated by AI

2026.02

Question: How does Peter Steinberger use voice to control his AI coding assistants, such as Claude Code or Codex, given that they don’t support voice input natively?

Answer:

Peter Steinberger, the creator of OpenClaw (formerly Clawdbot/Moltbot), relied heavily on voice input when working with his AI coding agents, even though tools like Claude Code and OpenAI’s Codex do not natively accept voice.

He described his workflow in interviews (notably on the Lex Fridman Podcast #491, among others): instead of typing prompts, he dictated them to the agent by voice.

This voice-driven, conversational style was so intensive that he once lost his voice from overuse.

He preferred dictation over typing long prompts: speaking let him issue detailed instructions quickly and keep a conversational flow with the agent.

Since native voice input is missing from Claude Code and Codex, he bridged the gap with external push-to-talk (PTT) dictation: a speech-to-text tool transcribes what he says, and the resulting text is sent to the agent as an ordinary prompt.

In later setups (e.g., with OpenClaw itself), voice features expanded — including Voice Wake, Talk Mode, and integrations with ElevenLabs for speech — but for pure coding with Claude Code / Codex, the core method was PTT + speech-to-text → text prompt to the agent.
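The PTT + speech-to-text → text-prompt pipeline can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Steinberger's actual tooling: the transcription engine is injected as a callable (in practice it would be a local or cloud speech-to-text model), and the agent is assumed to be a Claude Code-style CLI that accepts a one-shot prompt via `claude -p`.

```python
import subprocess


class PushToTalk:
    """Sketch of a push-to-talk dictation bridge (assumed design):
    hold a key to buffer audio, release to transcribe the buffer and
    forward the text to a CLI coding agent as a plain prompt."""

    def __init__(self, transcribe, agent_cmd=("claude", "-p")):
        # transcribe: callable mapping raw audio bytes -> text.
        # Stubbed here so the pipeline can run without a microphone;
        # a real setup would call a speech-to-text engine instead.
        self.transcribe = transcribe
        self.agent_cmd = list(agent_cmd)  # hypothetical agent CLI
        self._recording = False
        self._chunks: list[bytes] = []

    def key_down(self) -> None:
        """PTT key pressed: start buffering audio."""
        self._recording = True
        self._chunks = []

    def feed_audio(self, chunk: bytes) -> None:
        """Audio callback: accumulate raw audio while the key is held."""
        if self._recording:
            self._chunks.append(chunk)

    def key_up(self) -> list[str]:
        """PTT key released: transcribe the buffered audio and build
        the full agent command line (returned for inspection)."""
        self._recording = False
        text = self.transcribe(b"".join(self._chunks))
        return self.agent_cmd + [text]

    def dispatch(self) -> None:
        """Actually invoke the agent (requires the CLI to be installed)."""
        subprocess.run(self.key_up(), check=True)
```

A usage pass with a stubbed transcriber shows the shape of the hand-off: `key_down()`, stream audio chunks via `feed_audio()`, then `key_up()` yields something like `["claude", "-p", "fix the failing test"]`, which is exactly the text prompt the agent would have received from the keyboard.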

This allowed him to “vibe code” at high speed, ship massive amounts of code (thousands of commits), and treat the AI like a real-time collaborator he could talk to.
