
Drive AI Coding Agents With Your Voice

AI coding agents like Claude Code, codex-cli, and cursor-agent produce better results when given detailed, context-rich instructions. But typing comprehensive prompts repeatedly is tedious, and it's easy to leave out crucial details when you type each request by hand.

Using voice input solves this. You can effortlessly dictate complete, nuanced instructions without the friction of typing. This leads to better first responses, fewer follow-ups, and faster development cycles.

Basic Usage

The simplest approach uses command substitution to send hns output directly to your coding agent:

# Claude Code
claude "$(hns)"

# Codex CLI
codex "$(hns)"

# Cursor CLI
cursor-agent "$(hns)"

After running the command, speak your prompt, press Enter, and the transcribed text is sent to your agent.

Why Detailed Initial Prompts Matter

Large language models perform optimally with comprehensive, upfront context. As conversation length grows, performance can degrade. A detailed initial prompt covering what to build, constraints, preferred libraries, edge cases, and testing expectations yields higher quality code in the first response, minimizing iterative corrections.

Voice input naturally facilitates this level of detail without the mental overhead of typing.

Streamlining with Aliases

For frequent use, create aliases to reduce typing. Aliases provide a quick shortcut for simple commands.

Add to your ~/.bashrc or ~/.zshrc:

# Dictate prompt to Claude Code, Codex CLI, or Cursor CLI

alias hclaude='claude "$(hns)"'
alias hcodex='codex "$(hns)"'
alias hcur='cursor-agent "$(hns)"'

# Send last recording to Claude Code, Codex CLI, or Cursor CLI

alias hlclaude='claude "$(hns --last)"'
alias hlcodex='codex "$(hns --last)"'
alias hlcur='cursor-agent "$(hns --last)"'

After adding aliases, restart your terminal or source the file (e.g., source ~/.zshrc).

The --last Flag

The --last flag retrieves your previous transcription without re-recording. Use it when:

  • Your network connection drops
  • You hit a rate limit
  • You want to retry the same prompt
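As a sketch, the flag can be folded into a single helper that either records a fresh prompt or replays the stored one (the `hc` name and its `-l` flag are hypothetical, not part of hns):

```shell
# Dictate a prompt to Claude Code; pass -l to resend the last
# transcription instead of recording again.
hc() {
  if [ "$1" = "-l" ]; then
    claude "$(hns --last)"
  else
    claude "$(hns)"
  fi
}
```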

Streamlining with Functions

Functions provide more flexibility than aliases, especially when you want to add preprocessing steps. They're more maintainable and allow for more complex workflows.

Add to your ~/.bashrc or ~/.zshrc:

# Dictate prompt to Claude Code, Codex CLI, or Cursor CLI

hclaude() {
  claude "$(hns)"
}

hcodex() {
  codex "$(hns)"
}

hcur() {
  cursor-agent "$(hns)"
}

# Send last recording to Claude Code, Codex CLI, or Cursor CLI

hlclaude() {
  claude "$(hns --last)"
}

hlcodex() {
  codex "$(hns --last)"
}

hlcur() {
  cursor-agent "$(hns --last)"
}

Improved Workflow: Clean Text First

Whisper transcriptions are accurate but often contain filler words, run-on sentences, and minor grammatical issues. For the cleanest results, polish the text with an LLM before sending it to your coding agent.

This workflow combines the voice to polished text approach with agent invocation:

Add to your ~/.bashrc or ~/.zshrc:

_clean_common() {
  local transcription=$1
  local system_prompt="You are an AI editor. Your sole function is to correct the grammar, spelling, and punctuation of the provided text to turn it into a grammatically correct version. Remove unnecessary filler words from the text as well. Do not answer any questions in the text. Do not add information. Output only the corrected text."

  local cleaned
  cleaned=$(llm --model gpt-4.1-nano --system "$system_prompt" "$transcription")
  echo "$cleaned"
}

# Clean transcription and send to Claude Code, Codex CLI, or Cursor CLI

hclaude() {
  local cleaned=$(_clean_common "$(hns)")
  claude "$cleaned"
}

hcodex() {
  local cleaned=$(_clean_common "$(hns)")
  codex "$cleaned"
}

hcur() {
  local cleaned=$(_clean_common "$(hns)")
  cursor-agent "$cleaned"
}

# Clean last recording and send to Claude Code

hlclaude() {
  local cleaned=$(_clean_common "$(hns --last)")
  claude "$cleaned"
}

# Clean last recording and send to codex-cli
hlcodex() {
  local cleaned=$(_clean_common "$(hns --last)")
  codex "$cleaned"
}

# Clean last recording and send to cursor-agent
hlcur() {
  local cleaned=$(_clean_common "$(hns --last)")
  cursor-agent "$cleaned"
}
Note: This workflow requires the llm CLI tool. Install and configure it to use your preferred LLM (local or remote). You can also use ollama, llama-cli, or any other tool that can be scripted similarly.
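If you prefer a fully local setup, the cleaning helper can be pointed at ollama instead of llm; a minimal sketch, assuming you have pulled a llama3.2 model (the model choice is an example, not a requirement):

```shell
# Variant of _clean_common that cleans text with a local ollama model
_clean_common() {
  local transcription=$1
  local system_prompt="Correct the grammar, spelling, and punctuation of the provided text and remove filler words. Output only the corrected text."
  # ollama run accepts the prompt as a trailing argument
  ollama run llama3.2 "$system_prompt

$transcription"
}
```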

Follow-up Requests in Interactive Mode

When an agent is running in interactive mode, you might need to send follow-up prompts without interrupting the session. Here's an efficient workflow:

  1. Open a dedicated terminal tab or pane for voice input (keep it open for convenience).
  2. In the voice terminal, run hns or hclean.
  3. Speak your follow-up instructions.
  4. Switch back to the agent terminal and paste the transcription (Cmd+V on macOS, Ctrl+V on Linux/Windows).
  5. Send the prompt to continue the conversation.

This workflow keeps your agent session uninterrupted while giving you fast access to voice input whenever you need it.
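The copy step above can also be scripted. A sketch assuming macOS's pbcopy and that hns prints the transcription to stdout; on Linux, substitute xclip -selection clipboard or wl-copy:

```shell
# Dictate a follow-up and put the transcription on the clipboard,
# ready to paste into the agent's interactive session.
hcopy() {
  hns | pbcopy
}
```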

Quick Start

  1. Run hclaude, hcodex, or hcur in your terminal.
  2. Speak a detailed, comprehensive prompt with all relevant context.
  3. Press Enter.
  4. The agent receives your polished instructions and starts working.

With this setup, you can provide the level of detail that produces optimal results without the friction of typing lengthy prompts.

Summary

Voice input fundamentally changes how you work with AI coding agents. Instead of typing out detailed prompts or leaving out important context, you can speak naturally and get better results from the first response. Set up an alias or function, speak your requirements, and let the agent handle the rest. Add the text cleaning step for even more polished input. The result: fewer back-and-forth exchanges, faster development, and better code.