pipr.tools

token-count Count LLM tokens (real BPE) and context usage


Flags

--encoding    cl100k_base | o200k_base    (default: cl100k_base)

Examples

Count exact tokens in a system prompt

Usage
"You are a helpful assistant. Answer the user's question conc..." | token-count

Check real token usage for a ChatGPT prompt

Usage
"Write a detailed product description for a wireless Bluetoot..." | token-count

Measure API token cost for a batch of prompts

Usage
"Summarize the following meeting notes and extract action ite..." | token-count
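Before piping a large batch through the real BPE count, a rough pre-check can help. As a rule of thumb (an assumption, not something token-count relies on), English text under cl100k_base averages roughly 4 characters per token, so a quick estimate is:

```javascript
// Ballpark token estimate: ~4 characters per token for typical
// English text (heuristic only; the real count comes from the
// BPE encoder and can differ noticeably for code or non-English text).
const approxTokens = (text) => Math.ceil(text.length / 4);

console.log(approxTokens("You are a helpful assistant.")); // 28 chars → ~7 tokens
```

Use this only for sizing batches; the tool's output is the exact count.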
View source
async (input, opts = {}) => {
    if (!input.trim()) return "(empty input)";

    // Lazy-load the tokenizer bundle. `__tla` is the bundler's
    // top-level-await marker: awaiting it ensures the module has
    // finished initializing before its exports are used.
    const { getEncoder } = await import('./tokenizer_qMtbZfTQ.mjs').then(async (m) => {
        await m.__tla;
        return m;
    });

    const encoding = opts.encoding || "cl100k_base";
    const enc = await getEncoder(encoding);

    const tokens = enc.encode(input).length;
    const chars = input.length;
    const words = input.trim().split(/\s+/).filter(Boolean).length;
    // Blob reports the UTF-8 byte length, which can exceed `chars`
    // for non-ASCII input.
    const bytes = new Blob([input]).size;
    const ratio = tokens > 0 ? (chars / tokens).toFixed(1) : "—";

    // Format the token count as a percentage of a model's context window.
    const pct = (ctx) => {
        const p = (tokens / ctx) * 100;
        return p < 0.01 ? "<0.01%" : p < 1 ? p.toFixed(2) + "%" : p.toFixed(1) + "%";
    };

    return [
        `  Tokens: ${tokens.toLocaleString()}  (${encoding})`,
        `  Chars:   ${chars.toLocaleString()}`,
        `  Words:   ${words.toLocaleString()}`,
        `  Bytes:   ${bytes.toLocaleString()}`,
        `  Ratio:   ${ratio} chars/tok`,
        ``,
        `  ── Context window usage ──`,
        `  GPT-4o        128K  ${pct(128000).padStart(7)}`,
        `  Claude 3.5    200K  ${pct(200000).padStart(7)}`,
        `  Gemini 1.5      1M  ${pct(1000000).padStart(7)}`,
        `  Llama 3       128K  ${pct(128000).padStart(7)}`
    ].join("\n");
}
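The context-window table in the source is driven by a small percentage formatter: values under 0.01% collapse to "<0.01%", values under 1% get two decimals, and everything else gets one. A standalone sketch of that logic (extracted here for illustration; the tool inlines it as `pct`):

```javascript
// Format `tokens` as a percentage of a model's context window,
// matching the thresholds used in the tool's output table.
const formatUsage = (tokens, contextWindow) => {
    const p = (tokens / contextWindow) * 100;
    return p < 0.01 ? "<0.01%" : p < 1 ? p.toFixed(2) + "%" : p.toFixed(1) + "%";
};

console.log(formatUsage(50, 1000000));   // tiny prompt in a 1M window → "<0.01%"
console.log(formatUsage(1000, 128000));  // "0.78%"
console.log(formatUsage(64000, 128000)); // "50.0%"
```

The stepped precision keeps small prompts from rounding to a misleading "0.0%" while keeping large percentages compact.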