Logging Without LLM Overuse | Generated by AI

2026.04

Question: How can I design a simple logging system (like wwlogs/log-history.txt) using timestamp-based filenames with an optional friendly name, to avoid overusing LLM APIs for naming/searching logs, and is overuse of AI (like at Meta) leading to wasted tokens?

Answer:

You’re pointing at a very real and practical problem: people are overusing LLMs for tasks that should be deterministic and cheap. Naming logs is a perfect example.

Let’s separate this into two parts: system design and the AI overuse question.


1. A clean, practical logging design (no LLM needed)

Your idea is solid. In fact, it’s better than using AI for this use case.

✅ Core design

Final filename:

20260417_153045__vpn_issue_coffee_shop.txt

Why this works better than LLM naming

You’re basically applying a classic principle:

If a problem can be solved with a function, don’t use a model.
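That principle, applied here: the whole naming step is a pure function of the clock and an optional tag. A minimal sketch (the helper name `make_log_name` is hypothetical):

```shell
#!/bin/sh
# Hypothetical helper: the filename is a pure function of the current
# time plus an optional friendly name -- deterministic, zero tokens.
make_log_name() {
  ts=$(date +"%Y%m%d_%H%M%S")
  if [ -n "$1" ]; then
    echo "${ts}__$1.txt"
  else
    echo "${ts}.txt"
  fi
}

make_log_name vpn_issue   # e.g. 20260417_153045__vpn_issue.txt
```

Same input shape, same output shape, every time — which is exactly what makes the files sortable and greppable later.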


Suggested CLI design

Example command:

wwlog "VPN failed at coffee shop" --friendly-name vpn_issue

Generated file:

wwlogs/20260417_153045__vpn_issue.txt

Minimal implementation (bash)

#!/bin/bash
# wwlog: save a timestamped note under wwlogs/, optionally tagging
# the filename with a friendly name.

LOG_DIR="wwlogs"
mkdir -p "$LOG_DIR"

timestamp=$(date +"%Y%m%d_%H%M%S")

friendly=""
message=""

# Parse args: --friendly-name takes a value; anything else is the message.
while [[ "$#" -gt 0 ]]; do
  case $1 in
    --friendly-name) friendly="$2"; shift ;;
    *) message="$1" ;;
  esac
  shift
done

if [ -z "$message" ]; then
  echo "usage: wwlog \"message\" [--friendly-name name]" >&2
  exit 1
fi

if [ -n "$friendly" ]; then
  filename="${timestamp}__${friendly}.txt"
else
  filename="${timestamp}.txt"
fi

echo "$message" > "$LOG_DIR/$filename"
echo "Saved: $LOG_DIR/$filename"

Searching logs (no AI needed)

# search by keyword
rg vpn wwlogs/

# fuzzy find (feed full paths so the preview can cat them)
find wwlogs -type f | fzf --preview 'cat {}'

# filter by time
ls wwlogs | grep 20260417
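The two filters also compose: the filename narrows by date, and grep narrows by content. A self-contained sketch (the sample file is created only for illustration):

```shell
#!/bin/sh
# Compose the filters: restrict by date via the filename prefix,
# then search inside the matching files. Sample data for the demo:
mkdir -p wwlogs
echo "VPN failed at coffee shop" > wwlogs/20260417_153045__vpn_issue.txt

# files from 2026-04-17 whose content mentions "vpn" (case-insensitive)
grep -l -i vpn wwlogs/20260417*.txt
```

This prints the matching filename(s), which — thanks to the timestamp prefix — already tell you when the event happened.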

Optional upgrade: structured logs (still no AI)

Instead of plain text:

{
  "time": "2026-04-17T15:30:45",
  "tag": "vpn_issue",
  "content": "VPN failed at coffee shop"
}

Now you can filter by tag:

jq 'select(.tag == "vpn_issue")' wwlogs/*.json

Still zero tokens used.
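Writing such an entry doesn't need anything fancy either. A sketch using plain printf — acceptable here because none of the fields need JSON escaping (the paths and tag are illustrative):

```shell
#!/bin/sh
# Sketch: emit one structured log entry as JSON with printf alone.
# Fine for simple fields; use jq or a real serializer if content may
# contain quotes or newlines.
mkdir -p wwlogs
ts_iso=$(date +"%Y-%m-%dT%H:%M:%S")
ts_file=$(date +"%Y%m%d_%H%M%S")
printf '{"time":"%s","tag":"%s","content":"%s"}\n' \
  "$ts_iso" "vpn_issue" "VPN failed at coffee shop" \
  > "wwlogs/${ts_file}__vpn_issue.json"
```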


2. Should you use LLM for naming/searching?

Short answer: almost never.

Use LLM only when:

- the query is genuinely fuzzy or semantic ("that time the network died at a cafe")
- the task can't be expressed as a filename pattern, a grep, or a jq filter

Otherwise:

filename + grep > LLM


3. About AI overuse (e.g., Meta internal push)

Your intuition is directionally correct, but slightly incomplete.

Yes, waste exists

When companies push "use AI everywhere":

- models get applied to deterministic tasks a script would solve for free
- every such call spends tokens and latency that a naming convention avoids

This leads to:

- slower, less predictable tooling and real wasted spend


But it’s not purely waste

Companies like Meta are optimizing for:

- long-term capability: engineers who learn where models genuinely help
- discovering which workflows benefit from AI at all

So internally, they accept:

inefficiency now → capability later


4. The real principle (important)

You’re already thinking in the right abstraction:

If a problem can be solved with a function, don’t use a model.

Use this decision rule:

| Task type            | Tool          |
| -------------------- | ------------- |
| Deterministic        | Code / script |
| Structured retrieval | grep / DB     |
| Fuzzy / semantic     | LLM           |
| Creative / unknown   | LLM           |
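The table above, as a dispatcher sketch (the function name and category labels are illustrative; "LLM" here is just a label, not an API call):

```shell
#!/bin/sh
# Route a task category to the right tool per the decision rule.
route_task() {
  case "$1" in
    deterministic)        echo "code/script" ;;
    structured-retrieval) echo "grep/DB" ;;
    fuzzy|creative)       echo "LLM" ;;
    *)                    echo "unknown" ;;
  esac
}

route_task deterministic   # prints: code/script
route_task fuzzy           # prints: LLM
```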

5. Your idea (refined)

What you’re building is essentially:

“LLM-minimal personal knowledge system”

Key traits:

- deterministic, timestamp-first naming (chronological by default)
- search via grep / fzf / jq — zero tokens
- LLM reserved for the genuinely fuzzy questions

This is actually more scalable than “LLM everything”.


Final take

Your instinct is right: deterministic naming plus plain-text search covers this use case completely. Spend tokens only where determinism ends.

If you want, I can extend this into:

That’s where it gets really powerful.

