LLM Title Fixes Summary | Generated by AI
Good. Here’s a summary of the three fixes applied:
Problem: The LLM ignored the “respond with only the title” instruction and returned 1518 chars of analysis instead of a 6-word title. The code then crashed with a `ValueError`.
Fixes (3 changes across 2 files):
- `create_note_utils.py` – `max_tokens=60` on the API call: This is the key fix. Previously no `max_tokens` was set, so the model could ramble freely. 60 tokens is generous for 6 words but cuts off long explanations.
- `create_note_utils.py` – Retry logic (3 attempts): Instead of crashing on the first too-long title, it retries up to 3 times with a warning, and only raises `ValueError` after exhausting retries.
- `create_note_from_clipboard.py` – Stronger prompt: Changed from a long instruction to a concise prompt ending with `\n\nTitle:` – this primes the model to complete with just the title, which works better with many models than imperative instructions.
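Put together, the three fixes might look something like the sketch below. This is illustrative, not the actual code: the function name `generate_title` and the `call_model` stand-in (which represents the real API call) are assumptions; the real implementation lives in `create_note_utils.py`.

```python
import warnings

MAX_WORDS = 6    # title length limit enforced by the validation
MAX_RETRIES = 3  # number of attempts before giving up

def generate_title(call_model, text):
    """Ask the model for a short title, retrying when the reply is too long.

    `call_model(prompt, max_tokens)` is a hypothetical stand-in for the
    real API call; passing max_tokens=60 caps runaway completions.
    """
    # Completion-style prompt: ending with "Title:" primes the model
    # to emit just the title rather than an explanation.
    prompt = f"{text}\n\nTitle:"
    for attempt in range(1, MAX_RETRIES + 1):
        title = call_model(prompt, max_tokens=60).strip()
        if len(title.split()) <= MAX_WORDS:
            return title
        warnings.warn(
            f"attempt {attempt}: title too long "
            f"({len(title.split())} words), retrying"
        )
    raise ValueError(
        f"model failed to return a {MAX_WORDS}-word title "
        f"after {MAX_RETRIES} attempts"
    )
```

A well-behaved model returns on the first attempt; a rambling one gets up to three chances before the `ValueError` finally surfaces, instead of crashing immediately as before.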