LLMs: The Things We've Been Overlooking

“What temperature are you using?”

If someone asks, what do you say? “The default.” “0.7.” “I don’t know — does it matter?” Most answers fall into one of those three. And if you try to justify the answer, you run out of words fast.

That’s how we use LLMs. We call the APIs every day — stuff prompts into messages, send them off, get responses. But when the questions become “What does Temperature actually do?”, “How is Top-P different from Temperature?”, “Does Prompt Caching just work if you turn it on?”, “Will hallucinations go away with a better model?”, the answers get fuzzy. ...
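To make the question concrete: these knobs are just request parameters. Here is a minimal sketch of an OpenAI-style chat completion call, assuming the openai Python SDK; the model name and prompt are placeholders, not taken from the post.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this ticket: ..."}],
    temperature=0.7,  # rescales the next-token distribution before sampling
    top_p=1.0,        # nucleus sampling: sample only from the top probability mass
)
print(response.choices[0].message.content)
```

Temperature rescales the next-token probability distribution before sampling; top_p instead truncates it to the smallest set of tokens whose cumulative probability reaches the threshold, which is why the two are so easy to conflate.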

April 12, 2026 · nbdawn

Why Is My PostgreSQL Slow Right After an Insert?

The Customer Claim That Exposed PostgreSQL’s Statistics Blind Spot

It started with a customer claim: a critical part of their workflow was unresponsive. Upon investigation, we found that the bottleneck was an innocent-looking query that was now consistently timing out. To understand why, we had to descend into the internals of the PostgreSQL query planner. There, we discovered a rare bug in statistics estimation, triggered by a data distribution pattern unusual enough to have stayed hidden until then. ...
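As a rough illustration of the kind of evidence involved (not the specific bug from the post), the gap between the planner's row estimate and the actual row count is the usual first sign that statistics are off. A minimal sketch using psycopg2; the connection string, table, and filter are hypothetical.

```python
import json
import psycopg2

conn = psycopg2.connect("dbname=app user=app")            # hypothetical DSN
query = "SELECT * FROM orders WHERE status = 'pending'"   # hypothetical query

with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE actually executes the query, so run it on something safe.
    cur.execute(f"EXPLAIN (ANALYZE, FORMAT JSON) {query}")
    plan = cur.fetchone()[0]
    if isinstance(plan, str):        # the driver may return the JSON as text
        plan = json.loads(plan)
    node = plan[0]["Plan"]
    print("estimated rows:", node["Plan Rows"])    # the planner's guess, from statistics
    print("actual rows:   ", node["Actual Rows"])  # what execution really returned
```

A large mismatch between the two numbers usually means the planner is working from stale or insufficient statistics, which is where an investigation like the one in the post begins.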

December 20, 2025 · nbdawn