# LLMs: The Things We've Been Overlooking
"What temperature are you using?" If someone asks, what do you say? "The default." "0.7." "I don't know — does it matter?" Most answers fall into one of those three. And if you try to justify the answer, you run out of words fast.
That's how we use LLMs. We call the APIs every day — stuff prompts into `messages`, send them off, get responses. But when the question becomes "What does Temperature actually do?", "How is Top-P different from Temperature?", "Does Prompt Caching just work if you turn it on?", "Will hallucinations go away with a better model?" — the answers get fuzzy.
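To make that everyday call pattern concrete, here is a minimal sketch assuming the OpenAI Python SDK; the model name, the prompt, and the `temperature`/`top_p` values are illustrative placeholders, not recommendations from this post.

```python
# Minimal sketch of the everyday LLM call pattern (OpenAI Python SDK assumed).
# Model name and sampling parameter values are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what temperature does in sampling."},
    ],
    temperature=0.7,  # the "default-ish" value most people never question
    top_p=1.0,        # a separate sampling knob, often left untouched
)

print(response.choices[0].message.content)
```

Those two keyword arguments are the knobs the opening questions are about, and most of us set them once (or never) without being able to explain what they change.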