<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Inference on schristoph.online</title><link>https://schristoph.online/tags/inference/</link><description>Recent content in Inference on schristoph.online</description><generator>Hugo</generator><language>en-us</language><copyright>Stefan Christoph. All rights reserved.</copyright><lastBuildDate>Thu, 14 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://schristoph.online/tags/inference/index.xml" rel="self" type="application/rss+xml"/><item><title>Intelligence Is About Time, Not Parameters</title><link>https://schristoph.online/blog/intelligence-is-about-time/</link><pubDate>Thu, 14 May 2026 00:00:00 +0000</pubDate><guid>https://schristoph.online/blog/intelligence-is-about-time/</guid><description>&lt;h2 id="the-question-every-sa-gets">The Question Every SA Gets&lt;/h2>
&lt;figure>&lt;img src="https://schristoph.online/assets/2026-05-14-intelligence-time-savant-regime.jpg"
 alt="The savant regime in AI models">&lt;figcaption>
 &lt;p>Beyond a complexity threshold, larger models become less insightful — the savant regime.&lt;/p>
 &lt;/figcaption>
&lt;/figure>

&lt;p>&amp;ldquo;Which model should I use?&amp;rdquo;&lt;/p>
&lt;p>I hear it in almost every customer conversation about generative AI. The instinct is always the same: reach for the biggest model. More parameters, more intelligence. It feels right. It&amp;rsquo;s also wrong, and now there&amp;rsquo;s a mathematical proof to explain why.&lt;/p></description></item></channel></rss>