# What Reasoning Actually Means (and Why It Matters for Your Architecture)

*Published 11 May 2026 on [schristoph.online](https://schristoph.online/blog/what-reasoning-actually-means/)*

## It Started with a Saturday Morning Experiment
I recently ran a simple test. I asked a small language model the same questions three times, with zero, one, and three rounds of self-reflection, and [published the results](https://schristoph.online/blog/when-thinking-twice-helps/). The pattern was clear: self-reflection helped when the model already knew the topic. It did nothing when the model didn't. And on bleeding-edge questions, more thinking just produced more confidently wrong answers.
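The post doesn't include the harness, but the protocol is easy to sketch. Below is a minimal Python version of the zero/one/three-round setup; `generate` is a hypothetical stand-in for whatever model call you use, and the reflection prompt wording is my assumption, not the original code:

```python
# A minimal sketch of the experiment's protocol: ask a question, then run
# zero, one, or three self-reflection passes over the model's own answer.
# `generate` is a hypothetical stand-in for a real model call (local LM or
# API); the prompt wording is illustrative, not the author's.

REFLECT_PROMPT = (
    "Question: {question}\n\n"
    "Your previous answer:\n{answer}\n\n"
    "Critically check the facts and reasoning above, then give a revised answer."
)

def generate(prompt: str) -> str:
    """Stand-in model call; replace with your LM of choice.

    Echoes part of the prompt so the script runs with no dependencies.
    """
    return "stub answer for: " + prompt[:60]

def answer_with_reflection(question: str, rounds: int) -> str:
    """Ask once, then apply `rounds` rounds of self-reflection."""
    answer = generate(question)
    for _ in range(rounds):
        answer = generate(REFLECT_PROMPT.format(question=question, answer=answer))
    return answer

# One condition per run: 0, 1, and 3 reflection rounds.
for rounds in (0, 1, 3):
    print(f"rounds={rounds}: {answer_with_reflection('Why is the sky blue?', rounds)}")
```

The interesting design choice is that each reflection round sees only the previous answer, not the full conversation, which keeps the conditions comparable as the round count grows.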
That experiment raised a question I couldn't shake: if "thinking harder" only works sometimes, what exactly is happening when a model reasons, and when is it just pretending?