When a 'Model' Isn't Just a Model: Redefining AI Systems for the Builder's Era
🎬 Great keynote by Jensen Huang at CES 2026 [1]! Strong content, and I also love the ease of his presentation style. Miguel: we are not the only ones presenting in front of a black screen once in a while ;)
🔓 I agree with Jensen, it’s super exciting to see more and 𝗺𝗼𝗿𝗲 𝗼𝗽𝗲𝗻-𝗶𝘀𝗵 𝗳𝗿𝗼𝗻𝘁𝗶𝗲𝗿 𝗺𝗼𝗱𝗲𝗹𝘀 𝗯𝗲𝗶𝗻𝗴 𝗽𝘂𝗯𝗹𝗶𝘀𝗵𝗲𝗱 by different providers. Sounds like NVIDIA is taking a big stake in this. The key point for me is that providers don’t “just” release open-weight models but also the data they trained on and the process used to train them. Jensen mentions the obvious responsible AI argument, which is super important. This is the only way third parties can verify the models and understand things like bias introduced by the training data, copyright infringements, and the like. From my perspective, equally important: 𝗢𝗽𝗲𝗻 𝗶𝘀 𝗼𝗻𝗹𝘆 𝘁𝗿𝘂𝗹𝘆 𝗼𝗽𝗲𝗻 𝘁𝗼 𝗺𝗲 𝗶𝗳 𝗜 𝗰𝗮𝗻 𝗯𝘂𝗶𝗹𝗱 𝗶𝘁, 𝗺𝗼𝗱𝗶𝗳𝘆 𝗶𝘁 𝘁𝗼 𝗺𝗮𝗸𝗲 𝗺𝘆 𝗼𝘄𝗻 𝘃𝗮𝗿𝗶𝗮𝗻𝘁, 𝗮𝗻𝗱 𝗜’𝗺 𝗮𝗹𝗹𝗼𝘄𝗲𝗱 𝘁𝗼 𝗱𝗼 𝘀𝗼.
🤔 𝗦𝗽𝗲𝗮𝗸𝗶𝗻𝗴 𝗼𝗳 𝗺𝗼𝗱𝗲𝗹𝘀… 𝗛𝗶𝗴𝗵𝗹𝘆 𝗼𝘃𝗲𝗿𝗹𝗼𝗮𝗱𝗲𝗱 𝘁𝗲𝗿𝗺. Everything becomes a “model” these days. What we call frontier models are actually compositions of multiple models, with quite a lot of plumbing, orchestration, and integration with external tools. Way more than “just” a machine learning model. I really think we need a dedicated term for these compound systems. Makes my brain hurt, but I actually don’t have a good suggestion for a term. What about you?
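To make the “compound system” idea concrete, here is a minimal, purely illustrative sketch: a lightweight routing model, a deterministic tool, a large generative model, and the orchestration glue that ties them together. Every name here (small_classifier, calculator_tool, chat_model, compound_system) is a hypothetical stub, not a real framework API.

```python
def small_classifier(query: str) -> str:
    """Stand-in for a lightweight routing model."""
    return "math" if any(ch.isdigit() for ch in query) else "chat"

def calculator_tool(expression: str) -> str:
    """Deterministic external tool the system can call (toy eval, no builtins)."""
    return str(eval(expression, {"__builtins__": {}}))

def chat_model(query: str) -> str:
    """Stand-in for a large generative model."""
    return f"[generated answer to: {query}]"

def compound_system(query: str) -> str:
    """Orchestration layer: route the query, call models or tools, compose the answer."""
    route = small_classifier(query)
    if route == "math":
        return calculator_tool(query)
    return chat_model(query)

print(compound_system("2+3"))        # routed to the deterministic tool
print(compound_system("say hello"))  # routed to the generative model
```

The point is that the “model” users talk to is really the compound_system function: several models plus orchestration, not a single neural network.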
💡 Jensen: “The entire fabulous stack of the computer industry is being reinvented. You no longer program the software, you train the software. You don’t run it on CPUs, you run it on GPUs. And whereas applications were pre-recorded, pre-compiled and run on your device, now applications understand the context and generate every single pixel, every single token completely from scratch every single time.”
This is a strong statement, and I think it is directionally correct. It’s good for users, to some extent. For builders, it’s happening already today. Hardly any of my coffee chats with other builders, be it customers or colleagues, go by without them describing how they have started to co-build dedicated applications for their needs, with their use cases at heart.
🔧 At the same time, as Werner pointed out with his Renaissance Developer [2] narrative, this doesn’t mean that we builders should fear for our jobs or forget what we learned in the past. AI systems are probabilistic, while we often need reliable results. Building on the fly is both costly (in the end, we’re building very similar use cases and applications over and over again) and unreliable. Each time we build something from scratch, it might, and eventually will, not work as expected.
🧩 So after all, these are not just machine learning models, but compound systems to which we can apply all the good software engineering practices. These systems can use tools that can be reused, tested, and yes, also co-built with AI. That’s how we navigate cost & reliability versus flexibility. And note: cost, beyond USD, is also a proxy for the energy required. Less is better. We need that.
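One way to picture “apply good software engineering practices”: the deterministic tool side of a compound system can be unit-tested like any other code, even while the model side stays probabilistic. The tool below (convert_currency) is an assumed example for illustration, not from the keynote.

```python
def convert_currency(amount_usd: float, rate: float) -> float:
    """Reusable, deterministic tool: convert USD to a target currency at a given rate."""
    if amount_usd < 0 or rate <= 0:
        raise ValueError("amount must be non-negative and rate must be positive")
    return round(amount_usd * rate, 2)

# Classic unit tests: the tool behaves reliably no matter which model invokes it.
assert convert_currency(10.0, 0.9) == 9.0
assert convert_currency(0.0, 1.5) == 0.0
try:
    convert_currency(-1.0, 0.9)
    raise AssertionError("expected ValueError for negative amount")
except ValueError:
    pass
```

Building the flexible, probabilistic parts on top of small, tested, reusable tools like this is one way to get both reliability and lower cost per invocation.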
🚀 What terminology do you use when discussing these complex AI systems? Share your thoughts below!
🔗 If you’re building with AI today, I’d love to hear about your approach to balancing flexibility and reliability.
📚 Want to dive deeper into the Renaissance Developer concept? Let’s connect and discuss how traditional software engineering practices still matter in the AI era.
[1] https://www.youtube.com/watch?v=M8fL0RUmbP0
[2] https://thekernel.news/articles/dawn-of-the-renaissance-developer/
Cross-posted to LinkedIn