WOW! Yesterday OpenAI released two open-weight models: gpt-oss-120b and gpt-oss-20b, roughly 120 and 20 billion parameters in size and both featuring reasoning capabilities.
The models utilise quantisation and a MoE (mixture-of-experts) architecture, which allows them to fit on an 80 GB and a 16 GB GPU respectively, so they can be run with comparably low resources. The performance benchmarks look impressive too, so I can’t wait to get some time to experiment with them.
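For intuition on why MoE keeps the compute requirements so low: a router scores a set of expert networks per token, and only the top-k experts actually run, so the active parameter count is far below the total. A toy sketch with random weights and made-up dimensions (no training, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 8, 4, 2

# Each "expert" would be a small feed-forward network; here just a weight matrix.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))  # gating weights

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]        # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only the selected experts run -- that's why the active compute is
    # much smaller than the total parameter count suggests.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (8,)
```

Here only 2 of the 4 expert matrices are multiplied per token; scale the same idea up and a 120B-parameter model only activates a fraction of its weights per forward pass.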
If you don’t have the data available, implementing an AI use case becomes a data gathering death march, often crossing organizational boundaries. Instead of spending 80% of the project time on building the use case, this time is spent on building the prerequisites. Overall project times increase and projects might remain unfinished.
If you don’t have the data available, imagination of what can be done can fall short. Good use cases might remain undiscovered. Instead of people experimenting with data to find possible use cases, this becomes a theoretical exercise.
This morning I was late for work, but I got “diffused” by something worthwhile. I stumbled across a beautifully crafted video that provides excellent intuition on how AI-based image generation actually works.
In the video, Stephen Welch takes us on a tour starting with the CLIP model, explaining how this model combines vision and language, then dives deeper into diffusion models. He finishes by providing intuition on how prompts guide image and video generation models toward desired outcomes.
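As a toy illustration of the denoising loop at the heart of diffusion models (not the actual method from the video): start from pure noise and repeatedly remove a fraction of the predicted noise. Here the “model” cheats and derives the noise analytically so the sketch is self-contained; a real diffusion model learns this prediction from data:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 1-D "image": the denoising loop's job is to recover it from noise.
clean = np.sin(np.linspace(0, 2 * np.pi, 64))

def predict_noise(noisy, t):
    # A real diffusion model learns this from data; here we compute it
    # analytically so the example runs without any training.
    return noisy - clean

x = rng.standard_normal(64)            # start from pure Gaussian noise
for t in range(50, 0, -1):
    x = x - predict_noise(x, t) / t    # remove a fraction of the predicted noise

print(np.abs(x - clean).max() < 1e-6)  # True: the clean signal re-emerges
```

Prompt guidance, as explained in the video, would enter this loop by biasing the noise prediction toward outputs whose CLIP-style embedding matches the prompt.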
Yesterday I took, after a long break of five years, an AWS certification: the AWS Certified AI Practitioner [4]. I wanted to take a moment to share my thoughts on the exam and help you determine if this would be a good learning objective for you.

To be honest, all of us should be active AI practitioners given the current state of AI; that’s a first indication of whether this could be something for you. That said, it’s not easy to stay on top of all the new AI tools that emerge literally every day. Obtaining and maintaining mastery seems to be a full-time job. That’s exactly where the curated and fixed learning scope of a certification exam can be helpful.

What I particularly appreciate about the AWS AI Practitioner exam scope is its focus on building a foundation of AI concepts and understanding their applicability to certain classes of use cases. Acquiring this knowledge helps tremendously in understanding how to apply AI tools to our own work and how to identify and shape possible use cases for AI.

Note that up to this point, this post hasn’t used the term “GenAI.” The certification scope is broader: it starts from the AI foundation but also covers many GenAI aspects. This approach helps you build a healthy foundation and design frugal and effective solutions; not every aspect of every use case needs GenAI. Not even in 2025 😉

Since this is an AWS certification, it doesn’t stop at foundational knowledge but also requires a foundational understanding of the AWS service offering to help you design and implement use cases. Many hidden gems well beyond “just” Amazon Bedrock are covered in the scope. That’s another refreshing and quite helpful aspect.

If I’ve sparked your interest, have a look at the exam landing page [1] and exam guide [3], which contain valuable information about the exam. Most important, in my opinion, is the learning journey, unless you’re just after a shiny cert.
Plenty of training content can be found at AWS Skill Builder [2], which also includes sample exams to verify your learning progress.
I just signed up for the “Software Architecture Superstream: Architecture Patterns and Antipatterns for AI” [1], which takes place on August 12th in the late afternoon CEST. The lineup of speakers and the topics to be covered sound very interesting. Maybe something for you too?
Glad to listen again to my dear colleague Luca Mezzalira 🙂 as one of the speakers!
#softwarearchitecture #patterns #genai
Traveling at light speed with open eyes is awesome, but sometimes you should look back…
On my way back from the Alps, I spent quality time with colleagues and partners from the Media & Entertainment industry at our Munich offices. It was eye-opening to see how rapidly the industry is adopting cloud technology and the massive transformation underway—delivering significantly more value and improved experiences for viewers and users. Thanks to Christopher for the invitation!
Today I had a brief coding session with the newly launched Kiro [1]: a new agentic IDE that works alongside you from prototype to production.
I selected a use case I had previously coded myself, which helped me guide the coding agent through some challenging areas. It was impressive how quickly we achieved results, and the experience was genuinely enjoyable.
The use case involves generating a short advertising video from a high-level description. While the advertising video is the final output, the process also creates supporting assets like campaign descriptions, ad copy, and image assets. I requested a multi-agent implementation.
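A minimal sketch of how such a multi-agent flow could be wired up. The agent names and the hand-off order are my assumptions for illustration; in the real implementation each function would call an LLM (and image/video generation models) instead of returning canned strings:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    content: str

# Each "agent" below is a stand-in for an LLM-backed worker. The canned
# strings mark where model calls would go in a real implementation.

def campaign_agent(brief: str) -> Asset:
    return Asset("campaign", f"Campaign plan for: {brief}")

def copy_agent(campaign: Asset) -> Asset:
    return Asset("ad_copy", f"Ad copy based on: {campaign.content}")

def image_agent(campaign: Asset) -> Asset:
    return Asset("images", f"Image prompts for: {campaign.content}")

def video_agent(assets: list) -> Asset:
    parts = ", ".join(a.name for a in assets)
    return Asset("video", f"Short ad video assembled from: {parts}")

def run_pipeline(brief: str) -> list:
    # Supporting assets are produced first, then fed into the final video step.
    campaign = campaign_agent(brief)
    supporting = [campaign, copy_agent(campaign), image_agent(campaign)]
    return supporting + [video_agent(supporting)]

assets = run_pipeline("launch a new trail-running shoe")
print([a.name for a in assets])  # ['campaign', 'ad_copy', 'images', 'video']
```

The design point is the hand-off: the high-level description fans out into specialised agents, and the video agent consumes their outputs rather than the raw brief.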
AWS Summit Hamburg 2025 week - uncompressing …
Digesting an intense week in Hamburg. As Andy rightly said:
“There is no compression algorithm for experience”
At the same time, it turns out that you can pack a lot of experience into a very short event. The AWS Summit week in Hamburg has been exactly that.
While I’m still very attached to the good, old Berlin summit, which has shaped my last years, the move to Hamburg turned out to be a great success. The large venue made it possible to experience even more talks and booths in the exhibition halls in a very relaxed environment. The interior design of the fair kept the typical AWS event vibe, and everyone I talked to felt at home. Finally, Hamburg is an exciting city with lovely people, and well worth a visit.
+1 on this. I really like the visualisation. I would love a 4th column which summarises the implications. Reproducibility: given with open source, but not with open weights.
As with a closed-source library, you can build on top of an open-weight model, but you are not able to understand the implementation or alter it. This is a fundamental difference.
Garbage in, garbage out. Or in other words: no high-quality data, no shiny, successful GenAI product generating business value…
Looking forward to supporting this workshop next week. Amazing content and a very knowledgeable workshop host, Tobias. As far as I have seen, registration is still open. So if you haven’t registered yet and can be in Hamburg on the day before the summit (next week Wednesday), I highly recommend signing up!