Caught up on AWS's open-sourcing of API models in Smithy format (slipped my radar earlier this year, but timeless value!). There is a GitHub repository [4] with Smithy models for the APIs of 200+ AWS services. Announcement blog: [1]
A Smithy model is the core semantic representation in Smithy, an open-source interface definition language (IDL) developed by AWS for defining APIs, services, and data structures in a protocol-agnostic way.
It consists of shapes (like primitives, lists, maps, structures, and services), traits for metadata, and shape IDs for referencing components, enabling code generation for SDKs, documentation, and validation across languages. Models are serialized in formats like Smithy IDL or JSON and include a prelude with built-in types.
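To make the shapes-and-traits description concrete, here is a minimal sketch of what a model in the Smithy IDL can look like. The service, operation, and member names are made up for illustration and are not taken from the AWS models:

```smithy
$version: "2"

namespace example.weather

/// A toy service exposing a single operation.
service Weather {
    version: "2024-01-01"
    operations: [GetForecast]
}

/// An operation with inline input/output structure shapes;
/// @required is a trait attaching metadata to a member.
operation GetForecast {
    input := {
        @required
        cityId: String
    }
    output := {
        chanceOfRain: Float
    }
}
```

Each named element here is a shape with a shape ID (e.g. `example.weather#Weather`), which is how code generators and validators reference components across the model.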
When a ‘Model’ Isn’t Just a Model: Redefining AI Systems for the Builder’s Era
Great keynote by Jensen Huang at CES 2026 [1]! Great content, and I also love the ease of his presentation style. Miguel: we are not the only ones presenting in front of a black screen once in a while ;)
I agree with Jensen, it's super exciting to see more and more open-ish frontier models being published by different providers. Sounds like NVIDIA is taking a big stake in this. Really key for me is that providers not "just" release open-weight models but also the data they trained on and the process used to train them. Jensen mentions the obvious responsible AI argument, which is super important. This is the only way third parties can verify the models and understand things like bias introduced by the training data, copyright infringements, and the like. From my perspective, equally important: open is only truly open to me if I can build it, modify it to make my own variant, and I'm allowed to do so.
WOW! Yesterday OpenAI released two open-weight models: gpt-oss-120b and gpt-oss-20b. Roughly 120 and 20 billion parameters in size, and both feature reasoning capabilities.
The models utilise quantisation and an MoE (mixture-of-experts) architecture, which allows them to fit on a single 80 GB and 16 GB GPU respectively, so they can be run with comparatively low resources. The performance benchmarks read impressive too, so I can't wait to get some time to experiment with them.
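A quick back-of-envelope check of why those sizes fit. The ~4.25 effective bits per parameter below is my own assumption for a 4-bit block-scaled quantisation format (the scale factors add overhead beyond 4 bits), not an official figure, and it counts weights only:

```python
def approx_weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Rough memory needed for model weights alone.

    Ignores KV cache, activations, and runtime overhead, so the real
    footprint is somewhat higher than this estimate.
    """
    return n_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB


# Assumed ~4.25 effective bits/param for a 4-bit block-scaled format:
print(approx_weight_memory_gb(120e9, 4.25))  # 63.75 GB -> fits an 80 GB GPU
print(approx_weight_memory_gb(20e9, 4.25))   # 10.625 GB -> fits a 16 GB GPU
```

The margin between the weight footprint and the GPU capacity is what leaves room for the KV cache and activations at inference time.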
+1 on this. I really like the visualisation. I would love a 4th column which summarises the implication. Reproducibility: given by open source, but not by open weights.
As with a closed-source library, you can build on top of an open-weight model, but you are not able to understand the implementation or alter it. This is a fundamental difference.
If you are on the hunt for an image and video segmentation model that is open and that you can deploy on your own, have a look at the just-released Segment Anything Model 2 (SAM 2). The model's capabilities can be nicely experienced in Meta's demo. Read more about the announcement at their announcement page.
If you are looking into deploying the model to build your own application on AWS, Amazon SageMaker is a very good alternative for you. Quoting from Meta's announcement website:
Those who joined Philipp and me in our session at the #AWSSUMMIT in Berlin last week already got a preview of the blog post, as we ran it as a demo. Nice that it is now published and you can all get hands-on with it. Kudos!
AWS Inferentia2 is a great way to optimize the inference part of your (gen)AI workloads on AWS, and the blog post helps you dive straight into deploying an LLM (in this case Meta's Llama 3). But it is not "just" Llama 3. From Hugging Face's recent blog post: "Enabling over 100,000 models on AWS Inferentia2 with Amazon SageMaker" - https://lnkd.in/ePYb6TFs. So there is a good chance that you can benefit from AWS Inferentia2 today :)
Very interesting read. Good to see politicians with a clear view and good insight into upcoming, disruptive technologies which will change our world - for the better. I disagree about transportation not becoming fully automated; I see this happening. Open source is becoming even more key - I agree here …