@hongminhee (洪民憙 / Hong Minhee)
@krans (Peter Brett)
@dr2chase I would argue that personal-scale SLMs would already be dominating if capital did not hoard all the hardware.
I recently trained what I call a small language model (one that does not try to mimic intelligence, but rather converts structured data into language and back) at home, and it took only a week on an RTX 6000 Blackwell.
When I bought the Blackwell GPU it cost about $3,500, which was already expensive (as any workstation-class hardware is), but buying one now costs about $14,500, the price of a late-model used car.
If those compute resources had not been hoarded, we would probably see more ethical, hobbyist-driven models by now, rather than models whose training demands large amounts of capital.
In other words, it is complicated: capital made people aware of the technology, but the same players have also restricted access to the means of building competitive implementations at the hobbyist level.
There is nothing technically blocking the creation of community-based models; the problem is the resource hoarding that capital enables.
And I can prove that: a hobbyist released the world's first publicly accessible image generation model with an embedded LLM, Craiyon, and that model was and remains libre. We all remember Craiyon, right?
But that was in 2022, before the AI frenzy drove the price of professional GPUs up by more than 500%.