What do you mean by this? I assume you mean too small for SoTA LLMs. There are many ML applications where 12 GB is more than enough.
Even w.r.t. LLMs, not everyone requires the latest and biggest models. Some "small", distilled and/or quantized LLMs are perfectly usable in under 24 GB of VRAM.
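To put rough numbers on that claim, here's a back-of-envelope sketch of the VRAM needed just for the weights at various parameter counts and quantization levels (assumption: weights only, ignoring KV cache and activation overhead, which add a few GB on top):

```python
def weight_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    # params * (bits / 8) bytes, expressed in GB (1e9 bytes)
    return params_billions * bits_per_weight / 8

# Compare a full-precision 7B model against common 4-bit quantizations
for params_b, bits in [(7, 16), (7, 4), (13, 4), (34, 4)]:
    print(f"{params_b}B @ {bits}-bit ≈ {weight_vram_gb(params_b, bits):.1f} GB")
# 7B @ 16-bit ≈ 14.0 GB   -> tight even on 16 GB cards
# 7B @  4-bit ≈  3.5 GB   -> fits easily in 12 GB
# 13B @ 4-bit ≈  6.5 GB   -> fits in 12 GB with room for context
# 34B @ 4-bit ≈ 17.0 GB   -> needs ~24 GB once cache is included
```

So a 4-bit 13B model genuinely does fit on a 12 GB card, and 24 GB opens up the 30B class.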
Still... tangibly cheaper than even a second-hand 3090, so there is perhaps a market for it.