Hey Raphael!
What about Llama or DeepSeek? Any locally run LLM would do. A lightweight, efficient model would be ideal.
Cheers!