Friday, August 16, 2024

Running an LLM locally

Running Ollama with Docker

With CPU

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

The -v ollama:/root/.ollama flag stores downloaded models in a named volume so they persist across container restarts, and -p 11434:11434 exposes the Ollama API on the host.


docker exec -it ollama ollama run llama3.1

This pulls the llama3.1 model (if it is not already present) and starts an interactive chat session inside the container.
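Besides the interactive session, the exposed port can be queried over HTTP. A minimal sketch, assuming the container above is running on localhost and the llama3.1 model has already been pulled:

```shell
# Send a single non-streaming generation request to the Ollama API.
# The prompt here is just an illustrative example.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The response is a JSON object whose "response" field contains the generated text; leaving "stream" at its default of true instead returns one JSON object per token.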


References:

Video tutorial for Ollama on Docker, covering both CPU and GPU setups