Working with local LLMs
List of Frameworks and Tools
Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama2
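Once the Ollama server is running, it also listens locally on port 11434, so you can script queries instead of using the interactive prompt. Below is a minimal stdlib-only sketch against Ollama's /api/generate endpoint; it assumes the default port and that a model such as llama2 has already been pulled.

```python
import json
import urllib.request

# Default local Ollama endpoint (assumes a standard install).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    # Minimal payload for /api/generate; stream=False requests
    # a single JSON response instead of streamed chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Requires a running `ollama serve` with the model available.
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs the server running):
# generate("llama2", "Why is the sky blue?")
```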
LLM
pip install llm
llm install llm-gpt4all
llm -m the-model-name "Your query"
llm aliases set falcon ggml-model-gpt4all-falcon-q4_0
llm -m ggml-model-gpt4all-falcon-q4_0 "Tell me a joke about computer programming"
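The same queries can be automated from a script. As a sketch, here is a thin stdlib wrapper that shells out to the llm CLI; the model name "falcon" assumes the alias defined above has been set.

```python
import subprocess

def build_llm_command(model: str, query: str) -> list[str]:
    # Assemble the argv for: llm -m <model> "<query>"
    return ["llm", "-m", model, query]

def run_llm(model: str, query: str) -> str:
    # Requires the llm CLI (and the chosen model) to be installed.
    result = subprocess.run(
        build_llm_command(model, query),
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Example (uses the alias created earlier):
# run_llm("falcon", "Tell me a joke about computer programming")
```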
Working with Hugging Face
List of Models
https://huggingface.co/models?sort=trending&search=gguf
Install models with Ollama
https://huggingface.co/docs/hub/ollama
The command has the following format:
ollama run hf.co/{username}/{repository}
Please note that you can use both hf.co and huggingface.co as the domain name.
Here are some models you can try:
ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
ollama run hf.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
ollama run hf.co/arcee-ai/SuperNova-Medius-GGUF
ollama run hf.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF