Run Ollama AI Model on Your Desktop with this Amazing Free App: Follamac

You need Ollama running on your localhost with a model installed. Once Ollama is running, a model can be pulled either from Follamac or from the command line. From the command line, type something like: ollama pull llama3. If you wish to pull from Follamac, type llama3 into the "Model name to pull" input box and click the PULL button.
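Both routes end up talking to the same Ollama HTTP API. As an illustrative sketch (not Follamac's actual code), here is how a client can pull a model in Python, assuming Ollama's default address http://localhost:11434 and its /api/pull endpoint:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def pull_request_payload(model: str) -> bytes:
    """Build the JSON body for Ollama's /api/pull endpoint."""
    return json.dumps({"name": model, "stream": False}).encode("utf-8")


def pull_model(model: str) -> dict:
    """Ask a running Ollama server to pull a model (blocks until finished)."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/pull",
        data=pull_request_payload(model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (requires a running Ollama server):
#   print(pull_model("llama3"))
```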

Follamac is a free and open-source desktop application that provides a convenient way to work with Ollama and large language models (LLMs).

The Ollama server must be running. On Linux you can start it with the sudo systemctl start ollama.service or ollama serve commands.
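Before launching Follamac, you can verify that the server is actually reachable. A minimal check in Python, assuming the default port 11434 (an Ollama server answers plain HTTP requests at its root URL):

```python
import urllib.error
import urllib.request


def ollama_is_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at base_url, False otherwise."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


# Usage:
#   if not ollama_is_up():
#       print("Start Ollama first, e.g.: ollama serve")
```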

You also need to have some models pulled into the repository. You can pull the latest Mistral with the command ollama pull mistral, or run Follamac and pull the model from there.

You can also download the Ollama client and prebuilt models for Linux, Windows, and macOS from Ollama.com.


Features

  • pulling/deleting models
  • sending prompts to Ollama (chat or generate)
  • selecting the role for a message in chat mode, and the ability to send a system message in generate mode
  • basic options (temperature, threads)
  • basic info about selected model
  • code highlighting
  • multiple chats
  • editing/deleting chats
  • editing/deleting messages
  • copying code or a whole message to the clipboard
  • light and dark theme (defaults to the system setting)
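Follamac's chat mode, roles, system messages, and basic options map onto Ollama's /api/chat endpoint. As an illustrative sketch (not Follamac's actual implementation), this is roughly the request body such a client builds; the field names follow Ollama's API, where temperature and num_thread are standard model options:

```python
import json


def chat_payload(model, messages, temperature=0.7, num_thread=None):
    """Build the JSON body a chat client sends to Ollama's /api/chat."""
    options = {"temperature": temperature}
    if num_thread is not None:
        options["num_thread"] = num_thread  # Ollama's thread-count option
    return {"model": model, "messages": messages, "options": options, "stream": False}


payload = chat_payload(
    "mistral",
    [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain Ollama in one sentence."},
    ],
    temperature=0.2,
    num_thread=4,
)
print(json.dumps(payload, indent=2))
```

Selecting a different role for a message simply changes the "role" field; the system message travels as the first entry of the messages list.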

Platforms

Follamac works on Windows, Linux, and macOS.

License

  • MIT License

Resources & Downloads

  • GitHub repository: pejuko/follamac