Get Your Own AI-Powered, Self-Hosted Search Engine: A Perplexity Clone with Farfalle
LLM-Powered Search? Yes, It Is Possible!
LLM-powered chat engines use large language models to generate human-like text responses from input prompts. Because these models are trained on extensive datasets, they can understand context and provide relevant, coherent replies.
They can be integrated into applications to enable natural-language interactions, effectively serving as conversational agents.
Perplexity AI is a conversational search engine that leverages large language models to provide direct answers to user queries, accompanied by cited sources for verification.
It offers both free and subscription-based services, with the paid version granting access to advanced models like GPT-4 and Claude 3.5.
What is Farfalle?
Farfalle gives you a free, self-hosted alternative to Perplexity AI that you can run locally or on your own private server.
It is an open-source, AI-powered search engine that you can self-host and run with either local or cloud-based Large Language Models (LLMs) for enhanced search capabilities. It supports local models such as Llama 3, Gemma, Mistral, and Phi-3, and integrates with cloud services like OpenAI's GPT-4.
Features
- Local LLM Support: run Llama 3, Mistral, Gemma, and Phi-3 locally via Ollama integration.
- Cloud Model Integration: supports OpenAI GPT-4, GPT-3.5 Turbo, and Groq Llama 3.
- Custom LLMs: easily integrate your own models through LiteLLM (see the sketch after this list).
- Multiple Search Providers: integrates Tavily, SearXNG, Serper, and Bing for versatile search options.
- SearXNG Support: eliminates external search-API dependencies for a fully self-hosted setup.
- Expert, Agent-Based Search: automatically plans and executes multi-step searches tailored to specific domains.
- Chat History: maintains a record of past conversations for reference.
- Answer Questions: uses cloud, local, or custom LLMs for query responses.
- Pre-Built Docker Image: quick deployment with no manual setup.
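Custom models are addressed with LiteLLM's provider/model naming scheme. Exactly where Farfalle reads the model string is defined in its .env-template, so treat the lines below as an illustration of LiteLLM's naming convention rather than Farfalle's exact configuration:
# Standard LiteLLM model identifiers (provider/model); the variable
# Farfalle actually reads is defined in its .env-template
ollama/llama3          # a local Ollama model
openai/gpt-4           # an OpenAI-hosted model
groq/llama3-70b-8192   # a Groq-hosted Llama 3 variant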
Prerequisites
- Docker
- Ollama (if running local models)
- Download any of the supported models: llama3, mistral, gemma, phi3 (see the example below)
- Start the Ollama server:
ollama serve
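Putting the two Ollama steps together, with llama3 shown as the example model (any of the supported models works the same way):
ollama pull llama3   # download the model weights; swap in mistral, gemma, or phi3
ollama serve         # start the server; listens on localhost:11434 by default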
Get API Keys
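Which keys you need depends on your setup: cloud search providers (Tavily, Serper, Bing) and cloud models (OpenAI, Groq) each require their own key, while a fully local setup with Ollama and SearXNG needs none. The keys go into the .env file created in the next step; the variable names below are illustrative, so check the repo's .env-template for the exact ones:
# illustrative .env entries; the authoritative names are in .env-template
TAVILY_API_KEY=...    # if using Tavily as the search provider
OPENAI_API_KEY=...    # if using OpenAI models (GPT-4, GPT-3.5 Turbo)
GROQ_API_KEY=...      # if using Groq-hosted Llama 3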
Install and run
git clone https://github.com/rashadphz/farfalle.git   # fetch the source
cd farfalle && cp .env-template .env                  # create your .env from the template
docker-compose -f docker-compose.dev.yaml up -d       # build and start all services in the background
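Once the containers are up, the web UI is available at http://localhost:3000 (the same address used for the search-engine setup below). If something fails to start, a standard Compose command lets you tail the output:
docker-compose -f docker-compose.dev.yaml logs -f   # follow logs for all services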
Use Farfalle as a Search Engine
To use Farfalle as your default search engine, follow these steps:
- Open your browser's settings
- Go to 'Search Engines'
- Create a new search engine entry with this URL: http://localhost:3000/?q=%s
- Add the search engine.
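The %s placeholder is the standard browser token for the search query: the browser URL-encodes whatever you type and substitutes it for %s, so searching for "what is farfalle" opens http://localhost:3000/?q=what%20is%20farfalle.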
License
Apache-2.0 License