Get your Own AI-Powered Self-hosted Search Engine Perplexity Clone with Farfalle

LLM-Powered Search? Yes, It Is Possible!

LLM-powered chat engines function by utilizing large language models to generate human-like text responses based on input prompts. These models are trained on extensive datasets, enabling them to understand context and provide relevant, coherent replies.

They can be integrated into applications to facilitate natural language interactions, effectively serving as conversational agents.
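In practice, "conversational agent" just means: send a prompt, receive text back. A minimal sketch against a locally running Ollama server's chat endpoint (this assumes Ollama's default port 11434 and that a llama3 model has been pulled; adjust the model name and URL to your setup):

```python
import json
import urllib.request


def build_chat_payload(prompt: str, model: str = "llama3") -> dict:
    """Build a single-turn request body for an OpenAI-style chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete reply instead of a token stream
    }


def chat(prompt: str, base_url: str = "http://localhost:11434") -> str:
    """Send the prompt to a local Ollama server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Tools like Farfalle wrap exactly this kind of call, adding search results to the prompt so the model can answer with sources.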

Perplexity AI is a conversational search engine that leverages large language models to provide direct answers to user queries, accompanied by cited sources for verification.

It offers both free and subscription-based services, with the paid version granting access to advanced models like GPT-4 and Claude 3.5.


What is Farfalle?

Farfalle gives you a free, self-hosted alternative to Perplexity AI that you can run locally or on your own private server.

It is an open-source AI-powered search engine that allows users to self-host and utilize both local and cloud-based Large Language Models (LLMs) for enhanced search capabilities. It supports models such as Llama 3, Gemma, Mistral, and Phi 3, and integrates with services like OpenAI's GPT-4.

Features

  • Local LLM Support: Run Llama 3, Mistral, Gemma, and Phi 3 locally via Ollama integration.
  • Cloud Model Integration: Supports models like OpenAI GPT-4, GPT-3.5 Turbo, and Groq Llama 3.
  • Custom LLMs: Easily integrate your own models through LiteLLM.
  • Multiple Search Providers: Integrates Tavily, SearXNG, Serper, and Bing for versatile search options.
  • SearXNG Support: Run searches without external API dependencies.
  • Agent-Based Search: Automates query planning and execution for more effective search results.
  • Expert Search: Advanced search capabilities tailored to specific domains.
  • Chat History: Maintains a record of past conversations for reference.
  • Pre-Built Docker Image: Quick deployment without manual setup.
  • Answer Questions: Uses cloud, local, or custom LLMs to answer queries with sources.

Prerequisites

  • Docker
  • Ollama (if running local models)
    • Download any of the supported models, for example: ollama pull llama3 (also available: mistral, gemma, phi3)
    • Start the Ollama server: ollama serve

Get API Keys

API keys are only needed for cloud models and hosted search providers (for example OpenAI, Groq, Tavily, Serper, or Bing). If you stick to local models via Ollama and SearXNG, no keys are required.
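The exact variable names are listed in the repository's .env-template; the entries below are an illustrative sketch only (the key names shown are assumptions, so confirm them against the template):

```shell
# .env (sketch; confirm variable names against .env-template)
OPENAI_API_KEY=...
TAVILY_API_KEY=...
```

Leave any key blank if the corresponding provider is not used.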

Install and run

git clone https://github.com/rashadphz/farfalle.git
cd farfalle && cp .env-template .env
# add any API keys to .env, then start the stack
docker-compose -f docker-compose.dev.yaml up -d
# the web UI is then available at http://localhost:3000

Use Farfalle as a Search Engine

To use Farfalle as your default search engine, follow these steps:

  1. Open your browser's settings.
  2. Go to 'Search Engines'.
  3. Create a new search engine entry with the URL http://localhost:3000/?q=%s (the %s is replaced by your query).
  4. Save the entry and set it as your default search engine.
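The %s placeholder is where the browser substitutes your URL-encoded query. The same substitution can be sketched in a few lines of Python (the base URL assumes the default local deployment on port 3000):

```python
from urllib.parse import quote_plus


def search_url(query: str, template: str = "http://localhost:3000/?q=%s") -> str:
    """Fill the browser-style %s template with the URL-encoded query."""
    return template.replace("%s", quote_plus(query))


print(search_url("self-hosted perplexity alternative"))
# http://localhost:3000/?q=self-hosted+perplexity+alternative
```

Anything typed into the address bar then lands in Farfalle's q parameter, exactly as if it had been entered in the app's own search box.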

License

Apache-2.0 License

Resources & Downloads

  • GitHub repository: rashadphz/farfalle, 🔍 AI search engine to self-host with local or cloud LLMs (https://github.com/rashadphz/farfalle)
  • Farfalle: open-source AI-powered answer engine.
