LIVE-OFFLINE AI: Experience the full power of AI assistance without internet dependency
LocalMind is a comprehensive offline AI assistant with conversation memory, powered by GPT4All and designed for complete privacy and accessibility. It provides intelligent responses across education, healthcare, and general domains while operating entirely on your local machine.
🔒 Complete Privacy - All data stays on your device
🌐 100% Offline - No internet required after setup
🧠 Conversation Memory - Remembers context and learns from interactions
🎯 Domain Expertise - Specialized in Education, Healthcare, and General assistance
💻 Dual Interface - Both CLI and GUI available
⚡ GPT4All Powered - Uses Llama-3.2-1B for intelligent responses
🔍 Vector Memory - FAISS-powered semantic search for conversation history
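The conversation-memory feature above boils down to keeping recent turns and feeding them back to the model as context. The sketch below illustrates the idea with a simple rolling buffer; the class and method names are hypothetical, not taken from LocalMind's source.

```python
from collections import deque

class ConversationMemory:
    """Keeps the most recent turns and builds a context string for the model."""

    def __init__(self, max_turns: int = 10):
        # deque with maxlen drops the oldest turn automatically when full
        self.turns = deque(maxlen=max_turns)

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def context(self) -> str:
        # Flatten recent turns into a prompt prefix the model can condition on
        return "\n".join(f"You: {u}\nAI: {a}" for u, a in self.turns)

mem = ConversationMemory(max_turns=2)
mem.add("Explain photosynthesis", "Plants convert light into chemical energy.")
mem.add("What about cellular respiration?", "Cells break down glucose to release energy.")
print(len(mem.turns))  # → 2
```

A bounded buffer like this keeps prompts short; LocalMind additionally uses FAISS-backed semantic search to recall older, relevant turns beyond the recent window.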
- Download the repository
- Double-click `setup.bat` to install all dependencies
- Double-click `LocalMind.bat` to launch with menu options
```
# Clone the repository
git clone https://github.com/your-username/LocalMind.git
cd LocalMind

# Run setup
setup.bat

# Launch LocalMind
LocalMind.bat
```

Launcher scripts:
- `LocalMind.bat` - Interactive menu to choose CLI or GUI
- `run_cli.bat` - Direct CLI launch
- `run_gui.bat` - Direct GUI launch
```
# Switch domains
domain education       # Switch to education domain
domain healthcare      # Switch to healthcare domain
domain general         # Switch to general domain

# Memory management
memory stats           # View conversation statistics
memory search <query>  # Search conversation history
memory clear           # Clear conversation history

# System commands
status                 # Check system status
help                   # Show all commands
quit                   # Exit LocalMind
```

Example session:

```
🧠 LocalMind - Education Domain

You: Explain photosynthesis
AI: [Detailed explanation with diagrams]

You: What about cellular respiration?
AI: [Continues with context from photosynthesis discussion]
```
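Commands like the ones above are typically routed by a small dispatcher that splits the line into a command and an argument. The sketch below is a minimal illustration of that pattern; it is not LocalMind's actual parser, and the `state` dictionary layout is an assumption.

```python
def dispatch(line: str, state: dict) -> str:
    """Route a CLI line like 'domain education' or 'memory stats' to a handler."""
    cmd, _, arg = line.strip().partition(" ")
    if cmd == "domain" and arg in ("education", "healthcare", "general"):
        state["domain"] = arg
        return f"Switched to {arg} domain"
    if cmd == "memory":
        sub, _, query = arg.partition(" ")
        if sub == "stats":
            return f"{len(state['history'])} turns stored"
        if sub == "search":
            # Simple substring match; the real system uses semantic search
            hits = [t for t in state["history"] if query.lower() in t.lower()]
            return "\n".join(hits) or "No matches"
        if sub == "clear":
            state["history"].clear()
            return "History cleared"
    if cmd == "status":
        return f"Domain: {state['domain']}, turns: {len(state['history'])}"
    if cmd in ("help", "quit"):
        return cmd
    return "Unknown command (try 'help')"
```

Keeping the dispatcher as plain string matching makes it easy to add new commands without touching the model or memory layers.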
```
LocalMind/
├── 🚀 LocalMind.bat      # Main launcher
├── 🔧 setup.bat          # One-click setup
├── 📱 run_cli.bat        # CLI launcher
├── 🖥️ run_gui.bat        # GUI launcher
├── 🐍 localmind.py       # Main application
├── 📂 src/               # Source code
│   ├── 🧠 model/         # AI models & GPT4All
│   ├── 💾 knowledge/     # Vector DB & memory
│   ├── 🎯 domains/       # Specialized domains
│   └── 🖥️ interface/     # CLI & GUI
├── 📋 requirements.txt   # Dependencies
└── 📖 README.md          # This file
```
- AI Engine: GPT4All with Llama-3.2-1B (773MB model)
- Memory System: FAISS vector database with sentence-transformers
- Embeddings: all-MiniLM-L6-v2 for semantic search
- Interface: Rich CLI formatting + Tkinter GUI
- Domains: Modular architecture for specialized responses
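The memory system's core operation is nearest-neighbor search over embedding vectors. The toy sketch below shows that idea with cosine similarity over tiny hand-made vectors; in the real stack, all-MiniLM-L6-v2 produces 384-dimensional embeddings and FAISS performs the search efficiently at scale.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, index, top_k=2):
    """index: list of (text, embedding) pairs; returns the best-matching texts."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output
index = [
    ("photosynthesis notes", [1.0, 0.0, 0.0]),
    ("respiration notes",    [0.0, 1.0, 0.0]),
    ("history notes",        [0.0, 0.0, 1.0]),
]
print(search([0.9, 0.1, 0.0], index, top_k=1))  # → ['photosynthesis notes']
```

This is why `memory search` can find a past turn even when the query shares no exact words with it: matching happens in embedding space, not on raw text.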
- OS: Windows 10/11 (with .bat files), Linux/macOS supported
- Python: 3.8+
- RAM: 4GB minimum, 8GB recommended
- Storage: 2GB free space (includes model downloads)
- CPU: Any modern processor (optimized for CPU inference)
All dependencies are automatically installed via `setup.bat`:

- `torch` - PyTorch for model inference
- `transformers` - HuggingFace transformers
- `gpt4all` - Local LLM integration
- `faiss-cpu` - Vector database for memory
- `sentence-transformers` - Text embeddings
- `rich` - Beautiful CLI formatting
- `tkinter` - GUI framework (built-in)
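Before a question reaches the model, domain selection amounts to choosing a system prompt. The sketch below is a hypothetical prompt builder (the prompt texts and function name are assumptions, not LocalMind's actual code), with the `gpt4all` call shown in comments since it requires the downloaded model.

```python
# Hypothetical domain prompts illustrating how specialization might work
DOMAIN_PROMPTS = {
    "education": "You are a patient tutor. Explain concepts step by step.",
    "healthcare": "You provide general health information, not medical advice.",
    "general": "You are a helpful offline assistant.",
}

def build_prompt(domain: str, question: str, context: str = "") -> str:
    """Assemble a domain-specific prompt, optionally prefixed with past turns."""
    system = DOMAIN_PROMPTS.get(domain, DOMAIN_PROMPTS["general"])
    parts = [system]
    if context:
        parts.append(f"Previous conversation:\n{context}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

# With the gpt4all package installed, the prompt could be sent like:
#   from gpt4all import GPT4All
#   model = GPT4All("Llama-3.2-1B-Instruct-Q4_0.gguf")  # model filename is an assumption
#   reply = model.generate(build_prompt("education", "Explain photosynthesis"))
```

Because the system prompt is just data, adding a new domain means adding one dictionary entry rather than changing inference code.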
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- GPT4All for local LLM capabilities
- FAISS for vector database
- Sentence Transformers for embeddings
- Rich for beautiful CLI
If you encounter any issues or have questions:
- Check the Issues page
- Create a new issue with detailed description
- Include system specs and error logs
⭐ Star this repository if you find LocalMind useful!