Your intelligent, AI-powered PDF research companion — powered locally by LLaMA 3 via Ollama.
Noto.ai transforms static PDFs into interactive, AI-narrated research companions. Ask questions, summarize content, and interact naturally — all while keeping your AI engine local and private using Meta’s LLaMA 3 via Ollama.
| Feature | Description |
|---|---|
| 📄 PDF Reader | Upload and parse any PDF |
| 🤖 Chat with PDF | Interact with your documents using LLaMA 3 |
| 🧠 Summarization | AI-generated summaries of sections or pages |
| 📍 Search & Navigate | Quickly find terms and content |
| 🌙 Dark Mode | Kivy-powered, user-friendly interface |
| 🖥️ Works Offline | No cloud required; local model inference |
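Summarizing a whole document generally means splitting the extracted text into model-sized pieces first. A minimal chunking sketch (the function name, chunk size, and overlap are illustrative assumptions, not values from Noto.ai's code):

```python
def chunk_text(text: str, max_chars: int = 3000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks of at most max_chars characters,
    so each piece fits comfortably in the model's context window."""
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        if start + max_chars >= len(text):
            break
        # Step forward, keeping a small overlap so sentences aren't cut blind
        start += max_chars - overlap
    return chunks
```

Each chunk can then be summarized independently, and the partial summaries merged in a final pass.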
- UI: Python + Kivy + KivyMD
- AI Backend: LLaMA 3 via Ollama
- PDF Parsing: PyMuPDF (`fitz`)
- Text-to-AI: local API calls to `http://localhost:11434/api/generate`
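Ollama's `/api/generate` endpoint accepts a JSON body with the model name and prompt and returns the reply in a `response` field. A minimal sketch of such a call using only the standard library (function and parameter names are illustrative, not from Noto.ai's code):

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON reply instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_llama(prompt: str, url: str = "http://localhost:11434/api/generate") -> str:
    """POST a prompt to the local Ollama server and return its reply text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Setting `"stream": True` instead would yield one JSON object per token, which is how a UI can display the answer as it is generated.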
git clone https://github.com/sagnikdatta2k6/noto-ai-app.git
cd noto-ai-app
pip install -r requirements.txt
- Install Ollama
- Pull the LLaMA 3 model: `ollama pull llama3`
- Run the model: `ollama run llama3`
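Before launching the app, it can help to verify that the Ollama server is actually reachable. A small hypothetical check (Ollama's `/api/tags` endpoint lists locally pulled models):

```python
import urllib.request

def ollama_is_up(url: str = "http://localhost:11434/api/tags",
                 timeout: float = 2.0) -> bool:
    """Return True if the local Ollama server answers at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, DNS failure, etc.
        return False
```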
python main.py
If needed, configure `.env`:
OLLAMA_API_URL=http://localhost:11434
MODEL_NAME=llama3
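These variables can be read at startup with the standard library, falling back to the defaults above when unset (a sketch; Noto.ai may load them differently, e.g. via `python-dotenv`):

```python
import os

# Fall back to the documented defaults when the variables are not set
OLLAMA_API_URL = os.environ.get("OLLAMA_API_URL", "http://localhost:11434")
MODEL_NAME = os.environ.get("MODEL_NAME", "llama3")
```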
- Fork the repo
- Create a feature branch: `git checkout -b feature/improve-ui`
- Make your changes
- Submit a pull request ✨
MIT License. See LICENSE.
Sagnik Datta 🌐 GitHub