Local AI
Converse with Ollama models running on your local machine
Problem
LLMs are everywhere, but they require significant compute to run and significant money to use. For production use cases that trade-off is acceptable; for personal use it is not. Local AI models are becoming more capable and accessible, yet there is no easy way to interact with them through a local, privacy-first, standalone GUI. This project aims to solve that problem.
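For context, talking to a local model today typically means hand-rolling HTTP calls against Ollama's REST API. The sketch below shows what that looks like; it assumes Ollama is running on its default port (11434) and that a model such as `llama3` has already been pulled (the model name is illustrative).

```python
# A minimal sketch of chatting with a local Ollama model over its REST API.
# Assumes Ollama is listening on its default port (11434) and that the
# named model has already been pulled; "llama3" is illustrative.
import json
import urllib.request

def chat(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    # The non-streaming response carries the assistant reply under "message".
    return body["message"]["content"]

if __name__ == "__main__":
    print(chat("Why run language models locally?"))
```

Workable, but hardly a conversation: there is no history, no streaming output, and no interface beyond a terminal. A standalone GUI removes that friction.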