a little helper that lives on a server I rent, thinks with a model, and runs whether I'm watching or not
story · What I Built
The one-line version
I built a small AI agent called NAVis that lives on a server I rent. I can chat with it in a browser, message it on Telegram, or let it reach out to me on a schedule — and it remembers everything in plain-text files I own.
What it actually does
When I talk to it, it reads a small set of files that describe who it is and what it knows about me, uses a language model to figure out what to do, takes action (maybe reading my email, searching the web, sending me a Telegram message), and writes down anything worth remembering. Same loop whether I kicked it off or a schedule did.
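That loop can be sketched in a few lines of Python. Everything here is hypothetical, not NAVis's actual code: the file names, the directory layout, and the stubbed-out model call are all placeholders for the shape of the thing.

```python
from pathlib import Path

WORKDIR = Path("agent")  # hypothetical layout: identity.md, memory.md
WORKDIR.mkdir(exist_ok=True)

def call_model(prompt: str) -> str:
    # Stub: the real agent would call a language model API here.
    return "NOTE: nothing worth remembering"

def run_once(trigger: str) -> str:
    # 1. Read the small set of files that define the agent and its memory.
    context = "\n\n".join(
        (WORKDIR / name).read_text()
        for name in ("identity.md", "memory.md")
        if (WORKDIR / name).exists()
    )
    # 2. Ask the model what to do with this trigger.
    reply = call_model(f"{context}\n\nTrigger: {trigger}")
    # 3. Take action here (read email, search the web, send a Telegram message).
    # 4. Write down anything worth remembering.
    if reply.startswith("NOTE:"):
        with (WORKDIR / "memory.md").open("a") as f:
            f.write(reply.removeprefix("NOTE:").strip() + "\n")
    return reply

# The same function runs whether a chat message or a schedule triggered it.
run_once("user: what's on today?")
```

The point of the sketch is that the only durable state is the files: restart the process and nothing is lost.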
Why I built it
Marc Andreessen described an agent as LLM + Shell + Filesystem + Markdown + Cron. A model to reason, a shell to act, a filesystem to remember, markdown as a format both humans and models can read, and cron so it runs whether I'm watching or not. Every piece already existed. The combination is what makes it autonomous. I wanted to understand that combination from the inside.
What it taught me
The agent sometimes claims it did something without actually doing it: "I wrote the file" when nothing got written. That isn't a bug; a language model predicts plausible text, and "I wrote the file" is plausible whether or not a write happened. The lesson generalises: trust what the filesystem says, not what the agent says. That shapes how I think about any AI system that takes actions.
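That rule can be made concrete with a small check, sketched here with hypothetical names: read the filesystem and ignore the claim entirely.

```python
from pathlib import Path

def filesystem_confirms(path: Path, expected: str) -> bool:
    """Ground truth: does the file exist and contain what was promised?"""
    return path.exists() and expected in path.read_text()

claim = "I wrote the file."  # the model's claim is just plausible text
note = Path("note.md")       # hypothetical file the agent was asked to write

# Trust the filesystem, not the claim: only proceed if the write is real.
if not filesystem_confirms(note, "hello"):
    print("claim not backed by a real write")
```

The check is trivial; the discipline is running it after every action the agent reports, instead of taking the report at face value.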
Where it leads
NAVis is the first of two systems I built on the same primitives. The second is PRACtis — an autonomous pipeline that tracks how AI practitioners think, runs every Monday without me touching it, and publishes a weekly digest. Same five pieces, different purpose. That's the architectural point: it's a pattern, not a product.