Ethical AI Policy
We are illustrators, artists, writers and game designers, but we are also technologists. Creation means making mistakes, learning and understanding. You need to understand before you can judge.
We too are in a process of understanding and judgement. Please join us on this journey.
At LoreKeeper we believe in Human Machine Teaming, because it produces the best outcomes. It is us, the humans, who are the spark: the creation, the ideas, the twists. A phone is quicker than a letter, but either one means nothing without someone on the other end to talk to.
We use this as a tool for fast creation, to play with our friends. We choose when to put on the silly voice, which lines to use and which to throw away. Our playgroup, and your playgroup, matter to us.
We believe this is a new way of consuming content: reading books, talking to books. For that to even exist, books need to be written in the first place, with emotion and empathy. For the first time ever, books talk back.
We will do everything in our power to minimise data exposure. Your contact details and your documents are not shared with AI providers. Only individual chat queries are sent to the AI for processing, and we are constantly working to reduce even that exposure.
We are exploring and learning. We don't have all the answers all the time, and we feel it is better to be part of the conversation than shut out by big tech. We want a conversation about how to navigate this new space. We are all building this from the ground up.
We are constantly looking for ways creators can be paid. The creative industries matter to us. AI should augment human creativity, not replace it.
Our move to Anthropic
When we first built LoreKeeper, we used OpenAI's GPT models to power chat and content generation. As we grew and learned more about the AI landscape, we made the decision to switch our core chat engine to Anthropic's Claude.
We feel Anthropic provides a better service for creative worldbuilding: Claude produces richer, more nuanced writing for the kind of storytelling our users rely on. Beyond quality, we also trust Anthropic more on the values that matter to us, including safety research, responsible AI development, and transparency about how models are trained and deployed.
This is part of our ongoing commitment to choosing the best and most responsible AI partners. As the landscape evolves, we will continue to evaluate and adapt.
What this means for your data:
- Chat queries are processed by Anthropic's Claude; your documents and contact details are never sent to AI providers
- Image generation, video, and sound effects use specialist models separate from chat
- Your lore and world data stays on our servers and is never used for AI training
Questions about our AI policy? Reach out at Contact@Lorekeeper.co.uk