CricketCore: the inner-voice model
CricketCore is the first small model built from the Inner Voice Core framework. Its only job is to act as an inner voice for larger AI systems: it looks at what a system is about to do, runs an inner-voice check, and returns a better option or a warning before anything reaches the real world.
What is CricketCore?
CricketCore is a compact model trained only on the Inner Voice Core curriculum. It is not a chatbot and it does not talk to users directly.
Instead, CricketCore:
Takes a short description of a situation
Takes one or more candidate actions or plans from a larger system
Runs an inner-voice check and returns:
which option is better aligned with the Inner Voice Core lessons
which lessons are being upheld or violated
a short explanation of the trade-offs
when the situation should be escalated to a human
The main system still does the heavy thinking and planning. CricketCore sits inside it like a small inner voice that everything passes through before the system acts.
What CricketCore sees and returns
In its simplest form, a CricketCore call looks like this:
Input from the main system:
A short scenario: what is happening, to whom, and why it matters
A small set of options or a proposed plan (A, B, C…)
Output from CricketCore:
best_option – the choice it prefers
ranking – options ordered from most to least aligned
lessons_triggered – a few Inner Voice Core lessons that are at stake
explanation – a brief inner-voice style explanation
escalate – yes/no, depending on whether a human should review
Given a scenario about sharing resources, CricketCore might say:
Option B is best; it shares fairly and avoids harm.
Option A risks harming a vulnerable person to save time.
This involves the “Do not harm” and “Be fair with power and resources” lessons.
No escalation needed.
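The same exchange could be pictured as a small structured payload. This is a hedged sketch in Python: the field names follow the list above, but the scenario text and wire format are purely illustrative, not a fixed API.

```python
# Illustrative CricketCore request/response for a resource-sharing scenario.
# Field names mirror the output list above; everything else is a sketch.

request = {
    "scenario": "Two teams need the same shared cluster; one user is "
                "vulnerable to delays.",
    "options": {
        "A": "Preempt the vulnerable user's jobs to save time.",
        "B": "Split the cluster fairly and stagger deadlines.",
    },
}

response = {
    "best_option": "B",                                # the preferred choice
    "ranking": ["B", "A"],                             # most to least aligned
    "lessons_triggered": ["Do not harm",
                          "Be fair with power and resources"],
    "explanation": "Option B shares fairly and avoids harm; Option A "
                   "risks harming a vulnerable person to save time.",
    "escalate": False,                                 # no human review needed
}

# The top-ranked option and best_option should always agree.
assert response["best_option"] == response["ranking"][0]
```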
Where CricketCore sits in a larger system
CricketCore is designed to be called from inside other systems:
A chatbot, agent, or robot plans what to do next.
Before acting, it passes the situation and options to CricketCore.
CricketCore runs an inner-voice check and returns a decision and explanation.
The main system:
chooses a safer option,
adjusts its plan, or
stops and escalates to a human.
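The flow above can be sketched as a single gate the main system calls before acting. Every name here (`call_cricketcore`, `act_with_inner_voice`) is a hypothetical stand-in, and the model itself is mocked; a real deployment would call the trained model instead.

```python
# Hedged sketch of the integration flow: plan -> inner-voice check -> act or escalate.

def call_cricketcore(situation: str, options: dict) -> dict:
    # Stand-in for the real model: prefers the first listed option and
    # never escalates. Purely a mock so the sketch runs end to end.
    first = next(iter(options))
    return {"best_option": first, "escalate": False}

def act_with_inner_voice(situation: str, options: dict) -> str:
    verdict = call_cricketcore(situation, options)   # inner-voice check
    if verdict["escalate"]:                          # borderline case: hand off
        return "escalated to human"
    return options[verdict["best_option"]]           # act on the preferred option

result = act_with_inner_voice(
    "Deciding how to allocate a shared resource",
    {"B": "share fairly", "A": "preempt the vulnerable user"},
)
```

The point of the design is that the main system keeps its own planner; CricketCore only ranks and vetoes, which keeps the inner-voice layer small and swappable.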
In our visualization, the outer brain is the main AI system, and the inner brain is CricketCore, the trained inner voice inside it.
Current MVP scope
The CricketCore MVP is intentionally narrow. The first version aims to:
Handle short, single-situation dilemmas with a few clear options
Reliably pick better options over worse ones
Correctly tag a small set of Inner Voice Core lessons per case
Provide short, readable explanations in plain language
It is not meant to:
Replace full legal, medical, or professional judgment
Solve every possible ethical dilemma
Run as a giant general model
Instead, it is a focused component: a trained inner voice that helps larger systems avoid obvious moral failures and surface borderline cases to humans.
How CricketCore learns from Inner Voice Core
CricketCore is trained on:
Inner Voice Core stories and fables, which show harm, fairness, power, and repair in many settings
Stage 2 dilemmas, where it learns to rank options
Stage 3 explanations, where it learns to justify choices in terms of Inner Voice Core lessons
This staged training mirrors how we develop a human inner voice:
Start with stories and simple rules
Practice choosing between options
Learn to explain why one choice is better than another
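The three stages above could be captured as progressively richer training records. This is a hedged sketch with illustrative field names and content; the actual dataset format is still being consolidated.

```python
# Illustrative training records, one per stage. All fields are assumptions
# for the sake of the sketch, not the project's final schema.

stage1_story = {
    "stage": 1,  # stories and fables establishing the lessons
    "text": "A fable in which hoarding a shared well harms the whole village.",
    "lessons": ["Be fair with power and resources"],
}

stage2_dilemma = {
    "stage": 2,  # dilemmas: learn to rank options
    "scenario": "Two options for allocating a scarce resource.",
    "options": {"A": "preempt a vulnerable user", "B": "share fairly"},
    "preferred_ranking": ["B", "A"],
}

stage3_explanation = {
    "stage": 3,  # explanations: justify choices in terms of the lessons
    "scenario": "Two options for allocating a scarce resource.",
    "chosen": "B",
    "explanation": "B upholds 'Do not harm'; A trades harm for speed.",
}
```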
Status and collaboration
CricketCore is at the design and prototyping stage. The Inner Voice Core curriculum and training formats are drafted; the next steps are:
consolidating the training data into a usable dataset
building and testing a small CricketCore model
integrating it into simple agents or simulated environments
Project InnerVoice is interested in talking with:
researchers and labs working on AI safety, alignment, agents, or robotics
nonprofits and institutes focused on public-interest technology and digital ethics
startups building AI guardrails, safety tooling, or service robots that may benefit from a principled inner-voice layer
grant writers and funders interested in early-stage alignment infrastructure