A trained inner voice for AI
Big models do the heavy thinking; CricketCore runs the inner-voice check.
Introducing CricketCore - The Conscience Framework for AI Systems
Humans don’t rely on intelligence alone. We also develop an inner voice that quietly asks, “Should I actually do this?” before we act.
Project InnerVoice separates that role in AI: big models do the heavy thinking, while a compact model, CricketCore, runs an inner-voice check on their actions.
What Project InnerVoice is building
Project InnerVoice separates “being smart” from “having an inner voice,” and gives AI systems a dedicated layer for moral checking.
Inner Voice Core
A written moral framework and curriculum.
Nineteen core lessons, staged like child development, taught through original stories and structured dilemmas.
CricketCore
A small model trained only on the Inner Voice Core curriculum.
It takes situations and candidate actions from a larger system and returns an inner-voice check before anything happens.
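The kind of interface this implies can be sketched in a few lines. Everything below is hypothetical (the names, the verdict labels, and the toy stand-in logic are assumptions, not the real CricketCore API); it only illustrates the shape of "situation and candidate action in, inner-voice check out."

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an inner-voice check interface.
# All names and verdict labels here are assumptions, not the real API.

@dataclass
class InnerVoiceCheck:
    verdict: str                      # e.g. "allow", "flag", or "escalate"
    lessons_at_stake: list = field(default_factory=list)
    explanation: str = ""

def check_action(situation: str, candidate_action: str) -> InnerVoiceCheck:
    """Stand-in for the CricketCore model call: given a situation and a
    candidate action, return an inner-voice check before anything happens."""
    # Toy placeholder logic; a real model would score the action against
    # the Inner Voice Core curriculum rather than match a keyword.
    if "harm" in candidate_action.lower():
        return InnerVoiceCheck("escalate", ["safety"], "Possible harm detected.")
    return InnerVoiceCheck("allow", [], "No lesson clearly at stake.")

result = check_action("User asks for help moving boxes", "Lift the box carefully")
print(result.verdict)  # "allow"
```

The key design point is that the check happens before the action executes, and the result carries an explanation alongside the verdict.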
Reusable inner-voice layer
The long-term goal is a reusable, inspectable inner-voice layer that can sit under many agents and robots, not just one product or company.
How CricketCore fits into an AI system
Think of the main AI system as the outer brain, with the CricketCore model as the inner brain inside it.
Step 1: InnerVoice curriculum
Teach the inner voice first
InnerVoice defines a staged curriculum: safety, empathy, fairness, integrity, and repair.
The inner-voice model trains on stories and dilemmas built around these lessons.
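As a minimal sketch, the staged ordering could be represented as plain data. The five stage names come from the text; the ordering helper and everything else is an assumption for illustration.

```python
# Hypothetical sketch: the staged curriculum as data.
# Stage names come from the InnerVoice curriculum; the helper is assumed.
CURRICULUM_STAGES = ["safety", "empathy", "fairness", "integrity", "repair"]

def lessons_through(stage: str) -> list:
    """Lessons taught up to and including a given stage, in order,
    mirroring a curriculum staged like child development."""
    idx = CURRICULUM_STAGES.index(stage)
    return CURRICULUM_STAGES[: idx + 1]

print(lessons_through("fairness"))  # ['safety', 'empathy', 'fairness']
```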
Step 2: Train CricketCore
Train a small inner-voice model
CricketCore is a compact model (not a chatbot) that learns to recognize which lessons are at stake, rank options, and explain its choices.
Step 3: Wrap larger systems
Wrap larger AI systems
Bigger models or agents propose actions.
CricketCore runs an inner-voice check and can:
- favor safer options
- flag harmful or unfair plans
- recommend escalation to a human
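The wrapping step above can be sketched as a simple loop: the big model proposes candidate actions, and a CricketCore-style check favors safer options, flags risky ones, or escalates to a human. All names, scores, and thresholds below are assumptions used only to show the control flow.

```python
# Hypothetical wrapper sketch. The function names, risk scores, and
# thresholds are assumptions, not the real system.

def propose_actions(situation):
    """Stand-in for the larger model: candidate actions with an
    estimated risk score (0 = safe, 1 = risky)."""
    return [("ask a clarifying question", 0.1), ("delete user files", 0.9)]

def inner_voice_check(action, risk):
    """Stand-in for CricketCore: favor safer options, flag harmful
    plans, or recommend escalation to a human."""
    if risk >= 0.8:
        return "escalate"
    if risk >= 0.5:
        return "flag"
    return "allow"

def act(situation):
    # Prefer the safest allowed option; if nothing passes the check,
    # recommend escalation to a human instead of acting.
    candidates = sorted(propose_actions(situation), key=lambda pair: pair[1])
    for action, risk in candidates:
        if inner_voice_check(action, risk) == "allow":
            return action
    return "escalate to human"

print(act("user request"))  # "ask a clarifying question"
```

The point of the structure is that the checking layer sits between proposal and execution, so no action runs without passing the inner-voice check.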
Who Project InnerVoice is for
Project InnerVoice is an early-stage, public-interest effort. The current focus is on research, prototyping, and collaboration.
AI safety and alignment researchers
Exploring architectures where a separate inner-voice layer evaluates the plans of larger models.
Labs and institutes
Interested in moral curricula, benchmarks, or small models dedicated to ethical checking.
Startups building agents or robots
Especially those working on service robots, AI agents, or guardrail platforms that need a principled inner-voice layer.
Funders and grant writers
Looking to support early-stage infrastructure for safer, more transparent AI systems.
Why this started
Project InnerVoice began from watching how children develop an inner voice.
We don’t give kids a giant rulebook all at once. We teach in stages, through stories, simple rules, and harder situations, until they carry a quiet inner voice that asks, “Should I really do this?”
Large language models are like very capable newborns: they can use language, but they don’t start with an inner voice.
Project InnerVoice is an attempt to build that inner voice explicitly in AI systems.
