Responsible AI. Our principles.
AI is making decisions that affect people's careers. We believe that comes with obligations — to transparency, to fairness, and to keeping humans in control.
Why we wrote this
TiJUBU sits at the intersection of AI and employment. Our platform influences how people are seen inside organisations — which skills they're recognised for, which roles they're considered for, how their potential is mapped. That is not a neutral function.
We wrote this charter because we believe technology companies building AI for HR have an obligation to be specific about their principles — not to publish values that can mean anything, but to make concrete commitments against which we can be held to account.
The principles below govern every AI feature in TiJUBU today and everything we build from here.
People are never reduced to a score
AI in TiJUBU produces signals, not verdicts. Career trajectories, skill gaps and potential are surfaced as inputs to human conversations — never as automated decisions about someone's future.
We actively work to reduce bias
AI systems trained on historical data inherit historical inequities. We treat bias as an ongoing engineering and governance problem, not a solved one.
You always know when AI is involved
Employees and managers will never unknowingly interact with an AI system. Every AI-assisted feature in TiJUBU is clearly labelled, explained and offers an opt-out.
Your data is not our training set
Customer data — including employee profiles, skills, career histories and compensation — is never used to train AI models, ours or anyone else's.
Accountability is built in, not bolted on
TiJUBU maintains a named AI accountability function. Every AI feature has an owner responsible for its fairness, accuracy and compliance with this charter.
EU AI Act alignment
TiJUBU's AI features interact with employment decisions — a category the EU AI Act classifies as high-risk. We are actively implementing the required controls ahead of enforcement deadlines.
Questions or concerns?
If you believe a TiJUBU AI feature has produced a biased, unfair or unexplained outcome, contact us. All reports are reviewed by our AI accountability function within five business days.