
Responsible AI: Our principles

AI is making decisions that affect people's careers. We believe that comes with obligations — to transparency, to fairness, and to keeping humans in control.

Charter

Why we wrote this

Effective: March 2026
Next review: March 2027
Owner: TiJUBU Product & Legal
Applies to: All TiJUBU AI features

TiJUBU sits at the intersection of AI and employment. Our platform influences how people are seen inside organisations — which skills they're recognised for, which roles they're considered for, how their potential is mapped. That is not a neutral function.

We wrote this charter because we believe technology companies building AI for HR have an obligation to be specific about their principles — not to publish values that can mean anything, but to make concrete commitments that can be held to account.

These six principles govern every AI feature in TiJUBU today and everything we build from here.

Six Principles
01

People are never reduced to a score

AI in TiJUBU produces signals, not verdicts. Career trajectories, skill gaps and potential are surfaced as inputs to human conversations — never as automated decisions about someone's future.

No automated hiring, promotion or dismissal decisions
Every AI output is labelled as a recommendation, not a result
Managers and HR teams retain full override authority at all times
Employees can request a human review of any AI-generated assessment
02

We actively work to reduce bias

AI systems trained on historical data inherit historical inequities. We treat bias as an ongoing engineering and governance problem, not a solved one.

Bias audits on career and skills recommendation models before any release
Demographic fairness metrics tracked continuously across role, gender and tenure
Skills taxonomy reviewed for exclusionary language and credential inflation
Third-party fairness evaluations as part of our roadmap to SOC 2
03

You always know when AI is involved

Employees and managers will never unknowingly interact with an AI system. Every AI-assisted feature in TiJUBU is clearly labelled, explained, and can be opted out of.

All AI-generated content carries a clear label within the product UI
Feature descriptions explain what data is used and how
Admins can disable any AI feature independently for their organisation
No hidden scoring, shadow ranking or unexplained prioritisation
04

Your data is not our training set

Customer data — including employee profiles, skills, career histories and compensation — is never used to train AI models, ours or anyone else's.

Training data isolation: customer data never enters model fine-tuning pipelines
Third-party AI vendors contractually prohibited from using customer data for training
Data used only for the purpose explicitly stated at the point of collection
Full deletion on contract termination, including from any AI inference logs
05

Accountability is built in, not bolted on

TiJUBU maintains a named AI accountability function. Every AI feature has an owner responsible for its fairness, accuracy and compliance with this charter.

Dedicated AI Ethics review for every new AI feature before release
Full audit log of every AI-assisted action, accessible to customers
A published incident process for reporting AI-related harms
Annual public update to this charter, with a changelog
06

EU AI Act alignment

TiJUBU's AI features interact with employment decisions — a category the EU AI Act classifies as high-risk. We are actively implementing the required controls ahead of enforcement deadlines.

Risk classification completed for all current AI features
Fundamental rights impact assessment in progress
Technical documentation and conformity assessment on roadmap
Human oversight and transparency requirements already met in product design

Questions or concerns?

If you believe a TiJUBU AI feature has produced a biased, unfair or unexplained outcome, contact us. All reports are reviewed by our AI accountability function within five business days.