Responsible AI Policy
How Wouessi designs, builds, and operates AI systems that customers can defend in front of a regulator, an auditor, or a journalist.
Last updated: 2026-05-01 · Version 3.5
1. Our commitment
Wouessi builds AI that customers can defend in front of a regulator, an auditor, or a journalist. This policy is the operating standard our delivery teams hold themselves to and the standard our products are built on.
2. Frameworks we align to
- USA: NIST AI Risk Management Framework (AI RMF) and the Generative AI Profile (NIST AI 600-1).
- International: ISO/IEC 42001 (AI management systems), ISO/IEC 23894 (AI risk).
- EU: EU AI Act, including Article 14 human oversight obligations for high-risk systems.
- Canada: AIDA (when in force), OSFI E-23 model risk guidance, Treasury Board Directive on Automated Decision-Making.
- OECD AI Principles and the Montreal Declaration for Responsible AI.
3. Our seven principles
- Sovereign by construction. Customer data does not leave the customer perimeter unless the customer has explicitly authorized it. Self-hosted is the default.
- Replayable. Every decision a system makes is reconstructable on day 365 with the exact context the model saw on day 1.
- Refusal-tuned. An agent that refuses gracefully is worth more than an agent that hallucinates confidently.
- Human in the loop where it matters. Article-14-style oversight on any decision that touches a person's rights, money, or care.
- Bias-tested. Pre-deployment fairness review against documented protected attributes; post-deployment drift monitoring.
- Bilingual by architecture. EN+FR parity is enforced at the asset and prompt level, not added in a translation pass.
- Energy-aware. We size the model to the task; we do not run a 70B model where a 7B model meets the bar.
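The "Replayable" principle above implies an append-only audit record that captures the exact context a model saw at decision time. The sketch below is illustrative only: the field names, JSON-lines format, and hash check are assumptions for the sake of example, not Wouessi's actual logging schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One append-only entry per model decision (illustrative schema)."""
    decision_id: str
    timestamp: str       # ISO 8601, UTC
    model_id: str        # exact model and version that produced the output
    context: dict        # full input the model saw, verbatim
    output: str          # the decision or generation returned
    context_sha256: str  # integrity hash so a replay can detect tampering

def record_decision(log_path, decision_id, model_id, context, output):
    """Append a replayable record to a JSON-lines audit log."""
    payload = json.dumps(context, sort_keys=True)
    rec = DecisionRecord(
        decision_id=decision_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_id=model_id,
        context=context,
        output=output,
        context_sha256=hashlib.sha256(payload.encode()).hexdigest(),
    )
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")
    return rec

def replay(log_path, decision_id):
    """Recover the exact context a past decision was made with, or None."""
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            if rec["decision_id"] == decision_id:
                payload = json.dumps(rec["context"], sort_keys=True)
                digest = hashlib.sha256(payload.encode()).hexdigest()
                assert digest == rec["context_sha256"], "context was altered"
                return rec
    return None
```

Because the record stores the verbatim context rather than a pointer to mutable state, a review on day 365 reads the same input the model read on day 1.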
4. Scope
This policy applies to any AI system Wouessi builds, deploys, or operates, whether for ourselves or on behalf of customers, including the upcoming Stanza-46 platform.
5. Prohibited uses
We will not build or operate AI systems for: untargeted biometric surveillance, social scoring of individuals, real-time emotion recognition in workplaces or schools, or any application prohibited under the EU AI Act, the Canadian Digital Charter, or applicable US state law.
6. Redress
Any individual subject to a Wouessi-operated decision system has the right to a written explanation, a human review path, and (where applicable under GDPR Article 22, Quebec Law 25 §12.1, or Colorado AI Act §6) a contestation channel. Email info@wouessi.com.
7. Review cadence
This policy is reviewed twice per year and any time a regulator publishes new binding guidance. The next scheduled review is November 2026.
8. Contact
AI Governance Lead · info@wouessi.com · 1-844-WOUESSI