NodeRiver is an AI governance architecture practice.
We help organisations define how AI systems are allowed to operate on their behalf, establishing intent, boundaries, and accountability before deployment reaches scale or regulatory pressure forces retrospective control.
That means making the hard decisions about AI behaviour: what is permitted, what is prohibited, and when escalation is required. The governance specification documents those decisions, creating a formal artefact for assurance, compliance, and operational control.
This sits upstream of compliance frameworks and risk management processes. ISO/IEC 42001 and the EU AI Act assume this work exists. Most organisations skip it.
We work with organisations deploying customer-facing AI systems, where reputational consequence, regulatory exposure, or operational risk makes governance non-negotiable. The specification we design defines what the AI can do, what it must never do, and when human oversight is required.
NodeRiver was founded by Tom Morrell, who spent 15 years shaping brand strategy and operational governance for organisations under sustained public scrutiny. That work, combined with formal AI systems training (MIT Applied Generative AI, ISO 42001 Lead Implementer), informs our approach: governance is not compliance theatre. It is the deliberate definition of authority before systems act at scale.