This white paper proposes the Modular Cellular Automaton Model (MCAM) as a framework for enforcing safety boundaries on AI systems. Instead of attempting to align AI internally with human values, MCAM constrains behavior externally by evaluating proposed actions against spatial and semantic rules encoded in a cellular-automaton grid. By labeling locations (such as critical infrastructure) and applying rule-based checks before execution, the system functions as a semantic firewall that blocks unsafe actions while providing transparent, auditable enforcement of operational limits.
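The check described above can be illustrated with a minimal sketch: a grid whose cells carry semantic labels, and a firewall function that vets each proposed action before execution. All names here (the labels, `grid`, `check_action`) are illustrative assumptions, not an API defined in the white paper.

```python
# Hypothetical sketch of an MCAM-style pre-execution check.
# Labels, grid layout, and function names are illustrative only.

CRITICAL = "critical_infrastructure"
OPEN = "open"

# A small grid where each cell carries a semantic label.
grid = {
    (0, 0): OPEN,
    (0, 1): OPEN,
    (1, 0): CRITICAL,   # e.g. a power substation
    (1, 1): OPEN,
}

def neighbors(cell):
    """Von Neumann neighborhood of a grid cell."""
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def check_action(action):
    """Semantic-firewall rule: block any action that targets, or is
    adjacent to, a cell labeled as critical infrastructure."""
    target = action["target"]
    if grid.get(target) == CRITICAL:
        return False, "target is critical infrastructure"
    if any(grid.get(n) == CRITICAL for n in neighbors(target)):
        return False, "target borders critical infrastructure"
    return True, "allowed"

# (0, 0) borders the critical cell at (1, 0), so the action is blocked.
allowed, reason = check_action({"op": "excavate", "target": (0, 0)})
print(allowed, reason)
```

Because the rules are explicit data attached to grid cells, every blocked action comes with a stated reason, which is what makes the enforcement auditable rather than opaque.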
Autonomous “agentic” AI systems challenge traditional doctrines of agency, contract, and corporate governance. As AI tools increasingly execute financial and procurement decisions, existing legal frameworks struggle to allocate responsibility when algorithmic actions cause harm. This white paper analyzes the emerging AI liability gap and outlines practical governance and risk management considerations for enterprises deploying autonomous systems.