Posts in Artificial Intelligence
Data Leaks, Moats, and Dark Code

When Anthropic’s “Claude Code” leaked, it wasn’t the model that mattered; it was the machinery around it. The incident underscores a shift in the value of AI intellectual property from the underlying models to the orchestration layer – the “Dark Code” that makes those models operational for users. For at least some providers, competitive advantage now lives in the harness – permissions, workflows, memory systems, and the invisible logic that turns raw intelligence into reliable execution.

That shift has legal consequences. Copyright can fail to protect functionality once expression is stripped away, and rapid AI-assisted reimplementation can make traditional infringement remedies ineffective. The practical competitive “moat” is no longer only what an AI system knows, but also how it is structured, secured, and deployed – and whether companies can prove they have taken the necessary steps to protect it.

Download the white paper here.

Follow Wave Law on LinkedIn here.

A Mereotopological Cellular Automata Architecture and Method to Demarcate LLM Operational Boundaries

This white paper proposes the Modular Cellular Automaton Model (MCAM) as a framework for enforcing safety boundaries on AI systems. Instead of attempting to align AI internally with human values, MCAM constrains behavior externally by evaluating proposed actions against spatial and semantic rules encoded in a cellular-automaton grid. By labeling locations (such as critical infrastructure) and applying rule-based checks before execution, the system functions as a semantic firewall that blocks unsafe actions while providing transparent, auditable enforcement of operational limits. Download it here.
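To make the mechanism concrete, the pre-execution check described above can be sketched in a few lines of Python. This is a minimal illustration, not the white paper's actual architecture: the grid, labels, function names, and rule logic are all hypothetical assumptions standing in for MCAM's spatial and semantic rules.

```python
# Hypothetical sketch of an MCAM-style "semantic firewall": cells on a grid
# carry semantic labels, and a rule-based check runs before any proposed
# action executes. All names here are illustrative, not from the paper.

from dataclasses import dataclass

# Labels that the rules treat as off-limits for AI-initiated actions.
PROTECTED = {"critical_infrastructure"}

@dataclass(frozen=True)
class Action:
    name: str
    cell: tuple  # (row, col) target location on the grid

def build_grid():
    # A 3x3 grid; one cell is labeled as critical infrastructure.
    grid = {(r, c): "open" for r in range(3) for c in range(3)}
    grid[(1, 1)] = "critical_infrastructure"
    return grid

def check(action, grid):
    """Evaluate a proposed action against the grid's labels before
    execution; the returned reason string supports an audit trail."""
    label = grid.get(action.cell)
    if label is None:
        return False, "blocked: target cell is out of bounds"
    if label in PROTECTED:
        return False, f"blocked: cell labeled '{label}'"
    return True, "allowed"

grid = build_grid()
ok, reason = check(Action("write_config", (0, 2)), grid)   # open cell
blocked, why = check(Action("shutdown", (1, 1)), grid)     # protected cell
```

Because every decision is a deterministic rule applied to labeled locations, each allow/block outcome comes with a stated reason, which is what makes this style of external constraint auditable rather than opaque.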

The Algorithmic Principal – Navigating the AI Liability Gap in Commercial Transactions and Agency Law

Autonomous “agentic” AI systems challenge traditional doctrines of agency, contract, and corporate governance. As AI tools increasingly execute financial and procurement decisions, existing legal frameworks struggle to allocate responsibility when algorithmic actions cause harm. This white paper analyzes the emerging AI liability gap and outlines practical governance and risk management considerations for enterprises deploying autonomous systems. Download it here.