Posts in Cybersecurity & Data Risk
Practical Tips on Mitigating Legal Risks from Ransomware Attacks on Technology Vendors

Ransomware is no longer just an IT problem—it is a contract problem hiding in plain sight. As high-profile incidents like the attacks on Kronos, Change Healthcare, and CDK Global demonstrate, when a critical vendor goes down, the resulting disruption cascades through payroll, HR, and core operations, exposing customers not only to business interruption but also to regulatory penalties, employee claims, and reputational harm. Yet many companies discover too late that their vendor agreements were drafted for yesterday’s “data breach,” not today’s system-crippling ransomware event.

This advisory reframes ransomware as a risk allocation failure—and a fixable one. By dissecting where traditional definitions, liability caps, and insurance provisions fall short, it offers a practical roadmap for shifting exposure back where it belongs: onto the vendors best positioned to manage it. The message is straightforward but urgent: unless contracts evolve as quickly as cyber threats, companies will continue to bear losses they thought they had already outsourced.

Download the advisory here.

Data Leaks, Moats, and Dark Code

When Anthropic’s “Claude Code” leaked, it wasn’t the model that mattered; it was the machinery around it. The incident underscores a shift in the value of AI intellectual property away from underlying models and into the orchestration layer – the “Dark Code” that makes those models operational for users. At least for some providers, competitive advantage now lives in the harness – permissions, workflows, memory systems, and the invisible logic that turns raw intelligence into reliable execution.

That shift has legal consequences. Copyright can fail to protect functionality once expression is stripped away, and rapid AI-assisted reimplementation can make traditional infringement remedies ineffective. The practical competitive “moat” is no longer only what an AI system knows, but also how it is structured, secured, and deployed – and whether companies can prove they have taken the necessary steps to protect it.

Download the white paper here.


A Mereotopological Cellular Automata Architecture and Method to Demarcate LLM Operational Boundaries

This white paper proposes the Modular Cellular Automaton Model (MCAM) as a framework for enforcing safety boundaries on AI systems. Instead of attempting to align AI internally with human values, MCAM constrains behavior externally by evaluating proposed actions against spatial and semantic rules encoded in a cellular-automaton grid. By labeling locations (such as critical infrastructure) and applying rule-based checks before execution, the system functions as a semantic firewall that blocks unsafe actions while providing transparent, auditable enforcement of operational limits.

Download it here.
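To make the "semantic firewall" concept concrete, the check described above can be pictured as a labeled grid consulted before any action executes. The sketch below is a hypothetical illustration of that idea only, not the white paper's actual MCAM implementation; every label, rule, and function name here is invented for the example.

```python
# Hypothetical sketch of a grid-based semantic firewall in the spirit of MCAM:
# each cell carries a semantic label, and a rule table decides which action
# types may execute there. Unknown cells are denied by default.

# Semantic labels for grid locations (illustrative labels, not from the paper).
GRID_LABELS = {
    (0, 0): "open",
    (0, 1): "open",
    (1, 0): "critical_infrastructure",
    (1, 1): "restricted",
}

# Rule table: action types permitted in each cell category.
ALLOWED = {
    "open": {"read", "write", "move"},
    "restricted": {"read"},
    "critical_infrastructure": set(),  # no autonomous actions permitted
}

def check_action(action_type: str, target_cell: tuple) -> bool:
    """Return True if the proposed action may execute; block it otherwise."""
    label = GRID_LABELS.get(target_cell)
    if label is None:
        return False  # unmapped territory: deny by default
    return action_type in ALLOWED[label]

# A write aimed at a critical-infrastructure cell is blocked,
# while a read in an open cell passes the firewall.
print(check_action("write", (1, 0)))  # False
print(check_action("read", (0, 0)))   # True
```

Because every decision reduces to a lookup against explicit labels and rules, each allow/block outcome is transparent and auditable, which is the enforcement property the paper emphasizes.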