Navigating the New Liquidity Reality: Board Fiduciary Duties in an Era of Prolonged Private Company Lifecycles

Private capital markets have fundamentally shifted. As companies remain private longer, liquidity is no longer tied to a single IPO or exit event but is instead driven by an expanding secondary market. This evolution places boards in a new position – balancing long-term value creation against increasing pressure from investors seeking distributions and employees facing expiring equity.

These pressures lead to fiduciary tension under Delaware law. Directors must act in the best interests of the corporation and all its stockholders. Yet, venture-backed boards often face “dual fiduciary” conflicts when investor timelines diverge from the company’s optimal growth path. Delaware case law makes clear that liquidity-driven decisions are not inherently problematic – but when conflicts distort process or outcomes, courts will apply heightened scrutiny, particularly where common stockholders are disadvantaged.

In this environment, boards must treat liquidity as a strategic, process-driven exercise. That means institutionalizing independence, ensuring rigorous valuation and disclosure, and utilizing tools such as tender offers, net exercise provisions, or continuation vehicles with appropriate safeguards. When executed deliberately and transparently, secondary liquidity can support long-term growth; when mismanaged, it creates significant legal and governance risk.

Download the white paper here.

Practical Tips on Mitigating Legal Risks from Ransomware Attacks on Technology Vendors

Ransomware is no longer just an IT problem—it is a contract problem hiding in plain sight. As high-profile incidents like the attacks on Kronos, Change Healthcare, and CDK Global demonstrate, when a critical vendor goes down, the resulting disruption cascades through payroll, HR, and core operations, exposing customers not only to business interruption but also to regulatory penalties, employee claims, and reputational harm. Yet many companies discover too late that their vendor agreements were drafted for yesterday’s “data breach,” not today’s system-crippling ransomware event.

This advisory reframes ransomware as a risk allocation failure—and a fixable one. By dissecting where traditional definitions, liability caps, and insurance provisions fall short, it offers a practical roadmap for shifting exposure back where it belongs: onto the vendors best positioned to manage it. The message is straightforward but urgent: unless contracts evolve as quickly as cyber threats, companies will continue to bear losses they thought they had already outsourced.

Download the advisory here.

Data Leaks, Moats, and Dark Code

When Anthropic’s “Claude Code” leaked, it wasn’t the model that mattered; it was the machinery around it. The incident underscores a shift in the value of AI intellectual property away from underlying models and into the orchestration layer – the “Dark Code” that makes those models operational for users. At least for some providers, competitive advantage now lives in the harness – permissions, workflows, memory systems, and the invisible logic that turns raw intelligence into reliable execution.

That shift has legal consequences. Copyright can fail to protect functionality once expression is stripped away, and rapid AI-assisted reimplementation can make traditional infringement remedies ineffective. The practical competitive “moat” is no longer only what an AI system knows, but also how it is structured, secured, and deployed – and whether companies can prove they have taken the necessary steps to protect it.

Download the white paper here.

Follow Wave Law on LinkedIn here.

Human-Centered MedTech Design: Embedding Ethics, Law, and Social Values in Innovation

Modern MedTech is evolving from isolated devices to complex socio-technical systems in which ethics, law, and social values are core engineering requirements rather than afterthoughts. In this panel discussion, Joe Carvalko — patent attorney, engineer, and Yale technology ethics expert — reframes these considerations as a competitive advantage that builds trust and leads to superior product design.

Joe provides a unique perspective as both an expert and a pacemaker recipient, arguing that medical devices are "socio-psychological products" deeply embedded in cultural contexts. His contribution emphasizes that trust is a fundamental design problem, necessitating the integration of medical ethics—autonomy, beneficence, and justice—into a device’s specifications from the very first day of development.

Watch the full discussion here: https://www.youtube.com/watch?v=W7z6_SokPkY.

Carl Baranowski
New Federal Oversight: National AI Legislative Framework

On March 20, 2026, the White House released a formal national artificial intelligence legislative framework. This framework represents a significant shift toward a centralized, "light-touch" regulatory approach.

Key components of this federal move include:

**State Preemption:** The administration is calling on Congress to preempt state laws governing model development to prevent a "patchwork" of conflicting compliance regimes.

**Sector-Specific Regulation:** Rather than a single rule-making body, the framework suggests oversight through existing sector-specific regulatory bodies.

**Innovation Focus:** The policy prioritizes American competitiveness in the global AI race.

While this framework seeks to streamline innovation, it currently lacks a clear path to accountability for specific harms, leaving the burden of risk management on the deploying enterprise.

Download the revised white paper here.

A Mereotopological Cellular Automata Architecture and Method to Demarcate LLM Operational Boundaries

This white paper proposes the Modular Cellular Automaton Model (MCAM) as a framework for enforcing safety boundaries on AI systems. Instead of attempting to align AI internally with human values, MCAM constrains behavior externally by evaluating proposed actions against spatial and semantic rules encoded in a cellular-automaton grid. By labeling locations (such as critical infrastructure) and applying rule-based checks before execution, the system functions as a semantic firewall that blocks unsafe actions while providing transparent, auditable enforcement of operational limits. Download it here.
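To make the "semantic firewall" idea concrete, the toy sketch below shows the general pattern described above: cells in a grid carry semantic labels, and a rule-based check runs before any proposed action executes, blocking unsafe combinations and logging every decision for audit. This is an illustrative simplification only, not the white paper's actual MCAM implementation; all names here (`GridGuard`, `CellLabel`, `check_action`) are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class CellLabel(Enum):
    """Semantic labels attached to grid locations."""
    OPEN = "open"
    CRITICAL_INFRASTRUCTURE = "critical_infrastructure"

@dataclass
class GridGuard:
    """Toy semantic firewall: checks proposed actions against cell labels."""
    width: int
    height: int
    labels: dict = field(default_factory=dict)     # (x, y) -> CellLabel
    audit_log: list = field(default_factory=list)  # transparent record of every decision

    def label(self, x, y, cell_label):
        self.labels[(x, y)] = cell_label

    def check_action(self, action, x, y):
        """Evaluate a proposed action before execution; return True if allowed."""
        cell = self.labels.get((x, y), CellLabel.OPEN)
        # Rule: destructive actions on critical infrastructure are blocked.
        blocked = (cell is CellLabel.CRITICAL_INFRASTRUCTURE
                   and action in ("write", "shutdown"))
        verdict = "BLOCK" if blocked else "ALLOW"
        self.audit_log.append((action, (x, y), cell.value, verdict))
        return not blocked

guard = GridGuard(width=10, height=10)
guard.label(3, 4, CellLabel.CRITICAL_INFRASTRUCTURE)

print(guard.check_action("read", 3, 4))      # reads are allowed even on labeled cells
print(guard.check_action("shutdown", 3, 4))  # blocked: destructive action on critical cell
print(guard.check_action("write", 0, 0))     # allowed: unlabeled cell defaults to OPEN
```

The key design point this illustrates is external constraint: the check sits outside the AI system and gates its actions against declarative rules, rather than relying on the model's internal alignment, and the audit log makes each enforcement decision inspectable after the fact.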

The Algorithmic Principal – Navigating the AI Liability Gap in Commercial Transactions and Agency Law

Autonomous “agentic” AI systems challenge traditional doctrines of agency, contract, and corporate governance. As AI tools increasingly execute financial and procurement decisions, existing legal frameworks struggle to allocate responsibility when algorithmic actions cause harm. This white paper analyzes the emerging AI liability gap and outlines practical governance and risk management considerations for enterprises deploying autonomous systems. Download it here.