Why Your Best Employees Should Be Expensive
AI breaks the old SaaS assumption that more usage always means better value: under token-based pricing, heavier employee usage raises costs even as it drives far greater output. The argument is that top performers should be allowed, and even encouraged, to spend heavily on AI tools when that spend helps them ship more, solve problems faster, and create disproportionate value. The metric that matters is not minimized token spend but the return the company earns on that spend.
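The return-on-spend framing can be made concrete with a toy calculation. A minimal sketch, with invented names and figures, that ranks employees by value created per dollar of AI spend rather than by raw spend:

```python
# Toy illustration: judge AI spend by the return it generates,
# not by its absolute size. All names and figures are invented.

def ai_roi(value_created: float, ai_spend: float) -> float:
    """Value created per dollar of AI spend."""
    return value_created / ai_spend if ai_spend else float("inf")

employees = [
    {"name": "A", "ai_spend": 150.0,   "value_created": 3_000.0},
    {"name": "B", "ai_spend": 2_400.0, "value_created": 90_000.0},
]

# B spends 16x more on tokens than A, yet earns nearly twice the
# return per dollar -- the "expensive" employee is the bargain.
for e in employees:
    print(e["name"], ai_roi(e["value_created"], e["ai_spend"]))
# prints: A 20.0, then B 37.5
```

On a token-minimization dashboard, B looks like the problem; on a return-per-dollar view, B is the employee to encourage.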
Capital Velocity and the New Compute Infrastructure Cycle
NScale’s recent funding round marks a pivotal transition as AI compute shifts from a venture-backed experiment into a mature, institutional infrastructure asset class. By attracting major private equity and hedge fund investors like Point72 and Citadel, the company is proving that GPU clusters now resemble high-value physical assets such as telecommunications networks or power plants. This influx of deep-pocketed capital allows these firms to remain private longer while rapidly scaling the massive, hardware-intensive environments required to power the next generation of artificial intelligence.
Anthropic’s “Supply Chain Risk” Label
The U.S. Department of War designated Anthropic a national security supply chain risk after negotiations over a $200M Pentagon contract collapsed. The dispute centered on Anthropic’s refusal to remove guardrails preventing its AI from being used for mass domestic surveillance or lethal autonomous weapons without human oversight, leading the Pentagon to move the contract to OpenAI. The episode highlights a growing tension between AI companies that want to control how their systems are used and governments that increasingly view frontier AI models as strategic infrastructure.
Cross-Commodity Swaps
The Watt-Bit Swap is a cross-commodity derivative designed to capture the "Watt-Bit Spread," which is the significant margin between the cost of electricity (Watts) and the market value of the AI compute (Bits) it produces. Much like the "spark spread" allows power plants to hedge the difference between natural gas costs and electricity prices, this swap allows data centers and utilities to link their financial outcomes. Instead of a data center paying a fixed electricity tariff, the payment is indexed to the market value of GPU rental rates. This alignment means that when AI compute is highly profitable, the power provider earns a higher return, and when the market dips, the data center’s power costs decrease, effectively sharing the operational risk and upside of the AI economy.
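As a rough sketch of the mechanics, the floating power price can be modeled as the fixed tariff plus the utility's participation in the deviation of GPU rental rates from an agreed reference level. All parameters below are invented for illustration; a real contract would be far more detailed:

```python
# Illustrative settlement of a hypothetical "Watt-Bit" swap: the data
# center's effective power price floats with GPU rental rates, so the
# utility shares compute upside and the data center sheds downside risk.
# All parameters are invented.

def watt_bit_power_price(fixed_tariff: float,
                         gpu_rate: float,
                         reference_gpu_rate: float,
                         participation: float) -> float:
    """Effective $/MWh: the fixed tariff adjusted by the utility's
    share of the GPU rental rate's deviation from a reference level."""
    return fixed_tariff + participation * (gpu_rate - reference_gpu_rate)

# Boom: GPU hours rent above the reference -> the utility earns more.
print(watt_bit_power_price(60.0, gpu_rate=3.50,
                           reference_gpu_rate=2.50,
                           participation=20.0))   # 80.0 $/MWh

# Slump: rates fall below the reference -> the power bill shrinks.
print(watt_bit_power_price(60.0, gpu_rate=1.50,
                           reference_gpu_rate=2.50,
                           participation=20.0))   # 40.0 $/MWh
```

The linear participation term mirrors how a spark spread nets fuel cost against electricity price; here the netting is between electricity cost and compute value.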
The Great Hiring Hiatus? Agents and OpenClaw
Agents like OpenClaw may reduce hiring pressure by absorbing recurring operational work that historically justified incremental headcount. Instead of expanding payroll, teams can scale variable token spend, allowing smaller groups to operate with higher leverage while delaying permanent hiring commitments. As this shift unfolds, leaders must balance cost efficiency with governance, deciding which responsibilities require human judgment and which can be formalized into agent workflows.
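The leverage argument comes down to fixed, lumpy headcount cost versus variable, metered agent cost. A hypothetical back-of-envelope comparison, with every figure (task volume, salary, token price) invented for illustration:

```python
import math

# Hypothetical comparison: covering a recurring task load with extra
# headcount versus agent token spend. All figures are invented.

def monthly_cost_hire(n_tasks: int, fully_loaded_monthly: float,
                      tasks_per_person: int) -> float:
    """Headcount is lumpy: you hire whole people."""
    return math.ceil(n_tasks / tasks_per_person) * fully_loaded_monthly

def monthly_cost_agents(n_tasks: int, tokens_per_task: int,
                        usd_per_million_tokens: float) -> float:
    """Agent spend is variable: it scales linearly with task count."""
    return n_tasks * tokens_per_task / 1_000_000 * usd_per_million_tokens

# 400 recurring tasks/month: two extra hires vs. metered agent runs.
print(monthly_cost_hire(400, 12_000.0, 200))    # 24000.0
print(monthly_cost_agents(400, 500_000, 10.0))  # 2000.0
```

The sketch deliberately omits the governance costs the paragraph warns about (review, escalation, audit), which is exactly the spend that keeps humans in the loop.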
A New Venture Financing Model
There is a fundamental shift in venture capital from a "growth-at-all-costs" software model to a more disciplined, asset-heavy approach focused on sectors like AI infrastructure, energy, and defense. We argue that today’s most consequential companies are grounded in real-world physics and capital intensity, requiring investors to adopt the rigor of project finance to build durable value. Ultimately, venture is evolving by using sophisticated capital structures to underwrite growth where tangible assets and cash durability are the primary competitive advantages.
Warrants and Covenants in Venture
Most founders focus on valuation, but in later-stage venture rounds the real shift happens in the terms. Warrants and covenants change how upside, risk, and decision-making are allocated once checks get larger and timelines tighten. Understanding them is the difference between raising growth capital and accepting structures that constrain flexibility and dilute outcomes over time.
The Preferred Rights That Matter Most, and How They Shape a Founder’s Fate
Founders often over-index on simple valuation metrics while ignoring the complex "legal DNA" of preferred rights, which ultimately dictate the distribution of control and economics. Because these terms compound across funding rounds, early concessions in seed rounds create a permanent, structural baseline that can drastically reshape a founder’s control and payout at exit.
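How stacked terms compound can be seen in a deliberately stripped-down liquidation waterfall. This sketch ignores participation, conversion elections, and option pools, and every number is hypothetical:

```python
# Simplified waterfall: liquidation preferences are paid senior to
# common, and the founder shares whatever remains. Ignores conversion,
# participation, and option pools; all terms and numbers are invented.

def founder_payout(exit_value: float, rounds, founder_pct: float) -> float:
    """rounds: list of (amount_invested, preference_multiple) tuples."""
    preferences = sum(invested * multiple for invested, multiple in rounds)
    common_pool = max(exit_value - preferences, 0.0)
    return founder_pct * common_pool

# $50M exit, founder holds 40% of common ($ amounts in millions).
# A 1.5x preference conceded on a $20M round halves the founder's
# payout relative to clean 1x terms across the board.
print(founder_payout(50.0, [(2.0, 1.0), (8.0, 1.0), (20.0, 1.5)], 0.40))  # 4.0
print(founder_payout(50.0, [(2.0, 1.0), (8.0, 1.0), (20.0, 1.0)], 0.40))  # 8.0
```

The same structural point holds for the other rights in the "legal DNA": each round's terms sit on top of, and are often ratcheted to, everything conceded before.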
Ventures Edge Predictions for 2026
2026 is shaping up as a turning point where AI shifts from a standalone product to a layer embedded across products, workflows, and the physical world. As adoption accelerates, compute and energy constraints move from IT concerns to strategic bottlenecks that shape what can scale. At the same time, macro forces like regulation, geopolitics, and cultural backlash increasingly determine where companies build, how they operate, and who wins.
Enterprise SaaS is Changing
Internal AI agents and collapsing software development costs are eroding the traditional moat of enterprise SaaS as intelligence shifts inward. As externally packaged software becomes infrastructure rather than the primary source of differentiation, public markets are repricing incumbents around weaker growth and pricing power. The real value is moving to whoever controls the internal intelligence and orchestration layer that sits on top of enterprise systems.
A $20B Signal About Where AI Infrastructure Is Actually Going
NVIDIA’s $20 billion deal with Groq marks a pivotal shift in AI infrastructure, moving the focus from model training to large-scale inference efficiency. Rather than a traditional acquisition, NVIDIA executed a "strategic intervention" by licensing Groq’s core technology and absorbing its top engineering talent, including founder Jonathan Ross. This "reverse acqui-hire" allows NVIDIA to bypass lengthy antitrust reviews while effectively neutralizing an architectural rival. By integrating Groq’s ultra-low-latency Language Processing Units (LPUs), NVIDIA is positioning itself to own the next phase of the AI boom, where the value is measured not by how models are built, but by the speed and cost of how they are delivered to users.