# Tech Giants Extract Security Rents From AI Software Surge
The deskilling of the professional class is proceeding exactly on schedule, but the resulting margin expansion has created severe digital externalities. The sudden proliferation of 'vibe coding', a trend in which non-technical users lean on autonomous AI agents such as Anthropic's Claude or Emergent's 'Wingman' to build software, has triggered an 84% spike in iOS app submissions. Unsurprisingly, this flood of algorithmic slop has introduced catastrophic supply chain vulnerabilities: malware operators are already distributing the PlugX remote access trojan via fake AI platforms.
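The supply-chain failure here is mundane: vibe-coded pipelines execute whatever a download endpoint hands back. A minimal sketch of the hygiene they skip, pinning an artifact's SHA-256 digest and refusing to run anything that doesn't match (the payloads and digest below are illustrative, not real indicators):

```python
import hashlib

# Hypothetical example: the digest of the artifact you actually
# audited and intended to ship. In practice this comes from a
# signed lockfile or release manifest, not from the same server
# that serves the binary.
PINNED_SHA256 = hashlib.sha256(b"trusted installer bytes").hexdigest()

def verify_artifact(payload: bytes, pinned_digest: str) -> bool:
    """Return True only if the payload matches the pinned digest."""
    return hashlib.sha256(payload).hexdigest() == pinned_digest

# A fake "AI platform" swapping in a trojaned payload fails the check.
assert verify_artifact(b"trusted installer bytes", PINNED_SHA256)
assert not verify_artifact(b"trojaned installer bytes", PINNED_SHA256)
```

The check is trivial; the point is that an autonomous agent assembling software from unvetted sources performs no equivalent step anywhere in its loop.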
The populist response is a panic over cybersecurity. The institutional reality is a massive rent-extraction opportunity for platform monopolists. Apple has begun aggressively delisting vibe-coded applications like Anything and Replit, ostensibly citing unreviewed code execution. In practice, Cupertino is erecting a cognitive enclosure. By strictly policing the App Store perimeter against AI-generated software, Apple is forcing developers to pay for premium security vetting. They are monetising the friction created by biological deskilling.
Simultaneously, enterprise legal departments are waking up to the malpractice liability of AI hallucinations. As firms like Progress Software expand their Bengaluru innovation hubs to manage these exact infrastructure pipelines, corporate boards must realise that deploying autonomous agents without centralised oversight is an unhedged legal liability. If a hallucinated output drives a critical business decision, existing anti-wiretapping and software design laws will swiftly annihilate the promised labour savings. Verification is the new moat. Capital should pivot away from the application layer and into the foundational auditing and security frameworks required to police this synthetic software surge.
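What "centralised oversight" means in practice can be sketched concretely. The wrapper below is a hypothetical illustration (the class and policy names are my own, not any vendor's API): every proposed agent action passes through an approval policy and lands in an audit log, so a hallucinated action is blocked and leaves a record rather than silently executing:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AuditedAgent:
    """Illustrative sketch: gate every agent action behind an
    approval policy and record the outcome in an audit log."""
    approve: Callable[[str], bool]           # oversight policy (assumed interface)
    audit_log: List[str] = field(default_factory=list)

    def act(self, proposed_action: str) -> bool:
        """Run the policy, log the verdict, and report whether the
        action may proceed. Blocked actions are never executed."""
        allowed = self.approve(proposed_action)
        verdict = "ALLOW" if allowed else "BLOCK"
        self.audit_log.append(f"{verdict}: {proposed_action}")
        return allowed

# Example policy: nothing touching payments runs unattended.
agent = AuditedAgent(approve=lambda a: "payment" not in a)
agent.act("summarise quarterly report")  # permitted, logged as ALLOW
agent.act("initiate payment run")        # refused, logged as BLOCK
```

The gate itself is a few lines; the moat the article describes is everything around it, such as who writes the policy, who can read the log, and who is liable when the policy is wrong.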