The Owner

The bottom line, above all

Tech Giants Restrict Security Models As Software Deskilling Accelerates

Friday, 17 April 2026

Close up of hands typing on a mechanical keyboard illuminated by the cold blue glow of a curved monitor. Telephoto zoom lens, cool blue-grey colour palette, sharp studio lighting, 4K HDR professional photography. Corporate server room setting, sharp lines, restrained negative space.

Max Levchin sinks into a tufted leather armchair inside a San Francisco recording studio, speaking directly into a foam-covered steel microphone. He taps a stainless steel pen against his knee while dismissing the panic surrounding autonomous software generation. The deskilling of the biological workforce is accelerating rapidly, driving massive margin expansion for tech firms while fundamentally shifting the legal liability of the digital economy. The emergence of vibe coding allows non-technical users to generate enterprise software through natural language, rendering legacy development teams obsolete.

"So until OpenClaw can also do things like call every restaurant and negotiate with the owner and install the right kind of tablet and software and extract the menus and all the things that DoorDash did, I think DoorDash is actually quite safe in their business," Levchin said, isolating exactly where biological friction still protects enterprise value.

Yet, as basic coding becomes commoditised, the security perimeter is being enclosed by an elite oligopoly. Anthropic has restricted access to its Mythos model via Project Glasswing, effectively gating the tools required to audit these new AI-generated applications. OpenAI responded by releasing its own verified-access cybersecurity model.

This dynamic creates a structural corporate vulnerability. Tech giants extract premium security rents by selling the only tools capable of identifying zero-day exploits, while middle-management verifiers inherit the catastrophic legal liability of signing off on hallucinated code. Healthcare administrators are already pricing this transition, with start-up Worki raising 2.75 million dollars to build AI infrastructure that sits atop legacy systems like Workday.

Capital must recognise that removing human developers lowers immediate operational costs, but it exponentially increases the enterprise premium required to secure the resulting automated infrastructure.