
A note on scope before the broader argument. Nothing in this piece changes what engineering teams need to do with LLMs in the software development lifecycle. The practical work is unchanged: markdown files for durable context, natural language parsing into structured JSON, and architecture and specification driven context management. Those disciplines are how value gets shipped, and they remain the work.
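One of those disciplines, parsing natural language into structured JSON, can be made concrete with a minimal sketch. The reply string, the required keys, and the `parse_model_reply` helper below are all hypothetical; the point is only that model output should be parsed and validated before it enters a pipeline.

```python
import json

# Hypothetical sketch: validate that a model's free-text reply parses into
# the JSON shape a downstream pipeline expects. The key names are invented
# for illustration.
REQUIRED_KEYS = {"summary", "risk_level", "action_items"}

def parse_model_reply(reply: str) -> dict:
    """Parse a model reply into a dict, rejecting malformed or incomplete output."""
    data = json.loads(reply)  # raises ValueError on non-JSON output
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model reply missing keys: {sorted(missing)}")
    return data

reply = '{"summary": "Refactor auth module", "risk_level": "low", "action_items": ["split token logic"]}'
parsed = parse_model_reply(reply)
print(parsed["risk_level"])  # low
```

The durable artifact here is the schema, not the prompt: the contract lives in code, and anything the model emits is checked against it.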
What follows sits on top of that, not in place of it. It is a broader observation about knowledge itself, and about how knowledge will no longer be held captive in human brains.
I have been thinking a lot about how the industry talks about large language models. They reason. They understand. They think. The vocabulary is convenient, but it is wrong, and it is costing organizations real money and real time.
A more useful framing is this: LLMs have open-sourced human knowledge. They are statistical compressions of the internet, of textbooks, of code repositories, of the cumulative written record. They are not sentient. They do not want anything. They are extraordinary retrieval and synthesis systems, and treating them as colleagues, rivals, or oracles is not just inaccurate. It is unproductive.
That distinction matters, because it changes the strategic question entirely.
An LLM is a function that maps a prompt to a probable next token. The fact that this process produces coherent essays, working code, and credible legal arguments is remarkable, but it does not change what is happening underneath. There is no inner life. There is no judgment. There is pattern matching at a scale and breadth that no individual human has ever had access to.
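That claim, a function from context to a probable next token, can be shown with a toy. The bigram table below is a hand-written stand-in for billions of learned weights; nothing about it is a real model, but the shape of the loop is the same: look up a distribution, sample, append, repeat.

```python
import random

# Toy illustration (not a real model): an "LLM" as a function from context
# to a probability distribution over next tokens, sampled repeatedly.
def next_token_distribution(context: tuple) -> dict:
    # A hand-written bigram table standing in for learned weights.
    table = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("cat",): {"sat": 0.9, "ran": 0.1},
        ("dog",): {"ran": 1.0},
    }
    return table.get(context[-1:], {"<end>": 1.0})

def generate(prompt, max_tokens=5, seed=0):
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = next_token_distribution(tuple(tokens))
        choices, weights = zip(*dist.items())
        token = rng.choices(choices, weights=weights)[0]
        if token == "<end>":
            break
        tokens.append(token)
    return tokens

print(generate(["the"]))
```

Scale the table up to a transformer over the written record and you get coherent essays and working code, but the operation underneath is still lookup and sampling, not deliberation.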
What this means in practice is that the knowledge layer of human work, the part that used to require years of training, expensive consultants, or proprietary research, is now available at near-zero marginal cost. A junior engineer with a good prompt can produce architecture documentation that a senior consultant would have billed thirty thousand dollars for in 2019. A new analyst can summarize a regulatory filing in minutes. A small company can draft contracts that previously required outside counsel.
This is open source, but applied to knowledge itself.
A parallel conversation about agents obscures the same point. Agents are task executors. They are loops that call tools based on instructions. The novelty is not that they can act. Software has been automating tasks since the first cron job. The novelty is that the planning step inside the loop is now driven by a language model rather than by hand-coded rules.
That is a meaningful efficiency gain. It is not a new category of capability. Most automation worth doing in an enterprise can still be accomplished with deterministic rules, well-defined workflows, and reliable APIs. Treating every task as an opportunity to deploy an autonomous agent is a way to spend money on complexity that a state machine would have handled for less.
The question is not “can an agent do this?” The question is “what is the simplest reliable system that produces the outcome?”
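For most enterprise automation, the simplest reliable system is often a plain state machine. The states and events below are hypothetical, but the contrast with an agent loop is the point: every transition is a fixed rule, and anything outside the table is an explicit error rather than a model's guess.

```python
# Hypothetical sketch: a deterministic workflow as a state machine. No model
# in the loop; each (state, event) pair maps to the next state via fixed rules.
TRANSITIONS = {
    ("received", "validated"): "queued",
    ("received", "invalid"): "rejected",
    ("queued", "processed"): "done",
    ("queued", "failed"): "retry",
    ("retry", "processed"): "done",
}

def advance(state: str, event: str) -> str:
    """Apply one event; an unknown transition is an explicit error, not a guess."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

state = "received"
for event in ["validated", "failed", "processed"]:
    state = advance(state, event)
print(state)  # done
```

A system like this is cheap to test, cheap to audit, and fails loudly. Reach for a model-driven planner only when the rules genuinely cannot be written down in advance.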
Competitive advantage between humans has always been a blend of two things: what you know and what you do with what you know. For most of history, knowledge was scarce and expensive to acquire. It was the moat. Credentials, networks, and proprietary information sustained careers and companies for decades.
That moat is dissolving. When any motivated person can access expert-level synthesis on any topic in seconds, differentiation collapses onto execution: the ability to decide what to build, to ship it, to operate it, to learn from real users, and to do all of that faster than the competition. That is the remaining variable.
This applies at every level, from individuals to entire firms.
The firms gaining ground are not necessarily the ones with the largest models or the most data scientists. They are the ones rewiring how decisions get made, how work moves through the system, and how feedback loops get shorter.
This shift is real, but it is not frictionless.
Execution requires infrastructure. Open access to knowledge does nothing for an organization whose data is fragmented, whose deployment pipelines are manual, and whose decision rights are unclear. The bottleneck moves from “we do not know” to “we cannot act.” That is often a harder problem.
Talent requirements change, but they do not disappear. The premium shifts from people who hold knowledge to people who can synthesize, decide, and operate under uncertainty. Those people are not abundant, and existing hiring rubrics often miss them.
Verification still matters. LLMs hallucinate. Open-sourced knowledge is only useful if the receiving organization has the discipline to check it against reality before acting. Speed without verification produces confident failures, faster.
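What a verification gate looks like depends on the domain, but even a crude one beats none. The sketch below is a hypothetical example of the discipline: before a model-written summary is acted on, every number it asserts must appear verbatim in the source document.

```python
import re

# Hypothetical sketch: a verification gate that checks numeric claims in a
# model-written summary against the source document before the summary is used.
def numbers_in(text: str) -> set:
    """Extract every numeric literal (integers and decimals) from the text."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def verify_summary(summary: str, source: str) -> bool:
    """Every number the summary asserts must appear verbatim in the source."""
    return numbers_in(summary) <= numbers_in(source)

source = "Revenue grew 12 percent to 340 million in 2023."
assert verify_summary("Revenue rose 12 percent to 340 million.", source)
assert not verify_summary("Revenue rose 15 percent.", source)
```

A check this simple catches only one class of hallucination, but it illustrates the posture: the model proposes, a deterministic check disposes, and anything that fails the check never reaches a decision.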
Cultural inertia is the largest obstacle. Many organizations are structured around protecting and rationing information. The leaders who built those structures will not always welcome a model where knowledge is cheap and execution is the test.
LLMs are not minds. They are the open-sourcing of human knowledge, and that is a more accurate, more useful framing than any anthropomorphic story. Agents are not coworkers. They are task executors with a language model in the planning loop, and most automation does not need them at all.
What does need attention is execution. When knowledge becomes a commodity, the organizations that win are the ones that can translate it into outcomes faster, more reliably, and more honestly than their competitors. That has always been true at the margin. It is now the entire game.