The Future of Software According to Javi López: From Strategic Asset to Commodity Powered by LLMs

In the tech ecosystem, few ideas have sparked as much interest and controversy in such a short time as the one recently presented by developer and science communicator Javi López (@javilop) in a lengthy thread on X (formerly Twitter). His thesis is unequivocal: in the near future, all software will run directly on large language models (LLMs), with no intermediary code, and will consequently become a commodity comparable to electricity.

Beyond the intellectual provocation, this proposal raises profound implications for the tech market, the competitive industry structure, and medium- to long-term investment opportunities.


From Traditional Development to Direct Execution in the Model

López begins with an observation already familiar to anyone closely following advances in generative AI: software development is shifting toward natural language interfaces. Tools like Claude, Grok, or ChatGPT, combined with environments such as Cursor or Windsurf, already let experienced programmers build complete applications by conversing with the AI as if it were a technical collaborator.

However, the leap López proposes is qualitative: eliminating even that layer of generated code. In his vision, business logic, data persistence, and visual presentation would all be handled inside the model itself, with no Python backend, no relational database, and no HTML/CSS frontend.

The concept relies on an idea recently voiced by Elon Musk: “Any input bitstream to any output bitstream”. In other words, a sufficiently advanced LLM could receive an input—a picture, a user instruction, a data file—and directly produce the desired output, whether it be an interactive interface, a computed result, or audiovisual content.
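To make the idea concrete, here is a minimal, hedged sketch of what "the model is the application" could look like: every user action, together with the accumulated session transcript, is sent to an LLM, and the model's reply is the next screen. The function call_llm is a placeholder for whichever multimodal model provider is actually used; nothing in this sketch comes from López's thread itself.

```python
# Sketch: the model is the application. Each user action is appended to a
# running transcript and sent to an LLM; the model's reply is the next screen.
# There is no backend code, database, or templating layer. Persistence lives
# entirely in the transcript that is replayed on every call.
# `call_llm` is a placeholder for a real multimodal LLM API (an assumption).

from typing import Dict, List


def call_llm(messages: List[Dict[str, str]]) -> str:
    """Placeholder: send the conversation to an LLM and return its raw output."""
    raise NotImplementedError("wire this to an actual LLM provider")


SYSTEM_PROMPT = (
    "You are a to-do list application. On every turn, apply the user's action "
    "to the current list and return the full updated interface as plain text."
)


def run_app() -> None:
    # The entire 'application state' is just the conversation history.
    transcript: List[Dict[str, str]] = [{"role": "system", "content": SYSTEM_PROMPT}]

    while True:
        user_action = input("> ")     # e.g. "add: buy milk" or "complete: 2"
        if user_action in {"quit", "exit"}:
            break
        transcript.append({"role": "user", "content": user_action})
        screen = call_llm(transcript) # the model computes logic, state, and UI at once
        transcript.append({"role": "assistant", "content": screen})
        print(screen)


if __name__ == "__main__":
    run_app()
```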


Implications for the Tech Market

If this scenario materializes, the software value chain would be radically reconfigured. The current layers (programming languages, frameworks, databases, application servers) could disappear as independent business segments, absorbed into a single unified layer: the LLM.

Economic value, López warns, would concentrate in a very small number of players:

  • Leading multimodal LLM providers capable of interpreting and generating any type of content or functionality.
  • Operators of massive compute infrastructure with the capacity to run these models at large scale with competitive latencies.

In practice, this could lead to a highly concentrated market, where a few companies—possibly current Big Tech firms and some new entrants with a technological edge—control most of the planet’s digital productive capacity.


Software as a Commodity: Parallels with Electricity

The analogy López proposes is deliberate: during the electrification era, competition shifted from who manufactured the lamps or motors to who could supply energy reliably and cheaply.

In a scenario dominated by advanced LLMs, software would shift from a differentiated product to a capacity service, where customer choice would depend primarily on price and, to a lesser extent, on performance advantages or proprietary IP, as with licensed video games or specialized applications.

Thus, innovation would not disappear but would move upstream, towards improving the models and optimizing the infrastructure that runs them.


Technical Challenges and Adoption Barriers

A primary objection to this vision is that current LLMs are non-deterministic and lack persistent memory, which could limit their ability to execute applications reliably and coherently.

López acknowledges these limitations but believes they will be overcome through persistence and “non-simplistic” memory mechanisms—more sophisticated than traditional databases and natively integrated into the model.
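As a hedged illustration of what such a mechanism might look like, the sketch below assumes the model itself authors a compact state document that is stored outside the model and replayed into its context on the next turn. The names MemoryStore and call_llm are illustrative placeholders, not taken from the thread or from any specific product.

```python
# Sketch of a "non-simplistic" persistence layer: durable state kept outside
# the model, but written entirely by the model and replayed into its context.
# `MemoryStore` and `call_llm` are illustrative placeholders (assumptions).

import json
from pathlib import Path
from typing import Any, Dict


def call_llm(prompt: str) -> str:
    """Placeholder: ask an LLM to return the full updated state as JSON."""
    raise NotImplementedError("wire this to an actual LLM provider")


class MemoryStore:
    """Durable application state with no schema, tables, or SQL."""

    def __init__(self, path: str = "app_state.json") -> None:
        self.path = Path(path)

    def load(self) -> Dict[str, Any]:
        return json.loads(self.path.read_text()) if self.path.exists() else {}

    def save(self, state: Dict[str, Any]) -> None:
        self.path.write_text(json.dumps(state, indent=2))


def handle_turn(user_input: str, store: MemoryStore) -> Dict[str, Any]:
    state = store.load()
    prompt = (
        "Current application state:\n" + json.dumps(state) + "\n"
        "User action:\n" + user_input + "\n"
        "Return only the full updated state as JSON."
    )
    new_state = json.loads(call_llm(prompt))  # the model decides how state evolves
    store.save(new_state)                     # durability without a traditional database
    return new_state
```

The non-determinism concern could, in principle, be softened by low-temperature sampling or constrained decoding, though the thread does not commit to any specific mechanism.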

The key to reaching this point may lie in training with synthetic data:

  • LLMs themselves or agents based on them would generate complete end-to-end applications within closed environments.
  • Each execution would be logged, including interface states and user inputs.
  • This data would be used to train new models, enhancing their capacity to run applications without relying on external layers.

This approach, still in its early stages, aligns with research directions explored by major AI laboratories, where self-learning and large-scale simulation are considered essential to scaling capabilities.
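As a rough sketch of that loop, under the assumption that one agent invents an application, another simulates its users, and every transition is logged as a training example, the following illustrates how such traces could be collected. The helper names (generate_app_spec, simulate_user_action, render_next_state) are hypothetical stand-ins for model calls, not a real API.

```python
# Sketch of the synthetic-data loop described above: agents invent small
# end-to-end applications, simulate users interacting with them, and log every
# (state, action, next_state) transition for later training.
# All helper functions are hypothetical stand-ins for model calls.

import json
from typing import Dict, List


def generate_app_spec(seed: int) -> str:
    """Placeholder: ask a model to invent a small end-to-end application."""
    return f"toy application #{seed}"


def simulate_user_action(app_spec: str, screen: str) -> str:
    """Placeholder: ask a model to behave like a user of that application."""
    return "click: button_1"


def render_next_state(app_spec: str, screen: str, action: str) -> str:
    """Placeholder: ask a model to apply the action and render the next screen."""
    return screen + f" | after {action}"


def collect_traces(num_apps: int, steps_per_app: int) -> List[Dict[str, str]]:
    """Run many synthetic apps end to end, logging every interface transition."""
    traces: List[Dict[str, str]] = []
    for seed in range(num_apps):
        spec = generate_app_spec(seed)
        screen = "initial screen"
        for _ in range(steps_per_app):
            action = simulate_user_action(spec, screen)
            next_screen = render_next_state(spec, screen, action)
            traces.append({"app": spec, "state": screen,
                           "action": action, "next_state": next_screen})
            screen = next_screen
    return traces


if __name__ == "__main__":
    # These logged transitions would become training examples for the next
    # generation of models, as the thread suggests.
    print(json.dumps(collect_traces(num_apps=3, steps_per_app=5)[:2], indent=2))
```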


Impact on Business and Labor Structures

If software no longer requires a conventional development cycle, the tech labor market will undergo substantial restructuring:

  • The role of developers might evolve into that of designers of interactions and objectives rather than writers of code.
  • Many intermediate layers of the production chain would vanish, affecting providers of libraries, frameworks, and development tools.
  • The demand for talent would shift toward specialists in model optimization, infrastructure management, and creating complex prompts to maximize results.

At the same time, reliance on a limited group of LLM and infrastructure providers would increase the risk of economic and political power concentration. In an extreme scenario, governments and large corporations could depend on fewer than ten companies for most of their digital activity.


Investment Opportunities and Strategic Positioning

From a financial perspective, López’s hypothesis presents two main lines of interest:

  1. Early identification of winners
    • Investing in companies with potential leadership in developing multimodal LLMs.
    • Considering not only the hyper-scalers (Google, Microsoft, OpenAI, Anthropic, Meta) but also firms with specialized high-quality models and the potential for global scalability.
  2. Critical infrastructure
    • Data center operators and providers of specialized hardware (GPUs, TPUs, custom AI chips) will become central players in the ecosystem.
    • Companies like NVIDIA, AMD, Broadcom, and emerging AI hardware manufacturers would play strategic roles.

In this context, the energy-sector analogy is more than conceptual: just as utilities depend on generation and transmission networks, the future of LLM-based computing will depend on installed capacity and hardware efficiency.


Risks and Alternative Scenarios

Although López’s proposal is technically plausible, several risks could delay or alter its adoption:

  • Physical and energy constraints: massive execution of advanced multimodal LLMs demands significantly higher energy consumption than traditional software.
  • Hardware bottlenecks: limited availability of GPUs and specialized chips, along with geopolitical tensions affecting their production.
  • Regulation: concentration of power in few companies might prompt regulatory interventions restricting vertical integration between LLMs and infrastructure.
  • Corporate and cultural resistance: organizations hesitant to delegate their entire business logic to external providers, especially in sensitive sectors such as defense, health, or finance.

An alternative scenario might involve a hybrid market, where LLMs capable of executing most logic coexist with applications maintaining certain external layers for control, security, or efficiency reasons.


Timeline and Signals to Watch

There is no consensus on the timeframe for this transformation. Some experts estimate it will take more than a decade; López, however, believes we will see it “sooner rather than later”.

For investors and strategists, key indicators signaling acceleration include:

  • Emergence of multimodal LLMs capable of executing complex interactive interfaces without intermediary code.
  • Consolidation of infrastructure providers with their own models, closing the cycle of production and deployment.
  • Real-world use cases in high-value sectors (finance, energy, health) operating solely on LLMs without traditional backend.
  • Drastic reduction in the cost per inference of large models.

Conclusion: A Paradigm Shift with High Value Concentration

Javi López’s proposal is more than technical speculation; it is a market hypothesis that, if validated, could dramatically reshape the distribution of value in the digital economy.

In this scenario, software would transition from a unique, differentiated asset to a standardized capability executed by a small number of AI platforms. Investment and growth opportunities would focus on:

  • Leading multimodal LLM development.
  • Critical infrastructure for large-scale deployment.
  • Business models integrating both elements under unified control.

As with any disruption, identifying the winners early and managing the risks inherent in a highly concentrated market will be key. And, as with electricity, price and reliability may become the decisive factors when choosing a provider.


Frequently Asked Questions (FAQs)

1. What does it mean for software to become a “commodity”?
It means software would cease to be a differentiated product developed by each company and would instead become a standardized service, offered by a few providers with minimal differentiation and competing mainly on price and reliability.

2. Who would benefit most from this change?
Companies that master both the development of advanced multimodal LLMs and the infrastructure necessary to run them at scale.

3. What risks does this scenario entail?
Concentration of economic power and technology, dependency on few suppliers, higher energy consumption, and potential regulatory barriers.

4. What investment opportunities exist?
Investing in leading LLM technology firms, hardware manufacturers specialized in AI chips, and data center operators capable of supporting these workloads.

Source: Noticias inteligencia artificial
