Cultivating Intelligence
“We’re no longer designing generative AI models. We’re cultivating them. We give them what they need—but they grow and evolve on their own. Always slightly beyond our control.” (Hurtado Torán, 2025)
From Engineers to Digital Gardeners
The rise of generative AI has changed everything. Tools like ChatGPT, Midjourney, and the newest video and voice models have shifted how we work, communicate, and even imagine creativity. But behind the marvels of these systems lies a quieter, deeper shift—a change in how we relate to the very technologies we build.
In the past, creating a digital system felt like assembling a machine: you defined every step, every condition, every rule. You were both architect and mechanic, much as Simondon described in On the Mode of Existence of Technical Objects (1958).
But generative AI doesn't work like that.
We don't program every output. We don't script every function. Instead, we train these systems on enormous datasets and watch as they learn patterns, build connections, and begin to respond in ways we didn't plan. They don't behave like tools. They behave like ecosystems.
That's why researchers, philosophers, and technologists are beginning to describe this shift with a new metaphor: we are no longer designing AI—we are cultivating it.
What It Means to "Cultivate" AI
Cultivating AI means creating the right conditions for something to grow—without controlling every detail of what it becomes.
Like gardeners, we prepare the soil (data), we choose the seeds (model architecture), and we manage the climate (training conditions). But once the growing begins, we no longer dictate the form it takes. We can prune it, observe it, redirect it. But we can't fully predict it.
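To make the metaphor concrete, here is a minimal training sketch in Python with PyTorch. Everything in it is a hypothetical stand-in: the random "dataset," the toy model, and the hyperparameters are chosen only to show where soil, seeds, and climate sit in a real pipeline.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# The "soil": whatever data we curate. Random token sequences stand in here.
vocab_size, seq_len = 1000, 32
tokens = torch.randint(0, vocab_size, (512, seq_len))
soil = DataLoader(TensorDataset(tokens), batch_size=16, shuffle=True)

# The "seeds": an architecture we choose up front (a toy language model).
class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.rnn = nn.GRU(64, 64, batch_first=True)
        self.head = nn.Linear(64, vocab_size)

    def forward(self, x):
        hidden, _ = self.rnn(self.embed(x))
        return self.head(hidden)

model = TinyLM()

# The "climate": training conditions such as learning rate and epochs.
optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):
    for (batch,) in soil:
        logits = model(batch[:, :-1])            # predict the next token
        targets = batch[:, 1:].reshape(-1)
        loss = loss_fn(logits.reshape(-1, vocab_size), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
# Every line above is a condition we set. What the trained weights come to
# represent is specified nowhere in this file.
```

Note the asymmetry the sketch makes visible: data, architecture, and training conditions are all explicit choices, yet the behavior that grows out of them is not written down anywhere.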
This is especially true with large language models, image generators, and multimodal systems. They operate not as scripted tools but as complex pattern recognizers. They make decisions we don't fully understand. They surprise us—with creativity, with insight, and sometimes with failure.
They are not conscious. But they are not fully passive either. They exhibit emergent behavior: capacities and outputs that were never explicitly coded into them.
More Power, Less Control
And here's the paradox: the more powerful these systems become, the less control we seem to have over them.
We know what goes in. We know what comes out. But the "black box" in between—the internal decision-making of these massive neural networks—is often opaque, even to their creators.
This lack of transparency is not just a technical challenge. It's also what gives these models their creative power. Their ability to combine ideas, generate novel responses, and improvise comes from the fact that they're not following rigid rules.
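One concrete source of that improvisation is sampling: at each step, a generative model typically draws the next token from a probability distribution rather than always taking the single highest-scoring option. A minimal sketch, with made-up scores over a five-token vocabulary:

```python
import numpy as np

rng = np.random.default_rng()

def sample(logits, temperature=1.0):
    """Draw one token index from softmax(logits / temperature)."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token scores; not from any real model.
logits = [2.0, 1.5, 0.3, 0.2, -1.0]

# The same input, sampled repeatedly, yields different continuations.
print([sample(logits, temperature=0.8) for _ in range(5)])

# A higher temperature flattens the distribution: more variety, more risk.
print([sample(logits, temperature=2.0) for _ in range(5)])
```

The randomness is deliberate: a model that always took the top-scoring token would be far more rigid, and far less interesting.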
But the same unpredictability that enables creativity also enables errors, hallucinations, bias, and even ethical risks.
This isn't just a technical issue. It's a new kind of relationship between human and machine—where we guide and shape, but do not fully command, echoing Haraway's vision in A Cyborg Manifesto (1985).
Technology Is Never Neutral
To cultivate AI is to engage with a living, evolving process—one that doesn't happen in a vacuum. These models are trained on human content. They reflect human culture, language, prejudice, and creativity, as Latour argues in Reassembling the Social (2005).
And so they bring up difficult questions:
Who is responsible for the outputs of a system that no one fully controls?
What happens when an AI generates misleading content?
Who gets to decide what "healthy growth" looks like in an AI system?
These are questions of ethics, of design, and of power.
Because despite the science-fiction narratives, AI is not an alien force. It's not inevitable or autonomous. It's human-made, but not human-controlled.
And that means it's also not neutral.
It reflects the values, biases, and intentions of the people and companies that cultivate it. And just like farming affects the land and the ecosystem around it, building AI affects societies, economies, and relationships.
Coexistence, Not Command
The metaphor of cultivation challenges the dominant fantasy of control.
It invites us to think less like engineers and more like ecologists: people who manage systems they don't fully understand, who learn from emergence, and who aim not to dominate but to coexist.
Generative AI is not something we simply use. It's something we live with. Something we have to shape carefully, ethically, and collectively.
Its unpredictability is both a gift and a threat—and the way we handle that tension will define how technology evolves in the next decade.
Where Do We Go From Here?
We can't go back to a world where machines simply follow orders. Generative AI has opened up a different kind of future—one full of possibility, but also ambiguity.
The challenge ahead is not to shut these systems down or to let them grow wild, but to cultivate them wisely.
That means:
Designing for transparency
Embedding ethics in every layer of the process
Creating diverse, inclusive datasets
Holding tech companies accountable for the systems they grow
And above all, it means realizing that we're no longer just coders or users—we're stewards.
If we get it right, we don't just create artificial intelligence. We create a new kind of partnership—between humans and systems that learn, adapt, and evolve with us.
Further Reading
Donna Haraway. A Cyborg Manifesto (1985)
Bruno Latour. Reassembling the Social (2005)
Gilbert Simondon. On the Mode of Existence of Technical Objects (1958)
David Hurtado Torán. LinkedIn post (2025). https://www.linkedin.com/posts/davidhurtadotoran_mentesinquietas-activity-7329015248219213824-ogCU?utm_source=share&utm_medium=member_desktop&rcm=ACoAABXuo20B_HdCjAlm4uoXQWYj4LaJwp8U36A