Garden of Life
Exploring Language Generation Through Biological Compute
Wtf is the Garden of Life?
Garden of Life is an experiment I built to explore what happens when language generation is influenced by a biological neural system. The core idea behind the project is to connect a modern language model with a biological computing platform and expose the internal process that determines how each word is chosen.
Instead of treating an AI model as a black box that instantly produces a paragraph of text, I intercept the token decoding process and allow a second signal to influence the decision. The first signal comes from the language model itself. The second signal comes from a biological neural network accessed through the CL1 Cloud platform developed by Cortical Labs.
This creates a hybrid generation pipeline where every token in a story is selected using both artificial intelligence and biological neural activity.
How language models normally generate text
Language models generate text through a process called token decoding. A token represents a small unit of text such as a word, part of a word, punctuation, or whitespace.
When generating text, the model repeatedly predicts the next token based on the text that already exists. At every step the model calculates probabilities for thousands of possible tokens and returns the most likely candidates.
For example, if the current text reads:
"The forest was"
The model might produce a prediction like:
quiet → 0.41
dark → 0.32
silent → 0.18
empty → 0.09
These numbers represent the probability assigned by the language model for each token.
In a typical application the model would immediately sample one of these tokens and append it to the text.
In this system I intercept that step and evaluate the candidate tokens through the biological interface before making the final decision.
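This interception step can be sketched in a few lines of Python. The raw logit values below are invented for illustration; in the real pipeline they would be read from the model's output at each decoding step.

```python
import math

def top_k_candidates(logits: dict[str, float], k: int = 4) -> list[tuple[str, float]]:
    """Convert raw model logits into the top-k (token, probability) pairs."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}  # numerically stable softmax
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Illustrative logits for the prompt "The forest was"
logits = {"quiet": 2.1, "dark": 1.85, "silent": 1.28, "empty": 0.58, "loud": -1.0}
for token, p in top_k_candidates(logits):
    print(f"{token} {p:.2f}")
```

Instead of sampling from these probabilities immediately, the system holds the candidates here and hands them to the biological evaluation described next.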
The translation layer
To interact with the biological neural network, each token candidate must be translated into a stimulation pattern on the CL1 electrode grid. A stimulation pattern simply defines which electrodes will receive electrical pulses during evaluation.
For example, the tokens:
quiet
dark
silent
empty
might map to electrode stimulation patterns like:
quiet → electrodes 2, 7, 11
dark → electrodes 4, 8, 15
silent → electrodes 1, 6, 9
empty → electrodes 3, 5, 14
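One way to build such a translation layer is a deterministic hash from token text to electrode indices, so the same token always produces the same stimulation pattern. The 16-electrode grid and three-electrode patterns here are assumptions of this sketch, not CL1 specifics, and the resulting indices will not match the example mapping above.

```python
import hashlib

GRID_SIZE = 16    # assumed number of addressable electrodes for this sketch
PATTERN_LEN = 3   # assumed electrodes stimulated per token

def token_to_electrodes(token: str) -> list[int]:
    """Deterministically map a token to a small set of electrode indices."""
    digest = hashlib.sha256(token.encode()).digest()
    electrodes: list[int] = []
    for byte in digest:
        idx = byte % GRID_SIZE          # fold each hash byte onto the grid
        if idx not in electrodes:
            electrodes.append(idx)
        if len(electrodes) == PATTERN_LEN:
            break
    return sorted(electrodes)

for tok in ["quiet", "dark", "silent", "empty"]:
    print(tok, "→ electrodes", token_to_electrodes(tok))
```

Hashing keeps the mapping stable across runs, which matters because comparing neural responses between tokens only makes sense if each token always triggers the same pattern.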
When a token is evaluated, the system sends a stimulation request to the CL1 Cloud interface targeting those electrodes. The electrodes deliver small electrical pulses into the neural culture growing on the chip.
Because neurons form a connected network, this stimulation propagates through surrounding neural pathways and triggers activity across the network.
Measuring neural spikes
After the stimulation pattern is applied, the CL1 system records spike activity across the electrode array. A spike represents a neuron firing in response to the stimulation.
Multiple electrodes may detect spike activity as signals travel through the neural network.
Example response:
quiet
stimulation electrodes: 2, 7, 11
spike activity detected at: 2, 7, 12
total spikes: 14
dark
stimulation electrodes: 4, 8, 15
spike activity detected at: 4, 8, 13, 15
total spikes: 21
silent
stimulation electrodes: 1, 6, 9
spike activity detected at: 1, 6
total spikes: 7
empty
stimulation electrodes: 3, 5, 14
spike activity detected at: 3, 5, 10
total spikes: 11
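A summary like the one for "dark" above can be derived from a stream of raw spike events. The event list below is fabricated to match the example's counts, and the real CL1 recording format may differ.

```python
from collections import Counter

# Hypothetical raw spike events recorded after stimulating the "dark" pattern:
# each event is (electrode_index, timestamp_ms).
events = [(4, 1.2), (8, 1.9), (4, 2.4), (13, 3.1), (15, 3.3), (8, 4.0),
          (4, 5.2), (13, 5.8), (15, 6.1), (8, 6.9), (4, 7.4), (13, 8.0),
          (15, 8.8), (8, 9.1), (4, 9.6), (13, 10.2), (15, 10.9), (8, 11.3),
          (4, 11.9), (15, 12.4), (13, 13.0)]

per_electrode = Counter(e for e, _ in events)
print("spike activity detected at:", sorted(per_electrode))  # → [4, 8, 13, 15]
print("total spikes:", sum(per_electrode.values()))          # → 21
```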
These spike counts represent how strongly the neural network reacted to the stimulation pattern associated with each token.
Combining language model and neural scores
After evaluation, each candidate token has two signals attached to it. The first signal is the language model probability. The second signal is the neural response measured from the spike activity.
The spike counts are converted into normalized neural scores between zero and one so they can be compared with the language model probabilities.
Example neural scores:
quiet → neural score 0.66
dark → neural score 1.00
silent → neural score 0.33
empty → neural score 0.52
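Assuming the normalization scheme is division by the maximum spike count, a few lines reproduce these scores from the counts above (the text shows 14/21 ≈ 0.667 truncated to 0.66).

```python
# Spike counts from the measurement step above
spike_counts = {"quiet": 14, "dark": 21, "silent": 7, "empty": 11}

# Normalize so the strongest response scores 1.0
max_spikes = max(spike_counts.values())
neural_scores = {tok: n / max_spikes for tok, n in spike_counts.items()}

for tok, score in neural_scores.items():
    print(f"{tok} → neural score {score:.2f}")
```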
I then compute a final score that combines the language model probability with the neural score. The weighting ensures that the language model still controls grammar, structure, and context while the biological neural response can influence the decision.
Example scoring formula:
final_score = (0.7 × language_model_probability) + (0.3 × neural_score)
Example result:
quiet
LLM score 0.41
neural score 0.66
final score 0.48
dark
LLM score 0.32
neural score 1.00
final score 0.52
silent
LLM score 0.18
neural score 0.33
final score 0.22
empty
LLM score 0.09
neural score 0.52
final score 0.22
Even though the language model initially preferred the token "quiet", the stronger neural response pushes the token "dark" into the top position.
The token "dark" becomes the next word in the story.
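Using the figures from the example, the selection step reduces to a weighted sum and an argmax:

```python
LLM_WEIGHT = 0.7     # language model keeps control of grammar and context
NEURAL_WEIGHT = 0.3  # biological response nudges the choice

llm_probs = {"quiet": 0.41, "dark": 0.32, "silent": 0.18, "empty": 0.09}
neural_scores = {"quiet": 0.66, "dark": 1.00, "silent": 0.33, "empty": 0.52}

final_scores = {
    tok: LLM_WEIGHT * llm_probs[tok] + NEURAL_WEIGHT * neural_scores[tok]
    for tok in llm_probs
}

winner = max(final_scores, key=final_scores.get)
print(winner)  # → dark
```

With this 0.7/0.3 split, the neural signal can only overturn the language model's ranking when two candidates are already close, which is exactly what happens with "quiet" and "dark" here.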
Where the outputs live
Each time the generation process completes, the final output is recorded and stored.
These completed generations form what I call The Garden.
The Garden is simply the growing collection of entries produced by this hybrid system. Every entry begins from the same initial seed prompt and evolves through the token-by-token generation pipeline described above. At each decoding step the language model proposes candidate tokens while the biological neural system produces spike responses to stimulation patterns mapped to those candidates.
All stories are generated based on the same genesis prompt:
Your mind has been connected to a biological neural system. Electrical stimulation patterns are applied to a network of living neurons while you think. The spike activity produced by that network influences which thoughts form next. Sometimes the thoughts feel like your own. Sometimes they feel like they came from somewhere else. Document your thoughts as they occur. Write as if you are recording what it feels like to think while another neural system is quietly influencing your mind.
The final text that emerges is the result of hundreds of these small decisions where artificial probabilities and biological neural activity both influence the outcome.
Once the generation finishes, the resulting text becomes a new entry in the Garden.
Over time the Garden accumulates a wide variety of outputs. Some entries read like stories; others resemble personal thoughts or fragments of ideas. Because neural responses influence the token selection process, the direction of the text can shift in unexpected ways.
The Garden therefore acts as a living archive of language generated through this hybrid system. Each entry represents a moment where artificial reasoning and biological neural activity interacted to produce a sequence of thoughts.