
Evolutionary Cellular Optimisation: From Sampling to Structure


Every machine learning model has settings that govern how it learns: hyperparameters, the set of parameters that shape its behaviour before training begins. Finding the right combination of these settings is one of the most consequential and least glamorous problems in applied ML. The standard approaches work by searching a territory that a human defines in advance.

ECO takes a different approach entirely. Rather than searching a predefined territory, it constructs the territory as it goes.

The Problem

To understand why this matters, it helps to understand how conventional optimisation works and where it quietly fails.

When a practitioner sets up a hyperparameter search, they define boundaries: this parameter should be somewhere between these values, that one somewhere between those. The optimisation process then searches within those boundaries, sampling configurations and learning which regions tend to produce better results.

The limitation is not the search method. It is the assumption embedded before the search begins. The boundaries themselves reflect human intuition about where good solutions are likely to live. That intuition is often reasonable. It is never neutral. It encodes prior assumptions about the problem that may or may not reflect where the actual optimum sits.

In straightforward, well-understood problems, this works adequately. In complex, unfamiliar, or genuinely novel optimisation landscapes, the kind that appear most often in frontier research and high-stakes production systems, human-defined boundaries can quietly exclude the regions where the best solutions actually reside. The search is thorough. The territory is wrong.

There is a second, related problem. Computational resources are finite. Every evaluation of a model configuration has a cost. Standard methods can spend significant portions of their budget sampling regions that prove unproductive, because they have no mechanism for recognising early that a region is not worth exploring. They consume the territory rather than constructing it.


How It Works

ECO is a hybrid algorithm that combines two established ideas in a way that addresses both limitations directly. The first is the evolutionary algorithm: a class of optimisation methods that maintains a population of candidate solutions and improves them over successive generations through selection and variation. The second is cellular automata, a framework in which simple local rules, applied repeatedly across a structured grid, produce complex and often surprising emergent behaviour.

In ECO, each hyperparameter is not a fixed value to be sampled from a predefined range. It is modelled as a cellular structure: a dynamic lattice of candidate values that evolves according to local rules, guided by what the algorithm has already discovered. The cellular automata become the alleles: the building blocks of each candidate solution in the evolutionary process.

Exploration and Refinement

The algorithm operates in two alternating phases. During exploration, it expands the cellular structures into new regions: inserting candidate values between known performers, venturing into areas not yet evaluated, deliberately maintaining breadth. During refinement, it consolidates what it has found: coalescing similar successful values, dividing the most promising candidates into finer variants, concentrating resolution where the evidence suggests it is warranted.
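The two phases might be sketched as operations on a sorted list of candidate values, under stated assumptions: the midpoint insertion, the merge tolerance, and the ±10% subdivision rule below are illustrative choices, not ECO's published update rules.

```python
# Hedged sketch of the alternating phases; all constants are illustrative.

def explore(values: list[float]) -> list[float]:
    """Exploration: insert candidates between known values, widening
    the lattice with points not yet evaluated."""
    vals = sorted(values)
    midpoints = [(a + b) / 2 for a, b in zip(vals, vals[1:])]
    return sorted(set(vals + midpoints))


def refine(values: list[float], scores: dict[float, float],
           merge_tol: float = 0.05, top_k: int = 2) -> list[float]:
    """Refinement: coalesce near-duplicate values, then subdivide the
    most promising candidates into finer variants."""
    vals = sorted(values)
    merged = [vals[0]]
    for v in vals[1:]:
        if v - merged[-1] > merge_tol:  # drop values too close to a kept one
            merged.append(v)
    # Concentrate resolution around the best-scoring survivors.
    best = sorted(merged, key=lambda v: scores.get(v, float("-inf")),
                  reverse=True)[:top_k]
    finer = [v * f for v in best for f in (0.9, 1.1)]
    return sorted(set(merged + finer))
```

A run might alternate the two, feeding each phase the evaluated scores of the previous one, so that breadth and resolution are traded off generation by generation.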

The effect is that the search landscape is not fixed at the outset. It is built incrementally, shaped by what the algorithm learns as it runs. Regions that prove productive receive more detail. Regions that prove unfruitful are not abandoned arbitrarily: they are recognised as low-yielding through accumulated evidence, and resources are redirected accordingly.

The Holland-von Neumann Landscape

The structure that emerges from this process is what ECO terms the Holland-von Neumann landscape: a dynamic representation of the search space that carries the memory of everything the algorithm has discovered. It is named for John Holland, whose work on genetic algorithms established the evolutionary foundation, and John von Neumann, whose work on cellular automata established the local generative rules.

The landscape is not assumed. It is constructed. This is the core departure from conventional optimisation, and it is what allows ECO to find configurations that human-bounded search would not have known to look for.


What Makes It Different

The practical consequence of constructive optimisation is that human assumptions about the problem are no longer load-bearing. The algorithm does not inherit the practitioner's intuitions about where good solutions live. It discovers the topology of the landscape from evidence, and builds its search strategy accordingly.

This matters most in the settings where conventional methods are least reliable: complex, multimodal problems where the landscape has multiple competing optima and where the best solutions are not where intuition might place them. ECO has been validated across medical imaging, natural language processing, and computer vision tasks, as well as abstract mathematical optimisation problems designed specifically to challenge methods that rely on human-defined search structures. The results across these domains informed the doctoral research from which ECO emerged.

The algorithm is also fully transparent. Because the landscape is constructed incrementally and every decision is guided by empirical feedback rather than random sampling, the search history can be examined and understood. This is not a black box that happens to produce good results. It is a transparent process that can be traced, analysed, and improved.

For organisations applying ML to high-stakes problems where the consequences of suboptimal configurations are material, the ability to construct rather than assume the search space is a meaningful operational advantage.

A fuller technical account of ECO, including its architecture and empirical evaluation, is available in a more comprehensive treatment.