A lightweight, efficient Java implementation of the Novelty Search algorithm for evolutionary computation and genetic algorithms.
Novelty Search is a revolutionary approach to evolutionary algorithms that rewards behavioral novelty instead of objective fitness. Rather than optimizing toward a specific goal, it encourages exploration of diverse behaviors, often discovering creative and unexpected solutions that traditional objective-based approaches miss entirely.
- Avoids Local Optima: Escapes deceptive fitness landscapes where the path to the goal requires temporarily moving away from the objective
- Discovers Unexpected Solutions: Finds creative solutions that objective-based search would never consider
- Open-Ended Evolution: Can run indefinitely, continuously discovering new behaviors
- Natural Diversity: Automatically maintains population diversity without explicit mechanisms
- Standard Novelty Search - K-nearest neighbors algorithm with configurable parameters
- Three Archive Strategies - Threshold increase, FIFO, or no-add when archive is full
- Dual Comparison - Compares against both archive and current population
- Memory Safe - Defensive copying prevents external modifications
- Zero Dependencies - Pure Java, no external libraries required
- Well Documented - Complete Javadoc and inline comments
- Thread Safe - Safe for concurrent evolutionary algorithms
- Download novelty-search.jar and add it to your project's classpath
For IntelliJ IDEA:
- File -> Project Structure -> Libraries
- Click + -> Java
- Select novelty-search.jar
For Eclipse:
- Right-click project -> Properties
- Java Build Path -> Libraries -> Add External JARs
- Select novelty-search.jar
For Command Line:
javac -cp novelty-search.jar YourFile.java
java -cp .:novelty-search.jar YourClass
(On Windows, use ; instead of : as the classpath separator.)
import main.java.noveltysearch.NoveltySearch;
import main.java.noveltysearch.NoveltySearch.FULL_ARCHIVE_STRATEGY;
public class Example {
public static void main(String[] args) {
// Create novelty search instance
NoveltySearch ns = new NoveltySearch(
10, // behavior vector dimension
500, // archive capacity
15, // k-nearest neighbors
0.5, // novelty threshold
FULL_ARCHIVE_STRATEGY.THRESHOLD_INCREASE // archive strategy
);
// Evaluate an individual's novelty
double[] behavior = {1.2, 3.4, 5.6, 7.8, 9.0, 1.1, 2.2, 3.3, 4.4, 5.5};
double[][] population = getCurrentPopulation(); // your population
double noveltyScore = ns.getNovelty(behavior, population);
System.out.println("Novelty Score: " + noveltyScore);
}
static double[][] getCurrentPopulation() {
// Return your current population's behaviors
return new double[100][10]; // 100 individuals, 10D behavior
}
}

NoveltySearch(int vectorLength,
int archiveCapacity,
int kValue,
double noveltyThreshold,
FULL_ARCHIVE_STRATEGY fullArchiveStrategy)

| Parameter | Type | Description |
|---|---|---|
| vectorLength | int | Dimensionality of behavior vectors |
| archiveCapacity | int | Maximum number of behaviors to store in archive |
| kValue | int | Number of nearest neighbors for novelty calculation (typically 15-20) |
| noveltyThreshold | double | Minimum novelty score required for archive entry |
| fullArchiveStrategy | enum | Strategy when archive reaches capacity |
| Strategy | Behavior | Best For |
|---|---|---|
| THRESHOLD_INCREASE | Increases threshold by 5% when full | Long-running evolution, maintaining quality |
| NO_ADD | Stops adding when full | Fixed comparison set |
| REMOVE_OLDEST | FIFO replacement | Keeping archive "fresh" |
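The three strategies in the table above can be sketched as follows. This is an illustrative re-implementation, not the library's actual code: in particular, whether THRESHOLD_INCREASE drops the candidate after raising the threshold is an assumption here, and the class and method names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ArchiveStrategySketch {
    enum Strategy { THRESHOLD_INCREASE, NO_ADD, REMOVE_OLDEST }

    static double threshold = 0.5; // mutable: THRESHOLD_INCREASE adapts it

    // Attempts to store a behavior; returns true if it entered the archive.
    static boolean tryAdd(Deque<double[]> archive, int capacity,
                          double[] behavior, double novelty, Strategy strategy) {
        if (novelty < threshold) return false;   // below threshold: never archived
        if (archive.size() < capacity) {
            archive.addLast(behavior);
            return true;
        }
        switch (strategy) {
            case THRESHOLD_INCREASE:             // raise the bar by 5% (per the table);
                threshold *= 1.05;               // dropping the candidate is an assumption
                return false;
            case REMOVE_OLDEST:                  // FIFO: evict oldest, keep newest
                archive.removeFirst();
                archive.addLast(behavior);
                return true;
            default:                             // NO_ADD: archive is frozen once full
                return false;
        }
    }
}
```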
double getNovelty(double[] vector, double[][] population)Calculates the novelty score for a behavior vector by comparing it against the archive and current population.
Parameters:
- vector - The behavior vector to evaluate (must match vectorLength)
- population - Current population's behavior vectors (can be null)
Returns:
- Novelty score (higher = more novel)
- Returns 0 for invalid inputs
- Returns 10,000 for the first valid vector when archive is empty
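The edge cases documented above can be re-created in a standalone sketch. This is not the library's source; as a simplification it averages distance to all comparison points (which equals the k-nearest-neighbor average whenever k is at least the pool size), and the class name is hypothetical.

```java
public class NoveltyEdgeCases {
    static final double FIRST_VECTOR_SCORE = 10_000.0; // sentinel from the API docs

    // Illustrative version of the documented getNovelty semantics.
    static double getNovelty(double[] vector, double[][] population,
                             double[][] archive, int vectorLength) {
        if (vector == null || vector.length != vectorLength) return 0.0; // invalid input -> 0
        int n = (population == null ? 0 : population.length)
              + (archive == null ? 0 : archive.length);
        if (n == 0) return FIRST_VECTOR_SCORE;   // empty archive and population
        double sum = 0;
        if (population != null) for (double[] p : population) sum += dist(vector, p);
        if (archive != null)    for (double[] a : archive)    sum += dist(vector, a);
        return sum / n;                          // higher = more novel
    }

    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }
}
```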
int getVectorLength() // Get behavior vector dimension
int getArchiveCapacity() // Get maximum archive size
int getKValue() // Get k-nearest neighbors value
double getNoveltyThreshold() // Get current threshold (may change dynamically)
FULL_ARCHIVE_STRATEGY getFullArchiveStrategy() // Get archive strategy
ArrayList<double[]> getArchive() // Get defensive copy of archive

import main.java.noveltysearch.NoveltySearch;
import main.java.noveltysearch.NoveltySearch.FULL_ARCHIVE_STRATEGY;
import java.util.*;
public class MazeSolver {
static class Individual {
double[] genome; // Neural network weights or policy
double finalX, finalY; // Where the agent ended up
double fitness; // Novelty score
public Individual(int genomeSize) {
genome = new double[genomeSize];
Random rand = new Random();
for (int i = 0; i < genomeSize; i++)
genome[i] = rand.nextDouble() * 2 - 1; // [-1, 1]
}
public void simulate() {
// Simulate agent navigating maze using genome
// Set finalX, finalY based on where agent ends up
finalX = Math.random() * 100;
finalY = Math.random() * 100;
}
public double[] getBehavior() {
return new double[]{finalX, finalY};
}
}
public static void main(String[] args) {
// Setup
int populationSize = 100;
int genomeSize = 50;
List<Individual> population = new ArrayList<>();
// Initialize population
for (int i = 0; i < populationSize; i++)
population.add(new Individual(genomeSize));
// Create novelty search (2D behavior: final x,y position)
NoveltySearch ns = new NoveltySearch(
2, // 2D behavior space (x, y)
500, // Archive up to 500 novel behaviors
15, // Use 15 nearest neighbors
0.5, // Novelty threshold
FULL_ARCHIVE_STRATEGY.THRESHOLD_INCREASE
);
// Evolution loop
for (int generation = 0; generation < 1000; generation++) {
// Evaluate all individuals
for (Individual ind : population)
ind.simulate();
// Get behavior vectors for entire population
double[][] behaviors = new double[populationSize][2];
for (int i = 0; i < populationSize; i++)
behaviors[i] = population.get(i).getBehavior();
// Calculate novelty for each individual
for (int i = 0; i < populationSize; i++) {
double novelty = ns.getNovelty(
population.get(i).getBehavior(),
behaviors
);
population.get(i).fitness = novelty;
}
// Selection and reproduction based on novelty
population = selectAndReproduce(population);
// Stats
if (generation % 100 == 0) {
double avgNovelty = population.stream()
.mapToDouble(ind -> ind.fitness)
.average().orElse(0);
System.out.printf("Gen %d: Avg Novelty=%.2f, Archive=%d%n",
generation, avgNovelty, ns.getArchive().size());
}
}
}
static List<Individual> selectAndReproduce(List<Individual> pop) {
// Tournament selection based on novelty
List<Individual> newPop = new ArrayList<>();
Random rand = new Random();
while (newPop.size() < pop.size()) {
// Tournament
Individual parent = pop.get(rand.nextInt(pop.size()));
for (int i = 0; i < 3; i++) {
Individual competitor = pop.get(rand.nextInt(pop.size()));
if (competitor.fitness > parent.fitness)
parent = competitor;
}
// Mutate and add
Individual child = new Individual(parent.genome.length);
for (int i = 0; i < child.genome.length; i++) {
child.genome[i] = parent.genome[i] + rand.nextGaussian() * 0.1;
child.genome[i] = Math.max(-1, Math.min(1, child.genome[i]));
}
newPop.add(child);
}
return newPop;
}
}

| Domain | Behavior Descriptor | Application |
|---|---|---|
| Robotics | Joint angles, trajectory endpoints | Discovering diverse gaits and locomotion strategies |
| Maze Solving | Final (x, y) position | Finding paths through deceptive mazes |
| Game AI | Game states visited, action sequences | Evolving creative gameplay strategies |
| Neural Architecture | Network topology, layer sizes | Discovering novel network architectures |
| Procedural Generation | Level features, difficulty metrics | Creating diverse game content |
| Multi-agent Systems | Agent positions, interaction patterns | Emergent cooperative behaviors |
- Behavior Characterization: Each individual is described by its behavior (e.g., where it ends up, not its genes)
- Novelty Calculation: Novelty = average distance to k-nearest neighbors in behavior space
- Archive Management: Novel behaviors (above threshold) are stored in an archive
- Selection: Individuals with high novelty scores are selected for reproduction
- Iteration: Process repeats, continuously exploring new behavioral niches
Novelty uses Euclidean distance in behavior space. For two behavior vectors a and b:
distance = sqrt((a1-b1)^2 + (a2-b2)^2 + ... + (an-bn)^2)
novelty = average distance to k-nearest neighbors
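The two formulas above can be sketched in Java. This is an illustrative implementation of the metric, not the library's internals; in practice the pool would be the archive plus the current population, with the individual itself excluded.

```java
import java.util.Arrays;

public class KnnNovelty {
    // novelty(v) = mean Euclidean distance from v to its k nearest
    // neighbors among the pooled comparison vectors.
    static double novelty(double[] v, double[][] pool, int k) {
        double[] d = new double[pool.length];
        for (int i = 0; i < pool.length; i++) {
            double s = 0;
            for (int j = 0; j < v.length; j++) {
                double diff = v[j] - pool[i][j];
                s += diff * diff;
            }
            d[i] = Math.sqrt(s);                 // Euclidean distance to pool[i]
        }
        Arrays.sort(d);                          // ascending: nearest first
        int kk = Math.min(k, d.length);          // guard against k > pool size
        double sum = 0;
        for (int i = 0; i < kk; i++) sum += d[i];
        return sum / kk;
    }
}
```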
| Parameter | Typical Range | Effect |
|---|---|---|
| kValue | 10-25 | Lower = more local novelty, Higher = more global novelty |
| archiveCapacity | 100-1000 | Larger = more memory, better diversity representation |
| noveltyThreshold | 0.1-2.0 | Lower = easier to enter archive, Higher = only truly novel behaviors |
Tips:
- Start with k=15, adjust based on results
- Set threshold low initially, let THRESHOLD_INCREASE adapt it
- Archive capacity should be approximately 5-10x population size
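The effect of k can be seen with a toy 1D behavior space: a point inside a dense cluster scores low novelty with small k (its nearest neighbors are close), while an isolated point scores high. This demo reuses the k-nearest-neighbor metric described earlier; the class name and data are illustrative.

```java
import java.util.Arrays;

public class LocalVsGlobalDemo {
    // k-nearest-neighbor novelty, as described in the formula section.
    static double novelty(double[] v, double[][] pool, int k) {
        double[] d = new double[pool.length];
        for (int i = 0; i < pool.length; i++) {
            double s = 0;
            for (int j = 0; j < v.length; j++) {
                double diff = v[j] - pool[i][j];
                s += diff * diff;
            }
            d[i] = Math.sqrt(s);
        }
        Arrays.sort(d);
        int kk = Math.min(k, d.length);
        double sum = 0;
        for (int i = 0; i < kk; i++) sum += d[i];
        return sum / kk;
    }

    public static void main(String[] args) {
        // 1D behaviors: a dense cluster near 0 and one distant outlier at 10.
        double[][] pool = {{0.0}, {0.1}, {0.2}, {10.0}};
        double inCluster = novelty(new double[]{0.05}, pool, 2); // 0.05: locally crowded
        double isolated  = novelty(new double[]{5.0},  pool, 2); // 4.85: locally sparse
        System.out.printf("cluster=%.2f isolated=%.2f%n", inCluster, isolated);
    }
}
```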
- Java 8 or higher
- No external dependencies
This project is licensed under the MIT License - see the LICENSE file for details.
- Based on the original Novelty Search algorithm by Joel Lehman and Kenneth O. Stanley
- Thanks to the evolutionary computation community