Hive Mind:
Evolution from Scratch

A high-performance simulation of artificial life, implementing Neural Networks and Evolutionary strategies purely in Zig, with zero external ML libraries.

The Concept

In a world of high-level frameworks like PyTorch and TensorFlow, I wanted to understand the mathematics behind intelligence.

Hive Mind is a simulation where autonomous agents (Ants) explore a grid, scavenge for food, and evolve over generations. Each ant possesses a unique Neural Network brain and a genetic code that dictates its physical attributes.

The goal? To see if complex behaviors—like efficient pathfinding and resource management—could emerge from simple biological constraints and raw mathematics.

Key Features

  • Zero Dependencies: Neural Network built from scratch (Tensors, Layers, Backprop).
  • Genetic Evolution: Speed vs. Metabolism trade-offs passed to offspring.
  • SIMD Optimization: Vectorized operations for high-performance matrix multiplication.
  • Custom Memory Management: Manual allocation using Zig's allocators.

Under the Hood

The core engine is written in Zig for its manual memory control and C-like performance.

🧠 The Brain

A modular Feed-Forward Neural Network. It processes sensory inputs (pheromones, walls, food scent) to decide movement.
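As a rough sketch of what one dense layer of such a brain looks like (the names `Layer` and `forward` are illustrative, not the project's actual API), each output neuron is a weighted sum of the sensory inputs passed through an activation:

```zig
const std = @import("std");

// Illustrative sketch of a dense layer's forward pass.
// weights are stored row-major: one row of `inputs_count` weights per output.
pub const Layer = struct {
    weights: []f32,
    biases: []f32,
    inputs_count: usize,
    outputs_count: usize,

    pub fn forward(self: *const Layer, input: []const f32, output: []f32) void {
        for (0..self.outputs_count) |o| {
            var sum: f32 = self.biases[o];
            const row = self.weights[o * self.inputs_count ..][0..self.inputs_count];
            // input.len must equal inputs_count
            for (row, input) |w, x| sum += w * x;
            output[o] = @max(0.0, sum); // ReLU activation
        }
    }
};
```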

🧬 Genetics

Ants inherit genes like gene_speed. Faster ants move more but burn energy exponentially ($E = v^2$), forcing an evolutionary balance.
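A minimal sketch of that trade-off (field names are hypothetical): each step drains energy quadratically in speed, so a fast ant covers more ground per tick but starves sooner unless it finds food faster.

```zig
// Illustrative speed/metabolism trade-off.
const Ant = struct {
    gene_speed: f32,
    energy: f32,

    fn step(self: *Ant) void {
        // Quadratic metabolic cost: E = v^2
        self.energy -= self.gene_speed * self.gene_speed;
    }
};
```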

⚡ SIMD Math

To train thousands of epochs efficiently, matrix multiplications utilize Zig's @Vector for parallel CPU instruction execution.

layer.zig

// SIMD vectorization for the dense-layer forward pass
const SimdWidth = 8; // floats processed per instruction
const Vec = @Vector(SimdWidth, f32);

var vec_sum: Vec = @splat(0.0);
var i: usize = 0;
while (i + SimdWidth <= self.inputs_count) : (i += SimdWidth) {
    const v_in: Vec = input[i..][0..SimdWidth].*;
    const v_w: Vec = self.weights[w_start + i ..][0..SimdWidth].*;

    // Parallel multiplication of 8 floats at once
    vec_sum += v_in * v_w;
}
sum += @reduce(.Add, vec_sum);

Extract from the dense layer implementation showing manual SIMD vectorization. A scalar loop (not shown) handles the tail elements when inputs_count is not a multiple of SimdWidth.

Challenges & Solutions

1. The "Vanishing Gradient" Problem

Challenge: Initial training attempts resulted in ants spinning in circles. The gradients were becoming too small to update the weights effectively in deep networks.

Solution: I implemented a modular activation system, switching from pure Sigmoid to ReLU for hidden layers, and implemented a proper weight initialization strategy (randomized within specific bounds).
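The core of that fix can be sketched as follows (enum and function names are illustrative). ReLU keeps the gradient at 1 for positive inputs instead of squashing it, and bounding the initial weights by fan-in (a common heuristic) keeps early activations from saturating:

```zig
const std = @import("std");

// Illustrative modular activation switch.
const Activation = enum { sigmoid, relu };

fn activate(act: Activation, x: f32) f32 {
    return switch (act) {
        .sigmoid => 1.0 / (1.0 + @exp(-x)),
        .relu => @max(0.0, x), // gradient is 1 for x > 0: no vanishing
    };
}

// Bounded random init: uniform in +/- sqrt(1/fan_in).
fn initWeight(rng: std.Random, fan_in: usize) f32 {
    const bound = 1.0 / @sqrt(@as(f32, @floatFromInt(fan_in)));
    return (rng.float(f32) * 2.0 - 1.0) * bound;
}
```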

2. Memory Safety in a Complex Simulation

Challenge: Managing thousands of allocations for tensors and neuron layers manually in Zig led to fragmentation and leaks.

Solution: I structured the project with a strictly hierarchical lifecycle (`init`/`deinit`). The `World` struct owns the Ants, and the `Network` owns the Layers, using Zig's `defer` statement to ensure clean memory release at the end of each training epoch.
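The ownership pattern looks roughly like this (a sketch with placeholder types, not the project's exact code): whoever calls `init` holds the allocator, and `deinit` frees exactly what `init` allocated.

```zig
const std = @import("std");

const Layer = struct { weights: []f32 }; // placeholder for the real layer type

// Illustrative hierarchical lifecycle: the owner allocates, deinit releases.
const Network = struct {
    allocator: std.mem.Allocator,
    layers: []Layer,

    fn init(allocator: std.mem.Allocator, layer_count: usize) !Network {
        return .{
            .allocator = allocator,
            .layers = try allocator.alloc(Layer, layer_count),
        };
    }

    fn deinit(self: *Network) void {
        self.allocator.free(self.layers);
    }
};
```

In the training loop, `var net = try Network.init(allocator, 3); defer net.deinit();` guarantees the release fires at scope exit, including on error paths.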

The Result

The simulation exports the state to JSON, which is rendered by an HTML5 Canvas viewer. Below is a snapshot of the agents converging on food sources using pheromone trails.
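The export step can be sketched with the standard library's JSON serializer (the `Snapshot` fields here are hypothetical; `std.json.stringify` derives the JSON shape from the struct fields, API as of Zig 0.13):

```zig
const std = @import("std");

// Hypothetical per-frame snapshot consumed by the Canvas viewer.
const Snapshot = struct {
    tick: u32,
    ant_count: u32,
};

fn writeSnapshot(writer: anytype, snap: Snapshot) !void {
    try std.json.stringify(snap, .{}, writer);
}
```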

Live visualization requires the Zig backend running.

(Grid: 100x100 | Agents: 20)

Final Thoughts

Building "Hive Mind" pushed me out of the comfort zone of Unity's game loop. It taught me the importance of data-oriented design and gave me a profound appreciation for how "intelligence" can be mathematically derived from simple error-correction functions.

This project bridges my passion for interactive experiences with low-level systems engineering.
