
Documentation

A comprehensive guide to neural networks and the implementation behind this interactive playground

1. Neural Networks Overview

Neural networks are computational models inspired by the human brain's structure and function. They consist of interconnected nodes (neurons) organized in layers that process information through weighted connections and activation functions.

Core Components

  • Neurons: Basic processing units that receive inputs, apply weights, and produce outputs
  • Layers: Groups of neurons - input, hidden, and output layers
  • Weights: Learnable parameters that determine connection strength between neurons
  • Biases: Additional parameters that allow neurons to adjust their activation threshold
  • Activation Functions: Non-linear functions that introduce complexity to the model

Learning Process

Neural networks learn through backpropagation combined with gradient descent, a process in which:

  1. Input data flows forward through the network
  2. The network produces predictions
  3. Error is calculated by comparing predictions to actual targets
  4. Gradients are computed and propagated backward
  5. Weights and biases are updated to minimize error (the update rule is shown below)
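
In symbols, the update in step 5 is ordinary gradient descent with learning rate η:

W^{(l)} \leftarrow W^{(l)} - \eta \frac{\partial L}{\partial W^{(l)}}, \qquad b^{(l)} \leftarrow b^{(l)} - \eta \frac{\partial L}{\partial b^{(l)}}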

2. Mathematical Foundation

Forward Propagation

For layer l, the activation vector is computed as:

a^{(l)} = f(W^{(l)} \cdot a^{(l-1)} + b^{(l)})

Where:

  • a^{(l)} is the activation of layer l
  • W^{(l)} is the weight matrix connecting layers l-1 and l
  • b^{(l)} is the bias vector for layer l
  • f is the activation function
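
For example, a single neuron with weights [0.5, -0.3], bias 0.1, and inputs [1, 2] (values chosen purely for illustration) computes:

z = 0.5 \cdot 1 + (-0.3) \cdot 2 + 0.1 = 0, \qquad \tanh(0) = 0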

Activation Functions

This playground supports three activation functions:

Sigmoid

\sigma(x) = \frac{1}{1 + e^{-x}}

Output range: (0, 1)

Tanh

\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}

Output range: (-1, 1)

ReLU

\text{ReLU}(x) = \max(0, x)

Output range: [0, ∞)
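
The forward-pass and backpropagation code in Section 3 calls this.activate and this.activateDerivative, which the excerpts omit. A minimal sketch of the two, written here as free functions consistent with the three formulas above (the playground's actual methods may differ in detail):

// Hypothetical helpers matching the formulas in this section
function activate(z: number, fn: string): number {
  switch (fn) {
    case 'sigmoid': return 1 / (1 + Math.exp(-z));
    case 'relu':    return Math.max(0, z);
    default:        return Math.tanh(z); // 'tanh'
  }
}

function activateDerivative(z: number, fn: string): number {
  switch (fn) {
    case 'sigmoid': {
      const s = 1 / (1 + Math.exp(-z));
      return s * (1 - s); // σ'(x) = σ(x)(1 − σ(x))
    }
    case 'relu':    return z > 0 ? 1 : 0;
    default:        return 1 - Math.tanh(z) ** 2; // tanh'(x) = 1 − tanh²(x)
  }
}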

Backpropagation

The gradient of the loss function with respect to weights is computed using the chain rule:

\frac{\partial L}{\partial W^{(l)}} = \frac{\partial L}{\partial a^{(l)}} \cdot \frac{\partial a^{(l)}}{\partial z^{(l)}} \cdot \frac{\partial z^{(l)}}{\partial W^{(l)}}

Where z^{(l)} = W^{(l)} \cdot a^{(l-1)} + b^{(l)} is the pre-activation output.
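
Applying the chain rule layer by layer yields the delta recurrence that the implementation below follows, stated here for a squared-error loss (which matches the (output − target) factor in the code), with ⊙ denoting element-wise multiplication:

\delta^{(L)} = (a^{(L)} - y) \odot f'(z^{(L)}), \qquad \delta^{(l)} = \left( (W^{(l+1)})^{T} \delta^{(l+1)} \right) \odot f'(z^{(l)})

\frac{\partial L}{\partial W^{(l)}} = \delta^{(l)} \, (a^{(l-1)})^{T}, \qquad \frac{\partial L}{\partial b^{(l)}} = \delta^{(l)}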

3. Implementation Details

Neural Network Class Structure

The core neural network is implemented as a TypeScript class with methods for forward propagation, backpropagation, and training.

NeuralNetwork.ts
export class NeuralNetwork {
  layers: number[];
  weights: number[][][];
  biases: number[][];

  constructor(layers: number[]) {
    this.layers = layers;
    this.weights = [];
    this.biases = [];
    this.initializeNetwork();
  }

  initializeNetwork() {
    for (let i = 0; i < this.layers.length - 1; i++) {
      const layerWeights: number[][] = [];
      const layerBiases: number[] = [];

      for (let j = 0; j < this.layers[i + 1]; j++) {
        const neuronWeights: number[] = [];
        for (let k = 0; k < this.layers[i]; k++) {
          // Random initialization between -1 and 1
          neuronWeights.push(Math.random() * 2 - 1);
        }
        layerWeights.push(neuronWeights);
        layerBiases.push(Math.random() * 2 - 1);
      }

      this.weights.push(layerWeights);
      this.biases.push(layerBiases);
    }
  }
}
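
For instance, a network with 2 inputs, two hidden layers of 4 neurons each, and 1 output (sizes chosen for illustration, not the playground's defaults) would be constructed as:

const net = new NeuralNetwork([2, 4, 4, 1]);
// net.weights[0] is a 4x2 matrix feeding hidden layer 1;
// net.weights[2] is a 1x4 matrix feeding the output neuron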

Forward Propagation Implementation

The forward pass computes activations for each layer sequentially:

NeuralNetwork.ts - Forward Pass
forward(input: number[], activationFunction: string = 'tanh') {
  let activation = input;
  const activations = [activation];
  const zs: number[][] = [];

  for (let i = 0; i < this.weights.length; i++) {
    const layerZ: number[] = [];
    const layerOutput: number[] = [];

    for (let j = 0; j < this.weights[i].length; j++) {
      // Compute weighted sum + bias
      let z = this.biases[i][j];
      for (let k = 0; k < this.weights[i][j].length; k++) {
        z += this.weights[i][j][k] * activation[k];
      }
      layerZ.push(z);
      // Apply activation function
      layerOutput.push(this.activate(z, activationFunction));
    }

    zs.push(layerZ);
    activation = layerOutput;
    activations.push(activation);
  }

  return { activations, zs };
}
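
Continuing the construction example above, reading a prediction back out of the returned structure might look like this (input values are illustrative):

const { activations } = net.forward([0.5, -0.2], 'tanh');
// The final entry holds the output layer's activations
const prediction = activations[activations.length - 1][0];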

Backpropagation Algorithm

The backpropagate method computes gradients via the chain rule and applies a gradient-descent update to the weights and biases:

NeuralNetwork.ts - Backpropagation
backpropagate(input: number[], target: number[], learningRate: number, activationFunction: string): number {
  // Forward pass
  const { activations, zs } = this.forward(input, activationFunction);

  // Initialize gradient arrays
  const nabla_b = this.biases.map(layer => new Array(layer.length).fill(0));
  const nabla_w = this.weights.map(layer =>
    layer.map(neuron => new Array(neuron.length).fill(0))
  );

  // Compute output layer error: delta = (output - target) * f'(z)
  const outputActivations = activations[activations.length - 1];
  const outputZs = zs[zs.length - 1];
  let delta = outputActivations.map((output, i) =>
    (output - target[i]) * this.activateDerivative(outputZs[i], activationFunction)
  );

  nabla_b[nabla_b.length - 1] = delta;

  // Weight gradients for the output layer
  const last = this.weights.length - 1;
  for (let j = 0; j < this.weights[last].length; j++) {
    for (let k = 0; k < this.weights[last][j].length; k++) {
      nabla_w[last][j][k] = delta[j] * activations[last][k];
    }
  }

  // Backpropagate error through hidden layers
  for (let l = this.weights.length - 2; l >= 0; l--) {
    const z = zs[l];
    const sp = z.map(zv => this.activateDerivative(zv, activationFunction));
    const delta_next = delta;

    delta = new Array(this.weights[l].length).fill(0);
    for (let i = 0; i < delta.length; i++) {
      let sum = 0;
      for (let j = 0; j < delta_next.length; j++) {
        sum += this.weights[l + 1][j][i] * delta_next[j];
      }
      delta[i] = sum * sp[i];
    }

    nabla_b[l] = delta;

    // Compute weight gradients
    for (let j = 0; j < this.weights[l].length; j++) {
      for (let k = 0; k < this.weights[l][j].length; k++) {
        nabla_w[l][j][k] = delta[j] * activations[l][k];
      }
    }
  }

  // Update weights and biases (gradient descent step)
  for (let i = 0; i < this.weights.length; i++) {
    for (let j = 0; j < this.weights[i].length; j++) {
      for (let k = 0; k < this.weights[i][j].length; k++) {
        this.weights[i][j][k] -= learningRate * nabla_w[i][j][k];
      }
      this.biases[i][j] -= learningRate * nabla_b[i][j];
    }
  }

  return this.calculateError(outputActivations, target);
}
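
The method finishes by calling this.calculateError, which the excerpt omits. Given the (output − target) error term above, a consistent sketch is a squared-error loss; the playground's actual implementation may differ:

// Hypothetical helper: squared-error loss whose per-output gradient is (output - target)
calculateError(output: number[], target: number[]): number {
  return output.reduce((sum, o, i) => sum + 0.5 * (o - target[i]) ** 2, 0);
}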

4. Project Architecture

The project is structured using React components with clear separation of concerns:

Component Structure

  • page.tsx - Main component with state management
  • NeuralNetwork.ts - Core neural network implementation
  • NetworkCanvas.tsx - Network structure visualization
  • ResultsCanvas.tsx - Classification results display
  • SettingsPanel.tsx - Configuration controls
  • Legend.tsx - Visual legends for canvases

State Management

The main component manages all application state using React hooks:

page.tsx - State Management
export default function NeuralNetworkPlayground() {
  const [network, setNetwork] = useState<NeuralNetwork | null>(null);
  const [isTraining, setIsTraining] = useState(false);
  const [isPaused, setIsPaused] = useState(false);
  const [epoch, setEpoch] = useState(0);
  const [error, setError] = useState(0);
  const [accuracy, setAccuracy] = useState(0);
  const [problemType, setProblemType] = useState<ProblemType>("spiral");
  const [hiddenLayers, setHiddenLayers] = useState(3);
  const [neuronsPerLayer, setNeuronsPerLayer] = useState(8);
  const [learningRate, setLearningRate] = useState(0.03);
  const [activationFunction, setActivationFunction] = useState("tanh");
  const [isClient, setIsClient] = useState(false);

  const animationFrameRef = useRef<number | undefined>(undefined);

  // Training loop using requestAnimationFrame for smooth animation
  useEffect(() => {
    if (!isTraining || isPaused || !network || !isClient) return;

    const animate = () => {
      trainStep();
      animationFrameRef.current = window.requestAnimationFrame(animate);
    };

    animationFrameRef.current = window.requestAnimationFrame(animate);

    return () => {
      if (animationFrameRef.current) {
        window.cancelAnimationFrame(animationFrameRef.current);
      }
    };
  }, [isTraining, isPaused, network, learningRate, activationFunction, problemType, isClient]);
}
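
One detail the excerpt leaves implicit is how hiddenLayers and neuronsPerLayer become the layers array passed to NeuralNetwork. A plausible sketch (the actual construction code is not shown here):

// 2 inputs (x, y), `hiddenLayers` hidden layers of `neuronsPerLayer` neurons, 1 output
const layerSizes = [2, ...Array(hiddenLayers).fill(neuronsPerLayer), 1];
setNetwork(new NeuralNetwork(layerSizes));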

5. Visualization Components

Network Structure Canvas

The NetworkCanvas component visualizes the neural network structure with neurons and weighted connections:

NetworkCanvas.tsx - Rendering Logic
const render = () => {
  const rect = canvas.getBoundingClientRect();
  const dpr = window.devicePixelRatio || 1;

  canvas.width = rect.width * dpr;
  canvas.height = rect.height * dpr;
  ctx.scale(dpr, dpr);

  // Clear canvas
  ctx.fillStyle = '#0a0a0a';
  ctx.fillRect(0, 0, rect.width, rect.height);

  const margin = 40;
  const layerSpacing = (rect.width - margin * 2) / Math.max(1, network.layers.length - 1);

  // Draw connections with weight-based styling
  for (let i = 0; i < network.layers.length - 1; i++) {
    const layer1X = margin + layerSpacing * i;
    const layer2X = margin + layerSpacing * (i + 1);

    for (let j = 0; j < network.layers[i]; j++) {
      for (let k = 0; k < network.layers[i + 1]; k++) {
        const weight = network.weights[i][k][j];

        // Distribute neurons vertically within each layer (the position
        // calculation was elided in the original excerpt; computed here so
        // the snippet is self-contained)
        const neuron1Y = margin + ((j + 0.5) / network.layers[i]) * (rect.height - margin * 2);
        const neuron2Y = margin + ((k + 0.5) / network.layers[i + 1]) * (rect.height - margin * 2);

        ctx.beginPath();
        const opacity = Math.min(Math.abs(weight), 1);
        if (weight > 0) {
          ctx.strokeStyle = `rgba(255, 255, 255, ${opacity * 0.5})`;
        } else {
          ctx.strokeStyle = `rgba(115, 115, 115, ${opacity * 0.5})`;
        }
        ctx.lineWidth = Math.min(Math.abs(weight) * 3, 4);
        ctx.moveTo(layer1X, neuron1Y);
        ctx.lineTo(layer2X, neuron2Y);
        ctx.stroke();
      }
    }
  }

  // Draw neurons with glow effects
  for (let i = 0; i < network.layers.length; i++) {
    // ... neuron rendering code
  }
};

Classification Results Canvas

The ResultsCanvas shows the decision boundary and training data points:

ResultsCanvas.tsx - Decision Boundary
// Draw the decision boundary
// (size, offsetX, and offsetY come from layout code earlier in the component)
const resolution = 50;
const step = size / resolution;

for (let i = 0; i < resolution; i++) {
  for (let j = 0; j < resolution; j++) {
    // Map each grid cell to input space [-1, 1] x [-1, 1]
    const x = (i / resolution) * 2 - 1;
    const y = (j / resolution) * 2 - 1;

    // Get network prediction for this point
    const output = network.forward([x, y], 'tanh').activations.slice(-1)[0][0];

    // Color based on prediction
    if (output > 0.5) {
      ctx.fillStyle = 'rgba(255, 255, 255, 0.2)';
    } else {
      ctx.fillStyle = 'rgba(115, 115, 115, 0.2)';
    }

    ctx.fillRect(
      offsetX + i * step,
      offsetY + j * step,
      step + 1,
      step + 1
    );
  }
}

6. Training Process

Problem Types

The playground includes two classic classification problems:
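
Both generators return TrainingData records. The type itself is not shown in the excerpts, but based on how it is used, it presumably looks like:

// Hypothetical shape of the TrainingData records used by both generators
interface TrainingData {
  input: number[];   // [x, y] coordinates in [-1, 1]
  target: number[];  // one-element class label, e.g. [0] or [1]
}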

Circle Classification

Points inside a circle belong to Class 1, points outside belong to Class 0.

Problems.ts - Circle
circle: {
  generateData(): TrainingData[] {
    const data: TrainingData[] = [];
    for (let i = 0; i < 100; i++) {
      const x = Math.random() * 2 - 1;
      const y = Math.random() * 2 - 1;
      // Class 1 inside a circle of radius sqrt(0.5) ≈ 0.707, Class 0 outside
      const target = x * x + y * y < 0.5 ? [1] : [0];
      data.push({ input: [x, y], target });
    }
    return data;
  }
}

Spiral Classification

Two interleaving spirals represent different classes.

Problems.ts - Spiral
spiral: {
  generateData(): TrainingData[] {
    const data: TrainingData[] = [];
    const n = 100;
    // maxRadius is defined elsewhere in the module
    // (e.g. 1 to match the [-1, 1] input range)

    for (let i = 0; i < n; i++) {
      for (let j = 0; j < 2; j++) {
        // Two interleaved spirals, offset from each other by pi radians
        const r = (i / n) * maxRadius;
        const theta = (i / n) * 4 * Math.PI + j * Math.PI;

        const x = r * Math.cos(theta);
        const y = r * Math.sin(theta);

        data.push({
          input: [x, y],
          target: [j]
        });
      }
    }
    return data;
  }
}

Real-time Training Loop

Training occurs in real-time using requestAnimationFrame for smooth visualization:

page.tsx - Training Step
const trainStep = () => {
  if (!network) return;

  const trainingData = Problems[problemType].generateData();
  let totalError = 0;
  let correctPredictions = 0;

  trainingData.forEach(({ input, target }) => {
    const error = network.backpropagate(input, target, learningRate, activationFunction);
    totalError += error;

    const output = network.forward(input, activationFunction).activations.slice(-1)[0][0];
    if (Math.round(output) === target[0]) {
      correctPredictions++;
    }
  });

  const newAccuracy = (correctPredictions / trainingData.length) * 100;
  const avgError = totalError / trainingData.length;

  setEpoch(prev => prev + 1);
  setError(avgError);
  setAccuracy(newAccuracy);
};

Performance Considerations

  • Client-side rendering: All computations run in the browser for instant feedback
  • Efficient canvas updates: Double buffering and device pixel ratio handling
  • Optimized training loop: Batch processing of training data each frame
  • Memory management: Proper cleanup of animation frames and event listeners

This documentation provides a comprehensive overview of the neural network implementation. For questions or contributions, feel free to explore the source code or reach out.