Entropy Regularization

Based on our discussion of entropy, our plan is to implement entropy regularization via an entropy bonus in our loss function. Implementing the entropy bonus: the formula for entropy that we have to implement, \[ H(p) = -\sum_{i=1}^{C} p_i \log p_i, \] is simple enough: multiply the probabilities for the seven possible moves by their log-probabilities, sum, and negate. There is one numerical problem to worry about, however: masking out an illegal move \(i\) leads to a zero probability \(p_i=0\) and a log-probability \(\log p_i = -\infty\), and under the rules of IEEE 754 floating-point arithmetic, multiplying zero by \(\pm\infty\) is undefined and therefore results in NaN (not a number), whereas for the entropy formula the contribution should be 0. ...
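
A minimal sketch of how this zero-times-infinity case can be handled in PyTorch (the function name and the boolean legal-move mask are illustrative assumptions, not the post’s exact code):

```python
import torch

def masked_entropy(logits, legal_mask):
    """Entropy of the move distribution, given raw logits and a bool mask
    of legal moves (both interface details assumed here for illustration).

    Illegal moves get probability 0 and log-probability -inf; replacing
    their log-probabilities by 0 before multiplying makes each 0 * (-inf)
    term contribute 0 instead of NaN.
    """
    masked = logits.masked_fill(~legal_mask, float('-inf'))
    log_p = torch.log_softmax(masked, dim=-1)  # -inf at illegal moves
    p = log_p.exp()                            # exactly 0 at illegal moves
    safe_log_p = torch.where(legal_mask, log_p, torch.zeros_like(log_p))
    return -(p * safe_log_p).sum(dim=-1)       # H(p), one value per board
```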

April 24, 2025 · cfh

On Entropy

Last time, we ran our first self-play training loop on a simple MLP model and observed catastrophic policy collapse. Let’s first understand some of the math behind what happened, and then how to combat it. What is entropy? Given a probability distribution \(p=(p_1,\ldots,p_C)\) over a number of categories \(i=1,\ldots,C\), such as the distribution over the columns that our Connect 4 model outputs for a given board state, entropy measures the “amount of randomness” and is defined as ...
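
To make “amount of randomness” concrete, here is a quick numerical check (the probability values are chosen purely for illustration): the uniform distribution over the 7 columns attains the maximum entropy \(\log 7\), while a nearly deterministic one is close to zero.

```python
import torch

def entropy(p):
    """H(p) = -sum_i p_i * log(p_i) for a probability vector p."""
    return -(p * p.log()).sum()

uniform = torch.full((7,), 1 / 7)            # maximally random policy
peaked = torch.tensor([0.94] + [0.01] * 6)   # nearly deterministic policy

print(entropy(uniform))  # log(7) ~ 1.946, the maximum for 7 categories
print(entropy(peaked))   # ~ 0.334, much closer to zero
```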

April 23, 2025 · cfh

A First Training Run and Policy Collapse

With the REINFORCE algorithm under our belt, we can finally start training some models for Connect 4. However, as we’ll see, there are still some hurdles in our way before we get anywhere. It’s good to set your expectations accordingly, because rarely if ever do things go smoothly the first time in RL. A simple MLP model: as a fruit fly of Connect 4-playing models, let’s start with a simple multilayer perceptron (MLP) that follows the model protocol we outlined earlier: it has an input layer taking a 6x7 int8 board state tensor, a few simple hidden layers, each consisting of just a linear layer and a ReLU activation function, and an output layer of 7 neurons without any activation function. That’s exactly what we meant earlier when we said that the model should output raw logits. ...
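
A rough sketch of what such a model might look like (the hidden width of 128 is an illustrative guess, not necessarily the post’s actual choice):

```python
import torch
from torch import nn

class SimpleMLP(nn.Module):
    """MLP following the protocol described above: a 6x7 int8 board in,
    7 raw logits out. Layer sizes are illustrative."""

    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                         # (B, 6, 7) board -> (B, 42)
            nn.Linear(42, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 7),                 # raw logits, no activation
        )

    def forward(self, board):
        return self.net(board.float())            # cast int8 board to float
```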

April 21, 2025 · cfh

The REINFORCE Algorithm

Let’s say we have a Connect 4-playing model and we let it play a couple of games. (We haven’t really talked about model architecture yet, so for now just imagine a simple multilayer perceptron with a few hidden layers that outputs 7 raw logits, as discussed in the previous post.) As it goes in life, our model wins some and loses some. How do we make it actually learn from its experiences? How does the magic happen? ...
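
For reference, the core update the post builds toward is the standard REINFORCE policy-gradient estimator (written here in its textbook form; the post’s own notation may differ), which scales the gradient of each chosen move’s log-probability by the return \(G\) of the game: \[ \nabla_\theta J(\theta) = \mathbb{E}\left[ G \, \nabla_\theta \log \pi_\theta(a \mid s) \right]. \]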

April 20, 2025 · cfh

Connect-Zero: Reinforcement Learning from Scratch

For a long time I’ve wanted to get deeper into reinforcement learning (RL), and the project I finally settled on is teaching a neural network model how to play the classic game Connect 4 (pretty sneaky, sis!). Obviously, the name “Connect-Zero” is a cheeky nod to AlphaGo Zero and AlphaZero by DeepMind. I chose Connect 4 because it’s a simple game that everyone knows how to play, and one where we can hope to achieve good results without expensive hardware and high training costs. ...

April 20, 2025 · cfh

Training the First Model

Now that we have plenty of training data, we can load it into PyTorch and start training a model. Loading the data: since the binary file format we chose was so simple, it’s rather straightforward to write a Dataset class which reads it in:

```python
import os

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader, Subset

class BinaryBezierDataset(Dataset):
    """
    Loads Bezier triangle data from a binary file into memory once.
    Each record: 11 float32 coords, 32 uint8 bytes (packed 16x16 bitmap).
    """
    def __init__(self, filename, device, input_dim=11):
        super().__init__()
        self.filename = filename
        self.input_dim = input_dim
        coords_bytes = input_dim * np.dtype(np.float32).itemsize  # 44
        record_bytes = coords_bytes + 32  # 76 (coords + 16x16 bitmap = 256 bits)

        # Calculate number of samples from file size
        file_size = os.path.getsize(filename)
        if file_size % record_bytes != 0:
            raise ValueError(f"File size {file_size} not multiple of record size {record_bytes}")
        self.num_samples = file_size // record_bytes
        print(f"Found {self.num_samples} samples in {filename}.")

        with open(filename, 'rb') as f:
            data = np.fromfile(f, dtype=np.uint8, count=file_size)
        data = data.reshape(self.num_samples, record_bytes)  # reshape into records

        # Extract coords (first 44 bytes = 11 floats)
        coords = data[:, :coords_bytes].view(np.float32).reshape(self.num_samples, self.input_dim)

        # Extract and unpack packed bitmaps (last 32 bytes)
        packed_bitmaps = data[:, coords_bytes:]
        unpacked_bits = np.unpackbits(packed_bitmaps, axis=1)  # (num_samples, 256)

        # The actual label is the maximum (0 or 1) over the bitmap bits
        outputs = np.max(unpacked_bits, axis=1)  # (num_samples,)

        # Convert to pytorch tensors and transfer to GPU if required
        self.x_tensor = torch.from_numpy(coords).float().to(device)  # (num_samples, 11)
        self.y_tensor = torch.from_numpy(outputs).float().to(device)  # (num_samples,)

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        return self.x_tensor[idx], self.y_tensor[idx]
```

So far, so good. We are in the convenient position that our entire dataset fits quite comfortably into RAM or VRAM, so we just load the entire dataset at once, extract the 11 triangle coordinates, unpack the bitmap, and take its maximum to get a binary 0/1 label which tells us whether the triangle self-intersects. This is a pretty straightforward Dataset which we can wrap in a DataLoader with the desired batch size and shuffling enabled to feed a standard PyTorch training loop. It’s actually not very efficient to use it like this, but we’ll get to that in a later post. ...
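
As a quick usage sketch (the filename, device, and batch size are placeholders, not the post’s actual values):

```python
# Hypothetical usage of the Dataset above; filename/device/batch size assumed
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dataset = BinaryBezierDataset('triangles.bin', device)
loader = DataLoader(dataset, batch_size=256, shuffle=True)

for coords, labels in loader:
    pass  # per full batch: coords (256, 11) float32, labels (256,) in {0, 1}
```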

April 10, 2025 · cfh

Preparing the Data

With the triangle self-intersection algorithm ready to go, we can start gathering the training data for our machine learning setup. But first we have to think about how exactly we want to represent it. Canonical triangles: the curved triangles we work with are specified by six 3D vectors, so that would mean 18 floating-point numbers as our input data. But an important insight is that whether a triangle intersects itself doesn’t change when we rotate it, translate it, or uniformly scale it; it’s well known that affine transformations of spline control points result in affine transformations of the surface itself. ...
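
The excerpt stops before the actual construction, but one standard way to exploit these invariances (sketched here under the assumption, purely for illustration, that the first three control points are the triangle’s corners) is to quotient out translation (3), rotation (3), and uniform scale (1), removing 7 of the 18 degrees of freedom and leaving 11 numbers, which matches the 11 float32 coordinates per record in the training post above:

```python
import numpy as np

def canonicalize(tri):
    """Reduce 6 control points (shape (6, 3)) to 11 canonical coordinates.

    Illustrative sketch: translate tri[0] to the origin, rotate tri[1]
    onto the x-axis and tri[2] into the xy-plane, then scale so tri[1]
    lands at (1, 0, 0). Degenerate triangles (zero-length first edge or
    collinear corners) are not handled here.
    """
    p = tri - tri[0]                       # translation: corner 0 -> origin
    e1 = p[1] / np.linalg.norm(p[1])       # x-axis along the first edge
    u = p[2] - (p[2] @ e1) * e1            # Gram-Schmidt step for the y-axis
    e2 = u / np.linalg.norm(u)
    e3 = np.cross(e1, e2)                  # right-handed z-axis
    R = np.stack([e1, e2, e3])             # rotation into the new frame
    q = (p @ R.T) / np.linalg.norm(p[1])   # rotate, then uniform scale
    # q[0] = (0,0,0), q[1] = (1,0,0), q[2] = (x,y,0): 2 + 3*3 = 11 free numbers
    return np.concatenate([q[2, :2], q[3], q[4], q[5]])
```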

April 9, 2025 · cfh