Entropy Regularization

Based on our discussion of entropy, our plan is to implement entropy regularization via an entropy bonus in our loss function.

Implementing the entropy bonus

The formula for entropy which we have to implement,

\[ H(p) = -\sum_{i=1}^{C} p_i \log p_i, \]

is simple enough: multiply the probabilities for the seven possible moves by their log-probabilities, sum, and negate. However, there is one numerical problem we have to worry about: masking out an illegal move \(i\) leads to a zero probability \(p_i = 0\) and a log-probability \(\log p_i = -\infty\). Under the rules of IEEE 754 floating-point arithmetic, multiplying zero by \(\pm\infty\) is undefined and therefore results in NaN (not a number), whereas for the entropy formula the contribution of such a term should be 0. ...
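A minimal sketch of one way to sidestep the NaN, assuming the policy comes from a softmax over masked logits (the helper name and mask convention are our illustration, not necessarily the post's code): zero out the offending log-probabilities before forming the \(p_i \log p_i\) terms.

```python
import torch
import torch.nn.functional as F

def masked_entropy(logits, legal_mask):
    """Entropy of the move distribution with illegal moves masked out.

    logits: (batch, 7) raw model outputs; legal_mask: (batch, 7) bool.
    Name and signature are illustrative, not the post's actual code.
    """
    # Illegal moves get logit -inf and hence probability 0 after the softmax.
    masked_logits = logits.masked_fill(~legal_mask, float('-inf'))
    log_probs = F.log_softmax(masked_logits, dim=-1)   # -inf where illegal
    probs = log_probs.exp()                            # exactly 0 where illegal
    # Replace the -inf log-probs by 0 so 0 * (-inf) never produces NaN;
    # the p_i log p_i contribution of an illegal move is 0, as it should be.
    safe_log_probs = log_probs.masked_fill(~legal_mask, 0.0)
    return -(probs * safe_log_probs).sum(dim=-1)
```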

April 24, 2025 · cfh

A First Training Run and Policy Collapse

With the REINFORCE algorithm under our belt, we can finally attempt to start training some models for Connect 4. However, as we’ll see, there are still some hurdles in our way before we get anywhere. It’s good to set your expectations accordingly: in RL, things rarely, if ever, go smoothly the first time.

A simple MLP model

As a fruit fly of Connect 4-playing models, let’s start with a simple multilayer perceptron (MLP) model that follows the model protocol we outlined earlier: it has an input layer taking a 6x7 int8 board state tensor, a few simple hidden layers each consisting of just a linear layer and a ReLU activation function, and an output layer of 7 neurons without any activation function. That is exactly what we meant earlier when we said that the model should output raw logits. ...
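A minimal sketch of such a model, assuming a hidden width of 128 (the excerpt doesn't specify the actual layer sizes):

```python
import torch
import torch.nn as nn

ROWS, COLS = 6, 7

class Connect4MLP(nn.Module):
    # Sketch of the MLP described above; depth and width are assumptions.
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                            # 6x7 board -> 42 inputs
            nn.Linear(ROWS * COLS, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, COLS),                 # 7 raw logits, no activation
        )

    def forward(self, board):
        # board: (batch, 6, 7) int8 tensor; cast to float for the linear layers
        return self.net(board.float())
```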

April 21, 2025 · cfh

The REINFORCE Algorithm

Let’s say we have a Connect 4-playing model and we let it play a couple of games. (We haven’t really talked about model architecture yet, so for now just imagine a simple multilayer perceptron with a few hidden layers that outputs 7 raw logits, as discussed in the previous post.) As it goes in life, our model wins some and loses some. How do we make it actually learn from its experiences? How does the magic happen? ...
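As a preview of where the post is headed, here is a hedged sketch of the REINFORCE loss for a single finished game; the tensor shapes, names, and the ±1 return convention are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def reinforce_loss(model, states, actions, ret):
    # REINFORCE: minimize -G * sum_t log pi(a_t | s_t), where G is the
    # game's return (e.g. +1 for a win, -1 for a loss).
    logits = model(states)                          # (T, 7) raw logits
    log_probs = F.log_softmax(logits, dim=-1)       # (T, 7)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)  # (T,)
    return -(ret * chosen).sum()
```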

April 20, 2025 · cfh

Basic Setup and Play

Let’s get into a bit more technical detail on how our Connect 4-playing model will be set up, and how a basic game loop works. Throughout all code samples we’ll always assume the standard PyTorch imports:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
```

Board state

The current board state will be represented by a 6x7 PyTorch int8 tensor, initially filled with zeros.

```python
board = torch.zeros((ROWS, COLS), dtype=torch.int8, device=DEVICE)
```

The board is ordered such that board[0, :] is the top row. A non-empty cell is represented by +1 or -1. To simplify things, we always represent the player whose move it currently is by +1, and the opponent by -1. This way we don’t need any separate state to keep track of whose move it is. After a move has been made, we simply flip the board by doing ...
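Presumably the flip is just a sign change, since negating the board swaps the roles of +1 and -1; the following half-move sketch is our guess at the elided code, not the post's:

```python
import torch

ROWS, COLS = 6, 7
DEVICE = 'cpu'  # assumption for this sketch

board = torch.zeros((ROWS, COLS), dtype=torch.int8, device=DEVICE)

def drop_piece(board, col):
    # The mover is always +1; the piece falls to the lowest empty row
    # (row 0 is the top, so "lowest" means the largest row index).
    row = (board[:, col] == 0).nonzero().max()
    board[row, col] = 1

drop_piece(board, col=3)   # current player drops in the middle column
board = -board             # flip signs so the next player to move is +1 again
```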

April 20, 2025 · cfh

Training the First Model

Now that we have plenty of training data, we can load it into PyTorch and start training a model.

Loading the data

Since the binary file format we chose was so simple, it’s rather straightforward to write a Dataset class which reads it in:

```python
import os

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader, Subset

class BinaryBezierDataset(Dataset):
    """
    Loads Bezier triangle data from a binary file into memory once.
    Each record: 11 float32 coords, 32 uint8 bytes (packed 16x16 bitmap).
    """
    def __init__(self, filename, device, input_dim=11):
        super().__init__()
        self.filename = filename
        self.input_dim = input_dim
        coords_bytes = input_dim * np.dtype(np.float32).itemsize  # 44
        record_bytes = coords_bytes + 32  # 76 (coords + 16x16 bitmap = 256 bits)

        # Calculate number of samples from file size
        file_size = os.path.getsize(filename)
        if file_size % record_bytes != 0:
            raise ValueError(f"File size {file_size} not multiple of record size {record_bytes}")
        self.num_samples = file_size // record_bytes
        print(f"Found {self.num_samples} samples in {filename}.")

        with open(filename, 'rb') as f:
            data = np.fromfile(f, dtype=np.uint8, count=file_size)
        data = data.reshape(self.num_samples, record_bytes)  # reshape into records

        # Extract coords (first 44 bytes = 11 floats)
        coords = data[:, :coords_bytes].view(np.float32).reshape(self.num_samples, self.input_dim)

        # Extract and unpack packed bitmaps (last 32 bytes)
        packed_bitmaps = data[:, coords_bytes:]
        unpacked_bits = np.unpackbits(packed_bitmaps, axis=1)  # (num_samples, 256)

        # The actual label is the maximum (0 or 1) over the bitmap bits
        outputs = np.max(unpacked_bits, axis=1)  # (num_samples,)

        # Convert to pytorch tensors and transfer to GPU if required
        self.x_tensor = torch.from_numpy(coords).float().to(device)   # (num_samples, 11)
        self.y_tensor = torch.from_numpy(outputs).float().to(device)  # (num_samples,)

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        return self.x_tensor[idx], self.y_tensor[idx]
```

So far, so good. We are in the convenient position that our entire dataset fits quite comfortably into RAM or VRAM, so we just load it all at once, extract the 11 triangle coordinates, unpack the bitmap, and take its maximum to get a binary 0/1 label which tells us whether the triangle self-intersects. This is a pretty straightforward Dataset which we can wrap in a DataLoader with the desired batch size and shuffling enabled to feed a standard PyTorch training loop. It’s actually not very efficient to use it like this, but we’ll get to that in a later post. ...
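For illustration, a usage sketch of that training loop; the file name, network, and hyperparameters are placeholders, not the post's actual choices:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
dataset = BinaryBezierDataset('bezier_triangles.bin', device=DEVICE)  # hypothetical file name
loader = DataLoader(dataset, batch_size=1024, shuffle=True)

# A tiny stand-in classifier: 11 coords in, 1 self-intersection logit out
model = nn.Sequential(nn.Linear(11, 64), nn.ReLU(), nn.Linear(64, 1)).to(DEVICE)
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.BCEWithLogitsLoss()

for coords, labels in loader:   # one epoch of a standard training loop
    opt.zero_grad()
    loss = loss_fn(model(coords).squeeze(1), labels)
    loss.backward()
    opt.step()
```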

April 10, 2025 · cfh