Implementing A2C

In the previous post, we outlined the general concept of Actor-Critic algorithms and A2C in particular; it’s time to implement a simple version of A2C in PyTorch.

Changing the reward function

As we noted, our model class doesn’t need to change at all: it already has the requisite value head we introduced when we implemented REINFORCE with baseline. First off, we need to change the way the rewards are computed. We introduce a flag BOOTSTRAP_VALUE which is enabled when we use A2C. Based on this, we compute the rewards vector for a game with an outcome of +1 for a win, 0 for a draw, and -1 for a loss like this: ...
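The code itself is elided in this excerpt; as a rough illustration of the bootstrapping idea, a sketch along these lines is plausible (the names compute_value_targets and GAMMA are mine, and the sign handling of self-play is glossed over; only BOOTSTRAP_VALUE and the +1/0/-1 outcomes come from the post):

```python
import torch

# Hedged sketch only, not the repository's actual code.
BOOTSTRAP_VALUE = True
GAMMA = 0.95  # assumed discount factor


def compute_value_targets(outcome, values):
    """outcome: +1 win / 0 draw / -1 loss; values: critic estimates v(s_t), one per move."""
    targets = torch.empty_like(values)
    if BOOTSTRAP_VALUE:
        # A2C: bootstrap each target from the critic's estimate of the next position,
        # and let only the final move see the actual game outcome.
        targets[:-1] = GAMMA * values[1:].detach()
        targets[-1] = outcome
    else:
        # Monte Carlo (REINFORCE with baseline): every move is credited with the
        # (discounted) final outcome of the game.
        T = len(values)
        for t in range(T):
            targets[t] = outcome * GAMMA ** (T - 1 - t)
    return targets
```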

May 10, 2025 · cfh

Actor-Critic Algorithms

After implementing and evaluating REINFORCE with baseline, we found that it can produce strong models, but takes a long time to learn an accurate value function due to the high variance of the Monte Carlo samples. In this post, we’ll look at Actor-Critic methods, and in particular the Advantage Actor-Critic (A2C) algorithm, a synchronous version of the earlier Asynchronous Advantage Actor-Critic (A3C) method, as a way to remedy this. Before we start, recall that we introduced a value network as a component of our model; this remains the same for A2C, and in fact we don’t need to modify the network architecture at all to use this newer algorithm. Our model still consists of a residual CNN backbone, a policy head and a value head. The value head serves as the “critic,” whereas the policy head is the “actor.” ...
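To make the actor/critic split concrete, here is a minimal sketch of how the two loss terms typically fit together in advantage actor-critic training; the tensor and function names are illustrative assumptions, not code from this series:

```python
import torch.nn.functional as F

# Illustrative A2C loss terms: `logits` come from the policy head (the actor),
# `values` are the value head's estimates v(s_t) (the critic), and `targets`
# are the bootstrapped returns.
def a2c_losses(logits, actions, values, targets):
    advantages = (targets - values).detach()        # advantage A_t; no gradient through the critic here
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    actor_loss = -(advantages * chosen).mean()      # policy gradient weighted by the advantage
    critic_loss = F.mse_loss(values, targets)       # regress v(s_t) onto the targets
    return actor_loss, critic_loss
```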

May 8, 2025 · cfh

Implementing and Evaluating REINFORCE with Baseline

Having introduced REINFORCE with baseline on a conceptual level, let’s implement it for our Connect 4-playing CNN model.

Runnable Example: connect-zero/train/example3-rwb.py

Adding the value head

In the constructor of the Connect4CNN model class, we set up the new network for estimating the board state value v(s), which will consume the same 448 downsampled features that the policy head receives: ...
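The constructor code is cut off in this excerpt; a value head in this spirit might look roughly as follows (only the 448 input features are taken from the post, the hidden width and exact layers are assumptions):

```python
import torch.nn as nn

# Sketch of a value head on top of the 448 downsampled features;
# hidden width and depth are assumptions, not the actual Connect4CNN code.
value_head = nn.Sequential(
    nn.Linear(448, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Tanh(),  # keeps the estimate v(s) in [-1, 1], matching win/draw/loss
)
```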

May 1, 2025 · cfh

REINFORCE with Baseline

In the previous post, we introduced a stronger model but observed that it’s quite challenging to achieve a high level of play with basic REINFORCE, due to the high variance and noisy gradients of the algorithm which often lead to unstable learning and slow convergence. Our first step towards more advanced algorithms is a modification called “REINFORCE with baseline” (see, e.g., Sutton et al. (2000)).

The value network

Given a board state s, recall that our model currently outputs seven raw logits which are then transformed via softmax into the probability distribution p(s) over the seven possible moves. Many advanced algorithms in RL assume that our network also outputs a second piece of information: the value v(s), a number between -1 and 1 which, roughly speaking, gives an estimate of how confident the model is in winning from the current position. ...
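For reference, the standard policy-gradient-with-baseline estimator that gives the method its name has roughly the form below, with G_t the return observed from time t, v(s_t) the learned baseline, and π_θ the move distribution produced by the policy head:

$$
\nabla_\theta J(\theta) \;\approx\; \sum_t \bigl(G_t - v(s_t)\bigr)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)
$$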

April 29, 2025 · cfh

Model Design for Connect 4

With the fearsome RandomPunisher putting our first Connect 4 toy model in its place, it’s time to design something that stands a chance.

A design based on CNNs

It’s standard practice for board-game-playing neural networks to have at least a few convolutional neural network (CNN) layers at the initial inputs. This shouldn’t come as a surprise: the board is a regular grid, much like an image, and CNNs are strong performers in image processing. In our case, it will allow the model to learn features like “here are three of my pieces in a diagonal downward row” which are then automatically applied to every position on the board, rather than having to re-learn these features individually at each board position. ...
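As a rough illustration of such a convolutional front end (channel counts, depth, and the single-channel board encoding are assumptions, not the model from the post):

```python
import torch
import torch.nn as nn

# Toy convolutional stem for the 6x7 board; all sizes are illustrative only.
conv_stem = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 3x3 pattern detectors applied at every cell
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # e.g. "three of my pieces in a diagonal"
    nn.ReLU(),
)

board = torch.zeros(1, 1, 6, 7)   # a batch containing one empty board, as floats
features = conv_stem(board)       # shape (1, 64, 6, 7): same features at every position
```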

April 28, 2025 · cfh

Introducing a Benchmark Opponent

Last time we saw how the entropy bonus enables self-play training without running into policy collapse. However, the model we trained was quite small and probably not capable of very strong play. Before we dive into the details of an improved model architecture, it would be very helpful to have a decent, fixed benchmark to gauge our progress.

A benchmark opponent

The only model with fixed performance we have right now is the RandomPlayer from the basic setup post. Obviously, that’s not a challenging bar to clear. But it turns out that with some small tweaks, we can turn the fully random player into a formidable opponent for our starter models. ...

April 26, 2025 · cfh

Entropy Regularization

Based on our discussion of entropy, our plan is to implement entropy regularization via an entropy bonus in our loss function.

Runnable Example: connect-zero/train/example2-entropy.py

Implementing the entropy bonus

The formula for entropy which we have to implement, ...
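The formula itself is truncated in this excerpt; a hedged sketch of what such an entropy bonus can look like when computed from the policy logits (the coefficient and the way it enters the total loss are assumptions, not the exact code of the example script):

```python
import torch.nn.functional as F

ENTROPY_BETA = 0.01  # assumed weight of the bonus


def entropy_bonus(logits):
    """Mean entropy of the move distributions, scaled by ENTROPY_BETA."""
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)  # H(p) = -sum_i p_i log p_i
    return ENTROPY_BETA * entropy.mean()


# loss = policy_loss - entropy_bonus(logits)  # subtracting the bonus rewards higher entropy
```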

April 24, 2025 · cfh

On Entropy

The last time, we ran our first self-play training loop on a simple MLP model and observed catastrophic policy collapse. Let’s first understand some of the math behind what happened, and then how to combat it.

What is entropy?

Given a probability distribution p = (p_1, …, p_C) over a number of categories i = 1, …, C, such as the distribution over the columns our Connect 4 model outputs for a given board state, entropy measures the “amount of randomness” and is defined as ...
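The definition is cut off in this excerpt; the standard form, presumably the one the post uses, is

$$
H(p) \;=\; -\sum_{i=1}^{C} p_i \log p_i,
$$

which is maximal for the uniform distribution and zero when all of the probability mass sits on a single category.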

April 23, 2025 · cfh

A First Training Run and Policy Collapse

With the REINFORCE algorithm under our belt, we can finally attempt to start training some models for Connect 4. However, as we’ll see, there are still some hurdles in our way before we get anywhere. It’s good to set your expectations accordingly, because rarely if ever do things go smoothly the first time in RL.

Runnable Example: connect-zero/train/example1-collapse.py

A simple MLP model

As the fruit fly of Connect 4-playing models, let’s start with a simple multilayer perceptron (MLP) that follows the model protocol we outlined earlier: it has an input layer taking a 6x7 int8 board state tensor, a few simple hidden layers consisting of just a linear layer and a ReLU activation function each, and an output layer of 7 neurons without any activation function. That’s exactly what we meant earlier when we said that the model should output raw logits. ...
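A minimal model in that spirit could look as follows; the hidden widths are my assumption, while the 6x7 input, the Linear+ReLU hidden layers, and the 7 raw output logits come from the description above:

```python
import torch.nn as nn

# Illustrative MLP matching the description; hidden widths are assumptions.
class SimpleMLP(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                         # 6x7 board -> 42 inputs
            nn.Linear(42, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 7),                 # 7 raw logits, no output activation
        )

    def forward(self, board):                     # board: (batch, 6, 7) int8 tensor
        return self.net(board.float())            # cast to float before the linear layers
```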

April 21, 2025 · cfh

The REINFORCE Algorithm

Let’s say we have a Connect 4-playing model and we let it play a couple of games. (We haven’t really talked about model architecture yet, so for now just imagine a simple multilayer perceptron with a few hidden layers which outputs 7 raw logits, as discussed in the previous post.) As it goes in life, our model wins some and loses some. How do we make it actually learn from its experiences? How does the magic happen? ...

April 20, 2025 · cfh