BERTIFIER | reaching peak bertishness one token at a time

About. BERTIFIER (mis)uses BERT — one of the first large language models based on the transformer architecture — to bertify sentences, one word at a time. It replaces each word with a more bertish one: BERT’s prediction. This process repeats until the sentence stops changing and we arrive at the most bertish version of the original sentence.
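For the curious, here is a minimal sketch of what bertification might look like in code. It assumes the HuggingFace transformers library, the bert-base-uncased model, and greedy token-by-token replacement; the actual BERTIFIER may well differ in its tokenization, masking order, and stopping details.

```python
# A rough sketch of bertification, NOT the actual BERTIFIER code.
# Assumptions: HuggingFace transformers, bert-base-uncased, greedy
# (argmax) replacement, and in-place updates within each pass.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()  # no dropout, so inference is deterministic

def bertify_once(sentence: str) -> str:
    """Mask each token in turn and replace it with BERT's top guess."""
    token_ids = tokenizer.encode(sentence, return_tensors="pt")[0]
    # Skip the [CLS] and [SEP] special tokens at either end.
    for i in range(1, len(token_ids) - 1):
        masked = token_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        # Greedy argmax: the single most bertish token wins.
        token_ids[i] = logits[0, i].argmax()
    return tokenizer.decode(token_ids[1:-1])

def bertify(sentence: str, max_rounds: int = 50) -> str:
    """Repeat until the sentence stops changing (or we give up)."""
    for _ in range(max_rounds):
        new_sentence = bertify_once(sentence)
        if new_sentence == sentence:
            break  # fixed point: peak bertishness reached
        sentence = new_sentence
    return sentence

print(bertify("the cat sat on the mat"))
```

Two details in this sketch are guesses on my part: updating tokens in place means later predictions already see earlier replacements, and the max_rounds cap guards against sentences that oscillate rather than settle.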

To see a sentence become bertified, simply click it. Click again to pause. Click as many times as you like. When it's done, click to reset. Click click click.

BERT works a bit differently from the better-known GPT-style models. Instead of predicting the next word, BERT can predict a word at any position in a sentence, because it is trained to fill in masked-out words using context from both sides. BERT is not stochastic: given the same input, we get the same output every time.
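To make both points concrete (again assuming HuggingFace's fill-mask pipeline and bert-base-uncased, not necessarily BERTIFIER's actual setup): the [MASK] token can sit anywhere in the sentence, and the top prediction never changes between runs.

```python
# Illustrative only; assumes HuggingFace's fill-mask pipeline and
# bert-base-uncased rather than BERTIFIER's actual setup.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The mask sits mid-sentence; a GPT-style model could only
# continue from the left-hand context.
first = fill("the [MASK] sat on the mat")[0]["token_str"]
print(first)

# Determinism: the top prediction is an argmax, not a sample,
# so a second run yields exactly the same word.
assert fill("the [MASK] sat on the mat")[0]["token_str"] == first
```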

On this page, I collect instances where I've found BERTIFIER's output to be fun or interesting. Feel free to contact me with suggestions for inputs; I'll be happy to run them through the bertification process.

BERTIFIER is an active research project. It will change. It may be moved. It is here right now. The general research agenda is to explore other modes of interaction with transformer models, to locate more interesting ways of working with transformers in experimental writing, and to explore the paths not taken in the development of so-called AI.

Malthe Stavning Erslev