About this project

16bit:wolf is an imaginary band from an alternate universe. In this universe, the anthropologist R. traveled from the year 2107 to contemporary Berlin. You can find her full story at ulteriorflux.com.

R. is making music with an A.I. friend named mosiva. This project recreates their band, 16bit:wolf.

I am using diverse machine learning methods (GPT-2, generative adversarial networks, and so on) to imagine that band and drag it into existence.

Since the technology for a generalized A.I. (like in science fiction movies) does not exist yet, I am using narrow, specialized machine learning algorithms, bits and pieces of programming from all over the place, to create audio and visuals.

This is a collaboration between machine input and curated decisions, navigating input and output and connecting them into a coherent piece. I feed the algorithms my text, music, voice, and images to enable them to generate and produce their own versions. From there I choose suitable fragments and turn them into audiovisual animations.

I use projects from GitHub, Google, OpenAI, and others and try to find their breaking points: where machine learning meets its limits. Here I can find a tangible connection, a true collaboration between me, the machines, and the synchronicity within.

We produce content and shape each other's ideas. This is our documentation. Inquire here for information: 16bitwolf@ulteriorflux.com


Live at FESTIV

A year after the first showing in an installation setup, I got to try out a live presentation at FESTIV Festival in Braunschweig. Footage thanks to Peter Glantz.


Welcome to the show!

Finally I can show you a decent representation of the whole piece. Light up your sparklers and enjoy yourself!



I wish I could already show you what the installation looks like, as the opening happened on April 10th at Les Tanneries. But I'll have to wait for the footage to arrive, so for now you'll have to make do with some slightly photoshopped installation stills.


guide us

As the finalization nears, I have only approximate ways to preview what it will look like. Imagine the small frame being an old CRT TV in front of a big projection of what's going on in the back. And the heads sing along on the left and right.


digital doubles

These are the backup singers. Their voices are A.I., and so are the lyrics. They are animated and abstracted from a few photographs and a lot of After Effects.



I've started to animate the lyrics, so you can sing along.


gold gold gold gold

One of the lines written by GPT-2. Below is the planned setup. Since I won't be there to perform live, I'll have digital stand-ins, the center being an oldish TV. So naturally I am recording on a Video8 cam, the result of which you can check out in the following clip.


plans and teasers

Sometimes, when you are in the middle of a project, you lose sight a little. Things become overwhelming, and you can't see the piece anymore. I know this stage; I know it's something I just have to get over, and I think I am on the other side now. There will be more of these little crises, I am sure of it, but I can see where I am headed now. Things are starting to come together, and this is also the phase where the best ideas pop in, just randomly, while you are sitting around thinking of something else entirely. So here are the next steps, written on my window, and a little screen test of my weird clone head.



All of the songs are lined up, color-coded and arranged, 34:00 on the mark. Now on to the mixing & mastering process. That's 150 tracks... let's hope the RAM doesn't max out. :)


digital twin

I already posted the tiny head a while ago. Here are some visually pleasing screen grabs from the build-up.


Refracted reflections

This is not a true (tm) example of machine learning, but more an approximation of what ML could look like in a GAN learning facial features. It took me a week to go through old hard drives and find enough selfies for a sample size of about 2,000 images. Premiere has a function that "tweens" between two images. If the images are similar enough, Premiere's warp function will try to create a seamless movement between them, generating frames that simulate the in-betweens. When the images are not similar enough, however, some weird fractoids appear. Eyes don't match up to eyes, mouths don't match up to mouths. It's quite a good approximation of how ML would fail similarly at this task. See below.
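Premiere's actual warp interpolation is proprietary, but the basic tweening idea can be sketched as a linear blend between two frames. A minimal sketch in Python/NumPy, assuming grayscale arrays; the real effect additionally estimates pixel correspondences between the two images, which is exactly the part that breaks when eyes and mouths don't line up:

```python
import numpy as np

def tween(img_a, img_b, n_frames):
    """Naive cross-dissolve: linearly blend two images.
    Premiere's warp also estimates where each pixel moved;
    when that estimation fails, the fractoid artifacts appear."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        frames.append((1.0 - t) * img_a + t * img_b)
    return frames

# two toy 4x4 grayscale "selfies"
a = np.zeros((4, 4))
b = np.ones((4, 4))
mid = tween(a, b, 5)[2]   # the middle frame is a 50/50 blend
```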


Count Down

A while ago, I sent my friends a message asking them to record themselves counting from 0 to 8. I thought it would be nice to have a calculator choir of human voices in the overture of the project, counting all the bits in a byte. This is what that looks like when you open it in Ableton, after you space them all apart in a 60 bpm count. They are only gendered because of the timbres of their voices, not because of the genitals of the owners. If you ask yourself why some are longer than others, it's because some of them didn't know when to stop and counted to ten.


backing vocals

When you make music with an imaginary entity, you might have to create a vessel, for representation's sake. This mini-me is a prototype for those vessels. The final representation is printing right now; it takes the Prusa 52 hours to give birth to it.


weight and mapping

EbSynth is not machine learning as such. It is, however, a tool that maps the style of dedicated keyframes onto an input video. From the “Secret Weapons” development team: "What EbSynth does under the hood is a non-parametric example-based synthesis. It works by dicing the input keyframe into many small pieces, which are then reassembled to form the output frame. This way the input pixels are preserved 1:1."
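The dicing-and-reassembling the developers describe can be illustrated in a toy form. This is a minimal sketch, not EbSynth's actual algorithm: a single grayscale "guide" channel and brute-force patch matching stand in for their optimized search, but the principle is the same, styled pixels from the keyframe are copied 1:1 into the output:

```python
import numpy as np

def patch_synthesize(key_guide, key_style, target_guide, patch=2):
    """Toy non-parametric example-based synthesis: dice the keyframe
    into small patches; for each patch of the target frame, find the
    keyframe patch whose guide pixels match best and copy its styled
    pixels 1:1 into the output."""
    h, w = key_guide.shape
    # collect all (guide_patch, style_patch) pairs from the keyframe
    pairs = []
    for y in range(0, h - patch + 1):
        for x in range(0, w - patch + 1):
            pairs.append((key_guide[y:y+patch, x:x+patch],
                          key_style[y:y+patch, x:x+patch]))
    out = np.zeros_like(target_guide, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tg = target_guide[y:y+patch, x:x+patch]
            # nearest keyframe patch by sum of squared differences
            best = min(pairs, key=lambda p: ((p[0] - tg) ** 2).sum())
            out[y:y+patch, x:x+patch] = best[1]
    return out

# identity check: if the target guide equals the keyframe guide,
# the styled keyframe comes back 1:1
guide = np.arange(16.0).reshape(4, 4)
style = guide + 100
result = patch_synthesize(guide, style, guide)
```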

I have been working with EbSynth since its release in August of last year. Since there are wonderful aesthetics to be achieved with this tool, I decided to use it for part of the visuals for this piece.

Here you can see "Handle", a Boston Dynamics robot. I drew on an image and let EbSynth do the rest.


uncanny echoes

Machine learning can be used on all kinds of data. Sound, for example. As I want to create a musical collaboration, I've found that in many ways this is one of the harder things for an A.I. to grasp. Jukebox produces some interesting results, but they stem from mostly copyrighted material, which opens a whole other can of worms.
So instead, I decided to teach the A.I. to sing. Well, talk first, sing later. I used a service called Resemble AI, recorded 500 of the sentences they displayed for training, and had the A.I. talk back to me in my voice.
As I've said before, I am very much interested in the fringes of output. So here are two cursed samples this experiment produced.

The text used as input is itself output from the GPT-2 text experiment. So I am reiterating myself. My voice, my thoughts, my creation lose their borders and become a remix of ideas and reality.


Generative Adversarial Network

A GAN is a little like checks and balances. One part of the network suggests output to the other. The output is random at first. The second part of the network checks against its training data and decides whether the generated output is close enough to fool it into thinking it belongs to that data. That's what the adversarial part means: they basically work against each other. GANs, like most ML approaches we know of right now, work best within strict parameters.
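As an illustration of that push and pull only (a toy 1D sketch, not the network that produced these images): a two-parameter generator tries to mimic a Gaussian of "real" data while a logistic discriminator learns to tell real from generated, each nudging the other with plain gradient steps:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Generator: fake = a*z + b with z ~ N(0,1). Discriminator: sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64
real_mean = 3.0          # the "training data" distribution N(3, 0.5)

for step in range(2000):
    real = rng.normal(real_mean, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # discriminator step: push d(real) toward 1, d(fake) toward 0
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = (-(1 - d_real) * real + d_fake * fake).mean()
    grad_c = (-(1 - d_real) + d_fake).mean()
    w -= lr * grad_w
    c -= lr * grad_c

    # generator step: push d(fake) toward 1 (fool the discriminator)
    d_fake = sigmoid(w * fake + c)
    up = -(1 - d_fake) * w          # gradient of -log d(fake) w.r.t. fake
    a -= lr * (up * z).mean()
    b -= lr * up.mean()

# after training, generated samples should have drifted toward the real data
samples = a * rng.normal(0.0, 1.0, 1000) + b
```

The generator starts out producing noise around 0; only the discriminator's disapproval drags its output toward the real distribution around 3, which is the "checks and balances" in miniature.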

The images above happen when you don't restrict yourself to a data set of 20,000 cats or 10,000 traffic lights, but instead dump in all 5,000 photographs that you have on your hard drive. My data set had everything from holiday pictures to documentation of exhibitions, friends' faces, forests, post-it reminders, sunsets, eBay sales, and so on. Though the outcome is decidedly abstract, the human brain immediately begins to interpret the content back into architecture, landscape, and other concrete things. What I show you here is carefully chosen. Lots of these came out boring, or at least not of interest to me. Curation is key. I love the fringe of breaking algorithms for this reason: unintended consequences that you get to play with.


the process

I have been thinking about authorship a lot during the research for this project. From my point of view, it seems clear that there is a shared production between the artist and a fictional other. However, it has been very interesting to try and share this point of view with others.
Two groups of people especially have been irritated by my description of the project and what it entails: programmers and musicians.

Now, I see this project as an artistic endeavour, and even though I have played music all my life, I am an amateur in this field. So, when I told a professional musician friend about my plans and explained what I would do and what the A.I. would do, he proclaimed: "That sounds like cheating to me." As if giving choice to a generative entity was a lazy shortcut.
After I explained, however, that I would be the one curating those choices, arranging, mixing and mastering, he agreed that, while it wasn't the way he would create music, he understood that I was obeying the rules of my own game.

Soon after, I was talking to a computer programmer friend, explaining 16bit:wolf and my strategy to produce a collaboration. I have gotten to know a lot of programmers through my research, which I am very grateful for, as I am not a programmer myself. "That sounds like cheating to me," he said, though for a different reason. He was of the opinion that my curating and arranging demoted the A.I. to a mere tool, and that I wasn't giving the A.I. enough freedom and stage.
I explained that all the sources, all the data that the A.I. had to make its choices, were data created by me. I was giving up my data and letting various A.I.s have their way with it. That seemed to satisfy him.

When you come up with an artistic project, you are the one who makes the rules, who gets to bend them and even break them. You are very free in what you allow to happen within the confines of a piece. However, every decision you make has to be conscious and for a reason that resides in the logic of the whole. I am crafting a narrative.
Anything that serves this narrative has a higher chance to become part of the project than anything that just happens to be in the first set of rules. Subsets are important, weights are important.

In this, artistic practice is very similar to the black box that machine learning creates.
We know what we want from it, but describing it definitively is insanely difficult.



Even after almost 60k tokens, you sometimes get output like this. Nonsense, but pretty nonsense.

Loading dataset... 100%|██████████| 1/1 [00:00 <00:00, 1.48it/s] dataset has 58448 tokens Training...

: )
: ( “ \b “ \b-s “ \\ b-h “ \l “ \\ o “ \\ o-s “ \\ A \\ s )
: ( “ \a “ \o-s “ \\ t-m “ \\ d “ \\ T “ \\ H “ \\ A-m “ \\ A-N “
\\ C \\ H-m \\ X \\
\\ I \\ N-n \\ N )
: ( “ \b “ \b-p \\ b-r “ \h “ \\ I-a “ \\ I-A-l “ \d “ \\ A-N “
\\ C \\ I-a \\ M \\
: ( “ \a “ \o-m \\ a “ \\ C-h “ \D “ \\ A-G-m “ \I “ \\ H-m “ \\ M-n \\ N )
: ( “ \l “ \r “ \\ I-t “ \\ A-E-I “ \O “ \\ A-C-O “ \\ A-I “ \\ A-S-O “ \\ B-G “ \\ D-I “ \\
\\ A-I-t \\ : ( “ \m “ \s “ \\ I-I-v “ \”
\\ U-U-v \\ A-L-v “ \\
\\ A-E-I \\ C-I-J “ \\ A-C “ \\ H-C “ \\ M-I “ \\ A-N “
\\ I \\ N-n \\ C-K \\ C-Y \\ A-L “ \\ N-S \\
\\ A-I-b \\ T “ \\ O “ \\ A-N “ \\
\\ N , \\ C-L \\ A-K \\ A-S \\ C-T \\
: ( “ \s “ \\ I-d “ \\ D-I “ \
C-, J \\ U-L-K \\ C-N \\
I , R-R \\ N ) : ( “ \o “ \\ A-S “ \\ M-E “ \
B-E-s \\ D-F-E “ \\ N ) : ( “ \d “ \\ F-I “ \\ A-H “ \\ B-G “ \\
- I- A \\ D-A \\ E-M \\ M-R \\
“ \\ C-S \\ J “
: ( “ \a “ \\ B-E-G “ \\ A-”E-G “ \\
- T \\ D-L-S \\ I-E “ \\ A “
- A-A \\ A- C \\ D-B “ \A-B \\ A “ \\
- A- D \\ F \\ D-E “ \\ A “ \\ \\ I \\ A- N
: ( “ \o “ \\ A-S “ \\ M-E “ \
C-, I \\ C-N \\ M-I \\ I “ \\ - A- D \\ F \\ A “ \\
- A-D \\ D-H \\ A “ \\ - A \\ D \\ F D-E “ \\
- A \\ G \\ D-E \\ A “ - A \\ N \\ C-E “ \\ AC-N \\ N \\ D-E \\ A -- “ - A \\ S \\ B “ \\
- B \\ N \\ C “ \\ A \\ - B \\ N \\ C N-E \\ D - “ \\
- B \\ N \\ D “ \\ B -- “ \ - C \\ A- T “ \\ U-C “ \\
- C \\ A- R “ \\ A \\ - C \\ A- S \\ M “ \\
- F \\ C-T \\ D \\ A A A \\ - G \\ Z “ \\ A A A \\ - D “ F N N “ \\ C. A A
- N “ Z “ \\ X , - C “ G “ \\ A A G “ \\ - N “ G “ \\ U- S B “ \\
- N “ A “ \\ C , “ “ “ \\ G-F “ ‘\ A-A-C \\ A-E-F “


Data Dump

This is what 354 kB of written word looks like when you zoom out. This is what I fed the machine:

This GPT-2 model (the small, 124M-parameter version) was trained to mimic natural language on a huge corpus of web text scraped from Reddit links. My input is a mere drop in that data ocean. But because of the weight given to fine-tuning, the model just has to take enough steps until it can produce texts that make sense in the desired context.
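For scale, a back-of-the-envelope using only the numbers above and from the training log: 354 kB of text becoming 58,448 tokens works out to roughly six bytes, about one short word, per BPE token:

```python
# rough arithmetic from the training log numbers
input_bytes = 354 * 1024       # the 354 kB text dump
tokens = 58448                 # token count reported by the log
bytes_per_token = input_bytes / tokens
print(round(bytes_per_token, 1))   # → 6.2
```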


Hello World

First try in GPT-2. I entered the whole script of ulterior flux, in addition to some old poems and the lyrics from my last band, into a pretrained model and let it run 1000 steps. This is one of the first produced texts:

I woke up in an empty room
I woke up in an empty room

I felt my body
I felt her body
I woke up in an empty room

My body is there
I woke up in an empty room
I woke up in an empty room

I felt my body
I woke up in an empty room
I woke up in an empty room

It was a peaceful morning of love and I was all alone
I was happy I didn't think about how far I went
I went to the moon and I looked around
I looked back up into the sky
I looked back up into the sky
It is just a strange place to be
I am so happy that I didn' find joy in seeing everything,
I am so happy
I am so happy
I was hoping and hoping to meet my future
I was waiting to see if it would end in peace
I was happy I had time for the future
I was hoping to meet my future
I am so happy
I am so happy
I am so happy

It was such a good day.
I thought about having to die
and was so happy it was like being so close
I was still awake but I felt so tired
It was like I had made a final decision
I was so happy the way I felt right now
I felt the way I thought and I got so happy
It is so peaceful and it is like
I'm so happy
I'm so happy
I feel happy
I feel happy
I felt happy
I felt happy
I felt happy The happiness I felt
I felt happy
I felt happy My happy life in this life
I feel happy I felt happy
I feel happy
I feel happy I feel happy
I feel happy
I feel happy
I feel happy Everything I felt happy
I feel happy
I feel happy

I feel happy Nothing felt so happy
Nothing felt so happy
Nothing feel so happy
Nothing feel so happy
I feel happy
I feel happy
Everything felt so happy
Nothing feel so happy
nothing feel so happy
nothing feel so happy
Nothing feel so happy
Everything felt so happy
Nothing felt so happy
Everything felt so happy
Nothing feel so happy
Nothing feel so happy
nothing feel so happy
Everything feel so happy
I feel happy
Everything feel so happy
Nothing feel so happy I feel happy