Nature, Algorithms and Agency

Human Aimachine Art
Mar 29, 2020

I recently found some old yellowed watercolors by my grandfather.
To revive them, I ran neural style transfer methods, using the Brittany watercolors as style images and my summer hiking pictures from the San Juan National Forest in Colorado as content images.
It was a winter forest in summer: most of the trees were dead, killed by some bug, an ominous warning…

From left to right: my grandfather’s painting, the style transfer output, and a picture of the San Juan National Forest in Colorado.

I used a recent technique called STROTSS, which has a charming copy-and-paste effect; I recommend it. It is particularly interesting how my grandfather’s style of painting a seascape transfers to my mountain pictures, revealing the other possible nature of a mountain landscape. Haven’t you ever noticed, standing on a peak when the valley below is covered with a dense layer of clouds, that it feels like looking out over a vast ocean? Here, the style transfer algorithm has not just changed the style but also shifted the content, transmuting my summer valley into a bay in Finistère, Brittany.

Although I had dug into the algorithm, my first reaction was to playfully picture it as an artist. I was not expecting such an output, and it truly stirred memories of mine in which mountains and ocean mingle.

Yet whenever it comes to describing what algorithms can do, I am reminded of one of my college teachers. He was surprisingly picky about this and would get mad if we wrote something like ‘the algorithm did…’. No, the algorithm does nothing (I can still see his face turning purple-red as he howled these words); you instruct the algorithm to perform a sequence of operations, possibly very complicated ones. If it fails, the blame is on you.
Keep that in mind, and you will never give up debugging. His advice does not quite apply here, but it helps one realize that there is nothing trivial about treating the algorithm as a subject. In fact, I feel that this spontaneous personification of the algorithm may have deep roots and consequences.

As a computer scientist, I find little mystery in the algorithm’s lines of code. I know that style transfer methods solve an optimization problem involving image features from a deep convolutional network, which is itself a cascade of convolutional and non-linear operations such as ReLU. However, the algorithm is still unpredictable and involves more operations than I can comprehend. Is it this complexity and unpredictability that drives me to regard the algorithm as an agent, something that seems to have a will of its own?
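
For the curious reader, here is a minimal sketch of that kind of optimization, in the spirit of the classic Gatys et al. approach rather than STROTSS itself (which relies on a relaxed optimal-transport loss). The file names, layer choices, and loss weights below are illustrative assumptions, not the exact setup I used:

```python
# Minimal Gatys-style neural style transfer sketch (PyTorch).
# Illustrative only; STROTSS uses a different, optimal-transport-based loss.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=512):
    # A full pipeline would also normalize with ImageNet statistics.
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

# Pretrained VGG19 as a fixed feature extractor: a cascade of convolutions and ReLUs.
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1, conv2_1, conv3_1, conv4_1, conv5_1

def features(x):
    content, style = {}, {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            style[i] = x
    return content, style

def gram(x):
    # Gram matrix of feature maps, a classic summary of "style".
    b, c, h, w = x.shape
    f = x.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = load_image("san_juan_forest.jpg")    # content: the hiking photo
style_img = load_image("brittany_watercolor.jpg")  # style: the watercolor

c_target, _ = features(content_img)
_, s_target = features(style_img)
s_target = {i: gram(f) for i, f in s_target.items()}

# Optimize the pixels of the output image directly.
output = content_img.clone().requires_grad_(True)
optimizer = torch.optim.Adam([output], lr=0.02)

for step in range(500):
    optimizer.zero_grad()
    c_feat, s_feat = features(output)
    content_loss = sum(F.mse_loss(c_feat[i], c_target[i]) for i in CONTENT_LAYERS)
    style_loss = sum(F.mse_loss(gram(s_feat[i]), s_target[i]) for i in STYLE_LAYERS)
    loss = content_loss + 1e5 * style_loss
    loss.backward()
    optimizer.step()
```

Nothing mysterious happens here: gradients push pixel values around until the feature statistics of the output resemble those of both inputs. And yet I cannot predict what the result will look like.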

Actually, this echoes our experience of Nature. Take my childhood beliefs about lightning. Winters in Brittany are wet, and thunder is not so reassuring. I was petrified but also aesthetically pleased by those ominous zig-zags. Still, I enjoyed imagining the possible meanings of Thor’s omen, picturing myself as one of the terrified Vikings in my comic book. On the other hand, my mother, a high-school science teacher, would keep explaining the underlying natural phenomenon of static electricity gathering in the clouds, breaking my playful belief in Thor.

Well, lightning is an output of Nature. As humans, we have a sense of when it will strike, but we really cannot predict its particular shape. It is the same situation with the output of the style transfer algorithm. The algorithm is an even more straightforward phenomenon, in the sense that we created the environment that generates its behavior. With lightning, we have learned the general rules, but it is still computationally intractable to predict its exact location and shape: too many particles are involved.

Interestingly, then, science has driven us to demystify, or at least to stop granting agency to, natural phenomena that we do not understand and still cannot predict. Conversely, I was impulsively ready to bestow agency on the algorithm. And this goes well beyond the particular case of my style transfer output; it is common in many scenarios. To give a single example, in reinforcement learning, the robots that learn how to move are called agents! While this may seem absurd, it also says a lot about our relationship with technology in society.

Many other civilizations have given agency to non-humans. Some Native American cultures, for instance, held animals in high regard, emphasizing a particular relationship to Nature and to their environment. Here, when we give agency to the machine, it reveals our connection with algorithms. A few decades ago, it was our relationship with goods and objects, with industrialism, that was worth studying. Nowadays, it is also our conception of algorithms.

For my college teacher, denying the algorithm any agency was a trick to force the coder to be extra meticulous. At the societal level, collectively understanding and describing algorithms better is paramount to choosing the technological environment in which we would like to live. Thinking carefully about the agency of algorithms is one crucial aspect of this. In a paper, we explore this question further by letting painters interact with machine outputs on a real canvas.

Not granting the algorithm any agency does not lessen my infatuation with the style transfer output in the least, just as not believing in Thor’s omen does not mean that I cannot enjoy lightning. In fact, the last time I watched it was in the San Juan National Forest, and it was pretty frightening…


Human Aimachine Art

A group of French painters and researchers in what-is-commonly-called A.I. http://www.human-machine.art/