stepping back and taking stock - artificial life

Scrolling through this blog, I realized that my last substantial post was during the course of the “Future Sound / Future Vision” exhibition at Launch Pad Gallery in Yokohama. I had made plans to write about the last weekend of the exhibition, but those plans fell by the wayside.

Since that exhibition, I found myself amid a flurry of exhibition and exhibition-planning activities, which ended last week with the close of “Reading Between The Lines”, also at Launch Pad Gallery in Yokohama. With that exhibition over, my studio practice reverts to being a studio practice, rather than a hybrid of studio and administrative practice, for the remainder of the year.

After “Future Sound / Future Vision”, I spent a week attending the 2018 Artificial Life conference, which was held at Miraikan. This conference combined the European and American Artificial Life conferences into one week-long event. When I registered, I assumed that the topics of discussion would fall under the umbrella of Artificial Intelligence. It turns out this was an entirely incorrect assumption. I wrote a bit about my expectations in an earlier blog post, from led's to arduino to artificial intelligence.

Suffice it to say that my expectations were exceeded beyond my wildest imagination. The five days of speakers, workshops, and posters opened my eyes to a whole new field of research in which artificial intelligence was just one subfield of artificial life. I filled an entire notebook with notes from the conference and many possible avenues of exploration. With so many possibilities, I focused on two aspects of artificial life that I thought would be applicable to my studio practice, in particular my Daily Drawings Project. The first was the discovery of Avida-ED, an application developed at Michigan State University to help students understand the dynamics of evolution and scientific methodology. The application allows users to visualize the process of evolution in real time while altering variables related to fitness.

[Screenshots of the Avida-ED interface]

Each species is also assigned a “DNA” sequence, which is subject to mutations that help or hinder its fitness.

[Screenshot of an Avida-ED genome]
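To make this dynamic concrete, here is a minimal toy sketch in Python of genomes mutating under fitness-based selection. This is my own simplification, not Avida-ED's actual model: the genome alphabet, the fitness function, and all the parameters are invented for illustration, whereas real Avidians execute instruction genomes and earn merit by performing logic tasks.

```python
import random

random.seed(1)  # fix the seed so a single run is reproducible

GENOME_LENGTH = 20
MUTATION_RATE = 0.05
POPULATION = 30
GENERATIONS = 50

def random_genome():
    # a genome is just a string of arbitrary "instruction" symbols
    return [random.choice("abcd") for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # toy fitness: reward one arbitrary symbol (real Avida-ED rewards
    # organisms for evolving logic functions)
    return genome.count("a")

def mutate(genome):
    # each site mutates independently with a small probability
    return [random.choice("abcd") if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POPULATION)]
for gen in range(GENERATIONS):
    # fitter genomes are more likely to be copied into the next generation
    weights = [fitness(g) + 1 for g in population]
    population = [mutate(g) for g in
                  random.choices(population, weights=weights, k=POPULATION)]

best = max(population, key=fitness)
print(fitness(best))  # fitness climbs as "a"-rich genomes spread
```

Even in a sketch this small, the interplay of random mutation and biased copying is enough to watch fitness drift upward from generation to generation.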

I walked into the first Avida-ED workshop out of curiosity. However, after seeing what this application could do, along with the thoughts it triggered in relation to my studio practice, I spent the last two days in workshops covering Avida-ED and artificial life to garner more information.

What exactly did working with Avida-ED trigger in my imagination? I have always been fascinated with alternate scenarios and possibilities, and seeing how this application visualized varying outcomes despite identical variable settings got me thinking about diverging outcomes from a single point. This led me to think about my Daily Drawings Project and how, over the last four years, I have been slowly evolving these drawings on a daily basis based on a variety of factors that are much less stable and fixed than the variables of Avida-ED. If I were to make another Daily Drawing on any given day in a parallel universe, or if I could get a “redo” for the day, I am certain that the drawing would be different. This difference would then affect all future Daily Drawings, and the set of drawings I would have from the fork point onward would be different. I wanted to know HOW different they would be. Somehow, Avida-ED got me thinking this was possible.

My first instinct out of the gate was to assign each gene in the Avida-ED genome a particular drawing mark or motif, run Avida-ED, and let the drawings result from the gene sequence of each species. This harks back to my fascination with chance operations, à la John Cage, the I Ching, and Sol LeWitt, to name just a few practitioners.
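That gene-to-mark instinct could be sketched as a simple lookup. The gene symbols and mark names below are invented placeholders of mine, not Avida-ED's actual instruction set:

```python
# Hypothetical mapping from gene symbols to drawing marks; both the
# symbols and the mark names are placeholders, not Avida-ED's instructions.
GENE_TO_MARK = {
    "a": "short horizontal hatch",
    "b": "long vertical stroke",
    "c": "small circle",
    "d": "dense cross-hatch",
}

def genome_to_marks(genome):
    """Translate a genome string into the sequence of marks for one drawing."""
    return [GENE_TO_MARK[g] for g in genome if g in GENE_TO_MARK]

drawing = genome_to_marks("abca")
print(drawing)
```

In this scheme, each evolved species would dictate a drawing the way an I Ching cast dictates a hexagram: the chance operation happens elsewhere, and the hand simply executes its result.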

Being in the midst of a conference with hundreds of researchers in this field, I decided to ask more questions and to request some time with researchers who I thought could provide relevant suggestions and input on my idea of artificially evolving my Daily Drawings.

Ever since I began self-tracking back in graduate school through analog means (notebooks, counters, calculators, Microsoft Excel, and other antiquated tools), I have always resisted the notion of automated analysis and projection of my behavior. Artificial life offered a synthetic, bottom-up approach to understanding how life develops. A discussion about my Daily Drawings and Big Data during my “Everyday Circuits” exhibition at Gallery Camellia started to chip away at this resistance. Spending a week amidst scientists and researchers who were exploring diverse fields with the aim of interdisciplinary research took down what remained of it.

After four years and over 1,000 Daily Drawings, I had established a lineage of drawings that felt sufficient in number and duration to see what an artificial evolution of these drawings would produce. On the surface, it seems an interesting comparison of myself vs. another version of myself.

The other aspect of artificial life that I wanted to explore further was the field of generative art, to which Kenneth O. Stanley’s keynote address introduced me. Two programs that came out of his lab especially caught my interest: Picbreeder and Chromaria.

From the Picbreeder website…

“Picbreeder is based on an idea called evolutionary art, which is a technique developed by scientists that allows pictures to be bred almost like animals. For example, you can start with a butterfly and actually breed it into an airplane by selecting parents that look like airplanes. While evolutionary art used to be the solitary activity of a small group of experts, Picbreeder brings it to the masses so that not only can you breed your own pictures and share them with others, but you can also create new breeds (called branches) from other people's pictures. It takes a little practice, but it's a surprisingly powerful way to produce original artwork without the need for prior talent or skill.”

Here is a screen capture from Picbreeder

[Screenshot of Picbreeder]
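The breeding loop that the Picbreeder description implies might be sketched as follows. This is a deliberately simplified stand-in: the real Picbreeder evolves CPPN networks via NEAT to generate its images, while here a genome is just a flat parameter vector and the "user's picks" are supplied directly.

```python
import random

random.seed(0)  # fixed seed for a reproducible run

def crossover(parent_a, parent_b):
    # each child gene is taken from one parent or the other at random
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(genome, rate=0.1, scale=0.2):
    # occasionally nudge a gene by a small Gaussian amount
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def next_generation(selected_parents, size=9):
    # Picbreeder-style step: the user picks the pictures they like,
    # and the next grid of candidates is bred from those picks
    return [mutate(crossover(random.choice(selected_parents),
                             random.choice(selected_parents)))
            for _ in range(size)]

# flat parameter vectors standing in for the networks that would
# actually render the images
parents = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(2)]
children = next_generation(parents)
print(len(children))  # prints 9: a fresh grid of candidates to choose from
```

The key design point is that there is no fitness function at all; the human eye, choosing parents grid after grid, is the selection pressure.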

From the abstract of “Identifying Necessary Conditions for Open-Ended Evolution through the Artificial Life World of Chromaria”…

“To complement the hypothesized conditions, a new artificial life world called Chromaria is introduced that is designed explicitly for testing them. Chromaria, which is intended to deviate from Earth in key respects that highlight the breadth of possible worlds that can satisfy the four conditions, is shown in this paper to stagnate when one of the four conditions is not met. This initial controlled experiment thereby sets the stage for a broad research program and conversation on investigating and controlling for the key conditions for open-ended evolution.”

This is a screen capture of a website containing videos of the visuals supplementing the above paper by Soros and Stanley.

[Screenshot of the Chromaria supplementary videos page]

The combined discovery of Picbreeder and Chromaria got me thinking that the open-ended evolution of my Daily Drawings was indeed possible... well, if I had any semblance of a foundation in computer programming to build upon.

Still, I was not deterred. I contacted Lisa Soros, the creator of Chromaria, to see if I could meet her for lunch during the Artificial Life conference, and she accepted. We spent an hour or so talking about my project and its aims, a basic overview of generative art and related resources, and where I should start.

The starting points were varied: playing around with Chromaria and Picbreeder, and looking at the code of the two programs to see what I could glean. However, reading the code required a few more steps back, to JavaScript and Processing. Processing is a programming language geared toward visual artists and designers, and I will talk more about it in a future post.