# Notes 001

"Generative Art Theory" talks about generative art as repeating execution of rule-based art, which incorporates many ancient generative arts not executed by computers.

"Generative Art Theory" by Philip Galanter discussed Effective Complexity. The intuition lies within although the trajectory of individual gas molecules is not predictable, the overall effect of gas property is well known only with little random error. But this is just an intuition, is there a way to find a mathematical definition for Effective Complexity? If we can systematically quantify such metrics, the next generation GAN could be optimized to achieve high Effective Complexity! (Given that the metrics is computable and well defined, we can have genetic algorithm do the generation. It doesn't have to be differentiable)

• Information theory counts every detail in the system as bits of information, but human perception clearly does not.

• Does Effective Complexity exist only relative to human perception, or is it more fundamental?

• One way to model a complex system is with statistical tools, such as discerning the mean and standard deviation. Two gas systems containing different microscopic information will still have similar means and standard deviations, which aligns with human perception.
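The last bullet can be illustrated numerically with two independently sampled "gas" systems (toy Gaussian speed distributions; the specific numbers are arbitrary assumptions):

```python
import random
import statistics

# Two "gas" systems with the same macroscopic distribution but entirely
# different microstates (independent random draws).
rng_a, rng_b = random.Random(1), random.Random(2)
system_a = [rng_a.gauss(500.0, 50.0) for _ in range(10_000)]  # toy speeds
system_b = [rng_b.gauss(500.0, 50.0) for _ in range(10_000)]

# Molecule by molecule the systems share no information, yet the
# summary statistics agree to within sampling error.
print(statistics.mean(system_a), statistics.mean(system_b))
print(statistics.stdev(system_a), statistics.stdev(system_b))
```

Listing every sample requires thousands of numbers; the two summary statistics capture what perception actually registers.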

My opinion on The Problem of Authorship: by defining generative art as the repeated execution of rule-based art, all information is encodable and can be represented by the rules themselves. If the final product follows exactly what the rules describe, then the product, as a reflection of the rules, adds no additional meaning to the work. In this case, authorship should belong fully to whoever wrote the rules.

However, when random number generation is involved (especially pseudo-random numbers), decisions about which random number to use are made by the computer, not the artist. Say you wrote a program that uses total randomness to generate 100px by 100px images. Most of the time the resulting image is white noise, but by small chance the computer may still generate something meaningful. This problem is magnified in artwork that involves a latent space (typically in GANs), where that probability becomes larger: the computer can discover an interesting random input to the latent space and thereby "discover" an interesting artwork. At that point, we should attribute some authorship to the computer for choosing the right input.

The "amount" of authorship we attribute to the executor should be proportional to the size of the search space. The link to computational complexity is intuitive: as the search space shrinks, the rule becomes more restrictive, and a larger share of the authorship should be awarded to the rule-writer rather than the executor. In summary, for computer-generated art with uncertainty, I think authorship should be split between the rule-writer and the executor based on how restrictive the rule is.
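One toy way to make "proportional to the search space" concrete (my own formalization, not from the essay) is to measure the choice the rule leaves to the executor in bits and split authorship by that ratio. The 8-bit grayscale assumption, the 512-dimensional latent vector, and the 16-bit quantization below are all illustrative assumptions:

```python
def authorship_split(executor_choice_bits, total_space_bits):
    """Toy model: the executor's share of authorship grows with how
    much choice the rule leaves open, measured in bits of remaining
    search space; the rule-writer gets the rest."""
    executor = executor_choice_bits / total_space_bits
    return 1.0 - executor, executor  # (rule-writer share, executor share)

# Total space: 100x100 8-bit grayscale images, i.e. 100*100*8 bits.
total_bits = 100 * 100 * 8

# A "total randomness" rule leaves the whole space open,
# so the executor gets all of the authorship:
print(authorship_split(total_bits, total_bits))

# A GAN whose input is a 512-dimensional latent vector quantized to
# 16 bits per dimension leaves only 512*16 bits of choice, so most
# of the authorship shifts to the rule-writer:
print(authorship_split(512 * 16, total_bits))
```

A logarithmic (bits-based) measure seems more natural than raw counts here, since search spaces grow exponentially with output size; other splitting functions are certainly defensible.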
