March 21 2025

Generative Art and Artificial Intelligence


The relationship between art and technology has always been strong. With the advent of digital technologies, art has found new and unexpected avenues of development: not only the ease of reproducing or correcting images in digital painting, but also the possibility of creating the image itself through programming code. Through this door, a cultural-scientific dimension entered art: new paradigms drawn from biology, genetic evolution and the theory of complexity. Among them are the learning systems of artificial intelligence.

 

Generative Art

Many experiments started in the field of Generative Art. These arose largely in the graphic-mathematical field, as visualizations of dynamical systems (cellular automata, fractals, attractors...). When these techniques entered the artistic world, they generated great interest for their creativity and dynamism, but also perplexity about their replicability (the algorithms are often based on easily reproducible mathematical formulas) and about their attribution to the artist who produced them. Is the computer generating the artwork by itself, or is it the artist who wrote the code? This is a complex question. The history of the last 30 years has shown that artists can emerge from an indistinct mass of replicative experiences only when they find their own recognizable personal path and innovative ability.
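As a minimal sketch of the kind of dynamical system mentioned above (an illustrative toy, not any specific artwork): an elementary cellular automaton, here Rule 30, whose evolution from a single live cell produces complex, hard-to-predict patterns from a trivially simple formula.

```python
import numpy as np

RULE = 30
# Lookup table: for each 3-cell neighbourhood encoded as 0..7,
# the next state of the central cell.
TABLE = [(RULE >> i) & 1 for i in range(8)]

def step(row):
    """Apply the rule to every cell, with wrap-around boundaries."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = (left << 2) | (row << 1) | right
    return np.array([TABLE[i] for i in idx])

width, steps = 31, 15
row = np.zeros(width, dtype=int)
row[width // 2] = 1                      # start from a single live cell
for _ in range(steps):
    print("".join("█" if c else " " for c in row))
    row = step(row)
```

The perplexity described above is visible here: the whole "work" is a one-line update rule, yet the resulting texture is intricate and, for Rule 30, effectively unpredictable without running it.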

The situation becomes more complex when artists use randomness as a creative element, as in generative art. In these works, chance plays a fundamental role, giving the work a kind of creative autonomy and unpredictable behaviour. Often it is the main source of beauty, both in the creation of complexity and in the creation of "aesthetic bio-diversity".

 

In Generative Art, the creative process begins with a starting idea that corresponds to a prototype of the code and its figurative elements, iteratively modified to reach an excellent aesthetic, until arriving at the final design, called the “meta-design”. Through the paradigm of Generative Art, the artist becomes the creator of a generative process and puts it into action. Paradoxically, the possibility that the work can regenerate itself infinitely without losing its aesthetic quality becomes its most fascinating aspect.

Some GA artworks include interaction (“Interactive Art”): visitors interact through their movements in installations equipped with sensors. In this case, besides the author and the autonomy of the work, the visitor plays a fundamental role, determining the creative path in the hyperspace of parameters and controlling the process through his or her own sensitivity. Other artworks are classified as “Evolutionary Art”: works with the capacity to evolve over time towards more complex, open configurations. “Genetic Art” is based on the introduction of mechanisms of genetic mutation, reproduction and selection. Typically, the parameters that define the dynamics and aesthetics of the work are grouped into numerical sequences conceived as a “DNA”, subject to mutations in the replication phase.
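The “DNA” mechanism can be sketched in a few lines (a purely illustrative toy, not any specific artwork): a parameter vector is replicated with small random mutations, and at each generation the variant with the best score survives. The `fitness` function here is an assumed stand-in; in a real work it would encode an aesthetic judgement.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(dna, target):
    """Toy stand-in for an aesthetic judgement: closeness to a target vector."""
    return -np.sum((dna - target) ** 2)

def reproduce(dna, n_children=20, mutation_rate=0.1):
    """Each child copies the parent's DNA with small random mutations."""
    return [dna + rng.normal(0, mutation_rate, dna.shape) for _ in range(n_children)]

target = np.array([0.8, 0.2, 0.5])   # hypothetical "ideal" parameters
dna = np.zeros(3)                    # initial parameters of the work
for generation in range(100):
    children = reproduce(dna)
    dna = max(children, key=lambda c: fitness(c, target))  # selection

print(np.round(dna, 2))              # ends up close to the target
```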

When the context is composed of many evolutionary elements, it is called artificial life or “Alife Art”. The “living components” are referred to as “creatures” or “autonomous agents”. Other works build on the metaphor of “memetics”: the cultural evolution of a social context in which a multitude of individuals exchange “memes”. Although these are not “scientific” simulations of the real world, the evocation of imaginary worlds as metaphors of real ones is very interesting, perhaps also for grasping emotional meanings that escape the scientific approach. John Casti in 1997 defined these contexts as "Would-Be Worlds". In 2005 I defined this art form as “Art of Emergence”.

 

Artificial Intelligence in Art

Artificial Intelligence began to enter the Generative Art field around the end of the ‘90s. Some art installations started using “learning capability” to evolve “creatures” or to interact with visitors. In other words, the agents are equipped with a neural network that allows them to learn from experience and develop more complex, targeted behaviour to achieve certain objectives (e.g., moving, swimming, eating, fighting, interacting with visitors, speaking, or making sounds or melodies). These experiments have shown remarkable evolutionary possibilities, often surprising in their mimicry of human intelligence.
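The artworks mentioned above used various learning schemes; as a minimal, purely illustrative sketch of an agent learning a goal-directed behaviour from experience, here is a tabular Q-learning toy (a deliberate simplification, not a neural network and not any specific installation): a "creature" on a line of positions learns, from rewards alone, to move towards "food".

```python
import numpy as np

N_STATES, GOAL = 5, 4        # positions 0..4, "food" at position 4
ACTIONS = (-1, +1)           # move left / move right
Q = np.zeros((N_STATES, 2))  # learned value of each action in each state
rng = np.random.default_rng(0)
alpha, gamma = 0.5, 0.9      # learning rate, discount factor

for episode in range(200):
    s = 0
    while s != GOAL:
        a = int(rng.integers(2))                        # explore at random
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)  # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0                  # reward only at the food
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = ["left" if np.argmax(Q[s]) == 0 else "right" for s in range(N_STATES - 1)]
print(policy)   # the greedy behaviour learned purely from experience
```

Even this toy shows the key point of the paragraph above: the behaviour ("always move towards the food") is never programmed explicitly; it emerges from rewarded experience.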

Many artists working in this field have asked themselves the fateful question: "Can artificial creatures (machines) develop a form of consciousness?". In this regard, John Searle observes that humans can correlate symbols with meanings, while machines can only correlate symbols with other symbols (e.g., words, texts and images), without any access to their meanings. According to this position, machine-creatures can evolve within the limits of adaptive behaviour aimed at specific goals, without understanding the meaning of their actions.

It is clear that we are talking about something quite different from human intelligence. These environments succeed in emulating or mirroring the human material used in learning. They deal not with meanings but only with symbols, and they feel no emotions even when they give the impression of doing so. They cannot invent “concepts”. These points are important for understanding what is happening today with generative AI.

 

In 2014, Ian Goodfellow developed an innovative algorithm, the GAN (Generative Adversarial Network). These schemes have been very successful and are very effective at producing realistic images in response to words supplied as input (the “prompt”). These impressive results surprised everyone.
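A minimal sketch of the adversarial idea behind GANs (illustrative only, not Goodfellow's implementation): a discriminator D is trained to score real images near 1 and generated ones near 0, while the generator is trained to push D's score on its own outputs towards 1. The two losses below pull in opposite directions, which is what drives both networks to improve.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # D wants D(real) -> 1 and D(fake) -> 0
    return float(-np.mean(np.log(d_real) + np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    # G wants D(fake) -> 1, i.e. to fool the discriminator
    return float(-np.mean(np.log(d_fake)))

# Discriminator winning: fakes score low -> low D loss, high G loss
print(discriminator_loss(np.array([0.9]), np.array([0.1])),
      generator_loss(np.array([0.1])))
# Generator winning: fakes score high -> high D loss, low G loss
print(discriminator_loss(np.array([0.9]), np.array([0.9])),
      generator_loss(np.array([0.9])))
```

In training, gradient steps alternate between minimizing the first loss (for D) and the second (for G); the equilibrium of this game is a generator whose outputs the discriminator can no longer tell from real data.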

Because the procedure has many business applications, a large number of software platforms for generating images from text (TTI, text-to-image) have been realized, such as Midjourney, Stable Diffusion and DALL-E (and now several others). In most cases, the image databases used are derived from the web without respecting copyright. Artists (“prompters”) have emerged who wish to explore this new way and use these tools to produce evocative images generated from the combination of specific concepts. The tools are still in an experimental phase, and very often the result depends on the specific words used (synonyms, or simply a different prompt composition, lead to different results). The resulting images are not photomontages of images contained in the database. For example, I can ask for an image of a “fishing boat in the port of Venice in the style of Van Gogh”, and the generated image does not correspond to any Van Gogh painting. In essence, the style of Van Gogh is "emulated", but the represented subject is connected to the input prompt.

There is a major difficulty for an artist using these tools: it is hard to find a graphic style of one's own, as the system tends to derive it from the images in the database. There is a real risk of generating non-original works (an aspect that may not be a problem for many business applications). It takes a lot of experimentation and a strong artistic personality to produce original and recognizable works. Often it is a matter of exploiting the "bugs" of the system in a compositional sense (with the problem that the next version of the platform could give different results!). In TTI, generating an image is quite simple (one hour is more than enough to learn, so it lends itself to an easily accessible art), but control is much harder: it is very difficult to generate an original, interesting work.

The issue of authorship becomes even more complex, and a debate has emerged around the real attribution of a TTI Generative AI work. In the case of Generative Art (including AI techniques), the artist works on a meta-design entirely generated and coded by the artist; there is no doubt about attribution, unless extensively explored algorithms are used. In the case of TTI Generative AI, the artwork is a sort of co-creation by the artist, the programmers of the generating tool, and all those who produced the images contained in the training database. Where the database covers an important part of the web's content, the digital society itself participates in the construction of the work, in a kind of collective creation.

This reasoning does not intend to detract from the contribution of the artist, but it is necessary to evaluate case by case, in order to understand the path taken by the artist and the narrative of the work he or she elaborates. Therefore, art based on TTI Generative AI cannot in general be considered a purely non-original technological art form: the work must be assessed in the context proposed by the artist. For example, as an emulative mirror of learning, a TTI Generative AI work based on collective co-creation can, through its very limits, express the cultural contradictions present in the "collective digital culture". Alternatively, the artist can use a personal database for his or her creations instead of a collective one.

There are many objections to the use of TTI Generative AI in art business applications. The most important is the close connection with the business system that produced these platforms: in essence, economic value taken away from a large number of people is distributed to a few multinational corporations. One of the most interesting routes to mitigate this aspect is the request made by several artists' organizations (including the Artist Rights Alliance) to recognise an economic contribution to artists whose images are used for training (a training right). A regulation of the subject (the AI Act) is proceeding at the European level.

Mauro Annunziato

Abstract of the full paper published at: https://www.mauroannunziato.com