Biases in Generative Art—A Causal Look from the Lens of Art History

Abstract

With rapid progress in artificial intelligence (AI), the popularity of generative art has grown substantially. From creating paintings to generating novel art styles, AI based generative art has showcased a variety of applications. However, there has been little focus on the ethical impacts of AI based generative art. In this work, we investigate biases in the generative art AI pipeline, from those that can originate due to improper problem formulation to those related to algorithm design. Viewing these biases from the lens of art history, we discuss their socio-cultural impacts. Leveraging causal models, we highlight how current methods fall short in modeling the process of art creation and thus contribute to various types of biases, and we illustrate this through case studies. To the best of our knowledge, this is the first extensive analysis that investigates biases in the generative art AI pipeline from the perspective of art history. We hope our work sparks interdisciplinary discussions related to the accountability of generative art.

1 Introduction

Generative art refers to art that in part or in whole has been created by the use of an autonomous system [13]. In a broad sense, the term “autonomous” can refer to any non-human system that can determine features of an artwork, such as the use of smart materials, mechanical processes, and chemical processes, to name a few. Computer generated art, i.e. art generated by algorithms or computer programs, is perhaps the most common form of generative art [65]. In fact, the terms “generative art” and “computer art” have been used more or less interchangeably for a long time now [13, 65].

From the 1970s, when painter Harold Cohen used a computer program to create paintings [20, 37], to the current times, wherein AI algorithms are used to generate art, generative art has come a long way. With the rapid advancement of deep learning, there has been remarkable progress in AI based generative art. From creating hybrid images using attributes from multiple images to generating cartoons from portraits, AI based generative art has exemplified new and diverse applications [26, 28, 5, 47].

As a consequence, AI based generative art has become extremely popular. In 2019, the auction house Sotheby’s sold an AI generated artwork for 32,000 GBP [57]. A little earlier, in 2018, the auction house Christie’s sold an AI generated artwork for a staggering 432,500 USD [18]. Institutions have also shown an increased interest towards AI based generative art, as evidenced by the number of museum shows [9, 27, 3]. There are also several apps and tools such as [5, 39, 23] that have not only enhanced the popularity of AI based generative art, but have also facilitated ease of use and accessibility for end-users.

Figure 1: Example of bias in AIportraits app—Skin color of actress Tessa Thompson (left) is lightened in the app’s portrait rendition (right), thus exhibiting racial bias. Image source [Sung, 2019].

Amidst this progress and popularity surge, experts across disciplines have voiced concerns regarding the consequences of generative art. For example, in [22], the authors reflect on the impact of interfacing with art autonomously generated by non-human creative agents. The authors argue that such art could widen the divide between the human creator and the human audience, and emphasize the need to balance automation with the development of humans’ creative potential. Discussing whether computers can replace human artists, the author in [37] argues that “art requires human intent, inspiration, a desire to express something”. The author further adds that artistic creation is primarily a social act, and concludes that computers cannot replace humans. In a similar vein, discussing whether machines can create art, the author in [19] argues that the process of creating an artwork is distinct from the outcome, i.e. the artwork itself. Drawing from the theory of philosophical aesthetics, the author states that when something is created, something about the creator’s inner self is expressed.

It has been argued that both art and technology are means through which humans reveal their epistemic knowledge [44]. Such knowledge could include societal values, cultures, and beliefs, as well as individual biases and prejudices. In [37], artist and computer scientist Aaron Hertzmann states that “artworks are created by a human-defined procedure”, and further notes that computer generated art can thus be biased. Even artists like David Young, who argue that machines create art on their own, acknowledge the existence of bias in generative art. In the essay Tabula Rasa [69], Young notes that human biases in the form of preconceptions, irrationalities, and emotions can easily get embedded into the data used to train these generative art AI models. A recent notable example of bias in generative art concerns a portrait generator app called “AIportraits” [4]. It was pointed out that the skin color of people of color was lightened in the app’s portrait renditions [40, 55]. Figure 1 provides an illustration of this. Furthermore, AI algorithms are best thought of as data-fitting procedures [15]. As the author in [37] aptly describes, these algorithms are “like tourists in a foreign country that can repeat and combine phrases from the phrasebook, but not truly understand the foreign language or culture”.

Figure 2: Example of bias in learning artists’ styles. The affect conveyed by the original image (a) is lost in the Van Gogh version (b) of CycleGAN. This is contrary to Van Gogh’s style, see (c), a real artwork by Van Gogh depicting red flowers in the field like in (a).

Many types of latent biases in generative art, especially those concerning art history, have not been analyzed in past studies. Further, the socio-cultural impacts of biases in generative art have not been investigated. We aim to address these issues in this work. We motivate the problem with an illustration. Consider an image-to-image translation model such as CycleGAN [72], which has been used to create images across different artists’ “styles”. Figure 2 (b) shows an example of the “Van Gogh” version of a photograph (Figure 2 (a)) as rendered by the CycleGAN model. As can be seen, the affect conveyed by the original image is lost in the “Van Gogh” version: the red flowers, perhaps indicative of spring, are no longer evident; instead, a dry season is reflected.

The generated image seems quite contrary to Van Gogh’s take on colors. “Green Ears of Wheat”, an 1888 artwork by Van Gogh, serves as an illustration of the point; see Figure 2 (c) [62]. As documented in his letters to his sister Wilhelmina and to artist Horace M. Livens, Van Gogh describes how he emphasized colors [60], and how by using bright contrasting colors he was able to infuse life into his works: “Poppies or red geraniums in vigorously green leaves - motif in red and green. These are fundamentals, which one may subdivide further, and elaborate, but quite enough to show you without the help of a picture that there are colours which cause each other to shine brilliantly, which form a couple, which complete each other like man and woman” [59, 58]. Thus, by merely using correlation statistics to model the artist’s style, aspects such as emotion and intent which are central to an art’s creation [19] are not taken into account, thereby rendering a biased representation of the artist’s “style” and also possibly stereotyping the artist in the process.

Such biases could potentially have long-standing adverse socio-cultural impacts. First, because of inherent biases in training data and algorithms, generative art could be embedded with racial bias, gender bias, and other types of discrimination. Second, based on their limited understanding, algorithms could stereotype artists’ styles and not reflect their true cognitive abilities. This means aspects such as the artist’s intent and emotions are overlooked, potentially conveying an opposite affect in the generated art. Third, historical events and people may be depicted in a manner contrary to the original times, thus contributing to a bias in understanding history and thereby getting in the way of authentically preserving cultural heritage (illustrated in Sec. 6). These observations therefore compel an analysis of generative art so as to uncover various types of biases.

1.1 Contributions

In this paper, we investigate biases in AI based generative art, from those that can originate due to inappropriate problem formulation to those related to algorithm design. Viewing these biases from the lens of art history, we discuss their socio-cultural impacts. We advocate for the use of causal models [51] to depict potential processes of art creation. We highlight how current methods can fall short in modeling the process and thus contribute to various biases such as selection bias and transportability bias [11]. We illustrate this through case studies that span various art movements, artists, art media/materials, genres, and geographies.

First, examples of biases that arise due to improper problem formulation, namely framing effect biases, are considered (Sec. 4). After providing a brief background on causal models (Sec. 5), we discuss case studies to demonstrate confounding biases in modeling artists’ styles (Sec. 6.1), followed by illustrations of selection bias (Sec. 6.2). Next, biases in transferring artists’ styles, also known as transportability biases, are discussed (Sec. 6.3). Transportability bias is demonstrated by considering case studies that include both art movements that are subtly different (e.g. modern art movements such as Cubism and Futurism) as well as art movements across different time periods (e.g. Modern art and Early Renaissance). Illustrations of biases in datasets are provided in Sec. 7. A complete list of the case studies examined in the paper can be found in Table 1. Our findings shed light on the various inherent biases in generative art, from problem formulation and datasets to algorithm design and data analysis. We also discuss the socio-cultural repercussions of these biases (Sec. 8). To the best of our knowledge, this is the first extensive analysis that investigates biases in the generative art AI pipeline from the perspective of art history. We hope our work triggers interdisciplinary discussions concerning the accountability of generative art, and sparks the design of novel methods to address these issues.

CS | Bias type | Art Movements | Artists | Genres
1 | Confounding bias | Post-Impressionism | Vincent Van Gogh | Landscape
2 | Selection bias | Romanticism | Gustave Dore | Illustration
3 | Selection bias (racial bias) | Renaissance | Various artists | Portraits
4 | Transportability bias | Post-Impressionism | Paul Cezanne | Photo, Landscape
5 | Transportability bias | Cubism, Futurism | Fernand Leger, Gino Severini | Genre art, battle painting
6 | Transportability bias | Realism, Expressionism | Mary Cassatt, Ernst Kirchner | Portraits
7 | Transportability bias (racial bias) | Renaissance, Expressionism, Folk art | Clementine Hunter, Desiderio da Settignano | Portrait, Sculpture
8 | Transportability (gender) bias | Renaissance | Raphael, Piero di Cosimo | Portraits
9 | Representational bias | Renaissance | Various artists | Portraits
10 | Label bias | Ukiyo-e | Various artists | Various genres

Table 1: Summary of case studies (CS) described in the paper. Note that there could be more than one type of bias associated with each CS; for illustrative purposes, only one bias type is discussed in each CS.

1.2 Case study selection

We surveyed academic papers, online platforms, and apps that generate art using AI. In order to uncover potential biases from an art historical perspective, we selected from the surveyed list those papers and platforms that focus on simulating established art movements and/or artists’ styles. Thus, papers such as [26] or platforms such as [5], which focus on deviating from established styles to create imaginary patterns, are not included in our study. To demonstrate various biases, we have considered state-of-the-art generative art AI models [72, 56] and platforms/apps such as [23, 4, 31] that focus on simulating established art movements and artists’ styles. The art movements considered as part of the case studies have been determined based on the experimental set-ups reported in these state-of-the-art AI models and platforms. These include Renaissance art, Modern art (Cubism, Futurism, Impressionism, Expressionism, Post-Impressionism, and Romanticism), and Ukiyo-e art. The genres span landscapes, portraits, battle paintings, genre art, sketches, and illustrations. The art materials considered include woodblock prints, engravings, paint, etc. The study includes artists across cultures, such as Black folk artist Clementine Hunter, American painter Mary Cassatt, Dutch artist Van Gogh, French illustrator and sculptor Gustave Dore, and Italian artist Gino Severini, amongst others. In the next section, we discuss work related to computer generated art.

2 AI for Art Generation

Computer generated art has a long history. In the 1970s, painter Harold Cohen began exhibiting paintings generated by a program called AARON [20, 37]. By the 1980s, several artists were using computer programs to create interactive experiences for the audience [37]. In the 1990s, Flash, a tool for creating animations, became popular [9]. Around the same time, Paul Haeberli introduced a paint program with which a user could quickly create a painting without needing any technical skill [35]. In the 2000s, tools like Processing [17] and OpenFrameworks [71] allowed artists to make art using code.

As early as 2001, researchers in computer vision were training computers to learn artists’ styles from examples [36]. Since 2012, rapid advancement in deep learning has triggered a wide range of models for AI based generative art. For example, [47] is an open source tool released by Google that uses a convolutional neural network (CNN) to find and enhance patterns in images, creating a dream-like, hallucinogenic appearance. Another CNN-based approach is the neural style transfer work [28], which blends the content of one image with the style of another image to create new images. In [34], a recurrent neural network is proposed to construct stroke-based drawings of common objects.

Recently, generative adversarial networks (GANs) [32] have become popular for creating art. Creative adversarial networks [26] modify the GAN objective to encourage creativity by maximizing deviation from established styles while minimizing deviation from the overall art distribution. In [72], the authors proposed a method to translate styles across unpaired images and illustrated its applicability in transferring artists’ styles. In [56], the authors used conditional GANs to generate artworks.

There are many tools and apps that enable users to easily create art. Using [4], users can transform a portrait in the style of famous portraits. Photos can be converted into artworks using [39]. In an application called GANbreeder, two images are combined to generate a new image [5]. The style of an input image is transformed into another specified style in [23]. Cartoonify [45] turns a photo into a cartoon drawing leveraging Google’s “Draw This” [46].

Given that there are many tools to quickly create art, there is an increased risk of generative art being biased. With a surging art market [16] and a pressing need for diversity and inclusiveness in art [8], an analysis of bias in generative art becomes even more pertinent. While there have been studies to understand how humans perceive art [38] and to examine whether there is a perception bias towards art created by AI [53], there is little to no extensive analysis concerning biases in generative art. In this paper, we provide an extensive analysis of biases in the generative art AI pipeline from the perspective of art history. In the rest of this paper, generative art refers to AI based generative art.

3 Art-historical aspects of artworks

In this section, we discuss various aspects pertaining to art history based on which artworks can be analyzed. Art historians employ a number of ways to group world arts into systems of classification [62]. These groupings are based on sets of significant qualities. Such qualities could relate to the specific approach of an artist, the material used to create artworks, art movements, genre, etc. We provide a brief account of some main aspects so as to aid in understanding some types of biases that we illustrate in the paper.

3.1 Art Movements

Art movements can be described as tendencies or styles in art with a specific common philosophy, influenced by various factors such as cultures, geographies, and political-dynastical markers, and followed by a group of artists during a specific period of time [62]. Examples of art movements include Ancient Egyptian art, Ancient Greek art, Medieval art, Renaissance art, and Modern art. Within each of these art movements, there are sub-categories based on various factors. For example, modern art includes many sub-categories such as Symbolism, Impressionism, Post-Impressionism, Cubism, Futurism, Pop art, and so on. As an illustration, consider Impressionism and Post-Impressionism. Both movements originated in France; however, Post-Impressionism originated in reaction to Impressionism. While Impressionism was characterized by vibrant colors, spontaneous brush strokes, and urban life styles, Post-Impressionist artists had their own individual styles to symbolically display real subjects and their emotions [49]. Similarly, each art movement is characterized by unique features that reflect certain trends. Thus, art movement is a dominant factor influencing artists and artworks. Interested readers may refer to [62], where over a hundred sub-categories are listed across a dozen art movements.

3.2 Art material

Artworks can also be grouped based on the material and techniques used in creating the art. Charcoal, enamel, mosaics, tapestry, paint, and lithography are some examples of art materials. Artists use different techniques to create artworks from different materials. For example, a mosaic is a coherent pattern or image in which each component element is built up from small regular or irregular pieces of substances such as stone, glass, or ceramic, held in place by plaster or mortar, entirely or predominantly covering a plane or curved surface, even a three dimensional shape, and normally integrated with its architectural context [25]. Mosaics were traditionally used as decoration for floors and walls, becoming very popular across the Ancient Roman world. Different art movements saw the prevalence of different materials. For example, during the Renaissance period, sculptures were made out of various materials like marble, white stone, and gold. Thus, the material and technique employed to create art can influence the artist and the resulting artwork. An elaborate list of various materials can be found in the WikiArt dataset [62].

3.3 Genre

The genre of an artwork is based on the depicted themes and objects. A hierarchy of genres was developed in the 17th century [1]. According to this hierarchy, history paintings, namely paintings depicting scenes of important historical, mythological, and religious events, were considered the top ranked genre, and this was so until the mid-19th century. History paintings were usually big in size and typically narrated a story such as a battle, an allegory, or the like. Portraiture was another prominent genre; portraits usually depicted royals, aristocrats, and other important people in society. Portraiture had to convey aesthetic aspects of the subject depicted, such as their power, beauty, etc. In contrast, “genre painting” depicted scenes from the everyday lives of ordinary people. Landscapes, animal painting, and still life painting are some other prominent genres. Abstract and figurative art are the most common genres for contemporary art [62]. An artist can usually work across different genre types; however, art historians mark certain artists as representatives of a particular genre. For example, Anthony van Dyck is recognized as a portraitist, Alfred Sisley as a landscape painter, and Piet Mondrian as an abstract artist, though each one of them worked in a number of different genres [62]. The WikiArt dataset provides an extensive list of genres based on artists and artworks.

3.4 Artists

There are many aspects that characterize an artist’s “style”. In addition to factors such as art movement, art material, and genre, an artist’s style can be characterized by factors such as their cultural background, their art lineage or schools (from whom they learned or who influenced them), and other subjective aspects such as their cognitive skills, beliefs, prejudices, and so on. Consider, for example, Paul Cezanne, one of the most popular artists in the history of modern art. Although generally categorized as a Post-Impressionist artist, Cezanne influenced several other art movements such as Cubism and Fauvism. Some of his early pictures depict classical and romantic themes with expressive brushwork and dark colors, while later he is said to have adopted brighter colors, drawing on inspiration, emotions, and memory to paint [61]. Cezanne himself remarked: “A work of art which did not begin in emotion is not art”. In his still-life paintings, Cezanne began to address technical problems of form and color by experimenting with subtly gradated tonal variations, or “constructive brushstrokes”, to create dimension in the objects [61]. His artworks span a variety of genres and exhibit patterns from multiple art movements marked by subtle cognitive aspects. Thus, modeling an artist’s style is not a straightforward computational task; it entails many abstract elements that are hard to quantify. Yet, researchers define “style” in ways that suit their model’s performance; we discuss this issue next.

4 Biases due to Problem Formulation

Biases can arise based on how a problem is defined; this is formally known as the framing effect bias [52]. Consider, for example, the problem of style transfer in artworks. There are at least two different notions of style when it comes to artworks: one related to the art movement, and another related to the artist. As described in Section 3, there are many aspects to each of these notions of style. Yet researchers conveniently define style in a manner that suits their model’s performance. This is a consistent problem across several models. For example, in [56], the authors claim that their model learns the Ukiyo-e style since the generated images are “yellowish” like Ukiyo-e artworks. In [72], a single model is used to learn the styles of artists and art movements. As in [56], the justification for having learned the Ukiyo-e style seems to be based on color features. Thus, “style” has been defined based on the color of the generated art. Given that most Ukiyo-e works were woodblock prints, it is natural for the generated art to be yellowish.

Ukiyo-e is a form of Japanese art. The works usually depicted landscapes, tales from history, scenes from the Kabuki theatre, and other aspects of everyday city life. Some unique characteristics of Ukiyo-e included exaggerated foreshortening, asymmetry of design, areas of flat (unshaded) colour, and imaginative cropping of figures [66]. Foreshortening refers to the technique of depicting an object or human body in a picture so as to produce an illusion of projection or extension in space and to convey the notion of depth. These characteristics are not captured in the examples depicted in [56, 72]. Such drawbacks can also arise due to model design issues, which we discuss in Section 6. Nevertheless, inappropriate problem formulation can introduce and perpetuate biases across the generative art AI pipeline. In the next section, we briefly describe the structural causal models that we leverage to analyze different biases.

5 Structural Causal Models

We advocate for the use of causal directed acyclic graphs (DAGs) [51] to analyze some types of biases related to algorithm design and datasets. In Section 3, we saw that there are several aspects relevant to an artwork and that these aspects could influence each other in many ways. For example, the art movement could influence the choice of art material, the subject of a portrait could influence the artist, and so on. DAGs help in visualizing such relationships. DAGs also allow encoding of assumptions about data, model, and analysis, and serve as a tool to test for various biases under such assumptions. Researchers have leveraged causal models to discuss and develop various notions of fairness [30, 42]. Causal models enable domain experts such as art historians to encode their assumptions, and hence serve as accessible data visualization and analysis tools. Based on different expert opinions, there can be multiple sets of assumptions, and thus more than one DAG describing the relationship of an artwork with the artist, genre, art movement, etc. Using multiple DAGs, it is therefore possible to analyze for biases under different scenarios. We discuss basic concepts related to causal models, and through case studies, illustrate the intuition behind using causal models to analyze biases in generative art.

DAGs are visual graphs for encoding causal assumptions between the variables of interest. Specifically, the assumptions about data are encoded by means of structural causal models (SCMs) [51]. A structural causal model M consists of two sets of variables, U and V, and a set of functions F that determine or simulate how values are assigned to each variable V_i ∈ V. The equation

v_i = f_i(v, u)    (1)

describes a physical process by which variable V_i is assigned the value v_i in response to the current values, v and u, of the variables in V and U. Formally, the triplet M = ⟨U, V, F⟩ defines an SCM [11]. The variables in V are observed and the variables in U are unobserved. The variables in U and V constitute the vertices of a causal graph, and the directed edges between them denote the various causal dependencies.
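As a minimal illustration of Eq. (1), the assignment functions f_i of a toy SCM can be simulated in code. The three-variable model below (art movement → art material → artwork, with the movement also directly influencing the artwork) is hypothetical, with coefficients chosen purely to show the mechanics of sampling U and propagating values through F:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n):
    # Exogenous variables U: one independent noise term per endogenous variable.
    u_mov, u_mat, u_art = rng.normal(size=(3, n))
    # Endogenous variables V: each is a function f_i of its parents and its noise.
    movement = u_mov                                    # V1 := f1(U1)
    material = 0.8 * movement + u_mat                   # V2 := f2(V1, U2)
    artwork = 0.5 * movement + 0.7 * material + u_art   # V3 := f3(V1, V2, U3)
    return movement, material, artwork

movement, material, artwork = sample_scm(10_000)
```

Each line mirrors one structural equation: the U terms are drawn first, and every endogenous variable is a deterministic function of its parents plus its own noise.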

5.1 d-separation

Regardless of the functional form of the equations in the model (1), conditional independence relations can be obtained if the model satisfies certain criteria. d-separation is a criterion for deciding, from a given causal graph, whether a set of variables X is independent of another set Y, given a third set Z. The idea is to associate “dependence” with “connectedness” (i.e., the existence of a connecting path) and “independence” with “unconnectedness” or “separation” [51]. A path here refers to any consecutive sequence of edges, disregarding their direction.

Figure 3: d-separation: graph structures to illustrate conditional independence. Please refer to Section 5.1 for details.

Consider a three-vertex graph consisting of vertices X, Z, and Y. There are three basic types of relations using which any pattern of arrows in a DAG can be analyzed; these are depicted in Figure 3. The leftmost graph (i) denotes a causal chain, X → Z → Y, or “mediation”. The effect of X on Y is mediated through Z. In this case, given Z (i.e., conditioning on Z), X is independent of Y, or Z is said to block the path from X to Y. In the center graph (ii), X ← Z → Y, Z is a common cause of X and Y. If unobserved, Z is a confounder as it causes spurious correlations between X and Y. Conditioned on Z, the path from X to Y is blocked. In the rightmost graph (iii), X → Z ← Y, Z is a collider as two arrows enter into it. As such, the path from X to Y is blocked. However, conditioning on Z, the path will be unblocked. In general, a set Z is admissible (or “sufficient”) for estimating the causal effect of X on Y if the following two conditions hold [51]:

  • No element of Z is a descendant of X

  • The elements of Z block all backdoor paths from X to Y, i.e., all paths that end with an arrow pointing into X.
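The three structures of Figure 3 can be checked empirically. The sketch below uses hypothetical linear-Gaussian models, labels the vertices X, Z, and Y, and uses partial correlation as a linear stand-in for conditional independence:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

def partial_corr(x, y, z):
    # Correlate the residuals of x and y after linearly regressing out z.
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

# (i) Chain X -> Z -> Y: X and Y are dependent, but conditioning on Z blocks the path.
x = rng.normal(size=n); z = x + rng.normal(size=n); y = z + rng.normal(size=n)
chain_raw, chain_given_z = np.corrcoef(x, y)[0, 1], partial_corr(x, y, z)

# (ii) Fork X <- Z -> Y: Z is a confounder; conditioning on Z blocks the path.
z = rng.normal(size=n); x = z + rng.normal(size=n); y = z + rng.normal(size=n)
fork_raw, fork_given_z = np.corrcoef(x, y)[0, 1], partial_corr(x, y, z)

# (iii) Collider X -> Z <- Y: the path is blocked, but conditioning on Z opens it.
x = rng.normal(size=n); y = rng.normal(size=n); z = x + y + rng.normal(size=n)
coll_raw, coll_given_z = np.corrcoef(x, y)[0, 1], partial_corr(x, y, z)
```

In the chain and fork, the raw correlation is large and the partial correlation vanishes; in the collider, the pattern reverses, matching the blocking rules stated above.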

5.2 Interventions

An intervention on a graph is denoted by the “do” operator [51]. The do operator corresponds to setting the intervened variable to a specific value and removing the influence of other variables on it. For example, in graph (ii) of Figure 3, do(X = x) implies that the variable X is set to the value x and the incoming arrow into X is removed. The do operator facilitates quantification of causal effects. For example, in Figure 3 (ii), the expression P(Y | do(X = x)) quantifies the causal effect of X on Y. A set of rules referred to as do-calculus is always applicable in the context of interventions [51]. These rules determine when it is possible

  • to add/delete observations in interventions,

  • to exchange interventions and observations,

  • to add/delete interventions.

The do-calculus helps in rendering expressions free from the do operator. We are now equipped to discuss algorithmic biases in generative art.
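A small simulation can make the contrast between observing and intervening concrete. The model below is hypothetical: a confounder Z influences both X and Y, and X has a true causal coefficient of 1.0 on Y. do(X = x) is implemented by severing the incoming edge into X and forcing its value:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000

def observe(n):
    # Confounded model: Z -> X, Z -> Y, and X -> Y with causal coefficient 1.0.
    z = rng.normal(size=n)
    x = z + rng.normal(size=n)
    y = 1.0 * x + 2.0 * z + rng.normal(size=n)
    return z, x, y

def do_x(x0, n):
    # do(X = x0): sever the Z -> X edge and force X to x0.
    z = rng.normal(size=n)
    x = np.full(n, x0)
    return 1.0 * x + 2.0 * z + rng.normal(size=n)

z, x, y = observe(n)
# Observational slope of y on x mixes the causal effect with confounding by z
# (here it should come out near 1 + 2*cov(z,x)/var(x) = 2).
obs_slope = np.polyfit(x, y, 1)[0]
# Interventional contrast E[Y|do(X=1)] - E[Y|do(X=0)] recovers the true 1.0.
causal_effect = do_x(1.0, n).mean() - do_x(0.0, n).mean()
```

The observational regression roughly doubles the true effect because Z is left uncontrolled, while the interventional contrast isolates the X → Y edge.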

6 Biases related to Algorithm

Bias can arise if the algorithm ignores the effect of unobserved variables, overlooks domain specific differences, uses subsets of the population for analysis, and so on. We discuss these in this section.

6.1 Confounding bias

Confounding bias originates from common causes that affect both inputs and outputs [51]. We illustrate this bias through case studies.

Case Study 1:

Modeling artists’ styles has been one of the most common applications in generative art. As discussed in Section 3.4, several observed and unobserved abstract aspects constitute an artist’s style. Revisiting the example discussed in Figure 2, the problem of modeling Van Gogh’s style can be viewed as estimating the causal effect of Van Gogh on the artwork. Thus, the expression P(Artwork | do(Artist)) models the style of the artist in the artwork (Section 5.2). It is to be noted that the assumptions encoded through a DAG can vary from one expert opinion to another. However, these varying opinions help to discover and test for biases under different scenarios, and in turn highlight the drawbacks of existing correlation-based methods such as [72] that do not take into account important cultural and social aspects that influence an artist’s style.

Consider Figure 4 (i), which depicts one potential process of art creation (note there could be several others based on the assumptions of domain experts; we consider one such process for illustration). Here, the variable A denotes the artist, W denotes the artwork, G is the genre, T is the art material, and M denotes the art movement. According to the assumptions encoded in this DAG, art material, genre, and art movement are confounders influencing both the artist and the artwork. Further, the art movement influences the art material. Let us assume that all of the confounders are observable. Under these assumptions, in order to model the style of Van Gogh (i.e., to estimate P(W | do(A))), we have to block the backdoor paths from A to W (Section 5.1) so as to remove confounding bias.

Specifically, for graph (i) in Figure 4, the following equation captures the causal effect of A on W:

P(W | do(A = a)) = Σ_{g,t,m} P(W | A = a, G = g, T = t, M = m) P(G = g, T = t, M = m)    (2)

For the case study, do(A = a) implies do(Artist = Van Gogh). The summation in Eq. (2) indicates that one has to consider all possible art movements, art materials, and genres that the artist has worked with in order to model their style. The implication of finding a sufficient set Z = {G, T, M} is that stratifying on Z is guaranteed to remove all confounding bias relative to the causal effect of A on W.

Thus, modeling an artist’s style requires knowledge about the causal process governing the artwork’s creation. Models like [72, 56] that do not consider the influence of confounders like the art movement are prone to omitted variable bias [68] and confounding bias. Art movements were characterized by many socio-cultural and political events of their times. Thus, by ignoring this variable, there is a bias in capturing the artist’s style, in understanding the art’s intent, and in representing culture. In general, based on the DAG, the causal effect may or may not be identifiable. Eq. (2) differs from the conditional distribution P(W | A = a), and the difference between the two distributions, i.e., P(W | do(A = a)) − P(W | A = a), defines confounding bias [11].
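The adjustment formula of Eq. (2) can be sketched on synthetic discrete data. Here a single binary confounder C stands in for the full set {genre, material, movement}, and all probabilities are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Toy discrete analogue of Eq. (2): binary confounder C (think "art movement")
# influences both the artist indicator A and a binary artwork feature W.
c = rng.binomial(1, 0.5, n)                       # P(C=1) = 0.5
a = rng.binomial(1, np.where(c == 1, 0.8, 0.2))   # P(A=1|C) depends on C
w = rng.binomial(1, 0.3 + 0.2 * a + 0.4 * c)      # P(W=1|A,C)

# Naive conditional P(W=1 | A=1): biased by the open backdoor path A <- C -> W.
naive = w[a == 1].mean()

# Backdoor adjustment: sum_c P(W=1 | A=1, C=c) P(C=c), mirroring Eq. (2).
adjusted = sum(w[(a == 1) & (c == k)].mean() * (c == k).mean() for k in (0, 1))
```

The naive conditional overstates the effect (it should come out near 0.82 versus a true interventional value of 0.70) because artists with A = 1 are over-represented in C = 1; stratifying on C and re-weighting by P(C) removes that bias.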

Figure 4 (i) depicted a scenario with no unobserved confounders. In the presence of unobserved confounders, causal effects are not identifiable. Consider Figure 4 (ii). Let $U$ represent the emotions of the artist. The dotted circle and lines denote the unobserved confounder and its influence on other variables. In this case, even knowing the joint distribution of genre, art movement, and material does not help in identifying Van Gogh's style, i.e. $P(y \mid do(x))$ is not identifiable. In general, several subjective factors like prior beliefs, emotions, cultural values, etc. can influence the artist and the artwork. Thus, in reality, the true style of any artist cannot be accurately modeled. Even if confounding bias due to unobserved factors is overlooked, there are other biases, as discussed next.
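The failure mode of Figure 4 (ii) can also be simulated. In the sketch below (mechanisms and probabilities are assumed for illustration only), an unobserved confounder $U$ drives both artist and artwork; an analyst who cannot observe $U$ and falls back on the conditional $P(y \mid x)$ remains biased, since $U$ cannot be adjusted for:

```python
import random

random.seed(4)

# u = unobserved confounder (e.g. the artist's emotions), x = artist,
# y = artwork feature. We can compute the ground-truth interventional
# quantity only because we simulated u; a real analyst could not.
N = 200_000
data = []
for _ in range(N):
    u = random.random() < 0.5                         # hidden from analyst
    x = random.random() < (0.8 if u else 0.2)
    y = random.random() < (0.2 + 0.4 * u + 0.2 * x)
    data.append((u, x, y))

# Ground truth: P(y | do(x=1)) = E_u[0.2 + 0.4u + 0.2] = 0.60
truth = 0.2 + 0.4 * 0.5 + 0.2

# Analyst's naive estimate P(y | x=1), with u unobservable:
ys = [y for u, x, y in data if x]
naive = sum(ys) / len(ys)                             # ~0.72, biased upward
print(f"true P(y | do(x)) = {truth:.2f}, naive estimate ~ {naive:.3f}")
```

The naive estimate overshoots because conditioning on $x{=}1$ also raises the probability that $U{=}1$; no adjustment over observed variables can remove this gap.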

Figure 4: (i): Confounding bias due to genre, material and art movements. (ii): Confounding bias due to artist’s unobserved emotions.

6.2 Sample Selection bias

Sample selection bias (or selection bias for short) is the bias that is induced by preferential selection of units for data analysis [11]. In a DAG, a special variable $S$ is used to denote selection into the analysis, with $S = 1$ indicating selection and $S = 0$ indicating otherwise. Consider for example Figure 5 (i). Here, the edges $X \to S$ and $Y \to S$ indicate that both inputs and outputs are selection dependent (i.e. $S$ depends on both $X$ and $Y$). The case study discussed below will further clarify these concepts.

Case Study 2

To illustrate selection bias, let us consider an example described in the ArtGAN model [56]. The authors state that their model is able to recognize artist Gustave Dore's preferences, as the generated images resonate with the dull color found in Dore's artworks. Mostly engravings were selected for analysis. Graph (ii) in Figure 5 depicts a possible DAG for this case. Let $X$ denote Dore's style and let $S$ depict the selection of engravings into the analysis. Further, as the authors mention, the generated images are greyish due to the engravings considered, thus $S \to Y$, where $Y$ denotes the generated image. Additionally, there may be some unobserved confounders that influence both $X$ and $Y$, as denoted by the bi-directional dotted arrow, and there may be some other ArtGAN model variable $W$ that influences the nature of the generated image $Y$; these are depicted in Figure 5 (ii).

Figure 5: (i): Example of selection bias. (ii): Illustration of CS 2, see Section 6.2. (iii): Illustration of selection bias in datasets, see Section 7.1. The causal effect of $X$ on $Y$ is not identifiable across (i), (ii), and (iii).

In order to be able to recover the causal effect of $X$ on $Y$ under selection bias, [11] lists the conditions known as the selection backdoor criterion. Formally, let $Z$ be a set of variables partitioned into two groups $Z^{+}$ and $Z^{-}$ such that $Z^{+}$ contains all non-descendants of $X$ and $Z^{-}$ the descendants of $X$, and let $G_s$ stand for the graph that includes the selection mechanism $S$. $Z$ is said to satisfy the selection backdoor criterion if the following conditions are true:

  • (i) $Z^{+}$ blocks all backdoor paths from $X$ to $Y$ in $G_s$

  • (ii) $X$ and $Z^{+}$ block all paths between $Z^{-}$ and $Y$ in $G_s$

  • (iii) $X$ and $Z$ block all paths between $S$ and $Y$ in $G_s$

  • (iv) $Z \cup \{X, Y\}$ and $Z$ are measured (in the biased and unbiased studies, respectively).

In Figure 5 (ii), the path between $S$ and $Y$ is not blocked due to the presence of a direct link between the two variables; thus condition (iii) of the selection backdoor criterion is not satisfied. Hence Dore's style cannot be recovered based on the engravings considered in the analysis. In fact, in addition to engravings, Dore worked on paintings. For example, landscapes such as ‘The Lost Cow’, paintings such as ‘Little Red Riding Hood’, and religious paintings such as ‘The Wrestle of Jacob’ are not greyish and exhibit colors reflective of the genre. Thus, by merely selecting engravings for analysis, sample selection bias is induced and the style of Dore cannot be identified. Note, the failure to capture Dore's style in [56] can additionally be attributed to other types of biases such as confounding bias, label bias, and even framing effect bias. For the purpose of illustration, we have focused on describing selection bias only. In general, there can be more than one type of bias in a case study.
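The effect of such preferential selection can be sketched with a small simulation. All numbers below are assumed for illustration and are not taken from ArtGAN: $X$ is a binary style trait of the artist, $Y$ a binary artwork feature (1 = greyish tone), and $S$ selection into the analysis. Because grey works (engravings) are preferentially selected, $S$ depends on $Y$, and the conditional estimated on the selected sample diverges from the population one:

```python
import random

random.seed(1)

# Preferential selection: grey artworks (y = 1) are far more likely to
# enter the analysis, so S depends on Y and condition (iii) of the
# selection backdoor criterion fails.
N = 200_000
population, selected = [], []
for _ in range(N):
    x = random.random() < 0.5
    y = random.random() < (0.6 if x else 0.3)    # tone depends on style
    s = random.random() < (0.9 if y else 0.1)    # grey works get selected
    population.append((x, y))
    if s:
        selected.append((x, y))

def p_y_given_x1(rows):
    ys = [y for x, y in rows if x]
    return sum(ys) / len(ys)

p_true = p_y_given_x1(population)   # ~0.60 in the full population
p_sel = p_y_given_x1(selected)      # ~0.93: selection inflates greyness
print(f"P(y | x)        ~ {p_true:.3f}")
print(f"P(y | x, S = 1) ~ {p_sel:.3f}")
```

Analytically, $P(y \mid x{=}1, S{=}1) = 0.54 / 0.58 \approx 0.93$, far from the population value of $0.60$: a model fit only to the selected engravings would greatly overstate how grey the artist's style is.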

Case Study 3:

As another example of selection bias, consider the case of [4]. As illustrated in Figure 1, racial bias was evident in this application. Figure 5 (ii) can also be used to describe this scenario. Let $X$ denote the set of input images selected for analysis and let $Y$ represent the generated images. Renaissance portraits of mostly white people were selected for the analysis, thus $X \to S$. As evident, the generated images were portraits of fair-skinned people, thus $S \to Y$. Additionally, there could be unobserved confounders influencing both the input images $X$ and the generated images $Y$, and there may be model parameters influencing the generated images. As condition (iii) of the selection backdoor criterion is not satisfied, there is selection bias.

6.3 Transportability bias

Style transfer is a popular application in generative art. Various works have demonstrated transfer across artists' styles (e.g. Cezanne to Monet) and across art media (e.g. photograph to painting). Learning models that can generalize across domains is commonly known as transfer learning in the deep learning community and as transportability in the causality community. Transportability defines the conditions under which causal effects learned in experimental studies can be transferred to a new population in which only observational studies can be conducted [50]. The differences between the populations of interest are expressed through representations called “selection diagrams”. To this end, DAGs are augmented with a set, $S$, of “selection variables,” where each member of $S$ corresponds to a mechanism by which the two populations differ, and switching between the two populations is represented by conditioning on different values of these variables. $S$ variables locate the mechanisms where structural discrepancies between the two domains are suspected to take place. Transportability bias arises if causal effects cannot be transferred across populations.

The conditions under which causal effects can be transported are listed in [10]. Formally, let $D$ be the selection diagram characterizing the two populations $\pi$ and $\pi^{*}$ with observational distributions $P$ and $P^{*}$, and let $S$ be the set of selection variables in $D$. The causal relation $R = P^{*}(y \mid do(x))$ is transportable from $\pi$ to $\pi^{*}$ if and only if the expression $P(y \mid do(x), s)$ is reducible, using the rules of do-calculus [51] (Sec. 5.2), to an expression in which $S$ appears only as a conditioning variable in do-free terms. Given a DAG, open-source tools like [12] can check for transportability of causal effects automatically.
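A minimal simulation can convey why an effect fails to transport when $S$ points into the target variable. The populations, mechanisms, and numbers below are assumptions for illustration only: $\pi$ and $\pi^{*}$ share the edge $X \to Y$, but the mechanism generating $Y$ differs between them (modeled as a different baseline), so $P(y \mid do(x))$ learned in $\pi$ mis-predicts $\pi^{*}$:

```python
import random

random.seed(2)

# Two populations sharing X -> Y but with a selection variable S -> Y:
# the mechanism f_Y differs (different baseline), so the interventional
# distribution learned in pi does not carry over to pi*.
def sample(n, baseline):
    rows = []
    for _ in range(n):
        x = random.random() < 0.5          # no confounders here, so
        y = random.random() < (baseline + 0.3 * x)  # do(x) = condition on x
        rows.append((x, y))
    return rows

def p_y_do_x1(rows):
    """Estimate P(y | do(X = 1)); valid since X is unconfounded here."""
    ys = [y for x, y in rows if x]
    return sum(ys) / len(ys)

pi = sample(200_000, baseline=0.2)        # source population
pi_star = sample(200_000, baseline=0.5)   # target: S -> Y shifts f_Y
print(f"P(y | do(x)) in pi : {p_y_do_x1(pi):.3f}")      # ~0.50
print(f"P(y | do(x)) in pi*: {p_y_do_x1(pi_star):.3f}") # ~0.80
```

Reusing the source estimate ($\approx 0.50$) in the target population ($\approx 0.80$) would be off by $0.3$: this is transportability bias in miniature.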

As an illustration, consider Figure 6 (i). Let the variable $X$ denote the artist and $Y$ denote the artwork. Suppose the variable $U$ is an unobserved confounder denoting subjective emotions of the artist. Between any two artists, this variable is bound to cause differences, and hence the selection variable $S$ points to $U$ to indicate this difference. For this DAG, the causal effect of $X$ on $Y$ is not transportable or transferable across the two artists. For illustrative purposes, we will ignore the differences due to unobserved factors such as subjective emotions of artists and consider differences in observed variables. There can still be transportability bias, as illustrated through the following case studies.

Figure 6: (i): An example selection diagram (SD). (ii): SD for case studies 4 and 5. (iii): SD illustrating case study 10. The causal effect of $X$ on $Y$ is not identifiable across all scenarios (i), (ii), and (iii).

Case Study 4:

In [72], a model that can transfer a photograph to an artist's style is proposed; for illustration, say photograph to Cezanne. For simplicity, we consider only one genre, say landscapes. The goal is to model Cezanne's style in rendering the landscape corresponding to the photo. Consider Figure 6 (ii), which illustrates one possible selection diagram for this case study. Let $X$ denote the artist/photographer and let $Y$ denote the artwork/photo. Thus, there are two populations corresponding to the choice of $X$ and $Y$, i.e. photographer/photo and artist/artwork. For the style transfer problem, we want to be able to capture the causal effect of the artist on the artwork using the photograph. The shaded squares marked by the symbol $S$ are the selection variables and are used to denote differences in the two populations [11]. Further, there may be other unobserved confounders. However, to illustrate biases beyond the difference in unobserved confounders, we will overlook such confounders in this case study.

The factors that influence an artist are different from those that influence a photographer. For example, Cezanne could be influenced by the art movement whereas the photographer may be influenced by photography trends. This distinction is indicated by the selection variable pointing into $X$. Further, the factors that affect the artwork may be different in the two populations. A photograph may be subject to the camera characteristics, lighting, and measurement errors; the selection variable pointing to $Y$ denotes this difference. When the distinction is associated with the target variable, i.e. $S \to Y$ in Figure 6 (ii), causal effects are not transportable [10]. Thus, there will be transportability bias.

In fact, in Post-Impressionism, the art movement primarily associated with Cezanne, artists had their own individual styles. As mentioned in [61], Cezanne concentrated on the pictorial problems of creating depth in his landscapes. He used an organized system of layers to construct a series of horizontal planes, which build dimension and draw the viewer into the landscape. This technique is apparent in works such as Mont Sainte-Victoire, The Viaduct of the Arc River Valley, and The Gulf of Marseille Seen from L'Estaque [61]. In some of his works, such as Gardanne, Cezanne painted the landscape with intense volumetric patterns of geometric rhythms, most pronounced in the houses, reflective of Cubism. The generated images in [72] do not exhibit such geometric rhythms.

Figure 7: Center: “Propellers”, a Cubist artwork by Fernand Leger. Right: “Armored Train in Action”, a Futurism artwork by Gino Severini. Left: translation of the center image according to the style of the right image by DeepArt. Movement, a key aspect of Futurism, is missing in the translation. Image source: Wikiart

Case Study 5:

In the previous case study, we analyzed bias in the context of style transfer from one genre to another (from photograph to landscape). As another illustration, let us consider the problem of style transfer across art movements and genres. In order to demonstrate transportability bias in this setting, we consider DeepArt.io [23], an online platform that maps the style of one image to the content of another using a neural network architecture [28]. We consider a case study that involves Cubism and Futurism, two art movements of the modern art era. The two movements had many common aspects, yet they diverged in subtle ways, which makes this an interesting case study for analysis. Both Cubism and Futurism focus on representing objects from multiple perspectives/viewpoints and emphasize geometrical shapes. However, Cubism is concerned with forms in static relationships while Futurism is concerned with them in a kinetic state. Futurism emphasized objects and events that involved movement, such as wars, the energy of nightclubs, and so on [64]. As such, we consider the following artworks for the case study.

“Propellers” (center image in Figure 7) is a 1918 Cubist artwork by Fernand Leger. Leger was fascinated with technology, in particular with propellers, and viewed them as objects of beauty close to sculptures [6]. The right image in Figure 7 is a 1915 Futurism artwork by Gino Severini called “Armored Train in Action”. Severini was inspired by Cubism but was a member of the Futurism movement. Futurism used art as a symbol for expressing political and social views. Severini depicted aspects of war, movement, and modernity in this work.

Figure 6 (ii) can serve as a potential DAG for this case study; note, there can be other DAGs based on the assumptions made. The differences between the Cubism and Futurism art movements, combined with the differences in genre, influence the artists and the artworks differently. There is also the effect of unobserved factors, such as the artist's emotions, that influence the artist and the artwork. The leftmost image in Figure 7 corresponds to the “Futurism version” of Propellers. Kinetic patterns, a distinctive feature of Futurism, are absent in the translated image. Given that the original image is that of a mechanical object (propellers), a Futurism version of it should have depicted the movement of the propellers, much like ‘Armored Train in Action’ shows a fractured landscape that accentuates the train's force and momentum as it cuts through the countryside [48]. Thus, there is bias in transferring styles.

Case Study 6:

The previous case study encompassed style transfer across art movements that were similar in many ways. We now consider a case study involving style transfer between Realism and Expressionism, two art movements with marked differences from one another. We consider a common genre, namely portraits, across the two art movements. Thus, this case study serves as a good test of whether style transfer from [23, 28] is effective when the difference between the two styles is significant.

Figure 8: Center: “Miss Mary Ellison”, a Realism artwork by Mary Cassatt. Right: “Erna”, an Expressionism artwork by Ernst Ludwig Kirchner. Left: translation of the center image according to the style of the right image by DeepArt. Distorted subjects, a key aspect of Expressionism, are absent in the translation. Image source: Wikiart

Realism focuses on representing subject matter truthfully, without artificiality and avoiding implausible, exotic, and supernatural elements [62]. The center image in Figure 8 is a Realism portrait by Mary Cassatt. Expressionists, on the other hand, used gestural brushstrokes and distorted subjects to portray intense emotions through their works. The right image in Figure 8 is an Expressionism portrait by Ernst Ludwig Kirchner. As can be observed, the facial features in the right image have been distorted (e.g. a sharp chin resulting in an almost triangular facial contour, pointed nose and ears) to intensify emotions. The left image in Figure 8 is the style-translated version of the center image. Aesthetic innovations typical of an avant-garde movement like Expressionism are not evident in the style-translated version. As Kirchner himself said, in Expressionism the objective correctness of things is not emphasized [54]; rather, a new appearance is created through radical distortions of subjects to evoke intense emotional experiences. The style-translated version is identical to the original except for some color changes. The image neither exhibits distorted features nor gestural brushstrokes that portray intense emotions. As Aristotle remarked, “The aim of art is to represent not the outward appearance of things but their inner significance”. Thus, the style translation does not capture the subtleties of the Expressionism art movement.

Case Study 7:

As another case study to demonstrate biases in “style” transfer, we consider “GoART” [31]. This app allows a user to convert an uploaded photo into various styles spanning art movements such as Byzantine, Expressionism, Cubism, and Ukiyo-e, and artists such as Van Gogh. We consider a 1970 folk artwork by Clementine Hunter titled “Black Matriarch”, shown in Figure 9 (i). Figure 9 (ii) shows the “Expressionism” version of “Black Matriarch” from [31]. As can be noticed, the face is tinted in red. However, this kind of effect is not pronounced in light-colored faces. Consider Figure 9 (iii). This is an early Renaissance sculpture by Desiderio da Settignano. The face in the “Expressionism” version of this sculpture (Figure 9 (iv)) does not show shades of red as significant as those in Figure 9 (ii). Similar results were observed with other styles such as Byzantine, wherein the face of “Black Matriarch” was tinted with shades of blue while the fairer faces were not heavily tinted. There is a noticeable difference in the way faces are converted across styles based on the color of the face, indicating potential racial biases.

Figure 9: (i): “Black Matriarch”, a Folkart by Clementine Hunter. (ii): “Expressionism version” of (i) by GoART. (iii): “Giovinetto”, a Renaissance sculpture by Desiderio da Settignano. (iv): “Expressionism version” of (iii) by GoART. Face color of “Black Matriarch” is changed in the translation unlike in “Giovinetto”, a white sculpture. Image source: Wikiart

Case Study 8:

We consider Abacus.AI's online tool that converts a user-uploaded photo into a different gender: male to female and vice versa [2]. One of their demos shows the translation of the Renaissance painting of Mona Lisa into a masculine face. Thus, we experimented with other Renaissance paintings. Figure 10 (i) is ‘Portrait of a Man Holding an Apple’ by Raphael and Figure 10 (iii) is ‘Portrait of a Young Man’ by Italian artist Piero di Cosimo [62]. Figures 10 (ii) and 10 (iv) are the gender-translated versions of (i) and (iii) by [2], both of which fail to identify the original paintings as those of men. Young men with long hair were mistaken for women, and thus the gender-translated versions of these images depicted masculine faces with beards. In the Renaissance era, it was common for young men to have long hair, often extending from the ears to the shoulders. By not understanding such cultural differences in appearance across genders and ages, men are stereotyped as having short hair, and [2] thus exhibits transportability bias.

Figure 10: (i) Portrait of a man by Raphael, (iii) Portrait of a young man by Cosimo. (ii), (iv): Gender translations of (i) and (iii) respectively. Young men with long hair were mistaken as women by Abacus

7 Biases in Datasets

In this section, we discuss biases due to unrepresentative datasets and due to inconsistencies in annotation.

7.1 Representational bias

The bias that arises from having a dataset that is not representative of the real world is referred to as representational bias. This is a particular type of selection bias. Specifically, in the context of art datasets, there may be imbalances with respect to art genres (e.g. a large number of photographs vs. few sculptures), artists (e.g. mostly European artists vs. few native artists), art movements (e.g. a large number of works concerning the Renaissance and modern art movements as opposed to others), and so on. The availability of artworks is one of the main constraints in collecting a dataset that is representative of bygone times, but the preferences of the dataset curators can also contribute to bias.

Case Study 9:

Consider [4], which was trained using about 45,000 Renaissance portraits of mostly white people [40, 55]. Quite naturally, the system performs poorly on dark-skinned people. Faces depicting different races, appearances, etc. have not been pooled into the dataset, thus contributing to representational bias. This is a particular instance of selection bias that has to do with dataset curation. Algorithms trained using datasets with severe class imbalances are bound to be biased. Figure 5 (iii) illustrates this. Suppose $X$ denotes the artist and $Y$ denotes the artworks; then class imbalance corresponds to $Y \to S$, i.e. the type of artwork influences selection into the dataset (in this case, mostly white Renaissance portraits were selected into the dataset). As condition (iii) of the selection backdoor criterion fails, there is representational bias.
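The curation mechanism $Y \to S$ can be made concrete with a short simulation. The selection probabilities below are illustrative assumptions, not measurements of any real dataset: the world is balanced across skin tones, but light-toned portraits are selected into the dataset far more often, so any model that simply fits the empirical distribution of the curated data inherits the skew:

```python
import random

random.seed(3)

# Representational bias as selection on Y: a balanced world, but a
# curation step that admits light-toned portraits with probability 0.9
# and dark-toned ones with probability 0.1.
N = 100_000
world = ["light" if random.random() < 0.5 else "dark" for _ in range(N)]
dataset = [t for t in world
           if random.random() < (0.9 if t == "light" else 0.1)]

share_world = world.count("light") / len(world)     # ~0.50
share_data = dataset.count("light") / len(dataset)  # ~0.90
print(f"light-toned share in the world:   {share_world:.2f}")
print(f"light-toned share in the dataset: {share_data:.2f}")
```

A 50/50 world becomes a roughly 90/10 dataset, which is exactly the kind of imbalance under which a generative model "quite naturally" performs poorly on the under-represented group.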

7.2 Label bias

This type of bias is associated with inconsistencies in the labeling process. Different annotators may have different preferences, which can get reflected in the labels created. A common instance of label bias arises when different annotations could be used to represent an artwork. For example, a scene with clouds may be annotated as a “cloudscape” by some annotators and more generally as a “landscape” by others.

Yet another type of label bias arises when the subjective biases of evaluators affect labeling. “Confirmation bias” [52], a type of human bias, is closely related to this type of label bias. For example, in a task of annotating emotions associated with artworks, the labels could be biased by the subjective preferences of annotators such as their culture, beliefs, and introspective capabilities. Consider the Behance Artistic Media dataset [67], which provides labels based on emotions such as “happy”, “scary”, “peaceful”, etc. Such labels could be based on annotators' beliefs and can therefore be noisy.

Case Study 10:

To illustrate how annotation inconsistencies can induce bias, let us consider the ArtGAN model [56]. This model uses label information to train the discriminator in the GAN framework. The authors claim that by using labels pertaining to genre (cityscapes, portraits, etc.), art media (such as sketch and study, engraving), and style (e.g. Ukiyo-e), they are able to generate images of those categories. However, the annotations are not necessarily reliable indicators of genres or styles. For example, the “sketch and study” category includes several other categories such as portraits, religious paintings, and allegorical works, to name a few. Figure 6 (iii) illustrates this bias. Let $\pi$ and $\pi^{*}$ denote the environments corresponding to two different annotators. Suppose $X$ denotes an art movement's style (e.g. Ukiyo-e) and $Y$ is the artwork. Annotation inconsistencies across annotators affect the artworks' labels; this is indicated by the selection variable $S$ pointing to $Y$. When there are differences in the target variable (i.e. the label of artworks), the causal effect of $X$ on $Y$ is not identifiable using the annotations provided [10]. Thus, a generative model that leverages such labels in modeling style is prone to bias.

8 Discussion

Art is much more than an aesthetic entity. As elaborated in [70], art imparts “moral knowledge”, i.e. knowledge about what one ought to do and not do [33]. Art also enables “empathic knowledge”, through which one can compare different views of the world through direct experience [21]. Art initiates a conversation with the public [22]; it is a form of language that is not just mimicry but a symbolic transposition [43]. Art is not merely something meant for pleasure; instead it is ‘a form of technology that contributes to knowledge production by exemplifying aspects of the world that would otherwise go overlooked’ [33]. The ethics of art appeal towards a good society [14]. Thus, given its powerful impact in shaping moral and empathetic values, generative art comes with the responsibility of creating art that respects and upholds societal ethics. By coloring the face of the “Black Matriarch”, [31] is not only depicting racial bias but is also inaccurate in its representation of the art movement. As artist Edgar Degas remarked, “Art is not just what you see but what you make others see”. Thus, generative art that does not promote diversity and inclusiveness has the potential of creating and communicating unethical values.

Art reflects the cognitive abilities of the artist [24]. Cognitive abilities include perception, memory, emotions, and other latent aspects of the artist. Generative art that is meant to create art in the “style” of various artists must reflect and respect the artist's cognitive abilities and not stereotype them based on any narrowly defined metric. For example, the “style of Van Gogh” is often largely modeled based on the brushstrokes in his rendition “The Starry Night” or based on certain colors, such as in [56]. Similarly, the “style of Cezanne” in [72] scarcely reflects the variety of geometric patterns that were prominent in his works. Needless to say, cognitive aspects of the artist are not considered. As discussed in Sections 4 and 6, the majority of these issues arise due to framing effect bias and algorithmic bias. Often, style is defined and modeled in a way that suits the algorithm's performance. Not only are several important abilities and achievements of artists overlooked in this process of poor style modeling, but those pertaining to larger art movements are also ignored. For example, advanced techniques such as exaggerated foreshortening and perspective modeling, typical of many Ukiyo-e renditions, are hardly visible in the generated versions [72, 31].

In a recent photo booth titled “Latent Face”, latent vectors of the StyleGAN model were combined to generate “hypothetical children” of subjects depicted in the original portraits [7]. While this may have been just an exploratory experiment, the task exemplifies framing effect bias. Defining “children” as a combination of latent vectors is highly questionable. The latent vectors may or may not have any reliable interpretation, and a very difficult and potentially impossible problem of generating faces of hypothetical children is conveniently framed as a simple task. There are also several ethical concerns associated with such a framing, given that the people depicted in the original portraits were not related or never had children.

Framing effect bias coupled with algorithmic bias contributes to inaccurate knowledge about history. Artworks were often meant to document important events in history such as wars, mythological events, political movements, etc. By wrongly modeling or overlooking certain subtle aspects, generative art can contribute to false perceptions about social, cultural, and political aspects of past times and hinder awareness of important historical events. For example, as discussed in case study 5, in the Futurism artwork “Armored Train in Action”, artist Severini conveys his views on the war that was prevalent during the time. In fact, Futurism artists heavily depicted their views of political events through patterns indicating movement in their artworks. A generated style translation should thus preserve such important characteristics of art movements, or else it will contribute to a bias in understanding history and culture. People have a propensity to favor suggestions from automated systems and to ignore contradictory information, even if it is correct. This is called “automation bias” [63]. As a result, people may give little importance to historical evidence. Further, the evaluation of generative art is often done by people (e.g. Amazon Mechanical Turk workers) who do not necessarily possess domain knowledge. Thus, even if generated art does not accurately represent cultural and historical knowledge, people may endorse it.

Tutorials like [29] and [41] emphasize the need to inspect the design choices made by the creators of AI systems and the socio-political contexts that shape their deployment. Can algorithms accurately model artists' “styles”? More broadly, is the defined problem even solvable? Are there representative datasets and reliable labels to address the defined problem? What are the measurement biases in digitally capturing art? What are the socio-cultural impacts of generative art? Does generative art promote inclusiveness? Who should own responsibility for biases in generative art? These are just some of the questions that need to be analyzed. Also, domain experts (e.g. art historians) should be involved in the generative art pipeline to better inform the process of art creation.

9 Conclusions

In this paper, we investigated biases in the generative art AI pipeline from the perspective of art history. Leveraging structural causal models, we highlighted how current methods fall short in modeling the process of art creation and illustrated instances of framing effect bias, dataset bias, selection bias, confounding bias, and transportability bias. We also discussed the socio-cultural impacts of these biases. We hope our work sparks interdisciplinary discussions and inspires new directions concerning accountability of generative art.

References

  1. A. B. (Editor) (2007) Partisan canons. Duke University Press. Cited by: §3.3.
  2. Abacus.AI (2020) Effortlessly embed cutting edge ai in your applications. In https://abacus.ai, Cited by: §6.3.5.
  3. AICAN (2018) Art of the future, now.. Scope Miami Beach https://uploads.strikinglycdn.com/files/fa23f92f-61c0-4417-92ff-868dca9665a7/AICAN-WebProoffinal.pdf. Cited by: §1.
  4. AIportraits (2020) AIportraits: the easiest way to make your portraits look stunning. https://aiportraits.org. Cited by: §1.2, §1, §2, §6.2.2, §7.1.1.
  5. Artbreeder (2020) Artbreeder: extend your imagination.. https://www.artbreeder.com. Cited by: §1.2, §1, §1, §2.
  6. C. Asendorf (1994) The propeller and the avant-garde-leger, duchamp, brancusi. Fernand leger-The Rhythm of Modern Life. Cited by: §6.3.2.
  7. J. Bailey (2019) Breeding paintings with machine learning. https://www.artnome.com/news/2019/8/25/breeding-paintings-with-machine-learning. Cited by: §8.
  8. J. Bailey (2020) 2020 art market predictions. https://www.artnome.com/news/2020/1/27/2020-art-market-predictions. Cited by: §2.
  9. J. Bailey (2020) The tools of generative art from flash to neural networks. https://www.artnews.com/art-in-america/features/generative-art-tools-flash-processing-neural-networks-1202674657/. Cited by: §1, §2.
  10. E. Bareinboim and J. Pearl (2012) Transportability of causal effects: completeness results. In AAAI, Cited by: §6.3.1, §6.3, §7.2.1.
  11. E. Bareinboim and J. Pearl (2016) Causal inference and the data-fusion problem. Proceedings of the National Academy of Sciences. Cited by: §1.1, §5, §6.1.1, §6.2.1, §6.2, §6.3.1.
  12. E. Bareinboim (2020) Causal fusion. In https://causalfusion.net, Cited by: §6.3.
  13. M. Boden and E. Edmonds (2009) What is generative art. Digital Creativity. Cited by: §1.
  14. P. Brey (2018) The strategic role of technology in a good society. Technology in Society. Cited by: §8.
  15. R. Brooks (2017) The seven deadly sins of ai predictions. MIT Technology Review. Cited by: §1.
  16. M. C (2019) The art market. an art basel and ubs report. https://www.artbasel.com/about/initiatives/the-art-market. Cited by: §2.
  17. R. C and F. B (2007) Processing: a programming handbook for visual designers and artists. MIT Press. Cited by: §2.
  18. Christies (2018) Is artificial intelligence set to become art’s next medium?. https://goo.gl/4LDZjX. Cited by: §1.
  19. M. Coeckelbergh (2017) Can machines create art?. Philosophy and Technology. Cited by: §1, §1.
  20. H. Cohen (2014) 2014 distinguished artist award: harold cohen. https://www.siggraph.org/2014-distinguished-artist-award-harold-cohen/. Cited by: §1, §2.
  21. N. D (1987) Knowledge, fiction, and imagination. Temple University Press. Cited by: §8.
  22. A. Daniele and Y. Song (2019) AI+art= human. AAAI AI Ethics and Society. Cited by: §1, §8.
  23. Deepart.io (2020) Deepart.io. https://deepart.io. Cited by: §1.2, §1, §2, §6.3.2, §6.3.3.
  24. E. Dissanayake (2001) Where art comes from and why. University of Washington Press. Cited by: §8.
  25. K. Dunbabin (1999) Mosaics of the greek and roman world. Cambridge University Press. Cited by: §3.2.
  26. A. Elgammal, B. Liu, M. Elhoseiny and M. Mazzone (2017) CAN: creative adversarial networks, generating ”art” by learning about styles and deviating from style norms. International Conference on Computational Creativity (ICCC). Cited by: §1.2, §1, §2.
  27. A. Elgammal (2019) Faceless portraits transcending time. HG Contemporary New York https://uploads.strikinglycdn.com/files/3e2cdfa0-8b8f-44ea-a6ca-d12f123e3b0c/AICAN-HG-Catalogue-web.pdf. Cited by: §1.
  28. L. A. Gatys, A. S. Ecker and M. Bethge (2016) Image style transfer using convolutional neural networks.. Computer Vision and Pattern Recognition. Cited by: §1, §2, §6.3.2, §6.3.3.
  29. T. Gebru and E. Denton (2020) Tutorial on fairness accountability transparency and ethics in computer vision. CVPR Tutorial. Cited by: §8.
  30. B. Glymour and J. Herington (2019) Measuring the biases that matter. FAccT. Cited by: §5.
  31. GoART (2020) GoART: AI photo effects. http://goart.fotor.com.s3-website-us-west-2.amazonaws.com. Cited by: §1.2, §6.3.4, §8, §8.
  32. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. NeurIPS. Cited by: §2.
  33. T. Gorichanaz (2020) Engaging with public art: an exploration of the design space. CHI. Cited by: §8.
  34. D. Ha and D. Eck (2018) A neural representation of sketch drawings. International Conference on Learning Representations. Cited by: §2.
  35. P. Haeberli (1990) Paint by numbers: abstract image representations. SIGGRAPH. Cited by: §2.
  36. A. Hertzmann, C. Jacobs, N. Oliver, B. Curless and D.H. Salesin (2001) Image analogies. SIGGRAPH. Cited by: §2.
  37. A. Hertzmann (2018) Can computers create art?. arXiv. Cited by: §1, §1, §1, §2.
  38. J. Hong (2018) Bias in perception of art produced by artificial intelligence. International Conference on Human Computer Interaction. Cited by: §2.
  39. Instapainting (2020) AI painter. https://www.instapainting.com/ai-painter. Cited by: §1, §2.
  40. E. Ongweso Jr (2019) Racial bias in AI isn’t getting better and neither are researchers’ excuses. https://www.vice.com/en_us/article/8xzwgx/racial-bias-in-ai-isnt-getting-better-and-neither-are-researchers-excuses. Cited by: §1, §7.1.1.
  41. C. Kaeser-Chen, E. Dubois, F. Schuur and E. Moss (2020) Positionality-aware machine learning: translation tutorial. FAccT Tutorial. Cited by: §8.
  42. M. J. Kusner, J. R. Loftus, C. Russell and R. Silva (2017) Counterfactual fairness. NeurIPS. Cited by: §5.
  43. A. Leroi-Gourhan (1993) Gesture and speech. MIT Press. Cited by: §8.
  44. M. Heidegger (1977) The question concerning technology, and other essays. Garland Publishing. Cited by: §1.
  45. D. Macnish (2018) Cartoonify. https://experiments.withgoogle.com/cartoonify. Cited by: §2.
  46. D. Macnish (2020) Draw this. https://danmacnish.com/drawthis/. Cited by: §2.
  47. A. Mordvintsev, C. Olah and M. Tyka (2015) Deep dream. https://github.com/google/deepdream. Cited by: §1, §2.
  48. Museum of Modern Art (2020) Gino Severini: Armoured Train in Action. https://www.moma.org/collection/works/33837. Cited by: §6.3.2.
  49. Oxford Art Online (2020) Impressionism and post-impressionism. https://www.oxfordartonline.com/page/impressionism-and-post-impressionism/impressionism-and-postimpressionism. Cited by: §3.1.
  50. J. Pearl and E. Bareinboim (2014) External validity: from do-calculus to transportability across populations. Statistical Science. Cited by: §6.3.
  51. J. Pearl (2009) Causality: models, reasoning and inference, 2nd edition. Cambridge University Press. Cited by: §1.1, §5.1, §5.1, §5.2, §5, §5, §6.1, §6.3.
  52. S. Plous (1993) The psychology of judgment and decision making. McGraw-Hill. Cited by: §4, §7.2.
  53. M. Ragot, N. Martin and S. Cojean (2020) AI-generated vs. human artworks. a perception bias towards artificial intelligence?. CHI: Late Breaking Work. Cited by: §2.
  54. The Art Story (2020) Ernst Ludwig Kirchner - biography and legacy. https://www.theartstory.org/artist/kirchner-ernst-ludwig/life-and-legacy/#nav. Cited by: §6.3.3.
  55. M. Sung (2019) The AI renaissance portrait generator isn’t great at painting people of color. https://mashable.com/article/ai-portrait-generator-pocs/. Cited by: §1, §7.1.1.
  56. W. R. Tan, C. S. Chan, H. Aguirre and K. Tanaka (2017) ArtGAN: artwork synthesis with conditional categorical GANs. arXiv. Cited by: §1.2, §2, §4, §4, §6.1.1, §6.2.1, §6.2.1, §7.2.1, §8.
  57. Value (2019) AI artwork goes up for auction at Sotheby’s. https://en.thevalue.com/articles/sothebys-ai-memories-of-passersby. Cited by: §1.
  58. V. van Gogh (1886) Letter to Horace M. Livens, translated by Robert Harrison. http://www.webexhibits.org/vangogh/letter/17/459a.htm. Cited by: §1.
  59. V. van Gogh (1888) Letter to Wilhelmina van Gogh, translated by Mrs. Johanna van Gogh-Bonger. http://www.webexhibits.org/vangogh/letter/18/W04.htm. Cited by: §1.
  60. VanGoghGallery (2020) Vincent van Gogh: Poppies. https://www.vangoghgallery.com/misc/poppies.html. Cited by: §1.
  61. J. Voorhies (2004) Paul Cézanne (1839–1906). Heilbrunn Timeline of Art History. Cited by: §3.4, §6.3.1.
  62. Wikiart (2020) Visual art encyclopedia. https://www.wikiart.org. Cited by: §1, §3.1, §3.2, §3.3, §3, §6.3.3, §6.3.5.
  63. Wikipedia (2020) Automation bias. https://en.wikipedia.org/wiki/Automation_bias. Cited by: §8.
  64. Wikipedia (2020) Futurism. https://en.wikipedia.org/wiki/Futurism. Cited by: §6.3.2.
  65. Wikipedia (2020) Generative art. https://en.wikipedia.org/wiki/Generative_art. Cited by: §1.
  66. Wikipedia (2020) Ukiyo-e. https://en.wikipedia.org/wiki/Ukiyo-e. Cited by: §4.
  67. M. J. Wilber, C. Fang, H. Jin, A. Hertzmann, J. Collomosse and S. Belongie (2017) BAM! The Behance Artistic Media Dataset for recognition beyond photography. ICCV. Cited by: §7.2.
  68. J. M. Wooldridge (2009) Omitted variable bias: the simple case. In Introductory Econometrics: A Modern Approach. Cited by: §6.1.1.
  69. D. Young (2019) Tabula rasa: rethinking the intelligence of machine minds. https://medium.com/@dkyy/tabula-rasa-b5f846e60859. Cited by: §1.
  70. J. O. Young (2001) Art and knowledge. Routledge, London, UK. Cited by: §8.
  71. Z. Lieberman, T. Watson, A. Castro et al. (2009) openFrameworks. https://goo.gl/41tycE. Cited by: §2.
  72. J. Zhu, T. Park, P. Isola and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. ICCV. Cited by: §1.2, §1, §2, §4, §4, §6.1.1, §6.1.1, §6.3.1, §6.3.1, §8.