Abstract
Research into the effectiveness of AI art tools suggests that generated images are becoming increasingly indistinguishable from human-made artworks, yet research on their perceived value has produced inconclusive results. This study implements a correlational method to explore the relationship between images generated by different AI tools and their perceived value and distinguishability, comparing them to human-made artworks of similar artistic styles. Value, in the given context, is defined as aesthetic appreciation, monetary worth, and perceived meaningfulness. Results imply that the most expensive tool provided the most human-like images, as its works were most often mistaken for human artworks, though not the most valued ones. Perceived value was similar across the different AI tools, with considerable overlap among the ratings for each tool. The average perceived value was higher for human artwork than for generated works on all measured variables, though no significant difference was found. These results invite further research into the effectiveness of AI tools. As AI art tools continue to develop, creating more human-like images with greater perceived value, the current state of traditional art is threatened, as some believe AI art may devalue human art. Regardless, the evolution of art through technology can break the current barriers of traditional art by allowing new degrees of freedom for digital artists.
Introduction
With no signs of slowing down, generative AI is rapidly gaining traction in various fields. From covers of famous musicians to poetry rivaling the works of Shakespeare1, generative AI now produces work that, until recently, seemed achievable only by humans. Yet the development of AI is not as new as some may believe. It can be traced as far back as the 1950s, when the Turing Test was first introduced by computer scientist Alan Turing. The Turing Test is a method used throughout the computing field to test whether a machine can imitate the qualities of a human well enough to be indistinguishable from one2. Since then, AI development has experienced many milestones leading to where it is today, one of the most notable being in 2016, when Google's AI-driven system, AlphaGo, beat a human grandmaster in the complex board game Go3.
With the common goal of maximizing the intellectual productivity of AI technology4, AI development has always prioritized generating products beyond the realm of human capabilities. Public perception of these capabilities is limited, often disregarding the creative abilities of AI in various fields. While AI has arguably mastered the art of problem-solving and logical reasoning, it is still too early to conclude whether it has even begun to rival the intrinsically human skill of creativity. Nevertheless, generative AI is present in various creative fields, including fashion, music, and the visual arts1,2,3,5. The visual arts are among the furthest along in development, with tools built specifically for image creation. The field long relied on the traditional skills of artists before the introduction of AI art, yet debate now surrounds whether the capabilities of AI art tools rival or even surpass those of human artists. Therefore, exploring the effectiveness of several generative AI art tools through the distinguishability and value (aesthetics, meaning, and monetary worth) of generated images in comparison to traditional human artworks will provide further knowledge regarding the creative skills of generative AI.
Literature Review
In every field in which AI has grown a presence, there lie varying perspectives regarding its use, and the visual arts are no exception. With the rise of generative AI tools that use text inputs to generate corresponding images, such as Stable Diffusion, DALL-E, and Midjourney, fears have risen regarding the "death of the artist"6. However, many also believe that generative AI is an artistic tool for the artist rather than a replacement. Photography faced a similar adjustment: it provoked fear among artists when introduced, yet photos ultimately opened new avenues for artwork and creativity rather than replacing them. With the enhanced features of AI tools, artists allow themselves more room in the creative process as their boundaries expand3,4,7. Even so, AI art currently lacks the intentionality and consciousness that human artists possess, suppressing expression8. Further development of the neural networks behind generative AI is necessary, as it will aid AI in coexisting with creative developers9. Neural networks are the AI methods that train machines to process data in ways loosely modeled on human thinking1,3,10,11,7,12,13. As neural network models improve, AI art tools can provide more use in a given field as they adapt to human skill sets. For example, AI can help with art comprehension and appreciation, though these are currently considered human capabilities14. Regardless, the majority of research focuses on image quality and representation. As AI continues to acquire human capabilities, it also embodies limitations concerning culture and aesthetics, authorship, the labor economics of creative work, and impacts on media7. While these limitations are all vital to address, bias is especially prominent. Bias includes a lack of racial, gender, and ethnic representation, as well as stereotypes resulting from the images an AI model is trained on7,15,16,17,18.
In addition to societal limitations, AI artwork is also limited in the quality of perception it receives; consequently, researchers tend to focus on measuring different perceptions of the generated artworks.
Conclusions vary in research focused on human responses to AI art. Some studies conclude that human art is valued more than AI-made art, while others deduce that humans do not recognize the difference between AI and human art19. This variability in results owes largely to the differing formats of the studies done in the field. A study published in Cognitive Research: Principles and Implications conducted an experiment in which participants were randomly assigned to view "AI-generated" or "human-made" artworks, though both groups in fact viewed AI artwork and answered questions regarding their views on each piece. Results showed that participants preferred human-made art over AI-created art due to perceived meaningfulness20, contesting the beliefs of11, as it illustrates that AI art is unlikely to replace artists due to the value humans place on meaning in art. A study published in the Applied Sciences Journal supports the results of20. Its authors found that aesthetic appreciation is greater for human paintings due to their greater emotional depth21. Both studies imply that while generative AI is typically effective in technical fields, it tends to fall behind in areas that involve creativity. Another study, available through the National Center for Biotechnology Information, investigates whether knowing that an artwork was human-made or AI-generated affects opinion of it. It found that art experts favored human-made art over AI-generated art concerning purchase and collection intentions. Meanwhile, participants without professional art experience had no preference for either AI or human-made artwork, focusing instead on the style and aesthetics of the artwork itself22. These findings contradict those of15, where knowing whether the artwork was created by humans or AI affected participants' views of it regardless of their professional art experience.
It can be argued that the findings in21 are in line with those in22, as 70% of participants had art experience and the results showed that participants favored human-made paintings. Overall, it is unclear whether findings on the value of AI artworks are generalizable, given that disclosing the origin of an artwork affects a participant's view of it; hence the importance of incorporating distinguishability into the study21,22.
Aside from aesthetic appreciation and favorability, emotional depth and meaningfulness are equally significant in research. Researchers at Northwestern University discuss the need to treat AI art similarly to traditional human-created art by applying the same aesthetic judgments when interacting with AI-generated images. With traditional artwork, historians and artists evaluate modern and contemporary works with similar themes in mind, themes which are often ignored when analyzing AI-generated artworks due to the common belief that AI-created images cannot exhibit intrinsic human traits23. Bai Liu, a researcher at the Winchester School of Art, University of Southampton, explores the artistic and aesthetic value of AI artworks by interpreting the portrayal of four areas of interest: creativity, motivation, self-awareness, and emotion. Liu's findings depict that AI can synthetically represent emotions through generated works24, which aligns with conclusions from a study analyzing the emotional depth of AI art published in the journal Computers in Human Behavior. The study found that participants reported a presence of emotional value in the computer-generated artworks, though more was conveyed through the human-made artworks. Participants then speculated which artworks were human-made and which were computer-generated. Researchers found that the mean rate of correct guesses was 63.8%, showing that participants can distinguish between the two types of art25. These results24,25 suggest that AI artwork is capable of displaying human emotion, contesting the assumptions and results of20,21,22, where most participants found a lack of emotional depth in AI works. As with the studies measuring perceived value, the difference in results can be attributed to the researchers' choice of whether to disclose the origin of an image before participants view it.
Participants may hold positive or negative biases toward AI art, affecting their perception of an image's aesthetic, monetary, and emotional value, further reinforcing the need to measure distinguishability alongside value.
As research continues to emerge in the field of generative AI art, a trend is clear regarding the priorities of many researchers: measuring the perceived value of artworks after disclosing their origin. Unlike literature, where the Turing Test is a popular method to test whether an AI-generated product is comparable to a human-created one2, AI art research prioritizes the impact of origin on one's perception of the artwork. As a result of disregarding a distinguishability test, studies measuring the perception of AI-generated artworks20,21,26,23,24,25 produced inconsistent results. While25 did address this discrepancy, the structure of the study worked in favor of participants identifying the origin of each image. Subjects were shown each artwork along with its origin in the first half of the study; in the subsequent distinguishability test, they were presented with the same images in randomized order without labels. Subjects may have remembered the images or recognized patterns among AI-generated and human-created artworks from the first half of the study, aiding them in the distinguishability test.
To address this discrepancy regarding distinguishability and value, the research question emerges: To what extent can AI art tools create artwork indistinguishable from and as valuable as human artwork, and how do these variables differ by specific AI tool? Exploring the efficiency of different AI art tools through this question allows for discoveries regarding the effectiveness of AI art tools while measuring value with the image origin kept confidential. Across a handful of sources, the methods employed to test distinguishability25 or value20,21,26,23,24,25 utilized only a single AI art tool to provide the images, implicitly assuming that the particular AI tool used to generate an image does not meaningfully affect the outcomes. If the findings are consistent with prior research in the field, AI artwork will be distinguishable from human-created artwork while exhibiting similar perceived value.
Regardless of the outcome, answering the question provides real-world significance as it clarifies the current state of AI technology in visual arts. AI art can offer new perspectives and insights into the creative process, allowing artists to explore new possibilities. While it is unclear whether AI-created art can replicate the emotional depth of traditional art, if AI art tools can imitate the styles and themes of human artworks, then it has the potential to revolutionize the art world and expand creative possibilities.
Methodology
A correlational design was implemented to address the research question, examining the relationships between the variables of image origin, distinguishability, and value (aesthetic, monetary, and meaningfulness), though not accounting for causality. The method combined elements of previous research designs concerning generative AI art that explored specific aspects of the technology: it compared AI-generated images to human-made images, had participants identify the origin of the works, and had them judge the aesthetics, monetary value, and perceived meaningfulness of all images provided20,21,26,24,25. These elements were chosen to provide the most suitable setting for producing results that answer the question at hand, as their corresponding studies explored similar questions regarding the effectiveness of AI art tools.
Exploring the effectiveness of AI art helped achieve the purpose of assessing the current state of AI art tools, as it allowed open interpretation of results as well as freedom regarding the types of AI tools and genres of artwork incorporated. A correlational design was most appropriate because it allowed for the investigation of relationships between artistic perception and image origin, a relationship that is not clearly defined, making room for exploratory research. Quantitative data was collected when measuring perceived value: respondents rated each aspect of value on a 1-5 scale for each image, ensuring a common scale for all subjects. Categorical data came from having participants distinguish between human art and AI art, yielding a simple binary output. Responses were collected through surveys consisting of multiple-choice questions and optional short-answer feedback. A multiple-choice survey was the most effective method, as it ensured that the data collected was consistent and provided the measures necessary for later analysis. Respondents were high school students in northeast Ohio with a broad range of experience in both art and AI tools; 169 responses were collected overall.
In the survey, subjects identified whether an image was AI-generated and then rated the value of each image. To collect the images, three major generative AI tools were first selected to generate them: DALL-E, Midjourney, and Stable Diffusion. Next, three artists were chosen from the public domain of the Metropolitan Museum of Art database: George Inness, William Michael Harnett, and Eastman Johnson, all artists with distinctive styles from similar periods. For each artist, three artworks were selected as representative of their work. For George Inness, the works were Peace and Plenty, Evening at Medfield, Massachusetts, and Delaware Water Gap. All of the selected works by Inness were landscape paintings, so the prompt for the first set of AI-generated images was "Landscape painting in the style of George Inness, visible brushstrokes, tonalism style." For William Michael Harnett, the works were The Banker's Table, New York Daily News, and Still Life—Violin and Music. All of the selected works by Harnett were still-life paintings, so the prompt for the second set of AI-generated images was "Still-life painting on canvas in the style of William Michael Harnett." For Eastman Johnson, the works were The New Bonnet, The Funding Bill, and The Hatch Family. All of the selected works by Johnson were conversational paintings, so the prompt for the final set of AI-generated images was "Painting of conversation between several people, 1800s, inspired by Eastman Johnson." The prompt for each artist was then input into each of the three generative tools in the following order: Stable Diffusion, DALL-E, and Midjourney. For tools that produce several images per prompt (DALL-E and Midjourney produce four), a random number generator was used to select which image would be kept.
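The selection step above can be sketched in code. The following is a minimal illustration: the study simply used a random number generator, so the `pick_candidate` helper and the dictionary structure here are hypothetical, with only the prompt strings taken from the text.

```python
import random

# Prompts paired with each artist's style, as described in the text.
prompts = {
    "George Inness": "Landscape painting in the style of George Inness, "
                     "visible brushstrokes, tonalism style",
    "William Michael Harnett": "Still-life painting on canvas in the style "
                               "of William Michael Harnett",
    "Eastman Johnson": "Painting of conversation between several people, "
                       "1800s, inspired by Eastman Johnson",
}

def pick_candidate(n_candidates: int) -> int:
    """Randomly select which of a tool's candidate images to keep (1-based)."""
    return random.randint(1, n_candidates)

# DALL-E and Midjourney return four images per prompt; keep one at random.
kept_index = pick_candidate(4)
```

Drawing the kept image at random, rather than hand-picking it, avoids biasing the sample toward each tool's best output.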
At the end of this generation process, 9 AI-generated images and 9 human-made images had been collected. Each image was placed into a Google spreadsheet and labeled by origin, name, copyright, inputted prompt, time to generate, and category. Then the Google Form was created, which first contained prefacing questions regarding each subject's experience with art and AI tools, along with demographic questions (age and gender) to ensure a diverse group of subjects. For each image, four questions were asked: "Is this image AI-generated or human-created?" and "Rank the [meaningfulness, aesthetics, or monetary worth] of the presented image on a scale of 1-5. 5 is most, 1 is least." The images were placed in random order so that the prompt of an image did not affect its position. At the end of the form, subjects were prompted to give feedback regarding their predicted accuracy, their tactics for identifying images, and their final perspective on AI art. To ensure the generated images were as similar in theme as possible, the same prompt was input into each AI tool. Additionally, to control for confounding variables such as prior artistic experience and bias regarding AI art, subjects were asked prefacing questions about artistic and AI knowledge, for use as blocking variables in later analysis if needed.
All data was collected through Google Forms surveys and stored in Google Sheets; because the two tools transfer information automatically, each response variable could easily be aggregated across all respondents. As images were generated and chosen, they were placed into a spreadsheet to ensure that the origin of each image was never lost during data collection. Securing image origin was vital, as it ensured during analysis that human-made images were not analyzed as AI-generated images and vice versa. Once data had been collected, the mean and standard deviation of the value ratings (1-5) were found for each type of image. Analyzing the data in this manner allowed comparison between image origins and easier interpretation of responses. Because all AI tools were given prompts of similar structure, comparison between blocks was justified. To interpret distinguishability, the proportion of correct identifications was calculated, allowing a better understanding of how similar AI artworks are to human-made artworks. Together, these methods of analyzing the distinguishability and value of AI-created works allowed for a better understanding of the effectiveness of the tools utilized.
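The summary statistics described above can be sketched as follows. The toy ratings and identification guesses below are illustrative placeholders, not the actual survey data:

```python
from statistics import mean, stdev

# Hypothetical toy data: each list holds individual 1-5 value ratings of
# images from one origin, pooled across respondents.
ratings = {
    "Human": [4, 3, 5, 3, 4, 2, 3, 4],
    "AI":    [3, 3, 4, 2, 3, 4, 2, 3],
}

# Mean and sample standard deviation of value ratings per origin.
summary = {origin: (round(mean(r), 3), round(stdev(r), 3))
           for origin, r in ratings.items()}

# Distinguishability: proportion of correct origin identifications.
guesses_correct = [True, True, False, True, False, True, True, True]
prop_correct = sum(guesses_correct) / len(guesses_correct)
```

With the real responses in place of the toy lists, `summary` corresponds to the mean ± SD figures reported in the Results, and `prop_correct` to the correct-identification frequencies.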
Results
Distinguishability was presented through the frequencies of correct and incorrect guesses, allowing easy comparison between AI-generated and human-made artworks since the sample size was the same for each block. Z-tests were then used to test whether participants were more likely to mistake an AI image for a human one than to mistake a human image for an AI one, and whether the proportion of correct guesses for AI images exceeded 0.5. The value variables were presented through average ratings, which efficiently describe the center of each distribution (given there were no outliers) and allow easy comparison between means. Accordingly, a t-test for a difference of means was used to look for evidence that human-made images are valued more than AI-generated images. For the z-tests, the large counts condition was met, as there were more than 10 successes and 10 failures in both samples. For the t-tests, the sample data lacked apparent outliers or skewness, allowing the use of the significance test. A sample size of 169 allowed for a diverse set of respondents who also varied in AI and art experience (as indicated by the prefacing questions). Furthermore, the analysis aligned with that of22, where significance tests were used to measure ratings, purchase intention, and collection intention. Utilizing a correlational method allowed for the best collection of data, offering insight into relationships such as that between AI tool price and distinguishability. Existing patterns could be analyzed because interference in the data collection process was unnecessary. The type of data collected also allowed analysis of the correlation between image origin and the specific value variables.
Table 1. Mean value ratings (1-5) by image origin.
Origin | Aesthetic Rating (M ± SD) | Meaningfulness Rating (M ± SD) | Monetary Rating (M ± SD)
Human | 3.395 ± 1.027 | 2.991 ± 1.075 | 3.202 ± 1.050
AI | 3.294 ± 1.133 | 2.803 ± 1.147 | 2.980 ± 1.147
Table 2. Mean value ratings (1-5) by AI tool.
AI Tool | Aesthetic Rating (M ± SD) | Meaningfulness Rating (M ± SD) | Monetary Rating (M ± SD)
Stable Diffusion | 3.274 ± 1.138 | 2.665 ± 1.108 | 2.854 ± 1.136
Midjourney | 3.578 ± 1.013 | 2.998 ± 1.162 | 3.250 ± 1.081
DALL-E | 3.030 ± 1.175 | 2.748 ± 1.147 | 2.834 ± 1.175
Table 3. Identification accuracy by image origin.
Origin | Frequency correctly identified | Frequency incorrectly identified
Human | 68.2% | 31.8%
AI | 63.0% | 37.0%
Table 4. Identification accuracy by AI tool.
AI Tool | Frequency correctly identified | Frequency incorrectly identified
Stable Diffusion | 84.2% | 15.8%
Midjourney | 63.3% | 36.7%
DALL-E | 41.4% | 58.6%
Table 5. Two-sample t-tests for a difference in mean ratings (human vs. AI images).
Variable | Ho (null hypothesis) | Ha (alternative hypothesis) | Test Statistic | P-value
Aesthetic | μ_human = μ_AI | μ_human > μ_AI | t=0.180 | p=0.430
Meaningfulness | μ_human = μ_AI | μ_human > μ_AI | t=0.359 | p=0.362
Monetary | μ_human = μ_AI | μ_human > μ_AI | t=0.428 | p=0.337
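As a rough consistency check, the aesthetic t statistic can be approximated from the reported summary statistics. The unit of analysis here is an assumption (nine per-image means per origin group, which the paper does not state), and the reported standard deviations describe individual ratings rather than image means, so this is only an illustrative sketch of the two-sample t formula, not a reproduction of the exact computation:

```python
from math import sqrt

# Aesthetic summary statistics (mean, SD) for each origin, from Table 1.
m_human, s_human = 3.395, 1.027
m_ai, s_ai = 3.294, 1.133

n = 9  # assumed: nine images per origin group

# Two-sample t statistic, unpooled (Welch) form.
t = (m_human - m_ai) / sqrt(s_human**2 / n + s_ai**2 / n)
```

This yields t ≈ 0.2, the same order as the reported t = 0.180; a mean difference this small relative to the spread of ratings is what drives the large p-value.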
Table 6. Z-tests for identification proportions.
Test | Ho (null hypothesis) | Ha (alternative hypothesis) | Test Statistic | P-value
Two-sample z test for a difference of proportions (p1 = proportion of correctly guessed human images; p2 = proportion of correctly guessed AI images) | p1 = p2 | p1 > p2 | z=3.05 | p=0.001
One-sample z test for a proportion (p = proportion of correctly guessed AI images) | p = 0.5 | p > 0.5 | z=10.1 | p=2.11×10^-24
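The z statistics above can largely be reproduced from the published proportions. The sample size is an inference, not stated in the text: it assumes each of the 169 respondents judged all nine images per origin, giving n = 1521 judgments per group:

```python
from math import sqrt

n = 169 * 9                    # assumed judgments per origin group
p_human, p_ai = 0.682, 0.630   # correct-identification proportions

# One-sample z test: is the AI correct-guess proportion above 0.5?
p0 = 0.5
z_one = (p_ai - p0) / sqrt(p0 * (1 - p0) / n)

# Two-sample z test: are human images identified correctly more often?
p_pool = (p_human + p_ai) / 2  # equal group sizes, so a simple average
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z_two = (p_human - p_ai) / se
```

This gives z_one ≈ 10.1, matching the one-sample statistic exactly, and z_two ≈ 3.0, close to the reported 3.05; the small gap likely reflects rounding in the published percentages.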
Trends in the data show that some tools produced images more comparable to human works than others. Table 4 shows that images from Stable Diffusion were correctly identified more than twice as often as images produced by DALL-E. Furthermore, Table 6 illustrates that when image origin was guessed incorrectly, respondents were more likely to perceive an AI image as human-made than vice versa. For value, the average aesthetic, meaningfulness, and monetary ratings were all highest for human images, as shown in Table 1. Table 2 then shows that among the specific tools, Midjourney performed best in value, followed by Stable Diffusion and lastly DALL-E. This outcome was surprising given that DALL-E provided the images most comparable to human artworks. While not depicted in the tables, responses to the optional free-response question about how participants distinguished AI from human-made images varied, though common cues included image texture, the construction of human features (faces and hands), and the placement of objects.
Discussion
Regarding the research question, AI-generated images are at times indistinguishable from human artworks and are mistaken for human work more often than human artworks are mistaken for AI work. Furthermore, there is no significant difference in the perceived value of AI and human artwork, given the large p-values for each t-test performed (Table 5). Results presented in1, which examined the effectiveness of AI literature, found that the proportion of correct guesses was 51.4% for human poems but only 46.2% for AI-generated poems, aligning with the results of this project, where the proportion of correct guesses for human artworks (68.2%) was greater than that for generated artworks (63%), as shown in Table 3.
Even so, the distinguishability results were not entirely in favor of indistinguishable AI works, as the one-sample z test provided significant evidence that correct and incorrect guesses were not evenly split. The results would be evenly split (0.5 correct and 0.5 incorrect) if participants randomly guessed the origin; however, the z test suggested that the proportion of correct guesses exceeds 0.5 (Table 6). Limitations may arise from the restricted population of participants: only high school students from northeast Ohio were surveyed, who may not be representative of the general population of high schoolers. Furthermore, age is a limiting factor in distinguishing between AI and human works, as the design only accounted for high school-aged respondents (14 to 18 years old). Teenagers are often more familiar with AI images due to the use of AI design on social media platforms. This confounding was accounted for through the prefacing survey questions, such as how familiar respondents were with AI art; the majority had never used AI art tools and were more familiar with AI writing tools, which could explain the distinguishability of the AI images. For the specific AI tools, significance tests were not applied, to avoid false significant results arising from multiple testing. Even so, simply comparing the proportions of correct and incorrect guesses provides an effective perspective on the distinguishability of the AI tools utilized. DALL-E provided the most human-like images, with a frequency of 41.4% correct guesses, indicating that respondents more often than not believed an image generated by DALL-E was human-created. Stable Diffusion provided the least human-like images, with a frequency of 84.2% correct guesses, more than double the proportion for DALL-E.
Images produced by Midjourney fell in the middle in terms of distinguishability, with a frequency of 63.3% correct guesses. These trends in distinguishability also align with the expense of the respective AI tools, as DALL-E was the most expensive while Stable Diffusion was free. Price could be a significant factor, as more expensive AI tools seem to produce images more comparable to human artworks. The distinguishability results were similar to those in25, where researchers found a mean rate of correct guesses of 63.8%, comparable to the 68.2% for human images and 63% for AI images here. Furthermore, the design in25 was similar to the presented project, as both tested distinguishability and value concurrently; the results in25 supported that image origin influenced the perception of an artwork, specifically in terms of evoked emotion. The difference between results can be explained by the different variables measured as well as the design, as25 used an experimental method instead of a correlational one.
In terms of value, human images received a higher average rating in all three categories, though no significant difference was found between the ratings for human and AI images (Table 5). The lack of difference could be due to the limited rating scale, as participants were only given a range of 1 to 5; alternatively, it could reflect the genuinely comparable value of AI-generated images, though further research would be necessary to support that claim. Midjourney produced the images with the highest average ratings for all value variables among the AI tools. Stable Diffusion received higher average ratings than DALL-E for the aesthetic and monetary variables, while DALL-E received a higher average rating for meaningfulness. In22, the ratings for evaluations, purchase intention, and collection intention were greater for human paintings than AI paintings, but the differences were not statistically significant, likewise supporting the results of this study.
For both distinguishability and value, results are strongly influenced by the structure of the prompts input into the AI tools. Due to the design of different AI art tools, certain phrases and sentence structures produce "better" images than others, and the prompts implemented when producing the survey images may not have been the most effective. Consequently, the images produced may not represent the best images these tools can generate. Additionally, all artworks chosen were from similar time periods, not accounting for the nuances of human-made art across generations. Increasing the number of artists as well as the types of artworks would yield more conclusive results. Nevertheless, research into the effectiveness of AI art tools advances the current body of knowledge regarding AI development, as it discusses the current limitations of popular AI tools while also providing supporting evidence for their efficiency. With further research, AI tools can continue to improve in precision and accuracy to serve a greater purpose in the arts.
Conclusion
The results of the study inspire further research into the effectiveness of AI art tools, specifically regarding the value of generated works. While evidence was found that individuals in the study could differentiate between AI and human artworks, the results may be limited by the efficiency of the prompts given to the AI tools and the era of the artwork chosen for comparison. Even so, the information gap regarding the effectiveness of several AI tools in comparison to human artworks was addressed, as evidence supported the greater indistinguishability of DALL-E-generated images and the greater perceived value of Midjourney-generated images. Furthermore, participants were more likely to mistake AI images for human ones than vice versa, implying that with further progress, AI art tools may become indistinguishable from human-made works. Midjourney performed best in terms of perceived value, followed by Stable Diffusion and DALL-E, respectively. Human images performed better on all value variables measured, though no significant difference was found for any of the mean ratings. Utilizing an experimental method that blocks by image origin more efficiently may provide results better suited to inferring causation in the relationship between image origin and the measured variables.
Regardless of these limits on inference, the research provided further knowledge of the effectiveness of several AI tools and left room for improvement. As AI works become indistinguishable from human works, the originality of works is threatened, since AI algorithms must use pre-existing datasets to create images. Perceived value remains one of the key variables in determining the success of AI artworks; if further research can support that AI images are comparable in value, there could be negative implications for traditional artists, as creating AI images may be a more affordable option. With the evolution of AI, continuous research will be necessary to keep up with the efficiency of AI art tools. A crucial aspect of the creative arts is the human sentiment put into artworks; therefore, research delving into views of AI art versus human art would be worth considering for further inquiry.
References
- J. Hopkins, D. Kiela, "Automatically generating rhythmic verse with neural networks," ACL, pp. 168-178 (2017). https://aclanthology.org/P17-1016.pdf
- K. Warwick, H. Shah, “Human misidentification in Turing tests,” Journal of Experimental & Theoretical Artificial Intelligence, 27, pp. 123-135. [Online]. Available: https://web.p.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=9&sid=cd5379a1-5455-4e88-b31f-e53687beef2f%40redis
- A. Kaplan, “Innovation in artificial intelligence: Illustrations in academia, apparel, and the arts,” Oxford Research Encyclopedia of Business and Management, June 2023. Accessed: Sept. 16, 2023. [Online]. Available: https://oxfordre.com/business/view/10.1093/acrefore/9780190224851.001.0001/acrefore-9780190224851-e-421
- J. Doyle, T. Dean, “Strategic directions in artificial intelligence,” AI Magazine, 18(1), pp. 87+, 1997. Accessed: Sept. 15, 2023. [Online]. Available: Gale In Context: Science, https://go.gale.com/ps/i.do?p=SCIC&u=lnoca_nordonia&v=2.1&it=r&id=GALE%7CA19366304&retrievalId=9e3a968d-139b-41d0-917a-cecf962f01d5&inPS=true&linkSource=interlink&sid=bookmark-SCIC
- J. Doyle, T. Dean, “Strategic directions in artificial intelligence,” AI Magazine, 18(1), pp. 87+, 1997. Accessed: Sept. 15, 2023. [Online]. Available: Gale In Context: Science, https://go.gale.com/ps/i.do?p=SCIC&u=lnoca_nordonia&v=2.1&it=r&id=GALE%7CA19366304&retrievalId=9e3a968d-139b-41d0-917a-cecf962f01d5&inPS=true&linkSource=interlink&sid=bookmark-SCIC
- J. Hutson, M. Harper-Nichols, “Generative AI and algorithmic art: Disrupting the framing of meaning and rethinking the subject-object dilemma,” Lindenwood University, 2023. Accessed: Oct. 27, 2023. [Online]. Available: https://digitalcommons.lindenwood.edu/cgi/viewcontent.cgi?article=1463&context=faculty-research-papers
- Z. Epstein, A. Hertzmann, L. Herman, R. Mahari, M. R. Frank, M. Groh, H. Schroeder, A. Smith, M. Akten, J. Fjeld, H. Farid, N. Leach, A. Pentland, O. Russakovsky, “Art and the science of generative AI: A deeper dive,” 2023. [Online]. Available: arXiv:2306.04141v1 [cs.AI].
- T. Feng, “A new harmonization of art and technology: Philosophical interpretations of artificial intelligence art,” Critical Arts: A South-North Journal of Cultural & Media Studies, 36(1), pp. 110-125, 2022. Accessed: Sept. 27, 2023. [Online]. Available: Explora, https://research.ebsco.com/c/nuenzh/details/3xs4mkdytv?limiters=FT%3AY&q=ai+art&db=aph%2Ccph%2Ce862xna%2C8gh%2Chxh%2Clfh%2Ce870sww%2Ce865sww%2Culh%2Cnfh%2Cpwh%2Csch%2Ce869sww%2Ct6o%2Ctth%2Cvoh
- R. E. Wendrich, “Creative thinking: Computational tools imbued with AI,” presented at Int. Des. Conf., 2020. [Online]. Available: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/838A552D3662AD4926CF36B3A4E67CC9/S2633776220000072a.pdf/creative-thinking-computational-tools-imbued-with-ai.pdf
- J. Doyle, T. Dean, “Strategic directions in artificial intelligence,” AI Magazine, 18(1), pp. 87+, 1997. Accessed: Sept. 15, 2023. [Online]. Available: Gale In Context: Science, https://go.gale.com/ps/i.do?p=SCIC&u=lnoca_nordonia&v=2.1&it=r&id=GALE%7CA19366304&retrievalId=9e3a968d-139b-41d0-917a-cecf962f01d5&inPS=true&linkSource=interlink&sid=bookmark-SCIC
- J. Hutson, M. Harper-Nichols, “Generative AI and algorithmic art: Disrupting the framing of meaning and rethinking the subject-object dilemma,” Lindenwood University, 2023. Accessed: Oct. 27, 2023. [Online]. Available: https://digitalcommons.lindenwood.edu/cgi/viewcontent.cgi?article=1463&context=faculty-research-papers
- T. Feng, “A new harmonization of art and technology: Philosophical interpretations of artificial intelligence art,” Critical Arts: A South-North Journal of Cultural & Media Studies, 36(1), pp. 110-125, 2022. Accessed: Sept. 27, 2023. [Online]. Available: Explora, https://research.ebsco.com/c/nuenzh/details/3xs4mkdytv?limiters=FT%3AY&q=ai+art&db=aph%2Ccph%2Ce862xna%2C8gh%2Chxh%2Clfh%2Ce870sww%2Ce865sww%2Culh%2Cnfh%2Cpwh%2Csch%2Ce869sww%2Ct6o%2Ctth%2Cvoh
- R. E. Wendrich, “Creative thinking: Computational tools imbued with AI,” presented at Int. Des. Conf., 2020. [Online]. Available: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/838A552D3662AD4926CF36B3A4E67CC9/S2633776220000072a.pdf/creative-thinking-computational-tools-imbued-with-ai.pdf
- J. She, E. Cetinic, “Understanding and creating art with AI: Review and outlook,” ACM Transactions on Multimedia Computing, Communications and Applications, 18(2), pp. 1-22, 2022. Accessed: Oct. 12, 2023. [Online]. Available: ResearchGate, https://www.researchgate.net/publication/359108284_Understanding_and_Creating_Art_with_AI_Review_and_Outlook
- R. Srinivasan, K. Uchino, “Biases in generative art: A causal look from the lens of art history,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 41-51, 2021. Accessed: Oct. 23, 2023. [Online]. Available: https://dl.acm.org/doi/pdf/10.1145/3442188.3445869
- A. Field, “Mining the ambient commons: Building interdisciplinary connections between environmental knowledge, AI and creative practice research,” Interdisciplinary Science Reviews, 47(2), pp. 185-198, March 2022. Accessed: Sept. 14, 2023. [Online]. Available: https://www.tandfonline.com/doi/epdf/10.1080/03080188.2022.2036408?needAccess=true&role=button
- M. Zameshina, O. Teytaud, L. Najman, “Diverse diffusion: Enhancing image diversity in text-to-image generation,” Oct. 19, 2023. [Online]. Available: arXiv:2310.12583v1 [cs.CV]
- S. Chatterjee, “Art in an age of artificial intelligence,” Front Psychol, Nov. 2022. Accessed: Oct. 25, 2023. [Online]. Available: National Library of Medicine, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9749485/
- A. Oksanen, A. Cvetkovic, N. Akin, R. Latikka, J. Bergdahl, Y. Chen, N. Savela, “Artificial intelligence in fine arts: A systematic review of empirical research,” Science Direct, 1(2), 2023. Accessed: Oct. 15, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S294988212300004X#bib87
- L. Bellaiche, R. Shahi, M. H. Turpin, A. Ragnhildstveit, S. Sprockett, N. Barr, A. Christensen, P. Seli, “Humans versus AI: Whether and why we prefer human-created compared to AI-created artwork,” Cognitive Research: Principles and Implications, 2023. Accessed: Sept. 26, 2023. [Online]. Available: https://cognitiveresearchjournal.springeropen.com/articles/10.1186/s41235-023-00499-6#Sec4
- Y. Sun, C. Yang, Y. Lyu, R. Lin, “From pigments to pixels: A comparison of human and AI painting,” Applied Sciences, 12, 2022. [Online]. Available: https://www.mdpi.com/2076-3417/12/8/3724
- L. Gu, Y. Li, “Who made the paintings: Artists or artificial intelligence? The effects of identity on liking and purchase intention,” Front Psychol, 2022. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9389447/
- J. Hullman, A. Holtzman, A. Gelman, “Artificial intelligence and aesthetic judgment,” Northwestern University, 2022. [Online]. Available: http://users.eecs.northwestern.edu/~jhullman/AI_aesthetic_judgment.pdf
- B. Liu, “Arguments for the rise of artificial intelligence art: Does AI art have creativity, motivation, self-awareness and emotion?,” Arte Individuo y Sociedad, 35, 2023. [Online]. Available: ResearchGate, https://www.researchgate.net/publication/370109562_Arguments_for_the_Rise_of_Artificial_Intelligence_Art_Does_AI_Art_Have_Creativity_Motivation_Self-awareness_and_Emotion
- T. Demmer, C. Kühnapfel, J. Fingerhut, M. Pelowski, “Does an emotional connection to art really require a human artist? Emotion and intentionality responses to AI- versus human-created art and impact on aesthetic experience,” Science Direct, 148, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0747563223002261
- L. Gu, Y. Li, “Who made the paintings: Artists or artificial intelligence? The effects of identity on liking and purchase intention,” Front Psychol, 2022. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9389447