
Our culture is inundated with technology that can independently solve problems and even exhibit human-like skills such as making art and writing curricula. Here on Artfully Learning, I have covered the brief history of artists exploring artificial intelligence (AI) as a means of embracing technological evolution (e.g., artist and technologist Harold Cohen’s use of his AARON platform, discussed in the post “Living and Learning with AI?”), and in other instances addressed the issues arising from machines learning from and being influenced by our culture (e.g., AI mimicking racial bias; see the post “Social and Emotional Learning for Artificial Intelligence”). The quandary I continue to explore is whether machine learning can be made synonymous with our own learning, so that we can coexist with and even advance within the digital age. Can AI help us broaden our own educational and artistic pursuits? And how can we ensure that we are ethically using technology to communicate information and create symbolic forms of expression?
With AI becoming readily available and easy to use, it is not uncommon to see AI-generated art posted across the web and even in art galleries and museums. But is this current wave of AI art good? Is it ethically sourced? In art theory and art history, the topic of authorship has been scrutinized and debated somewhat ad nauseam (see: Hansen, 2021); with AI, the identity and importance of the artist is obscured even further. This is underscored by the fact that courts of law are refusing to grant copyright to AI-generated art because the work relies too heavily on non-human entities (see: Knibbs, 2023).
There is also the issue of how the artwork is used and where the machine sourced its imagery. Any of us who has ever made a work of art should be familiar with the idea that ideas and imagery do not arise in a vacuum. Even art drawing on subconscious and abstract forms has a basis in our shared human experience. We see or experience something that moves us and store it in our memory banks, then express it in our own distinct way through the making of art. Briefly returning to the concept of authorship and its oft-proclaimed “death,” writer Sierra Élise Hansen (2021) notes: “At the end of the day, the death of the author is about the consumer of the art. It is about the consumption of art without ever stopping to ask the author what they want or what they wanted, at least not directly. The reader can ask themselves what they think the author may have intended, but such a reading or interpretation is still rigorously based on the art itself.”
But what about when that author does not consent to their ideas and images being used by AI? Can the preceding statement apply when the author might not even know that their work is being sourced for machine-generated art? A number of artists are pushing back against generative AI art platforms because the imagery used to train them is drawn from vast datasets that contain those artists’ intellectual property. Other artists are finding ways to coexist with this technology, even assuming educator roles that seek to teach AI how to think and behave as a human artist might. Cohen’s collaboration with AARON, which ran from the 1970s through 2016, is an early example of an attempt to teach a machine to think and act like an artist. While AARON was largely reliant on the algorithms Cohen fed it, it eventually became able to make some stylistic and contextual decisions in line with the way humans compose a painting.
The issue, as I have witnessed firsthand and discussed in depth in a post titled “In Which I Try AI: Musings and Criticisms From an Artist, Educator and Writer,” is that the current wave of readily available AI-generated art typically falls short of producing effective works of art. This is because AI is trained only to mimic what it gleans from datasets, without being given context about the history of art and visual culture. In other words, AI is somewhat decent at mimesis, but it has no knowledge of art history, art theory or the foundational principles and elements of aesthetics.
Artists are adept problem solvers, often at the forefront of using new technology, media and cultural phenomena to communicate and express interdisciplinary themes. So it should come as no surprise that while some artists are loath to see AI progress, others are enthusiastic about utilizing it as a resource and raw material that can expand their creative repertoire. The potential for AI to develop artistically in a manner akin to how humans build aesthetic skills and knowledge is being advanced by several contemporary artists, including David Salle and Holly Herndon.
David Salle is among the most accomplished contemporary painters working today. His recent year-long involvement with an AI art platform has been a personal endeavor aimed at discovering whether he could teach it to create a “David Salle” work of art that would fool his gallerist. To do that, Salle needed to ensure that the AI could dig deeper than simply reproducing patterns gleaned from the imagery in its training data, which is essentially what a diffusion model (the technology behind most current image generators) does on its own. Collaborating with technologists Grant Davis and Danika Laszuk, Salle wittily remarked that they were “sending the machine to art school” (quoted in Small, 2023).
To train AI to recognize and understand Salle’s work, which combines technically skilled renderings with conceptual themes, the team needed to teach the machine how to think like Salle. They established a diffusion model trained on both complete and detailed images of Salle’s paintings, fed with poetic prompts sourced from Salle’s literary friends, including Sarah French and Ben Lerner. This undertaking seeks to marry Salle’s highly technical style with the conceptual framework that motivates his art practice. The team generates numerous images, carefully sifts through them, and selects those they deem most successful. Salle then adds his own mark by drawing on top of the selected images, which serves as a critique of the machine’s initial output and helps it further learn Salle’s intricate process and mindset. This parallels the way an art professor might offer formal critique in an art school setting, so Salle’s quip about sending the machine to art school is an apt description of what is happening.
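For readers curious about the shape of this workflow, the loop described above (generate candidates, curate the strongest, let the artist draw corrections on top, and fold those corrections back into the training data) can be sketched in code. This is a schematic illustration only, not Salle’s actual pipeline: every function here is a simplified stand-in I have invented for the sketch, and no real diffusion model is involved.

```python
import random

def generate_candidates(model_state, prompt, n=4):
    """Stand-in for sampling images from a fine-tuned diffusion model."""
    return [f"{prompt}|seed={random.randrange(10**6)}|v{model_state['version']}"
            for _ in range(n)]

def curate(candidates):
    """Stand-in for the team sifting outputs and keeping the strongest."""
    return candidates[:1]  # keep one "successful" image per round

def annotate(image):
    """Stand-in for the artist drawing on top of a selected output."""
    return image + "|artist-overdraw"

def training_round(model_state, prompt):
    """One round: generate, curate, annotate, and grow the training set."""
    selected = curate(generate_candidates(model_state, prompt))
    corrected = [annotate(img) for img in selected]
    model_state["training_set"].extend(corrected)  # critique becomes new data
    model_state["version"] += 1                    # stand-in for retraining
    return model_state

model = {"version": 0, "training_set": []}
for prompt in ["poetic prompt A", "poetic prompt B"]:
    model = training_round(model, prompt)
```

The point of the structure is the feedback edge: the artist’s corrections re-enter the training set, so each “retraining” round sees the critique of the previous one, much as a student revises after a studio crit.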
While Salle has enrolled AI in art school, Holly Herndon has created a music conservatory for AI, so to speak. Herndon is a multidisciplinary artist and musician who, along with her collaborator Mat Dryhurst, developed distinct AI generators named Spawn and Holly+, which utilize deepfake technology to learn the styling of Herndon’s own voice. For Herndon’s 2019 album Proto, the duo taught Spawn to compose music by feeding it audio files featuring Herndon’s vocals. They expanded Spawn’s grasp of other vocal ranges by coordinating “training ceremonies”: live call-and-response singing performances in which Herndon, Dryhurst and several other human participants train Spawn by singing to it. Once Spawn develops a grasp of a particular vocal range, it is able to compose music on its own and even improvise in that timbre. You can hear some of Spawn’s earliest sounds on the first track of Proto, aptly titled “Birth”; Herndon and Dryhurst have called Spawn their “AI baby.” In 2021, they created Holly+, an AI music generator that lets users upload melodies to be performed by a deepfaked version of Herndon’s voice. In 2022, Herndon released a cover of Dolly Parton’s “Jolene” using Holly+.
In my aforementioned post “In Which I Try AI: Musings and Criticisms From an Artist, Educator and Writer,” I detail my frustrations with feeding AI text prompts in order for it to create a visual composition. The results were never on par with my initial intent and vision. That is because AI is not given any context about the intent of the artist, only the final product of their laborious process. In order to ensure that AI creates something more original and in tandem with the concepts its human collaborator is feeding it, it must have an understanding about how art is made.
Backward design is a term we use often in education when we want to ensure that students are given the tools and information needed to gain a replete understanding of specific content. A backward design process defines the objective of teaching as more than getting through a predetermined amount of content: it prioritizes the intended learning outcomes over the topics to be covered, then works backwards to determine the differentiated lessons and assessments that will facilitate meeting those outcomes. I believe the pedagogy behind backward design is applicable to fostering more ethical and successful results from AI art platforms.
In both Salle’s and Herndon’s work, backward design has allowed them to scaffold the aesthetics and ideas they intended for AI to produce. These explorations have yielded insights into how AI can produce personalized artworks in tandem with the distinct vision and expression of its human collaborator. Backward design is beneficial because it aligns learning with living purposefully and intentionally. It ensures that knowledge is actually transferred rather than force-fed via rote memorization and standardized forms of data collection, eschewing that kind of didactic pedagogy in favor of educating in a manner that facilitates understanding and reciprocity. Implementing a form of backward design when working with AI increases the likelihood that it will perform by understanding the meaning and implications behind our tasks. Doing and discerning with purpose should be the purpose of living and learning for humans and machines alike.
References, Notes, Suggested Reading:
Hansen, Sierra Élise. “We must stop getting Death of the Author Wrong,” The Michigan Daily, 2 March 2021. https://www.michigandaily.com/opinion/columns/we-must-stop-getting-death-of-the-author-wrong/
Knibbs, Kate. “Why This Award-Winning Piece of AI Art Can’t Be Copyrighted,” Wired, 6 September 2023. https://www.wired.com/story/ai-art-copyright-matthew-allen/
Small, Zachary. “Turning an Algorithm Into an Art Student,” New York Times, 1 October 2023. https://www.nytimes.com/interactive/2023/09/22/arts/design/david-salle-ai.html