In Which I Try AI: Musings and Criticisms From an Artist, Educator and Writer

In an effort to follow up my recent post “Living and Learning with AI?” and gain a greater understanding of artificial intelligence, I have taken the plunge into the world of “smart” computer-generated responses. Ample examples of AI’s aesthetic output have already been documented, but I wanted to see for myself what it does well and what it struggles with. My initial thesis is that if AI is a framework derived from pre-written algorithms and trained to mine extant datasets of imagery and text, then it is going to produce results that reflect generic forms of media and communication. The draw of these AI platforms is that they make generating information or imagery nearly seamless. You still have to apply some level of judgment and cohesiveness to get an apt and fulfilling result out of AI, but it has never been easier for the layperson to produce compelling visual and written content. Is this a sign of a more democratized future, or just another step in the human-initiated process of automation? Does AI threaten the livelihood of artists and writers?

Is AI actually intelligent?

I think it is far too soon to condemn artificial intelligence as the death knell for human creativity or an ominous risk to our careers. Personally, I find the actual process of writing and making traditional forms of art (which includes the painstaking process of creating non-AI digitally rendered art) to be one of the most defining elements of the human condition. For this reason alone, I am not an early adopter of this technology, and have been reluctant to explore it. However, the notion that AI might be used to facilitate both creative and pedagogical processes has piqued my curiosity to the point where I bit the bullet and signed up for OpenAI’s DALL-E and ChatGPT platforms.

I do not see these programs as a replacement for physical labor in the arts, academic and literary fields. There are very few instances where I am compelled to look at, or even discuss, a DALL-E (or Midjourney et al.) generated artwork with the same regard and attention span as I would devote to a traditional work of art (again, I am considering non-AI digital art to be traditional at this point). It is not that AI-rendered art is not impressive; the technical prowess of these platforms is clearly evident. However, I personally find the overwhelming majority of these images to be too shallow in their grasp of human expression, resulting in their lacking the same emotional depth and profundity as a traditional materials-based or performative artwork. AI art may have its place among more traditional concepts of artistic production and aesthetic discourse. I believe it is best used in collaboration with inquiry-based critiques and material-based investigations into human-machine relationships, such as in the work of Stephanie Dinkins, Beth Frey and Martine Syms. But I am not convinced of its value as a tool for personal and symbolic communication when it is presented solely as a standalone art form. An experiment carried out by graphic designer Marian Bantjes reveals the limitations of AI-generated art.

Despite being able to create hyper-realistic imagery and mimic motifs that are common within established art historical styles, there is still a lot left to be desired from both prompting and viewing AI-derived visual art. Bantjes (2023) used Midjourney to test a theory among her colleagues that this specific platform has had issues conceptualizing the form and function of hands. Just do a Google search for “why is AI bad at drawing hands?” and you will be overwhelmed by the results (I got roughly 29,500,000 results in 0.40 seconds!). Hands are obviously a major facet of figurative art. Artistically representing hands is one of the most challenging aspects of traditional art. I still struggle with it, and teaching it has been especially taxing. However, the major advantage I have over a machine in attempting to draw hands is the unique element of human observation and, of course, the physical sensation of having hands. Designer and AI practitioner Jim Nightingale (2023) explains that AI struggles with hands “because the complicated geometry of hands means that there is no universal collection of lines or shapes that AI can use to identify a hand. AI must combine many various shapes and combinations to make convincing hands.” Whereas I can actually turn and distort my own hand into any position that is humanly possible, AI must rely on two-dimensional image data to do the same. A lot of nuance gets lost in this translation.

Bantjes tested the issue by prompting the AI to home in on representing hands and to interpret their qualities in order to generate imaginative imagery. She fed Midjourney the prompt: “hands with carrot-fingers, holding a small white rabbit, moody dark, forest background.” In the samples created by Midjourney there is a rabbit, fingers and carrots, but it is not what Bantjes wanted to visualize. The AI “artist” did not grasp the concept of transforming fingers into carrots. In another attempt, she prompted: “a rabbit wearing red shoes, holding hands with a carrot wearing black shoes.” The results were compared with drawings by children who were given the same prompt. Lo and behold, the children’s drawings were far more accurate in their adherence to the creative inquiry. Bantjes asserts that these examples, along with how AI is trained, show that: “AI is not intelligent. NONE OF IT IS. AI should more accurately be called Massive Data Training, or something like that. It’s a system trained to recognize objects, styles, techniques, and even ‘concepts’ to a very limited degree, but it doesn’t understand those things, or how they relate to each other in the real world.” (Bantjes, 2023a).

I decided to try something similar to Bantjes’ exploration to see if DALL-E would fare any better in regard to how it recognizes objects, styles and techniques, and whether it is able to conceptualize these aesthetic elements into a cohesive representational composition that abides by my prompt. I too chose the themes of food and anthropomorphism, and asked DALL-E to render me an “anthropomorphic hamburger eating a Chicago style hot dog, as a hyper-realistic sculpture.” I was excited, because I really expected to see an animated human-hamburger hybrid chowing down on a hot dog. I ran a Google image search to see what kinds of examples of anthropomorphic hamburgers already exist on the Web. The results were plentiful. However, when the AI’s results came in, I was astounded to see that there was absolutely no hamburger imagery whatsoever. Instead, I was presented with freakish-looking anthropomorphic dachshunds (aka wiener dogs) chowing down on Chicago-style hot dogs in a hyper-realistic, sculptural style. At the very least, DALL-E recognized anthropomorphism and the entire latter half of my prompt, and the results were rather enjoyable and compelling. Like Bantjes said, the AI can recognize objects, styles and techniques to some extent, but it truly does not understand what these elements are and what associations they have with one another. I was hoping to see some kind of surrealness and it did not disappoint in that regard. However, its failure to adhere to what I wanted conceptually made the whole endeavor feel futile.

This image is certainly absurd, hyper-realistic and sculptural, which was my intent when prompting DALL-E to render me an “anthropomorphic hamburger eating a Chicago-style hot dog, as a hyper-realistic sculpture.” However, I feel like something very significant to the subject matter got left out…Where’s my burger!?!

Another point Bantjes makes is that AI’s ability to “discover” and “interpret” the styles of art and artists also has distinct limitations. She explores prompts that include iconic artists like Norman Rockwell, whose style, and offshoots of it, are well represented throughout digital repositories. Even when Midjourney was able to figure out the gist of the Rockwellian aesthetic, it was not truly able to understand the sensibility or conceptual fundamentals behind Rockwell’s form of social realism. Her first prompt, “barber trimming a boy’s hair in the style of Norman Rockwell,” produced imagery more in the vein of what you would expect to see in a Rockwell painting. However, when Bantjes fed the AI subject matter that strays from extant examples of Rockwell’s art, the results went askew. Prompting it to create a “man with his hair on fire, waving his fist at passing cars, in the style of Norman Rockwell” produced compositions that show how AI is a poor scholar of art history. Bantjes (2023b) explains that: “this is because it has no actual intelligence—it is riffing off of many thousands of artworks on the internet by those artists….it has no idea that Rockwell is associated with sweetness, innocence, and a particular era. So basic to a human, incomprehensible to it. And this aspect of understanding is not going to improve in the near future, possibly the distant future, or maybe never.”

When AI is asked to present something the original artist never made or addressed, it is unable to do so because it is only tasked with mining and arranging imagery. Conceptualizing and understanding the intent or context that motivated the artist is not in the purview of AI art generators.

I tried exploring these issues in DALL-E by prompting it to create images of “a Black Friday shopping spree in the style of Hieronymus Bosch.” I chose Bosch because I believe he is fairly well known in today’s culture, and because his work conveys strong social, cultural and religious messages. Bosch’s art not only exists within internet datasets, but it has been subject to reinterpretations among contemporary artists, extending the idea of Boschness (the qualities on which AI trains) further. One example is Carla Gannis’ immersive, multimedia artwork The Garden of Emoji Delights (2015). Gannis remixed Hieronymus Bosch’s early sixteenth-century, Northern Renaissance painting, The Garden of Earthly Delights, into a Pop Art-esque collage reflecting the digital era. Iconography in the form of GIFs and other motifs associated with Web 2.0 is utilized to transform Bosch’s own codified religious vocabulary into a secular dialogue about the signs and symbols we surround ourselves with and use to communicate in our present era.

The first attempt by DALL-E responding to my prompt: “a Black Friday shopping spree in the style of Hieronymus Bosch.” Note that these four images are WAY too tame and orderly for any work by Bosch. Also, what the heck is a “Black Shuoday”?
The most successful of my unsuccessful attempts using DALL-E to render “a Black Friday shopping spree in the style of Hieronymus Bosch.” This image is far too cutesy, commercial and orderly to be in the style of Bosch, or to reflect the Black Friday shopping experience.

Getting back to my prompt, I thought that Bosch’s visions of hellish landscapes, chaotic interactions and human folly would be spot on for a Black Friday scene. He would have abhorred the notion of Black Friday, or any type of commercialization connected to the Christmas holiday. However, the results were lackluster, and that is an understatement. DALL-E clearly has issues interpreting and analyzing the style of Bosch, and conceptualizing how someone might reference the messages in his work to critique Black Friday. I was hoping for DALL-E to produce something more along the lines of Gannis’ profound rendition of Bosch’s art, but therein lies the stark contrast between the artistic processes of humans and machines.

The shift from creators to editors

The thing with AI art generators is that you have to be explicitly clear in order to get something along the lines of what you are envisioning. I do see this as a good example of a “teachable moment,” where you might have to alter your choice of vocabulary and experience a process of trial-and-error several times before getting something close to your desired intent.

The most successful of my unsuccessful attempts using DALL-E to render “a chaotic Black Friday shopping spree in the style of Hieronymus Bosch.” I am titling it: Shopping Mall of Manufactured Delights

I amended my prompt by adding the word “chaotic” before Black Friday. The results, while a bit more apt to the anarchic experience of Black Friday and consumerism as a whole, still fall short of anything that could be linked to Bosch’s work. However, by being more descriptive, I was able to vaguely get the surreal aspects and social critique I had initially wanted. But I am still underwhelmed by these images. They make a very weak and generic statement about consumer culture at best. My favorite image (I initially hesitated to use the word “favorite”) is a jam-packed composition of consumers, expressively rendered so that their bodily features are nearly indistinguishable and they appear to be caught in a maelstrom that alludes to the pandemonium associated with massive in-person shopping sprees. I specifically like how the figures become even less recognizable as they approach the horizon line. The shoppers even seem to meld into the consumer goods, packaging and shopping carts. It approaches the surreal and nightmarish nature typical of Bosch’s oeuvre, although it clearly looks more like something Nicole Eisenman or Dana Schutz might paint (perhaps they somehow ended up in the datasets that DALL-E was mining).

I suspected that with further prompting and refinement, I might be able to achieve an even more grotesque and effective composition. And sure enough, the prompt: “a chaotic Black Friday shopping spree in the style of Hieronymus Bosch’s The Garden of Earthly Delights,” produced some results that more closely allude to an actual work of art and some of the symbols that are associated with Bosch. However, as you can see from the samples, they still fall significantly short in both their formal and conceptual relationship to the theme I had in mind.

The most successful of my unsuccessful attempts using DALL-E to render “a chaotic Black Friday shopping spree in the style of Hieronymus Bosch’s The Garden of Earthly Delights.” It is a bit more true to Bosch and his iconic painting, but still lacks the punch I intended for it to have in terms of both style and concept.

My conclusion from this exercise is that if I am going to go through the trouble of employing this much refinement and exploration, I might as well take matters directly into my own hands. Perhaps the DALL-E results can be source material that I then could collage (either by hand or using traditional digital editing tools like Photoshop) into a composition that is more in line with my initial perception.

This endeavor and conundrum reflects what video essayist Evan Puschak describes as a cultural shift from our being creators to serving the role of editors. While AI is able to mimic our visual, spoken or written vocabulary, I have shown how it fails to understand the meaning of what it is producing. Making art (in all its forms, visual and literary alike) is a uniquely human way of understanding. To assign a machine or computer algorithm to paint, draw, sculpt or write is to forfeit our authentic voice by becoming content with tweaking and editing the language of someone else, or more aptly, something else (Puschak, 2023).

My experience is akin to Bantjes’ experiment and illustrates what she means when she says, “the AI is extremely good at representing paintings by incredibly famous artists within the subject matter that is common to their work” (Bantjes, 2023b). Based on DALL-E’s inability to re-present and re-apply Bosch’s historical criticisms to address contemporary issues of morality, it becomes evident that AI is not very good at thinking like an artist. AI does not embody or employ the artistic habits of mind that artists develop as a result of the art-making process, aside from rudimentary examples of observation and noticing patterns. It also lacks the wherewithal to make purposeful decisions about form, function, content and context. It is unaware of the spectrum of art history and art theoretical discourses, and therefore cannot truly reflect on, explore and understand art worlds. With AI, you do not get any of the pedagogical benefits of materials-based explorations, because the process is all done behind the scenes via data mining algorithms. I cannot fathom how AI will ever be able to replicate the joy and formative development that in-person art making brings.

Similar to how I am averse to using Midjourney/DALL-E and other image-generating platforms in my visual art practice, I would never even consider using ChatGPT for persuasive writing projects, including blog posts, op-eds, artist statements and scholarly thesis statements. But I have discovered that it can be useful for initiating descriptive, narrative and expository forms of writing. In these instances, I consider ChatGPT helpful in the way it can make tedious but essential steps in academic writing, such as drafting an annotated bibliography or a structural outline, easier and more accessible. Academic editor Cara Jordan wrote a great summary of how scholars in the humanities can use ChatGPT ethically, which I highly suggest reading. Some of her suggestions include using AI to cut down on academic admin work (e.g. drafting copy for complex emails or writing boilerplate text for department memos, notes and presentations), generating titles for papers (those of us who publish would agree that this is the bane of our existence) and synthesizing already-written pieces into abstracts or summaries.

The operative word when using a chatbot to write is “initiating.” I do not consider ChatGPT to be the be-all and end-all of the writing process. It is, however, a good tool to use alongside time-honored organizational exercises that lead to solid writing practices. All this being said, I would never submit or publish anything without total scrutiny of its content. There is the existential problem of misinformation being created, manipulated and shared across multimedia platforms. Who is to say that OpenAI’s datasets are not being influenced by the same kinds of revisionist writing and reporting proliferating across online media outlets? We do not have to speculate too hard, because AI has already been used for these purposes, hence the term “deepfake AI” (see: My Great Learning, 2022).

Educating for a future with AI

When harnessed alongside traditional pedagogical methods, AI has the potential to support teachers and students. One of the beneficial outcomes of ChatGPT is that it makes life easier for educators by helping them design lesson plans, evaluations and in-class activities. When a professional educator’s time and energy are stretched thin by a growing number of demands, having ChatGPT in their toolkit can be significantly beneficial for creating a more desirable work-life balance.

Even though ChatGPT is very adept at writing lesson plans, assessments and station rotation models (see: Pickett, 2023), it is still one-dimensional in terms of how effective teaching and learning are achieved. You need to have a deep understanding of both the content of the subject and the individual learning styles of your students in order to see any social, emotional and cognitive impact in the classroom. As we have seen with the artwork experiments, AI cannot do differentiation well without significant human influence. A good teacher deserves tools that make their taxing job easier, but knows they will have to do the legwork to translate AI-generated content into actual quantitative and qualitative results.

We generally do not backtrack when it comes to technology. For better and for worse, we need to acclimate to progressions in our digital age, because as the aforementioned examples show, AI is already being woven into the fabric of our professional fields, educational experiences and personal lives. For this reason, there are already a multitude of media outlets publishing content suggesting ways to integrate AI into the educational curriculum, so that students learn to use these applications responsibly and in service to analog forms of learning. An example is a lesson plan from Katherine Schulten, editor-in-chief of the New York Times Learning Network. Schulten (2023) explains that “first students learn about and share their thoughts on the issues A.I. chatbots raise for schools. Then, we invite them to help design both ethical guidelines and curriculum projects that use the tool for learning.”

The bottom line is that education, like our culture at large, is always in flux. AI is essentially a part of STEAM (science, technology, engineering, art and math) education, which is the current engine driving our educational and professional motivation. But while being able to adapt to new technology is now a necessary skill, being able to powerfully harness it for transformative change requires a criticality and creativity that only we humans are capable of understanding and expressing. Making art of our own devising is a harmonious means of expressing the human condition, and an essential counterbalance to the impersonal world of AI.

References, Notes, Suggested Reading:

Bantjes, Marian. “What Does Artificial Intelligence Do Well?” Print Magazine, 16 January 2023.

Bantjes, Marian. “The Copyright and Impact of AI,” Print Magazine, 24 February 2023.

Jordan, Cara. “5 Ethical Ways Humanities Academics Can Use ChatGPT,” Flatpage, 6 February 2023.

My Great Learning. “All You Need to Know About Deepfake AI,” My Great Learning, 21 November 2022.

Nightingale, Jim. “Why Can’t AI Draw Realistic Human Hands?” Dataconomy, 25 January 2023.

Pickett, Ted. “ChatGPT for Teachers – Doing an hour of work in 6 minutes!” YouTube, uploaded by Ted Pickett, 6 January 2023.

Puschak, Evan. “The Real Danger Of ChatGPT,” YouTube, uploaded by Nerdwriter1, 30 December 2022.

Schulten, Katherine. “Lesson Plan: Teaching and Learning in the Era of ChatGPT,” The New York Times, 24 January 2023.


    1. Thank you! That’s certainly a relevant question, so I am glad you asked. Perhaps I should have put a disclaimer at the top of the post stating that “no AI output was included in the writing of this post aside from using DALL-E to generate a few images”!

      I did note in the post that I am completely opposed to using ChatGPT for any persuasive writing project, which especially includes blogging. From what I have seen of how others have used the platform, I am convinced that AI cannot critically argue and express the issues I am interested in communicating any better than I can! And if it tried, I am sure it would lack both the distinguishable and ineffable qualities that make writing appealing to me.

      One of the main ideas/mantras I find myself repeating in my practice is that “education is experience.” These chatbots do not have any form of lived and learned experience that could compare to ours. Therefore, I feel that using AI to publish any editorial or academic writing is a disservice to the fields of creative writing and education. And unless the author puts a disclaimer at the top of a piece written entirely (or with ample influence) by AI, I would consider it to be a farce, and feel exploited for having read it as if it were argued/expressed by a person.

      So far, I have only used ChatGPT to test whether it can design curriculum well, by prompting it to create a lesson plan for a 9-12 grade self-portrait project using collage materials. It does actually make preparing materials for teaching and learning a lot easier! A lesson plan that would have taken me forty-five minutes or longer, took the AI less than five. And I was then able to spend the time saved tailoring it to make the formulaic structure more suitable for a diverse student body. I think that AI can certainly change the game in terms of facilitating the preparation of content like lesson plans, activity lists and outlines for academic/scholarly papers. However, that is as far as I am willing to go with it as someone who writes for a living as both an academic and creative.


      1. You’re welcome, Adam. I’ve been thinking about your post and your reply since yesterday; very thought-provoking and cogent. I totally agree with you on both ChatGPT and the visual AI tools. It is interesting that ChatGPT was able to help you streamline lesson planning; that seems to be a great use of time and technology. In the past I have had several writings plagiarized, well actually totally lifted verbatim and published without attribution, but not by AI, just by good old-fashioned humans. In terms of my visual art, my best pieces never appear on the blog or online anywhere, so I’m not concerned about any AI artists “borrowing” my visual art works. However, you may find this piece about AI reimagining Vermeer of interest:

        Fascinating and scary simultaneously. I remember the days when the Art World didn’t want to acknowledge Photography as real Art. Now things have progressed to a new level – is AI art actually Art? Thanks again for the excellent post and reply.

