AI in Education: A Useful Tool or An Existential Threat To Learning?

When renowned computer scientist and futurist Ray Kurzweil discussed the future of artificial intelligence (AI) at a Council on Foreign Relations event in November of 2017, he expressed his belief that AI will not replace humans, but rather, enhance us. Kurzweil acknowledged the potential risks associated with AI, saying, “Technology has always been a double-edged sword. Fire kept us warm, cooked our food, and burned down our houses. [AI] technologies are much more powerful.” But Kurzweil also demonstrated his optimism about emerging advancements in AI, adding, “We have a moral imperative to continue progress in these technologies. […] It’s only continued progress, particularly in AI, that’s going to enable us to continue overcoming poverty and disease and environmental degradation.” Despite Kurzweil’s prevailing hopeful attitude towards AI, the question of whether AI will be more helpful or more harmful, whether it should be fueled or feared, has been a growing debate in recent years.

According to Encyclopedia Britannica, “artificial intelligence” refers to “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings, such as the ability to reason, discover meaning, generalize, or learn from past experience.” AI has become increasingly mainstream in recent years, with companies such as OpenAI hosting free programs such as “DALL-E” (released in January 2021) and “DALL-E 2” (released in April 2022), which are able to generate digital images, and “ChatGPT” (released in November 2022), which is able to generate detailed text responses drawing on a wide range of knowledge. While some are excited about these advances in AI technology, others are concerned about the potential implications. These new tools could pose a threat to educators in particular, as teachers are now forced to navigate a classroom landscape in which their students could use AI to produce their work for them.

Madison’s digital art teacher Matt Dunn acknowledges these concerns, but also notes that this is not the first time the world has been forced to reckon with new forms and methods of creating art.

Throughout the 17th and 18th centuries, most art was commissioned, often for family portraits, but the advent of photography put artists’ livelihoods into question. If photography could capture a moment in a fraction of the time traditional art required, what was the purpose of painters? Instead of allowing this new medium to replace them, many artists responded to the change by experimenting. Photography could only capture the world as it was, so artists found new subject matter outside of realism, leading to new and unique kinds of art.

“When photography entered into the conversation and started to be able to do those same things, painters started exploring with things they weren’t doing before like abstract expressionism and surrealism and all these other genres of paintings came after photography because you didn’t have to always do that style of realism,” Dunn said. “I think we are probably going to see a new thing like that happen in the art world where AI art is going to wind up opening new avenues of creation.”

The art community is not the only group considering the repercussions of AI; the release of tools like ChatGPT also makes possible the artificial production of writing. While some see these programs as dangerous, others are unconcerned. Madison English teacher Marc Lebendig falls into the latter category, noting, like Dunn, historical similarities.

“If anything I think it presents some interesting ways to teach differently,” Lebendig said. “And I try to take a long view and I see a lot of parallels to other technological advances that have worried people at the time, but ultimately have not harmed the educational process.”

Lebendig elaborated that, because of how English assignments are designed in his class, it would be difficult for ChatGPT to create something worth submitting.

“I’ve run some ‘experiments’ with ChatGPT on my current assignments, and what it generates…it’s not great,” Lebendig said. “It seems to be able to consistently turn out analysis and writing that, in my class, a student would get a low C for.”

The Hawk Talk gave ChatGPT a prompt from the 2022 AP United States History Exam that asked students to “Evaluate the relative importance of causes of the rise of industrial capitalism in the United States in the period from 1865 to 1900.” College Board’s goal with questions like this one is to encourage students to not just understand the topic—in this case the causes of US industrial capitalism—but to analyze and form opinions on it. Students are given certain guidelines when answering these prompts: include the date and location, give specific evidence, and make sure all parts of the prompt have been addressed. When given this prompt and asked to write a thesis, ChatGPT included all of these elements. In fact, the student-written thesis and the AI-produced thesis are arguably difficult to tell apart. Compare:

The first response: “The rise of industrial capitalism within the United States during the Gilded Age (the period 1865 to 1900) was a result of many factors- the strengthening of regional economies, new business models, a larger labor force- but most impactful was the expansion of the railroad system as it made possible all other factors leading to industrialization and the rise of capitalism in this period.”

And the second response: “The rise of industrial capitalism in the United States from 1865 to 1900 was shaped by a complex interplay of technological advancements, access to resources, favorable economic policies, and a growing market for goods, with technological advancements being the most significant factor in driving the growth of industrial capitalism during this period.”

Which did a student write?

The correct answer is the first one, but the two bear striking similarities. Of course, on short pieces like this, AI use may be harder to detect, whereas on a full-length essay it may be more obvious.

Overall, Lebendig does not see ChatGPT as a major threat to his class, but rather appreciates its value as a future instructional tool for tasks like generating mediocre writing samples. He’s not the only one to express this opinion.

In his blog “Ditch That Textbook,” author and educator Matt Miller gives a list of beneficial uses for AI in the classroom that includes activities like “debate the bot” to practice argumentation and rebuttals, or “use it as a more complex, nuanced source of information than Google” to help students study.

Tammi Sisk, Educational Technology Specialist for FCPS, explains what makes AI-oriented class activities beneficial.

“My advice is to use inquiry-oriented practices and teach students how to formulate really strong questions for GPT that allow them to explore different perspectives, pursue their own wonderings in the context of their academic investigations, emphasize media and information literacy practices, and help students gain an understanding of how AI has the ability to impact individuals, communities and the world in both good and bad ways,” Sisk said.

The onslaught of new AI programs, which have inspired projects that teach students to work with AI rather than exploit it, has led to a perspective shift. While many educators initially saw these AI programs as “the end of schools,” others now see them as a potential classroom aid. Among students, however, opinions on using AI for their assignments vary.

“The thing that is important about art is the meaning behind it, and AI doesn’t have any meaning,” Henry Barnhill (’24) said.

The “meaning” or “emotion” often referenced in conversations about art can generally be labeled as creativity. The debate surrounding AI and whether it can be creative is intriguing, as it requires a definition for creativity, something notoriously hard to pin down. Those who argue against AI claim that because it doesn’t create original concepts but instead draws from the work of humans, it isn’t creative. But then, isn’t that what humans do? As the saying goes, “there’s no such thing as a new idea.”

One could suggest that the stronger argument against AI is that it is often unethical. Just as a student might collect information from a variety of sources to write a research paper, AI uses the work of creators (writers, artists, musicians, etc.) to inform how it manufactures an image or written response. The difference is that students know when they’re plagiarizing, and even if they do it anyway, they know why it’s wrong. AI doesn’t have this awareness; its entire method of production is to take elements of works previously created by humans and blend them into something that passes as a new piece.

According to a Hawk Talk survey of 183 Madison students, 37.7% believe that using AI to create “original” artwork is ethical.

“I believe that using AI to create artwork is not ethical, but I also think that it will eventually become entirely indistinguishable,” Edward Ping Kirsch (’24) said. “AI artwork is trying to replace artists. A very unethical goal.”

This has been a point of particular concern for artists whose work—in many cases, copyrighted—has been used to train AI programs like OpenAI’s DALL-E. The ability of users of programs like DALL-E to request a certain artist’s style, whether that of old masters like Vincent van Gogh or of modern artists like illustrator Greg Rutkowski, also worries artists whose income depends on their unique style. If AI is able to replicate that style, many fear that employers would rather spend 30 seconds on a free platform to create the images they need than pay artists to create an image that may take weeks.

“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski said in an interview for MIT Technology Review last September.

Though he sees it as a potential tool, Dunn also shares some of Rutkowski’s concerns.

“There is a lot of concern for freelance artists that small businesses in particular will start using AI art for their graphic needs instead of hiring freelance artists,” Dunn said.

Before student artists face these challenges as they enter the job market, Dunn believes it is his job to teach students how to use tools such as these responsibly.

“Now I think we have a responsibility to say ok we have this technology and we understand what it can do now it’s a matter of finding the ethical use of it,” Dunn said. “My job is not to determine whether or not a tool is a positive or negative, my job as a teacher is to teach students to use it responsibly.”

Other Madison educators are also focused on how to aid students as they enter the working world. Economics teacher Andrew Foos explains that as AI develops it will certainly replace jobs, but points out that this has been true of all technological developments. He cites the introduction of cars, which made farriers (who shoe horses) obsolete while also introducing new jobs, from assembly line workers to test drivers. What remains important, Foos notes, is skill development.

“I think we’re going to begin to realize, yeah, [AI] is interesting, it’s sort of a toy,” Foos said. “At the end of the day, if you can’t do stuff, you’re useless.”

Ultimately, Foos suggests that if humans can articulate and emphasize the value of their skill set, employers will be attracted to humans rather than AI, and that the best way to deal with AI is by learning to work alongside it.

Technology specialist Tammi Sisk echoes this sentiment.

“It is up to all of us to create the world of AI in which we want to live,” Sisk said.