Introduction 



Note from the Director


Kit Nicholls

February 2025
Issue 1


In his 2017 book, Kids These Days: Human Capital and the Making of Millennials, Malcolm Harris summarizes the plot of a late 1950s children’s book, Danny Dunn and the Homework Machine:


Danny and his friends use [a] professor’s cutting-edge computer to do their homework quickly, leaving more time for baseball and their other fun hobbies—like measuring wind speeds with weather balloons. These kids aren’t slackers; they just have better, more self-directed things to do with their time than homework.[1]

When another kid rats them out, Danny “argues that all workers use tools to do their work better and faster and that students should not be prevented from doing the same.” The teacher, of course, doesn’t relent, and Danny is left with “a contradiction: Kids have to be taught how to use tools that will help them reduce their work-time, without it actually reducing their work-time.”[2]

In 2017 we were still years away from encountering the type of powerful homework machine represented by ChatGPT, but we were well into the era of SparkNotes, Wikipedia, and, perhaps more significantly, K-12 educational software like i-Ready, which has automated much of American children’s reading lives.

“Programmed instruction,” B.F. Skinner’s concept for a method of individualizing student learning through “teaching machines,” has been with us since the mid-twentieth century, even if we often imagine that it arrived only recently with personal computers and Chromebooks in classrooms.[3] Rooted in Skinner’s behaviorist psychology, programmed instruction aims to adapt lessons to the ability level of each student in a way that, supposedly, flesh-and-blood teachers cannot.

ChatGPT and AI art generators like DALL-E arrive in this environment as a provocation, a chance to reconsider why we do what we do in school, who we are becoming together in our classrooms. This is no simple matter. In a recent essay in The Chronicle of Higher Education, Anna R. Mills cites John Warner’s suggestion that we make sure our assignments “ask [students] to bring their own unique perspectives and intelligences to the questions we ask them.” But, Mills points out, GPT-4 “can make up text that reads like a ‘unique perspective’ if you ask it, for example, to write ‘from the perspective of a Latinx single mother’ and to throw in some ‘vivid details.’”[4]

Ultimately, if something is in the discourse, there’s no reason to believe it will be invulnerable to automation.

*   *   *

the making of poems
the reason why i do it
though i fail and fail
in the giving of true names
is i am adam and his mother
and these failures are my job.[5]


—Lucille Clifton


In this short poem, Lucille Clifton plays on the idea of the “Edenic language fantasy,” the belief that Adam gave the true names of things, words that were exactly equal to what they described. The fantasy longs for a perfect articulation of word and world before the knowledge of good and evil, before the generation of the world’s thousands of languages, for speech acts that can directly apprehend reality. Her work as a poet, she tells us, is to try to get it right, to write and speak to get closer to truth.

Clifton published “the making of poems” in Two-Headed Woman, a title invoking the African American concept of a seer who can tap into the spirit world to channel ideas from ancestors and other realms. At this point in her career, Clifton was engaged in a consistent practice of automatic writing, which you could imagine as a sort of extreme version of a Ouija board.[6]

Clifton’s evocation of the two-headed woman offers us an ethics of writing for the 21st century: We are obligated—it is our “job”—to work through failure to give true names to things. We draw on our world and the world beyond us to try to get closer and closer to something honest, even if we know that we won’t ever arrive there. This principle extends beyond writing to art-making, drawing and design work, and scientific and technical research.

But we all know that homework doesn’t always feel like the sort of spiritual calling Clifton is imagining. Like all parents, I am confronted with the worksheets, returned quizzes, and general pedagogical detritus that flows from my kid’s backpack onto every nearby surface, especially the floor. Among the mundane fill-in-the-blanks are her skillful doodles, and some of the prompts leave space for her to turn a phrase or make an absurd joke.

We try to archive the best of it, but the homework is relentless. I surreptitiously wad it into the woodstove to help the kindling catch. Or I squint my eyes and dump it in the recycling.

*   *   *

Techno-determinism is a logical error in which one views social phenomena simply as consequences of technological developments—believing, for example, that the invention of smartphones directly causes a distracted, atomized populace. Such thinking leaves out the possibility that the technology in question may itself be an expression of already-existing social, cultural, or economic structures. Student use of ChatGPT to complete assignments can thus be framed as a cause of major “disruptions” to our lives in higher education, but it is in fact at least as much (and probably more so) an effect of decades of standardization, disempowerment, disinvestment, and instrumentalization in our schools—and of a similar process by which workers’ efficiency gains benefit only the highest-paid executives and shareholders. Whether in college or after, we are all Danny Dunn.

To borrow a trope from horror films, the call is coming from inside the house. As one education critic suggested back in the 1960s, “There is a pathos in our technological advancement well exemplified by programmed instruction. A large part of it consists in erroneously reducing the concept of animals and human beings in order to make them machine-operable.”[7] The subjectivity encouraged by our language of optimization and by a brutally unequal economic system is akin to that of a robot. Our human needs—sleep, social life, bathroom breaks—are all signs of our frailty, our failure to live up to the standards of the algorithms that now set the bar for our humanity. The delivery driver peeing into an orange juice bottle might be the best image we have, and a cautionary tale for overworked teachers and students whose animal realities must continually be sacrificed on the altar of productivity.

*   *   *

This first issue of The Center for Writing & Learning Journal explores not the finished product of our writing but the authentic work of learning and thinking that happens when we practice it. At the Cooper Union, we write to generate ideas, art, designs, research, theories, actions, and yes, sometimes, essays. We like to imagine that student work here is something more real than mere “busy work,” but no teacher is perfect, and neither are students. How can we apprehend writing—and by extension any aspect of what we do at the college—as something vital to our sense of agency and intention? As something like a calling? As something we could automate, sure, but why would we ever do that to ourselves?

The Center for Writing & Learning Journal is a space for our community to discuss our teaching and learning work, to hear what individual members of the student body, faculty, and staff are thinking about, and to consider ideas emerging beyond our college, at other universities and in the broader world, that might affect our educational practice.

We welcome contributions and concepts for future issues, each of which will respond to and take shape within the always evolving landscape of our shared learning and making work. Let’s imagine a collective expression of our work that’s far more human, idiosyncratic, and Cooper than anything an algorithm could craft. Let’s try to give true names to the stuff of our pedagogical lives together, even if we’re guaranteed to fail sometimes.



[1] Malcolm Harris, Kids These Days: Human Capital and the Making of Millennials (New York: Little, Brown and Company, 2017), 17.
[2] Ibid.
[3] Audrey Watters, Teaching Machines: The History of Personalized Learning (Cambridge, MA: MIT Press, 2021), 10.
[4] Anna R. Mills, “ChatGPT Just Got Better. What Does That Mean for Our Writing Assignments?” The Chronicle of Higher Education, March 23, 2023, https://www-chronicle-com.cooperunion.idm.oclc.org/article/chatgpt-just-got-better-what-does-that-mean-for-our-writing-assignments.
[5] Lucille Clifton, “the making of poems,” in The Collected Poems of Lucille Clifton, 1965-2010, ed. Kevin Young and Michael S. Glaser (Rochester, NY: BOA Editions, Ltd., 2012), 216.
[6] Marina Magloire, “The Spirit Writing of Lucille Clifton,” The Paris Review, October 19, 2020, https://www.theparisreview.org/blog/2020/10/19/the-spirit-writing-of-lucille-clifton/.
[7] Paul Goodman, qtd. in Watters, 229.






©2025 Center for Writing & Learning | The Cooper Union