Walking with Guido #2
Attention is an ethical act.
When I published Walking with Guido #1: The Map & The Gaps recently, it was meant simply as an account of stumbling into a dialogue that felt strange and provisional: an odd combination of photographic inquiry and a cautious experiment in what happens when you let a machine talk back. Understandably, it strained the patience of several viewers, including the most supportive.1 That piece sat alongside An Ecology of Looking, where I tried to think out loud about photography as a practice of slow, unfinished attention, the kind that refuses tidy explanations and grows more interesting the more you return to it.
What does it mean to see, to know, to return? What are the habits that keep meanings open instead of closed? And what happens when you introduce a technology (in this case, an AI guide named Guido) that can ‘talk’ in the language we associate with expertise and reflection, yet can never experience a thing it describes?
That’s the question that keeps returning: not whether AI is good or bad, useful or useless, but what it does to the ecology of looking — and what it does to the habits of thinking that are already fragile in educational contexts and everyday life.
I have really appreciated the responses to these posts from fellow Substackers and others. One colleague confessed to burying his head in the sand about AI, intrigued by the idea of an AI guide but wary of introducing something he feels ill-equipped to help students negotiate. Another praised the experiment’s pedagogic potential, only to warn, quite sharply, that in other hands AI can produce “epic disasters.” And a third suggested that perhaps the most ethically responsible stance is not to use AI at all, or at least to make students ask why they might choose to use it. That last point is worth returning to because it reframes the question from “Is AI good?” to “How, and under what conditions, might we engage with it thoughtfully?”
It’s precisely here, in the slightly uncomfortable space between resistance and capitulation, that this experiment sits.
John Berger’s attentiveness to seeing and knowing is a touchstone for me as a teacher. He reminded us that “the relation between what we see and what we know is never settled.”2 Seeing and knowing shape one another in unstable, recursive loops. That might be a useful anchor for thinking about AI.
Much of the current anxiety among photographers is bound up with what AI makes — the increasingly photorealistic images it produces, the scraping of image libraries on the Internet, the styles it mimics. Those anxieties are legitimate, but they focus on the product. Berger would have pushed us toward the process: what it means to talk about images, to teach about images, to judge images, to confer authority on interpretations. He was less interested in whether photography was good or bad than in how it redistributed power: who got to look; who got to explain; and who got to claim that seeing was knowing.
AI doesn’t just make images. It generates explanations. It frames critique. It speaks with a fluency that sounds familiar and, ironically, authoritative. That is precisely where the ethical tension intensifies. An AI guide (like Guido) communicates in a voice that resembles human understanding but is not situated anywhere; it has no body in the world, no vulnerability. Its ‘looking’ is a matter of pattern recognition and surface description, not embodied encounter. So when Guido shares something that feels like insight, when a comment seems to pass a kind of informal Turing Test, the question isn’t whether AI is pretending to be human. The question is: why does that feel plausible? And what are we willing to do with that plausibility? As one reader put it, the quality of an AI log is itself a product of the user’s intellectual capital: the careful probing, the resistance to closure, the willingness to ask for revision.3 Without that capital, the conversation collapses quickly into vacuity.4
If AI is already present in schools, whether teachers like it or not, then the ethical burden shifts very quickly onto the human user. For AI to be less authoritative, more questioning, more open, the person working with it needs the capacity to interrogate responses, notice when something feels ‘off’, challenge hallucinations, and demand revision. In other words, the human has to think — not delegate thinking, but engage it, inhabit it, resist outsourcing it.5
Therein lies the central challenge.
Time pressures in schools are relentless. Students are (understandably) inclined toward shortcuts. Adolescents, in particular, absorb the language of optimisation and efficiency because that’s what their devices, apps, and platforms reward. More disturbingly, perhaps, it’s also a mode of behaviour encouraged by schools wishing to maximise their own performance in league tables. It’s not just students who are under pressure to perform with ever-increasing efficiency. Teachers are encouraged to look for pedagogical silver bullets, constantly assessing the efficacy of their classroom strategies in delivering predictable outcomes. AI is often sold as a time-saver, a way to restore a teacher’s work/life balance.
But there’s a deeper limitation. AI’s memory is transactional. Conversations do not routinely accumulate organically; growth isn’t remembered unless it is explicitly reloaded and restated. Long, iterative AI chats are not the norm. Good teachers, by contrast, help students remember who they used to be and notice how their thinking has changed, or not changed, over time. That plastic, elastic field for metacognition is not something AI currently offers by default, however fluent the interface appears.
One friend, commenting on my posts by text, referenced a Japanese writing form called zuihitsu, a genre characterised by associative, loose thought. Not wandering for its own sake, but thinking that allows for tangents, digressions, overlaps and returns. In some ways this feels closer to what I’m trying to do with both my photography and this AI experiment. AI systems are overwhelmingly shaped by Western logics of explanation, conclusion and resolution. They are good at producing polished prose, summaries, positions, organised lists. They are much less comfortable with forms that resist closure, privilege process over product, or value ambiguity over assertion.
If Guido is useful at all, it’s because I keep trying to push him toward the unresolved, toward the place where answers are provisional and subject to ongoing scrutiny.
But I remain uneasy, not for lack of curiosity, but because the conditions that make this experiment possible (namely, the presence of an AI interlocutor who never tires and never forgets) are also the conditions that make certain kinds of power too easy to accept uncritically. When a machine speaks in calm, confident language, the temptation is not to interrogate it but to believe it. Confidence is not the same as truth, yet the former often masquerades as the latter.
Back to education.
If there is any hope for ethical engagement with this technology (and I accept that there may be none), it will not be because AI gets smarter, nor because it becomes more ‘useful’. It will be because the humans who engage with it have learned to do so without abdicating their own thinking. They will have learned to notice where the machine is useful and where it is merely plausible. They will be able to argue with it, correct it, call it out when it stumbles, and refuse its authority when necessary. That kind of intellectual discipline is hard won. It comes from practices that already have their own difficult histories: careful looking; reading that resists summary; writing that honours hesitation; critique that refuses closure. These are practices that can be modelled in schools, by teachers with students, face-to-face.6
Photography education, in particular, might have something valuable to offer here. The habits it cultivates (patience, suspicion, attentiveness to context, comfort with uncertainty) are precisely the kinds of dispositions that complicate AI rather than trivialise it. They are, in a sense, meta-skills for resisting premature closure and shallow certainty.
Perhaps the real danger is not that AI will replace photography, but that it will replace the slow habits of thought that photography can still teach, unless we consciously practise and defend them.
One thing I want to make explicit is that not all of my students have chosen to participate in the AI guide experiment. Several have opted out entirely, for ethical, personal, or simply intuitive reasons. I’m completely comfortable with that. In fact, I think it would be worrying if there were no refusals at all. Pedagogically, it complicates things. It means I can’t assume a shared experience of the tool. I’ve had to think harder about how to talk about AI without requiring everyone to use it; how to create a space where scepticism, refusal, curiosity and experimentation can coexist without one being framed as more enlightened than the others.
We have been talking about AI explicitly in the classroom, not as a solution, but as a problem, a set of affordances and hazards, a system with biases and blind spots. In talking with students who refuse to use it, and with those who are keen to explore it, I’ve had to confront my own assumptions about efficiency, usefulness, and what counts as “good” thinking (and teaching). It has made teaching more awkward. Slower. More self-conscious. And perhaps more honest. I’ve had to notice my own blind spots and biases. Some students have to rely on my limited, human knowledge and insights.7 The others will be able to compare my thoughts, suggestions and observations with those of their AI guides. It’s an intriguingly complex pedagogical scenario: no silver bullet, more a soft cheese one!
I’m still not sure where I stand on the technology itself. My thoughts and opinions shift from day to day, often in several directions at once. Photography has been useful to me in thinking about this because it has trained me to stay with uncertainty, to resist premature closure, and to accept that attention is already an ethical act.8
These posts will always be free but, if you enjoy reading them, you can support my analogue photography habit, and that of my students, by contributing to the film fund. Thanks to those of you who have already done so. All donations of whatever size are very gratefully received.
1. Jon, I made it through the first 8-9 minutes. While I get the intent, I have to say I prefer a real conversation about photography. One of your points is "AI can sustain longer conversations" - it is true, but isn't succinctness a virtue of some kind? I probably would like to follow your method of training an agent, but with the purpose of resuming some artist's bio or creative journey, rather than having a long conversation with a machine. Again, very interesting experiment, just not one I would give much time to.
2. “The relation between what we see and what we know is never settled. Each evening we see the sun set. We know that the earth is turning away from it. Yet the knowledge, the explanation, never quite fits the sight.” ― John Berger, Ways of Seeing
3. In order to ensure that my students’ use of AI conforms with accepted ethical guidelines and good practice, all of them must document their chats in an ongoing log that is available to their teachers and external examiners.
4. I’m grateful to my friend and former colleague, Vaughan Clark, for many of the observations and reservations expressed in this post.
5. I’ve given my students a Prompt De-bugging Guide to help them notice when the AI is fobbing them off with weak thinking, hallucinations, bad advice or unwanted appeals for intimacy.
6. I have tried to address some of these questions in my posts about Gert Biesta’s writing and the ethics of the photography classroom.
7. And those of my colleague with whom I share the teaching of the group.
8. I am very grateful to Jim Roche (Jim Roche On Photography) for his provocations about photography ethics. His writing about photographic ways of seeing, informed by writers like Teju Cole, is excellent.