I want to understand how instructors actually live with AI in their teaching. So I’ll follow a small group for about two months, ask them to tell me what’s happening as it happens, look closely at the documents they use in class, and then put those pieces together to see patterns.
“At its core, this entire project is an investigation into what I’m calling the ‘pedagogical cut.’ AI has broken our old boundaries, and every day, instructors are forced to make new cuts—drawing lines in their syllabi, their assignments, and their feedback about what is human, what is machine, and how they can work together.
My methodology is designed specifically to capture these cuts in action. The diaries show the small, daily cuts; the critical incidents reveal the big, transformative ones; and the assignments and rubrics are the material evidence—the cuts made tangible.
My analysis will then build a typology of these cuts—from simple ‘instrumental’ cuts that treat AI as a basic tool, to deep ‘cyborgian’ cuts that reconfigure the very identity of the writer. The ultimate contribution of this study will be a rich, theoretically grounded map of these practices, offering a practical guide to making more thoughtful, ethical, and effective pedagogical cuts.”
“what I’m doing” (plain answer)
“I’m following 4–6 instructors over about two months to see how AI shows up in their real teaching. I’m asking them to jot brief diary notes a couple of times a week when something AI-related happens—what happened, how it felt, and whether they changed anything. They’ll also write three short stories about key moments: one where AI really helped, one where it caused tension, and one about a change they made and kept. Then I’ll meet with each person twice to look closely at their actual teaching materials—syllabus language, assignment prompts, rubrics—and talk through how those documents shape what students do.
To analyze this, I keep it simple. First, I look at how their feelings and decisions move over time in the diaries. Second, I compare their documents to see what changed and why. Third, I group moments by the kind of boundary they draw with AI—whether they treat it as a limited tool, a collaborator with visible process, or a deeper hybrid that requires reflection and ethical judgment. I’m not trying to pick winners—I’m describing what each approach makes possible and what it complicates.
To keep this careful, I look at the same issue from different angles (notes, stories, documents), I keep a clear record of my interpretations, I check short summaries with participants when appropriate, and I protect identities. The outcome will be practical patterns: assignment and rubric language, process/provenance practices, and boundary choices that support learning and accountability.”
What I will do with participants
- Who I’ll talk to
- 4–6 instructors at UVic who teach writing or other text-based creative work. I picked a small number so I can go deep with each person.
- What I’ll ask them to do (over 6–8 weeks)
- Short diary notes:
- 2–3 times a week, they’ll write a quick note (5–7 minutes) about any moment where AI came up in their teaching.
- Three prompts: What happened? How did it feel (even in the body—tense, relieved)? Did you change anything because of it?
- Why: This captures real life as it happens, not just what they remember at the end.
- Three short “critical incident” stories:
- One time AI really helped or amazed them.
- One time it caused trouble or tension.
- One change they made and kept.
- Why: These are anchor points where decisions happen. They show what matters and why.
- Two conversations looking at real materials:
- We sit together with their syllabus language, assignment prompts, rubrics, examples, etc., and talk through how they’ve changed them and what those documents are trying to do.
- Why: Documents don’t just “describe”—they shape what happens in class. Looking at them makes invisible decisions visible.
That’s the method. People, diaries, incidents, and documents.
Analysis happens in three passes, like three simple lenses.
- Lens 1: What is the lived experience?
- I read the diary notes and incident stories and look for patterns in feelings and decisions over time.
- Example: In week 1 a teacher says “I’m anxious”; by week 6 they say “I added a provenance requirement and now I feel calmer.” I note that movement.
- Lens 2: What do the documents do?
- I line up the syllabus, assignment, and rubric versions (before/after).
- I ask: What job is this document doing? How does this wording change student behavior? What gets easier/harder to assess?
- Lens 3: Where do they draw the line with AI?
- I group moments into three practical “boundary types”:
- Instrumental: AI is a tool for small tasks (e.g., brainstorming), with clear limits.
- Collaborative: Human + AI work together, but the process must be visible.
- Cyborg: They treat human + AI as a hybrid writer and design for reflection, ethics, and judgment.
- I don’t judge which is “best.” I describe what each boundary enables and what it complicates. (A small sketch after this section shows one way I might record these codings.)
That’s the analysis. Experiences over time; documents in action; types of boundaries.
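To make these lenses concrete, here is a minimal sketch of how I might keep the coded diary entries, written in Python purely for illustration. The record structure, field names (week, feeling, boundary_type), and the two example entries are my own invented assumptions, not a required format; the point is only that the same set of coded records can be read over time (Lens 1) and grouped by boundary type (Lens 3).

```python
from dataclasses import dataclass
from collections import defaultdict

# One coded diary note (field names are invented for this sketch):
# the three diary prompts plus a week number and the boundary type
# assigned during Lens 3 coding.
@dataclass
class DiaryEntry:
    participant: str
    week: int
    what_happened: str
    feeling: str          # e.g., "anxious", "calmer"
    change_made: str      # empty string if nothing changed
    boundary_type: str    # "instrumental", "collaborative", or "cyborg"

# Toy entries, invented for illustration only.
entries = [
    DiaryEntry("P1", 1, "Student asked about using ChatGPT for outlines",
               "anxious", "", "instrumental"),
    DiaryEntry("P1", 6, "Added a provenance note to the essay prompt",
               "calmer", "students attach an AI-use note to each draft",
               "collaborative"),
]

# Lens 1: feelings and decisions over time, per participant.
timeline = defaultdict(list)
for e in sorted(entries, key=lambda e: e.week):
    timeline[e.participant].append((e.week, e.feeling, e.change_made))

# Lens 3: group moments by the boundary type they draw with AI.
by_boundary = defaultdict(list)
for e in entries:
    by_boundary[e.boundary_type].append(e.what_happened)

print(dict(timeline))
print({boundary: len(moments) for boundary, moments in by_boundary.items()})
```

In practice this could simply be columns in a spreadsheet or codes in a qualitative-analysis tool; the sketch only shows that one set of coded records supports both readings.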
Rigor
- I look at the same thing from different angles.
- I don’t rely on interviews alone. I combine diaries (what happened), incident stories (what mattered), and real documents (what they changed). If the story, the feeling, and the document all point the same way, I can trust the pattern. That’s what “triangulation” means here.
- I keep a clear record of my own decisions.
- I’ll write brief notes about how I’m interpreting things and when I change my mind. If someone asks, I can show how I got from raw notes to findings. That’s the “audit trail.”
- I show people small summaries and ask if they recognize themselves.
- I may share a short case summary with a participant and ask, “Does this feel true to your experience?” If they correct me, I adjust. That’s “participant resonance.”
- I time-stamp everything that changes.
- When policies or assignments change mid-term, I keep the dates. Then I can say, “Before this date, it worked like X; after, like Y.” That’s “timestamping artifacts.” (A small sketch after this list shows one way to keep that record.)
- I protect people.
- I remove names, use composites if needed, and don’t report anything to administrators. That’s basic ethics and care.
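As one concrete, entirely hypothetical way to keep the audit trail and the timestamps together, the sketch below assumes a small dated log of document versions kept in Python; the artifact names, dates, and notes are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# One dated version of a course artifact (names, dates, and notes invented).
@dataclass
class ArtifactVersion:
    artifact: str          # e.g., "essay_1_prompt"
    in_effect_from: date   # when this version started being used
    note: str              # brief interpretation note, part of the audit trail

versions = [
    ArtifactVersion("essay_1_prompt", date(2024, 9, 10),
                    "original prompt, no AI language"),
    ArtifactVersion("essay_1_prompt", date(2024, 10, 22),
                    "added provenance requirement after week-6 diary entries"),
]

def state_on(artifact: str, when: date) -> Optional[ArtifactVersion]:
    """Return the version of an artifact that was in effect on a given date."""
    candidates = [v for v in versions
                  if v.artifact == artifact and v.in_effect_from <= when]
    return max(candidates, key=lambda v: v.in_effect_from, default=None)

# "Before this date, it worked like X; after, like Y."
print(state_on("essay_1_prompt", date(2024, 10, 1)))   # the pre-change version
print(state_on("essay_1_prompt", date(2024, 11, 1)))   # the post-change version
```

In practice this might be nothing more than dated filenames in a folder and a running memo; the habit that matters is keeping a date attached to every version and every interpretive note.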
“why this is enough participants” (plain answer)
“I’m going for depth, not breadth. Because I collect frequent, real-time notes and analyze actual documents over time, each participant generates a rich case. With 4–6 varied instructors, I can see patterns that repeat while still keeping the detail that makes the findings useful.”
“what will come out of this” (plain answer)
“We’ll get clear examples of assignment and rubric language, simple ways to require process and provenance, and a small map of boundary choices teachers are already making—plus when and why each works. These are ready to use in faculty development or policy guidance.”