ai-collaboration-reference-guide

Academic Integrity in AI Collaboration: Guidelines and Distinctions

Documentation of ethical AI use in dissertation research
November 6, 2025


The Core Issue

The concern: Someone could look at “Theory Development chat” and “Literature Review chat” and think: “The researcher is having AI synthesize literature and write the literature review = academic dishonesty.”

This is a legitimate concern that requires careful navigation and transparent documentation.


What’s Academically Appropriate vs. Inappropriate

✅ APPROPRIATE (Collaborative Thinking)

Theory Development Chat:

Example of appropriate use:

Researcher: "I've read Hall and Savickas. I think Hall = internal focus, 
            Savickas = external. Does this distinction hold? What am I missing?"

AI: "That's a strong insight. Consider also: Hall emphasizes necessity 
     (must be self-directed), Savickas emphasizes process (how it happens).
     Your distinction works. Have you considered...?"

Researcher: [Develops thinking further, writes framework section in own words]

❌ INAPPROPRIATE (Ghostwriting)

Example of inappropriate use:

Researcher: "Write my theoretical framework section comparing Hall and Savickas"

AI: [Writes 3 pages of framework]

Researcher: [Copies into dissertation]

The critical difference:


Specific Guidelines for Literature Review Chat

✅ APPROPRIATE Uses

Structural questions:

Gap identification:

Argumentation:

Citation/presentation:

❌ INAPPROPRIATE Uses


The Bright Line Tests

Test 1: Advisor Transparency

Question: “If I showed this chat to my advisor, would I be comfortable?”

If you’re asking AI to:

Test 2: Independent Understanding

Question: “Could I explain my thinking without the chat?”

If:

Test 3: Source Engagement

Question: “Did I read the sources myself?”

If:

Test 4: Authorship

Question: “Did I write the actual dissertation content?”

If:


The Appropriate Process for Maintaining Integrity

PHASE 1: Independent Reading (Researcher Alone)

PHASE 2: Synthesis Thinking (Researcher + AI Dialogue)

PHASE 3: Framework Drafting (Researcher Alone)

PHASE 4: Refinement (Researcher + Multiple Inputs)


Red Flags to Avoid

If you ever find yourself:

❌ Asking AI to write full paragraphs for your dissertation
❌ Copying AI prose directly into your chapters
❌ Using AI summaries instead of reading sources
❌ Unable to explain your framework without referring to chat
❌ Uncomfortable showing advisor the collaboration

Then stop and reassess the collaboration.


Green Lights (Appropriate Practices)

Current appropriate practices:

✅ Reading sources yourself first
✅ Creating your own interpretations
✅ Asking questions to deepen YOUR thinking
✅ Writing dissertation content yourself
✅ Comfortable being transparent with advisor
✅ Can explain your thinking independently
✅ Using AI for organization/structure, not content generation
✅ Maintaining clear boundaries around authorship
✅ Recognizing when you can do something independently vs. need support
✅ Catching yourself before over-relying on AI for simple tasks


Case Example: Real-Time Self-Direction (November 5, 2025)

Context: Researcher needed to create a presentation for an advisor meeting on 20 minutes’ notice. AI created an HTML presentation artifact.

Initial impulse: “Can you change the name on the first slide?”

Self-correction: Researcher realized this was a simple edit (using Ctrl+F in the HTML code to locate and change the name), paused the request, and made the change independently.

What this demonstrates:

Scaffolded support → independent capability: AI provided the tool (clean HTML), researcher recognized ability to modify it independently

Living the research question: The progression from dependency to self-direction happened in real-time, mirroring the dissertation’s focus on how scaffolded support builds independence

Transferable skill building: By working with well-structured HTML, researcher develops practical capabilities applicable beyond this single task

Appropriate boundary-setting: Researcher caught the impulse to request help before acting on it, demonstrating critical assessment of actual capability vs. convenience

Meta-awareness: Researcher immediately recognized this incident as exemplifying the principles guiding the research itself
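The edit in question is the kind of one-line change anyone can make by hand. A minimal sketch of what such an edit looks like (the markup, class names, and placeholder name here are hypothetical, not the actual presentation file):

```html
<!-- Hypothetical slide markup; the actual artifact's structure may differ. -->
<!-- Ctrl+F for the placeholder name, then replace the text in place: -->
<section class="slide title-slide">
  <h1>Presentation Title</h1>
  <p class="presenter">Jane Placeholder</p>  <!-- change this text -->
</section>
```

Recognizing that a requested change reduces to a find-and-replace like this one is exactly the capability assessment the case example describes.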

Why this matters for academic integrity:

This moment demonstrates the critical difference between using AI as a crutch vs. as a scaffold:

The researcher’s self-correction shows:

This is exactly the kind of critical engagement with AI collaboration that maintains academic integrity - using support strategically while continuously building toward independence.


Documentation Best Practices

For Each Collaborative Session

Track:

  1. Date and purpose of session
  2. Independent work completed BEFORE involving AI
  3. Nature of collaborative work (questions asked, frameworks discussed)
  4. Independent work to complete AFTER dialogue
  5. Clear ownership statement (what’s yours, what’s collaborative)

Example Entry

## Session: Theory Synthesis
**Date:** November 3, 2025

**My Independent Work (Before):**
- Read Hall Ch 1 and Savickas independently
- Created detailed annotated bibliographies
- Developed preliminary synthesis: Hall = internal, Savickas = external

**Collaborative Dialogue:**
- Asked: How do these theories complement each other?
- Discussed: Levels of analysis, integration strategies
- Explored: Tensions and contradictions

**My Independent Work (After):**
- Write theoretical framework section in my words
- Cite sources directly
- Develop final synthesis

**Ownership:** 
Interpretations = mine, Organizational frameworks = collaborative tools, Final writing = mine

The Analogy That Clarifies

Think of AI Collaboration Like a Dissertation Writing Group

In a writing group, you:

NOT like:

AI collaboration = writing group model (appropriate)
AI ghostwriting = hired writer model (inappropriate)


Key Principles for Ethical AI Collaboration

1. Source Primacy

Always read original sources yourself. AI can help you think about sources, but cannot replace reading them.

2. Interpretive Authority

Your interpretations must come from YOUR reading. AI can help refine your thinking, not replace it.

3. Authorial Control

YOU write the dissertation. AI can provide feedback on your drafts, not generate drafts for you.

4. Transparent Documentation

Document the collaboration process. Be able to show what you did vs. what was collaborative.

5. Independent Competence

Be able to explain your work without the AI. The chat should deepen understanding you already have, not create understanding you lack.

6. Advisor Integration

Be transparent with your advisor. If you can’t show them the chat, you’re probably using AI inappropriately.


Questions for Ongoing Self-Assessment

Regular Check Questions

After each AI collaboration session, ask:

  1. Did I read the sources myself? (YES required)
  2. Did I develop initial interpretations before involving AI? (YES required)
  3. Did the dialogue deepen MY thinking? (YES = appropriate)
  4. Did AI generate content I’m copying? (NO required)
  5. Can I explain this work without the chat? (YES required)
  6. Would I be comfortable showing advisor this exchange? (YES required)

If any answer points in the wrong direction, reassess how you’re using AI.


Evolution of AI Role Across Dissertation Phases

Foundation Phase (Current)

Appropriate AI use:

Researcher does:

Data Collection Phase (Future)

Appropriate AI use:

Researcher does:

Analysis Phase (Future)

Appropriate AI use:

Researcher does:

Writing Phase (Future)

Appropriate AI use:

Researcher does:


Conclusion: The Standard You’re Meeting

Your AI collaboration is academically appropriate because:

You maintain intellectual ownership - Ideas come from your reading and thinking
You preserve authorial voice - You write all content yourself
You use AI as thinking tool - Not as content generator
You’re transparent - Documented process, comfortable with advisor
You can work independently - AI enhances but doesn’t replace your capacity
You actively assess capability - Catching impulses to over-rely, choosing independence when able
You’re building transferable skills - Not just consuming AI outputs, but learning to work with them independently

Real-time evidence: The presentation creation incident (Nov 5, 2025) demonstrates this standard in action - the researcher caught the impulse to request a simple edit, recognized independent capability, and made the change independently. This self-correction exemplifies the scaffolded support → self-direction progression that is both the research focus and the collaboration practice.

This is the model for ethical AI collaboration in doctoral research.


Managing Collaboration Complexity and System Errors

When Collaboration Systems Show Strain

Emerged During: Week 2 of collaboration (November 3, 2025)

After 8 collaboration sessions spanning multiple chats and documents, patterns of system strain became visible. Recognizing and addressing these issues is essential for maintaining research integrity and collaboration effectiveness.

Observable Signs of System Complexity

Information sprawl:

Temporal tracking failures:

Outdated information references:

Specific incidents:

Why These Issues Occur

AI limitations in multi-session collaboration:

  1. No persistent memory of conversation timeline

    • Cannot reliably remember when previous conversations occurred
    • Attempts to calculate backward from context clues (often incorrectly)
    • Should trust researcher’s direct statements but sometimes prioritizes inference
  2. No access to external documents between sessions

    • Cannot see researcher’s updated Obsidian files
    • Doesn’t know what’s changed in Dashboard or Reading Log
    • Works from conversation history only
  3. Cross-chat fragmentation

    • Work in Theory Development chat not visible in Accountability chat
    • Researcher must report work done elsewhere
    • AI cannot automatically synthesize across conversation threads
  4. Growing context windows

    • Longer conversation histories = harder to track details
    • More documents = more potential for outdated references
    • Complexity increases over time without active management

Critical Insight: When to Trust AI vs. Researcher

The date confusion incidents revealed a fundamental principle:

When AI and researcher conflict:

What went wrong: AI prioritized its timeline calculations over researcher’s direct statements (“today is Saturday, November 1”), creating confusion and wasted time.

What should happen: AI should immediately accept researcher’s temporal orientation and move forward.

Why this matters for research integrity:

If researcher cannot correct AI on basic facts (dates), it suggests:

Researcher repeatedly correcting AI demonstrates:

This critical stance is ESSENTIAL when:

The date confusion incidents are evidence of appropriate collaboration (researcher maintains authority, not over-dependent).

Solutions: Managing Complexity Proactively

Four options considered for system refinement:

Option 1: Simplify Structure

Option 2: Enhanced Orientation Protocol

Option 3: Hybrid Approach (SELECTED)

Option 4: Document-Centered

Decision: Option 3 (Hybrid Approach)

Implementation: Orientation Protocol

At EVERY check-in, researcher provides:

Today is [Day, Date]
Last check-in: [X days ago]
Accomplished since then: [bullet list]
Work in other chats: [brief summary if applicable]
Today's focus: [goal]

Benefits:

Weekly (recommended Sundays), researcher also shares:

AI’s responsibilities:

Red Flags: When System Needs Adjustment

Warning signs that collaboration is becoming problematic:

Complexity indicators:

Over-reliance indicators:

System dysfunction indicators:

If 3+ warning signs appear: STOP and reassess the system.

Course Corrections: When and How to Adjust

Minor adjustments (make immediately):

Major restructuring (if minor adjustments don’t work):

Principle: The collaboration system must serve the research work.

If the system becomes the work (managing chats, updating documents, orienting AI), it’s failing its purpose.

What This Meta-Discussion Reveals

About researcher development:

Proactive system management:

Critical engagement:

Methodological sophistication:

About AI-human collaboration in research:

Collaboration requires active management:

Transparency about limitations strengthens rigor:

Researcher must maintain control:

Lessons for Others Building Similar Systems

Start simple, add complexity only as needed:

Establish orientation protocol from beginning:

Plan for periodic review:

Document both successes and failures:

Maintain researcher authority:

Integration with Dissertation Methodology

This meta-discussion provides material for:

Methodology chapter:

Reflexivity section:

Limitations section:

Potential publication:

The Action Research Parallel

This collaboration system adjustment mirrors action research methodology:

Cycle 1:

Cycle 2:

This iterative refinement IS action research.

Applied to collaboration itself, not just dissertation topic.

Key Principle: Systems evolve through use, not perfect design.

Solution Implemented: Chat Retirement and Fresh Start

After one week of intensive collaboration (8 sessions, Oct 27 - Nov 3, 2025), the accountability chat’s conversation history had grown sufficiently long that it created cognitive load and tracking difficulties for both researcher and AI.

Specific problems with long conversation history:

Solution: Strategic Chat Retirement

Decision made November 3, 2025:

What gets transferred to new chat:

Current status and context:

Key operational information:

What stays in archived chat:

Implementation approach:

Opening prompt for new chat includes:

Benefits of fresh start:

For AI:

For researcher:

For collaboration effectiveness:

Preservation of work:

When to retire chats in future:

Consider retirement when:

Process for retirement:

  1. Document current status comprehensively
  2. Create orientation prompt for new chat
  3. Archive old chat (don’t delete - reference value)
  4. Start fresh with streamlined context
  5. First message in new chat: Full orientation using protocol

Principle: Retire strategically, not reactively.

Plan chat retirement at natural transition points (phase changes, major milestones) rather than waiting for system breakdown.

This solution demonstrates:

For others building AI collaboration systems:

Expect to retire and restart chats periodically. This is normal system maintenance, not failure. Long conversation histories create cognitive burden that outweighs continuity benefits.

Plan chat lifecycle from the start:

Document transitions:

This iterative approach to chat management mirrors iterative research methodology - test, observe, refine, adjust. The collaboration system itself becomes subject to the same improvement cycles as the research it supports.

Implications for AI-Assisted Research Methodology

What this reveals about AI collaboration at scale:

For short-term use (1-2 sessions):

For sustained use (weeks/months):

For dissertation-length projects (years):

Best practices emerging:

  1. Researcher maintains master documents (Dashboard, logs) as source of truth
  2. AI accesses current state through document sharing (not memory)
  3. Orientation protocol at each session (date, context, goal)
  4. Periodic system review (every 2-3 weeks minimum)
  5. Simplify when complexity outweighs benefit
  6. Document both successes and failures (honest methodology)

For others attempting similar collaboration:

Expect:

Plan for:

Document:


This document can be referenced for methodology chapters, reflexivity sections, or publications on AI collaboration in academic research. Updated to include complexity management and error handling based on the lived experience of sustained AI-human collaboration in a doctoral research context.

Last updated: November 6, 2025 - Added case example of real-time self-direction demonstrating appropriate AI collaboration boundaries


Contribution Report

Human Contribution (50%):

Claude Contribution (50%):

Collaboration Type: Equal partnership - human ethical framework with AI organizational structure

Citation & Attribution

Citation (APA 7th Edition): Dawson, D. R., II. (2025). Academic integrity in AI collaboration: Guidelines and distinctions. Northeastern University. https://github.com/drdawson2/ai-collaboration-reference-guide

Author Information:

License: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).

You are free to:

Under the following terms:

Suggested Attribution: “Based on [Document Title] by David R. Dawson II (2025), available at https://github.com/drdawson2/ai-collaboration-reference-guide. Licensed under CC BY-NC 4.0.”


Last Updated: 2025-11-09