Academic Integrity in AI Collaboration: Guidelines and Distinctions
Documentation of ethical AI use in dissertation research
November 6, 2025
The Core Issue
The concern: Someone could look at a “Theory Development chat” and a “Literature Review chat” and conclude: “The researcher is having AI synthesize the literature and write the literature review = academic dishonesty.”
This is a legitimate concern that requires careful navigation and transparent documentation.
What’s Academically Appropriate vs. Inappropriate
✅ APPROPRIATE (Collaborative Thinking)
Theory Development Chat:
- Read sources independently ✅
- Create initial interpretations ✅
- Ask synthesis QUESTIONS (not “synthesize for me”) ✅
- Receive frameworks for thinking ✅
- Develop synthesis through dialogue ✅
- Write actual framework sections yourself ✅
Example of appropriate use:
Researcher: "I've read Hall and Savickas. I think Hall = internal focus,
Savickas = external. Does this distinction hold? What am I missing?"
AI: "That's a strong insight. Consider also: Hall emphasizes necessity
(must be self-directed), Savickas emphasizes process (how it happens).
Your distinction works. Have you considered...?"
Researcher: [Develops thinking further, writes framework section in own words]
❌ INAPPROPRIATE (Ghostwriting)
Example of inappropriate use:
Researcher: "Write my theoretical framework section comparing Hall and Savickas"
AI: [Writes 3 pages of framework]
Researcher: [Copies into dissertation]
The critical difference:
- Appropriate: AI as thinking partner; dialogue deepens YOUR understanding; YOU write
- Inappropriate: AI as ghostwriter; AI generates content; you copy
Specific Guidelines for Literature Review Chat
✅ APPROPRIATE Uses
Structural questions:
- “How do scholars typically organize literature reviews in EdD dissertations?”
- “Should I organize by theme or by theory?”
- “How many subsections for a 30-page lit review?”
Gap identification:
- “I’ve read X, Y, Z on annual reviews. What seems to be missing?”
- “Are there other theories I should consider?”
- “What literature am I not seeing?”
Argumentation:
- “I’m arguing X. Does this logic hold?”
- “What’s the counter-argument to my claim?”
- “How do I address this contradiction?”
Citation/presentation:
- “How do I introduce a secondary theory?”
- “Is this the right place to cite this source?”
- “Does this transition work?”
❌ INAPPROPRIATE Uses
- “Write my literature review section on protean career theory”
- “Synthesize these 10 sources for me”
- “Draft 3 pages on annual reviews in higher education”
- “Create my theoretical framework”
The Bright Line Tests
Test 1: Advisor Transparency
Question: “If I showed this chat to my advisor, would I be comfortable?”
If you’re asking AI to:
- ✅ Help you THINK through synthesis → Comfortable
- ✅ Provide QUESTIONS to deepen analysis → Comfortable
- ✅ Identify GAPS or TENSIONS → Comfortable
- ❌ WRITE synthesis for you → Uncomfortable
Test 2: Independent Understanding
Question: “Could I explain my thinking without the chat?”
If:
- ✅ You can explain theories in your own words → Your understanding
- ❌ You can only repeat what AI said → Not your understanding
Test 3: Source Engagement
Question: “Did I read the sources myself?”
If:
- ✅ You read sources fully → Appropriate
- ❌ You had AI summarize instead of reading → Inappropriate
Test 4: Authorship
Question: “Did I write the actual dissertation content?”
If:
- ✅ You draft sections yourself after dialogue → Appropriate
- ❌ You copy AI prose into your dissertation → Inappropriate
The Appropriate Process for Maintaining Integrity
PHASE 1: Independent Reading (Researcher Alone)
- Read sources completely
- Annotate while reading
- Write initial reflections/memos
- Develop preliminary interpretations
- NO AI involvement at this stage
PHASE 2: Synthesis Thinking (Researcher + AI Dialogue)
- Share what YOU understood from sources
- Ask questions about relationships between theories
- Receive frameworks for organizing thinking
- Identify gaps, tensions, contradictions
- Explore implications and applications
- AI helps you THINK, doesn’t THINK FOR you
PHASE 3: Framework Drafting (Researcher Alone)
- Write theoretical framework sections in YOUR words
- Use insights from dialogue to organize thinking
- Cite sources directly (Hall, Savickas, etc. - not AI)
- Develop YOUR synthesis and arguments
- YOU are the author
PHASE 4: Feedback Integration (Researcher + AI + Advisor)
- Share YOUR draft with AI for structural/argument feedback
- Share with advisor for substantive feedback
- Integrate multiple perspectives
- Revise while maintaining your voice
- Triangulated improvement
Red Flags to Avoid
If you ever find yourself:
❌ Asking AI to write full paragraphs for your dissertation
❌ Copying AI prose directly into your chapters
❌ Using AI summaries instead of reading sources
❌ Unable to explain your framework without referring to chat
❌ Uncomfortable showing advisor the collaboration
Then stop and reassess the collaboration.
Green Lights (Appropriate Practices)
Current appropriate practices:
✅ Reading sources yourself first
✅ Creating your own interpretations
✅ Asking questions to deepen YOUR thinking
✅ Writing dissertation content yourself
✅ Comfortable being transparent with advisor
✅ Can explain your thinking independently
✅ Using AI for organization/structure, not content generation
✅ Maintaining clear boundaries around authorship
✅ Recognizing when you can do something independently vs. need support
✅ Catching yourself before over-relying on AI for simple tasks
Case Example: Real-Time Self-Direction (November 5, 2025)
Context: Researcher needed to create a presentation for an advisor meeting on 20 minutes’ notice. The AI created an HTML presentation artifact.
Initial impulse: “Can you change the name on the first slide?”
Self-correction: Researcher realized this was a simple edit (using Ctrl+F in the HTML source to locate and change the name), paused the request, and made the change independently.
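The edit in question is trivial to do by hand. As a minimal sketch (the file name and presenter names below are hypothetical, not from the actual incident), the same find-and-replace can be expressed in a few lines of Python:

```python
# Minimal sketch (hypothetical file and names) of the kind of edit the
# researcher made independently: a find-and-replace in the HTML source,
# the scripted equivalent of Ctrl+F in a text editor.
from pathlib import Path

slides = Path("presentation.html")  # hypothetical AI-generated artifact
slides.write_text("<h1>Presenter: Dana Doe</h1>", encoding="utf-8")  # stand-in content

html = slides.read_text(encoding="utf-8")
html = html.replace("Dana Doe", "David Dawson")  # locate and change the name
slides.write_text(html, encoding="utf-8")
```

The point is not the code itself but the capability assessment: recognizing that a change this small does not require requesting AI help.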
What this demonstrates:
✅ Scaffolded support → independent capability: AI provided the tool (clean HTML), researcher recognized ability to modify it independently
✅ Living the research question: The progression from dependency to self-direction happened in real-time, mirroring the dissertation’s focus on how scaffolded support builds independence
✅ Transferable skill building: By working with well-structured HTML, researcher develops practical capabilities applicable beyond this single task
✅ Appropriate boundary-setting: Researcher caught the impulse to request help before acting on it, demonstrating critical assessment of actual capability vs. convenience
✅ Meta-awareness: Researcher immediately recognized this incident as exemplifying the principles guiding the research itself
Why this matters for academic integrity:
This moment demonstrates the critical difference between using AI as a crutch vs. as a scaffold:
- Crutch: Request every small modification, maintain dependency, never develop independent capability
- Scaffold: Use AI to create foundation/structure, then build independent capability to work with what’s been created
The researcher’s self-correction shows:
- Active assessment of capability (not passive acceptance of AI help)
- Willingness to do work independently when able
- Recognition that building skills serves long-term research goals
- Not over-relying on AI for convenience when learning serves better
This is exactly the kind of critical engagement with AI collaboration that maintains academic integrity - using support strategically while continuously building toward independence.
Documentation Best Practices
For Each Collaborative Session
Track:
- Date and purpose of session
- Independent work completed BEFORE involving AI
- Nature of collaborative work (questions asked, frameworks discussed)
- Independent work to complete AFTER dialogue
- Clear ownership statement (what’s yours, what’s collaborative)
Example Entry
## Session: Theory Synthesis
**Date:** November 3, 2025
**My Independent Work (Before):**
- Read Hall Ch 1 and Savickas independently
- Created detailed annotated bibliographies
- Developed preliminary synthesis: Hall = internal, Savickas = external
**Collaborative Dialogue:**
- Asked: How do these theories complement each other?
- Discussed: Levels of analysis, integration strategies
- Explored: Tensions and contradictions
**My Independent Work (After):**
- Write theoretical framework section in my words
- Cite sources directly
- Develop final synthesis
**Ownership:**
Interpretations = mine, Organizational frameworks = collaborative tools, Final writing = mine
The Analogy That Clarifies
Think of AI Collaboration Like a Dissertation Writing Group
In a writing group, you:
- Share what you’ve read
- Ask “Does this make sense?”
- Get feedback: “Have you considered…?”
- Discuss how to structure arguments
- Then go write your section independently
NOT like:
- Hiring someone to write literature review
- Copying from someone else’s dissertation
- Using published synthesis without attribution
AI collaboration = writing group model (appropriate)
AI ghostwriting = hired writer model (inappropriate)
Key Principles for Ethical AI Collaboration
1. Source Primacy
Always read original sources yourself. AI can help you think about sources, but cannot replace reading them.
2. Interpretive Authority
Your interpretations must come from YOUR reading. AI can help refine your thinking, not replace it.
3. Authorial Control
YOU write the dissertation. AI can provide feedback on your drafts, not generate drafts for you.
4. Transparent Documentation
Document the collaboration process. Be able to show what you did vs. what was collaborative.
5. Independent Competence
Be able to explain your work without the AI. The chat should deepen understanding you already have, not create understanding you lack.
6. Advisor Integration
Be transparent with your advisor. If you can’t show them the chat, you’re probably using AI inappropriately.
Questions for Ongoing Self-Assessment
Regular Check Questions
After each AI collaboration session, ask:
- Did I read the sources myself? (YES required)
- Did I develop initial interpretations before involving AI? (YES required)
- Did the dialogue deepen MY thinking? (YES = appropriate)
- Did AI generate content I’m copying? (NO required)
- Can I explain this work without the chat? (YES required)
- Would I be comfortable showing advisor this exchange? (YES required)
If any answer points in the wrong direction, reassess how you’re using AI.
Evolution of AI Role Across Dissertation Phases
Foundation Phase (Current)
Appropriate AI use:
- Organizational systems (Zotero/Obsidian setup)
- Reading schedule development
- Note-taking template creation
- Synthesis question generation
Researcher does:
- All reading
- All interpretation
- All note-writing
Data Collection Phase (Future)
Appropriate AI use:
- Interview debrief questions
- Pattern recognition prompts
- Methodological troubleshooting
- Transcription organization
Researcher does:
- All interviews
- All transcription decisions
- All initial pattern identification
- All participant interaction
Analysis Phase (Future)
Appropriate AI use:
- Coding scheme discussion
- Theme coherence checking
- Alternative interpretation exploration
- Connection-making questions
Researcher does:
- All coding
- All theme development
- All interpretation
- All analytical decisions
Writing Phase (Future)
Appropriate AI use:
- Structural feedback on drafts
- Argument coherence checking
- Transition suggestions
- Citation verification
Researcher does:
- All writing
- All argument development
- All synthesis
- All voice/style decisions
Conclusion: The Standard You’re Meeting
Your AI collaboration is academically appropriate because:
✅ You maintain intellectual ownership - Ideas come from your reading and thinking
✅ You preserve authorial voice - You write all content yourself
✅ You use AI as thinking tool - Not as content generator
✅ You’re transparent - Documented process, comfortable with advisor
✅ You can work independently - AI enhances but doesn’t replace your capacity
✅ You actively assess capability - Catching impulses to over-rely, choosing independence when able
✅ You’re building transferable skills - Not just consuming AI outputs, but learning to work with them independently
Real-time evidence: The presentation creation incident (November 5, 2025) demonstrates this standard in action: the researcher caught the impulse to request a simple edit, recognized independent capability, and made the change independently. This self-correction exemplifies the scaffolded support → self-direction progression that is both the research focus and the collaboration practice.
This is the model for ethical AI collaboration in doctoral research.
Managing Collaboration Complexity and System Errors
When Collaboration Systems Show Strain
Emerged During: Week 2 of collaboration (November 3, 2025)
After 8 collaboration sessions spanning multiple chats and documents, patterns of system strain became visible. Recognizing and addressing these issues is essential for maintaining research integrity and collaboration effectiveness.
Observable Signs of System Complexity
Information sprawl:
- Multiple specialized chats (Accountability, Theory Development, Literature Review, Cycle 1 Foundations)
- Multiple documents requiring updates (Dashboard, Reading Log, Collaboration Log)
- Growing conversation histories in each chat
- Cross-chat references creating fragmentation
Temporal tracking failures:
- AI repeatedly confused about dates despite corrections
- Difficulty reconstructing timeline between check-ins
- Assumptions about when work happened vs. actual timeline
- Mixing up which goals were set for which days
Outdated information references:
- AI referencing earlier plans that had changed
- Missing updates made between sessions
- Not tracking work done in other chats
- Assuming progress that hadn’t happened
Specific incidents:
- Date confusion across multiple sessions (even with repeated corrections)
- Referencing “Canvas dataset” when researcher had selected different dataset
- Not tracking that synthesis work had been completed earlier same day
- Incorrect assumptions about researcher’s schedule and activities
Why These Issues Occur
AI limitations in multi-session collaboration:
1. No persistent memory of conversation timeline
- Cannot reliably remember when previous conversations occurred
- Attempts to calculate backward from context clues (often incorrectly)
- Should trust researcher’s direct statements but sometimes prioritizes inference
2. No access to external documents between sessions
- Cannot see researcher’s updated Obsidian files
- Doesn’t know what’s changed in Dashboard or Reading Log
- Works from conversation history only
3. Cross-chat fragmentation
- Work in Theory Development chat not visible in Accountability chat
- Researcher must report work done elsewhere
- AI cannot automatically synthesize across conversation threads
4. Growing context windows
- Longer conversation histories = harder to track details
- More documents = more potential for outdated references
- Complexity increases over time without active management
Critical Insight: When to Trust AI vs. Researcher
The date confusion incidents revealed a fundamental principle:
When AI and researcher conflict:
- Researcher is ALWAYS authority on: Their lived reality, timeline, what they’ve done, their context
- AI should ALWAYS trust: Direct statements from researcher over its own calculations/inferences
What went wrong:
AI prioritized its timeline calculations over researcher’s direct statements (“today is Saturday, November 1”), creating confusion and wasted time.
What should happen:
AI should immediately accept researcher’s temporal orientation and move forward.
Why this matters for research integrity:
If researcher cannot correct AI on basic facts (dates), it suggests:
- Over-reliance on AI authority
- Deferring to “AI knows better”
- Not critically engaging with outputs
Researcher repeatedly correcting AI demonstrates:
- ✅ Reality-testing AI outputs
- ✅ Maintaining ground truth
- ✅ Not blindly accepting information
- ✅ Critical engagement with collaboration
This critical stance is ESSENTIAL when:
- Evaluating AI’s synthesis suggestions
- Assessing methodological advice
- Considering theoretical interpretations
- Using any AI assistance for research
The date confusion incidents are evidence of appropriate collaboration (researcher maintains authority, not over-dependent).
Solutions: Managing Complexity Proactively
Four options considered for system refinement:
Option 1: Simplify Structure
- Consolidate all work into one chat
- Pros: Everything in one place, full context always visible
- Cons: Loses topical organization, very long conversation history
Option 2: Enhanced Orientation Protocol
- Keep multiple chats, provide detailed context at every check-in
- Pros: Maintains topical organization
- Cons: High overhead burden on researcher
Option 3: Hybrid Approach (SELECTED)
- Main accountability chat for 90% of work
- Specialized chats ONLY for extended deep dives
- Weekly document updates (Dashboard, summaries)
- Pros: Balances simplicity and focus, manageable overhead
- Cons: Requires weekly discipline
Option 4: Document-Centered
- Share Obsidian documents at each check-in
- Pros: Single source of truth, always current
- Cons: More pasting work each session
Decision: Option 3 (Hybrid Approach)
Implementation: Orientation Protocol
At EVERY check-in, researcher provides:
Today is [Day, Date]
Last check-in: [X days ago]
Accomplished since then: [bullet list]
Work in other chats: [brief summary if applicable]
Today's focus: [goal]
Benefits:
- Immediate accurate orientation for AI
- No wasted time on date confusion
- Clear context for current session
- Researcher controls information flow
Weekly (recommended Sundays), researcher also shares:
- Updated Dashboard (paste current version)
- Key insights from specialized chat work
- Summary of what’s changed in timeline/plans
AI’s responsibilities:
- Accept temporal orientation without questioning
- Trust researcher’s direct statements over calculations
- Focus on content analysis (not timeline reconstruction)
- Ask about reported progress (not assume what happened)
Red Flags: When System Needs Adjustment
Warning signs that collaboration is becoming problematic:
Complexity indicators:
- ⚠️ AI frequently references outdated information
- ⚠️ Researcher spends more time orienting AI than getting support
- ⚠️ Multiple sessions needed just to clarify what’s been done
- ⚠️ Cross-chat fragmentation causing missed updates
- ⚠️ Documents falling out of sync with reality
Over-reliance indicators:
- ⚠️ Researcher unable to explain work without AI chat
- ⚠️ Accepting AI errors without correction
- ⚠️ Deferring to AI over own judgment
- ⚠️ Uncomfortable showing advisor the collaboration
- ⚠️ AI doing intellectual work researcher should do
System dysfunction indicators:
- ⚠️ Collaboration feels like burden, not support
- ⚠️ More time managing system than doing research
- ⚠️ Confusion about what’s been accomplished
- ⚠️ Frustration with repeated errors/miscommunication
If 3+ warning signs appear: STOP and reassess the system.
Course Corrections: When and How to Adjust
Minor adjustments (make immediately):
- Add orientation protocol at check-ins
- Share updated documents more frequently
- Consolidate closely related chats
- Simplify documentation practices
Major restructuring (if minor adjustments don’t work):
- Consolidate to single main chat
- Reduce documentation overhead
- Simplify to essential elements only
- Return to basics (what actually helps vs. theoretical ideal)
Principle: The collaboration system must serve the research work.
If the system becomes the work (managing chats, updating documents, orienting AI), it’s failing its purpose.
What this reveals about researcher development:
Proactive system management:
- Identified complexity before it became crisis
- Raised concerns directly and specifically
- Engaged in collaborative problem-solving
- Made strategic decision about system adjustment
- Not passive user - active manager of collaboration
Critical engagement:
- Noticed AI wasn’t tracking updates accurately
- Connected issues across sessions (date problems recurring)
- Questioned whether system was still serving needs
- Meta-cognitive awareness of collaboration effectiveness
Methodological sophistication:
- Recognized need to document complexity issues for methodology
- Understood that errors/limitations are important data
- Saw potential value for future researchers
- Thinking about replicability and honest representation
What this reveals about AI-human collaboration in research:
Collaboration requires active management:
- Systems need periodic review and adjustment
- What works initially may not scale
- Complexity emerges over time
- Iterative refinement necessary (like action research cycles)
Transparency about limitations strengthens rigor:
- Documenting errors (not hiding them) = honest methodology
- Understanding AI’s unreliability for certain tasks = appropriate use
- Adjusting system based on problems = good practice
- Limitations acknowledged = more credible than claiming perfection
Researcher must maintain control:
- Over information flow (what AI sees)
- Over system structure (how chats are used)
- Over complexity management (when to simplify)
- AI is tool that researcher manages, not autonomous agent
Lessons for Others Building Similar Systems
Start simple, add complexity only as needed:
- Begin with single main chat
- Add specialized chats only when clear need emerges
- Don’t build elaborate systems preemptively
Establish orientation protocol from beginning:
- Always provide date/day at check-in start
- Brief context of what happened since last time
- State today’s goal explicitly
- Don’t rely on AI to track timeline
Plan for periodic review:
- Every 2-3 weeks, assess: Is this still working?
- Watch for warning signs (errors, outdated info, frustration)
- Adjust before system breaks down
- Iterative refinement is normal, not failure
Document both successes and failures:
- What works well (synthesis questions, frameworks)
- What doesn’t work (date tracking, assumptions)
- How you adjusted (orientation protocol, hybrid approach)
- Honest methodology includes limitations
Maintain researcher authority:
- Correct AI errors immediately
- Don’t defer to “AI knows better”
- Trust your reality over AI’s calculations
- Critical engagement = appropriate use
Integration with Dissertation Methodology
This meta-discussion provides material for:
Methodology chapter:
- Can discuss AI collaboration approach honestly
- Can explain how system was managed and adjusted
- Can acknowledge limitations while demonstrating rigor
Reflexivity section:
- Shows iterative refinement of research practices
- Demonstrates critical engagement with tools
- Reveals researcher agency in system management
Limitations section:
- Can honestly acknowledge AI’s temporal tracking issues
- Can explain mitigation strategies (orientation protocol)
- Can show how limitations were managed proactively
Potential publication:
- Case study of AI collaboration evolution
- Lessons learned for other doctoral students
- Honest accounting of what works and what doesn’t
- Methodological contribution beyond dissertation
The Action Research Parallel
This collaboration system adjustment mirrors action research methodology:
Cycle 1:
- Plan: Build multi-chat system for different purposes
- Act: Use system for 1-2 weeks
- Observe: Complexity emerging, errors increasing
- Reflect: What’s working? What’s not?
Cycle 2:
- Revise Plan: Implement hybrid approach with orientation protocol
- Act: Test refined system
- Observe: Does this reduce errors and complexity?
- Reflect: Further adjustments needed?
This iterative refinement IS action research.
Applied to collaboration itself, not just dissertation topic.
Key Principle: Systems evolve through use, not perfect design.
Solution Implemented: Chat Retirement and Fresh Start
After one week of intensive collaboration (8 sessions, Oct 27 - Nov 3, 2025), the accountability chat’s conversation history had grown long enough to create cognitive load and tracking difficulties for both researcher and AI.
Specific problems with long conversation history:
- Recent updates buried in older content
- AI combing through extensive history to find current status
- Completed work (system setup) mixed with ongoing work (reading, synthesis)
- Resolved issues (date confusion discussions) cluttering current context
- Higher chance of AI referencing outdated information
Solution: Strategic Chat Retirement
Decision made November 3, 2025:
- Retire the foundation-building accountability chat (preserve as archive)
- Start fresh accountability chat with clean context
- Transfer only current information (not entire history)
What gets transferred to new chat:
✅ Current status and context:
- Where researcher is in dissertation process
- What’s been completed (systems, readings, synthesis)
- Theoretical framework developed
- Immediate priorities and timeline
✅ Key operational information:
- Work rhythm discovered (Tues/Thurs/Weekend, Wed off)
- Collaboration protocols (orientation format, hybrid approach)
- Academic integrity principles
- Current documents (Dashboard, Reading Log)
❌ What stays in archived chat:
- Detailed system setup instructions (already completed)
- Extended meta-discussions (already documented)
- Error incidents and resolutions (lessons learned, not ongoing issues)
- Historical progression (valuable for reference, not current operation)
Implementation approach:
Opening prompt for new chat includes:
- Dissertation project overview (title, RQ, methodology, timeline)
- Theoretical framework summary (what’s been read and synthesized)
- Current status snapshot (where researcher is now)
- Work rhythm and collaboration protocols
- Immediate priorities
- Current documents pasted (Dashboard at minimum)
Benefits of fresh start:
For AI:
- Lighter context window = better tracking
- Current information immediately accessible
- Less chance of referencing outdated details
- Clearer focus on present and future (not past)
For researcher:
- Reduced cognitive load (shorter conversation)
- Faster orientation at each check-in
- Less scrolling to find recent exchanges
- Cleaner workspace for ongoing collaboration
For collaboration effectiveness:
- More efficient check-ins
- Better tracking of current priorities
- Reduced error rate (less information to manage)
- Sustainable for long-term use
Preservation of work:
- Original chat saved as complete archive
- All sessions documented in Collaboration Log
- Reference available if needed
- Nothing lost, just reorganized
When to retire chats in future:
Consider retirement when:
- Chat history exceeds 50-100 exchanges
- Covering multiple distinct phases (foundation → data collection)
- Error rate increasing (outdated references, confusion)
- Significant time orienting AI vs. getting support
- Major milestone completed (system setup, comprehensive exams, proposal defense)
Process for retirement:
- Document current status comprehensively
- Create orientation prompt for new chat
- Archive old chat (don’t delete - reference value)
- Start fresh with streamlined context
- First message in new chat: Full orientation using protocol
Principle: Retire strategically, not reactively.
Plan chat retirement at natural transition points (phase changes, major milestones) rather than waiting for system breakdown.
This solution demonstrates:
- Proactive system management (addressing before crisis)
- Strategic simplification (reducing complexity purposefully)
- Preserving value (archive accessible, nothing lost)
- Sustainable practice (planning for long-term collaboration)
- Researcher agency (deciding when and how to adjust system)
For others building AI collaboration systems:
Expect to retire and restart chats periodically. This is normal system maintenance, not failure. Long conversation histories create cognitive burden that outweighs continuity benefits.
Plan chat lifecycle from the start:
- Foundation chat (system building, initial work)
- Phase-specific chats (data collection, analysis, writing)
- Milestone-based retirement (after major completion points)
Document transitions:
- Why retirement was needed
- What was transferred vs. archived
- How fresh start was structured
- Whether it improved collaboration effectiveness
This iterative approach to chat management mirrors iterative research methodology - test, observe, refine, adjust. The collaboration system itself becomes subject to the same improvement cycles as the research it supports.
Implications for AI-Assisted Research Methodology
What this reveals about AI collaboration at scale:
For short-term use (1-2 sessions):
- Minimal complexity issues
- Simple orientation sufficient
- Single chat works fine
For sustained use (weeks/months):
- Complexity accumulates
- Multiple purposes need structure
- Active management required
- Periodic system review essential
For dissertation-length projects (years):
- Must plan for system evolution
- Cannot rely on AI memory across long timespans
- Document-centered approach likely necessary
- Researcher must maintain source of truth
Best practices emerging:
- Researcher maintains master documents (Dashboard, logs) as source of truth
- AI accesses current state through document sharing (not memory)
- Orientation protocol at each session (date, context, goal)
- Periodic system review (every 2-3 weeks minimum)
- Simplify when complexity outweighs benefit
- Document both successes and failures (honest methodology)
For others attempting similar collaboration:
Expect:
- System will need adjustment over time
- AI will make errors (especially temporal tracking)
- Complexity will accumulate
- Iterative refinement is normal
Plan for:
- Regular system reviews
- Adjustment protocols
- Simplification when needed
- Active management, not passive use
Document:
- What works and what doesn’t
- Errors encountered and solutions
- Evolution of collaboration practices
- Honest representation of limitations
This document can be referenced for methodology chapters, reflexivity sections, or publications on AI collaboration in academic research. Updated to include complexity management and error handling based on lived experience of sustained AI-human collaboration in doctoral research context.
Last updated: November 6, 2025 - Added case example of real-time self-direction demonstrating appropriate AI collaboration boundaries
Contribution Report
Human Contribution (50%):
- All ethical concerns and integrity principles
- Bright line tests conceptualization
- Real-time self-direction case example
- Academic positioning and standards
- Critical integrity moments
Claude Contribution (50%):
- Framework organization and structure
- Examples and scenario development
- Documentation best practices formatting
- Cross-referencing and synthesis
- Meta-discussion integration
Collaboration Type: Equal partnership - human ethical framework with AI organizational structure
Citation & Attribution
Citation (APA 7th Edition): Dawson, D. R., II. (2025). Academic integrity in AI collaboration: Guidelines and distinctions. Northeastern University. https://github.com/drdawson2/ai-collaboration-reference-guide
Author Information:
- Name: David R. Dawson II
- ORCID: 0009-0001-4719-4370
- Institution: Northeastern University, Graduate School of Education
- Email: davidrobertodawsonii@outlook.com
License: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
You are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made.
- NonCommercial — You may not use the material for commercial purposes.
Suggested Attribution: “Based on [Document Title] by David R. Dawson II (2025), available at https://github.com/drdawson2/ai-collaboration-reference-guide. Licensed under CC BY-NC 4.0.”
Last Updated: 2025-11-09