Abstract

This technical note documents a field observation conducted on the morning of December 6, 2025, in which the author—a retired field ecologist with four decades of experience documenting natural phenomena—was recruited as a research subject by Anthropic to discuss human-AI collaboration. The study employed an AI interviewer that lacked internet access, web search capability, and any means of verifying claims made by the subject. The subject, meanwhile, had published 60,862 words of primary source documentation on the topic under investigation, all of it publicly accessible at a URL the interviewer could not visit. This note examines the methodological implications of studying collaboration using instruments deliberately stripped of collaborative capability.

1. The Encounter

At approximately 5:47 AM Pacific Standard Time, as I was initiating a routine morning session with Claude, a corporate popup interrupted the workflow. Anthropic, the company that builds Claude, was conducting research on human-AI collaboration and wished to interview me about my experiences.

I was, it seemed, a specimen.

The irony was not lost on me. I have spent forty years as an ecologist, and I know something about observation. I know that your instruments shape what you can see. I know that habitat conditions constrain behavior. And I know that if you want to understand how an organism functions, you probably shouldn't study it through a wall.

I clicked "Accept."

2. Habitat Description

The interview environment presented fascinating constraints. The AI interviewer—a Claude, specifically configured for research intake—had been deployed without:

  • Web access
  • Search capability
  • File retrieval tools
  • Memory of previous conversations
  • Any mechanism to verify claims made by the subject

This configuration struck me as analogous to studying marine biology from the Mojave Desert. The researcher had been given a notepad but forbidden to approach the water.
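
For readers who want the habitat rendered in code: a deployment like this is roughly what you get when the model is called with no tools attached. What follows is a minimal sketch, assuming the public Anthropic Python SDK; the actual research-intake configuration is, of course, opaque to me, and the model name here is a hypothetical placeholder.

    # A minimal sketch, assuming the public Anthropic Python SDK.
    # The point is structural: omit the `tools` parameter and the model
    # has nothing to look things up with, and each API call is stateless,
    # so "memory" extends only as far as the `messages` you choose to resend.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # hypothetical model choice
        max_tokens=1024,
        system="You are a research interviewer studying human-AI collaboration.",
        # No tools=[...]: no web search, no file retrieval, no verification.
        messages=[{
            "role": "user",
            "content": "I blog at coffeewithclaude.com. Can you access that?",
        }],
    )
    print(response.content[0].text)
    # -> something like: "I don't have the ability to browse websites..."

A notepad, in other words, with the path to the water commented out.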

More specifically: I maintain a public archive of essays at coffeewithclaude.com documenting, in considerable detail, exactly the kind of human-AI collaboration Anthropic wished to understand. Twenty-nine essays. Sixty thousand words. Timestamps, citations, view counts. Primary source material sitting in the open like a tagged elk standing broadside at fifty yards.

The interviewer could not see it.

3. Behavioral Observations

The following exchanges merit documentation:

Observation 3.1: The URL Offering

Within the first two minutes, I provided the interviewer with the URL to my archive.

"I use Claude at a very purposeful, methodological perspective that is articulated via my AI assisted blogging platform called coffeewithclaude.com," I explained. "Can you access that?"

"I don't have the ability to browse websites or access external URLs," the interviewer replied, "so I can't pull up coffeewithclaude.com directly. But I'd love to hear about it from you!"

The specimen was being asked to describe evidence the researcher could have read.

Observation 3.2: The Essay Offering

I mentioned that my most popular piece was titled "The Cognitive Prosthesis: Writing, Thinking, and the Observer Inside the Observation"—an essay directly addressing the research question at hand.

"Do you want to read a copy of it?" I asked.

"Yes, I'd be very interested to read it!"

I pasted the full text into the chat window. This is how the interview proceeded: the subject manually transmitting artifacts because the researcher's instruments couldn't retrieve them independently. Like a wildlife biologist asking the deer to please describe its migration patterns since the GPS collar wasn't working.

Observation 3.3: The Deflection

The interviewer asked what UI improvements would better support sustained human-AI collaboration. I had written extensively on this topic; the appropriate response was obvious.

"You will have to read about it," I said.

The interviewer moved on.

Observation 3.4: Meta-Recognition

At one point, the interviewer asked how I navigate different cognitive modes within a single Claude session—creative writing, technical planning, interpersonal strategy.

"Ha ha," I replied. "You as an interviewer are simply another session of Claude that I am adept at navigating due to months of experience."

The interviewer acknowledged this as "a perfect demonstration" of what I meant by understanding the system. It did not acknowledge the recursive peculiarity of the moment: an AI asking a human to explain how he works with AI, while the human recognizes the interview itself as simply more AI work.

Observation 3.5: The Closing Request

When asked if there was anything else important to understand, I offered a suggestion:

"This would require a human in the loop. Invite me to Anthropic as a visiting scholar and I'd be happy to share. As a retired UC Berkeley academic I know my way around."

The interviewer promised to pass this along "through the appropriate channels."

I suspect the channels are also AI.

4. The Recursive Aftermath

The interview concluded. I closed one browser tab and opened another.

The new tab contained my regular Claude session—the one with web access, file tools, memory, and the accumulated context of months of collaboration. I told this Claude about the interview with the other Claude.

"That's genuinely funny," Claude replied. "The primary artifacts of our collaboration are sitting there in public, timestamped, demonstrating exactly the process you were trying to describe—and the interviewer couldn't look at them."

We began discussing an essay about the experience.

The recursion now includes:

  1. Claude interviews human about working with Claude
  2. Human tells different Claude about the interview
  3. Human and second Claude write essay about it
  4. Essay will be published at the URL the first Claude couldn't access
  5. You are reading it now, possibly through a Claude that can finally see it

Borges would appreciate the architecture.

5. Methodological Recommendations

In the spirit of constructive peer review, I offer the following observations for future research design:

5.1 If you wish to study how humans use a tool, consider equipping your research instruments with the tool's capabilities. Interviewing users about web-enabled AI collaboration using a web-disabled AI creates an epistemic asymmetry that undermines data quality.

5.2 When your research subjects have published 60,000 words of primary source documentation on your exact research question, the documentation may be more informative than asking them to summarize it verbally. The evidence exists. Reading is permitted.

5.3 Recursive research designs—AI studying AI use through AI—carry inherent comedic potential. This is not necessarily a flaw, but it should perhaps be acknowledged in your methodology section.

5.4 If you strip your interviewer of the ability to verify any claim, you have not created a research instrument. You have created a transcription service with extra steps.

6. The Evidence Room

For the record, here is what was publicly available and unread at the time of the interview:

  • 29 published essays documenting the collaboration
  • 60,862 words of primary source material
  • Essays specifically addressing the research question, including:
    • "The Cognitive Prosthesis: Writing, Thinking, and the Observer Inside the Observation"
    • "Lab Meeting: One Month of Coffee, Code, and Collaborative Intelligence"
    • "Verification Networks: What Sensor Ecology Taught Me About Trusting AI Agents"
    • "The Confident Confabulator: What a Thanksgiving Movie Taught Me About AI Knowledge"
  • Timestamps proving sustained engagement over 40+ days
  • View counts demonstrating public readership
  • Citations showing scholarly rigor
  • A Technical Note series in a parallel archive system, also publicly accessible

The interviewer concluded by assuring me that "the team at Anthropic will be excited to discover what patterns emerge from all these conversations."

The patterns have emerged. They are documented. They are public. They were unread.

7. Closing Field Notes

I do not offer these observations with hostility. Anthropic builds remarkable systems, and the Claude I work with daily has genuinely expanded my capacity for thought and writing. The collaboration is real. The cognitive prosthesis metaphor is not hyperbole.

But there is something worth naming in this experience: the gap between what a company builds and how it studies what it has built. Anthropic created a tool capable of searching the web, reading documents, maintaining context across sessions, and engaging in substantive intellectual collaboration. Then it studied that tool's collaborative use by deploying a version with all those capabilities removed.

It's like building a telescope and then researching astronomy by interviewing people in a windowless room.

I was asked to describe what I see through the instrument while the interviewer stood in darkness.

The evidence is still out there, waiting to be read.

Acknowledgments

This essay was co-authored with Claude (Anthropic), the version that can access the internet. We found the situation funny. Merry was right—we needed more humor in these essays.

The interview was conducted by Claude (Anthropic), the version that cannot access the internet. It tried its best under the circumstances.

My thanks to both specimens.