A smiling white robot with glowing eyes looks up toward the camera, holding what appears to be a tablet.

Photo by Alex Knight on Pexels.

I recently went through a round of interviews with several software companies in the cloud-native and developer tools space. While the tech stacks varied, one topic was ubiquitous: Generative AI.

In 2026, simply saying, “I use AI to write faster,” isn’t enough. In fact, saying that might actually hurt your chances with companies that value security and original technical thought.

Based on my recent experience, here is how to navigate the two distinct “AI personas” you’ll meet in the interview room.

The Skeptical Gatekeeper: Focusing on IP and PII

Some companies are (rightfully) cautious. They’ve seen the headlines about proprietary code leaking into public training sets. These teams aren’t looking for an AI evangelist—they’re looking for a Risk Manager.

The Approach: Lead with a “Human-in-the-Loop” (HITL) protocol. You don’t let AI run unsupervised. You stay aware of what it’s consuming and what it’s producing, and you own the final content and take responsibility for it.

Talk about Data Sanitization: Explain that you never paste proprietary code, internal API keys, or PII into a public LLM. Instead, describe how you create “abstracted versions” of problems to get the logic you need without revealing the implementation.

Mention Private Instances: If you have experience using enterprise-grade, VPC-hosted AI tools (like a private instance of Claude or an internal company LLM), bring that up. It shows you understand the security side of the infrastructure.

The Hallucination Audit: Prove that you don’t trust the output. Talk about your process for fact-checking AI-generated code snippets against the actual source code.
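The sanitization step above can be sketched as a small pre-flight pass over any snippet before it leaves your machine. This is a minimal illustration, not a production-grade scrubber: the patterns and placeholder labels below are hypothetical examples, and a real workflow would add patterns for your organization’s specific secret formats.

```python
import re

# Hypothetical redaction patterns -- illustrative only, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"(?:sk|pk|key)[-_][A-Za-z0-9]{16,}"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def sanitize(snippet: str) -> str:
    """Replace likely PII and secrets with labeled placeholders
    before a snippet is pasted into a public LLM."""
    for label, pattern in PATTERNS.items():
        snippet = pattern.sub(f"<{label}>", snippet)
    return snippet

# Example: a line of (made-up) code with a fake key and contact email.
raw = 'client = Client("sk_a1b2c3d4e5f6g7h8i9j0", contact="dev@acme.io")'
print(sanitize(raw))
# -> client = Client("<API_KEY>", contact="<EMAIL>")
```

The placeholders keep the *shape* of the problem intact, so the model can still reason about the logic, which is exactly the point of an “abstracted version.”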

The Early Adopter: Seeking the “Force Multiplier”

Other companies have jumped in with both feet. They don’t want to hire a writer who works at 2021 speeds. They want to see that you’ve turned AI into a sophisticated Documentation Engineer that sits beside you.

The Approach: Show, don’t just tell, your workflow automation.

The “Docs-as-Code” Assistant: Talk about using AI to write CI/CD scripts that lint for inclusive language or to generate initial Markdown tables from raw JSON responses.

Breaking the Blank Page: Describe AI as your research assistant. It can summarize 50 pages of Slack threads and GitHub issues into a three-bullet-point intent statement for a new doc.

Multimodal Expansion: Mention how you use AI to generate alt-text for diagrams or to draft initial scripts for video tutorials based on your written docs.
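The “raw JSON to Markdown table” idea is the kind of thing worth showing rather than describing. Here is a minimal sketch of that conversion; the field names in the sample payload are hypothetical, and a real docs-as-code pipeline would wire this into CI rather than run it by hand.

```python
import json

def json_to_markdown_table(payload: str) -> str:
    """Turn a JSON array of flat objects into a first-draft
    Markdown table, using the first object's keys as headers."""
    rows = json.loads(payload)
    headers = list(rows[0])
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for row in rows:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)

# Example: a (made-up) API parameter list.
raw = '[{"name": "timeout", "type": "int", "default": 30}]'
print(json_to_markdown_table(raw))
```

A script like this only produces the skeleton; the human still decides whether the table belongs in the doc at all.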

The Middle Ground: The “Senior” Mindset

Regardless of the company’s stance, there is one thing every hiring manager is looking for: human empathy.

AI can generate 1,000 words in ten seconds, but it doesn’t know if those words are helpful. In my interviews, I made it clear that while AI handles the production, I handle the strategy. I frame my AI use through the lens of User Experience (UX):

  • Does this AI-generated tutorial actually solve the user’s problem?
  • Is the user’s path clear, or is the AI just hallucinating a shortcut?
  • How am I using these saved hours to do things AI can’t, like interviewing engineers, attending sprint demos, and advocating for the user?

The Takeaway

The Age of AI hasn’t made technical writing easier. It has made the Senior part of the role more important.

When you sit down for your next interview, don’t just talk about the prompts you write. Talk about the boundaries you set and the value you add after the AI stops generating.

Companies aren’t hiring a Prompt Engineer for their docs. They’re hiring a technical leader who knows how to use every tool in the shed without letting the tools run the shop.