Debunking Myths About Technical Writing for Developers

In my years as a technical writer at Google, Grafana Labs, and Microsoft, I’ve encountered countless misconceptions about what technical writers actually do. Some stakeholders view us as glorified clerical support. Others expect us to be senior-level software engineers who can independently build production systems while maintaining documentation.

The reality is far more nuanced: we are specialized bridge-builders who manage the high-stakes intersection of human language and machine logic. We operate in the gap between what engineers know and what developers need to learn, transforming tribal knowledge into scalable education.

Here are the most common myths I’ve encountered about technical writing, what the role actually entails, and why getting this right matters for your product’s success.

Myth 1: Technical Writers Are “Information Secretaries”

The Misconception: Technical writers wait for engineers to send over notes, then simply reformat those notes into documentation. We’re basically human content management systems.

The Reality: This “secretary” approach produces fragmented, contradictory, and low-quality content. Real technical writing involves investigative research, information architecture, audience analysis, and strategic content design.

What the Work Actually Looks Like

Professional technical writers don’t passively wait for information. Instead, we:

  • Embed ourselves in development cycles, attending sprint planning and standups to understand what’s being built and why
  • Conduct structured interviews with SMEs, asking targeted questions that uncover not just what features do, but why they exist and how they fit into the broader ecosystem
  • Analyze documentation gaps by reviewing support tickets and user feedback to identify where developers actually struggle
  • Create information architectures that map content to user journeys, ensuring developers find answers through intuitive navigation
  • Establish content standards that ensure consistency across teams

Good technical writers architect knowledge systems that serve multiple audiences simultaneously. This is why asking engineers to “write down what they built” produces documentation with no coherent learning path.

Myth 2: The Technical Writer Must Write Every Line of Code

The Misconception: A good technical writer should independently write all code samples, from basic examples to complex production implementations.

The Reality: While I maintain working knowledge of some languages, enterprise software is often too complex for a single writer to generate all code samples. Instead, my role is better described as a Technical Director who orchestrates high-quality examples.

The Technical Director Model

1. Identify the Use Case

Engineers want to document what they built; users need to understand how to solve problems. I translate between these perspectives:

  • Engineer thinking: “We added support for custom retry policies”
  • User thinking: “How do I make my API calls more resilient to network failures?”
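To make that user question concrete, a user-focused code sample might demonstrate retries with exponential backoff. Here is a minimal sketch in Python (the helper and the flaky operation are invented for illustration, not taken from any particular product):

```python
import time

def with_retry(operation, max_attempts=3, base_delay=0.01):
    """Run `operation`, retrying failures with exponential backoff.

    Delays grow as base_delay * 2 ** (attempt - 1).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # Out of retries: surface the error to the caller.
            time.sleep(base_delay * 2 ** (attempt - 1))

# A deliberately flaky operation: fails twice, then succeeds.
calls = []
def flaky_request():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient network failure")
    return "200 OK"

result = with_retry(flaky_request)
```

Note how the sample answers the user’s question (resilience) rather than merely naming the feature (custom retry policies); real documentation would also explain which errors are safe to retry.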

2. Enlist the Right Resources

I identify which engineer has the deepest knowledge of a subsystem, who writes clean example code, and who has bandwidth to contribute.

3. Verify, Refine, and Test

Raw code from engineers rarely ships as-is. My review includes:

  • Compilation and execution testing: Does it actually work?
  • Security review: Are we showing unsafe patterns?
  • Readability optimization: Removing complexity, adding explanatory comments
  • Error handling: Ensuring examples show realistic error handling, not just happy paths

4. Maintain and Update

I track which code samples are affected by breaking changes, coordinate updates before deprecated features are removed, and maintain automated testing for critical examples.
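One lightweight way to automate that testing, sketched here with Python’s standard doctest module (the function and tag formats are invented for the example), is to embed the expected output directly in the sample so CI can fail the build when behavior drifts:

```python
import doctest

def parse_version(tag):
    """Extract (major, minor) from a release tag in a doc sample.

    >>> parse_version("v2.14")
    (2, 14)
    >>> parse_version("release-3.0")
    (3, 0)
    """
    digits = tag.split("-")[-1].lstrip("v")
    major, minor = digits.split(".")
    return int(major), int(minor)

# In CI, running the doctests fails the build whenever the sample's
# documented output drifts from what the code actually does.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(parse_version):
    runner.run(test)
```

The same idea scales up: test runners such as pytest can collect doctests across an entire docs directory on every pull request.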

When Technical Writers Do Write Code

For simpler examples, I may write the code myself. But attempting to write complex, production-representative code for unfamiliar systems wastes time and produces fragile examples.

Myth 3: Engineers Can Simply Replace Technical Writers

The Misconception: Documentation is just “writing stuff down.” Any engineer with decent communication skills can handle it, eliminating the need for dedicated technical writers.

The Reality: While some engineers are excellent writers, systematically replacing technical writers introduces profound risks. This isn’t about engineers lacking capability—it’s about specialization, incentives, and the invisible complexity of documentation as a discipline.

The Commitment Gap

An engineer’s primary work is building features and fixing bugs. Documentation becomes a secondary concern, creating predictable patterns:

  • Documentation written only when forced by PR requirements
  • Stale documentation when APIs change but docs don’t get updated
  • Inconsistent styles across different engineers’ contributions
  • Coverage gaps where exciting features get documented but essential operational concerns don’t

The Curse of Knowledge

Engineers suffer from unconsciously assuming others share their background knowledge. Here’s what this looks like:

Engineer-written documentation:

Configure the service mesh egress gateway to handle external traffic.

Technical writer-written documentation:

Before your application can make requests to external APIs, you need to configure the service mesh to allow outbound traffic. In Joeware, this requires creating an Egress Gateway.
Prerequisites:
- Joeware installed in your cluster (see Installation Guide)
- kubectl configured to access your cluster
- Basic familiarity with Kubernetes Services (see Kubernetes Primer)
To configure external traffic...

The engineer’s version isn’t wrong, but it assumes you know what a service mesh is, what “egress” means, and where configuration happens. Technical writers are trained to identify implicit assumptions, sequence information to build from foundational concepts, and anticipate failure modes.

The Reality: A Partnership of Specialists

The most effective documentation emerges from genuine partnerships where:

  • Engineers provide deep technical expertise about how systems work
  • Technical writers provide audience expertise about how developers learn and where they struggle
  • Both groups review each other’s work to catch technical errors and pedagogical gaps

When documentation is treated as a first-class citizen of the engineering process, organizations see measurable results:

  • Lower support costs: Developers self-serve answers
  • Faster onboarding: New users become productive in hours instead of days
  • Better product decisions: Explaining features often reveals UX problems before launch
  • Competitive advantage: Documentation quality often determines which product wins

What Good Looks Like

Technical writers aren’t a luxury. We’re strategic investments in product success, developer experience, and sustainable growth. The best documentation teaches developers to think in your product’s paradigm, anticipates their struggles, and makes the complex feel achievable.

That’s not something you get by asking engineers to write more clearly. It’s something you get by respecting technical writing as the specialized craft it is.

Systems Theory: From Ecology to Software Architecture

When I tell people I studied environmental science and political ecology before becoming a technical writer, I usually get puzzled looks. What does understanding threatened ecosystems have to do with documenting APIs or managing AI-assisted workflows?

The answer is everything. Systems theory—the framework that helped me understand how governance impacts ecological resilience—has become one of the lenses through which I approach software architecture, documentation strategy, and AI integration.

Ecological Resilience vs. Software Reliability

In my early academic work, I focused on how human governance systems impact ecological stability. One core concept was resilience: an ecosystem’s ability to maintain essential functions despite external shocks. A resilient forest survives wildfire by regenerating from deep root systems; a resilient wetland processes pollution spikes without collapsing.

Software systems face analogous challenges. The industry has shifted from prioritizing “robustness” (building systems that resist failure) to embracing “resilience” (designing systems that fail gracefully and recover quickly).

Consider a monolithic application where a single memory leak brings down the entire system. In ecological terms, this is like an invasive species that monopolizes resources and crashes the entire food web. Just as technology can threaten ecological balance through unintended externalities, a poorly isolated “feature” in a monolithic architecture becomes a pollutant that degrades performance across the entire environment.

The solution in both domains is similar: create buffer zones and compartmentalization. Ecologists design wildlife corridors that contain localized disturbances. Software architects implement circuit breakers and microservices that isolate failures and prevent cascading collapses. When learning about software systems, I look for these isolation patterns as an architectural philosophy that protects system health.
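To make the software side of that analogy concrete, here is a minimal circuit-breaker sketch in Python (the class name and threshold are invented for illustration): after repeated failures the breaker “opens” and rejects further calls immediately, containing the disturbance rather than letting it cascade.

```python
class CircuitBreaker:
    """Isolate a failing dependency so its failures cannot cascade."""

    def __init__(self, threshold=3):
        self.threshold = threshold  # Consecutive failures before opening.
        self.failures = 0

    @property
    def is_open(self):
        return self.failures >= self.threshold

    def call(self, operation):
        if self.is_open:
            # Fail fast: don't hammer an already-unhealthy dependency.
            raise RuntimeError("circuit open: dependency isolated")
        try:
            result = operation()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # Any success resets the count.
        return result

def failing_dependency():
    raise ConnectionError("downstream outage")

breaker = CircuitBreaker(threshold=2)
for _ in range(2):  # Two consecutive failures trip the breaker.
    try:
        breaker.call(failing_dependency)
    except ConnectionError:
        pass

# Later calls are rejected instantly, without touching the dependency.
try:
    breaker.call(lambda: "healthy response")
    rejected = False
except RuntimeError:
    rejected = True
```

Production implementations add timeouts and a “half-open” state that periodically probes for recovery, much as an ecosystem regenerates once a disturbance subsides.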

Governance and Technical Debt

My background studying governance systems prepared me for understanding technical debt. In environmental management, rigid or poorly informed policies lead to catastrophic outcomes. For example, irrigation systems that create dead zones, or forestry policies that suppress natural fires until fuel loads become catastrophic.

Software has its own governance: the rules, standards, and conventions dictating how components interact. When this governance is rigid or uninformed, systems become brittle. When the “Expert-in-the-Loop”—the human with contextual understanding—is removed from critical decisions, you get the software equivalent of ecological collapse.

This is why I’m cautious about AI-generated code. An AI might confidently suggest deprecated methods or invent non-existent API parameters. Without adaptive governance involving rigorous verification and comprehensive documentation, these errors propagate like invasive species through an unmonitored ecosystem.

The parallel extends to how both systems accumulate debt. Environmental degradation results from short-term decisions that externalize costs to the future. Technical debt accumulates the same way: quick fixes that seem expedient but create compounding maintenance burdens. In both cases, governance determines whether those debts become manageable or catastrophic.

Documentation as Ecosystem Mapping

When I document software systems, I don’t catalog individual API endpoints in isolation. I map the ecosystem: how data flows, how components depend on each other, what feedback loops exist, where boundaries are defined. This is fundamentally a systems-theory approach to information architecture.

Just as an ecologist must understand why species thrive or decline within their context, I must extract the “why” behind architectural decisions. Why does this service retry failed requests? What upstream conditions make this endpoint vulnerable? How does this component fit into larger workflows users actually care about?

The Expert-in-the-Loop as Environmental Steward

Threats to ecosystems often stem from “blind” automation or inadequate oversight: industrial processes that externalize pollution and algorithms that optimize narrow metrics while degrading broader system health. I apply this cautionary principle to AI-assisted workflows.

The ultimate systemic skill—whether in ecology or software—is discernment. Knowing when to rely on automated efficiency and when human strategic thinking is non-negotiable. Understanding that systems are more than the sum of their parts, and that expertise means seeing the whole while attending to the details.

Systems theory gave me mental models to understand threatened ecosystems. Those same models now help me navigate the complex, interconnected world of modern software development. The vocabulary changes, but the fundamental patterns remain consistent.

The “Docs-as-Code” Transition: Moving Beyond the CMS

For years, the standard for technical documentation was the monolithic CMS: a system designed for “content” in the abstract, often divorced from the actual environment where software is built. My journey across organizations like Google, Microsoft, and Grafana Labs has fundamentally shifted my perspective toward docs-as-code workflows.

Why the Shift?

Early in my career, I saw the friction caused by siloed documentation. When docs live in a separate web portal managed by a non-developer editor, they naturally drift away from the source code. By adopting tools like Git, Markdown, Hugo, and Docusaurus, we bring documentation into the developer’s native habitat. This allows developers to take responsibility for documenting their own work, and it allows technical writers to be more fully integrated into the team’s development process.

For technical writers transitioning from traditional CMS platforms, this shift represents a fundamental reimagining of your role within the development team. You’re no longer the downstream recipient of incomplete information; you become an embedded collaborator who can see, understand, and influence the code alongside the documentation. This visibility transforms the quality and accuracy of what you produce.

Treating documentation like code means it follows the same lifecycle as the product:

Version Control: Using Git allows for precise tracking of changes and the ability to revert errors instantly. Beyond basic rollback capabilities, Git enables powerful branching strategies where documentation updates can be developed in parallel with features, tested in staging environments, and merged only when the feature ships. This synchronization prevents the common problem of documentation being published too early or too late relative to feature releases.

Peer Review: At Google and elsewhere, I submitted substantial changelists or pull requests (PRs), ensuring every word was vetted by engineers through the same code review process they use for features. This peer review culture catches technical inaccuracies before publication and creates shared ownership of documentation quality. Engineers become invested stakeholders rather than reluctant contributors. The review process also serves as an informal mentorship opportunity: junior engineers learn from seeing how senior developers critique and improve documentation, while writers gain deeper technical insights through reviewer feedback.

Automation: CI/CD pipelines can run linters to check for broken links or style guide violations before a single page is published. At Grafana Labs, we used linters to check for divergences from the team’s Writers’ Toolkit (our style guide). Advanced teams integrate Vale or other prose linters to enforce terminology consistency, readability metrics, and brand voice guidelines. Some organizations even run automated accessibility checks, ensuring documentation meets Web Content Accessibility Guidelines (WCAG) standards before deployment. This automation amplifies editorial judgment by catching mechanical errors that would otherwise consume review cycles.
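As a sketch of what such a pipeline can look like, here is an illustrative GitHub Actions workflow that runs the Vale prose linter and a link checker on documentation changes (action names and paths are examples; adapt them to your repository and style package):

```yaml
# .github/workflows/docs-lint.yml (illustrative)
name: docs-lint
on:
  pull_request:
    paths:
      - "docs/**"
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Prose linting: style guide, terminology, readability rules.
      - name: Run Vale
        uses: errata-ai/vale-action@v2
      # Catch broken links before they reach production.
      - name: Check links
        uses: lycheeverse/lychee-action@v2
        with:
          args: --no-progress docs/
```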

Impact on Developer Experience

At Grafana Labs, I collaborated with a director of development and another engineer to lead the building of a developer documentation portal (grafana.com/developers) designed for discoverability. Engineers at the open-source observability company had built several distinct silos of information for their separate projects, which made it frustrating to find what you needed. We brought the company’s divergent threads for plugin building, specifications, and the design system together under one umbrella. You can read all about our journey in “The Grafana developer portal: your gateway to enhanced plugin development.”

When documentation lives in the repository, it becomes more of a “living” entity. Paid developers and open-source contributors alike are more likely to contribute updates or suggest edits when they can simply open a PR. The psychological barrier to contribution drops precipitously when the workflow mirrors what developers already do dozens of times per day.

Moreover, having documentation in the repository enables powerful cross-referencing. Code comments can link directly to documentation sections; documentation can reference specific lines of code with permanent links that update as the codebase evolves. This bidirectional relationship creates a cohesive knowledge ecosystem rather than two separate information silos.

Conclusion

The docs-as-code transition is, at its core, a transformation of documentation culture. It’s an acknowledgment that documentation is a first-class citizen of the software development life cycle (SDLC). When we treat docs like code, we bring software engineers and writers together to build better products. This cultural shift manifests in tangible ways: documentation tickets appear in the same sprint planning as feature work, documentation coverage becomes a release criterion, and engineers budget time for documentation the same way they budget for testing.

In the seven years since I first started using the docs-as-code workflow, I’ve learned that it isn’t just about version control or static site generators. It’s about breaking down the artificial barriers between code and documentation, between engineers and writers, between the product and its explanation. When those barriers dissolve, both the code and the docs improve, creating a virtuous cycle that benefits everyone: developers, writers, and most importantly, the users trying to understand and use what we’ve built.

AI-Assisted Workflows: The Future of the Technical Communicator

The rise of Generative AI has sparked intense debate in the technical writing community. Some observers see it as an existential threat, while others view it as an unprecedented boon. I see it as something more nuanced: a powerful extension of our existing toolkit that can be transformative or destructive depending on how thoughtfully it’s deployed.

I began experimenting with AI-assisted workflows during my time at Grafana Labs and SAPinsider, where my goal was always to optimize processes and elevate documentation quality. Today, integrating tools like ChatGPT, Gemini, and Claude is effectively mandatory for technical communicators. There’s no fighting it, so the best option is to evolve with it.

AI as a Force Multiplier

I use AI not to replace the fundamental writing process, but to automate what I call the “scaffolding” of documentation: the structural and repetitive elements that consume time without adding unique value. My expertise in AI-assisted content workflows and prompt engineering allows me to accelerate delivery without compromising technical accuracy or depth. Here’s how I strategically integrate these tools into my daily practice:

  • Outline Generation: I leverage AI to brainstorm structural frameworks for complex tutorials and technical guides, ensuring I haven’t overlooked standard conceptual steps or logical progressions. This is particularly valuable when documenting unfamiliar systems or when tackling sprawling enterprise platforms.
  • Code Sample Refinement: AI excels at generating clean boilerplate code in languages like Python, JavaScript, or TypeScript. I use it to quickly produce initial examples, which I then rigorously test against actual environments, refine for edge cases, and optimize for clarity and best practices.
  • Drafting Alt-Text and Metadata: I delegate repetitive, SEO-heavy tasks such as crafting image descriptions, meta descriptions, and keyword-rich headers. AI drafts these elements, freeing up cognitive bandwidth to focus on the core technical narrative and the complex explanations that truly require human expertise.

The key is treating AI as a collaborator in the mundane, not a replacement for critical thinking.

The Importance of Human Oversight

During my time as a technical journalist at Wellesley Information Services, I covered the AI/ML beat extensively, researching emerging models and their practical applications to stay current with rapidly evolving technologies. The most critical lesson I learned is that AI can confidently hallucinate technical details: inventing API parameters, fabricating version numbers, or presenting deprecated methods as current best practices. Beyond factual accuracy, AI often lacks creativity in its prose; its formulaic patterns and predictable phrasing can be easily spotted by discerning readers and quickly become tiresome or even alienating.

This is where the technical writer’s value proposition has fundamentally shifted. We’re no longer just writers; we’re what the industry calls “Human-in-the-Loop” operators, though I prefer a more precise term: “Expert-in-the-Loop.” Here’s what that expertise looks like in practice:

  1. Verification and Validation: Every AI-generated code snippet must be meticulously verified against the actual API documentation, tested in representative environments, and validated for current best practices. I’ve caught countless instances where AI confidently suggested outdated information.
  2. Voice and Tone Consistency: AI often produces technically adequate but tonally generic content that lacks the nuanced voice required for specific brand identities. Whether it’s the approachable, reader-first standards I helped establish for SAPinsider or the precision-focused clarity expected at Grafana Labs, maintaining authentic brand voice requires human judgment and sensitivity.
  3. Complex Synthesis and Context: AI fundamentally struggles with activities that require deep contextual understanding such as interviewing subject matter experts to extract the “why” behind new product releases or synthesizing conflicting stakeholder requirements. These remain uniquely human skills that draw on empathy, experience, and professional intuition.

The Takeaway

The future of technical communication isn’t framed by an “AI vs. Human” dichotomy—it’s defined by the emergence of the AI-augmented writer. By mastering prompt engineering, understanding the capabilities and limitations of large language models, and thoughtfully integrating these tools into our docs-as-code pipelines, we can deliver higher-quality documentation at the accelerated pace demanded by modern software development cycles.

The writers who will thrive in this new landscape aren’t those who resist these tools or those who uncritically embrace them. They’re the professionals who develop the discernment to know when to leverage AI for efficiency and when to rely exclusively on human expertise for accuracy, creativity, and strategic thinking. Expert judgment is what separates good documentation from exceptional documentation in the age of AI.

Finding Satisfaction in the “Unseen” Work

After over seven years in technical communication, I am often asked what keeps me motivated in a field that is, by definition, most successful when it is invisible. If a developer uses my documentation to build a plugin for Grafana, integrate an EA API, or navigate the Google Maps Platform, they aren’t thinking about the writer. They are thinking about their own success. For me, that is exactly where the satisfaction lies.

The Journey That Led Me Here

My path to technical writing began long before I knew this profession existed. As an early computer enthusiast, I spent countless hours pounding out BASIC code on my Commodore 64, captivated by the magic of making machines respond to human instructions. That early fascination never left me.

This passion for technology drove me to teach myself C programming years ago. While I never progressed to the point where I wanted to join a software development team, I discovered something unexpected: I felt remarkably comfortable documenting APIs. This realization led me to document an API in C and C++ for a speech software company, create comprehensive documentation for a SOAP API at a database company, and edit a book on C# programming for Microsoft Press. These experiences hinted at my true calling: bridging the gap between complex technology and the people who need to use it.

Although I pursued other professional paths for quite a while, I returned to technical communication seven years ago to work as a contractor for Google. Since then, I’ve genuinely enjoyed the challenge of learning how technology is evolving, understanding how it is transforming our world, and playing a small but meaningful role in making that progress accessible and possible for others.

The Beauty of the “Bug Fix”

Early in my career, I realized that technical writing is fundamentally a form of engineering. There is a deep, quiet satisfaction in the “unseen” maintenance tasks that most people never notice but that make all the difference:

  • The Repository Cleanup: At Microsoft, I initiated over 200 pull requests to fix bugs and improve documentation quality, systematically eliminating obstacles that would have frustrated countless developers.
  • The Migration: At Electronic Arts, I consolidated a massive, fragmented documentation set into a single, searchable knowledge base, transforming chaos into order and saving developers hours of searching for critical information.
  • The Content Audit: At Google, I performed the exhaustive audits necessary to raise a Doc Health Score by 500%, meticulously reviewing and refining content to ensure accuracy and usability.

These aren’t “glamour” projects that earn recognition at company meetings. They are the digital equivalent of clearing a path through a dense forest—unglamorous, painstaking work that creates value precisely because it removes friction. When I remove a broken link or clarify a confusing paragraph, I am removing a barrier for someone else, enabling them to move forward with their work unimpeded.

Solving the Puzzle

Technical writing allows me to be a perpetual student, which perfectly suits my lifelong appetite for learning new technologies. One day I am researching logistics systems for Amazon FBA, and the next I am covering the latest enterprise software trends for SAPinsider or ERP Today. I find fulfillment in taking a chaotic pile of information and organizing it into a logical, elegant structure that serves the user’s needs.

This constant learning keeps the work fresh and intellectually stimulating. Each new technology, each new API, each new platform presents its own unique puzzle to solve, its own particular challenges in translating technical complexity into clear, actionable guidance.

The Human Element

Ultimately, my work is fundamentally about empathy and human connection. Whether I’m documenting open-source observability tools for 20 million users at Grafana Labs or writing for a global audience at Amazon, I am advocating for the person on the other side of the screen—someone I’ll likely never meet but whose frustration I can prevent, whose time I can save, whose success I can enable. There is a profound sense of purpose in knowing that my work helps a developer finish their task a little faster, with a little less frustration, so they can get back to what they love doing most.

As a Harvard University graduate with honors, I could have chosen many paths involving language—journalism, academia, creative writing, or communications. I chose this one because I believe that clear, accessible documentation is one of the most important “features” any software product can have. That’s what drives me: the opportunity to unlock technology’s potential by making it comprehensible, accessible, and genuinely useful to the people who need it.