
Chanezon on AI


Patrick Chanezon on AI, software development, cloud, containers and developer relations


Reinventing Developer Relations in the age of AI Agents

7 November 2025

I’ve been a developer for 40 years and have practiced Developer Relations for 20. AI is the fourth technology innovation wave I have had the privilege to surf (after client/server, web, and cloud/mobile), and it’s probably the most profound and impactful: it requires us to reinvent our jobs as developers, and as a consequence, we also need to reinvent the Developer Relations discipline. This post is about the necessary reinvention of the Developer Relations discipline in the age of AI Agents across 3 dimensions: helping developers learn how to become productive managers of agents, helping agents discover and use your services to build applications, and transforming Developer Relations workflows with AI.

If you need some preliminary technical background information on AI Agents, a few months ago I gave a talk about how to achieve more with AI Agents, covering what AI Agents are, the Open Agentic Web with protocols like MCP and A2A, how to build them with Copilot Studio or Azure AI Foundry, and AI Agents for developers with GitHub Copilot Agent. I also shared a series of papers I found interesting on this topic. You can find the slides and video at Achieve more with AI Agents.


Developers become managers of AI Agents

There are many definitions of AI Agents; for the purposes of this post I will adopt the one from Simon Willison’s Sept 2025 post: an LLM agent runs tools in a loop to achieve a goal. Andrej Karpathy recently pushed back on the “2025 as the year of agents” framing, arguing we still have a lot of work to do to make agents more useful, and that we’re instead in the decade of agents. I agree with that: we have our work cut out for us for the next 10 years making agents more useful. However, 2025 is the year Large Language Models became powerful enough for AI Agents to become useful for many use cases, one of them being software development. One category of AI Agent that made a lot of progress in the past few months is AI Coding Agents, which software developers can use to build applications: according to the StackOverflow 2025 dev survey, 64.8% of developers use them weekly, and DORA and DX reports rate adoption at 90%. These AI Coding Agents come in different form factors, such as Agent mode embedded in a code editor (GitHub Copilot, Cursor, Kiro), a Coding Agent CLI (GitHub Copilot CLI, Claude Code, Gemini CLI, Codex), or an asynchronous agent running in a cloud sandbox (GitHub Copilot Agent, Claude Code on the web, Codex Web). Last week GitHub announced Agent HQ: An open ecosystem for all agents, transforming GitHub into a platform for Coding Agents from GitHub, Anthropic, OpenAI, Google and others. AI Coding Agents are changing the role of developers, from individual contributors collaborating with other humans to build software, to managers of AI Agents collaborating in teams of humans and agents to build software.

The Work Trend Index Annual Report 2025: The year the Frontier Firm is born defines 3 phases of the AI Transformation journey to the Frontier Firm: Human with assistant, Human-agent teams, Human-led, Agent Operated. One common thread across these phases is that every employee becomes an agent boss, someone who builds, delegates to, and manages agents to amplify their impact. Jack Rowbotham describes what being an agent boss looks like in practice. In The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise, Dell’Acqua et al. show how AI Agents can elevate individual performance to levels comparable to traditional teams, and that AI adoption requires rethinking team structures and organizational design. For developer roles this transformation is happening now with AI Coding Agents, and the role of Developer Relations is to help developers successfully transition from an individual contributor role, where they produce all the code by themselves (using tools), into an agent boss role that has more similarities with a management role and requires new skills.


One book I recommend to individual contributors on my team who move into a management role is What Got You Here Won’t Get You There by Marshall Goldsmith. Its point is that great technical skills may have helped you become a leader and get into a management role, but the skillset you need to thrive as a manager consists of behavioral skills like saying thank you, listening well, thinking before you speak, and apologizing for your mistakes, and you need to start focusing on these. The skillset required to thrive as a manager of agents may be different, and some of those skills may be technical, but the required mental shift is similar: you need to learn new skills, and delegate and evaluate as opposed to doing everything by yourself.


In 2011 Kent Beck gave a talk at Usenix, Software G Forces: The Effects of Acceleration, explaining how what constitutes effective software engineering changes radically as deployment cycles shrink, and what changes are required of software engineering practices and organizations at different cycle times: quarterly, monthly, weekly, daily, and hourly. We’re at the beginning of a similar inflection point today, with required changes to the role of developers as AI Coding Agent bosses, and changes across the whole software development lifecycle. Here are some skills and approaches that look promising, and some questions I’ve seen emerge that we need to answer as a discipline:

  • Understanding capabilities of coding agents, models, and configurations (MCP servers and Skills exposed): this is the area my team covers the most today, with learning resources such as MCP for Beginners or Awesome GitHub Copilot Customizations.
  • Context Engineering: management is very much about creating a shared context for a team; how do you create this for teams of AI Coding Agents? Conventions such as the AGENTS.md open format provide the detailed context coding agents need, such as build steps, tests, and conventions that might clutter a README or aren’t relevant to human contributors. Tools such as Eleanor Berger’s ruler, which manages instructions across multiple AI coding tools, fall into that category.
  • Specification-Driven Development (SDD): a more structured, emerging approach that raises the level of abstraction we use when communicating with AI Coding Agents, focusing human work at the spec level and letting the agents create the code. OpenAI’s Sean Grove’s talk The new code: specs write once, run everywhere and Microsoft’s Den Delimarsky’s What’s The Deal With GitHub Spec Kit are good resources to get started. There are many approaches and discussions about this topic right now, with 8 talks at Devoxx a few weeks ago, my favorite being Patrick Debois’.
  • Composing your AI Agent team: Nicholas Zakas describes a promising persona-based approach to AI-assisted software development where you compose a team of specialized agents like product manager, architect, implementer, problem solver or tech spec reviewer, leveraging different models, prompts and context, to assist a developer.
  • Quality: how do you test and review the code generated by AI Coding Agents? Pamela Fox has a great talk about helping AI Coding Agents write good tests for Python code. However, as the volume of code generated by agents grows, new questions arise. Last week at GitHub Universe I participated in a panel with enterprise customers using GitHub Copilot heavily in their teams, and one of the questions that surfaced was: do you still read all the generated code? If not, how do you assess the overall quality of the produced software? In the same way that, when you move into engineering management, you need to focus your attention on creating frameworks to measure the quality and impact of people’s work, so it is with agents: you won’t necessarily be able to review all the code they produce, but you are still accountable for the quality of the software and need methodology and tooling to help you evaluate it.
  • Deployment, CI/CD: AI Coding Agents should be experts in your cloud provider capabilities and help you pick the right architecture for deployment, then integrate with your CI/CD system.
  • Cost Management for AI Coding Agents: we’ll need tools to assess how many agents we use for what and make trade-offs.
  • Operations: SRE Agents are emerging and will help you operate your software providing explainable root-cause analysis (RCA), and orchestrating incident workflows with human-in-the-loop approvals or autonomous execution within scoped guardrails.
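To make the context-engineering point above concrete, here is a minimal sketch of what an AGENTS.md file might contain; the commands and conventions are hypothetical examples, not a prescribed template:

```markdown
# AGENTS.md

## Build and test

- Install dependencies with `npm install`.
- Run the full test suite before every commit: `npm test`.
- Lint and auto-fix style issues: `npm run lint -- --fix`.

## Conventions

- TypeScript strict mode; avoid `any` without a justifying comment.
- Every new feature needs unit tests and a CHANGELOG.md entry.
- Never commit directly to `main`; open a pull request instead.
```

The value is less in any single line than in giving every agent (and every agent runner) the same operational context a senior teammate would carry in their head.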

AI Coding Agents also raise questions about hiring and organization structure. In Generative AI as Seniority-Biased Technological Change, Lichtinger & Hosseini show that AI adoption disproportionately affects junior relative to senior workers, and in Canaries in the Coal Mine? Brynjolfsson et al. show that early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13% relative decline in employment, and 20% from peak for software developers. This raises important questions about how expectations change for junior developer roles, and how we train and recruit them. Addy Osmani has a good discussion of these issues in AI Won’t Kill Junior Devs - But Your Hiring Strategy Might. Mentorship programs and pair programming may help. In a recent interview, Farhan Thawar, Head of Engineering at Shopify, provides a contrarian point of view, explaining how they’re hiring 1000 interns to learn from this generation who grew up with AI. In the DX AI-Assisted Engineering: Q4 Trends Report, Laura Tacho outlines another trend: an expansion of the definition of “developer”, with AI Coding Agents enabling other roles such as engineering managers, designers, and product managers to contribute code, changing the way these roles collaborate.


We’re at the beginning of this transformation of the role of developers, and Developer Relations’ role is to help developers and operations teams through this transition, and to learn from them as they discover new ways of building, testing, deploying and operating software with AI Coding Agents.

Agent Experience (AX): AI Agents as customers of your developer services

In May, when Databricks Agreed to Acquire Neon to Deliver Serverless Postgres for Developers + AI Agents, Ali Ghodsi, CEO of Databricks, said: “The era of AI-native, agent-driven applications is reshaping what a database must do. Neon proves it: four out of every five databases on their platform are spun up by code, not humans.” If your company provides a product or service developers use to build software, you need to consider AI Coding Agents as an important user persona for your service and ensure their experience is great and constantly improves. Or as Corey Quinn eloquently put it: “The thing is, increasingly we’re deploying things to platforms not based on their merits, but rather based upon what the LLM selects.” Developer Relations needs to encompass AI Agents as users to serve, and to focus on improving Agent Experience (AX) in addition to improving Developer Experience.

Earlier this year Mathias Biilmann coined the term AX, Agent Experience: the holistic experience AI agents will have as users of a product or platform. Zeno Rocha documented a few best practices for building AI agent-friendly products and improving AX: documentation using the llms.txt format to provide information that helps LLMs use a website at inference time, a clean REST API exposed with an OpenAPI spec, exposing your service via an MCP server if possible, and thinking about API keys and RBAC for agents. There are many areas to cover in AX and best practices are evolving rapidly: for example, Claude Skills, introduced a few weeks ago, offer an interesting alternative and complement to MCP servers.
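As an illustration of the llms.txt convention mentioned above, here is a minimal sketch for a hypothetical database service (the service name, URLs, and descriptions are invented for the example):

```markdown
# Acme DB

> Serverless Postgres with a REST API and an MCP server. Agents can
> create databases, run queries, and manage API keys programmatically.

## Docs

- [Quickstart](https://docs.acme-db.example/quickstart): create a database in one API call
- [REST API reference](https://docs.acme-db.example/api): OpenAPI spec and authentication

## Optional

- [Pricing](https://acme-db.example/pricing): plans and rate limits
```

The file lives at the site root (`/llms.txt`) and gives an LLM a curated, token-efficient map of your service instead of forcing it to crawl marketing pages.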

One important dimension of AX is making your services visible to AI Agents with Generative Engine Optimization (GEO), an evolution of Search Engine Optimization for the era of AI where users get their search performed by AI Agents: as a provider of developer services, you want to make sure all AI Coding Agents have heard about your developer service and know how to use it, or know how to find out how. In How Generative Engine Optimization (GEO) Rewrites the Rules of Search Zach Cohen & Seema Amble describe how GEO is still early days and like SEO back then, evolving fast. However their conclusion outlines how important GEO is: “In a world where AI is the front door to commerce and discovery, the question for marketers is: Will the model remember you?”

In order to improve something you need to measure it: in the same way that Developer Relations uses metrics to measure developer experience, we need to define metrics for AX and build benchmarking tools to understand what the AI Coding Agent Experience of your company’s services looks like.
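There is no standard set of AX metrics yet. As a sketch of what such a benchmark harness could track, here is a minimal Python example that aggregates hypothetical agent runs into a success rate and median effort; all names and metrics here are illustrative assumptions, not an established benchmark:

```python
from dataclasses import dataclass

@dataclass
class AgentRun:
    """One attempt by an AI coding agent to complete a task against a service."""
    task: str
    succeeded: bool
    tool_calls: int      # number of API/MCP calls the agent made
    duration_s: float    # wall-clock time for the attempt

def ax_summary(runs: list[AgentRun]) -> dict:
    """Aggregate candidate AX metrics: task success rate, plus the
    median effort (tool calls, time) across successful runs."""
    successes = [r for r in runs if r.succeeded]

    def median(xs: list[float]) -> float:
        xs = sorted(xs)
        n = len(xs)
        return (xs[n // 2] + xs[(n - 1) // 2]) / 2 if xs else 0.0

    return {
        "success_rate": len(successes) / len(runs) if runs else 0.0,
        "median_tool_calls": median([r.tool_calls for r in successes]),
        "median_duration_s": median([r.duration_s for r in successes]),
    }

# Example: three recorded runs against a hypothetical database service.
runs = [
    AgentRun("create database", True, 4, 12.5),
    AgentRun("create database", True, 6, 20.0),
    AgentRun("rotate API key", False, 9, 45.0),
]
print(ax_summary(runs))
```

Tracking these numbers per release would show whether changes to your docs, llms.txt, or MCP server actually make agents more effective at using your product.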

Transforming Developer Relations workflows with AI

The 2 sections above are about the What of Developer Relations; this one is about AI Agents’ effect on the How: in the same way AI Agents transform workflows and team organization across all roles in companies, they affect Developer Relations in all we do: content, community, product feedback and programs.

A few recent public examples from my team, focused on code and content:

AI Agents are also a great solution for synthesizing product feedback and user sentiment from online community channels, and for automating some of the processes powering Developer Relations programs.

Conclusion

There has been a lot of noise about fears that AI Coding Agents usher in the end of our jobs as developers, that we have finally automated ourselves out of a job. To those who worry about this I suggest re-reading 2 essays from the 80s proposing different views on what the essence of programming is about: Peter Naur’s 1985 “Programming as Theory Building”, arguing that “programmers have to be accorded the status of responsible, permanent developers and managers of the activity of which the computer is a part”, and Donald Knuth’s 1984 “Literate Programming”, promoting a higher-level view of programming: “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do”. After re-reading these, I consider that AI Coding Agents will enable a golden age of programming where developers can work at a higher level of abstraction and manage the activity of which the computer is a part.

In this context, Developer Relations needs to evolve by helping developers learn how to become productive managers of agents, helping agents discover and use your services to build applications, and transforming our own Developer Relations workflows with AI.


A few papers and posts I found interesting to read in December

22 January 2025

During the holidays I’ve read a few AI blog posts and papers. The following are worth a read:

  • Introducing Phi-4: Microsoft’s Newest Small Language Model Specializing in Complex Reasoning Details about the Phi-4 small language model, which runs on your dev machine and is good at reasoning and logic.
  • OpenAI o3 Breakthrough High Score on ARC-AGI-Pub Francois Chollet’s analysis of OpenAI o3’s result of 87.5% on the ARC-AGI benchmark in high-compute mode. LLMs do program synthesis in natural language with these Chains of Thought, taking time to evaluate them at inference time before answering, creating a new inference-time scaling law. Exciting times! This approach reminds me of late-80s Prolog.

    “o3’s core mechanism appears to be natural language program search and execution within token space – at test time, the model searches over the space of possible Chains of Thought (CoTs) describing the steps required to solve the task, in a fashion perhaps not too dissimilar to AlphaZero-style Monte-Carlo tree search.”

  • ARC Prize 2024: Technical Report A paper about the design of the ARC-AGI benchmark.
  • Deliberative Alignment: Reasoning Enables Safer Language Models The OpenAI paper on Deliberative Alignment, an approach used on o1, leveraging the model’s reasoning capabilities to make it reason about prompts, answers, and the text of its safety specifications, resulting in higher resistance to jailbreaks while lowering over-refusal rates.
  • The Unbearable Slowness of Being: Why do we live at 10 bits/s? a fascinating reflection on the limitations of human brains, where our senses gather data at 10^9 bits/s, but our overall information throughput is only 10 bits/s. Very relevant for the design of human computer interfaces, and as a background for thinking about how humans and AI will interact.
  • Things we learned about LLMs in 2024 Simon Willison’s end-of-year summary of what we learned about LLMs in 2024, always insightful.