AI and DevRel


In this, the first episode of the DevRel Survival Guide, we look at how developer relations teams are using AI tools such as ChatGPT, GitHub Copilot, and others.

With contributions from DevRel practitioners, such as Appwrite’s VP of Developer Relations Tessa Mero, and people working on the AI tools themselves, such as OpenAI’s Logan Kilpatrick, this episode helps you navigate the current state of AI tools and developer relations.

With thanks to Common Room for sponsoring.

Video

Watch the video on YouTube >

Key themes

  • What sets LLMs apart from other forms of AI tooling
  • How DevRel practitioners are using AI tools for content creation, social media, support, analytics, market research, debugging, and more at Appwrite, Couchbase, and Intercom
  • What tools DevRel teams are using and might consider (including GitHub Copilot, ChatGPT, Superface, Doc-E, and Contenda) to help with code generation, code completion, and API integration.
  • Some of the considerations to take when integrating AI tools into your DevRel strategy
  • What impact, if any, AI tools will have on DevRel practice for the long term

Transcript

Matthew Revell: Hello, welcome to the DevRel Survival Guide. In this episode, we are going to look at how AI tools can help you in your DevRel practice. My name is Matthew Revell and I’ve been speaking to DevRel practitioners and also people who make AI tools to see what they can do and maybe even if they’re a threat to us as DevRel practitioners. Before we get into this episode, I want to thank Common Room for sponsoring. You can head to commonroom.io to learn more about the platform that brings together your CRM, product, and community data and is used by some of today’s fastest growing companies. That’s commonroom.io. Thanks for sponsoring. Let’s clarify some terminology. There are lots of different types of AI, and what’s been in the news lately are large language models, or LLMs, so let’s look at what they are.

Logan Kilpatrick:

My name is Logan Kilpatrick. I lead developer relations at OpenAI, focused on helping support developers building with our API as well as building ChatGPT plugins, which is our sort of emerging developer space. So the general take is that a large language model is essentially a machine learning model that’s been trained on a large corpus of text, and the whole intent of a large language model is just to predict the next word or the next token in a sequence. So you give it a sentence and it’s literally just trying to predict what is the next sequence of characters that’s most likely to come after this. And you throw in a little bit of randomness for the outputs to be more broad in general, but that’s the basic take: lots of content that it’s been trained on, and it’s trying to predict the next piece of content that’s coming after.

As an aside, this is really important to get a sense of how the machine learning technology works, because you start to understand that ChatGPT, for example, has no worldview. It doesn’t have a lot of these constructs that we as humans have because it’s truly just trying to do that one specific task. It can do it really, really well, which is why it’s able to generate content that’s across all these different domains, but fundamentally it’s super constrained in the, if you want to think about it, the logic of how it approaches solving a problem. It solves them all in the same way, using that same sort of next token prediction process.
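To make that concrete, here is a minimal, purely illustrative sketch of next-token prediction: the model assigns a probability to every candidate continuation and the next token is sampled from that distribution. The candidate tokens and probabilities below are invented for illustration; a real LLM computes them with a neural network over a vocabulary of tens of thousands of tokens.

```python
import random

def sample_next_token(candidates: dict[str, float]) -> str:
    """Pick the next token in proportion to its (made-up) probability."""
    tokens = list(candidates)
    weights = list(candidates.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "Developer relations teams are using AI to"
# Hypothetical probabilities for a handful of candidate continuations.
next_token_probs = {" write": 0.40, " generate": 0.25, " review": 0.20, " debug": 0.15}

print(prompt + sample_next_token(next_token_probs))
```

Run repeatedly, the same prompt produces different continuations, which is the "little bit of randomness" Logan mentions.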

Matthew Revell:

So then how have these AI tools been helping people with their DevRel work? Let’s hear from three people who’ve been putting them to the test.

Tessa Mero:

My name is Tessa Mero and I’m head of developer relations at Appwrite. So I’ve recently asked different team members, including members from other teams, what kind of tools they’re using and how we’re using them, and I realized it’s for a lot more areas than I thought. Specifically for our team, we’re using it for content, mainly for social media and content for presentations. We have support engineers on the team, so we’re using it to help with support. It’s helping us with evaluating analytics and market trends and data and research in that area. Our engineers are using it for debugging. I think that’s the main areas.

Matthew Groves:

Yeah, my name is Matthew Groves and I work for a company called Couchbase; it’s a NoSQL database company. I’ve been there for seven years. My main concern is helping developers and helping them to succeed and helping them learn about Couchbase. Right now I am very much focused on building an example application, trying to build as much as I can with the LLM tools that everyone is aware of, seeing how they work, seeing where they excel, where they are not doing so well, and just trying to put together a story about how these tools can be effective in coding.

Matthew Revell:

Here’s Colm Doyle, who heads up DevRel for Intercom.

Colm Doyle:

Today we’re pretty light about it. I think the biggest thing is between ChatGPT and Copilot, the speed at which you can create code samples has gone through the roof. So it’s like whereas previously creating a code sample, say to illustrate a point, would’ve taken, I dunno, a couple of days, you know what I mean, to build a decent kind of end to end code sample. Now it’s like GPT and Copilot are writing a bunch of it, which means when we have a new API endpoint or we want to support a new language, it’s much easier for us to just jam out a code sample. And because we have all the context of how our API works, we can kind of see when the LLM is screwing it up, you know what I mean? And we’re like, well that doesn’t make any sense. That’s wrong, but it’s a slight tweak, so you’re more kind of editing existing code rather than generating it fresh. The company I work at, Intercom, has a support bot, which is powered by various LLMs, but we haven’t turned it live on our docs, so we haven’t turned it on yet as a layer above our documentation, but that’s definitely interesting.

Matthew Revell:

So what tools are Tessa, Matthew and Colm using day-to-day?

Matthew Groves:

Yeah, so I’m focused on three tools mainly at this point for coding: ChatGPT, of course, both 3 and 4, GitHub Copilot Chat and Copilot in general, and then Google Bard. So I’ve looked at all three of those tools and I’ve spent time with them. I actually have a live coding stream on Twitch where I explore all those tools as I build this real world example application. I’ve tried all three of those and I’ve got all kinds of interesting things I’ve learned about them along the way. The end goal with this project is to create a real world example application that can help developers get up and running quickly with, in my case, ASP.NET and Couchbase, but we’re exploring other ones as well like Spring and Node and things like that. So that’s just kind of the short-term goal, to get those example projects available for helping developers.

Of course the long-term goal is for me to understand, or try to help understand, how effective these tools are in building applications, and the goal there is to come up with a plan or roadmap to build future sample applications for other languages and platforms. We’re a database company and we serve 10 to 12 different language communities. So using this to help to learn if we can scale those example applications out to all those different communities instead of just focusing on one at a time. The goal here is basically to help people write more code with Couchbase, and this might be a way to help them write more effective code or help us to write more effective example applications faster. I’m focused on building this application in a language and a platform that I know well. So if there is something that’s generated by ChatGPT or Copilot that jumps out at me as wrong for security reasons or wrong for performance reasons, that’s something I’m able to understand and address immediately, right?

That’s more challenging if I’m working in a language that I’m not familiar with; if I try to build something in Rust for instance, I might have a hard time with that. So it’s not that I’m building these applications completely with these tools, just copying and pasting exactly what they are generating, but I’m also using them to help understand what is being built and why it’s being built. So that’s something that these tools can also be used for. So people focus on the code generation a lot and that’s pretty cool, but it’s also useful for reading code, and as developers, we spend a lot of time, a lot more time reading code than we do writing code. So this can be a very effective tool at understanding a legacy code base. For instance, put in a function that’s poorly named or badly documented and ask it to explain what this code does, help me out, so I can make changes to that code or write better tests.

Matthew Revell:

For Tessa and the team at Appwrite, AI tools have made their way into almost every area of DevRel operations.

Tessa Mero:

We use it for improving our email campaigns and evaluating data with that. Our operations team uses it for work like improving our emails being sent. You can use it for our travel itineraries, like we’re going to a conference in two weeks, can you help build out an itinerary? I think Google Bard can also do a lot of similar things as well. We also have a developer engineer on our team. He uses, and I had no idea he was using, Notion AI. It helps you automatically finish sentences, it helps you format things, make it look nice, and the number of content AI tools out there is growing very quickly, with Copy.ai for improving content. There’s an AI tool called Genie I was just reading about today that one of our team members uses, and it specializes in summarization of text. Just interesting. I still haven’t compared it to ChatGPT, but he says that it works better for that. Right now we’re experimenting with different video AI tools and there’s several of them popping up here and there. There are AI tools to create and present a slide deck for you.

I wouldn’t say that’s the greatest technology yet. There’s still a lot of work that needs to be done. I feel like creating presentations is more of a creative process, especially in the dev side of things. I know there’s AI image type of tools, but that’s a dangerous area to explore, especially when working for a company, as there are no regulations around using certain tools. There’s also an interesting tool I’ve seen as well called Jasper, and Jasper specializes in creating blog posts, but I’m still not a fan of exactly how all these AI tools generate blog posts. Every time when I read someone else’s blog posts, I start identifying, yeah, that was written by AI, that was written by AI. I could have just taken the topic title and written this myself, or read it on ChatGPT, rather than going to those blog sites and reading it there. So the kind of content I’m actually really valuing is listening to people’s perspectives, hearing about their opinions. So if you’re able to take AI-generated content and spin in your opinion and also train it to speak, there’s a lot of work you can do. So it takes a little time to get used to understanding what prompts to use when working with AI to be able to get the results that you’re looking for.

Matthew Revell:

It’s clear to see that AI tools are already having an impact on how we do DevRel on a day-to-day basis, but what about the longer term approach? How does Colm Doyle of Intercom see the next few years, and maybe the changes that they need to make at their company to take AI tools into consideration?

Colm Doyle:

Yeah, change our approach might be generous. I guess what it is is my experience of building both the samples and I had to build something recently internally, and I’d say I don’t even know what percentage of the code I actually wrote myself, but because it was just a scrappy internal tool, it didn’t really matter too much. But as I wrote those things, all I was constantly thinking was like, right, if everyone’s going to be writing these things like this, what do I need to change about my documentation approach in order to make it so that the likes of GitHub Copilot or ChatGPT can easily ingest the content we’re creating? So first of all, that means there needs to be a lot more text.

I love video as a format. I think it accommodates a really popular learning style, but it’s kind of useless to bots unless you can feed them a transcript; visual stuff is kind of useless to these bots, particularly the popular ones, is my understanding anyway. So it immediately puts more of an emphasis on we need to write tutorials, we need to be pretty direct in these tutorials. We need to explain things in quite plain language, which we should probably be doing anyway if I’m honest, like acronyms and stuff like that. It’s bad practice anyway. And then more of the code samples, because obviously Copilot is trained on corpuses of public repos. So basically there’s more of an emphasis away from video towards stuff that LLMs can consume, because like I said, a lot of these apps are going to be written using Copilot or people consulting GPT-4. I’ve never used as much regex; I’d say I’ve used more regex in the last three months than I have in my entire career combined, because regex is terrible to learn and bots are really, really good at it.

So it’s that kind of thinking. I am constantly thinking how do we need to change our docs? And I don’t know the answer if I’m completely honest, and I really need to talk to the ML team at my current place to understand it better, but what do I need to adjust in our documentation to make it more LLM friendly? There’s some really interesting, I wouldn’t say problems, but challenges to that. And the two big ones that stick out to me are, the first one is when you point one of these bots at a webpage, you have to be relatively smart because webpages are busy. They have lots of different things going on at any one time. So if you think of the Guardian website or the Wall Street Journal or whatever, if you go to their website, you’re going to have a central pane of content and then you’re going to have on the sidebar, you’re probably going to have links to other articles or a byline or something like that, and you’re going to have ads and you’re going to have all these different things going on.

So processing text is expensive for these LLMs, so they try to be smart. Some companies try to be smart about what is the essence of this webpage, so I’m only feeding the essence to it. And in the example I gave of the Guardian, the Wall Street Journal, it’s usually a safe bet that the central piece of content contains the vast majority of what we need. So a lot of scrapers will take just that central piece of content and they’ll discard the rest. And that works great to an extent for the Wall Street Journal or the Guardian, but that kind of completely falls apart with a lot of documentation websites, right? Because you’ve got the central pane, which is explaining the core concept, and then on the right you might have code samples, on the left you might have version pickers or different languages and all this kind of stuff.

And usually in docs, the whole viewport is kind of important. So if you slice out just the core central part, you’re going to lose a bunch of context. The other one that folks have talked to me about is LLMs are just really bad with versions. The way a lot of doc sites are structured, there’s usually a quick picker to pick between versions, and it’s hard to explain the notion of API versions to a bot. So if you ask it how to do something, say if it’s Slack, it’s like, how do I get all the messages from a given channel? If there are multiple different versions of how to do that, the bot’s not really going to know which one of those it should give you. So that’s a bit of a challenge that I haven’t quite figured out if I’m honest. And that’s part of why we haven’t turned on the support bot on our own documentation, because the existing technologies we’ve used have trouble with it. Not that they have trouble, but they don’t give us the results that we’re comfortable with, so we wouldn’t ship it to customers and therefore we won’t use it ourselves. I dunno if I’d go as far as changing our versioning strategy to accommodate LLMs, but I’ve thought about it.
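A rough sketch, for illustration only, of the "grab only the central pane" scraping Colm describes. The HTML below is a made-up docs page, not Intercom’s; the point is that extracting only the main element silently drops the code sample and the version picker that sit outside it.

```python
from bs4 import BeautifulSoup

# A made-up documentation page: explanation in the centre, a code sample
# in a sidebar, and a version picker in the nav.
docs_page = """
<html><body>
  <nav>v1.0 | v2.0 | v3.0</nav>
  <main><p>Call the messages endpoint to list all messages in a channel.</p></main>
  <aside><pre>GET /v2/channels/{id}/messages</pre></aside>
</body></html>
"""

soup = BeautifulSoup(docs_page, "html.parser")

central_pane_only = soup.find("main").get_text(" ", strip=True)
whole_viewport = soup.get_text(" ", strip=True)

print(central_pane_only)  # drops the endpoint example and the version picker
print(whole_viewport)     # keeps the context an LLM would actually need
```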

Matthew Revell:

Clearly these tools have their weaknesses. So what else should we be on the lookout for?

Matthew Groves:

I guess one of the most obvious things is that some of these tools are limited to 2021 and earlier, as we’re recording this anyway. And so sometimes it’s generating code or patterns that are out of date, because we’re two plus years past that at this point. So these tools are still learning, so they’re still growing and learning more things as they get more input. So I find it to be helpful to be as specific as you can. If you’re wanting to use a specific SDK version, tell the chat bot that; say, I want to use Couchbase SDK 3, because otherwise it might imply, based on the year of its cutoff, that you want to use the most popular SDK back then, which might be version two or 2.5 or whatever. So it helps to be very specific when you’re talking to these tools and understanding that it may be giving you outdated information.
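For illustration, a hedged example of the kind of version-pinned prompt Matthew suggests. The wording is hypothetical, not a Couchbase recommendation; the point is simply that naming the SDK generation steers the model away from pre-cutoff patterns.

```python
# Vague prompt: the model may fall back to whatever SDK was most common
# before its training cutoff.
vague_prompt = "Write C# code that saves a JSON document to Couchbase."

# Specific prompt: pin the SDK generation and the API shape you want.
specific_prompt = (
    "Using the Couchbase .NET SDK version 3.x, write C# code that upserts "
    "a JSON document into a named collection. Do not use the older 2.x APIs."
)
```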

Matthew Revell:

The promise of these AI tools is that they’ll help us to be more effective, we’ll get more done or we’ll save time. But if we need to babysit them and constantly check that they’re not hallucinating or otherwise making mistakes, are they actually saving time?

Colm Doyle:

Oh, it’s a hundred percent saving time. It’s like you can focus your energy on the bits that are specific to your API, that are like your knowledge. So if you’re writing a TypeScript sample, you could have a code sample that would have switch statements and different control statements and all these kinds of things. And like I said earlier, regex, and you can use all these things to prove it, but these are kind of rote knowledge and they’re common to all these kinds of different platforms. And of course if you’re an experienced developer, you don’t think a lot about writing these things, but with the likes of Copilot, you literally don’t have to think about it. You can just start to type out or type a comment to give it a prompt. Then you can just hit return, return, return, return until you get to the bit which is specific about the concept you’re trying to demonstrate in your API or the programming concept you’re trying to emphasize, which means you can focus in on that, right?

And you haven’t had to think about all the other stuff. You’re naturally saving time. Would you then spend the same amount of time maybe polishing and refining your code? Maybe. But in my experience, it’s just like I said about refining the concept that you’re trying to talk about, and I guess it leads to better content because you’ve been able to spend just that little bit extra on the bit you care about, as opposed to like, is this the most efficient control statement for this thing or am I importing this right, using whatever language best practices. I dunno if it’s there yet, but you can imagine a world, at the speed these LLMs advance, a world that arises very quickly, where you actually write your code sample in pseudocode and then a bot, a sort of build pipeline, spits it out in lots of different languages. That’s going to be really powerful for developer advocates.

Matthew Revell:

With so many new AI-based tools around, presumably there must be some AI tools that are built especially for the needs of DevRel teams. And there are. In this interview, recorded in the summer of 2023, Contenda CEO Lilly Chen describes an earlier iteration of their offering.

Lilly Chen:

Yeah, my name is Lilly Chen. I am the founder and CEO of Contenda. Contenda is a content catalyst platform for developer advocates specifically. It transforms videos into written tutorials with code samples and all those nice things. One of the big issues with a video transcript is that it’s not meant to be read. So as a human, if you give me a transcript, it’s very, very painful to read. It’s not a good experience. So what we do is we take that, and all the information that was extracted from the video, and we ask GPT, hey, could you convert this into something that’s more readable, into more of a blog post? And then we do some checks on our side to make sure that all of the content is accurate, it is correct, it matches the original author’s intent and voice, and that’s what we deliver.

Matthew Revell:

If Contenda uses AI to help us generate content, what about the community side of DevRel? Here’s co-founder and chief architect at Common Room, Tom Kleinpeter.

Tom Kleinpeter:

Yeah. Common Room uses AI in a number of different ways. We have sprinkled it in all the places we thought it makes a lot of sense. So we’ll do things like, just kind of basic, starting with things like sentiment analysis, where you have content flowing into your community. It’s nice to be able to tell what is positive, what is negative, and to keep metrics on those over time and just to be able to sort of see where things are trending in your community. That’s kind of one of the more basic things we do with it. We also start to categorize messages that come into a community, where you can tell it, just show me all the product appreciation we’ve seen recently, or show me all the bug reports, so it’s faster to go and filter the signal out of the noise. You have all this content flowing into a community.

You have people and you have content, and LLMs that can understand language give us an opportunity to extrude all of this into repeatable shapes where we can standardize things and then have metrics and reporting and sort of know how the sentiment is trending over time. We’ve gotten more bug reports over time, or we released some new feature and bug reports went down. So the first big thing we get is just better, consistent, repeatable understanding of the content that’s happening inside of a community. I think a second big thing we do with it is understanding the people in the community more. We use an AI backed merging system where we can identify that different accounts in your community are actually the same people, which gives us, or gives whoever’s running the community, just a better understanding of who is in their community. And if you want to correlate some happy comment on Twitter with some bug report that got fixed on GitHub, by being able to merge people together, we can do a better job of that. And merging is incredibly complicated. There are tons of signals and different ways you can look at this, and one of the things AI is really good at is just dealing with more information than humans can. And so we’ve built our merging algorithm on top of that and we’re pretty happy with it.
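A minimal sketch, for illustration only, of the kind of message triage Tom describes: classify each incoming community post into a category so counts can be tracked over time. The classify() function here is a keyword stand-in for an LLM call, and none of this is Common Room’s actual implementation.

```python
from collections import Counter

CATEGORIES = ["bug report", "feature request", "product appreciation", "question"]

def classify(message: str) -> str:
    """Stand-in for an LLM prompt such as: 'Label this message as one of CATEGORIES.'"""
    text = message.lower()
    if "crash" in text or "error" in text:
        return "bug report"
    if "love" in text or "thanks" in text:
        return "product appreciation"
    if "add" in text or "it would be great if" in text:
        return "feature request"
    return "question"

messages = [
    "Love the new dashboard, thanks team!",
    "The SDK crashes when I pass an empty list.",
    "Could you add a dark mode to the docs?",
    "How do I rotate my API key?",
]

# Tally categories so trends (e.g. bug reports over time) can be charted.
print(Counter(classify(m) for m in messages))
```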

Matthew Revell:

So that’s content and community. What about code?

Martin Woodward:

I’m Martin Woodward. I’m the VP of developer operations for GitHub. GitHub Copilot is an AI pair programmer, we call it. So it’s a coding assistant that lives inside of your code editor and helps you do your coding with code completion, or you can also kind of ask it things with chat functionality and things like that. As a DevRel person, you tend to be coding a bit, but you’re not coding all the time, but you know how to code, programmatic thinking, but you might be having to use a lot of different languages or maybe a lot of different frameworks to then show somebody how to code from where they are into your thing. That’s a lot of what DevRel work is, and that’s creating those code samples and things. That’s actually what Copilot is awesome to help you with in terms of code assistance, but also maybe you chat, say, hey, how do I do this in Python?

I kind of know how to do it in Java, but how do I do this in Python? And then it’ll give you an example, get you the framework, the skeleton, then you do the thing that you want to do and it can assist you there. So really it’s what it says: GitHub Copilot is a co-pilot. It’s not the thing that’s driving. And so for somebody who knows how to code, then you can be incredibly productive with it and it’ll allow you to write a lot of stuff and do stuff. And so for DevRel, where we’re flipping languages, that helps a bunch really, I find.

Matthew Revell:

While Copilot will help you in writing code, what about creating integrations? Here’s Martyn Davies from Superface.ai.

Martyn Davies:

So at the high level, Superface abstracts APIs into the business cases that you need to achieve when you are building applications as a developer. Those abstractions turn into what we call comm links, short for communication links. These comm links are structured representations of how an application needs to communicate with an API in order to achieve that particular use case, which might be sending an SMS or sending an email or get me this list of users or add people to this list in a particular platform. Anything that you might need to achieve on a kind of everyday or repeatable basis. Thinking about it in that use case form, Superface kind of breaks that down, turns it into structured tools that you can then use and continue to use inside of your application. So we’ve got developer specific tooling in the form of a CLI and an SDK that make creating and working with those comm links actually possible, but ultimately it’s AI that’s under the hood that’s responsible for figuring out what that communication needs to look like.

And it can do it simply by looking at the documentation for an API or an OpenAPI specification, and then it forms a plan that is represented as this comm link that then gets turned into code that you can use directly in your application via the SDK. So the aim is to create this unified developer experience for whoever’s implementing these use cases, and an interface that stays the same regardless of the provider you use. So if you’ve got multiple different email providers and multiple different communications providers, your interface through Superface would be exactly the same. So your inputs would be the same, your outputs would be the same, but you can swap out the providers just by generating new use cases. So it has an idea of how that API needs to be communicated with.

The idea here is that it’s less labor intensive to get up and running and you get a much faster time to Hello World, whilst presenting developers at the end with code they can control, not a chatbot that spits out an amalgamation of code that somebody else wrote at some point. In terms of how Superface could help DevRel folks, I think we’re offering another integration channel effectively. So especially for those API-first DevRel folks that love their OpenAPI specifications, Superface is a great tool to help demonstrate those APIs with a use case first mentality, which can really help, especially when you’re thinking about documentation and example creation. Superface could very well sit alongside your Node.js SDK or Ruby SDK as a pathway for developers to consume what you do in their own applications. So I think Superface is going to be a good tool for developer relations practitioners to employ and to deploy at some point.

Matthew Revell:

And then there’s the intersection of it all, between code, content, and community, and that is support. How do we offer better support, more timely support, more appropriate and relevant support to developers using our APIs and other tools through AI? Here’s Deepak Kumar from Doc-E to talk about their tool.

Deepak Kumar:

– help companies grow their developer adoption. So I have a dream that one day developers will not have to wade through pages and pages of documentation to be able to understand an API. With Doc-E, we are taking one step closer to a world where I can get, as a developer, the precise, accurate information as and when I need it to build the software that I really want to build. And we leverage AI plus human control in a big way to deliver that to our developers. Doc-E can help you at three places. Number one, at answering your community questions, which generally is outside of the groove where your company is operating. So this helps you with the expansion of addressing different concerns that people have, as we answer lots of questions, hundreds and thousands of them. At the end of the month we generate a developer pain points report, which essentially is your advocacy to your R&D and product team as to where your developers are struggling.

And third, we not only do advocacy, but we exactly help you create the content which helps the team in addressing those gaps. Because if people are asking questions, there is certainly something missing, right? A person would have gone on Google and other places, searched for at least 15 minutes, and they could not get help. That’s why they came to a DevRel person or a community. So we start with support, do the advocacy, and then help you with exactly filling the gap that we figured out in advocacy. Our model is to bring help where developers are. Since developers are in Slack and Discord, we bring you help where developers are as a bot for Slack and Discord.

Matthew Revell:

So we’ve heard some of the good things about AI tools for DevRel, but what about some of the trickier aspects? If you’ve spoken to someone who does developer advocacy on behalf of an AI tool, you might have come across the idea that sometimes it can feel like you’re working with a moving target. Here’s Martin Woodward from GitHub to share his experience.

Martin Woodward:

Yeah, GitHub Copilot could be a nightmare to do DevRel for if I’m honest, and the number of times you sort of try and get it to redo the same thing and do it in the same way and it doesn’t, because these coding assistant tools are non-deterministic. They have a certain amount of randomness in them, and remember they also are learning from what’s around you and what options you’re picking, and so sometimes it’s quite hard to get it to do exactly the same thing. The reason why is really interesting actually, because the way that these large language models work is they basically are predicting what’s the next thing you’re about to type, the next thing you’re about to say. It is a bit like that meme on social media where you start typing something like, this podcast is awesome because, and then you hit the first word that the autocorrect gives you every time, and then you get an amusing sentence sometimes.

But it turns out actually, when we’ve been doing research with these LLMs, that the most interesting sentence or the best result isn’t when you pick the most likely response every time, but when occasionally you pick not the most, but maybe the second most or the third most likely response, and that’s the randomness, that entropy, that they call temperature in a lot of these LLM models. So you set a temperature setting, and about 0.8 usually gives you a good setting for the usual kind of written prose type stuff, so that’s why it’s non-deterministic. But oh my goodness, when you’re trying to create a video: you get Copilot and you’re playing with it and you’re coming up with a good demo scenario and then it just gives you an amazing response.

Oh, that’s fantastic. And so you’re like, right, I’m going to record that. And so you rewind your demo, you try and do it again, and it either gives you a completely different response, or sometimes when you’re doing DevRel for Copilot, you’re trying to explain to people how it learns from you and how to prompt it better. So you’ll maybe get one response, which is okay, and you say, oh yeah, but how about I give it a better prompt? And so you give it a slightly better comment or something like that, and then it gives you exactly what you want. The next time you come to give it kind of what you want, it’ll often just shortcut straight to the answer you definitely wanted because it’s learning. And so that’s some of the hardest things about doing Copilot DevRel. I want to create actually a faux pilot, where it just kind of mocks, where it records real responses back from the service, and then I could repeat my keystrokes kind of thing and it plays it back. I keep thinking you need to do that so I can kind of replay some of these. But yeah, otherwise it’s good fun and it certainly speeds you up and you get a lot of those, oh my goodness, I can’t believe it just did that moments all the time. It’s amazing.
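As a rough illustration of the temperature Martin describes: before sampling, the model’s raw scores are divided by the temperature, so higher values flatten the distribution and second- or third-most-likely tokens get picked more often. The scores below are invented; this is a sketch of the idea, not any particular model’s implementation.

```python
import math
import random

def sample_with_temperature(scores: dict[str, float], temperature: float) -> str:
    """Softmax-style sampling: divide scores by temperature, then sample."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

# Invented scores for three candidate next tokens.
scores = {" the": 2.0, " a": 1.5, " an": 0.5}

print(sample_with_temperature(scores, temperature=0.1))  # almost always " the"
print(sample_with_temperature(scores, temperature=0.8))  # noticeably more varied
```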

Matthew Revell:

Let’s wrap up then with some thoughts on what you as a DevRel person might consider when it comes to AI tools, but also what the future for these AI tools holds, so you can start to plan over the next 12 to 24 months how you are going to incorporate them into your programs. One question that frequently comes up with really any new technology is, will it take my job? Here’s Lilly Chen from Contenda with her take.

Lilly Chen:

I would say that DevRel people should not be worried about the growth of AI and what it means for them. The value proposition for DevRel internally is that you bridge the gap between engineering, product, and then the user base. Writing tutorials is part of the way that you’re communicating to the user base, but it’s actually not the end job for you. There are lots of other things that DevRel people do that cannot be automated by an AI, I believe: communication. OpenAI put out a report about what are the skills that are threatened by AI and what will become higher in demand. One of them is critical reading and communication.

Matthew Revell:

Let’s wrap up then by considering what these tools might mean for the next 12 months, and we’ll leave the final word with Colm Doyle of Intercom.

Colm Doyle:

DevRel people tend to be on the bleeding edge of things. They tend to adopt new things, but sometimes that creates a jadedness in us. You know what I mean? Oh, this is a flash in the pan, it’ll go next. I think the only advice I’d offer is the LLM things are not a flash in the pan. People I respect have been like, no, this is on the order of the iPhone in terms of a technological shift. So I guess my advice would be to not treat it like a flash in the pan and ignore it, and give it the attention it deserves. Think about doing an audit of your docs, like point an LLM at your docs and see how it does. Just go into Copilot and GPT-4 and try to write something with your APIs right now and see how it responds. Because if it responds well, you’re doing something good and you should double down on it. If it responds poorly, then you might be missing out on an opportunity.