The importance of AI literacy in developer relations

Erin Mikail Staples
DevRelCon New York 2024
18th to 19th June 2024
Industry City, New York, USA

Erin Mikail Staples, DevEx engineer and stand-up comic, reflects on how a surprise AI job offer during a career gap forced her to get practical with machine learning fast. From toggling ML models at LaunchDarkly to building joke chatbots in her spare time, she shows how play and curiosity can build real confidence. Her key point is that you don't need to master AI, but if you want a seat at the table, you need to speak the language.


Key takeaways
  • 🧠 Learn the language
    Understanding basic AI terms helps you ask better questions and contribute meaningfully to technical discussions.
  • 🧪 Get hands-on early
    Playing with AI tools—even for silly projects—builds confidence and practical insight fast.
  • 📉 Don’t ignore the trend
    Over half of DevRel job listings now mention AI—baseline familiarity is becoming a hiring expectation.
  • 🤝 Make AI a team habit
    Use playful prompts or internal experiments to ease your team into regular, low-pressure AI use.

Transcript

Erin: Let's see if you can go ahead and click this forward. So you're probably here to learn about some AI stuff, right?

Audience member 1: Yeah

Erin: No, you're like, I'm here to learn about DevRel stuff. Yes, but this time we're doing DevRel with an AI twist. So let's talk. We're going to go over some common misconceptions about AI, why it's not going to take our jobs, and why sometimes I wish it would. Doesn't being on the beach sound so much better? Yes. Okay, all of you apparently like your jobs. Good for you, I guess. As Jon said, my name is Erin Mikail Staples. I am a developer experience engineer at LaunchDarkly by day. By night, I am a stand-up comedian. I have a monthly show at the Grisly Pear, so come hang out with me. I'm also in an AI comedy troupe, so I use some very cool skills like fine-tuning to roast people IRL. It's pretty cool.

So it's taken me really cool places. I've been featured in the New York Times. I'm in an upcoming documentary on Netflix. But more importantly, this helps me do my job better every single day as a developer experience engineer. And I know many of you in the room are probably wondering, why do I care and what does this mean to me? So who in the room has used AI, any tool? I'm not seeing everybody's hands go up. Who was like, holy cow, I absolutely hate this, this is going to take my job, this sucks? I appreciate the honesty, because I'll be real, I was there. Flash back a little over a year ago: I found myself unemployed. How many of you have been unemployed in the last three years? I found myself unemployed and looking for a job, and it was right after ChatGPT had come out. And I got offered a job at an AI company, and by AI company...

It was in the machine learning open source space. And I was like, I know absolutely nothing about machine learning. And I'll be honest, it was a really cool opportunity. The job allowed me to learn a lot about AI on the fly. I worked in the data labelling space. How many people understand what data labelling is? I love the number of hands that are slowly decreasing or increasing depending on the hate or love of AI. Love that. But today we're going to break down why it's important not only to understand the AI space, but how you can use it day in and day out. And because we love things simple here, I'll give you a fantastic three-step formula for starting to play around with it with confidence, and for being able to talk about it with confidence as well. So if no one in this room said they were a sceptic, I'd be a little bit concerned.

We've all read the headlines. We've seen the intellectual property debates. We've heard Scarlett Johansson kind of rant off in the world. We've seen it generate some pretty damning misinformation, and as a former journalist, this is kind of terrifying. It's natural, but I'd almost argue that it's more irresponsible to not get involved. We're at a crossroads where AI is really making an impact. A lot of the companies that you're hearing of are moving at a rapid pace. The investment dollars in AI have only gone up, and if we choose to stick our heads in the sand, that does nothing for us as individuals and nothing for the industry as a whole. Come on, do we really need 12 more white dudes in tech telling us what to do? I think we have enough of that already.

But there's also some really cool things happening. Now, if you've walked over to the MoMA recently, you've seen this, and this is now a permanent acquisition at the MoMA. And everybody's worried, they're like, why the heck do I care about AI and art? This is a beautiful piece. They took the API from the collections at the MoMA and made digital art with it, so it's remaking the collection, which I think is one of the coolest things ever. And it's now part of the permanent installation at the New York MoMA. So if you have a chance, go see this. It's right in the lobby. But it means that it is constantly regenerating.

Very cool. You could watch this for hours. I've sat in the MoMA and done exactly that. Nothing like tech conferences. But as much as I want to stare off there and wish that AI would hurry up and take my job so I could beach all day, ignorance isn't bliss. As previously mentioned, it's very irresponsible for us to be in this world of AI, with tools developing around us, and not understand the basics of how they work. So, experience with AI and machine learning: I've quoted some people who have higher pay grades than me, like Deloitte and the US government.

There's a joke here in itself, y'all. I'll read these from my notes here. We've got some pretty damning effects. 46% of employers would rather just hire new people who have an understanding of AI than train the existing ones they have. So those of you who've said you've never played with AI, I'd start being concerned. When I ran a quick search on Google Jobs, I took every single DevRel, dev education, and dev experience job, scraped it, and put it into a spreadsheet. As of Sunday, 52% of those jobs mentioned that AI or ML skills were a requirement for the job, or that they were interested and it was a bonus if you had that on your resume.

21% of employers, a little over one in five, believe that understanding AI and ML is an essential skill to have by 2030, and 35% of people have left a current job because of a lack of training. So with those stats in mind, how are we feeling? We have some work to do. And how many people don't work at an AI company? I will raise my hand. I work at LaunchDarkly. We're not an AI company, but as the joke says, every company is an AI company. We've all heard it from our investors, our higher-ups, someone else in charge probably. We've all heard the command. I see the joke on Twitter all the time. People are like, well, we really need to be an AI company, or, what is that new AI feature we're making?

I would argue that doesn't matter. Everybody in this room has probably used AI and machine learning and never even realised it. How many people took a Lyft in the last two weeks? Congratulations, you used AI. Lyft's engineering team is one of the largest adopters of AI right now, and they have this really great article, linked in the slide deck as well, from their principal software engineers. They mention that AI and machine learning are all around, and when they're done right, nobody even notices. In fact, if we go back to how we got here, you're probably very thankful for AI today. So again, this is where I'm cheating and looking at the notes. We're thankful for it for document analysis and declassification. How many people know what the Pentagon Papers are?

That was one of the government's first uses of AI in the workplace; it would not be possible without natural language processing. We have it to thank for healthcare, medical research, and medical imaging. Detection of cancer in scans has improved because of machine learning. Believe it or not, computers are better at detecting small changes in pixels than we are with the human eye. If anybody's ever direct deposited a check on their phone, congratulations, that's introductory computer vision. How many people have yelled at their Spotify algorithms? Machine learning is there. Content moderation, so we can all see what's happening on the platform formerly known as Twitter. Weather prediction. Wildlife conservation and ecology: at my last company, we had someone training a model on the sound of a bird. The bird was going extinct and made a very particular sound, so they trained a model to identify that sound, put speakers up in a forest, and actually found where this bird was living so they could protect its environment. Very cool, very real-life project. The Human Genome Project would not be possible without AI.

Audience member 1: Hello.

Erin: Also, how many people believe in aliens? Apparently, according to the research I did for this presentation, that's one of the largest places AI research is going right now. So you're welcome. Now how the heck did we get here? Well, it didn't all just sort of happen overnight, and this is where I'm going to want names, so I can cheat and look at my notes here. We can trace the origins of AI all the way back to 1949. The concept of machine learning was invented based on looking at pictures and slides of neurons. That's right, the physical things in your brain. And they said, if that could be happening within our brains, there's obviously some way we could do that with computers. So Dr. Donald Hebb came up with that, but we didn't see the term machine learning used until 1952, and that was by Arthur Samuel of IBM.

Now we're starting to understand we can run algorithms. Awesome. In the 1970s, we're starting to understand how this all fits together, so artificial neural networks were like, great, we can finally do that, 30 years later, thanks to computers figuring things out. In the 1990s, we took it to the next level, where computer vision and deep learning projects started to happen. And then the 2000s brought us foundation models, generative AI, and accessibility to those models at the everyday consumer scale. We finally have the power to work with these models on our own computers, let alone our phones. Very cool. Anybody else watching the Apple developer keynote and freaking out? This is simultaneously really cool and kind of freaky. But as we all can see, we need more GPUs. But why the heck does that even matter?

Again, the internet is really great. So I've got a soapbox incoming, and it's not just my soapbox. If you attended PyCon 2023, you've seen this soapbox coming as well from Margaret Mitchell. And this is actually from Dr. Timnit Gebru of Google: people need to understand how these systems are built and how they make decisions, to ensure trust and accountability. Again, if 52% of DevRel jobs in the future are advocating for the requirement of ML and AI, how the heck can we even ensure that we're teaching about these systems properly? If we know our employer cares about that, and they'd rather just find someone with that knowledge, how do we know we're the right person for the job? The best thing we can do for DevRel is get an understanding of why this is so popular. So again, leaning into this: autoplay is wonderful when it works. Thank you, Wix, for the power of AI. We just have to lean into the comedy of the autoplay ads here.

In order to understand and build a future of ethical AI, we need to know what we're up against. So what do we do about it? Who's ready to get your pens out? I see some people frantically taking notes. I see some people rolling their eyes at me. That's fair; it goes either way. We have a three-step formula to success. Not patented; please steal it. So, one: we've got to know the lingo. What are the terms we use in AI today? And I know it might be hard to see on the screen, so if you're close to a projector, feel free. We've got our core concepts. What is the difference between artificial intelligence, machine learning, and deep learning? Know it, understand it, read why. We've got to understand some key components. What goes into these things? That means: what is an algorithm? How does that differ from a model?

Is it different? We have to understand training data, testing data, features, labels. We have to know the processes that create these things. How does a model come to be? Now, we can run through all of this and it can look very overwhelming, and it can feel like a lot. You could look at this list and have not just three months but three years of study time ahead of you. And I'm not saying that you need to be an expert. I'm saying we need enough technical proficiency to understand what we're doing, have a conversation, and learn more. There's that whole problem when we're learning something new: has anybody tried to learn something new and you don't know what to Google? Look, I have a problem. What do I Google? It's a scary problem. I experienced it most recently going down the rabbit hole of learning and figuring out Angular for a project; my brain is spinning and I don't even know what to Google. And if we don't know what to Google, how do we know how to have a conversation with the stakeholders in the room and say, maybe that AI feature is a terrible idea?
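To make that vocabulary concrete, here is a toy sketch, with all data invented for illustration, of training data, features, labels, a "model", and a testing set. The "algorithm" here is a deliberately simple 1-nearest-neighbour classifier, chosen only because it fits in a few lines, not because it reflects anything from the talk:

```python
# Toy illustration of the vocabulary: training data, features, labels,
# a "model", and testing data. The classifier is 1-nearest-neighbour.

def train(training_data):
    """'Training' for 1-NN just means memorising the labelled examples."""
    return list(training_data)  # the model *is* the stored data

def predict(model, features):
    """Label a new point with the label of its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda example: distance(example[0], features))
    return nearest[1]

# Each example is (features, label); features here are (height_cm, weight_kg).
training_data = [
    ((150, 45), "small"),
    ((185, 90), "large"),
    ((160, 55), "small"),
    ((190, 95), "large"),
]
# Testing data is held out to check the model on points it never memorised.
testing_data = [((155, 50), "small"), ((188, 92), "large")]

model = train(training_data)
accuracy = sum(predict(model, f) == label for f, label in testing_data) / len(testing_data)
print(accuracy)  # 1.0 on this tiny, hand-picked test set
```

Even a toy like this makes the lingo discussable: you can point at which part is the algorithm, which part is the model, and why training and testing data must be kept separate.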

But again, if we don't have the knowledge, how do we even get there in the first place? Now, step two, and some of you are already here: get your hands dirty. If I see another tweet on the internet from someone in DevRel who's like, "I'll never touch AI," I'm very disappointed, because again, how do we know what we're up against if we haven't even given it a spin ourselves? Make dummy accounts, I don't care. Ask it to make 20 pictures of your dog. Learn by doing. Get your hands dirty. Play. You can use Hugging Face or OpenAI. There are tools like AssemblyAI, Ollama, ElevenLabs, Midjourney. Get your hands dirty, try something new. And guess what? I know there are multiple people in this room who work at AI companies or AI-adjacent companies. They're probably more than happy to teach you.

So again, get your hands dirty, start playing around with it, understand what it means to be in this world, and then go make something yourself. So I've curated a list of fun open source projects, repos, and ideas to get started. There is a reality TV database in there; have a great time, that one is actually using some fun stuff. This will take you to my GitHub, where I have pinned links on how to get started, and they're also linked at the end of the presentation. So try making something. I used a model; how can I use it? What's next? What's the next adventure? How do I build something? Where do I go from here? Play around with it. Try to build something. Can I put an AI model in this? Does it actually help? Maybe this is the worst idea ever. Did I build something really terrible? That's great, I love it, do it more. And that's all I have: actually, get dirty with it. So thank you. We've got a few moments for questions here.

Jon: Thank you, Erin. That was a great talk and great jokes. As I said, Shai is going to be running around with questions. Looks like we have our first one from Maria up front.

Audience member 2: Hello, as the resident accessibility queen, I have a question. So how do we as developer advocates use our abilities, now that we understand AI and everything, to actually influence engineering not to implement bias into their models? How do we advocate for that?

Erin: This was my favourite question. So previously, before I came to LaunchDarkly, I was in the data labelling and data integrity space. And if you understand how models are built, we all have data that makes a lot of assumptions. So we want to make sure that we are cleaning our data and providing the right data. But if we don't have the right conversation, we don't have the right lingo. Like, hey, are we using a model off the shelf? Are we fine-tuning? Are we building our own model? If we don't have the right tools or metrics in place, how are we testing this model for safety? Who has access to this model? What are the impacts this model will have? We can't be in the room; it's hard for us to even get in the room. How many times have we gone to engineering and been like, I have this question and concern, and they're like, no, no, be gone? But it's really important to have the technical skillset and comprehension to understand why, so we can influence those decisions. Margaret Mitchell has a really great talk on this at PyCon about why it's so important, and it's something that I reference a lot. And then there's DAIR, which is a nonprofit group founded by people who are doing really great stuff. Thanks for asking a question. Let's see if I can test my throw. Oh, good thing she's in computers, not baseball. Softball.

Jon: Looks like we have another one in back.

Erin: Oh, making me really lob the socks down there. Okay.

Jon: Yep.

Audience member 3: Me, I'll take 'em.

Erin: Okay.

Audience member 3: I was just curious if you had any use cases that have come up in the course of doing your job for this technology. I mean, I'm thinking really more of the LLM space, less the other types of machine learning.

Erin: Yeah, totally. My most recent demo, if you go to launchdarkly.com/blog, is actually on how to use toggles and switches to turn off machine learning models. So it's like, hey, I don't want that machine learning model? Kill it. I don't trust myself throwing all the way back there. Look at that, we even got a catch. It talks about how to toggle between two different types of models, or between a model and none. So think about the different applications you can come up with that are really fun. With LaunchDarkly, I'm switching between two models in an A/B. Other use cases fall under my goofy "Erin does weird stuff on the side to learn" category. I'm building a chatbot that takes the form of different famous people. So I have an Elizabeth Holmes chatbot that gives you terrible startup advice. It's a stupid project, but I was just like, hey, I want to see if I can build it, because why not? Let's try building something new. I also have an Andrew Huberman dating advice bot. These are dumb things I've built just to learn. Are they practical? No. Do they give me a laugh and teach me something new? Yes. I'm currently working on a browser extension that has an LLM in it, so it's like, drop a picture in and it gives me alt text. It's not finished, but these are just fun ways of learning.
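The kill-switch pattern described here can be sketched in a few lines. This is not the LaunchDarkly SDK; the flag store, flag name, and both "models" below are invented stand-ins to show the shape of a feature flag wrapped around an ML call:

```python
# Minimal sketch: a feature flag as a kill switch around a machine learning
# call. Everything here (flag name, flag store, models) is hypothetical.

def get_flag(flags, key, default=False):
    """Stand-in for a feature-flag client's variation() lookup."""
    return flags.get(key, default)

def fallback_model(text):
    """Cheap deterministic fallback used when the ML path is toggled off."""
    return "neutral"

def ml_model(text):
    """Pretend sentiment model; in real life this would be an inference call."""
    return "positive" if "love" in text.lower() else "negative"

def classify(flags, text):
    # If the flag is off (or missing), skip the model entirely.
    if get_flag(flags, "enable-ml-sentiment"):
        return ml_model(text)
    return fallback_model(text)

print(classify({"enable-ml-sentiment": True}, "I love this"))   # positive
print(classify({}, "I love this"))                              # neutral
```

The same shape covers the A/B case she mentions: evaluate the flag, then route the request to model A or model B instead of model-or-fallback.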

Also, everybody here: LLMs are your secret hack for alt text. Yes. Ollama on your desktop for alt text? Game changer.
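As a sketch of that alt-text trick: a local Ollama server exposes a `/api/generate` endpoint that accepts base64-encoded images for multimodal models. The model name (`llava`) and the prompt are assumptions for illustration, and the helper below only builds the request body; no network call is made:

```python
import base64
import json

# Sketch: build the JSON body for Ollama's /api/generate endpoint using a
# multimodal model. Model name and prompt are illustrative assumptions.

def alt_text_request(image_bytes, model="llava"):
    """Return the JSON body you would POST to http://localhost:11434/api/generate."""
    return json.dumps({
        "model": model,
        "prompt": "Write one concise sentence of alt text for this image.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # request a single response instead of a token stream
    })

body = json.loads(alt_text_request(b"\x89PNG fake image bytes"))
print(sorted(body))  # ['images', 'model', 'prompt', 'stream']
```

In practice you would read the real image bytes from disk, POST this body to a running Ollama instance with the model pulled locally, and take the `response` field as your draft alt text to review by hand.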

Audience member 1: Yeah.

Erin: Here again, software, not softball.

Audience member 4: Hi, really enjoying your talk. I'm one of those who actually does work for an AI company, but I'm curious what you see. I know you're encouraging everybody to start adopting AI techniques and apps to use in their daily life, but what are some other incentives that might be good in helping encourage your team members to adopt them?

Erin: For me, one, thank you. And if anybody has great answers, please share them, because I don't have all the answers; I'm just learning along the way myself. And I think that's part of the process, admitting you don't know everything. But adopting it, a lot comes down to how you go at it. Sometimes I feel like people go at it with a very specific "I want to achieve this." Or they're like, wow, that blog post that I asked ChatGPT to write is terrible. You're right, it is terrible. You probably shouldn't be doing that. But what are other things? I love that at LaunchDarkly we're using Suno AI, which is a song generator, and sending songs back and forth as almost an internal joke. We're like, let's make a song about this PRD. Funny, ha. It's a funny, silly way in, but it's exposing everybody to using it, and it's getting you practise in what it can do and what it does really badly.

And embracing how badly it does things is part of the learning journey. Another really fun example, I actually stole this from someone: I have a friend who works at Sourcegraph, Justin, and he takes the stats from his DevRel metrics, copies and pastes them into ChatGPT, and tells it to write a summary, because he's like, I hate writing summaries, and this is a five-page PDF doc. Yeah, I did the five-page PDF doc, but I don't want to write a paragraph summary. Alt text is great. Finding little tiny examples, making things very bite-sized, and focusing less on the end product is a really great way to encourage that adoption. Don't be afraid to get silly with it.

Jon: Alright, well thank you again to Erin. You can find her if you have more questions throughout the conference. Thank you.