Principles of developer experience

James Ward
Ray Tsang
DevRelCon Earth 2020
30th to 10th June 2020
Online

Google's Ray and James argue that developer experience (DX) is 100% about developer productivity and can be measured in terms of value and time. They share the four key principles that help guide them to deliver value and reduce friction to increase productivity.


Transcript

James Ward: I am James Ward, developer advocate on Google Cloud, and we also have Ray.

Ray Tsang: Hello, I'm Ray. I'm also a developer advocate on Google Cloud.

James Ward: So this presentation is about the principles of developer experience. Let's start with the cheesy dictionary definition of experience and then apply it to developer experiences: the practical contact with and observation of facts or events. Developers have a lot of that every day, all day.

And some examples of that are: set up and configure a service via a command line or a UI; use a library or an API; read documentation; learn something; try something; troubleshoot a problem; make something work. So we have a lot of developer experiences, and usually when we talk about them, we like to ask, is it a good developer experience? But what does it actually mean to be a good developer experience? Here's an example of me having a bad one. You can see in my series of commits that things are not working. I'm having a hard time, I'm struggling, and that's a good indicator that I was not having a good developer experience with what I was working on here.

Ray Tsang: And here's a different type of experience that's relatively good. This is a tool called Error Prone.

It's a tool that helps you identify bugs and issues upfront and early in your Java code. So it's going to fail the build if it detects something bad happening in the code, and you can see the error message. If you click into it, there's a link that takes you straight to a page, and once you go to that page, it tells you exactly why this is a problem and how to actually fix it. So it fails very fast, you can go and figure out how to fix the problem from the documentation, and it has a lot of the reasoning behind the scenes to explain everything, so you understand why you need to do this. That's a pretty good experience.
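For readers curious what wiring Error Prone into a build looks like, the Maven setup is roughly a couple of compiler flags plus an annotation processor path. This is a sketch; the version number is illustrative, and newer JDKs may need extra flags, so check the Error Prone installation docs:

```xml
<!-- Sketch of enabling Error Prone in Maven; version is illustrative -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <arg>-XDcompilePolicy=simple</arg>
      <arg>-Xplugin:ErrorProne</arg>
    </compilerArgs>
    <annotationProcessorPaths>
      <path>
        <groupId>com.google.errorprone</groupId>
        <artifactId>error_prone_core</artifactId>
        <version>2.23.0</version>
      </path>
    </annotationProcessorPaths>
  </configuration>
</plugin>
```

With this in place, a flagged check fails the compile with a message that links to its explanation page, which is the fail-fast behaviour Ray describes.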

James Ward: Yeah, definitely.

Here's a frustrating developer experience that I had recently. I downloaded the latest version of Node for Linux and it gave me a tar.xz file. I don't know what an xz file is. I don't know how to extract an xz file. That is not something that I'm familiar with. And what's interesting is that I did a Google Trends search to see how many people were having challenges extracting xz files, and when Node switched from tar.gz, or whatever it was before, to a tar.xz file, you can actually see on Google Trends that a lot more people started searching for how to extract an xz file. And it's something that I do so infrequently that every time I go to extract the latest version of Node, I have to go look up the command line to extract an xz file. So that was a frustrating developer experience.
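For anyone hitting the same wall: modern GNU and BSD tar detect the compression from the file itself, so you don't actually need to remember the xz-specific flag. A quick sketch (the archive name here is just an example, not a real Node release):

```shell
# Create a small .tar.xz, then unpack it without naming the compression.
mkdir -p demo && echo "hello" > demo/file.txt
tar -cJf node-example.tar.xz demo   # -J selects xz when creating
rm -r demo
tar -xf node-example.tar.xz         # no compression flag needed to extract
cat demo/file.txt
```

So `tar -xf whatever.tar.xz` is usually all it takes, which is exactly the kind of detail that's hard to discover when the format is unfamiliar.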

Ray Tsang: And on the contrary, we have this other experience with Spring Boot. Spring Boot is one of the most popular Java frameworks today, and it hasn't always had a great developer experience, seriously. And now everyone is really liking it and using it a lot. And if you go to their site, which is start.spring.io, it's a very easy, simple-to-use UI, and you can add dependencies and configure the code bootstrapping your application very quickly, and then just click on generate. And once you do that, it's actually a zip file, so you don't have to think twice about how to unpack it, unlike the xz file that James had to deal with.

James Ward: Yep, that's a good developer experience. Okay, let's go back here. Click that link, click that.

There we go. Okay, so that was a good developer experience. Okay, so we've given some examples, but so far the way that we've talked about these developer experiences are all about how they feel. It feels good, it feels nice, it feels horrible, it feels difficult, frustrating, all about feelings. And is developer experience really just all about feelings? Is that the only way that we can talk about it and categorise it? So Ray, Ray has a story to tell us about how he came to a different mindset around this.

Ray Tsang: So this is a story where I was so frustrated with one of the developer experiences on one of the products.

I'm not going to name which one, but you might be able to guess later. But I was so frustrated with this product, and I was at this unconference in Crete, an unconference that had maybe a hundred people there. We were talking about technology and stuff, and we had these excursions, and on one of the excursions we drove to this 2,000-year-old olive tree on the island. I had a car, so I took three people with me in the car and asked them about the experiences I was going through. I was very frustrated, I was very unhappy with the outcome of it. So I asked them a simple question, which is the next slide, on the way to this olive tree.

I ask them, Hey look, if you have an application that starts out in two seconds locally, but it takes 21 seconds to start in a cloud, what do you think about that?

Are you not going to be frustrated, or what's your thinking process here? And I really thought that everyone was just going to say, well, yeah, that's pretty bad. But to my surprise, one person said, it depends. And I almost stopped the car and asked the person to get out. But no, I didn't do that, of course. And so I asked him, why, what does it depend on? Well, this person said, well, it depends on which one makes me more productive. It depends on, for example, those 21 seconds that it takes to deploy and run.

Maybe that's a production environment that this person may have to spend otherwise hours and hours to create. And so compared to the hours or days that this person needs to spend in the environment, 21 seconds compared to that is nothing.

And it would actually be very productive if that's the case. And that's when it occurred to me that the developer experience I was thinking about was maybe not the same as what other people were thinking about in reality, which is all about productivity. And then I started thinking about, well, what does productive mean? What does it mean to be productive? And we kind of thought about this and said, well, maybe it's just how much value you get over how much input you give to produce that value. And the simplest form of the work that we put in is maybe something we can measure in time.

And so for developer experience, what I'm going to say is that, well, maybe it's productivity that's measured in value over time, and the value is really the things that are really meaningful for the developer.

For example, going to production, being able to accomplish some features, being able to adopt certain functionalities, and they have certain expectations on what they should be getting from your product. And the time is really just the steps and the amount of time it takes to achieve that goal. So it's a ratio. If we think in these terms, then what we can say is, well, sometimes I've seen quick start guides where it's a single page or very few steps, maybe as simple as clone and then deploy. The process is very quick, but it doesn't really get me the information that I need to reach my goal, in which case it's actually not very productive. On the other hand, because it's a ratio, if there are many steps involved but the time is actually relatively low, and the value that you get out of it is significantly more than the others, then it's actually pretty productive.

And the worst that can happen is that you spend a lot of time trying to accomplish a goal that's valuable to you, and you couldn't get it done, in which case the value that you achieve is zero and you have zero productivity in that time. Cool. So in that case, productivity is probably totally measurable, and what I'm going to say is it's probably not measured by lines of code. It's really measured by the value that you get out of the work that you have done.

James Ward: So with Ray's insight, we were like, okay, now productivity, and thus developer experiences, can be measured. But wouldn't it be nice if we had some categories or helpful ways to measure them that weren't just one-dimensional, but provided different dimensions to think about them? And so we were like, okay, let's think about all the good developer experiences that we have, all the traits of good developer experiences. So we brainstormed a bunch of different good developer experiences and wrote down a giant list.

And I think it was even much longer than this, all the pieces of good developer experiences. And then we started to see some patterns form. So we were able to take that list and reduce it down to four core principles of developer experience. Let's start with principle one. Ray?

Ray Tsang: Keep in mind that these principles are also aligned with the time and the value axes as well. Even though there are a lot of these things that we think are good or not so great, we were able to align them and reduce them down to just these four things. And the first one is: respect the developer's knowledge and goals. This has a lot to do with the value that you're providing. Imagine if somebody comes to your documentation, or starts using your product, or tries to use it to develop their own application, and it doesn't really support what they already know.

Then the developers might feel out of place and they have to relearn, and now there's a learning curve that's significantly steeper. But if we start to respect the developer's knowledge and goals, we'll be designing the products or documentation around that knowledge and also the valuable goals that they want, so that when developers see it, they'll pretty much straight away say, yeah, that's exactly what I want, let's go, let's do this.

Right? So I'll show you a very good developer experience that we think embodies this principle pretty well, which is GitHub. So in GitHub, one of the things they have done, which we just realised is so subtle that I didn't even realise it until we were preparing this talk, is that if you have a Git repository, GitHub understands what language it is using. So for example, I have a GitHub repository that has Java code, and if you want to create a GitHub Action, it not only automatically detects that it is a Java repository, but it also presents you with some templates that use commonly used Java build tools, in this case Maven, Gradle, and Scala with sbt. Those are the three things that I'd ever need as a Java developer.

I know these tools, I can pick one and I can get started, and that's a very good demonstration of just respecting the knowledge that I already know.
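The suggested Maven template is, roughly, a workflow along these lines. This is a sketch of the kind of file GitHub proposes, not a copy of it; the action versions and Java version are illustrative:

```yaml
# Sketch of the kind of Java CI workflow GitHub suggests for a Maven repo
name: Java CI with Maven
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - run: mvn -B package
```

The point of the principle is that a Java developer recognises every line here, so picking a template and committing it takes seconds rather than a research session.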

And so there are some things that we can measure around this, right? In this case, the measurement is really: maybe we need to ask people questions once they have gone through the experience. We need to understand, did this experience achieve your goal, yes or no? Was the experience consistent with the way that you have been doing things, yes or no? Did you learn something useful?

Because that's probably one of the valuable goals that the developers may want to achieve. And the best way to get these answers is just to ask the questions straightforwardly and have the users respond, so you can really understand whether you are actually doing this and whether they are able to achieve their goals successfully.

James Ward: Cool. So that was the first principle. The second one: do the simplest thing that could possibly work. I'm an engineer and I definitely love to build Rube Goldberg machines and have very complex, impressive solutions to things. But as we're trying to find that nice ratio between value and time and provide the best productivity, it's nice to have our experiences lean on the side of doing the simplest thing that could possibly work. And I'm sure we have all encountered lots of experiences where you dive in and all of a sudden there are like 10 different things being thrown at you at the same time, and there's too much to learn, and you're presented with a decision that you have to make, that you don't understand how to make yet. And so really what we should try to do with our experiences is do the simplest thing that could possibly work.

And so I want to show you an example of that, which is something called Cloud Run Button.

And so with Cloud Run Button: we have something called Cloud Run, where we can run apps on the cloud. And as a developer, a lot of times I just want to kick the tyres on something new, give it a little run-through. I want to be successful quickly. I don't want to have to make a bunch of decisions, I don't want to have to learn a bunch of things. And so what I did was make Cloud Run Button, which allows us to go from a GitHub repo to deployed and up and running on Cloud Run. So I can just click this Run on Google Cloud button on this repo.

This is a really simple one. We have it on a bunch of different repos, and you can put it on your own repo if you want.
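Adding the button to a repo is a one-line README snippet. This is the shape of it; check the Cloud Run Button project for the current image and target URLs:

```markdown
[![Run on Google Cloud](https://deploy.cloud.run/button.svg)](https://deploy.cloud.run)
```

That single line is the whole integration surface for the repo owner, which is itself an example of the principle.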

And it's asking me to confirm that I'm sure I want to proceed, and now it's going to go into Cloud Shell and automate a bunch of things that I may not understand are pieces of deploying an application on Cloud Run. So it's now cloning the Git repo; I didn't have to learn Git to do this, it did that for me. It's asking me which GCP project I want to use; I'll select that one. And then the only real decision I have to make is where I want to deploy this thing. And I would actually love it if we didn't have to ask the user, if we had a good default, because at this point I may not know or care where it needs to be deployed.

So this could actually maybe be even a little bit simpler, but I'm going to pick us-central1, and now it's walking me through the steps that are needed to take this repo and get it up and running on Cloud Run.

So it's doing the Docker build for me; it can also do buildpacks or Jib as well. It's built the container image, it's pushing it to the container registry, so it's telling me what it's doing. It's not obfuscating or hiding the steps from me. And then it's deploying it up on Cloud Run. So I really didn't have to make very many decisions to go from source code on GitHub to this thing now up and running on Cloud Run, and we can go check and make sure that it's all working. So there we go. That's the simplest thing that could possibly work. But you may be wondering, okay, that's great that we try to do the simplest thing that could possibly work, but not everything can be simple like that.

Not everything should be simple like that because that may not actually provide the value that the user is looking for or help the user achieve the goal that they're trying to achieve.

And so hang on for the next principle if you're thinking that. But before we do that, let's talk about ways to measure this particular principle. This one I think is pretty straightforward to measure. It's looking at an experience and asking: how many unrelated tasks did the user have to do? How many CLI tools did they have to install on their machine? How many pieces of prerequisite knowledge or prerequisite software did they have to have? How many steps did they have to go through?

How many snippets did they have to copy? How many times did they have to search for what they're looking for? How many docs did they have to read? How many clicks, context switches, mistakes, wasted effort? Decisions are a big one: I get paralysed when I reach a decision in a process and I don't know why or how to make that particular decision. So by measuring that, we can see whether we are actually doing the simplest thing that could possibly work. But keep in mind we need to keep these experiences relevant to principle one, the goal that the user is actually trying to get through. So that was principle two; on to principle three.

Ray Tsang: So yeah, doing the simplest thing that could possibly work is definitely great to get started.

If I want to learn something new, I have a goal, and I should be able to reach that goal without learning a lot. However, as I progress and bring the system into production, the simplest thing that worked may not necessarily be the thing that I can bring to production. I may actually have to integrate with other parts of the system, turn that experience into something that's internal to my own company or the teams that I'm working with. And that's where you usually see the picture on the left, which is: I'll give you something that's very simple, and then you have to fill in the rest and draw the rest of the owl afterwards. And that is difficult. What we should be able to do is allow you, the developers, to find the relevant information when you're ready for it, or take you through the different steps so that you can create the final picture you're looking to create, the one that is valuable to you.

And this is what we call learning should be incremental. We should be able to allow you to find the information as you progress through your learning process, too.

And so one of the examples that does this reasonably well, we're going to use Docker. Docker is one of the most popular container tools; it kind of brought containers to the masses. And I believe that's partly because they have a great experience and their documentation is also good. They can help you get started really quickly. But then on the left-hand side of the nav, you can also notice that, okay, great, I can run a single container, I understand what's there.

I understand the difference between a container and a VM. Now I want to build my own. And so they have the next step to show you how to build your own.
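That "build your own" step can start very small. As a sketch, a minimal Dockerfile for a prebuilt Java app might be three lines; the base image and jar path here are assumptions for illustration:

```dockerfile
# Minimal sketch: package a prebuilt Java jar as a container image
FROM eclipse-temurin:17-jre
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Built with `docker build -t myapp .` and run with `docker run myapp`, it's a small, self-contained next step after running someone else's container, which is the incremental shape the docs follow.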

And the learning is incremental in this way, so that when you're ready to go to the next step, there's always information there that can help you get to what you need to do. And there are some ways to measure this as well; we believe that all of these things are measurable. In this case, incremental learning, we should be able to measure: were you able to find the information easily and quickly?

And there are two parts to this. One is, if you try to find the information on Google or Stack Overflow, are you able to find that piece of information on the first page, or even in the first search result? Secondly, it's about the navigation in your documentation, in your docs. Are you able to help the user understand the next step they can take to continue their learning journey? Are they able to use the left-hand navigation to find additional details or reference documentation that really digs into the details, as necessary for someone to take it to the next level? And another question you can ask is, do you know where to go next to continue learning? Are we doing this?

Are we showing this to you so that you can make that choice?

James Ward: All right. And then the fourth and final principle, before we get into some more applicable ways to work with these principles, is that wasted time is a waste. As developers, I think we spend a lot of time just waiting for compilers, or waiting for things to happen, or copying and pasting things that it seems like we shouldn't have to copy and paste. I think we all have a lot of examples where we waste time unnecessarily. And so as we're creating developer experiences, we need to keep this in mind. Can we fail fast? Can we use caching in places to provide a better experience?

There's a lot of places and a lot of room for improvement in this particular world. So I won't give you an example of that one. I think it's pretty straightforward, and I won't waste your time, but let's talk about how to measure this one.

So this one's also pretty straightforward to measure. We can just ask the user, did this take longer than it should have? How much time did it take versus how much time you expected it to take? Do you feel like we've wasted your time? Or how much time do you think we've wasted?

And then, what prevented you? A lot of times wasted time comes in as: this didn't work, and so then I had to go spend hours on Stack Overflow trying to figure out the missing piece of why this didn't work. We'll talk in a little bit about ways that we can actually instrument documentation to track this kind of stuff, but it sounds pretty straightforward to measure. So to give a quick recap: principle one, respect the developer's knowledge and goals.

Ray Tsang: Principle two, do the simplest thing that could possibly work.

James Ward: And three, sorry, Ray, I did those backwards. Mixing it up, okay, keeping people on their toes.

All right. Principle three, learning should be incremental.

Ray Tsang: And the last one, which is the wasted time is a waste.

James Ward: Okay, so now let's talk about how we can practically use these principles. I think many of us at this conference are in developer relations, so how do we help product management and engineering teams deliver better developer experiences? We have a couple of techniques that we use, and we'll go through those.

So the first one, which you may have heard of, is a technique that Google developer relations and other folks use called friction logs. A friction log is: when I start using something new, and I'm on Google Cloud, so when I start using something new on Google Cloud, I just take a log of everything that I do: everything I try, every piece of documentation, every Google search, every command I type. I just keep a log. And as I'm going through this experience, I have a goal in mind.

So in the friction log, I state what the goal is, and then I start logging. But then we have a way to colour code the log to provide some visual feedback as to how the experience was.

And so we colour code with green, orange, and red. These are all subjective, feelings-based. So we can say, oh, this was awesome; or, I was annoyed or frustrated at this part; or, oh, I'm so angry I would quit over this. And you can see in this particular friction log that I wrote recently, there was some frustration there. So I was able to send this friction log off to the team and hopefully help them create a better developer experience going forward. The colour coding is subjective; it's how I feel about things in the context of my friction log and everything that I did.

But what Ray and I recently did was take our principles of developer experience and add them into our friction log template. So now we can provide a snapshot, a TL;DR, of the experience related to productivity, value over time, using the principles of developer experience.

So what we do is just have a grid with each of the principles, and then we also use the colour coding as a way to indicate how well the experience did, how well I was able to meet my goal, and the challenges that I ran into. So this is just a way for us to categorise the experience that we had into the principles, with the hope that that's more actionable. Because if I were just to say, oh, I struggled in this part, then it may not be something that's actionable or that helps us actually move that developer experience forward into a better place. And so hopefully this will be a tool for the product management and engineering teams to have more directed information for how to improve the experience. So that's friction logs, but we can also collect metrics.
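A minimal friction log skeleton, following the structure described above, might look something like this. The entries and product are invented for illustration, and the bracketed markers stand in for the green/orange/red highlighting:

```markdown
# Friction log: deploying a sample app to <product>
Goal: go from a fresh checkout to a running service.

[green]  Found the quickstart from a single search.
[orange] Had to install a second CLI that the docs only mention in a footnote.
[red]    Deploy failed with an error message that links nowhere.

## Principles scorecard
| Principle                                      | Rating |
|------------------------------------------------|--------|
| Respect the developer's knowledge and goals    | orange |
| Do the simplest thing that could possibly work | red    |
| Learning should be incremental                 | green  |
| Wasted time is a waste                         | orange |
```

The scorecard at the bottom is the part that maps the subjective colour coding back onto the four principles, which is what makes the log actionable for a product team.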

Ray Tsang: So the other way to measure this set of things, if you're doing it after the fact, is that you can measure documentation. For example, how did the user find your documentation? How did they rate your documentation?

But typically this is what we see on a documentation site. You have a way to get feedback where the user might rate it, in this case one star, which is completely unusable. If a documentation page is unusable, I probably won't even spend the time rating it, to be honest, but a lot of people might just walk away. It's nice that some people will give you this feedback, or, on the other end of the spectrum, that it is excellent documentation, and I hope that every documentation page gets five stars. But then we also have something in the middle. What happens to these middle stars? In this case, if you have three stars, it's okay documentation.

What does it mean to be okay documentation? Why isn't it great?

Why isn't it excellent? And what makes it not a hundred percent unusable? But still, why isn't it great documentation in this context? There's no way to understand the documentation in terms of how valuable it is to the users and how much time the users may have spent on it. So one of the things to think about here is that maybe, in addition to just rating the documentation with stars, you can ask the questions. If somebody spends the time to rate a page as not useful, ask the question: why is it not useful? Go back to the measurements and ask the yes-or-no questions, and hopefully we can categorise the reasons why certain documentation is not great. In some cases, they allow users to send feedback.

But again, it's freeform feedback.

Users can type whatever they want, and, I don't know, sometimes I just write, well, the documentation is not great, but again, that's not very actionable in terms of why that's the case. Tell me more. And so the easiest way is to just ask the questions. And another idea that we had: if I'm going through documentation that has many, many instructions, sometimes I might just want to copy and paste the instructions, and some documentation has this copy-and-paste icon. You can potentially measure that, and you can know how many times people copy and paste. And if a user copies and pastes their way halfway through the documentation and then stops and never accomplishes the rest of it, maybe there's something wrong with the documentation, with those instructions. Maybe they're stuck. In which case, it's also great to ask the questions: why are you stuck?

Where are you stuck? And you can see some documentation does better than others. And the last bit of it is, of course, that both of these things, the friction log and measuring the documentation and stuff like this, happen after the fact. This all happens after the product has been produced. I truly believe that in order to have a great developer experience, we need to do this from the start, right? Work with the different teams upfront and early, and instil some of these concepts into the product design itself, as part of the DevRel teams. That's what we should be doing, making sure that the products are being developed in a way that provides a very good developer experience.

James Ward: All right.

And now I think we have some time for questions.

Tamao Nakahara: Excellent. Thank you so much. So I'll be monitoring the Slack channel, but Dave, do you want to start out with some questions? I've got my own, but I won't take up all the time.

David Nugent: I also, I just want to make sure people know that when I go like this, I'm not rolling my eyes at people, but I have a really tall monitor and my questions are way up here. Well, I love the talk and I was tweeting about it like crazy. But I was also curious because you mentioned working with other departments, but you also mentioned effective metrics, and I was wondering how important is it to push back on metrics that don't focus on developer goals given that different areas of the business may have different metrics already in place?

James Ward: Yeah, it's easy to measure the wrong thing, or the thing that doesn't matter. So how do you convince people to measure the right thing? That's hard.

Ray Tsang: I think it takes some experimentation. When people want to measure something, I'm sure there's a reason for it; it may not be a reason that I understand or agree with. But in terms of developer experience, there are some measures that we should be taking to understand this. And I think this is a field where we're kind of pushing the boundaries, so it takes some experimentation to validate some of these principles and measurements as well.

David Nugent: It's an interesting narrative because you both mentioned it and then also Charles Pretzel mentioned previously working with different departments. And then looking all the way back to Modi's talk a few hours ago, he mentioned moving from being in engineering to being in DevRel and having a lot of buy-in at the company.

And so I'm curious if having that buy-in from different groups actually allows you to focus on more effective metrics and have a more effective team.

James Ward: I think what we have the freedom to do a lot of in Google Cloud DevRel is to build something that we think will be better than what exists currently. And so we are able to take a show-not-tell approach, and that's what I did with Cloud Run Button: I was like, there should be a better way to go from a GitHub repo to running on Cloud Run. So I just went off and built it, and that's cool. And this has actually now impacted the product. Cloud Run Button is not an official product, but, I think influenced by Cloud Run Button, they took one of the steps out of the getting started experience for Cloud Run, and they noticed that adoption went up. And so that actually validated that keeping it simpler increased the number of people who were getting through the process.

Tamao Nakahara: I had a question on change management.

So you guys talked about the first principle, including working with what is consistent with your current developers' knowledge, your customers' knowledge. And you gave the example about the Node tar.xz file. Yeah, so sometimes you do have to move the needle. For example, there was a period in which a lot of people didn't know Git, and maybe you're moving in a direction and you're thinking, well, I kind of need the generation of people I'm working with to use Git to make this possible. And now it's pretty pervasive; there's no friction there. But when you're at that place, what would you recommend? How do you move the needle but not create too much friction?

James Ward: I like that you brought up Git in particular. That was definitely a huge shift in thinking for a lot of us who were used to SVN, and CVS before that. And I vaguely remember, this was a while ago now, but I vaguely remember that a lot of the content that came out around that time was targeted at people who knew SVN. It was saying, here's how you take your knowledge of SVN and go from that to Git, instead of just saying, here's Git and it's amazing and here's how to use it. They created this bridge from where I was to where I wanted to go. So yeah, I don't think that principle rules out the opportunity to teach people new things, even very different things, but there have to be bridges between those worlds.

Ray Tsang: When I learned it, that's also what happened to me. In fact, I switched version control three times, CVS to SVN to Git, kind of thing. A lot of people may have gone through that. But I think this is where the other principle is really important too, which is that learning should be incremental.

We need to make sure that people from all backgrounds and all levels are able to learn. There may be people who are brand new and never used version control in the past, and we need documentation to support those use cases. At the same time, we also need to respect the developers who have been using other existing ways of doing things, and show them how to bridge into this new world. It's very important to have both.

Tamao Nakahara: Yeah. So do you have experience of getting those metrics first in the product process and saying, okay, this is our sweet spot right now, but then there's also this group whose hearts and minds we want to win, and they're in this space of early adopters and later adopters? Have you gone through those steps and really made those decisions with the metrics you need?

James Ward: We've begun doing that.

So we just came up with these principles pretty recently, and we are working with some product teams to start actually measuring them to see how effective they are. That's been one of the interesting points of feedback around the principles: how do you know that these are the right principles? How do you know that these are actually driving the product in the right way, or driving the developer experience to be actually better? So that's something we're working on with some of our internal teams, applying the principles and then doing some tests. I guess at some point it ultimately comes down to customer satisfaction. Are more customers coming in? Are they happier? Is the net promoter score increasing? Or something like that.

That may be a way to actually validate this, but right now we only have a little bit of anecdotal evidence that this is correct, plus Ray's and my experience from working with developers for a long time, but that's still anecdotal.

Tamao Nakahara: So James, I am so glad you brought up and emphasised, over and over, the issue of decision fatigue. And it kind of goes back to something that Charles said in his talk: your users often don't know what they don't know, and they're often stuck or asking questions based on really limited information. I think we've all experienced that onboarding experience of, I don't know, why are you asking me this? It's a whole product experience process.

So can you break down a little bit more how you've been influencing that? And obviously you've had so many different company experiences. How have you brought that topic up and really created actions to change the experience?

James Ward: Yeah, it's certainly one of the harder things in DevRel. We have this mindset in DevRel from spending a lot of time with people who don't know the product, and we're trying to help them learn the product. Whereas engineering typically know the product, they know everything about the product, they built it. So it's very hard for them to go back to that beginner's mind that the people we're working with in DevRel have. What I've tried to do is bring the engineers and product managers with me: bring them to the workshops where people are mostly on Windows machines, bring them to the conference where they're spending time with people who are not like them and are not the people they usually interact with, and try to help them build the perspective that we in DevRel have.

Because I'm sure that a lot of people in DevRel share the frustration that it's very hard to just tell them how it is; they need to arrive at the view of the world that we have on their own journey, not by being told by us. So yeah, a lot of partnering with engineers and product management on that. Ray, anything to add?

Ray Tsang: Yeah, I think the culture shift is very hard in some cases, but we have had successes with some engineering teams. For example, there's a team in New York, where I am, building out the Spring Boot experiences for Google Cloud, and they work very closely with the ecosystem. They know and work with the Spring teams, they work with the open source contributors, and the users file GitHub issues, so they understand what the issues are.

They really spend the time investing in the community in that respect, and because of this, they understand what the users need, more so than potentially what the product wants to do. And because of this cultural change, in my mind, they were also able to influence their product directly, and influence other products. To me, that's very powerful. I feel like sometimes DevRel is trying to do everything, but a different mindset is to enable other teams to do the good stuff, so they can actually be self-sufficient in making these product changes, because of what they have learned through the community.

That can definitely happen.

Tamao Nakahara: Yes. Thank you. And my last question is, you started and ended with the concept of time and the perception of time.

I really appreciate that. So with that fourth principle, do you have more detailed ways to help us understand when people are waiting? Maybe I'm in the Kubernetes space and it's just going to take time to get those clusters up before you finally get to see the solution working. Are there ways that we can be creative, like, while you're waiting, here's a cool video to watch, so they feel like their use of time is always high value? Have you guys done anything at Google for that?

James Ward: I don't know about at Google, but Heroku I think did a really good job of this. A lot of things in Heroku just happen instantaneously. You provision a Postgres database, it's available instantly, and so they use a lot of slack pools to keep things just ready so that as soon as somebody wants it, it's there. And I love that because for me as a developer, why can't Google Cloud just keep some Kubernetes clusters sitting around waiting to be used?

I don't know if that would actually work technically or not, but maybe it's worth asking why it doesn't happen that way. But I think if people really want to deliver a Heroku-style awesome developer experience, there are techniques they can use as they build products to not make users wait.
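The slack-pool idea James describes, keeping a few resources provisioned ahead of demand so acquiring one is instant, can be sketched roughly like this. This is a minimal illustration with made-up names, not how Heroku or Google Cloud actually implement it, and a real version would refill the pool asynchronously:

```python
import queue

class SlackPool:
    """Keep a few resources provisioned ahead of demand so users never wait."""

    def __init__(self, provision, target_size=3):
        self._provision = provision      # slow factory, e.g. create a cluster
        self._pool = queue.Queue()
        for _ in range(target_size):     # pre-warm the pool at startup
            self._pool.put(self._provision())

    def acquire(self):
        # Hand out a ready-made resource instantly, then top the pool
        # back up (done inline here for simplicity; a real system would
        # do this in the background).
        resource = self._pool.get()
        self._pool.put(self._provision())
        return resource

# Usage: the provision step is the slow part, but it runs ahead of time,
# so acquire() returns a ready resource immediately.
pool = SlackPool(lambda: {"status": "ready"}, target_size=2)
cluster = pool.acquire()
```

The user-visible latency of `acquire()` is just a queue pop; the expensive provisioning cost is paid before anyone asks.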

Ray Tsang: And sometimes it's the perception. This has happened in the past where you type a command and nothing is printed out for 30 seconds, and you just don't know what this thing is doing. So just some more verbose output that's useful helps, just like what James went through with Cloud Run Button: the whole process took some time, but every step of the way you could actually see what it was doing. So you feel like, I'm not wasting my time, I'm actually reading it and learning something new at the same time, and that's cool. Sometimes in a code lab, for example, I run Kubernetes code labs a lot, and when you create a new cluster, I actually do ask people to watch a video, and that helps. In the context of a code lab it can definitely be useful.
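Ray's point about verbose output can be sketched as simply reporting each step as it runs, instead of going silent for 30 seconds. The step names below are hypothetical, chosen just for illustration:

```python
def deploy(steps, run_step, log=print):
    """Report each step as it runs so the user never stares at a silent prompt."""
    total = len(steps)
    for i, step in enumerate(steps, start=1):
        log(f"[{i}/{total}] {step}...")
        run_step(step)   # the actual slow work happens here
        log(f"[{i}/{total}] {step} done")

# Usage with made-up step names; messages are collected into a list here
# so they can be inspected, but in a real CLI log would just be print.
messages = []
deploy(["Cloning repo", "Building image", "Deploying service"],
       run_step=lambda step: None,
       log=messages.append)
```

The total wait is unchanged, but the user sees progress and a step count the whole time, which is the perception shift Ray describes.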

Tamao Nakahara: I'm having weird memories of somebody having a game, where people could play a game while waiting, and it was actually quite well designed, but it might've been a different context.

Anyway, thank you so much, and thanks Dave for your questions. This is all great for moving our journey of developer experience forward, and I really, really appreciate all this sharing.