

From Telemetry to Empathy: Measuring AI in Your Teams
agentic coding
developer wellbeing
telemetry data
ai rollouts
junior developers
peer learning
Lauren Peate shares new data on how AI coding tools impact developer wellbeing, revealing why out-of-hours commits are rising and how leaders should respond.
Hosted by
DJ
Featuring
Lauren Peate
Guest Role & Company
CEO and Co-founder @ Multitudes
Episode Transcript
Daniel Jones (00:03) Welcome to the Waves of Innovation podcast. I am DJ, your host. In this episode, I am talking to Lauren Peate, who is the CEO and co-founder of Multitudes, a company that helps you track metrics within your software delivery function to find out whether the experiments you're running are having a positive impact or not. We're talking about some research that Multitudes published on the impact of agentic coding on software delivery, both in terms of productivity and the wellbeing of team members. So if you are looking after a team, if you're an engineering leader looking to roll out agentic coding, then this episode should have some things that are of interest to you. I hope you enjoy it.

Daniel Jones (00:45) Lauren Peate from Multitudes, thank you very much for joining me. It's a pleasure to have you here. Would you like to introduce yourself and tell me a little bit about what you and Multitudes do?

Lauren Peate (she/her) (00:55) Happy to. Let me start with me, because it's always good to start with the human. I am a data nerd who likes people. I think that's the best one-sentence summary. I started as a data scientist, was on a PhD economics track, did some independent research on a Fulbright, and realized I liked the applied side much better. So that was good to know before spending all that time on a PhD, and I haven't looked back really. My career started with big tech, working in San Francisco for a while, then the Middle East, and now New Zealand, where I've been working with startups. And the question that's always been really interesting to me is: how do we take our data, which we can use in ways that are harmful to people, and which can be really sterile and feel really cold?
And how do we turn that into something that's actually really supportive for people, that helps us think about things that maybe we weren't thinking about before and view them with a fresh perspective? Ultimately, how do we humanize our data a bit more, so that we can use it in ways that are really good for people, our growth, and what we need to do in our teams? So that's my bigger why. And when it comes to Multitudes, this question of data and people, not surprisingly, led me to the question of how we are using it in our engineering teams specifically. Our bigger goal with Multitudes is helping engineering teams turn their metrics into a tool for continuous improvement. So that's the humanizing piece of data, and also this belief that the data shouldn't just sit there. We should be doing something with it. The point of data is to help us get better, but there are a lot of barriers in the way that make it hard for us to get the right data at the right time, and, when it comes to data about people, to use it in a psychologically safe environment where people feel like it is going to help them grow instead of push them down, et cetera. So those are all parts of our focus. And the last thing I will say, which is very relevant for this conversation: of course, the big question now is, how is AI impacting our teams? Is it impacting us for the better, for the worse? And how can we get more out of this AI tooling that we are paying so much money for?

Daniel Jones (03:21) Yeah. Just listening to you speak there about the use of data, I'm also amused by the fact that you're from America, so there's one way of pronouncing "data"; I'm from the UK, another way. And then in New Zealand, is "data" the native pronunciation? Yeah.

Lauren Peate (she/her) (03:30) I know. Yes. Yes. I've managed to retain my American accent after 10 years here, so yeah, still saying it the American way.
Daniel Jones (03:47) Cool. Well, I'm sure they understand you anyway. But yes, the use of data to optimize teams. That's something I'm really interested in, and maybe we can wedge some of that in, because ostensibly this podcast is supposed to be about AI transformation. I've got so many experiences with that: the social aspects, the possibility of fear, it being abused, all of those kinds of things. You talk about psychological safety; so many rich veins to tap into. But the way that we crossed paths was some of the research that Multitudes published recently, looking at the use of AI in teams and, as you said, using data to answer these quite important, pertinent questions.

Lauren Peate (she/her) (04:16) And yep. Yes.

Daniel Jones (04:37) That first paper that you published and the research that you did there: do you want to give us an overview of what you found, so people can go and eagerly look up that PDF?

Lauren Peate (she/her) (04:50) Yes. Look, I'll start with the key takeaway, then let me zoom out and give a little bit of context about what we looked at and how we got there, and then we can dive into the details. The key takeaway is that if you want your AI tooling to have the best possible impact on your team and your company, you're going to need to make sure that your leaders are rolling it out and enabling people with a clear goal and a strategy to get there. There are details around it, but to be honest, having a clear why for what you hope to see with your AI tooling, and some ways that you're going to support your people: just doing that will put you ahead of most. So that's the key takeaway, and there's obviously a lot to dive into around it. But the context, I think, is really important to give as well.
This was a piece of research where, actually, the motivation for us is that we have a SaaS product we're building in this space. We like to make sure that whatever we build is well backed by research and that we're building on those foundations. So when we set about working on a feature to answer this question of how AI is impacting my engineering team, we looked to the existing research, and there were lots of different messages coming through. Being data nerds, the way we decided to get a sense of what was really going on was to go out, get our own data, and get our hands dirty with it. The research we designed was going for depth in this first phase, which means we worked with four different companies over a period of months. We ended up pulling 10 months of data, from January to October of 2025. It meant we were covering over 500 developers, and we had their telemetry and their AI usage data throughout this period. So lots and lots of really rich telemetry insights. The second piece was making sure we also ran a survey. We had 191 responses, so about 40% of the people we had the telemetry data for. And then for some of the people who gave really interesting responses, or where there were interesting behavioral patterns, we were able to do a series of one-hour interviews. That was with 19 people. So we had that time series, almost a year of data, but we were also able to start by asking, well, what are the interesting patterns in the telemetry data, and then get at some of the whys with the survey and ultimately with those interviews. It meant we were able to really round out the picture. And I will say, you and I had a bit of fun back and forth on Slack and on LinkedIn, and the METR study came up, because that's such a big one to discuss.
And I will say that one of my big takeaways, frankly my biggest takeaway from that, was the importance of having mixed methodologies when you do this research. Because, critiques of the paper aside (the sample size, whether the engineers actually had experience with AI, et cetera), the thing that was really interesting was that you can run a survey and get one answer, and then get telemetry data and get a very different answer. So anyway, that all informed our research design and making sure we had those different methodologies.

Daniel Jones (08:21) Yeah, and one of the things that was quite important, I think, when we were debating this. So, a bit of background: I saw the paper before I'd spoken to you and before I'd heard of Multitudes, and I was a little bit nitpicky on LinkedIn about the way it was presented alongside some work that other people had presented. If you read the addendum on that paper, it's very clear that the conclusions that were drawn are misleading. And I think really what your organization published was more about highlighting the fact that, on the surface, it's very easy to get these conflicting signals. For folks just casually following LinkedIn, it's very easy to think, well, this paper said that it slows us down, and that other paper said that it speeds us up. How do we make sense of this? So that's where this conversation came from, and it's nice to have this discussion out in the open. Now, there was one finding that you had in the paper... and yeah.
Lauren Peate (she/her) (09:24) Sorry, before you do that, can I just add on that one too? I think it's important to point out the headlines, because a big thing I've seen over this last year is that you've got pressures coming onto engineering teams and engineering leaders from a board, and frankly from the broader hype happening out in the world around AI. There are some of us who've read the papers, gone in deep, and could debate the methods they've used and the validity of the results. But there are also a lot of people, frankly often the ones setting goals and AI mandates, who are working from those headlines. And I'm not the first to point out this contrast, but someone who is using AI to help them write emails is going to have a very different experience than someone who's using AI to help them write their PRs. So that gap in understanding is really leading to heightened expectations, and then pressure coming in on engineering teams. That's why I think it's important to point to that and acknowledge it.

Daniel Jones (10:34) Absolutely. And there are a number of people with an agenda who are either over-hyping things or dismissing it all as absolute codswallop that's never going to work. Really, the truth is somewhere between those two: it's not as good as the hype mongers say, and it's not as terrible as the doom mongers say either. So we're trying to empirically get to the bottom of that, so we can give people more informed, nuanced opinions. And the METR study was the one in which the conclusion was that open source developers went more slowly when they were using AI-assisted coding tools. I don't think it was even when they were fully agentic.
And then, not quite in the footnotes, but certainly in the appendix: the open source developers had been given a 30-minute introduction call, some of which was showing them how to use, I think, Cursor or some other AI-assisted editor. And of course, if you go as far as to read all of that, it's like, well, if you introduce a new tool and people have 30 minutes or less of training, they're probably going to be slower anyway. So that went all around social media last year and created a lot of mixed signals for people. But one of the findings you had that I thought was really interesting, and that I questioned in the paper, and that your dual approach of quantitative and qualitative turned out to be really useful for, was the finding that developers were more likely to commit out of hours. When I read that, I was thinking, well, I know my own patterns. I end up now kicking Claude, nudging Claude, giving Claude a friendly pat on the shoulder

Lauren Peate (she/her) (12:13) Hmm.

Daniel Jones (12:24) in the evenings when I walk past my laptop, because I'm like, you know, it's not doing anything; it could just do a little bit more work. So I imagined that all of those out-of-hours commits might come from something like that. And if it had only been quantitative data, that's a point that maybe you and I could have different opinions on and no further clarification. But you also did the qualitative work, which showed something quite different, didn't it?

Lauren Peate (she/her) (12:49) Yeah, yeah, this was a really interesting finding. The down arrow is that we found people were doing more commits outside of their previously typical working hours. In fact, they were doing 19.6% more commits outside of their typical working hours.
The quick context on that: those working hours are adjusted for different people's time zones and their own individual preferred working hours, so there's individual configuration as part of it. And it's exactly as you said: what was really helpful was having the survey data and then also those one-to-one interviews where we could dive into the why. Before I tell you what we found, let me frame it up a little bit, because since we released this paper, I've gone out and had chats with hundreds of engineering leaders around this, and developers as well. If I look at the discussions I've had with engineering leaders, 80% of them, when they hear this, say, well, surely it's because AI has made coding so much fun. Everyone wants to jump in out of working hours. They're having a great time. It's so delightful. It's brought back the joy of it. And there certainly are engineers where that was the case, and we heard that, so it is not a false statement. But it was not the main driver that came out. I find it really interesting that that was so often the key takeaway from leaders: but it's great, it's just because it's fun. Actually, what we heard from engineers, and this was interesting because it came up really organically in the survey and we dived into it more in the interviews, was that it was delivery pressures. People were under pressure to keep delivering at the same pace, maybe a faster pace.

Lauren Peate (she/her) (14:45) They're also trying to learn this new tooling, which, as we know, has a very steep learning curve and frankly is constantly changing underneath us, right? Every couple of months there's a new model and there are new capabilities. And so it was, honestly, just people trying to keep up: I have to deliver the existing work, and actually maybe more.
I'm trying to learn this new thing because I need to stay on top of things that are changing in my field, and so I can't do it all in the hours that I did it before; I'm having to put in those extra hours. So anyway, I do want to say there are lots and lots of different experiences. Absolutely, there are people having fun, but the key theme we heard was delivery pressure pushing people to do those hours.

Daniel Jones (15:31) And that's really interesting in a number of ways. One, because it makes me check my own biases. I'm not working on a big feature backlog. I'm also working for a consultancy that's helping people with AI transformation, so I have plenty of time to try and keep on top of this, and it's still not enough, because there's so much changing all of the time. So I don't feel particularly threatened. I feel challenged keeping up with it, but I don't feel like my job security is threatened by this, and I don't feel that my sense of identity is threatened by this the way a developer might. And I'm definitely much more on the end of the spectrum where it's like, this is really fun, I can build stuff. But that's because I'm building things I want to build, things I think should exist, and I'm overjoyed that I can do that with so much less frustration. Which I'm sure is something other engineering leaders are probably experiencing, and then going, well, all of my peons, all of my minions underneath me with this massive backlog of work and pressure to deliver, they must be feeling the same thing. Hang on a minute: the situation there is slightly different.

Lauren Peate (she/her) (16:42) Exactly, exactly. And then you add to that the context of the economy that we're in now, where there have been lots of layoffs, and people are aware of that.
And I think a lot of us have friends and people we know who maybe left a job thinking, well, this isn't working for me, let me go find my next one, and it took them longer to find the next one than it might have at other points. Nothing about their skill set; it's just a very different market. So yeah, it certainly heightens the pressures. The one other thing I want to add is that this links back to that point we were talking about before, where we have these massive pressures coming down, sometimes from people who have never used AI to code themselves, but they read these things, or they hear grand statements from the folks selling these tools, and they think, okay, well, do I not need engineers anymore? Is that going away? Right? And there's that Marc Andreessen comment. So there's that pressure again. I think it is that mismatch, where we've got these really heightened expectations coming down onto people who still have big workloads and are frankly doing their best and doing a lot. The people we spoke to were working quite hard and doing everything they could, but it's just a lot. It's just a lot for all of us.

Daniel Jones (18:03) Yeah, yeah, a huge amount to keep on top of. Those really bold proclamations are not necessarily helpful, when people are talking about getting rid of all of their mid-level developers by the end of the year and stuff like that. How about you tell us what you did do after you've done it, rather than what you might do, and send everyone chasing a wild goose down a rabbit hole at the end of a garden path, or however many more metaphors I can fit in there? That kind of stuff is hugely misleading.
And when folks who are ambitious and don't necessarily understand the subject matter hear that, exactly to your point, putting that pressure on people is just not helpful. And then, when it doesn't work out, it creates all of these false expectations, and people are like, well, okay, AI is rubbish then; it's never going to help with anything. It gives all the people who want to be at the opposite end, doom-mongering, an excuse: told you so, you didn't get rid of half your developers, you still need us, and I'm not using any AI ever because it's rubbish. It lends itself to that kind of extreme thinking.

Lauren Peate (she/her) (19:19) Yeah, I so agree. This is my hope with more research and more data coming out: that we can have, exactly as you were talking about at the start, more of these nuanced conversations instead of the ping-ponging of "it's the best thing ever" or "no, it didn't live up to the expectations," and just say, look, it's a tool. Here are the things it's good for. Here are the areas where we still need humans. Great. Done. Obviously there's a lot more to it, but I think that's where we're going to end up: okay, let's augment humans. We still need people. And we can do some cool new things now, and that's great too.

Daniel Jones (19:54) Yeah, yeah. And, you know, not to mention Jevons paradox and the fact that maybe we'll end up with loads more software. Maybe people will find all sorts of problems that need fixing. I was speaking to a friend of mine that I'm trying to convince to come on the podcast. She's head of machine learning, or was head of machine learning, for a venture studio. She's living in Norway; she's not Norwegian.
Her kids are going to school in Norway, and to figure out what the family is doing on any one day, they need to look at four different apps: school one's app, school two's app, some club they're in, and something else. And the calendars don't talk to each other. She was super frustrated by this, so she said, you know what? I'm going to vibe code something to solve this. She made a little app in about an evening, bought a little e-ink display, stuck it on the fridge, and now they know what they're doing every day. It's all in one place. Her friends come around and say, this is amazing, can I buy that? And of course, no one's going to make that as a product, because it's too small an addressable market, but the cost of creation is now so low. Hopefully we will end up with lots of examples like that: hyper-focused, bespoke, usable software that improves people's lives, rather than fewer big monolithic software-as-a-service type things. I'm optimistic it won't be "will code for food" written on cardboard in front of the train station in the rain. There'll be more developers developing more things, with any luck.

Lauren Peate (she/her) (21:25) Yeah, I love that example. And look, it's like any tool, in that so much of this comes down to what we as people do with it. We've started with one of the most doom-and-gloom points of our research, that increase in pressure and out-of-hours work, but there's a lot that we did see around the things teams and leaders can do to make all of this better for everybody. So what we do now, and especially this audience, who does understand these tools and is clear about where they work well and where they don't, is going to have a big impact on whether things get better or worse with AI.
Daniel Jones (22:10) And maybe to that point, what did you find that people can and should be doing to make this go better? Because, having a flick through the PDF, I can see some of the mistakes of the past, and the tendencies of tech leaders and tech people to do the same thing they've always done, maybe limiting their success factors. But yeah, what can people do to make sure this goes better for them and their teams?

Lauren Peate (she/her) (22:37) Yeah. So the biggest point is what we were starting to talk about before: first of all, the leaders rolling it out should have a sense of the why. And if your leaders don't, you as an engineer can ask them: well, what is the why? That was a big thing we heard when talking to developers after these rollouts had happened: even in organizations where there was some clarity, they wanted more of it. And where there had been no clarity, there were a lot of people saying, look, if I don't know what the goal is, then how do I know what I should be iterating on or working towards, or even how I should be measuring success? So the first thing is that leaders just need to be explicit. If this is because we want to improve developer productivity, say it, own that, and then we'll have a conversation about what that means and how we need to look at it. So that's the first: leaders need to be really clear. And then they do need to do something to support their people to learn it. We have lots of examples in the paper, and I'll tell you my favorite thing people should do in a moment, but just do something. It's not enough to just roll it out and say, there you go.
Engineers are smart and they will figure it out, but because of those delivery pressures and the steep learning curve,

Lauren Peate (she/her) (24:00) it's going to go a lot better if there's some kind of organizational support: here's how we think about this, here's a markdown file to get you started, here's a peer-to-peer experience-sharing session where you can get some ideas, right? There are all sorts of things you can do, but frankly, just pick some things. Start doing that so there's space to learn. Because I think the core challenge is this: there will always be people who love learning, who are the early adopters, and who will just dive in. Whether you give them time or not, they're the people who are going to love it so much that they will very happily spend their evenings and weekends learning about the latest thing: okay, great, agent teams, how does that work? They will be the people doing that. And then you'll have the other people who just aren't like that, and are still really great engineers, but for all sorts of reasons, maybe it's things happening in their life or just how they learn or whatever it is, they're not going to be doing that, right? What's good for us as an organization is to figure out how to learn, and how to learn really quickly, because it's just changing too fast underneath us. I think that's the core skill for people to think about, whether it's you as an individual or on your team or in your organization: how do we build those learning practices so that we can keep up? This is the new reality, where it just keeps changing every couple of months. We're in a whole new paradigm. And so my favorite example, I promised I would share this: the best thing, I would say, is more of the peer-to-peer learning and experience sharing.
And the reason I say that is, especially in a world where things are changing really quickly, you could write a playbook and you're going to have to update it very soon, right? These things that are more static or written down are just going to be a bunch of work to maintain. Whereas with the peer-to-peer stuff, first of all, we can go to those people who love being at the cutting edge and get them to share what they're doing, because they're going to do it anyway. Any organization of a big enough size will have those people already. If you don't know who they are, look at who your super users are on your AI tooling, and it'll be them. So we already have them. And then we know that the other engineers in our organization would much rather learn from them than from leaders. I think there is some value in external trainers coming in too, but the most credibility comes when it's someone else who works in your same codebase, who deals with those same challenges, right? The legacy code, or the unclear standards. It's someone else who's dealing with that all the time, and then they're showing you: here's how I used an AI tool and got it to do something really cool in the same context. There's just so much more trust and credibility in that. One of my favorite quotes, and another reason why it's good to have peers instead of leaders, was an engineer we spoke to in the interviews who said, look, if it's my boss or the CTO coming and saying, hey, use this AI tooling, I'm skeptical, because are they just going to make money from me using this? Whereas if it's a peer, I know they're just sharing it because it was useful for them, and maybe it'll be useful for me. And the last thing I'll say on this: even in organizations that had lots of peer-to-peer sharing, people wanted more. There's just such a hunger for it.

Daniel Jones (27:21) Yeah. There are so many interesting things in there.
And I think one of the things I like about the research you published is that it hits a lot of confirmation bias for me, in terms of the things that I do and the things I like doing. Winding it all the way back to the why and the intention, and communicating that: it's hugely important for any transformation or change, and a fundamental part of good leadership that I think a lot of managers miss out on, because they're like, well, people get paid to do what I say, so I should just say what they need to do and that should be enough. And that's not really how leadership, emotional engagement, and intrinsic motivation work. I had the pleasure of talking to and doing a little bit of work with a chap called Ben Ford, I can't remember the name of his current company, who is an ex-Royal Marines commando. He was giving a talk to the company I was running at the time about the idea of commander's intent: when they're doing a mission briefing, they go through, this is what we're going to do, but this is the intended outcome, and this is why it's important. So everyone's motivated, and they know the outcome they're supposed to be achieving, so that when things go horribly wrong, as they often do, they can still achieve it. That thread of intentionality, talking about outcomes, talking about the reasons why: I could bore you to tears with stories of user stories in teams my engineers have worked on where there hasn't been that communication of outcome and things have gone wrong. So that is just fundamental good practice. Any managers out there, you should probably think about communicating the why and how you want the world to change. But in the sphere of agentic coding uptake, there is that fear you mentioned earlier: well, hang on a minute, why are we being asked to use AI?
Is it because they want to fire 80% of us, because they think they're not going to need us anymore? If you're not addressing that elephant in the room, then people are not going to be eagerly jumping on top of this. On the topic of peer-to-peer social learning, I'm really interested in that. And in terms of confirmation bias: when we have delivered training and enablement to people, one of the things I find quite amusing is that a lot of your recommendations are things we do before we go into an organization. We spend a lot of time talking about workshops, facilitation, psychosocial factors, and things like that. One of the things I was keen to do there was a kind of panel. I knew that with me coming in as an external, there were going to be all of those preconceptions. We had the CTO talking, so there was the, okay, your boss who pays you, a little bit of authority being invoked there. But then, as well as giving people the chance to speak amongst themselves and do all the liberating structures stuff, we had a panel of their peers who were already ahead on the AI journey, sharing not only how it went well,

Lauren Peate (she/her) (30:39) Hmm.

Daniel Jones (30:42) but how it went wrong and the disasters they faced. So they were presented with credible information, and it was clear there was no agenda, because we were telling them what went wrong. But on the peer-to-peer social learning, I was always a big fan of pair programming. I kind of got indoctrinated into it: eight hours a day of pair programming for years. And that's a great way of sharing tips and making sure that

Lauren Peate (she/her) (30:56) Hmm.

Daniel Jones (31:10) knowledge spreads throughout a team, almost like gossip. Mob programming is also great.
With agentic programming though, it almost kind of takes the place of a pair, in that you're having a conversation with an agent, and also you're spending a lot of time sitting there waiting for an agent to do something. So having two people sat waiting for an agent is not a particularly great use of time. I don't think I know anyone that pairs, like, two humans with an AI. Lauren Peate (she/her) (31:29) Hmm. Daniel Jones (31:39) I've heard of some people doing mob programming, like six people in a room driving one computer with a coding agent. Were there any particular formats for the peer-to-peer learning that you saw that went down really well, or that people were quite enthusiastic about, or had particularly good results? Lauren Peate (she/her) (31:55) Yeah, yeah. And I was nodding vigorously for those listening to the audio through that last bit, because I've even heard there's one organization where I know they used to enforce pair programming, and they've now decided that you have AI to pair with, and so it's no longer two humans, it's you and an AI. So actually, even in some cases, saying, well, this looks different now and we're just using AI for it. So I've seen some other themes on that. Lauren Peate (she/her) (32:23) In terms of the types of peer-to-peer sharing, things like do your AI demos. That popped up in a few places: on some regular cadence, people getting in a room and just saying, okay, cool, here's a recent thing I built. And exactly what you were saying, that theme of here's what worked, but here's what didn't work. And something within that as well is, if it's in a team context, so it's not a big group, it's more intimate, you can still do some of that tapping into the collective problem-solving skills of the group by saying, hey, actually, here's the thing I tried to do and it didn't work.
And then other people jumping in and saying, okay, well, have you thought about this? Have you tried that? We've done some of that on our team and I've seen that work really well. Both in people learning new things, but also helping get past that pendulum of skepticism that you were talking about, where someone tries something, it doesn't match the hype, because is it ever gonna match the hype? And then they start to go into the, I tried it and it failed, so therefore this tool is rubbish. And then someone else saying, okay, wait, wait, but actually if we try this or that, then being able to say, all right, there's a middle pathway in here where we can get this to work. So yeah, the AI demos were the biggest one. Sometimes it was kind of larger groups. So the AI demo on your team, that was the favorite, but sometimes a bigger thing, where maybe, in a lot of organizations, just with the pace of tooling, they might have a couple of folks who try out the latest tool. And so then what they would sometimes do is have the folks who've been playing with it and getting a sense of, well, what works best here, then lead a broader team session and say, all right, we're now rolling this out to the org, and here's some things that we've already found that have been really great use cases within our work. So those are all examples. Yeah, any kind of just getting people in a room and having them share, all of that is what people loved. Daniel Jones (34:24) Yeah, for a profession that maybe has a stereotype of attracting kind of anti-social loners and introverts, actually, I think that quite a lot of developers do enjoy the social contact, at least with the kindred spirits and other people that have experience and that they can learn from.
You also mentioned that, in order to have a successful rollout, you should just do something rather than hoping that it will happen by itself. 2025 to me seemed to be the year of engineering leaders saying, yeah, people can use AI if they want, I let them use whatever tools they want and they're trying it. And it all seemed very hands off, almost like people were scared to make recommendations, maybe because they might back the wrong horse, or they might alienate people that don't want to use AI. Lauren Peate (she/her) (34:54) Yes. Daniel Jones (35:19) I get the feeling that that's changing in 2026, and people are being a little bit more structured now. Have you seen that in your work, or am I just imagining that? Lauren Peate (she/her) (35:28) Yeah, I'm smiling because I'm going to say the M word, which is mandate. So, should you have an AI mandate or not? And for most of 2025, it seemed like no, never. And then at the end of 2025, I started to hear some people saying, actually, yeah, we tried some variation of it, and that worked really well for us. And I'll tell you the variation, because I think this is where it's interesting. The key thing that I've seen people shift to on a leadership level, in one of the organizations in particular that was part of the pilots, I saw them take this approach, which was: we're going to be really clear that you're going to need to learn how to use AI. We're not going to dance around that. It's just part of the role of software engineering, so from this point forward, you know, this is where we're at. And so again, I think that goes back to that clarity of expectations, the why piece that we were talking about. If that's the view that leaders are holding in their heads, then we need to say it out loud, because then your people can respond accordingly.
And so they were really clear about that, but they were really open about how you got there. And so it was things like, we want you to have some type of a goal around how you're going to use it, but we don't mind what the goal is, and we're gonna make available several different tools. And also we're gonna have all this support, so that you have peers and training sessions and guidance from us on how you might use it. But yes, this is the goal. And I think that's a nicer way to do it. Frankly, it's not even nicer, it's more effective. There is the, okay, well, you have to use this and you have to use it that way, and that's just not going to be, in my opinion, as effective. And one thing I was gonna say earlier, when you were talking about that commander's intent piece, the why for the why: the reason I think sharing the why is so important is because we work with smart groups of people. There's two reasons. So first of all, we work with smart groups of people. If they know what we're trying to get to, they might actually come up with better ways to get there than someone sitting in some room away from the teams, right? Because they're a manager of managers and they're not in the standups and the retros and the day-to-day work. So we're actually unlocking more of the power and potential of our team by saying, look, here's the goal we want to get to, can you help me? And then the second reason is because of all the fears. Fear never goes away by shoving it down. The best thing you can do is bring it out into the open. And you know, this is the hard part of leadership, being honest about things and knowing how to be open about things when we are in a hard economic climate and all that. Like, I don't want to minimize the hard role there, but leaders just being upfront and saying, yep, here's what we're planning to do, and whatever commitments they can and cannot make, right?
Like, our commitment: I've seen a lot of organizations say, the reality is that with AI, we're not going to backfill roles, but we're also not going to do a RIF. We think that we can get productivity expansion from AI, so we're going to focus on that and not backfill, but we can make the commitment that we're not going to RIF because of it. Whatever it is, leaders at least being honest, and being able to do that, it just helps kind of air out the fears and hopefully dissipate them. So yeah, I wanted to add that. Daniel Jones (39:03) The thing about outcomes, and the good example that you used of how to support people in adopting these tools: this is what the expectation is, it's gonna be part of software development going forward, you figure out how to get there. It has so many parallels with how a good agile story tells you what needs to be achieved; it doesn't involve product managers telling engineers how to write code. And exactly like you say, they're problem solvers by profession, right? So they'll figure out the best way to cross the river. All they need to know is that they need to be across the river, not whether they need to build a bridge or a boat. And then, you mentioned fears, and I'm definitely going to wheel out your report more often when I'm talking to customers, like, look, there are other people that agree with our way of doing things. When we do our workshops, before we even try delivering any training to anyone, there are a couple of things that we do with folks. And I'm not saying this just as a, look at the things we do, you should come and use our services; it's more like knowledge sharing, if you're going to be doing this yourselves. If you're a listener and you're kind of rolling out a change, ask people for their insights on how this can be better and how it might go wrong.
So Lauren Peate (she/her) (40:07) Hmm. Daniel Jones (40:20) We use a couple of formats from a book and a website called Liberating Structures. And one of the questions we ask people first off is: imagine that you're hiring 200 junior developers. They come from a country which has a culture where no one ever says no, and they all have an intravenous drip of Red Bull straight into their arm. They never sleep, all they do is code. How could this go wrong? What would need to be true for this to be the biggest disaster possible? And you get people to discuss that amongst groups and whatever, and then you share all the insights, and then you ask them, what do we have here that's a bit like that? So you start surfacing all of the problems, and then you repeat that again, the same format, but with the question: what can you do tomorrow, without anyone else's permission, in order to make things better? So we do all of that, which is more kind of functional, getting people thinking and ideating, solutionizing of, okay, what do we need to do to make this successful, without us having to tell them. But then we also do the same thing again, asking more like, what are your fears? How do you feel emotionally about this? Are you excited? Are you scared? Are you nervous? Are you angry? And it is surprising: over the years, I've been involved in consultancies, I've done organizational transformation, at a number of places that just unilaterally tried to drive through change without asking anyone anything, without giving people an opportunity to feel listened to, without allowing anyone to voice their concerns or their emotions. It's such an easy thing to do. It's so cheap. And even if you do nothing with that information, people feel listened to and consulted, even if you're just pretending. But you know what?
Maybe, just maybe, and to your point about managers being distant from things, maybe the people on the ground know something that you don't, and you might learn something. Who would have thought? Lauren Peate (she/her) (42:09) Yeah, yeah. Yeah. And I think, especially with AI, surely we realize that it's the folks who are using it more in the day to day. So much of leadership is the human stuff and all of that, but you're not spending your day coding and figuring out how to use these tools. Maybe you're doing it in the evenings and weekends, but I'm just gonna posit this now, and I've said this to rooms of engineering leaders in the past and no one's come up and told me that they really disagreed with me: they're not gonna be the ones who are on the most cutting edge of how AI tooling is working today. And so it's just a reminder that we need to talk to our people. If you're a leader, you need to be talking to your people, because you're just not gonna have that full perspective that they're gonna have of these tools and what they can do. So yeah, I think it's great. I will just say, on the talking-to-your-people part, it is, you know, just the point you made: they'll feel better. But it is better if you actually listen to it and try to act on it. If you want them to share again, and I've seen this with people who run engagement surveys, you can always get people to share once, but if you want them to share again, you need to actually do something with what they tell you. And I will say, I think that is one of the things that's hard about it: I know there are leaders who work in big organizations where there's things in their locus of control and there's things that aren't. Daniel Jones (43:21) Yes. Lauren Peate (she/her) (43:32) And so trying to be really honest with your people about what you think, and maybe sometimes there's some bigger company policy that you personally disagree with.
But for any leaders listening, the thing I've seen is just people navigating that and saying, look, here's what we have to do, but still leaving some space for their own personal opinion too. And just being upfront about, this is what's in our locus of control and this is what isn't. I personally can speak to that, and really appreciate when someone's at least really honest about, here's the reality, you know, I don't like this myself, but here's the reality, and this is what we need to do. Daniel Jones (44:12) Yeah, absolutely. Sorry, I was just gonna say, authenticity was the word that was ringing around my head as you were speaking. Exactly like you say, there's something to be said for toeing the company line, and, you know, well, I can't change this, this is the company policy, so I've got to communicate it. But I think people can see through when you are doing that, and if they know that your heart's not 100% in it... you know, we're all clever people. I'm not, I'm a raging idiot, but Lauren Peate (she/her) (44:15) Yeah, yeah. Daniel Jones (44:40) you know, most of the people in the industry are clever people, and they can spot that kind of inconsistency, inconsistency of action and that sort of thing. So you're just much better off being authentic, and, you know, this is how it is. And exactly like you say, thinking about it in terms of systems: there are bits that I can't control, we need to do what we can do here. This is what we've been told to do, it's our job to do it, and therefore we can do it as well as we can, whether we like it or not. And let's try and make sure that even if this is a bad thing and we disagree with it, which I'm not sure is necessarily the case with agentic coding, we're going to do it as well as we can, with as few negative side effects as possible.
Lauren Peate (she/her) (45:21) Yeah, and this is a good point probably for me to segue to some of the highlights of the research that we have coming out as well. Because what I wanted to say off of that is, when we're realistic about what we can control and not, then we can say, all right, well, let's really put our efforts behind the things that we can shift within our organizations. And one of the things that I think is really interesting in that is: what's the role of developers who care about the fate of juniors today and the future of seniors tomorrow, and the role of developers in making sure that we still have these programs for juniors to get their start in the industry and get mentored, et cetera, et cetera. Because that is one area where I think people have a lot more influence and control than maybe they think they do. And it was an area of big fear and worry that came out, surprisingly for me, even from very experienced engineers in our research. Something that I loved seeing in the research was how much kind of civic responsibility people felt for the industry, and a recognition that, someone gave me a start when I was a junior and let me make a bunch of mistakes in their code base, and that is the only way that I was able to become the experienced engineer I am today. And so what are we going to do to make sure that juniors today are getting that same opportunity? It was so, so clear, and there's a lot we can do. So anyway, I've sort of hinted at it, but maybe I'll frame up the second phase of the research a little bit and then I can share some of the things we saw there. Daniel Jones (47:12) Yeah, yeah, go for it. I mean, I've got all sorts of opinions on juniors and whatever, but I'm not the interesting one that's done the research, you are.
Yeah, do you want to go ahead? So, I mean, there's already this paper that is out, and scrolling up, it says: what matters most for AI rollouts is how you lead. That's already out, that's already published. And then you've got another one that is coming out soon, in March 2026. Lauren Peate (she/her) (47:13) Cool. Yeah. Next month. Yeah. Yes. Daniel Jones (47:41) I was gonna say, it's not gonna go out that much later; at time of recording we've probably got about two or three weeks, something like that. But yes, March 2026, you'll have another paper coming out. So do you want to tell us about that? Lauren Peate (she/her) (47:49) Yeah, yeah. So what we decided, after we'd done this in-depth research, 10 months, 500 engineers through AI rollouts, was that we'd learned a lot and it was really interesting, and we wanted to get a bit more breadth now to add to that. And so we took some of the things that were really interesting, like these questions around what happens to junior engineers, or things that we'd seen work in terms of rollout activities and peer-to-peer, et cetera, et cetera. And then we put it into a survey and we sent it out, and we had 226 folks, mostly engineering leaders from all around the world, respond. And so our goal with that was to say, all right, now we're covering a much broader group of organizations than we did before; let's see how consistent those trends are. And the one about juniors was really interesting. We asked a couple of questions related to hiring. One was just, are you hiring at the moment? If you are, are you still hiring juniors? Why or why not? Give us some context about what's driven that decision making in your organization. And then we've been synthesizing those results. And so with that one, there were a couple of interesting tidbits I can pull out.
And so these will be in the paper, but this is the sneak peek. The first is that about half of the respondents were actually hiring in their organization, and of that, half were hiring juniors. So it means 25% of the respondents were hiring juniors. And 37% did say that AI had changed their organization's views of juniors. So there is some lineup there, and in some ways the junior hiring looks worse than what people say; you know, some portion, maybe, of the people who said AI had changed their views of juniors. So there are some things that look bleak. But one of the bright spots, where I'm going to take us back to that locus of control point, was this. There were a lot of people, and this was free response, so we had this free response of, what's changed, what are you thinking about, and then we did the thematic analysis to code it up. Which, as a side note, if anyone is doing thematic analysis, we have really honed our technique for using LLMs to speed that up, so that's a whole aside. But in the thematic analysis, what was so interesting was that out of the developers, so the individual contributors, and also the managers, the folks who were just leading one team, we saw 24% of them organically, and again, we didn't ask about this, organically start talking about their worries about the future talent pipeline. And that matches with what I saw in the deep dive work, where in the interviews and the survey, people were saying, I'm really worried, you know, especially seniors saying, I am worried about what's going to happen to juniors. And then when we looked at the senior leaders, so the people who are at manager-of-managers level, almost none of them mentioned the pipeline. It wasn't zero, but it was almost none. And so it's a little bit like, okay, well, I wish our more senior leaders were thinking about what's going to happen to the industry and the future talent.
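Lauren mentions using LLMs to speed up thematic analysis of free-text survey responses without detailing the pipeline, so here is a purely illustrative sketch of the coding-and-tallying step she describes. Everything in it is an assumption for illustration: the `CODEBOOK` themes are invented, and simple keyword matching stands in for the LLM call that would propose theme labels in a real pipeline (with a human reviewing them). This is not Multitudes' actual technique.

```python
from collections import Counter

# Hypothetical theme codebook. In practice an LLM would propose and apply
# codes and a researcher would audit a sample; keyword matching is only a
# stand-in so the aggregation step is runnable here.
CODEBOOK = {
    "talent_pipeline": ["pipeline", "juniors", "future talent", "mentoring"],
    "ai_native": ["ai native", "grew up with"],
    "onboarding": ["onboarding", "ramp up", "new code base"],
}

def code_response(text: str) -> list[str]:
    """Return every theme whose keywords appear in one free-text response."""
    lowered = text.lower()
    return [theme for theme, keywords in CODEBOOK.items()
            if any(kw in lowered for kw in keywords)]

def tally_themes(responses: list[str]) -> Counter:
    """Aggregate theme counts across all responses (one count per theme
    per response, however many keywords matched)."""
    counts = Counter()
    for response in responses:
        counts.update(code_response(response))
    return counts

# Invented example responses, echoing the kinds of worries in the research.
responses = [
    "I'm worried about the future talent pipeline if we stop hiring juniors.",
    "New grads are AI native, they grew up with these tools.",
    "Onboarding is faster now that AI can explain the new code base.",
]
print(tally_themes(responses))
```

The useful design point is counting each theme at most once per response, so a respondent who repeats a keyword doesn't inflate the totals; percentages like the 24% figure then fall out of dividing each tally by the number of responses.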
But the opportunity there is, Lauren Peate (she/her) (51:14) guess what, when an organization runs a program for interns or for juniors, the people that you need to do the mentoring and the supporting are your individual contributors. It's the people at the ground level. And I can even say that at Multitudes too. We've had times when we've brought in juniors, even as a startup, and what made it happen? I'll be upfront: even pre-AI, I'd sort of be like, hey, we're really busy, we have a lot to do, let's just keep racing on forward. Like, it is work to bring in someone new, have them around. If you're doing one of the summer programs, have them around just for a summer and then they wrap up, and maybe they stay involved, but maybe they don't, right? There's a lot of organizational investment. And the thing that convinced me, the times we've done it, was that we had people in the organization who said, I really wanna do this, I think it'll be good for my own growth and development, I wanna contribute in this way, and I will put in the work to make sure that this happens and that we're still meeting our delivery goals, et cetera, et cetera. And so then we did. And if I hadn't had those people saying, I want this to happen, I wouldn't have done it. I would have been, you know, one of the senior leaders just not driving it. And so, there's a friend of mine, Erica Kanga, who I need to cite, because she was the one who was really pointing this out: this is the place where you've got a lot of locus of control. And if you're worried about the future talent pipeline, go make some noise in your organization and, you know, show that you're willing to be the person who's doing that mentoring and supporting and the legwork to really make those intern programs a success.
And then the upside, you know, in the pitch to the organization, and we've had this happen for us, is you also might find someone amazing and you end up hiring them at the end of it. So there is that risk, but the organization can also end up getting the benefit of some great, amazing new talent. Daniel Jones (53:02) It's really encouraging that people are keen to be doing that upskilling, and that the people nearest the juniors, the individual contributors, are seeing that need. And, you know, I used to teach martial arts, and in our style, unusually, we started teaching clubs at brown belt level, so before you're officially qualified, before you really know what you're doing. And part of the reason for that was that teaching makes you understand things better, because you have to teach people who are much bigger or smaller than you how techniques work, and you have to find ways around things. Rather than just do it, you have to be able to explain it, and you have to understand the fundamental principles. So having juniors to upskill and mentor in an organization will mean that your seniors end up being better developers, because, especially if they're not pairing and doing things like that, they're going to need to be able to Lauren Peate (she/her) (53:34) Mm-hmm. Daniel Jones (53:58) explain their hunches and their ways of doing things, and why this is a better way of doing things than other ways. With the juniors issue, I kind of feel like we're in a tricky in-between space at the moment, where we're not hiring people to write assembly anymore, but the C compilers aren't reliable yet. We can see the direction of travel, that maybe we need less low-level work doing, but the thing that's going to replace it isn't ready. Maybe, by way of another metaphor, we need fewer bricklayers, or we will in the future if things go the way we think they might.
I said before we started recording, I'm not going to talk about software factories in this episode. But if we end up with software factories, then maybe we'll need fewer bricklayers. But we do need more architects who can talk to people to find out what kind of buildings they want to live in. What features do you want here? Should it have a sunroom? How many bedrooms do you want? That kind of stuff. Daniel Jones (54:58) And we also need structural engineers to make sure that the building stands up. But we need fewer people to actually put the posts in the ground and that kind of stuff. But we're certainly not there yet, and we're not entirely sure if we are going to get there, and if so, when. So there is this precarious gap that we're in, where maybe there's not enough knowledge. And you mentioned civic responsibility. I don't know whether the people were thinking in terms of their fellow humans, in terms of giving people a leg up and an opportunity. But one of the things that I've often been concerned about, maybe for different reasons: you think about all of the buffer overflow vulnerabilities that exist in, like, C++ libraries that we all depend on every day. You know, years ago, pre-GenAI, I was thinking about my kids. Like, my kids have Lauren Peate (she/her) (55:33) Yeah. Daniel Jones (55:51) absolutely no idea how a mobile phone works. They barely understand that Wi-Fi is wireless signals and the router is over there and they get to the internet through that. So even if we took away LLMs and AI, would we end up in 40 years with people standing on layers and layers of digital cruft that they have no idea how it works? It's like some kind of dystopian science fiction where people are anointing the holy router to make sure that it works and doing rituals to it.
Daniel Jones (56:20) And then we put, you know, agentic coding into this, and then no one understands any of the source code that's being written, or what's getting compiled, and all anyone can do is prompt it. Yeah, it's a slightly scary thought. And is that a world that we want people to inherit, where nobody understands the things that are vital for their social infrastructure? Lauren Peate (she/her) (56:33) Yeah. I mean, certainly not, certainly not, I will say. But it's unclear if that's where we end up. I mean, look, I hear the questions. There's so much shift around, okay, well, we're still figuring out what skill sets we need. How do we work with this new tooling? What does that mean for the shape of our teams, right? There are a lot of unanswered questions. What I can say, and Lauren Peate (she/her) (57:11) I'm going to pull out a bright spot from the research here, is we did have some organizations where people said, you know what, I'm hiring more juniors. And I've seen that: this was in the survey, but I've also met some of those people, where they said, no, no, no, you all are missing out on the opportunity here. And the reasons they cited, and I've just pulled up the thematic analysis, the first is that they're AI native. So, you know, it's the digital native thing that we used to talk about, now it's being AI native. There is just no hurdle of, we need to use this tooling and how might you use it, because for some of these folks coming up, that's just how coding has always looked. And so there's a benefit to that, and a different perspective. The other one, too, and we saw this, is onboarding is so much faster.
And so the time to get a junior up and running, and I know people are going to say, you know, the experience of da, da, da, yes, yes, yes, right. Daniel Jones (57:53) Yeah. Lauren Peate (she/her) (58:10) But AI certainly makes it faster to understand that new code base, to pick up a new language. There's a lot of things where it is really helping with learning. So they're saying, actually, that in some ways decreases the effort needed to bring in a junior. And then finally, some folks saying, look, there are really talented, smart people coming through, and there's a lot less competition for them, so why would I not go and hire some of those people? I think all we can do, we're in a phase now where there's so much change. I didn't say this, it was Pete Hodgson, who does a bunch of great work, and he was giving a talk and talking about the Cynefin model, which is a model for systems thinking, right? And the phase that we're in now is chaos. So there's different types of systems, and the system that we're in with AI is chaos, because there's just so much changing and it's changing so quickly. And the best way to respond in this model right now is just to run some experiments and see what happens. And so I think it's the same with this junior hiring. You and I could sit here and posit some things, but neither of us has the answer, right? We're just gonna need some time to tell. And so the best thing that any of us can do is run some experiments. Some teams will hire some juniors, some won't, and then let's see, how does it go, you know? So that would be my entreaty to folks listening: I think the junior experiment is still worth running, both for civic responsibility, but generally because we just don't know yet. Daniel Jones (59:50) Yeah. And, as you point out, to be callous about it, it's cheaper than ever, because the competition will be lower.
And, you know, I used to hire people from a coding bootcamp, a great coding bootcamp in London called Makers Academy. I don't know whether they've got branches around the world, but one of the entry requirements, I think I'm right here, is that you're not allowed to have a computer science degree; they wouldn't take people from that educational background. So we had all sorts: we had a professional double bass player, we had a journalist, we had someone from local government, and we had one of the best developers, like a proper developer's developer, she did her interview using vi, not even Vim, just vi. Her mom was an artist, a potter who made 3D sculptures, and the only job she'd ever had was in an art museum. Her mom had no idea; she was like, why don't you get a real job? She didn't understand software development at all. She was awesome. And to your point about them being AI native, one of the reasons we used to hire from this coding bootcamp was because they had been taught pair programming, they had been taught test-driven development, Lauren Peate (she/her) (1:00:48) No. Daniel Jones (1:01:13) but they didn't have loads of bad habits. So they were like a blank slate, and it's much easier to teach someone the technology than it is to unprogram all their bad habits. So I think as we see development practices shift around agentic coding, and people changing the way that they do things, that advantage that juniors have, of not having all of those decades of experience of doing it in the old-fashioned way, will become even more valuable. Lauren Peate (she/her) (1:01:15) Hmm. Daniel Jones (1:01:41) And in terms of the experiments, it's going to be cheaper to run them now than it possibly has been for years.
If only somebody was catching all the data from these experiments and aggregating it into reports. We need someone like that, don't we, Lauren? Lauren Peate (she/her) (1:01:50) I know, I know. I mean, happy to help, happy to help. But no, one of the things I'm really excited about on that note is, I like doing research. There's a part of me that's like, maybe I should have done a little bit of academia, and so this sort of feeds that itch to dabble in it a little bit. But ultimately I do run a software company, and so it does need to feed back into it somehow. Lauren Peate (she/her) (1:02:22) And so off the back of the research, we took an informed approach when we built and released an AI impact measurement feature. But the thing I'm most excited about within that is, part of it is logging your experiments, your different AI interventions. They could be tooling interventions, they could be human interventions. So, here's the date that we started doing weekly AI demos on all of our teams, and then measuring both the impact on your AI adoption and the types of outcomes that you're seeing in terms of: Daniel Jones (1:02:38) Nice. Lauren Peate (she/her) (1:02:52) What's our flow and throughput with the work? What are some of the early quality signs? How is it impacting collaboration, that out-of-hours work metric, all of that. And so one of the things I'm most excited about is that we're building a little database. We'll get customer permission before we do this, but we are building a database of all of these different AI experiments that people are running, plus some data about what we're seeing on the outcome side. So I'm very excited to come back to that and start playing with some of those trends.
But yes, this is where, for anybody: be clear on why you're doing this, use the experimental approach, and say, all right, here's our hypothesis, and here's what we're going to measure to see whether we think it's working or not. Anyone can do that. We all have so many metrics at our disposal. And so that really is the big thing. We're in an experimental phase. Pick some experiments worth doing. It might be the juniors experiment. It might be how you're supporting learning in your teams. It might be more in the nitty-gritty of what we're going to do to help AI actually get across our coding standards and what we're feeding into our rules files, right? Whatever it is; it could be everything from the human interventions to the more code- and tooling-based ones. But pick experiments worth doing, be really clear about your hypothesis and your metrics, and then have at it. Do one experiment at a time, because it's all overwhelming enough anyway. Don't do more than one. Exactly. Daniel Jones (1:04:20) Yes, one variable at a time tends to work out best. So that's something the Multitudes product helps people do: track those interventions. I have conversations with my customers about exactly that, so I'm going to be talking to you after we're done recording about some of those capabilities. We're talking to somebody with, you know, twelve and a half thousand developers. Lauren Peate (she/her) (1:04:32) Yes. Yes. We can do that. Daniel Jones (1:04:51) And they're looking for ways to measure the impact of agentic coding, and there needs to be a systematic, tool-based approach for that. So I think it's great that things like that exist.
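The experiment recipe Lauren describes (one intervention at a time, a stated hypothesis, named metrics, then compare before and after) can be sketched in a few lines of code. This is a minimal illustration only, with made-up metric names and numbers; it is not the Multitudes product or its API:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Experiment:
    """One AI-rollout experiment: a single intervention, a hypothesis, named metrics."""
    name: str
    hypothesis: str
    start_date: str  # ISO date the intervention began
    # metric name -> (baseline samples, post-intervention samples)
    metrics: dict = field(default_factory=dict)

    def record(self, metric, baseline, post):
        self.metrics[metric] = (list(baseline), list(post))

    def summary(self):
        """Percent change per metric, baseline mean vs. post-intervention mean."""
        return {
            metric: round(100 * (mean(post) - mean(base)) / mean(base), 1)
            for metric, (base, post) in self.metrics.items()
        }

# Hypothetical example: weekly AI demos as the single variable under test
exp = Experiment(
    name="weekly-ai-demos",
    hypothesis="Peer demos raise AI adoption without raising out-of-hours commits",
    start_date="2025-03-03",
)
exp.record("ai_assisted_prs_per_week", baseline=[4, 5, 4, 5], post=[7, 8, 7, 6])
exp.record("out_of_hours_commits_pct", baseline=[10, 10, 11, 9], post=[10, 11, 10, 9])
print(exp.summary())
# {'ai_assisted_prs_per_week': 55.6, 'out_of_hours_commits_pct': 0.0}
```

The point of the structure is the discipline, not the code: one `Experiment` per intervention keeps you honest about changing a single variable, and pairing every adoption metric with a wellbeing metric (here, out-of-hours commits) catches the trade-offs discussed earlier in the episode.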
So the listeners now know that if they're running those kinds of experiments, Multitudes may be a company they should check out. In terms of the research, how to... and also, we've been going an hour and five minutes, and because Lauren is in New Zealand and I'm in the United Kingdom, Lauren Peate (she/her) (1:05:15) Wow. Daniel Jones (1:05:18) it's getting on a bit, and my ability to perform sentences is going to be even further diminished the longer I go on. How can people find you? How can people find your research and the upcoming paper that will be published in March? Lauren Peate (she/her) (1:05:32) Yeah, great. So you can find us at www.multitudes.co. We've got a little research page, and you can sign up for our newsletter; you'll see a little link at the top of our homepage now saying, hey, here's our latest paper, so you can click that. All of our papers are ungated. I will be honest that I listened to some marketing advice and we briefly gated it for a few weeks, and then it felt icky and I said, why the heck are we doing this? So we will never gate our papers again. Lauren Peate (she/her) (1:06:02) I just had to experience that to see why I really, really didn't like it. So it was an experiment. Thank you. Thank you. And I'm actually going to share the data on it, but it was a failed experiment, so ungating it is. Thank you, data. But anyway, it's ungated. If you are interested in our research, there are little ways on our website to sign up and say, let me know when future research comes out, and we'll just let you know about research; it won't be any of the other stuff. Daniel Jones (1:06:06) It was an experiment. Lauren Peate (she/her) (1:06:31) Yeah, get in touch. And look, we've got ongoing areas of research, and so if people are interested in being part of some of this, there will be other opportunities as well.
Like I said, there are a lot of interesting things we're curious about in terms of what types of experiments people are running, what's working, and what's not working. So anyway, lots of ways to get involved too. Daniel Jones (1:06:54) Awesome. Right. Well, Lauren Peate, it's been great having you on. It's been a really fun conversation, and I thoroughly approve of the recommendations you make. They seem eminently sensible in terms of making change stick, not just being about the tools, and actually thinking about the humans involved. So yes, I thoroughly approve of all of that. And we're very grateful for the conversation, and hopefully we get to do it again when you've done some more research and you've got even more fascinating findings to share. Lauren Peate (she/her) (1:07:25) I would love that. Thanks for having me. Cool, bye. Daniel Jones (1:07:27) Cool, thanks very much. Daniel Jones (1:07:33) Many thanks to Lauren for joining me before the start of her working day to have that conversation. I really like a lot of the recommendations that Multitudes make in their paper; it's very much aligned with our way of doing things, but then you probably heard me say all that during the conversation. So yes, we will put links to blog posts and things like that in the description where possible. If you have any feedback on the Waves of Innovation podcast, it'd be great to hear from you; you can email wavesofinnovation@re-cinq.com. That's R-E, dash, C-I-N-Q, dot com. It'd be great to hear from you, especially with recommendations on content. And if you're still listening and haven't tuned out by now, it would be really good to hear how you feel about the editing. We are using some software that does automatic, AI-powered cutting of silence between words,
and I would be interested to hear whether you would prefer our natural pace, with all the ums and pauses, or whether you prefer cutting the waffle and some of that dead time so you get to the information more quickly. If you have opinions on that either way, please do email in; it would be great to know. Otherwise, be good to each other and you'll hear me in the next one.

Episode Highlights
Why AI coding tools cause a 19.6 percent increase in out-of-hours developer commits.
The danger of relying solely on telemetry data without qualitative developer interviews.
How delivery pressure and steep AI learning curves are contributing to developer burnout.
Why peer-to-peer AI demos are more effective than top-down executive mandates.
The critical role of commander's intent when rolling out AI tools to engineering teams.
Addressing the growing disconnect between senior leadership and individual contributors on AI adoption.
Why individual contributors are stepping up to save the junior developer pipeline from AI automation.
Related Episodes
Share This Episode
https://re-cinq.com/podcast/data-and-humanization




