
Dec 22, 2025

DORA 2025, the Psychology of Agentic Coding, and Value Stream Management


dora 2025

value stream

generative ai

developer psychology

metacognition

devops research

Daniel Jones joins Google’s Rob Edwards to explore the 2025 DORA report and Rob’s psychology thesis on AI. They discuss how Value Stream Management acts as a force multiplier for AI teams by identifying hidden bottlenecks. Rob details the psychological shift from Coder to Conductor, explaining how agentic workflows demand system-level thinking and metacognition. The conversation also tackles the risks of burnout, the loss of learning by osmosis for juniors, and how AI helps introverts validate ideas.

Hosted by

Deejay

Featuring

Rob Edwards

Guest Role & Company

Developer Experience and Application Delivery Lead @ Google


Episode Transcript

Daniel Jones (00:02) You are listening to the Waves of Innovation podcast and I am DJ, your host. In this week's episode, I am talking to big Rob Edwards, a rugby player and Yorkshireman who I met about a decade ago, who now works at Google and has contributed to the Dora research report for 2025 with articles about value stream management and value stream mapping. As well as that, Rob has also recently finished a psychology masters, and his thesis was on generative AI in the coding process, so we talk about that towards the end of the episode. Whilst I was recording, my next door neighbor decided this would be an excellent time to start drilling holes in the walls, so hopefully you won't pick up any of that. We do have noise gates and AI filters for the audio processing and all those sorts of things, but apologies if there is a bit of background noise. And similarly, Rob has a young family, so there was some, you know, stamping of tiny feet going up and down stairs. Hopefully that does not prevent you from enjoying this episode, which, maybe I'm being big-headed because I'm involved, but actually Rob says all the interesting things. I think you're going to enjoy it because it's full of really interesting insights that have come from original research. So sit back and enjoy. Daniel Jones (01:13) Rob Edwards, thank you for joining me. Rob works for Google and is a contributor to the Dora report, but is here entirely of his own volition, expressing opinions and ideas entirely his own. Isn't that right, Rob? Rob Edwards (01:26) It is indeed. I'm excited to be here. These are topics I always like to talk about, and especially talking to you, I think. Many years ago, we started some random conversations, so this is going to be fun, I think. Daniel Jones (01:39) Yeah, with any luck, as long as I don't totally stuff it up as a host. But yeah, I mean, for context, maybe when I've finished prattling, you can tell folks what it is roughly that you do at Google.
We don't need to go into too much detail there, but we worked together for the first time, I think, nine years ago, pairing on the delivery of a cloud platform to a telco, which is when we kind of discovered that we had similar shared interests in trying to help people do things more effectively. So yeah, what are you doing now at Google, and how does improving the efficiency of organizations play into that? Rob Edwards (02:12) Yeah, so now, and actually before, I was over in the UK, obviously with you being based in, well, you were based in the UK. I've since relocated to Canada, hence the outfit. It's compulsory, I think, for me to get any sort of visa. But I really work across the entirety of North America now, working with Google Cloud customers. And I have a few different hats, and it depends on what challenges the customer has, but it's often around platform engineering, software engineering, reliability engineering. Inevitably for the last two years, maybe a bit longer, a lot of that has been around how AI could maybe, or not, help with any of those things. A lot of the challenges we face are not always technical; it's often the human element. And actually the human element is the more complex side of it. So trying to work out how teams best work, or extracting information to help solve whatever the problem is we have, is a bit of fun and a key bit I enjoy doing. But yeah, it's ultimately helping Google Cloud customers with whatever problems they have in front of them, is the way I like to think about it. Daniel Jones (03:14) Yeah. And it's just like in the cloud native days: it's most often a people problem. We could give people platforms with batteries included, but they would just tend not to use them very well, or they would do exactly what they'd been doing before, not realizing that the technology had enabled new ways of delivering software.
And the patterns had changed. And kind of segueing into your contributions to the Dora report: people didn't necessarily know what their delivery system was. Maybe they knew they did scrum, but they didn't really understand the path that value took on the way into production. So using that as a nice stepping stone, do you want to talk about your contributions to the Dora reports and your interests there? Rob Edwards (04:00) Yeah, if I take a step back, back in the days when we started to pair, those nine years ago, that was my first introduction to Dora as a research kind of project. And part of it was, at that time, I was helping build, set up, evolve, iterate platform teams, and do platform engineering before it became a cool thing, it turns out. One of the things I found useful from Dora was a set of capabilities that helped drive successful software delivery teams, and I saw a lot of that could be applied to platforms and platform engineering. So when I joined Google, one of the things I was very excited to get involved with was Dora. In Google, we have the concept of 20% time, which I think most people are familiar with; it allows us to do things that are not directly related to our specific roles but tangential, or to go and experiment with something. So my 20% was actually getting involved in the Dora research. For the last four years, I've been involved in some shape or form with the Dora team. Probably in the last 18 months, I've been more involved with some of the research, but also around the authoring of the reports themselves. For people who aren't familiar with Dora, it's a research project that's been going on for over 10 years, looking at the sort of things that drive high performing teams or help drive software delivery and operations. And in the last couple of years, we've started to look at AI, unsurprisingly, and the impact AI is having on software delivery.
And this year I was excited to be able to get some questions in around how value stream mapping may or may not help those software delivery teams. There's always been the hypothesis that doing VSMs is useful and beneficial for teams. We've kind of had hints of it in past Dora research, but this year we actually had some specific questions to try and tease that out and validate our hypothesis. So that was part of where it started. And I think part of it is because, doing AI within the SDLC, one of the things that became apparent to me when we first started doing it is that actually understanding where the friction points are within your flow, your end-to-end value delivery, is really critical to help you focus on the improvements. I think when AI first came out in coding, everybody was like, great, I can generate more code. That seemed to be the thing everybody was really super obsessed with: how can it help me generate more code? And often that isn't the thing that is holding back teams. There are other things. So how do we identify those things that are holding back teams, like code reviews or documentation or whatever it may be? This is where value stream mapping gets really important and helps us understand those friction points. Daniel Jones (06:49) That's all great. I mean, as you're speaking, there are so many kind of funny things popping into my head: the idea that the problem was ever that we can't get enough code written. It's not about writing the code, you know. We both worked somewhere that did a lot of pair programming, and people's objection to pair programming is normally: you've got two people at the same machine; they could be writing two separate lots of code. It's like, yeah, but it's the thought process of figuring out what code to write. That's the hard part.
Daniel Jones (07:16) So yeah, we could go on a whole tangent about productivity. But if a value stream is a kind of fancy term for how ideas turn into software and get into production, what is value stream mapping? What is that process? Rob Edwards (07:32) This is where people who are super skilled in VSM will probably start to look at me angrily, but I tend to look at things in a simplistic way at times, and at how people who are not familiar with some of these things can actually apply them without having to go read a thousand-page book or be fearful that it's a new concept. So for me, again, from a simplistic view, value stream mapping is really looking at the end-to-end process, whatever that may be. In the context of Dora, I kind of try to frame that as from code commit to code being in production, mainly because, in the context of Dora, that is often what Dora is looking at. The reality is VSM can be anything, and in the context of software engineering, you probably should start looking at some of the planning phases and some of the other things. But ultimately it's: what is the start point? What is the end point? Let's work out all of the steps involved in that. Ideally, you would jot them down on a giant whiteboard; it's brilliant if you can get people into the room. The key is to have not just you in the room, but other parts of the team or organization, or people with different specialist views, to actually give their perceptions or perspectives. I think some of the most interesting ones are when you get teams that don't often talk to each other in the same room and start to explain what the end-to-end process is. I've done this in a couple of banks in the UK. I think one of them was horrified when, because they didn't have a whiteboard, their lovely pristine glass window...
Rob Edwards (08:59) I just kind of took a marker pen and started to write on the glass window, and just sketched out what the software delivery looks like. People were a bit horrified, but at the end they were like, actually, now we can see this end-to-end flow of what is happening, from whatever it may be to whatever the outcome is. From the Dora perspective, code commit to production is a useful one to focus on if that's where you're working. But the key is it's not just mapping out what all the steps are; it's looking at how long it takes for each step to happen, or the amount of time it takes for those steps to actually occur in the wild. So there are wait times as well as lead times, and you start to look at that. Once you've got everything mapped out, once you've got, for an initial pass, those guesstimates of how long things take to do, you can start to identify: OK, this looks like a friction point, or maybe the conversations highlight that this is a friction point. So then you can start looking at that and saying: OK, as a team, how do we look to solve this? But for me, it's really getting what's in people's heads onto something that people can all see, and then argue, debate, question, challenge what that flow really is. It's not unusual for large enterprises to not know what all the steps are, or for work to go into this black hole of magicness and come out the other end with either the outcome you're hoping for or not. So it's useful to map all those things out and get people in the room to discuss. For me, in the most simplistic terms, it's human connection: working out what everything is, discussing it, agreeing that's the thing, and then trying to work out how to improve it.
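The exercise Rob describes, mapping steps with their work and wait times and then hunting for the biggest friction point, boils down to very simple arithmetic. As a minimal illustrative sketch (the step names and durations below are invented for illustration, not taken from any real engagement):

```python
# Minimal value-stream-mapping arithmetic. Step names and durations
# are invented for illustration; real numbers come from the people
# in the room.
steps = [
    # (step, active work in hours, wait in hours before the step starts)
    ("code review",    2.0, 24.0),
    ("CI pipeline",    1.5,  4.0),
    ("QA sign-off",    3.0, 48.0),
    ("deploy to prod", 0.5, 16.0),
]

work = sum(w for _, w, _ in steps)          # total hands-on time
wait = sum(q for _, _, q in steps)          # total time sat in queues
lead_time = work + wait                     # commit-to-production elapsed time

# Flow efficiency: the share of elapsed time spent doing actual work.
flow_efficiency = work / lead_time

# The biggest friction point is simply the step with the longest wait.
bottleneck = max(steps, key=lambda s: s[2])

print(f"lead time: {lead_time:.1f}h, flow efficiency: {flow_efficiency:.0%}")
print(f"biggest wait: {bottleneck[0]} ({bottleneck[2]:.0f}h queued)")
```

Even with guesstimated numbers like these, the point of the exercise survives: almost all of the lead time is waiting, not working, and the map tells you which queue to attack first.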
Daniel Jones (10:25) I'm glad that you chose not to go into really tedious detail about value stream mapping, because my experience of it is, to all of your points, often people don't know all of the steps between code commit and production, especially in a bank or somewhere like that. I worked with a customer a few years ago who were only able to make software releases once every three months, and even then they were slipping, so they were taking five months. Daniel Jones (10:54) And would you believe, it turned out they had five merges on the critical path to production, and nobody knew that. So there were five different instances where there could have been rebasing issues or, you know, conflicts, and nobody in the room knew that until we all got together and everybody from every team drew that path out. And it was a massive diagram, to your point of taking up the whole window. These things end up being really complicated, especially if you have different people with only isolated views of them. Even if you knew nothing about value stream mapping at all, just what we've said now, of draw out the steps, guess the work times, the wait times, that alone, if you've got the people in the room together, will give most people a lot of value. Rob Edwards (11:39) Yeah, I would say the people who already know what VSM is and how to do it, they already know it; I can't add any value to that conversation necessarily. I think what I was trying to do in the Dora chapter was to introduce a new tool for people to experiment with, or to try, and to say: actually, we have research that backs up that this is a useful exercise.
Maybe you should try it, and this is how you could try it. Now, if you say: these are all the steps and this is how you must do it, it puts people off. So it was trying to approach it in a more engaging way for people to try. There are some great resources. One of the guides that I put on the Dora.dev site, I co-wrote with somebody called Andrew Davies. He has actually written a very good book around VSM and flow engineering, so I would strongly recommend having a read of that book if this is something you're interested in. Once you've tried the exercise, if you want to learn more, go and find the resources; there are great resources out there. But just try it; don't be fearful is the first thing. And if it works, great. If it doesn't work, don't get stuck on: well, Rob says it must work, Daniel says it must work, we must do it. Do what works for you; that's the key. But this is another thing for you to go and experiment with, go and try as a team. There is an element of psychological safety needed for some of these exercises, so if you're in an organization that doesn't have that, these can be more challenging. Daniel Jones (13:01) Yeah, because they can expose truths that maybe weren't obvious before. One of the things I like about the Dora report is that you're using the term value stream management, as opposed to mapping. Mapping is the first step: figuring out how value actually gets delivered in this organization, and that's often a novel step to a lot of people. But once you've got that, then you're in a position to proactively manage it and think about how the value stream works. The question I was going to ask you is: in your experience, how many software leaders realize that there is a value stream and their job is to manage it? You know, that they are responsible for the efficiency of this software.
I don't want to say factory, because that has the wrong connotations, but this software assembly line, this product feature assembly line. How many folks have you encountered who already kind of got that and knew that? And how many folks had no idea of this tool and thought: my job is just to make my developers happy, and my job is to ship products, not to build the system that builds products, if that makes sense. Rob Edwards (14:09) Yeah, to some extent, I probably have a very biased view on some of these things just by my engagements. I suspect that the customers that have really robust value stream management processes in place, doing these sorts of things, I probably don't get to talk to as much, because they don't need as much help in this space. So I have a slight bias in the sorts of engagements I have. If I look back to 10 years ago versus now, people are more familiar with it. I think one of the things that has made people look at it in a different light, and I don't know if you've had the same experience, but once the Team Topologies book was published and they talked about value, that seemed to alter people's mindsets, at least the people I was talking to. So I think that has helped shift people's views in this space, and they are looking at it in a different way. I think as we go into the AI space for software engineering, we need to start looking at value, and it's become more important. More and more leaders are saying: how do I measure this AI stuff? Well, what are you trying to measure? And these value conversations then come up. So in terms of timing, leaders are more interested in the value creation side of it. I think this is also driven by some of the Team Topologies work, some of the great work they've done, and other aspects. As I said, maybe that's a bias of the people I get to speak to.
Daniel Jones (15:25) Yeah, likewise, being a consultant, nobody calls you up going: hey, everything is awesome, can we pay you some money to just come and tell us how awesome we are? I'm guessing Google don't get gigs like that either. So with the Dora report, and the value stream management section that you contributed, and the kind of research you did, obviously people should go and read it, after everybody put in so much hard work to create the report. But, spoilers, what were the key findings? Were there any particularly interesting things that you found through doing that research? Rob Edwards (15:53) Yeah, so the goal was, as I said, Dora is always about backing up the things we say with data. For people who are not familiar with Dora, it's a survey that goes out yearly. There's a set of questions, people answer those questions, and then the research team, who are awesome, do some analysis on the back end. This year, we actually had more interviews as well. But specifically from the VSM findings and the questions we asked, what we found is that, for those teams that do it, and we didn't explicitly say VSM, we kind of described it in a certain way, what we know as VSM, but the questions were more around the outcomes of the things they do. But VSM really does help drive the performance of teams. The interesting thing is it started to lead to engineers feeling they were doing more valuable work. And I think that makes sense, because if you're removing friction from people's daily lives, they will feel like the objective they're trying to achieve is to create that value. So it frees them up to do more valuable work, because there's less toil, or there's less waiting around for things to happen. The other big one was that VSM was really a force multiplier for those teams that used AI.
If I take a step back, the core Dora finding this year was that AI in the context of software engineering is an amplifier for teams. It's a mirror of the team. So if a team is really good at delivering software, it actually amplifies that and they get even better at delivering software. However, if there are frictions, if there are problems within that team when it comes to delivering software, it makes it worse. But from a VSM perspective, we saw it as a force multiplier for those teams that used AI. And again, the hypothesis there is that it allows you to work out the friction points and focus on those. So if code review is your problem, generating more code is going to make that even worse, because there are even more PRs or MRs to go and review. So actually focusing on that friction point will alleviate some of the pain. So yeah, those are some of the findings. Daniel Jones (17:58) Yeah. I mean, we had a customer back in the day in my previous business, with Dan Young, who was on the podcast in the last episode. We had a customer who came to us with a problem with their CI/CD server, or at least that's what they thought their problem was. They were like: our tests are really flaky, it takes two weeks to get through our CI/CD pipeline, they keep on failing, the servers are unstable, can you help us? So we came in, had a look at the servers, looked at what they were doing, then looked at the number of PRs, and was like: hang on a minute. These end-to-end tests take eight hours to run, and every time there's a new PR, you spawn up a new instance of the pipeline, so a new set of tests. They used, I can't remember how many, gigabytes of RAM for each set of tests. OK, right. And what do developers do once they've raised a PR? Well, it's not their responsibility; a different team, the DevOps team, the not-really-DevOps team, then, you know, tries to massage it through the pipeline. So the developers pick up a new ticket, and then they raise a new PR.
Like, hang on a minute, right? So basically what was happening, in the factory or assembly line metaphor, is they were just piling up inventory: a massive pile of stuff trying to get through this narrow conveyor belt that was the CI/CD pipeline, which was overloading it and causing it to fail. And they had no visibility of their value stream. They had no awareness of that; they didn't have any dashboards or anything like that. It was a case where everybody had good intentions, but where different teams had become responsible for different things and nobody had that joined-up picture. And it wasn't until we did the value stream mapping exercise with them that they could connect the dots and go: there's a pinch point here. So to your point, you know, just creating loads more code with AI is definitely going to put pressure on the bottlenecks. And Elliot Beatty, who's been on the podcast a couple of times, you know, when they first went fully agentic throughout all of his development teams, they found pinch points in QA. They had to hire way more QA people, because they had a separate after-development QA function. And then the product people couldn't give them enough features to work on, because they hadn't been prepared for the massive speed-up. It's taking one part of a big complex system, making it go super quick, and, yeah, all the other parts then kind of feel the strain. Rob Edwards (20:18) Yeah, and you end up getting other benefits as well, because that feedback loop reduces. So when things do occur, or there's a problem with the PR or whatever it may be, it's still fresh in your mind. The quicker you can get that feedback loop, whatever it is, we pick on code review, but helping reduce that feedback loop, and helping while it's still fresh to either pivot, iterate, change, fix, whatever it may be, is always beneficial. That's the reason why the world changed to a more agile approach.
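The pile-up Daniel describes is basic queueing arithmetic. A back-of-the-envelope sketch (all numbers invented for illustration; the episode doesn't give exact figures) shows how an eight-hour test run per PR translates into machine-hours, concurrent pipelines, and RAM via Little's Law (work in progress = arrival rate x time in system):

```python
# Back-of-the-envelope queueing sketch of the CI pile-up.
# All numbers are invented for illustration.
test_run_hours = 8.0    # each PR's end-to-end test suite
prs_per_day = 6         # developers keep raising PRs and moving on
ram_gb_per_run = 16     # each spawned pipeline instance

# Demand on the CI servers, in machine-hours per day:
demand_hours = prs_per_day * test_run_hours

# Concurrent runs needed just to keep up (Little's Law: L = lambda * W,
# with lambda in PRs per hour and W the hours each run occupies a slot):
concurrent_runs = (prs_per_day / 24) * test_run_hours
peak_ram_gb = concurrent_runs * ram_gb_per_run

print(f"{demand_hours:.0f} machine-hours of tests per day")
print(f"~{concurrent_runs:.0f} pipelines running at once, ~{peak_ram_gb:.0f} GB RAM")
```

The instructive part is the sensitivity: double the PR rate (say, by generating more code with AI) and the concurrent-run and RAM figures double too, while every queued PR ages for hours before anyone sees a failure, which is exactly the inventory pile-up in the assembly-line metaphor.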
I remember the days of the six-monthly release, and I still have nightmares of being paged while deployments were going on. Daniel Jones (20:51) Yeah, and to keep the metaphors coming, another one is: you could get the fastest chef in the world, but if the waiting staff don't speed up, you're just going to end up with dishes going cold whilst you're waiting for the other parts that you haven't improved. And you mentioned also things like lead time, and that there are psychological benefits. I mean, maybe this is getting more into the classic Dora stuff of lead time to production and the kind of four key metrics there, but there are psychological things about the amount of reward that you perceive. If you do something and you immediately get rewarded, that's quite closely connected in your mind. If you delay that reward for like six weeks until you find out that the thing you worked really hard on is in production and the users love it, well, that's great, but I've got today's problems now. So that thing of the folks that are doing value stream management proactively having more motivated, satisfied developers totally makes sense. Rob Edwards (21:52) Yeah, I think the movement from mapping to management is really interesting. Most people come to Dora around the metrics, the four Dora metrics, or the five as we've kind of evolved it into; it's similar with management. If you start with the metrics, it doesn't really help the problem. If you start with: let's try and improve things, and then use the metrics as a guide for how things are being improved... My brain has gone slightly off from where I was trying to make the point, but the key is that the management bit is moving to a more systemic approach in the organization: to management, to understanding these mappings.
And it then allows you, once you have some data collected, whatever it may be, whatever you're measuring, if you're measuring the right thing for whatever your objective is, to run those small experiments, those small changes, and see the impact, positively or negatively. So it allows you to do more experimental things for improving those flows and seeing the benefit, rather than a developer just saying: yeah, that looks good. You can see some tangible impact as well. It's funny, going back to the dopamine here, the psychology of getting fast feedback. One of the things I'm interested in going a bit deeper on, it's on my list of things to read around over the holiday season, is: what does the use of agentic AI, or AI in the software delivery process, do to dopamine hits? Do we have more dopamine hits because we're getting more things done quicker? So there's a whole rabbit hole I want to go down on that kind of psychological impact and the dopamine of those fast feedbacks, or of getting things done quickly. I've gone off on a complete tangent there, but, you know me well enough. Daniel Jones (23:25) No, no, I mean, it's great. It's a conversation that I've been having with some of the engineers at Resync: how it affects people's kind of slightly hyperactive traits if they're waiting every 30 seconds for Claude to do something, and different coping strategies, and different people are going to have different patterns to that. So there is definitely a change of behavior that is required by using these agents. But seeing as we've stumbled onto psychology, which is a topic that we're both interested in, you much more deeply than myself: one of the reasons that I was keen to talk to you is that you've recently finished your psychology master's thesis. It's been marked, you've got the grade and all that kind of stuff. I refuse to
hear anything about it so I can ask with genuine curiosity. What was your psychology master's thesis about, Rob? Please tell me all about it. Rob Edwards (24:17) It was... so, part of the reason for looking at psychology in the first place was partially Dora, but also partially the human impact of things. And it just happened to be good timing by the time I got round to it, because it was a part-time masters; it was kind of a hobby between the day job and looking after a small family. But what I wanted to do was look at, because we heard a lot about AI being the solver of all your productivity woes, and to me that rang an alarm bell straight away: what is productivity, and does it really? And we're talking about humans here; it's not just a kind of input-output type thing. So what I wanted to understand was the impact AI is having on developers, be that productivity, be that efficiency, however you want to define it. I actually wanted to try and understand what the developers' definition of productivity was and see how that aligns with the psychological aspect. So the title is worded very long-windedly: Developer Productivity in the Age of Generative AI: A Psychological Perspective. But really I was looking to answer three core questions I had in my head. One was: how do developers describe and evaluate their productivity when using AI tools? Another was: how does AI affect the developer's sense of ownership, path to skill mastery, and autonomy? And also: how does it impact the developer's primary focus and shape their experience and perceptions when using AI? So really it's around: does AI really change the work, productivity, agency, and focus of an engineer? To do that, I went down the interview route and spoke to a number of engineers to get their thoughts on these. So I'll pause there, because I've just thrown a lot of what I was trying to achieve at you, and it was quite a lofty goal as well.
Because I think each of those questions individually, I could probably have done a thesis, or a doctorate, on in their own right. So I just wanted to try and capture a broader view of what's going on, and then... Daniel Jones (26:14) Yeah, I mean... Rob Edwards (26:26) ...suggest further analysis or further research. Daniel Jones (26:29) I'm sure you could have done an entire PhD just trying to define developer productivity, because, you know, I can go quite deep philosophically on that, and when we get down to like the quantum level and things like that, it's not a particularly easy problem to solve, that one. So there were three questions: the definition of productivity when using coding agents and how developers perceived that; there was the sense of ownership; and then what was the third question? Rob Edwards (26:55) It was around the focus, or the experience and perception, of using AI. Daniel Jones (27:00) Got you. So go on then, the productivity aspects, because I find this one interesting. I found myself feeling guilty, because I'm kind of like: I've been getting Claude Code to do a thing, and then it's like, I can't do anything meaningful in this two minutes, so I'm just going to go and read a video games news website. And then I'm like: should I have been doing something more productive? But then you try spreading yourself too thin, and then all of a sudden you're, you know, getting burnout from context switching. So you're probably better off just doing one thing and then kind of half-baking the other, or doing something, you know, unintensive while you're waiting. So yeah, I'll be fascinated to hear more about the productivity perception. Rob Edwards (27:38) Yeah, so the way I conducted the research was, as I said, I interviewed a number of developers.
They were actually more seasoned or experienced, let's just say, engineers, across platform engineering and software engineering; I kind of viewed developers in both spaces as valid for this sort of questioning. Based on the interviews, I then did some thematic analysis. That was looking at the themes that were present in all of the interviews, or most of them, and trying to pull out some patterns. So, well, the lofty goals were those three questions; the analysis and themes that come out may not be directly related to those, or may be slightly tangential. So maybe if I share the theming, or the patterns that I saw, we can loop back to the productivity one, because, as we both know, it's a difficult one to define, and I can probably share some other deeper thoughts on that. But the real theme that came out of the research was that we're going through a redefinition of what it means to be a developer. There is a move of identity from being a coder to more of a conductor. And I'll go into that in a bit more detail in a second, but really the three themes that came out of this were: this coder-to-conductor shift drives the re-architecting of a developer's focus, like what is a developer focused on; the other theme was the shifting of the definition of productivity, so the individual's perception of what it means to be productive has evolved; and then there was a double-edged kind of sword, or challenge, when it comes to agency and skill development. So those were the three high-level themes. I can go into each one, or have you got any questions? Daniel Jones (29:20) I mean, the coder-conductor one: I was having a conversation on the CTO Craft Slack, which is an excellent community that I get no kickbacks for people joining, but I recommend anyway, about this very thing of the sense of identity moving, and somebody phrased it really well.
I should probably open it up on my other monitor, but I got distracted. The idea was that the act of programming is more about detail, and the act of spec-ing to an agent is more about abstraction and summarizing. So maybe developers that have been product managers, or have done a bit more product management and backlog management stuff, are probably going to be better at and feel more attuned with agentic development. But you also mentioned the re-architecting of focus there. Just to go off on a tangent very briefly, I've been reading through a blog post about the fact that the next-gen IDEs we need are going to look nothing like they do at the moment, and they're probably not going to be made by any big companies, no offense to your esteemed colleagues at Antigravity, because it's going to require some major leaps. This isn't going to be incremental on top of a VS Code fork. We're going to need a completely different UI that lets you dashboard and see what multiple agents are doing. And my personal hunch is that's all going to come out of open source and weird projects, where someone's going to try something crazy that's never been done before, and then we'll see where it goes. But yeah, that coder-to-conductor thing. Do you want to go deeper on that? Rob Edwards (30:47) Yeah, you just prompted two thoughts as well. When I started this research, the whole explosion of agents wasn't really a thing. So to some extent, some of the findings are probably more interesting in the world of agentic development. At the time, fewer people were doing this agentic thing, or it wasn't well defined what agentic was. And you just reminded me of the new way of working. I don't know if I've said this before, but the very first person I ever paired with was you. And I remember that day, in fact, I think we did a couple of days, and I was absolutely mentally and physically exhausted at the end of that.
Daniel Jones (31:22) Was that from trying to resist the urge to punch me repeatedly? Because that's normally the experience people have. Rob Edwards (31:27) Yeah, but if I remember right, I also found out in one of our side conversations that you did too. So I was like, I'm not punching. Daniel Jones (31:33) I don't know, rugby versus jiu-jitsu, it's an interesting battle, that one. Rob Edwards (31:37) Yeah. So when Antigravity first came out, I started to play around with it. I thought I'd do a few hours and play around in an old code base and also create a new app, less using the VS Code bit, but more the agent management space. And after three or four hours, it actually reminded me of the first time we paired, because I felt exhausted. In some cases I was being very controlled with what I allowed the agent to do, but in some cases I was just letting it go off and re-document things and do other things. And it actually became quite difficult to keep track of everything. It became exhausting. So I agree with you, I think there's a new shift in the way we use IDEs and the tools we'll use to help with the new way of working. So within that re-architecting of the cognitive focus, there were really two sub-themes that came out. One of them, the more obvious one, let's go there first, was AI as a mentor, collaborator, or cognitive partner. People, or at least the participants, were using AI as the safe person, or the safe thing, to ask the stupid question, because it's not judgmental and it will give a response. It's usually "You're great, Rob," and we all know how AI tends to be quite positive in the way it responds. I quite like hacking the prompts to make sure it comes back with a grumpy Northern Englishman response. But that's just my personal view.
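The persona hacking Rob describes usually amounts to a system prompt that overrides the model's default positivity. A minimal, hypothetical sketch in an OpenAI-style chat message format follows; the prompt wording and the `build_messages` helper are illustrative, not from the episode or any specific product.

```python
# A hypothetical critic persona: a system message that forces a blunt,
# skeptical tone instead of the default "you're great" praise.
CRITIC_PERSONA = (
    "You are a blunt, skeptical reviewer. Never open with praise. "
    "List the three weakest points in the idea below, each with a "
    "concrete question the author must answer before proceeding."
)

def build_messages(idea: str) -> list[dict]:
    """Assemble an OpenAI-style chat message list with the critic persona."""
    return [
        {"role": "system", "content": CRITIC_PERSONA},
        {"role": "user", "content": idea},
    ]

messages = build_messages("We should rewrite the billing service in Rust.")
```

The same message list can then be passed to whichever chat completion client you use; the point is that the critical tone lives in the reusable system message, not in each individual question.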
But people are using it more as that safe thing to bounce ideas off, and as a collaborator to solve problems and do some of the planning and thinking about things. Rob Edwards (33:13) I think if you have a PM mindset, that's actually useful, because you can break down the problems with AI and plan things out a bit more, and also use it as a cognitive partner, a bit like pairing, and be able to bounce ideas off it. So that was one of the sub-themes: people are using it quite a lot to help move ideas forward, challenge ideas, coach ideas. The bigger one that I found interesting was that move from coding to system-level thinking. A theme really was that there was a forcing of metacognition, which was the term I used within the paper. It's a psychological term; it's about thinking about how you think. If you think about when you first used AI and first got it to solve a problem, you would give it a very simplistic ask without thinking about how to logically go through the process. You would just say go do this and hope it came back with the result. What we've evolved into is having to think about the system-level view: actually, I need it to do this, this and this, because of these reasons. You end up providing more information, or a better-framed question, to narrow down to the right output. And that's all around thinking through how we would solve it. It's assumed we know how we think through problems; we don't really think about how we think through problems, but we need to coach the AI to think through problems a bit more. So there is that shift from "just go do a function or a class to do X or Y" to "go do it because of this, and think about these things."
That was a big shift, and one of the participants really highlighted that they've started to think more about how to write good software: less about the syntax, less about the code itself, more about how to better architect a production enterprise system. One of the things they noticed is that for a number of years they hadn't read a technical book, but AI was forcing them to read more architectural books to better inform themselves on how best to ask for the desired outcome, or shape the intent to be what they want it to be. So there's an interesting shift of moving away from syntax coding, like hitting the keyboard and generating code by the kilogram, which was one of the participants' quotes, "code by the kilogram," to more of a system-level thinking, elevating to a strategic level: what is the problem I want to solve? How do I best solve it for day two as well, not just vibe it into existence? Let's think about it in an enterprise context of how best to create it. So there's a lot more thinking beyond just the coding, which has always been a thing with development, but it's even more important now to think about these things if you're using an AI partner. Daniel Jones (35:56) Yeah, I mean, there's two things in there that jump out at me. Three, if we include coding by the 2.2 pounds for American listeners. The whole metacognition thing, thinking about how you think, and rather than just writing code, thinking about the structure of what you're doing. There's kind of two separate things there. One is the thinking about how you think: how am I going to prompt this thing? How do I normally solve problems? How do I now have to solve a problem when I've got this new tool in the mix? That is really reminiscent of the point about value stream management.
And I was kind of alluding to managers who think their only job is to get features shipped, when their job is to build a system that builds code. Rob Edwards (36:34) Is to build the system that builds the code. Daniel Jones (36:38) And not everyone makes that kind of higher-order thinking jump of realizing that, what's the best way to say it? You're looking at the chess board from the top; you're not one of the pieces anymore. So there's a really interesting parallel there of people needing to introspect and look at things more broadly. And then the point about how to write better software, and higher-order thinking in terms of architecture. I think that's one of the more optimistic takes on where agentic coding will lead us: we end up with more architects and structural engineers and fewer bricklayers. You know, there are lots of artisanal bricklayers, people that can make flint walls and knap flint and all that kind of stuff. But the real question is what's going to happen to junior developers. Maybe they'll just need more education. Maybe they'll come out of uni having needed to do a masters before they're even allowed to practice: here are CRDTs, this is what CQRS is, and lots of other acronyms, that sort of increased focus on structure rather than implementation. Rob Edwards (37:38) Yeah, I've wondered this a few times, because part of that we can go into a bit later, around the double-edged sword of learning and not learning, and junior developers. Well, I purposely focused on the senior developers for this. One of the takeaways in the conclusion was that we need to start thinking about the junior developer side of things, or how do you bring them up? Because I learned a lot by being physically near really good engineers when I was younger. I learned just by being in their proximity.
As we've gone to more remote working, that's become a challenge in its own right. I remember purposely sitting next to people I wanted to learn from when I was much younger, and I learned a lot from them just by being in their proximity. I did question this during the COVID time: if more of us are remote, how does that work? Do you get to learn by osmosis anymore, when people are jumping around calls? That's a challenge. And then if we take this in the context of AI, is that a challenge? But then I wonder, is it? Because actually what we're doing is just moving the level of abstraction up. Do we need to know how the code is compiled? Do we need to understand and write assembly and bytecode and all of that stuff? Over the years we've been abstracting up. Is it detrimental to us on most days that we don't have to think about that low-level side of things, that we don't have to think about memory management or whatever it may be, as languages abstract it away? Is this just another level of abstraction, and actually the new generation of junior engineers coming through will learn things in a different way, open doors, and challenge our way of thinking a bit more? So there's lots of things in that space. Is it a problem? Is it not a problem? Is it just a different way of thinking? And are we grumpy old people going, this is the way we've always done it, this is the way it must be done? There's lots of thoughts in that space. I don't have any answers. Daniel Jones (39:24) I am definitely a grumpy old person, I know that much for a fact. But yeah, you were talking about sharing things by osmosis. The conversations I see between engineers and CTOs now, or at least the ones that have rediscovered the joy of coding through vibe coding, is they're not sharing.
Daniel Jones (39:47) keyboard shortcuts anymore. They're sharing prompting techniques, and not necessarily how to craft a prompt, but things like how to do spec-driven development in a way that works well for you. So I can see that increased level of abstraction showing up in the kind of things that people are sharing. Daniel Jones (40:09) I think time will be the only judge of whether we dangerously lose touch with, you know, what we're actually implementing. There was probably, again, I think you alluded to it with the comparisons to bytecode and assembly, there was probably a time when people didn't trust compilers. It was like, I want to write machine code myself, I don't trust any of this compiler nonsense. But eventually it turned out safe enough. But compilers were deterministic, so maybe it's not an appropriate analogy. Rob Edwards (40:39) Yeah, maybe, maybe not. But it was more the moving up of levels of abstraction, which is maybe a different way of looking at it. Daniel Jones (40:47) Sure, sure. Rob Edwards (40:47) And if I look at one of the other things, probably the final one on that cognitive partner side of it, or the re-architecting of the cognitive focus, as I called it: one of the things that a few of the participants called out, and I've seen this anecdotally, and I've probably been one of those anecdotes as well.
I have used, and a few of the participants have said they use, AI to validate some of their thinking and their approaches, and they are then more confident to voice an opinion in a group setting. So one of the interesting things is that for those people who are less confident in their ability, maybe more introverted, AI is allowing them to have that cognitive partner to challenge their thinking and validate their thinking, and then allowing them to be more comfortable and confident in expressing their views, which could actually result in a much better outcome on whatever value stream they're trying to create. I thought that was an interesting angle. It goes beyond just the rubber-ducking or collaborative side of things. And I do it myself: for some documents, I have a really brutal critic and coach that will challenge any of my thoughts, ask some interesting questions, and make me really think through the problem, or validate that it's technically sound. So yeah, I thought that was an interesting dynamic that people were starting to use AI for. Daniel Jones (42:04) Yeah, that is one that I wouldn't have predicted, or if you'd asked me to name something, it wouldn't have come to my mind. But there was that research published recently about how people are using the various copilot products, and it was overwhelmingly people asking AI for advice, especially in the small hours of the morning. So it's clear that folks are looking for guidance, reassurance, validation, and challenge from AI. And the idea of people challenging their ideas, or getting more confidence in their ideas, and then bringing them into a group setting, I think is really interesting, because all too often people dismiss conversations with LLMs as sycophantic: it's just going to say that you're right about everything. Well, maybe it is. Maybe that's good for some people.
But as long as all of those ideas get mixed into a diverse set of opinions, and then they get battle-tested and challenged by reason and things like that, then we end up with a much better solution. Rob Edwards (43:05) Trust is the key with this. And actually in the DORA research, last year we looked at trust, and this year we looked at trust. People are more trusting of the responses models give them, but they're still not completely trusting everything. So there is an element of validation, and I think that's a new skill that we've got to hone, not just in development but generally in the use of AI models: how do you best validate what's coming out, and not just blindly trust it, because it may or may not be correct. Actually, one of the participants worked in a really deep specialist topic, probably more niche, where the models don't have the knowledge. They did not trust AI; they just didn't embed it in the workflow, because they couldn't trust anything it did. So there is that element of trust, and how do you validate that what it's telling you is accurate. There's obviously skills and techniques we're all developing on how to do that, and seeing the source is always a good thing. So yeah, I think it's an interesting world, how we can get the best out of people and make people feel more confident in what they're saying, and a key part of that is the trust. Daniel Jones (44:03) Absolutely, absolutely. It's just interesting thinking about the second-order knock-on effects: not just people writing code, but how this affects individuals, their motivation, their confidence, and the team dynamics and things like that. Talking about motivation and the more psychological aspects, I think number two on your list of questions was about ownership. I know the findings don't necessarily map onto those questions, but did you find anything interesting about people's sense of ownership?
Rob Edwards (44:31) It was really mixed. Some people still felt ownership of what they were doing because they felt like they were directly prompting the result. The fact they didn't write the code itself was less of an issue; they created it into existence based on their thought process and their asking it to do things. I'd be interested to ask this question again in six months, once we've gone fully agentic. If you give a task to an agent and it comes back with completed code, does that ownership still exist? I don't know. In the context of the participants I was talking to, they were all still using it within that chat interface or auto-completion or whatever it may be, so there was still a level of ownership. There was a fear that they would lose some of that ownership, but they still predominantly felt like they owned the output, because a couple of them mentioned that once they raise the PR, that's a stamp that they believe in what they're committing. They have to own it; otherwise, why would they commit it? So I would say it was somewhat inconclusive, but there was a sense that people still generally felt that ownership. Daniel Jones (45:37) I know when I have spec-driven projects and I've been quite particular about the methodology to use, delivering things exactly as we would have done back in the day, thin vertical slices with an acceptance test that I can validate, and I've actually had my eyes on the road and checked at each step, yep, that is exactly what it should be doing, I know I have confidence in this, even if I haven't checked the implementation. Those projects I felt much more were mine. But maybe it's because I was asking it to do it kind of my way. And then there are other things where I'm like, just go and build a thing.
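The spec-driven workflow Daniel describes, thin vertical slices, each one gated by an acceptance test a human can validate before the agent moves on, might look something like this minimal sketch. The feature, function names, and test are hypothetical illustrations, not from the episode.

```python
# Hypothetical slice 1 of a feature: a user can register with a unique email.
# The agent implements register_user; the human validates the acceptance test.

def register_user(email: str, users: dict) -> bool:
    """Register an email, rejecting duplicates. Returns True on success."""
    if email in users:
        return False
    users[email] = {"email": email}
    return True

def acceptance_test_slice_one() -> None:
    """The human-checked gate: the slice ships only if this passes."""
    users: dict = {}
    assert register_user("a@example.com", users) is True
    assert register_user("a@example.com", users) is False  # duplicate rejected

acceptance_test_slice_one()
```

Each subsequent slice would add its own acceptance test while all earlier tests keep passing, which is what lets the human keep confidence without reading every line the agent wrote.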
And then I ended up with something, and I'd taken my eyes way off the road, and I'm like, crikey. It says it works. I think it works. I don't know what's in this. I'm not sure I want to be associated with it; it probably doesn't work. Maybe there's just a kind of fear in there. So yeah, what other big findings? Were there any other ones that you thought were really powerful and impactful? Rob Edwards (46:33) One of them, I think, is really reinforcement of what we've already said: that redefining of productivity. I think most engineers weren't firm believers that number of lines of code was the measure of how productive and great I am; most engineers have believed for a long time that's a terrible metric. Some leaders still think it's a good metric, but for engineers, the common theme was that impact was a more important measurement than volume of code created. And they were shifting their own mindset. Part of that, I think, is that higher-level, systems-level thinking, rather than being in the weeds and just churning through and getting the PR done. Because they were taking a step back and thinking through some of these things, they better understood, or had a better link to, the value they were trying to create, and therefore the impact they were having in contributing to that. So there was that shift from just volume creation of code to actually impactful things, and also thinking about the day-two side of it. One of the participants had a great phrase, which I don't think I'd heard before: total cost of build.
He was thinking about it beyond the immediate impact of just writing code: thinking about the maintenance, the debugging, the user friction, thinking about how to reduce technical debt as much as possible earlier on, because he had a bit more time to think it through. That added to the value and the end impact, rather than just churning out the feature, getting it rolled out, and hoping for the best. So yeah, I think there was a mindset change in some of the thinking. And the one which got me was the move from just raw speed of doing things to having a more sustainable flow. The problem, and I don't know if you've experienced this yourself, this is a personal perspective, is that as you do more things, let's say you think you're being more productive, so you are more productive. But you then move your baseline of productivity. Suddenly, if you were doing X things of added value a week and it's now X plus three, you then move your baseline, so if you're not doing X plus three, you feel like you're not being productive. So part of the challenge is the mindset that there is this new baseline of productivity, or efficiency, or getting things done, that we're trying to meet. And we always like to, at least I do, I'm pretty competitive, say this is what it was like. In fact, right now is a good example, because typically in the work I do, this period of time is a bit quieter. So I start thinking, well, I'm not being impactful, because I don't have 101 meetings a week and I'm not doing 101 things, because before, there was always a mad rush after the summer holidays and before the Christmas holiday break where I get super busy. So my alignment of productivity has changed. But I think that's the same.
What I'm trying to get at is there was definitely a theme of people fearing that there is now a higher expectation on delivery, because AI has made it easier, which could, I guess, in theory, start to push us more towards burnout. Are we doing more things? Is it good for us as humans? That's the question. I don't have an answer to that one, but there was definitely a feeling that the baseline of productivity has moved up. Daniel Jones (49:44) Yeah, I was just thinking about your baseline of productivity as a father, family member, transatlantic emigrant, Google high-flyer, DORA contributor, masters in psychology part-time hobby student, you know, that's quite a lot to be productive with over a period of time. So I should imagine your baseline is quite high. And regular listeners of the podcast will be familiar with one of Elliot Beatty's staff, who got burned out after being super productive spinning up multiple Claude Code instances on different stories and context switching between all of them, and after a while needed time off work because they were so mentally fragmented. What I hope is that we have this little period where people feel a bit overwhelmed by coding agents, then they realize they can be way more productive, then they're like, well, I'm still shipping more code than I used to, so I'm going to make sure that I don't get burned out, I'm going to go and check video games news websites or whatever in the time between prompts. And then that holds for a little bit, and the product people are happy because we're shipping more software, and the developers are happy because they're not being totally burnt out. But then the more ambitious people, and the less lazy ones than myself, will be like, well, I can do more than them by just not stopping.
I'm thinking about some engineers I used to know that would have had seven espressos by lunchtime. Those kinds of folks, I think, might end up pushing out a lot of those that are maybe not cognitively capable, or don't have the motivation, or aren't at the right point in their lives to work that hard. That definitely scares me. Maybe not so much because I'm an old git, but if I was younger in the industry, I'd definitely be worried about that. Rob Edwards (51:27) Yeah, this then comes down to the hope of having good managers and good leaders to help through this transition. I don't know if you're familiar with the J-curve of learning, but I think that's real with AI: you get some initial gains and you feel everything's great, and then you hit a bit of a wall, because actually you're not as efficient as you were the week before; you realize there's more things you need to learn, which slows you down a little bit. So there's that J-curve of learning, and I think that's really true of the space we're in. And if I look at even the last month, the sort of things that have been coming out: six months ago, spec-driven development was not a concept that people talked about; context engineering was not really massively understood, certainly not context harvesting. So there's all these new terms coming out on what feels like a weekly basis. I work in this space and I struggle to keep track of it. I don't know how engineers who are trying to do the day job can keep track of all of these things. So there is a huge rate of change in this space. I see some of the teams and organizations that give their engineers a bit of time and space to learn these new things starting to get success.
I think the teams and organizations that say you still have to do 100% of your work, there's no time for learning, will struggle a bit more in this space, because I think we all need to take a step back, reevaluate how we do things, and experiment a bit more. But there is a challenge, and it's probably on leaders, certainly the managers, to see where people are starting to struggle or starting to take on too much. I think that's going to be a bigger challenge. I know you were talking about all the stuff I've done. One of the reasons it's taken me longer to do this masters than I originally planned was because halfway through I had a bit too much on, and I did have to take some time off through a bit of a burnout. So even when you're aware of all these things, it's very easy to still fall into the cycle and into that pit of despair. So yeah, I think that's going to be a real challenge, especially as we adopt it more. Daniel Jones (53:27) Yeah, the perils of being ambitious and wanting to do too many things. I've definitely known some folks who are lovely people, great developers, or good developers, maybe not great, who came in at nine, left at five, because that was the deal back then, and never did any overtime, never worked at weekends. Fifteen years later, they're still working at the same financial institution. And I remember thinking, don't you want to do more? Don't you want to get promoted? Don't you want to do this, do that? And then other times thinking, he probably quite enjoys his weekends. He probably just goes and does something completely different, doesn't think about work and doesn't worry about work. Which one of us is playing this game right?
Rob Edwards (54:09) Yeah, well, that was part of the thinking. It's great to come out to Vancouver, because I can go down the rabbit hole of working too many hours. I enjoy what I do, I enjoy learning, and it's very easy to become hyper-focused on something. Part of the benefit for me of moving out to Vancouver is that during the summer it's awesome: I can go out and decompress from the world, go kayaking, go on a hike with the family, whatever it may be. The younger me would have looked at me a bit strangely, saying, what do you mean you're going off? You're starting early, finishing early and going off to nature. Although I played a lot of sport as a younger me, that was kind of my weekend work, and during the evenings I'd be trying to learn new things. Maybe as I've matured and got a bit older, I have different priorities in life. But yeah, I think AI could be that enabler for people who are already naturally driven to do crazy hours. Maybe they'll just end up doing more within those crazy hours. Is it any different? I don't know. Daniel Jones (55:12) I mean, wouldn't it be a lovely world if we all saved two hours a day because of AI productivity gains, which we could then go and spend outdoors, or doing something not related to a computer if we didn't want to. And if we were young and in our 20s, we could hardcore code all the time. But the rest of us maybe can chill out and enjoy the non-digital world a little bit more. You talked earlier about, in an ideal world, being able to follow up on some of this research now that we're fully agentic, and it's not just auto-complete anymore, it's not just ask-and-edit mode, they're full-on agents. Is that something that's on the cards?
You did this research on your own as part of the masters in psychology, separate from your work with Google. Does this research intersect with Google? Is there any opportunity for DORA to take on some of these questions, or are you not able to talk about that? Rob Edwards (56:02) It's probably easiest to say not able to talk about it. It would be great if we could; I think it's particularly relevant, and some of these questions are being asked internally anyway. Within Google there's quite a cool team that looks at engineering productivity, and they look at a lot of these things, so there are some really amazing researchers in that space looking at it. I would love to go deeper on some of these things. Time will tell, is what I'll say. I have a few other ideas. It feels like some of the stuff I've learned here could be useful to turn into another paper or something else, even if it's just personally published. I think I'd like to share some of the thinking, because one of the things that has always been close to my heart is that anything I learn, I like to share. That's always been the case. I'm definitely not a knowledge-is-power type person; I'd rather free all the knowledge, to allow me to look at some of the new things. From the rabbit hole I went down for all of this, I think there's a lot of useful things in how we can start thinking about adopting these tools from a human perspective. So I have some thoughts. Time will tell. Daniel Jones (57:07) Yeah, useful and important, and timely as well. I don't know whether you're allowed to use your Google 20% time for writing blogs and articles and papers, but not many people have had the opportunity or the time to dig into these issues, and probably not the kind of exposure that you've had to real customers, as well as all of the dedicated research and interviews, and seeing stuff in the field.
I think it would be a great outcome if you managed to spend more time dwelling on this stuff and sharing your findings. And, you know, a moment ago we were talking about the challenges of burnout and having to manage that as a human. On a similar human note, one of the wonderful things about working in open source ecosystems is the sharing of knowledge, of just working in the open: hey, I found a thing, hopefully it will help some people, I'm going to throw it out there. If it helps, it helps; if it doesn't, it doesn't. But it's nice to be in that kind of sharing type of ecosystem. Rob Edwards (58:04) Yeah, I think it unlocks other thinking. The reason I like working with other people is to get their perspective and learn from them. It's very easy, in the world we're in, to lock ourselves in, like I'm in a basement right now, to lock ourselves in the basement and not talk to another human about any of the things we're working on. It's very easy to do that. I learn from others. Recently, in fact last week, we had a meeting physically in a room somewhere else with real people. Yeah. But the ability to have that coffee conversation was great. I learned a whole load of nuanced things which I probably wouldn't have thought about.
And again, the same happened the other month when I was working with a colleague who is deep into Daniel Jones (58:32) With real people! Like, you could touch them and everything. I mean, you're not supposed to, I gather. I gather that's frowned upon. But you could have touched them if you wanted to, with their consent. Rob Edwards (58:53) the AI coding assistant tools, and there were a couple of really subtle changes I've been able to make that wouldn't have happened if I hadn't had a coffee conversation with somebody, or they hadn't shared the knowledge. A few other colleagues share their knowledge through blogs. So I'm a firm believer that my skill and ability can leapfrog based on other people's learnings and them sharing it. I spent months looking at this psychological stuff; if I can condense that into a shorter read, people will get value out of it, and then that will spark other thinking from them, which will then spark my thinking. We can just enrich everyone's experience, rather than keeping it in my head and saying tick, done it. Which is a risk with a busy life, but I have plans over Christmas to do a few things and try a few new projects in the space. We'll see. Daniel Jones (59:38) Well, I for one will definitely be pestering you to try and get more information out of you. In terms of getting information out of you: the thesis, is it published? Is it going on arXiv, or are you going to try and get it into journals? Is it going to go on some website somewhere? Can people read it? Rob Edwards (59:54) At the moment, my goal is... I need to work more with my supervisor on this to get it published. We're trying to work out the best place to get it officially published. That is kind of high on my list to get done. I gave a bit of a view of it: I blogged about it and did a couple of sketch notes on the high-level findings on my website.
So on sapient.coffee there's a post, From Coder to Conductor. I don't write as often as I should, but I tried to put some sketch notes together, ironically AI-generated, around the key findings. So the goal is, A, to get it published, and B, to document some of the things in a bit more depth. But right now the key findings are on the website. I said website; blog post. Daniel Jones (1:00:40) Awesome. I will make sure we put the link in the description, all those kinds of things. Smash that like and subscribe. No, we're not going to have any of that. Subscribe if you want to; don't if you don't want to. You don't have to. I'm still going to say it anyway because I get paid to. But yeah, that sounds like a good wrapping-up point. Is there anything else that you want to point people in the direction of, or anything else that you're working on that is cool or that you can promote? Any local charities that you support? Rob Edwards (1:01:04) I was a huge fan of Dora before I joined Google. Have a look on Dora.dev, look at the Dora report. I think it's fascinating, and that's not just me saying it because I'm involved in it; I actually think there is some useful stuff. We did actually publish a report last week that goes into a bit more detail around the seven capabilities that we highlighted, so we give some tips around what we mean by clear communication, or AI-accessible data, i.e. context engineering. So there's a whole set of things. I strongly suggest having a look at that and getting involved in the Dora.community. We have regular conversations; in fact, I think I've got one this week on VSM. So yeah, Dora.dev is a great resource and the community is great. I strongly recommend joining that and just sharing your knowledge. There's a lot of people that share some awesome knowledge in that space.
Daniel Jones (1:01:55) Awesome. Good stuff. Rob Edwards, it has been an absolute delight. I really enjoyed this conversation. We could talk about so many more things, but we've got to wrap it up. That's been great. It'd be great to have you back on again, if you would be so gracious. So yeah, thanks very much, sir. Rob Edwards (1:02:08) Yeah, I thoroughly enjoyed the conversation, and I'm absolutely happy to come back. And it's great to connect after, as you said, it feels like we started working together nine years ago, then about five years ago we disappeared in different directions, and it's great to come back. Daniel Jones (1:02:22) Cool, thanks mate. Rob Edwards (1:02:23) Go. Daniel Jones (1:02:25) Hopefully you enjoyed all of Rob's insights there. He's a great guy, I've known him for a while, and he really cares about the humans in the development process. I didn't quite catch his website address, and I want to get the episode out quickly, so rather than wait for him to send it to me, I will make sure that it ends up in the description once we've edited all of this together. In the meantime, if you would like to give us any feedback, then please get in touch via email at wavesofinnovation@re-cinq.com. That is R-E dash C-I-N-Q dot com. We are on more platforms now, so it'd be great to hear from some of the people that are discovering us there. And yes, be good to each other, and you'll hear me in the next one.

Episode Highlights

Rob Edwards discusses DORA 2025 findings on Value Stream Management and AI.

Value Stream Mapping reveals hidden friction points like excessive merges before production.

Research shows VSM is a force multiplier for teams adopting AI tools.

Developers are shifting identity from Coders to Conductors of AI agents.

AI encourages metacognition and system-level thinking over simple syntax generation.

Introverts use AI as a safe cognitive partner to validate ideas confidently.

Raising productivity baselines with AI poses a significant risk of developer burnout.
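As a rough illustration of the Value Stream Mapping idea in the highlights above: one way to surface a hidden bottleneck is to timestamp each stage a change passes through and measure the waits between consecutive stages. This is only a minimal sketch; the stage names, timestamps, and `stage_durations` helper below are illustrative assumptions, not taken from the DORA report or from any Google tooling.

```python
from datetime import datetime

# Hypothetical value-stream events for a single change: (stage, timestamp).
# Stage names and times are made up purely for illustration.
events = [
    ("commit",      "2025-11-03T09:00"),
    ("pr_opened",   "2025-11-03T09:30"),
    ("review_done", "2025-11-04T16:00"),
    ("merged",      "2025-11-05T10:00"),
    ("deployed",    "2025-11-07T14:00"),
]

def stage_durations(events):
    """Hours spent between each pair of consecutive value-stream stages."""
    ts = [(name, datetime.fromisoformat(t)) for name, t in events]
    return {
        f"{a}->{b}": (tb - ta).total_seconds() / 3600
        for (a, ta), (b, tb) in zip(ts, ts[1:])
    }

durations = stage_durations(events)
bottleneck = max(durations, key=durations.get)
print(durations)
print("largest wait:", bottleneck)  # merged->deployed (52 hours) in this sample
```

In practice the timestamps would come from Git and CI/CD metadata rather than a hand-written list; the point of the exercise is that the longest gap between stages, not the coding time itself, is usually where a value stream map directs attention.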

