
Podcast

Nov 13, 2025

From Coding to Context Switching: An AI Retrospective


agentic ai

developer burnout

mcp servers

technical leadership

software engineering

In this episode, Deejay welcomes back Elliott Beatty to discuss the long-term reality of adopting agentic AI coding assistants. While development velocity has soared, the team now faces 'human' challenges like burnout and context-switching fatigue. Elliott explains why frontend teams benefit more than backend engineers and how accelerated coding exposed major bottlenecks in QA and UAT. They also explore the shift toward Model Context Protocol (MCP) servers, the necessity of feature flags, and why strong leadership is crucial for sustainable AI implementation.

Hosted by

Deejay

Featuring

Elliott Beatty

Guest Role & Company

VP of Engineering @ Fruition

Guest Socials

Episode Transcript

Deejay (00:02) This is the Waves of Innovation podcast and I am Deejay, your host. In this episode, I am talking to Elliott Beatty, who long-time listeners of the podcast might recognize. We are catching up with his journey of agentic AI coding assistant adoption within his organization. In this episode, we talk about the challenges there, what went well, the fact that velocity is still up but some folks are getting burnt out, which parts of the SDLC found themselves under pressure with that increase in velocity, and what advice he would give people who are adopting AI coding assistants. So sit back, listen and enjoy.

Deejay (00:42) It's been a good few months since we last spoke in a recorded format. We did have a chat, you know, kind of just catching up. So it's going to be interesting to see whether we remember what we've already told the dear listeners about and what we've just spoken about privately. But you went all in on agentic several months ago. You've got the Agentic CTO podcast, which is awesome. Nice 15-minute bite-sized chunks of insight. How has it been going?

Elliott (01:07) Good, you know, it didn't really take at work. It didn't really take the trajectory I thought it would. You know, I was all gung-ho on developers not developing anymore by the end of 2026. I don't know if that's going to be a realistic goal anymore. Is it possible? Yes, but we're running into the human challenges, the human element of it, and that's making it really hard for it to be sustainable.

Deejay (01:29) I really-

Elliott (01:33) I think if you were, like, you know, one of these contract agencies that come in and set up AI and, you know, sell yourself on increasing productivity, you could do that and you could get short-term wins. But, you know, I'm looking at this from a long-term perspective, wanting to maintain a dev team, a happy dev team, that's healthy for the long run.
And we've had to actually pull back a little bit because it took away so much job happiness from some of the engineers.

Deejay (02:01) That's quite a surprising insight. I think there's probably two things that I'll be interested in digging into. It's like, one, how well has the actual code writing been going? Like, how well have agents actually been able to write the code that you need them to? But then the organizational stuff is super interesting. So with the kind of human challenges there... Are people kind of fighting it? Are they worried about it taking away their jobs? Is it leaving them only with the dull work that they don't want to do? How's that playing out?

Elliott (02:31) So nobody fought it. Nobody was worried about it taking their jobs. And I tried to quell those fears before we got into it. And I think as they saw it play out and saw what it's capable of, they realized that there's still space for humans in the coding world. Where I got pushback is the latter of what you said. It's leaving them with the crap work. And it's also making them do a lot of context switching, right? When you really put the throttle down and press your engineers to be running multiple, you know, agentic coding solutions and working on multiple things, it's only sustainable for a short period of time. Sure, if you have, you know, a week or two that you really need to crunch it, you could maybe lean on it a bit more, but, you know, we got burnout. I've been sending developers home for, like, a full week just to, you know, recoup. And I've had some of them come back and just say, I'm not using it as much as I once was. Productivity is still much higher, right? There's a sweet spot, and I think we're dialing in on that, but this idea of, like, one developer sitting there coding six different projects at once: the human body can't do it. That's what I found at least.
Deejay (03:41) Yeah, yeah, it reminds me of 10 years ago when I was doing some engineering. We were working on things that had really long kind of feedback loops. So we were working on infrastructure and we were trying to automate Cassandra deployments, and the whole test suite took about four hours. You know, and DataStax told us at the time, like, don't try and automate Cassandra and, like, all the garbage collection stuff. You're mad. We tried it. It's too hard. Don't do it. But we didn't listen.

Deejay (04:06) But the feedback loop was four hours for the test run. And so you kind of, you know, push some changes in the morning. That was one roll of the dice. And then you maybe got another one later in the day, if you were lucky. So that was your main story. And then you'd do something on a different product, and that had, you know, it was all infrastructure, so it was still, like, an hour and a half for the full test feedback cycle. And then you pick up a third thing, but the first thing goes wrong. And then, you know, you're halfway through debugging that and the second thing goes wrong. And just the mental fatigue of doing that. So I can imagine that it's similar when you're checking in on all of these different agents.

Elliott (04:40) Mm-hmm. Yeah, yeah, it's a real thing. You also lose some fidelity when you're reviewing the code in the output, right? Like, eventually your eyes glaze over and you just aren't doing as good of a job. So we have seen it really benefit us on our front end team. They move so fast now. I've had to hire another backend engineer and there's a good chance I'll have to hire another one after this one just to keep up. We haven't seen the same benefits from our mobile team.
They're doing a lot of Flutter, and a lot of what they're doing is interacting with the actual operating systems and building stuff that does a lot of problem solving that's beyond just, you know, HTML. I would say in terms of productivity gains, we've seen them most on our front end team, which is React, and then our Flutter team, and then our .NET Core team. And the .NET Core team, they just don't benefit as much; the agents can't problem solve like they can, right? They can't build these big, complex, interconnected microservices systems as well. So, you know, at a professional level, production code is still really dependent on human engineers that are doing the problem solving. If you like doing the vibe coding thing, like, I have some proofs of concept. I try and stay ahead of my teams with things I want to work on. And for me on a personal level, I use it for, like, personal projects. It's great. You can just give it a command, Claude Code's amazing, you can just tell it what to do and it can build it. But it's pretty hard to maintain. Right now I'm getting to the point where I have a lot of mess, and I've only been doing this for, like, six months. And it wouldn't be production-level code; you do need a human doing a lot of that still.

Deejay (06:17) With the front end teams, front end engineers, the jobs that agents are doing well at, is that kind of adding things to UIs? Is it mostly kind of layout-type work? Or is it more able to deal with the front end logic? Is there something about the kind of logic in the front end that's somehow easier for it to reason about?

Elliott (06:37) So, it's inherently: connect API, get data, display data, right? That's what the front end often is. And that's what it is for us. And we have a centralized way of getting data from the server and caching it locally that, you know, an AI agent can replicate pretty easily. Like, AI agents are very good at monkey see, monkey do.
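The centralized fetch-and-cache layer Elliott describes could be sketched roughly like this. This is a hypothetical illustration, not Fruition's actual code: the `ApiClient` class, the injected `fetch` transport, and the `/budgets` path are all invented for the example. The point is that the pattern is small and regular, which is exactly what makes it easy for an agent to replicate for new endpoints.

```python
class ApiClient:
    """One shared data-access layer: fetch once, cache locally, reuse."""

    def __init__(self, fetch):
        self._fetch = fetch   # injected transport, e.g. a wrapper around an HTTP client
        self._cache = {}

    def get(self, path):
        # Serve from the local cache when we've already fetched this path.
        if path not in self._cache:
            self._cache[path] = self._fetch(path)
        return self._cache[path]

# Stubbed transport so the sketch runs without a server.
calls = []
def fake_fetch(path):
    calls.append(path)
    return {"path": path, "items": [1, 2, 3]}

client = ApiClient(fake_fetch)
client.get("/budgets")
client.get("/budgets")   # second call is served from the cache; transport hit once
```

An agent pointed at one endpoint written this way ("do it like this") only has to vary the path and the rendering, which is the "monkey see, monkey do" strength Elliott is talking about.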
So if you can point them to some code and say, hey, do it like this, it figures that out really quickly. So it's not only just displaying it and laying it out, but it's actually going in and fetching the data. It's good at that because it's pretty straightforward. One thing we have done since we last spoke, DJ, is we have moved away from Jira, which is a hot pile of shit anyway. Sorry, I probably can't swear.

Deejay (07:20) No, I feel exactly the same way.

Elliott (07:22) Okay, we have finally made the jump to Linear, which has a lot of AI integrations. You can hook it into Figma. And so now you can give a Linear ticket to an AI agent and it can go get the Figma designs and do a pretty good job of implementing it. So there's also just this, like, tooling stack that is really conducive to your front end team too. Whereas with the backend, it's still, like, you know, pretty detailed requirements that are intricate, right? And you're not building stuff you see; you're pounding through this list of details that have to, like, coexist together at the end.

Deejay (07:56) Got you. So the front end is more kind of CRUD, kind of, like, say, get data and do things with it. And presumably, you know, the LLM can see in context everything it needs to know, but with your backend, that's kind of more disconnected, because you're running an event-driven system, aren't you?

Elliott (08:12) Yeah, mm-hmm. Yeah, with many microservices, and, you know, we're at a point where we have to scale now, right? We signed some big deals since we last spoke at our company. And so now we're really worried about scale. It could be huge. And so that's also something that has to be taken into consideration, and an agent is going to do what you tell it to. It's not going to also think, like, hey, how do I scale this horizontally someday? Or how do we, like, cache this data, or will we crush the database?
These are all things that a seasoned developer just kind of has in the back of their brain.

Deejay (08:23) Nice.

Elliott (08:48) And if you had a good prompt to go through it, or if you had an agent that only focused on scalability, I think you could make it work. But we as an organization aren't there yet. And I think you'd have a lot of back and forth. And I'm kind of at the point where I just want to let the industry figure it out and come back. I feel like we've kind of harvested the gains that we will probably get in the short term, the 80-20, like the low-hanging fruit. So I don't.

Deejay (09:02) Yeah, absolutely. I mean, I think you've done your bit for the industry by, you know, sharing what you've learned and, you know, kind of having the boldness to jump in on these things early and make some of those early wins. One of the things that I'm kind of seeing is that, you know, context is absolutely a challenge, and I don't think we're going to see any models show up soon that are going to be fantastically better at reasoning. Otherwise, you know, OpenAI wouldn't be making things like AgentKit. They would just wait for their next model, which would be able to do all of these things automatically. But I've been looking at Tessl Registry, which is a registry which, kind of, it can look at your dependencies. They then offline make summaries for agents, agent-friendly summaries of your dependencies. And they do it for every single version, so that your LLM doesn't hallucinate about an older version that it knew. You've got things like that, and Context7, which is similar, but I gather it summarizes documentation from other libraries.
So it's not quite as close to the source of truth as a Tessl Registry, but you see things like that, and the whole spec-driven development movement, which all seem to be about constraining the amount of context that systems have to think about, and constraining the task. And the idea that maybe everything is going to be fully automated anytime soon...

Elliott (10:01) Cool.

Deejay (10:29) I think we're starting to see the limits of that.

Elliott (10:33) Yeah, at least from a practical business perspective, I feel like we've reached the limits, where we could probably automate more, we could probably get more AI-centric, even more so than we are today, but it would come at a cost to the business. And so that's why we've let off the gas a little bit.

Deejay (10:50) With the kind of context switching, the multi-agent approach, presumably that was working on different stories, right? Kind of different work items, each having a different agent instance looking after it. Okay. Did you try, or have any of your folks tried, having multiple agent instances on the same story? I was talking to somebody the other week who was full stack, so they've got a front end and a back end to worry about. And they had a couple of Claude Code instances: one working on the back end, one on the front end, and one speculatively on the front end. So, I don't know whether this agent's going to go down the right route with this one, so I'll have another one do it in parallel. Did any of your folks try that, do you know?

Elliott (11:30) No, but I've done it personally. So I have one of my proofs of concept that I use, because it's pretty applicable to work: a Discord bot for a video game that tracks stats, and you can interact with it through chatting with an LLM. And I've done that on that project, where I have one agent working on, like, the interactions with the chat, and then the other interacting with the MCP server and the stats, and exposing the two. It works.
I haven't used it in my job. I couldn't tell you; that might already be too much, you know what I mean? Where right now, what seems to be our good fit, at least, like, on the backend team, is a human doing one thing and an agent doing another thing in the same project, right? That seems to be where we've settled as, like, a happy medium.

Deejay (12:17) You mentioned that some of the enjoyable work was going away and people were left with the less pleasant stuff. What kind of things were people being left with, and what kind of tasks did you find that the agents were well suited to that maybe people were enjoying before they were taken away from them?

Elliott (12:35) You know, if I could make an analogy, it would be you have one senior engineer with, like, five interns doing all the work, right? And you get all the work from those interns at the end of the week or the end of the day. And then you have this, like, mess of, like, oh my God, where do I start, right? How do I dive in? I don't even know where my problems are yet sometimes, right? And it's not fair to send it over to QA like that, to our QA department. So I think that's a big stressor. It's maybe not so much... like, it's not that you have this finite amount of expected tasks at the end. It's that you become out of touch with the product. And then when something does happen, right, either you need to change your deployments or you need to modify some secrets in the secret store, anything like that becomes exponentially more difficult when you aren't intimately familiar with the code that's using it, right? And so you get this disconnect that just slows you down. And it's also really frustrating, I think. And so that's where I felt a lot of friction. That's where I saw my engineers spending a lot more time all of a sudden. So yeah, I would say that's where it inherently is.
And especially for the backend team, where there's so many different ways you can achieve the same objective. And when you have one design pattern that you really want to follow, if you aren't, like, super explicit with that, it can go off the rails. And so you ended up spending a lot of time, more time, like, just writing your prompt to get it right, and then refactoring your prompt. And that's not fun. Engineers still want to get their hands dirty and get in there and do that themselves. And so, yeah, it was a lot of just the back and forth, I would say, you know, in the course correction.

Deejay (14:19) I can relate to that from my personal experience, because I have no patience whatsoever. My, you know, kind of latter years of programming were all extreme programming. So I was pairing, never did any code reviews, never did any kind of, like, sitting down. I haven't done that since I was at university, which is a very long time ago. Don't have the patience for it. So when Claude Code comes back at me, it's like, hey, here's, like, a bazillion files I've just changed. I'm like, oh, can I really be bothered to read these? I know that I should. Thankfully, I'm not working on production systems, so I can get away with that. It's almost like the thing that we've been paying programmers to do for all these years is hold something really complicated in their head. And it's not necessarily the tapping on keyboards, it's the mental modeling.

Elliott (14:59) Yeah. Yeah, at least for the good ones, right? There's always been that section of our industry that's just there for the paycheck, and they're, you know, so disconnected already that this isn't going to blip the radar. Those are the ones that are going to get replaced by AI. But yeah, the ones that are left standing, you're hiring them and paying them a lot of money because they're really intelligent individuals with truly, like, valuable skill sets.
And it's not, like you said, it's not the slapping the keyboard. It's the thinking part of that job.

Deejay (15:28) And with the backend engineers, so, you know, there was a bold aspiration to go fully agentic as much as possible. Then people started to get burnt out from kind of doing too much context switching, and realizing that it is just supervising a load of automated, hyper-caffeinated, never-sleeping juniors. What kind of use cases have people sort of settled on now? Is it that there are certain types of tickets, or, like, bugs or something, where it'd be like, I'll set Claude off to go and do that, and focus on something big and complicated?

Elliott (15:58) Yeah, it's still really good at solving bugs, problems, right? So if you get something kicked back, it's pretty good at finding those issues and sorting through them. It's still really good at explaining what software is doing, or what the code is doing. So when that new backend engineer starts, it'll be really interesting to see how his onboarding goes, right? Because he should be able to step into the role and figure out the code a lot quicker, right? Because you can use these agents for analysis and actually have it spit out what is this block of code doing, right? And giving you some context. Even in our case, with a lot of microservices and many different repositories, it'll be able to scan through all of them and find the relevant code, which is oftentimes the hardest thing for a new engineer to figure out, right? Which repo do I go to for this issue?

Deejay (16:46) So is it, I'll ask a coding assistant to do that type of thing? Or are they trying to use it on their mainline work as well? Like, where have they kind of settled on where to apply the tools?

Elliott (16:56) Both. So it's very much, like, the boring stuff that a senior engineer could do in their sleep. It's still good at that, right?
And you can have a crafted prompt that's pretty reusable, where, you know, you say, hey, go build me this API to do X and Y, and here's the data to fetch, and it'll go do that. That's easy stuff for agentic tooling to do, because we've done it a million times. It's similar to some of the scaffolding tooling that you used to see in, like, Visual Studio, where it would just spit out a template. This just takes it a step further. So they use it for that. So maybe while they're working in a library, they can have it go build the API that's going to consume that library, right? And then they'll be the ones that'll go through the process of... and this is some of that boring stuff they don't love... packaging the library, getting it out onto our NuGet server, and handling the deployment stuff, and then they come back, and they can consume that library. So again, the thinking, the logic, is probably in that library. That's why the engineer's working there, and then all this boring stuff of just the reusable, you can have the AI agent do it.

Deejay (18:03) Got you, got you. I'm going to take the approach of not editing these. It would be interesting to see what Jonathan makes of all of this blanking. The thought that I had there, when you were talking about libraries and things: I can imagine that with the front end stuff, like, because, you know, the data structures are probably already defined, or at least the ticket that you're working on probably specifies some fields that you need. When you're doing backend stuff, you're going to be figuring out what that API is, you're kind of working out the shape. And then once you've worked out the shape, getting an agent to implement within it, you know, it's like coloring in the drawing: you're drawing the lines, but it's doing the coloring in. I can imagine that that's maybe why the backend stuff ends up being more challenging.
And if it's something rote, and you've done this several times, or it's a design pattern that's really common, an agent would be able to just go away and do that.

Elliott (19:05) Yeah. What's also really interesting is I think that we as an industry, like, really the SaaS industry, is going to move away from APIs a lot, and we're going to end up using MCP servers. We're already doing that at Fruition. And there's a lot that goes into that on the business side. You know, as we've been trying to implement our own MCP server that exposes customer data to an LLM, that opens up a whole Pandora's box of new questions and concerns that I have to have as a leader. And that's what's slowing us down there. But with the MCP model, right, you have what is essentially a catalog with really descriptive methods that your LLM can interact with. Instead of, like, Swagger providing the definition and description of what an API returns, now all of a sudden you'll have this catalog of MCP methods that it can tap into and just know how to use. And so what we're working through right now is, well, how do we organize that catalog? And especially for us, where we're gonna have to basically gate customer permission for different levels of data, right? Because we have bank account balance data. We have transaction data. We have budget spending data, all of that stuff. And we're going to have to get the okay from each user to expose each of those. And then we need to modify the catalog, you know, the MCP catalog, so that only the available methods are exposed to the LLM, right? And for us, we have so many with our microservices architecture that we don't want to expose more than absolutely necessary. So that's where there's going to be some, like, human element to this, where we have to have, like, this decision tree of what to expose to the LLM, and how do we do that? So we're not there yet.
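The consent-gated catalog Elliott describes could be sketched as a simple filter: each method is tagged with the data scope it needs, and only methods whose scope the user approved are listed for the LLM. All method and scope names here are hypothetical, invented for illustration; Fruition's real catalog lives in its MCP server.

```python
# Hypothetical MCP-style catalog: method name -> the data scope it requires.
CATALOG = {
    "get_account_balances": "balances",
    "list_transactions": "transactions",
    "get_budget_summary": "budgets",
    "get_spending_by_category": "budgets",
}

def exposed_methods(user_consents):
    """Return only the catalog entries the user's consents allow, sorted by name."""
    return sorted(name for name, scope in CATALOG.items() if scope in user_consents)

# A user who shared budgets and transactions, but not account balances:
exposed_methods({"budgets", "transactions"})
```

The "decision tree" Elliott mentions would replace this flat scope check, but the invariant is the same: the LLM never even sees a method the user has not consented to.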
We're still working through a lot of those challenges, but we will be there probably by the end of this year. But that's going to make the front end even easier, right? But I'm not convinced the front end is gonna look the same in the future anyways, right?

Deejay (20:59) Oh, okay. When you say, kind of, affecting the way the front end looks, are these MCP servers that are kind of built to provide product functionality? Like, when you were speaking, I kind of had in my head, maybe because it's some of the work that we're doing at the moment: we've been building agents for, like, CFOs, over kind of internal data sources, and, you know, the use case is the CFO wants to ask some question, and instead of asking a human, they just ask an agent. Are your use cases like that? Or are you kind of using MCP servers to provide data to the front end that's kind of the main user interface of your product?

Elliott (21:36) So this all started... I can't remember if we talked about the chatbot that we were building, which we have built and have delivered. And it's basically a RAG model, a fairly advanced RAG model, but it allows organizations to upload all their HR documents, 401k definitions, retirement plan definitions, vacation policies, any of that stuff that HR doesn't want to have to deal with anymore. It can now answer for all the employees within an organization. That was kind of our launching point, and we knew that. Now, through that same chatbot, we can answer questions about the customer's financial data, right? So we have all these tools within our application that let you build and track a budget, let you, like, actually hook into your bank accounts and track the debt and your pay-down strategy for it, right? Now we can expose that through the same chatbot. Now...
The next step after that is probably actually talking, like having a conversation like you and I are, and then actually seeing, like, you know, an AI-generated human talk back, right? Because another important piece that we have is, like, the mentor piece, where, it's kind of like the Uber of financial mentors, you can just book a session and get some help from a human. Well, an intermediary step to that, that won't be as expensive, would be if we could do something very similar with your data within an LLM, right? And so the whole dynamic is changing away from this interface where you're interacting with HTML, essentially, and it's really about just the conversation, and it's almost like the internet's going back to just plain text or voice and video.

Deejay (23:12) It's an interesting one, because especially in your use case, you've got folks where, you know, the reason they're using your app is to gain more insights and to get better control of things and to learn and understand. I've been in conversations with some, you know, quite well-informed product people, and we were kind of helping them with AI strategy and trying to figure out, you know, what product features they should build. And there was a really good moment where we all got super enthusiastic about an agent, and then the product person is like, is chat really the best interface for this? And for some use cases, you know, you're much better off with a dashboard. But if you've got somebody that is trying to learn and gain insight, maybe that's not the best thing, and the much more conversational approach, you know, it's a place where it really does belong: being able to ask questions, just get the information that you've asked for, not be overwhelmed with a dashboard full of numbers and graphs and things like that.
Elliott (24:06) Yeah, you know, a good example of kind of a halfway point of that that we have: so we have all your bank transactions, like, we can aggregate them all across all your accounts. And then we have a search feature where you could type in coffee, and then it will go find Starbucks, Dunkin' Donuts, you know, all the places you may have bought coffee in the last three months. And that's something an LLM can do that is a lot more difficult to do in a traditional way. Or it can also, like ours does, summarize: it'll say you've spent $4,000 on coffee in the last year, or whatever it is. And so even that is just taking what is traditionally, like, a really boring and quite expensive thing to build, which is a strong search index, and augmenting it with AI, and getting you to the data you actually care about a lot more quickly.

Deejay (24:52) Are you building those kinds of features... so we kind of segue from AI in the development process, so developers using AI to kind of produce more features, to the features, you know, being based on AI. Are you using the same kind of staff and experts to develop this? Are you getting machine learning people, AI engineers? Because one of the other kind of interesting conversations that has been floating around in my circles lately is: when do you need machine learning experts? When do you need people with prior AI experience, you know, calling LLMs and doing things and building agents? And how much of this can regular software engineers just do; is it just like another API to them?

Elliott (25:33) We're doing it all with regular software engineers. I too have thought, like, do we need to bring in a machine learning expert? And the answer is no. I mean, it would be nice to have someone that I could just go to and ask questions on how to do things and get an immediate answer. But I have no idea how I'd keep them busy for 40 hours a week. They're really expensive.
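The coffee-search example Elliott gives (semantic matching plus aggregation) can be reduced to a tiny sketch. This is purely illustrative: in Fruition's product an LLM does the merchant-to-concept classification; here a hard-coded lookup table stands in for it, and all the merchant names and amounts are made up.

```python
# A lookup table stands in for the LLM's merchant classification here.
MERCHANT_CONCEPTS = {
    "Starbucks": "coffee",
    "Dunkin' Donuts": "coffee",
    "Shell": "fuel",
}

def spend_on(concept, transactions):
    """Sum every transaction whose merchant maps to the requested concept."""
    total = sum(amount for merchant, amount in transactions
                if MERCHANT_CONCEPTS.get(merchant) == concept)
    return f"You've spent ${total:.2f} on {concept}."

txns = [("Starbucks", 5.50), ("Shell", 40.00), ("Dunkin' Donuts", 4.25)]
spend_on("coffee", txns)  # "You've spent $9.75 on coffee."
```

The hard part in production is the classification step, which is exactly the part that an LLM makes cheaper than building a strong search index by hand.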
And the reality is, all of us except for, you know, OpenAI and Anthropic are gonna end up using someone else's LLM for all of this, right? And I don't see the value in it personally for the kind of software that we're building at Fruition. You know, maybe if you're really positioning yourselves as an AI company, you could maybe get your bang for your buck, but not for most of us. And I think the value is gonna be in augmenting existing features and existing value with AI. This AI boom where everyone's just trying to build something AI... you know, everyone's talking about it, that's what's gonna pop. I think the companies that will be left standing are the ones that are taking their existing features that already add value to society and then just enhancing them, like we did with the transaction search, right? I think that's where it's gonna be. And you don't need a machine learning expert for that at all. And there are companies that specialize in it, right? So, like, for us, there's a company called Bud Financial, awesome company, great people to work with, fantastic. And what they do is they ingest our transactions for us, and then we actually call their API for some of the insights, right? So we're offloading that piece to them because they beat us to the punch, and it's cheaper for me to pay them to do it than to go rebuild it ourselves. So I think you're going to see companies like that popping up that you can integrate with pretty easily, and, like, Bud specializes in financial data, so it's perfect for us.

Deejay (27:15) The idea of an agent for end users who are seeking financial advice, is that something you're kind of working on at the moment, or is that further down the product roadmap?

Elliott (27:25) No, we're working on it right now. So, you know, the biggest fear, especially for us, is we have to walk a very fine line here in the US. We can't give advice. We can give education, right?
Once you start giving advice, you become regulated. We do have people that are certified financial planners, CFPs, on staff, you know, we have that, but we don't want to step into that regulatory space if we don't have to. We would rather hand you off to a partner that does that better than us anyways. So what we're doing is we're basically getting our own data so that we can train our own RAG models, right? Not train, but, you know, seed our own RAG models. And so what we're doing is we're working towards a goal of, I think it was 3 million words of interviews with our CFPs. So we have our marketing team sitting down interviewing our mentors, getting their philosophies, right? So that we have a lot of data to then use, you know, on top of an LLM of our choosing. And, you know, this is where we're going to be like: if we don't have an answer in our data set, don't go out to the internet to get your own answer, right? And we're going to very narrowly scope it, so that, like, our debt pay-down philosophies and our budgeting strategies are what are fed through the LLM, instead of just going out to the internet and getting Dave Ramsey's stuff, right? So yes, we are working on that. It's slow going. That's a lot of content, right? But that's the value. That's, like, the competitive advantage, right? We're spending all this time getting really good, educated, concise data that we can then reuse by running it through an LLM.

Deejay (29:04) Got it. So where a normal enterprise might be kind of looking at its internal processes and exposing that as a knowledge base via RAG, you're getting all the insights from the heads of your advisors, codifying that in structured written language, and then performing RAG over that. Then an agent that's kind of trying to respond to end users has something more informed, something that's in line with experts, you know, kind of not advice, but education.

Elliott (29:26) Yeah.
And it's not only gonna be RAG, it's gonna be integrated with our MCP too, right? So our MCP server, and we haven't really cracked this nut yet because we don't have all the content, we're gonna have to sort through the data and get the documentation that is most valuable to that conversation. We'll have to lean on the MCP to be like, hey, we're talking about budgeting, only expose the budgeting content that we want to then regurgitate, right? Because there'll be so much that we'll have to probably pre-sort it with MCP, or ask the LLM to do it. So the prompt for the LLM is gonna look like, hey, before you answer this question, figure out which of these five categories the conversation is around, and then here are the MCP methods for that particular category. And then you can call maybe these five or six or whatever to actually get the content that we then want to bubble up to the user. Deejay (30:25) Yeah, right. So it's kind of more of a hybrid approach. So your folks, the backend team, is C#. What kind of tooling are you looking at for this? You know, I've got a few customers that use .NET and I gather the ecosystem is maybe not quite as vibrant for building agents and things as it is in Python. Are you building that in C# and on the .NET stack, or are you kind of using agents so that your Elliott (30:29) Mm-hmm. Deejay (30:52) C# developers can write some Python? Elliott (30:54) Right now it's all .NET. So I've actually been using Node.js for my personal stuff and that works pretty well. But .NET 8, which is the newest stable version, does have MCP libraries ready to rock and roll, and you can pretty much just use them and they're ready to work. We've been on six for quite a while now, and so this is forcing us to go through the upgrade process and just get current so that we can use some of this.
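The two-stage prompt Elliott sketches a little earlier, classify the conversation into a category, then expose only that category's MCP methods, could look roughly like this. The categories, tool names, and the keyword classifier standing in for the first LLM call are all invented for illustration.

```typescript
// Stage 1 decides the topic; stage 2 narrows the tool surface to that topic,
// so the model never sees the whole method catalogue at once.
const toolsByCategory: Record<string, string[]> = {
  budgeting: ["getBudgetPhilosophy", "getSpendingPlanTemplates"],
  "debt-paydown": ["getDebtStrategies", "getPayoffOrderGuidance"],
  savings: ["getEmergencyFundGuidance"],
};

// Stand-in for the "which of these categories is this conversation around?" LLM call.
function classify(message: string): string {
  const m = message.toLowerCase();
  if (m.includes("debt") || m.includes("payoff")) return "debt-paydown";
  if (m.includes("budget") || m.includes("spending")) return "budgeting";
  return "savings";
}

// Build the narrowed prompt context: only the matching category's MCP methods.
function buildPrompt(message: string): { category: string; tools: string[] } {
  const category = classify(message);
  return { category, tools: toolsByCategory[category] };
}

const routed = buildPrompt("How do I start paying off my credit card debt?");
console.log(routed.category, routed.tools);
// A second model call would then pick from routed.tools to fetch the content
// that gets bubbled up to the user.
```
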
But .NET, it's viable. What we were doing, and it does work, is actually our own MCP. Like, hand-rolled, where we literally return all the method names using reflection to the LLM, and it responds to us like, here's what I want to call. And then we literally step out of the, you know, the LLM context, make the call that it told us to, and then return the data. And it's very rudimentary. You know, ideally we wouldn't do that, and that's why we've upgraded to eight. But it's actually worked fairly well. Deejay (31:57) Nice. Cool. That is good to know. I should make sure to pass that on to the other folks I know that are using .NET for this kind of thing. Talking of tools, you've gone for quite broad adoption across the organization, maybe when other people are still only very tentatively dipping their toes in the water. If I remember rightly, you said to your folks they could use the tools of their choosing. Was that right? Or did you end up settling on a particular set of things as like the standard? Elliott (32:28) Yeah, I'll pay for anything that they want at this point. They're all fairly close in price, and right now it's moving so quickly. You know, I think it was Cursor beat Windsurf to the, you know, GPT-5 implementation. Or maybe it was the other way around. Don't quote me on that. But that dictates pretty heavily which ones you want to use, especially when GPT-5 was really a cost savings move and benefited us too. So you want to be able to just move in and out of them. Claude Code still seems to be pretty popular, just because it's more the sensation of handing off and forgetting about it. But I have developers on Cursor, Windsurf, some other ones.
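The hand-rolled loop Elliott describes just above, enumerate the method names via reflection, let the model pick one, step out of the model context and execute it, can be sketched like this. His stack is .NET; this is a language-neutral illustration, and the service class and the faked model reply are hypothetical stand-ins.

```typescript
// A toy service whose methods we want the model to be able to call.
class TransactionService {
  getRecentTransactions(): string[] { return ["Coffee -$4.50", "Rent -$1200"]; }
  getMonthlySpendTotal(): number { return 1204.5; }
}

// "Reflection": collect the callable method names off the instance's prototype.
function listMethods(obj: object): string[] {
  return Object.getOwnPropertyNames(Object.getPrototypeOf(obj))
    .filter(n => n !== "constructor");
}

// Stand-in for the LLM: given the method list, it replies with a call request.
// A real model would choose based on the user's question.
function fakeModelPickMethod(methods: string[]): { call: string } {
  return { call: methods[0] };
}

const service = new TransactionService();
const methods = listMethods(service);   // includes "getRecentTransactions", "getMonthlySpendTotal"
const reply = fakeModelPickMethod(methods);

// Step out of the model context, make the call it asked for, return the data.
const result = (service as any)[reply.call]();
console.log(reply.call, result);
```

Rudimentary, as he says, but it shows why the pattern works at all: the model only ever sees names and results, never executes anything itself.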
I'm not really familiar with some of them. Some of them have tooling, I can't remember the names, that interacts with their operating system and just does like OS stuff for them. So yeah, I just have a ton of accounts now, where an engineer reaches out and says, hey, I'd like, you know, a seat in Claude Code, and I give it to them. And from like a CTO perspective, as long as you're using a tool that allows you to track usage and ensure that everyone's actually using it, you're fine. Just give it to them. It doesn't matter if it's $200 a month, it's going to pay itself off. What you don't want is to have all these seats and all these licenses and then some of them go unused. So actually right now we've been using a lot of credits with OpenAI, and I'm probably spending like 250 bucks a week on credits. So it's not a big deal. I have it set up so I have to go top it off myself. It's not a controlled expense, but, you know, I had to explain that to my leadership team. Like, every penny we spend on this is coming back to us in dollars, right? So I have support in my leadership team, and we spend that money freely because it's good bang for buck. Deejay (34:18) Yeah, 250 bucks a week compared to the salary of an engineer in the States is probably good value for money. With the different tools that people are using, do you know if your teams got to the point of trying to share like prompts and, you know, agent steering documents and things like that? Were you building your own MCP servers to expose prompts for the... Elliott (34:23) Yeah. Deejay (34:42) ways of developing software that you wanted to share, or sharing like sub-agents and custom commands? Elliott (34:47) So the MCP idea is really good, and that would be a really good way to tackle that. But we aren't doing it, mostly because we just need to take another crack at AI Thursdays. And we haven't had time.
I think we've had 13 major releases over the last like seven weeks. Yeah, it's great. We're cruising. You know, we implemented AI and all that tooling. Deejay (35:04) Nice. Elliott (35:11) And then we worked on a bunch of projects and they all backed up in QA, because, you know, that's what you would expect. And now they're making their way through QA finally, in one large lump. And so this has been very challenging for us, because now all of a sudden we've hit QA, that slowed down, then we hit UAT, where we have to get all of our business stakeholders to agree and approve, and that's put a ton of burden on them to come in and spend the time looking through what we're ready to deliver and then launching it. So we're working through this backlog still, and we will be for another three weeks probably. Like, releases ready to go, but we can't get through the human element of it quick enough. Deejay (35:51) And is that primarily, you said the UAT element of kind of getting end stakeholder sign off rather than, I mean, it sounded like QA was a bit of a, you know, they were overwhelmed with the amount of work coming their way, but what's the kind of balance there? Elliott (36:04) They're catching up. So we did hire another QA engineer after we implemented all the tooling, and he's a really good resource, and he still wasn't enough. We still definitely back up there, but we're moving. We're actually able to get the logjam, you know, unblocked and are moving through it. We've invested in some tooling that integrates with LLMs for them. So n8n has proven to be really valuable. They're using Qase for their test cases. And then we have custom ChatGPT projects that take a little bit of input from them and create a great test suite and do a lot of magic. And so they share that amongst their team. So we're catching up. We're not caught up, but we're at least able to function now for a while.
They were just so overwhelmed. Nothing was moving, because you never just get to QA and then get through it, right? It's always just back and forth. And when you have one team moving at light speed and another team still doing it all at human speed, the back and forth just makes it an absolute mess. Deejay (37:07) And I recall that, maybe when we last spoke on the podcast, or maybe when we just caught up offline, I mean, it was online, but not recorded, I think you mentioned that kind of product were also under strain, in that developers were doing things so quickly that the product folks couldn't keep them fed with stories. Did that situation stay like that? Or did that change over time? Elliott (37:30) No, it changed. So they caught up and then got in front of us quite a bit. You know, they have tooling now too, and they figured out how to use AI. You know, I don't know if it was us that put pressure on them or what, but they have sped up as well. And now they are ahead of us, but not light years, right? Like, we're actually able to tackle stuff pretty damn quickly when they give it to us, at least the coding piece of it. We're still bogging down at the tail end of the SDLC. But yeah, they caught up, they sped up. They had to, and there's tools out there for them to be better about it. And the fidelity and the granularity of the tickets that they create for us is much, much greater. You know, we have a lot more definition. It's very obvious that it's coming from, you know, an LLM, but it does give us a lot more than we were used to, right? Because if you're an engineer and you need more detail on a particular feature than they gave you, that throws them off on what they were working on at that time, right? And they have to go back and they have to change context. It's very similar to being an engineer in that regard.
This gives us a lot more detail upfront, where we aren't going back and asking for as much now. Deejay (38:43) Do you know what kind of tools or approach they're using? I can imagine that, from my experience of, have you used Kiro, the Amazon spec-driven development kind of code editor? It's good fun. I don't know whether it's still on a wait list, but definitely worth trying out. You can have some really good conversations with it about what you're trying to build. And unlike a lot of agents, it will push back and kind of ask you questions. So I can imagine that, Elliott (38:53) Mm-mm. Deejay (39:08) if the product folks are having a conversation with an LLM, then that would probably lead to that greater level of detail. Do you think that they're kind of talking backwards and forwards, and like requirements being coaxed out of them? Or do you think the LLM is filling in the gaps and kind of going, you've asked for this, you probably want to ask for, you know, defensive checks around this, that and the other? Elliott (39:28) It without a doubt does some of the filling in. I use it myself, right? Like, I ended up writing a lot of tickets too for more technical stuff, and it does a pretty good job of that fill-in piece. I don't know for sure, but I would imagine there's, you know, some back and forth, like a ChatGPT project.
where you can feed it context and then all future conversations use that context, would be really good for this, right? And then there are like Slack integrations where you can just feed it context and summarize stuff. Or we use Granola for notes. I don't know if you know what Granola is, for note taking. It's an amazing note taker. I highly recommend it. It's way better than like Otter AI and some of those other ones that are really bad. You know, you can use these tools to summarize conversations, then feed it to a ChatGPT project that has the context for the feature or whatever it is, and then have it spit out or even create the tickets for you. I think that's probably what they're doing, because that's what I'm doing. Deejay (40:21) That's a good one. I've made a note of Granola. Unfortunately, I just paid for an Otter annual license, which is working okay for me at the moment. Well, maybe I won't try Granola until my license is up, and then I won't feel like I'm missing out. The thing about your developers and, you know, the kind of journey that you've been on and kind of discovering the risks of burnout. What do you think is like their emotional Elliott (40:27) And I've bashed Otter right here. Sorry. Deejay (40:44) emotional trajectory? What's their kind of sentiment been like? Are they still keen? Are they kind of a little bit more wary now? How do you think they've felt about it, kind of before AI Thursdays, during, and now in this kind of settling in period? Elliott (40:59) So we are definitely at a place now where it's just been normalized. It's just like another tool or another member on our team, really. When we started, there was a lot of fear for sure. We talked about that in our previous podcast, but then there was this like aha moment. And then there is this like, how far can we push the envelope? And that's where it was pretty mind blowing. Like, holy crap, we can do a lot with this. Let's lean in on it.
And then I think we kind of crested and came back down to earth, where we realized there were still some limiters that weren't the coding, right? So that's like my backlog with QA. That's, you know, the practical, like the human element of context switching too much. You hit these limiters that kind of check you and, you know, bring you back down to reality. Then we smooth out, kind of internalize those challenges, and you realize where AI fits and where it still struggles, and where the human element just makes it not as valuable. So I feel like we've come up, we hit the crest, and then we leveled out much higher than we were, but not at that apex. That apex is where it's really stressful. It's a lot of context switching. You don't want to live up there. You can exist up there for a little bit, but you don't want to live up there. And now we have settled out, and it's been embraced. I don't know of any of my engineers that aren't using it. I don't have any like super power users or people that are reluctant to use it. They've all accepted it. It feels very normal, right? And I would imagine if you dropped me into a software shop that's not as progressive, it would feel very normal to me and very foreign to them, right? But for us, it's just business as usual. Again, we haven't lost any engineers. We haven't had to hire more. Yeah, you know, we haven't had to deal with training junior engineers, or hiring, or what do we do with these junior guys, because we've hired well and we have all senior people for the most part. Deejay (42:51) Yeah, so on the kind of staffing front, throughput is up, you've not hired any more folks. And going through this change, no attrition, like nobody was so kind of AI resistant that they were like, screw you, I'm leaving if you make me use this stuff. Elliott (43:07) No.
And I think something that's important to note is we have people that want to be engineers, right? Like, they're all code junkies. They like doing this. And so this is just an enhancer for them. There are definitely engineers out there, we all know them, where they just wanted the job. They hit the market right coming out of college, it was a high paying job where they didn't have to do a whole lot. I think you'll see more resistance there, because it's not as cool. We're going to take away what value they did add, and their means for hanging on to their corporate job. I think you'll see more resistance there. And so I've always looked for this when I look for new jobs for myself. I want to be somewhere where myself and my team are the breadwinners for the company. You have other organizations where you're a cost center, right? Like, you're building stuff for the legal team or for the engineering team or whatever, right? You're just building internal tools. That's a very different beast, very different environment, very different set of personalities, you know, where you have a lot more people that are less enthusiastic about code. So I think that will be a differentiator. Those are always the companies that are slower to, you know, adopt this kind of tooling. They want to work through it with legal first, whereas we're like, fuck it, this is way too good. We can't miss this boat. We need to lean in on it now. We'll figure out the repercussions later. Deejay (44:22) Got you, got you. You mentioned earlier, when kind of being at the crest of the productivity wave, of like, this is all awesome, we're gonna do 15 stories all at once because we can do so much.
When it became clear that it was, you know, kind of putting pressure on other parts of the system, we talked about QA already, were there any kind of learnings about your CI/CD or platform, anything like that, the more kind of technical elements? Were there any parts of that element of technical maturity that AI and the increased velocity shone a spotlight on? Elliott (44:50) You know, I thought about this a little bit, because I thought it would come up in this podcast. AI is inherently very good at Terraform. It understands AWS very well, so it can do a lot of the CI/CD stuff for you if you want it to. However, DevOps is also inherently very, very fickle, Elliott (45:13) where one little issue can really screw the whole thing up, right? Like one bad SSL cert, or one DNS entry that you need to integrate with another third party, can just throw a monkey wrench in everything. And so you still end up needing to have a high caliber human helping troubleshoot that. I don't think you need a dedicated DevOps engineer anymore. We have someone who's a great engineer and also pretty strong in DevOps, and that seems to be like the perfect fit, because you can still do a tremendous amount with AI, but that 10% that is just breaking if you don't get it right is where he needs to step in and do a lot of stuff for us. So we're still using all the same CI/CD pipelines that we have in Azure DevOps. We haven't changed anything. I don't see us changing anything. But if you want to go the Terraform route, which we do, the agents are really good at writing Terraform and deploying to AWS for you with that, if you want. So on a smaller scale, it's pretty good. You know, another thing that it doesn't integrate with yet, or that we haven't at least, is the pull request process, right?
So like nothing gets merged into our code base to kick off the CID, CIC, I keep struggling with that, CI/CD pipeline, without a pull request approved. And, Deejay (46:08) Yeah. Elliott (46:28) you know, for us, because we're microservices oriented, we have many that go at the same time. So you have to get your timing right. So we have to get all the APIs out there and checked, and then we have to get the front ends out there. And so there's a lot of that human element in coordinating that the agents aren't doing. Could they? Probably. Maybe, maybe not very well, but they could. But we're doing it with humans. Deejay (46:50) Okay, so in your development process as a whole, I mean, you've freed yourself from JIRA and moved over to LydiaB. I could go on a rant about the Atlassian stack, which is a shame, because one of my oldest colleagues and sort of good mates is a development manager at Atlassian. But still, anyway. Elliott (47:08) They were the best, just for the record. They were the best once upon a time. They were such a good product before. Yeah, but yes. Deejay (47:15) Yeah, and then let people customize it to infinity and come up with all sorts of ungodly development methodologies that shouldn't exist. Yeah, so you made that change. But otherwise, it sounds like the actual change to the way that software is delivered with you folks is more kind of within the developer, like if PRs work the same way and the CI/CD pipelines are the same, you haven't put any extra steps in there. There's no kind of additional testing or like static analysis you're doing? Elliott (47:43) No, the QA team has, right? So the QA team, we have a full automated regression suite, and they're using LLMs to help build those. So that's been enhanced, but we always had that, right? But no, for us, we were very quality tilted. So we are not quick to release. We will hammer
features and, you know, things we want to deliver until they're right. The only stuff we let slide through QA is like, this is off by a few pixels, or, you know, stuff that a user wouldn't notice that we can come back and clean up later. We might. But, you know, we're not putting partially baked products out there. And that inherently needs a human, right? It needs a human to make sure the release is smooth, especially when you have a lot of moving pieces like we do. It needs a human to make sure the quality is really high. And then we also need humans, you know, like our stakeholders, our business stakeholders, they need to approve it to make sure we are delivering what they want. Agents can't do any of that stuff yet. It's the reality. Deejay (48:46) It's something that worries me a little bit, folks who don't have good engineering discipline and high quality, what's going to happen to them when they adopt AI coding assistants en masse? You know, this came up in conversation in the last week for me. I was privileged to work with some really awesome engineers and product folks. And they were like, if our software is a beta, the American pronunciation, or beta over here, then it's not because it's of shonky quality. Everything we deliver is going to work. It's going to be fully tested, fully automated, all the way through to, you know, user acceptance or production depending on the use case. But it just might not have all the features. So you end up building, you know, vertical slices. This one feature might be the only feature we have, but it sure as hell works. And then you add more and more to it. And the kind of places where it's like, we're going to cut some corners because this is early, you know, I wonder about what will happen with those folks.
I mean, maybe they'll be able to use AI to do all the test backfilling that they imagine they're going to have time to do in the future. But that's not worked out historically, particularly well. Elliott (49:55) Yeah, you know, I have two points I'd like to make there. One is we have a really good product team. They actually understand what an MVP is. And so that helps us and helps what you're suggesting, right? Where let's just get out the bare minimum and then build on it. A lot of product teams and product managers don't really understand what an MVP is, or don't want to, or choose not to. And that's where, like, you know, as an engineer, it's kind of like, well, fuck it, we'll just send it out, right? That's when your beta becomes this broken thing that should do a lot and none of it really works. Whereas when you start with a good quality MVP, you can really focus on just getting that functionality right and then keep repeating that process. So that's really important. Another thing is, and we're moving to this, I think we should launch our feature flags the week of the third, so that's a week from now. We're finally getting to the point where we can feature flag features in production, right? So we can go into LaunchDarkly and turn features on and off, or slowly roll them out. And we prioritized this project on our roadmap because that will allow us to have partially baked stuff, or stuff that is getting bogged down in QA or UAT, out into production without releasing it. And that's really important, because we have so many things hitting QA and UAT that our master branches are really hard to manage. And so what's going to be ideal is we can say it's 80% there, it's good enough, merge it into master, and release it behind a feature flag, right? And then we can come back, work on that last 20% without having all these freaking conflicts in the branches, right?
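The feature-flag pattern Elliott describes, merging 80%-done work into master dark and rolling it out gradually so trunk stays releasable, might look like this in miniature. The flag names and the in-memory map are made up; a real setup would evaluate flags through LaunchDarkly's SDK rather than this sketch.

```typescript
// Flags carry an on/off switch plus a percentage rollout.
const flags = new Map<string, { enabled: boolean; rolloutPercent: number }>([
  ["new-transaction-search", { enabled: true, rolloutPercent: 20 }],
  ["cfp-chat-assistant", { enabled: false, rolloutPercent: 0 }],
]);

// Deterministic bucket per user, so a given user stays in or out of a rollout.
function bucket(userId: string): number {
  let h = 0;
  for (const c of userId) h = (h * 31 + c.charCodeAt(0)) % 100;
  return h;
}

function isEnabled(flag: string, userId: string): boolean {
  const f = flags.get(flag);
  if (!f || !f.enabled) return false;
  return bucket(userId) < f.rolloutPercent;
}

// Shipped code path: the unfinished feature lives in master, dark by default,
// so the 80%-done branch can merge without blocking the release.
function renderSearch(userId: string): string {
  return isEnabled("new-transaction-search", userId)
    ? "new AI-assisted search"
    : "classic search";
}

console.log(renderSearch("user-123"));
console.log(isEnabled("cfp-chat-assistant", "user-123")); // false: flag is off everywhere
```

The dark launch is what relieves the branch-conflict pain he mentions: integration happens continuously in master while the release decision moves into the flag.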
And having branch hell and trying to figure out what's where. We can kind of work out of one branch once everything gets to a point that we're okay enough to actually put it out, and then put the finishing touches on it. So I think that's gonna be really important. That's why we prioritized it, because it helps that logjam keep moving. Deejay (51:50) It's interesting that, talking about the logjam and where those logjams turn up, I think the 2025 DORA report, Rob Edwards, who I had the pleasure of working with, we paired on a project a long time ago, pointed out the sort of value of value stream mapping, the benefits of value stream mapping. And I can imagine that this is going to be just like a lot of other things, like, you know, cloud native software delivery, where you speed up one aspect and all of a sudden it shows all the other places where you've got all these merges happening, all these things that need to come together. And it's going to be fascinating seeing where those bottlenecks end up being. And the folks that aren't aware of where those are, and aren't paying attention, are probably going to have a bad time, and they're going to have a worse time with AI than they would have done without. And the people that are on top of all that stuff are going to be, you know, accelerating ahead of the pack. Elliott (52:38) Yeah. Yeah, AI can also be a force multiplier on your problems too if you aren't careful, right? Like, it can really expose some big issues if you don't snuff them out before you add an LLM to it. Deejay (52:54) Yeah. So you've been, you know, an early adopter compared to a lot of folks that I've spoken to, not just on the podcast, but, you know, at meetups and things like that. You've gone all in on this much earlier. What kind of advice would you have for people that are still not there yet?
It seems to be the case to me that there are lots of organizations where folks are being told, like, you know, use whatever tool you like, and we'll see how it goes. And people are not terribly keen on kind of really biting the bullet and going, okay, we're going to do this in a structured way. We're going to, you know, either pick a tool, or we'll find a way of working, or we're going to, you know, insist that people use these tools for X percent of the week, or whatever. What kind of advice would you give people? Elliott (53:38) It needs to start at the top, because when you start implementing AI tooling or AI into your processes, it affects everything adjacent to it, right? So my engineers have affected my QA team. My QA team has affected my UAT, and, you know, as in full circle, our need for feature flags. UAT has affected our engineers, right? And now we have to learn to manage feature flags, and you can't do that unless you have someone quarterbacking. I think you guys know what that is now, the NFL has invaded you guys too. But right, you need someone up top, quarterbacking all of the Elliott (54:13) teams and their use of it and getting everyone to buy in on it. And that's really hard to do without any friction. Then you add in the stresses of meeting deadlines and teams needing to deliver. And if they don't feel like they have permission to take a step back and invest in some of this, and if they don't have someone willing to write the check for some of this tooling that they need, they won't, right? They're going to try and do their job, protect their job, and create friction between the teams. And so you need someone up high that's, you know, an AI champion, right? That's me at my company, and it has created more AI champions for sure beneath me.
But it's also, you know, like I told you, our marketing team is creating all this content for us so that we can have a really impressive, you know, chat bot essentially. A developer can't push for that, right? A developer can't go to the VP of marketing and say, hey, we need this. They need someone up high that can have that conversation for them. They need someone up high that can have a conversation with the VP of product and, you know, get the two product and engineering teams working in harmony. So you need an advocate. You need someone who's spending energy and time, and, like I said, willing to write checks for it. If you just kind of say, do whatever you want, you're going to probably get a 10, 20% bump, instead of the 100% bump you could get in productivity, right? Deejay (55:32) And zooming into the individuals and the teams, are there any kind of bits of advice you'd give around how to use the tools, that you've seen working for your folks, or at least how to avoid the burnout? Elliott (55:44) So I would say, if you're an engineer, it's really important to communicate that burnout. There's still some bravado in tech, right? You're seeing it in Silicon Valley right now, where they're normalizing 72 hour work weeks, which is asinine. Don't be that person on your team, because everyone else on your team's feeling it, right? Just because people aren't talking about it, they're still feeling it. And so if you're feeling that burnout, you should definitely let your managers know, and your managers need to communicate that upwards. Not every software shop is going to receive that like a, you know, a good manager would. But I think that's important, is to communicate it. We need to normalize it, right?
This is going to be like the mental health crisis, the financial health crisis, right, where we need to normalize having these conversations, because we're going to take a step backwards if we don't. So I would definitely say have those conversations. As far as tooling goes, every organization needs someone that's chasing the new technology, because some companies are doing this tooling better than others, right? Like, Copilot is just not keeping up. And if everyone just bought into Copilot when it was one of the first players on the scene, you'd be missing out on a ton of productivity gains that you could otherwise get with Cursor or Windsurf right now, right? So you do need someone who's out there playing with the new stuff and communicating it back to your team. Yeah, I would say those are the two big ones. As far as, you know, day to day, how to make yourself more productive, like, ask your chat bot, or ask your LLM, what can you do for me? Here's what I do every day. You get a lot. I do that regularly, even for non-technical stuff that I'm not as in the weeds on. I ask it what it can do to, you know, improve my day, and it can do a lot. So I would say do that. Deejay (57:27) Cool. And on the subject of finding ways to free up more of your time, are we expecting a return to form on the Agentic CTO, your podcast? Are we going to be getting some more episodes anytime soon, do you think? Elliott (57:40) Yes, I've been traveling for the last two months. So I've been working from the road, which has slowed me down. And we're about halfway through our long list of releases that keeps me very busy. But yeah, probably next week, hopefully I'll have time to get back and get a few more episodes banged out. You know, what's been interesting is my problems, and I think a lot of the content, will shift from how do we make engineers and Deejay (57:44) Nice.
Elliott (58:01) teams more productive to how do we solve some of these business problems that we're realizing, right? Like the ethics around exposing data to LLMs and making sure that your customers know that. That's really what my focus has been on: how do we make it work at a much higher level, in a responsible manner.
Deejay (58:21) Well, we can probably help you from this side of the Atlantic, because we have, well, not in the UK because of Brexit, but we have the EU AI Act, which of course you folks don't need to pay too much attention to if you're not selling inside the EU. But you can definitely be inspired by it: instead of writing your own framework, you could probably borrow some of that, and it's got guidelines on how to apply it as well as just rules. I can send that over to you if you'd like.
Elliott (58:34) Okay, please do.
Deejay (58:48) We've done about an hour. Is there anything else that you'd like to say? Anything you'd like to tell people about?
Elliott (58:49) No, it's still as relevant as ever. So if your company's not doing it, you're going to fall behind if you haven't already. So keep your foot on the gas, because we're not there yet. We're not through the push. We're still climbing the mountain, and it's a very immature science that we're still figuring out. So I would say it's not too late to buy in, but you need to buy in if you haven't.
Deejay (59:13) Yeah, because whilst we've been talking about some of the more negative aspects, like burnout, and maybe the height of the peak not being sustainable, you're still up, right? You're still delivering more software.
Elliott (59:24) Tremendously, tremendously, yeah. It's painful figuring it out. Actually implementing it, and changing your SDLC to accommodate it, is really painful, and you're going to spend time and money getting through that.
But you have to, because the gains are tremendous. You know, if you're like me and you're starting to figure out how to use MCP servers correctly, the potential there is enormous. I mean, it's huge. It's a new frontier for sure. You need to start figuring that stuff out.
Deejay (59:52) Cool. Right, I will wrap up. It's been a pleasure speaking to you again, Elliott.
Elliott (59:57) Likewise. Thanks, DJ.
Deejay (59:58) Many thanks to Elliott for giving up his morning to talk to us once again and keep us up to date on how things are going with his AI adoption journey. In the episode, Elliott talked about a testing tool called Qase, that is qase.io, "case" spelt with a Q and no U, so Q-A-S-E dot io. I don't get a kickback, but just in case you wanted to check it out. When we were done recording, Elliott and I continued chatting for a little bit, as we often do, and he mentioned that some of the challenges beyond adoption of AI coding tools sit much more on the senior leadership side: the use of AI and the ethical and regulatory concerns around it. So it's interesting to see that adoption is the first step, and beyond that, there are other concerns that become higher level. Thanks for listening. If you've got any feedback, then please do email us at wavesofinnovation@re-cinq.com. That is R E dash C I N Q dot com. At the time of recording, we've got some webinars coming up, so I do recommend checking out the website to see if any of those would be of value to you. Otherwise, be good to each other, and you'll hear me in the next one.

Episode Highlights

Velocity increased significantly, but developer burnout emerged due to excessive context switching.

Frontend teams see massive gains while backend teams struggle with complex system logic.

Accelerated coding speed caused major logjams in QA and User Acceptance Testing.

AI agents excel at rote tasks but fail at high-level architectural problem solving.

Model Context Protocol (MCP) servers may eventually replace traditional API integrations.

Implementing feature flags became essential to manage the high volume of unreleased code.

Successful AI adoption requires executive leadership to manage friction across all departments.
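On the feature-flag highlight above: the episode doesn't describe the team's actual implementation, but the core idea, merging code continuously while gating its release behind a runtime flag, can be sketched in a few lines. This is a minimal illustration only; the flag name and env-var config are hypothetical.

```python
import json
import os

# Minimal feature-flag lookup: flags live in config (here an env var),
# so unreleased code paths can ship to production switched off.
# The flag name "new_checkout_flow" is a hypothetical example.

def load_flags() -> dict:
    """Read flags from the FEATURE_FLAGS env var, e.g. '{"new_checkout_flow": true}'."""
    return json.loads(os.environ.get("FEATURE_FLAGS", "{}"))

def is_enabled(flags: dict, name: str) -> bool:
    """Unknown flags default to off, so merged-but-unreleased code stays dormant."""
    return bool(flags.get(name, False))

def checkout(flags: dict) -> str:
    # The new code path merges early but only runs once the flag is flipped.
    if is_enabled(flags, "new_checkout_flow"):
        return "new checkout"
    return "legacy checkout"

if __name__ == "__main__":
    print(checkout(load_flags()))
```

Real teams would typically use a managed flag service or database-backed config rather than an environment variable, but the principle is the same: merge continuously, release deliberately.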
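On the MCP highlight above: Model Context Protocol is built on JSON-RPC 2.0, and every server advertises its tools in one uniform schema, which is why the episode frames it as a successor to bespoke API integrations. Real servers are built with the official MCP SDKs; the dependency-free sketch below only illustrates the two core message shapes, `tools/list` and `tools/call`, and the `get_weather` tool is a made-up example.

```python
import json

# Toy dispatcher for two core MCP methods (JSON-RPC 2.0 message shapes).
# A client lists a server's tools with "tools/list", then invokes one
# with "tools/call", no tool-specific integration code required.

def handle(request: dict) -> dict:
    if request["method"] == "tools/list":
        result = {"tools": [{
            "name": "get_weather",  # hypothetical example tool
            "description": "Return the weather for a city",
            "inputSchema": {"type": "object",
                            "properties": {"city": {"type": "string"}},
                            "required": ["city"]},
        }]}
    elif request["method"] == "tools/call":
        city = request["params"]["arguments"]["city"]
        result = {"content": [{"type": "text", "text": f"Sunny in {city}"}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

if __name__ == "__main__":
    listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
    print(json.dumps(listing, indent=2))
```

The point of the uniform schema is that an agent can discover and call any conforming server's capabilities the same way, instead of hand-writing a client per API.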

Share This Episode

https://re-cinq.com/podcast/from-coding-to-context-switching-an-ai-retrospective

Free Resource

Master the AI Native Transformation

Get the complete 422-page playbook with frameworks, patterns, and real-world strategies from technology leaders building production AI systems.

Get the Book

The Community

Stay Connected to the Voices Shaping the Next Wave

Join a community of engineers, founders, and innovators exploring the future of AI-Native systems. Get monthly insights, expert conversations, and frameworks to stay ahead.
