
Podcast

April 10, 2026

Scaling Code Review When AI Writes the Software


agentic coding

code quality

static analysis

automation bias

mcp servers

ci cd

Explore how engineering teams can maintain code quality and security in the era of agentic workflows by combining deterministic gates with AI capabilities.

Hosted by

Deejay

Featuring

Jaime Jorge

Guest Role & Company

COO and Co-founder @ Codacy


Episode Transcript

Daniel Jones (00:03) This is the Waves of Innovation podcast and I am DJ, your host. This time I am talking to Jaime Jorge, who is the COO of a company called Codacy, who offer tooling around code quality and security analysis. Jaime started life off as a software developer before going on to found Codacy, and has since become an investor and all those kinds of cool things. So we talk a lot about the implications of agentic coding for software development and, more generally, for software businesses. And of course, the code quality and security implications of that. Sit back, or stand up depending on what you're doing whilst you're listening to this, and enjoy.

Daniel Jones (00:47) Good afternoon, Jaime Jorge. I'm very proud of myself for having made some half-decent attempt at pronouncing your name there. Thanks for joining me. Would you like to tell everyone who you are and what you do?

Jaime Jorge (01:00) Sure. Thanks, Daniel, for having me. I'm Jaime, COO and co-founder of Codacy. Codacy is a platform that enforces quality, security, and AI compliance in the world of AI-generated code, which is the world we live in today. So, quite happy to be here today.

Daniel Jones (01:18) Cool, and thank you very much for joining us. Just before we started recording, I was saying that literally yesterday, last night, I was having a conversation with an SRE on Slack who had had to look at an issue because there was an extremely large PR, like 5,000 lines, introduced by a developer assisted by an agentic coding tool. And there's some concern about the sheer size of the PRs that are enabled by agentic coding tools, and about trying to keep on top of things like code quality and security vulnerabilities sneaking through, all of that. So it's very much of the moment that we need more tooling to help with this. What does Codacy offer in this space, and what kind of things have you been seeing from your customers?
Jaime Jorge (02:07) That's an incredibly common and frequent observation, both in our customers and really across the industry. In fact, some research came out last year showing that PR sizes have increased 150% while review times increased almost 100%. And that's really because, with AI, we are producing roughly three times more software with the help of agentic workflows. And so in this new world, the task at hand is: how do we make sure that these massive pull requests now entering GitHub actually get reviewed? That people don't just click "merge", or, worse, use the same agent that produced the code to accept those pull requests? How do we live in a world where there are so many new lines of code to review? That's been our obsession at the company. What we've been focusing on is how we enable people, teams, individual developers, but also managers, to maintain ownership, responsibility, and accountability for the software being produced: reviewing the AI-generated bits and pieces of code produced by colleagues and teammates, and just ensuring quality and security. Another very interesting trend, which you can see in Google Trends, is the huge rise in interest in code quality and code security, because people now care about it more than ever before. And so we're building a number of very exciting features because of that. For example, the capability of leveraging all of the code analysis infrastructure that we've been building for the last 12 years and putting it in the hands of an AI reviewer that does all of that legwork of code quality analysis. This is just one piece of the puzzle, but the point is that there are fundamental questions that we as an industry are asking ourselves. What is code review in a world where review is almost no longer humanly feasible, right?
What is security analysis in a world where people don't have accountability, don't truly feel ownership, or don't even understand the code they are producing and shipping? I actually interviewed the author of a book about AI compliance who introduced the idea of automation bias: what happens in a world where we believe AI is always correct and we just click to accept? Well, if AI did it, maybe I can just accept it and move forward. These are fundamental questions. But, as we've seen for our own company, they're also opportunities to help teams and organizations ship safely while not losing any of the speed and efficiency they're hoping to get when they put these tools in place. Long answer, I'm sorry, Daniel.

Daniel Jones (05:16) Yeah, long answers are absolutely fine. It reduces the amount of waffle that listeners have to endure me contributing to the conversation. I mean, it's slightly disheartening when you see on social media, on LinkedIn, folks who maybe haven't engaged with the tooling that's available seeing this entirely as downside: not recognizing that there are opportunities for agentic tools to help improve the quality and, you know, the absence of vulnerabilities in software, and only thinking about the problem side of things. And in my mind there are a couple of different approaches, right? There's all of the deterministic stuff that you've been working on for years, which is arguably much quicker and more reliable than just asking an agent, "Hey, look at this code." But then you've got the automated review-and-fix process as well, which presumably must be a lot easier now: instead of you folks having to write old-fashioned deterministic code to figure out how to patch things, you can get your scanning tools to identify all the problems and then rely on an agent and a model to provide the fixes.
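The split Daniel describes here (deterministic scanners find the issues, an agent proposes the fixes) is mostly glue code. A minimal sketch in Python; the finding fields, rule IDs, and prompt wording are illustrative assumptions, not any particular scanner's or vendor's schema:

```python
def build_fix_prompt(findings):
    """Group scanner findings by file and render one scoped fix request
    per file, suitable for handing to a coding agent.

    `findings` is assumed to be a list of dicts shaped roughly like
    typical linter/SAST output: {"file", "line", "rule", "message"}.
    """
    by_file = {}
    for f in findings:
        by_file.setdefault(f["file"], []).append(f)
    prompts = {}
    for path, items in sorted(by_file.items()):
        lines = [
            f"- line {i['line']}: [{i['rule']}] {i['message']}"
            for i in sorted(items, key=lambda i: i["line"])
        ]
        prompts[path] = (
            f"Fix ONLY the following issues in {path}. "
            "Do not refactor unrelated code.\n" + "\n".join(lines)
        )
    return prompts

# Example findings, as a deterministic scanner might emit them:
findings = [
    {"file": "app.py", "line": 12, "rule": "B608", "message": "possible SQL injection"},
    {"file": "app.py", "line": 3, "rule": "F401", "message": "unused import 'os'"},
]
prompts = build_fix_prompt(findings)
print(prompts["app.py"])
```

Keeping the prompt scoped to one file and explicitly forbidding unrelated refactors is one way to keep the resulting agent-authored fix commits small enough to review.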
Jaime Jorge (06:30) Yeah. The way I've been talking to our team at Codacy about it is that the advent of AI is removing friction from the process of developing software in an incredible, unprecedented way. We've never seen this pace of friction removal before. We've obviously seen ladder climbing in terms of optimization and abstraction layers, right? Because we've become more and more expressive over the last few decades of writing software. But this now feels like a huge quantum leap in terms of the friction that's gone, and that is palpable. So, for example, what we're seeing is that our product remains a gate for many customers. They rely on it. For every single line of code that is shipped, they require our product to make sure that the standards they define from a quality and security perspective are enforced. That's a hugely important piece of responsibility, and it remains very relevant, especially in this world. But one thing that is true is that the deterministic layer comes with certain side effects, certain consequences, that are not as tolerated today. For example, when you're dealing with static code analysis, one of the things that is just part of how static analysis works, those deterministic rules, is that sometimes you have false positives. Sometimes you have issues that are marked as real when they're really not. And that is something people are steadily and quickly losing patience with. They don't want to be seeing false positives, because AI might give you the occasional hallucination, but it's better at filtering some of these issues. At the same time, AI is not the full-on silver bullet for every single problem of quality and security. In fact, its non-deterministic nature doesn't really allow you to enforce a consistent standard, because it can decide that, well, in this particular execution or run, this is what I'm going to look for, and then in an hour, or tomorrow, I'm going to look for some other stuff.
And so what we believe in at Codacy is a cyborg approach, in which you need deterministic rules as your backbone, and then you need AI on top of it, leveraging all the code analysis mechanisms to do the best possible analysis. That's why we see this as a huge opportunity, and there are customers really connecting with it. We have customers who are actually developing their own AI agents, and they're using our API as the backbone of code analysis, to get better code review. So yeah, it's an astounding moment. People have this expectation that things now need to be much more seamless, but at the same time they don't want the consequences of everything being too seamless. So the question for us is: what is a good type of friction?

Daniel Jones (10:07) Yeah, yeah. You mentioned the non-determinism there, and over on my right-hand monitor (for the people listening: we normally only do video for short-form, so you can't see the direction I'm looking in) I've got a conversation open with some security folks who are kind of new to agentic coding and asking, you know, "We would like to enforce certain steps." And I'm like, it doesn't really work like that. You can ask the model nicely to look for something, but whether it will actually engage and activate MCP tools is another matter. So presumably there's still a place for gates and ordered, predefined checks in things like CI/CD pipelines.

Jaime Jorge (10:55) It's more than "there's a place". It's one of the features now being introduced in all the major harnesses of AI models. If you look at Claude Code, they have hooks that run after the model executes. If you look at all these different IDEs, from Cursor to Visual Studio Code, they all have plugins in order to introduce essentially deterministic moments.
When an AI executes, just leveraging an MCP alone requires the model to want to call those tools at those particular moments. So that's not enough. More than a nice-to-have, what everyone is building is these moments in time where the model is forced to execute a check. And that's actually what we've been building at Codacy: the ability for some of these models to operate in an optimistic way with our capabilities. Because obviously, over the last decade, we've been collecting lots and lots of tools that can run, but also ways of configuring different verticals of security analysis, from DAST to secret detection, all things that can now be at the power and discretion of AI.

Daniel Jones (12:22) Before we started recording, we were talking about, you know, product pitches and things like that, so I'm now going to name-drop something I've been working on called Assembly Line, which leverages those hooks that you were talking about. It's just a little open-source repo entirely written by Claude. I haven't written any of the code myself, haven't read any of the code, but I've been really strict about spec-driving it and making sure there are good acceptance tests. The idea is that you define a sequence of prompts, and whenever you or Claude commit, it goes and runs those jobs, kind of like a local CI pipeline. It will automatically contribute fix commits and then rebase on your main branch. You talk, I say you, Codacy talks, about shifting left in the marketing material. And it will be interesting to see what the future of CI/CD is: how much of this stuff is going to be happening on developers' workstations, where you want, and can have, that much faster feedback.
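The hooks Jaime describes and the local pipeline Daniel describes both come down to the same move: a deterministic check that runs whether or not the model feels like calling it. A minimal sketch of such a check, shaped like a Claude Code post-edit hook; the payload schema, the exit-code-2-blocks convention, and the choice of pyflakes are assumptions to verify against the current hooks documentation before relying on them:

```python
import json  # needed when wiring this up as a real stdin-reading hook
import subprocess
import sys

def lint_edited_file(payload):
    """Gate a model edit behind a deterministic linter.

    `payload` is assumed to look like Claude Code's PostToolUse hook
    input, i.e. a dict carrying tool_input.file_path; check the docs
    for the exact schema. Returns (exit_code, message); by convention
    exit code 2 blocks the action and feeds `message` back to the model.
    """
    path = payload.get("tool_input", {}).get("file_path", "")
    if not path.endswith(".py"):
        return 0, ""  # this sketch only gates Python edits
    # Any deterministic checker slots in here; pyflakes is one example.
    result = subprocess.run(
        [sys.executable, "-m", "pyflakes", path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return 2, f"Lint failed for {path}:\n{result.stdout or result.stderr}"
    return 0, ""

# As an executable hook script you would finish with:
#   code, msg = lint_edited_file(json.load(sys.stdin))
#   if msg:
#       print(msg, file=sys.stderr)
#   sys.exit(code)
```

Registered under a matcher for the edit/write tools in the agent's settings (again, names per the docs at the time of writing), this turns "please run the linter" from a polite request into a guarantee.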
And you've got some tools there that are kind of local, right, for developers to make use of as they're writing code, not just running post-submit on a CI server somewhere.

Jaime Jorge (13:42) Right. When we launched, I think we were one of the first companies to actually launch an MCP server that enabled code analysis. We did that in April last year, when MCP was cool. Some of the things we learned were that the deterministic layer was incredibly important. But one of the things we played with was the concept of the leftmost part of the equation, the left side of it. So AI is maybe on the left side of the developer, because it's a tool that developers can use to actually produce software. I don't know what will happen to CI/CD. I think so much has changed over the last quarter.

Jaime Jorge (14:39) I went from playing with AI-enabled IDEs like Cursor and thinking this is obviously the future; to looking at CLIs and asking why anyone would do this in the terminal; to starting to use Claude Code and understanding why that's valuable, but still looking at the pricing and wondering who on earth would pay for a Max plan, who would pay 90 euros or dollars or whatever for all these tokens, no one's really going to use that; to looking at it today and saying, this is obviously a steal. It's a really good deal, because you can run it in parallel and do a whole bunch of work. Of course, if you're not doing it well... I was talking to Dax from OpenCode, and he was saying that sometimes people use all this parallelization as a proxy for productivity, and then they just throw all those lines of code away, having wasted time and burned a small forest doing so. So what I think, and this is why it's so hard for me to bet, is: I don't know. I don't know what's going to happen with CI/CD.
If history holds any value for predicting the future, I think more of it will run in developers' workflows.

Jaime Jorge (16:00) I think more will run there, more will be asked, more pressure and more responsibility will be put on developers, because that's really what we're doing today: we have fewer people doing more. And so I think CI/CD still has a very important place. We certainly think so at Codacy, because that's one of the things we sell, really: having that central moment where, whatever happens in all these local environments with all these agents, there is one moment where things need to be right before they go to production. That is still a very important moment before things go awry. And so I like to think that all the lessons and learnings and true engineering from the last decades of software engineering still hold true. I truly think so. All the best practices, whether those are applied by AI agents or are processes we build to protect against the dangers of AI agents, I think those will be important. I don't know which, though. I just think this is all changing so much, so quickly, and accelerating at an accelerating pace, that it's really hard to predict what will happen there.

Daniel Jones (17:23) Yeah, yeah, it is moving extremely quickly, and it's extremely challenging to keep up with. You know, it's my job to try and keep up with things. But for folks who are delivering on feature backlogs, who are worried about shipping a product, trying to find the time outside of that to keep on top of everything that's changing is a massive burden and quite a challenge for a lot of people. It's interesting you mention fundamental principles, and I'm inclined to agree: what we seem to be finding again and again is that good practice still applies for agents.
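The "one moment where things need to be right" that Jaime describes is, concretely, a deterministic gate script in CI. A minimal sketch; the specific commands (pytest, ruff, a pytest-cov coverage floor) are illustrative stand-ins for whatever checks a team actually enforces:

```python
import subprocess
import sys

# Each gate is a (name, command) pair. The commands below are
# placeholders: substitute your project's real test, lint, and
# coverage tooling.
GATES = [
    ("unit tests", [sys.executable, "-m", "pytest", "-q"]),
    ("lint", [sys.executable, "-m", "ruff", "check", "."]),
    # pytest-cov can enforce a floor, e.g. --cov --cov-fail-under=80
]

def run_gates(gates):
    """Run each deterministic gate in order; return the name of the
    first failing gate, or None when everything passes."""
    for name, cmd in gates:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return name  # first broken gate fails the build
    return None

# In CI, fail the pipeline when any gate breaks:
#   failed = run_gates(GATES)
#   sys.exit(1 if failed else 0)
```

Because every gate is deterministic, the same commit always gets the same verdict, which is exactly the property a non-deterministic AI reviewer cannot provide on its own.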
And if anything, it's slightly distressing that lots of things that maybe the not-greatest engineering managers in the world were overlooking and not caring about too much, things like test coverage and alignment on coding standards...

Jaime Jorge (18:16) Mm-hmm.

Daniel Jones (18:19) ...now that agents are involved, it's: well, we've got to have good test coverage, otherwise our agents won't be able to tell if they've broken something. And we've got to have a good understanding of what good looks like in our code, because otherwise the agents won't be able to write good code. And these things have been important since forever, you know. When it was humans, you didn't care so much, but now it's agents, it's trendy to care.

Jaime Jorge (18:42) Right, 100%. I think now, more than ever, testing, and as much as you can do in terms of testing, will be the way we prove that AI agents did what we wanted them to do. Particularly when we connect things like specs and then acceptance testing, all those very nice things that measure the outcome, whether things work or not. I think there's an interesting piece there, though: we're introducing AI agents to code, AI agents before that to help us write the specs, AI agents to test, AI agents to do some of the acceptance testing, AI agents to merge, AI agents to review code from others. The point is, if we just put AI to work on all these different jobs, how do we enforce checks and balances? If we get AI agents to write the tests, and then they're also the ones enforcing unit test coverage... OK, maybe you can get a different model to do different things; that would actually not be a bad idea. But the point is that there also needs to be a human element, a human in the loop, in some of these processes. That's why, for me, it's so fascinating to think about what a good type of friction is in this process.
What is something that should pause us for a second and say: hey, we've been going so fast, but please take a look at this digest of the last three reviews, just to make sure you're aware of what's happening? Everyone wants to go super fast; that's decided. No one's going to pause or stop. No one's going to stop the train.

Daniel Jones (20:42) Yeah.

Jaime Jorge (20:42) But eventually we have to discuss a little bit what that means, right?

Daniel Jones (20:46) Yeah. And one of the things that seems to happen in a lot of organizations that adopt agentic coding, and do start going faster with the engineering, is that they rapidly uncover all the other bottlenecks in their development process. One of which, commonly, is product: trying to figure out what exactly it was we were supposed to be building. They either run out of well-defined stories, or it becomes clear they weren't that well defined in the first place. And I suppose a slightly worrying thing is: if people are struggling to define what functionality they want, are they also going to develop an increased blind spot for non-functional requirements? I mean, if folks can't specify the features they want very well, how are they going to specify how secure their code needs to be, or its quality?

Jaime Jorge (21:35) Yeah, I have a contrary take on that challenge. Obviously the take is that, you know, software development is hard and producing the code was never the bottleneck, right? And I completely subscribe to that. But I also think that, as I've been seeing, AI can help you unblock some of the thinking: build the prototypes that lead you to the conversation that gets you the right data point to build the right thing afterwards. And many times the lack of clarity comes from a lack of information.
Either we need to have a conversation with the customer to truly understand the requirements, or what they need. With AI, we can jump-start that. We can jump farther ahead into that conversation, at the very least showing prototypes to get it started. The cost of starting something is dramatically lower. And I also see, and this is something I experience myself, that AI makes some genuinely interesting decisions when you ask it to do good research. So with Claude Code, you can say: hey, please research this particular problem for as much time as you can and build a market research document, because I'm going to look at this particular problem. OK, now you have that. Next: I want to fine-tune this particular problem and build a spec for a solution. OK, now I want you to write a red-team document on why this problem is not really real, and test your own assumptions, right? And as you build this knowledge with AI, you're learning too, and you can jump-start much of this. But something I truly believe is that action brings information. The more you move, the more you do things in a company, the closer you get to the right solution, the right moment, the right milestone. And I think AI can be an asset there. So for me it's less about lines of code ever being the bottleneck; AI can also help you in the earlier parts, the research.

Daniel Jones (24:07) Yeah, yeah, absolutely. And the speed with which prototypes can be created now means that you can realize that maybe you didn't know exactly what you wanted, or that what you thought you wanted wasn't the right thing. Or, when you have vague specs and an agent fills in the blanks, you're like: oh, that was surprising. I didn't think of that, but maybe that's interesting.
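Jaime's research, then spec, then red-team loop can be scripted rather than typed by hand each time. A minimal sketch; the `claude -p` (non-interactive print mode) invocation and the prompt wording are assumptions to adapt to your own agent harness:

```python
import subprocess

# The three stages Jaime walks through, in order. The prompts are
# hypothetical paraphrases of what he describes, not verbatim recipes.
STAGES = [
    ("research", "Research this problem for as long as you can; write market-research.md."),
    ("spec", "Using market-research.md, write a solution spec to spec.md."),
    ("red-team", "Write red-team.md arguing why this problem is NOT real; test the spec's assumptions."),
]

def run_stages(stages, runner=None):
    """Run each prompt in order. `runner` is injectable so the pipeline
    can be exercised without invoking a real agent."""
    if runner is None:
        # Assumed CLI shape; check your agent's docs for the real flags.
        runner = lambda prompt: subprocess.run(["claude", "-p", prompt], check=True)
    completed = []
    for name, prompt in stages:
        runner(prompt)
        completed.append(name)
    return completed
```

Each stage deliberately consumes the artifact the previous one wrote, so the red-team pass is arguing against a concrete spec rather than a vague idea.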
On the subject of prototypes and the rapidity, I don't know whether rapidity is a word, the speed with which they can be created: are you seeing much demand from folks who have non-developers creating all the Lovable prototypes? Because in a couple of organizations I'm working with, tools like Lovable and even Claude Code have been given to, air quotes, non-technical people, meaning non-software-developers. A few of them have really been bitten by the bug and are now cranking out all sorts of tactical software solutions to business problems. For the folks trying to keep up with that, it's a bit of a challenge, because they don't necessarily have CI/CD pipelines, and they don't necessarily have a platform for these kinds of apps to live in. But all of a sudden there are all these GitHub repos popping up from, like, the CFO and the head of marketing.

Jaime Jorge (25:23) So in our business, we don't interact much with people who are not technical, because obviously our product is very much designed for a world where technical folks lead the development. But I do have many friends who have caught the bug and who are trying, and in certain cases even succeeding, to produce these prototypes. I think that's where it gets interesting for me, because the engineering piece is not really removed from how we produce software. Even working with Lovable, you can get the best prompt possible to start something, and it will give you a nice-looking scaffold, a boilerplate product with some really good assumptions about what you probably wanted. But an engineer looks at that and starts thinking: OK, got it, this is actually a good start; I'm going to plan from here. They start breaking the work down into pieces, like a tree: all the work they're probably going to have to do to launch that product.
A non-technical person who hasn't gone through the engineering school of life doesn't, or at least in my experience they don't, work in abstraction layers. So they get very close to the details, then they get obsessed with a particular detail that Lovable implemented: I really want that detail to be as correct as possible, and they spend all their credits there. So what I'm trying to get at is the promise of having someone build something more complicated than a landing page or the internal, intra-company tool: I still don't think we're there. We're much closer than we were with low-code and no-code and all that wave of software, but I still think there's so much more engineering practice involved. For example, there are people who really like producing products but found it unreasonable, because you had to maintain everything and do all this yak-shaving. I put myself in that category. Now they're having a lot of fun coding with things like Lovable, but also Claude Code and so forth. I think non-technical people are getting into it, but we're still going to need a bit more hand-holding, bigger context windows, and better models to help them. That's what I think.

Daniel Jones (28:12) Yeah. You mentioned people who want to build things but are put off by the friction. I've certainly had that experience over the last few years: there have definitely been things I wanted to exist, but then you end up at your terminal going, God, I've got to figure out how to do dependency management in whichever language it is, and I've got to set up the thing, and then there's a clash of versions. You were a software developer when you started Codacy, right? Do you still consider yourself a software developer? And have you had the moment a lot of the tech leaders I speak to have had in the last year or so, a moment of: wow...
...you know, I can make all the things, I can be super productive, I can enjoy programming again. Is that a journey you've been on personally?

Jaime Jorge (29:05) Yeah. I like to think of myself as a software developer, because I truly like software. When I started the company, parts of the backend were written by me. I don't think that code exists anymore. Some of the things I really enjoyed in the very beginning were just marveling at small pieces of code. You know, when you looked at a function that was so smart, that did so much with so little, so expressive, and you'd just marvel for a second at that, at how clever it was. I remember at the beginning of my career, just out of college, spending time simply looking at code. I don't think that happens anymore. So today, with Claude Code, I am absolutely amazed and happy, because the things I really enjoy doing are about product. I love seeing a product go from zero to one. I love seeing features being born, creativity being fulfilled, materializing creativity. That's something that drives me a lot. And so I've built so many different prototypes for different things: some died immediately and had a use case of an hour; others we now use internally at Codacy, because I got to play during a weekend and built a tool to cut up code and do a particular type of analysis using Claude Code. So I don't think the question is even whether I'm good or bad at it. The point now is: can I control an agent well enough that it covers all the bases required for the software to be good? That, I think, is the job. Am I controlling it well enough? Am I a good enough agent herder, an orchestrator of agents? Am I paying attention to all these different facets?
And so that's where the game is now. And it's so fun. It feels, and I wrote about this, almost like loot boxes. It's less about code than loot boxes: let's get good software.

Daniel Jones (31:28) Yes.

Jaime Jorge (31:31) And it's not always good, you know.

Daniel Jones (31:33) Yeah, it's that random-interval thing: you're not always getting a pleasant surprise. Sometimes you're disappointed and you give it another go. And it's a real rabbit hole people get stuck in, just prompting and re-prompting, when maybe they should do a rebase, throw away a few commits, and start again. Codacy as an organization has an advantage, right? In that you are kind of your own customers.

Jaime Jorge (31:45) Mm-hmm.

Daniel Jones (31:59) You understand the end customer and their needs because you're also writing code: you're writing software systems and your consumers are writing software systems. How has it been, if you're able to speak about it, as an organization facing similar challenges to your customers and adopting agentic coding across the organization internally?

Jaime Jorge (32:23) I think, pretty much like any other organization out there, we're going through a transition in which we have a cohort of folks who are extremely excited about AI, some folks who are just getting into it, and some folks who maybe don't believe in its true value, because they've seen AI-produced software that is not good. And until very recently, that was just the reality: if you were using AI coding agents a year ago, you probably didn't have great results. In the last two quarters, that changed dramatically, and with models like Opus 4.5 and GPT 5.2, things are dramatically stronger now. And so, like any organization, we're going through a transition where some folks are already building for the very developers who are excited about AI.
And some folks are adapting to and learning these tools: to use ourselves, to integrate, and then to take to customers. I think we're maybe a bit farther ahead than some other industries, but I am very paranoid about it, because what I also see is that our customers are trying to enforce adoption; they're trying to make sure it is being used. It's as if fire was invented and given to everyone at the same time: you cannot ignore it, right? But it's not going to be a silver bullet, and that's what I think most people fear: the optics of this being seen as a silver bullet by management when it's not. Like you said, and it's true, many times the biggest bottleneck is not producing software; it's understanding exactly what we need to build. So that's an important understanding. But I do think these tools need to be adopted. There's no way around it. We have to use Claude Code and Codex and all these tools, because they're really, really good. And in our case it's existential, because our customers are doing so. So that raises: what is code review, code quality, the enforcement of standards, in a world where AI is writing the code? How do we enable people to review 10,000 lines of code in a pull request? How do we help them? Only by using those tools ourselves do we effectively learn how to help them. So that's a big push. I mean, myself and Liesh have been trying to help people use AI, because it puts us in the shoes of our customers, which is very important.

Daniel Jones (35:18) You mentioned, or you used, the word "existential" there. It's kind of concerning, in one way, that software is becoming less of a defensive moat.
In the past, you could say: well, we've invested years and years into building this software product, and it will take anyone else years to catch up. And that seems to be evaporating for a lot of organizations. As the COO of a software company, is this something that...

Jaime Jorge (35:29) Mm-hmm.

Daniel Jones (35:45) ...keeps you up at night? Are you, what's the word, doubling down on the non-software elements? Because I think a lot of software developers are suddenly coming to the realization that actually running a successful business is a lot harder than just having some good software and a good idea. All these developers who say, "I've got this great idea for a business, but I don't have time to build it": well, now you can build it over a weekend with Claude Code, and then you get the crushing realization that sales and marketing are really hard and relationship management is really important. So are you finding those balances shifting? Do you worry about the fact that somebody with Claude Code can try re-implementing your software over a weekend?

Jaime Jorge (36:32) In our particular case, it's not someone rewriting our software over a weekend; it's actually understanding what the space and place of static analysis is in the future. As I mentioned before, users will have a much lower tolerance for any false positive, because the alternative is using AI. And so this has been compelling us to develop a number of features that remove that friction and dramatically improve the experience. In our case, even: when you have Claude Code doing security analysis, what does that mean for us? Those are fundamental questions we've been asking. One thing we've learned over these years is that Codacy as a company exists at the intersection of these organizations.
We exist because it's hard to actually produce software, and produce something with quality and security. Whether you have 10,000 developers, or five brilliant ones, or just all the Claude Code agents of the world producing software, you're still going to have to comply with the same things. You need standards of quality and security that you conform to. Many of these companies also have actual compliance processes and mechanisms they need to abide by. And so that's always been our space. It's not about reading code; it's about making sure that code and software really comply with all these different mechanisms we have around it, so that organizations can move safely. The way I often pitch it to our team is: we are builders of trust. People build software. They write all these lines of code. Now AI writes thousands of lines of code in each pull request. But how do we trust that? That's really what we've been doing over the last decade: enabling software produced by either AI or humans to be trustworthy, so that when you deploy it, you believe it won't have vulnerabilities, that it won't put you to shame on Twitter because someone found a row-level security problem in your database, that there are no issues and the software is of high enough quality that you can continue maintaining it. It's really about that trust element. And I think now more than ever, the intersection of these different processes, people, and tools inside companies is not something that is easily replaced. I think there's also a mirage. People like the idea of: I recorded a video of my screen looking at Salesforce, I fed it into Claude Code, and I have a CRM. No, you don't. You have a mockup. Salesforce is a collection of Daniel Jones (39:32) You Jaime Jorge (39:36) thousands of different plugins that integrate.
Sure, you can have a small CRM that maybe is useful at some scale, but you're not really replacing the connection between software and people. That's really what this is about. We exist because we have organizations with, let's say, 5,000 developers; some of our customers have many people who rely on software and the reports from Codacy to do their job. So that is not reproducible immediately, though everything is obviously replaceable eventually. More than that, for me, it's: how do we exist in a world where AI is going to get into every single corner? And that's not a fear that I have. I want to focus on that world of having larger and larger bodies of software that we have to review, maintain, and ship, I guess. Daniel Jones (40:22) Yeah. Yeah. And the trust issue is really important there. If somebody can create some new thing, vibe-coded over a weekend, is anyone going to trust it in the same way they trust a company with, you know, a long track record? In terms of humans in the loop, and the emerging field of software factories, it's becoming a bit of an in-joke that every episode I record, I now talk about software factories. But we're seeing things like Stripe's minions, where, you know, they've got Jaime Jorge (41:01) Mm-hmm. Daniel Jones (41:02) agents being spun up to act on pull requests in isolated environments. And we've got things like the dark factory pattern, where no humans read or write code. Are you running any experiments in how things like code quality can be ensured in those kinds of environments where there is no human in the loop?
I guess in some ways, if you've already got software that is reacting to deficiencies and proposing fixes, you're ahead of where maybe the rest of the industry has been, due to the nature of the space you're in. It's like the early signs of software-factory-type behavior are in those kinds of systems. Jaime Jorge (41:52) Yeah, we have enabled customers to implement, not fully a black box of AI agents working together, but an almost autonomous software machine, where you click and it develops and all of that. They go and implement certain features and Jira tickets and so forth, and they leverage our own code analysis system to give that feedback loop to these agents, so that as part of that process, they make sure to pass some of these gates. So we feel we're part of that already, particularly when the organization starts scaling beyond 150 developers. That's really, really useful. We've been exploring that internally, even for ourselves. How do we get that idea of minions and agents working? How do we make sure it has some safety nets and standards in the process, and that there's some sort of human element reviewing certain moments in time, either a digest or kind of stop points or something like that? We're exploring. I think that's really important. I think it's an incredible time and an incredible opportunity for us right now, because we're seeing a huge pull from companies that before were not as interested in quality. Now it's being demanded of them. Quality and security was always a big point, but now that the AI craze has somewhat paused, or plateaued, plateau is a hard word, after this boom, I'm starting to sense that some folks will get fired because of lack of quality in their software. Whereas before, I don't think this was as true. I don't think this would happen.
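The feedback loop Jaime describes above, agents passing their work through a code-analysis gate before it is accepted, can be sketched as a simple severity filter. This is an illustrative sketch, not Codacy's actual API; the finding shape and severity names are assumptions:

```python
# A deterministic quality gate for agent-produced changes: findings from
# a static analyzer are filtered by severity, and only a clean (or
# low-severity) result lets the agent's change proceed.
# The severity names and finding dict shape here are hypothetical.
BLOCKING_SEVERITIES = {"critical", "high"}

def blocking_findings(findings: list[dict]) -> list[dict]:
    """Filter analysis findings down to the ones that must block a merge."""
    return [f for f in findings
            if f.get("severity", "").lower() in BLOCKING_SEVERITIES]

def gate_agent_change(findings: list[dict]) -> bool:
    """Return True if the agent's change may proceed to review/merge.

    In a real pipeline, `findings` would come from running a static
    analyzer over the agent's diff; blocked findings are fed back to the
    agent so it can self-correct before a human sees the pull request.
    """
    return not blocking_findings(findings)

clean = [{"severity": "info", "rule": "todo-comment"}]
risky = clean + [{"severity": "high", "rule": "sql-injection"}]
print(gate_agent_change(clean))   # True: nothing blocks the merge
print(gate_agent_change(risky))   # False: agent must fix and retry
```

The point of the deterministic filter is exactly what the conversation stresses: the gate produces the same verdict for the same code every time, regardless of what the non-deterministic agent did to produce it.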
I think people get fired if they have no security and get breached; that's always a risk. I think quality is going to become a huge thing in the future too. It's like, well, you vibe-coded yourself into a corner, and now how do you get out of it? I think that's a very interesting opportunity for companies like ourselves, if we continue to move as quickly as we have. Daniel Jones (44:28) Yeah, there's an interesting dynamic when people can be blamed for things. I can imagine some, not everybody, but some category of customer or software development shop being like, oh, well, there was a security vulnerability and it got out. Well, that was because Jimmy made a mistake, and Jimmy's the junior, and, you know, these kinds of things happen. But as we shift to more and more automation, you can imagine a world where people start to look at quality and security more systemically: well, if you built a software factory that didn't check for these things, that's not Jimmy having a bad day because he was hungover. That was on you for designing a system that didn't assure these things. And we all know that agents don't always do the right thing. So I can imagine a world where folks see those things not as human slips anymore, but as systemic failures in the software factory they're designing. You mentioned, or we talked earlier about, the MCP server that you folks offer. Does using that lead to higher quality in the development process, separate from using analysis as a gate, maybe in a CI/CD process? If I'm developing something and I've got the MCP server added to my coding agent, Jaime Jorge (45:23) Mm-hmm. Daniel Jones (45:51) should I expect my agents to produce higher quality code? And do you see developers, because it's more immediate and accessible, being able to get that kind of advice and checking done locally? Do you see or anticipate quality going up before checks are made?
Jaime Jorge (46:12) So that is the whole thesis of our Codacy MCP server, or essentially our guardrails product that we launched last year: the ability for a coding agent that is doing a particular task, when it's done, or kind of before it finishes and returns control to the user, to call the MCP server from Codacy and run a number of code analyses locally. The analysis is essentially being called by the AI agent; it finds issues and then corrects them if the severity is high enough. And that had really interesting results, because we essentially proved that you could have a coding agent operate well and then, at the end, make sure that everything is tidy. Anthropic, with Claude Code, recently validated that angle by introducing a command, I don't remember the name, I think it's Simplify, that also looks after an execution with the intent of the user, so the user has to call it first: look at the code quality of the software and make sure it can find improvements to be made. Over time, we're exploring skills to make sure we can enable the catalog of code analysis capabilities, as well as the big code analysis infrastructure we've built throughout the last 10 years, to be available for a coding agent to leverage. So some of the things we're now building are integrations into these new workflows, not necessarily just through the MCP server that we've built, but also through a literal skill that can be called by, you know, Codex or Claude Code. And that flow, I believe, is the future: having an AI agent that runs in a non-deterministic manner, and then leverages tools to really give backing to an assessment of quality and security that it can add its own thinking on top of. It becomes supported by that deterministic analysis. And that's, I think, the only way to go.
And that's how it works in every single part of an AI agent's work. If it wants to research something, it's going to use a tool: web search. If it wants to make a migration, it's going to use a tool to call the database. So if it wants to deliver quality and high security, it's also going to have to run a tool to get more data, just to make sure it always finds the same thing. That's how we see the world. Daniel Jones (49:14) We've got a hard stop in five minutes because you've got a family commitment. It's the end of the day for both of us. Is there anything else that you particularly wanted to talk about? Anything I've not asked you about that you wanted to chat over? Jaime Jorge (49:19) No, I think the way we're seeing this, there's tremendous excitement in an industry that really never sleeps, right? We are the New York of all industries. We never sleep. It's always shipping, it's always new things. You wake up and now there's a new way to do things. And if you live on Twitter, you probably have mental health problems, because the scrolling, and the new AI-related things, just never stop. What I think is fascinating, particularly for companies like Codacy, but many others, is that there's a big opportunity to translate, or migrate, technology from all these new innovators, people who are just inventing the future and playing with OpenClaw or Cloudbot, whatever the name is this week, over to larger companies. I think if there's one sentence that holds true throughout the next few years, it's: you help companies adopt AI the best you can. Whether that is in your CRM, whether it is coding, whether it is reviewing, whether it is quality or security, whatever that means, it's adopting AI at scale and doing it really well.
And if you do that, I think every business has a chance to stand on its own. That's also how we see the world. And many of the things that we're bringing to market in the next few weeks are about helping companies be more efficient, more effective, and more compliant without losing speed. You don't have to trade that for the insecurity of: I have all these agents doing all this shitty work, apologies for cursing, all this bad work, but now I have Codacy to help me cover my bases. So that's where we stand and that's how we see the world. Yeah. Daniel Jones (51:32) Nice, nice. And just to wrap up, if somebody listening to this is in an organization where agentic coding is being adopted, and they've got concerns about either security or the incoming torrent of PRs being raised, what general advice would you give them? Jaime Jorge (51:52) So the first thing is, we would be happy to talk, first of all, because we see this at scale, across more than a thousand customers. We see very small, medium, and very large organizations with different tech stacks, so we'd be more than happy to help; reach out to us and we'll set something up to have a conversation. I think the second is: talk to some of the engineers who are already playing with some of these tools. Listen to them, ask these questions, see what tools are in use and what the problems are. Play with those tools yourself as well, so you can get a sense of how the world will be. It was very different, for example, for me to start using and playing with Claude Code to understand the implications of this for software development. I think they're massive and exciting, but you need to get your hands dirty. And then there are a number of folks who are important to follow, who I believe can also be very helpful in seeing how the world will shape up.
But yeah, you know, I asked the same question, or a version of that question, to CTOs, for example Dana Lawson, CTO of Netlify: how do you stay updated? And no one really has a good answer to it. So if the people producing the innovations don't have good answers, I don't think anyone else should be completely obsessed with being updated hourly. I think everyone's figuring it out, and the most important thing for me is: be open to some change, because change is coming to everyone. So that's it. That's what I would say. Daniel Jones (53:44) Excellent. And we are bang on time. So, Jaime, it's been a pleasure. I really appreciate you taking time out of your busy schedule to chat. But yeah, thank you, and let's wrap up so you can get on with your evening. Jaime Jorge (53:57) Thank you, Daniel, for the invitation. It was great being here. Bye. Daniel Jones (54:02) Hopefully you enjoyed that and found it interesting. When we were done recording, Jaime and I were chatting, and he did point out that it was me who kept bringing up Codacy and the services they offer. You know, when you bring people onto the podcast, it's nice to be able to share what they do and help make sure that good things come their way. But that was definitely my doing, rather than Jaime coming in as full hardcore sales trying to push his product. Either way, I hope you enjoyed it. We would still like feedback on the editing: whether you would rather have all pauses removed, so you can get straight to the most important information without listening to me say um every few seconds, or whether you'd like a more natural, unedited version instead. If you have any feedback, please send it to wavesofinnovation@re-cinq.com. That is R-E dash C-I-N-Q dot com. It'd be great to hear from you. Otherwise, be good to each other, and you'll hear me in the next one.

Episode Highlights

PR sizes have increased by 150 percent due to the adoption of agentic coding workflows.

Automation bias leads developers to blindly accept massive AI-generated pull requests without proper review.

Non-deterministic AI models require deterministic rules as a backbone to enforce consistent security standards.

The cyborg approach combines traditional static analysis with AI agents to ensure code reliability.

Managing AI coding tools feels like opening loot boxes with unpredictable but often rewarding results.

Foundational engineering practices like rigorous test coverage are more crucial now than ever before.

Trust and compliance remain the true defensive moats for software businesses in an AI native world.

Share This Episode

https://re-cinq.com/podcast/the-rise-of-ai-in-software-development

Free Resource

Master the AI Native Transformation

174 patterns, 422 pages — #1 Bestseller From Cloud Native to AI Native is FREE for a limited time

Get it For Free!

The Community

Stay Connected to the Voices Shaping the Next Wave

Join a community of engineers, founders, and innovators exploring the future of AI-Native systems. Get monthly insights, expert conversations, and frameworks to stay ahead.