The sql_squared Podcast!!!
About The sql_squared Podcast
The sql_squared podcast is your guide to navigating the ever-evolving world of data. We go beyond the code to explore the tools, techniques, and trends that shape the data landscape, from SQL Server and cloud platforms to AI and developer productivity. Join us as we chat with experts from the community to help you learn, grow, and make the right decisions on your data journey.
Connect With Us
- Website: https://www.sqlsquared.co.uk/
- Twitter/X: @sql_squared
- LinkedIn: @sql_squared
- Email: mailbag@sqlsquared.co.uk
sql_squared: GitHub Universe - The Future of Agentic Development w/ Sergio Sisternes
In this episode of The sql_squared Podcast!!!, we're joined by our special guest, Sergio Sisternes, for a deep dive into the latest announcements from GitHub Universe and the rapidly evolving world of AI-driven development.
As the Director of Technology Solutions for Microsoft Azure at EPAM, Sergio shares his unique insights fresh off the plane from Silicon Valley, giving us a direct debrief on the tech that is rewriting the future. This conversation is centered around moving beyond the hype of AI to understand the practical shifts in the Software Development Life Cycle (SDLC).
Listeners will walk away with a deeper understanding of the difference between standard AI and Agentic AI, practical tips for building effective Model Context Protocol (MCP) servers, and a fresh perspective on how junior developers and architects must adapt their mindsets from "what" to "how". This is a can't-miss episode for anyone interested in cloud-native development, Copilot strategies, and surviving the productivity "arms race".
Guest Information
Name: Sergio Sisternes
Bio: Sergio is the Director of Technology Solutions for Microsoft Azure at EPAM, where he leads high-performance teams and architects delivering cloud and AI solutions for the future.
LinkedIn: https://www.linkedin.com/in/sesispla/
00:00 Intro
01:58 The GitHub Universe Vibe: Agent HQ & Mission Control
05:17 Why Code Quality Matters Before AI Adoption
10:37 Explained: Machine Learning vs. Deep Learning vs. GenAI
15:07 The Mindset Shift: Moving from "What" to "How"
21:00 3 Ways to Optimize Copilot: Instructions, Prompts & Agents
28:33 The Right Tool for the Job: Understanding MCP
38:18 Can Junior Developers Survive the AI Era?
44:15 Becoming a "100x Engineer"
47:53 The Global Productivity "Arms Race"
52:07 Outro
#data #ai #github #softwareengineering #podcast #azure #agents #sdlc #cloud
Support the show!
Enjoying The sql_squared Podcast? The best way to support us is by subscribing to our YouTube channel!
- Watch & Subscribe on YouTube
- Listen on other platforms: Apple Podcasts | Spotify
- Leave a Review: If you found this episode helpful, please leave a 5-star review wherever you get your podcasts or leave a like and comment on YouTube.
- Share: Know someone who would love this episode? Go ahead and share a link!
David Morgan-Gumm (00:01)
Hey everyone, and welcome back to SQL Squared, where we chat about all things data, development, and the cool tech shaping our world. Today we have a fantastic guest, Sergio Sisternes. Sergio is the Director of Technology Solutions for Microsoft Azure at EPAM, where he leads high-performance teams and architects delivering cloud and AI solutions for the future. And we've caught him at the perfect time: he's fresh off the plane from GitHub Universe, and I'm hoping he has a ton of firsthand news and insights for us. Sergio, it's great to have you on the show.
Sergio Sisternes (00:31)
Hi, David. Thank you so much for the opportunity. I'm really looking forward to this.
David Morgan-Gumm (00:35)
Good, good. I'm looking forward to the chat as well. We're incredibly lucky today, because we're getting a direct debrief from one of the biggest events in tech. We're going to be unpacking the key announcements from GitHub Universe and, most importantly, discussing what they actually mean for us developers. We'll explore how the new wave of AI is changing the game and how that impacts the day-to-day of a developer versus an architect. We'll look at how these new tools fit into a cloud-native world.
And we'll cap it off by looking at what this all means for the future of our roles. So the first topic we have today is mainly around GitHub Universe. It's going to set the scene: the vibe and the keynotes that you went to. I hope you really enjoyed the event, and that the traveling back and forth wasn't too painful. I know you flew back
into the UK yesterday, so I also hope you're not too tired this morning, and thanks for keeping the recording time. But for those that weren't there, and I wasn't there, I'm going to act as the audience right now: how was it? Was it good? What was the overall vibe? And I also want to know, though I can imagine what it is already: what was the one word you kept hearing over and over again?
Sergio Sisternes (01:58)
Universe is an amazing experience, right? To be in Silicon Valley, with all the bright minds surrounding you; that's where the future gets rewritten, again and again. And yes, you can imagine the one word that we heard over the two days the event lasted: it was Copilot. Copilot everywhere. Copilot
David Morgan-Gumm (02:10)
Yeah.
Sergio Sisternes (02:22)
not only for developers, but for all the stages of the software development lifecycle, from requirements to deployment to operations. Copilot is becoming center stage for all things SDLC. For instance, they announced Agent HQ, which in the end is a new way to make GitHub a central platform where all the AI agents from different vendors get consolidated, creating a unified
David Morgan-Gumm (02:34)
Mm.
Sergio Sisternes (02:51)
user experience centrally on GitHub. Also, Mission Control is going to be a key part of the strategy. It's the way all these centralized agents can be governed, controlled, and monitored, because the strategy is clear: we want to see more and more agents working for us, playing specific roles in the SDLC and helping us take away the burden of the common tasks that we do day to day.
David Morgan-Gumm (03:13)
Mm-hmm.
Yeah, I've been looking through the recap page, and for those listening, I'll link it in the description. That Agent HQ looks like a game changer to me: being able to use the right agents for the right job all within GitHub itself, and setting it all up there as well. And the link to VS Code, being able to manage it all
Sergio Sisternes (03:25)
Mm-hmm.
Hmm.
David Morgan-Gumm (03:49)
within your IDE rather than going directly into GitHub with the MCP. Yeah, it seems like a real game changer. I've been using GitHub Copilot a lot recently, ever since your talk in Manchester, actually, which really put into perspective for me what agents can do for you when developing. So I've been doing a few projects myself, using all those features, and
Sergio Sisternes (04:02)
Thank you. Yeah.
Yeah.
David Morgan-Gumm (04:19)
really looking forward to kind of expanding my portfolio with all the new features.
Sergio Sisternes (04:24)
Yeah, and we also see that we're not only getting more and more AI capabilities; GitHub is also focusing on making sure that they can be leveraged at enterprise level, with all the centralized AI governance tools. For instance, with all the metrics that we're getting out of GitHub Copilot, we really have more and more controls at the enterprise level.
David Morgan-Gumm (04:34)
Yes, definitely.
Mm-hmm.
Sergio Sisternes (04:47)
Something that's not that big, but I think is going to be really relevant for many enterprise clients as well, is the ability to have self-hosted GitHub coding agents. Today, the version of the coding agent that you run on github.com and assign pull requests to runs on GitHub Actions. Now they've announced support to run this on self-hosted runners, which is another step towards
David Morgan-Gumm (05:08)
Yeah.
Sergio Sisternes (05:16)
having tighter governance and helping enterprise clients. It's really interesting.
David Morgan-Gumm (05:17)
Yeah, that definitely seems like a major introduction. And with the cost side as well: I only pay for a couple, but I can imagine lots of enterprise clients that are in this space and looking at AI for development really seriously will probably have quite a few different subscriptions going on. Being able to centralize all of that, with the major models, will be a big win for them.
Sergio Sisternes (05:23)
Mm-hmm.
Mm-hmm.
Mm-hmm.
And I think the other thing that really amazed me was the introduction of more and more security tools, and they also announced code quality, which for me is essentially something that we're discussing with many clients. Many clients are asking us: Sergio, when we adopt Copilot, what is the strategy? How do we start, how do we continue, what is the overall strategy?
David Morgan-Gumm (05:57)
Mm-hmm.
Sergio Sisternes (06:17)
I always say the same thing: you can use Copilot to go faster, but before you go faster, you probably have to fix some of the technical debt that you have. If you don't have good unit tests, good integration tests, obviously both Copilot and humans will struggle to know if they're creating some kind of regression. If you don't have good code quality, good code linting, good comments, AI will struggle
David Morgan-Gumm (06:30)
Mm-hmm.
Sergio Sisternes (06:45)
to follow the code and help you improve it. So code quality, to me, is the first step in the journey, and once you start improving that, it can help you accelerate the development process. Having code quality tools embedded in Copilot, embedded in GitHub, is going to be another game changer in helping us accelerate.
David Morgan-Gumm (07:04)
Yeah, definitely, I agree with that. Again, I'm on their recap website, and they've got quite a few screenshots of the code quality dashboards and examples of improvements that you can make to your code base. I'm going to have to link to that.
Sergio Sisternes (07:11)
Mm-hmm.
Hmm.
Yeah.
And Agent HQ was really amazing. We had OpenAI on stage showing their Codex agent integrated into Copilot. There was an amazing demo where they triggered four jobs in parallel within Visual Studio Code, and you could see how
David Morgan-Gumm (07:36)
Mm-hmm.
Sergio Sisternes (07:46)
four instances of the Codex agent were working at the same time, fixing different parts of the code. And then there was a really mind-blowing moment when you could see all four working together, integrated into the Copilot experience and working for you. It was like, wow, that's really cool. It's really cool.
David Morgan-Gumm (07:52)
Wow.
Yeah, is
that for future release or is that available now?
Sergio Sisternes (08:06)
So this has been put into public preview for all GitHub Copilot Pro+ subscribers. So if you have a GitHub Copilot Pro+ subscription, you can get your hands on it and start playing with it.
David Morgan-Gumm (08:16)
Give it a try.
Great. Yeah, that's something I need to get more into. I mean, at the moment, whenever I'm using an agent, I'm using one, and I will switch my models depending on what I'm doing, but everything is in series. Being able to expand that to do things in parallel would be a game changer, definitely.
Sergio Sisternes (08:29)
Mm-hmm.
Mm-hmm.
David Morgan-Gumm (08:43)
So I don't know if it's one of the things you've already talked about, but did you have a big wow moment during one of the talks?
Sergio Sisternes (08:52)
For me, it was Agent HQ: having multiple agents, expanding beyond the current experience, and being able to bring different models into the Copilot experience. For me that was: wow, that's big. It's going to help accelerate and inform development, because you can get much more input and much more powerful models into the Copilot experience. So it was like, yeah, that's...
David Morgan-Gumm (09:20)
Mm-hmm.
Sergio Sisternes (09:22)
Really cool. That's really cool.
David Morgan-Gumm (09:23)
And it's only
going to get better from here on out, isn't it? It's only the start.
Sergio Sisternes (09:27)
Yeah,
exactly.
David Morgan-Gumm (09:30)
Well, just a message for the audience: if you were at GitHub Universe, or you've followed and watched the keynotes, I'd love to hear from you, and we can discuss it further in the comments. Send me an email at mailbag@sqlsquared.co.uk. If you have any questions for Sergio as well, I'm sure I can pass them on and we can get some answers back to you. Moving on from GitHub Copilot, then, and
a little bit further into the overall world of AI: I'm not surprised at all that AI and agents were the star of the show at GitHub Universe this year. It's all the rage at the moment, and there's a lot of advancement within the industry. I remember we had a chat a few months ago and you gave a very good
description of the differences between machine learning, AI agents, and AI chat tools like Copilot or ChatGPT. I know the terms are used interchangeably, but they do mean different things. So I was wondering: how would you practically distinguish between them, and how would you describe their differences?
Sergio Sisternes (10:37)
Mm-hmm.
Hmm. Yeah, that's a very, very good question. I think when we say AI, people always mean GenAI or LLMs, because obviously what changed everything a couple of years ago was when ChatGPT came to the market. I think that was the biggest wow moment we've seen so far, because that's exactly when things kicked off, and that's what led
David Morgan-Gumm (11:16)
to the general public here, just.
Sergio Sisternes (11:21)
people to actually associate AI with GenAI. But in the end, we have AI as a field of study in computer science, with different subsets within it. We have machine learning, the branch that studies the creation of intelligent machines that learn from data. Then we have deep learning, which in the end is the use of artificial
neural networks, a subset of machine learning methods. And then within deep learning we have GenAI, which is the application of large language models to generate data that looks similar to the data they've been trained on. It mimics human language really well. So this is how we can differentiate between them: as subsets of this field of study, which is AI.
David Morgan-Gumm (12:16)
Yeah, I think some people might not realize that the field of AI has actually been around for quite a long time. A very long time. The technologies like the LLMs that we have these days have been multiple decades in development, research, and advancement. I was even working closely
Sergio Sisternes (12:22)
Yeah, long time. Yeah, long time.
David Morgan-Gumm (12:44)
with some machine learning models nearly 10 years ago, playing around with them. So I know firsthand that it's not brand new; a lot of the machine learning algorithms that we've had in the past, doing predictive analytics and things like that, have been around for a while and have been doing their job well.
Sergio Sisternes (13:09)
Yeah, exactly, and this is not new. We've now reached a moment where the technology matured. We had the field, we had the theory, and now we've got all the computing power. I think the cloud race that we saw in the last decade really accelerated the concentration of computing capacity, and we've also seen an evolution in
David Morgan-Gumm (13:15)
Yeah.
Mm-hmm.
Yeah.
Sergio Sisternes (13:35)
CPU and GPU capabilities and power. And if you combine this growth in compute power, coming out of individual players like Nvidia, with the concentration of power in cloud vendors, that's what really enabled companies to start getting more out of the theory and the AI studies that so many people worked on for so many years.
David Morgan-Gumm (14:04)
Yeah, at a scalable and configurable cost as well.
Sergio Sisternes (14:06)
Exactly.
David Morgan-Gumm (14:09)
I think if you were to invest in an in-house AI server, it would set you back quite a few tens of thousands of pounds.
Sergio Sisternes (14:16)
Yeah,
exactly, exactly. It's really expensive at the scale that you need, right? You've seen Elon Musk building Macrohard now in Texas, right? Colossus II is a massive investment, and it's the kind of scale that you need to build this kind of next-generation solution.
David Morgan-Gumm (14:24)
Yeah.
Hahaha.
Yeah.
Yeah.
Now, moving from that into the development lifecycle, I suppose. The discussions we've had in the past have been very development-focused, and the tools and features that were brought out at GitHub Universe are, again, very development-focused. I think lots of developers, especially experienced developers
Sergio Sisternes (14:45)
Mm-hmm.
David Morgan-Gumm (15:07)
that have been in the industry for quite a long time will struggle to adapt their workflows for the age of agents. Do you have any tips for them on how to create an efficient AI development lifecycle?
Sergio Sisternes (15:28)
Yeah. So we had Satya Nadella on stage for the final part of the Universe keynote, and the last question that the panel asked him was: what are the skills required to thrive in this AI world? And I think he made a very good, let's say,
reflection on that. The important bit, the important skill, is to move from what to how. With AI, we start moving away from exactly what we need to do in terms of lines of code, and we start moving to how we make things happen: how we combine the technologies, how the architecture should look, and
David Morgan-Gumm (16:04)
Ha
Mm-hmm.
Sergio Sisternes (16:23)
how the security controls should look. It's moving to how the SDLC should look rather than the exact what. Today we partner with AI agents, in what GitHub called wave two of agentic AI. In the future, we'll start seeing autonomous team members and autonomous teams that will pick up more and more work for us. And it can be a bit scary, right?
I've been a software engineer as well, and an architect, and trying to let go of some of the bits that we used to do is scary. So for me, the important bit is to have this mindset of moving from what to how. And as I was saying earlier, start with the basics and focus on the low-value tasks. In that session that we did in Manchester a few months ago,
David Morgan-Gumm (16:52)
Mm-hmm.
You
Yeah.
Sergio Sisternes (17:20)
the closing message was: there's a lot we can do today. We're all carrying technical debt; how many people have perfect unit testing, integration testing, code structure? So we can start using Copilot and GenAI tools to help us take away all this technical debt. So start with unit testing, start with
code quality issues; set a good baseline, so that when you bring in AI to do more complex jobs, you can make sure regressions are under control and AI won't struggle with a lack of documentation. And when you have that baseline... obviously AI is about data, and this is another point that we discuss with clients many times. You not only need
David Morgan-Gumm (18:03)
Mm-hmm.
Sergio Sisternes (18:16)
good models that are able to code; you also need good requirements, and AI can help you with that, starting from requirements gathering. So some of the advice we're giving to clients is: you have Microsoft Teams transcripts, so you can start storing them and using them to create technical designs, to review whether all the requirements have been captured, and to start shaping the product backlog, using Copilot itself. And then when you have a good product backlog,
grounded in your requirements, you can accelerate the process, and the coding agents will be able to deliver faster because you'll have good input for the model. Otherwise, you have what's happening now, which I think is creating a lot of frustration: engineers are basically working to fix the gap we have in terms of context for the models. You spend a lot of time writing long prompts,
manually capturing requirements, building the documents, feeding them into the agents, and when the time comes to do the fun part, which is coding, the agent takes over. So we need to fix quality and requirements. We need to streamline the whole process, so this is fun again.
David Morgan-Gumm (19:15)
Hmm.
Yeah. Yeah.
Yes, because that really builds into that how, doesn't it? There's a lot more.
Sergio Sisternes (19:38)
Yeah, correct.
David Morgan-Gumm (19:40)
A lot more work around the how than I think you realize when you start asking that question. Everything you said there, I've found out firsthand. When I started using agents for development a few months ago, none of that additional context was there, and I was struggling to provide it in the prompts and give the agent the right tools to do the right job.
Sergio Sisternes (19:46)
Mm-hmm.
David Morgan-Gumm (20:09)
And it would always make mistakes, and I'd find myself correcting it more often than not. But the more I built that context in the background, the more I used it to first go and audit the code base and create instructions itself on how to improve, and the more context I gave it behind the change that I wanted: the descriptive documentation, details on
security principles, unit testing, like you said. When you give it all of that context, it does a much, much better job. So I've definitely become a lot better at producing those background materials to help the agents be more successful.
Sergio Sisternes (21:00)
And GitHub Copilot basically gives us three core tools that live in the short-term memory space, let's say. The most basic unit is obviously the Copilot instructions, which started as a single file and is evolving into a set of files. They live within the repo but can also be expanded to organization or enterprise level, and they give the agents the basic instructions that you want them to follow.
David Morgan-Gumm (21:07)
Okay.
Mm-hmm.
Sergio Sisternes (21:29)
Instructions have been evolving really nicely, because now we can have instructions that are targeted at specific folders within the solution. It was really cool when I found out, because if you're using Copilot instructions today as one massive file, with instructions for all the things you want to cover in a project, it sometimes adds a lot of noise for the model, and the project can be massive. So now, inside the
David Morgan-Gumm (21:37)
I didn't know that.
Yeah.
Yeah.
Sergio Sisternes (21:56)
.github folder, you can create copilot-instructions.md, the general file. And then within .github you can create an instructions folder, and in each file there you can use frontmatter to say: these specific instructions apply to this folder or file pattern. So you have specific instructions for that area, and what Copilot does is aggregate the instructions for the different matches it's touching: OK, I'm going to edit this file, so these instructions apply.
It dynamically builds a combined set of instructions to be more accurate. And that's really helpful in avoiding noise, because it does happen: large instruction sets add noise and the model starts behaving a bit weirdly. Once we've got the instructions in good shape, and again, Copilot can help you shape and fine-tune them, the second
David Morgan-Gumm (22:38)
Mm-hmm.
Sergio Sisternes (22:52)
thing that I see helping, a natural next step, is using prompts. Along with the instructions folder, you can create a prompts folder, and then you can start using whatever slash command you want. For instance, one of my commands is /build-test, and another is /push: I've automated the checks I run before making a push with Copilot.
David Morgan-Gumm (23:14)
Okay.
Sergio Sisternes (23:20)
Run the unit tests, run the linting, check the code builds. It's all the steps that I always follow for certain tasks in my day-to-day, automated as a command. So I ask Copilot: please run this for me, I don't want to do all the manual checks before I push. Because it's always the same, as I was saying. It's boring, it's not adding value. So I ask Copilot to do these validations for me while I focus on other tasks. So that's instructions;
David Morgan-Gumm (23:44)
Okay.
Sergio Sisternes (23:49)
prompts is the second one. And then you have agents, which is something that evolved in the last few months. Initially they were called chat modes; I don't know if you've heard about chat modes.
David Morgan-Gumm (23:59)
Yeah, I've used the new agents feature. I've created a custom one for audit purposes: basically something I can go and ask a question, but it won't make any changes. It'll just go and audit the code base, or the change that I want to make, or the bug that I have, and then return a report, and I can feed that report into another agent that will go and make the change for me.
Sergio Sisternes (24:10)
Mm-hmm.
Correct. And for me it's really useful because you don't have to keep switching tools. One of the frustrating things when you have 200 tools is: OK, now I'm in architect mode and I want to use this specific tool from the tool set. With an agent you can define the expected behavior and the tools you want it to use, so you just switch mode and ask. If you start combining these three elements, it really helps you work with Copilot's short-term memory and behavior.
David Morgan-Gumm (24:23)
Hahaha
Yeah.
Mm-hmm.
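For listeners who want to try this setup, here is a minimal sketch of how those three layers might be laid out in a repository. The .github file and folder names follow Copilot's documented customization conventions; the specific globs, command names, and mode names are hypothetical examples.

```
.github/
  copilot-instructions.md      # general, repo-wide instructions
  instructions/
    api.instructions.md        # scoped rules; frontmatter such as
                               #   applyTo: "src/api/**"
                               # limits them to matching paths
  prompts/
    build-test.prompt.md       # reusable prompt, run in chat as /build-test
  chatmodes/
    auditor.chatmode.md        # custom chat mode/agent with its own
                               # instructions and allowed tool list
```

When Copilot edits a file, it combines the general instructions with whichever scoped instruction files match that path, which is the dynamic aggregation Sergio describes.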
Sergio Sisternes (24:52)
But as you
were saying, David, you need to start working more on the context. And this is what we're telling clients: start collecting your transcripts. Make sure your documentation starts to be more up to date; try to use Copilot to update it. And then what we are doing is using AI Foundry, for instance, building RAG pipelines and graph pipelines to start extracting information from
these different data sources, and then in Azure AI Foundry creating AI Search vector indexes. We can then plug this into GitHub Copilot using MCP servers.
David Morgan-Gumm (25:31)
Mm-hmm.
I need to I need to talk to you a bit more of that maybe outside of the podcast. That's something I'm very interested in.
Sergio Sisternes (25:41)
Yeah.
Because if you join everything together, as we explain to clients as well, you have the short-term memory, which is the context window and the structure that you're creating for the agent. But obviously you also need longer-term memory. So for me, the long-term memory is split into two big areas. There's the intermediate memory, which can be
David Morgan-Gumm (25:52)
Yeah.
Sergio Sisternes (26:10)
GitHub Issues, or ADO boards: the tasks that we as engineers created in the past, where we describe what we need to do in more detail, the feature, the epic, the task. That's the intermediate memory. We're using these GitHub Issues or ADO board tasks as a place where we work with Copilot to refine the requirements, and
David Morgan-Gumm (26:35)
Mm-hmm.
Sergio Sisternes (26:39)
then go to the long-term memory, to the embeddings in AI Search, extract the information, and validate that all the instructions and all the content that we need for the context is prepared in the task. And then the task goes to Copilot as an input. So it's from long-term to short-term memory, let's say.
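As a rough illustration of that long-term-memory lookup, here is a minimal Python sketch of querying an Azure AI Search index of the kind Sergio describes. The endpoint, index name, and content field are hypothetical, and in a real setup this retrieval would typically sit behind an MCP server tool rather than be called directly.

```python
# pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Hypothetical search service and index names.
client = SearchClient(
    endpoint="https://example-search.search.windows.net",
    index_name="project-docs",
    credential=AzureKeyCredential("<api-key>"),
)

def fetch_context(question: str, top: int = 3) -> list[str]:
    """Pull the most relevant documentation snippets for a task,
    ready to be attached to a GitHub issue or an agent prompt."""
    results = client.search(search_text=question, top=top)
    # Assumes each indexed document exposes a 'content' field.
    return [doc["content"] for doc in results]
```

The flow is the point: retrieve from long-term memory, shape the result into the task, and hand Copilot that task as short-term context.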
David Morgan-Gumm (26:59)
Yeah, I suppose the transition between those two points is very important to getting the right output.
Sergio Sisternes (27:04)
It's not much different from what we've done in the past, when you think about it. We used Word documents to capture the requirements for clients, created the tasks, and then handed them over to engineers to do the implementation.
David Morgan-Gumm (27:07)
No. Yeah. Yeah.
Yeah, it's transferable human skills when you actually think about it.
Sergio Sisternes (27:22)
Yeah, correct.
David Morgan-Gumm (27:24)
But we started talking about tools. And one thing that I've found is really important, especially when providing instructions and context, is using the right tool for the right job. You'll find there are huge improvements in the output of your prompts if you're using the right tools. You just touched on the
agents in Visual Studio Code, and also the tools that sit in there; tools might be provided by MCPs or other extensions that you've got installed. I remember a few months ago it was so frustrating that it only took, was it 128 tools that you could have active in any one prompt? And then, without the various agents that you could configure,
Sergio Sisternes (28:01)
Hmm.
Yeah.
David Morgan-Gumm (28:22)
you'd have to go and manually untick and change the tools that you wanted. But now I've noticed, I think since that agents feature release, that if you don't bother selecting which tools you want to use, which sometimes I don't, it does an automatic optimization of the tool set behind the scenes. There are
Sergio Sisternes (28:33)
Mm-hmm.
David Morgan-Gumm (28:47)
no longer any errors telling me I can't have more than 128 tools selected, making me go and untick things to get under the limit. It goes and optimizes itself to pick those tools.
Sergio Sisternes (28:55)
Yeah.
David Morgan-Gumm (28:59)
Expanding on that, I know that you work mostly on the architecture and design of software and cloud solutions. From that standpoint, for those listening out there, what does that term, the right tool for the right job, mean for you?
Sergio Sisternes (29:08)
Mm-hmm.
Yeah, it's a good question. So for me, in the Copilot era, the right tool for the right job means that we have to make a careful choice about the inputs that we give to AI. As you said, we can have 5,000 tools, and the combinations of those 5,000 tools
can produce very different outcomes. And going back to the how: we need to think about how we combine them, what the right tool combination is to put in the mix, to make sure that when AI starts working it makes the right decisions in choosing and combining them. Because if you give it too wide a range of tools... sometimes I've seen, for instance, Copilot trying to build Python scripts to do things that you can basically do with the command line. It's like:
David Morgan-Gumm (29:57)
Mm-hmm.
Sergio Sisternes (30:17)
instead of using Python, just use the command line. And that's the consequence of having too many options and too many tools; when you have too many choices, things can go wrong. So we need to be careful with the how. And one of the side events we had at GitHub Universe was a really cool one at GitHub HQ called MCP Universe, a really nice three-hour event at
David Morgan-Gumm (30:23)
Yeah.
Sergio Sisternes (30:45)
their HQ, as I was saying, where we had Anthropic and other companies talking about all things MCP. Because when it comes to tools, MCP, the Model Context Protocol, is right now the leading community effort to bring more and more tools into AI. And there were very good conversations as well, not only about using MCP servers, but also about building them.
Should we be building them? How do we build them? There's a gap in how we do it, because everyone is building MCP servers, but you might need something specific to you. And there was a very interesting discussion in the room: should we take an API and build an MCP server around it one-to-one?
David Morgan-Gumm (31:18)
Mm-hmm.
Mm-hmm.
Sergio Sisternes (31:36)
Should
we shape MCP servers around intents and a combination of APIs to satisfy specific needs? Again, it's the tooling conversation: making sure that we build tools that are really relevant for AIs, and not just throwing everything at them and hoping for the best, let's say.
David Morgan-Gumm (31:58)
Yeah, I went to a talk recently, actually, where they were discussing the development of MCPs for APIs, something I took a lot of interest in. I've always weighed up the question in my head: when it comes to MCPs, is it better to have an MCP that does a lot for a business, one MCP that links to multiple systems and is able to manage
different processes? Or is it better to have an MCP that's pinpointed, built for a specific reason, so you link to it for that specific reason, and when you have your agent flows in the background you use your various MCPs for the various jobs, again, the right tool for the right job? Or is it best to have something that
does a lot more in one system?
Sergio Sisternes (32:55)
Yeah, I think that was part of the discussion as well: how you balance it. Is it a one-to-one API, or a combination of APIs? I think it's more about shaping around behaviors and shaping around intents. What do you want to do with this? Are you trying to
David Morgan-Gumm (33:04)
Yeah.
Sergio Sisternes (33:18)
give booking capabilities to Copilot? Then you're creating a booking endpoint that can search for hotels, then book, then make the payment. Or are you creating a raw API and hoping for the AI to combine things? I think we need to balance: it shouldn't be a one-to-one API, and it shouldn't be a big endpoint that is so vague the AI really doesn't know what the endpoint or the MCP does. Because that's the other risk. If you have a very generic MCP server... what Copilot or any model does to choose tools is go to the descriptions and try to understand what each tool does. This is how the optimization process works: OK, I have 400 tools, what are the descriptions, what's my job? And it starts planning. So if you find the right balance between vagueness, or generalism, and low-level API calls,
Copilot will be much better able to say: that's the right tool for the job, and pick it up.
David Morgan-Gumm (34:22)
For those that are looking at building MCPs: I did a bit of a proof of concept, building one for a connection to one of our work SQL Servers. It's very, very simple, and I found it was actually not as hard or scary as I thought it was going to be. What advice would you give to someone who's attempting to build an MCP themselves? What technologies should they use?
Sergio Sisternes (34:31)
Mm-hmm.
Mm-hmm.
So in terms of technology, obviously Copilot: use Copilot, build with Copilot. It's really helping us accelerate the process. In terms of languages, use the one you're most comfortable with. MCP in the end is a protocol, an open-source effort to connect agents or LLM models to external sources in a standardized way, so it's solving a critical problem.
David Morgan-Gumm (34:54)
Hehehehehe
Mm-hmm.
Sergio Sisternes (35:17)
And some of the panelists were using the term: it's like TCP for agents. So if you're building one today, you can use Python, you can use TypeScript; choose the tools you're most comfortable with. But think about the problem you're trying to solve: what is missing today in the model context that you're trying to bring? And try to shape that into intents: trying to get the average sales, or trying to get, I don't know,
the customers that are currently active in our service system. Try to find the intent that you're trying to feed into the context of the agent, and start using Copilot to shape the MCP from there. Don't go there and say: I'm going to create an MCP where I can run a SQL query against the database.
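To make that intent-first advice concrete, here is a minimal sketch of an MCP server using the official MCP Python SDK. The server name, the tool, and the figures are hypothetical; the point is that the tool exposes a business question rather than a raw SQL surface.

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sales-insights")

@mcp.tool()
def get_average_sales(region: str, year: int) -> float:
    """Average sales for a region and year."""
    # Hypothetical stand-in for a real database or API lookup;
    # the SQL stays hidden behind the intent-shaped tool.
    sales = {("uk", 2024): 1250.0, ("us", 2024): 3100.0}
    return sales.get((region.lower(), year), 0.0)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so an agent can attach to it
```

Compare that with a generic run_sql(query) tool: the model would have to plan the whole query itself, whereas an intent-shaped tool gives it one clear, describable capability to choose from.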
David Morgan-Gumm (36:04)
Yeah, yeah, exactly. Now, you mentioned the word intent there. I think that's a very important term to take into the development of these things. You have to know what you're trying to do.
Sergio Sisternes (36:15)
Correct. More and more with AI, as we move from what to how, knowing what we're trying to achieve, the intent, is becoming our critical core element as engineers.
David Morgan-Gumm (36:22)
Yeah.
Yeah, I think historically, or at least in my experience, development of certain platforms, products, and tools can be rushed, and developers figure it out as they go along. You've got some high-level requirements, but you don't really know how it's all going to piece together until you start developing it. And I think that's one of the reasons lots of technical debt appears later on down the line.
Sergio Sisternes (36:46)
Yeah.
Mm-hmm.
David Morgan-Gumm (36:57)
There's a flip of skills here, where you need to understand how it's all going to fit together, how it's all going to work, and the intent behind the code that you're writing, to use the agents effectively. That's my biggest learning so far from this conversation.
Sergio Sisternes (37:13)
Yes, and I think that's a very good point, David, that I've been discussing with some people as well: in the end, we are underestimating how useful AI, how useful Copilot, can be in the, let's say, creative stages. We don't know exactly what we want to do or how we can do it, so we can ask Copilot to draft ideas. And once we
David Morgan-Gumm (37:30)
Yeah.
Sergio Sisternes (37:41)
get our heads around the problem, we say: OK, that's it, I want to do this, this, and this, and this is how all the pieces fit together. Then you can ask Copilot: OK, clean this up into this structure. And then you can bring in your engineering experience: structure the code in this way, refactor this, refactor that. And then it can really help you accelerate, as you were saying; you have some technical debt that you can quickly tidy up.
David Morgan-Gumm (38:01)
Mmm.
Yeah. It's like you have to have more of the mind of an architect than the mind of a developer.
Sergio Sisternes (38:18)
Yeah, I think it's evolving and you start having a bit of both, right? It's just that the low-level tasks no longer go only to juniors, which is how our profession evolved over time. I started in this world like 15 years ago, so I'm not that old, but it's been 15 years now, and you always had junior people coming in, learning from the seniors, and taking on more complex tasks over time.
David Morgan-Gumm (38:22)
Yeah.
Yeah.
Yeah, yeah.
Sergio Sisternes (38:44)
The problem that we have today, and it's a very interesting topic as well: I'm mentoring some juniors who are asking me, Sergio, I need help, how can I get into this world? The problem today is that obviously you can give all these junior tasks to AI, and AI will do them for you. So we are moving from the simple tasks to the more complex tasks, because AI can take the simple ones. But we also need to balance this and give juniors the opportunity to learn with AI. And juniors, if you're hearing me:
David Morgan-Gumm (39:07)
Yeah.
Sergio Sisternes (39:13)
you can learn a lot from Copilot. So for me, it's not about taking juniors out of their roles. And it's a call as well for companies to hire more juniors: give them the opportunity, give them Copilot. They can learn much faster than I did 15 years ago. And we need to combine seniors and AI, and juniors and AI; they will work together. We need to give juniors simple tasks so they can work with AI to solve them, and then the senior people will
David Morgan-Gumm (39:19)
Yeah, definitely.
Sergio Sisternes (39:43)
work with AI to solve more complex problems. And this will help us to continue having this flow of talent that we need in the world.
David Morgan-Gumm (39:51)
Yeah, that's a really good point. When I work with the juniors in the company I work for, I always say to them that I've got no problem with them using Copilot, no problem with them using agents: bring it into your workflow, enhance what you do, make what you do better, but make sure you understand what it's done. I've had a few times where people have been at the start of that agent
journey, where, like I said, they're not providing the context. They haven't got the additional resources around it. They're not providing that how; they're still thinking about the what. And they go and try to develop using the agent, and what comes out is a mess. They don't understand it, they haven't reviewed it, they bring it to me and I'm like: what on earth is this? This just doesn't work, and I'm not going to be able to scale it. So teaching people to
Sergio Sisternes (40:28)
Hmm.
Mm-hmm.
Yeah.
David Morgan-Gumm (40:49)
apply that how is, I think, a very important message from yourself. Teaching people to do that properly, especially at junior level, will give them the right foundations to actually make a success of using AI and still feel growth in their careers as well.
Sergio Sisternes (41:07)
And it's very important, as we always did, to give juniors tasks that they can manage. Because if you give a big task to a junior today, they can ask Copilot to do it, and Copilot can do lots of things, but it can also get out of control really quickly. And as you said, then they don't know exactly what they're seeing on the screen, they don't understand it, and obviously they go to the senior person and say: I need help,
David Morgan-Gumm (41:24)
Yeah.
Sergio Sisternes (41:33)
I made a mess here, I don't know how to deal with this. Sometimes the answer is to reduce the scope of the task. Even for seniors, it's about chunking the big task into smaller tasks, so Copilot doesn't expand too far, because it can write code much faster than we can understand it and build it in our minds; we need to visualize the code. So chunk the job, as we always did, into smaller tasks, fix smaller problems, and then take it from there.
David Morgan-Gumm (41:35)
Hahaha.
Yeah.
Sergio Sisternes (42:00)
And David, you made a very good point as well. There are heated discussions on LinkedIn that I always jump into. I saw a post the other day where basically a guy was saying that he saw people using AI as lazy, and that he wasn't hiring laziness. My answer was: if that were the case, then in the past we wouldn't have hired people using IDEs and Stack Overflow. We would have hired people
David Morgan-Gumm (42:08)
No.
Yeah.
Sergio Sisternes (42:30)
coding in low-level programming languages and doing no research. And I think that's not the case. The important bit is that we need to embed the tools in the process. In my team, I want people that are really able to embed the tools and evolve the process with the tools that are available. But as you said, the important thing is that you need to understand what's coming out. So in interviews we shouldn't be valuing the process.
David Morgan-Gumm (42:48)
Mm-hmm.
Sergio Sisternes (42:57)
We need to value the outcome, and the ability of engineers to explain the outcome, because that means they are using the tools to think about the problem and the how, and they are in control of what they are doing. Otherwise, that's laziness. If you're vibe coding and you don't know what you're producing, you're lazy. That's what we need to be evaluating in interviews now.
David Morgan-Gumm (43:12)
Yeah, exactly.
Yeah, I've started asking questions in the interviews that I've conducted: how do you use AI? How are you implementing it in your workflow? Because I want people to use it. Why would you purposefully not use something that can be so beneficial to your efficiency, in terms of both time and cost?
You know, the skills that you can give juniors by using AI, skills they would never have had before, are insane. They've just got to not be lazy: stick with it, work with it, build with it, understand what it's done. And it can be an amazing learning tool on top of everything. That's what I've found. I mean, I'm a data engineer.
Sergio Sisternes (43:53)
Hmm.
Hmm.
David Morgan-Gumm (44:15)
I thought I knew Python. I really thought I knew Python. Then I started building more complex integrations, data movement pipelines and processes, and notebooks with agents, and I found I learned so much. I was like, oh my God, I didn't know this feature existed, or that feature, or this process, or that architectural principle. And I feel like within six months I've
Sergio Sisternes (44:43)
Mm-hmm.
David Morgan-Gumm (44:44)
doubled, even tripled, my ability with Python compared to what it was before, just from using AI and actually walking through it and trying to understand what it's done.
Sergio Sisternes (44:56)
Absolutely, and that's the key, to be honest. I recently wrote a post on my Medium page called, a bit click-baity here, The 100x Engineer, building on the concept of the 10x engineer. And the message there is that with AI we now have a next stage. With AI, you have the ability to
David Morgan-Gumm (45:11)
Yeah.
Sergio Sisternes (45:23)
go from 10 to 100, because it's helping you multiply your capabilities. So if you are 10x today, you can be 100x: a different league and a different ability to deliver value to the business. But as we say, AI helps you for the good and the bad. AI magnifies the good: you're a 10, you become 100. But if you are not really doing well at your job, and you are not careful and don't pay attention,
you are magnifying those weaknesses, those bad habits; you are becoming worse. So you need to be very careful. In the article I covered how performance in the end is a combination of four factors: your human skills and your experience as an engineer; your human-AI skills, your ability to master tools like Copilot and to craft context, et cetera; the AI capabilities
themselves that are available to you in your company; and your attitude. Because if you don't have the right attitude, you can have the best tools, you can be the best in the world, but you just don't get it. It's really important that we see performance today as this combination of four. Because if you are saying, no, no, no, I don't hire people that use AI, you are basically out of the game. Performance today will be
David Morgan-Gumm (46:42)
Yeah. Yeah.
Sergio Sisternes (46:44)
your human-AI skills combined with the rest of your skills, the tools that you have, and your attitude.
David Morgan-Gumm (46:50)
You mentioned rejecting people that use AI, and I mentioned that I actively look for it. Would you recommend, especially for the hiring managers, seniors, and lead developers out there, that they onboard AI to take them to that next step, from 10x engineer to 100x engineer?
Sergio Sisternes (47:14)
If you want to remain competitive in the market, you don't have any other choice. You have to do it. It's absolutely necessary, and you have to focus on the outcomes. You want people on your teams who know how to use these tools, because otherwise others will, and then you won't be competitive. You'll be out of the game.
David Morgan-Gumm (47:18)
You don't have a choice. It's absolutely necessary.
Mm-hmm.
Yeah, so one last point. I've heard a lot in the news recently, within the tech space. You might have seen that GitHub Universe was over in Silicon Valley, and I know companies over there have been adopting this process of 996 working: you work, what is it, nine till nine, six days a week, just to
Sergio Sisternes (47:53)
Mm-hmm.
David Morgan-Gumm (48:01)
try to get ahead of the competition, getting people working on new systems and R&D experiments as much as they can, to be a leader in this AI space. That's obviously something I definitely never want to do. But I feel like using
Sergio Sisternes (48:19)
Hmm.
David Morgan-Gumm (48:29)
agents and AI to their full potential within your normal workday can kind of take you there. It's that bump to get ahead of the competition. It's like having two or three extra developers on your team: you're effectively working more hours than you actually are, if you're doing things in parallel, side by side.
Sergio Sisternes (48:58)
Hmm.
David Morgan-Gumm (49:00)
And that's the analogy that I think of when implementing AI within a business. If you compare a business working 996 and a business working nine to five, five days a week, inevitably the one working 996, with the right people and the right implementations in place, will probably get ahead. And having AI versus not having AI in your workflows has, to me, the same effect.
Sergio Sisternes (49:14)
Mm-hmm.
Exactly. And imagine doing 996 with AI versus without AI; people who don't use AI are out of the game anyway, 996 or not. And the problem right now is that we are in an arms race. Everyone is rushing because they want to be ahead, and this goes well beyond technology. You see all the governments, the US, China; everyone is rushing, building data centers, building nuclear plants,
David Morgan-Gumm (49:37)
Yeah, yeah. Yeah.
Yeah.
Sergio Sisternes (49:57)
securing chips, securing rare earths. It is an arms race, and it's a political game as well. Everyone is rushing right now. But in terms of productivity, some clients ask, to your point, David: how much time am I going to save? How much less time will I spend developing? And my answer is always that I think there's enough
technical debt and enough to do that, rather than asking how much less time we will spend developing, we should ask how much more we can do for the business. How will we help our business deliver more business value through software? How much technical debt can we get rid of today to help us accelerate the delivery of more business solutions? Because in the end, it's always the same:
David Morgan-Gumm (50:32)
Yeah, people are thinking about it in the wrong way.
Sergio Sisternes (50:53)
Technology is not the end, it's the means, right? We are using technology to help the business be competitive, to deliver business value to our clients. So we need to think in those terms. We have AI, and we work nine to five, Monday to Friday: how much technical debt can we get rid of? How many applications can we modernize? How can we make our lives easier, not because we are lazy, but because why should we be spending time on
things that we can automate? Then we can deliver more business value for our business and our clients. And I think the conversation needs to shift to technical debt, optimizing the SDLC process, and evaluating the business impact that we're having, not from a time perspective, but from a business value and features perspective.
David Morgan-Gumm (51:41)
Sergio, this has been an incredibly insightful conversation, and I'm going to take away a lot from it. Thank you for bringing us all these fresh insights directly from GitHub Universe and for breaking down all this new tech and what it actually means for those of us on the ground. It was a real pleasure.
Sergio Sisternes (52:02)
A pleasure. David, thank you so much for having me. It was really fun to be here with you today. All the best with Copilot, everyone. Thank you.
David Morgan-Gumm (52:07)
Thank you. Yeah. And the same to you.
And a huge thank you to everyone listening and watching, as always. If you enjoyed this chat, please do us a huge favor: like, subscribe, and hit that notification bell on YouTube. And wherever you're listening, whether it be Spotify, Apple Music, or any of the other platforms this goes out to, please make sure to subscribe there. You can visit
podcast.sqlsquared.co.uk to see all the episodes in one place, with their descriptions, so you can find the right episode for you from the past ones we've had.
SQLsquared.co.uk also has a blog, so if you want more articles and content, you can visit my site and read those; there are some interesting reads there, and I'm sure I'll be writing about a lot of these AI agents in the future. Finally, if you have a question that you'd like us to answer on the show, especially about AI or cloud development, send it to mailbag@sqlsquared.co.uk.
Until next time, keep building, keep learning, stay curious. Cheers, everyone, and thank you again, Sergio.
Sergio Sisternes (53:26)
Thank you.