software engineering in 2025

from reading documentation, scouring stack overflow and watching youtube videos back in late 2023, to having an AI assistant in the side panel of my IDE reading and writing 80-90% of all my code (yes, that is the truth) by mid 2025, a lot has changed in how we write software. This is more of a personal record and checkpoint of my current development workflow, something to look back on later when what I do for work right now becomes obsolete lol. I also have a few thoughts on what the future looks like for us engineers, how our jobs are going to evolve and what we should focus on next, but we will come to that later.

how we got here

back in mid 2023, when OpenAI released function calling, LLMs suddenly gained the ability to become agentic, i.e. call external tools to get information and perform actions. This let AI workflows go beyond simple text completion to doing math, calling external APIs, and generating structured output. It was still nascent, and the models at the time weren't trained well enough to be highly accurate with the function calls they generated, which limited these downstream capabilities, but it started igniting ideas in people's brains to build agentic workflows. The first attempts were deterministically stitched DAGs with known branches for different workflows. Frameworks like LangChain came into the picture and conveniently abstracted away the prompt stitching, the API parameters and the entire request. Soon after, people started thinking about agent patterns that would expand the list of workflows LLMs could take over while also making these agent systems smarter. A few of these agent designs became standard:

- a planning agent, which plans the next actions
- a router agent, which routes between multiple agents and tools
- a reflection agent, which reflects on output and improves it
- a RAG agent, which fetches the most semantically similar context from knowledge repositories
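the tool-calling idea above can be sketched in a few lines: instead of replying in prose, the model emits a structured call, and a thin dispatch layer executes it and feeds the result back into the conversation. Everything here (the `TOOLS` registry, the `dispatch` helper, the stubbed `get_weather` tool) is a hypothetical illustration of the pattern, not any specific framework's API.

```python
import json

# hypothetical tool registry: names and callables the "model" may invoke
TOOLS = {
    "add": lambda a, b: a + b,
    "get_weather": lambda city: f"sunny in {city}",  # stubbed external API
}

def dispatch(tool_call: str) -> str:
    """Execute a structured tool call emitted by the model.

    `tool_call` is the JSON the model returns instead of plain text,
    e.g. '{"name": "add", "arguments": {"a": 2, "b": 3}}'.
    """
    call = json.loads(tool_call)
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    # in a real loop, this result goes back to the model as a new message
    return json.dumps({"name": call["name"], "result": result})

print(dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}'))
# → {"name": "add", "result": 5}
```

a router or planning agent is essentially this same loop with the model deciding, at each step, which entry in the registry to call next.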

I was lucky enough to be working on a project which required me to go through research papers on all of these design patterns and implement them for a marketing agent we were building at the startup I worked at. This let me see the entire journey of how LLMs evolved into systems, over a blazingly short span of time.

now we have tools like Cursor, Windsurf, Copilot and Devin, which have used these patterns in a much better way to build abstracted systems for engineers to use to build software. These systems do feel like magic when you're able to build software in english, but most of the tweets you see about software engineering being dead are exaggerated at best. I'll give you an example of what my work as an engineer in mid-2025 looks like.

what my day actually looks like

the work I do is divided into three parts:

solution design

I’m responsible for solving abstracted problems (mostly around product engineering) assigned to me by the product teams and passed down through engineering management. These are problems which are big, hairy and have a lot of unknown unknowns. To be honest, this is the space I enjoy working in the most as it’s chaotic, difficult and still requires me to think a lot. My work here starts with understanding the customer’s demand, absorbing the PRD and thinking about the functional requirements that I need to bake into the product. I then start building the technical spec for everything that needs to be developed. Apart from being an iterative cycle of communication with my product and engineering leads, where I need to get approvals and feedback on my solution design, it is also a lot of back and forth with other teams whose code/features I’m touching while building mine. To format and structure my thoughts and solution into a document, I do use a lot of ChatGPT and Gemini. But as you can see, it’s not just throwing the PRD at Cursor, giving it context of my codebase and asking it to spin up a solution. It is a lot of product thinking, customer context and communication before the idea even goes to AI.

implementation scoping

this tech spec is fed into LLMs and a v1 draft of the implementation scope is created. This is the part where, if an internal API is to be integrated into the design, I need to go through the Postman collection, understand that M2M authentication through Cognito is required, and put all of this information in the implementation scope. It is also the part where I trim the solution down into what we can do, what we can do but later, and what we can’t do and why. I also start thinking about edge cases as I go through each line of the tech spec in detail. This iterated implementation doc is again put up for review and updated accordingly. Once these two are finalized, I finally move to Cursor to execute.
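as an example of the kind of integration detail that lands in the implementation scope: M2M authentication through Cognito is the standard OAuth2 client-credentials grant against the user pool's `/oauth2/token` endpoint. This sketch only builds the request; the domain, client id, secret and scope are placeholders you'd pull from your own Cognito app client configuration.

```python
import base64
from urllib.parse import urlencode

def build_cognito_m2m_request(domain: str, client_id: str,
                              client_secret: str, scope: str):
    """Build the OAuth2 client-credentials request for a Cognito M2M token.

    All argument values are placeholders; real ones come from the
    Cognito app client config shared in the internal API's docs.
    """
    token_url = f"https://{domain}/oauth2/token"
    # Cognito accepts client credentials as HTTP Basic auth
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {basic}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({"grant_type": "client_credentials", "scope": scope})
    return token_url, headers, body
```

these pieces would then be POSTed (e.g. `requests.post(token_url, headers=headers, data=body)`), and the `access_token` in the JSON response goes into the `Authorization: Bearer` header of every subsequent call to the internal API.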

execution

for a new feature/functionality, I have my tech spec and implementation doc ready to be fed as context to Cursor. My prompts always involve asking the LLM to critique my solution and ask me clarifying questions before starting execution; so far this has helped a lot in keeping the agent grounded tightly to context and preventing hallucinations. The agent is currently able to get 95% of the code right most of the time, but sometimes I find myself intervening and guiding it towards a better implementation. I also sometimes catch bugs when I analyze the diff, and I prompt it again to fix them, telling it why what it has written is wrong. With a bunch of back and forth, I’m able to guide the agent to get 100% of the solution right. I usually let it create and run its own unit tests and just do a few manual tests for sanity before I raise the PR for review. Going through the diff, understanding the changes and calling them out becomes really important though. You can’t build blindly when it’s a customer feature tied to millions of users on your platform.
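the critique-first prompting pattern above can be sketched as a small template helper. The section names and exact wording here are my own assumptions, not a Cursor feature; the point is just that the docs go in as context and the agent is told to push back before it writes code.

```python
def build_prompt(tech_spec: str, impl_doc: str, task: str) -> str:
    """Assemble a grounded prompt for a coding agent.

    Mirrors the workflow above: both docs as context, plus an explicit
    instruction to critique and ask questions before implementing.
    """
    return "\n\n".join([
        "## tech spec\n" + tech_spec,
        "## implementation doc\n" + impl_doc,
        "## task\n" + task,
        "Before writing any code: critique this solution design, "
        "list any edge cases I have missed, and ask me clarifying "
        "questions. Do not start the implementation until I answer.",
    ])
```

the final instruction is what forces the back-and-forth: instead of one-shotting a diff, the agent surfaces its assumptions first, which is where most of the hallucinations get caught.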

apart from this, another execution item I focus on is bug fixes, where I monitor Datadog dashboards and do an RCA on any repetitive error coming in. I create an error report, go through the code to understand where the error is coming from and then again hop into a conversation with Cursor to attempt a fix. Here it gets it right 99% of the time, but that 1% is when it over-engineers solutions (I still think this can be solved by good prompting).

so all in all, you can probably tell by now that most of my actual brain work goes into breaking problems down into solutions, figuring out how to implement those solutions, and managing and guiding LLMs to do it for me. Another alpha I have here is good context on the codebases that power the product, along with some engineering experience. This helps me with designing solutions as well as identifying when AI is bullshitting me. So I would say, if you’re an engineer, don’t stop coding lol.

where we’re headed

this also brings me to how I think about the future of software engineering. I feel that for the next few years, until the models become smart enough to solve entire problems in one shot without any errors, there will always be a need for engineers to be around. The job of early-career engineers will shift towards guiding, operating and reviewing these coding agents. The jobs of experienced engineers will start intersecting more with product thinking, where they design software with taste. That taste will come from the experience they have and the tight communication loops they will have to maintain with product and leadership teams. When it comes to the number of people working on a single problem, I strongly feel it will reduce drastically. I personally now take 1/5th the time to build something compared to how I did things in late 2023. But that doesn’t mean the amount of work to be done gets reduced. That’s the nuance people miss. There will always be new problems, there will always be more things to do, and the floor now opens up for people to think about these instead of putting days and nights into a single problem.

the best thing we can start doing is embracing AI, using it to be more productive and exploring all the different tools we can arm ourselves with to get the job done. That is the most important thing, and it has mattered for a while now. We should still stay in touch with code as well as these engineering systems, and we can do that through coding challenges, understanding and writing code along with AI (auto-complete helps here) and also spending time understanding system architecture. The low level execution first got abstracted away with assembly, then with high level languages and now with english. But the ability to solve problems, understand software systems and have taste will remain constant as important skills. I would also encourage pen and paper math or reasoning to keep our brains analytical, as it’s too easy to just tab tab tab and accept and run, paste errors and get things running. It’s important to be intentional with the things we do, and that includes overseeing a coding agent running code.

there will be more jobs which open in the intersection of software, AI and hardware (think robots) but I will cover that in a later blog.