
AI agents are quickly evolving from experimental tools to essential components of LLM-based applications. While previous articles explored the general concept and architecture of AI agents, including LLM agent orchestration, this one dives into a more focused and practical use case: autonomous coding agents.
TL;DR:
- AI Teammate is an autonomous coding agent that picks up Jira tickets and delivers ready-to-merge pull requests—no IDE or manual setup needed.
- It writes code, responds to PR comments, and adds tests and docs, acting like a junior developer embedded in your team.
- By automating routine tasks and integrating seamlessly with GitHub/GitLab, it boosts engineering productivity, empowers non-devs to trigger changes, and shows what’s next for LLM-powered software development.
From Jira to Pull Request in Minutes — No IDE Required
Imagine this: you create a task on your Jira board, and within moments, an AI agent opens a ready-to-review pull request in your Git repository, implementing the issue. You leave a comment on the PR, and the agent quickly responds by addressing your feedback and updating the code. You prepare a pull request with the implementation of a new feature, and missing tests and documentation updates are added in a separate branch instantly.
No IDEs. No manual coding. No context switching.
Sounds Ambitious? It’s Already Happening!
In this article, we’ll walk through the design, capabilities, and advantages of AI Teammate – our AI coding agent built to work with your project board and codebase. Whether you’re a developer, tech lead, or project manager, this solution will change the way you think about task execution and developer productivity.
Fig. AI Teammate automation flow.
What (or Maybe Who) is AI Teammate
AI Teammate is a cloud-native, LLM-powered agent designed to act like a junior developer embedded in your team. It seamlessly integrates with tools like Jira, GitLab, and GitHub, and supports a wide range of language models – from Claude, Gemini, and GPT to private LLM deployments customized for companies with strict safety requirements. This flexibility allows teams to align AI Teammate with their preferred tech stack and data governance policies.
From ticket creation to a fully-formed pull request, AI Teammate automates the entire development workflow for routine engineering tasks. It significantly reduces manual development time by independently interpreting task descriptions, generating high-quality code, and pushing changes to your repository.
But its capabilities do not stop there. AI Teammate continuously monitors pull request discussions and automatically responds to reviewer comments – updating code, applying suggestions, and refining logic with minimal human intervention. This close feedback loop helps accelerate the review cycle and keeps the development momentum high.
What Makes AI Teammate Valuable
AI Teammate is not just about writing code – it is about making the entire development process faster, smoother, and more collaborative. Here is what makes it especially useful for modern engineering teams:
- Reduces developer overhead by automating repetitive or low-complexity coding tasks, freeing engineers to focus on high-impact features and architectural work.
- Provides a scalable foundation for internal tooling and R&D efforts, acting as a digital co-developer across any team that uses project management tools such as Jira and Git-based workflows.
- Bridges the gap between planning and implementation by empowering non-developers – such as product leads, analysts, and project managers – to initiate code-level changes simply by creating or commenting on tickets.
What AI Teammate Can Do
Let’s take a look at what features AI Teammate can bring to your development workflow.
- Solving Git and Jira tickets
Once a Jira ticket is assigned to AI Teammate, the agent automatically takes action – just like a junior developer would. This automated software development process begins with analyzing the ticket description, implementing the required code, and pushing the changes to a newly created branch in the Git repository. Then, it opens a pull request, assigns the ticket reporter as the reviewer, and adds a relevant label to clearly indicate that the changes were made by the AI agent.
At the same time, the agent updates the Jira or Git board by moving the ticket from the TODO column to IN PROGRESS, keeping project tracking aligned in real time without manual updates.
For newly created issues, AI Teammate can also auto-fill the issue description with a predefined template, prompting the issue creator to provide sufficient task details and context.
- Addressing PR reviews and comments
AI Teammate actively monitors pull requests it’s assigned to or mentioned in. It reads reviewer comments, makes the necessary changes, and updates the pull request – automatically addressing the feedback.
It supports both general review comments and individual inline comments, though full review submissions are preferred for better performance and traceability.
- Generating tests and documentation
When a developer opens a new pull request, AI Teammate automatically generates suggested tests and documentation updates based on the code changes. It then checks whether the generated tests pass. If they fail, the agent updates either the original code or the tests to fix the issue and tries again. This process repeats until all tests pass or the maximum number of retries is reached.
The final suggestions are submitted in a separate PR to keep everything organized and easy to review. The original PR is mentioned in the description and the reviewer of the original PR is assigned as the reviewer of the new one.
How It Works Under the Hood
From a technical perspective, AI Teammate is a FastAPI-based server application, containerized with Docker and deployed on Google Cloud Run for scalable, public access. The server exposes a dedicated endpoint for handling new Jira or Git tickets. When triggered, it extracts key information from the ticket and initiates the implementation pipeline asynchronously in the background.
The endpoint is automatically triggered when a new ticket is created and assigned to AI Teammate – either on the Jira board or directly in Git. This behavior is made possible through automation rules with webhooks configured directly in the respective projects.
Integration via Automation Rules
In Jira, the automation rule is triggered when a ticket is assigned to our AI Teammate account, using the Work item assigned trigger event and the Assignee equals condition.
A webhook sends the issue details to the specified API endpoint. Along with the payload, custom headers are included to pass the Git repository path, provider type (GitHub or GitLab), and any optional settings.
To ensure proper flow, the automation rule is configured to wait for the webhook response and continue even if it fails. If the response status is 201, the ticket is automatically moved to the In Progress column. Otherwise, the rule execution is stopped.
Fig. Jira automation rule setup.
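To make this concrete, below is a minimal sketch of what such an intake endpoint could look like. The header names (X-Git-Repo, X-Git-Provider) and the run_pipeline helper are illustrative assumptions rather than the actual implementation; BackgroundTasks and Header are standard FastAPI features.

```python
# Minimal sketch of the ticket-intake endpoint. The header names and the
# run_pipeline() helper are illustrative assumptions; BackgroundTasks and
# Header are standard FastAPI features.
from fastapi import BackgroundTasks, FastAPI, Header, Request

app = FastAPI()


async def run_pipeline(ticket: dict, repo_path: str, provider: str) -> None:
    # Placeholder for the implementation pipeline described below:
    # clone the repo, build the repo map, apply changes, open a PR.
    ...


@app.post("/tickets", status_code=201)
async def handle_ticket(
    request: Request,
    background_tasks: BackgroundTasks,
    x_git_repo: str = Header(...),           # custom header with the Git repository path
    x_git_provider: str = Header("github"),  # "github" or "gitlab"
):
    ticket = await request.json()  # raw Jira / GitHub / GitLab webhook payload
    # Schedule the pipeline in the background and return immediately,
    # so the automation rule gets its 201 without waiting for the agent.
    background_tasks.add_task(run_pipeline, ticket, x_git_repo, x_git_provider)
    return {"status": "accepted"}
```

Returning 201 right away is what lets the Jira rule move the ticket to In Progress while the implementation pipeline keeps running in the background.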
Ticket Data Extraction
Once the endpoint is triggered, the first step is to extract key information from the ticket so it can be processed correctly. This includes fields such as the ticket ID, title, description, reporter, and assignee.
Since AI Teammate supports multiple sources – Jira, GitLab, and GitHub – we built custom handlers for each platform’s payload format. Each one has its own structure and includes different details, so we had to analyze them manually. Not all metadata is available everywhere. For instance, GitHub issues do not expose user email addresses, which are required to assign pull request reviewers properly.
To work around this, we use the PyGitHub SDK to fetch the missing data. We first extract the user ID from the payload, then use the SDK to retrieve the user's email address.
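For illustration, the GitHub branch of this extraction step could look roughly like the sketch below. The payload field access and the returned dictionary shape are assumptions for the example; Github and get_user are standard PyGitHub calls.

```python
# Sketch of filling in a missing reviewer e-mail for GitHub issues. The payload
# field access and the returned dictionary shape are illustrative; Github and
# get_user are standard PyGitHub calls.
from github import Auth, Github


def extract_github_ticket(payload: dict, token: str) -> dict:
    issue = payload["issue"]
    reporter_login = issue["user"]["login"]

    gh = Github(auth=Auth.Token(token))
    reporter = gh.get_user(reporter_login)   # fetch the full user object via the API
    reporter_email = reporter.email          # publicly visible e-mail, if any

    return {
        "id": issue["number"],
        "title": issue["title"],
        "description": issue.get("body") or "",
        "reporter_login": reporter_login,
        "reporter_email": reporter_email,
    }
```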
Implementation Pipeline
The implementation pipeline for pull request creation consists of multiple steps:
- The target repository is identified based on the Git path provided in the webhook payload. The latest changes from the main branch are cloned into a temporary working directory for isolated processing.
- A repository map is generated using the aider framework. This map includes a list of project files, along with the key symbols (such as classes, methods, and function signatures) defined in each one. This gives the LLM a global view of the project structure, enabling it to reason about code relationships and dependencies. In many cases, this high-level context is enough to solve the task. When more detail is needed, the map helps the LLM decide which files to read in full.
- Based on the ticket description and the project structure, the agent analyses the codebase and prepares an implementation plan. It identifies which files need to be created or modified, and outlines the proposed changes for each of them.
- The system prompt provided to the LLM includes not only the task description and implementation plan but also explicit instructions on code quality. This covers clean code principles, performance considerations, and a strong requirement to deliver complete code – avoiding placeholders like TODO for tricky parts or commented-out sections.
- The aider framework begins applying the necessary changes using either one of the supported LLMs or a locally deployed model. In our setup, we experimented with several options, and found that both Claude 4.0 and Claude 3.7 Sonnet by Anthropic consistently delivered the best results – thanks to their strong capabilities in code editing and reasoning. However, given the cost difference, the choice ultimately depends on the project’s needs and budget.
- The implementation process runs in a loop. After making changes, the system runs linters and any tests defined in the project configuration. If errors are detected, the LLM receives feedback and attempts to fix the issues. This retry cycle continues until the code passes validation or the maximum number of iterations is reached. Once successful, the final version is ready to be committed and pushed to the repository (a simplified sketch of this loop follows the list).
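A simplified sketch of this implementation loop is shown below, assuming aider's Python scripting interface, pytest as the project's test runner, and illustrative helper names; it sketches the idea rather than the production pipeline.

```python
# Simplified sketch of the implementation loop. The helper names, model alias,
# and use of pytest are assumptions for the example; aider builds its repository
# map internally when the Coder is created.
import subprocess
import tempfile

from aider.coders import Coder
from aider.io import InputOutput
from aider.models import Model


def implement_ticket(repo_url: str, plan: str, files: list[str], max_retries: int = 3) -> str:
    # 1. Clone the latest main branch into an isolated temporary directory.
    workdir = tempfile.mkdtemp(prefix="ai-teammate-")
    subprocess.run(["git", "clone", repo_url, workdir], check=True)

    # 2. Set up aider non-interactively; the repository map is generated under the hood.
    model = Model("sonnet")      # alias for a Claude Sonnet model; any supported model works
    io = InputOutput(yes=True)   # auto-confirm prompts so the run needs no human input
    coder = Coder.create(
        main_model=model,
        fnames=[f"{workdir}/{path}" for path in files],  # files selected in the implementation plan
        io=io,
    )

    # 3. Apply the implementation plan derived from the ticket description.
    coder.run(plan)

    # 4. Validation loop: run the project's tests and feed failures back to the model.
    for _ in range(max_retries):
        result = subprocess.run(["pytest"], cwd=workdir, capture_output=True, text=True)
        if result.returncode == 0:
            break
        coder.run(f"The tests fail with this output, please fix the code:\n{result.stdout}{result.stderr}")

    return workdir  # ready to be committed and pushed by the delivery step
```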
Fig. Implementation pipeline logs.
Code Delivery
Once the implementation is complete, AI Teammate creates a new branch based on the latest state of the main branch. It commits the generated changes and pushes them to this new branch. Then, it opens a pull request from the new branch, assigns itself as the assignee, and sets the ticket reporter as the reviewer.
The PR title and description are generated using details from the original ticket to maintain context and traceability. Additionally, a label is added to indicate that the changes were authored by the AI agent.
All of these actions – branch creation, commit, push, and pull request creation – are handled programmatically using Python SDKs. The agent uses either python-gitlab or PyGitHub library, depending on whether the target repository is hosted on GitLab or GitHub.
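As an example, the GitHub delivery path could be sketched with PyGitHub as follows. The label name, bot login, and function signature are illustrative; the GitLab path is analogous via python-gitlab's project.mergerequests.create().

```python
# Sketch of the GitHub delivery path using PyGitHub; the label name, bot login,
# and function signature are illustrative. The GitLab path is analogous via
# python-gitlab (project.mergerequests.create(...)).
from github import Auth, Github


def open_pull_request(token: str, repo_path: str, branch: str,
                      title: str, body: str, reviewer_login: str):
    gh = Github(auth=Auth.Token(token))
    repo = gh.get_repo(repo_path)          # e.g. "org/project"

    pr = repo.create_pull(
        title=title,
        body=body,                         # generated from the original ticket for traceability
        head=branch,                       # branch with the agent's commits
        base=repo.default_branch,
    )
    pr.create_review_request(reviewers=[reviewer_login])   # ticket reporter as reviewer
    pr.as_issue().add_to_labels("ai-teammate")             # mark the PR as AI-authored
    pr.as_issue().add_to_assignees("ai-teammate-bot")      # hypothetical bot account as assignee
    return pr
```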
Summary & Reflections on AI Teammate:
Real-World Impact of LLM Agents in Software Development
AI Teammate demonstrates the real potential of LLM-powered agents to support everyday development work. While it is not a perfect solution, its use in specific scenarios can increase productivity and improve collaboration in your team.
PROS:
- Performs well on simple, repetitive tasks – AI Teammate handles smaller, clearly defined issues very well – the kind of work developers often avoid because of its repetitive nature. By automating these tasks, engineers can focus on more challenging and valuable problems.
- Enables non-technical AI adoption – Team members who typically work in Jira – such as product owners, analysts, or project managers – will appreciate the ability to initiate code-level changes by simply creating the tickets.
CONS:
- Not suitable for complex tasks – The agent struggles with issues that require architectural decisions or involve changes across multiple parts of the codebase. It also does not perform well on large repositories containing tens of thousands of lines of code.
- Not ideal for experienced developers – Developers working in powerful IDEs often prefer tools like Cursor, which offer real-time interaction and more direct control than an agent operating asynchronously. Compared to that chatbot-like workflow, the feedback cycle during code reviews can feel slower and more cumbersome.
Interestingly, we used AI Teammate while developing AI Teammate itself – a great example of AI-assisted software development in action. Our typical workflow involved using the agent to draft an initial version of the changes, which a developer then reviewed, fixed, or improved. This approach made our work significantly faster than usual.
Accelerate Development with AI Coding Agents
LLM-powered coding agents like AI Teammate aren’t just tools — they’re process accelerators. By automating routine tasks, responding to PR feedback, and generating code, tests, and documentation on the fly, they help engineering teams ship faster, reduce bottlenecks, and stay focused on what truly matters: innovation and impact.
At deepsense.ai, we build custom AI agents that integrate directly with your existing stack — from Jira and GitHub to private LLMs — giving you a reliable AI teammate that delivers real code, real outcomes, and real speed.