“How do I stay in control if part of the engineering is handled externally?”
We often hear this question from CTOs and VPs of Engineering who are scaling aggressively and considering co-sourcing models. Not outsourcing—true co-sourcing: external engineering teams embedded into your product squads, owning delivery alongside your internal devs.
The challenge? You’re still responsible. For delivery, for code quality, for velocity — and for everything that goes wrong.
And while co-sourcing can be a force multiplier, it introduces new risks:
👀 blind spots in delivery,
⏱ delayed productivity,
⚠️ hidden quality issues.
The only way to stay in control is with the right metrics — ones that reflect actual engineering performance, not vanity KPIs.
✅ In this article:
• Why traditional agile metrics fail in co-sourcing scenarios
• The 3 most actionable metrics every CTO should track
• How to implement them using Azure DevOps, GitHub, and Microsoft observability tools
• ⚠️ Bonus: How to eliminate visibility gaps using CI/CD telemetry
Why Velocity and Burndown Aren’t Enough
Co-sourcing is not staff augmentation. You’re not just renting devs — you’re integrating external teams into your architecture, tools, and delivery pipelines. That makes tracking harder.
Here’s why traditional agile metrics like velocity, burndown, or story point completion fall short:
• They’re easily inflated or gamed
• They don’t reflect autonomy or code quality
• They say nothing about engineering maturity
What you need instead are engineering-centric, behavior-driven metrics — ones that show:
1. How fast the external team becomes productive
2. How independently they operate
3. Whether they’re producing sustainable, high-quality code
Let’s break these down.
1. Lead Time to First PR (LTTPR)
Metric type: Early productivity
Goal: <10 business days from onboarding
This metric tracks how long it takes a co-sourced engineer to go from being granted access to having a first meaningful pull request merged.
Why it matters:
• It shows how smooth your onboarding and domain handoff really are
• It reflects how fast the team builds context and contributes
• It’s a leading indicator of future delivery velocity
What to watch: If LTTPR exceeds 2 sprints, it’s a signal of deeper issues: unclear documentation, fragmented environments, or misaligned expectations.
💡 How to track in Azure DevOps / GitHub:
• Use Azure DevOps Auditing logs to detect “User added to project” events
• Correlate with Pull Requests > Completed PRs filtered by author
• For GitHub, use GitHub Audit Logs + API
• Build a Power BI dashboard via Azure DevOps Analytics View to visualize average LTTPR across teams
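For illustration, here is a minimal Python sketch of the GitHub side, assuming the access-grant dates are exported separately (e.g., from the org audit log). The repo name, token, and onboarding dates are placeholders:

```python
# pip install requests numpy
from datetime import date, datetime
import numpy as np
import requests

GITHUB_TOKEN = "<token>"         # placeholder
REPO = "your-org/your-repo"      # placeholder
HEADERS = {"Authorization": f"Bearer {GITHUB_TOKEN}"}

# Hypothetical input: login -> date repo access was granted.
onboarded = {"vendor-dev-1": date(2024, 3, 4)}

def first_merged_pr_date(login: str) -> date | None:
    """Merge date of the author's earliest merged PR
    (first page only; paginate for real use)."""
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/pulls",
        headers=HEADERS,
        params={"state": "closed", "sort": "created",
                "direction": "asc", "per_page": 100},
        timeout=30,
    )
    resp.raise_for_status()
    for pr in resp.json():
        if pr["user"]["login"] == login and pr.get("merged_at"):
            return datetime.fromisoformat(
                pr["merged_at"].replace("Z", "+00:00")).date()
    return None

for login, granted in onboarded.items():
    merged = first_merged_pr_date(login)
    if merged:
        # np.busday_count skips weekends, matching the business-day goal.
        print(login, np.busday_count(granted, merged),
              "business days to first merged PR")
```

The same shape works against Azure DevOps by swapping in its Pull Requests REST endpoint; feed the per-engineer numbers into the Power BI dashboard mentioned above.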
2. % of Tickets Resolved Without Escalation
Metric type: Team autonomy
Goal: >80% of backlog items closed without internal escalation
This is about operational maturity. A strong co-sourced team should handle most work items independently, with minimal “help me unblock this” escalations to internal leads.
How to define escalation:
• PR rework requested due to architectural or business misunderstanding
• Ticket reassigned to an internal engineer
• Manual intervention from internal tech leads or product owners
💡 How to track in Azure:
• Use custom work item tags in Azure Boards (e.g., #escalated)
• Encourage leads to tag tickets manually when escalation is required
• Create an Analytics View KPI:
(Total closed tickets – Tickets with #escalated tag) / Total closed tickets
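As a rough illustration, that KPI can also be pulled straight from the WIQL REST endpoint. Organization, project, area path, and the escalation tag below are assumptions to adapt:

```python
# pip install requests
import requests

ORG, PROJECT = "your-org", "your-project"   # placeholders
PAT = "<personal-access-token>"             # placeholder
URL = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0"

def count(wiql: str) -> int:
    """Run a flat WIQL query and return the number of matching work items."""
    resp = requests.post(URL, json={"query": wiql}, auth=("", PAT), timeout=30)
    resp.raise_for_status()
    return len(resp.json()["workItems"])

# Scope both queries to the co-sourced team's area path (hypothetical).
SCOPE = "[System.AreaPath] UNDER 'your-project\\CoSourcedTeam'"

closed = count(
    f"SELECT [System.Id] FROM WorkItems "
    f"WHERE [System.State] = 'Closed' AND {SCOPE}"
)
escalated = count(
    f"SELECT [System.Id] FROM WorkItems "
    f"WHERE [System.State] = 'Closed' "
    f"AND [System.Tags] CONTAINS 'escalated' AND {SCOPE}"
)
print(f"Resolved without escalation: {(closed - escalated) / closed:.0%}")
```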
Bonus: Correlate this with the volume of internal help requests in Teams channels via the Microsoft Graph API (or Slack’s own APIs, if that’s where escalations happen).
3. Defect Density in Co-Owned Modules
Metric type: Code quality
Goal: Defect rate no higher than the baseline of your internal teams
Even high-velocity teams can be unsustainable if they generate bugs or regressions downstream. This metric tracks how many critical/major bugs are found in modules where the co-sourced team made recent changes.
Why this matters:
• Quality issues typically surface weeks after delivery
• Defect clustering around one team is a red flag for rushed or low-context coding
• You need to catch this before it hits production
💡 How to implement in Azure ecosystem:
• Enable Application Insights and Azure Monitor to capture exception rates by service/module
• Use Git blame + GitHub/Azure DevOps commit data to link defects to contributing teams
• Correlate bug reports (from Azure Boards or external bug tracking tools) with recent deployment history
Advanced: Use Azure DevOps Test Plans or integrate with SonarQube for real-time code quality scoring per repo.
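For the exception-rate piece, here is a hedged sketch against a workspace-based Application Insights resource using the azure-monitor-query SDK. The workspace ID is a placeholder, and mapping AppRoleName back to the owning team is a join against your own deployment metadata:

```python
# pip install azure-monitor-query azure-identity
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL: exception counts per service (AppRoleName) over the last 30 days.
QUERY = """
AppExceptions
| summarize exceptions = count() by AppRoleName
| order by exceptions desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=QUERY,
    timespan=timedelta(days=30),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```

Divide those counts by recent code churn in the corresponding modules to get a density rather than a raw total.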
Bonus Tactic: Embed Telemetry into the Dev Workflow
You can’t measure what you can’t see. And most co-sourcing efforts fail because teams operate in silos — outside of your CI/CD, observability, or incident tracking stack.
Here’s how to fix it:
1. Require CI/CD integration from day one
• All PRs go through your pipelines (Azure Pipelines, GitHub Actions)
• All builds, tests, and deploys are observable in your environment
2. Instrument feature flags and exceptions using Azure App Configuration + Application Insights (see the telemetry sketch after this list)
• Know which team deployed what
• Track exceptions and performance regressions by feature/team
3. Make code ownership explicit
• Use CODEOWNERS in GitHub or set branch policies in Azure Repos
• Automate alerts if “unowned” PRs are pushed (a minimal check is sketched below)
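To make point 2 concrete, here is a hedged sketch using the azure-monitor-opentelemetry distro. The attribute names (deploy.team, feature.flag) are invented conventions, not an Azure standard; slice by them later in KQL:

```python
# pip install azure-monitor-opentelemetry
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(
    connection_string="<app-insights-connection-string>")  # placeholder
tracer = trace.get_tracer(__name__)

# Hypothetical attributes identifying the owning team and active flag.
with tracer.start_as_current_span(
    "checkout-flow",
    attributes={"deploy.team": "vendor-a", "feature.flag": "new-pricing"},
):
    ...  # feature code; spans (and their exceptions) carry the team attribute
```

And for point 3, a minimal sketch of the “unowned PR” check. Real CODEOWNERS matching follows gitignore-style rules, so fnmatch is only an approximation, and the paths and team handles are hypothetical:

```python
from fnmatch import fnmatch

CODEOWNERS = {                       # hypothetical rules
    "src/billing/*": "@internal/payments",
    "src/portal/*": "@vendor/frontend",
}

def unowned(changed_files: list[str]) -> list[str]:
    """Files in a PR that match no ownership rule."""
    return [f for f in changed_files
            if not any(fnmatch(f, pattern) for pattern in CODEOWNERS)]

print(unowned(["src/portal/app.ts", "scripts/migrate.py"]))
# -> ['scripts/migrate.py']; wire this into a pipeline step that alerts
```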
Final Thoughts
Co-sourcing isn’t just about scaling headcount. It’s about scaling engineering capacity without compromising visibility, control, or code quality.
By tracking:
• LTTPR → Onboarding effectiveness
• % of work resolved without escalation → Team autonomy
• Defect density → Long-term sustainability
…you shift from reactive firefighting to proactive delivery management.
These metrics give you the signals you need to stay in control — even when the engineering ownership is shared.