appinsights-instrumentation
by github

appinsights-instrumentation helps instrument Azure-hosted web apps with Application Insights. It guides App Service auto-instrumentation or manual ASP.NET Core and Node.js setup, including connection string and IaC updates.
This skill scores 78/100, which means it is a solid directory listing candidate: agents get a clear trigger, concrete branching guidance, and reusable implementation references for adding Azure Application Insights telemetry to supported web apps, though users should expect some scope and completeness limits.
- Strong triggerability: SKILL.md clearly says to use it when a user wants telemetry enabled for a web app and instructs the agent to identify language/framework/hosting first.
- Good operational leverage: language-specific references for ASP.NET Core and Node.js include exact package installs, code changes, and the required APPLICATIONINSIGHTS_CONNECTION_STRING setup.
- Useful workflow assets: it includes an auto-instrument path for Azure App Service, a Bicep example for resource creation, and a PowerShell/Azure CLI support script reference.
- Scope is narrower than the reference set suggests: prerequisites only name ASP.NET Core and Node.js in Azure, while a Python reference exists but its fit and trigger conditions are not clearly integrated.
- Some execution still requires guesswork because install/use steps are split across multiple files and the provided excerpts show truncated or incomplete guidance in places.
Overview of appinsights-instrumentation skill
The appinsights-instrumentation skill helps an agent add Azure Application Insights telemetry to a web app with less guesswork than a generic observability prompt. Its real job is not just “turn on logs,” but to choose the right instrumentation path for the app’s language and hosting model, create or wire up the App Insights resource, and make sure the app gets the required APPLICATIONINSIGHTS_CONNECTION_STRING.
Who this skill is best for
This skill is a strong fit if you have a web app in Azure and want practical help instrumenting it for observability, especially for:
- ASP.NET Core apps hosted in Azure
- Node.js apps hosted in Azure
- teams using Azure App Service
- repos that already use IaC such as Bicep and want telemetry added cleanly
It is also useful when you want the agent to inspect the repo, infer framework details, and recommend either auto-instrumentation or code changes.
What users actually care about first
Before installing appinsights-instrumentation, most users want answers to these questions:
- Will it work for my app stack?
- Can I avoid code changes?
- What files need editing?
- How do I create or find the App Insights connection string?
- Should I update infrastructure code or patch settings manually?
This skill directly addresses those adoption blockers better than a broad “add observability” instruction.
Key differentiator: auto vs manual instrumentation
The biggest value in the appinsights-instrumentation skill is that it does not treat all apps the same. It explicitly prefers auto-instrumentation for supported Azure App Service cases, then falls back to manual code changes when needed.
That matters because many users would rather enable telemetry without touching application code if Azure App Service supports it.
Supported paths surfaced by the repository
From the repository evidence, the practical paths are:
- Azure App Service auto-instrumentation for supported ASP.NET Core and Node.js apps
- manual ASP.NET Core instrumentation with `Azure.Monitor.OpenTelemetry.AspNetCore`
- manual Node.js instrumentation with `@azure/monitor-opentelemetry`
- guidance for Python in `references/PYTHON.md`, even though the top-level prerequisite text is narrower
Main limitation to know up front
The skill is Azure-specific and hosting-aware. If your app is not hosted in Azure, or you only want vendor-neutral OpenTelemetry architecture advice, appinsights-instrumentation may feel too narrow. Its value is highest when the agent can inspect your app and deployment shape, then apply Azure Monitor/App Insights conventions correctly.
How to Use appinsights-instrumentation skill
Install context and where this skill lives
This skill comes from github/awesome-copilot under skills/appinsights-instrumentation. If your tooling supports skill installation, use your normal add-skill flow for that repository and then invoke the skill when asking for Azure App Insights setup.
Because the repository does not center on a custom CLI for this skill itself, the important install decision is less about package management and more about whether your workspace contains:
- app source code
- deployment or hosting clues
- IaC files such as Bicep or Terraform
- enough Azure context to identify the running app
Start by giving the agent the right context
For effective appinsights-instrumentation usage, do not begin with “add App Insights.” Start with the app tuple the skill cares about:
- language
- framework
- hosting target
- deployment style
- whether code changes are acceptable
A strong first request looks like this:
- “Instrument this ASP.NET Core app running in Azure App Service. Prefer codeless setup if supported. If not, update code and Bicep.”
- “This Node.js app is deployed to Azure App Service from this repo. Find the entry file, add Azure Monitor instrumentation, and show the env var changes needed.”
- “Inspect this repo and tell me whether auto-instrumentation is possible or whether manual App Insights instrumentation is required.”
The most important question the skill needs answered
The repository is explicit: the agent should always determine where the application is hosted. That single input changes the plan more than any other. If you omit hosting details, expect extra back-and-forth.
Useful hosting answers include:
- Azure App Service as code deployment
- Azure App Service as container
- Azure Container Apps
- local machine only
- unknown, please infer from repo and deployment files
Best repository files to read first
If you are evaluating appinsights-instrumentation install quality or trying to guide an agent, read these files in this order:
1. `SKILL.md`
2. `references/AUTO.md`
3. `references/ASPNETCORE.md`
4. `references/NODEJS.md`
5. `examples/appinsights.bicep`
6. `scripts/appinsights.ps1`
7. `references/PYTHON.md`
Why this order works:
- `SKILL.md` gives the routing logic
- `AUTO.md` tells you when no code change is needed
- language files show the exact package and code edits
- the Bicep example clarifies infrastructure changes
- the PowerShell script points to Azure CLI operations for connection strings and settings
How to decide between auto and manual instrumentation
Use this decision pattern:
- If the app is ASP.NET Core or Node.js on Azure App Service, check auto-instrumentation first.
- If auto-instrumentation is unsupported, unwanted, or too opaque for your deployment process, switch to manual instrumentation.
- If your team manages infra declaratively, prefer updating IaC and app config together instead of making one-off portal changes.
This is one of the strongest practical parts of the appinsights-instrumentation guide: it reduces wasted effort on unnecessary code edits.
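That branching logic can be sketched as a small decision helper. This is an illustrative sketch, not code from the skill: the app descriptor fields (`stack`, `hosting`, `preferCodeless`, `usesIaC`) and the return values are assumptions made for the example.

```javascript
// Hypothetical sketch of the auto-vs-manual decision described above.
// The descriptor fields and return strings are illustrative assumptions,
// not part of the skill's actual interface.
function chooseInstrumentationPath(app) {
  // Auto-instrumentation is only on the table for supported stacks
  // running on Azure App Service.
  const autoEligible =
    app.hosting === "app-service" &&
    ["aspnetcore", "nodejs"].includes(app.stack);
  if (autoEligible && app.preferCodeless !== false) {
    return "auto-instrumentation";
  }
  // Otherwise fall back to manual code changes, paired with IaC updates
  // when the team already manages infrastructure declaratively.
  return app.usesIaC ? "manual + IaC update" : "manual-instrumentation";
}
```

For example, `chooseInstrumentationPath({ stack: "nodejs", hosting: "app-service" })` yields the codeless path, while a Python app on Container Apps with Bicep in the repo falls through to the manual-plus-IaC branch.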
Manual ASP.NET Core workflow
For ASP.NET Core, the repository guidance points to:
- install the package: `dotnet add package Azure.Monitor.OpenTelemetry.AspNetCore`
- add `using Azure.Monitor.OpenTelemetry.AspNetCore;`
- before `builder.Build()`, add `builder.Services.AddOpenTelemetry().UseAzureMonitor();`
Then provide the App Insights connection string through the environment, not by casually editing appsettings. That warning matters because many teams accidentally hardcode or localize configuration in a way that does not survive deployment cleanly.
Manual Node.js workflow
For Node.js, the practical flow is:
- install the package: `npm install @azure/monitor-opentelemetry`
- find the entry file, usually from the `main` field in `package.json`
- require the library near the top: `const { useAzureMonitor } = require("@azure/monitor-opentelemetry");`
- call `useAzureMonitor();`
The timing matters: load environment variables first, then call useAzureMonitor(), then load the rest of the app. If the app uses dotenv, initialize dotenv before Azure Monitor setup.
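A minimal sketch of that startup ordering, with `useAzureMonitor` replaced by a local stub so the example runs without `@azure/monitor-opentelemetry` installed; the real export behaves differently, but the ordering contract is the point:

```javascript
// Stub standing in for useAzureMonitor from @azure/monitor-opentelemetry,
// so this ordering sketch runs without the package. The real call reads
// APPLICATIONINSIGHTS_CONNECTION_STRING from the environment at init time,
// which is why environment loading must happen first.
function useAzureMonitor() {
  if (!process.env.APPLICATIONINSIGHTS_CONNECTION_STRING) {
    throw new Error("connection string must be in the environment first");
  }
}

// 1. Load environment variables first (dotenv would run here if the app
//    uses it). The placeholder value below is illustrative only.
process.env.APPLICATIONINSIGHTS_CONNECTION_STRING =
  process.env.APPLICATIONINSIGHTS_CONNECTION_STRING ||
  "InstrumentationKey=00000000-0000-0000-0000-000000000000";

// 2. Initialize telemetry before any application modules load.
useAzureMonitor();

// 3. Only now load the rest of the app, e.g. require("./app").
console.log("telemetry initialized");
```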
App Insights resource and connection string handling
A frequent adoption blocker is not code instrumentation but resource wiring. This skill helps with both sides:
- create or reference the Application Insights resource
- retrieve the connection string
- set `APPLICATIONINSIGHTS_CONNECTION_STRING`
- persist that setting in IaC when possible
The repo includes examples/appinsights.bicep and scripts/appinsights.ps1, which is a strong sign that the skill is meant to work across both code and infrastructure layers, not just edit source files.
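Connection strings are semicolon-delimited key=value pairs (e.g. `InstrumentationKey=...;IngestionEndpoint=...`). A hedged sketch of a validator a wiring script might use; the helper name is an assumption, not something the skill provides:

```javascript
// Illustrative helper (not part of the skill): parse the semicolon-
// delimited key=value format used by Application Insights connection
// strings, so a script can sanity-check what lands in
// APPLICATIONINSIGHTS_CONNECTION_STRING before deployment.
function parseConnectionString(raw) {
  const fields = {};
  for (const pair of raw.split(";")) {
    if (!pair) continue; // tolerate trailing semicolons
    const i = pair.indexOf("=");
    if (i > 0) fields[pair.slice(0, i)] = pair.slice(i + 1);
  }
  return fields;
}

const fields = parseConnectionString(
  "InstrumentationKey=00000000-0000-0000-0000-000000000000;" +
    "IngestionEndpoint=https://example.applicationinsights.azure.com/"
);
console.log("InstrumentationKey" in fields); // → true
```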
Prompt pattern that gets better results
A weak prompt:
- “Add observability.”
A stronger prompt:
- “Use the appinsights-instrumentation skill on this repo. First detect whether this is ASP.NET Core, Node.js, or Python and how it is hosted. Prefer Azure App Service auto-instrumentation if supported. Otherwise, make the minimum code and IaC changes needed to send telemetry to Azure Application Insights. Show the exact files to edit and explain how to set `APPLICATIONINSIGHTS_CONNECTION_STRING`.”
Why this is better:
- it forces stack detection
- it encodes the auto-first preference
- it asks for file-level changes
- it includes the non-obvious env var requirement
What to inspect after the first output
After the agent responds, verify these items before accepting the plan:
- Did it identify the hosting environment, not just the language?
- Did it check for Azure App Service auto-instrumentation first where relevant?
- Did it specify the correct package for the language?
- Did it place initialization early enough in app startup?
- Did it handle the connection string as an environment variable?
- Did it suggest IaC changes if the repo already uses IaC?
If those are missing, the output is likely generic rather than truly skill-guided.
appinsights-instrumentation skill FAQ
Is appinsights-instrumentation better than a normal prompt?
Usually yes, if your goal is Azure App Insights setup in a real repo. A generic prompt often forgets the hosting-dependent decision, the auto-instrumentation option, or the exact connection string workflow. The appinsights-instrumentation skill is better when you want fewer Azure-specific omissions.
Is this skill beginner-friendly?
Moderately. It is practical, but it assumes you can answer basic deployment questions or let the agent inspect the repo. Beginners can still use it well if they provide:
- app language/framework
- Azure hosting type
- whether they use App Service
- whether infrastructure is managed in code
Without that, the skill will need clarification before it can produce a trustworthy plan.
Does it only work for Azure App Service?
No, but Azure App Service is where its most valuable decision logic appears because auto-instrumentation may be available there. Outside that path, the skill still helps with manual instrumentation, resource creation, and connection string configuration.
Does it support Python?
The repo includes references/PYTHON.md, so there is Python guidance available. However, the top-level prerequisite text emphasizes ASP.NET Core and Node.js. Treat Python support as a useful reference path, but verify fit against your actual hosting model before relying on it as the primary scenario.
When should I not use appinsights-instrumentation?
Skip appinsights-instrumentation if:
- your app is not Azure-hosted and you want cloud-agnostic observability guidance
- you need deep custom tracing design rather than initial App Insights enablement
- you already have mature OpenTelemetry instrumentation and only need small tweaks
- your task is mostly dashboarding, alerting, or KQL, not instrumentation
Does the skill create Azure resources for me?
It can guide the resource setup and points to infrastructure examples like examples/appinsights.bicep, but whether resources are actually created depends on your agent permissions and workflow. In practice, it is best used to produce the exact IaC or CLI steps your environment allows.
How to Improve appinsights-instrumentation skill
Give the skill a complete deployment picture
The fastest way to improve appinsights-instrumentation usage is to provide the deployment picture up front:
- source language and framework
- Azure hosting service
- deployment method
- infra-as-code files present
- whether portal changes are allowed
This reduces the skill’s biggest failure mode: choosing a technically valid path that does not match your operating model.
Ask for a decision before asking for edits
A high-quality workflow is:
- ask the agent to classify the app and hosting
- ask whether auto-instrumentation is supported
- only then ask for file edits or IaC patches
This improves output because the main branch point in the skill is architectural, not syntactic.
Point the agent to the right files explicitly
If the repo is large, tell the agent where to look:
- `Program.cs` for ASP.NET Core
- `package.json` and the entry file for Node.js
- Bicep or Terraform files for infra config
- deployment manifests or workflows that reveal hosting
This helps avoid shallow edits in the wrong startup file or missing the right IaC location for the env var.
Require file-level diffs, not generic guidance
For better appinsights-instrumentation guide output, ask for:
- exact files to change
- exact package install commands
- exact startup initialization placement
- exact environment variable insertion point
- exact IaC additions for the App Insights resource and app settings
That turns the skill from advisory text into an implementation plan.
Common failure modes to watch for
The most likely quality issues are:
- skipping the hosting check
- missing the auto-instrumentation option
- initializing telemetry too late in app startup
- setting the connection string in the wrong place
- updating code but forgetting deployment-time config
- treating local app settings as the source of truth when the app runs in Azure
These are the places where a second review adds the most value.
Improve outputs with a stronger follow-up prompt
If the first answer is generic, use a correction prompt like:
- “Re-run appinsights-instrumentation with hosting-aware decisions. Confirm whether this Azure App Service app qualifies for auto-instrumentation before proposing code changes.”
- “Revise this plan to include the exact file edits, package command, and IaC changes for `APPLICATIONINSIGHTS_CONNECTION_STRING`.”
- “Compare manual instrumentation vs auto-instrumentation for this repo and recommend one based on the deployment files present.”
Validate observability, not just compilation
A successful result is not only that the app builds. Ask the agent to define how to confirm telemetry is actually flowing:
- where the connection string is sourced
- what deployment step applies the setting
- what request or startup activity should generate telemetry
- what Azure-side signal you should expect after deployment
That makes appinsights-instrumentation more useful in production, where silent misconfiguration is common.
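The configuration half of that verification can be scripted. A sketch under stated assumptions: the function name, the return shape, and the `InstrumentationKey=` substring check are illustrative, not from the skill:

```javascript
// Illustrative post-deploy check (not from the skill): confirm the
// connection string is present and plausibly formed in the running
// environment before trusting that telemetry will flow.
function verifyTelemetryConfig(env) {
  const conn = env.APPLICATIONINSIGHTS_CONNECTION_STRING || "";
  const configured = conn.includes("InstrumentationKey=");
  return {
    configured,
    next: configured
      ? "send a request, then look for it in Application Insights"
      : "set APPLICATIONINSIGHTS_CONNECTION_STRING at deploy time",
  };
}

console.log(verifyTelemetryConfig({}).configured); // → false
```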
Best way to extend appinsights-instrumentation in practice
If you want more value from this skill over time, extend your own prompting pattern around it:
- always include hosting details
- always request auto-vs-manual comparison
- always request infra and code changes together
- always ask for a post-deploy verification checklist
That pattern aligns tightly with how the repository is structured and leads to much better results than a one-line request.
