feat: Add Official Microsoft & Gemini Skills (845+ Total)
## 🚀 Impact

Significantly expands the capabilities of **Antigravity Awesome Skills** by integrating official skill collections from **Microsoft** and **Google Gemini**. This update increases the total skill count to **845+**, making the library even more comprehensive for AI coding assistants.

## ✨ Key Changes

### 1. New Official Skills

- **Microsoft Skills**: Added a massive collection of official skills from [microsoft/skills](https://github.com/microsoft/skills).
  - Includes Azure, .NET, Python, TypeScript, and Semantic Kernel skills.
  - Preserves the original directory structure under `skills/official/microsoft/`.
  - Includes plugin skills from the `.github/plugins` directory.
- **Gemini Skills**: Added official Gemini API development skills under `skills/gemini-api-dev/`.

### 2. New Scripts & Tooling

- **`scripts/sync_microsoft_skills.py`**: A robust synchronization script that:
  - Clones the official Microsoft repository.
  - Preserves the original directory hierarchy.
  - Handles symlinks and plugin locations.
  - Generates attribution metadata.
- **`scripts/tests/inspect_microsoft_repo.py`**: Debug tool to inspect the remote repository structure.
- **`scripts/tests/test_comprehensive_coverage.py`**: Verification script to ensure 100% of skills are captured during sync.

### 3. Core Improvements

- **`scripts/generate_index.py`**: Enhanced frontmatter parsing to safely handle unquoted values containing `@` symbols and commas (fixing issues with some Microsoft skill descriptions).
- **`package.json`**: Added `sync:microsoft` and `sync:all-official` scripts for easy maintenance.

### 4. Documentation

- Updated `README.md` to reflect the new skill count (845+) and added Microsoft/Gemini to the provider list.
- Updated `CATALOG.md` and `skills_index.json` with the new skills.

## 🧪 Verification

- Ran `scripts/tests/test_comprehensive_coverage.py` to verify all Microsoft skills are detected.
- Validated the `generate_index.py` fixes by successfully indexing the new skills.
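The frontmatter-parsing concern in Core Improvements can be pictured with a minimal sketch (illustration only — the real logic lives in `scripts/generate_index.py` and may differ; `parse_frontmatter_line` and the `author` key are hypothetical):

```python
# Treat everything after the FIRST colon as the raw value, so unquoted
# strings containing "@" or "," survive intact instead of being split.
def parse_frontmatter_line(line):
    key, sep, value = line.partition(":")
    if not sep:
        return None  # not a key: value line
    # Strip surrounding quotes if present, keep inner punctuation as-is.
    return key.strip(), value.strip().strip('"').strip("'")

print(parse_frontmatter_line("author: First Last <user@example.com>, Team"))
```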
skills/official/microsoft/python/monitoring/ingestion/SKILL.md

---
name: azure-monitor-ingestion-py
description: |
  Azure Monitor Ingestion SDK for Python. Use for sending custom logs to Log Analytics workspace via Logs Ingestion API.
  Triggers: "azure-monitor-ingestion", "LogsIngestionClient", "custom logs", "DCR", "data collection rule", "Log Analytics".
package: azure-monitor-ingestion
---

# Azure Monitor Ingestion SDK for Python

Send custom logs to Azure Monitor Log Analytics workspace using the Logs Ingestion API.

## Installation

```bash
pip install azure-monitor-ingestion
pip install azure-identity
```

## Environment Variables

```bash
# Data Collection Endpoint (DCE)
AZURE_DCE_ENDPOINT=https://<dce-name>.<region>.ingest.monitor.azure.com

# Data Collection Rule (DCR) immutable ID
AZURE_DCR_RULE_ID=dcr-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Stream name from DCR
AZURE_DCR_STREAM_NAME=Custom-MyTable_CL
```

## Prerequisites

Before using this SDK, you need:

1. **Log Analytics Workspace** — Target for your logs
2. **Data Collection Endpoint (DCE)** — Ingestion endpoint
3. **Data Collection Rule (DCR)** — Defines schema and destination
4. **Custom Table** — In Log Analytics (created via DCR or manually)

## Authentication

```python
import os

from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

client = LogsIngestionClient(
    endpoint=os.environ["AZURE_DCE_ENDPOINT"],
    credential=DefaultAzureCredential()
)
```

## Upload Custom Logs

```python
import os

from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

client = LogsIngestionClient(
    endpoint=os.environ["AZURE_DCE_ENDPOINT"],
    credential=DefaultAzureCredential()
)

rule_id = os.environ["AZURE_DCR_RULE_ID"]
stream_name = os.environ["AZURE_DCR_STREAM_NAME"]

logs = [
    {"TimeGenerated": "2024-01-15T10:00:00Z", "Computer": "server1", "Message": "Application started"},
    {"TimeGenerated": "2024-01-15T10:01:00Z", "Computer": "server1", "Message": "Processing request"},
    {"TimeGenerated": "2024-01-15T10:02:00Z", "Computer": "server2", "Message": "Connection established"}
]

client.upload(rule_id=rule_id, stream_name=stream_name, logs=logs)
```

## Upload from JSON File

```python
import json

with open("logs.json", "r") as f:
    logs = json.load(f)

client.upload(rule_id=rule_id, stream_name=stream_name, logs=logs)
```

## Custom Error Handling

Handle partial failures with a callback:

```python
failed_logs = []

def on_error(error):
    print(f"Upload failed: {error.error}")
    failed_logs.extend(error.failed_logs)

client.upload(
    rule_id=rule_id,
    stream_name=stream_name,
    logs=logs,
    on_error=on_error
)

# Retry failed logs
if failed_logs:
    print(f"Retrying {len(failed_logs)} failed logs...")
    client.upload(rule_id=rule_id, stream_name=stream_name, logs=failed_logs)
```

## Ignore Errors

```python
def ignore_errors(error):
    pass  # Silently ignore upload failures

client.upload(
    rule_id=rule_id,
    stream_name=stream_name,
    logs=logs,
    on_error=ignore_errors
)
```

## Async Client

```python
import asyncio

from azure.identity.aio import DefaultAzureCredential
from azure.monitor.ingestion.aio import LogsIngestionClient

async def upload_logs():
    async with LogsIngestionClient(
        endpoint=endpoint,
        credential=DefaultAzureCredential()
    ) as client:
        await client.upload(
            rule_id=rule_id,
            stream_name=stream_name,
            logs=logs
        )

asyncio.run(upload_logs())
```

## Sovereign Clouds

```python
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Azure Government
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
client = LogsIngestionClient(
    endpoint="https://example.ingest.monitor.azure.us",
    credential=credential,
    credential_scopes=["https://monitor.azure.us/.default"]
)
```

## Batching Behavior

The SDK automatically:
- Splits logs into chunks of 1MB or less
- Compresses each chunk with gzip
- Uploads chunks in parallel

No manual batching needed for large log sets.
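The size-based splitting can be pictured with a standalone sketch (illustration only — the real chunking lives inside the SDK; `chunk_logs` is a hypothetical helper, shown here with a tiny 4 KB limit so the split is visible):

```python
import gzip
import json

def chunk_logs(logs, max_bytes=1024 * 1024):
    """Greedily split log entries so each JSON-serialized chunk stays under max_bytes."""
    chunks, current, size = [], [], 2  # 2 bytes for the surrounding "[]"
    for entry in logs:
        entry_size = len(json.dumps(entry).encode()) + 2  # +2 for ", " separator
        if current and size + entry_size > max_bytes:
            chunks.append(current)
            current, size = [], 2
        current.append(entry)
        size += entry_size
    if current:
        chunks.append(current)
    return chunks

logs = [{"TimeGenerated": f"2024-01-15T10:{i % 60:02d}:00Z", "Message": "x" * 200}
        for i in range(60)]
chunks = chunk_logs(logs, max_bytes=4096)  # small limit to force splitting
payloads = [gzip.compress(json.dumps(c).encode()) for c in chunks]
```

With the SDK you skip all of this and pass the full list to `client.upload` directly.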

## Client Types

| Client | Purpose |
|--------|---------|
| `LogsIngestionClient` | Sync client for uploading logs |
| `LogsIngestionClient` (aio) | Async client for uploading logs |

## Key Concepts

| Concept | Description |
|---------|-------------|
| **DCE** | Data Collection Endpoint — ingestion URL |
| **DCR** | Data Collection Rule — defines schema, transformations, destination |
| **Stream** | Named data flow within a DCR |
| **Custom Table** | Target table in Log Analytics (ends with `_CL`) |

## DCR Stream Name Format

Stream names follow these patterns:
- `Custom-<TableName>_CL` — For custom tables
- `Microsoft-<TableName>` — For built-in tables
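A quick sanity check against these two patterns (a sketch — `is_valid_stream_name` is a hypothetical helper, not part of the SDK):

```python
import re

# Matches the two documented patterns: Custom-<TableName>_CL and Microsoft-<TableName>.
_STREAM_RE = re.compile(r"^(Custom-\w+_CL|Microsoft-\w+)$")

def is_valid_stream_name(name: str) -> bool:
    return _STREAM_RE.fullmatch(name) is not None
```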

## Best Practices

1. **Use DefaultAzureCredential** for authentication
2. **Handle errors gracefully** — use the `on_error` callback for partial failures
3. **Include TimeGenerated** — Required field for all logs
4. **Match DCR schema** — Log fields must match DCR column definitions
5. **Use the async client** for high-throughput scenarios
6. **Batch uploads** — SDK handles batching, but send reasonable chunks
7. **Monitor ingestion** — Check Log Analytics for ingestion status
8. **Use a context manager** — Ensures proper client cleanup
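Practices 3 and 4 can be enforced with a small pre-flight check before calling `upload` (a sketch under an assumed schema — `DCR_COLUMNS` and `find_invalid_logs` are illustrative, not SDK features):

```python
DCR_COLUMNS = {"TimeGenerated", "Computer", "Message"}  # assumed DCR column set

def find_invalid_logs(logs, columns=DCR_COLUMNS):
    """Return indices of entries missing TimeGenerated or using undeclared columns."""
    return [
        i for i, entry in enumerate(logs)
        if "TimeGenerated" not in entry or not set(entry) <= columns
    ]
```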

---
name: azure-monitor-opentelemetry-exporter-py
description: |
  Azure Monitor OpenTelemetry Exporter for Python. Use for low-level OpenTelemetry export to Application Insights.
  Triggers: "azure-monitor-opentelemetry-exporter", "AzureMonitorTraceExporter", "AzureMonitorMetricExporter", "AzureMonitorLogExporter".
package: azure-monitor-opentelemetry-exporter
---

# Azure Monitor OpenTelemetry Exporter for Python

Low-level exporter for sending OpenTelemetry traces, metrics, and logs to Application Insights.

## Installation

```bash
pip install azure-monitor-opentelemetry-exporter
```

## Environment Variables

```bash
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=xxx;IngestionEndpoint=https://xxx.in.applicationinsights.azure.com/
```

## When to Use

| Scenario | Use |
|----------|-----|
| Quick setup, auto-instrumentation | `azure-monitor-opentelemetry` (distro) |
| Custom OpenTelemetry pipeline | `azure-monitor-opentelemetry-exporter` (this) |
| Fine-grained control over telemetry | `azure-monitor-opentelemetry-exporter` (this) |

## Trace Exporter

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

# Create exporter
exporter = AzureMonitorTraceExporter(
    connection_string="InstrumentationKey=xxx;..."
)

# Configure tracer provider
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(exporter)
)

# Use tracer
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("my-span"):
    print("Hello, World!")
```

## Metric Exporter

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from azure.monitor.opentelemetry.exporter import AzureMonitorMetricExporter

# Create exporter
exporter = AzureMonitorMetricExporter(
    connection_string="InstrumentationKey=xxx;..."
)

# Configure meter provider
reader = PeriodicExportingMetricReader(exporter, export_interval_millis=60000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

# Use meter
meter = metrics.get_meter(__name__)
counter = meter.create_counter("requests_total")
counter.add(1, {"route": "/api/users"})
```

## Log Exporter

```python
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from azure.monitor.opentelemetry.exporter import AzureMonitorLogExporter

# Create exporter
exporter = AzureMonitorLogExporter(
    connection_string="InstrumentationKey=xxx;..."
)

# Configure logger provider
logger_provider = LoggerProvider()
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
set_logger_provider(logger_provider)

# Add handler to Python logging
handler = LoggingHandler(level=logging.INFO, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)

# Use logging
logger = logging.getLogger(__name__)
logger.info("This will be sent to Application Insights")
```

## From Environment Variable

Exporters read `APPLICATIONINSIGHTS_CONNECTION_STRING` automatically:

```python
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

# Connection string from environment
exporter = AzureMonitorTraceExporter()
```

## Azure AD Authentication

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

exporter = AzureMonitorTraceExporter(
    credential=DefaultAzureCredential()
)
```

## Sampling

Use `ApplicationInsightsSampler` for consistent sampling:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from azure.monitor.opentelemetry.exporter import ApplicationInsightsSampler

# Sample 10% of traces
sampler = ApplicationInsightsSampler(sampling_ratio=0.1)

trace.set_tracer_provider(TracerProvider(sampler=sampler))
```

## Offline Storage

Configure offline storage for retry:

```python
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

exporter = AzureMonitorTraceExporter(
    connection_string="...",
    storage_directory="/path/to/storage",  # Custom storage path
    disable_offline_storage=False  # Enable retry (default)
)
```

## Disable Offline Storage

```python
exporter = AzureMonitorTraceExporter(
    connection_string="...",
    disable_offline_storage=True  # No retry on failure
)
```

## Sovereign Clouds

```python
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

# Azure Government
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
exporter = AzureMonitorTraceExporter(
    connection_string="InstrumentationKey=xxx;IngestionEndpoint=https://xxx.in.applicationinsights.azure.us/",
    credential=credential
)
```

## Exporter Types

| Exporter | Telemetry Type | Application Insights Table |
|----------|----------------|----------------------------|
| `AzureMonitorTraceExporter` | Traces/Spans | requests, dependencies, exceptions |
| `AzureMonitorMetricExporter` | Metrics | customMetrics, performanceCounters |
| `AzureMonitorLogExporter` | Logs | traces, customEvents |

## Configuration Options

| Parameter | Description | Default |
|-----------|-------------|---------|
| `connection_string` | Application Insights connection string | From env var |
| `credential` | Azure credential for AAD auth | None |
| `disable_offline_storage` | Disable retry storage | False |
| `storage_directory` | Custom storage path | Temp directory |

## Best Practices

1. **Use BatchSpanProcessor** for production (not SimpleSpanProcessor)
2. **Use ApplicationInsightsSampler** for consistent sampling across services
3. **Enable offline storage** for reliability in production
4. **Use AAD authentication** instead of instrumentation keys
5. **Set export intervals** appropriate for your workload
6. **Use the distro** (`azure-monitor-opentelemetry`) unless you need custom pipelines

---
name: azure-monitor-opentelemetry-py
description: |
  Azure Monitor OpenTelemetry Distro for Python. Use for one-line Application Insights setup with auto-instrumentation.
  Triggers: "azure-monitor-opentelemetry", "configure_azure_monitor", "Application Insights", "OpenTelemetry distro", "auto-instrumentation".
package: azure-monitor-opentelemetry
---

# Azure Monitor OpenTelemetry Distro for Python

One-line setup for Application Insights with OpenTelemetry auto-instrumentation.

## Installation

```bash
pip install azure-monitor-opentelemetry
```

## Environment Variables

```bash
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=xxx;IngestionEndpoint=https://xxx.in.applicationinsights.azure.com/
```

## Quick Start

```python
from azure.monitor.opentelemetry import configure_azure_monitor

# One-line setup - reads connection string from environment
configure_azure_monitor()

# Your application code...
```

## Explicit Configuration

```python
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    connection_string="InstrumentationKey=xxx;IngestionEndpoint=https://xxx.in.applicationinsights.azure.com/"
)
```

## With Flask

```python
from flask import Flask
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor()

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    app.run()
```

## With Django

```python
# settings.py
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor()

# Django settings...
```

## With FastAPI

```python
from fastapi import FastAPI
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor()

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello World"}
```

## Custom Traces

```python
from opentelemetry import trace
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor()

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("my-operation") as span:
    span.set_attribute("custom.attribute", "value")
    # Do work...
```

## Custom Metrics

```python
from opentelemetry import metrics
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor()

meter = metrics.get_meter(__name__)
counter = meter.create_counter("my_counter")

counter.add(1, {"dimension": "value"})
```

## Custom Logs

```python
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor()

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

logger.info("This will appear in Application Insights")
logger.error("Errors are captured too", exc_info=True)
```

## Sampling

```python
from azure.monitor.opentelemetry import configure_azure_monitor

# Sample 10% of requests
configure_azure_monitor(
    sampling_ratio=0.1
)
```

## Cloud Role Name

Set the cloud role name for Application Map:

```python
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME

configure_azure_monitor(
    resource=Resource.create({SERVICE_NAME: "my-service-name"})
)
```

## Disable Specific Instrumentations

```python
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    instrumentations=["flask", "requests"]  # Only enable these
)
```

## Enable Live Metrics

```python
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    enable_live_metrics=True
)
```

## Azure AD Authentication

```python
from azure.monitor.opentelemetry import configure_azure_monitor
from azure.identity import DefaultAzureCredential

configure_azure_monitor(
    credential=DefaultAzureCredential()
)
```

## Auto-Instrumentations Included

| Library | Telemetry Type |
|---------|----------------|
| Flask | Traces |
| Django | Traces |
| FastAPI | Traces |
| Requests | Traces |
| urllib3 | Traces |
| httpx | Traces |
| aiohttp | Traces |
| psycopg2 | Traces |
| pymysql | Traces |
| pymongo | Traces |
| redis | Traces |

## Configuration Options

| Parameter | Description | Default |
|-----------|-------------|---------|
| `connection_string` | Application Insights connection string | From env var |
| `credential` | Azure credential for AAD auth | None |
| `sampling_ratio` | Sampling rate (0.0 to 1.0) | 1.0 |
| `resource` | OpenTelemetry Resource | Auto-detected |
| `instrumentations` | List of instrumentations to enable | All |
| `enable_live_metrics` | Enable Live Metrics stream | False |

## Best Practices

1. **Call `configure_azure_monitor()` early** — Before importing instrumented libraries
2. **Use environment variables** for the connection string in production
3. **Set the cloud role name** for multi-service applications
4. **Enable sampling** in high-traffic applications
5. **Use structured logging** for better Log Analytics queries
6. **Add custom attributes** to spans for better debugging
7. **Use AAD authentication** for production workloads

skills/official/microsoft/python/monitoring/query/SKILL.md

---
name: azure-monitor-query-py
description: |
  Azure Monitor Query SDK for Python. Use for querying Log Analytics workspaces and Azure Monitor metrics.
  Triggers: "azure-monitor-query", "LogsQueryClient", "MetricsQueryClient", "Log Analytics", "Kusto queries", "Azure metrics".
package: azure-monitor-query
---

# Azure Monitor Query SDK for Python

Query logs and metrics from Azure Monitor and Log Analytics workspaces.

## Installation

```bash
pip install azure-monitor-query
```

## Environment Variables

```bash
# Log Analytics
AZURE_LOG_ANALYTICS_WORKSPACE_ID=<workspace-id>

# Metrics
AZURE_METRICS_RESOURCE_URI=/subscriptions/<sub>/resourceGroups/<rg>/providers/<provider>/<type>/<name>
```

## Authentication

```python
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
```

## Logs Query Client

### Basic Query

```python
import os

from datetime import timedelta

from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(credential)

query = """
AppRequests
| where TimeGenerated > ago(1h)
| summarize count() by bin(TimeGenerated, 5m), ResultCode
| order by TimeGenerated desc
"""

response = client.query_workspace(
    workspace_id=os.environ["AZURE_LOG_ANALYTICS_WORKSPACE_ID"],
    query=query,
    timespan=timedelta(hours=1)
)

for table in response.tables:
    for row in table.rows:
        print(row)
```

### Query with Time Range

```python
from datetime import datetime, timezone

response = client.query_workspace(
    workspace_id=workspace_id,
    query="AppRequests | take 10",
    timespan=(
        datetime(2024, 1, 1, tzinfo=timezone.utc),
        datetime(2024, 1, 2, tzinfo=timezone.utc)
    )
)
```

### Convert to DataFrame

```python
import pandas as pd

response = client.query_workspace(workspace_id, query, timespan=timedelta(hours=1))

if response.tables:
    table = response.tables[0]
    df = pd.DataFrame(data=table.rows, columns=[col.name for col in table.columns])
    print(df.head())
```

### Batch Query

```python
from azure.monitor.query import LogsBatchQuery

queries = [
    LogsBatchQuery(workspace_id=workspace_id, query="AppRequests | take 5", timespan=timedelta(hours=1)),
    LogsBatchQuery(workspace_id=workspace_id, query="AppExceptions | take 5", timespan=timedelta(hours=1))
]

responses = client.query_batch(queries)

for response in responses:
    if response.tables:
        print(f"Rows: {len(response.tables[0].rows)}")
```

### Handle Partial Results

```python
from azure.monitor.query import LogsQueryStatus

response = client.query_workspace(workspace_id, query, timespan=timedelta(hours=24))

if response.status == LogsQueryStatus.PARTIAL:
    print(f"Partial results: {response.partial_error}")
elif response.status == LogsQueryStatus.FAILURE:
    print(f"Query failed: {response.partial_error}")
```

## Metrics Query Client

### Query Resource Metrics

```python
import os

from datetime import timedelta

from azure.monitor.query import MetricsQueryClient

metrics_client = MetricsQueryClient(credential)

response = metrics_client.query_resource(
    resource_uri=os.environ["AZURE_METRICS_RESOURCE_URI"],
    metric_names=["Percentage CPU", "Network In Total"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5)
)

for metric in response.metrics:
    print(f"{metric.name}:")
    for time_series in metric.timeseries:
        for data in time_series.data:
            print(f"  {data.timestamp}: {data.average}")
```

### Aggregations

```python
from azure.monitor.query import MetricAggregationType

response = metrics_client.query_resource(
    resource_uri=resource_uri,
    metric_names=["Requests"],
    timespan=timedelta(hours=1),
    aggregations=[
        MetricAggregationType.AVERAGE,
        MetricAggregationType.MAXIMUM,
        MetricAggregationType.MINIMUM,
        MetricAggregationType.COUNT
    ]
)
```

### Filter by Dimension

```python
response = metrics_client.query_resource(
    resource_uri=resource_uri,
    metric_names=["Requests"],
    timespan=timedelta(hours=1),
    filter="ApiName eq 'GetBlob'"
)
```

### List Metric Definitions

```python
definitions = metrics_client.list_metric_definitions(resource_uri)
for definition in definitions:
    print(f"{definition.name}: {definition.unit}")
```

### List Metric Namespaces

```python
namespaces = metrics_client.list_metric_namespaces(resource_uri)
for ns in namespaces:
    print(ns.fully_qualified_namespace)
```

## Async Clients

```python
from datetime import timedelta

from azure.identity.aio import DefaultAzureCredential
from azure.monitor.query.aio import LogsQueryClient, MetricsQueryClient

async def query_logs():
    credential = DefaultAzureCredential()
    client = LogsQueryClient(credential)

    response = await client.query_workspace(
        workspace_id=workspace_id,
        query="AppRequests | take 10",
        timespan=timedelta(hours=1)
    )

    await client.close()
    await credential.close()
    return response
```

## Common Kusto Queries

```kusto
// Requests by status code
AppRequests
| summarize count() by ResultCode
| order by count_ desc

// Exceptions over time
AppExceptions
| summarize count() by bin(TimeGenerated, 1h)

// Slow requests
AppRequests
| where DurationMs > 1000
| project TimeGenerated, Name, DurationMs
| order by DurationMs desc

// Top errors
AppExceptions
| summarize count() by ExceptionType
| top 10 by count_
```

## Client Types

| Client | Purpose |
|--------|---------|
| `LogsQueryClient` | Query Log Analytics workspaces |
| `MetricsQueryClient` | Query Azure Monitor metrics |

## Best Practices

1. **Use timedelta** for relative time ranges
2. **Handle partial results** for large queries
3. **Use batch queries** when running multiple queries
4. **Set appropriate granularity** for metrics to reduce data points
5. **Convert to DataFrame** for easier data analysis
6. **Use aggregations** to summarize metric data
7. **Filter by dimensions** to narrow metric results