feat: Add Official Microsoft & Gemini Skills (845+ Total)
## 🚀 Impact

Significantly expands the capabilities of **Antigravity Awesome Skills** by integrating official skill collections from **Microsoft** and **Google Gemini**. This update increases the total skill count to **845+**, making the library even more comprehensive for AI coding assistants.

## ✨ Key Changes

### 1. New Official Skills

- **Microsoft Skills**: Added a large collection of official skills from [microsoft/skills](https://github.com/microsoft/skills).
  - Includes Azure, .NET, Python, TypeScript, and Semantic Kernel skills.
  - Preserves the original directory structure under `skills/official/microsoft/`.
  - Includes plugin skills from the `.github/plugins` directory.
- **Gemini Skills**: Added official Gemini API development skills under `skills/gemini-api-dev/`.

### 2. New Scripts & Tooling

- **`scripts/sync_microsoft_skills.py`**: A robust synchronization script that:
  - Clones the official Microsoft repository.
  - Preserves the original directory hierarchy.
  - Handles symlinks and plugin locations.
  - Generates attribution metadata.
- **`scripts/tests/inspect_microsoft_repo.py`**: Debug tool to inspect the remote repository structure.
- **`scripts/tests/test_comprehensive_coverage.py`**: Verification script to ensure 100% of skills are captured during sync.

### 3. Core Improvements

- **`scripts/generate_index.py`**: Enhanced frontmatter parsing to safely handle unquoted values containing `@` symbols and commas (fixing issues with some Microsoft skill descriptions).
- **`package.json`**: Added `sync:microsoft` and `sync:all-official` scripts for easy maintenance.

### 4. Documentation

- Updated `README.md` to reflect the new skill count (845+) and added Microsoft/Gemini to the provider list.
- Updated `CATALOG.md` and `skills_index.json` with the new skills.

## 🧪 Verification

- Ran `scripts/tests/test_comprehensive_coverage.py` to verify all Microsoft skills are detected.
- Validated the `generate_index.py` fixes by successfully indexing the new skills.
skills/official/microsoft/java/monitoring/ingestion/SKILL.md (new file, 230 lines)
@@ -0,0 +1,230 @@
---
name: azure-monitor-ingestion-java
description: |
  Azure Monitor Ingestion SDK for Java. Send custom logs to Azure Monitor via Data Collection Rules (DCR) and Data Collection Endpoints (DCE).
  Triggers: "LogsIngestionClient java", "azure monitor ingestion java", "custom logs java", "DCR java", "data collection rule java".
package: com.azure:azure-monitor-ingestion
---
# Azure Monitor Ingestion SDK for Java

Client library for sending custom logs to Azure Monitor using the Logs Ingestion API via Data Collection Rules.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-monitor-ingestion</artifactId>
    <version>1.2.11</version>
</dependency>
```

Or use the Azure SDK BOM:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.azure</groupId>
            <artifactId>azure-sdk-bom</artifactId>
            <version>{bom_version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>com.azure</groupId>
        <artifactId>azure-monitor-ingestion</artifactId>
    </dependency>
</dependencies>
```
## Prerequisites

- Data Collection Endpoint (DCE)
- Data Collection Rule (DCR)
- Log Analytics workspace
- Target table (custom or built-in: CommonSecurityLog, SecurityEvents, Syslog, WindowsEvents)

## Environment Variables

```bash
DATA_COLLECTION_ENDPOINT=https://<dce-name>.<region>.ingest.monitor.azure.com
DATA_COLLECTION_RULE_ID=dcr-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
STREAM_NAME=Custom-MyTable_CL
```

## Client Creation

### Synchronous Client

```java
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.monitor.ingestion.LogsIngestionClient;
import com.azure.monitor.ingestion.LogsIngestionClientBuilder;

DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

LogsIngestionClient client = new LogsIngestionClientBuilder()
    .endpoint("<data-collection-endpoint>")
    .credential(credential)
    .buildClient();
```

### Asynchronous Client

```java
import com.azure.monitor.ingestion.LogsIngestionAsyncClient;

LogsIngestionAsyncClient asyncClient = new LogsIngestionClientBuilder()
    .endpoint("<data-collection-endpoint>")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildAsyncClient();
```

## Key Concepts

| Concept | Description |
|---------|-------------|
| Data Collection Endpoint (DCE) | Ingestion endpoint URL for your region |
| Data Collection Rule (DCR) | Defines data transformation and routing to tables |
| Stream Name | Target stream in the DCR (e.g., `Custom-MyTable_CL`) |
| Log Analytics Workspace | Destination for ingested logs |
## Core Operations

### Upload Custom Logs

```java
import java.util.List;
import java.util.ArrayList;

List<Object> logs = new ArrayList<>();
logs.add(new MyLogEntry("2024-01-15T10:30:00Z", "INFO", "Application started"));
logs.add(new MyLogEntry("2024-01-15T10:30:05Z", "DEBUG", "Processing request"));

client.upload("<data-collection-rule-id>", "<stream-name>", logs);
System.out.println("Logs uploaded successfully");
```

### Upload with Concurrency

For large log collections, enable concurrent uploads:

```java
import com.azure.monitor.ingestion.models.LogsUploadOptions;
import com.azure.core.util.Context;

List<Object> logs = getLargeLogs(); // Large collection

LogsUploadOptions options = new LogsUploadOptions()
    .setMaxConcurrency(3);

client.upload("<data-collection-rule-id>", "<stream-name>", logs, options, Context.NONE);
```

### Upload with Error Handling

Handle partial upload failures gracefully:

```java
LogsUploadOptions options = new LogsUploadOptions()
    .setLogsUploadErrorConsumer(uploadError -> {
        System.err.println("Upload error: " + uploadError.getResponseException().getMessage());
        System.err.println("Failed logs count: " + uploadError.getFailedLogs().size());

        // Option 1: Log and continue
        // Option 2: Throw to abort remaining uploads
        // throw uploadError.getResponseException();
    });

client.upload("<data-collection-rule-id>", "<stream-name>", logs, options, Context.NONE);
```

### Async Upload with Reactor

```java
import reactor.core.publisher.Mono;

List<Object> logs = getLogs();

asyncClient.upload("<data-collection-rule-id>", "<stream-name>", logs)
    .doOnSuccess(v -> System.out.println("Upload completed"))
    .doOnError(e -> System.err.println("Upload failed: " + e.getMessage()))
    .subscribe();
```
## Log Entry Model Example

```java
public class MyLogEntry {
    private String timeGenerated;
    private String level;
    private String message;

    public MyLogEntry(String timeGenerated, String level, String message) {
        this.timeGenerated = timeGenerated;
        this.level = level;
        this.message = message;
    }

    // Getters required for JSON serialization
    public String getTimeGenerated() { return timeGenerated; }
    public String getLevel() { return level; }
    public String getMessage() { return message; }
}
```

## Error Handling

```java
import com.azure.core.exception.HttpResponseException;

try {
    client.upload(ruleId, streamName, logs);
} catch (HttpResponseException e) {
    System.err.println("HTTP Status: " + e.getResponse().getStatusCode());
    System.err.println("Error: " + e.getMessage());

    if (e.getResponse().getStatusCode() == 403) {
        System.err.println("Check DCR permissions and managed identity");
    } else if (e.getResponse().getStatusCode() == 404) {
        System.err.println("Verify DCE endpoint and DCR ID");
    }
}
```
## Best Practices

1. **Batch logs** — Upload in batches rather than one at a time
2. **Use concurrency** — Set `maxConcurrency` for large uploads
3. **Handle partial failures** — Use error consumer to log failed entries
4. **Match DCR schema** — Log entry fields must match DCR transformation expectations
5. **Include TimeGenerated** — Most tables require a timestamp field
6. **Reuse client** — Create once, reuse throughout application
7. **Use async for high throughput** — `LogsIngestionAsyncClient` for reactive patterns
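Practice 1 can be sketched without any Azure dependency: split the collection into fixed-size chunks and hand each chunk to `client.upload(...)`. `LogBatcher` and the batch size of 1000 below are illustrative choices, not an SDK API (the SDK also performs its own internal batching):

```java
import java.util.ArrayList;
import java.util.List;

public class LogBatcher {
    // Split a large log collection into fixed-size batches so each
    // upload call carries a bounded payload.
    static <T> List<List<T>> partition(List<T> logs, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < logs.size(); i += batchSize) {
            batches.add(logs.subList(i, Math.min(i + batchSize, logs.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> logs = new ArrayList<>();
        for (int i = 0; i < 2500; i++) logs.add(i);

        List<List<Integer>> batches = partition(logs, 1000);
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 500
        // Each batch would then go to:
        // client.upload(ruleId, streamName, batch);
    }
}
```

Combined with practice 2, each batch can also be uploaded with `LogsUploadOptions.setMaxConcurrency(...)` as shown earlier.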
## Querying Uploaded Logs

Use [azure-monitor-query](../query/SKILL.md) to query ingested logs:

```java
// See the azure-monitor-query skill for LogsQueryClient usage
String query = "MyTable_CL | where TimeGenerated > ago(1h) | limit 10";
```

## Reference Links

| Resource | URL |
|----------|-----|
| Maven Package | https://central.sonatype.com/artifact/com.azure/azure-monitor-ingestion |
| GitHub | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/monitor/azure-monitor-ingestion |
| Product Docs | https://learn.microsoft.com/azure/azure-monitor/logs/logs-ingestion-api-overview |
| DCE Overview | https://learn.microsoft.com/azure/azure-monitor/essentials/data-collection-endpoint-overview |
| DCR Overview | https://learn.microsoft.com/azure/azure-monitor/essentials/data-collection-rule-overview |
| Troubleshooting | https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-ingestion/TROUBLESHOOTING.md |
@@ -0,0 +1,284 @@
---
name: azure-monitor-opentelemetry-exporter-java
description: |
  Azure Monitor OpenTelemetry Exporter for Java. Export OpenTelemetry traces, metrics, and logs to Azure Monitor/Application Insights.
  Triggers: "AzureMonitorExporter java", "opentelemetry azure java", "application insights java otel", "azure monitor tracing java".
  Note: This package is DEPRECATED. Migrate to azure-monitor-opentelemetry-autoconfigure.
package: com.azure:azure-monitor-opentelemetry-exporter
---
# Azure Monitor OpenTelemetry Exporter for Java

> **⚠️ DEPRECATION NOTICE**: This package is deprecated. Migrate to `azure-monitor-opentelemetry-autoconfigure`.
>
> See the [Migration Guide](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-opentelemetry-exporter/MIGRATION.md) for detailed instructions.

Export OpenTelemetry telemetry data to Azure Monitor / Application Insights.

## Installation (Deprecated)

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-monitor-opentelemetry-exporter</artifactId>
    <version>1.0.0-beta.x</version>
</dependency>
```

## Recommended: Use Autoconfigure Instead

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-monitor-opentelemetry-autoconfigure</artifactId>
    <version>LATEST</version>
</dependency>
```
## Environment Variables

```bash
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=xxx;IngestionEndpoint=https://xxx.in.applicationinsights.azure.com/
```

## Basic Setup with Autoconfigure (Recommended)

### Using Environment Variable

```java
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdkBuilder;
import io.opentelemetry.api.OpenTelemetry;
import com.azure.monitor.opentelemetry.exporter.AzureMonitorExporter;

// Connection string from APPLICATIONINSIGHTS_CONNECTION_STRING env var
AutoConfiguredOpenTelemetrySdkBuilder sdkBuilder = AutoConfiguredOpenTelemetrySdk.builder();
AzureMonitorExporter.customize(sdkBuilder);
OpenTelemetry openTelemetry = sdkBuilder.build().getOpenTelemetrySdk();
```

### With Explicit Connection String

```java
AutoConfiguredOpenTelemetrySdkBuilder sdkBuilder = AutoConfiguredOpenTelemetrySdk.builder();
AzureMonitorExporter.customize(sdkBuilder, "{connection-string}");
OpenTelemetry openTelemetry = sdkBuilder.build().getOpenTelemetrySdk();
```
## Creating Spans

```java
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.context.Scope;

// Get tracer
Tracer tracer = openTelemetry.getTracer("com.example.myapp");

// Create span
Span span = tracer.spanBuilder("myOperation").startSpan();

try (Scope scope = span.makeCurrent()) {
    // Your application logic
    doWork();
} catch (Throwable t) {
    span.recordException(t);
    throw t;
} finally {
    span.end();
}
```

## Adding Span Attributes

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;

Span span = tracer.spanBuilder("processOrder")
    .setAttribute("order.id", "12345")
    .setAttribute("customer.tier", "premium")
    .startSpan();

try (Scope scope = span.makeCurrent()) {
    // Add attributes during execution
    span.setAttribute("items.count", 3);
    span.setAttribute("total.amount", 99.99);

    processOrder();
} finally {
    span.end();
}
```
## Custom Span Processor

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.sdk.trace.SpanProcessor;
import io.opentelemetry.sdk.trace.ReadWriteSpan;
import io.opentelemetry.sdk.trace.ReadableSpan;
import io.opentelemetry.context.Context;

private static final AttributeKey<String> CUSTOM_ATTR = AttributeKey.stringKey("custom.attribute");

SpanProcessor customProcessor = new SpanProcessor() {
    @Override
    public void onStart(Context context, ReadWriteSpan span) {
        // Add custom attribute to every span
        span.setAttribute(CUSTOM_ATTR, "customValue");
    }

    @Override
    public boolean isStartRequired() {
        return true;
    }

    @Override
    public void onEnd(ReadableSpan span) {
        // Post-processing if needed
    }

    @Override
    public boolean isEndRequired() {
        return false;
    }
};

// Register processor
AutoConfiguredOpenTelemetrySdkBuilder sdkBuilder = AutoConfiguredOpenTelemetrySdk.builder();
AzureMonitorExporter.customize(sdkBuilder);

sdkBuilder.addTracerProviderCustomizer(
    (sdkTracerProviderBuilder, configProperties) ->
        sdkTracerProviderBuilder.addSpanProcessor(customProcessor)
);

OpenTelemetry openTelemetry = sdkBuilder.build().getOpenTelemetrySdk();
```

## Nested Spans

```java
public void parentOperation() {
    Span parentSpan = tracer.spanBuilder("parentOperation").startSpan();
    try (Scope scope = parentSpan.makeCurrent()) {
        childOperation();
    } finally {
        parentSpan.end();
    }
}

public void childOperation() {
    // Automatically links to parent via Context
    Span childSpan = tracer.spanBuilder("childOperation").startSpan();
    try (Scope scope = childSpan.makeCurrent()) {
        // Child work
    } finally {
        childSpan.end();
    }
}
```
## Recording Exceptions

```java
import io.opentelemetry.api.trace.StatusCode;

Span span = tracer.spanBuilder("riskyOperation").startSpan();
try (Scope scope = span.makeCurrent()) {
    performRiskyWork();
} catch (Exception e) {
    span.recordException(e);
    span.setStatus(StatusCode.ERROR, e.getMessage());
    throw e;
} finally {
    span.end();
}
```

## Metrics (via OpenTelemetry)

```java
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.LongHistogram;

Meter meter = openTelemetry.getMeter("com.example.myapp");

// Counter
LongCounter requestCounter = meter.counterBuilder("http.requests")
    .setDescription("Total HTTP requests")
    .setUnit("requests")
    .build();

requestCounter.add(1, Attributes.of(
    AttributeKey.stringKey("http.method"), "GET",
    AttributeKey.longKey("http.status_code"), 200L
));

// Histogram
LongHistogram latencyHistogram = meter.histogramBuilder("http.latency")
    .setDescription("Request latency")
    .setUnit("ms")
    .ofLongs()
    .build();

latencyHistogram.record(150, Attributes.of(
    AttributeKey.stringKey("http.route"), "/api/users"
));
```
## Key Concepts

| Concept | Description |
|---------|-------------|
| Connection String | Application Insights connection string with instrumentation key |
| Tracer | Creates spans for distributed tracing |
| Span | Represents a unit of work with timing and attributes |
| SpanProcessor | Intercepts span lifecycle for customization |
| Exporter | Sends telemetry to Azure Monitor |

## Migration to Autoconfigure

The `azure-monitor-opentelemetry-autoconfigure` package provides:

- Automatic instrumentation of common libraries
- Simplified configuration
- Better integration with the OpenTelemetry SDK

### Migration Steps

1. Replace the dependency:

   ```xml
   <!-- Remove -->
   <dependency>
       <groupId>com.azure</groupId>
       <artifactId>azure-monitor-opentelemetry-exporter</artifactId>
   </dependency>

   <!-- Add -->
   <dependency>
       <groupId>com.azure</groupId>
       <artifactId>azure-monitor-opentelemetry-autoconfigure</artifactId>
   </dependency>
   ```

2. Update initialization code per the [Migration Guide](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-opentelemetry-exporter/MIGRATION.md)

## Best Practices

1. **Use autoconfigure** — Migrate to `azure-monitor-opentelemetry-autoconfigure`
2. **Set meaningful span names** — Use descriptive operation names
3. **Add relevant attributes** — Include contextual data for debugging
4. **Handle exceptions** — Always record exceptions on spans
5. **Use semantic conventions** — Follow OpenTelemetry semantic conventions
6. **End spans in finally** — Ensure spans are always ended
7. **Use try-with-resources** — Scope management with try-with-resources pattern
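Practices 4 and 6 combine into one control-flow shape: record the exception, rethrow, and end the span in `finally` so it is closed on both the success and the failure path. The sketch below uses a tiny `FakeSpan` stand-in (not the OpenTelemetry API) so the pattern can be checked in isolation:

```java
import java.util.ArrayList;
import java.util.List;

public class SpanLifecycleDemo {
    // Minimal stand-in for an OpenTelemetry Span; records lifecycle events.
    static class FakeSpan {
        boolean ended = false;
        final List<String> events = new ArrayList<>();
        void recordException(Throwable t) { events.add("exception:" + t.getMessage()); }
        void end() { ended = true; }
    }

    // Record the exception, rethrow, and end the span in finally —
    // the same shape as the real try/catch/finally blocks above.
    static void runTraced(FakeSpan span, Runnable work) {
        try {
            work.run();
        } catch (RuntimeException e) {
            span.recordException(e);
            throw e;
        } finally {
            span.end();
        }
    }

    public static void main(String[] args) {
        FakeSpan ok = new FakeSpan();
        runTraced(ok, () -> {});
        System.out.println("ended on success: " + ok.ended);

        FakeSpan failed = new FakeSpan();
        try {
            runTraced(failed, () -> { throw new RuntimeException("boom"); });
        } catch (RuntimeException expected) {
            // the exception still propagates to the caller
        }
        System.out.println("ended on failure: " + failed.ended);
        System.out.println("recorded: " + failed.events);
    }
}
```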
## Reference Links

| Resource | URL |
|----------|-----|
| Maven Package | https://central.sonatype.com/artifact/com.azure/azure-monitor-opentelemetry-exporter |
| GitHub | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/monitor/azure-monitor-opentelemetry-exporter |
| Migration Guide | https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-opentelemetry-exporter/MIGRATION.md |
| Autoconfigure Package | https://central.sonatype.com/artifact/com.azure/azure-monitor-opentelemetry-autoconfigure |
| OpenTelemetry Java | https://opentelemetry.io/docs/languages/java/ |
| Application Insights | https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview |
skills/official/microsoft/java/monitoring/query/SKILL.md (new file, 417 lines)
@@ -0,0 +1,417 @@
---
name: azure-monitor-query-java
description: |
  Azure Monitor Query SDK for Java. Execute Kusto queries against Log Analytics workspaces and query metrics from Azure resources.
  Triggers: "LogsQueryClient java", "MetricsQueryClient java", "kusto query java", "log analytics java", "azure monitor query java".
  Note: This package is deprecated. Migrate to azure-monitor-query-logs and azure-monitor-query-metrics.
package: com.azure:azure-monitor-query
---
# Azure Monitor Query SDK for Java

> **DEPRECATION NOTICE**: This package is deprecated in favor of:
> - `azure-monitor-query-logs` — For Log Analytics queries
> - `azure-monitor-query-metrics` — For metrics queries
>
> See the migration guides: [Logs Migration](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-query-logs/migration-guide.md) | [Metrics Migration](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-query-metrics/migration-guide.md)

Client library for querying Azure Monitor Logs and Metrics.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-monitor-query</artifactId>
    <version>1.5.9</version>
</dependency>
```

Or use the Azure SDK BOM:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.azure</groupId>
            <artifactId>azure-sdk-bom</artifactId>
            <version>{bom_version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>com.azure</groupId>
        <artifactId>azure-monitor-query</artifactId>
    </dependency>
</dependencies>
```
## Prerequisites

- Log Analytics workspace (for logs queries)
- Azure resource (for metrics queries)
- TokenCredential with appropriate permissions

## Environment Variables

```bash
LOG_ANALYTICS_WORKSPACE_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
AZURE_RESOURCE_ID=/subscriptions/{sub}/resourceGroups/{rg}/providers/{provider}/{resource}
```

## Client Creation

### LogsQueryClient (Sync)

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.monitor.query.LogsQueryClient;
import com.azure.monitor.query.LogsQueryClientBuilder;

LogsQueryClient logsClient = new LogsQueryClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
```

### LogsQueryAsyncClient

```java
import com.azure.monitor.query.LogsQueryAsyncClient;

LogsQueryAsyncClient logsAsyncClient = new LogsQueryClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildAsyncClient();
```
### MetricsQueryClient (Sync)

```java
import com.azure.monitor.query.MetricsQueryClient;
import com.azure.monitor.query.MetricsQueryClientBuilder;

MetricsQueryClient metricsClient = new MetricsQueryClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
```

### MetricsQueryAsyncClient

```java
import com.azure.monitor.query.MetricsQueryAsyncClient;

MetricsQueryAsyncClient metricsAsyncClient = new MetricsQueryClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildAsyncClient();
```

### Sovereign Cloud Configuration

```java
// Azure China Cloud - Logs
LogsQueryClient logsClient = new LogsQueryClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .endpoint("https://api.loganalytics.azure.cn/v1")
    .buildClient();

// Azure China Cloud - Metrics
MetricsQueryClient metricsClient = new MetricsQueryClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .endpoint("https://management.chinacloudapi.cn")
    .buildClient();
```
## Key Concepts

| Concept | Description |
|---------|-------------|
| Logs | Log and performance data from Azure resources via Kusto Query Language |
| Metrics | Numeric time-series data collected at regular intervals |
| Workspace ID | Log Analytics workspace identifier |
| Resource ID | Azure resource URI for metrics queries |
| QueryTimeInterval | Time range for the query |

## Logs Query Operations

### Basic Query

```java
import com.azure.monitor.query.models.LogsQueryResult;
import com.azure.monitor.query.models.LogsTableRow;
import com.azure.monitor.query.models.QueryTimeInterval;
import java.time.Duration;

LogsQueryResult result = logsClient.queryWorkspace(
    "{workspace-id}",
    "AzureActivity | summarize count() by ResourceGroup | top 10 by count_",
    new QueryTimeInterval(Duration.ofDays(7))
);

for (LogsTableRow row : result.getTable().getRows()) {
    System.out.println(row.getColumnValue("ResourceGroup") + ": " + row.getColumnValue("count_"));
}
```

### Query by Resource ID

```java
LogsQueryResult result = logsClient.queryResource(
    "{resource-id}",
    "AzureMetrics | where TimeGenerated > ago(1h)",
    new QueryTimeInterval(Duration.ofDays(1))
);

for (LogsTableRow row : result.getTable().getRows()) {
    System.out.println(row.getColumnValue("MetricName") + " " + row.getColumnValue("Average"));
}
```
### Map Results to Custom Model

```java
// Define model class
public class ActivityLog {
    private String resourceGroup;
    private String operationName;

    public String getResourceGroup() { return resourceGroup; }
    public String getOperationName() { return operationName; }
}

// Query with model mapping
List<ActivityLog> logs = logsClient.queryWorkspace(
    "{workspace-id}",
    "AzureActivity | project ResourceGroup, OperationName | take 100",
    new QueryTimeInterval(Duration.ofDays(2)),
    ActivityLog.class
);

for (ActivityLog log : logs) {
    System.out.println(log.getOperationName() + " - " + log.getResourceGroup());
}
```

### Batch Query

```java
import com.azure.monitor.query.models.LogsBatchQuery;
import com.azure.monitor.query.models.LogsBatchQueryResult;
import com.azure.monitor.query.models.LogsBatchQueryResultCollection;
import com.azure.monitor.query.models.LogsQueryResultStatus;
import com.azure.core.util.Context;

LogsBatchQuery batchQuery = new LogsBatchQuery();
String q1 = batchQuery.addWorkspaceQuery("{workspace-id}", "AzureActivity | count", new QueryTimeInterval(Duration.ofDays(1)));
String q2 = batchQuery.addWorkspaceQuery("{workspace-id}", "Heartbeat | count", new QueryTimeInterval(Duration.ofDays(1)));
String q3 = batchQuery.addWorkspaceQuery("{workspace-id}", "Perf | count", new QueryTimeInterval(Duration.ofDays(1)));

LogsBatchQueryResultCollection results = logsClient
    .queryBatchWithResponse(batchQuery, Context.NONE)
    .getValue();

LogsBatchQueryResult result1 = results.getResult(q1);
LogsBatchQueryResult result2 = results.getResult(q2);
LogsBatchQueryResult result3 = results.getResult(q3);

// Check for failures
if (result3.getQueryResultStatus() == LogsQueryResultStatus.FAILURE) {
    System.err.println("Query failed: " + result3.getError().getMessage());
}
```
### Query with Options

```java
import com.azure.monitor.query.models.LogsQueryOptions;
import com.azure.core.http.rest.Response;
import com.azure.core.util.BinaryData;

LogsQueryOptions options = new LogsQueryOptions()
    .setServerTimeout(Duration.ofMinutes(10))
    .setIncludeStatistics(true)
    .setIncludeVisualization(true);

Response<LogsQueryResult> response = logsClient.queryWorkspaceWithResponse(
    "{workspace-id}",
    "AzureActivity | summarize count() by bin(TimeGenerated, 1h)",
    new QueryTimeInterval(Duration.ofDays(7)),
    options,
    Context.NONE
);

LogsQueryResult result = response.getValue();

// Access statistics
BinaryData statistics = result.getStatistics();
// Access visualization data
BinaryData visualization = result.getVisualization();
```

### Query Multiple Workspaces

```java
import java.util.Arrays;

LogsQueryOptions options = new LogsQueryOptions()
    .setAdditionalWorkspaces(Arrays.asList("{workspace-id-2}", "{workspace-id-3}"));

Response<LogsQueryResult> response = logsClient.queryWorkspaceWithResponse(
    "{workspace-id-1}",
    "AzureActivity | summarize count() by TenantId",
    new QueryTimeInterval(Duration.ofDays(1)),
    options,
    Context.NONE
);
```
## Metrics Query Operations
|
||||
|
||||
### Basic Metrics Query
|
||||
|
||||
```java
|
||||
import com.azure.monitor.query.models.MetricsQueryResult;
|
||||
import com.azure.monitor.query.models.MetricResult;
|
||||
import com.azure.monitor.query.models.TimeSeriesElement;
|
||||
import com.azure.monitor.query.models.MetricValue;
|
||||
import java.util.Arrays;
|
||||
|
||||
MetricsQueryResult result = metricsClient.queryResource(
|
||||
"{resource-uri}",
|
||||
Arrays.asList("SuccessfulCalls", "TotalCalls")
|
||||
);
|
||||
|
||||
for (MetricResult metric : result.getMetrics()) {
|
||||
System.out.println("Metric: " + metric.getMetricName());
|
||||
for (TimeSeriesElement ts : metric.getTimeSeries()) {
|
||||
System.out.println(" Dimensions: " + ts.getMetadata());
|
||||
for (MetricValue value : ts.getValues()) {
|
||||
System.out.println(" " + value.getTimeStamp() + ": " + value.getTotal());
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Metrics with Aggregations

```java
import com.azure.monitor.query.models.MetricsQueryOptions;
import com.azure.monitor.query.models.AggregationType;

Response<MetricsQueryResult> response = metricsClient.queryResourceWithResponse(
    "{resource-id}",
    Arrays.asList("SuccessfulCalls", "TotalCalls"),
    new MetricsQueryOptions()
        .setGranularity(Duration.ofHours(1))
        .setAggregations(Arrays.asList(AggregationType.AVERAGE, AggregationType.COUNT)),
    Context.NONE
);

MetricsQueryResult result = response.getValue();
```

### Query Multiple Resources (MetricsClient)

```java
import com.azure.monitor.query.MetricsClient;
import com.azure.monitor.query.MetricsClientBuilder;
import com.azure.monitor.query.models.MetricsQueryResourcesResult;

MetricsClient metricsClient = new MetricsClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .endpoint("{endpoint}")
    .buildClient();

MetricsQueryResourcesResult result = metricsClient.queryResources(
    Arrays.asList("{resourceId1}", "{resourceId2}"),
    Arrays.asList("{metric1}", "{metric2}"),
    "{metricNamespace}"
);

for (MetricsQueryResult queryResult : result.getMetricsQueryResults()) {
    for (MetricResult metric : queryResult.getMetrics()) {
        System.out.println(metric.getMetricName());
        metric.getTimeSeries().stream()
            .flatMap(ts -> ts.getValues().stream())
            .forEach(mv -> System.out.println(
                mv.getTimeStamp() + " Count=" + mv.getCount() + " Avg=" + mv.getAverage()));
    }
}
```

## Response Structure

### Logs Response Hierarchy

```
LogsQueryResult
├── statistics (BinaryData)
├── visualization (BinaryData)
├── error
└── tables (List<LogsTable>)
    ├── name
    ├── columns (List<LogsTableColumn>)
    │   ├── name
    │   └── type
    └── rows (List<LogsTableRow>)
        ├── rowIndex
        └── rowCells (List<LogsTableCell>)
```

### Metrics Response Hierarchy

```
MetricsQueryResult
├── granularity
├── timeInterval
├── namespace
├── resourceRegion
└── metrics (List<MetricResult>)
    ├── id, name, type, unit
    └── timeSeries (List<TimeSeriesElement>)
        ├── metadata (dimensions)
        └── values (List<MetricValue>)
            ├── timeStamp
            ├── count, average, total
            └── maximum, minimum
```

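The columns/rows layout above pairs each row's cells positionally with the table's column list. A self-contained sketch of flattening that shape into name→value maps (plain strings stand in for `LogsTableColumn`/`LogsTableCell`; the class and method names here are illustrative, not SDK API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TableFlattenExample {
    // Zip column names with each row's cells, mirroring LogsTable's
    // parallel columns/rows structure.
    static List<Map<String, String>> toMaps(List<String> columns, List<List<String>> rows) {
        List<Map<String, String>> out = new ArrayList<>();
        for (List<String> row : rows) {
            Map<String, String> m = new LinkedHashMap<>();
            for (int i = 0; i < columns.size(); i++) {
                m.put(columns.get(i), row.get(i));
            }
            out.add(m);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, String>> maps = toMaps(
            List.of("TenantId", "count_"),
            List.of(List.of("t1", "42"))
        );
        System.out.println(maps); // [{TenantId=t1, count_=42}]
    }
}
```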
## Error Handling

```java
import com.azure.core.exception.HttpResponseException;
import com.azure.monitor.query.models.LogsQueryResultStatus;

try {
    LogsQueryResult result = logsClient.queryWorkspace(workspaceId, query, timeInterval);

    // Check for partial failure
    if (result.getQueryResultStatus() == LogsQueryResultStatus.PARTIAL_FAILURE) {
        System.err.println("Partial failure: " + result.getError().getMessage());
    }
} catch (HttpResponseException e) {
    System.err.println("Query failed: " + e.getMessage());
    System.err.println("Status: " + e.getResponse().getStatusCode());
}
```

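For transient failures (e.g. HTTP 429 throttling or 503), a bounded retry with exponential backoff is often worth attempting before surfacing the error. A minimal self-contained sketch of such a policy; note that the Azure SDK's built-in retry policy already covers much of this, and the status codes, delays, and class name here are illustrative:

```java
public class RetryExample {
    // Decide whether a failed query is worth retrying based on its HTTP status.
    static boolean isRetryable(int statusCode) {
        return statusCode == 429 || statusCode == 503;
    }

    // Exponential backoff delay in milliseconds for a given attempt (0-based).
    static long backoffMillis(int attempt, long baseMillis) {
        return baseMillis * (1L << attempt);
    }

    public static void main(String[] args) {
        System.out.println(isRetryable(429));       // true
        System.out.println(backoffMillis(3, 250));  // 2000
    }
}
```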
## Best Practices

1. **Use batch queries** — Combine multiple queries into a single request
2. **Set appropriate timeouts** — Long queries may need an extended server timeout
3. **Limit result size** — Use `top` or `take` in Kusto queries
4. **Use projections** — Select only needed columns with `project`
5. **Check query status** — Handle `PARTIAL_FAILURE` results gracefully
6. **Cache results** — Metrics don't change frequently; cache when appropriate
7. **Migrate to new packages** — Plan migration to `azure-monitor-query-logs` and `azure-monitor-query-metrics`

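Practices 3 and 4 can be applied directly in the query text before it is sent. A self-contained sketch of assembling such a query string (the table name, column names, and the `limitedQuery` helper are illustrative):

```java
public class KqlExample {
    // Build a KQL query that projects only the needed columns
    // and caps the number of rows returned.
    static String limitedQuery(String table, int maxRows, String... columns) {
        return table
            + " | project " + String.join(", ", columns)
            + " | take " + maxRows;
    }

    public static void main(String[] args) {
        String query = limitedQuery("AppRequests", 100, "TimeGenerated", "Name", "DurationMs");
        System.out.println(query);
        // AppRequests | project TimeGenerated, Name, DurationMs | take 100
    }
}
```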
## Reference Links

| Resource | URL |
|----------|-----|
| Maven Package | https://central.sonatype.com/artifact/com.azure/azure-monitor-query |
| GitHub | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/monitor/azure-monitor-query |
| API Reference | https://learn.microsoft.com/java/api/com.azure.monitor.query |
| Kusto Query Language | https://learn.microsoft.com/azure/data-explorer/kusto/query/ |
| Log Analytics Limits | https://learn.microsoft.com/azure/azure-monitor/service-limits#la-query-api |
| Troubleshooting | https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/monitor/azure-monitor-query/TROUBLESHOOTING.md |