feat: Add Official Microsoft & Gemini Skills (845+ Total)
🚀 Impact

Significantly expands the capabilities of **Antigravity Awesome Skills** by integrating official skill collections from **Microsoft** and **Google Gemini**. This update increases the total skill count to **845+**, making the library even more comprehensive for AI coding assistants.

✨ Key Changes

1. New Official Skills
   - **Microsoft Skills**: Added a massive collection of official skills from [microsoft/skills](https://github.com/microsoft/skills).
     - Includes Azure, .NET, Python, TypeScript, and Semantic Kernel skills.
     - Preserves the original directory structure under `skills/official/microsoft/`.
     - Includes plugin skills from the `.github/plugins` directory.
   - **Gemini Skills**: Added official Gemini API development skills under `skills/gemini-api-dev/`.
2. New Scripts & Tooling
   - **`scripts/sync_microsoft_skills.py`**: A robust synchronization script that:
     - Clones the official Microsoft repository.
     - Preserves the original directory hierarchy.
     - Handles symlinks and plugin locations.
     - Generates attribution metadata.
   - **`scripts/tests/inspect_microsoft_repo.py`**: Debug tool to inspect the remote repository structure.
   - **`scripts/tests/test_comprehensive_coverage.py`**: Verification script to ensure 100% of skills are captured during sync.
3. Core Improvements
   - **`scripts/generate_index.py`**: Enhanced frontmatter parsing to safely handle unquoted values containing `@` symbols and commas (fixing issues with some Microsoft skill descriptions).
   - **`package.json`**: Added `sync:microsoft` and `sync:all-official` scripts for easy maintenance.
4. Documentation
   - Updated `README.md` to reflect the new skill count (845+) and added Microsoft/Gemini to the provider list.
   - Updated `CATALOG.md` and `skills_index.json` with the new skills.

🧪 Verification

- Ran `scripts/tests/test_comprehensive_coverage.py` to verify all Microsoft skills are detected.
- Validated the `generate_index.py` fixes by successfully indexing the new skills.

skills/official/microsoft/java/data/blob/SKILL.md (new file, +388 lines)

---
name: azure-storage-blob-java
description: Build blob storage applications with Azure Storage Blob SDK for Java. Use when uploading, downloading, or managing files in Azure Blob Storage, working with containers, or implementing streaming data operations.
package: com.azure:azure-storage-blob
---

# Azure Storage Blob SDK for Java

Build blob storage applications using the Azure Storage Blob SDK for Java.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-storage-blob</artifactId>
    <version>12.33.0</version>
</dependency>
```

## Client Creation

### BlobServiceClient

```java
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

// With SAS token
BlobServiceClient serviceClient = new BlobServiceClientBuilder()
    .endpoint("<storage-account-url>")
    .sasToken("<sas-token>")
    .buildClient();

// With connection string
BlobServiceClient serviceClient = new BlobServiceClientBuilder()
    .connectionString("<connection-string>")
    .buildClient();
```

### With DefaultAzureCredential

```java
import com.azure.identity.DefaultAzureCredentialBuilder;

BlobServiceClient serviceClient = new BlobServiceClientBuilder()
    .endpoint("<storage-account-url>")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
```

### BlobContainerClient

```java
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobContainerClientBuilder;

// From service client
BlobContainerClient containerClient = serviceClient.getBlobContainerClient("mycontainer");

// Direct construction
BlobContainerClient containerClient = new BlobContainerClientBuilder()
    .connectionString("<connection-string>")
    .containerName("mycontainer")
    .buildClient();
```

### BlobClient

```java
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;

// From container client
BlobClient blobClient = containerClient.getBlobClient("myblob.txt");

// With directory structure
BlobClient blobClient = containerClient.getBlobClient("folder/subfolder/myblob.txt");

// Direct construction
BlobClient blobClient = new BlobClientBuilder()
    .connectionString("<connection-string>")
    .containerName("mycontainer")
    .blobName("myblob.txt")
    .buildClient();
```

## Core Patterns

### Create Container

```java
// Create container
serviceClient.createBlobContainer("mycontainer");

// Create if not exists
BlobContainerClient container = serviceClient.createBlobContainerIfNotExists("mycontainer");

// From container client
containerClient.create();
containerClient.createIfNotExists();
```

### Upload Data

```java
import com.azure.core.util.BinaryData;

// Upload string
String data = "Hello, Azure Blob Storage!";
blobClient.upload(BinaryData.fromString(data));

// Upload with overwrite
blobClient.upload(BinaryData.fromString(data), true);
```

### Upload from File

```java
blobClient.uploadFromFile("local-file.txt");

// With overwrite
blobClient.uploadFromFile("local-file.txt", true);
```

### Upload from Stream

```java
import com.azure.storage.blob.specialized.BlockBlobClient;
import java.io.ByteArrayInputStream;

BlockBlobClient blockBlobClient = blobClient.getBlockBlobClient();

try (ByteArrayInputStream dataStream = new ByteArrayInputStream(data.getBytes())) {
    blockBlobClient.upload(dataStream, data.length());
}
```

### Upload with Options

```java
import com.azure.core.util.Context;
import com.azure.storage.blob.models.BlobHttpHeaders;
import com.azure.storage.blob.options.BlobParallelUploadOptions;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Map;

BlobHttpHeaders headers = new BlobHttpHeaders()
    .setContentType("text/plain")
    .setCacheControl("max-age=3600");

Map<String, String> metadata = Map.of("author", "john", "version", "1.0");

try (InputStream stream = new FileInputStream("large-file.bin")) {
    BlobParallelUploadOptions options = new BlobParallelUploadOptions(stream)
        .setHeaders(headers)
        .setMetadata(metadata);

    blobClient.uploadWithResponse(options, null, Context.NONE);
}
```

### Upload if Not Exists

```java
import com.azure.storage.blob.models.BlobRequestConditions;

BlobParallelUploadOptions options = new BlobParallelUploadOptions(inputStream, length)
    .setRequestConditions(new BlobRequestConditions().setIfNoneMatch("*"));

blobClient.uploadWithResponse(options, null, Context.NONE);
```

### Download Data

```java
// Download to BinaryData
BinaryData content = blobClient.downloadContent();
String text = content.toString();

// Download to file
blobClient.downloadToFile("downloaded-file.txt");
```

### Download to Stream

```java
import java.io.ByteArrayOutputStream;

try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
    blobClient.downloadStream(outputStream);
    byte[] data = outputStream.toByteArray();
}
```

### Download with InputStream

```java
import com.azure.storage.blob.specialized.BlobInputStream;

try (BlobInputStream blobIS = blobClient.openInputStream()) {
    byte[] buffer = new byte[1024];
    int bytesRead;
    while ((bytesRead = blobIS.read(buffer)) != -1) {
        // Process buffer
    }
}
```

### Upload via OutputStream

```java
import com.azure.storage.blob.specialized.BlobOutputStream;

try (BlobOutputStream blobOS = blobClient.getBlockBlobClient().getBlobOutputStream()) {
    blobOS.write("Data to upload".getBytes());
}
```

### List Blobs

```java
import com.azure.storage.blob.models.BlobItem;
import com.azure.storage.blob.models.ListBlobsOptions;

// List all blobs
for (BlobItem blobItem : containerClient.listBlobs()) {
    System.out.println("Blob: " + blobItem.getName());
}

// List with prefix (virtual directory)
ListBlobsOptions options = new ListBlobsOptions().setPrefix("folder/");
for (BlobItem blobItem : containerClient.listBlobs(options, null)) {
    System.out.println("Blob: " + blobItem.getName());
}
```

### List Blobs by Hierarchy

```java
import com.azure.storage.blob.models.BlobListDetails;

String delimiter = "/";
ListBlobsOptions options = new ListBlobsOptions()
    .setPrefix("data/")
    .setDetails(new BlobListDetails().setRetrieveMetadata(true));

for (BlobItem item : containerClient.listBlobsByHierarchy(delimiter, options, null)) {
    if (item.isPrefix()) {
        System.out.println("Directory: " + item.getName());
    } else {
        System.out.println("Blob: " + item.getName());
    }
}
```

### Delete Blob

```java
import com.azure.storage.blob.models.DeleteSnapshotsOptionType;

blobClient.delete();

// Delete if exists
blobClient.deleteIfExists();

// Delete with snapshots
blobClient.deleteWithResponse(DeleteSnapshotsOptionType.INCLUDE, null, null, Context.NONE);
```

### Copy Blob

```java
import com.azure.core.util.polling.SyncPoller;
import com.azure.storage.blob.models.BlobCopyInfo;
import java.time.Duration;

// Async copy (for large blobs or cross-account copies)
SyncPoller<BlobCopyInfo, Void> poller = blobClient.beginCopy("<source-blob-url>", Duration.ofSeconds(1));
poller.waitForCompletion();

// Sync copy from URL (for blobs in the same account)
blobClient.copyFromUrl("<source-blob-url>");
```

### Generate SAS Token

```java
import com.azure.storage.blob.sas.*;
import java.time.OffsetDateTime;

// Blob-level SAS
BlobSasPermission permissions = new BlobSasPermission().setReadPermission(true);
OffsetDateTime expiry = OffsetDateTime.now().plusDays(1);

BlobServiceSasSignatureValues sasValues = new BlobServiceSasSignatureValues(expiry, permissions);
String sasToken = blobClient.generateSas(sasValues);

// Container-level SAS
BlobContainerSasPermission containerPermissions = new BlobContainerSasPermission()
    .setReadPermission(true)
    .setListPermission(true);

BlobServiceSasSignatureValues containerSasValues = new BlobServiceSasSignatureValues(expiry, containerPermissions);
String containerSas = containerClient.generateSas(containerSasValues);
```

### Blob Properties and Metadata

```java
import com.azure.storage.blob.models.BlobProperties;
import java.util.Map;

// Get properties
BlobProperties properties = blobClient.getProperties();
System.out.println("Size: " + properties.getBlobSize());
System.out.println("Content-Type: " + properties.getContentType());
System.out.println("Last Modified: " + properties.getLastModified());

// Set metadata
Map<String, String> metadata = Map.of("key1", "value1", "key2", "value2");
blobClient.setMetadata(metadata);

// Set HTTP headers
BlobHttpHeaders headers = new BlobHttpHeaders()
    .setContentType("application/json")
    .setCacheControl("max-age=86400");
blobClient.setHttpHeaders(headers);
```

### Lease Blob

```java
import com.azure.storage.blob.specialized.BlobLeaseClient;
import com.azure.storage.blob.specialized.BlobLeaseClientBuilder;

BlobLeaseClient leaseClient = new BlobLeaseClientBuilder()
    .blobClient(blobClient)
    .buildClient();

// Acquire a 60-second lease (pass -1 for an infinite lease)
String leaseId = leaseClient.acquireLease(60);

// Renew lease
leaseClient.renewLease();

// Release lease
leaseClient.releaseLease();
```

## Error Handling

```java
import com.azure.storage.blob.models.BlobStorageException;

try {
    blobClient.downloadStream(outputStream);
} catch (BlobStorageException e) {
    System.out.println("Status: " + e.getStatusCode());
    System.out.println("Error code: " + e.getErrorCode());
    // 404 = Blob not found
    // 409 = Conflict (lease, etc.)
}
```

## Proxy Configuration

```java
import com.azure.core.http.ProxyOptions;
import com.azure.core.http.netty.NettyAsyncHttpClientBuilder;
import java.net.InetSocketAddress;

ProxyOptions proxyOptions = new ProxyOptions(
    ProxyOptions.Type.HTTP,
    new InetSocketAddress("localhost", 8888));

BlobServiceClient client = new BlobServiceClientBuilder()
    .endpoint("<endpoint>")
    .sasToken("<sas-token>")
    .httpClient(new NettyAsyncHttpClientBuilder().proxy(proxyOptions).build())
    .buildClient();
```

## Environment Variables

```bash
AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=...
AZURE_STORAGE_ACCOUNT_URL=https://<account>.blob.core.windows.net
```
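
A connection string like the one above is a semicolon-delimited list of `Key=Value` pairs. The SDK parses it internally; as a minimal sketch of the format, a standalone helper (hypothetical, not part of the Azure SDK) might split one into a map like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnectionStringParser {
    // Split "Key1=val1;Key2=val2" into an ordered map.
    // Values may themselves contain '=' (e.g. base64 account keys),
    // so only the first '=' in each segment delimits the key.
    public static Map<String, String> parse(String connectionString) {
        Map<String, String> settings = new LinkedHashMap<>();
        for (String part : connectionString.split(";")) {
            if (part.isEmpty()) {
                continue;
            }
            int eq = part.indexOf('=');
            if (eq > 0) {
                settings.put(part.substring(0, eq), part.substring(eq + 1));
            }
        }
        return settings;
    }
}
```

This is useful when you need a single field (say, `AccountName`) without constructing a client.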

## Trigger Phrases

- "Azure Blob Storage Java"
- "upload download blob"
- "blob container SDK"
- "storage streaming"
- "SAS token generation"
- "blob metadata properties"

skills/official/microsoft/java/data/cosmos/SKILL.md (new file, +258 lines)

---
name: azure-cosmos-java
description: |
  Azure Cosmos DB SDK for Java. NoSQL database operations with global distribution, multi-model support, and reactive patterns.
  Triggers: "CosmosClient java", "CosmosAsyncClient", "cosmos database java", "cosmosdb java", "document database java".
package: azure-cosmos
---

# Azure Cosmos DB SDK for Java

Client library for the Azure Cosmos DB NoSQL API with global distribution and reactive patterns.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-cosmos</artifactId>
    <version>LATEST</version>
</dependency>
```

Or use the Azure SDK BOM:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.azure</groupId>
            <artifactId>azure-sdk-bom</artifactId>
            <version>{bom_version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>com.azure</groupId>
        <artifactId>azure-cosmos</artifactId>
    </dependency>
</dependencies>
```

## Environment Variables

```bash
COSMOS_ENDPOINT=https://<account>.documents.azure.com:443/
COSMOS_KEY=<your-primary-key>
```

## Authentication

### Key-based Authentication

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;

CosmosClient client = new CosmosClientBuilder()
    .endpoint(System.getenv("COSMOS_ENDPOINT"))
    .key(System.getenv("COSMOS_KEY"))
    .buildClient();
```

### Async Client

```java
import com.azure.cosmos.CosmosAsyncClient;

CosmosAsyncClient asyncClient = new CosmosClientBuilder()
    .endpoint(serviceEndpoint)
    .key(key)
    .buildAsyncClient();
```

### With Customizations

```java
import com.azure.cosmos.ConsistencyLevel;
import com.azure.cosmos.DirectConnectionConfig;
import com.azure.cosmos.GatewayConnectionConfig;
import java.util.Arrays;

DirectConnectionConfig directConnectionConfig = DirectConnectionConfig.getDefaultConfig();
GatewayConnectionConfig gatewayConnectionConfig = GatewayConnectionConfig.getDefaultConfig();

CosmosClient client = new CosmosClientBuilder()
    .endpoint(serviceEndpoint)
    .key(key)
    .directMode(directConnectionConfig, gatewayConnectionConfig)
    .consistencyLevel(ConsistencyLevel.SESSION)
    .connectionSharingAcrossClientsEnabled(true)
    .contentResponseOnWriteEnabled(true)
    .userAgentSuffix("my-application")
    .preferredRegions(Arrays.asList("West US", "East US"))
    .buildClient();
```

## Client Hierarchy

| Class | Purpose |
|-------|---------|
| `CosmosClient` / `CosmosAsyncClient` | Account-level operations |
| `CosmosDatabase` / `CosmosAsyncDatabase` | Database operations |
| `CosmosContainer` / `CosmosAsyncContainer` | Container/item operations |

## Core Workflow

### Create Database

```java
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosDatabaseResponse;

// Sync
CosmosDatabaseResponse response = client.createDatabaseIfNotExists("myDatabase");
CosmosDatabase database = client.getDatabase(response.getProperties().getId());

// Async with chaining
asyncClient.createDatabaseIfNotExists("myDatabase")
    .map(dbResponse -> asyncClient.getDatabase(dbResponse.getProperties().getId()))
    .subscribe(db -> System.out.println("Created: " + db.getId()));
```

### Create Container

```java
asyncClient.createDatabaseIfNotExists("myDatabase")
    .flatMap(dbResponse -> {
        String databaseId = dbResponse.getProperties().getId();
        return asyncClient.getDatabase(databaseId)
            .createContainerIfNotExists("myContainer", "/partitionKey")
            .map(containerResponse -> asyncClient.getDatabase(databaseId)
                .getContainer(containerResponse.getProperties().getId()));
    })
    .subscribe(container -> System.out.println("Container: " + container.getId()));
```

### CRUD Operations

```java
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.models.PartitionKey;

CosmosAsyncContainer container = asyncClient
    .getDatabase("myDatabase")
    .getContainer("myContainer");

// Create
container.createItem(new User("1", "John Doe", "john@example.com"))
    .flatMap(response -> {
        System.out.println("Created: " + response.getItem());
        // Read
        return container.readItem(
            response.getItem().getId(),
            new PartitionKey(response.getItem().getId()),
            User.class);
    })
    .flatMap(response -> {
        System.out.println("Read: " + response.getItem());
        // Update
        User user = response.getItem();
        user.setEmail("john.doe@example.com");
        return container.replaceItem(
            user,
            user.getId(),
            new PartitionKey(user.getId()));
    })
    .flatMap(response -> {
        // Delete
        return container.deleteItem(
            response.getItem().getId(),
            new PartitionKey(response.getItem().getId()));
    })
    .block();
```

### Query Documents

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.SqlParameter;
import com.azure.cosmos.models.SqlQuerySpec;
import com.azure.cosmos.util.CosmosPagedIterable;

CosmosContainer container = client.getDatabase("myDatabase").getContainer("myContainer");

// Parameterized query: bind @status via SqlQuerySpec
SqlQuerySpec query = new SqlQuerySpec(
    "SELECT * FROM c WHERE c.status = @status",
    new SqlParameter("@status", "active"));
CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();

CosmosPagedIterable<User> results = container.queryItems(
    query,
    options,
    User.class
);

results.forEach(user -> System.out.println("User: " + user.getName()));
```

## Key Concepts

### Partition Keys

Choose a partition key with:
- High cardinality (many distinct values)
- Even distribution of data and requests
- Frequent use in queries
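
When no single property has high enough cardinality, a common pattern is a synthetic partition key: append a stable hash-derived bucket suffix to an existing id so writes spread across partitions while items for the same id stay co-located. A minimal sketch (the helper and bucket count are illustrative, not an SDK API):

```java
public class SyntheticPartitionKey {
    // Derive "id-bucket" deterministically: the same id always maps to the
    // same bucket, so reads for one id still target a single partition.
    public static String forId(String id, int bucketCount) {
        int bucket = Math.floorMod(id.hashCode(), bucketCount);
        return id + "-" + bucket;
    }
}
```

The resulting value would be stored in the document's partition key property (e.g. `/partitionKey` as used in Create Container above).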

### Consistency Levels

| Level | Guarantee |
|-------|-----------|
| Strong | Linearizability |
| Bounded Staleness | Consistent prefix with bounded lag |
| Session | Consistent prefix within a session |
| Consistent Prefix | Reads never see out-of-order writes |
| Eventual | No ordering guarantee |

### Request Units (RUs)

All operations consume RUs. Check the charge on the response:

```java
import com.azure.cosmos.models.CosmosItemResponse;

CosmosItemResponse<User> response = container.createItem(user);
System.out.println("RU charge: " + response.getRequestCharge());
```

## Best Practices

1. **Reuse CosmosClient** — Create once, reuse throughout the application
2. **Use the async client** for high-throughput scenarios
3. **Choose the partition key carefully** — It affects performance and scalability
4. **Enable content response on write** for immediate access to created items
5. **Configure preferred regions** for geo-distributed applications
6. **Handle 429 errors** with retry policies (built in by default)
7. **Use direct mode** for the lowest latency in production
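
The SDK retries throttled (429) requests automatically; if you layer an application-level retry on top, a capped exponential backoff is the usual shape. A small sketch of just the delay calculation (base and cap values are illustrative, and the server-provided retry-after duration should take precedence when present):

```java
import java.time.Duration;

public class Backoff {
    // Delay for the nth attempt: base * 2^attempt, capped at `cap`.
    public static Duration delayFor(int attempt, Duration base, Duration cap) {
        long millis = base.toMillis() << Math.min(attempt, 20); // clamp shift to avoid overflow
        return millis >= cap.toMillis() ? cap : Duration.ofMillis(millis);
    }
}
```

In practice you would add jitter and honor `CosmosException.getRetryAfterDuration()` before falling back to this schedule.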

## Error Handling

```java
import com.azure.cosmos.CosmosException;

try {
    container.createItem(item);
} catch (CosmosException e) {
    System.err.println("Status: " + e.getStatusCode());
    System.err.println("Message: " + e.getMessage());
    System.err.println("Request charge: " + e.getRequestCharge());

    if (e.getStatusCode() == 409) {
        System.err.println("Item already exists");
    } else if (e.getStatusCode() == 429) {
        System.err.println("Rate limited, retry after: " + e.getRetryAfterDuration());
    }
}
```

## Reference Links

| Resource | URL |
|----------|-----|
| Maven Package | https://central.sonatype.com/artifact/com.azure/azure-cosmos |
| API Documentation | https://azuresdkdocs.z19.web.core.windows.net/java/azure-cosmos/latest/index.html |
| Product Docs | https://learn.microsoft.com/azure/cosmos-db/ |
| Samples | https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples |
| Performance Guide | https://learn.microsoft.com/azure/cosmos-db/performance-tips-java-sdk-v4-sql |
| Troubleshooting | https://learn.microsoft.com/azure/cosmos-db/troubleshoot-java-sdk-v4-sql |

skills/official/microsoft/java/data/tables/SKILL.md (new file, +334 lines)

---
name: azure-data-tables-java
description: Build table storage applications with Azure Tables SDK for Java. Use when working with Azure Table Storage or Cosmos DB Table API for NoSQL key-value data, schemaless storage, or structured data at scale.
package: com.azure:azure-data-tables
---

# Azure Tables SDK for Java

Build table storage applications using the Azure Tables SDK for Java. Works with both Azure Table Storage and the Cosmos DB Table API.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-data-tables</artifactId>
    <version>12.6.0-beta.1</version>
</dependency>
```

## Client Creation

### With Connection String

```java
import com.azure.data.tables.TableClient;
import com.azure.data.tables.TableServiceClient;
import com.azure.data.tables.TableServiceClientBuilder;

TableServiceClient serviceClient = new TableServiceClientBuilder()
    .connectionString("<your-connection-string>")
    .buildClient();
```

### With Shared Key

```java
import com.azure.core.credential.AzureNamedKeyCredential;

AzureNamedKeyCredential credential = new AzureNamedKeyCredential(
    "<account-name>",
    "<account-key>");

TableServiceClient serviceClient = new TableServiceClientBuilder()
    .endpoint("<your-table-account-url>")
    .credential(credential)
    .buildClient();
```

### With SAS Token

```java
TableServiceClient serviceClient = new TableServiceClientBuilder()
    .endpoint("<your-table-account-url>")
    .sasToken("<sas-token>")
    .buildClient();
```

### With DefaultAzureCredential (Storage only)

```java
import com.azure.identity.DefaultAzureCredentialBuilder;

TableServiceClient serviceClient = new TableServiceClientBuilder()
    .endpoint("<your-table-account-url>")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
```

## Key Concepts

- **TableServiceClient**: Manages tables (create, list, delete)
- **TableClient**: Manages entities within a table (CRUD)
- **Partition Key**: Groups entities for efficient queries
- **Row Key**: Unique identifier within a partition
- **Entity**: A row with up to 252 custom properties (1 MB in Table Storage, 2 MB in Cosmos DB)
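
Partition and row key values also have character restrictions: the Table service rejects `/`, `\`, `#`, `?`, and control characters. A hypothetical validation helper (not part of the SDK; the length check is a simplification of the service's 1 KiB size limit):

```java
public class TableKeys {
    // Characters the Table service disallows in PartitionKey/RowKey values.
    private static final String FORBIDDEN = "/\\#?";

    public static boolean isValidKey(String key) {
        if (key == null || key.isEmpty() || key.length() > 1024) {
            return false;
        }
        for (char c : key.toCharArray()) {
            // Control ranges U+0000–U+001F and U+007F–U+009F are also rejected.
            if (c < 0x20 || (c >= 0x7f && c <= 0x9f) || FORBIDDEN.indexOf(c) >= 0) {
                return false;
            }
        }
        return true;
    }
}
```

Validating keys client-side avoids a round trip that would otherwise fail with a 400.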

## Core Patterns

### Create Table

```java
// Create table (throws if it exists)
TableClient tableClient = serviceClient.createTable("mytable");

// Create if not exists (no exception)
TableClient tableClient = serviceClient.createTableIfNotExists("mytable");
```

### Get Table Client

```java
import com.azure.data.tables.TableClientBuilder;

// From service client
TableClient tableClient = serviceClient.getTableClient("mytable");

// Direct construction
TableClient tableClient = new TableClientBuilder()
    .connectionString("<connection-string>")
    .tableName("mytable")
    .buildClient();
```

### Create Entity

```java
import com.azure.data.tables.models.TableEntity;

TableEntity entity = new TableEntity("partitionKey", "rowKey")
    .addProperty("Name", "Product A")
    .addProperty("Price", 29.99)
    .addProperty("Quantity", 100)
    .addProperty("IsAvailable", true);

tableClient.createEntity(entity);
```

### Get Entity

```java
TableEntity entity = tableClient.getEntity("partitionKey", "rowKey");

String name = (String) entity.getProperty("Name");
Double price = (Double) entity.getProperty("Price");
System.out.printf("Product: %s, Price: %.2f%n", name, price);
```

### Update Entity

```java
import com.azure.data.tables.models.TableEntityUpdateMode;

// Merge (update only the specified properties)
TableEntity updateEntity = new TableEntity("partitionKey", "rowKey")
    .addProperty("Price", 24.99);
tableClient.updateEntity(updateEntity, TableEntityUpdateMode.MERGE);

// Replace (replace the entire entity)
TableEntity replaceEntity = new TableEntity("partitionKey", "rowKey")
    .addProperty("Name", "Product A Updated")
    .addProperty("Price", 24.99)
    .addProperty("Quantity", 150);
tableClient.updateEntity(replaceEntity, TableEntityUpdateMode.REPLACE);
```

### Upsert Entity

```java
// Insert or update (merge mode)
tableClient.upsertEntity(entity, TableEntityUpdateMode.MERGE);

// Insert or replace
tableClient.upsertEntity(entity, TableEntityUpdateMode.REPLACE);
```

### Delete Entity

```java
tableClient.deleteEntity("partitionKey", "rowKey");
```

### List Entities

```java
import com.azure.data.tables.models.ListEntitiesOptions;

// List all entities
for (TableEntity entity : tableClient.listEntities()) {
    System.out.printf("%s - %s%n",
        entity.getPartitionKey(),
        entity.getRowKey());
}

// With filtering and selection
ListEntitiesOptions options = new ListEntitiesOptions()
    .setFilter("PartitionKey eq 'sales'")
    .setSelect("Name", "Price");

for (TableEntity entity : tableClient.listEntities(options, null, null)) {
    System.out.printf("%s: %.2f%n",
        entity.getProperty("Name"),
        entity.getProperty("Price"));
}
```

### Query with OData Filter

```java
// Filter by partition key
ListEntitiesOptions options = new ListEntitiesOptions()
    .setFilter("PartitionKey eq 'electronics'");

// Filter with multiple conditions
options.setFilter("PartitionKey eq 'electronics' and Price gt 100");

// Filter with comparison operators
options.setFilter("Quantity ge 10 and Quantity le 100");

// Top N results
options.setTop(10);

for (TableEntity entity : tableClient.listEntities(options, null, null)) {
    System.out.println(entity.getRowKey());
}
```
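
Filter strings embed literal values directly, so a value containing a single quote must be escaped by doubling it per OData string-literal rules. A small helper for building equality filters (hypothetical, not an SDK API):

```java
public class ODataFilters {
    // Build "Prop eq 'value'", doubling any single quotes in the value.
    public static String eq(String property, String value) {
        return property + " eq '" + value.replace("'", "''") + "'";
    }
}
```

Without the escaping, a value like `O'Brien` would produce a malformed filter and a 400 response.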

### Batch Operations (Transactions)

```java
import com.azure.data.tables.models.TableTransactionAction;
import com.azure.data.tables.models.TableTransactionActionType;
import java.util.Arrays;
import java.util.List;

// All entities in a transaction must share the same partition key
List<TableTransactionAction> actions = Arrays.asList(
    new TableTransactionAction(
        TableTransactionActionType.CREATE,
        new TableEntity("batch", "row1").addProperty("Name", "Item 1")),
    new TableTransactionAction(
        TableTransactionActionType.CREATE,
        new TableEntity("batch", "row2").addProperty("Name", "Item 2")),
    new TableTransactionAction(
        TableTransactionActionType.UPSERT_MERGE,
        new TableEntity("batch", "row3").addProperty("Name", "Item 3"))
);

tableClient.submitTransaction(actions);
```

### List Tables

```java
import com.azure.data.tables.models.TableItem;
import com.azure.data.tables.models.ListTablesOptions;

// List all tables
for (TableItem table : serviceClient.listTables()) {
    System.out.println(table.getName());
}

// Filter tables
ListTablesOptions options = new ListTablesOptions()
    .setFilter("TableName eq 'mytable'");

for (TableItem table : serviceClient.listTables(options, null, null)) {
    System.out.println(table.getName());
}
```

### Delete Table

```java
serviceClient.deleteTable("mytable");
```

## Typed Entities

```java
import java.time.OffsetDateTime;

public class Product implements TableEntity {
    private String partitionKey;
    private String rowKey;
    private OffsetDateTime timestamp;
    private String eTag;
    private String name;
    private double price;

    // Getters and setters for all fields
    @Override
    public String getPartitionKey() { return partitionKey; }
    @Override
    public void setPartitionKey(String partitionKey) { this.partitionKey = partitionKey; }
    @Override
    public String getRowKey() { return rowKey; }
    @Override
    public void setRowKey(String rowKey) { this.rowKey = rowKey; }
    // ... other getters/setters

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
}

// Usage
Product product = new Product();
product.setPartitionKey("electronics");
product.setRowKey("laptop-001");
product.setName("Laptop");
product.setPrice(999.99);

tableClient.createEntity(product);
```

## Error Handling

```java
import com.azure.data.tables.models.TableServiceException;

try {
    tableClient.createEntity(entity);
} catch (TableServiceException e) {
    System.out.println("Status: " + e.getResponse().getStatusCode());
    System.out.println("Error: " + e.getMessage());
    // 409 = Conflict (entity exists)
    // 404 = Not Found
}
```

## Environment Variables

```bash
# Storage Account
AZURE_TABLES_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=...
AZURE_TABLES_ENDPOINT=https://<account>.table.core.windows.net

# Cosmos DB Table API
COSMOS_TABLE_ENDPOINT=https://<account>.table.cosmosdb.azure.com
```

## Best Practices

1. **Partition Key Design**: Choose keys that distribute load evenly
2. **Batch Operations**: Use transactions for atomic multi-entity updates
3. **Query Optimization**: Always filter by PartitionKey when possible
4. **Select Projection**: Select only the properties you need for better performance
5. **Entity Size**: Keep entities under 1 MB (Storage) or 2 MB (Cosmos)
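
A single table transaction targets one partition and is limited to 100 actions, so larger batches must be split before submission. A generic chunking helper (the 100-action cap is the Table service limit; the helper itself is illustrative, not an SDK API):

```java
import java.util.ArrayList;
import java.util.List;

public class Batches {
    // Split a list into sublists of at most maxSize elements each
    // (use 100 for Table transaction batches).
    public static <T> List<List<T>> chunk(List<T> items, int maxSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += maxSize) {
            chunks.add(new ArrayList<>(items.subList(i, Math.min(i + maxSize, items.size()))));
        }
        return chunks;
    }
}
```

Each resulting chunk can then be passed to `submitTransaction`, one call per chunk.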

## Trigger Phrases

- "Azure Tables Java"
- "table storage SDK"
- "Cosmos DB Table API"
- "NoSQL key-value storage"
- "partition key row key"
- "table entity CRUD"