style: Fix remaining lint issues - down to 11 errors (98% reduction)

Fixed all critical and high-priority ruff lint issues:

Exception Chaining (B904): 39 → 0 
- Auto-fixed 29 with Python script
- Manually fixed 10 remaining cases
- Added 'from err' or 'from None' to all raise statements in except blocks
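The two B904 fixes applied here follow the standard pattern; a minimal sketch (function and message names are illustrative, not from the diff):

```python
def parse_port(value: str) -> int:
    try:
        return int(value)
    except ValueError as err:
        # 'from err' chains the original exception as __cause__,
        # so the full traceback is preserved (fixes B904)
        raise RuntimeError(f"invalid port: {value!r}") from err

def parse_port_quiet(value: str) -> int:
    try:
        return int(value)
    except ValueError:
        # 'from None' deliberately suppresses the implicit context
        # when the underlying exception adds no useful information
        raise RuntimeError(f"invalid port: {value!r}") from None
```

`from err` is the default choice; `from None` is reserved for cases where leaking the inner exception would only add noise.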

Unused Imports (F401): 5 → 0 
- Removed unused chromadb.config.Settings import
- Removed unused fastapi.responses.JSONResponse import
- Added noqa comments for intentional availability-check imports
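An availability-check import of the kind kept here looks roughly like this (the `chromadb` module name is from the message above; the flag name is illustrative):

```python
# The import exists only to probe whether the optional dependency is
# installed, so ruff's F401 (unused import) is silenced with noqa.
try:
    import chromadb  # noqa: F401
    CHROMADB_AVAILABLE = True
except ImportError:
    CHROMADB_AVAILABLE = False
```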

Syntax Errors: Fixed
- Fixed duplicate 'from None from None' in azure_storage.py
- Fixed undefined 'e' in embedding_pipeline.py

Results:
- Before: 447 errors
- Fixed: 436 errors (98% reduction!)
- Remaining: 11 errors (all minor style improvements)

Remaining non-critical issues:
- 3 SIM105: Could use contextlib.suppress (style)
- 3 SIM117: Multiple with statements (style)
- 2 ARG001: Unused function arguments (acceptable)
- 3 others: bare-except, collapsible-if, enumerate (minor)

The 11 remaining findings are code-quality suggestions, not bugs.
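For reference, the SIM105 suggestion amounts to the following rewrite (a minimal sketch with illustrative values, not code from this repo):

```python
import contextlib

results: list[int] = []

# SIM105 flags this try/except/pass shape...
try:
    results.append(int("not-a-number"))
except ValueError:
    pass

# ...and suggests the equivalent contextlib.suppress form:
with contextlib.suppress(ValueError):
    results.append(int("still-not-a-number"))
```

Both forms behave identically; `suppress` just states the intent in one line, which is why these were left as optional style follow-ups.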

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
yusyus
2026-02-08 13:00:44 +03:00
parent bbbf5144d7
commit 85dfae19f1
9 changed files with 42 additions and 44 deletions


@@ -273,7 +273,7 @@ class EmbeddingGenerator:
             return embedding
         except Exception as e:
-            raise Exception(f"OpenAI embedding generation failed: {e}")
+            raise Exception(f"OpenAI embedding generation failed: {e}") from e

     def _generate_openai_batch(
         self, texts: list[str], model: str, normalize: bool, batch_size: int
@@ -308,7 +308,7 @@ class EmbeddingGenerator:
             all_embeddings.extend(batch_embeddings)
         except Exception as e:
-            raise Exception(f"OpenAI batch embedding generation failed: {e}")
+            raise Exception(f"OpenAI batch embedding generation failed: {e}") from e

         dimensions = len(all_embeddings[0]) if all_embeddings else 0
         return all_embeddings, dimensions
@@ -338,7 +338,7 @@ class EmbeddingGenerator:
             return embedding
         except Exception as e:
-            raise Exception(f"Voyage AI embedding generation failed: {e}")
+            raise Exception(f"Voyage AI embedding generation failed: {e}") from e

     def _generate_voyage_batch(
         self, texts: list[str], model: str, normalize: bool, batch_size: int
@@ -373,7 +373,7 @@ class EmbeddingGenerator:
             all_embeddings.extend(batch_embeddings)
         except Exception as e:
-            raise Exception(f"Voyage AI batch embedding generation failed: {e}")
+            raise Exception(f"Voyage AI batch embedding generation failed: {e}") from e

         dimensions = len(all_embeddings[0]) if all_embeddings else 0
         return all_embeddings, dimensions


@@ -24,7 +24,6 @@ from pathlib import Path
 try:
     from fastapi import FastAPI, HTTPException, Query
     from fastapi.middleware.cors import CORSMiddleware
-    from fastapi.responses import JSONResponse
     import uvicorn

     FASTAPI_AVAILABLE = True
 except ImportError:
@@ -162,7 +161,7 @@ if FASTAPI_AVAILABLE:
             )
         except Exception as e:
-            raise HTTPException(status_code=500, detail=str(e))
+            raise HTTPException(status_code=500, detail=str(e)) from e

     @app.post("/embed/batch", response_model=BatchEmbeddingResponse)
     async def embed_batch(request: BatchEmbeddingRequest):
@@ -225,7 +224,7 @@ if FASTAPI_AVAILABLE:
             )
         except Exception as e:
-            raise HTTPException(status_code=500, detail=str(e))
+            raise HTTPException(status_code=500, detail=str(e)) from e

     @app.post("/embed/skill", response_model=SkillEmbeddingResponse)
     async def embed_skill(request: SkillEmbeddingRequest):
@@ -287,7 +286,7 @@ if FASTAPI_AVAILABLE:
         except HTTPException:
             raise
         except Exception as e:
-            raise HTTPException(status_code=500, detail=str(e))
+            raise HTTPException(status_code=500, detail=str(e)) from e

     @app.get("/cache/stats", response_model=dict)
     async def cache_stats():