Databricks MLOps Playbook: From MLflow to Production

This playbook distills a pragmatic MLOps path on Databricks, from data prep to robust deployment with guardrails.

**Why another MLOps guide?**

- Focus on operational reality: lineage, reproducibility, cost/latency, and KPI reliability.
- Reusable patterns you can drop into teams without heavy ceremony.

**Reference architecture**

```mermaid
flowchart TD
  A["Ingest: Batch/Streaming"] --> B["Bronze Delta"]
  B --> C["Curate: Features"]
  C --> D["`ML Training MLflow tracking`"]
  D --> E["`Registry Stages: Staging/Prod`"]
  E --> F["Serving/Batch Scoring"]
  F --> G["`Monitoring Drift, KPI, Cost`"]
```

**Building blocks**

- Delta Lake: schema evolution, Z-order, `OPTIMIZE` + `VACUUM` policies.
- MLflow: experiment tracking, model registry, stage transitions with approvals.
- CI/CD: notebooks/jobs packaged via repo; tests for data contracts and model code.
- Observability: input DQ, feature coverage, drift monitors, KPI windows, cost budgets.

**Sample: register and deploy**

```python
import mlflow
from mlflow.tracking import MlflowClient

# Assumes this runs inside an active MLflow run in which `model` was trained
run_id = mlflow.active_run().info.run_id
mlflow.sklearn.log_model(model, "model")

client = MlflowClient()
model_uri = f"runs:/{run_id}/model"
client.create_registered_model("churn_model")  # raises if the name already exists
client.create_model_version("churn_model", model_uri, run_id)
# Version "1" assumes this is the first registered version
client.transition_model_version_stage("churn_model", "1", stage="Staging")
```

**Guardrails**

- Promotion requires DQ + performance gates; auto-revert on KPI regression.
- Cost envelopes by job cluster policy; latency SLOs per endpoint.

**Takeaways**

Ship small, measurable increments; automate checks; keep lineage and docs close to the code.
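The promotion gate under Guardrails can be sketched as a pre-transition check. This is a minimal sketch, not the playbook's implementation: the gate names, thresholds, and the `promote_if_healthy` helper are illustrative assumptions; the client is any object exposing MLflow's `transition_model_version_stage`.

```python
# Hypothetical gate thresholds; tune per model and KPI window.
GATES = {"auc": 0.80, "dq_null_rate": 0.01}

def passes_gates(metrics: dict) -> bool:
    """Return True only if the candidate clears every DQ/performance gate."""
    return (
        metrics.get("auc", 0.0) >= GATES["auc"]
        and metrics.get("dq_null_rate", 1.0) <= GATES["dq_null_rate"]
    )

def promote_if_healthy(client, name: str, version: str, metrics: dict) -> bool:
    """Transition to Staging only when all gates pass; otherwise do nothing."""
    if not passes_gates(metrics):
        return False
    client.transition_model_version_stage(name, version, stage="Staging")
    return True
```

Auto-revert on KPI regression is the mirror image: a scheduled job that re-evaluates the Production version and transitions it back when a gate fails.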
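The drift monitors listed under Observability are often a population-stability-index (PSI) check on scoring inputs against the training baseline. A minimal sketch, assuming equal-width buckets and the common 0.2 alert threshold (both rules of thumb, not prescribed by this playbook):

```python
import math

def psi(baseline: list, current: list, bins: int = 10) -> float:
    """Population Stability Index between two numeric samples."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bucket_fracs(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Laplace smoothing so empty buckets don't blow up the log term
        total = len(sample) + bins
        return [(c + 1) / total for c in counts]

    b, c = bucket_fracs(baseline), bucket_fracs(current)
    return sum((cf - bf) * math.log(cf / bf) for bf, cf in zip(b, c))

def drifted(baseline, current, threshold: float = 0.2) -> bool:
    """Rule of thumb: PSI above ~0.2 signals meaningful drift."""
    return psi(baseline, current) > threshold
```

In practice this runs per feature on each scoring window, with alerts wired to the same KPI dashboards.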

2025-09-24 · 1 min · rokorolev

RAG on Databricks: Embeddings, Vector Search, and Cost/Latency Tuning

A practical guide to building Retrieval-Augmented Generation on Databricks.

**Pipeline overview**

```mermaid
flowchart LR
  A[Docs/Transcripts] --> B[Chunk & Clean]
  B --> C["Embed GTE Large EN v1.5"]
  C --> D["Vector Index Vector Search"]
  D --> E["Retrieve"]
  E --> F["Compose Prompt"]
  F --> G["LLM Inference Hybrid"]
  G --> H["Post-process Policy/PII"]
```

**Key choices**

- Chunking: semantic + overlap; store offsets for citations.
- Embeddings: GTE Large EN v1.5; evaluate coverage vs. latency.
- Index: Delta-backed vector search; freshness vs. cost trade-offs.
- Inference: hybrid (open + hosted) to balance latency and accuracy.

**Example: embed and upsert**

```python
from databricks.vector_search.client import VectorSearchClient

vsc = VectorSearchClient()
# Direct-access index: "main" is the endpoint name, "transcripts_idx" the index
index = vsc.get_index("main", "transcripts_idx")
index.upsert([
    {"id": "doc:123#p5", "text": "...", "metadata": {"source": "call"}}
])
```

**Evaluation & guardrails**

- Offline: Recall@k, response faithfulness, toxicity/policy checks.
- Online: user feedback, fallback/abstain behavior.

**Cost/latency tips**

- Batch embeddings; cache frequent queries; keep vector dimensions reasonable.
- Monitor token usage; pre-validate prompts; route by difficulty.
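The "semantic + overlap; store offsets" chunking choice can be sketched as below. The window and overlap sizes are placeholder assumptions, and a real pipeline would split on semantic boundaries rather than fixed character counts; the point is carrying character offsets so retrieved passages can be cited back to the source.

```python
def chunk_with_offsets(text: str, size: int = 500, overlap: int = 100) -> list:
    """Split text into overlapping chunks, keeping character offsets
    so retrieved passages can be cited back to the source document."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + size, len(text))
        chunks.append({"text": text[start:end], "start": start, "end": end})
        if end == len(text):
            break
        start = end - overlap  # step back so neighboring chunks share context
    return chunks
```

The `start`/`end` offsets travel with the vector-index metadata, which is what makes citation rendering cheap at answer time.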
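Offline Recall@k from the evaluation list is straightforward once each query has a labeled set of relevant document IDs; the IDs in the usage assertion below are hypothetical examples.

```python
def recall_at_k(retrieved: list, relevant: set, k: int) -> float:
    """Fraction of the relevant docs that appear in the top-k retrieved list."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / len(relevant)
```

Averaging this over a held-out query set, per k, gives the coverage-vs-latency curve mentioned under embedding choices.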
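The "cache frequent queries" tip can be as simple as memoizing the embedding call. This is a sketch under the assumption that `embed_fn` stands in for whatever embedding client you use and that exact-match query caching is acceptable:

```python
from functools import lru_cache

def make_cached_embedder(embed_fn, maxsize: int = 4096):
    """Wrap an embedding function with an LRU cache keyed on the query text.
    Repeated queries skip the (paid, slow) embedding call entirely."""
    @lru_cache(maxsize=maxsize)
    def cached(query: str) -> tuple:
        # Tuples are hashable and immutable, so they are safe to cache
        return tuple(embed_fn(query))
    return cached
```

For higher hit rates, normalize queries (casefold, strip whitespace) before lookup, at the cost of slightly coarser matching.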

2025-09-24 · 1 min · rokorolev