---
layout: default
title: API Reference
description: Complete reference for all Dataproc MCP Server tools with practical examples and usage patterns
permalink: /API_REFERENCE/
---
Complete reference for all 17 Dataproc MCP Server tools with practical examples and usage patterns.
The Dataproc MCP Server provides 17 comprehensive tools organized into four categories:
- Cluster Management (6 tools)
- Job Execution (6 tools)
- Profile Management (3 tools)
- Monitoring & Utilities (2 tools)
For detailed authentication setup and best practices, refer to the Authentication Implementation Guide.
All tools support intelligent default parameters. When projectId and region are not provided, the server automatically uses configured defaults from config/default-params.json.
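As a rough illustration, a minimal defaults file might look like the following (field names follow the environment-override example shown later in this reference; your actual file may differ):

```json
{
  "environment": "production",
  "parameters": {
    "projectId": "my-project-123",
    "region": "us-central1"
  }
}
```

With these defaults in place, tool calls can omit `projectId` and `region` entirely.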
## Cluster Management

### start_dataproc_cluster

Creates a new Dataproc cluster with basic configuration.
Parameters:
- `projectId` (string, optional): GCP project ID
- `region` (string, optional): Dataproc region
- `clusterName` (string, required): Name for the new cluster
- `clusterConfig` (object, optional): Custom cluster configuration
Example:
{
"tool": "start_dataproc_cluster",
"arguments": {
"clusterName": "my-analysis-cluster",
"clusterConfig": {
"masterConfig": {
"numInstances": 1,
"machineTypeUri": "n1-standard-4"
},
"workerConfig": {
"numInstances": 3,
"machineTypeUri": "n1-standard-2"
}
}
}
}

Response:
{
"content": [
{
"type": "text",
"text": "Cluster my-analysis-cluster started successfully in region us-central1.\nCluster details:\n{\n \"clusterName\": \"my-analysis-cluster\",\n \"status\": {\n \"state\": \"RUNNING\"\n }\n}"
}
]
}

### create_cluster_from_yaml

Creates a cluster using a YAML configuration file.
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `yamlPath` (string, required): Path to YAML configuration file
- `overrides` (object, optional): Runtime configuration overrides
Example:
{
"tool": "create_cluster_from_yaml",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"yamlPath": "./configs/production-cluster.yaml",
"overrides": {
"clusterName": "prod-cluster-001"
}
}
}

### create_cluster_from_profile

Creates a cluster using a predefined profile.
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `profileName` (string, required): Name of the profile to use
- `clusterName` (string, required): Name for the new cluster
- `overrides` (object, optional): Configuration overrides
Example:
{
"tool": "create_cluster_from_profile",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"profileName": "production/high-memory/analysis",
"clusterName": "analytics-cluster-prod"
}
}

### list_clusters

Lists all Dataproc clusters in a project and region with intelligent response optimization.
Parameters:
- `projectId` (string, optional): GCP project ID
- `region` (string, optional): Dataproc region
- `filter` (string, optional): Filter expression
- `pageSize` (number, optional): Number of results per page (1-100)
- `pageToken` (string, optional): Token for pagination
- `verbose` (boolean, optional): Return full response without filtering (default: false)
Response Optimization:
- Default (optimized): 96.2% token reduction (7,651 → 292 tokens)
- Verbose mode: Full response with complete cluster details
- Storage: Full data automatically stored in Qdrant for later access
Example (Optimized Response):
{
"tool": "list_clusters",
"arguments": {
"filter": "status.state=RUNNING",
"pageSize": 10
}
}

Optimized Response:
{
"content": [
{
"type": "text",
"text": "Found 3 clusters in my-project-123/us-central1:\n\n• analytics-cluster-prod (RUNNING) - n1-standard-4, 5 nodes\n• data-pipeline-dev (RUNNING) - n1-standard-2, 3 nodes\n• ml-training-cluster (CREATING) - n1-highmem-8, 10 nodes\n\n💾 Full details stored: dataproc://responses/clusters/list/abc123\n📊 Token reduction: 96.2% (7,651 → 292 tokens)"
}
]
}

Verbose Response:
{
"tool": "list_clusters",
"arguments": {
"filter": "status.state=RUNNING",
"pageSize": 10,
"verbose": true
}
}

Full Response (verbose=true):
{
"content": [
{
"type": "text",
"text": "Clusters in project my-project-123, region us-central1:\n{\n \"clusters\": [\n {\n \"clusterName\": \"analytics-cluster-prod\",\n \"status\": {\n \"state\": \"RUNNING\",\n \"stateStartTime\": \"2024-01-01T10:00:00Z\"\n },\n \"config\": {\n \"masterConfig\": {\n \"numInstances\": 1,\n \"machineTypeUri\": \"n1-standard-4\"\n },\n \"workerConfig\": {\n \"numInstances\": 4,\n \"machineTypeUri\": \"n1-standard-4\"\n }\n }\n }\n ]\n}"
}
]
}

### get_cluster

Gets detailed information about a specific cluster with intelligent response optimization.
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `clusterName` (string, required): Name of the cluster
- `verbose` (boolean, optional): Return full response without filtering (default: false)
Response Optimization:
- Default (optimized): 64.0% token reduction (553 → 199 tokens)
- Verbose mode: Full cluster configuration and metadata
- Storage: Complete cluster details stored in Qdrant
Example (Optimized Response):
{
"tool": "get_cluster",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"clusterName": "my-analysis-cluster"
}
}

Optimized Response:
{
"content": [
{
"type": "text",
"text": "Cluster: my-analysis-cluster (RUNNING)\n🖥️ Master: 1x n1-standard-4\n👥 Workers: 4x n1-standard-2\n📍 Zone: us-central1-b\n⏰ Created: 2024-01-01 10:00 UTC\n\n💾 Full config: dataproc://responses/clusters/get/def456\n📊 Token reduction: 64.0% (553 → 199 tokens)"
}
]
}

Verbose Response:
{
"tool": "get_cluster",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"clusterName": "my-analysis-cluster",
"verbose": true
}
}

### delete_cluster

Deletes a Dataproc cluster.
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `clusterName` (string, required): Name of the cluster to delete
Example:
{
"tool": "delete_cluster",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"clusterName": "temporary-cluster"
}
}

## Job Execution

### submit_hive_query

Submits a Hive query to a Dataproc cluster.
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `clusterName` (string, required): Name of the cluster
- `query` (string, required): Hive query to execute (max 10,000 characters)
- `async` (boolean, optional): Whether to run asynchronously
- `queryOptions` (object, optional): Query configuration options
Example:
{
"tool": "submit_hive_query",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"clusterName": "analytics-cluster",
"query": "SELECT customer_id, COUNT(*) as order_count FROM orders WHERE order_date >= '2024-01-01' GROUP BY customer_id ORDER BY order_count DESC LIMIT 100",
"async": false,
"queryOptions": {
"timeoutMs": 300000,
"properties": {
"hive.exec.dynamic.partition": "true",
"hive.exec.dynamic.partition.mode": "nonstrict"
}
}
}
}

### submit_dataproc_job

Submits a generic Dataproc job (Hive, Spark, PySpark, etc.) with enhanced local file staging support.
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `clusterName` (string, required): Name of the cluster
- `jobType` (string, required): Type of job (hive, spark, pyspark, presto, pig, hadoop)
- `jobConfig` (object, required): Job configuration object
- `async` (boolean, optional): Whether to submit asynchronously
🔧 LOCAL FILE STAGING:
The baseDirectory parameter in the local file staging system controls how relative file paths are resolved when using the template syntax {@./relative/path} or direct relative paths in job configurations.
Configuration:
The baseDirectory parameter is configured in config/default-params.json with a default value of ".", which refers to the current working directory where the MCP server process is running (typically the project root directory).
Path Resolution Logic:
- Absolute Paths: If a file path is already absolute (starts with `/`), it's used as-is
- Relative Path Resolution: For relative paths, the system:
  1. Gets the baseDirectory value from configuration (default: `"."`)
  2. Resolves the baseDirectory if it's relative:
     - First tries to use the `DATAPROC_CONFIG_PATH` environment variable's directory
     - Falls back to `process.cwd()` (the current working directory)
  3. Combines the baseDirectory with the relative file path
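The resolution rules above can be sketched as a small helper (a hypothetical illustration, not the server's actual implementation; `configPath` stands in for the value of the `DATAPROC_CONFIG_PATH` environment variable):

```typescript
import * as path from 'path';

function resolveJobFile(
  filePath: string,
  baseDirectory: string,
  configPath?: string
): string {
  // Absolute file paths are used as-is.
  if (path.isAbsolute(filePath)) return filePath;

  // A relative baseDirectory is resolved against the config file's
  // directory when DATAPROC_CONFIG_PATH is set, else against process.cwd().
  const reference = configPath ? path.dirname(configPath) : process.cwd();
  const root = path.isAbsolute(baseDirectory)
    ? baseDirectory
    : path.resolve(reference, baseDirectory);

  // Finally, combine the resolved base directory with the relative path.
  return path.resolve(root, filePath);
}
```

For instance, with `baseDirectory: "config"` and a config file in `/etc/dataproc/`, a template `{@./my-script.py}` would resolve to `/etc/dataproc/config/my-script.py`.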
Template Syntax Support:
// Template syntax - recommended approach
{@./relative/path/to/file.py}
{@../parent/directory/file.jar}
{@subdirectory/file.sql}
// Direct relative paths (also supported)
"./relative/path/to/file.py"
"../parent/directory/file.jar"
"subdirectory/file.sql"

Practical Examples:
Example 1: Default Configuration (baseDirectory: ".")
- Template: `{@./test-spark-job.py}`
- Resolution: `/Users/srivers/Documents/Cline/MCP/dataproc-server/test-spark-job.py`

Example 2: Config Directory Base
- Configuration: `baseDirectory: "config"`
- Template: `{@./my-script.py}`
- Resolution: `/Users/srivers/Documents/Cline/MCP/dataproc-server/config/my-script.py`

Example 3: Absolute Base Directory
- Configuration: `baseDirectory: "/absolute/path/to/files"`
- Template: `{@./script.py}`
- Resolution: `/absolute/path/to/files/script.py`
Environment Variable Influence:
The DATAPROC_CONFIG_PATH environment variable affects path resolution:
- If set: The directory containing the config file becomes the reference point for relative `baseDirectory` values
- If not set: The current working directory (`process.cwd()`) is used as the reference point
Best Practices:
- Use Template Syntax: Prefer `{@./file.py}` over direct relative paths for clarity
- Organize Files Relative to Project Root: With the default `baseDirectory: "."`, organize your files relative to the project root
- Consider Absolute Paths for External Files: For files outside the project structure, use absolute paths
Supported File Extensions:
- `.py` - Python files for PySpark jobs
- `.jar` - Java/Scala JAR files for Spark jobs
- `.sql` - SQL files for various job types
- `.R` - R script files for SparkR jobs
Troubleshooting:
- File Not Found: Check that the resolved absolute path exists
- Permission Denied: Ensure the MCP server has read access to the file
- Unexpected Path Resolution: Verify your `baseDirectory` setting and current working directory
Debug Path Resolution: Enable debug logging to see the actual path resolution:
DEBUG=dataproc-mcp:* node build/index.js

Configuration Override:
You can override the baseDirectory in your environment-specific configuration:
{
"environment": "development",
"parameters": {
"baseDirectory": "./dev-scripts"
}
}

Files are automatically staged to GCS and cleaned up after job completion.
Example - Spark Job:
{
"tool": "submit_dataproc_job",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"clusterName": "spark-cluster",
"jobType": "spark",
"jobConfig": {
"mainClass": "com.example.SparkApp",
"jarFileUris": ["{@./spark-app.jar}"],
"args": ["--input", "gs://my-bucket/input/", "--output", "gs://my-bucket/output/"],
"properties": {
"spark.executor.memory": "4g",
"spark.executor.cores": "2"
}
},
"async": true
}
}

Example - PySpark Job with Local File Staging:
{
"tool": "submit_dataproc_job",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"clusterName": "pyspark-cluster",
"jobType": "pyspark",
"jobConfig": {
"mainPythonFileUri": "{@./test-spark-job.py}",
"pythonFileUris": ["{@./utils/helper.py}", "{@/absolute/path/library.py}"],
"args": ["--date", "2024-01-01"],
"properties": {
"spark.sql.adaptive.enabled": "true",
"spark.sql.adaptive.coalescePartitions.enabled": "true"
}
}
}
}

Example - Traditional PySpark Job (GCS URIs):
{
"tool": "submit_dataproc_job",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"clusterName": "pyspark-cluster",
"jobType": "pyspark",
"jobConfig": {
"mainPythonFileUri": "gs://my-bucket/scripts/data_processing.py",
"pythonFileUris": ["gs://my-bucket/scripts/utils.py"],
"args": ["--date", "2024-01-01"],
"properties": {
"spark.sql.adaptive.enabled": "true",
"spark.sql.adaptive.coalescePartitions.enabled": "true"
}
}
}
}

Local File Staging Process:
- Detection: Local file paths are automatically detected using template syntax
- Staging: Files are uploaded to the cluster's staging bucket with unique names
- Transformation: Job config is updated with GCS URIs
- Execution: Job runs with staged files
- Cleanup: Staged files are automatically cleaned up after job completion
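The detection step above can be illustrated with a simple pattern match on config values (a hypothetical sketch, not the server's actual code):

```typescript
// Values wrapped in {@...} are treated as local-file template references;
// anything else (e.g. gs:// URIs) is left untouched.
const TEMPLATE_RE = /^\{@(.+)\}$/;

function extractLocalPath(value: string): string | null {
  const match = TEMPLATE_RE.exec(value);
  return match ? match[1] : null; // null means "not a local-file template"
}
```

A value like `{@./test-spark-job.py}` would yield `./test-spark-job.py` for staging, while `gs://my-bucket/scripts/utils.py` would be passed through unchanged.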
Supported File Extensions:
- `.py` - Python files for PySpark jobs
- `.jar` - Java/Scala JAR files for Spark jobs
- `.sql` - SQL files for various job types
- `.R` - R script files for SparkR jobs
### get_job_status

Gets the status of a Dataproc job.
Parameters:
- `projectId` (string, optional): GCP project ID
- `region` (string, optional): Dataproc region
- `jobId` (string, required): Job ID to check
Example:
{
"tool": "get_job_status",
"arguments": {
"jobId": "job-12345-abcdef"
}
}

Response:
{
"content": [
{
"type": "text",
"text": "Job status for job-12345-abcdef:\n{\n \"status\": {\n \"state\": \"DONE\",\n \"stateStartTime\": \"2024-01-01T12:00:00Z\"\n },\n \"driverOutputResourceUri\": \"gs://bucket/output/\"\n}"
}
]
}

### get_query_results

Gets the results of a completed Hive query.
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `jobId` (string, required): Job ID to get results for
- `maxResults` (number, optional): Maximum number of results (1-10,000)
- `pageToken` (string, optional): Pagination token
Example:
{
"tool": "get_query_results",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"jobId": "hive-job-12345",
"maxResults": 50
}
}

### get_job_results

Gets the results of a completed Dataproc job.
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `jobId` (string, required): Job ID to get results for
- `maxResults` (number, optional): Maximum rows to display (default: 10, max: 1,000)
Example:
{
"tool": "get_job_results",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"jobId": "spark-job-67890",
"maxResults": 100
}
}

### cancel_dataproc_job

Cancels a running or pending Dataproc job with intelligent status handling and job tracking integration.
Parameters:
- `jobId` (string, required): The ID of the Dataproc job to cancel
- `projectId` (string, optional): GCP project ID (uses defaults if not provided)
- `region` (string, optional): Dataproc region (uses defaults if not provided)
- `verbose` (boolean, optional): Return full response without filtering (default: false)
🔄 CANCELLATION WORKFLOW:
- Attempts to cancel jobs in PENDING or RUNNING states
- Provides informative messages for jobs already in terminal states
- Updates internal job tracking when cancellation succeeds
📋 STATUS HANDLING:
- PENDING/RUNNING → Cancellation attempted
- DONE/ERROR/CANCELLED → Informative message returned
- Job not found → Clear error message
💡 MONITORING:
After cancellation, use get_job_status("jobId") to confirm the job reaches CANCELLED state.
Example:
{
"tool": "cancel_dataproc_job",
"arguments": {
"jobId": "Clean_Places_sub_group_base_1_cleaned_places_13b6ec3f"
}
}

Successful Cancellation Response:
{
"content": [
{
"type": "text",
"text": "🔄 Job Cancellation Status\n\nJob ID: Clean_Places_sub_group_base_1_cleaned_places_13b6ec3f\nStatus: 3\nMessage: Cancellation request sent for job Clean_Places_sub_group_base_1_cleaned_places_13b6ec3f."
}
]
}

Job Already Completed Response:
{
"content": [
{
"type": "text",
"text": "Cannot cancel job Clean_Places_sub_group_base_1_cleaned_places_13b6ec3f in state: 'DONE'; cancellable states: '[PENDING, RUNNING]'"
}
]
}

Use Cases:
- Emergency Cancellation: Stop runaway jobs consuming excessive resources
- Pipeline Management: Cancel dependent jobs when upstream processes fail
- Cost Control: Terminate expensive long-running jobs
- Development Workflow: Cancel test jobs during development iterations
Best Practices:
- Monitor job status before and after cancellation attempts
- Use with get_job_status to verify cancellation completion
- Handle gracefully when jobs are already in terminal states
- Consider dependencies before cancelling pipeline jobs
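For example, to confirm that the cancellation request shown above actually reached the CANCELLED state, re-check the job with the same ID:

```json
{
  "tool": "get_job_status",
  "arguments": {
    "jobId": "Clean_Places_sub_group_base_1_cleaned_places_13b6ec3f"
  }
}
```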
## Profile Management

### list_profiles

Lists available cluster configuration profiles.
Parameters:
- `category` (string, optional): Filter by category (e.g., "development", "production")
Example:
{
"tool": "list_profiles",
"arguments": {
"category": "production"
}
}

Response:
{
"content": [
{
"type": "text",
"text": "Available profiles:\n[\n {\n \"id\": \"production/high-memory/analysis\",\n \"name\": \"High Memory Analysis\",\n \"category\": \"production\"\n }\n]"
}
]
}

### get_profile

Gets details for a specific cluster configuration profile.
Parameters:
- `profileId` (string, required): ID of the profile (e.g., "development/small")
Example:
{
"tool": "get_profile",
"arguments": {
"profileId": "development/small"
}
}

### list_tracked_clusters

Lists clusters that were created and tracked by this MCP server.
Parameters:
- `profileId` (string, optional): Filter by profile ID
Example:
{
"tool": "list_tracked_clusters",
"arguments": {
"profileId": "production/high-memory/analysis"
}
}

## Monitoring & Utilities

### get_zeppelin_url

Gets the Zeppelin notebook URL for a cluster (if enabled).
Parameters:
- `projectId` (string, required): GCP project ID
- `region` (string, required): Dataproc region
- `clusterName` (string, required): Name of the cluster
Example:
{
"tool": "get_zeppelin_url",
"arguments": {
"projectId": "my-project-123",
"region": "us-central1",
"clusterName": "jupyter-cluster"
}
}

Response:
{
"content": [
{
"type": "text",
"text": "Zeppelin URL for cluster jupyter-cluster:\nhttps://jupyter-cluster-m.us-central1-a.c.my-project-123.internal:8080"
}
]
}

### check_active_jobs

📊 Quick status check for all active and recent jobs with intelligent response optimization.
Parameters:
- `projectId` (string, optional): GCP project ID (shows all if not specified)
- `region` (string, optional): Dataproc region (shows all if not specified)
- `includeCompleted` (boolean, optional): Include recently completed jobs (default: false)
- `verbose` (boolean, optional): Return full response without filtering (default: false)
Response Optimization:
- Default (optimized): 80.6% token reduction (1,626 → 316 tokens)
- Verbose mode: Complete job details and metadata
- Storage: Full job data stored in Qdrant for analysis
Example (Optimized Response):
{
"tool": "check_active_jobs",
"arguments": {
"includeCompleted": true
}
}

Optimized Response:
{
"content": [
{
"type": "text",
"text": "📊 Active Jobs Summary:\n\n▶️ RUNNING (2):\n• hive-analytics-job (5m ago) - analytics-cluster\n• spark-etl-pipeline (12m ago) - data-pipeline-cluster\n\n✅ COMPLETED (3):\n• daily-report-job (1h ago) - SUCCESS\n• data-validation (2h ago) - SUCCESS\n• backup-process (3h ago) - SUCCESS\n\n💾 Full details: dataproc://responses/jobs/active/ghi789\n📊 Token reduction: 80.6% (1,626 → 316 tokens)"
}
]
}

type OutputFormat = 'text' | 'json' | 'csv' | 'unknown';

interface JobOutputOptions extends ParseOptions {
/**
* Whether to use cache
*/
useCache?: boolean;
/**
* Whether to validate file hashes
*/
validateHash?: boolean;
/**
* Custom cache config overrides
*/
cacheConfig?: Partial<CacheConfig>;
}

interface ParseOptions {
/**
* Whether to trim whitespace from values
*/
trim?: boolean;
/**
* Custom delimiter for CSV parsing
*/
delimiter?: string;
/**
* Whether to parse numbers in JSON/CSV
*/
parseNumbers?: boolean;
/**
* Whether to skip empty lines
*/
skipEmpty?: boolean;
}

The table structure used in the formatted output feature:
interface Table {
/**
* Array of column names
*/
columns: string[];
/**
* Array of row objects, where each object has properties matching column names
*/
rows: Record<string, any>[];
}

The formatted output feature enhances job results by providing a clean, readable ASCII table representation of the data alongside the structured data.
When a job produces tabular output, the result will include:
{
// Job details...
parsedOutput: {
tables: [
{
columns: ["column1", "column2", ...],
rows: [
{ "column1": "value1", "column2": "value2", ... },
// More rows...
]
},
// More tables...
],
formattedOutput: "┌─────────┬─────────┐\n│ column1 │ column2 │\n├─────────┼─────────┤\n│ value1  │ value2  │\n└─────────┴─────────┘"
}
}

To access and display the formatted output:
const results = await getDataprocJobResults({
projectId: 'your-project',
region: 'us-central1',
jobId: 'job-id',
format: 'text',
wait: true
});
if (results.parsedOutput && results.parsedOutput.formattedOutput) {
console.log('Formatted Table Output:');
console.log(results.parsedOutput.formattedOutput);
}

If the job produces multiple tables, they will be formatted separately with table numbers:
Table 1:
┌─────────┬─────────┐
│ column1 │ column2 │
├─────────┼─────────┤
│ value1  │ value2  │
└─────────┴─────────┘

Table 2:
┌─────────┬─────────┐
│ column3 │ column4 │
├─────────┼─────────┤
│ value3  │ value4  │
└─────────┴─────────┘
The formatted output is generated using the table library with specific configuration options for clean formatting:
- Border style: Uses the 'norc' border character set for a clean, minimal look
- Column padding: Adds 1 space of padding on both sides of column content
- Horizontal lines: Draws horizontal lines only at the top, after the header, and at the bottom
For more detailed implementation information, see the source code in src/services/output-parser.ts.
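For orientation, the formatting described above can be approximated with a self-contained sketch (illustrative only; the actual server uses the `table` npm package with the configuration listed above):

```typescript
// Mirrors the Table structure defined earlier in this reference,
// redeclared here so the sketch is self-contained.
interface Table {
  columns: string[];
  rows: Record<string, any>[];
}

function formatTable(t: Table): string {
  // Column width = widest cell (header or value) in that column.
  const widths = t.columns.map(c =>
    Math.max(c.length, ...t.rows.map(r => String(r[c] ?? '').length))
  );
  // Horizontal border: left corner, fills joined by a mid character, right corner.
  const border = (left: string, mid: string, right: string) =>
    left + widths.map(w => '─'.repeat(w + 2)).join(mid) + right;
  // Row with one space of padding on each side of the content.
  const row = (cells: string[]) =>
    '│' + cells.map((c, i) => ` ${c.padEnd(widths[i])} `).join('│') + '│';
  return [
    border('┌', '┬', '┐'),
    row(t.columns),
    border('├', '┼', '┤'), // horizontal line only after the header
    ...t.rows.map(r => row(t.columns.map(c => String(r[c] ?? '')))),
    border('└', '┴', '┘'),
  ].join('\n');
}
```

This reproduces the top/header/bottom horizontal-line layout and the one-space column padding described above.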
The API includes comprehensive error handling for various scenarios:
- GCS Access Errors: When files cannot be accessed or downloaded
- Parse Errors: When content cannot be parsed in the expected format
- Job Execution Errors: When jobs fail or are cancelled
- Timeout Errors: When operations exceed specified timeouts
Each error type includes detailed information to help diagnose and resolve issues.
- Check for existence: Always check if `formattedOutput` exists before using it
- Display as-is: The formatted output is already optimized for console display
- Preserve original data: Use the structured data in `tables` for programmatic processing
- Handle large outputs: For very large tables, consider implementing pagination in your UI
- Use caching: Enable the cache for frequently accessed job results
- Specify format: Explicitly specify the expected format when known
- Limit wait time: Set appropriate timeouts for waiting operations
- Use async mode: For long-running jobs, submit in async mode and check status separately
Invalid Parameters:
{
"error": {
"code": "INVALID_PARAMS",
"message": "Input validation failed: clusterName: Cluster name must start with lowercase letter"
}
}

Rate Limit Exceeded:
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "Rate limit exceeded. Try again after 2024-01-01T12:01:00.000Z"
}
}

Authentication Error:
{
"error": {
"code": "AUTHENTICATION_FAILED",
"message": "Service account authentication failed: Permission denied"
}
}

Best Practices:

- Always check job status for long-running operations
- Use async mode for jobs that take more than a few minutes
- Implement retry logic for transient failures
- Clean up resources by deleting clusters when done
- Use appropriate cluster sizes for your workload
- Monitor costs by tracking cluster usage
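The "implement retry logic" recommendation above can be sketched as a small backoff helper (an illustrative pattern, not part of the server's API):

```typescript
// Retries an async operation with exponential backoff between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Back off: baseDelayMs, then 2x, 4x, ...
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Wrapping a status poll in such a helper smooths over transient network failures without hammering the rate limiter.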
Rate Limits:

- Default: 100 requests per minute
- Configurable: Adjust in server configuration
- Per-tool: Some tools may have specific limits
- Burst: Short bursts above limit may be allowed
Security:

- All inputs are validated and sanitized
- Credentials are never logged or exposed
- Audit logs track all operations
- Rate limiting prevents abuse
- GCP IAM controls actual permissions
This API reference provides comprehensive documentation for all tools with practical examples and usage patterns.