scaleway-async/scaleway_async/datalab/v1beta1/api.py (+11 −11)
@@ -72,15 +72,15 @@ async def create_datalab(
         Create a new Data Lab. In this call, one can personalize the node counts, add a notebook, choose the private network, define the persistent volume storage capacity.
         :param name: The name of the Data Lab.
         :param description: The description of the Data Lab.
-        :param has_notebook: Whether a JupyterLab notebook shall be created with the Data Lab or not.
+        :param has_notebook: Select this option to include a notebook as part of the Data Lab.
         :param spark_version: The version of Spark running inside the Data Lab, available options can be viewed at ListClusterVersions.
-        :param private_network_id: The private network to which the Data Lab is connected. Important for accessing the Spark Master URL from a private cluster.
+        :param private_network_id: The unique identifier of the private network the Data Lab will be attached to.
         :param region: Region to target. If none is passed will use default region from the config.
         :param project_id: The unique identifier of the project where the Data Lab will be created.
         :param tags: The tags of the Data Lab.
-        :param main: The Spark main node configuration of the Data Lab, has one parameter `node_type` which specifies the compute node type of the main node. See ListNodeTypes for available options.
-        :param worker: The Spark worker node configuration of the Data Lab, has two parameters `node_type` for selecting the type of the worker node, and `node_count` for specifying the amount of nodes.
-        :param total_storage: The total storage selected by the user for Spark workers. This means the workers will not use more then this amount for their workload.
+        :param main: The cluster main node specification. It holds the parameter `node_type`, which specifies the node type of the main node. See ListNodeTypes for available options.
+        :param worker: The cluster worker node specification. It holds the parameters `node_type`, which specifies the node type of the worker nodes, and `node_count`, which specifies the number of nodes.
+        :param total_storage: The maximum persistent volume storage that will be available during workload.
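Read together, the docstring above implies a call shaped roughly as follows. This is a minimal sketch under stated assumptions: client setup is omitted, no request is sent, and every value (UUIDs, node types, Spark version) is a hypothetical placeholder; it only assembles and sanity-checks the keyword arguments documented above.

```python
# Hypothetical argument set for create_datalab, mirroring the documented
# parameters. Nothing here contacts the API; all values are placeholders.
params = {
    "name": "my-datalab",
    "description": "Spark sandbox",
    "has_notebook": True,  # include a JupyterLab notebook with the Data Lab
    "spark_version": "3.5",  # real options come from ListClusterVersions
    "private_network_id": "11111111-1111-1111-1111-111111111111",
    "region": "fr-par",
    "project_id": "22222222-2222-2222-2222-222222222222",
    "tags": ["demo"],
    # main holds `node_type`; worker adds `node_count` (see ListNodeTypes)
    "main": {"node_type": "example-main-type"},
    "worker": {"node_type": "example-worker-type", "node_count": 2},
}

# Basic shape checks matching the docstring.
assert isinstance(params["has_notebook"], bool)
assert "node_count" in params["worker"] and "node_count" not in params["main"]
```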
-        List the available compute node types upon which a Data Lab can be created.
+        List the available compute node types for creating a Data Lab.
         :param region: Region to target. If none is passed will use default region from the config.
         :param page: The page number.
         :param page_size: The page size.
         :param order_by: The order by field. Available fields are `name_asc`, `name_desc`, `vcpus_asc`, `vcpus_desc`, `memory_gigabytes_asc`, `memory_gigabytes_desc`, `vram_bytes_asc`, `vram_bytes_desc`, `gpus_asc`, `gpus_desc`.
-        :param targets: Filter on the wanted targets, whether it's for main node or worker.
+        :param targets: Filter based on the target of the nodes, i.e. their purpose, which can be main or worker.
         :param resource_type: Filter based on node type (`cpu`/`gpu`/`all`).

-        List the available compute node types upon which a Data Lab can be created.
+        List the available compute node types for creating a Data Lab.
         :param region: Region to target. If none is passed will use default region from the config.
         :param page: The page number.
         :param page_size: The page size.
         :param order_by: The order by field. Available fields are `name_asc`, `name_desc`, `vcpus_asc`, `vcpus_desc`, `memory_gigabytes_asc`, `memory_gigabytes_desc`, `vram_bytes_asc`, `vram_bytes_desc`, `gpus_asc`, `gpus_desc`.
-        :param targets: Filter on the wanted targets, whether it's for main node or worker.
+        :param targets: Filter based on the target of the nodes, i.e. their purpose, which can be main or worker.
         :param resource_type: Filter based on node type (`cpu`/`gpu`/`all`).
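The `order_by` values above follow a `<field>_asc` / `<field>_desc` naming convention. As an illustration only (the ordering is performed server-side, and the records below are hypothetical), the convention can be decoded like this:

```python
# Decode an order_by value such as `vcpus_asc` into (field, direction) and
# apply it to hypothetical node-type records. The API does this server-side.
node_types = [
    {"name": "big", "vcpus": 16, "memory_gigabytes": 64},
    {"name": "small", "vcpus": 4, "memory_gigabytes": 16},
    {"name": "mid", "vcpus": 8, "memory_gigabytes": 32},
]

def order(records, order_by):
    # "vcpus_asc" -> ("vcpus", "asc"); "memory_gigabytes_desc" -> ("memory_gigabytes", "desc")
    field, _, direction = order_by.rpartition("_")
    return sorted(records, key=lambda r: r[field], reverse=(direction == "desc"))

print([r["name"] for r in order(node_types, "vcpus_asc")])
# ['small', 'mid', 'big']
```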
scaleway-async/scaleway_async/datalab/v1beta1/types.py (+16 −16)
@@ -250,7 +250,7 @@ class Datalab:
     project_id: str
     """
-    The identifier of the project where the Data Lab has been created.
+    The unique identifier of the project where the Data Lab has been created.
     """

     name: str

@@ -270,7 +270,7 @@ class Datalab:
     status: DatalabStatus
     """
-    The status of the Data Lab. For a working Data Lab this should be `ready`.
+    The status of the Data Lab. For a working Data Lab the status is marked as `ready`.
     """

     region: ScwRegion

@@ -290,17 +290,17 @@ class Datalab:
     private_network_id: str
     """
-    The private network to which the data lab is connected. This is important for accessing the Spark Master URL.
+    The unique identifier of the private network to which the Data Lab is attached.
     """

     main: Optional[DatalabSparkMain] = None
     """
-    The Spark Main node specification of Data lab. It holds the parameters `node_type` the compute node type of the main node, `spark_ui_url` where the Spark UI is available, `spark_master_url` with which one can connect to the cluster from within one's VPC, `root_volume` the size of the volume assigned to the main node.
+    The Spark main node specification of the Data Lab. It holds the parameters `node_type`, `spark_ui_url` (where the Spark UI is available), `spark_master_url` (used to reach the cluster from within a VPC), and `root_volume` (the size of the volume assigned to the cluster).
     """

     worker: Optional[DatalabSparkWorker] = None
     """
-    The worker node specification of the Data Lab. It presents the parameters `node_type` the compute node type of each worker node, `node_count` the number of worker nodes currently in the cluster, `root_volume` the root volume size of each executor.
+    The cluster worker node specification. It holds the parameters `node_type`, `node_count`, and `root_volume` (the size of the volume assigned to the cluster).
     """

     created_at: Optional[datetime] = None

@@ -315,17 +315,17 @@ class Datalab:
     notebook_url: Optional[str] = None
     """
-    The URL of said notebook if exists.
+    The URL of the notebook if available.
     """

     total_storage: Optional[Volume] = None
     """
-    The total storage selected by the user for Spark.
+    The total persistent volume storage selected to run Spark.
     """

     notebook_master_url: Optional[str] = None
     """
-    The URL to the Spark Master endpoint from, and only from the perspective of the JupyterLab Notebook. This is NOT the URL to use for accessing the cluster from a private server.
+    The URL used to reach the cluster from the notebook when available. This URL cannot be used to reach the cluster from a server.
     """

@@ -436,7 +436,7 @@ class CreateDatalabRequest:
     has_notebook: bool
     """
-    Whether a JupyterLab notebook shall be created with the Data Lab or not.
+    Select this option to include a notebook as part of the Data Lab.
     """

     spark_version: str

@@ -446,7 +446,7 @@ class CreateDatalabRequest:
     private_network_id: str
     """
-    The private network to which the Data Lab is connected. Important for accessing the Spark Master URL from a private cluster.
+    The unique identifier of the private network the Data Lab will be attached to.
     """

-    The Spark main node configuration of the Data Lab, has one parameter `node_type` which specifies the compute node type of the main node. See ListNodeTypes for available options.
+    The cluster main node specification. It holds the parameter `node_type`, which specifies the node type of the main node. See ListNodeTypes for available options.

-    The Spark worker node configuration of the Data Lab, has two parameters `node_type` for selecting the type of the worker node, and `node_count` for specifying the amount of nodes.
+    The cluster worker node specification. It holds the parameters `node_type`, which specifies the node type of the worker nodes, and `node_count`, which specifies the number of nodes.
     """

     total_storage: Optional[Volume] = None
     """
-    The total storage selected by the user for Spark workers. This means the workers will not use more then this amount for their workload.
+    The maximum persistent volume storage that will be available during workload.
     """

@@ -563,7 +563,7 @@ class ListClusterVersionsResponse:
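The `Datalab` fields touched by this diff can be pictured as a reduced dataclass. This is a sketch only: field names and optionality follow the diff, but the real SDK class has more fields, and the typed enums (`DatalabStatus`, `ScwRegion`) are simplified to `str` here.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Datalab:
    # Required fields documented in the diff.
    project_id: str           # unique identifier of the owning project
    name: str
    status: str               # `ready` for a working Data Lab
    region: str
    private_network_id: str   # unique identifier of the attached private network
    # Optional fields default to None, as in the diff.
    created_at: Optional[datetime] = None
    notebook_url: Optional[str] = None          # URL of the notebook if available
    notebook_master_url: Optional[str] = None   # reachable only from the notebook

lab = Datalab(
    project_id="22222222-2222-2222-2222-222222222222",
    name="demo",
    status="ready",
    region="fr-par",
    private_network_id="11111111-1111-1111-1111-111111111111",
)
print(lab.status, lab.notebook_url)
# ready None
```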
scaleway/scaleway/datalab/v1beta1/api.py (+11 −11)
@@ -72,15 +72,15 @@ def create_datalab(
         Create a new Data Lab. In this call, one can personalize the node counts, add a notebook, choose the private network, define the persistent volume storage capacity.
         :param name: The name of the Data Lab.
         :param description: The description of the Data Lab.
-        :param has_notebook: Whether a JupyterLab notebook shall be created with the Data Lab or not.
+        :param has_notebook: Select this option to include a notebook as part of the Data Lab.
         :param spark_version: The version of Spark running inside the Data Lab, available options can be viewed at ListClusterVersions.
-        :param private_network_id: The private network to which the Data Lab is connected. Important for accessing the Spark Master URL from a private cluster.
+        :param private_network_id: The unique identifier of the private network the Data Lab will be attached to.
         :param region: Region to target. If none is passed will use default region from the config.
         :param project_id: The unique identifier of the project where the Data Lab will be created.
         :param tags: The tags of the Data Lab.
-        :param main: The Spark main node configuration of the Data Lab, has one parameter `node_type` which specifies the compute node type of the main node. See ListNodeTypes for available options.
-        :param worker: The Spark worker node configuration of the Data Lab, has two parameters `node_type` for selecting the type of the worker node, and `node_count` for specifying the amount of nodes.
-        :param total_storage: The total storage selected by the user for Spark workers. This means the workers will not use more then this amount for their workload.
+        :param main: The cluster main node specification. It holds the parameter `node_type`, which specifies the node type of the main node. See ListNodeTypes for available options.
+        :param worker: The cluster worker node specification. It holds the parameters `node_type`, which specifies the node type of the worker nodes, and `node_count`, which specifies the number of nodes.
+        :param total_storage: The maximum persistent volume storage that will be available during workload.

-        List the available compute node types upon which a Data Lab can be created.
+        List the available compute node types for creating a Data Lab.
         :param region: Region to target. If none is passed will use default region from the config.
         :param page: The page number.
         :param page_size: The page size.
         :param order_by: The order by field. Available fields are `name_asc`, `name_desc`, `vcpus_asc`, `vcpus_desc`, `memory_gigabytes_asc`, `memory_gigabytes_desc`, `vram_bytes_asc`, `vram_bytes_desc`, `gpus_asc`, `gpus_desc`.
-        :param targets: Filter on the wanted targets, whether it's for main node or worker.
+        :param targets: Filter based on the target of the nodes, i.e. their purpose, which can be main or worker.
         :param resource_type: Filter based on node type (`cpu`/`gpu`/`all`).

-        List the available compute node types upon which a Data Lab can be created.
+        List the available compute node types for creating a Data Lab.
         :param region: Region to target. If none is passed will use default region from the config.
         :param page: The page number.
         :param page_size: The page size.
         :param order_by: The order by field. Available fields are `name_asc`, `name_desc`, `vcpus_asc`, `vcpus_desc`, `memory_gigabytes_asc`, `memory_gigabytes_desc`, `vram_bytes_asc`, `vram_bytes_desc`, `gpus_asc`, `gpus_desc`.
-        :param targets: Filter on the wanted targets, whether it's for main node or worker.
+        :param targets: Filter based on the target of the nodes, i.e. their purpose, which can be main or worker.
         :param resource_type: Filter based on node type (`cpu`/`gpu`/`all`).
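The `targets` and `resource_type` filters documented above can be illustrated with a client-side equivalent. The records and field names below are hypothetical; the real filtering happens server-side.

```python
# Apply the documented filters to hypothetical node-type records:
# `targets` keeps nodes by purpose (main/worker); `resource_type` by kind.
node_types = [
    {"name": "cpu-main", "target": "main", "kind": "cpu"},
    {"name": "gpu-worker", "target": "worker", "kind": "gpu"},
    {"name": "cpu-worker", "target": "worker", "kind": "cpu"},
]

def filter_types(records, targets=None, resource_type="all"):
    out = [r for r in records if targets is None or r["target"] in targets]
    if resource_type != "all":  # `all` disables the kind filter
        out = [r for r in out if r["kind"] == resource_type]
    return out

print([r["name"] for r in filter_types(node_types, targets=["worker"], resource_type="cpu")])
# ['cpu-worker']
```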