DX-18737: Add Helm C3 executor and dist store caching
- Dremio 4.0.0 or later required.
- Adds the concept of an imageTag to expose features that are
introduced only in newer versions of Dremio.
- Removes the dremioVersion value that needs to be manually set
to reference the same version that is used by the image.
- Adds optional Cloud Cache support. Dist is split between PDFS and
cloud storage.
Change-Id: I645c53bb772c0d52362052ef77925c08b30cc494
This is a Helm chart to deploy a Dremio cluster in Kubernetes. It uses a persistent volume for the master node to store the metadata for the cluster. The default configuration uses the default persistent storage supported by the Kubernetes platform. For example,
| Kubernetes platform | Persistent store |
|---------------------|------------------|
| Google GKE          | Persistent Disk  |
| Local K8S on Docker | Hostpath         |
If you want to use a different storage class available in your Kubernetes environment, add the storageClass in values.yaml.
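For example, to pin the persistent volumes to a named storage class (the class name below is only an illustration, not a chart default), values.yaml might contain:

```yaml
# Illustrative: must name a StorageClass that actually exists in your cluster.
storageClass: managed-premium
```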
An appropriate distributed file store (S3, ADLS, HDFS, etc.) should be used for paths.dist, as this deployment will otherwise lose locally persisted reflections and uploads. You can update config/dremio.conf. The Dremio [documentation](https://docs.dremio.com/deployment/distributed-storage.html) provides more information on this.
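As a sketch, pointing paths.dist at an S3 bucket in config/dremio.conf could look like this (the bucket name is a placeholder; see the Dremio documentation linked above for the full set of supported stores and options):

```hocon
paths: {
  # Illustrative only: distributed store on an S3 bucket.
  dist: "dremioS3:///my-dremio-bucket/dremio/dist"
}
```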
This assumes you already have a Kubernetes cluster set up, kubectl configured to talk to your Kubernetes cluster, and helm set up in your cluster. Review and update values.yaml to reflect values for your environment before installing the helm chart. This is especially important for the memory and cpu values - your Kubernetes cluster should have sufficient resources to provision the pods with those values. If your Kubernetes installation does not support serviceType LoadBalancer, it is recommended to comment out the serviceType value in values.yaml before deploying.
#### Installing the helm chart
Review charts/dremio/values.yaml and adjust the values as per your requirements. Note that the cpu and memory values for the coordinator and the executors are set to work with AKS on Azure with worker nodes set up with machine type Standard_E16s_v3.
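As an illustration of the shape these settings take (the exact keys and numbers below are assumptions - check charts/dremio/values.yaml for the real structure and defaults):

```yaml
# Illustrative values.yaml fragment; sizes must fit on your worker nodes.
coordinator:
  cpu: 8
  memory: 16384      # in MB
executor:
  cpu: 8
  memory: 16384      # in MB
```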
Run this from the charts directory
```bash
cd charts
helm install --wait dremio
```
If it takes longer than a couple of minutes to complete, check the status of the pods to see where they are waiting. If they are pending scheduling due to limited memory or cpu, either adjust the values in values.yaml and restart the process, or add more resources to your Kubernetes cluster.
#### Connect to the Dremio UI
If your Kubernetes cluster supports serviceType LoadBalancer, you can get to the Dremio UI on the load balancer external IP.
For example, if your service output is:
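The example service listing did not survive in this copy. An illustrative `kubectl get services` output (the service name, cluster IP, node ports and age shown here are assumptions, not values from the chart) might look like:

```
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                          AGE
dremio-client   LoadBalancer   10.99.227.183   35.226.31.211   9047:32390/TCP,31010:32260/TCP   2d
```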
you can get to the Dremio UI using the value under column EXTERNAL-IP:
http://35.226.31.211:9047
If your Kubernetes cluster does not support serviceType LoadBalancer, you can access the Dremio UI on the port exposed on the node. For example, if the service output is:
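The NodePort service listing is likewise missing from this copy. An illustrative output for that case (the cluster IP, the 31010 node port and the age are assumptions; 30670 matches the URL below) might be:

```
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                          AGE
dremio-client   NodePort   10.109.88.79   <none>        9047:30670/TCP,31010:30425/TCP   1d
```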
where there is no external IP and the Dremio master is running on node "localhost", you can get to the Dremio UI using:
http://localhost:30670
#### Dremio Client Port
The port 31010 is used for ODBC and JDBC connections. You can look up the service dremio-client in Kubernetes to find the host to use for ODBC or JDBC connections. Depending on whether your Kubernetes cluster supports serviceType LoadBalancer, you will use either the load balancer external IP or the node on which a coordinator is running.
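The service listing referenced by the next paragraph is missing from this copy; an illustrative `kubectl get services dremio-client` output (cluster IP, node ports and age are assumptions) might be:

```
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                          AGE
dremio-client   LoadBalancer   10.99.227.183   35.226.31.211   9047:32390/TCP,31010:32260/TCP   2d
```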
For example, in the above output, the service is exposed on an external IP, so you can use 35.226.31.211:31010 in your ODBC or JDBC connections.
#### Viewing logs
Logs are written to the container's console. All the logs - server.log, server.out, server.gc and access.log - are written into the console simultaneously. You can view the logs using kubectl.
```
kubectl logs <container-name>
```
You can also tail the logs using the -f parameter.
```
kubectl logs -f <container-name>
```
#### Upgrading Dremio
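The upgrade command itself was lost from this copy. As a sketch, assuming the chart was installed from the charts directory and the new image tag has been set in values.yaml (the release name is a placeholder - use the name reported by `helm list`):

```bash
cd charts
helm upgrade <release-name> dremio
```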
Existing pods will be terminated and new pods will be created with the new image. You can
monitor the status of the pods by running:
```
kubectl get pods
```
Once all the pods are restarted and running, your Dremio cluster is upgraded.
#### Customizing Dremio configuration
Dremio configuration files used by the deployment are in the config directory. These files are propagated to all the pods in the cluster. Updating the configuration and upgrading the helm release - just like doing an upgrade - refreshes all the pods with the new configuration. The [Dremio documentation](https://docs.dremio.com/deployment/README-config.html) covers the configuration capabilities in Dremio.
If you need to add a core-site.xml, you can add the file to the config directory and it will be propagated to all the pods on install or upgrade of the deployment.
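As a sketch, a minimal core-site.xml placed in the config directory might look like the following (the properties shown are standard Hadoop S3A keys used purely for illustration; the values are placeholders - use whatever properties your distributed store actually requires):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Illustrative only: credentials for an S3-backed distributed store. -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```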