
Commit ec31385

Update CSV desc with loki demo mode
1 parent 16bf7e8 commit ec31385

File tree

3 files changed (+35 -29 lines)


bundle/manifests/netobserv-operator.clusterserviceversion.yaml

Lines changed: 12 additions & 10 deletions
@@ -525,10 +525,10 @@ spec:
   name: flowmetrics.flows.netobserv.io
   version: v1alpha1
 description: |-
-  NetObserv Operator is an OpenShift / Kubernetes operator for network observability. It deploys a monitoring pipeline that consists in:
+  NetObserv Operator is an OpenShift / Kubernetes operator for network observability. It deploys a monitoring pipeline consisting in:
   - an eBPF agent, that generates network flows from captured packets
   - flowlogs-pipeline, a component that collects, enriches and exports these flows
-  - when used in OpenShift, a Console plugin for flows visualization with powerful filtering options, a topology representation and more
+  - a web console for flows visualization with powerful filtering options, a topology representation and more
 
 Flow data is then available in multiple ways, each optional:
 
@@ -548,16 +548,20 @@ spec:
 
 - Installing using [Grafana's official documentation](https://grafana.com/docs/loki/latest/). Here also we wrote a ["distributed Loki" step by step guide](https://github.com/netobserv/documents/blob/main/loki_distributed.md).
 
-For a quick try that is not suitable for production and not scalable (it deploys a single pod, configures a 10GB storage PVC, with 24 hours of retention), you can simply run the following commands:
+For a quick try that is not suitable for production and not scalable, the demo mode can be configured in `FlowCollector` with:
 
-```
-kubectl create namespace netobserv
-kubectl apply -f <(curl -L https://raw.githubusercontent.com/netobserv/documents/5410e65b8e05aaabd1244a9524cfedd8ac8c56b5/examples/zero-click-loki/1-storage.yaml) -n netobserv
-kubectl apply -f <(curl -L https://raw.githubusercontent.com/netobserv/documents/5410e65b8e05aaabd1244a9524cfedd8ac8c56b5/examples/zero-click-loki/2-loki.yaml) -n netobserv
+```yaml
+spec:
+  loki:
+    mode: Monolithic
+    monolithic:
+      installDemoLoki: true
 ```
 
+It deploys a single pod, configures a 10GB storage PVC, with 24 hours of retention.
+
 If you prefer to not use Loki, you must set `spec.loki.enable` to `false` in `FlowCollector`.
-In that case, you can still get the Prometheus metrics or export raw flows to a custom collector. But be aware that some of the Console plugin features will be disabled. For instance, you will not be able to view raw flows there, and the metrics / topology will have a more limited level of details, missing information such as pods or IPs.
+In that case, you still get the Prometheus metrics or export raw flows to a custom collector. But be aware that some of the Console plugin features will be disabled. For instance, you will not be able to view raw flows there, and the metrics / topology will have a more limited level of details, missing information such as pods or IPs.
 
 ### Kafka
 
@@ -585,8 +589,6 @@ spec:
 
 - Loki (`spec.loki`): configure here how to reach Loki. The default values match the Loki quick install paths mentioned above, but you might have to configure differently if you used another installation method. Make sure to disable it (`spec.loki.enable`) if you don't want to use Loki.
 
-- Quick filters (`spec.consolePlugin.quickFilters`): configure preset filters to be displayed in the Console plugin. They offer a way to quickly switch from filters to others, such as showing / hiding pods network, or infrastructure network, or application network, etc. They can be tuned to reflect the different workloads running on your cluster. For a list of available filters, [check this page](https://github.com/netobserv/network-observability-operator/blob/1.10.1-community/docs/QuickFilters.md).
-
 - Kafka (`spec.deploymentModel: Kafka` and `spec.kafka`): when enabled, integrates the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Kafka can provide better scalability, resiliency and high availability ([view more details](https://www.redhat.com/en/topics/integration/what-is-apache-kafka)). Assumes Kafka is already deployed and a topic is created.
 
 - Exporters (`spec.exporters`) an optional list of exporters to which to send enriched flows. KAFKA and IPFIX exporters are supported. This allows you to define any custom storage or processing that can read from Kafka or use the IPFIX standard.
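For context, the demo-mode snippet introduced by this diff lives inside a `FlowCollector` custom resource. A minimal sketch of a complete resource follows; the `apiVersion` and the singleton name `cluster` are assumptions not stated in this commit, so check the installed CRD before applying:

```yaml
# Hypothetical minimal FlowCollector using the demo Loki mode from this commit.
# apiVersion and metadata.name are assumptions (not part of this diff).
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  loki:
    mode: Monolithic
    monolithic:
      installDemoLoki: true   # deploys the single-pod, 10GB PVC, 24h-retention demo Loki
```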

config/descriptions/ocp.md

Lines changed: 11 additions & 9 deletions
@@ -13,20 +13,24 @@ Flow data is then available in multiple ways, each optional:
 
 ### Loki
 
-[Loki](https://grafana.com/oss/loki/), from GrafanaLabs, can optionally be used as the backend to store all collected flows. The Network Observability operator does not install Loki directly, however we provide some guidance to help you there.
+[Loki](https://grafana.com/oss/loki/), from GrafanaLabs, can optionally be used as the backend to store all collected flows. The Network Observability operator does not install Loki directly, except in demo mode; however we provide some guidance to help you there.
 
 - For a production or production-like environment usage, refer to [the operator documentation](https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/network_observability/installing-network-observability-operators).
 
-- For a quick try that is not suitable for production and not scalable (it deploys a single pod, configures a 10GB storage PVC, with 24 hours of retention), you can simply run the following commands:
+- For a quick try that is not suitable for production and not scalable, the demo mode can be configured in `FlowCollector` with:
 
-```
-oc create namespace netobserv
-oc apply -f <(curl -L https://raw.githubusercontent.com/netobserv/documents/5410e65b8e05aaabd1244a9524cfedd8ac8c56b5/examples/zero-click-loki/1-storage.yaml) -n netobserv
-oc apply -f <(curl -L https://raw.githubusercontent.com/netobserv/documents/5410e65b8e05aaabd1244a9524cfedd8ac8c56b5/examples/zero-click-loki/2-loki.yaml) -n netobserv
+```yaml
+spec:
+  loki:
+    mode: Monolithic
+    monolithic:
+      installDemoLoki: true
 ```
 
+It deploys a single pod, configures a 10GB storage PVC, with 24 hours of retention.
+
 If you prefer to not use Loki, you must set `spec.loki.enable` to `false` in `FlowCollector`.
-In that case, you can still get the Prometheus metrics or export raw flows to a custom collector. But be aware that some of the Console plugin features will be disabled. For instance, you will not be able to view raw flows there, and the metrics / topology will have a more limited level of details, missing information such as pods or IPs.
+In that case, you still get the Prometheus metrics or export raw flows to a custom collector. But be aware that some of the Console plugin features will be disabled. For instance, you will not be able to view raw flows there, and the metrics / topology will have a more limited level of details, missing information such as pods or IPs.
 
 ### Kafka
 
@@ -54,8 +58,6 @@ A couple of settings deserve special attention:
 
 - Loki (`spec.loki`): configure here how to reach Loki. The default values match the Loki quick install paths mentioned above, but you might have to configure differently if you used another installation method. Make sure to disable it (`spec.loki.enable`) if you don't want to use Loki.
 
-- Quick filters (`spec.consolePlugin.quickFilters`): configure preset filters to be displayed in the Console plugin. They offer a way to quickly switch from filters to others, such as showing / hiding pods network, or infrastructure network, or application network, etc. They can be tuned to reflect the different workloads running on your cluster. For a list of available filters, [check this page](https://github.com/netobserv/network-observability-operator/blob/1.10.1-community/docs/QuickFilters.md).
-
 - Kafka (`spec.deploymentModel: Kafka` and `spec.kafka`): when enabled, integrates the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Kafka can provide better scalability, resiliency and high availability ([view more details](https://www.redhat.com/en/topics/integration/what-is-apache-kafka)). Assumes Kafka is already deployed and a topic is created.
 
 - Exporters (`spec.exporters`) an optional list of exporters to which to send enriched flows. KAFKA and IPFIX exporters are supported. This allows you to define any custom storage or processing that can read from Kafka or use the IPFIX standard.
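The Loki opt-out mentioned in all three descriptions is itself a one-line `FlowCollector` change. A sketch, using only the `spec.loki.enable` path quoted in the text above:

```yaml
# Sketch: disable Loki storage entirely. Prometheus metrics and exporters
# keep working, but Console raw-flow views and some detail levels are lost
# (as noted in the description above).
spec:
  loki:
    enable: false
```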

config/descriptions/upstream.md

Lines changed: 12 additions & 10 deletions
@@ -1,7 +1,7 @@
-NetObserv Operator is an OpenShift / Kubernetes operator for network observability. It deploys a monitoring pipeline that consists in:
+NetObserv Operator is an OpenShift / Kubernetes operator for network observability. It deploys a monitoring pipeline consisting in:
 - an eBPF agent, that generates network flows from captured packets
 - flowlogs-pipeline, a component that collects, enriches and exports these flows
-- when used in OpenShift, a Console plugin for flows visualization with powerful filtering options, a topology representation and more
+- a web console for flows visualization with powerful filtering options, a topology representation and more
 
 Flow data is then available in multiple ways, each optional:
 
@@ -21,16 +21,20 @@ For normal usage, we recommend two options:
 
 - Installing using [Grafana's official documentation](https://grafana.com/docs/loki/latest/). Here also we wrote a ["distributed Loki" step by step guide](https://github.com/netobserv/documents/blob/main/loki_distributed.md).
 
-For a quick try that is not suitable for production and not scalable (it deploys a single pod, configures a 10GB storage PVC, with 24 hours of retention), you can simply run the following commands:
+For a quick try that is not suitable for production and not scalable, the demo mode can be configured in `FlowCollector` with:
 
-```
-kubectl create namespace netobserv
-kubectl apply -f <(curl -L https://raw.githubusercontent.com/netobserv/documents/5410e65b8e05aaabd1244a9524cfedd8ac8c56b5/examples/zero-click-loki/1-storage.yaml) -n netobserv
-kubectl apply -f <(curl -L https://raw.githubusercontent.com/netobserv/documents/5410e65b8e05aaabd1244a9524cfedd8ac8c56b5/examples/zero-click-loki/2-loki.yaml) -n netobserv
+```yaml
+spec:
+  loki:
+    mode: Monolithic
+    monolithic:
+      installDemoLoki: true
 ```
 
+It deploys a single pod, configures a 10GB storage PVC, with 24 hours of retention.
+
 If you prefer to not use Loki, you must set `spec.loki.enable` to `false` in `FlowCollector`.
-In that case, you can still get the Prometheus metrics or export raw flows to a custom collector. But be aware that some of the Console plugin features will be disabled. For instance, you will not be able to view raw flows there, and the metrics / topology will have a more limited level of details, missing information such as pods or IPs.
+In that case, you still get the Prometheus metrics or export raw flows to a custom collector. But be aware that some of the Console plugin features will be disabled. For instance, you will not be able to view raw flows there, and the metrics / topology will have a more limited level of details, missing information such as pods or IPs.
 
 ### Kafka
 
@@ -58,8 +62,6 @@ A couple of settings deserve special attention:
 
 - Loki (`spec.loki`): configure here how to reach Loki. The default values match the Loki quick install paths mentioned above, but you might have to configure differently if you used another installation method. Make sure to disable it (`spec.loki.enable`) if you don't want to use Loki.
 
-- Quick filters (`spec.consolePlugin.quickFilters`): configure preset filters to be displayed in the Console plugin. They offer a way to quickly switch from filters to others, such as showing / hiding pods network, or infrastructure network, or application network, etc. They can be tuned to reflect the different workloads running on your cluster. For a list of available filters, [check this page](https://github.com/netobserv/network-observability-operator/blob/1.10.1-community/docs/QuickFilters.md).
-
 - Kafka (`spec.deploymentModel: Kafka` and `spec.kafka`): when enabled, integrates the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Kafka can provide better scalability, resiliency and high availability ([view more details](https://www.redhat.com/en/topics/integration/what-is-apache-kafka)). Assumes Kafka is already deployed and a topic is created.
 
 - Exporters (`spec.exporters`) an optional list of exporters to which to send enriched flows. KAFKA and IPFIX exporters are supported. This allows you to define any custom storage or processing that can read from Kafka or use the IPFIX standard.
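The Kafka and Exporters settings described in the bullets above can be combined in a single spec. A hedged sketch: the bootstrap address and topic names are placeholders, and the nested field names (`kafka.address`, `kafka.topic`, `exporters[].type`) are assumptions inferred from the prose, not taken from this commit:

```yaml
# Sketch only: split ingestion from transformation via Kafka
# (spec.deploymentModel: Kafka) and forward enriched flows to an external
# Kafka topic. Addresses and topics are placeholders; both topics must
# already exist, as the description notes.
spec:
  deploymentModel: Kafka
  kafka:
    address: my-kafka-bootstrap.netobserv:9092   # placeholder bootstrap address
    topic: network-flows                         # placeholder ingestion topic
  exporters:
    - type: Kafka
      kafka:
        address: my-kafka-bootstrap.netobserv:9092
        topic: netobserv-flows-export            # placeholder export topic
```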

0 commit comments
