
Commit 576483d

address feedback
1 parent 964a5e4 commit 576483d

File tree: 3 files changed (+9 −3 lines)

bundle/manifests/netobserv-operator.clusterserviceversion.yaml

Lines changed: 3 additions & 1 deletion
````diff
@@ -586,14 +586,16 @@ spec:
 kubectl edit flowcollector cluster
 ```
 
-As it operates cluster-wide on every node, only a single `FlowCollector` is allowed, and it has to be named `cluster`.
+Only a single `FlowCollector` is allowed, and it has to be named `cluster`.
 
 A couple of settings deserve special attention:
 
 - Sampling (`spec.agent.ebpf.sampling`): a value of `100` means: one flow every 100 is sampled. `1` means all flows are sampled. The lower it is, the more flows you get, and the more accurate are derived metrics, but the higher amount of resources are consumed. By default, sampling is set to 50 (ie. 1:50). Note that more sampled flows also means more storage needed. We recommend to start with default values and refine empirically, to figure out which setting your cluster can manage.
 
 - Loki (`spec.loki`): configure here how to reach Loki. The default values match the Loki quick install paths mentioned above, but you might have to configure differently if you used another installation method. Make sure to disable it (`spec.loki.enable`) if you don't want to use Loki.
 
+- Processor replicas (`spec.processor.consumerReplicas`): how many replicas of `flowlogs-pipeline` should be deployed. Those pods collect, transform and re-export network flows. They can also be configured as unmanaged via `unmanagedReplicas`, if you want to use an auto-scaler.
+
 - Kafka (`spec.deploymentModel: Kafka` and `spec.kafka`): when enabled, integrates the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Kafka can provide better scalability, resiliency and high availability ([view more details](https://www.redhat.com/en/topics/integration/what-is-apache-kafka)). Assumes Kafka is already deployed and a topic is created.
 
 - Exporters (`spec.exporters`) an optional list of exporters to which to send enriched flows. KAFKA and IPFIX exporters are supported. This allows you to define any custom storage or processing that can read from Kafka or use the IPFIX standard.
````
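Taken together, the settings described in this file could look like the following `FlowCollector` sketch. This is illustrative only: the `apiVersion` and all values are assumptions to verify against the CRD installed in your cluster.

```yaml
apiVersion: flows.netobserv.io/v1beta2   # assumption: check the CRD version installed in your cluster
kind: FlowCollector
metadata:
  name: cluster            # only a single FlowCollector is allowed, and it must be named "cluster"
spec:
  agent:
    ebpf:
      sampling: 50         # default 1:50; lower values sample more flows but consume more resources
  loki:
    enable: true           # set to false if you don't want to use Loki
  processor:
    consumerReplicas: 3    # flowlogs-pipeline replicas; illustrative value, tune per cluster
```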

config/descriptions/ocp.md

Lines changed: 3 additions & 1 deletion
````diff
@@ -50,14 +50,16 @@ To edit configuration in cluster, run:
 oc edit flowcollector cluster
 ```
 
-As it operates cluster-wide on every node, only a single `FlowCollector` is allowed, and it has to be named `cluster`.
+Only a single `FlowCollector` is allowed, and it has to be named `cluster`.
 
 A couple of settings deserve special attention:
 
 - Sampling (`spec.agent.ebpf.sampling`): a value of `100` means: one flow every 100 is sampled. `1` means all flows are sampled. The lower it is, the more flows you get, and the more accurate are derived metrics, but the higher amount of resources are consumed. By default, sampling is set to 50 (ie. 1:50). Note that more sampled flows also means more storage needed. We recommend to start with default values and refine empirically, to figure out which setting your cluster can manage.
 
 - Loki (`spec.loki`): configure here how to reach Loki. The default values match the Loki quick install paths mentioned above, but you might have to configure differently if you used another installation method. Make sure to disable it (`spec.loki.enable`) if you don't want to use Loki.
 
+- Processor replicas (`spec.processor.consumerReplicas`): how many replicas of `flowlogs-pipeline` should be deployed. Those pods collect, transform and re-export network flows. They can also be configured as unmanaged via `unmanagedReplicas`, if you want to use an auto-scaler.
+
 - Kafka (`spec.deploymentModel: Kafka` and `spec.kafka`): when enabled, integrates the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Kafka can provide better scalability, resiliency and high availability ([view more details](https://www.redhat.com/en/topics/integration/what-is-apache-kafka)). Assumes Kafka is already deployed and a topic is created.
 
 - Exporters (`spec.exporters`) an optional list of exporters to which to send enriched flows. KAFKA and IPFIX exporters are supported. This allows you to define any custom storage or processing that can read from Kafka or use the IPFIX standard.
````

config/descriptions/upstream.md

Lines changed: 3 additions & 1 deletion
````diff
@@ -54,14 +54,16 @@ To edit configuration in cluster, run:
 kubectl edit flowcollector cluster
 ```
 
-As it operates cluster-wide on every node, only a single `FlowCollector` is allowed, and it has to be named `cluster`.
+Only a single `FlowCollector` is allowed, and it has to be named `cluster`.
 
 A couple of settings deserve special attention:
 
 - Sampling (`spec.agent.ebpf.sampling`): a value of `100` means: one flow every 100 is sampled. `1` means all flows are sampled. The lower it is, the more flows you get, and the more accurate are derived metrics, but the higher amount of resources are consumed. By default, sampling is set to 50 (ie. 1:50). Note that more sampled flows also means more storage needed. We recommend to start with default values and refine empirically, to figure out which setting your cluster can manage.
 
 - Loki (`spec.loki`): configure here how to reach Loki. The default values match the Loki quick install paths mentioned above, but you might have to configure differently if you used another installation method. Make sure to disable it (`spec.loki.enable`) if you don't want to use Loki.
 
+- Processor replicas (`spec.processor.consumerReplicas`): how many replicas of `flowlogs-pipeline` should be deployed. Those pods collect, transform and re-export network flows. They can also be configured as unmanaged via `unmanagedReplicas`, if you want to use an auto-scaler.
+
 - Kafka (`spec.deploymentModel: Kafka` and `spec.kafka`): when enabled, integrates the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Kafka can provide better scalability, resiliency and high availability ([view more details](https://www.redhat.com/en/topics/integration/what-is-apache-kafka)). Assumes Kafka is already deployed and a topic is created.
 
 - Exporters (`spec.exporters`) an optional list of exporters to which to send enriched flows. KAFKA and IPFIX exporters are supported. This allows you to define any custom storage or processing that can read from Kafka or use the IPFIX standard.
````
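For the Kafka deployment model and exporters mentioned in the docs above, a hedged sketch of the relevant `spec` fragment follows. The addresses and topic names are placeholders, not defaults; Kafka must already be deployed and the topics created.

```yaml
spec:
  deploymentModel: Kafka                       # split ingestion from transformation via Kafka
  kafka:
    address: "kafka-bootstrap.example:9092"    # placeholder: your pre-deployed Kafka broker
    topic: "network-flows"                     # placeholder: this topic must already exist
  exporters:
    - type: Kafka                              # KAFKA and IPFIX exporter types are supported
      kafka:
        address: "kafka-bootstrap.example:9092"
        topic: "flows-export"                  # placeholder topic for downstream consumers
```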
