The LDBC FinBench Data Generator (Datagen) produces the datasets for the [LDBC FinBench's workloads](https://ldbcouncil.org/benchmarks/finbench/).

This data generator produces labelled directed property graphs based on the simulation of financial activities in business systems. The key features include generation, factorization, and transformation. A detailed description of the schema produced by Datagen, as well as the format of the output files, can be found in the latest version of the official LDBC FinBench specification document.

**Note: The main branch is a work in progress for the upcoming `v0.2` release, which aims at scales larger than SF100. For the stable version, please refer to version `0.1.0` on the `v0.1.0` branch.**
## DataGen Design
- Java 8 installed.
- Python3 and related packages installed. See each `install-dependencies.sh` for details.
- Scala 2.12; note that it is integrated automatically when Maven builds.
- Spark deployed. Spark 3.2.x is the recommended runtime to use. The rest of the instructions are provided assuming Spark 3.2.x.

### Workflow

- Use the Spark application to generate the factor tables and raw data.

- Use the Python scripts to transform the data into snapshot data and write queries.
### Generation of Raw Data
- Deploy Spark
  - Use `scripts/get-spark-to-home.sh` to download pre-built Spark to the home directory and then decompress it.
- Set the PATH environment variable to include the Spark binaries.
- Build the project
  - Run `mvn clean package -DskipTests` to package the artifacts.
- Run locally with scripts
  - See `scripts/run_local.sh` for details. It uses spark-submit to run the data generator. Please ensure that you have the prerequisites installed and that the build is successful.
- Run in cloud: To be supported
- Run in cluster: To be supported
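
The "set the `PATH`" step above can be sketched as follows; the `SPARK_HOME` location is a placeholder for wherever `scripts/get-spark-to-home.sh` unpacked Spark:

```shell
#!/usr/bin/env bash
# Sketch: put the Spark binaries on the PATH. SPARK_HOME below is a
# placeholder; point it at the directory the download script unpacked.
SPARK_HOME="${SPARK_HOME:-$HOME/spark}"

case ":$PATH:" in
  *":$SPARK_HOME/bin:"*) ;;              # already present, do nothing
  *) PATH="$SPARK_HOME/bin:$PATH" ;;     # otherwise prepend Spark's bin/
esac
export SPARK_HOME PATH

echo "Spark binaries expected at: $SPARK_HOME/bin"
```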
### Transformation of Raw Data
- Set the `${FinBench_DATA_ROOT}` variable in `transformation/transform.sh` and run the script.
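
A minimal sketch of that invocation, under the assumption that `transform.sh` also honors an externally exported `${FinBench_DATA_ROOT}`; the `$HOME/finbench-out` data path is only a placeholder:

```shell
#!/usr/bin/env bash
# Sketch: point the transformation at the generated raw data and run it.
# The variable name and script path come from this README; the data path is
# a placeholder, and the existence check keeps the sketch harmless outside
# the repository.
export FinBench_DATA_ROOT="${FinBench_DATA_ROOT:-$HOME/finbench-out}"   # placeholder path
echo "FinBench_DATA_ROOT=$FinBench_DATA_ROOT"

if [ -f transformation/transform.sh ]; then
  bash transformation/transform.sh
fi
```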