[Scheduler][PIC] create pdat as list #1514
aserhani wants to merge 11 commits into Shamrock-code:main from
Conversation
This reverts commit 7e08f2f.
Summary of Changes
Hello @aserhani, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request refactors the PatchScheduler and ShamrockCtx components to support multiple PatchDataLayer instances. Previously, these components were designed to handle a single PatchDataLayer. The change introduces std::deque containers to manage a list of PatchDataLayerLayout shared pointers and SchedulerPatchData objects, enhancing the system's flexibility for managing different data layers. Existing functionalities are updated to either operate on a specified layer index or iterate through all layers, while backward compatibility is preserved for the primary layer.
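For readers skimming the review, here is a rough, non-authoritative illustration of what the multi-layer interface looks like from caller code, pieced together from the updated test code and method signatures further down in this PR. The second layer and its "E_field" field are hypothetical examples added for illustration, not part of the patch:

std::deque<std::shared_ptr<shamrock::patch::PatchDataLayerLayout>> pdl_ptr_list{
    std::make_shared<shamrock::patch::PatchDataLayerLayout>(),  // layer 0 (legacy pdl)
    std::make_shared<shamrock::patch::PatchDataLayerLayout>()}; // layer 1 (hypothetical extra layer)

pdl_ptr_list.at(0)->add_field<f32_3>("xyz", 1);
pdl_ptr_list.at(1)->add_field<f32_3>("E_field", 1); // hypothetical PIC field

// The scheduler now takes the whole list; layer-aware methods accept a
// layer_idx argument defaulting to 0 for backward compatibility.
PatchScheduler sched(pdl_ptr_list, 1e9, 1);
sched.for_each_patch_data(
    [](u64 id_patch, shamrock::patch::Patch cur_p, shamrock::patch::PatchDataLayer &pdat) {
        // operate on the patch data of layer 1
    },
    /*layer_idx=*/1);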
Code Review
The pull request introduces support for multiple patch data layers by replacing the single PatchDataLayerLayout and SchedulerPatchData members with std::deques of these types in ShamrockCtx and PatchScheduler. Many methods across these classes, including those for adding fields, setting domain bounds, gathering data, and iterating over patches, now accept an optional layer_idx argument, defaulting to 0, to specify which data layer to operate on. The PatchScheduler constructor and initialization logic were modified to handle the list of data layouts, and the Python binding for get_patch_list_global was updated to iterate through all data layers.
Review comments highlighted several issues:
- the pyShamrockCtx.cpp change gathers data for layer 0 only and then attempts to find fields from other layers within it, potentially overwriting data;
- an std::invalid_argument exception is constructed but not thrown in PatchScheduler::set_coord_domain_bound, making the type check ineffective;
- ShamrockCtx::init_sched lacks a check for an empty pdl_list;
- PatchScheduler::add_root_patches uses the default layer_idx=0 for get_layout_ptr() when creating PatchDataLayer, leading to incorrect data layouts for other layers;
- the PatchScheduler::get_box_tranform and get_box_volume specializations check pdl() (layer 0) while accessing data from patch_data_list.at(layer_idx);
- PatchScheduler::dump_status iterates over patch_data_list by value, causing inefficient copies.
for (u32 layer_idx = 0; layer_idx < ctx.pdl_list.size(); layer_idx++) {
    for (auto fname : ctx.pdl_list.at(layer_idx)->get_field_names()) {
        append_to_map<f32>(fname, data, dic_out);
        append_to_map<f32_2>(fname, data, dic_out);
        append_to_map<f32_3>(fname, data, dic_out);
        append_to_map<f32_4>(fname, data, dic_out);
        append_to_map<f32_8>(fname, data, dic_out);
        append_to_map<f32_16>(fname, data, dic_out);
        append_to_map<f64>(fname, data, dic_out);
        append_to_map<f64_2>(fname, data, dic_out);
        append_to_map<f64_3>(fname, data, dic_out);
        append_to_map<f64_4>(fname, data, dic_out);
        append_to_map<f64_8>(fname, data, dic_out);
        append_to_map<f64_16>(fname, data, dic_out);
        append_to_map<u32>(fname, data, dic_out);
        append_to_map<u64>(fname, data, dic_out);
        append_to_map<u32_3>(fname, data, dic_out);
        append_to_map<u64_3>(fname, data, dic_out);
        append_to_map<i64_3>(fname, data, dic_out);
    }
}
The current implementation for collecting data across multiple layers is incorrect. ctx.allgather_data() is called only once before the loops, and it only gathers data for layer 0 due to the current implementation of ShamrockCtx::allgather_data. The loop then iterates through field names of all layers, but attempts to find them in the data of layer 0, which is wrong. Additionally, if different layers have fields with the same name, their data will be overwritten in the output dictionary.
To fix this, you should:
- Modify
ShamrockCtx::gather_dataandShamrockCtx::allgather_datainShamrockCtx.hppto accept alayer_idx. - In this file, move the
allgather_datacall inside the loop to fetch data for each layer. - Consider how to handle fields with the same name across different layers. Returning a list of dictionaries (one per layer) or prefixing field names with the layer index would be a robust solution.
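A minimal sketch of that restructuring, assuming allgather_data gains a layer_idx parameter (it does not have one in this PR) and keeping the helper names from the snippet above; the name-collision handling is only noted in a comment since the right strategy is still open:

for (u32 layer_idx = 0; layer_idx < ctx.pdl_list.size(); layer_idx++) {
    // Gather the data of this specific layer instead of reusing layer 0's data.
    auto data = ctx.allgather_data(layer_idx); // hypothetical layer_idx overload

    for (auto fname : ctx.pdl_list.at(layer_idx)->get_field_names()) {
        // Note: if two layers declare a field with the same name, the entries
        // would still collide in dic_out; prefixing fname with layer_idx (or
        // returning one dictionary per layer) would avoid that.
        append_to_map<f32>(fname, data, dic_out);
        append_to_map<f64>(fname, data, dic_out);
        // ... same call repeated for the remaining field types, as in the
        // original loop above ...
    }
}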
std::invalid_argument(
    std::string("the main field is not of the correct type to call this function\n")
    + "fct called : " + __PRETTY_FUNCTION__
    + shambase::format("current patch data layout of index {} : ", layer_idx)
    + pdl(layer_idx).get_description_str()
);
The std::invalid_argument exception is constructed but not thrown. This makes the type check ineffective, and the program will proceed even if there's a type mismatch, potentially leading to undefined behavior.
throw std::invalid_argument(
std::string("the main field is not of the correct type to call this function\n")
+ "fct called : " + __PRETTY_FUNCTION__
+ shambase::format("current patch data layout of index {} : ", layer_idx)
+ pdl(layer_idx).get_description_str()
);

for (u32 layer_idx = 0; layer_idx < pdl_list.size(); layer_idx++) {
    if (!pdl_list.at(layer_idx)) {
        throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
    }
}
The loop checks for nullptrs in pdl_list, but it doesn't handle the case where pdl_list is empty. If pdl_list is empty, the PatchScheduler constructor will be called with an empty deque, leading to a crash when it tries to access pdl_ptr_list.at(0). An explicit check for an empty list should be added before this loop.
Suggested change:

if (pdl_list.empty()) {
    throw ShamAPIException("patch data layout list is not initialized");
}
for (u32 layer_idx = 0; layer_idx < pdl_list.size(); layer_idx++) {
    if (!pdl_list.at(layer_idx)) {
        throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
    }
}
if (shamcomm::world_rank() == node_owner_id) {
    patch_data.owned_data.add_obj(root.id_patch, PatchDataLayer(get_layout_ptr()));
    patch_data_list.at(layer_idx).owned_data.add_obj(root.id_patch, PatchDataLayer(get_layout_ptr()));
get_layout_ptr() is called without arguments, so it defaults to layer_idx=0. This means a PatchDataLayer is always created with the layout of layer 0, regardless of the layer_idx passed to add_root_patches. This is incorrect for multi-layer setups and will lead to wrong data layouts for layers other than 0.
patch_data_list.at(layer_idx).owned_data.add_obj(root.id_patch, PatchDataLayer(get_layout_ptr(layer_idx)));

template<>
std::tuple<f32_3, f32_3> PatchScheduler::get_box_tranform() {
std::tuple<f32_3, f32_3> PatchScheduler::get_box_tranform(u32 layer_idx) {
    if (!pdl().check_main_field_type<f32_3>())
In get_box_tranform<f32_3>, pdl() is called without an argument, so it defaults to checking the layout of layer 0. However, the data is then fetched from patch_data_list.at(layer_idx). The check should be for the correct layer using pdl(layer_idx). This same issue exists for get_box_tranform<f64_3> (line 276) and get_box_volume specializations (lines 290, 299, 308).
if (!pdl(layer_idx).check_main_field_type<f32_3>())

for (auto _patch_data : patch_data_list) {
    _patch_data.for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
        ss << "patch id : " << patch_id << " len = " << pdat.get_obj_cnt() << "\n";
    });
}
The loop iterates over patch_data_list by value (for (auto _patch_data : ...)), which causes SchedulerPatchData objects to be copied. This is inefficient. You should iterate by reference to avoid unnecessary copies.
for (auto& _patch_data : patch_data_list) {
_patch_data.for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
ss << "patch id : " << patch_id << " len = " << pdat.get_obj_cnt() << "\n";
});
}
Thanks @aserhani for opening this PR! You can do multiple things directly here. Once the workflow completes, a message will appear displaying information related to the run. The PR also gets automatically reviewed by Gemini; you can:
Workflow report
Workflow report corresponding to commit 3f3f5a5.
Pre-commit check report
Some failures were detected in the base source checks.
Suggested changes
Detailed changes:
diff --git a/src/shamrock/include/shamrock/scheduler/PatchScheduler.hpp b/src/shamrock/include/shamrock/scheduler/PatchScheduler.hpp
index 9e43f54f..bdf754e3 100644
--- a/src/shamrock/include/shamrock/scheduler/PatchScheduler.hpp
+++ b/src/shamrock/include/shamrock/scheduler/PatchScheduler.hpp
@@ -28,8 +28,8 @@
#include "shamrock/solvergraph/NodeSetEdge.hpp"
#include "shamrock/solvergraph/PatchDataLayerRefs.hpp"
#include <nlohmann/json.hpp>
-#include <deque>
#include <unordered_set>
+#include <deque>
#include <fstream>
#include <functional>
#include <memory>
@@ -66,23 +66,27 @@ class PatchScheduler {
using SchedulerPatchData = shamrock::scheduler::SchedulerPatchData;
std::deque<std::shared_ptr<shamrock::patch::PatchDataLayerLayout>> pdl_ptr_list;
- std::deque<SchedulerPatchData> patch_data_list; ///< handle the data of the patches of the scheduler for every layer
- SchedulerPatchData & patch_data; ///< first layer patch data (legacy) patch_data_list.at(0)
+ std::deque<SchedulerPatchData>
+ patch_data_list; ///< handle the data of the patches of the scheduler for every layer
+ SchedulerPatchData &patch_data; ///< first layer patch data (legacy) patch_data_list.at(0)
u64 crit_patch_split; ///< splitting limit (if load value > crit_patch_split => patch split)
u64 crit_patch_merge; ///< merging limit (if load value < crit_patch_merge => patch merge)
SchedulerPatchList patch_list; ///< handle the list of the patches of the scheduler
- PatchTree patch_tree; ///< handle the tree structure of the patches
+ PatchTree patch_tree; ///< handle the tree structure of the patches
// using unordered set is not an issue since we use the find command after
std::unordered_set<u64> owned_patch_id; ///< list of owned patch ids updated with
///< (owned_patch_id = patch_list.build_local())
- inline shamrock::patch::PatchDataLayerLayout &pdl(u32 layer_idx = 0) { return shambase::get_check_ref(pdl_ptr_list.at(layer_idx)); }
+ inline shamrock::patch::PatchDataLayerLayout &pdl(u32 layer_idx = 0) {
+ return shambase::get_check_ref(pdl_ptr_list.at(layer_idx));
+ }
- inline std::shared_ptr<shamrock::patch::PatchDataLayerLayout> get_layout_ptr(u32 layer_idx = 0) const {
+ inline std::shared_ptr<shamrock::patch::PatchDataLayerLayout> get_layout_ptr(
+ u32 layer_idx = 0) const {
return pdl_ptr_list.at(layer_idx);
}
@@ -139,8 +143,7 @@ class PatchScheduler {
std::string("the main field is not of the correct type to call this function\n")
+ "fct called : " + __PRETTY_FUNCTION__
+ shambase::format("current patch data layout of index {} : ", layer_idx)
- + pdl(layer_idx).get_description_str()
- );
+ + pdl(layer_idx).get_description_str());
}
patch_data_list.at(layer_idx).sim_box.set_bounding_box<vectype>({bmin, bmax});
}
@@ -178,7 +181,8 @@ class PatchScheduler {
[[deprecated]]
void dump_local_patches(std::string filename);
- std::vector<std::unique_ptr<shamrock::patch::PatchDataLayer>> gather_data(u32 rank, u32 layer_idx = 0);
+ std::vector<std::unique_ptr<shamrock::patch::PatchDataLayer>> gather_data(
+ u32 rank, u32 layer_idx = 0);
/**
* @brief add patch to the scheduler
@@ -232,28 +236,30 @@ class PatchScheduler {
template<class Function>
inline void for_each_patch_data(Function &&fct, u32 layer_idx = 0) {
- patch_data_list.at(layer_idx).for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
- shamrock::patch::Patch &cur_p
- = patch_list.global[patch_list.id_patch_to_global_idx[patch_id]];
+ patch_data_list.at(layer_idx).for_each_patchdata(
+ [&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
+ shamrock::patch::Patch &cur_p
+ = patch_list.global[patch_list.id_patch_to_global_idx[patch_id]];
- if (!cur_p.is_err_mode()) {
- fct(patch_id, cur_p, pdat);
- }
- });
+ if (!cur_p.is_err_mode()) {
+ fct(patch_id, cur_p, pdat);
+ }
+ });
}
template<class Function>
inline void for_each_patch(Function &&fct, u32 layer_idx = 0) {
- patch_data_list.at(layer_idx).for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
- shamrock::patch::Patch &cur_p
- = patch_list.global[patch_list.id_patch_to_global_idx[patch_id]];
+ patch_data_list.at(layer_idx).for_each_patchdata(
+ [&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
+ shamrock::patch::Patch &cur_p
+ = patch_list.global[patch_list.id_patch_to_global_idx[patch_id]];
- // TODO should feed the sycl queue to the lambda
- if (!cur_p.is_err_mode()) {
- fct(patch_id, cur_p);
- }
- });
+ // TODO should feed the sycl queue to the lambda
+ if (!cur_p.is_err_mode()) {
+ fct(patch_id, cur_p);
+ }
+ });
}
inline void for_each_global_patch(std::function<void(const shamrock::patch::Patch)> fct) {
@@ -273,7 +279,8 @@ class PatchScheduler {
}
inline void for_each_local_patchdata(
- std::function<void(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &)> fct, u32 layer_idx = 0) {
+ std::function<void(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &)> fct,
+ u32 layer_idx = 0) {
for (shamrock::patch::Patch p : patch_list.local) {
if (!p.is_err_mode()) {
fct(p, patch_data_list.at(layer_idx).get_pdat(p.id_patch));
@@ -283,14 +290,15 @@ class PatchScheduler {
inline void for_each_local_patch_nonempty(
std::function<void(const shamrock::patch::Patch &)> fct, u32 layer_idx = 0) {
- patch_data_list.at(layer_idx).for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
- shamrock::patch::Patch &cur_p
- = patch_list.global[patch_list.id_patch_to_global_idx.at(patch_id)];
+ patch_data_list.at(layer_idx).for_each_patchdata(
+ [&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
+ shamrock::patch::Patch &cur_p
+ = patch_list.global[patch_list.id_patch_to_global_idx.at(patch_id)];
- if ((!cur_p.is_err_mode()) && (!pdat.is_empty())) {
- fct(cur_p);
- }
- });
+ if ((!cur_p.is_err_mode()) && (!pdat.is_empty())) {
+ fct(cur_p);
+ }
+ });
}
inline u32 get_patch_rank_owner(u64 patch_id) {
@@ -300,26 +308,31 @@ class PatchScheduler {
}
inline void for_each_patchdata_nonempty(
- std::function<void(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &)> fct, u32 layer_idx = 0) {
- patch_data_list.at(layer_idx).for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
- shamrock::patch::Patch &cur_p
- = patch_list.global[patch_list.id_patch_to_global_idx.at(patch_id)];
-
- if ((!cur_p.is_err_mode()) && (!pdat.is_empty())) {
- fct(cur_p, pdat);
- }
- });
+ std::function<void(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &)> fct,
+ u32 layer_idx = 0) {
+ patch_data_list.at(layer_idx).for_each_patchdata(
+ [&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
+ shamrock::patch::Patch &cur_p
+ = patch_list.global[patch_list.id_patch_to_global_idx.at(patch_id)];
+
+ if ((!cur_p.is_err_mode()) && (!pdat.is_empty())) {
+ fct(cur_p, pdat);
+ }
+ });
}
template<class T>
inline shambase::DistributedData<T> map_owned_patchdata(
- std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct, u32 layer_idx = 0) {
+ std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct,
+ u32 layer_idx = 0) {
shambase::DistributedData<T> ret;
using namespace shamrock::patch;
- for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
- ret.add_obj(id_patch, fct(cur_p, pdat));
- }, layer_idx);
+ for_each_patch_data(
+ [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+ ret.add_obj(id_patch, fct(cur_p, pdat));
+ },
+ layer_idx);
return ret;
}
@@ -350,26 +363,32 @@ class PatchScheduler {
template<class T>
inline shambase::DistributedData<T> map_owned_patchdata_fetch_simple(
- std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct, u32 layer_idx = 0) {
+ std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct,
+ u32 layer_idx = 0) {
shambase::DistributedData<T> ret;
using namespace shamrock::patch;
- for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
- ret.add_obj(id_patch, fct(cur_p, pdat));
- }, layer_idx);
+ for_each_patch_data(
+ [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+ ret.add_obj(id_patch, fct(cur_p, pdat));
+ },
+ layer_idx);
return distrib_data_local_to_all_simple(ret);
}
template<class T>
inline shambase::DistributedData<T> map_owned_patchdata_fetch_load_store(
- std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct, u32 layer_idx = 0) {
+ std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct,
+ u32 layer_idx = 0) {
shambase::DistributedData<T> ret;
using namespace shamrock::patch;
- for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
- ret.add_obj(id_patch, fct(cur_p, pdat));
- }, layer_idx);
+ for_each_patch_data(
+ [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+ ret.add_obj(id_patch, fct(cur_p, pdat));
+ },
+ layer_idx);
return distrib_data_local_to_all_load_store(ret);
}
@@ -390,9 +409,11 @@ class PatchScheduler {
StackEntry stack_loc{};
using namespace shamrock::patch;
u64 num_obj = 0; // TODO get_rank_count() in scheduler
- for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
- num_obj += pdat.get_obj_cnt();
- }, layer_idx);
+ for_each_patch_data(
+ [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+ num_obj += pdat.get_obj_cnt();
+ },
+ layer_idx);
return num_obj;
}
@@ -419,21 +440,23 @@ class PatchScheduler {
using namespace shamrock::patch;
u64 ptr = 0; // TODO accumulate_field() in scheduler ?
- for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
- using namespace shamalgs::memory;
- using namespace shambase;
-
- if (pdat.get_obj_cnt() > 0) {
- write_with_offset_into(
- shamsys::instance::get_compute_scheduler().get_queue(),
- get_check_ref(ret),
- pdat.get_field<T>(field_idx).get_buf(),
- ptr,
- pdat.get_obj_cnt() * nvar);
-
- ptr += pdat.get_obj_cnt() * nvar;
- }
- }, layer_idx);
+ for_each_patch_data(
+ [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+ using namespace shamalgs::memory;
+ using namespace shambase;
+
+ if (pdat.get_obj_cnt() > 0) {
+ write_with_offset_into(
+ shamsys::instance::get_compute_scheduler().get_queue(),
+ get_check_ref(ret),
+ pdat.get_field<T>(field_idx).get_buf(),
+ ptr,
+ pdat.get_obj_cnt() * nvar);
+
+ ptr += pdat.get_obj_cnt() * nvar;
+ }
+ },
+ layer_idx);
}
return ret;
@@ -462,7 +485,8 @@ class PatchScheduler {
// }
template<class Function, class Pfield>
- inline void compute_patch_field(Pfield &field, MPI_Datatype &dtype, Function &&lambda, u32 layer_idx = 0) {
+ inline void compute_patch_field(
+ Pfield &field, MPI_Datatype &dtype, Function &&lambda, u32 layer_idx = 0) {
field.local_nodes_value.resize(patch_list.local.size());
for (u64 idx = 0; idx < patch_list.local.size(); idx++) {
@@ -485,9 +509,11 @@ class PatchScheduler {
[&](shamrock::solvergraph::PatchDataLayerRefs &edge) {
edge.free_alloc();
using namespace shamrock::patch;
- for_each_patchdata_nonempty([&](Patch cur_p, PatchDataLayer &pdat) {
- edge.patchdatas.add_obj(cur_p.id_patch, std::ref(pdat));
- }, layer_idx);
+ for_each_patchdata_nonempty(
+ [&](Patch cur_p, PatchDataLayer &pdat) {
+ edge.patchdatas.add_obj(cur_p.id_patch, std::ref(pdat));
+ },
+ layer_idx);
});
return std::make_shared<decltype(node_set_edge)>(std::move(node_set_edge));
@@ -499,7 +525,8 @@ class PatchScheduler {
* @param coords coordinates of the patch
* @return u64 the id of the made patch
*/
- std::vector<u64> add_root_patches(std::vector<shamrock::patch::PatchCoord<3>> coords, u32 layer_idx = 0);
+ std::vector<u64> add_root_patches(
+ std::vector<shamrock::patch::PatchCoord<3>> coords, u32 layer_idx = 0);
shamrock::patch::SimulationBoxInfo &get_sim_box() { return patch_data_list.at(0).sim_box; }
diff --git a/src/shamrock/include/shamrock/scheduler/ShamrockCtx.hpp b/src/shamrock/include/shamrock/scheduler/ShamrockCtx.hpp
index 4e58ec40..5326ba02 100644
--- a/src/shamrock/include/shamrock/scheduler/ShamrockCtx.hpp
+++ b/src/shamrock/include/shamrock/scheduler/ShamrockCtx.hpp
@@ -81,7 +81,8 @@ class ShamrockCtx {
pdl_list.at(layer_idx)->add_field<T>(fname, nvar);
}
- inline void pdata_layout_add_field_t(std::string fname, u32 nvar, std::string type, u32 layer_idx = 0) {
+ inline void pdata_layout_add_field_t(
+ std::string fname, u32 nvar, std::string type, u32 layer_idx = 0) {
if (sched) {
throw ShamAPIException("cannot modify patch data layout while the scheduler is on");
}
@@ -90,7 +91,8 @@ class ShamrockCtx {
inline void pdata_layout_print(u32 layer_idx = 0) {
if (!pdl_list.at(layer_idx)) {
- throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
+ throw ShamAPIException(
+ shambase::format("patch data layout of index {} is not initialized", layer_idx));
}
std::cout << pdl_list.at(layer_idx)->get_description_str() << std::endl;
}
@@ -107,7 +109,9 @@ class ShamrockCtx {
for (u32 layer_idx = 0; layer_idx < pdl_list.size(); layer_idx++) {
if (!pdl_list.at(layer_idx)) {
- throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
+ throw ShamAPIException(
+ shambase::format(
+ "patch data layout of index {} is not initialized", layer_idx));
}
}
@@ -150,7 +154,8 @@ class ShamrockCtx {
void set_coord_domain_bound(std::tuple<f64_3, f64_3> box, u32 layer_idx = 0) {
if (!pdl_list.at(layer_idx)) {
- throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
+ throw ShamAPIException(
+ shambase::format("patch data layout of index {} is not initialized", layer_idx));
}
if (!sched) {
diff --git a/src/shamrock/src/io/ShamrockDump.cpp b/src/shamrock/src/io/ShamrockDump.cpp
index 3e75178c..7775f05d 100644
--- a/src/shamrock/src/io/ShamrockDump.cpp
+++ b/src/shamrock/src/io/ShamrockDump.cpp
@@ -190,7 +190,8 @@ namespace shamrock {
json jpdat_info = json::parse(patchdata_infos);
ctx.pdata_layout_new();
- *ctx.pdl_list.at(0) = jmeta_patch.at("patchdata_layout").get<patch::PatchDataLayerLayout>(); // ->astodo ->hc
+ *ctx.pdl_list.at(0) = jmeta_patch.at("patchdata_layout")
+ .get<patch::PatchDataLayerLayout>(); // ->astodo ->hc
ctx.init_sched(
jmeta_patch.at("crit_patch_split").get<u64>(),
jmeta_patch.at("crit_patch_merge").get<u64>());
@@ -246,7 +247,8 @@ namespace shamrock {
shamalgs::SerializeHelper ser(
shamsys::instance::get_compute_scheduler_ptr(), std::move(out));
- patch::PatchDataLayer pdat = patch::PatchDataLayer::deserialize_buf(ser, ctx.pdl_list.at(0)); // ->astodo ->hc
+ patch::PatchDataLayer pdat
+ = patch::PatchDataLayer::deserialize_buf(ser, ctx.pdl_list.at(0)); // ->astodo ->hc
sched.patch_data.owned_data.add_obj(pid, std::move(pdat));
}
diff --git a/src/shamrock/src/scheduler/PatchScheduler.cpp b/src/shamrock/src/scheduler/PatchScheduler.cpp
index 22264b20..7b6ecd94 100644
--- a/src/shamrock/src/scheduler/PatchScheduler.cpp
+++ b/src/shamrock/src/scheduler/PatchScheduler.cpp
@@ -126,7 +126,8 @@ std::vector<u64> PatchScheduler::add_root_patches(
patch_list._next_patch_id++;
if (shamcomm::world_rank() == node_owner_id) {
- patch_data_list.at(layer_idx).owned_data.add_obj(root.id_patch, PatchDataLayer(get_layout_ptr()));
+ patch_data_list.at(layer_idx).owned_data.add_obj(
+ root.id_patch, PatchDataLayer(get_layout_ptr()));
shamlog_debug_sycl_ln("Scheduler", "adding patch data");
} else {
shamlog_debug_sycl_ln(
@@ -213,10 +214,10 @@ PatchScheduler::PatchScheduler(
u64 crit_merge)
: pdl_ptr_list(pdl_ptr_list),
patch_data_list{SchedulerPatchData(
- pdl_ptr_list.at(0),
- {{0, 0, 0}, {max_axis_patch_coord, max_axis_patch_coord, max_axis_patch_coord}})},
+ pdl_ptr_list.at(0),
+ {{0, 0, 0}, {max_axis_patch_coord, max_axis_patch_coord, max_axis_patch_coord}})},
patch_data(patch_data_list.at(0)) // patch_data is kept for legacy reasons
- {
+{
for (u32 layer_idx = 1; layer_idx < pdl_ptr_list.size(); layer_idx++) {
patch_data_list.push_back(SchedulerPatchData(
pdl_ptr_list.at(layer_idx),
@@ -476,7 +477,7 @@ void PatchScheduler::scheduler_step(bool do_split_merge, bool do_load_balancing)
timers.load_balance_apply = shambase::Timer{};
timers.load_balance_apply->start();
// apply LB change list
- for (auto & _patch_data : patch_data_list) {
+ for (auto &_patch_data : patch_data_list) {
_patch_data.apply_change_list(change_list, patch_list);
}
timers.load_balance_apply->end();
@@ -679,7 +680,7 @@ void check_locality_t(PatchScheduler &sched) {
using namespace shamrock::patch;
sched.for_each_patch_data([&](u64 pid, Patch p, shamrock::patch::PatchDataLayer &pdat) {
PatchDataField<vec> &main_field = pdat.get_field<vec>(0);
- auto [bmin_p0, bmax_p0] = sched.patch_data_list.at(0).sim_box.patch_coord_to_domain<vec>(p);
+ auto [bmin_p0, bmax_p0] = sched.patch_data_list.at(0).sim_box.patch_coord_to_domain<vec>(p);
main_field.check_err_range(
[&](vec val, vec vmin, vec vmax) {
diff --git a/src/tests/benchmark_selfgrav.cpp b/src/tests/benchmark_selfgrav.cpp
index 962e8214..40200a19 100644
--- a/src/tests/benchmark_selfgrav.cpp
+++ b/src/tests/benchmark_selfgrav.cpp
@@ -32,7 +32,8 @@ std::tuple<f64, f64> benchmark_selfgrav(f32 dr, u32 npatch) {
u64 Nesti = (2.F / dr) * (2.F / dr) * (2.F / dr);
- std::deque<std::shared_ptr<shamrock::patch::PatchDataLayerLayout>> pdl_ptr_list {std::make_shared<shamrock::patch::PatchDataLayerLayout>()};
+ std::deque<std::shared_ptr<shamrock::patch::PatchDataLayerLayout>> pdl_ptr_list{
+ std::make_shared<shamrock::patch::PatchDataLayerLayout>()};
auto &pdl = *pdl_ptr_list.at(0);
pdl.add_field<f32_3>("xyz", 1);
diff --git a/src/tests/shamrock_article_1Tests.cpp b/src/tests/shamrock_article_1Tests.cpp
index db4cb9ed..672189d2 100644
--- a/src/tests/shamrock_article_1Tests.cpp
+++ b/src/tests/shamrock_article_1Tests.cpp
@@ -489,8 +489,9 @@ f64 amr_walk_perf(
using namespace shamrock::patch;
using namespace shamrock::scheduler;
- std::deque<std::shared_ptr<PatchDataLayerLayout>> layout_ptr_list {std::make_shared<PatchDataLayerLayout>()};
- auto &layout = *layout_ptr_list.at(0);
+ std::deque<std::shared_ptr<PatchDataLayerLayout>> layout_ptr_list{
+ std::make_shared<PatchDataLayerLayout>()};
+ auto &layout = *layout_ptr_list.at(0);
layout.add_field<u64_3>("cell_min", 1);
layout.add_field<u64_3>("cell_max", 1);
PatchScheduler sched(layout_ptr_list, 1e9, 1);
patch_data is kept as a reference to the first element of the list, for backward compatibility.
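For context, the constructor in this PR binds that reference before appending the remaining layers, roughly along the lines of the sketch below (condensed from the diff above; the parameter list and the crit_split name are assumptions, the box bounds are copied from the diff). This works because std::deque::push_back does not invalidate references to existing elements, unlike std::vector, so the legacy patch_data alias stays valid while the extra layers are appended:

PatchScheduler::PatchScheduler(
    std::deque<std::shared_ptr<shamrock::patch::PatchDataLayerLayout>> pdl_ptr_list,
    u64 crit_split,
    u64 crit_merge)
    : pdl_ptr_list(pdl_ptr_list),
      patch_data_list{SchedulerPatchData(
          pdl_ptr_list.at(0),
          {{0, 0, 0}, {max_axis_patch_coord, max_axis_patch_coord, max_axis_patch_coord}})},
      patch_data(patch_data_list.at(0)) // legacy alias to layer 0
{
    // References into a std::deque survive push_back, so patch_data remains
    // a valid handle on layer 0 while the other layers are added here.
    for (u32 layer_idx = 1; layer_idx < pdl_ptr_list.size(); layer_idx++) {
        patch_data_list.push_back(SchedulerPatchData(
            pdl_ptr_list.at(layer_idx),
            {{0, 0, 0}, {max_axis_patch_coord, max_axis_patch_coord, max_axis_patch_coord}}));
    }
}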