
[Scheduler][PIC] create pdat as list #1514

Draft
aserhani wants to merge 11 commits into Shamrock-code:main from aserhani:pic/pdat-as-list

Conversation

@aserhani (Collaborator) commented Jan 2, 2026

  • Created a list of PatchDataLayer.
  • Set the default PatchDataLayer to be the first element of the list.
  • Kept patch_data as a reference to the first element of the list, for backward compatibility (see the sketch below).
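
A minimal sketch of this structure, using hypothetical stand-in types rather than the real Shamrock classes (in the PR itself the SchedulerPatchData constructor also takes a patch coordinate range):

    #include <cstddef>
    #include <deque>
    #include <memory>

    // Stand-ins for the real Shamrock types, only to show the shape of the change.
    struct PatchDataLayerLayout {};
    struct SchedulerPatchData {
        std::shared_ptr<PatchDataLayerLayout> layout;
        explicit SchedulerPatchData(std::shared_ptr<PatchDataLayerLayout> l) : layout(std::move(l)) {}
    };

    class SchedulerSketch {
      public:
        std::deque<SchedulerPatchData> patch_data_list; // one SchedulerPatchData per layer
        SchedulerPatchData &patch_data;                 // legacy alias, always patch_data_list.at(0)

        explicit SchedulerSketch(std::deque<std::shared_ptr<PatchDataLayerLayout>> layouts)
            : patch_data_list{SchedulerPatchData(layouts.at(0))},
              patch_data(patch_data_list.at(0)) {
            // Remaining layers are appended afterwards; std::deque::push_back does not
            // invalidate references to existing elements, so patch_data stays valid.
            for (std::size_t i = 1; i < layouts.size(); i++) {
                patch_data_list.push_back(SchedulerPatchData(layouts.at(i)));
            }
        }
    };

Using std::deque rather than std::vector matters here: appending layers to a deque leaves references to existing elements valid, so the legacy patch_data reference stays safe as more layers are added.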

@gemini-code-assist

Summary of Changes

Hello @aserhani, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the PatchScheduler and ShamrockCtx components to support multiple PatchDataLayer instances. Previously, these components were designed to handle a single PatchDataLayer. The change introduces std::deque containers to manage a list of PatchDataLayerLayout shared pointers and SchedulerPatchData objects, enhancing the system's flexibility for managing different data layers. Existing functionalities are updated to either operate on a specified layer index or iterate through all layers, while backward compatibility is preserved for the primary layer.

Highlights

  • Multi-Layer PatchData Support: The core PatchScheduler and ShamrockCtx classes now manage PatchDataLayer instances as a list (std::deque) instead of a single object, enabling more complex data management scenarios.
  • Backward Compatibility: A reference to the first PatchDataLayer in the newly introduced list is maintained for existing code that expects a single patch_data object, ensuring minimal disruption.
  • Indexed Operations: Many scheduler functions have been updated to accept an optional layer_idx parameter, allowing specific operations to be performed on individual PatchDataLayer instances within the list (see the sketch below).
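
A toy illustration of the optional layer index pattern, with placeholder names rather than the actual Shamrock signatures:

    #include <cstdint>
    #include <deque>
    #include <iostream>
    #include <string>

    using u32 = std::uint32_t;

    struct LayerSketch {
        std::string name;
    };

    struct CtxSketch {
        std::deque<LayerSketch> layers{{"layer0"}, {"layer1"}};

        // Defaulting layer_idx to 0 keeps pre-existing single-layer call sites valid.
        void print_layer(u32 layer_idx = 0) const { std::cout << layers.at(layer_idx).name << "\n"; }
    };

    int main() {
        CtxSketch ctx;
        ctx.print_layer();  // legacy call: operates on layer 0
        ctx.print_layer(1); // explicitly targets layer 1
    }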


gemini-code-assist bot left a comment


Code Review

The pull request introduces support for multiple patch data layers by replacing the single PatchDataLayerLayout and SchedulerPatchData members with std::deques of these types in ShamrockCtx and PatchScheduler. Many methods across these classes, including those for adding fields, setting domain bounds, gathering data, and iterating over patches, now accept an optional layer_idx argument (defaulting to 0) that selects which data layer to operate on. The PatchScheduler constructor and initialization logic were modified to handle the list of data layouts, and the Python binding for get_patch_list_global was updated to iterate through all data layers.

The review comments below highlight several issues:

  • The pyShamrockCtx.cpp change gathers data only for layer 0 and then tries to look up fields from the other layers in it, potentially overwriting data.
  • In PatchScheduler::set_coord_domain_bound, an std::invalid_argument exception is constructed but never thrown, making the type check ineffective.
  • ShamrockCtx::init_sched lacks a check for an empty pdl_list.
  • PatchScheduler::add_root_patches calls get_layout_ptr() with the default layer_idx=0 when creating a PatchDataLayer, producing wrong layouts for layers other than 0.
  • The PatchScheduler::get_box_tranform and get_box_volume specializations check pdl() (layer 0) while fetching data from patch_data_list.at(layer_idx).
  • PatchScheduler::dump_status iterates over patch_data_list by value, causing unnecessary copies.

Comment on lines +289 to 309
    for (u32 layer_idx = 0; layer_idx < ctx.pdl_list.size(); layer_idx++) {
        for (auto fname : ctx.pdl_list.at(layer_idx)->get_field_names()) {
            append_to_map<f32>(fname, data, dic_out);
            append_to_map<f32_2>(fname, data, dic_out);
            append_to_map<f32_3>(fname, data, dic_out);
            append_to_map<f32_4>(fname, data, dic_out);
            append_to_map<f32_8>(fname, data, dic_out);
            append_to_map<f32_16>(fname, data, dic_out);
            append_to_map<f64>(fname, data, dic_out);
            append_to_map<f64_2>(fname, data, dic_out);
            append_to_map<f64_3>(fname, data, dic_out);
            append_to_map<f64_4>(fname, data, dic_out);
            append_to_map<f64_8>(fname, data, dic_out);
            append_to_map<f64_16>(fname, data, dic_out);
            append_to_map<u32>(fname, data, dic_out);
            append_to_map<u64>(fname, data, dic_out);
            append_to_map<u32_3>(fname, data, dic_out);
            append_to_map<u64_3>(fname, data, dic_out);
            append_to_map<i64_3>(fname, data, dic_out);
        }
    }


critical

The current implementation for collecting data across multiple layers is incorrect. ctx.allgather_data() is called only once before the loops, and it only gathers data for layer 0 due to the current implementation of ShamrockCtx::allgather_data. The loop then iterates through field names of all layers, but attempts to find them in the data of layer 0, which is wrong. Additionally, if different layers have fields with the same name, their data will be overwritten in the output dictionary.

To fix this, you should:

  1. Modify ShamrockCtx::gather_data and ShamrockCtx::allgather_data in ShamrockCtx.hpp to accept a layer_idx.
  2. In this file, move the allgather_data call inside the loop to fetch data for each layer.
  3. Consider how to handle fields with the same name across different layers. Returning a list of dictionaries (one per layer) or prefixing field names with the layer index would be a robust solution (a sketch of the first two steps follows below).
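
A hedged sketch of steps 1 and 2; it assumes an allgather_data overload taking a layer index, which this PR does not yet provide:

    // Hypothetical sketch, not code from this PR: assumes ShamrockCtx::allgather_data
    // has been extended to accept a layer index.
    for (u32 layer_idx = 0; layer_idx < ctx.pdl_list.size(); layer_idx++) {
        auto layer_data = ctx.allgather_data(layer_idx); // gather this layer only
        for (auto fname : ctx.pdl_list.at(layer_idx)->get_field_names()) {
            append_to_map<f32>(fname, layer_data, dic_out);
            // ... same append_to_map calls for the remaining field types as above.
            // Identically named fields in different layers would still collide in
            // dic_out; returning one dictionary per layer (step 3) avoids that.
        }
    }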

Comment on lines +138 to +143
                std::invalid_argument(
                    std::string("the main field is not of the correct type to call this function\n")
                    + "fct called : " + __PRETTY_FUNCTION__
                    + shambase::format("current patch data layout of index {} : ", layer_idx)
                    + pdl(layer_idx).get_description_str()
                );


critical

The std::invalid_argument exception is constructed but not thrown. This makes the type check ineffective, and the program will proceed even if there's a type mismatch, potentially leading to undefined behavior.

                throw std::invalid_argument(
                    std::string("the main field is not of the correct type to call this function\n")
                    + "fct called : " + __PRETTY_FUNCTION__
                    + shambase::format("current patch data layout of index {} : ", layer_idx)
                    + pdl(layer_idx).get_description_str()
                );

Comment on lines +108 to 112
for (u32 layer_idx = 0; layer_idx < pdl_list.size(); layer_idx++) {
if (!pdl_list.at(layer_idx)) {
throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
}
}


critical

The loop checks for nullptrs in pdl_list, but it doesn't handle the case where pdl_list is empty. If pdl_list is empty, the PatchScheduler constructor will be called with an empty deque, leading to a crash when it tries to access pdl_ptr_list.at(0). An explicit check for an empty list should be added before this loop.

Suggested change

-        for (u32 layer_idx = 0; layer_idx < pdl_list.size(); layer_idx++) {
-            if (!pdl_list.at(layer_idx)) {
-                throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
-            }
-        }
+        if (pdl_list.empty()) {
+            throw ShamAPIException("patch data layout list is not initialized");
+        }
+        for (u32 layer_idx = 0; layer_idx < pdl_list.size(); layer_idx++) {
+            if (!pdl_list.at(layer_idx)) {
+                throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
+            }
+        }


         if (shamcomm::world_rank() == node_owner_id) {
-            patch_data.owned_data.add_obj(root.id_patch, PatchDataLayer(get_layout_ptr()));
+            patch_data_list.at(layer_idx).owned_data.add_obj(root.id_patch, PatchDataLayer(get_layout_ptr()));


critical

get_layout_ptr() is called without arguments, so it defaults to layer_idx=0. This means a PatchDataLayer is always created with the layout of layer 0, regardless of the layer_idx passed to add_root_patches. This is incorrect for multi-layer setups and will lead to wrong data layouts for layers other than 0.

            patch_data_list.at(layer_idx).owned_data.add_obj(root.id_patch, PatchDataLayer(get_layout_ptr(layer_idx)));

 template<>
-std::tuple<f32_3, f32_3> PatchScheduler::get_box_tranform() {
+std::tuple<f32_3, f32_3> PatchScheduler::get_box_tranform(u32 layer_idx) {
     if (!pdl().check_main_field_type<f32_3>())


critical

In get_box_tranform<f32_3>, pdl() is called without an argument, so it defaults to checking the layout of layer 0. However, the data is then fetched from patch_data_list.at(layer_idx). The check should be for the correct layer using pdl(layer_idx). This same issue exists for get_box_tranform<f64_3> (line 276) and get_box_volume specializations (lines 290, 299, 308).

    if (!pdl(layer_idx).check_main_field_type<f32_3>())

Comment on lines +618 to +622
    for (auto _patch_data : patch_data_list) {
        _patch_data.for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
            ss << "patch id : " << patch_id << " len = " << pdat.get_obj_cnt() << "\n";
        });
    }


medium

The loop iterates over patch_data_list by value (for (auto _patch_data : ...)), which causes SchedulerPatchData objects to be copied. This is inefficient. You should iterate by reference to avoid unnecessary copies.

    for (auto& _patch_data : patch_data_list) {
        _patch_data.for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
            ss << "patch id : " << patch_id << " len = " << pdat.get_obj_cnt() << "\n";
        });
    }

github-actions bot (Contributor) commented Jan 2, 2026

Thanks @aserhani for opening this PR!

You can do multiple things directly here:
1 - Comment pre-commit.ci run to run pre-commit checks.
2 - Comment pre-commit.ci autofix to apply fixes.
3 - Add label autofix.ci to fix authorship & pre-commit for every commit made.
4 - Add label light-ci to only trigger a reduced & faster version of the CI (need the full one before merge).
5 - Add label trigger-ci to create an empty commit to trigger the CI.

Once the workflow completes, a message will appear displaying information related to the run.

The PR is also automatically reviewed by Gemini; you can:
1 - Comment /gemini review to trigger a review
2 - Comment /gemini summary for a summary
3 - Tag it using @gemini-code-assist either in the PR or in review comments on files

github-actions bot (Contributor) commented Jan 2, 2026

Workflow report

Workflow report corresponding to commit 3f3f5a5
Committer email is anass.serhani@cnrs.fr

Pre-commit check report

Some failures were detected in the base source checks.
Check the On PR / Linting / Base source checks (pull_request) job in the tests for more detailed output.

Suggested changes

Detailed changes:
diff --git a/src/shamrock/include/shamrock/scheduler/PatchScheduler.hpp b/src/shamrock/include/shamrock/scheduler/PatchScheduler.hpp
index 9e43f54f..bdf754e3 100644
--- a/src/shamrock/include/shamrock/scheduler/PatchScheduler.hpp
+++ b/src/shamrock/include/shamrock/scheduler/PatchScheduler.hpp
@@ -28,8 +28,8 @@
 #include "shamrock/solvergraph/NodeSetEdge.hpp"
 #include "shamrock/solvergraph/PatchDataLayerRefs.hpp"
 #include <nlohmann/json.hpp>
-#include <deque>
 #include <unordered_set>
+#include <deque>
 #include <fstream>
 #include <functional>
 #include <memory>
@@ -66,23 +66,27 @@ class PatchScheduler {
     using SchedulerPatchData = shamrock::scheduler::SchedulerPatchData;
 
     std::deque<std::shared_ptr<shamrock::patch::PatchDataLayerLayout>> pdl_ptr_list;
-    std::deque<SchedulerPatchData> patch_data_list; ///< handle the data of the patches of the scheduler for every layer
-    SchedulerPatchData & patch_data; ///< first layer patch data (legacy) patch_data_list.at(0)
+    std::deque<SchedulerPatchData>
+        patch_data_list; ///< handle the data of the patches of the scheduler for every layer
+    SchedulerPatchData &patch_data; ///< first layer patch data (legacy) patch_data_list.at(0)
 
     u64 crit_patch_split; ///< splitting limit (if load value > crit_patch_split => patch split)
     u64 crit_patch_merge; ///< merging limit (if load value < crit_patch_merge => patch merge)
 
     SchedulerPatchList patch_list; ///< handle the list of the patches of the scheduler
 
-    PatchTree patch_tree;          ///< handle the tree structure of the patches
+    PatchTree patch_tree; ///< handle the tree structure of the patches
 
     // using unordered set is not an issue since we use the find command after
     std::unordered_set<u64> owned_patch_id; ///< list of owned patch ids updated with
     ///< (owned_patch_id = patch_list.build_local())
 
-    inline shamrock::patch::PatchDataLayerLayout &pdl(u32 layer_idx = 0) { return shambase::get_check_ref(pdl_ptr_list.at(layer_idx)); }
+    inline shamrock::patch::PatchDataLayerLayout &pdl(u32 layer_idx = 0) {
+        return shambase::get_check_ref(pdl_ptr_list.at(layer_idx));
+    }
 
-    inline std::shared_ptr<shamrock::patch::PatchDataLayerLayout> get_layout_ptr(u32 layer_idx = 0) const {
+    inline std::shared_ptr<shamrock::patch::PatchDataLayerLayout> get_layout_ptr(
+        u32 layer_idx = 0) const {
         return pdl_ptr_list.at(layer_idx);
     }
 
@@ -139,8 +143,7 @@ class PatchScheduler {
                     std::string("the main field is not of the correct type to call this function\n")
                     + "fct called : " + __PRETTY_FUNCTION__
                     + shambase::format("current patch data layout of index {} : ", layer_idx)
-                    + pdl(layer_idx).get_description_str()
-                );
+                    + pdl(layer_idx).get_description_str());
             }
             patch_data_list.at(layer_idx).sim_box.set_bounding_box<vectype>({bmin, bmax});
         }
@@ -178,7 +181,8 @@ class PatchScheduler {
     [[deprecated]]
     void dump_local_patches(std::string filename);
 
-    std::vector<std::unique_ptr<shamrock::patch::PatchDataLayer>> gather_data(u32 rank, u32 layer_idx = 0);
+    std::vector<std::unique_ptr<shamrock::patch::PatchDataLayer>> gather_data(
+        u32 rank, u32 layer_idx = 0);
 
     /**
      * @brief add patch to the scheduler
@@ -232,28 +236,30 @@ class PatchScheduler {
     template<class Function>
     inline void for_each_patch_data(Function &&fct, u32 layer_idx = 0) {
 
-        patch_data_list.at(layer_idx).for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
-            shamrock::patch::Patch &cur_p
-                = patch_list.global[patch_list.id_patch_to_global_idx[patch_id]];
+        patch_data_list.at(layer_idx).for_each_patchdata(
+            [&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
+                shamrock::patch::Patch &cur_p
+                    = patch_list.global[patch_list.id_patch_to_global_idx[patch_id]];
 
-            if (!cur_p.is_err_mode()) {
-                fct(patch_id, cur_p, pdat);
-            }
-        });
+                if (!cur_p.is_err_mode()) {
+                    fct(patch_id, cur_p, pdat);
+                }
+            });
     }
 
     template<class Function>
     inline void for_each_patch(Function &&fct, u32 layer_idx = 0) {
 
-        patch_data_list.at(layer_idx).for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
-            shamrock::patch::Patch &cur_p
-                = patch_list.global[patch_list.id_patch_to_global_idx[patch_id]];
+        patch_data_list.at(layer_idx).for_each_patchdata(
+            [&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
+                shamrock::patch::Patch &cur_p
+                    = patch_list.global[patch_list.id_patch_to_global_idx[patch_id]];
 
-            // TODO should feed the sycl queue to the lambda
-            if (!cur_p.is_err_mode()) {
-                fct(patch_id, cur_p);
-            }
-        });
+                // TODO should feed the sycl queue to the lambda
+                if (!cur_p.is_err_mode()) {
+                    fct(patch_id, cur_p);
+                }
+            });
     }
 
     inline void for_each_global_patch(std::function<void(const shamrock::patch::Patch)> fct) {
@@ -273,7 +279,8 @@ class PatchScheduler {
     }
 
     inline void for_each_local_patchdata(
-        std::function<void(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &)> fct, u32 layer_idx = 0) {
+        std::function<void(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &)> fct,
+        u32 layer_idx = 0) {
         for (shamrock::patch::Patch p : patch_list.local) {
             if (!p.is_err_mode()) {
                 fct(p, patch_data_list.at(layer_idx).get_pdat(p.id_patch));
@@ -283,14 +290,15 @@ class PatchScheduler {
 
     inline void for_each_local_patch_nonempty(
         std::function<void(const shamrock::patch::Patch &)> fct, u32 layer_idx = 0) {
-        patch_data_list.at(layer_idx).for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
-            shamrock::patch::Patch &cur_p
-                = patch_list.global[patch_list.id_patch_to_global_idx.at(patch_id)];
+        patch_data_list.at(layer_idx).for_each_patchdata(
+            [&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
+                shamrock::patch::Patch &cur_p
+                    = patch_list.global[patch_list.id_patch_to_global_idx.at(patch_id)];
 
-            if ((!cur_p.is_err_mode()) && (!pdat.is_empty())) {
-                fct(cur_p);
-            }
-        });
+                if ((!cur_p.is_err_mode()) && (!pdat.is_empty())) {
+                    fct(cur_p);
+                }
+            });
     }
 
     inline u32 get_patch_rank_owner(u64 patch_id) {
@@ -300,26 +308,31 @@ class PatchScheduler {
     }
 
     inline void for_each_patchdata_nonempty(
-        std::function<void(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &)> fct, u32 layer_idx = 0) {
-        patch_data_list.at(layer_idx).for_each_patchdata([&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
-            shamrock::patch::Patch &cur_p
-                = patch_list.global[patch_list.id_patch_to_global_idx.at(patch_id)];
-
-            if ((!cur_p.is_err_mode()) && (!pdat.is_empty())) {
-                fct(cur_p, pdat);
-            }
-        });
+        std::function<void(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &)> fct,
+        u32 layer_idx = 0) {
+        patch_data_list.at(layer_idx).for_each_patchdata(
+            [&](u64 patch_id, shamrock::patch::PatchDataLayer &pdat) {
+                shamrock::patch::Patch &cur_p
+                    = patch_list.global[patch_list.id_patch_to_global_idx.at(patch_id)];
+
+                if ((!cur_p.is_err_mode()) && (!pdat.is_empty())) {
+                    fct(cur_p, pdat);
+                }
+            });
     }
 
     template<class T>
     inline shambase::DistributedData<T> map_owned_patchdata(
-        std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct, u32 layer_idx = 0) {
+        std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct,
+        u32 layer_idx = 0) {
         shambase::DistributedData<T> ret;
 
         using namespace shamrock::patch;
-        for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
-            ret.add_obj(id_patch, fct(cur_p, pdat));
-        }, layer_idx);
+        for_each_patch_data(
+            [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+                ret.add_obj(id_patch, fct(cur_p, pdat));
+            },
+            layer_idx);
 
         return ret;
     }
@@ -350,26 +363,32 @@ class PatchScheduler {
 
     template<class T>
     inline shambase::DistributedData<T> map_owned_patchdata_fetch_simple(
-        std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct, u32 layer_idx = 0) {
+        std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct,
+        u32 layer_idx = 0) {
         shambase::DistributedData<T> ret;
 
         using namespace shamrock::patch;
-        for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
-            ret.add_obj(id_patch, fct(cur_p, pdat));
-        }, layer_idx);
+        for_each_patch_data(
+            [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+                ret.add_obj(id_patch, fct(cur_p, pdat));
+            },
+            layer_idx);
 
         return distrib_data_local_to_all_simple(ret);
     }
 
     template<class T>
     inline shambase::DistributedData<T> map_owned_patchdata_fetch_load_store(
-        std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct, u32 layer_idx = 0) {
+        std::function<T(const shamrock::patch::Patch, shamrock::patch::PatchDataLayer &pdat)> fct,
+        u32 layer_idx = 0) {
         shambase::DistributedData<T> ret;
 
         using namespace shamrock::patch;
-        for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
-            ret.add_obj(id_patch, fct(cur_p, pdat));
-        }, layer_idx);
+        for_each_patch_data(
+            [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+                ret.add_obj(id_patch, fct(cur_p, pdat));
+            },
+            layer_idx);
 
         return distrib_data_local_to_all_load_store(ret);
     }
@@ -390,9 +409,11 @@ class PatchScheduler {
         StackEntry stack_loc{};
         using namespace shamrock::patch;
         u64 num_obj = 0; // TODO get_rank_count() in scheduler
-        for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
-            num_obj += pdat.get_obj_cnt();
-        }, layer_idx);
+        for_each_patch_data(
+            [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+                num_obj += pdat.get_obj_cnt();
+            },
+            layer_idx);
 
         return num_obj;
     }
@@ -419,21 +440,23 @@ class PatchScheduler {
             using namespace shamrock::patch;
 
             u64 ptr = 0; // TODO accumulate_field() in scheduler ?
-            for_each_patch_data([&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
-                using namespace shamalgs::memory;
-                using namespace shambase;
-
-                if (pdat.get_obj_cnt() > 0) {
-                    write_with_offset_into(
-                        shamsys::instance::get_compute_scheduler().get_queue(),
-                        get_check_ref(ret),
-                        pdat.get_field<T>(field_idx).get_buf(),
-                        ptr,
-                        pdat.get_obj_cnt() * nvar);
-
-                    ptr += pdat.get_obj_cnt() * nvar;
-                }
-            }, layer_idx);
+            for_each_patch_data(
+                [&](u64 id_patch, Patch cur_p, PatchDataLayer &pdat) {
+                    using namespace shamalgs::memory;
+                    using namespace shambase;
+
+                    if (pdat.get_obj_cnt() > 0) {
+                        write_with_offset_into(
+                            shamsys::instance::get_compute_scheduler().get_queue(),
+                            get_check_ref(ret),
+                            pdat.get_field<T>(field_idx).get_buf(),
+                            ptr,
+                            pdat.get_obj_cnt() * nvar);
+
+                        ptr += pdat.get_obj_cnt() * nvar;
+                    }
+                },
+                layer_idx);
         }
 
         return ret;
@@ -462,7 +485,8 @@ class PatchScheduler {
     // }
 
     template<class Function, class Pfield>
-    inline void compute_patch_field(Pfield &field, MPI_Datatype &dtype, Function &&lambda, u32 layer_idx = 0) {
+    inline void compute_patch_field(
+        Pfield &field, MPI_Datatype &dtype, Function &&lambda, u32 layer_idx = 0) {
         field.local_nodes_value.resize(patch_list.local.size());
 
         for (u64 idx = 0; idx < patch_list.local.size(); idx++) {
@@ -485,9 +509,11 @@ class PatchScheduler {
             [&](shamrock::solvergraph::PatchDataLayerRefs &edge) {
                 edge.free_alloc();
                 using namespace shamrock::patch;
-                for_each_patchdata_nonempty([&](Patch cur_p, PatchDataLayer &pdat) {
-                    edge.patchdatas.add_obj(cur_p.id_patch, std::ref(pdat));
-                }, layer_idx);
+                for_each_patchdata_nonempty(
+                    [&](Patch cur_p, PatchDataLayer &pdat) {
+                        edge.patchdatas.add_obj(cur_p.id_patch, std::ref(pdat));
+                    },
+                    layer_idx);
             });
 
         return std::make_shared<decltype(node_set_edge)>(std::move(node_set_edge));
@@ -499,7 +525,8 @@ class PatchScheduler {
      * @param coords coordinates of the patch
      * @return u64 the id of the made patch
      */
-    std::vector<u64> add_root_patches(std::vector<shamrock::patch::PatchCoord<3>> coords, u32 layer_idx = 0);
+    std::vector<u64> add_root_patches(
+        std::vector<shamrock::patch::PatchCoord<3>> coords, u32 layer_idx = 0);
 
     shamrock::patch::SimulationBoxInfo &get_sim_box() { return patch_data_list.at(0).sim_box; }
 
diff --git a/src/shamrock/include/shamrock/scheduler/ShamrockCtx.hpp b/src/shamrock/include/shamrock/scheduler/ShamrockCtx.hpp
index 4e58ec40..5326ba02 100644
--- a/src/shamrock/include/shamrock/scheduler/ShamrockCtx.hpp
+++ b/src/shamrock/include/shamrock/scheduler/ShamrockCtx.hpp
@@ -81,7 +81,8 @@ class ShamrockCtx {
         pdl_list.at(layer_idx)->add_field<T>(fname, nvar);
     }
 
-    inline void pdata_layout_add_field_t(std::string fname, u32 nvar, std::string type, u32 layer_idx = 0) {
+    inline void pdata_layout_add_field_t(
+        std::string fname, u32 nvar, std::string type, u32 layer_idx = 0) {
         if (sched) {
             throw ShamAPIException("cannot modify patch data layout while the scheduler is on");
         }
@@ -90,7 +91,8 @@ class ShamrockCtx {
 
     inline void pdata_layout_print(u32 layer_idx = 0) {
         if (!pdl_list.at(layer_idx)) {
-            throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
+            throw ShamAPIException(
+                shambase::format("patch data layout of index {} is not initialized", layer_idx));
         }
         std::cout << pdl_list.at(layer_idx)->get_description_str() << std::endl;
     }
@@ -107,7 +109,9 @@ class ShamrockCtx {
 
         for (u32 layer_idx = 0; layer_idx < pdl_list.size(); layer_idx++) {
             if (!pdl_list.at(layer_idx)) {
-                throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
+                throw ShamAPIException(
+                    shambase::format(
+                        "patch data layout of index {} is not initialized", layer_idx));
             }
         }
 
@@ -150,7 +154,8 @@ class ShamrockCtx {
     void set_coord_domain_bound(std::tuple<f64_3, f64_3> box, u32 layer_idx = 0) {
 
         if (!pdl_list.at(layer_idx)) {
-            throw ShamAPIException(shambase::format("patch data layout of index {} is not initialized", layer_idx));
+            throw ShamAPIException(
+                shambase::format("patch data layout of index {} is not initialized", layer_idx));
         }
 
         if (!sched) {
diff --git a/src/shamrock/src/io/ShamrockDump.cpp b/src/shamrock/src/io/ShamrockDump.cpp
index 3e75178c..7775f05d 100644
--- a/src/shamrock/src/io/ShamrockDump.cpp
+++ b/src/shamrock/src/io/ShamrockDump.cpp
@@ -190,7 +190,8 @@ namespace shamrock {
         json jpdat_info  = json::parse(patchdata_infos);
 
         ctx.pdata_layout_new();
-        *ctx.pdl_list.at(0) = jmeta_patch.at("patchdata_layout").get<patch::PatchDataLayerLayout>(); // ->astodo ->hc
+        *ctx.pdl_list.at(0) = jmeta_patch.at("patchdata_layout")
+                                  .get<patch::PatchDataLayerLayout>(); // ->astodo ->hc
         ctx.init_sched(
             jmeta_patch.at("crit_patch_split").get<u64>(),
             jmeta_patch.at("crit_patch_merge").get<u64>());
@@ -246,7 +247,8 @@ namespace shamrock {
             shamalgs::SerializeHelper ser(
                 shamsys::instance::get_compute_scheduler_ptr(), std::move(out));
 
-            patch::PatchDataLayer pdat = patch::PatchDataLayer::deserialize_buf(ser, ctx.pdl_list.at(0)); // ->astodo ->hc
+            patch::PatchDataLayer pdat
+                = patch::PatchDataLayer::deserialize_buf(ser, ctx.pdl_list.at(0)); // ->astodo ->hc
 
             sched.patch_data.owned_data.add_obj(pid, std::move(pdat));
         }
diff --git a/src/shamrock/src/scheduler/PatchScheduler.cpp b/src/shamrock/src/scheduler/PatchScheduler.cpp
index 22264b20..7b6ecd94 100644
--- a/src/shamrock/src/scheduler/PatchScheduler.cpp
+++ b/src/shamrock/src/scheduler/PatchScheduler.cpp
@@ -126,7 +126,8 @@ std::vector<u64> PatchScheduler::add_root_patches(
         patch_list._next_patch_id++;
 
         if (shamcomm::world_rank() == node_owner_id) {
-            patch_data_list.at(layer_idx).owned_data.add_obj(root.id_patch, PatchDataLayer(get_layout_ptr()));
+            patch_data_list.at(layer_idx).owned_data.add_obj(
+                root.id_patch, PatchDataLayer(get_layout_ptr()));
             shamlog_debug_sycl_ln("Scheduler", "adding patch data");
         } else {
             shamlog_debug_sycl_ln(
@@ -213,10 +214,10 @@ PatchScheduler::PatchScheduler(
     u64 crit_merge)
     : pdl_ptr_list(pdl_ptr_list),
       patch_data_list{SchedulerPatchData(
-            pdl_ptr_list.at(0),
-            {{0, 0, 0}, {max_axis_patch_coord, max_axis_patch_coord, max_axis_patch_coord}})},
+          pdl_ptr_list.at(0),
+          {{0, 0, 0}, {max_axis_patch_coord, max_axis_patch_coord, max_axis_patch_coord}})},
       patch_data(patch_data_list.at(0)) // patch_data is kept for legacy reasons
-    {
+{
     for (u32 layer_idx = 1; layer_idx < pdl_ptr_list.size(); layer_idx++) {
         patch_data_list.push_back(SchedulerPatchData(
             pdl_ptr_list.at(layer_idx),
@@ -476,7 +477,7 @@ void PatchScheduler::scheduler_step(bool do_split_merge, bool do_load_balancing)
         timers.load_balance_apply = shambase::Timer{};
         timers.load_balance_apply->start();
         // apply LB change list
-        for (auto & _patch_data : patch_data_list) {
+        for (auto &_patch_data : patch_data_list) {
             _patch_data.apply_change_list(change_list, patch_list);
         }
         timers.load_balance_apply->end();
@@ -679,7 +680,7 @@ void check_locality_t(PatchScheduler &sched) {
     using namespace shamrock::patch;
     sched.for_each_patch_data([&](u64 pid, Patch p, shamrock::patch::PatchDataLayer &pdat) {
         PatchDataField<vec> &main_field = pdat.get_field<vec>(0);
-        auto [bmin_p0, bmax_p0]         = sched.patch_data_list.at(0).sim_box.patch_coord_to_domain<vec>(p);
+        auto [bmin_p0, bmax_p0] = sched.patch_data_list.at(0).sim_box.patch_coord_to_domain<vec>(p);
 
         main_field.check_err_range(
             [&](vec val, vec vmin, vec vmax) {
diff --git a/src/tests/benchmark_selfgrav.cpp b/src/tests/benchmark_selfgrav.cpp
index 962e8214..40200a19 100644
--- a/src/tests/benchmark_selfgrav.cpp
+++ b/src/tests/benchmark_selfgrav.cpp
@@ -32,7 +32,8 @@ std::tuple<f64, f64> benchmark_selfgrav(f32 dr, u32 npatch) {
 
     u64 Nesti = (2.F / dr) * (2.F / dr) * (2.F / dr);
 
-    std::deque<std::shared_ptr<shamrock::patch::PatchDataLayerLayout>> pdl_ptr_list {std::make_shared<shamrock::patch::PatchDataLayerLayout>()};
+    std::deque<std::shared_ptr<shamrock::patch::PatchDataLayerLayout>> pdl_ptr_list{
+        std::make_shared<shamrock::patch::PatchDataLayerLayout>()};
     auto &pdl = *pdl_ptr_list.at(0);
 
     pdl.add_field<f32_3>("xyz", 1);
diff --git a/src/tests/shamrock_article_1Tests.cpp b/src/tests/shamrock_article_1Tests.cpp
index db4cb9ed..672189d2 100644
--- a/src/tests/shamrock_article_1Tests.cpp
+++ b/src/tests/shamrock_article_1Tests.cpp
@@ -489,8 +489,9 @@ f64 amr_walk_perf(
     using namespace shamrock::patch;
     using namespace shamrock::scheduler;
 
-    std::deque<std::shared_ptr<PatchDataLayerLayout>> layout_ptr_list {std::make_shared<PatchDataLayerLayout>()};
-    auto &layout                                     = *layout_ptr_list.at(0);
+    std::deque<std::shared_ptr<PatchDataLayerLayout>> layout_ptr_list{
+        std::make_shared<PatchDataLayerLayout>()};
+    auto &layout = *layout_ptr_list.at(0);
     layout.add_field<u64_3>("cell_min", 1);
     layout.add_field<u64_3>("cell_max", 1);
     PatchScheduler sched(layout_ptr_list, 1e9, 1);

