_bibliography/references.bib (12 additions, 0 deletions)
@@ -1,5 +1,17 @@
 ---
 ---
+@inproceedings{
+rubio-madrigal2025fixed,
+title={Fixed Aggregation Features Can Rival {GNN}s},
+author={Celia Rubio-Madrigal and Rebekka Burkholz},
+booktitle={Women in Machine Learning Workshop @ NeurIPS 2025},
+year={2025},
+url={https://openreview.net/forum?id=OHgWEMce80},
+pdf={https://openreview.net/pdf?id=OHgWEMce80},
+img={fixed-aggregation-features.png},
+abstract={Graph neural networks (GNNs) are widely believed to excel at node representation learning through trainable neighborhood aggregations. We challenge this view by introducing Fixed Aggregation Features (FAFs), a training-free approach that transforms graph learning tasks into tabular problems. This simple shift enables the use of well-established tabular methods, offering strong interpretability and the flexibility to deploy diverse classifiers. Across 14 benchmarks, well-tuned multilayer perceptrons trained on FAFs rival or outperform state-of-the-art GNNs and graph transformers on 12 tasks -- often using only mean aggregation. The only exceptions are the Roman Empire and Minesweeper datasets, which typically require unusually deep GNNs. To explain the theoretical possibility of non-trainable aggregations, we connect our findings to Kolmogorov–Arnold representations and discuss when mean aggregation can be sufficient. In conclusion, our results call for (i) richer benchmarks benefiting from learning diverse neighborhood aggregations, (ii) strong tabular baselines as standard, and (iii) employing and advancing tabular models for graph data to gain new insights into related tasks.},
+}
+
 @inproceedings{ zhou2025payattentionsmallweights,
 title={Pay Attention to Small Weights},
 author={Chao Zhou and Tom Jacobs and Advait Gadhikar and Rebekka Burkholz},
_data/alumni_members.yml (17 additions, 15 deletions)
@@ -1,4 +1,17 @@
 
+- role: Research scientists
+  full_width: true
+  members:
+    - name: Nikita Vedeneev
+      last_name: Vedeneev
+      photo: c01mive.jpg
+      start_date: Dec 24
+      end_date: May 25
+      email: mikita.vedzeneyeu@cispa.de
+      url: https://github.com/nikitaved
+      description: "I am interested in making modern AI models efficient. In particular, I work on discovering and exploiting structure in Neural Networks (sparsity, low-dimensional representations and similar) for efficient training, fine-tuning and inference. I am a former full-time core developer for [PyTorch](https://github.com/pytorch/pytorch) and [Lightning Thunder](https://github.com/Lightning-AI/lightning-thunder). Check my [GitHub](https://github.com/nikitaved) to see what I work on now."
+      next: Senior Engineer at NVIDIA
+
 - role: Research assistants
   members:
     - name: Ben Horvath
@@ -14,33 +27,21 @@
       end_date: Oct 24
       email: adarsh.jamadandi@cispa.de
       url: https://adarshmj.github.io
-      next: PhD candidate at IRISA Rennes
+      next: PhD at IRISA Rennes
 
     - name: Harsha Nelaturu
       last_name: Nelaturu
       photo: c02hane.jpg
       start_date: Aug 23
       end_date: Jul 24
       url: https://nelaturuharsha.github.io/
-      next: Applied Scientist Intern at Amazon
+      next: PhD at Zuse Institute Berlin
 
     - name: Advait Athreya
       last_name: Athreya
       start_date: Dec 21
       end_date: Oct 22
 
-- role: Research engineers
-  members:
-    - name: Nikita Vedeneev
-      last_name: Vedeneev
-      photo: c01mive.jpg
-      start_date: Dec 24
-      end_date: May 25
-      email: mikita.vedzeneyeu@cispa.de
-      url: https://github.com/nikitaved
-      description: "I am interested in making modern AI models efficient. In particular, I work on discovering and exploiting structure in Neural Networks (sparsity, low-dimensional representations and similar) for efficient training, fine-tuning and inference. I am a former full-time core developer for [PyTorch](https://github.com/pytorch/pytorch) and [Lightning Thunder](https://github.com/Lightning-AI/lightning-thunder). Check my [GitHub](https://github.com/nikitaved) to see what I work on now."
-> We are currently hiring for PhD and Postdoc positions. Check the [openings](/openings) page and apply now!
+> **Open positions — extended deadline**
+> We are recruiting PhD candidates and postdoctoral researchers. See the <a href="/openings"><strong style="color:#d9534f">Openings</strong></a> page and apply now!
 
 Welcome! We are the Relational ML research group.
 We are part of the [CISPA Helmholtz Center for Information Security](https://cispa.de) in Saarbrücken and St. Ingbert, Germany, and are grateful to [Saarland University (UdS)](https://www.uni-saarland.de) for granting us supervision rights.
-> We are currently hiring for PhD and Postdoc positions. Check the details below and apply now!
+> **Open positions — extended deadline**
+> We are recruiting PhD candidates and postdoctoral researchers. We will consider candidates until positions are filled — please apply via the [CISPA Career portal](https://career.cispa.de/jobs/group-relationalml-53).
 
 Are you curious about some of our research and have further questions? Feel free to drop us an e-mail ([burkholz@cispa.de](mailto:burkholz@cispa.de)) to get in touch.
 
@@ -49,11 +49,14 @@ We are a small team with a flat management structure and a collaborative work cu
 
 The starting dates of the positions are flexible. We are committed to providing a healthy work environment and fostering diversity and respectful interaction. We welcome applications by candidates from all backgrounds and also support non-standard careers.
 
+### Current open positions
+
+* We have PhD and postdoc positions available for 2026.
+  * [PhD and Postdocs in Efficient Deep Learning](https://career.cispa.de/jobs/group-relationalml-53) at CISPA Helmholtz Center for Information Security.
+
 ### Past open positions
+This is a non-exhaustive list of past open positions in our group.
 
-* We have some PhD and postdoc positions available for 2026.
-  * [PhD and Postdocs in Efficient Deep Learning](https://career.cispa.de/jobs/group-relationalml-53)
-  * [Postdoctoral Researcher in Efficient Deep Learning](https://de.linkedin.com/jobs/view/postdoctoral-researcher-in-efficient-deep-learning-at-relational-machine-learning-lab-4300176253).
 * We received an ERC Starting Grant in 2023 ([SPARSE-ML](https://cispa.de/en/research/grants/sparse-ml)) and had several open positions for PhD students and Postdocs:
   * [PhD position in sparse machine learning](https://euraxess.ec.europa.eu/jobs/144401).
   * [Postdoctoral position in sparse machine learning](https://euraxess.ec.europa.eu/jobs/144392).
 We are recruiting PhD candidates and postdoctoral researchers. See the <a href="/openings"><strong style="color:#d9534f">Openings</strong></a> page and apply now!</p>
 </blockquote>
 
 <p>Welcome! We are the Relational ML research group.
 
 We are recruiting PhD candidates and postdoctoral researchers. We will consider candidates until positions are filled — please apply via the <a href="https://career.cispa.de/jobs/group-relationalml-53">CISPA Career portal</a>.</p>
 </blockquote>
 
 <p>Are you curious about some of our research and have further questions? Feel free to drop us an e-mail (<a href="mailto:burkholz@cispa.de">burkholz@cispa.de</a>) to get in touch.</p>
@@ -125,15 +125,20 @@ <h3 id="what-we-offer">What we offer</h3>
 
 <p>The starting dates of the positions are flexible. We are committed to providing a healthy work environment and fostering diversity and respectful interaction. We welcome applications by candidates from all backgrounds and also support non-standard careers.</p>
 
-<h3 id="past-open-positions">Past open positions</h3>
+<h3 id="current-open-positions">Current open positions</h3>
 
 <ul>
-  <li>We have some PhD and postdoc positions available for 2026.
+  <li>We have PhD and postdoc positions available for 2026.
     <ul>
-      <li><a href="https://career.cispa.de/jobs/group-relationalml-53">PhD and Postdocs in Efficient Deep Learning</a></li>
-      <li><a href="https://de.linkedin.com/jobs/view/postdoctoral-researcher-in-efficient-deep-learning-at-relational-machine-learning-lab-4300176253">Postdoctoral Researcher in Efficient Deep Learning</a>.</li>
+      <li><a href="https://career.cispa.de/jobs/group-relationalml-53">PhD and Postdocs in Efficient Deep Learning</a> at CISPA Helmholtz Center for Information Security.</li>
     </ul>
   </li>
+</ul>
+
+<h3 id="past-open-positions">Past open positions</h3>
+<p>This is a non-exhaustive list of past open positions in our group.</p>
+
+<ul>
   <li>We received an ERC Starting Grant in 2023 (<a href="https://cispa.de/en/research/grants/sparse-ml">SPARSE-ML</a>) and had several open positions for PhD students and Postdocs:
     <ul>
       <li><a href="https://euraxess.ec.europa.eu/jobs/144401">PhD position in sparse machine learning</a>.</li>
<p>Graph neural networks (GNNs) are widely believed to excel at node representation learning through trainable neighborhood aggregations. We challenge this view by introducing Fixed Aggregation Features (FAFs), a training-free approach that transforms graph learning tasks into tabular problems. This simple shift enables the use of well-established tabular methods, offering strong interpretability and the flexibility to deploy diverse classifiers. Across 14 benchmarks, well-tuned multilayer perceptrons trained on FAFs rival or outperform state-of-the-art GNNs and graph transformers on 12 tasks – often using only mean aggregation. The only exceptions are the Roman Empire and Minesweeper datasets, which typically require unusually deep GNNs. To explain the theoretical possibility of non-trainable aggregations, we connect our findings to Kolmogorov–Arnold representations and discuss when mean aggregation can be sufficient. In conclusion, our results call for (i) richer benchmarks benefiting from learning diverse neighborhood aggregations, (ii) strong tabular baselines as standard, and (iii) employing and advancing tabular models for graph data to gain new insights into related tasks.</p>
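
A minimal sketch of the idea the abstract above describes: fixed, training-free neighborhood aggregation that turns node classification into a tabular problem. This is not the paper's implementation; the function name, the adjacency-list input, and the choice of plain k-hop mean aggregation are illustrative assumptions based only on the abstract.

```python
import numpy as np

def mean_aggregation_features(adj, X, hops=2):
    """Concatenate each node's own features with its 1..k-hop mean-aggregated
    neighbor features. The aggregation has no trainable weights, so the result
    is a plain feature table usable by any tabular classifier.

    adj: adjacency list, adj[i] = list of neighbor indices of node i
    X:   (num_nodes, num_features) array of raw node features
    """
    feats = [X]
    H = X
    for _ in range(hops):
        # one round of non-trainable mean aggregation over direct neighbors
        H = np.stack([H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
                      for nbrs in adj])
        feats.append(H)
    return np.concatenate(feats, axis=1)

# Toy usage: a 3-node path graph with 2-dimensional features.
adj = [[1], [0, 2], [1]]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
table = mean_aggregation_features(adj, X)
print(table.shape)  # (3, 6): own features + 1-hop mean + 2-hop mean
```

The rows of `table` can then be fed to any tabular model, such as a well-tuned MLP; this is the reformulation the abstract argues can rival trained GNN aggregations on most benchmarks.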