Add functions for computation of Hilbert series #98
mohabsafey merged 31 commits into algebraic-solving:main from
Conversation
… to groebner_basis
src/algorithms/hilbert.jl
    groebner_basis(I, complete_reduction = true)
end
lead_exps = Vector{Vector{Int}}(undef, length(gb))
Threads.@threads for i in eachindex(gb)
Is it really beneficial to multithread such a small function?
I found that for examples with large enough Groebner bases (e.g. Katsura 12), when a GB is known, most of the time is actually spent computing the leading terms (unless the ideal already has internal_ordering==:degrevlex).
So this almost divides the elapsed time by the number of threads.
For the same reason, I changed the way these leading terms are computed (using internal flint-based polynomial constructors), as this turns out to be significantly faster than pure Julia operations.
However, I agree that the overall timings of these functions are not so much related to the GB computation. For example, for Katsura 12 these functions take about 500 ms (150 ms with 4 threads) on my laptop, while the GB computation takes ~1 min. So if you think it is important to reduce multi-threaded calls to a minimum, I don't mind not doing it here.
However, unlike msolve calls, the @threads macro cannot use more threads than the number the user set when starting Julia (with the --threads n parameter). So the scale of multithreading here still seems to me to be a matter of user choice.
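For context, here is a minimal self-contained sketch of the pattern under discussion, not the PR's actual code: the leading exponent vectors of a Groebner basis are collected in a threaded loop. The function name and the use of AbstractAlgebra's generic `leading_exponent_vector` accessor (rather than the flint-based constructors mentioned above) are assumptions made for illustration.

```julia
using AbstractAlgebra

# Sketch: collect the leading exponent vector of each basis element in parallel.
# Threads.@threads can only use the threads Julia was started with (--threads n).
function leading_exponents(gb::Vector{T}) where {T <: MPolyRingElem}
    lead_exps = Vector{Vector{Int}}(undef, length(gb))
    Threads.@threads for i in eachindex(gb)
        # each index is written by exactly one iteration, so there is no data race
        lead_exps[i] = leading_exponent_vector(gb[i])
    end
    return lead_exps
end
```

Once the Groebner basis is known, this loop is embarrassingly parallel, which matches the reported speed-up of roughly the number of threads.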
Yes, but exactly this is what I find a bit problematic: the user can pass a parameter for the number of threads used when computing a GB, but it is not possible to do so here. Moreover, it might be confusing to set the number of Julia threads and then get a sequential GB computation. On the other hand, using the number of Julia threads as the default value for the GB call might also lead to unwanted behaviour, for example when the GB computation is called inside an already parallelized Julia loop.
I see, I agree that this is probably not worth doing here. Or at most we could add a nr_thrds parameter to pass through to msolve.
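To make that alternative concrete, here is a hedged sketch of forwarding a thread count to msolve instead of using Julia threads. It assumes `groebner_basis` in AlgebraicSolving.jl accepts an `nr_thrds` keyword for the msolve call (the keyword name is an assumption here); the ring and ideal setup follows the package's README style.

```julia
using AlgebraicSolving

# Small example ideal in the style of the AlgebraicSolving.jl README.
R, (x, y, z) = polynomial_ring(GF(101), ["x", "y", "z"])
I = Ideal([x + 2*y + 2*z - 1, x^2 + 2*y^2 + 2*z^2 - x, 2*x*y + 2*y*z - y])

# `nr_thrds` (assumed msolve-side thread option) is independent of Julia's own
# --threads setting, which only affects Threads.@threads loops.
gb = groebner_basis(I, complete_reduction = true, nr_thrds = 4)
```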
However, this leaves open the question of multithreading (and perhaps multiprocessing) in Julia for cases where it is more critical. This will be the case, for example, when computing rational parameterisations of curves (multiple evaluations/interpolations). Maybe we can leave this for the next PR on this topic, and I'll investigate the possibilities in the meantime.
Yes, maybe we leave this out of this PR, but you are also right that we need to find a solution for the usage of Julia threads. I will try to discuss this at the Oscar meeting this week to get a bigger picture, including from packages depending on AlgebraicSolving.
Thank you! I removed the macros in dd6c189.
I'm fine with this, maybe @mohabsafey can have another look.
Fine for me. Many thanks.
Add a function to compute the Hilbert series of a polynomial ideal from a Groebner basis.
From this Hilbert series there are additional functions to compute the Hilbert polynomial, the index of regularity, and the dimension/degree. The last two functions may not be relevant (too high level) in AlgebraicSolving.jl. I just realized such functions might already exist in Oscar (though maybe for more general inputs). I don't know what this implies, given that AlgebraicSolving.jl is integrated into Oscar. However, this also means that Oscar functions cannot be used inside AlgebraicSolving.jl.
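As background for these derived quantities (standard theory, not taken from the PR): for a homogeneous ideal, the Hilbert series determines the dimension, degree, Hilbert polynomial and index of regularity as sketched below.

```latex
% Standard facts, assuming I \subseteq R = k[x_1,\dots,x_n] is homogeneous.
% The Hilbert series depends only on the leading term ideal of a Groebner basis,
% and after cancellation (assuming d >= 1) it has the form
\[
  \mathrm{HS}_{R/I}(t) \;=\; \mathrm{HS}_{R/\mathrm{LT}(I)}(t)
  \;=\; \frac{N(t)}{(1-t)^n} \;=\; \frac{Q(t)}{(1-t)^d}, \qquad Q(1) \neq 0 .
\]
% Then d = \dim(R/I), the degree equals Q(1), and the Hilbert polynomial
\[
  \mathrm{HP}_{R/I}(m) \;=\; \sum_{i=0}^{\deg Q} q_i \binom{m-i+d-1}{d-1},
  \qquad Q(t) = \sum_i q_i t^i ,
\]
% agrees with the Hilbert function for all m greater than or equal to the
% index of regularity.
```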