Conversation
Force-pushed from 39f084f to 0611106
andyw8 reviewed Feb 27, 2025
andyw8 approved these changes Feb 27, 2025
vinistock added a commit to Shopify/tapioca that referenced this pull request on Feb 27, 2025
### Motivation

The changes in Shopify/ruby-lsp#3252 will be breaking because the return value of `document.parse_result` will change. However, that doesn't impact Tapioca, and we will not include any breaking changes that do impact Tapioca in v0.24. Let's bump our upper requirement ahead of time, so that people can start upgrading now and we can eliminate any window where the Tapioca add-on wouldn't work.

### Implementation

Bumped the upper bound to < 0.25.
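(For illustration only: a bump like this lands in the gemspec's dependency declaration. The fields and the lower bound below are placeholders, not the real tapioca.gemspec.)

```ruby
# Illustrative sketch only: placeholder name, version, and lower bound.
Gem::Specification.new do |spec|
  spec.name    = "tapioca"
  spec.version = "0.0.0" # placeholder

  # Allow every ruby-lsp release up to (but not including) 0.25, so users
  # can upgrade ahead of the parse_result change.
  spec.add_dependency("ruby-lsp", ">= 0.23.0", "< 0.25")
end
```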
Force-pushed from 8b6dce7 to 21a1986
Force-pushed from 21a1986 to c2815a7
Force-pushed from c2815a7 to c8965fa
Force-pushed from c8965fa to 29e300a
Morriar reviewed May 1, 2025
Force-pushed from bdd7521 to 588942c
Force-pushed from 588942c to 9185c3c
Force-pushed from 9185c3c to a149893
st0012 added a commit that referenced this pull request on Jun 6, 2025
In #3252, `RubyDocument#parse_result` becomes a `Prism::ParseLexResult`, whose `value` is an array instead of a single node. We forgot to update `collect_references` to account for this change because the code path was not covered by tests. This commit adds a test for that code path and updates `collect_references` to use `Prism::LexResult`.
st0012 added a commit that referenced this pull request on Jun 6, 2025
Update collect_references to use Prism::LexResult

In #3252, `RubyDocument#parse_result` becomes a `Prism::ParseLexResult`, whose `value` is an array instead of a single node. We forgot to update `collect_references` to account for this change because the code path was not covered by tests. This commit adds a test for that code path and updates `collect_references` to use `Prism::LexResult`.
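For context, here is a minimal sketch of the shape change that commit deals with, using standalone Prism calls rather than the actual `collect_references` code:

```ruby
require "prism"

source = <<~RUBY
  class Foo
    def bar; end
  end
RUBY

# `Prism.parse` returns a ParseResult whose `value` is the root node.
root = Prism.parse(source).value
root.class # => Prism::ProgramNode

# `Prism.parse_lex` returns a ParseLexResult whose `value` is an array:
# the root node plus the token stream, so callers have to unpack it.
node, _tokens = Prism.parse_lex(source).value
node.class # => Prism::ProgramNode
```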

Motivation
This is part of #1849, which is moving forward now with the immense help of @koic.
This PR switches the LSP from using `parse` to using `parse_lex` for Ruby and ERB documents. The difference is that `parse` only returns the AST nodes, whereas `parse_lex` returns both the AST and the tokenization information. That information is required for RuboCop to use the Prism AST.

Implementation
The LSP itself doesn't need the lex part of the information; we will only need it for RuboCop-related features. So I created an `ast` method that returns the same thing as `parse` does, and fixed all of the call sites.
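As a rough sketch of the idea (a simplified stand-in class, not the real `RubyDocument`, which carries more state than shown here):

```ruby
require "prism"

# Simplified stand-in for the LSP's document class, just to illustrate the
# split between the full parse_lex result and an AST-only accessor.
class ExampleDocument
  def initialize(source)
    @source = source
  end

  # The full result, including tokens, which RuboCop-related features need.
  def parse_result
    @parse_result ||= Prism.parse_lex(@source)
  end

  # AST-only accessor that returns what `parse` used to: the root node.
  # parse_lex's value is `[node, tokens]`, so take the first element.
  def ast
    parse_result.value.first
  end
end

doc = ExampleDocument.new("puts 'hello'")
doc.ast.class # => Prism::ProgramNode
```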
Note

Parse lex is actually quite a bit slower than regular parse. However, the performance gains we will get from avoiding the double parse on every diagnostic computation will more than make up for this cost. We'll monitor our performance telemetry for any meaningful degradation, but I don't expect that to be the case.
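For anyone who wants to gauge the difference locally, a quick and unscientific comparison along these lines works; the numbers will vary by file and machine:

```ruby
require "benchmark"
require "prism"

# Benchmark parsing this very script; swap in any Ruby file you like.
source = File.read(__FILE__)

Benchmark.bm(12) do |x|
  x.report("parse")     { 1_000.times { Prism.parse(source) } }
  x.report("parse_lex") { 1_000.times { Prism.parse_lex(source) } }
end
```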
Automated Tests
Updated accordingly.