
Implement documentation parser #103

Open
snelusha wants to merge 18 commits into ballerina-platform:main from snelusha:feat/documentation-parser

Conversation

@snelusha
Contributor

@snelusha snelusha commented Feb 6, 2026

Purpose

Resolves #16

This PR implements a documentation parser and end-to-end Markdown documentation support across the parser, syntax tree, pretty-printer, and test corpus.

Key changes

  • Documentation lexer & parser: adds a documentation-oriented lexer and a new DocumentationParser that tokenize and parse Markdown-style documentation (doc lines, inline code, code blocks, parameter/return/deprecation sections, and name/code references) with lookahead, diagnostics, and AST construction.
  • Parser integration: parseDocumentationString now invokes the documentation lexer/parser pipeline; documentation files are no longer ignored by the corpus parser and documentation parsing is integrated into the main parsing flow.
  • Syntax tree and transformer: adds public ST node constructors for documentation elements and removes several legacy transformer hooks to consolidate documentation handling via new nodes.
  • AST transformation & attachments: implements helpers to extract documentation from metadata and attach structured Markdown documentation to function and constant declarations.
  • Pretty-printer: extends printing logic to render Markdown documentation (doc lines, parameters, return docs, deprecation notes, references, and code blocks) alongside existing declaration output.
  • Tokenizer/lexer API adjustments: introduces a Lexer interface to support pluggable lexers, updates internal lexer naming/receivers, and adjusts TokenReader usage and mode handling.
  • Tests & corpus: adds a broad set of documentation-focused corpus files (bal, ast, bir, cfg, parser) and updates test harness control flow; includes many Git LFS pointer updates for large test assets.
  • Cleanup & replacements: replaces placeholder panics/stubs with concrete implementations for documentation handling and makes related import/formatting adjustments.

Outcomes

  • Enables structured extraction, representation, and printing of Markdown documentation within the syntax tree and source output.
  • Integrates documentation parsing into the existing parsing pipeline and expands test coverage for documentation constructs.
  • Introduces significant new parsing components and public ST constructors; reviewers should evaluate API surface, compatibility, and test coverage.

Resolves: #16

@snelusha snelusha changed the title from "Define lexer interface" to "Implement documentation parser" Feb 6, 2026
@snelusha snelusha force-pushed the feat/documentation-parser branch from 06f9272 to 3fd5e13 Compare February 6, 2026 20:19
@snelusha snelusha force-pushed the feat/documentation-parser branch from fa23cc0 to 90d1fc1 Compare February 8, 2026 18:57
@coderabbitai

coderabbitai bot commented Feb 9, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a full Markdown documentation pipeline: a documentation lexer and parser, integrates them into the main parser and AST builder, attaches Markdown docs to functions/constants, extends the pretty-printer to emit docs, adjusts the lexer/tokenizer API, and adds many documentation-focused corpus and test assets.

Changes

Cohort / File(s) Summary
Documentation core
parser/documentation_lexer.go, parser/documentation_parser.go
New documentation lexer and parser implementing tokenization and parsing of markdown-like docs (lines, params, references, backticks, code blocks), node factories, diagnostics, and ST node construction.
Parser integration
parser/parser.go, parser/corpus_parser_test.go
Integrates documentation parsing into parseDocumentationString; updates TokenReader usage to accept lexer pointer; removes documentation-ignore behavior in corpus tests.
AST builder & markdown wiring
ast/node_builder.go, parser/tree/st_node.go, parser/tree/node_transformer.go
Added helpers to extract/transform documentation into AST attachments, new ST node constructors for documentation elements; removed several markdown transform hooks from NodeTransformer.
Pretty-printer
ast/pretty_printer.go
Implements printing routines for Markdown documentation nodes and prints attached docs for functions and constants.
Lexer / tokenizer API
parser/lexer.go, parser/tokenizer.go
Concrete Lexer renamed to private lexer; new public Lexer interface added; TokenReader/consumer callsites adapted; mode handling adjusted.
Tests / harness
ast/corpus_ast_test.go
Centralized test control via added TestMain and removed local flag.Parse() calls in tests.
Corpus assets (sources, AST, BIR, CFG)
corpus/bal/..., corpus/ast/..., corpus/bir/..., corpus/cfg/..., corpus/parser/...
Added many documentation-focused test sources and generated artifacts (basic docs, code blocks, params, deprecations, multiline docs) across multiple corpus directories.
Testdata LFS updates
parser/testdata/parser/** (many *.json)
Updated numerous Git LFS pointer metadata entries (oid/size) for test artifacts; no behavior changes.
Tokenizer consumers
parser/parser.go, parser/tokenizer.go, parser/lexer.go
Adjusted CreateTokenReader and TokenReader usage sites to match new lexer/tokenizer interface and private type changes.

Sequence Diagram(s)

sequenceDiagram
    participant MainParser as Main Parser
    participant DocLexer as Documentation Lexer
    participant TokenReader as TokenReader
    participant DocParser as Documentation Parser
    participant ASTBuilder as AST Builder
    participant PrettyPrinter as Pretty Printer

    MainParser->>DocLexer: newDocumentationLexer(CharReader, leadingTrivia, diagnostics)
    DocLexer->>TokenReader: emit documentation tokens
    MainParser->>TokenReader: CreateTokenReader(lexer, debugCtx)
    MainParser->>DocParser: NewDocumentationParser(TokenReader, dbg)
    MainParser->>DocParser: Parse() -> markdown ST node(s)
    DocParser->>ASTBuilder: return STNode tree (markdown nodes)
    ASTBuilder->>ASTBuilder: createMarkdownDocumentationAttachment(...)
    ASTBuilder->>ASTBuilder: attach docs to function/constant nodes
    ASTBuilder->>PrettyPrinter: pass AST with documentation nodes
    PrettyPrinter->>PrettyPrinter: printMarkdownDocumentation() -> output

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Suggested reviewers

  • warunalakshitha

Poem

"🐰 I nibble backticks, hop through each line,
I stitch tiny docs till the references shine.
From lexer burrow to parser's bright den,
I bind stories to functions and skip back again. 🥕"

🚥 Pre-merge checks | ✅ 3 | ❌ 2
❌ Failed checks (2 warnings)
  • Description check ⚠️ Warning: The PR description is extremely minimal, containing only 'Resolves #16' with no details about implementation, approach, testing, or impact. Resolution: expand the description to include Purpose, Goals, Approach, testing details, and any relevant context beyond just the issue number.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 28.57%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (3 passed)
  • Title check ✅ Passed: The title 'Implement documentation parser' clearly and concisely summarizes the main change of adding documentation parser functionality.
  • Linked Issues check ✅ Passed: Code changes implement comprehensive documentation parsing infrastructure (lexer, parser, AST node builders, pretty printers) with extensive test corpus files, addressing the objective to migrate the documentation parser as indicated in issue #16.
  • Out of Scope Changes check ✅ Passed: All changes are focused on the documentation parser implementation: new lexer/parser modules, AST transformations, test data files, and supporting infrastructure. Changes to Lexer visibility and the added Lexer interface support the documentation parser goals.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.





@snelusha snelusha force-pushed the feat/documentation-parser branch from 2e145ac to 3aa9182 Compare February 9, 2026 06:30
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
ast/pretty_printer.go (1)

728-738: ⚠️ Potential issue | 🟠 Major

Remove the extra indent decrement in printErrorTypeNode.

indentLevel is decremented without a matching increment, which can make indentation negative and corrupt subsequent output formatting.

🔧 Suggested fix
 	if !node.IsTop() {
 		p.indentLevel++
 		p.PrintInner(node.detailType.TypeDescriptor.(BLangNode))
 		p.indentLevel--
 	}
-	p.indentLevel--
 	p.endNode()
🤖 Fix all issues with AI agents
In `@ast/node_builder.go`:
- Around line 2905-2939: In transformCodeBlock on NodeBuilder, keep code block
line breaks when flattening: when iterating codeLines (variable codeLines and
codeLine.CodeDescription()), append a newline (e.g., "\n") after each
codeDescription.Text() (except possibly the last line) and ensure you insert a
newline before appending endBacktick.Text(); adjust the docText construction
around startBacktick, codeLines loop, and endBacktick so multi-line code blocks
preserve their line breaks in the resulting bLangDocLine.Text.

In `@parser/documentation_lexer.go`:
- Around line 637-650: In readDocReferenceTypeToken, call dl.reader.Mark() at
the start so the lexer snapshot is set before consuming chars; this ensures
subsequent getLexeme() (via processReferenceType or
getDocSyntaxTokenWithoutTrivia) returns the correct token text without including
prior trivia or being empty. Add the Mark() before checking dl.peek(), keep the
existing behavior for the BACKTICK branch (still Advance() and SwitchMode), and
then proceed to advance identifier chars and call processReferenceType() as
before.
- Around line 30-45: Rename the exported DocumentationLexer type and
NewDocumentationLexer constructor to unexported names (documentationLexer and
newDocumentationLexer) and update all internal references accordingly;
specifically change the type declaration "type DocumentationLexer struct" to
"type documentationLexer struct" and the constructor signature from "func
NewDocumentationLexer(charReader text.CharReader, leadingTriviaList
[]tree.STNode, diagnostics []tree.STNodeDiagnostic, debugCtx
*debugcommon.DebugContext) *DocumentationLexer" to "func
newDocumentationLexer(...) *documentationLexer", preserving the body (calls to
NewLexer, setting lexer.context.leadingTriviaList/diagnostics,
StartMode(PARSER_MODE_DOC_LINE_START_HASH), and previousBacktickMode =
PARSER_MODE_DEFAULT_MODE) and then replace every use site that expects
DocumentationLexer/NewDocumentationLexer (e.g., the single usage in parser code)
to the new unexported names and adjust types accordingly so compilation
succeeds.

In `@parser/documentation_parser.go`:
- Around line 105-115: The isInlineCodeRef function calls
p.getNextNextToken().Kind() without nil checks and can panic on truncated input;
update isInlineCodeRef to first capture t := p.getNextNextToken(), check if t ==
nil and if so return a conservative boolean (e.g., false) and emit a
missing-token diagnostic via the parser's diagnostic API (e.g., p.addDiagnostic
or p.reportMissingToken) before proceeding; then use t.Kind() in the existing
switch arms (replace direct calls to p.getNextNextToken().Kind() with the
guarded local t) to avoid panics.

In `@parser/testdata/parser/documentation/default_value_initialization/main.json`:
- Around line 1-3: Add an ignore rule to biome.json to prevent Biome from
attempting to parse Git LFS pointer files under parser/testdata; update the
file's configuration (look for the biome.json root object) and add an "ignore"
or "ignorePatterns" entry that excludes "parser/testdata/**" (or a broader
pattern matching LFS-managed files) so Biome will skip those paths during
lint/parse steps and avoid JSON parse errors from LFS pointer files.

In `@parser/testdata/parser/documentation/type_models_project/type_models.json`:
- Around line 1-3: The file is a Git LFS pointer (it starts with "version
https://git-lfs.github.com/spec/v1" and contains "oid sha256:"), so JSON
consumers must detect and skip or resolve LFS pointers instead of parsing them
as JSON; update the JSON-reading logic/tests (where files like type_models.json
are opened) to check the file start for the LFS pointer header and either (a)
treat it as non-JSON and skip the file, or (b) invoke the CI/checkout step to
fetch LFS content before parsing, and add a unit test that verifies the reader
returns a clear error/skip when encountering the LFS pointer header.

In `@parser/testdata/parser/record/record_annotation.json`:
- Around line 1-3: The CI release workflow is not fetching Git LFS objects so
parser/testdata/**/*.json LFS pointers fail JSON linting; update the release
workflow (release.yml) to set lfs: true so LFS files are fetched before linting,
or alternately add an exclusion for parser/testdata/** (or the JSON linting
rule) in biome.json to skip those testdata files during linting—change either
the release.yml lfs flag or the biome.json lint exclude pattern accordingly.

In `@parser/tokenizer.go`:
- Around line 23-29: Rename the exported interface Lexer to an unexported lexer
(lowercase) and update all internal type references: change the interface
declaration from "type Lexer interface { ... }" to "type lexer interface {
NextToken() tree.STToken; StartMode(ParserMode); SwitchMode(ParserMode);
EndMode(); GetCurrentMode() ParserMode }", then update the tokenizer.go field
type at the previous line 32 and the parameter type at the previous line 39 to
use the unexported lexer type; ensure any other internal parser package
references are updated accordingly (no external packages should rely on this
exported name).
🧹 Nitpick comments (1)
parser/testdata/parser/statements/vardeclr/module_record_var_decl_annotation_negetive.json (1)

1-3: Test data metadata update is appropriate.

This Git LFS pointer file indicates the underlying test data has been updated (size increased from 17001 to 23217 bytes), which aligns with the PR's objective of adding parser tests and documentation corpus.

Note: The static analysis errors from Biome are false positives—it's attempting to parse this as JSON when it's actually a Git LFS pointer file with a specific non-JSON format.


Optional: Consider fixing the filename typo.

The filename contains "negetive" instead of "negative". While this is a pre-existing typo and fixing it is low priority, consider renaming the file in a future cleanup if it won't break test references.

}

// Look ahead for a "Deprecated" word match.
for i := 0; i < 10; i++ {
Contributor

can we do something like this

if dl.reader.PeekN(10) != "Deprecated" {
	return dl.readDocInternalToken()
}

Contributor Author

We'd need to add a PeekString method to charReader since PeekN returns a single char, not a string.

@snelusha snelusha force-pushed the feat/documentation-parser branch from c67558f to c4f63b4 Compare February 10, 2026 06:17
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
ast/pretty_printer.go (1)

741-752: ⚠️ Potential issue | 🟠 Major

Unbalanced indent decrement causes misalignment.

When node.IsTop() is true, the if block (lines 745-749) is skipped, but line 750 still decrements indentLevel. This causes the indent to become misaligned (potentially negative).

🔧 Proposed fix
 func (p *PrettyPrinter) printErrorTypeNode(node *BLangErrorTypeNode) {
 	p.startNode()
 	p.printString("error-type")
 	if !node.IsTop() {
 		p.indentLevel++
 		p.PrintInner(node.detailType.TypeDescriptor.(BLangNode))
 		p.indentLevel--
 	}
-	p.indentLevel--
 	p.endNode()
 }
🤖 Fix all issues with AI agents
In `@ast/pretty_printer.go`:
- Around line 419-436: The parameter printing is skipped when a node has
documentation but zero RequiredParams because the condition uses "if hasParams
|| !hasDoc"; update pretty_printer.go so the parameter block is always emitted:
always call p.printString("("), increment p.indentLevel, iterate over
node.RequiredParams (printing none if empty), decrement p.indentLevel, then call
p.printSticky(")"); remove the "!hasDoc" dependency and keep existing
indent/PrintInner logic around node.RequiredParams and
MarkdownDocumentationAttachment to preserve formatting.

In `@parser/documentation_lexer.go`:
- Around line 34-43: The constructor call in newDocumentationLexer uses the
exported NewLexer for an unexported lexer type; change the mismatch by either
renaming NewLexer to newLexer in the lexer implementation and update this call
to newLexer(charReader, debugCtx), or export the type by renaming the unexported
lexer type to Lexer and keep NewLexer; update all references accordingly (look
for NewLexer and the lexer type declaration and usage in newDocumentationLexer
and documentationLexer to ensure consistent exported/unexported naming).

In `@parser/documentation_parser.go`:
- Around line 567-586: The default panic in
DocumentationParser.parseBacktickExpr should be replaced with non-crashing
parser behavior: detect the unexpected token from p.peek() and return a
synthesized/missing token node (with a diagnostic/error recorded via the
parser's existing error reporting) instead of panicking; preserve current
behavior for BACKTICK_TOKEN, DOT_TOKEN (calling parseMethodCall) and
OPEN_PAREN_TOKEN (calling parseFuncCall), and use the same error reporting
mechanism other parse methods use so callers receive a recoverable node rather
than a crash (refer to parseQualifiedIdentifier, peek, consume, parseMethodCall,
parseFuncCall to find where to attach the diagnostic).
🧹 Nitpick comments (3)
parser/documentation_lexer.go (2)

188-203: Minor: switch structure can be simplified.

The switch in processWhitespaces (lines 196-201) has an empty case and a default that both lead to the same outer break. Consider simplifying:

♻️ Optional simplification
 		switch c {
 		case SPACE, TAB, FORM_FEED:
 			reader.Advance()
 			continue
-		case CARRIAGE_RETURN, NEWLINE:
-		default:
-			break
 		}
 		break

498-532: Consider using strings.Builder for identifier construction.

Lines 503-507 use string concatenation in a loop (identifier += string(lookAheadChar)), which creates a new string allocation on each iteration. While identifiers are typically short, using strings.Builder is more idiomatic and efficient.

♻️ Optional improvement
 func (dl *documentationLexer) processDocumentationReference(nextChar rune) bool {
 	lookAheadChar := nextChar
 	lookAheadCount := 0
-	identifier := ""
+	var identifier strings.Builder

 	for isIdentifierInitialChar(lookAheadChar) {
-		identifier += string(lookAheadChar)
+		identifier.WriteRune(lookAheadChar)
 		lookAheadCount++
 		lookAheadChar = dl.reader.PeekN(lookAheadCount)
 	}

-	switch identifier {
+	switch identifier.String() {

Add "strings" to the imports if not already present.

ast/pretty_printer.go (1)

733-737: Remove commented-out code.

Dead code clutters the codebase. If this formatting was intentional but not needed, remove it.

♻️ Proposed cleanup
 	p.indentLevel--
-	// p.buffer.WriteString("\n")
-	// for i := 0; i < p.indentLevel; i++ {
-	// 	p.buffer.WriteString("  ")
-	// }
 	p.buffer.WriteString(")")
 	p.addSpaceBeforeNode = true
 }

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@ast/pretty_printer.go`:
- Around line 746-748: The unconditional p.indentLevel-- after p.endNode() will
corrupt indentation because indentLevel is only incremented when !node.IsTop();
change the logic so you only decrement the same branch that incremented it: make
the decrement conditional on the same check (node.IsTop()) or remove the
duplicated decrement so that p.indentLevel is decremented exactly once for the
earlier increment; locate the surrounding node.IsTop() check and the two
p.indentLevel-- occurrences and ensure the decrement mirrors the increment
(i.e., wrap p.indentLevel-- in the same if !node.IsTop() block or delete the
extra decrement).
🧹 Nitpick comments (1)
ast/pretty_printer.go (1)

729-732: Remove commented-out code.

Dead code should be removed to maintain clarity. If this was debugging code or an intentional design decision, consider documenting the rationale elsewhere.

🧹 Proposed fix
-	// p.buffer.WriteString("\n")
-	// for i := 0; i < p.indentLevel; i++ {
-	// 	p.buffer.WriteString("  ")
-	// }
 	p.buffer.WriteString(")")

@snelusha snelusha force-pushed the feat/documentation-parser branch from 5a200ae to 0ba1dbb Compare February 10, 2026 07:57
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@ast/pretty_printer.go`:
- Around line 815-853: In printMarkdownReturnParameterDocumentation, when
ReturnParameterDocumentationLines is non-empty the code leaves
p.addSpaceBeforeNode = false which can cause "(return-type" to be concatenated
with the closing ")" of the doc-lines; before printing the return type (i.e.
before calling p.printString("(return-type") /
p.PrintInner(node.ReturnType.(BLangNode))) ensure a separator by setting
p.addSpaceBeforeNode = true (or inserting a single space) after finishing
doc-lines so the printString("(return-type") will be emitted with a space
between the doc-lines closing ")" and the "(return-type" token.

In `@corpus/bal/subset2/01-documentation/function_deprecation-v.bal`:
- Around line 34-35: This test file is missing the required `@output` comment
block so the corpus harness can't record expected output; add an `@output` comment
block after the public function main() containing the actual output the test
should produce (use the exact marker "@output" and the output lines beneath it)
so the *-v.bal corpus test can verify end-to-end output.

In `@parser/lexer.go`:
- Around line 48-54: The public constructor NewLexer should be unexported:
rename NewLexer to newLexer so the package does not expose a public constructor
that returns an unexported type (*lexer); update all internal call sites to use
newLexer and keep the signature (reader text.CharReader, debugCtx
*debugcommon.DebugContext) *lexer unchanged, ensuring imports/tests inside the
parser package are updated accordingly and no external packages reference
NewLexer.

In `@parser/testdata/parser/documentation/markdown_native_function.json`:
- Around line 1-3: The CI is failing because
parser/testdata/parser/documentation/*.json contains Git LFS pointer files (not
real JSON); update the CI lint job (the JSON lint step) to either fetch LFS
objects before running linters or skip these testdata JSON files — i.e.,
configure the CI JSON lint step to run `git lfs pull` prior to linting or add an
exclude pattern for parser/testdata/parser/documentation/*.json (or update the
JSON linter config used by the JSON lint job) so the linter does not attempt to
parse LFS pointer files.
🧹 Nitpick comments (1)
parser/testdata/parser/statements/vardeclr/module_record_var_decl_annotation_negetive.json (1)

1-3: Git LFS pointer update looks correct.

This is a Git LFS pointer file tracking test data. The static analysis errors from Biome are false positives—it's attempting to parse the LFS pointer format as JSON content, which is incorrect.

One minor note: the filename contains a typo ("negetive" → "negative"). Consider renaming for consistency if other files follow the correct spelling.


@snelusha snelusha force-pushed the feat/documentation-parser branch from 0ba1dbb to 4f2571f Compare February 11, 2026 10:18
@warunalakshitha warunalakshitha added this to the M3 milestone Feb 13, 2026
@snelusha snelusha force-pushed the feat/documentation-parser branch from 4f2571f to 34d2d8a Compare February 19, 2026 10:10
@snelusha snelusha force-pushed the feat/documentation-parser branch from 34d2d8a to 3932af1 Compare February 19, 2026 10:14
@codecov
Copy link

codecov bot commented Feb 19, 2026

Codecov Report

❌ Patch coverage is 62.12412% with 592 lines in your changes missing coverage. Please review.
✅ Project coverage is 29.00%. Comparing base (0a24001) to head (bc84e0d).

Files with missing lines Patch % Lines
parser/documentation_parser.go 49.19% 192 Missing and 28 partials ⚠️
parser/documentation_lexer.go 67.13% 147 Missing and 16 partials ⚠️
ast/node_builder.go 52.74% 119 Missing and 10 partials ⚠️
ast/pretty_printer.go 80.00% 46 Missing and 3 partials ⚠️
parser/lexer.go 63.15% 21 Missing ⚠️
parser/tree/st_node.go 80.00% 10 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #103      +/-   ##
==========================================
+ Coverage   27.55%   29.00%   +1.45%     
==========================================
  Files         252      254       +2     
  Lines       53694    55159    +1465     
==========================================
+ Hits        14794    15999    +1205     
- Misses      37969    38162     +193     
- Partials      931      998      +67     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.


Labels: None yet

Projects: None yet

Development: Successfully merging this pull request may close these issues: Migrate Documentation parser

3 participants