Proposal: Two-stage DAP debugger #4
Description
@regetom and I were discussing lightweight ways to connect Daffodil to a DAP-compliant client such as VS Code, and we would like to propose a two-stage approach:
1. augment the existing "tracing debugger" to serialize Daffodil processing state to a file; then
2. create a DAP-compliant debugging process which loads a serialized trace and adapts it to DAP.
This architecture would support a (very) minimal debugging experience: at every step,
- highlight the currently processing schema element; and
- highlight the current location within the input data being processed.
Re (1), the current trace debugger is "merely" an instance of the interactive debugger with a static set of commands; it outputs a subset of the current state to stdout in an ad-hoc rendering format. Instead, a new trace debugger could output the captured state in a structured format, such as JSON, to be read by downstream programs. A minimal set of information can be serialized to support the desired debugging tasks, and this set can be grown as needed. We also imagine that a serialized trace, in a known and more structured format, would be useful for scenarios beyond debugging, such as testing.
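As a sketch of what such a structured trace record might look like (the field names here are hypothetical, not Daffodil's actual state model), each step could be serialized as one JSON object per line so that a consumer can stream the file:

```scala
// Hypothetical minimal trace event; fields chosen to support the two
// debugging tasks above (current schema element + current data location).
final case class TraceEvent(
  step: Int,              // monotonically increasing step counter
  schemaElement: String,  // name of the schema element being processed
  schemaLocation: String, // location of that element within the schema
  dataOffsetBits: Long    // current bit position within the input data
) {
  // Hand-rolled JSON rendering to keep the sketch dependency-free; a real
  // implementation would use a JSON library (e.g. circe or zio-json).
  def toJson: String =
    s"""{"step":$step,"schemaElement":"$schemaElement",""" +
      s""""schemaLocation":"$schemaLocation","dataOffsetBits":$dataOffsetBits}"""
}

object TraceWriter {
  // One JSON object per line ("JSON Lines") keeps the trace appendable
  // and streamable by downstream programs.
  def serialize(events: Seq[TraceEvent]): String =
    events.map(_.toJson).mkString("\n")
}
```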
Re (2), to create a DAP-compliant adapter, we would load a trace file and infer DAP-related state from it:
- the processing "thread" along with its "stacktrace";
- "variables";
- etc.
(I'm still learning what concepts DAP requires.)
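To illustrate how a "stacktrace" might be inferred, here is a sketch that assumes (hypothetically) each trace event records the full path of nested schema elements; whether Daffodil exposes such a path is exactly the open question below:

```scala
// A DAP StackFrame carries (at least) an id and a name; this sketch
// maps each level of a nested schema-element path to one frame.
final case class StackFrame(id: Int, name: String)

object DapState {
  // e.g. "file/record/field" => frames for field, record, file
  // (innermost first, as DAP clients display the top frame first).
  def stackTrace(elementPath: String): List[StackFrame] =
    elementPath
      .split('/')
      .toList
      .zipWithIndex
      .map { case (name, i) => StackFrame(i, name) }
      .reverse
}
```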
Relating to the existing code in this repository, the notion of an "event stream" (`type EStream = ZStream[Any, Nothing, Event]`) can be reused: the tracing debugger produces an event stream, which then happens to be serialized to a file (or wherever). The "trace consumer" DAP process would read such a stream into memory, maintain the current debugging state, and allow "movement" within the trace history via received DAP commands. The current set of `Command` subtypes would be replaced by, or integrated with, DAP-related types (using [scala-debug-adapter](https://github.com/scalacenter/scala-debug-adapter), etc.).
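The "movement" within trace history could be as simple as a cursor over the in-memory event vector; a minimal sketch, with the `Event` payload and the DAP wiring elided:

```scala
// Sketch of the trace consumer's navigation state. Forward/backward
// movement would be driven by DAP `next` and `stepBack` requests
// (DAP supports backwards stepping when the adapter advertises the
// `supportsStepBack` capability).
final class TraceCursor[E](events: Vector[E]) {
  require(events.nonEmpty, "a trace must contain at least one event")
  private var index = 0

  def current: E = events(index)

  // Clamp at the ends rather than failing: stepping past the last
  // event (or before the first) just stays put.
  def stepForward(): E = { if (index < events.length - 1) index += 1; current }
  def stepBack(): E    = { if (index > 0) index -= 1; current }
}
```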
What do you think about this architectural split, and about the structure of such a DAP-compliant debugger?
Some open questions:
- Given the available state from Daffodil, can a stacktrace of the schema processing be inferred?