|
1 | | -[English](README.en-US.md) | [简体中文](README.md) |
| 1 | +[简体中文](README.md) | [English](README.en-US.md) |
2 | 2 |
|
3 | 3 | # LLM Client for WPF |
4 | 4 |
|
5 | | -A lightweight, open-source large language model (LLM) client built with `.NET` and `WPF`. This project provides an intuitive and feature-rich interaction tool for utilizing various LLM services. By default, it supports some GitHub Copilot service models (e.g., `GPT-4o`, `O1`, and `DeepSeek`), with planned extensions for other endpoints. |
6 | | - |
7 | | - |
8 | | - |
9 | | -## Key Features |
10 | | - |
11 | | -1. **Pure .NET WPF Implementation** |
12 | | - - Built with the `MaterialDesign` library for a modern UI. |
13 | | - - Uses `Microsoft.Extensions.AI` for seamless LLM API integration. |
14 | | - |
15 | | -2. **Basic LLM Interaction** |
16 | | - - Configure and interact with language models. |
17 | | - |
18 | | -3. **Code Highlighting** |
19 | | - - Integrated `TextmateSharp` for syntax highlighting in various programming languages. |
20 | | - |
21 | | -4. **Context Management** |
22 | | - - Manually manage chat context by excluding specific conversation entries without deleting them. |
23 | | - |
24 | | -5. **Theme Switching** |
25 | | - - Supports light and dark themes for UI. |
26 | | - - Allows switching between different code highlighting themes. |
27 | | - |
28 | | -6. **UI Performance Optimization** |
29 | | - - Conversation records implement UI virtualization for improved performance with large data sets. |
30 | | - |
31 | | -7. **Markdown Export** |
32 | | - - Save chat history in Markdown format for sharing or archiving. |
33 | | - |
34 | | -## Project Screenshots |
35 | | - |
36 | | - |
37 | | - |
38 | | -## Planned Features |
39 | | - |
40 | | -The following features are under active development: |
41 | | - |
42 | | -1. **Multi-Endpoint Support** |
43 | | - - Add support for other LLM endpoints, such as `Claude`. |
44 | | - |
45 | | -2. **Chain-of-Thought (CoT) Presets** |
46 | | - - Enable users to orchestrate predefined Chain-of-Thought (CoT) workflows for multi-step reasoning. |
47 | | - |
48 | | -3. **Auto-CoT** |
49 | | - - Automatically generate Chain-of-Thought processes for better handling of complex tasks. |
50 | | - |
51 | | -4. **RAG Integration** |
52 | | - - Introduce Retrieval-Augmented Generation (RAG) for advanced knowledge-driven generation. |
53 | | - |
54 | | -5. **Automatic Context Management** |
55 | | - - Offer intelligent context management, eliminating the need for manual exclusions. |
56 | | - |
57 | | -6. **Multi-Model Output Comparison** |
58 | | - - Compare outputs from different LLMs for better model evaluation. |
59 | | - |
60 | | -7. **Searching Functionality** |
61 | | - - Quickly search through chat history and knowledge base content. |
62 | | - |
63 | | -## How to Get Involved |
64 | | - |
65 | | -This project is still in active development. You can contribute in the following ways: |
66 | | - |
67 | | -1. **Submit Issues or Pull Requests**: All bug reports, feature requests, or suggestions are welcome! |
68 | | -2. **Become a Contributor**: Fork this repository and submit your changes through Pull Requests. |
69 | | -3. **Contact the Author**: Reach out via [GitHub Issues](https://github.com/) for questions or collaboration opportunities. |
70 | | - |
71 | | - |
72 | | -## Usage Instructions |
73 | | - |
74 | | -> Detailed instructions on how to compile, run, and configure the project will be added. |
75 | | -
|
76 | | -## Acknowledgements |
77 | | - |
78 | | -Special thanks to the following open-source libraries and tools: |
79 | | - |
80 | | -- [MaterialDesignInXAML](https://github.com/MaterialDesignInXAML/MaterialDesignInXamlToolkit) |
81 | | -- [TextmateSharp](https://github.com/microsoft/TextMateSharp) |
82 | | -- [Microsoft.Extensions.AI](https://learn.microsoft.com/en-us/dotnet/) |
83 | | -- And other great tools and frameworks not listed here. |
| 5 | +A lightweight, intuitive, and feature-rich large language model (LLM) client built on `.NET` and `WPF`, for interacting with a variety of supported LLM services. The project natively supports some of the models offered through the GitHub Copilot service (such as `GPT-4o`, `O1`, and `DeepSeek`) and can be extended to other service endpoints.
| 6 | + |
| 7 | + |
| 8 | + |
| 9 | +## Pure .NET WPF Implementation |
| 10 | + I want to show that it is feasible to build a usable, modern LLM client entirely with .NET and classic WPF👍, while most current clients are built on Python + TypeScript😢.
| 11 | + - Utilize `MaterialDesignThemes` for modern interface design. |
| 12 | + - Use `Microsoft.Extensions.AI` for integration with large language model APIs. |
| 13 | + - Use `Markdig.Wpf` for Markdown parsing. |
| 14 | + - Utilize `Microsoft.SemanticKernel` for core large model dialogue and RAG capabilities. |
| 15 | + - Use `ig` for reading and parsing PDF documents. |
| 16 | + - Implement `ModelContextProtocol` for MCP protocol support. |
| 17 | + - Utilize `TextMateSharp` for syntax highlighting. |
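For readers new to `Microsoft.Extensions.AI`, here is a minimal sketch of how a streaming chat call can be wired up. This is not the project's actual code: the model name and token are placeholders, and the `AsIChatClient()` adapter call (from the `Microsoft.Extensions.AI.OpenAI` package) has changed names across the library's preview releases.

```csharp
using Microsoft.Extensions.AI;

// Illustrative only: the real project configures endpoints and models through its UI.
// AsIChatClient() is provided by the Microsoft.Extensions.AI.OpenAI adapter package.
IChatClient client =
    new OpenAI.Chat.ChatClient(model: "gpt-4o", apiKey: "YOUR_TOKEN")
        .AsIChatClient();

var messages = new List<ChatMessage>
{
    new(ChatRole.System, "You are a helpful assistant."),
    new(ChatRole.User, "Hello!")
};

// Streaming keeps the WPF UI responsive while tokens arrive.
await foreach (var update in client.GetStreamingResponseAsync(messages))
{
    Console.Write(update.Text);
}
```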
| 18 | + --- |
| 19 | +## Basic Session Features |
| 20 | + **Endpoint-Model Configuration**
| 21 | + Configuration is organized by endpoint: users can add multiple endpoints and register different models under each one.
| 22 | + The GitHub Copilot endpoint is preset, so users only need to provide a token to use it.
| 23 | +  |
| 24 | + Custom OpenAI-API-compatible endpoints are also supported. Here I have added four endpoints:
| 25 | +  |
| 26 | + The screenshot above shows [openrouter](https://openrouter.com), the largest model aggregator. Besides configuring models manually, you can simply enter a model id and fetch the model's details from openrouter, which is convenient for users.
| 27 | + **Create a Dialogue** |
| 28 | + Select 'New Dialogue' in the bottom left corner, choose an endpoint and a model, and enter a question to start a dialogue. <img src="images/createdialog.png" alt="start dialog" style="zoom:50%;"/> |
| 29 | +  |
| 30 | + Each model has its own capabilities, such as streaming output or function calling, and these capabilities determine which dialogue features are available.
| 31 | + **Dialogue Interface** |
| 32 | +  |
| 33 | + - Fine-grained dialogue context management: users can clear the history, isolate part of it, or exclude individual replies. Unlike tools that manage context automatically, the user fully controls what context is sent.
| 34 | + - Supports message type differentiation (e.g., user messages, model replies, system messages, etc.). |
| 35 | + - Saving and loading dialogue records for long-term use and management is supported. |
| 36 | + - Resending messages is possible. |
| 37 | + - Messages can be compared, with full context support, including function calls.
| 38 | + - Model switching and model parameter adjustments are allowed. |
| 39 | + <img src="images/model_param.png" alt="edit param" style="zoom:60%;" /> |
| 40 | + - Dialogues are searchable; matches are highlighted in yellow, and you can jump quickly between results.
| 41 | + - Markdown rendering and code highlighting are supported. |
| 42 | + - Dialogues can be exported to Markdown format for easier archiving and sharing. |
| 43 | + - Supports cloning dialogues. |
| 44 | + - Supports backup and import of dialogues. |
| 45 | + - Supports UI virtualization of dialogue records to enhance performance. |
| 46 | + - Theme switching (dark mode and light mode) is supported. |
| 47 | + |
| 48 | + |
| 49 | + - Supports code highlighting theme switching. |
| 50 | + **Dialogue Features** |
| 51 | + - Parameter changes and model switching. |
| 52 | + - Supports message resending. |
| 53 | + - Supports message comparison. |
| 54 | + - Supports corpus search and manual context addition. |
| 55 | + - Supports streaming output. |
| 56 | + - Supports function calling (including MCP). |
| 57 | + - Supports image input (images can be pasted directly into the input box). |
| 58 | + - Supports RAG function (corpus functionality is encapsulated as a function to maximize the flexibility of large models). |
| 59 | + - Supports search parameters (additional parameters are available for different API providers, such as OpenRouter).
| 60 | + - Supports search tools (Function Call). |
| 61 | + - Supports Thinking switch (OpenRouter). |
| 62 | +---
| 63 | + |
| 64 | +## Fine-grained RAG |
| 65 | +Retrieval-Augmented Generation refers to enhancing the response capabilities of generative models by retrieving relevant information. In this project, the RAG feature allows users to combine external knowledge bases (e.g., documents, websites) with large language models to provide more accurate and contextually relevant responses. Unlike ordinary RAG, this project's RAG features have three significant characteristics: |
| 66 | +1. **Fine-grained Document Import**: A well-known bottleneck of RAG is document preprocessing, where much information can be lost. This project supports importing multiple file formats (such as PDF, Word, and plain text), with fine-grained chunking and embedding options that preserve as much of the document's content as possible.
| 67 | +2. **Function Call Integration**: The RAG feature is exposed as a function call, and is not limited to plain queries: it also supports document-structure queries, so the LLM can generate query strings after understanding the document's outline. This makes full use of the model's reasoning ability and lets it decide dynamically when to invoke RAG, improving response relevance and accuracy.
| 68 | +3. **Structured Query**: Files will not be directly split into flat chunks but parsed into structured data (e.g., chapters, paragraphs, etc.), with each node automatically generating a Summary to support more complex query strategies. The query process will selectively execute top-down or bottom-up retrieval strategies based on the structure's characteristics. The query results are also returned in a structured format, making it easier for large models to understand and use. For instance, a returned paragraph will include its chapter information, facilitating the model's contextual understanding. |
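To make the idea concrete, a structured query result could look something like the sketch below. The field names are hypothetical (the project's actual schema is not documented here), but they illustrate how a paragraph is returned wrapped in its chapter context and summary:

```json
{
  "nodeType": "Paragraph",
  "chapterPath": "3 Evaluation > 3.2 Results",
  "chapterSummary": "Compares the method against two baselines on three datasets.",
  "content": "On the held-out test set, the proposed approach achieves ..."
}
```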
| 69 | + |
| 70 | +**File Import** |
| 71 | +Users can import files through the UI (currently only PDF and Markdown are supported). The file management page is as follows:
| 72 | + |
| 73 | +After selecting a file, use the toggle button on the right  to start the import process. Generally speaking, RAG loses information at the first step, **file structuring**. To give users tight control over the RAG pipeline, the import process exposes additional controls. For the PDF below, for example, you can set the margins, preview the result, and edit the bookmarks (some PDFs carry incorrect bookmark metadata).
| 74 | + |
| 75 | + |
| 76 | +The image above shows the nodes divided according to bookmarks; the next step generates summaries, and finally the file is embedded and stored in a vector database (local SQLite). The result can be previewed at the end:
| 77 | + |
| 78 | +As shown above, the data is displayed in a tree structure, showing the Summary for Bookmark nodes and the actual content for Paragraph/Page nodes. |
| 79 | +If the build fails, you can check the reason in the log:
| 80 | +<img src="images/rag_log.png" alt="log" style="zoom:70%;" /> |
| 81 | +## MCP Support |
| 82 | +- Supports adding tools via the UI and JSON 
| 83 | +- The JSON format is similar to that used by tools such as Claude Code
| 84 | + |
| 85 | +- After adding a server, you can refresh manually to fetch its tool list
| 86 | + |
| 87 | +- An MCP tool can also carry a prompt, which is automatically appended to the System Prompt when that tool is selected
| 88 | + |
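For reference, the JSON form follows the `mcpServers` convention used by Claude Code and similar tools; the server name and package below are just an example:

```json
{
  "mcpServers": {
    "everything": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-everything"]
    }
  }
}
```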
| 89 | +## Project (Experimental) |
| 90 | +The project feature lets users create and manage multiple related dialogues that share the same context. This suits scenarios that require collaboration across dialogues or tracking several topics. Each project can contain multiple dialogues, and you can conveniently switch between and manage them.
| 91 | + |
| 92 | + |
| 93 | +## Features in Progress |
| 94 | + |
| 95 | +The following features are under development: |
| 96 | + |
| 97 | +1. **Predefined CoT Orchestration** |
| 98 | + - Support orchestration based on the Chain-of-Thought (CoT) reasoning process, helping users efficiently obtain multi-step reasoning outputs. |
| 99 | + |
| 100 | +2. **Auto-CoT** |
| 101 | + - Automatically generate Chain-of-Thought reasoning to enhance automated processing outcomes for complex tasks. |
| 102 | + |
| 103 | +3. **Automatic Context Management** |
| 104 | + - Provides intelligent context management features, eliminating the need to manually exclude historical records. |
| 105 | +## How to Contribute to the Project |
| 106 | + |
| 107 | +As the project is still under development, you can contribute in the following ways: |
| 108 | + |
| 109 | +1. Submit an Issue or PR: Any bug report, feature request, or suggestion is very welcome!
| 110 | +2. Become a Contributor: Directly fork the project and initiate a Pull Request. |
| 111 | +3. Contact the Author: For any questions or collaboration interests, you can contact me through [GitHub Issues](https://github.com/). |
84 | 112 |
|
85 | 113 | --- |
86 | 114 |
|
87 | | -This is a project full of potential. Contributions are warmly welcomed! |
| 115 | +This is a learning project, and I appreciate your valuable feedback! |