# ipex-llm
Here are 8 public repositories matching this topic...
Hardware monitoring, benchmarks, and technical reports for NPUs (Intel Meteor Lake, Apple Silicon, Qualcomm).
Updated Feb 8, 2026 - Python
LoRA + QLoRA fine-tuning toolkit optimized for Intel Arc Battlemage GPUs
Updated Aug 5, 2025 - Python
Semantic Kernel example project (Hangman) with RAG (all local inference)
multi-agent sqlite3 csharp rag azure-ai intel-gpu ai-102 semantic-kernel local-llm ollama agentic-ai ipex-llm
Updated Jan 26, 2026 - C#
Home Assistant add-on: Ollama Portable on Intel GPU with IPEX-LLM
Updated Nov 24, 2025 - Dockerfile
Hangman with Microsoft Agent Framework
multi-agent csharp azure-ai intel-gpu ai-102 semantic-kernel local-llm ollama agentic-ai ipex-llm microsoft-agent-framework
Updated Jan 21, 2026 - C#