
Add vLLM version compatibility for AudioMediaIO changes #223

Draft

Copilot wants to merge 3 commits into main from copilot/fix-vllm-asr-problem

Conversation


Copilot AI commented Jan 29, 2026

vLLM's AudioMediaIO class was removed or relocated in recent versions, which breaks the plugin's import with AttributeError: module 'vllm.multimodal.audio' has no attribute 'AudioMediaIO'.

Changes

  • Version-agnostic AudioMediaIO handling: Try-except pattern detects presence of AudioMediaIO at import time
    • Old vLLM: inherit from AudioMediaIO and override methods with FFmpeg implementations
    • New vLLM: define standalone class with identical interface
  • Diagnostic warnings: Log which compatibility path was taken to aid debugging
  • Broader exception coverage: Catch both AttributeError and ImportError for robustness
import warnings

try:
    import vllm.multimodal.audio as _vllm_audio_module

    # Old vLLM: AudioMediaIO still exists, so subclass it and override its methods.
    _OriginalAudioMediaIO = _vllm_audio_module.AudioMediaIO

    class _PatchedAudioMediaIO(_OriginalAudioMediaIO):
        ...  # override audio loading with FFmpeg-based implementations
except (AttributeError, ImportError) as e:
    # New vLLM: AudioMediaIO was removed or relocated, so define a standalone class.
    warnings.warn(f"AudioMediaIO not found ({e}). Using standalone implementation.")

    class _PatchedAudioMediaIO:
        ...  # standalone FFmpeg-based implementation with the same interface

FFmpeg-based audio loading behavior remains unchanged across all vLLM versions.
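For illustration, here is a minimal sketch of the kind of FFmpeg-based decode helper both branches could share; the helper name _ffmpeg_decode, the mono 16 kHz float32 target format, and the exact ffmpeg invocation are assumptions made for this sketch, not the plugin's actual code:

import subprocess

import numpy as np

def _ffmpeg_decode(data: bytes, sample_rate: int = 16000) -> np.ndarray:
    # Decode arbitrary audio bytes to mono float32 PCM by piping through the ffmpeg CLI.
    cmd = [
        "ffmpeg", "-v", "error",
        "-i", "pipe:0",           # read the input container from stdin
        "-f", "f32le",            # emit raw 32-bit float little-endian samples
        "-ac", "1",               # downmix to mono
        "-ar", str(sample_rate),  # resample to the target rate
        "pipe:1",                 # write raw samples to stdout
    ]
    out = subprocess.run(cmd, input=data, stdout=subprocess.PIPE, check=True).stdout
    return np.frombuffer(out, dtype=np.float32)

Whichever branch of the try/except is taken, the patched class would route its loading methods through a helper like this, which is why audio loading behavior stays the same across vLLM versions.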

Warning

Firewall rules blocked me from connecting to one or more addresses.

I tried to connect to the following addresses, but was blocked by firewall rules:

  • docs.vllm.ai
    • Triggering command: /home/REDACTED/work/_temp/ghcca-node/node/bin/node /home/REDACTED/work/_temp/ghcca-node/node/bin/node --enable-source-maps /home/REDACTED/work/_temp/copilot-developer-action-main/dist/index.js (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

Original prompt

This section details the original issue you should resolve.

<issue_title>vllm asr problem</issue_title>
<issue_description>I am currently following the document VibeVoice vLLM ASR Deployment
to employ the vibevoice model. But I encountered this problem:

============================================================
  VibeVoice vLLM ASR Server - One-Click Deployment
============================================================

============================================================
  Updating package list
============================================================

============================================================
  Installing FFmpeg and audio libraries
============================================================

============================================================
  Installing VibeVoice with vLLM support
============================================================

============================================================
  Generating tokenizer files
============================================================

Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 112, in _get_module_details
  File "/app/vllm_plugin/__init__.py", line 17, in <module>
    from .model import VibeVoiceForCausalLM
  File "/app/vllm_plugin/model.py", line 76, in <module>
    _OriginalAudioMediaIO = _vllm_audio_module.AudioMediaIO
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'vllm.multimodal.audio' has no attribute 'AudioMediaIO'

Traceback (most recent call last):
  File "/app/vllm_plugin/scripts/start_server.py", line 170, in <module>
    main()
  File "/app/vllm_plugin/scripts/start_server.py", line 163, in main
    generate_tokenizer(model_path)
  File "/app/vllm_plugin/scripts/start_server.py", line 73, in generate_tokenizer
    run_command(
  File "/app/vllm_plugin/scripts/start_server.py", line 30, in run_command
    subprocess.run(cmd, check=True)
  File "/usr/lib/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'vllm_plugin.tools.generate_tokenizer_files', '--output', '/models/VibeVoice-ASR']' returned non-zero exit status 1.

The docker command I use is:

docker run -d --gpus '"device=2"' --name vibevoice-vllm \
  --ipc=host \
  -p 8000:8000 \
  -e VIBEVOICE_FFMPEG_MAX_CONCURRENCY=64 \
  -e PYTORCH_ALLOC_CONF=expandable_segments:True \
  -e HF_ENDPOINT=https://hf-mirror.com \
  -e VLLM_USE_V1=0 \
  -v $(pwd):/app \
  -v /141nfs/xxx/hf_models:/models \
  -w /app \
  --entrypoint bash \
  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/vllm/vllm-openai:latest \
  -c "python3 /app/vllm_plugin/scripts/start_server.py --model /models/VibeVoice-ASR"

Note that, to save the time of downloading the model, I had already downloaded VibeVoice-ASR to my local device and changed the corresponding loading code:

    # Step 3: Download model
    # model_path = download_model(args.model)
    model_path = args.model

</issue_description>

## Comments on the Issue (you are @copilot in this section)

<comments>
</comments>


Copilot AI and others added 2 commits January 29, 2026 02:22
Add try-except blocks to handle both old and new vLLM versions where AudioMediaIO may not exist or may have been moved. This fixes the AttributeError when using newer vLLM versions.

- Handle missing AudioMediaIO by creating standalone implementation
- Add fallback for utils module patching
- Maintain backward compatibility with older vLLM versions

Co-authored-by: donglixp <1070872+donglixp@users.noreply.github.com>
- Add warnings to inform users which compatibility mode is being used
- Handle both AttributeError and ImportError for better coverage
- Add __init__ method to inherited class for consistency
- Provide clear diagnostic messages when patching fails

Co-authored-by: donglixp <1070872+donglixp@users.noreply.github.com>
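The "fallback for utils module patching" mentioned in the first commit might look roughly like the sketch below; the module paths, the attribute name, and the _try_patch helper are assumptions for illustration (reusing the _PatchedAudioMediaIO class from the snippet in the description above), not the actual diff:

import importlib
import warnings

def _try_patch(module_name: str, attr: str, replacement) -> bool:
    # Replace module.attr with `replacement` if it exists; otherwise warn and skip.
    try:
        module = importlib.import_module(module_name)
        if not hasattr(module, attr):
            raise AttributeError(f"{module_name} has no attribute {attr!r}")
        setattr(module, attr, replacement)
        return True
    except (ImportError, AttributeError) as exc:
        warnings.warn(f"Skipping patch of {module_name}.{attr}: {exc}")
        return False

# Hypothetical call sites: patch whichever modules this vLLM version actually exposes.
_try_patch("vllm.multimodal.audio", "AudioMediaIO", _PatchedAudioMediaIO)
_try_patch("vllm.multimodal.utils", "AudioMediaIO", _PatchedAudioMediaIO)

Keeping each patch attempt independent means a missing module or attribute on newer vLLM versions only produces a warning instead of aborting the plugin import.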
Copilot AI changed the title from "[WIP] Fix attribute error in vLLM ASR deployment" to "Add vLLM version compatibility for AudioMediaIO changes" on Jan 29, 2026
Copilot AI requested a review from donglixp January 29, 2026 02:26