vLLM is a high-throughput, memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Mooncake, unsafe deserialization exposed directly over ZMQ/TCP on all network interfaces allows attackers to execute remote code on distributed hosts. This is a remote code execution vulnerability impacting any deployment that uses Mooncake to distribute the KV cache across distributed hosts. This vulnerability is fixed in 0.8.0.
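The vulnerability class can be sketched in a few lines. This is not Mooncake's actual code, just a minimal illustration of why deserializing untrusted bytes (e.g. Python `pickle` data received over a network socket) leads to code execution: `pickle`'s `__reduce__` hook lets a crafted payload make deserialization invoke an arbitrary callable.

```python
import pickle

class Exploit:
    """A crafted object whose serialized form triggers a callable on load."""
    def __reduce__(self):
        # An attacker would use something like os.system(...); eval of a
        # harmless expression stands in for arbitrary code here.
        return (eval, ("2 + 2",))

# Attacker serializes the payload and sends the bytes over the socket.
payload = pickle.dumps(Exploit())

# A vulnerable listener that calls pickle.loads on raw network bytes
# executes the attacker's callable during deserialization.
result = pickle.loads(payload)
print(result)  # the expression was evaluated on the receiving host
```

Because the endpoint was bound on all network interfaces, any host that could reach the ZMQ/TCP port could deliver such a payload.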
| Software | From | Fixed in |
|---|---|---|
| vllm / vllm | 0.6.5 | 0.8.0 |