vLLM Benchmarking

Integrates with: vLLM

Summary

Provides interactive benchmarking for vLLM through the Model Context Protocol. Lets users specify the endpoint, model, iteration count, and number of prompts for performance testing.
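To illustrate what such a benchmark run involves, here is a minimal sketch in Python. It is not this server's actual tool interface; it assumes a vLLM instance serving the OpenAI-compatible `/v1/completions` endpoint, and the function names (`time_completion`, `run_benchmark`, `summarize`) are hypothetical.

```python
import json
import time
import urllib.request
from statistics import mean

def time_completion(endpoint, model, prompt, max_tokens=64):
    """Send one completion request and return its wall-clock latency in seconds."""
    body = json.dumps(
        {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    ).encode()
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()  # drain the response so generation time is included
    return time.perf_counter() - start

def summarize(latencies):
    """Reduce per-request latencies to the summary stats a benchmark would report."""
    ordered = sorted(latencies)
    return {
        "requests": len(ordered),
        "mean_s": mean(ordered),
        "p95_s": ordered[max(0, int(0.95 * len(ordered)) - 1)],
        "max_s": ordered[-1],
    }

def run_benchmark(endpoint, model, iterations, prompts):
    """Run `iterations` passes over `prompts` and summarize the latencies."""
    latencies = [
        time_completion(endpoint, model, p)
        for _ in range(iterations)
        for p in prompts
    ]
    return summarize(latencies)

if __name__ == "__main__":
    # Hypothetical local run; adjust endpoint/model to your deployment.
    stats = run_benchmark(
        "http://localhost:8000/v1/completions",
        "meta-llama/Llama-3.1-8B-Instruct",
        iterations=3,
        prompts=["Hello, world.", "Explain KV caching briefly."],
    )
    print(stats)
```

Exposed through MCP, the same parameters (endpoint, model, iterations, prompts) would arrive as tool arguments rather than function arguments.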
