🎉 Efficiency Resource Collection
12 hours ago
#AI #MCP Multi-model visual understanding MCP server. Supports GLM-4.6V, DeepSeek-OCR (free), and Qwen3-VL-Flash. Provides visual processing capabilities for AI coding models that lack image understanding.
https://github.com/JochenYang/luma-mcp
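As a sketch of how an MCP server like this is typically registered with a coding agent, the JSON below follows the common `mcpServers` config convention. The package name, launch command, and environment variable are assumptions for illustration, not taken from the repo; check its README for the actual setup.

```json
{
  "mcpServers": {
    "luma-mcp": {
      "command": "npx",
      "args": ["-y", "luma-mcp"],
      "env": {
        "GLM_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Once registered, a text-only coding model can delegate screenshots or diagrams to the server's vision models through MCP tool calls.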