Hi all! I wanted to share a local LLM playground I made called Apples2Oranges (https://github.com/bitlyte-ai/apples2oranges) that lets you compare models side by side (across different quantizations and model families), just like the OpenAI model playground or Google AI Studio. It also comes with hardware telemetry. And if you're data obsessed, you can use it as a normal inference GUI with all the visualizations.
It's built with Tauri + React + Rust. It is currently only compatible with Mac (all telemetry is designed to interface with macOS), but we will be adding Windows support.
It currently uses Rust bindings for llama.cpp (llama-cpp-rs); however, we are open to experimenting with different inference engines depending on what the community wants. It runs models sequentially, and you can set it to automatically wait for the hardware to cool down between runs for robust comparisons (see the sketch below).
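To illustrate the sequential-run-with-cooldown idea, here is a minimal Rust sketch. It is not the actual Apples2Oranges code: the helper functions (load_and_generate, read_soc_temperature_celsius), the model filenames, and the cooldown threshold are hypothetical placeholders standing in for the llama-cpp-rs inference call and the macOS telemetry read.

    use std::{thread, time::Duration};

    const COOLDOWN_TARGET_C: f32 = 45.0;

    fn read_soc_temperature_celsius() -> f32 {
        // Hypothetical: in the real app this would come from the macOS telemetry layer.
        42.0
    }

    fn load_and_generate(model_path: &str, prompt: &str) -> String {
        // Hypothetical: load the GGUF model via the llama.cpp bindings and
        // return the generated completion.
        format!("[{model_path}] completion for: {prompt}")
    }

    fn wait_for_cooldown() {
        // Poll until the chip is back below the target temperature,
        // so the second run isn't penalized by residual heat.
        while read_soc_temperature_celsius() > COOLDOWN_TARGET_C {
            thread::sleep(Duration::from_secs(5));
        }
    }

    fn main() {
        let prompt = "Summarize the trade-offs between Q4 and Q8 quantization.";
        let models = ["model-a-q4_k_m.gguf", "model-a-q8_0.gguf"];

        for (i, model) in models.iter().enumerate() {
            if i > 0 {
                wait_for_cooldown(); // level the playing field between runs
            }
            let output = load_and_generate(model, prompt);
            println!("--- {model} ---\n{output}\n");
        }
    }

The point of the cooldown wait is simply that back-to-back runs on the same chip would otherwise let thermal throttling skew the second model's numbers.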
It's a very early release, and there is much to do to make this better for the community, so we're welcoming all kinds of contributors. The current limitations are detailed on our GitHub.
Comments URL: https://news.ycombinator.com/item?id=45351351