Show HN: Talk to your Mac offline – sub-second Voice AI (Apple Silicon and MLX)

I wanted a voice assistant that feels realtime but runs completely offline.
This prototype uses MLX + FastAPI on Apple Silicon to hit sub-second latency for speech-to-speech conversations.
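Roughly, the loop looks like this (a simplified sketch, not the repo's exact code: the model names, the /talk route, and the omitted TTS step are placeholders for illustration):

```python
# Minimal speech-to-speech sketch on Apple Silicon: mlx-whisper for STT,
# mlx-lm for the reply, FastAPI for the endpoint. Model names are assumptions.
import tempfile

import mlx_whisper                 # pip install mlx-whisper
from mlx_lm import load, generate  # pip install mlx-lm
from fastapi import FastAPI, UploadFile
from fastapi.responses import JSONResponse

app = FastAPI()

# Load the chat model once at startup so each request only pays inference cost.
llm, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

@app.post("/talk")
async def talk(audio: UploadFile):
    # 1. Persist the uploaded audio so Whisper can read it from disk.
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
        f.write(await audio.read())
        path = f.name

    # 2. Speech-to-text with an MLX Whisper model.
    text = mlx_whisper.transcribe(path)["text"]

    # 3. Generate a short reply; a small max_tokens budget keeps latency down.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": text}],
        tokenize=False,
        add_generation_prompt=True,
    )
    reply = generate(llm, tokenizer, prompt=prompt, max_tokens=128)

    # 4. Text-to-speech would go here; returning text keeps the sketch self-contained.
    return JSONResponse({"transcript": text, "reply": reply})
```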

Repo: https://github.com/shubhdotai/offline-voice-ai

It’s fast, minimal, and hackable; I’d love feedback on latency tricks, model swaps, or use cases you’d like to see next.


Comments URL: https://news.ycombinator.com/item?id=45670364

Points: 1

# Comments: 1

Source: github.com
