Would I be able to run ggml models such as whisper.cpp or llama.cpp on a raspberry pi with a coral ai USB Accelerator?

This is a project I'm working on: making a kind of "Alexa", "Hey Google", or "Siri" for my workplace. I'm very new to AI and am looking forward to learning a lot. My initial idea was to chain several models together to create such a voice assistant: use whisper.cpp to transcribe audio, send the text to llama.cpp, and then use text-to-speech software to reply. I want to do this all on a Raspberry Pi 3 B2 (it's what I have available).
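The pipeline above could be glued together with a short script. Here is a minimal sketch in Python; the binary paths and model filenames are assumptions you'd adjust to your own builds (whisper.cpp and llama.cpp both ship a `main` CLI, and `espeak` is a lightweight offline TTS that runs on a Pi):

```python
# Sketch of the voice-assistant loop: recorded audio -> transcription -> LLM reply -> speech.
# Paths and model names below are placeholders, not a definitive setup.
import subprocess

WHISPER_BIN = "./whisper.cpp/main"        # assumed path to the whisper.cpp CLI
LLAMA_BIN = "./llama.cpp/main"            # assumed path to the llama.cpp CLI
WHISPER_MODEL = "models/ggml-tiny.en.bin" # a small model suited to a Pi
LLAMA_MODEL = "models/llama-model.gguf"   # placeholder model file

def whisper_cmd(wav_path):
    # whisper.cpp flags: -m model, -f input wav, -nt = no timestamps in output
    return [WHISPER_BIN, "-m", WHISPER_MODEL, "-f", wav_path, "-nt"]

def llama_cmd(prompt):
    # llama.cpp flags: -m model, -p prompt, -n = max tokens to generate
    return [LLAMA_BIN, "-m", LLAMA_MODEL, "-p", prompt, "-n", "64"]

def speak_cmd(text):
    # espeak reads its argument aloud; any offline TTS would do here
    return ["espeak", text]

def assistant_turn(wav_path):
    """One full turn: transcribe a recorded question, generate a reply, speak it."""
    text = subprocess.run(whisper_cmd(wav_path), capture_output=True,
                          text=True, check=True).stdout.strip()
    reply = subprocess.run(llama_cmd(text), capture_output=True,
                           text=True, check=True).stdout.strip()
    subprocess.run(speak_cmd(reply), check=True)
    return reply
```

Wake-word detection (the "Hey Google" part) isn't covered above and would need a separate, always-listening component in front of this loop.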

However, a Pi doesn't have the horsepower to run something like llama.cpp well, of course, so I've been considering something like the Coral USB Accelerator (https://coral.ai/products/accelerator). As I've been learning more about it, it seems to be geared specifically towards TensorFlow Lite models compiled for the Edge TPU. But whisper.cpp and llama.cpp use ggml models.

Here are my questions:

  1. Could the Coral USB Accelerator run ggml models and, if so, how?
  2. Is there a better system for creating a local (no 3rd-party API) at-home assistant?

Please let me know if there's something I could do better and what that is. I'd appreciate all sorts of advice. Thank you!

Links

  1. Coral USB Accelerator https://coral.ai/products/accelerator
  2. Whisper.cpp https://github.com/ggerganov/whisper.cpp
  3. Llama.cpp https://github.com/ggerganov/llama.cpp
submitted by /u/Effective_Muffin_700