AI Command Line Interface With Standard Input, Standard Output, Standard Error, And Pipes?
I've been looking for an open source AI model command line interface that works with standard input, standard output, standard error, and pipes. It's okay if the model is running externally (for example Google Gemini), but a locally running model would be interesting too. Any hints for me? Thanks!
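For example, the kind of workflow I have in mind, where ai-cli is just a made-up placeholder name:

cat server-notes.txt | ai-cli "Summarize these notes." > summary.txt 2> errors.log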
I hope everyone gets the servers they want!
llama.cpp is probably where I'd start. No idea whether it supports pipes, etc., though.
Be sure to install the right CUDA/Vulkan/ROCm/whatever stack you need for your hardware, else it'll be slow.
There is a llama guide somewhere in my submission history.
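FWIW, the basic invocation looks roughly like this (binary and flag names as I remember them from the llama.cpp docs; the model path is whatever GGUF you downloaded):

./llama-cli -m ./models/llama-3-8b.gguf -p "Why is the sky blue?" -n 256

Since -f reads the prompt from a file, piping through /dev/stdin might cover the stdin part, but I haven't tried it:

echo "Why is the sky blue?" | ./llama-cli -m ./models/llama-3-8b.gguf -f /dev/stdin -n 256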
https://github.com/simonmysun/ell
Does this match your description?
youtube.com/watch?v=k1BneeJTDcU
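If I'm reading the README right, it's pipe-friendly out of the box; usage should be something like this (syntax from memory, so double-check against the repo):

cat error.log | ell "What went wrong here?"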
Not sure. Have to check. Thanks for the tip! I didn't know about ell.
I hope everyone gets the servers they want!
@Not_Oles Curious what you ended up doing, if you're open to sharing. I had the same question.
@huntercop
Haha, what I ended up doing is not so much... yet.
I looked at the simonmysun/ell repo and really liked what I saw in the README.md. I haven't looked at the code yet, and I haven't installed it yet. The new Chrome window I opened on August 2 still persists. Not that it matters, but it's one of eleven windows my Chromebook has open at the moment.
Part of the reason I haven't done much is that I had already installed, and have continued to use, eliben/gemini-cli, which works great with Google Gemini even though it doesn't have full support for standard I/O/E and pipes. Eli was kind enough to add a
$load <file path>
command when I emailed him about needing multi-line input for chats. Now that gemini-cli supports multi-line input via a file, it accomplishes what I have really needed so far. I still plan to try simonmysun/ell. And if anybody else can suggest another CLI interface to AI models with full support for standard I/O/E and pipes, that would be great. I'm looking forward to full CLI support and also to trying additional models.
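To make the $load workflow concrete: I write the multi-line prompt to a file from the shell, then load it at the gemini-cli prompt. Paths here are just examples:

cat > /tmp/prompt.txt <<'EOF'
First line of a multi-line question...
Second line...
EOF

Then, inside the gemini-cli chat:

$load /tmp/prompt.txt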
Thanks for asking! Please let us know what you end up doing and how it works. Thanks again!
I hope everyone gets the servers they want!
Oh, I wish I had the time right now; my time is already taken up by many other eggs in my daily basket.
Well, tbh I don't know if it will be useful to you, but ollama lets you run models locally on your machine through a CLI. Do check it out. 😀
Thanks @cainyxues!
Some links for the curious:
https://ollama.com/
https://github.com/ollama/ollama
From https://www.doprax.com/tutorial/a-step-by-step-guide-for-installing-and-running-ollama-and-openwebui-locally-part-1/ :
ollama
FWIW, in my quick check, I didn't see mention of standard input, output, error, and pipes.
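That said, from what I've gathered elsewhere, ollama run will read a prompt from a pipe when standard input isn't a terminal, so something like this should work (untested by me; llama3 is just an example model tag):

echo "Why is the sky blue?" | ollama run llama3 > answer.txt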
I hope everyone gets the servers they want!