Today we are excited to introduce Rubra, an open-source alternative to the OpenAI stack designed to run locally and simplify the development of AI assistants. Rubra combines the power of local computing with open-source LLMs and puts it in the hands of anyone interested in developing AI applications. With Rubra, you can create assistants tailored to your needs, equipped with tools such as web browsers and knowledge from uploaded files, making them versatile agents for a wide range of tasks.
Local AI development has been complex and painful.
Our journey with Rubra commenced with a deep appreciation for the capabilities of ChatGPT and the OpenAI platform. However, as we delved deeper, we recognized that there were many use cases for running models locally, especially around data privacy and cost. Moreover, the emergence of various open-source LLMs with expanding capabilities hinted at a transformative shift in the landscape. Projects like LocalAI and OpenGPTs paved the way for local model inferencing and GPT creation, yet none provided a holistic, high-quality experience that aligned with the work we were doing with OpenAI in the cloud. Specifically, we were looking for something that was faithful to the OpenAI APIs and supported the latest approaches to building assistants.
As we looked at building a local stack, one major obstacle we identified was the complexity of the LLM application stack, which comprises disparate components such as a separate embedding model for RAG, a vector database, and an LLM cache. This intricacy posed a significant challenge for anyone looking to build agents locally, particularly those new to language models. Additionally, for users interested in harnessing open-source models, the process often proved tedious and demanded deep domain expertise.
Rubra aims to simplify AI at the local level
Our goal with Rubra was to take a comprehensive approach to the local development of AI assistants. We built a familiar chat interface for seamless interaction with assistants and base LLMs. We also implemented the OpenAI Python and JavaScript SDKs, so that Rubra can be used as a drop-in replacement for OpenAI development. Our initial UI aims to capture the simplicity of ChatGPT while optimizing for the creation of assistants. Agents can be granted access to files and tools like web browsers and code interpreters, all running locally on the user's machine.
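Because Rubra speaks the same API, existing OpenAI SDK code can be pointed at it by swapping the client's base URL. Here is a minimal sketch using the official OpenAI Python SDK; the endpoint URL, API key, and model name below are placeholders for illustration, so check the Rubra docs for the actual values your installation exposes.

```python
# A minimal sketch of using the official OpenAI Python SDK against a local
# Rubra endpoint. base_url, api_key, and model are assumed placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local Rubra endpoint
    api_key="rubra-local",                # local servers typically accept any key
)

# Chat with the local model exactly as you would with OpenAI's hosted API.
response = client.chat.completions.create(
    model="rubra-local",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this project in one line."}],
)
print(response.choices[0].message.content)
```

The only change from cloud-based OpenAI code is the client configuration; the rest of your application logic stays the same.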
Rubra runs a fine-tuned, Mistral-based model as the LLM powering the platform. As time goes on, we anticipate updating our local model to take advantage of more powerful general-purpose models as they become available. Users can also leverage OpenAI and Anthropic models directly from Rubra to compare how their agents perform with both local and cloud-based models.
Rubra and Acorn
At Acorn we’re interested in building the tools and platforms that will run the next generation of cloud applications. Rubra is still early, but it continues our focus on creating open-source, simplified frameworks for application development. At the moment Rubra is a standalone project, being developed in parallel to our work on Acorn and Acorn Runtime, but we are incredibly excited about how these streams will come together in the future.
Learn more
You can see Rubra in action and learn more about our plans for the project on its website. You can also read our much more detailed blog post introducing Rubra. If you want to jump in and try it out, visit the quickstart guide in our docs.