Open Interpreter: How It Works, Supported LLMs & Getting Started

What Is Open Interpreter?

Open Interpreter is an open-source tool that lets large language models (LLMs) execute code on your local machine. It is used via a terminal interface and supports popular programming languages, including Python, JavaScript, and Bash.

With Open Interpreter, users can perform various tasks, such as conducting data analysis, controlling web browsers for research purposes, generating and editing photos, videos, and PDFs, as well as managing and analyzing large datasets.

One of the most important features of Open Interpreter is its ability to control a PC's graphical user interface (GUI), allowing for direct interaction with the desktop environment. Additionally, it now includes vision capabilities, enabling it to analyze and interpret images.
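
For example, the experimental OS mode drives the GUI directly. A minimal way to try it, based on the project's documentation (the extra install target and flag may change between versions):

```bash
# Install the optional OS-control dependencies, then launch OS mode
pip install 'open-interpreter[os]'
interpreter --os
```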

Open Interpreter is open source under the AGPL-3.0 license. It has over 50K stars and more than 100 contributors on GitHub. You can get Open Interpreter from the official GitHub repo.

This is part of a series of articles about code interpreters.

Open Interpreter vs OpenAI’s Code Interpreter

OpenAI’s Code Interpreter is a tool integrated with GPT-3.5 and GPT-4, which can perform various computational tasks using natural language. However, it operates in a hosted, closed-source environment with several restrictions, such as limited access to pre-installed packages and the absence of internet connectivity. Additionally, it has a runtime limit of 120 seconds and a file upload cap of 100MB, which can be limiting for extensive or complex tasks.

Open Interpreter offers a more flexible solution by running directly on your local machine. This local execution provides full access to the internet and allows the use of any package or library needed for your projects. With no restrictions on runtime or file size, Open Interpreter is well-suited for handling large datasets and lengthy computations. Its open-source nature ensures that you have complete control over the tool and your data. It also supports multiple large language models (LLMs) beyond those offered by OpenAI.

Which LLMs Can You Use with Open Interpreter?

Open Interpreter uses LiteLLM to connect the terminal interface or development environment to hosted language models. When running Open Interpreter, you can specify which LLM to use, like this:

```bash
interpreter --model gpt-4-turbo
interpreter --model claude-3
```

You can see a full list of supported LLMs in the LiteLLM documentation. The following LLM providers are supported:

| Provider | Special Comments |
|----------|------------------|
| OpenAI | Supports Chat + Embedding calls |
| OpenAI (Text Completion) | Supports text completion models |
| OpenAI-Compatible Endpoints | Allows models hosted behind an OpenAI proxy |
| Azure OpenAI | API Keys, Params |
| Azure AI Studio | Supports all models on Azure AI Studio |
| VertexAI | Supports Anthropic, Gemini, Model Garden |
| PaLM API - Google | |
| Gemini - Google AI Studio | |
| Anthropic | Supports all Anthropic models |
| AWS Sagemaker | Supports all Sagemaker Huggingface Jumpstart Models |
| AWS Bedrock | Supports Anthropic, Amazon Titan, AI21 LLMs |
| Mistral AI API | |
| Codestral API | Available in select code-completion plugins, direct query support |
| Cohere | API Keys |
| Anyscale | |
| Huggingface | Supports various Huggingface models |
| Databricks | Supports all models on Databricks |
| IBM watsonx.ai | Supports all IBM watsonx.ai foundational models and embeddings |
| Predibase | Supports all models on Predibase |
| Nvidia NIM | |
| Volcano Engine (Volcengine) | |
| Triton Inference Server | Supports Embedding Models on Triton Inference Servers |
| Ollama | Supports all models from Ollama |
| Perplexity AI (pplx-api) | |
| Groq | |
| Deepseek | |
| Fireworks AI | |
| Clarifai | Supports Anthropic, OpenAI, Mistral, Llama, and Gemini LLMs |
| VLLM | Supports all models on VLLM |
| Xinference (Xorbits Inference) | |
| Cloudflare Workers AI | |
| DeepInfra | |
| AI21 | Supports j2-light, j2-mid, and j2-ultra from AI21 |
| NLP Cloud | Supports all LLMs on NLP Cloud |
| Replicate | Supports all models on Replicate |
| Together AI | Supports all models on Together AI |
| Voyage AI | |
| Aleph Alpha | Supports all models from Aleph Alpha |
| Baseten | Supports any Text-Gen-Interface models on Baseten |
| OpenRouter | Supports all the text/chat/vision models from OpenRouter |
| Custom API Server (OpenAI Format) | Allows custom endpoint in OpenAI ChatCompletion format |
| Petals | |
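
Because model routing goes through LiteLLM, providers other than OpenAI are typically selected with a provider/model prefix. The model identifiers below are illustrative examples, not a guaranteed list; check your provider's documentation for current names:

```bash
# LiteLLM-style "provider/model" names (illustrative)
interpreter --model anthropic/claude-3-sonnet-20240229
interpreter --model ollama/llama3
interpreter --model groq/llama3-70b-8192
```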

Tutorial: Getting Started with Open Interpreter

Setup

To get started with Open Interpreter, you first need to install it via pip. Make sure you have Python 3.10 or 3.11 installed on your machine. You can check your Python version by running:

python --version

It is recommended to use a virtual environment to manage your dependencies. Once your environment is set up, you can install Open Interpreter with the following command:

pip install open-interpreter
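
If you follow the virtual environment recommendation, a minimal setup might look like this (the environment name oi-env is arbitrary):

```bash
# Create and activate an isolated environment, then install
python -m venv oi-env
source oi-env/bin/activate   # on Windows: oi-env\Scripts\activate
pip install open-interpreter
```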

Start a Chat

Open Interpreter operates similarly to ChatGPT but runs locally on your machine. To start a new interactive chat, open the terminal and run:

interpreter
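
The Python equivalent, assuming the package is installed as shown above, is a single call:

```python
from interpreter import interpreter

# Opens an interactive chat session in your terminal
interpreter.chat()
```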

In Python, Open Interpreter retains the conversation history within a session. To start a fresh chat, you can reset the conversation history:

interpreter.messages = []

Save and Restore Chats

Open Interpreter allows you to save and restore conversations, ensuring you can pick up where you left off. In the terminal, conversations are saved in the `<your application directory>/Open Interpreter/conversations/` folder. To resume a saved chat, use the following command and navigate with your arrow keys to select a conversation:

interpreter --conversations

In Python, you can save the chat messages to a list and restore them later:

messages = interpreter.chat("Create a Python function…")

To reset the interpreter:

interpreter.messages = []

To resume the chat from the messages list, use:

interpreter.messages = messages
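
Putting these pieces together, a save-and-restore round trip might look like the following sketch (the prompts are illustrative):

```python
from interpreter import interpreter

# Run a chat and keep its transcript
messages = interpreter.chat("Create a Python function that adds two numbers.")

# Start fresh, then restore the earlier conversation
interpreter.messages = []
interpreter.messages = messages

# Continue where you left off
interpreter.chat("Now add type hints to that function.")
```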

Customize System Messages

You can customize the system message to modify permissions or provide additional context. In the terminal, this is done by editing the configuration file as described in the documentation. In Python, you can adjust the system message directly:

```python
interpreter.system_message += """
Run shell commands with -y so the user doesn't have to confirm them.
"""
print(interpreter.system_message)
```

Change the Language Model

Open Interpreter uses LiteLLM to interface with different language models. You can switch models by setting the model parameter. In the terminal, use the following syntax (see the provider table above for the full list of supported models):

interpreter --model gpt-4-turbo

In Python, you can set the model directly on the interpreter object:

interpreter.llm.model = "gpt-4-turbo"

Run Code Using Open Interpreter

You can run code directly through Open Interpreter. The computing environment is separate from the interpreter’s core, allowing independent execution. Here’s how to run a simple Python command:

```python
from interpreter import interpreter

interpreter.computer.run("python", "print('Welcome!')")
```

You can also prepare the environment by defining functions, setting variables, or logging into services before executing code:

```python
# Prepare the environment before the chat begins
interpreter.computer.run("python", "import replicate\nreplicate.api_key='...'")
interpreter.custom_instructions = "Replicate is already imported."
interpreter.chat("Please create a new image with Replicate...")
```
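
The same interface can run other languages the interpreter supports. As a sketch, assuming shell execution is enabled in your environment:

```python
# Run a shell command through the same computer interface
interpreter.computer.run("shell", "echo 'hello from Open Interpreter'")
```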

Build LLM Applications with Acorn

To get started building your LLM applications, check out GPTScript, Acorn’s framework that allows LLMs to operate and interact with various systems using natural language prompts.