Frequently Asked Questions
This page covers specific frequently asked questions. A more general introduction to the project and its goals can be found here.
Questions about the project
How far along is this project?
This project has concluded. We have released supervised fine-tuning (SFT) models based on Llama 2, LLaMa, Falcon, Pythia, and StableLM, as well as models trained with reinforcement learning from human feedback (RLHF) and reward models, all of which are available here. In addition to our models, we have released three OpenAssistant Conversations datasets and a research paper.
Is a model ready to test yet?
Our online demonstration is no longer available, but the models remain available to download here.
Can I install Open Assistant locally and chat with it?
All of our models are available on HuggingFace and can be loaded via the HuggingFace Transformers library, or via other runners after conversion. With sufficient hardware, you can therefore run them locally. There are also Spaces on HuggingFace which can be used to chat with OpenAssistant candidate models without your own hardware. However, some of these models are not final and can produce poor or undesirable outputs.
LLaMa (v1) SFT models cannot be released directly due to Meta's license, but XOR weights are released on the HuggingFace org. Follow the process in the README there to obtain a full model from these XOR weights. Llama 2 models do not require this XOR process.
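As a minimal sketch of loading a model locally, assuming the transformers and accelerate packages are installed and using one of the released Pythia-based SFT checkpoints as an example:

```python
# A minimal sketch, not the project's official inference stack.
# Assumes: pip install transformers accelerate, plus enough GPU/CPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The OA Pythia models were trained on the <|prompter|>/<|assistant|> chat format.
prompt = "<|prompter|>What is a llama?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```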
What is the Docker command in the README for?
The docker compose
command in the README is for setting up the project for
local development on the website or data collection backend. It does not launch
an AI model or the inference server. There is likely no point in running the
inference setup and UI locally unless you wish to assist in development.
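For reference, a typical invocation from the repository root looks like the following; the ci profile name is taken from the repository README, and other profiles exist for narrower setups:

```sh
# Build and start the local development stack (website + data collection
# backend); this does not start any model inference.
docker compose --profile ci up --build --attach-dependencies
```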
What license does Open Assistant use?
All Open Assistant code is licensed under Apache 2.0. This means it is available for a wide range of uses including commercial use.
Open Assistant models are released under the license of their respective base models, be that Llama 2, Falcon, Pythia, or StableLM. LLaMa (v1) models are only released as XOR weights, meaning you will need the original LLaMa weights to use them.
The Open Assistant data is released under Apache 2.0, allowing a wide range of uses including commercial use.
Who is behind Open Assistant?
Open Assistant is a project organized by LAION and developed by a team of volunteers worldwide. You can see an incomplete list of developers on our website.
The project would not be possible without the many volunteers who have spent time contributing both to data collection and to the development process. Thank you to everyone who has taken part!
Will Open Assistant be free?
The model code, weights, and data are free. Our free public instance of our best models is no longer available due to the project's conclusion.
What hardware will be required to run the models?
The smallest released models have 7B parameters and are challenging to run on consumer hardware, but they can run on a single professional GPU or be quantized to run on more widely available hardware.
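As a sketch of what quantized loading can look like, assuming the transformers, accelerate, and bitsandbytes packages and a CUDA GPU (the checkpoint id is just an example):

```python
# 8-bit quantized loading via bitsandbytes; a sketch, not an official recipe.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"  # example checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # spread layers across the available devices
)
```

This roughly halves memory use compared to fp16 at a small quality cost; 4-bit loading (load_in_4bit=True) reduces it further.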
How can I contribute?
This project has now concluded.
What technologies are used?
The Python backend for the data collection app as well as for the inference backend uses FastAPI. The frontend is built with NextJS and TypeScript.
The ML codebase is largely PyTorch-based and uses HuggingFace Transformers as well as accelerate, DeepSpeed, bitsandbytes, NLTK, and other libraries.
Questions about the development process
Docker-Compose instead of Docker Compose
If you are using docker-compose instead of docker compose (note the space instead of the hyphen), you should update your Docker CLI to the latest version. docker compose is the newer Compose V2 plugin and should be used instead of the legacy docker-compose. For more details, check out this StackOverflow thread.
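You can check which version you have installed:

```sh
# Compose V2 ships as a Docker CLI plugin; this should print a v2.x version:
docker compose version
```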
Enable Docker's BuildKit Backend
BuildKit is Docker's new and improved builder backend. In addition to being faster and more efficient, it supports many new features, among which is the ability to provide a persistent cache, which outlives builds, to compilers and package managers. This is very useful to speed up consecutive builds, and is used by some container images of OpenAssistant's stack.
The BuildKit backend is used by default by Compose V2 (see above). But if you want to build an image with docker build instead of docker compose build, you might need to enable BuildKit. To do so, add DOCKER_BUILDKIT=1 to your environment. For instance:
export DOCKER_BUILDKIT=1
You could also, more conveniently, enable BuildKit by default, or use Docker Buildx.
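For example, on Linux the daemon can be told to use BuildKit by default. This is a sketch; if /etc/docker/daemon.json already exists, merge the key into it instead of overwriting the file:

```sh
# Enable BuildKit by default for the docker daemon (overwrites daemon.json!):
echo '{ "features": { "buildkit": true } }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```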
Pre-commit
We use pre-commit to ensure code quality and a consistent code style. To set it up, follow these steps:
# install the pre-commit Python package
pip3 install pre-commit
# install pre-commit to the Git repo to run automatically on commit
pre-commit install
From now on, each commit will run the pre-commit hooks on the staged files. Most formatting issues are resolved automatically by the hooks, so the affected files can simply be re-staged and committed. Some issues may require manual resolution.
If you wish to run pre-commit on all files, not just the currently staged ones, you can use pre-commit run --all-files.
Docker Cannot Start Container: Permission Denied
Instead of always running Docker as root, you can create a docker group and grant it the necessary permissions:
# Create the docker group
sudo groupadd docker
# Add the current user to the group
sudo usermod -aG docker $USER
# Apply the group change to the current terminal session
newgrp docker
After that, you should be able to run Docker without root, e.g. docker run hello-world. If you still cannot, restart your terminal session or reboot the machine:
reboot
Docker Cannot Stop Container
If you try to shut down the services (docker compose down) and you get a permission denied error (even as the root user), you can try the following:
# Restart docker daemon
sudo systemctl restart docker.socket docker.service
# And remove the container
docker rm -f <container id>
Docker Port Problems
Oftentimes people already have a Postgres instance running on the dev machine. To avoid port conflicts, change the ports in the docker-compose.yml to ones excluding 5433, like:
- Change `db.ports` to `- 5431:5431`.
- Add `POSTGRES_PORT: 5431` to `db.environment`.
- Change `webdb.ports` to `- 5432:5431`.
- Add `POSTGRES_PORT: 5431` to `webdb.environment`.
- Add `- POSTGRES_PORT=5432` to `backend.environment`.
- Change `web.environment.DATABASE_URL` to `postgres://postgres:postgres@webdb:5432/oasst_web`.
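A sketch of the resulting docker-compose.yml fragments, with all unrelated keys omitted (service names taken from the list above):

```yaml
# Sketch only: unrelated keys omitted, service names as referenced above.
services:
  db:
    ports:
      - 5431:5431
    environment:
      POSTGRES_PORT: 5431
  webdb:
    ports:
      - 5432:5431
    environment:
      POSTGRES_PORT: 5431
  backend:
    environment:
      - POSTGRES_PORT=5432
  web:
    environment:
      DATABASE_URL: postgres://postgres:postgres@webdb:5432/oasst_web
```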