Build LocalAI from source

Building LocalAI from source lets you compile LocalAI yourself, which is useful for custom configurations, development, or when you need specific build options.

For complete build instructions, see the Build from Source documentation in the Installation section.

On macOS you may need to install some prerequisites using Homebrew (brew).

The steps below have been tested by one macOS user and found to work. Note that this does not use Docker to run the server:

Install Xcode from the App Store (needed for MetalKit)

brew install abseil cmake go grpc protobuf wget protoc-gen-go protoc-gen-go-grpc
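If you have multiple toolchains installed, make sure the Homebrew-installed tools are the ones picked up first; a minimal sketch, assuming a default Homebrew setup (adjust for your shell):

export PATH="$(brew --prefix)/bin:$PATH"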

git clone https://github.com/go-skynet/LocalAI.git

cd LocalAI

make build
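If the build succeeds, a local-ai binary is produced in the repository root. As a quick sanity check (assuming the standard --help flag):

./local-ai --help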

wget https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q2_K.gguf -O models/phi-2.Q2_K
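A quick check that the model file landed where LocalAI expects it:

ls -lh models/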

cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/phi-2.Q2_K.tmpl
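The .tmpl file is a Go text/template that LocalAI uses to wrap the prompt before it reaches the model. As an illustrative (not verbatim) example, such a template can look like:

### Prompt:
{{.Input}}
### Response: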

./local-ai backends install llama-cpp

./local-ai --models-path=./models/ --debug=true

curl http://localhost:8080/v1/models
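If the server is up, this returns the available models as JSON. The exact fields can vary between versions, but the shape follows the OpenAI API, roughly:

{"object":"list","data":[{"id":"phi-2.Q2_K","object":"model"}]}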

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "phi-2.Q2_K",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'
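Since the API is OpenAI-compatible, you can also ask for a streamed response; a minimal variant of the call above, assuming the standard stream parameter:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "phi-2.Q2_K",
     "messages": [{"role": "user", "content": "How are you?"}],
     "stream": true
   }'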

Troubleshooting on macOS

If the build fails, check that the active Xcode developer directory points at the full Xcode installation rather than the Command Line Tools, then reinstall the build dependencies:

xcode-select --print-path

sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
brew reinstall go grpc protobuf wget

Then rebuild from a clean state:

make clean

make build
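If problems persist, confirm that the Homebrew-installed tools are the ones on your PATH and that their versions are recent:

which go protoc cmake

go version
protoc --version
cmake --version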

Build backends

LocalAI has several backends available for installation in the backend gallery. The backends can also be built from source. As backends vary in the languages and dependencies they require, this documentation provides generic guidance for a few of the backends, which can be applied with slight modifications to the others.

Manually

Typically each backend includes a Makefile that allows you to build and package the backend.

For instance, in the LocalAI repository you can build bark-cpp by running:

git clone https://github.com/go-skynet/LocalAI.git

make -C LocalAI/backend/go/bark-cpp build package

Python-based backends follow the same pattern; for example, to build the vllm backend:

make -C LocalAI/backend/python/vllm

With Docker

Building with Docker is simpler, as it abstracts away all the requirements and focuses on building the final OCI images that are available in the gallery. This also allows you, for instance, to build a backend locally and install it with LocalAI. You can refer to Backends for general guidance on how to install and develop backends.

In the LocalAI repository, you can build bark-cpp by running:

git clone https://github.com/go-skynet/LocalAI.git

make docker-build-bark-cpp

Note that make is used only for convenience; in reality it just runs a simple docker command such as:

docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:bark-cpp -f LocalAI/backend/Dockerfile.golang --build-arg BACKEND=bark-cpp .
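Once the image is built, you can load it into LocalAI. A sketch, assuming your LocalAI version supports installing backends from a local OCI tarball via the ocifile:// scheme (see the Backends documentation):

docker save -o bark-cpp.tar local-ai-backend:bark-cpp
./local-ai backends install "ocifile://$PWD/bark-cpp.tar"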
