If you want to deploy Xyne using your local machine, this document will give you a detailed guide to do so.
Follow the steps listed below to get started:
Remember to ensure that your Docker service is running. In case you’re using Docker Desktop, ensure that it is running too.
If you have a local Postgres instance running, we suggest you stop (kill) the process before starting Docker, to avoid a port conflict with the containerized database.
Run the application with the following command from the xyne folder:
docker-compose -f deployment/docker-compose.yml up
And that is all 🎉! The app will now be available on port 3001; open http://localhost:3001 to use Xyne.
Since the models being downloaded can be quite large, wait for the application to finish starting; this can take around 10 - 15 minutes,
depending on your internet connection.
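Rather than watching the logs, you can poll port 3001 until the app responds. The `wait_for_url` helper below is a sketch of ours, not part of Xyne; adjust the URL if you have changed the port:

```shell
# Sketch: poll an HTTP endpoint until it responds.
# wait_for_url is a hypothetical helper, not part of Xyne.
wait_for_url() {
  url="$1"
  retries="${2:-90}"     # ~15 minutes at the default 10s interval
  interval="${3:-10}"
  i=0
  until curl -sf "$url" > /dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      return 1           # gave up: the app never came up
    fi
    sleep "$interval"
  done
  return 0               # the endpoint answered
}

# Usage: wait_for_url http://localhost:3001 && echo "Xyne is up"
```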
You can also choose to follow the step-by-step guide mentioned above.
Inside the server folder of the xyne folder, you will find a .env.default file; this is the environment file that our Docker setup uses.
For the moment, it contains some default generated environment variables that we’ve set up for the app to work.
We strongly recommend generating your own ENCRYPTION_KEY, SERVICE_ACCOUNT_ENCRYPTION_KEY and JWT_SECRET values for security.
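One common way to generate such secrets is with `openssl`. This is a sketch: the assumption that 32 random bytes (64 hex characters) is an acceptable length for each key is ours, not something Xyne specifies:

```shell
# Generate random hex secrets with openssl.
# Assumption: 32 random bytes per key is sufficient; check Xyne's
# requirements if a specific key length is expected.
ENCRYPTION_KEY=$(openssl rand -hex 32)
SERVICE_ACCOUNT_ENCRYPTION_KEY=$(openssl rand -hex 32)
JWT_SECRET=$(openssl rand -hex 32)

# Print them so you can paste the values into server/.env.default
echo "ENCRYPTION_KEY=$ENCRYPTION_KEY"
echo "SERVICE_ACCOUNT_ENCRYPTION_KEY=$SERVICE_ACCOUNT_ENCRYPTION_KEY"
echo "JWT_SECRET=$JWT_SECRET"
```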
Due to our agentic RAG implementation, requests can exceed the maximum TPM (tokens-per-minute) limit of OpenAI’s GPT-4o model.
For the best experience, we recommend using AWS Bedrock or Claude, as they offer better performance and accuracy.
In the .env.default file, you can modify the following and replace the missing values with your own:
.env.default file
ENCRYPTION_KEY=<YOUR_ENCRYPTION_KEY>
SERVICE_ACCOUNT_ENCRYPTION_KEY=<YOUR_SERVICE_ACCOUNT_ENCRYPTION_KEY>
GOOGLE_CLIENT_ID=<YOUR_GOOGLE_CLIENT_ID>
GOOGLE_CLIENT_SECRET=<YOUR_GOOGLE_CLIENT_SECRET>
GOOGLE_REDIRECT_URI=http://localhost:3000/v1/auth/callback
GOOGLE_PROD_REDIRECT_URI=http://localhost:3001/v1/auth/callback
JWT_SECRET=<YOUR_JWT_SECRET>
DATABASE_HOST=xyne-db
VESPA_HOST=vespa

## If using AWS Bedrock
AWS_ACCESS_KEY=<YOUR_AWS_ACCESS_KEY>
AWS_SECRET_KEY=<YOUR_AWS_ACCESS_SECRET>
AWS_REGION=<YOUR_AWS_REGION>

## OR [ If using Open AI ]
OPENAI_API_KEY=<YOUR_OPEN_API_KEY>

## OR [ If using Ollama ]
OLLAMA_MODEL=<YOUR_OLLAMA_MODEL_NAME>

## OR [ If using Together AI ]
TOGETHER_API_KEY=<YOUR_TOGETHER_API_KEY>
TOGETHER_MODEL=<YOUR_TOGETHER_MODEL>
TOGETHER_FAST_MODEL=<YOUR_TOGETHER_FAST_MODEL>

## OR [ If using Fireworks AI ]
FIREWORKS_API_KEY=<YOUR_FIREWORKS_API_KEY>
FIREWORKS_MODEL=<YOUR_FIREWORKS_MODEL>
FIREWORKS_FAST_MODEL=<YOUR_FIREWORKS_FAST_MODEL>

## OR [ If using Google AI ]
GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
GEMINI_MODEL=<YOUR_GEMINI_MODEL_NAME>

## If you are using custom OpenAI or Together AI endpoints
BASE_URL=<YOUR_BASE_URL>

HOST=http://localhost:3001
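Before starting the containers, it can help to confirm no placeholder values remain in the file. The `check_env` helper below is a hypothetical sketch of ours (it simply greps for `<YOUR_` placeholders), not part of Xyne:

```shell
# Sketch: report whether an env file still contains <YOUR_...> placeholders.
# check_env is a hypothetical helper, not part of Xyne.
check_env() {
  if grep -q '<YOUR_' "$1"; then
    echo "unreplaced placeholders in $1"
    return 1
  fi
  echo "ok"
}

# Usage (from the xyne folder): check_env server/.env.default
```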
To use the chat feature of Xyne, you need at least one AI provider (AWS Bedrock, Ollama, OpenAI, Together AI or Fireworks AI). Missing keys will disable chat functionality.
You can check out the AI Providers section for better clarity.
Chat will be unavailable without a Service Account or OAuth Account connection.
Then build and start the containers again from the xyne folder using:
docker-compose -f deployment/docker-compose.yml up
Currently, the client-side .env variables refer to port 3001; if you’ve changed the port number, make sure to update those values in the .env as well.