LLM-Bot is a Discord bot built with discord.js that brings LLMs directly into your server. It can connect to multiple AI backends, including OpenAI, Ollama, and Llama.cpp, letting you chat with and query different models seamlessly.
When mentioned in a chat, LLM-Bot listens, processes your question, and replies using the selected LLM API. It's designed to be modular, easy to configure, and well suited to integrating AI-powered assistants into your Discord community.
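The mention-to-reply flow described above can be sketched in plain Node.js. This is an illustrative sketch only, not the bot's actual source: the helper names are invented here, and the payload assumes an OpenAI-compatible chat-completions backend.

```javascript
// Illustrative sketch of the mention-handling pipeline (not LLM-Bot's
// real code): strip the bot mention from the Discord message, then
// build an OpenAI-style chat completion request body.

// Remove Discord mention tokens like <@123> or <@!123> for the bot's ID.
function stripMention(content, botId) {
  return content.replace(new RegExp(`<@!?${botId}>`, "g"), "").trim();
}

// Build the JSON body for an OpenAI-compatible /v1/chat/completions call.
function buildChatRequest(question, { model, temperature, maxTokens, personality }) {
  return {
    model,
    temperature,
    max_tokens: maxTokens,
    messages: [
      { role: "system", content: personality }, // the configured "personality"
      { role: "user", content: question },
    ],
  };
}

const question = stripMention("<@123456789> What is Rust?", "123456789");
const body = buildChatRequest(question, {
  model: "llama3",
  temperature: 0.7,
  maxTokens: 512,
  personality: "You are a helpful assistant.",
});
console.log(question); // "What is Rust?"
```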
Moderator Commands:
- `/help` - Displays this message
- `/prompt set` - Set the current personality
- `/prompt get` - Get the current personality
- `/model set` - Set the current model
- `/model get` - Get the current model
- `/api url` - Change the current API URL
- `/api key` - Change the current API Key
- `/limit set` - Set the current token limit
- `/limit get` - Get the current token limit
- `/temperature set` - Set the current temperature
- `/temperature get` - Get the current temperature
- `/thinking` - Toggle display of LLM thinking sections
- `/memory set` - Set the current memory length
- `/memory get` - Get the current memory length
- `/memory toggle` - Toggle message history
- `/debug` - Toggle verbose log messages
- `/reset` - Restart the bot and revert to default settings
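For context, subcommand pairs like `/temperature set` and `/temperature get` map onto Discord's application-command JSON. The sketch below is illustrative only (the README does not show the bot's registration code); the option names are invented, and `1n << 40n` is Discord's documented bit for the Moderate Members permission.

```javascript
// Illustrative sketch (not the bot's actual source): the JSON shape a
// command like /temperature set|get would be registered with.
const MODERATE_MEMBERS = 1n << 40n; // Discord "Moderate Members" permission bit

const temperatureCommand = {
  name: "temperature",
  description: "Get or set the sampling temperature",
  // Restricts the command to moderators by default, as the README notes.
  default_member_permissions: MODERATE_MEMBERS.toString(),
  options: [
    {
      type: 1, // SUB_COMMAND
      name: "set",
      description: "Set the current temperature",
      options: [
        {
          type: 10, // NUMBER option
          name: "value",
          description: "Sampling temperature",
          required: true,
        },
      ],
    },
    {
      type: 1, // SUB_COMMAND
      name: "get",
      description: "Get the current temperature",
    },
  ],
};

console.log(JSON.stringify(temperatureCommand, null, 2));
```

Because `default_member_permissions` is only a default, server admins can still reassign access per command, which is what the note below refers to.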
Note
All of the configuration options above can (and need to) be defined in `.env`. By default these commands require the `ModerateMembers` permission as a safety measure; this can, however, be overridden in the Integrations tab of your guild.
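A `.env` file might look roughly like the fragment below. The variable names here are illustrative guesses, not the bot's actual keys; consult the `.env.example` shipped in the repository for the real ones.

```env
# Illustrative only -- see .env.example for the actual variable names
DISCORD_TOKEN=your-bot-token
API_URL=http://localhost:11434/v1
API_KEY=your-api-key
MODEL=llama3
TEMPERATURE=0.7
TOKEN_LIMIT=512
```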
- Clone the repository:

```shell
git clone --depth 1 https://github.com/Justus0405/LLM-Bot.git
```

- Navigate to the directory:

```shell
cd LLM-Bot
```

- Create a `.env` file from `.env.example`:

```shell
cp .env.example .env
nano .env
```

- Build and run with Docker:

```shell
docker-compose up -d --build
```

Dependencies: `docker`, `docker-compose`, `docker-buildx`
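For reference, `docker-compose up -d --build` reads a `docker-compose.yml` from the project root. A minimal sketch of what such a file could look like is shown below; the service name and settings are illustrative, so refer to the file shipped in the repository.

```yaml
# Illustrative sketch -- see the repository's docker-compose.yml
services:
  llm-bot:
    build: .          # build the image from the local Dockerfile
    env_file: .env    # load the configuration described above
    restart: unless-stopped
```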
Copyright © 2025-present Justus0405