# autodeploydocker

**autodeploydocker** is a tiny Python package that automates the deployment of Docker and Docker Compose applications to remote servers.
It drives a language model (LLM) to generate the exact deployment steps, aiming for zero-downtime deployments without manual configuration changes.
## Installation

```bash
pip install autodeploydocker
```

## Quick start

```python
from autodeploydocker import autodeploydocker

# Minimal usage – the package will create a ChatLLM7 instance for you.
response = autodeploydocker(
    user_input="Deploy the latest version of my web-app using Docker Compose on server X."
)
print(response)  # -> list of strings extracted from the LLM response
```

## API

```python
from typing import List, Optional

from langchain_core.language_models import BaseChatModel

def autodeploydocker(
    user_input: str,
    api_key: Optional[str] = None,
    llm: Optional[BaseChatModel] = None,
) -> List[str]:
    ...
```

| Parameter | Type | Description |
|---|---|---|
| `user_input` | `str` | The natural-language description of the deployment you want to perform. |
| `api_key` | `Optional[str]` | API key for the default ChatLLM7 backend. If omitted, the function reads `LLM7_API_KEY` from the environment. |
| `llm` | `Optional[BaseChatModel]` | A LangChain-compatible LLM instance. If provided, it overrides the default ChatLLM7. |
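To illustrate how the returned `List[str]` might be consumed, here is a short sketch; the step strings are invented (real output depends on the LLM and your prompt), and actually executing generated commands is at your own risk:

```python
import subprocess  # only needed if you uncomment the execution line

# Invented example of what autodeploydocker might return.
steps = [
    "docker compose pull",
    "docker compose up -d",
]

for step in steps:
    print(f"would run: {step}")
    # subprocess.run(step, shell=True, check=True)  # uncomment to execute
```

Reviewing the generated steps before executing them is strongly recommended.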
## How it works

The function builds a system prompt (`system_prompt`) and a human prompt (`human_prompt`) and sends them to the selected LLM.
The LLM's output is then validated against a regular-expression pattern defined in `prompts.pattern`.
If the output matches, the extracted data (a `List[str]`) is returned; otherwise a `RuntimeError` is raised.
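The validation step can be pictured with a minimal sketch. The pattern below is hypothetical (the real one lives in `prompts.pattern`), assuming the LLM is asked to emit one numbered step per line:

```python
import re

# Hypothetical stand-in for prompts.pattern: one numbered step per line.
PATTERN = re.compile(r"^\d+\.\s+(.*)$", re.MULTILINE)

def extract_steps(llm_output: str) -> list[str]:
    """Return the captured steps, or raise if the output doesn't match."""
    steps = PATTERN.findall(llm_output)
    if not steps:
        raise RuntimeError("LLM output did not match the expected pattern")
    return steps

raw = """1. ssh deploy@server-x
2. docker compose pull
3. docker compose up -d --remove-orphans"""
print(extract_steps(raw))
# -> ['ssh deploy@server-x', 'docker compose pull', 'docker compose up -d --remove-orphans']
```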
## Using a custom LLM

You can safely supply any LangChain LLM that follows the `BaseChatModel` interface.
### OpenAI

```python
from langchain_openai import ChatOpenAI
from autodeploydocker import autodeploydocker

llm = ChatOpenAI(model="gpt-4o")  # configure as you need
response = autodeploydocker(
    user_input="Deploy the staging environment with Docker Compose.",
    llm=llm,
)
```

### Anthropic

```python
from langchain_anthropic import ChatAnthropic
from autodeploydocker import autodeploydocker

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
response = autodeploydocker(
    user_input="Roll out a new version of the API service.",
    llm=llm,
)
```

### Google Gemini

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from autodeploydocker import autodeploydocker

llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")
response = autodeploydocker(
    user_input="Update the production stack using Docker Compose.",
    llm=llm,
)
```

## Default backend

If you do not pass an `llm` instance, autodeploydocker falls back to `ChatLLM7` from the `langchain_llm7` package:
```bash
pip install langchain_llm7
```
ChatLLM7 works out‑of‑the‑box with a free tier that is sufficient for most use cases.
To use a personal key, set the environment variable LLM7_API_KEY or pass the key directly:
```python
response = autodeploydocker(
    user_input="Deploy …",
    api_key="my-llm7-key",
)
```

You can obtain a free API key by registering at https://token.llm7.io/.
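Alternatively, set the key in the environment before calling the function; `autodeploydocker` reads `LLM7_API_KEY` when `api_key` is omitted. A minimal sketch (the key value is a placeholder):

```python
import os

# Equivalent to passing api_key explicitly: autodeploydocker falls back
# to the LLM7_API_KEY environment variable when api_key is None.
os.environ["LLM7_API_KEY"] = "my-llm7-key"
print(os.environ["LLM7_API_KEY"])  # -> my-llm7-key
```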
## Contributing

If you encounter any issues, have a feature request, or want to contribute, please open an issue on GitHub:
https://github....
## Author

Eugene Evstafev – chigwell
✉️ Email: hi@euegne.plus

## License

This project is licensed under the MIT License.