This is a basic chat program created as a simple web application. It features a chat window for sending and receiving messages, and a settings page to configure API keys for different LLMs (Large Language Models): ChatGPT, Claude AI, and a Local LLM.
- Chat Window: A user interface for real-time messaging. Users can type messages and send them to the selected LLM.
- Settings Page: Accessible via a "Settings" button in the header. This page allows users to:
- Enter API keys for ChatGPT, Claude AI, and a Local LLM.
- Select which LLM to use for the chat from a dropdown menu.
- Save settings, including API keys and the selected LLM, to the browser's local storage.
- Return to the chat window.
- LLM API Integration:
- Implements API calls to ChatGPT, Claude, and a Local LLM based on user selection and API keys provided in the settings.
- Uses the correct API structure for ChatGPT and Claude, including API keys in headers and appropriate request bodies (a sketch of these calls appears after this feature list).
- Provides a basic structure for Local LLM API calls, which may need further adjustment based on the specific local LLM API.
- Settings Persistence:
- API keys and the selected LLM are saved in the browser's local storage. This allows users to close and reopen the application without re-entering their settings.
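
As a rough illustration of the API integration and settings persistence described above, the hosted-LLM calls follow the standard request structure for each service, with the keys read back from local storage. The function names, model identifiers (`gpt-3.5-turbo`, `claude-3-haiku-20240307`), and storage key names below are placeholders for this sketch and may not match `script.js` exactly.

```javascript
// Sketch only: read the saved settings back from local storage.
const settings = {
  chatgptApiKey: localStorage.getItem('chatgptApiKey'),
  claudeApiKey: localStorage.getItem('claudeApiKey'),
  selectedLLM: localStorage.getItem('selectedLLM'),
};

// ChatGPT: the API key goes in the Authorization header; the prompt goes in the messages array.
async function sendToChatGPT(message) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${settings.chatgptApiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: message }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Claude: the API key goes in the x-api-key header, plus a required version header.
async function sendToClaude(message) {
  const response = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': settings.claudeApiKey,
      'anthropic-version': '2023-06-01',
    },
    body: JSON.stringify({
      model: 'claude-3-haiku-20240307',
      max_tokens: 1024,
      messages: [{ role: 'user', content: message }],
    }),
  });
  const data = await response.json();
  return data.content[0].text;
}
```
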
- Open `index.html` in a web browser. Since this is a client-side web application, you can simply open the `index.html` file in any modern web browser to run it.
- Settings Page:
- Click the "Settings" button in the header to navigate to the settings page.
- On the settings page, you can:
- Enter your API keys in the respective input fields for ChatGPT, Claude AI, and Local LLM.
- Use the dropdown menu to select which LLM you want to use for the chat.
- Click "Save Settings" to save your settings. Settings are stored in your browser's local storage and will be loaded when you reopen the application.
- Click "Back to Chat" to return to the chat window.
- Chat Window:
- After setting up your API keys and selecting an LLM in the settings, return to the chat window.
- Type your message in the input field at the bottom.
- Click the "Send" button or press Enter to send your message.
- The chat window will display your message and the response from the selected LLM (if the API call is successful and the API key is valid).
- If an API key is missing for the selected LLM, an alert will be displayed, and an error message will appear in the chat window.
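
For reference, the settings flow described above can be as simple as writing the form values to `localStorage` when "Save Settings" is clicked and reading them back when the page loads. The element IDs and storage keys below are hypothetical and may differ from the ones used in `script.js`.

```javascript
// Sketch only: element IDs and storage keys are assumed names.
function saveSettings() {
  localStorage.setItem('chatgptApiKey', document.getElementById('chatgpt-key').value);
  localStorage.setItem('claudeApiKey', document.getElementById('claude-key').value);
  localStorage.setItem('localLLMApiKey', document.getElementById('local-llm-key').value);
  localStorage.setItem('selectedLLM', document.getElementById('llm-select').value);
}

// Called on page load so previously saved settings repopulate the form.
function loadSettings() {
  document.getElementById('chatgpt-key').value = localStorage.getItem('chatgptApiKey') || '';
  document.getElementById('claude-key').value = localStorage.getItem('claudeApiKey') || '';
  document.getElementById('local-llm-key').value = localStorage.getItem('localLLMApiKey') || '';
  document.getElementById('llm-select').value = localStorage.getItem('selectedLLM') || 'chatgpt';
}
```
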
- API Keys Required: To use the chat functionality, you need to provide valid API keys for the LLM you wish to use. These keys are stored locally in your browser's storage.
- API Endpoints: The application is configured to use the following API endpoints:
- ChatGPT: `https://api.openai.com/v1/chat/completions`
- Claude: `https://api.anthropic.com/v1/messages`
- Local LLM: `http://localhost:8080/api/chat` (This is an example and may need to be adjusted to your local LLM setup).
- Error Handling: Basic error handling is implemented for API calls. If there's an issue fetching a response from the LLM, an error message will be displayed in the chat window.
- Local LLM Configuration: For the "Local LLM" option, ensure that you have a local LLM API running at the specified endpoint (`http://localhost:8080/api/chat` by default) and that it accepts POST requests with a message in the request body. You may need to adjust the request body format in `script.js` to match your local LLM API's requirements (a sketch of one possible request appears after this list).
- Settings Purging: A "Purge Settings" button is available on the settings page. Clicking this button will clear the API keys and selected LLM from local storage.
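
For reference, a request to the local endpoint could look roughly like the sketch below. The body shape (`{ message: ... }`) and the `response` field read from the reply are assumptions about your local server, not a defined contract; adjust them to whatever API you are actually running.

```javascript
// Sketch only: the request and response shapes are assumed and must be
// adapted to the local LLM server you are running.
async function sendToLocalLLM(message) {
  const response = await fetch('http://localhost:8080/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }), // assumed request body shape
  });
  if (!response.ok) {
    throw new Error(`Local LLM request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.response; // assumed response field name
}
```
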
- `index.html`: The main HTML file that structures the web page, including the chat window and settings page.
- `style.css`: The CSS file that styles the appearance of the chat application.
- `script.js`: The JavaScript file that handles the interactivity of the chat application, including settings persistence, API calls to LLMs, and message display.
- `README.md`: This file, providing an overview and instructions for the Basic Chat Program.
This basic chat program can be extended in many ways, including:
- Enhanced UI/UX: Improve the user interface and user experience with features like message timestamps, user avatars, better styling, loading indicators during API calls, and more.
- Advanced Error Handling and User Feedback: Implement more robust error handling and provide more informative feedback to the user in case of API errors or other issues.
- Conversation History: Implement saving and display of conversation history.
- More LLM Options and Customization: Extend the settings page to support more LLM services, allow users to customize API parameters (like model selection or temperature), or add support for different API formats.
- Streaming Responses: Implement streaming responses from LLM APIs for a more interactive chat experience (a rough sketch is shown below).
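
As one possible starting point for streaming, the ChatGPT endpoint accepts `stream: true` and returns the reply as server-sent events that can be appended to the chat as they arrive. The sketch below uses deliberately simplified parsing (it assumes each chunk contains complete `data:` lines) and the same placeholder names as the earlier examples.

```javascript
// Sketch only: stream tokens from the ChatGPT endpoint and pass each one to a callback.
async function streamFromChatGPT(message, onToken) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${localStorage.getItem('chatgptApiKey')}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      stream: true,
      messages: [{ role: 'user', content: message }],
    }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each event line looks like: data: {"choices":[{"delta":{"content":"..."}}]}
    for (const line of decoder.decode(value).split('\n')) {
      if (!line.startsWith('data: ') || line.includes('[DONE]')) continue;
      const delta = JSON.parse(line.slice(6)).choices[0].delta;
      if (delta.content) onToken(delta.content);
    }
  }
}
```
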