Conflict Zone Extractor

PyPI version License: MIT Downloads LinkedIn

A Python package that processes text input describing volunteer activities in conflict zones, such as reports of volunteers physically intervening between Israeli settlers and Palestinian villages. It uses an LLM to extract structured information (location, number of volunteers, actions taken, and outcomes) and validates the output through pattern matching, keeping it consistently formatted for use in humanitarian or journalistic contexts.
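The README does not pin down the exact output schema. Purely as an illustration, assuming the extractor returns a dict-like structure with the fields named above, a response might resemble the following (field names and values here are hypothetical, not the package's documented schema):

```python
# Illustrative only: these keys mirror the fields described above
# (location, number of volunteers, actions taken, outcomes), but the
# real schema returned by conflict_zone_extractor may differ.
example_response = {
    "location": "West Bank",
    "volunteers": 15,
    "actions": ["physical intervention between settlers and villagers"],
    "outcome": "situation de-escalated",
}

# Because the output is validated, downstream code could rely on the
# expected keys being present before using them.
for field in ("location", "volunteers", "actions", "outcome"):
    assert field in example_response
```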

Installation

pip install conflict_zone_extractor

Usage

from conflict_zone_extractor import conflict_zone_extractor

response = conflict_zone_extractor(
    user_input="Volunteers intervened in the West Bank today, with 15 people present.",
    api_key="your_api_key"  # Optional; if omitted, the package falls back to the default LLM7 access
)
print(response)

Using a Custom LLM

You can use any LLM compatible with LangChain. Here are examples with different LLMs:

OpenAI

from langchain_openai import ChatOpenAI
from conflict_zone_extractor import conflict_zone_extractor

llm = ChatOpenAI()
response = conflict_zone_extractor(
    user_input="Volunteers intervened in the West Bank today, with 15 people present.",
    llm=llm
)
print(response)

Anthropic

from langchain_anthropic import ChatAnthropic
from conflict_zone_extractor import conflict_zone_extractor

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # ChatAnthropic requires a model name
response = conflict_zone_extractor(
    user_input="Volunteers intervened in the West Bank today, with 15 people present.",
    llm=llm
)
print(response)

Google

from langchain_google_genai import ChatGoogleGenerativeAI
from conflict_zone_extractor import conflict_zone_extractor

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # ChatGoogleGenerativeAI requires a model name
response = conflict_zone_extractor(
    user_input="Volunteers intervened in the West Bank today, with 15 people present.",
    llm=llm
)
print(response)

Parameters

  • user_input (str): The user input text to process.
  • llm (Optional[BaseChatModel]): The LangChain LLM instance to use. If not provided, the default ChatLLM7 will be used.
  • api_key (Optional[str]): The API key for LLM7. If not provided, the package will use the default LLM7 or the API key from the environment variable LLM7_API_KEY.
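The lookup order described above (an explicit llm instance, then an explicit api_key, then the LLM7_API_KEY environment variable, then the keyless default) can be sketched as a small helper. resolve_api_key below is a hypothetical illustration of that precedence, not a function exported by the package:

```python
import os
from typing import Optional


def resolve_api_key(api_key: Optional[str] = None) -> Optional[str]:
    # Hypothetical helper mirroring the documented lookup order:
    # an explicit api_key argument wins, then the LLM7_API_KEY
    # environment variable; None means the default (keyless) LLM7
    # access is used.
    if api_key:
        return api_key
    return os.environ.get("LLM7_API_KEY")
```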

Default LLM

By default, the package uses ChatLLM7 from langchain_llm7. You can safely pass your own LLM instance if you want to use another LLM.

Rate Limits

The rate limits of the LLM7 free tier are sufficient for most uses of this package. For higher limits, pass your own API key via the LLM7_API_KEY environment variable or directly via the api_key parameter. You can get a free API key by registering at LLM7.
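For example, the key can be set once for the whole process via the environment instead of being passed to every call (the key value below is a placeholder):

```python
import os

# Set the LLM7 API key process-wide; conflict_zone_extractor will pick
# it up when neither llm nor api_key is passed to a call.
os.environ["LLM7_API_KEY"] = "your_api_key"
```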

Issues

If you encounter any issues, please report them on the GitHub issues page.

Author