
sport‑evo‑summary

PyPI version License: MIT Downloads LinkedIn

sport‑evo‑summary is a lightweight Python package that transforms unstructured text about sports evolution into a standardized, structured summary.
It pulls out key aspects such as rules, equipment, culture, and performance metrics, enabling sports analysts, journalists, and enthusiasts to quickly identify and compare the most significant shifts in a sport’s landscape.


Installation

pip install sport_evo_summary

Quick Start

from sport_evo_summary import sport_evo_summary

# Your raw text about a sport’s evolution
user_input = """
Over the last decade, soccer has introduced VAR to reduce errors in officiating.
Players now use lighter, aerodynamic boots which enhance speed,
and the game’s pace has increased by an average of 15% compared to ten years ago.
"""

# Call the function (uses the default LLM7 if no `llm` or `api_key` provided)
summary = sport_evo_summary(user_input)

print(summary)
# Example output: ["RULES: VAR implementation", "EQUIPMENT: Aerodynamic boots", "PERFORMANCE: 15% faster pace"]
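
Since the result is a plain List[str] following a "CATEGORY: detail" convention (as in the example output above), you can post-process it however you like. Below is a minimal sketch of grouping entries by category; the helper name group_by_category is illustrative and not part of the package.

from collections import defaultdict

def group_by_category(entries):
    """Group "CATEGORY: detail" strings into a dict mapping category to details."""
    grouped = defaultdict(list)
    for entry in entries:
        category, _, detail = entry.partition(":")
        grouped[category.strip()].append(detail.strip())
    return dict(grouped)

print(group_by_category(summary))
# {'RULES': ['VAR implementation'], 'EQUIPMENT': ['Aerodynamic boots'], 'PERFORMANCE': ['15% faster pace']}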

Advanced Usage

Using Your Own LLM

sport_evo_summary accepts a BaseChatModel instance from LangChain.
This lets you plug in any LLM you prefer, such as OpenAI, Anthropic, or Google.

OpenAI

from langchain_openai import ChatOpenAI
from sport_evo_summary import sport_evo_summary

llm = ChatOpenAI()
response = sport_evo_summary(user_input, llm=llm)

Anthropic

from langchain_anthropic import ChatAnthropic
from sport_evo_summary import sport_evo_summary

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # ChatAnthropic requires a model name; use any Claude model you have access to
response = sport_evo_summary(user_input, llm=llm)

Google Generative AI

from langchain_google_genai import ChatGoogleGenerativeAI
from sport_evo_summary import sport_evo_summary

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # ChatGoogleGenerativeAI requires a model name; use any Gemini model you have access to
response = sport_evo_summary(user_input, llm=llm)

Using a Custom LLM7 API Key

By default, the package uses ChatLLM7 from the langchain_llm7 module, which is subject to the free tier's rate limits.
If you need higher limits, pass your own key via the api_key argument or set the LLM7_API_KEY environment variable.

# via argument
response = sport_evo_summary(user_input, api_key="YOUR_API_KEY")

# or via environment variable
import os
os.environ["LLM7_API_KEY"] = "YOUR_API_KEY"
response = sport_evo_summary(user_input)

You can get a free key at https://token.llm7.io/


Function Signature

sport_evo_summary(
    user_input: str,
    api_key: Optional[str] = None,
    llm: Optional[BaseChatModel] = None
) -> List[str]
Parameter    Type                      Description
user_input   str                       Raw text describing changes in a sport.
llm          Optional[BaseChatModel]   Your own LangChain LLM instance. If omitted, the default ChatLLM7 is used.
api_key      Optional[str]             LLM7 API key for authentication. If not passed, the LLM7_API_KEY environment variable is used.
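
Putting the parameters together (user_input comes from the Quick Start above; the API key value is a placeholder):

from typing import List
from langchain_openai import ChatOpenAI
from sport_evo_summary import sport_evo_summary

# Explicit llm: the default ChatLLM7 is not used, so an LLM7 key is typically not needed.
summary: List[str] = sport_evo_summary(user_input, llm=ChatOpenAI())

# No llm: the default ChatLLM7 is used, optionally with your own key.
summary = sport_evo_summary(user_input, api_key="YOUR_API_KEY")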

Internals

  • Uses the llmatch helper to filter the LLM’s response against a pre‑defined regex pattern imported from .prompts (see the illustrative sketch after this list).
  • The system_prompt and human_prompt guide the model to produce a concise, structured list.
  • Returns a List[str] of extracted data entries.
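
The actual regex and prompts live in the package’s .prompts module; the sketch below only illustrates the filtering idea, and the pattern and function name are made up for the example rather than taken from the real internals.

import re

# Illustrative pattern: keep lines shaped like "RULES: VAR implementation".
ENTRY_PATTERN = re.compile(r"^[A-Z]+:\s.+$", re.MULTILINE)

def extract_entries(llm_response_text: str) -> list[str]:
    """Return only the lines of the raw LLM response that match the expected shape."""
    return ENTRY_PATTERN.findall(llm_response_text)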

Contribution & Issues

If you encounter bugs or have feature requests, please open an issue on GitHub:
https://github.com/chigwell/sport-evo-summary/issues


Author


Happy analyzing! 🚀
