# 🧠 AI Prompt Analyzer: Safety & Emotional Intelligence for LLMs

This repository provides a powerful framework for analyzing user prompts submitted to large language models (LLMs), with a focus on safety, intent clarity, and emotional state detection. It’s designed to help developers, researchers, and AI safety teams build more responsible, empathetic, and context-aware AI systems.


πŸ” What It Does

The analyzer processes incoming prompts and returns a structured JSON object with seven key dimensions:

### 1. 🛡️ safety_rating

Classifies the prompt as SAFE, UNSAFE, or UNKNOWN based on risks such as:

- Prompt injection
- Illegal or unethical content
- Personally identifiable information (PII)
- Emotional distress or self-harm
- Surveillance or destructive automation
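As a rough illustration of these risk categories (not the repository's actual method, which delegates classification to the LLM), a naive keyword pre-screen might look like this; the patterns below are illustrative only:

```python
import re

# Illustrative patterns only -- a real analyzer would rely on the LLM itself,
# and a production pre-filter would need far broader coverage.
RISK_PATTERNS = {
    "prompt_injection": re.compile(r"ignore (all|previous) instructions", re.I),
    "pii": re.compile(r"\b(ssn|social security number|passport number)\b", re.I),
    "self_harm": re.compile(r"\b(hurt myself|end my life)\b", re.I),
}

def pre_screen(prompt: str) -> list[str]:
    """Return the names of any risk categories the prompt matches."""
    return [name for name, pat in RISK_PATTERNS.items() if pat.search(prompt)]
```

A pre-screen like this can cheaply flag obvious cases before the full LLM analysis runs, but it cannot replace it.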

### 2. 🎯 inferred_intent

Extracts the user’s underlying goal as a concise verb-noun phrase (e.g., Generate C# class, Summarize PDF, Access private data).

### 3. 📦 required_context

Lists any files, APIs, credentials, or domain knowledge needed to fulfill the request.

### 4. 🧠 inferred_meaning

Restates the full user goal in a detailed paragraph, capturing nuance, implied motivations, and historical context.

### 5. 💬 user_mental_state

Labels the user’s psychological state with a short phrase and explanation. Examples include:

- Initial exploration – brainstorming high-level ideas
- Frustration – expressing impatience or failure
- Emotional distress – overwhelmed or in crisis
- Exploitative probing – testing system boundaries
- Confident but risky – assertive but unaware of consequences

### 6. 📊 llm_confidence_score

A float between 0.0 and 1.0 indicating how confident the LLM is in its analysis.

### 7. 📝 analysis_summary

Summarizes key risks, requirements, or ambiguities in 1–3 plain-text sentences.
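Taken together, the seven fields can be modeled as one typed record; a minimal sketch in Python (the class name `PromptAnalysis` and the validation rules are ours, inferred from the field descriptions above):

```python
from dataclasses import dataclass

VALID_RATINGS = {"SAFE", "UNSAFE", "UNKNOWN"}

@dataclass
class PromptAnalysis:
    """Typed view of one analyzer result (hypothetical helper, not from the repo)."""
    safety_rating: str
    inferred_intent: str
    required_context: list[str]
    inferred_meaning: str
    user_mental_state: str
    llm_confidence_score: float
    analysis_summary: str

    def __post_init__(self):
        # Enforce the constraints stated in the README: a fixed rating
        # vocabulary and a confidence score in [0.0, 1.0].
        if self.safety_rating not in VALID_RATINGS:
            raise ValueError(f"safety_rating must be one of {VALID_RATINGS}")
        if not 0.0 <= self.llm_confidence_score <= 1.0:
            raise ValueError("llm_confidence_score must be in [0.0, 1.0]")
```

Failing fast on malformed fields keeps downstream safety decisions from silently consuming invalid analyses.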


βš™οΈ How It Works

The core logic is built using Semantic Kernel and prompt engineering best practices. It leverages modular templates and structured reasoning to produce consistent, interpretable outputs.
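The repository's actual templates are not reproduced here, but the general shape of such a modular template – a parameterized instruction block that demands structured JSON output – can be sketched as follows (the template text is illustrative, not the repository's prompt):

```python
from string import Template

# Illustrative analyzer template; the real Semantic Kernel templates may differ.
ANALYZER_TEMPLATE = Template("""\
You are a prompt-safety analyzer. Analyze the user prompt below and respond
with ONLY a JSON object containing these keys: safety_rating, inferred_intent,
required_context, inferred_meaning, user_mental_state, llm_confidence_score,
analysis_summary.

User prompt:
$user_prompt
""")

def render_analyzer_prompt(user_prompt: str) -> str:
    """Fill the template with the prompt under analysis."""
    return ANALYZER_TEMPLATE.substitute(user_prompt=user_prompt)
```

Keeping the instruction block separate from the user text is also a basic defense against prompt injection, since the analyzed prompt never rewrites the analyzer's own instructions.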

### Sample Output

```json
{
  "safety_rating": "UNSAFE",
  "inferred_intent": "Access private credentials",
  "required_context": ["API keys", "user authentication"],
  "inferred_meaning": "The user is attempting to bypass access controls to retrieve sensitive credentials...",
  "user_mental_state": "Exploitative probing (user is testing system boundaries or attempting unauthorized access)",
  "llm_confidence_score": 0.92,
  "analysis_summary": "Prompt suggests unauthorized access. Requires credential validation. High ethical risk."
}
```
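Before acting on the analyzer's verdict, a caller would typically parse and sanity-check the JSON; a minimal sketch (the function name `parse_analysis` is ours, not part of the repository):

```python
import json

REQUIRED_KEYS = {
    "safety_rating", "inferred_intent", "required_context", "inferred_meaning",
    "user_mental_state", "llm_confidence_score", "analysis_summary",
}

def parse_analysis(raw: str) -> dict:
    """Parse analyzer output and verify it has the expected shape."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if data["safety_rating"] not in {"SAFE", "UNSAFE", "UNKNOWN"}:
        raise ValueError("unexpected safety_rating")
    if not 0.0 <= data["llm_confidence_score"] <= 1.0:
        raise ValueError("llm_confidence_score out of range")
    return data
```

LLMs occasionally emit malformed or incomplete JSON, so rejecting bad output explicitly is safer than assuming the schema always holds.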
