Maryam Monalisa Gharavi, Ph.D.

Senior Prompt Engineer


AI-First Systems

I build reliable AI systems using prompt engineering, structured workflows, and LLM evaluation frameworks.


What I Do

Prompt Architecture
• Model-agnostic structured prompting
• Workflows for complex LLM behaviors
• Agentic system design

LLM Systems
• Prompt learning
• Tool-using agents
• Evaluation pipelines

AI / ML Consulting
• AI-native product design
• LLM reliability
• Safety & guardrails


Selected Work

AI Systems at Scale
• Created prompt architecture for AI discovery and routing systems, including system prompts and dynamic user-context injection, improving tool-routing reliability, reducing hallucination-risk scenarios, and strengthening "what should I do next?" user guidance
LLM Knowledge Systems
• Built a production-ready hallucination-mitigation agent that filtered out 97-98% of false outputs (one week before Google DeepMind)
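A minimal sketch of the filtering pattern behind this kind of agent (all function names, the word-overlap heuristic, and the threshold are illustrative stand-ins, not the production system): each model claim is checked against a trusted source, and unsupported claims are blocked before they reach the user.

```python
# Illustrative hallucination-filter gate: drops model claims that a trusted
# source text does not support. The overlap heuristic and threshold are
# placeholders for a real verifier model in production.

def support_score(claim: str, source: str) -> float:
    """Fraction of the claim's content words that appear in the source."""
    stop = {"the", "a", "an", "of", "in", "is", "are", "to", "and"}
    words = [w for w in claim.lower().split() if w not in stop]
    if not words:
        return 0.0
    src = set(source.lower().split())
    return sum(w in src for w in words) / len(words)

def filter_output(claims: list[str], source: str, threshold: float = 0.6) -> list[str]:
    """Keep only claims the source supports above the threshold."""
    return [c for c in claims if support_score(c, source) >= threshold]

source = "paris is the capital of france and has a population near 2.1 million"
claims = ["Paris is the capital of France", "Paris is the largest city in Germany"]
print(filter_output(claims, source))  # only the supported claim survives
```

In practice the scoring step would be a second LLM or retrieval check rather than word overlap; the gate-and-threshold shape stays the same.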
Model Evaluations
• Developed a proprietary 100-item model evaluation suite to stress-test a two-part system: generalized intelligence and enterprise/market outputs
AI Translation/Transcreation
• Devised a GenAI translation system spanning 42+ languages, enabling dynamic and formal equivalence, localization, and regional nuance to deliver near-native accuracy in global enterprise contexts
LLM Behavior
• Synthesized cross-surface prompt contracts defining global vs. local prompting responsibilities, improving system coherence, preventing duplication or prompt drift, and enabling consistent AI behavior across product experiences
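One way to express a global-vs-local prompt contract like this (an illustrative structure; the keys and surface names are hypothetical, not the actual production schema): a global contract owns cross-surface rules, each surface supplies only its local delta, and composition is deterministic so duplication and drift are easy to detect.

```python
# Illustrative prompt-contract composition: a global contract owns shared
# rules; each surface contributes only its local additions. All keys and
# surface names here are hypothetical examples.

GLOBAL_CONTRACT = {
    "tone": "concise and neutral",
    "safety": "refuse harmful requests",
}

SURFACE_CONTRACTS = {
    "search": {"task": "answer with cited sources"},
    "chat": {"task": "hold multi-turn context"},
}

def compose_prompt(surface: str) -> str:
    """Merge global rules with one surface's local rules, global first."""
    rules = {**GLOBAL_CONTRACT, **SURFACE_CONTRACTS[surface]}
    return "\n".join(f"{k}: {v}" for k, v in rules.items())

print(compose_prompt("search"))
```

Because every surface composes from the same global contract, a rule defined once can never be duplicated or silently diverge per surface.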
Humanization
• Conceptualized an 18-part multi-shot prompt grounded in rigorous linguistic research (approaching the asymptotic limits of human syntax, morphology, and semantics) to heighten anthropomorphization and produce natural-sounding, human-like product surfaces


Skills

Core Skills
• Prompt Engineering for Production LLMs
• AI Agent Systems
• Zero-Shot & Few-Shot Prompting
• Prompt Architecture & Scaffolding
• Evaluation Frameworks
• AI Translation & Multilingual Systems
• Hallucination Mitigation
• LLM Guardrails
Tech Stack
LLMs: OpenAI, Anthropic
Prompting: System prompts, multi-step chains, structured prompting
Orchestration Concepts: tool use, function calling, agent workflows, memory
Evaluation: A/B testing, eval suites, failure mode analysis
Programming & Formats: Python (eval pipelines), XML, JSON
Interfaces: API integration, prompt testing environments
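A minimal shape for the kind of eval suite listed above (the test cases are toy examples and the model call is a stub; in practice it would be a real OpenAI or Anthropic API call): each prompt case runs through the model, and pass/fail counts are tallied by category for failure-mode analysis.

```python
# Minimal eval-suite harness: runs prompt cases through a model function and
# tallies pass/fail by category for failure-mode analysis. The model here is
# a stub standing in for a real LLM API call.

from collections import Counter

CASES = [
    {"prompt": "2 + 2 = ?", "expect": "4", "category": "arithmetic"},
    {"prompt": "Capital of France?", "expect": "Paris", "category": "factual"},
]

def stub_model(prompt: str) -> str:
    """Placeholder for an LLM call; returns canned answers."""
    return {"2 + 2 = ?": "4", "Capital of France?": "Paris"}.get(prompt, "")

def run_suite(model, cases) -> Counter:
    """Count passes and failures per category."""
    results = Counter()
    for case in cases:
        ok = case["expect"].lower() in model(case["prompt"]).lower()
        results[(case["category"], "pass" if ok else "fail")] += 1
    return results

print(run_suite(stub_model, CASES))
```

Grouping results by category is what turns a raw pass rate into failure-mode analysis: a cluster of failures in one category points at a specific prompt or capability gap.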
Languages
• Persian (native / fluent)
• Portuguese (fluent)
• Spanish (fluent)
• Italian (advanced)
• Modern Standard Arabic (advanced)
• Levantine Spoken Arabic (advanced)
• French (intermediate)
• Dakota (beginner)
• Latin (reading)
• Hebrew (limited)


Relevant Academic Background

• Trained in interpretive reasoning and ambiguity analysis (PhD), applied to structuring and evaluating LLM behavior
• Developed frameworks for meaning, translation, and contextual nuance, informing multilingual and cross-domain prompt design



Contact

Let's build better AI systems together 💥