Transform your AI interactions. Master prompt engineering with professional-grade tools and interactive learning.
Get up and running with AI Prompt Engineering tools in minutes.
git clone https://github.com/ElaMCB/AI-Prompt-Engineering.git
cd AI-Prompt-Engineering
Windows:
python -m venv venv
venv\Scripts\activate
Linux/Mac:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Option 1: Environment Variables
Windows (PowerShell):
$env:OPENAI_API_KEY="sk-your-key-here"
Linux/Mac:
export OPENAI_API_KEY="sk-your-key-here"
Option 2: .env File
Create .env file in project root:
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
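If a script does not pick up the .env file automatically, it can be loaded manually. This is a minimal sketch that assumes the python-dotenv package is available (check requirements.txt; it is a common choice but not guaranteed to be a dependency of this project):
# Sketch: load .env into the process environment (assumes python-dotenv is installed)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
print("OpenAI key loaded:", bool(os.getenv("OPENAI_API_KEY")))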
Option 3: Ollama (Local - No API Key Needed)
ollama pull llama2
Validate and score prompts instantly:
from notebooks.prompt_validator import PromptValidator
validator = PromptValidator()
prompt = """
You are a conversion copywriter specializing in SaaS email marketing.
Write a 200-word welcome email for new trial users of a project management tool.
Target audience: small business owners who just signed up but haven't logged in yet.
Must include: specific next steps, one key benefit, and a clear call-to-action.
"""
result = validator.score_prompt(prompt)
print(f"Score: {result['overall_score']}%")
print(f"Grade: {result['grade']}")
for feedback in result['feedback']:
    print(f"- {feedback}")
Test prompts with real LLM providers:
from notebooks.model_providers import UnifiedLLMClient
# Auto-detect provider (uses available API keys)
client = UnifiedLLMClient()
# Generate response
response = client.generate(
    "Write a haiku about AI",
    max_tokens=100,
    temperature=0.8
)
if response.error:
    print(f"Error: {response.error}")
else:
    print(response.content)
    print(f"Latency: {response.latency_ms:.0f}ms")
    print(f"Tokens: {response.tokens_used}")
Compare different prompt versions:
from notebooks.ab_testing_framework import PromptABTester
tester = PromptABTester()
# Create test
test = tester.create_test(
    test_id="email_subject_test",
    prompt_a="Write a subject line",
    prompt_b="You are an email marketing expert. Write a compelling subject line...",
    metric="click_through_rate",
    description="Testing generic vs specific prompt"
)
# Record results
tester.record_result("email_subject_test", "A", 4.2, "Weekly Newsletter")
tester.record_result("email_subject_test", "B", 8.1, "5 Growth Hacks That Doubled Revenue")
# Analyze
analysis = tester.analyze_test("email_subject_test")
print(f"Winner: {analysis['winner']}")
print(tester.generate_report("email_subject_test"))
Test prompts for production deployment:
from notebooks.production_validator import ProductionValidator, TestCase
from notebooks.model_providers import UnifiedLLMClient
validator = ProductionValidator()
client = UnifiedLLMClient()
# Add test case
test_case = TestCase(
    input_data="Test input",
    expected_output_type="text",
    expected_keywords=["response"],
    min_length=10,
    max_length=500
)
validator.add_test_case("my_prompt", test_case)
# Create prompt function
def run_prompt(input_data: str) -> str:
    prompt = f"You are a helpful assistant. Respond to: {input_data}"
    response = client.generate(prompt)
    return response.content if not response.error else ""
# Run validation
result = validator.validate_in_production("my_prompt", run_prompt)
print(f"Production Ready Score: {result['production_ready_score']}%")
print(f"Production Ready: {result['production_ready']}")
for recommendation in result['recommendations']:
    print(f"- {recommendation}")
streamlit run streamlit_app.py
This opens an interactive dashboard in your browser at http://localhost:8501.
Features:
The validator uses the CLEAR framework:
notebooks/foundations_lab.ipynb
Before:
prompt = "Write something about marketing"
result = validator.score_prompt(prompt)
# Score: 25% - D (Poor - Needs Major Revision)
After:
prompt = """
You are a conversion copywriter specializing in SaaS email marketing.
Write a 200-word welcome email for new trial users of a project management tool.
Target audience: small business owners who just signed up but haven't logged in yet.
Must include: specific next steps, one key benefit, and a clear call-to-action.
Tone should be friendly but professional.
"""
result = validator.score_prompt(prompt)
# Score: 87% - A (Very Good)
tester = PromptABTester()
# Create test
tester.create_test(
    test_id="email_subject_ab",
    prompt_a="Write a subject line for our newsletter",
    prompt_b="""You are an email marketing expert. Write a compelling subject line
    for our weekly newsletter targeting small business owners. Focus on urgency and value.
    Keep under 50 characters.""",
    metric="click_through_rate",
    description="Testing generic vs specific prompt"
)
# Test both versions multiple times
for _ in range(20):
    version, prompt = tester.get_random_prompt("email_subject_ab")
    # Run the prompt, collect the model output, and evaluate it with your metric;
    # `score` and `output` below come from that evaluation step
    tester.record_result("email_subject_ab", version, score, output)
# Analyze results
analysis = tester.analyze_test("email_subject_ab")
print(tester.generate_report("email_subject_ab"))
# Before deploying a prompt, validate it
result = validator.validate_in_production("customer_support_prompt", run_prompt)
if result['production_ready']:
    print("Safe to deploy!")
    # Deploy to production
else:
    print("Not ready - address issues:")
    for rec in result['recommendations']:
        print(f"  - {rec}")
    # Fix issues and re-validate
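To turn this gate into an automated check, one option is a small script that exits non-zero when validation fails. The sketch below is an assumption (not part of the toolkit) and reuses the validator and run_prompt defined in the example above:
import sys

# Hypothetical CI gate (sketch): reuses `validator` and `run_prompt` from above
result = validator.validate_in_production("customer_support_prompt", run_prompt)
if not result['production_ready']:
    print("Validation failed - blocking deployment:")
    for rec in result['recommendations']:
        print(f"  - {rec}")
    sys.exit(1)  # non-zero exit fails the pipeline step
print(f"Production Ready Score: {result['production_ready_score']}%")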
Solution: Set environment variable or create .env file
export OPENAI_API_KEY="sk-your-key-here"
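To confirm the key is actually visible to the Python process, a quick check using only the standard library is:
# Sketch: verify the key is present in the environment
import os
print("OPENAI_API_KEY set:", bool(os.getenv("OPENAI_API_KEY")))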
Solution: Ensure you’re in the project directory and dependencies are installed
pip install -r requirements.txt
Solution: Install Streamlit
pip install streamlit
streamlit run streamlit_app.py
For more troubleshooting, see TROUBLESHOOTING.md.
Happy Prompt Engineering!