Common issues and solutions for AI Prompt Engineering tools.
### Issue: Package Installation Fails

**Error:**
```
ERROR: Could not find a version that satisfies the requirement...
```

**Solution:**

Upgrade pip first:
```bash
python -m pip install --upgrade pip
```

Example:
```bash
pip install openai
pip install anthropic
pip install streamlit
```
---

### Issue: Module Not Found

**Error:**
```
ModuleNotFoundError: No module named 'openai'
```

**Solution:**

1. Install the project dependencies:
```bash
pip install -r requirements.txt
```

2. Check the Python module search path:
```bash
python -c "import sys; print(sys.path)"
```

3. Confirm that project imports resolve:
```python
from notebooks.prompt_validator import PromptValidator
```

---

### Issue: API Key Not Found

**Error:**
```
ValueError: OPENAI_API_KEY not found in environment
```
**Solution:**

**Windows (PowerShell):**
```powershell
$env:OPENAI_API_KEY="sk-your-key-here"
```

**Windows (Command Prompt):**
```cmd
set OPENAI_API_KEY=sk-your-key-here
```

**Linux/Mac:**
```bash
export OPENAI_API_KEY="sk-your-key-here"
```

**Permanent (Linux/Mac):** add to `~/.bashrc` or `~/.zshrc`:
```bash
export OPENAI_API_KEY="sk-your-key-here"
```

Alternatively, create a `.env` file in the project root:
```
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
```

Install python-dotenv:
```bash
pip install python-dotenv
```

Load it in code:
```python
from dotenv import load_dotenv

load_dotenv()
```
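To confirm the key is actually visible to your Python process (whether it came from the shell or from `.env`), a quick check with the standard library's `os.getenv` looks like the sketch below; the error message is illustrative, not something the library itself raises.

```python
import os
from dotenv import load_dotenv

load_dotenv()

# Fail fast with a clear message if the key is still missing.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; check your shell or .env file")
```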
---

### Issue: Authentication Fails

**Error:**
```
401 Authentication failed
```

**Solution:**

Verify that the API key is set correctly (no extra spaces or quotes), has not been revoked or expired, and belongs to the provider you are calling.
---

### Issue: Ollama Connection Refused

**Error:**
```
Connection refused to http://localhost:11434
```

**Solution:**

1. Make sure the Ollama service is running, then pull a model:
```bash
ollama pull llama2
```

2. Verify the local API responds:
```bash
curl http://localhost:11434/api/tags
```

3. Alternative: use a different provider (OpenAI or Anthropic).
---

### Issue: Rate Limit Exceeded

**Error:**
```
429 Too Many Requests
```

**Solution:**

1. Add a delay between requests:
```python
import time

def rate_limited_generate(client, prompt, delay=1):
    time.sleep(delay)
    return client.generate(prompt)
```

2. Use exponential backoff (see the sketch after this list)
3. Upgrade your API tier for higher limits
4. Reduce request frequency
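For step 2, a minimal backoff helper might look like the sketch below. It assumes the `client.generate()` call from the examples above and that the client raises an exception on a 429 response; the exception handling is deliberately generic, since the exact exception type depends on the provider.

```python
import random
import time

def generate_with_backoff(client, prompt, max_retries=5, base_delay=1.0):
    """Retry client.generate() with exponentially growing delays."""
    for attempt in range(max_retries):
        try:
            return client.generate(prompt)
        except Exception:  # narrow this to the provider's rate-limit error
            if attempt == max_retries - 1:
                raise
            # Wait 1s, 2s, 4s, ... plus jitter so parallel callers don't sync up.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```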
---
### Issue: Model Not Available
**Error:**
```
Model not found: gpt-5
```

**Solution:**

1. Check available models:
```python
client.get_available_models()
```

2. Use one of the returned model names when configuring the provider.
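Putting the two steps together, a sketch using the client and provider classes shown elsewhere in this guide (`UnifiedLLMClient`, `OpenAIProvider`) might look like this; the constructor arguments are taken from the performance examples below and may differ in your version:

```python
# List what the configured provider actually exposes, then pick a valid model.
models = client.get_available_models()
print(models)

# Rebuild the client with a model name that appeared in the list.
client = UnifiedLLMClient(provider=OpenAIProvider(model="gpt-3.5-turbo"))
```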
---

### Issue: Validator Gives a Low Score

**Problem:** A prompt works well in practice, but the validator gives it a low score.

**Solution:**

Inspect the detailed breakdown to see which criteria are pulling the score down. Example:
```python
result = validator.score_prompt(prompt)
print(result['breakdown'])  # See detailed scores
print(result['feedback'])   # Get improvement suggestions
```
---

### Issue: All Prompts Score Low

**Problem:** All prompts get low scores.

**Solution:**

Adjust the criteria weights to match what matters for your use case (see the re-scoring sketch below):
```python
validator.criteria_weights['clarity'] = 0.30   # Increase weight
validator.criteria_weights['examples'] = 0.10  # Decrease weight
```
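To check that the new weights actually change the outcome, you can re-score the same prompt before and after adjusting them. This sketch assumes `score_prompt()` returns a dict containing a `'score'` key alongside the `'breakdown'` shown above; that key name is an assumption, not confirmed by this guide.

```python
baseline = validator.score_prompt(prompt)

# Reweight the criteria, then score the same prompt again.
validator.criteria_weights['clarity'] = 0.30
validator.criteria_weights['examples'] = 0.10
rescored = validator.score_prompt(prompt)

print(baseline.get('score'), '->', rescored.get('score'))  # 'score' key assumed
print(rescored['breakdown'])
```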
---

### Issue: Streamlit Not Installed

**Error:**
```
ModuleNotFoundError: No module named 'streamlit'
```

**Solution:**
```bash
pip install streamlit
streamlit run streamlit_app.py
```
---

### Issue: Cannot Import PromptValidator

**Error:**
```
ImportError: cannot import name 'PromptValidator' from 'notebooks.prompt_validator'
```

**Solution:**

1. Check that the project layout matches:
```
project/
├── notebooks/
│   └── prompt_validator.py
└── streamlit_app.py
```

2. If the import still fails, add the notebooks directory to the Python path in `streamlit_app.py`:
```python
import sys
from pathlib import Path

sys.path.append(str(Path(__file__).parent / "notebooks"))
```
---

### Issue: Dashboard Unresponsive

**Problem:** The dashboard becomes unresponsive.

**Solution:**

1. Check whether an LLM request is hanging; a stalled API call blocks the page.
2. Add a request timeout (Unix only; see the cross-platform sketch after this list):
```python
import signal

def timeout_handler(signum, frame):
    raise TimeoutError("Request timed out")

signal.signal(signal.SIGALRM, timeout_handler)
signal.alarm(30)  # 30 second timeout
```

3. Reduce the number of concurrent requests
4. Restart the Streamlit server
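Because `signal.SIGALRM` is not available on Windows, a cross-platform alternative is to run the call in a worker thread and stop waiting after a deadline. This is a sketch around the `client.generate()` call used throughout this guide; note that the worker thread itself keeps running in the background after the timeout fires.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def generate_with_timeout(client, prompt, timeout=30):
    """Run client.generate() in a worker thread and give up after `timeout` seconds."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(client.generate, prompt)
    try:
        return future.result(timeout=timeout)
    except FutureTimeout:
        raise TimeoutError(f"Request timed out after {timeout}s")
    finally:
        # Don't block on the (possibly still running) worker thread.
        pool.shutdown(wait=False)
```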
---
## Performance Issues
### Issue: Slow Response Times
**Problem:**
LLM responses take too long.
**Solutions:**
1. **Reduce max_tokens:**
```python
response = client.generate(prompt, max_tokens=500)  # Instead of 2000
```

2. **Use a faster model:**
```python
client = UnifiedLLMClient(provider=OpenAIProvider(model="gpt-3.5-turbo"))  # Faster than gpt-4
```

3. **Cache repeated prompts:**
```python
from functools import lru_cache

@lru_cache(maxsize=100)
def cached_generate(prompt):
    return client.generate(prompt)
```

4. **Use streaming for long outputs** (see the sketch after this list)
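For streaming (step 4), this guide does not document a streaming interface on `UnifiedLLMClient`, so the sketch below calls the OpenAI Python SDK (v1+) directly; it assumes `prompt` is already defined and `OPENAI_API_KEY` is set as described above.

```python
from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
stream = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    stream=True,
)

# Print tokens as they arrive instead of waiting for the full response.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```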
---
### Issue: High Token Usage
**Problem:**
Requests consume too many tokens, driving up costs.
**Solutions:**
1. **Reduce prompt length:**
- Remove unnecessary context
- Use concise language
2. **Limit output:**
```python
response = client.generate(prompt, max_tokens=200)
```

3. **Monitor token usage** (see the estimation sketch after this list):
```python
print(f"Tokens used: {response.tokens_used}")
```
---

**Cause:** No API keys configured.

**Solution:**
```bash
# Set at least one API key
export OPENAI_API_KEY="sk-..."
# OR
export ANTHROPIC_API_KEY="sk-ant-..."
# OR
# Start the Ollama service
```
---

**Cause:** Edge cases not handled.

**Solution:**

Check the production validation recommendations:
```python
result = validator.validate_in_production(prompt_id, run_fn)
print(result['recommendations'])
```

Wrap the prompt runner so failures are caught instead of crashing:
```python
def safe_run_prompt(input_data: str) -> str:
    try:
        return run_prompt(input_data)
    except Exception as e:
        return f"Error: {str(e)}"
```
---

**Cause:** Not enough test results.

**Solution:**
```python
# Need at least 10 results per version
for _ in range(10):
    version, prompt = tester.get_random_prompt(test_id)
    output = run_prompt(prompt)
    score = evaluate_output(output)
    tester.record_result(test_id, version, score, output)
```
---

## Getting Help

If you're still experiencing issues:

1. Recreate a clean environment:
```bash
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install --upgrade -r requirements.txt
```

2. Enable logging to see what is happening:
```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info("Starting prompt validation")
```
Still stuck? Open an issue with: