forked from microsoft/edgeai-for-beginners
-
Notifications
You must be signed in to change notification settings - Fork 0
Expand file tree
/
Copy path.env
More file actions
127 lines (114 loc) · 4.35 KB
/
.env
File metadata and controls
127 lines (114 loc) · 4.35 KB
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
#############################################
# Workshop Environment Variables (.env)
#
# This file centralizes configuration for the Workshop samples.
# All Python scripts and notebooks pick these up via os.getenv().
#
# SDK Reference:
# https://github.com/microsoft/Foundry-Local
# https://github.com/microsoft/Foundry-Local/tree/main/sdk/python/foundry_local
#
# Prerequisites:
# 1. Install Foundry Local: Follow installation guide
# 2. Start service: foundry service start
# 3. Load model: foundry model run phi-4-mini
# 4. Verify: foundry service status
#
# Usage:
# - Python scripts: Automatically loaded by samples
# - Notebooks: May need to restart kernel after changes
# - Terminal: Source this file or set manually
#
# Adjust values as needed per machine and deployment.
#############################################
# Python search path so helper utilities (workshop_utils) & samples are importable
# Append Workshop/samples for utilities; keep existing Module08 if needed.
PYTHONPATH=${workspaceFolder}/Workshop/samples;${workspaceFolder}/Module08
# Core Model Aliases
# Default model for most samples
FOUNDRY_LOCAL_ALIAS=phi-4-mini
# Model comparison aliases (Session 04)
SLM_ALIAS=phi-4-mini
LLM_ALIAS=qwen2.5-7b
# Endpoint override (leave blank to allow FoundryLocalManager to auto-detect)
# Only set this if you need to override the default endpoint
# Example for remote Windows host when developing on macOS/Linux:
# FOUNDRY_LOCAL_ENDPOINT=http://192.168.1.50:5273/v1
# Example for local custom port:
# FOUNDRY_LOCAL_ENDPOINT=http://localhost:8000
FOUNDRY_LOCAL_ENDPOINT=
# Benchmark Configuration (Session 03)
# Comma-separated list of model aliases to benchmark
BENCH_MODELS=phi-4-mini,gpt-oss-20b
BENCH_ROUNDS=3
BENCH_PROMPT=Explain retrieval augmented generation briefly.
BENCH_STREAM=0 # Set to 1 to measure first-token latency (streaming)
COMPARE_RETRIES=2 # Number of retry attempts for model comparison
# RAG Configuration (Session 02)
EMBED_MODEL=sentence-transformers/all-MiniLM-L6-v2
RAG_QUESTION=Why use RAG with local inference?
# Multi-Agent Configuration (Session 05)
AGENT_QUESTION=Explain why edge AI matters for compliance.
AGENT_MODEL_PRIMARY=phi-4-mini
AGENT_MODEL_EDITOR=phi-4-mini
# Model Comparison (Session 04)
COMPARE_PROMPT=List 5 benefits of local AI inference.
# Reliability / Telemetry
SHOW_USAGE=1 # Print token usage per completion
RETRY_ON_FAIL=1 # Retry transient errors once
RETRY_BACKOFF=1.0 # Seconds between retry attempts
# (Optional) Deterministic settings applied inside scripts; override if experimenting
# TEMPERATURE=0.0
# TOP_P=1.0
# Azure OpenAI Configuration (Optional - for hybrid scenarios)
# Add any cloud API secrets below (avoid committing real keys to version control)
# AZURE_OPENAI_ENDPOINT=
# AZURE_OPENAI_API_KEY=
# AZURE_OPENAI_API_VERSION=2024-08-01-preview
#############################################
# Recommended Model Configurations
#############################################
#
# Development & Testing:
# FOUNDRY_LOCAL_ALIAS=phi-4-mini # Balanced quality & speed
# SLM_ALIAS=phi-4-mini # Fast responses
# LLM_ALIAS=qwen2.5-7b # Higher quality
#
# Production Scenarios:
# General purpose: phi-4-mini
# Code generation: deepseek-coder-1.3b
# Fast classification: qwen2.5-0.5b
# High quality: qwen2.5-7b
#
# Benchmark Testing:
# BENCH_MODELS=phi-4-mini,qwen2.5-0.5b,gemma-2-2b
#
# Multi-Agent (different models per role):
# AGENT_MODEL_PRIMARY=phi-4-mini # Fast for research
# AGENT_MODEL_EDITOR=qwen2.5-7b # Quality for editing
#
#############################################
# Troubleshooting
#############################################
#
# Service not responding:
# 1. Check: foundry service status
# 2. Start: foundry service start
# 3. Load model: foundry model run phi-4-mini
#
# Model not found:
# 1. List available: foundry model list
# 2. Update FOUNDRY_LOCAL_ALIAS to match available model
#
# Connection errors:
# 1. Verify endpoint: foundry service status (shows port)
# 2. Set FOUNDRY_LOCAL_ENDPOINT if needed
# 3. Check firewall settings
#
# Import errors:
# 1. Activate venv: .venv\Scripts\activate (Windows)
# 2. Install deps: pip install -r requirements.txt
# 3. Check PYTHONPATH includes Workshop/samples
#
#############################################
# End of .env