@@ -157,9 +157,112 @@ my_project/
 ```
 
 - **`docs/design.md`**: Contains project documentation for each step above. This should be *high-level* and *no-code*.
+  ~~~
+  # Design Doc: Your Project Name
+
+  > Please DON'T remove notes for AI
+
+  ## Requirements
+
+  > Notes for AI: Keep it simple and clear.
+  > If the requirements are abstract, write concrete user stories.
+
+  ## Flow Design
+
+  > Notes for AI:
+  > 1. Consider the design patterns of agent, map-reduce, rag, and workflow. Apply them if they fit.
+  > 2. Present a concise, high-level description of the workflow.
+
+  ### Applicable Design Patterns:
+
+  1. Map the file summary into chunks, then reduce these chunks into a final summary.
+  2. Agentic file finder
+     - *Context*: The entire summary of the file
+     - *Action*: Find the file
+
+  ### Flow High-level Design:
+
+  1. **First Node**: This node is for ...
+  2. **Second Node**: This node is for ...
+  3. **Third Node**: This node is for ...
+
+  ```mermaid
+  flowchart TD
+      firstNode[First Node] --> secondNode[Second Node]
+      secondNode --> thirdNode[Third Node]
+  ```
+
+  ## Utility Functions
+
+  > Notes for AI:
+  > 1. Understand the utility function definition thoroughly by reviewing the doc.
+  > 2. Include only the necessary utility functions, based on nodes in the flow.
+
+  1. **Call LLM** (`utils/call_llm.py`)
+     - *Input*: prompt (str)
+     - *Output*: response (str)
+     - Generally used by most nodes for LLM tasks
+
+  2. **Embedding** (`utils/get_embedding.py`)
+     - *Input*: str
+     - *Output*: a vector of 3072 floats
+     - Used by the second node to embed text
+
+  ## Node Design
+
+  ### Shared Store
+
+  > Notes for AI: Try to minimize data redundancy
+
+  The shared store structure is organized as follows:
+
+  ```python
+  shared = {
+      "key": "value"
+  }
+  ```
+
+  ### Node Steps
+
+  > Notes for AI: Carefully decide whether to use Batch/Async Node/Flow.
+
+  1. First Node
+     - *Purpose*: Provide a short explanation of the node's function
+     - *Type*: Decide between Regular, Batch, or Async
+     - *Steps*:
+       - *prep*: Read "key" from the shared store
+       - *exec*: Call the utility function
+       - *post*: Write "key" to the shared store
+
+  2. Second Node
+  ...
+  ~~~
+
+
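The prep/exec/post contract sketched under "Node Steps" can be illustrated in plain Python. This is a hypothetical stand-in, not the framework's actual `Node` base class; the class name, the `run` helper, and the `upper()` stand-in for a utility call are all illustrative:

```python
# Minimal illustration of the prep -> exec -> post node contract.
# Hypothetical stand-in, not the framework's real Node base class.

class FirstNode:
    def prep(self, shared):
        # prep: read inputs from the shared store
        return shared["key"]

    def exec(self, prep_res):
        # exec: stand-in for a utility call such as call_llm(prep_res)
        return prep_res.upper()

    def post(self, shared, prep_res, exec_res):
        # post: write the result back to the shared store
        shared["key"] = exec_res

def run(node, shared):
    p = node.prep(shared)
    e = node.exec(p)
    node.post(shared, p, e)

shared = {"key": "value"}
run(FirstNode(), shared)
print(shared["key"])  # VALUE
```

Keeping reads in `prep` and writes in `post` leaves `exec` as a pure step, which is what makes the Batch/Async variants easy to swap in later.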
 - **`utils/`**: Contains all utility functions.
   - It's recommended to dedicate one Python file to each API call, for example `call_llm.py` or `search_web.py`.
   - Each file should also include a `main()` function to try that API call.
+    ```python
+    from google import genai
+    import os
+
+    def call_llm(prompt: str) -> str:
+        client = genai.Client(
+            api_key=os.getenv("GEMINI_API_KEY", ""),
+        )
+        model = os.getenv("GEMINI_MODEL", "gemini-2.5-flash")
+        response = client.models.generate_content(model=model, contents=[prompt])
+        return response.text
+
+    if __name__ == "__main__":
+        test_prompt = "Hello, how are you?"
+
+        # Try the API call directly
+        print("Making call...")
+        response = call_llm(test_prompt)
+        print(f"Response: {response}")
+    ```
+
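A `search_web.py` utility might follow the same one-file-plus-`main()` shape. This is a hedged sketch: the endpoint URL and the `{"results": [{"title": ...}]}` response format are placeholders, not a real search API, so the HTTP fetch is injectable and `main()` can be tried offline with a stub:

```python
# Hypothetical search_web.py sketch; endpoint and response shape are assumptions.
import json
import urllib.parse
import urllib.request

def search_web(query: str, fetch=None) -> list:
    """Return a list of result titles for `query`."""
    if fetch is None:
        def fetch(q):
            # Placeholder endpoint -- swap in your real search API here.
            url = "https://example.com/search?q=" + urllib.parse.quote(q)
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode()
    raw = fetch(query)
    data = json.loads(raw)
    return [hit["title"] for hit in data.get("results", [])]

if __name__ == "__main__":
    # Offline try-out with a stubbed response instead of a live call
    stub = lambda q: json.dumps({"results": [{"title": f"Result for {q}"}]})
    print(search_web("pocket flow", fetch=stub))  # ['Result for pocket flow']
```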
 - **`nodes.py`**: Contains all the node definitions.
   ```python
   # nodes.py
@@ -1559,24 +1662,25 @@ Here, we provide some minimal example implementations:
     def call_llm(prompt):
         from anthropic import Anthropic
         client = Anthropic(api_key="YOUR_API_KEY_HERE")
-        response = client.messages.create(
-            model="claude-2",
-            messages=[{"role": "user", "content": prompt}],
-            max_tokens=100
+        r = client.messages.create(
+            model="claude-sonnet-4-0",
+            # max_tokens is required by the Messages API
+            max_tokens=1024,
+            messages=[{"role": "user", "content": prompt}]
         )
-        return response.content
+        return r.content[0].text
     ```
 
 3. Google (Generative AI Studio / PaLM API)
     ```python
     def call_llm(prompt):
-        import google.generativeai as genai
-        genai.configure(api_key="YOUR_API_KEY_HERE")
-        response = genai.generate_text(
-            model="models/text-bison-001",
-            prompt=prompt
-        )
-        return response.result
+        from google import genai
+        client = genai.Client(api_key="YOUR_API_KEY_HERE")
+        response = client.models.generate_content(
+            model="gemini-2.5-pro",
+            contents=prompt
+        )
+        return response.text
     ```
 
 4. Azure (Azure OpenAI)