
Commit 7235d4c

add initial structure of inference section

1 file changed: 29 additions, 0 deletions

# Inference Client

One of the main functions of a Naptha Module is to access model inference. Naptha Nodes can run inference locally and expose it through the Naptha Inference API. Naptha Modules can import the `InferenceClient` class from the `naptha_sdk.inference` module to interact with the inference provider.

```python
import asyncio

from naptha_sdk.schemas import NodeConfigUser
from naptha_sdk.inference import InferenceClient

# Point the client at a Naptha Node reachable over HTTP
node = NodeConfigUser(ip="node.naptha.ai", http_port=7001, server_type="http")
inference_client = InferenceClient(node)

# Chat messages: a system prompt followed by the user's question
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# run_inference is a coroutine, so drive it with asyncio.run
response = asyncio.run(inference_client.run_inference({
    "model": "phi3:mini",
    "messages": messages,
    "temperature": 0.5,
    "max_tokens": 1000,
}))

content = response['choices'][0]['message']['content']
print("Output: ", content)
```
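
Since `run_inference` is a coroutine, the `asyncio.run` wrapper above is only needed when calling it from synchronous code; inside your own async code you can await it directly. A minimal sketch, assuming the same node configuration as above (the `ask` helper is illustrative, not part of the SDK):

```python
from naptha_sdk.schemas import NodeConfigUser
from naptha_sdk.inference import InferenceClient

# Hypothetical helper: awaits run_inference directly from async code
# instead of wrapping it in asyncio.run.
async def ask(node: NodeConfigUser, question: str) -> str:
    client = InferenceClient(node)
    response = await client.run_inference({
        "model": "phi3:mini",
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.5,
        "max_tokens": 1000,
    })
    return response['choices'][0]['message']['content']
```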

You can also run inference on a node using the `naptha inference` CLI command:

```bash
naptha inference "How can we create scaling laws for multi-agent systems?" -m "phi3:mini"
```
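
Here the positional argument supplies the user prompt and `-m` selects the model, matching the `model` field used in the Python example above.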
