
Commit 50d661f

ifsheldon, Strange-Knoll, 64bit, frankfralick, and monadoid authored
Sync experiments branch with main 0.18.2 (#186)
* Update to Assistants example (#146)
  * update api config in examples/assistants/src/main.rs for consistency and security
  * added assistant creation
  * exit, deconstruct assistant, improved readme
* Add examples tool-call and tool-call-stream (#153)
* add names (#150)
* Link to openai-func-enums (#152)
  * Update async-openai/README.md
* In-memory files (#154)
  * Added ability to use in-memory files (`Bytes`, `Vec<u8>`)
  * Removed unnecessary trait impls
  * Polished example
* Spec, readme, and crate description updates (#156)
  * get latest spec
  * update description
  * add WASM
* WASM support on experiments branch
* chore: Release
* Make tool choice lower case (#158)
* Fix: post_form to be Sendable (#157)
  * changed to allow Send
  * add simple tests for sendable
  * fix test name
* chore: Release
* Add support for rustls-webpki-roots (#168)
* Refactor `types` module (#170)
  * Document the `impl_from!`, `impl_default!`, and `impl_input!` macros
  * Factor out types from the `assistants` module in `types`
  * Factor out `model`, `audio`, `image`, `file`, `fine_tune`, `moderation`, `edit`, `fine_tuning`, `embedding`, and `chat`
  * Factor out missed `DeleteModelResponse` into `model`
  * Factor out `completion` and eliminate `types`
  * Satisfy clippy
* Sync updates from Spec (#171)
  * updates to doc comments and types; deprecations
  * update ChatCompletionFunctions to FunctionObject
  * add logprobs field
  * update from spec; fixes suggested by cargo clippy
* add query param to list files (#172)
* chore: Release
* Optional model in ModifyAssistantRequest (#174): all fields (including model) are optional in the OpenAI API
* update contribution guidelines (#182)
  * fix link, consistency, Code of Conduct
* chore: Release
* fix file test by providing query param
* Added dimensions param to embedding request (#185)
* chore: Release

Co-authored-by: Gravel Hill <[email protected]>
Co-authored-by: Himanshu Neema <[email protected]>
Co-authored-by: Frank Fralick <[email protected]>
Co-authored-by: Sam F <[email protected]>
Co-authored-by: David Weis <[email protected]>
Co-authored-by: yykt <[email protected]>
Co-authored-by: XTY <[email protected]>
Co-authored-by: sharif <[email protected]>
Co-authored-by: Sharif Haason <[email protected]>
Co-authored-by: Sebastian Sosa <[email protected]>
Co-authored-by: vmg-dev <[email protected]>
1 parent 24da803 commit 50d661f

File tree

8 files changed: +47 −7 lines


async-openai/Cargo.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [package]
 name = "async-openai"
-version = "0.18.0"
+version = "0.18.2"
 authors = [
     "Himanshu Neema"
 ]

async-openai/README.md

Lines changed: 15 additions & 2 deletions
@@ -118,9 +118,22 @@ async fn main() -> Result<(), Box<dyn Error>> {
 
 ## Contributing
 
-Thank you for your time to contribute and improve the project, I'd be happy to have you!
+Thank you for taking the time to contribute and improve the project. I'd be happy to have you!
 
-A good starting point would be existing [open issues](https://github.com/64bit/async-openai/issues).
+All forms of contributions, such as new feature requests, bug fixes, issues, documentation, testing, comments, [examples](../examples) etc. are welcome.
+
+A good starting point would be to look at existing [open issues](https://github.com/64bit/async-openai/issues).
+
+To maintain the quality of the project, a minimum of the following is a must for code contribution:
+- **Documented**: The primary source of doc comments is the description field from the OpenAPI spec.
+- **Tested**: Examples are the primary means of testing and should continue to work. For new features a supporting example is required.
+- **Scope**: Keep the scope limited to APIs available in official documents such as the [API Reference](https://platform.openai.com/docs/api-reference) or the [OpenAPI spec](https://github.com/openai/openai-openapi/). Other LLMs or AI providers offer OpenAI-compatible APIs, yet they may not always have full parity. In such cases, the OpenAI spec takes precedence.
+- **Consistency**: Keep the code style consistent across all the "APIs" the library exposes; it creates a great developer experience.
+
+This project adheres to the [Rust Code of Conduct](https://www.rust-lang.org/policies/code-of-conduct).
+
+## Complimentary Crates
+- [openai-func-enums](https://github.com/frankfralick/openai-func-enums) provides procedural macros that make it easier to use this library with the OpenAI API's tool calling feature. It also provides derive macros you can add to existing [clap](https://github.com/clap-rs/clap) application subcommands for natural-language use of command line tools. It also supports OpenAI's [parallel tool calls](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling) and allows you to choose between running multiple tool calls concurrently or on their own OS threads.
 
 ## Complimentary Crates
 - [openai-func-enums](https://github.com/frankfralick/openai-func-enums) provides procedural macros that make it easier to use this library with the OpenAI API's tool calling feature. It also provides derive macros you can add to existing [clap](https://github.com/clap-rs/clap) application subcommands for natural-language use of command line tools. It also supports OpenAI's [parallel tool calls](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling) and allows you to choose between running multiple tool calls concurrently or on their own OS threads.

async-openai/src/embedding.rs

Lines changed: 22 additions & 0 deletions
@@ -30,6 +30,7 @@ impl<'c, C: Config> Embeddings<'c, C> {
 #[cfg(test)]
 mod tests {
     use crate::{types::CreateEmbeddingRequestArgs, Client};
+    use crate::types::{CreateEmbeddingResponse, Embedding};
 
     #[tokio::test]
     async fn test_embedding_string() {
@@ -105,4 +106,25 @@ mod tests {
 
         assert!(response.is_ok());
     }
+
+    #[tokio::test]
+    async fn test_embedding_with_reduced_dimensions() {
+        let client = Client::new();
+        let dimensions = 256u32;
+        let request = CreateEmbeddingRequestArgs::default()
+            .model("text-embedding-3-small")
+            .input("The food was delicious and the waiter...")
+            .dimensions(dimensions)
+            .build()
+            .unwrap();
+
+        let response = client.embeddings().create(request).await;
+
+        assert!(response.is_ok());
+
+        let CreateEmbeddingResponse { mut data, .. } = response.unwrap();
+        assert_eq!(data.len(), 1);
+        let Embedding { embedding, .. } = data.pop().unwrap();
+        assert_eq!(embedding.len(), dimensions as usize);
+    }
 }

async-openai/src/file.rs

Lines changed: 2 additions & 1 deletion
@@ -83,8 +83,9 @@ mod tests {
         //assert_eq!(openai_file.purpose, "fine-tune");
 
         //assert_eq!(openai_file.status, Some("processed".to_owned())); // uploaded or processed
+        let query = [("purpose", "fine-tune")];
 
-        let list_files = client.files().list().await.unwrap();
+        let list_files = client.files().list(&query).await.unwrap();
 
         assert_eq!(list_files.data.into_iter().last().unwrap(), openai_file);
 
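The updated test passes the query as a slice of key/value pairs. As a rough sketch of what such pairs become on the wire (the crate itself hands them to its HTTP client for serialization; the `to_query_string` helper below is purely illustrative and not part of async-openai):

```rust
// Illustrative only: shows the shape a `&[("purpose", "fine-tune")]`-style
// slice takes once serialized into a URL query string.
fn to_query_string(pairs: &[(&str, &str)]) -> String {
    pairs
        .iter()
        // A real serializer would also percent-encode keys and values.
        .map(|(k, v)| format!("{k}={v}"))
        .collect::<Vec<_>>()
        .join("&")
}

fn main() {
    let query = [("purpose", "fine-tune")];
    // prints "purpose=fine-tune"
    println!("{}", to_query_string(&query));
}
```

The slice-of-tuples form mirrors the `list(&query)` call in the diff above, so a caller can filter the file listing by purpose without building URLs by hand.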
async-openai/src/types/chat.rs

Lines changed: 1 addition & 1 deletion
@@ -1,8 +1,8 @@
 use std::collections::HashMap;
 
 use derive_builder::Builder;
-use serde::{Deserialize, Serialize};
 use crate::client::OpenAIEventStream;
+use serde::{Deserialize, Serialize};
 
 use crate::error::OpenAIError;

async-openai/src/types/completion.rs

Lines changed: 1 addition & 1 deletion
@@ -1,8 +1,8 @@
 use std::collections::HashMap;
 
 use derive_builder::Builder;
-use serde::{Deserialize, Serialize};
 use crate::client::OpenAIEventStream;
+use serde::{Deserialize, Serialize};
 
 use crate::error::OpenAIError;
88

async-openai/src/types/embedding.rs

Lines changed: 4 additions & 0 deletions
@@ -46,6 +46,10 @@ pub struct CreateEmbeddingRequest {
     /// to monitor and detect abuse. [Learn more](https://platform.openai.com/docs/usage-policies/end-user-ids).
     #[serde(skip_serializing_if = "Option::is_none")]
     pub user: Option<String>,
+
+    /// The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub dimensions: Option<u32>,
 }
 
 /// Represents an embedding vector returned by embedding endpoint.

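The new `dimensions` field asks the API for shorter output vectors. Conceptually, per OpenAI's embeddings documentation, a shortened text-embedding-3 vector is the full vector truncated and re-normalized to unit length; the `shorten_embedding` helper below is an illustrative sketch of that idea, not part of this crate:

```rust
// Conceptual sketch (not async-openai code): requesting fewer `dimensions`
// from a text-embedding-3 model is roughly equivalent to truncating the
// full embedding and re-normalizing it to unit L2 norm.
fn shorten_embedding(full: &[f32], dimensions: usize) -> Vec<f32> {
    let truncated = &full[..dimensions.min(full.len())];
    let norm: f32 = truncated.iter().map(|x| x * x).sum::<f32>().sqrt();
    truncated.iter().map(|x| x / norm).collect()
}

fn main() {
    let full = vec![0.6, 0.8, 0.0, 0.0];
    let short = shorten_embedding(&full, 2);
    assert_eq!(short.len(), 2);
    // The shortened vector is re-normalized to unit length.
    let norm: f32 = short.iter().map(|x| x * x).sum::<f32>().sqrt();
    assert!((norm - 1.0).abs() < 1e-6);
}
```

Doing the reduction server-side via `dimensions` saves bandwidth and storage compared to truncating client-side, which is why exposing the parameter (and asserting `embedding.len() == dimensions` in the test above) is useful.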
async-openai/src/types/fine_tune.rs

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 use derive_builder::Builder;
-use serde::{Deserialize, Serialize};
 use crate::client::OpenAIEventStream;
+use serde::{Deserialize, Serialize};
 
 use crate::error::OpenAIError;
