Conversation

@dargilco (Member)
Description

  • Re-emit from the latest TypeSpec, which had some changes related to the Dataset operations.
  • Add support for the environment variable ENABLE_AZURE_AI_PROJECTS_CONSOLE_LOGGING to turn on un-redacted console logs for the async AIProjectsClient. The same already exists for the sync client.
  • Updates to the hand-written methods .dataset.upload_file() and .dataset.upload_folder():
    • Fix the methods, now that the startPendingUpload REST API is working and I can make further progress testing the hand-written implementation.
    • Add an optional connection_name input to identify which BYO storage connection to use.
    • Add an optional file_pattern input (a regex) to .dataset.upload_folder() to allow selecting which files in the folder to upload.
    • Since the createOrUpdateVersion REST API is not yet working (still waiting for some service-side fixes), I could not fully test the Dataset upload methods, so there may be other fixes needed next week.
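The upload changes above can be sketched as follows. This is a hedged illustration, not the published azure-ai-projects API: the commented-out client call shape, the `project_client` name, and the `select_files` helper are all assumptions drawn from this description.

```python
import os
import re

# Opt-in, un-redacted console logging for the async client, per this PR.
# The variable name comes from the description; the accepted value is
# assumed to be a truthy string.
os.environ["ENABLE_AZURE_AI_PROJECTS_CONSOLE_LOGGING"] = "true"

def select_files(folder: str, file_pattern: str) -> list[str]:
    """Illustrative helper: how a file_pattern regex could decide which
    files in a folder get uploaded, mirroring the upload_folder input."""
    regex = re.compile(file_pattern)
    return sorted(name for name in os.listdir(folder) if regex.search(name))

# Hypothetical call shape against the client (signature assumed, not verified):
# project_client.dataset.upload_folder(
#     name="my-dataset",
#     version="1",
#     folder="data/",
#     connection_name="my-byo-storage",  # which BYO storage connection to use
#     file_pattern=r"\.jsonl$",          # only upload matching files
# )
```

Note that `file_pattern` is described as a regex (matched with `re.search`), not a shell glob, so `\.jsonl$` rather than `*.jsonl` selects JSONL files.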

@dargilco dargilco self-assigned this May 10, 2025
@dargilco dargilco changed the title from "Dargilco/get dataset working" to "Updates to Dataset upload methods" May 10, 2025
@github-actions

API Change Check

APIView identified API level changes in this PR and created the following API reviews

azure-ai-projects

@dargilco dargilco merged commit a68de20 into feature/azure-ai-projects-beta11 May 10, 2025
27 of 30 checks passed
@dargilco dargilco deleted the dargilco/get_dataset_working branch May 10, 2025 07:13
annatisch pushed a commit that referenced this pull request May 15, 2025
* First

* Re-emit

* Fix setup.py

* Quality gates

* Update local cspell instead of root one

* Remove projects SDK stuff from root cspell.json

* Add azure-ai-agents path to pyrightconfig.json

* Updating evaluation object to have name field instead of id (#40779)

* Address comments from SDK review meeting (#40748)

* Updating Evaluator Id list (#40798)

* Updating Evaluator Id list

* Updating evaluator list

* Re-emitted SDK (#40794)

* Re-emitted SDK

* Fixed patched code following auto-gen code changes

* Evaluation Review Feedback (#40830)

* Evaluation Review Feedback

* Fixing samples

* Fixing sample

* Address Azure SDK review meeting comment and other fixes / minor changes (#40839)

* For now, limit the build to Projects SDK only

* Switch credential scope

* Add tests that run all samples (#40875)

* Re-emit from TypeSpec (#40876)

* First (#40881)

* Adding sample for agent evaluation (#40844)

* Adding sample for agent evaluation

* Review comments

* Create automatic report of all Samples run, and their Exception message (if any) (#40885)

* Add sync and async methods `.connections.get_default()` (#40916)

* Fix pylint errors

* Updating Agent Evaluation Sample (#40927)

* Updating Agent Evaluation Sample

* Update sample_agent_evaluations.py

* Re-emit from latest TypeSpec (#40935)

* Re-emit

* Fix Sphinx

* Fix async get_azure_openai_client. Fixes in samples test

* Re-emit from TypeSpec. Fix one sample

* Users/singankit/evaluation sample update (#40973)

* Evaluation Sample Update

* Fixing agent evaluation sample

* Updating evaluation samples

* Review Comment

* Updating sample with evaluator ids

* Spell check

* Updates to Dataset upload methods (#41020)

* Take setup dependency on azure-ai-agents. Update inference route to /models (#41040)

* Use /models instead of /api/models route for inference

* Remove lazy import of azure-ai-agents, now that it was published

* Fix quality gates

* Update package README

* Attempt to fix quality gate related to dependency on azure-ai-agents

* Quality gates and other small fixes in prep for release (#41062)

* Take inference credential scopes from AIProjectClient

* Run tool 'black'

* Attempt to suppress verifytypes in auto-emitted code

* Suppress verifytypes after talking to Krista

* Update shared_requirements.txt to fix `Analyze dependencies` job

* Update implementation of .inference.get_azure_openai_client() (#41076)

Support returning an AOAI client either on the parent AI Services resource or on a connected AOAI service. We are still debating which api_version to show in the sample code.

* Fix AOAI test samples

* Fix missing connection name in samples test

* Exclude samples test file from spelling check

* Update change log and readme (#41105)

* Update readme, changelog

* Fix quality gates

* Manually remove unused service pattern classes from emitted code (#41112)

TypeSpec defined a namespace Azure.AI.Projects.ServicePatterns that contains a TypeSpec template for common operations used by Datasets and Indexes. This results in an unused service_patterns property on AIProjectClient that does nothing, and emitted ServicePattern classes that do nothing. I'm trying to update the TypeSpec namespace so that this will not happen, but I am running into some issues. So for the time being, I'm manually editing the emitted Python code to unblock the release, so that nothing related to service patterns appears in the public API surface.

* Updates to README and CHANGELOG

* Users/singankit/readme update (#41123)

* Updating samples and README

* Update Changelog file

* Review comments

* Fixing pyright issues

* Quality gates for evaluation samples

* Update sample_evaluations_aoai_graders.py (from Ankit)

* fix broken cspell.json

* pyrightconfig.json suppression of evaluation samples does not work

---------

Co-authored-by: Ankit Singhal <[email protected]>