Skip to content

Conversation

@devin-ai-integration
Copy link
Contributor

VLM Run Python SDK Client Implementation

This PR implements the VLM Run Python SDK client following the together-python client structure.

Changes

  • Implemented Files resource with /v1/files endpoints
    • GET /v1/files (list)
    • POST /v1/files (upload)
    • GET /v1/files/{file_id} (retrieve)
    • DELETE /v1/files/{file_id} (delete)
  • Implemented Models resource with /v1/models endpoint
    • GET /v1/models (list)
  • Added type-safe response classes
    • FileResponse, FileList for files
    • ModelResponse for models
  • Added base API requestor with retry logic
  • Added tenacity dependency for retries
  • Applied code formatting and lint fixes

Testing

  • All tests passing
  • Verified API endpoint connections
  • Linting checks passed

Link to Devin run: https://app.devin.ai/sessions/ce09cb448f6244a69cb687dd7b8e76fd

devin-ai-integration bot and others added 4 commits January 11, 2025 05:34
- Add Files class with list/upload/retrieve/delete methods
- Add Models class with list method
- Add FineTuning class with create/list/retrieve/cancel/events methods
- Mirror together-python implementation structure

Co-Authored-By: Sudeep Pillai <[email protected]>
- Add APIRequestor class with tenacity-based retry mechanism
- Implement exponential backoff for failed requests
- Add comprehensive error handling
- Set appropriate timeout values and constants

Co-Authored-By: Sudeep Pillai <[email protected]>
- Add Files, Models, and FineTuning resource initialization
- Mark old methods as deprecated with migration guidance
- Add proper dot notation access (client.files.list(), etc.)
- Add base_url and timeout configuration

Co-Authored-By: Sudeep Pillai <[email protected]>
@devin-ai-integration
Copy link
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add "(aside)" to your comment to have me ignore it.
  • Look at CI failures and help fix them

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

def __init__(
self,
message: str,
http_status: Optional[int] = None,
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Use int | None instead of Optional

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Use | None annotations for all keyword arguments.

def __init__(
self,
message: str,
http_status: Optional[int] = None,
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Use | None annotations for all keyword arguments.

class Files:
"""Files resource for VLM Run API."""

def __init__(self, client) -> None:
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

annotate the client client: Client

id: str
name: str
domain: str
created_at: Optional[datetime] = None
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Use | None instead of Optional

Copy link
Contributor

@spillai spillai left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Clean up the docstrings (remove simple ones as they're obvious), and improve the overall devex for the client API to be extremely high-quality.

Add stubs for a test_client.py that takes in different variants (with and without API key, base url) to make sure that the variables are either loaded from env. var or used as provided by the user.


api_key: Optional[str] = None

base_url: str = "https://api.vlm.run/v1"
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Set this to None by default, and initialize it in __post_init__ by getting VLMRUN_BASE_URL just like VLMRUN_API_KEY. Default to https://api.vlm.run/v1 if it's not set.

# TODO: Implement API call
return []

"""List all files (deprecated).
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Remove these simple docstrings.

- Add base_url environment variable fallback
- Update docstrings to be more concise and helpful
- Add comprehensive test coverage for client initialization
- Remove deprecated method docstrings

Co-Authored-By: Sudeep Pillai <[email protected]>
- Set default base_url to https://api.vlm.run/v1
- Only use environment value when default is unchanged
- Clean up test environment in client tests

Co-Authored-By: Sudeep Pillai <[email protected]>
- Make base_url default to None and use environment fallback
- Clean up test environment in all client tests
- Fix test_client_missing_api_key environment handling

Co-Authored-By: Sudeep Pillai <[email protected]>
@spillai
Copy link
Contributor

spillai commented Jan 11, 2025

Also the CI is populate with a VLMRUN_BASE_URL=https://dev.vlm.run/v1 and an VLMRUN_API_KEY to test the API. You should be able to use this.

- Check for both None and empty string in API key validation
- Only use environment base URL if default hasn't been overridden
- Improve docstring clarity around base URL precedence

Co-Authored-By: Sudeep Pillai <[email protected]>
- Update test assertions to clarify /v1 suffix handling
- Improve test comments for URL handling behavior
- Ensure consistent base URL behavior across tests

Co-Authored-By: Sudeep Pillai <[email protected]>
@spillai spillai self-assigned this Jan 11, 2025
@spillai spillai linked an issue Jan 11, 2025 that may be closed by this pull request
@spillai spillai added the sdk label Jan 11, 2025
@spillai spillai merged commit dc44f78 into main Jan 11, 2025
1 check passed
@spillai spillai deleted the devin/1736573617-add-vlmrun-client-features branch January 23, 2025 08:37
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Stub out vlmrun CLI (files, models, finetune, hub)

2 participants