[HDInsight]-Add descriptions for sparkjob related definition. #10121
Conversation
[Staging] Swagger Validation Report
✔️
Azure Pipelines successfully started running 1 pipeline(s).
Can one of the admins verify this patch?
azure-sdk-for-python - Release
Azure CLI Extension Generation
No readme.md specification configuration files were found that are associated with the files modified in this pull request, or swagger_to_sdk section in readme.md is not configured
azure-sdk-for-net - Release
No readme.md specification configuration files were found that are associated with the files modified in this pull request, or swagger_to_sdk section in readme.md is not configured
Trenton Generation - Release
No readme.md specification configuration files were found that are associated with the files modified in this pull request, or swagger_to_sdk section in readme.md is not configured
| "type": "object", | ||
| "properties": { | ||
| "from": { | ||
| "description": "The start index to fetch sessions.", |
sessions
Spark batch job? #Closed
I added the description following the Livy documentation (search for GET /batches on the page).
Is it OK for it to differ from that doc? #Closed
I think the minor change is acceptable and necessary to fit our SDK, although the class name is SparkBatchJobCollection.
In reply to: 455539008
| "type": "integer" | ||
| }, | ||
| "sessions": { | ||
| "description": "Batch list.", |
Batch list
The Spark Batch job list. #Closed
Thanks for the reminder. I will scan all the descriptions again to improve their completeness. #Closed
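For reference, a rough sketch of how the SparkBatchJobCollection definition discussed above might read once the suggested wording is applied. Only the from and sessions properties appear in the diff; the array typing and the SparkBatchJob item reference are assumptions for illustration, not the merged spec.

  "SparkBatchJobCollection": {
    "type": "object",
    "properties": {
      "from": {
        "description": "The start index to fetch Spark Batch jobs.",
        "type": "integer"
      },
      "sessions": {
        "description": "The Spark Batch job list.",
        "type": "array",
        "items": {
          "$ref": "#/definitions/SparkBatchJob"
        }
      }
    }
  }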
| "type": "object", | ||
| "properties": { | ||
| "id": { | ||
| "description": "The session id.", |
session
The spark job's livy id.
I think mentioning Livy is necessary to make this clear.
Do you think "The livy id of the spark batch job." would be better?
In reply to: 455536463
| "type": "integer" | ||
| }, | ||
| "appId": { | ||
| "description": "The application id of this session.", |
session
spark job #Closed
},
"state": {
  "type": "string"
  "description": "The batch state.",
The batch state
The spark batch job state. #Closed
| "state": { | ||
| "type": "string" | ||
| "description": "The batch state.", | ||
| "$ref": "#/definitions/SessionState" |
SessionState
I noticed that the spark batch job and the spark session job share the same SessionState. Could you please double-check this?
If they do not have exactly the same states, please split it.
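If the two job types turn out not to share the same states, a split could look roughly like the sketch below. The enum values listed are the states commonly documented for Livy batches and sessions and are assumptions here; they should be verified against the actual service before changing the spec.

  "SparkBatchJobState": {
    "description": "The Spark batch job state.",
    "type": "string",
    "enum": [ "starting", "running", "dead", "success", "killed" ]
  },
  "SparkSessionState": {
    "description": "The Spark session state.",
    "type": "string",
    "enum": [ "not_started", "starting", "idle", "busy", "shutting_down", "error", "dead", "killed", "success" ]
  }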
  }
},
"files": {
  "description": "Files to be used in this session.",
session
I know that the description in the Livy doc uses the word "session". But this property is part of SparkBatchJobRequest, so I think we need to change "session" to "spark batch job". What do you think? #Closed
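A small sketch of how the property might read after that change; the array typing is an assumption, since the diff only shows the description line.

  "files": {
    "description": "Files to be used in this Spark batch job.",
    "type": "array",
    "items": {
      "type": "string"
    }
  }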
| "sparkr", | ||
| "sql" | ||
| ] | ||
| "description": "Session kind.", |
Session
Spark Session kind? #Closed
| "type": "object", | ||
| "properties": { | ||
| "id": { | ||
| "description": "Session id.", |
Session
The "Session" is so common. I think that we had better to add prefix for example Spark Batch, Spark Session, Spark Statement etc.
Could you please check all the "session" word to make it clear enough? #Closed
Azure Pipelines successfully started running 1 pipeline(s).
azure-sdk-for-python-track2 - Release
No readme.md specification configuration files were found that are associated with the files modified in this pull request, or swagger_to_sdk section in readme.md is not configured
| "type": "integer" | ||
| }, | ||
| "sessions": { | ||
| "description": "The Spark Batch job list.", |
job
jobs
Updated.
Azure Pipelines successfully started running 1 pipeline(s).
Azure Pipelines successfully started running 1 pipeline(s).
Hi @anuchandy, could you please review the PR?
aim-for-better
left a comment
LGTM
majastrz
left a comment
The changes just backfill missing swagger (changing strings to enums to match the real API). Signed off from the ARM side.
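As an illustration of the string-to-enum backfill mentioned here, a hypothetical before and after for the state property; the x-ms-enum name, the modelAsString setting, and the value list are assumptions for the sketch, not the exact change in this PR.

Before:

  "state": {
    "description": "The Spark batch job state.",
    "type": "string"
  }

After:

  "state": {
    "description": "The Spark batch job state.",
    "type": "string",
    "enum": [ "starting", "running", "dead", "success", "killed" ],
    "x-ms-enum": {
      "name": "SparkBatchJobState",
      "modelAsString": true
    }
  }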