feat(database): add support for pipelines #574
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #574      +/-   ##
==========================================
+ Coverage   54.56%   55.09%   +0.53%
==========================================
  Files         181      195      +14
  Lines       15347    15723     +376
==========================================
+ Hits         8374     8663     +289
- Misses       6639     6699      +60
- Partials      334      361      +27
```
kneal
left a comment
Overall it looks fantastic. One question: what is the goal of `SkipCreation` for the pipeline endpoints?
```sql
pipelines (
	id       SERIAL PRIMARY KEY,
	repo_id  INTEGER,
	number   INTEGER,
```
Is this the build number?
No, but it follows the same train of logic 👍

Like builds today, each pipeline in the table will have a `number` field that increments scoped to the `repo_id`.

That field doesn't directly map to the build number because pipelines have a one-to-many relation with builds 😃

You can have multiple builds all stemming from the same pipeline, i.e. a build that gets restarted multiple times.
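The per-repo numbering and the one-to-many relation described above can be sketched in Go. The type and field names here are illustrative simplifications, not the real `library` types:

```go
package main

import "fmt"

// Pipeline mirrors the pipelines table sketched above (hypothetical
// simplification of the real go-vela/types library type).
type Pipeline struct {
	ID     int
	RepoID int
	Number int
}

// Build references the pipeline it was compiled from, so one pipeline
// can back many builds (e.g. a build restarted multiple times).
type Build struct {
	ID         int
	PipelineID int
	Number     int
}

// nextPipelineNumber increments the number scoped to one repo, the
// same way build numbers increment per repo today.
func nextPipelineNumber(pipelines []Pipeline, repoID int) int {
	max := 0
	for _, p := range pipelines {
		if p.RepoID == repoID && p.Number > max {
			max = p.Number
		}
	}
	return max + 1
}

func main() {
	pipelines := []Pipeline{
		{ID: 1, RepoID: 1, Number: 1},
		{ID: 2, RepoID: 1, Number: 2},
		{ID: 3, RepoID: 2, Number: 1},
	}

	// repo 1 already has two pipelines, so its next number is 3;
	// repo 2 only has one, so its next number is 2
	fmt.Println(nextPipelineNumber(pipelines, 1))
	fmt.Println(nextPipelineNumber(pipelines, 2))

	// two builds stemming from the same pipeline (a restart)
	builds := []Build{
		{ID: 10, PipelineID: 2, Number: 5},
		{ID: 11, PipelineID: 2, Number: 6},
	}
	fmt.Println(len(builds))
}
```

Note the pipeline `number` and build `number` advance independently, which is why one cannot map directly onto the other.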
To answer your question directly, the `SkipCreation` field controls whether the tables and indexes get created when the engine is initialized.

In case others have the same or similar questions, I'll provide some context on `SkipCreation`. Then I'll explain a bit more on why we're referencing it in the `pipeline` package.

This functionality was first added here: #455

By default, Vela will always attempt to create all tables and indexes in the database 👍

However, we enable Vela installation admins to disable this behavior by setting this flag to `true`.

The reason why we're using it in the `pipeline` package is that, when we create the pipeline engine, we check the flag before creating the table and indexes:

server/database/pipeline/pipeline.go
Lines 62 to 79 in 093f2e1
So, when we create the database client for `postgres`, we pass this configuration into the pipeline engine:

server/database/postgres/postgres.go
Lines 94 to 98 in 093f2e1

->

server/database/postgres/postgres.go
Lines 358 to 369 in 093f2e1
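A minimal sketch of how such a flag can gate table and index creation when the engine is constructed. The names here are hypothetical; the real wiring lives in the `pipeline.go` lines referenced above:

```go
package main

import "fmt"

// config carries the opt-out flag discussed above (illustrative name;
// the real field lives in the database packages).
type config struct {
	SkipCreation bool
}

type engine struct {
	config  *config
	created []string
}

// New creates the pipeline engine and, unless the admin opted out via
// SkipCreation, creates the table and indexes up front.
func New(c *config) (*engine, error) {
	e := &engine{config: c}

	if c.SkipCreation {
		// admins set this flag to skip creating tables and indexes,
		// e.g. when the schema is managed out-of-band
		return e, nil
	}

	e.created = append(e.created, "pipelines table", "pipelines indexes")
	return e, nil
}

func main() {
	e1, _ := New(&config{SkipCreation: false})
	fmt.Println(len(e1.created)) // table + indexes created

	e2, _ := New(&config{SkipCreation: true})
	fmt.Println(len(e2.created)) // nothing created
}
```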
ecrupper
left a comment
I really like the design change. +1 for getting it applied to the rest of the resources.

Everything else LGTM.
JordanSussman
left a comment
minor feedback around pagination
```go
)

// ListPipelines gets a list of all pipelines from the database.
func (e *engine) ListPipelines() ([]*library.Pipeline, error) {
```
Seems like we should have a maximum and/or default value for the number of pipelines returned, to avoid returning massive amounts of data. What do you think @jbrockopp?
We could certainly explore that in the future 👍
But to keep in line with existing functionality, I'm not sure we should make that change at this time.
For context, the `List` functions for resources were designed to not contain pagination.
As an example from builds:
Lines 37 to 39 in e48007c

```go
// GetBuildList defines a function that gets
// a list of all builds.
GetBuildList() ([]*library.Build, error)
```
->

server/database/postgres/build_list.go
Lines 15 to 16 in e48007c

```go
// GetBuildList gets a list of all builds from the database.
func (c *client) GetBuildList() ([]*library.Build, error) {
```
->

server/database/postgres/dml/build.go
Lines 8 to 13 in e48007c

```go
// ListBuilds represents a query to
// list all builds in the database.
ListBuilds = `
	SELECT *
	FROM builds;
`
```
The only time these List functions are actually called are for the admin endpoints (restricted to platform admins):
Line 50 in e48007c

```go
b, err := database.FromContext(c).GetBuildList()
```
The idea is there are times where Vela installation admins may need to query all objects in a table.
The most prominent use-case we've seen for this is the migration utilities:
https://github.com/go-vela/community/tree/master/migrations
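If a default/maximum were introduced later as suggested, the usual guard looks something like this. This is a hypothetical sketch with made-up limit values; the current `List` functions deliberately return everything:

```go
package main

import "fmt"

const (
	// hypothetical limits; today's List functions have no pagination
	defaultListLimit = 100
	maxListLimit     = 1000
)

// clampLimit applies a default when no limit is requested and caps
// oversized requests, the standard guard against huge result sets.
func clampLimit(requested int) int {
	if requested <= 0 {
		return defaultListLimit
	}
	if requested > maxListLimit {
		return maxListLimit
	}
	return requested
}

func main() {
	fmt.Println(clampLimit(0))    // unset -> default of 100
	fmt.Println(clampLimit(50))   // within bounds -> unchanged
	fmt.Println(clampLimit(5000)) // oversized -> capped at 1000
}
```

An admin-only migration endpoint could still bypass such a cap by paging through results, which keeps the migration use-case intact.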
kneal
left a comment
LGTM 🐬
Dependent on go-vela/types#223
Part of the effort for go-vela/community#460
This adds support for a new `pipelines` resource to the `github.com/go-vela/server/database` package.

As a part of this, I used a new strategy to add this logic which attempts to solve a few things:

If this new approach is liked and folks feel the pros outweigh the cons, I'll replicate this for the other resources.

If not, then I'll modify this PR accordingly to follow our existing pattern.
Structure
One distinguishable change with this approach is all code for `pipelines` is under a `database/pipeline` package:

```
$ tree -d database
database
├── pipeline
├── postgres
│   ├── ddl
│   └── dml
└── sqlite
    ├── ddl
    └── dml

7 directories
```

You'll notice that no `dml` package is nested under the `pipeline` package.

This new approach no longer requires raw SQL queries for integrating with the `pipelines` table.

Instead, we leverage our DB library's (https://gorm.io/) driver-agnostic abstraction for integrating with that table.
This is great because we reduce our total LOC and complexity when we need to add/update functionality.
To be specific, when we want to add new code for `pipelines`, ideally I should only have to update the one package.

We no longer have to update code specific to `postgres`, `sqlite`, or any future drivers we may support for Vela.

However, it comes with the cost of becoming reliant on the GORM library, which can be a downside.
Resource Specific Interface
Inside the package exists a `PipelineService` interface that declares all functions necessary for this code:

server/database/pipeline/service.go
Lines 11 to 49 in 093f2e1
Most of the naming for these functions was adopted from the existing code, barring the `Get` prefix.

And then we create an `engine` which implements that service interface:

server/database/pipeline/pipeline.go
Lines 25 to 39 in 093f2e1
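The service-interface-plus-engine pattern described above can be sketched in pure Go. This is a trimmed, hypothetical stand-in for the real `PipelineService`, which declares many more methods and whose engine wraps a GORM handle so one implementation serves both `postgres` and `sqlite`:

```go
package main

import "fmt"

// PipelineService is an illustrative stand-in for the interface in
// server/database/pipeline/service.go.
type PipelineService interface {
	CountPipelines() (int64, error)
}

// engine implements PipelineService; in the real package it holds a
// gorm.DB handle instead of a slice, which is how the same code
// works across database drivers.
type engine struct {
	pipelines []string
}

func (e *engine) CountPipelines() (int64, error) {
	return int64(len(e.pipelines)), nil
}

func main() {
	// the concrete engine satisfies the interface, so callers only
	// ever depend on PipelineService
	var s PipelineService = &engine{pipelines: []string{"a", "b"}}

	n, err := s.CountPipelines()
	fmt.Println(n, err)
}
```

Callers depending only on the interface is what lets the driver-specific packages shrink to configuration and wiring.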