Install solc/resolc via install-deps #1043
Conversation
It is better to use the already existing infrastructure instead of adding another way to install dependencies. This pull request also ensures we forward return values from `/cmd`-spawned processes so that we, e.g., fail on errors.
Found CI reliability risks from moving solc/resolc installation into install-deps (path assumptions + loss of caching), plus a few correctness/safety issues in the cmd wrapper changes.
Suggestions that couldn't be attached to a specific line
.github/install-deps.sh:3,22-45
`set -eu` at [Line 3] makes the script sensitive to previously-optional environment variables later in the script (e.g., `$AGENT_TOOLSDIRECTORY` is used further down). If this script is ever run outside GitHub-hosted runners (or if that variable changes), it will hard-fail. Consider guarding destructive deletes with defaults/checks, e.g. `if [[ -n "${AGENT_TOOLSDIRECTORY:-}" ]]; then rm -rf "$AGENT_TOOLSDIRECTORY"; fi`, or avoid `-u` here if portability is desired.
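A minimal sketch of the guarded form, assuming `AGENT_TOOLSDIRECTORY` is the only previously-optional variable affected:

```bash
#!/usr/bin/env bash
set -eu

# Guard the destructive delete: under `set -u`, expanding an unset
# AGENT_TOOLSDIRECTORY would otherwise abort the whole script.
if [[ -n "${AGENT_TOOLSDIRECTORY:-}" ]]; then
  rm -rf "$AGENT_TOOLSDIRECTORY"
fi
```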
.github/install-deps.sh:22-23,33-34,44-45
solc/resolc are installed into relative paths (`mkdir -p solc`, `mkdir -p resolc`) and PATH is updated via `$PWD` ([Line 22-23], [Line 33-34], [Line 44-45]). This is brittle if the workflow runs this script with a different working directory (or via `working-directory:`). Install into a stable location (e.g. `${GITHUB_WORKSPACE}` or `${RUNNER_TEMP}`) and append that absolute path to `GITHUB_PATH`.
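A sketch of the stable-path variant, assuming `${RUNNER_TEMP}` is acceptable as the install root; `SOLC_VERSION` and the download URL are placeholders standing in for whatever the script already pins:

```bash
# Install into a stable absolute directory instead of the current $PWD.
TOOLS_DIR="${RUNNER_TEMP:-/tmp}/solc"
mkdir -p "$TOOLS_DIR"
curl -sSfL -o "$TOOLS_DIR/solc" \
  "https://github.com/ethereum/solidity/releases/download/v${SOLC_VERSION}/solc-static-linux"
chmod +x "$TOOLS_DIR/solc"
# Make the tool visible to all subsequent workflow steps.
echo "$TOOLS_DIR" >> "$GITHUB_PATH"
```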
.github/install-deps.sh:26-29,37-40
Downloads of solc/resolc are not integrity-checked ([Line 26-29], [Line 37-40]). This is a supply-chain risk since the job is executing downloaded binaries. Consider validating an expected SHA256 (pinned per version) before `chmod +x`, or downloading from a provenance-verified source if available.
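For instance (a sketch; `SOLC_SHA256` is a hypothetical pinned value that would be maintained alongside the pinned version):

```bash
# Verify the pinned checksum before making the binary executable.
# Note: sha256sum's check format requires two spaces between hash and path.
echo "${SOLC_SHA256}  ${TOOLS_DIR}/solc" | sha256sum -c - || {
  echo "solc checksum mismatch" >&2
  exit 1
}
chmod +x "${TOOLS_DIR}/solc"
```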
.github/actions/get-solc/action.yml:15-21 removed
The deleted composite action previously cached solc and resolc ([Line 15-21 removed]). With the move to .github/install-deps.sh, the caching behavior appears removed, which can significantly increase CI time and add a new failure mode (network flakiness) for every job. If the intent is parity with the prior approach, add an equivalent cache step where install-deps.sh is used, or incorporate caching into the shared infra you’re consolidating on.
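A cache step along these lines could restore parity; the paths, key, and version variables shown are assumptions and would need to match install-deps.sh's actual install locations and pins:

```yaml
# Sketch: cache the installed binaries so install-deps.sh can skip
# re-downloading on a cache hit.
- uses: actions/cache@v4
  with:
    path: |
      ${{ runner.temp }}/solc
      ${{ runner.temp }}/resolc
    key: solc-${{ env.SOLC_VERSION }}-resolc-${{ env.RESOLC_VERSION }}
- run: bash .github/install-deps.sh
```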
.github/workflows/clippy.yml:40-41 removed
The Install Solc step was removed ([Line 40-41 removed]) without any visible replacement in this diff. If clippy builds (directly or via build scripts) require solc/resolc, this job will start failing. Ensure this workflow invokes .github/install-deps.sh (or otherwise provisions solc/resolc) before running cargo clippy.
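Roughly (step names and placement are assumptions):

```yaml
# Sketch: provision solc/resolc before the lint step in clippy.yml.
- name: Install dependencies
  run: bash .github/install-deps.sh
- name: Clippy
  run: cargo clippy --all-targets
```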
.github/workflows/test.yml:100-101 removed
The Install Solc step was removed ([Line 100-101 removed]) without any visible replacement in this diff. If any tests/build steps rely on solc/resolc, this will cause runtime failures. Ensure install-deps.sh is executed in this job before tests that require these tools.
.github/scripts/cmd/cmd.py (outdated)

```python
status = os.system(f"cargo build -p {runtime['package']} --profile {profile} -q --features {features}")
if status != 0:
    print(f"Failed to build {runtime['name']}")
    sys.exit(1)
```
`os.system(...)` return value handling is not quite correct: it returns a wait status, not the raw exit code. Checking `status != 0` works for failure detection, but you lose the real exit code and can misreport signal terminations. Prefer `subprocess.run(..., check=True)` (already imported) or decode via `os.WEXITSTATUS(status)` when printing/debugging ([Line 99-102]).
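A sketch of the `check=True` form, reusing the names from the snippet above:

```python
# Raises CalledProcessError carrying the real exit code on failure,
# instead of collapsing os.system's wait status into a truthiness check.
try:
    subprocess.run(
        ["cargo", "build", "-p", runtime["package"],
         "--profile", profile, "-q", "--features", features],
        check=True,
    )
except subprocess.CalledProcessError as e:
    print(f"Failed to build {runtime['name']} (exit code {e.returncode})")
    sys.exit(e.returncode)
```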
.github/scripts/cmd/cmd.py (outdated)

```python
result = subprocess.run(
    f"frame-omni-bencher v1 benchmark pallet --no-csv-header --all --list --runtime={wasm_file}",
    shell=True, capture_output=True, text=True)
```
`subprocess.run(..., shell=True, ...)` at [Line 105-107] is unnecessary and increases injection risk. Even if inputs come from a checked-in JSON, this still expands shell metacharacters and can break on spaces. Prefer passing argv as a list with `shell=False`, e.g. `subprocess.run(["frame-omni-bencher", "v1", "benchmark", "pallet", "--no-csv-header", "--all", "--list", f"--runtime={wasm_file}"])`.
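The same call spelled out with the original keyword arguments preserved (a sketch):

```python
# No shell involved: arguments are passed verbatim, so spaces or
# metacharacters in wasm_file cannot be reinterpreted by a shell.
result = subprocess.run(
    ["frame-omni-bencher", "v1", "benchmark", "pallet",
     "--no-csv-header", "--all", "--list", f"--runtime={wasm_file}"],
    capture_output=True, text=True)
```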
.github/scripts/cmd/cmd.py (outdated)

```python
print('❌ Failed to format code')
sys.exit(1)
command = "taplo format --config .config/taplo.toml"
print(f'Formatting toml files with with `{command}`')
```
Typo in user-facing output: `Formatting toml files with with ...` ([Line 206]). Fix to a single `with` to avoid noisy CI logs.
xlc left a comment
I still don't fully understand why solc is needed for runtimes?
AFAIK (
Because of some revive dependencies that stubbornly depend upon it. @athei, do you think there might be a way to work around those dependencies, such that end users don't really need to know about this? Or is it something that we might see on
It is required to compile test and benchmarking fixtures. Hence the dependency is required if you either supply

Tests are not really relevant for downstream users. So only the benchmarking fixtures are problematic here. I believe it is only a handful of them. So maybe we should actually check in the binaries?
Yeah! I thought about what was there a few months ago, because yes… there were several internal dependencies on revive fixtures that you needed just to build, so I'm glad to know things have changed quite a bit. ❤️

That might be a solution, considering it's only a handful of benchmarking tests. Or maybe allow setting up these binaries via the benchmarking setup. 🤔 But it's definitely just an idea, since that is probably too intricate just to remove a dependency that isn't used in any workflow other than benchmarking.