This repository contains the code, input datasets, and figures used to reproduce the results in the paper:
https://arxiv.org/abs/2512.01437
We analyze Qubic’s 2025 campaign on the Monero network using on-chain block/orphan observations, pool job notifications, and a third-party Qubic block dataset to estimate hashrate share (α), tie-breaking (γ), race/orphan dynamics, and revenue outcomes.
| Folder | Description |
|---|---|
| `data/` | Input datasets (raw / processed) |
| `code/` | Analysis + plotting scripts |
| `fig/` | Paper figures (the subset reproducible from this repo) |
| `derived/` | Derived CSV outputs generated by scripts (ignored by git) |
| `visualizer/` | Website source code for visualizing data |
- `data/all_blocks.csv`
- `data/raw_jobs.csv` (required for Fig. 5 / job-delay analysis)
- `data/blocks-proof.csv` (required for Fig. 7 / withholding timeline and some comparisons)
- `data/selfish_mining_blocks.csv` (run-summary input used by multiple scripts)
The paper combines three data sources:
- **Monero node observations (main-chain + locally observed orphans)**

  Collected from a local Monero node (running in pruning mode) during the study window, and complemented with historical data via RPC queries to public Monero nodes for earlier periods.
  In this repository this is represented primarily by `data/all_blocks.csv`.

- **Qubic pool job-notify observations (Stratum-like API)**

  Mining job notifications were collected by polling the Qubic pool API at a fixed interval (5 seconds in the paper), recording fields such as height and previous block hash.
  In this repository this is represented by `data/raw_jobs.csv`.

- **Third-party Qubic block dataset (packet sniffing / community shared)**

  Additional Qubic-related blocks collected via packet sniffing of Qubic network traffic and shared by Monero community contributors (the paper’s acknowledgements include DataHoarder / Sergei Chernykh). This dataset supplements Qubic blocks that on-chain observation may miss, notably orphans.
  In this repository this is represented by `data/blocks-proof.csv`.
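The historical RPC queries mentioned for the first data source use the standard `monerod` JSON-RPC interface. Below is a minimal sketch of fetching one block header by height; the node URL is a placeholder, and the exact collection code used for the paper is not part of this repo:

```python
import json
import urllib.request

# Placeholder endpoint -- substitute any reachable monerod JSON-RPC node.
NODE_URL = "http://node.example.com:18081/json_rpc"


def make_rpc_payload(height: int) -> dict:
    """Build the JSON-RPC body for monerod's get_block_header_by_height."""
    return {
        "jsonrpc": "2.0",
        "id": "0",
        "method": "get_block_header_by_height",
        "params": {"height": height},
    }


def fetch_block_header(height: int, url: str = NODE_URL) -> dict:
    """POST the request to a node and return the block_header object."""
    req = urllib.request.Request(
        url,
        data=json.dumps(make_rpc_payload(height)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["result"]["block_header"]


if __name__ == "__main__":
    # Show the request body only; no network call is made here.
    print(json.dumps(make_rpc_payload(3_400_000), indent=2))
```

The returned `block_header` includes fields such as `hash`, `prev_hash`, and `timestamp`, which is the kind of per-block information `data/all_blocks.csv` records.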
Some scripts write intermediate CSV outputs during analysis. These are treated as derived artifacts and should not be committed.
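The job-notification collection described under the second data source (fixed-interval polling, logging height and previous block hash) can be sketched as follows. The endpoint URL and JSON field names here are assumptions for illustration, not the actual Qubic pool API:

```python
import csv
import json
import time
import urllib.request

# Assumptions: this URL and the response schema are hypothetical;
# the real Qubic pool API is not documented in this repository.
JOB_URL = "https://pool.example.com/api/job"
POLL_INTERVAL_S = 5  # the paper polls every 5 seconds


def parse_job(raw: dict, seen_at: float) -> dict:
    """Keep only the fields logged per poll: timestamp, height, prev hash."""
    return {
        "seen_at": seen_at,
        "height": raw["height"],
        "prev_hash": raw["prev_hash"],
    }


def poll_jobs(out_path: str, n_polls: int, url: str = JOB_URL) -> None:
    """Poll the job endpoint at a fixed interval, appending one CSV row per poll."""
    with open(out_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["seen_at", "height", "prev_hash"])
        if f.tell() == 0:
            writer.writeheader()
        for _ in range(n_polls):
            with urllib.request.urlopen(url, timeout=10) as resp:
                raw = json.load(resp)
            writer.writerow(parse_job(raw, seen_at=time.time()))
            f.flush()
            time.sleep(POLL_INTERVAL_S)


if __name__ == "__main__":
    # Offline demonstration of the row format; no network call is made.
    sample = {"height": 3_400_000, "prev_hash": "ab" * 32, "job_id": "1"}
    print(parse_job(sample, seen_at=0.0))
```

Rows of this shape correspond to the kind of records stored in `data/raw_jobs.csv`; a change in `prev_hash` between consecutive polls is what the job-delay analysis keys on.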