A simple Django project to listen to OpenScanHub scan events and print scan information to the terminal. This project was inspired by the scan-results-collector project.
- Fetches scan events from OpenScanHub
- Prints scan ID and scan information to the terminal
- Stores scan events in a SQLite database
- Supports single scan processing, batch processing, and continuous monitoring
- Configurable polling intervals and batch sizes
sast-ai-check/
├── sast_ai_check/            # Django project settings
├── scanevent_listener/       # Main application
│   ├── management/
│   │   └── commands/
│   │       └── watch_scan_events.py  # Management command
│   ├── models.py             # Database models
│   └── openscanhub_client.py # OpenScanHub API client
├── deploy/                   # Kubernetes deployment
│   └── sast-ai-check/        # Helm chart
├── Dockerfile                # Container image
├── demo_with_mock_data.py    # Demo script with mock data
├── test_scan_listener.py     # Test script
└── README.md
- Clone the repository:
git clone <repository-url>
cd sast-ai-check
- Build and push container image:
docker build -t your-registry/sast-ai-check:latest .
docker push your-registry/sast-ai-check:latest
- Deploy with Helm:
helm install sast-ai-check ./deploy/sast-ai-check --set image.repository=your-registry/sast-ai-check
- Monitor deployment:
kubectl logs -l app.kubernetes.io/name=sast-ai-check -f
- Build image:
docker build -t sast-ai-check .
- Run with default settings:
docker run --rm sast-ai-check
- Run with custom settings:
docker run --rm -e OSH_HUB_URL=https://your-osh.com/osh/xmlrpc -e BATCH_SIZE=20 sast-ai-check
- Run single scan:
docker run --rm sast-ai-check python manage.py watch_scan_events --scan-id 995187
- Install dependencies:
pip install -r requirements.txt
- Configure environment:
cp .env.example .env
# Edit .env with your settings
- Run database migrations:
python3 manage.py migrate
The main functionality is provided through the watch_scan_events management command:
python3 manage.py watch_scan_events --help
- Run continuously (default behavior):
python3 manage.py watch_scan_events
- Run continuously with custom poll interval:
python3 manage.py watch_scan_events --poll-interval 30
- Process a specific scan ID:
python3 manage.py watch_scan_events --scan-id 250001
- Process a single batch and exit:
python3 manage.py watch_scan_events --once --start-from 250000 --batch-size 5
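The `--once` invocation above can be read as: start at `--start-from`, consider `--batch-size` candidate scan IDs (never below `MINIMUM_SCAN_ID`), process them, and exit. A minimal sketch of that selection logic (the helper name `batch_scan_ids` is illustrative, not from the project):

```python
def batch_scan_ids(start_from, batch_size, minimum_scan_id=0):
    """Return the next batch of candidate scan IDs, honoring the configured minimum."""
    first = max(start_from, minimum_scan_id)
    return list(range(first, first + batch_size))

# --once --start-from 250000 --batch-size 5 would consider:
print(batch_scan_ids(250000, 5))  # [250000, 250001, 250002, 250003, 250004]
```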
When a scan event is processed, it prints information like this:
=== SCAN EVENT ===
Scan ID: 250001
Component: openssl
Version: 3.0.8
State: PASSED
Owner: [email protected]
Type: static-analysis
Raw Data: {...}
==================
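The banner above could be produced by a small formatter along these lines (a sketch only; the command's actual implementation may differ):

```python
def format_scan_event(scan):
    """Render a scan event dict in the banner format shown above."""
    lines = [
        "=== SCAN EVENT ===",
        f"Scan ID: {scan['scan_id']}",
        f"Component: {scan['component']}",
        f"Version: {scan['version']}",
        f"State: {scan['state']}",
        f"Owner: {scan['owner']}",
        f"Type: {scan['scan_type']}",
        f"Raw Data: {scan['raw_data']}",
        "==================",
    ]
    return "\n".join(lines)

event = {
    "scan_id": 250001, "component": "openssl", "version": "3.0.8",
    "state": "PASSED", "owner": "[email protected]",
    "scan_type": "static-analysis", "raw_data": "{...}",
}
print(format_scan_event(event))
```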
The project uses a .env file for configuration. Copy the example file and customize it:
cp .env.example .env
Available environment variables:
- OSH_HUB_URL: OpenScanHub URL (default: https://cov01.lab.eng.brq2.redhat.com/osh/xmlrpc)
- BATCH_SIZE: Number of scans to process per batch (default: 10)
- MINIMUM_SCAN_ID: Minimum scan ID to process (default: 0)
- DEBUG: Django debug mode (default: True)
- SECRET_KEY: Django secret key (not critical for this CLI-only project)
- LOG_LEVEL: Logging level (DEBUG, INFO, WARNING, ERROR)
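In settings code, these variables and their documented defaults can be read with plain `os.environ` lookups. A sketch of the pattern (the `load_config` helper is illustrative; the variable names and defaults match the list above):

```python
import os

def load_config(env=None):
    """Read the listener's settings, falling back to the documented defaults."""
    env = os.environ if env is None else env
    return {
        "OSH_HUB_URL": env.get("OSH_HUB_URL", "https://cov01.lab.eng.brq2.redhat.com/osh/xmlrpc"),
        "BATCH_SIZE": int(env.get("BATCH_SIZE", "10")),
        "MINIMUM_SCAN_ID": int(env.get("MINIMUM_SCAN_ID", "0")),
        "DEBUG": env.get("DEBUG", "True").lower() == "true",
        "LOG_LEVEL": env.get("LOG_LEVEL", "INFO"),
    }

# With no variables set, the documented defaults apply:
print(load_config({}))
```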
# OpenScanHub Configuration
OSH_HUB_URL=https://your-openscanhub-instance.com/osh/xmlrpc
BATCH_SIZE=20
MINIMUM_SCAN_ID=250000
# Django Configuration
DEBUG=True
SECRET_KEY=sast-ai-check-not-used-for-web-features
LOG_LEVEL=INFO
You can also set these as regular environment variables instead of using a .env file, if preferred.
- Kubernetes cluster
- Helm 3.x installed
- Container image available in a registry
- Build and push container image:
docker build -t your-registry/sast-ai-check:latest .
docker push your-registry/sast-ai-check:latest
- Configure values:
# Edit deploy/sast-ai-check/values.yaml or create custom-values.yaml
cat > custom-values.yaml <<EOF
image:
  repository: your-registry/sast-ai-check
  tag: latest
config:
  oshHubUrl: "https://your-openscanhub.com/osh/xmlrpc"
  batchSize: 20
  pollInterval: 30
resources:
  limits:
    memory: 1Gi
  requests:
    memory: 256Mi
persistence:
  enabled: true
  size: 2Gi
EOF
- Deploy:
helm install sast-ai-check ./deploy/sast-ai-check -f custom-values.yaml
- Monitor:
# Check deployment status
kubectl get pods -l app.kubernetes.io/name=sast-ai-check
# View logs
kubectl logs -l app.kubernetes.io/name=sast-ai-check -f
# Check configuration
helm get values sast-ai-check
Upgrade deployment:
helm upgrade sast-ai-check ./deploy/sast-ai-check -f custom-values.yaml
Uninstall:
helm uninstall sast-ai-check
Debug:
# Dry run to see generated manifests
helm install sast-ai-check ./deploy/sast-ai-check --dry-run --debug
# Execute commands in pod
kubectl exec -it deployment/sast-ai-check -- python manage.py watch_scan_events --scan-id 995187
Run the demo with mock data to see how the system works:
python3 demo_with_mock_data.py
This will create sample scan events and demonstrate the output format without requiring access to a real OpenScanHub instance.
Stores information about each scan:
- scan_id: Unique scan identifier
- component: Component being scanned
- version: Component version
- state: Scan state (PASSED, FAILED, IN_PROGRESS, etc.)
- owner: Scan owner
- scan_type: Type of scan performed
- raw_data: Complete scan data as JSON
Tracks the last processed scan ID for incremental processing.
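In plain-Python terms, the two tables look roughly like the sketch below (the real project defines these as Django models; the `ScanCursor` class name and `advance` method are illustrative, not the project's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class ScanEvent:
    """Mirror of the scan event fields listed above."""
    scan_id: int
    component: str
    version: str
    state: str
    owner: str
    scan_type: str
    raw_data: dict = field(default_factory=dict)

@dataclass
class ScanCursor:
    """Tracks the last processed scan ID so polling can resume incrementally."""
    last_scan_id: int = 0

    def advance(self, processed_ids):
        # Only move forward; a late, out-of-order batch must not rewind the cursor.
        if processed_ids:
            self.last_scan_id = max(self.last_scan_id, max(processed_ids))
        return self.last_scan_id
```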
The project follows Django best practices and is structured to be easily extensible. Key files:
- openscanhub_client.py: Handles communication with the OpenScanHub API
- watch_scan_events.py: Main management command logic
- models.py: Database schema for storing scan events
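OpenScanHub is reached over XML-RPC (note the /osh/xmlrpc endpoints throughout), so the client can be built on the standard library's xmlrpc.client. The sketch below shows that connection pattern plus a hypothetical normalizer; the raw-record keys ("id", "type", etc.) are assumptions, not the project's actual schema:

```python
import xmlrpc.client

def make_client(hub_url):
    """Create an XML-RPC proxy for the hub (no network traffic until a call is made)."""
    return xmlrpc.client.ServerProxy(hub_url, allow_none=True)

def normalize_scan(raw):
    """Map a raw scan record onto the fields the listener prints (keys are assumed)."""
    return {
        "scan_id": raw.get("id"),
        "component": raw.get("component", "unknown"),
        "version": raw.get("version", "unknown"),
        "state": raw.get("state", "UNKNOWN"),
        "owner": raw.get("owner", ""),
        "scan_type": raw.get("type", ""),
        "raw_data": raw,
    }
```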
This project is based on the scan-results-collector project structure and uses similar patterns for OpenScanHub integration while providing a simplified version focused on displaying scan information rather than importing to DefectDojo.