Infrastructure-as-code for deploying Neo4j on Azure using Bicep templates, published to the Azure Marketplace.
| Edition | Template | VM | Disk | Nodes |
|---|---|---|---|---|
| Enterprise | `marketplace/neo4j-enterprise/` | VMSS (Standard_E4s_v5) | Premium_LRS | 1-10 |
| Community | `marketplace/neo4j-ce/` | Standalone VM (Standard_E4bds_v5, NVMe) | PremiumV2_LRS / Premium_LRS (auto) | 1 |
The CE template uses `pickZones()` to auto-detect availability zone support at deploy time. In zonal regions it deploys with `PremiumV2_LRS` + zone 1; in non-zonal regions it falls back to `Premium_LRS` with no zone pinning. See CE Architecture for details.
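For illustration, that fallback can be sketched as a plain function. This is Python for readability only; in the template itself the decision is Bicep's `pickZones()` feeding conditional expressions, and the function name below is hypothetical:

```python
def choose_disk_and_zone(supported_zones: list[str]) -> dict:
    """Mirror the template's zonal fallback.

    `supported_zones` stands in for the result of Bicep's
    pickZones('Microsoft.Compute', 'virtualMachines', location, 1),
    which yields an empty list in regions without availability zones.
    PremiumV2 disks require a zone, so non-zonal regions drop back to
    Premium_LRS with no zone pinning.
    """
    if supported_zones:
        # Zonal region: pin VM and disk to the first supported zone
        return {"diskSku": "PremiumV2_LRS", "zones": [supported_zones[0]]}
    # Non-zonal region: classic premium storage, no zone constraint
    return {"diskSku": "Premium_LRS", "zones": []}
```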
```
marketplace/
  neo4j-enterprise/      # Enterprise edition (VMSS, load balancer)
  neo4j-ce/              # Community Edition (standalone VM, NVMe, pickZones)
scripts/
  neo4j-enterprise/      # Enterprise cloud-init and provisioning
  neo4j-ce/cloud-init/   # CE cloud-init (standalone.yaml)
deployments/             # Deployment and testing CLI (see deployments/README.md)
test_suite/test_ce/      # CE integration tests (connectivity, CRUD, resilience)
.github/workflows/       # CI: enterprise.yml, community.yml
```
```shell
cd deployments

# First-time setup
uv run neo4j-deploy setup

# Deploy Enterprise standalone
uv run neo4j-deploy deploy --scenario standalone-lts

# Deploy Community Edition (region set during setup)
uv run neo4j-deploy deploy --scenario ce-standalone-latest

# Check status, test, clean up
uv run neo4j-deploy status
uv run neo4j-deploy test
uv run neo4j-deploy cleanup --all --force
```

See `deployments/README.md` for the full command reference.
After deployment, find the FQDN and public IP for your instance:
```shell
# CE — show the VM's public IP and FQDN
az vm show --resource-group <resource-group> --name <vm-name> --show-details \
  --query '{publicIp: publicIps, fqdn: fqdns}' --output table
```

Then open Neo4j Browser at `http://<fqdn>:7474`.

Connect with the Bolt driver at `neo4j://<fqdn>:7687`. The default username is `neo4j`, and the password is the `adminPassword` set during deployment.
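For scripted access, the `az` output can be turned into those two endpoints. The helper below is a hypothetical sketch (not part of the repo), assuming the same `--query` shape as above but with `--output json`:

```python
import json


def neo4j_endpoints(az_vm_show_json: str) -> dict:
    """Derive Browser and Bolt URLs from `az vm show --show-details`
    output queried as '{publicIp: publicIps, fqdn: fqdns}'."""
    info = json.loads(az_vm_show_json)
    # Prefer the stable FQDN; fall back to the raw public IP
    host = info["fqdn"] or info["publicIp"]
    return {
        "browser": f"http://{host}:7474",  # Neo4j Browser (HTTP)
        "bolt": f"neo4j://{host}:7687",    # Bolt driver endpoint
    }
```

From here the actual connection would go through the official Neo4j Python driver, e.g. `GraphDatabase.driver(endpoints["bolt"], auth=("neo4j", password))`.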
Before building, copy .env.sample to .env and set your Partner Center PIDs (GUIDs):
```shell
cp .env.sample .env
# Edit .env with your NEO4J_PARTNER_PID (Enterprise) and CE_NEO4J_PARTNER_PID (Community)
```

Then build:
```shell
cd deployments
uv run neo4j-deploy ee-package   # Enterprise
uv run neo4j-deploy ce-package   # Community Edition
```

- Azure CLI 2.50.0+ (includes Bicep CLI)
- Python 3.12+ with uv
- Active Azure subscription
| Scenario | Edition | Version | Purpose |
|---|---|---|---|
| `standalone-lts` | Enterprise Evaluation | LTS (5) | Single-node enterprise |
| `cluster-lts` | Enterprise Evaluation | LTS (5) | 3-node cluster |
| `ce-standalone-latest` | Community | CalVer (latest) | CE standalone (region set during setup, pickZones auto-adapts) |
The test_suite/test_ce/ package runs integration tests against a deployed Community Edition instance.
What it tests:
- HTTP API and authenticated HTTP connectivity
- Bolt protocol connectivity
- APOC plugin availability
- Community Edition verification (`dbms.components()`)
- CRUD validation using a Movies graph dataset (The Matrix trilogy, 11+ nodes)
- VM provisioning and data disk attachment (full mode, via Azure SDK)
- Data persistence through a VM restart cycle (full mode)
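As a rough illustration of the CRUD step, a minimal create/read/delete cycle over a Movies-style subgraph could be expressed with Cypher like this (hypothetical sketch; the actual test package ships its own dataset and fixtures):

```python
def movies_crud_queries() -> dict[str, str]:
    """Cypher for a minimal create/read/delete cycle on a Movies-style
    subgraph (one film, one actor). Statement text only; executing them
    requires an open neo4j driver session, e.g. session.run(q["create"])."""
    return {
        "create": (
            "MERGE (m:Movie {title: 'The Matrix', released: 1999}) "
            "MERGE (a:Person {name: 'Keanu Reeves'}) "
            "MERGE (a)-[:ACTED_IN]->(m)"
        ),
        "read": (
            "MATCH (a:Person)-[:ACTED_IN]->(m:Movie) "
            "RETURN a.name AS actor, m.title AS title"
        ),
        # DETACH DELETE removes the node and its relationships together
        "delete": "MATCH (m:Movie {title: 'The Matrix'}) DETACH DELETE m",
    }
```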
Running the tests:
```shell
cd test_suite/test_ce

# Use the latest connection file (default)
uv run test-ce

# Use a specific connection file
uv run test-ce --results connection-ce-standalone-latest-20260207-212235.json

# Simple mode — connectivity + CRUD only, skips Azure resource checks
uv run test-ce --simple
```

Connection details and the password are read from `deployments/.arm-testing/results/`. When `--results` is omitted, the most recent connection file is used.
GitHub Actions workflows validate deployments on pull requests:
- `enterprise.yml`: Enterprise standalone + cluster (LTS, Enterprise and Evaluation licenses)
- `community.yml`: Community Edition standalone
The CE offer deploys a standalone VM (not a scale set) with a separate managed data disk for Neo4j data. Key design choices:
- NVMe-first (Eds_v6 series): Defaults to `Standard_E4ds_v6`, an NVMe-only VM size with higher remote disk throughput than SCSI at the same price. The marketplace image supports both NVMe and SCSI (`DiskControllerTypes=SCSI,NVMe`), so users can override to v5 sizes if needed.
- Trusted Launch: All VMs use Secure Boot + vTPM, as required by the marketplace image definition.
- Standalone managed data disk: A separate `Microsoft.Compute/disks` resource (Premium_LRS) attached at LUN 0 with `deleteOption: Detach`. Data survives VM deletion and can be reattached to a new VM. Cloud-init handles both fresh disks and reattached disks with existing data.
- No zone pinning: Resources deploy without availability zone constraints, ensuring compatibility with every VM SKU in every region. `Premium_LRS` is used universally instead of `PremiumV2_LRS`.
- Accelerated networking: SR-IOV hardware offload enabled on the NIC.
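The fresh-versus-reattached handling for the data disk comes down to one decision at first boot. Sketched in Python for clarity (the shipped cloud-init is shell, and the device path shown is an assumption):

```python
def disk_action(blkid_output: str) -> str:
    """Decide how first boot should treat the data disk at LUN 0.

    `blkid_output` stands in for what something like
    `blkid /dev/disk/azure/scsi1/lun0-part1` prints: empty output
    means the disk has no filesystem yet.
    """
    if blkid_output.strip():
        # Reattached disk with an existing filesystem: mount it as-is,
        # preserving the previous VM's Neo4j data
        return "mount"
    # Fresh disk: create a filesystem first, then mount
    return "mkfs-then-mount"
```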
See CE Architecture for full details and design rationale.