
Commit fcd319e

Pgedge (#32)
* pgedge helm cpln
* t
* pgedge
* pgedge helm cpln
* t
* pgedge
* pgedge
* undo irrelevant changes

---------

Co-authored-by: igorchyts <>
1 parent 24d38c9 commit fcd319e

File tree

15 files changed: +804 -0 lines changed

examples/pgedge/helm/Chart.yaml

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
apiVersion: v2
name: pgedge
description: A pgEdge helm chart for Control Plane
type: application
version: 0.1.0
appVersion: "1.0.0"

examples/pgedge/helm/README.md

Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
## pgEdge multi-region multi-master example

Creates a pgEdge cluster with a multi-master configuration, allowing you to write to each of the masters. The masters can be placed in different locations so that your clients can read from and write to the closest master. Please review the values.yaml file.

Default specs:
* A trigger and a function that add any table created with the `spock.replicate_ddl` function to the `cpln_default` replication set are applied via [replication.sql](scripts/replication.sql). For further information, see the [pgEdge documentation](https://docs.pgedge.com/spock_ext/advanced_spock/repset_trigger).
* The cluster is created in three locations, unless otherwise specified, and each location has its own endpoint, described below.

### Steps to run this example:

**HELM**

The [Helm CLI](https://helm.sh/docs/intro/install/#through-package-managers) and [Control Plane CLI](https://docs.controlplane.com/reference/cli#install-npm) must be installed.

1. Clone this repo and update the `values.yaml` file as needed (an illustrative sketch of its structure is shown below).
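A minimal sketch of what `values.yaml` might look like. The key names are taken from the templates in this commit; the values themselves are illustrative assumptions, and the actual file may contain additional keys (for example the GVC locations and the pgEdge image) that are not part of this diff.

```yaml
# Illustrative sketch only; not the chart's actual defaults.
postgres:
  dbname: defaultdb            # rendered into the pgedge-postgres secret
  password: change-me          # rendered into the pgedge-postgres secret

pgadmin:
  enable: true                 # set to false to skip the pgadmin workload
  gvc: pgedge01                # GVC the pgadmin workload is deployed to
  cpu: 500m
  memory: 512Mi
  email: admin@example.com     # PGADMIN_DEFAULT_EMAIL
  password: change-me          # PGADMIN_DEFAULT_PASSWORD
  inboundCidr: 0.0.0.0/0       # external inbound allow CIDR for pgadmin
```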
2. (Optional) Build and push the pgEdge image to your private Control Plane image registry

By default, a publicly available pgEdge image is used in this example, as seen in the `values.yaml` file.
The image is built using the [Dockerfile](../image/Dockerfile) in the [image](../image) folder.

If you would like to customize the image for your needs and push it to the Control Plane private registry, run the following command from the [image](../image) folder:

```
cpln image build --name pgedge-cpln:v1 --push
```

3. Run the command below from this directory.

```bash
helm template . | cpln apply -f -
```

### Testing

1. Connect to the `pgadmin` instance that is deployed in the `pgedge01` GVC by navigating to its canonical endpoint, available in the Control Plane console.
Connection details are as provided in [values.yaml](values.yaml).

2. Add the servers. The syntax for the internal endpoint is WORKLOAD_NAME.GVC_NAME.cpln.local, port 5432.
In our example we create three replicas (you can also connect to them directly with `psql`, as shown below):
- pgedge.pgedge01.cpln.local
- pgedge.pgedge02.cpln.local
- pgedge.pgedge03.cpln.local
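For a command-line check from another workload in the same org, a `psql` connection might look like the sketch below. This assumes the `pgedge` database user referenced in the start script's connection strings, and the database name and password you set in `values.yaml`.

```bash
# Connect to the node in the pgedge01 GVC over its internal endpoint
PGPASSWORD='<your-postgres-password>' \
  psql "host=pgedge.pgedge01.cpln.local port=5432 user=pgedge dbname=<your-db>"
```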
3. Create a replicated table on all nodes by running the following query on any one of the nodes:
```
select spock.replicate_ddl('create table public.testddl (a int primary key, val2 varchar(10))','{cpln_default}');
```

4. You should now be able to write to and read from the table on any one of the nodes, and the data will be replicated to the others; see the quick check below.
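For example, a quick replication check using the table created in the previous step:

```sql
-- On the node in pgedge01: insert a row
INSERT INTO public.testddl (a, val2) VALUES (1, 'hello');

-- On the node in pgedge02 (or pgedge03): the row should appear shortly
SELECT * FROM public.testddl;
```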
For advanced configuration of pgEdge, please refer to the pgEdge [documentation](https://docs.pgedge.com/).

### Cleanup

**HELM**

```bash
helm template . | cpln delete -f -
```
examples/pgedge/helm/scripts/pgedge-start-script.sh

Lines changed: 123 additions & 0 deletions
@@ -0,0 +1,123 @@
#!/bin/bash

set -x

# Create a spock subscription from this node ($1) to every other node listed
# in $2 (space-separated host:port entries), retrying until each peer is up.
subscribe() {
  local this="$1"
  local nodes="$2"
  local count=0

  for on in $nodes; do
    count=$((count + 1))
    on_short="n$count"
    on_hostname="${on%%:*}"
    on_port="${on##*:}"
    if [[ "$this" == "$on_short" ]]; then
      continue
    else
      while true; do
        #nodectl spock sub-create SUBSCRIPTION_NAME PROVIDER_DSN DB
        ./nodectl spock sub-create sub_${this}${on_short} "host=${on_hostname} port=${on_port} user=pgedge dbname=${POSTGRES_DB}" "${POSTGRES_DB}" && break
        sleep 10s
      done
      # Add the subscription to the replication set for the configured database
      ./nodectl spock sub-add-repset sub_${this}${on_short} $SET_NAME "${POSTGRES_DB}"
    fi
  done
}

if [ "`id -u`" = "0" ]; then
  echo "****** Phase 1 running as root"

  # Derive this workload's name and its internal cpln.local hostname
  export WORKLOAD_NAME=$(echo $CPLN_WORKLOAD | sed 's|.*/workload/\([^/]*\)$|\1|')
  export HOSTNAME="${WORKLOAD_NAME}.${CPLN_GVC}.cpln.local"

  # mkdir -p /opt/pgedge
  chown -R pgedge /opt/pgedge

  # Persist the environment for phase 2, which runs as the pgedge user
  cat <<EOF > /home/pgedge/pgedge.env
HOSTNAME=$HOSTNAME
CLUSTER_NODES="$CLUSTER_NODES"
POSTGRES_DB=$POSTGRES_DB
POSTGRES_PASSWORD=$POSTGRES_PASSWORD
EOF

  # and then rerun this script as pgedge
  su pgedge - $0
  exit
fi

#------ from here down we are user pgedge....

echo "****** Phase 2 running as pgedge"

source /home/pgedge/pgedge.env

cd /opt/pgedge/

# Install pgEdge (nodectl) on first start only
if [ ! -d /opt/pgedge/pgedge/nodectl ]; then
  python3 -c "$(curl -fsSL https://pgedge-download.s3.amazonaws.com/REPO/install.py)"
fi

cd /opt/pgedge/pgedge

NODE_COUNT=0

# Find this host's entry in CLUSTER_NODES and initialize it as a spock node
for NODE in $CLUSTER_NODES; do
  NODE_COUNT=$((NODE_COUNT + 1))
  NODE_SHORT="n$NODE_COUNT"
  NODE_HOSTNAME="${NODE%%:*}"
  NODE_PORT="${NODE##*:}"

  if [ "$NODE_HOSTNAME" == "$HOSTNAME" ]; then
    echo "This host ($HOSTNAME) is part of the cluster."
    # sed -i 's/export PGPORT=5432/export PGPORT=$NODE_PORT/' pg16/pg16.env
    # source pg16/pg16.env
    SET_NAME="cpln_default"
    output=$(./nodectl status pgedge)
    pg_ctl_path="/opt/pgedge/pgedge/pg16/bin/pg_ctl"

    if ([ "$output" == "pgedge installed" ] || [ "$output" == "pgedge stopped" ]) && [ -f "$pg_ctl_path" ]; then
      # Already installed (restart): restore the password file and start Postgres
      cp /opt/pgedge/pgedge/pg16/.pgpass ~pgedge/.pgpass
      ./nodectl start pg16
      ./nodectl spock node-create $NODE_SHORT "host=$HOSTNAME port=$NODE_PORT user=pgedge dbname=$POSTGRES_DB" "${POSTGRES_DB}" || true
      ./nodectl spock repset-create "${SET_NAME}" "${POSTGRES_DB}" || true
      subscribe "$NODE_SHORT" "$CLUSTER_NODES" &
    else
      # First start: install pgEdge, then create the spock node and replication set
      ./nodectl install pgedge -U $POSTGRES_DB -P $POSTGRES_PASSWORD -d $POSTGRES_DB -p $NODE_PORT
      # backup the pass file
      cp ~pgedge/.pgpass /opt/pgedge/pgedge/pg16/.pgpass
      ./nodectl spock node-create $NODE_SHORT "host=$HOSTNAME port=$NODE_PORT user=pgedge dbname=$POSTGRES_DB" "${POSTGRES_DB}" || true
      ./nodectl spock repset-create "${SET_NAME}" "${POSTGRES_DB}" || true
      subscribe "$NODE_SHORT" "$CLUSTER_NODES" &
    fi
    break
  fi
done

# Wait for the backgrounded subscribe loop to finish
wait

/opt/pgedge/pgedge/pg16/bin/psql $POSTGRES_DB -p $NODE_PORT -c "SELECT * FROM spock.node;"

# Apply the event trigger that auto-adds newly created public tables to the cpln_default replication set
/opt/pgedge/pgedge/pg16/bin/psql $POSTGRES_DB -p $NODE_PORT -f /scripts/replication.sql

# Keep the container running
sleep 99999d
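For reference, a sketch of how `CLUSTER_NODES` is expected to look and of the subscriptions the loop above would create on the first node. The hostnames and port follow the README; the actual value is set in `values.yaml`, which is not part of this diff, so treat this as an assumption.

```bash
# Assumed shape: space-separated host:port entries, one per pgEdge location
CLUSTER_NODES="pgedge.pgedge01.cpln.local:5432 pgedge.pgedge02.cpln.local:5432 pgedge.pgedge03.cpln.local:5432"

# On the node in pgedge01 (short name n1), subscribe() would roughly run:
./nodectl spock sub-create sub_n1n2 "host=pgedge.pgedge02.cpln.local port=5432 user=pgedge dbname=$POSTGRES_DB" "$POSTGRES_DB"
./nodectl spock sub-create sub_n1n3 "host=pgedge.pgedge03.cpln.local port=5432 user=pgedge dbname=$POSTGRES_DB" "$POSTGRES_DB"
```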
examples/pgedge/helm/scripts/replication.sql

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
CREATE OR REPLACE FUNCTION spock_assign_repset()
RETURNS event_trigger AS $$
DECLARE obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
    LOOP
        IF obj.object_type = 'table' THEN
            IF obj.schema_name = 'public' THEN
                PERFORM spock.repset_add_table('cpln_default', obj.objid);
            ELSIF NOT obj.in_extension THEN
                PERFORM spock.repset_add_table('default', obj.objid);
            END IF;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER spock_assign_repset_trg
ON ddl_command_end
WHEN TAG IN ('CREATE TABLE', 'CREATE TABLE AS')
EXECUTE PROCEDURE spock_assign_repset();

ALTER EVENT TRIGGER spock_assign_repset_trg ENABLE ALWAYS;
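To illustrate the branching above: a table created in the `public` schema is added to the `cpln_default` replication set, while a table in any other non-extension schema goes to the `default` set. The table and schema names below are hypothetical.

```sql
-- public schema -> spock.repset_add_table('cpln_default', ...)
CREATE TABLE public.orders (id int PRIMARY KEY, note varchar(20));

-- non-public, non-extension schema -> spock.repset_add_table('default', ...)
CREATE SCHEMA IF NOT EXISTS audit;
CREATE TABLE audit.log_entries (id int PRIMARY KEY, msg text);
```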
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
{{- if .Values.pgadmin.enable }}
kind: workload
name: pgadmin
description: pgadmin
gvc: {{ .Values.pgadmin.gvc }}
spec:
  type: standard
  containers:
    - name: pgadmin4
      cpu: {{ .Values.pgadmin.cpu }}
      env:
        - name: PGADMIN_DEFAULT_EMAIL
          value: {{ .Values.pgadmin.email }}
        - name: PGADMIN_DEFAULT_PASSWORD
          value: {{ .Values.pgadmin.password }}
      image: dpage/pgadmin4
      inheritEnv: false
      memory: {{ .Values.pgadmin.memory }}
      ports:
        - number: 80
          protocol: http
  defaultOptions:
    autoscaling:
      maxConcurrency: 0
      maxScale: 3
      metric: cpu
      minScale: 1
      scaleToZeroDelay: 300
      target: 100
    capacityAI: false
    debug: false
    suspend: false
    timeoutSeconds: 30
  firewallConfig:
    external:
      inboundAllowCIDR:
        - {{ .Values.pgadmin.inboundCidr }}
      outboundAllowCIDR:
        - 0.0.0.0/0
      outboundAllowHostname: []
      outboundAllowPort: []
    internal:
      inboundAllowType: same-org
      inboundAllowWorkload: []
  localOptions: []
  supportDynamicTags: false
{{- end }}
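Since the whole workload is wrapped in `{{- if .Values.pgadmin.enable }}`, pgadmin can be left out at render time. For example, using Helm's standard `--set` flag together with the apply command from the README:

```bash
# Render the chart without the pgadmin workload
helm template . --set pgadmin.enable=false | cpln apply -f -
```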
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
kind: secret
name: pgedge-postgres
description: pgedge-postgres secret
type: dictionary
data:
  POSTGRES_PASSWORD: {{ .Values.postgres.password }}
  POSTGRES_DB: {{ .Values.postgres.dbname }}
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
kind: secret
name: pgedge-replication-sql
description: pgedge-replication-sql
tags: {}
type: opaque
data:
  encoding: plain
  payload: >
{{ .Files.Get "scripts/replication.sql" | indent 4 }}
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
kind: secret
name: pgedge-start-script
description: pgedge-start-script
tags: {}
type: opaque
data:
  encoding: plain
  payload: >
{{ .Files.Get "scripts/pgedge-start-script.sh" | indent 4 }}
