
Commit 83d066c

Author: James Hutchby
Commit message: update documentation
1 parent dd481d8 commit 83d066c

1 file changed: +74 −63 lines

README.md

Lines changed: 74 additions & 63 deletions
@@ -10,33 +10,43 @@ It can be configured to run periodically using CloudWatch events.
## Quick start

1. Create an AWS lambda function:
   - Author from scratch
   - Runtime: Node.js 16.x
   - Architecture: x86_64
2. tab "Code" -> "Upload from" -> ".zip file":
   - Upload ([pgdump-aws-lambda.zip](https://github.com/jameshy/pgdump-aws-lambda/releases/latest))
   - tab "Configuration" -> "General Configuration" -> "Edit"
   - Timeout: 15 minutes
   - Save
3. Give your lambda permissions to write to S3:
   - tab "Configuration" -> "Permissions"
   - click the existing Execution role
   - "Add permissions" -> "Attach policies"
   - select "AmazonS3FullAccess" and click "Attach policies"
4. Test
   - Create new test event, e.g.:
     ```json
     {
         "PGDATABASE": "dbname",
         "PGUSER": "postgres",
         "PGPASSWORD": "password",
         "PGHOST": "host",
         "S3_BUCKET": "db-backups",
         "ROOT": "hourly-backups"
     }
     ```
   - _Test_ and check the output
5. Create a CloudWatch rule:
   - Event Source: Schedule -> Fixed rate of 1 hour
   - Targets: Lambda Function (the one created in step #1)
   - Configure input -> Constant (JSON text) and paste your config (as per previous step)

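As a sketch (not part of the upstream docs), the test event above can be sanity-checked from a shell before pasting it into the Lambda console; `python3` is assumed to be available:

```shell
# placeholder test-event payload, as in step 4 above
payload='{
  "PGDATABASE": "dbname",
  "PGUSER": "postgres",
  "PGPASSWORD": "password",
  "PGHOST": "host",
  "S3_BUCKET": "db-backups",
  "ROOT": "hourly-backups"
}'
# pretty-print (and thereby validate) the JSON; a typo aborts here
echo "$payload" | python3 -m json.tool
```

The same JSON can also be passed from the CLI via `aws lambda invoke --payload`, if you prefer not to use the console.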
#### File Naming

@@ -55,13 +65,13 @@ You can add an encryption key to your event, e.g.
```json
{
    "PGDATABASE": "postgres",
    "PGUSER": "postgres",
    "PGPASSWORD": "password",
    "PGHOST": "host",
    "S3_BUCKET": "db-backups",
    "ROOT": "hourly-backups",
    "ENCRYPT_KEY": "c0d71d7ae094bdde1ef60db8503079ce615e71644133dc22e9686dc7216de8d0"
}
```
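The `ENCRYPT_KEY` above is 64 hexadecimal characters, i.e. a 256-bit key. As a sketch (assuming `openssl` is installed), a fresh key can be generated with:

```shell
# print a random 256-bit key as 64 hex characters
openssl rand -hex 32
```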

@@ -88,14 +98,13 @@ Your context may require that you use IAM-based authentication to log into the P
Support for this can be enabled by making your CloudWatch Event look like this.

```json
{
    "PGDATABASE": "dbname",
    "PGUSER": "postgres",
    "PGHOST": "host",
    "S3_BUCKET": "db-backups",
    "ROOT": "hourly-backups",
    "USE_IAM_AUTH": true
}
```

@@ -111,67 +120,69 @@ NOTE: the execution role for the Lambda function must have access to GetSecretVa
111120
Support for this can be enabled by setting the SECRETS_MANAGER_SECRET_ID, so your Cloudwatch Event looks like this:
112121

113122
```json
114-
115123
{
116-
"SECRETS_MANAGER_SECRET_ID": "my/secret/id",
117-
"S3_BUCKET" : "db-backups",
118-
"ROOT": "hourly-backups"
124+
"SECRETS_MANAGER_SECRET_ID": "my/secret/id",
125+
"S3_BUCKET": "db-backups",
126+
"ROOT": "hourly-backups"
119127
}
120128
```

If you supply `SECRETS_MANAGER_SECRET_ID`, you can omit the `PG*` keys, and they will be fetched from your SecretsManager secret value instead with the following mapping:

| Secret Value | PG-Key     |
| ------------ | ---------- |
| username     | PGUSER     |
| password     | PGPASSWORD |
| dbname       | PGDATABASE |
| host         | PGHOST     |
| port         | PGPORT     |

You can provide overrides in your event to any `PG*` keys, as event parameters will take precedence over secret values.
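For illustration (all values are placeholders), a secret value matching that mapping could be stored as:

```json
{
    "username": "postgres",
    "password": "password",
    "dbname": "dbname",
    "host": "host",
    "port": 5432
}
```

Creating it with the AWS CLI would look like `aws secretsmanager create-secret --name my/secret/id --secret-string file://secret.json`; the secret name must match your `SECRETS_MANAGER_SECRET_ID`.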

## Developer

#### Bundling a new `pg_dump` binary
1. Launch an EC2 instance with the Amazon Linux 2 AMI
2. Connect via SSH and:
```bash
# install postgres 15
sudo amazon-linux-extras install epel

sudo tee /etc/yum.repos.d/pgdg.repo<<EOF
[pgdg15]
name=PostgreSQL 15 for RHEL/CentOS 7 - x86_64
baseurl=https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-7-x86_64
enabled=1
gpgcheck=0
EOF

sudo yum install postgresql15 postgresql15-server

exit
```

#### Download the binaries

```bash
scp ec2-user@ec2-18-194-222-2.eu-central-1.compute.amazonaws.com:/usr/bin/pg_dump ./bin/postgres-15.0/pg_dump
scp ec2-user@ec2-18-194-222-2.eu-central-1.compute.amazonaws.com:/usr/lib64/{libcrypt.so.1,libnss3.so,libsmime3.so,libssl3.so,libsasl2.so.3,liblber-2.4.so.2,libldap_r-2.4.so.2} ./bin/postgres-15.0/
scp ec2-user@ec2-18-194-222-2.eu-central-1.compute.amazonaws.com:/usr/pgsql-15/lib/libpq.so.5 ./bin/postgres-15.0/libpq.so.5
```

3. To use the new postgres binary, pass `PGDUMP_PATH` in the event:
```json
{
    "PGDUMP_PATH": "bin/postgres-15.0"
}
```

#### Creating a new function zip

`npm run makezip`

#### Contributing
