Question

Having trouble with restores from a custom Kanister blueprint for PostgreSQL


Userlevel 1

Hi, I have several PostgreSQL deployments whose databases need to be backed up in descending alphabetical order (the database names differ between deployments, but the order is always the same). The backup job works fine, but I’m having trouble with the restore.

This is my backup job:

backup_folder='dbs'
backup_archive='dbs.tar'
mkdir "${backup_folder}"

# list every non-template user database in descending alphabetical order;
# -t drops headers, tr strips the line endings so the loop word-splits on spaces
for db in $(psql -U "${PGUSER}" -t -c "SELECT datname FROM pg_database WHERE datname NOT IN ('sysdba', 'postgres') AND datistemplate = false ORDER BY datname DESC;" | tr -d "\n" | tr -d "\r"); do
  backup_file="${db}.pgdump"
  echo "Backing up ${db} to file [${backup_file}]"
  # custom-format (-Fc) dump; --create/--clean are recorded in the archive
  # so pg_restore can drop and recreate the database later
  pg_dump --create --clean -Fc -U "${PGUSER}" "${db}" > "${backup_folder}/${backup_file}"
done

tar cvf "${backup_archive}" "${backup_folder}"

kando location push --profile '{{ toJson .Profile }}' --path "${backup_archive}" --output-name "kopiaOutput" "${backup_archive}"

I create dumps of all databases, put them into a directory, create a tar archive and upload that. As I said, this works without issues.

When I try restoring with the following script, I run into a nil pointer dereference:

backup_folder='dbs'
backup_archive='dbs.tar'

kando location pull --profile '{{ toJson .Profile }}' --path "${backup_archive}" --kopia-snapshot "${kopia_snap}" "${backup_archive}"

tar xvf "${backup_archive}"

cd "${backup_folder}"

# note: * expands in ascending alphabetical order; reverse the list if the
# descending order matters on restore as well
for file in *; do
  echo "restoring database from ${file}"
  # the dump is a positional argument, and -d gives pg_restore a database to
  # connect to so --clean/--create can drop and recreate the target
  # (-f would redirect pg_restore's *output* to a file, not read from one)
  pg_restore --clean --create -U "${PGUSER}" -d postgres -Fc "${file}"
done

This is the error:

2023-01-04 15:45:19 {"log":"panic: runtime error: invalid memory address or nil pointer dereference\n","stream":"stderr","time":"2023-01-04T14:45:18.440929652Z"}
2023-01-04 15:45:19 {"log":"[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x21c568d]\n","stream":"stderr","time":"2023-01-04T14:45:18.440954423Z"}
2023-01-04 15:45:19 {"log":"\n","stream":"stderr","time":"2023-01-04T14:45:18.440961353Z"}
2023-01-04 15:45:19 {"log":"goroutine 169 [running]:\n","stream":"stderr","time":"2023-01-04T14:45:18.440966523Z"}
2023-01-04 15:45:19 {"log":"github.com/kopia/kopia/snapshot/restore.write({0xc0005213c4?, 0x0?}, {0x2ddaf40?, 0xc000122ae0}, 0x27c6fff?, 0x0)\n","stream":"stderr","time":"2023-01-04T14:45:18.440971114Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/github.com/kastenhq/kopia@v0.0.0-20221209153538-acda9e36276f/snapshot/restore/local_fs_output.go:373 +0x10d\n","stream":"stderr","time":"2023-01-04T14:45:18.440975804Z"}
2023-01-04 15:45:19 {"log":"github.com/kopia/kopia/snapshot/restore.(*FilesystemOutput).copyFileContent(0xc0004e09f0, {0x2dda920, 0xc00005c070}, {0xc0005213c4, 0x4}, {0x2de6568, 0xc0003bd788})\n","stream":"stderr","time":"2023-01-04T14:45:18.440980974Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/github.com/kastenhq/kopia@v0.0.0-20221209153538-acda9e36276f/snapshot/restore/local_fs_output.go:411 +0x35e\n","stream":"stderr","time":"2023-01-04T14:45:18.440985794Z"}
2023-01-04 15:45:19 {"log":"github.com/kopia/kopia/snapshot/restore.(*FilesystemOutput).WriteFile(0xc0004e09f0, {0x2dda920, 0xc00005c070}, {0x0, 0x0}, {0x2de6568?, 0xc0003bd788?})\n","stream":"stderr","time":"2023-01-04T14:45:18.440997675Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/github.com/kastenhq/kopia@v0.0.0-20221209153538-acda9e36276f/snapshot/restore/local_fs_output.go:153 +0x2f0\n","stream":"stderr","time":"2023-01-04T14:45:18.441002625Z"}
2023-01-04 15:45:19 {"log":"github.com/kopia/kopia/snapshot/restore.(*copier).copyEntryInternal(0xc000206690, {0x2dda920, 0xc00005c070}, {0x2de5e08?, 0xc0003bd788}, {0x0, 0x0}, 0x0, 0x0, 0x298c600)\n","stream":"stderr","time":"2023-01-04T14:45:18.441007375Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/github.com/kastenhq/kopia@v0.0.0-20221209153538-acda9e36276f/snapshot/restore/restore.go:208 +0x256\n","stream":"stderr","time":"2023-01-04T14:45:18.441012365Z"}
2023-01-04 15:45:19 {"log":"github.com/kopia/kopia/snapshot/restore.(*copier).copyEntry(0xc000206690, {0x2dda920, 0xc00005c070}, {0x2de5e08?, 0xc0003bd788}, {0x0, 0x0}, 0x52d700?, 0xc0?, 0x298c600)\n","stream":"stderr","time":"2023-01-04T14:45:18.441034837Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/github.com/kastenhq/kopia@v0.0.0-20221209153538-acda9e36276f/snapshot/restore/restore.go:177 +0x44c\n","stream":"stderr","time":"2023-01-04T14:45:18.441040197Z"}
2023-01-04 15:45:19 {"log":"github.com/kopia/kopia/snapshot/restore.Entry.func2()\n","stream":"stderr","time":"2023-01-04T14:45:18.441045617Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/github.com/kastenhq/kopia@v0.0.0-20221209153538-acda9e36276f/snapshot/restore/restore.go:112 +0x46\n","stream":"stderr","time":"2023-01-04T14:45:18.441050037Z"}
2023-01-04 15:45:19 {"log":"github.com/kopia/kopia/internal/parallelwork.(*Queue).Process.func1()\n","stream":"stderr","time":"2023-01-04T14:45:18.441055487Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/github.com/kastenhq/kopia@v0.0.0-20221209153538-acda9e36276f/internal/parallelwork/parallel_work_queue.go:82 +0x72\n","stream":"stderr","time":"2023-01-04T14:45:18.441060178Z"}
2023-01-04 15:45:19 {"log":"golang.org/x/sync/errgroup.(*Group).Go.func1()\n","stream":"stderr","time":"2023-01-04T14:45:18.441065518Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75 +0x64\n","stream":"stderr","time":"2023-01-04T14:45:18.441070008Z"}
2023-01-04 15:45:19 {"log":"created by golang.org/x/sync/errgroup.(*Group).Go\n","stream":"stderr","time":"2023-01-04T14:45:18.441074908Z"}
2023-01-04 15:45:19 {"log":"\u0009/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:72 +0xa5\n","stream":"stderr","time":"2023-01-04T14:45:18.441079819Z"}

Weirdly, this only seems to happen with a tar archive. I tried using the unarchived folder instead; in that case the download works, but I run into a different issue: the actual dumps don’t get downloaded. Instead I get placeholder files ($db.pgdump.kopia-entry) that I can’t restore from.

I looked into this and found the Kopia docs calling it a shallow restore, but I couldn’t find a way to change the shallowness settings with kando pull. According to the Kopia docs, the default is a full “deep restore”, so I’m unsure why kando would give me placeholders here.
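Until the shallow-restore behaviour is clearer, a guard like this (untested sketch) would at least make the restore fail fast instead of feeding placeholders to pg_restore:

# sketch: run inside the extracted folder; abort if kando pull left kopia
# placeholder entries instead of real dumps, then confirm each file is a
# readable pg_dump archive (--list only reads the TOC, it restores nothing)
if ls ./*.kopia-entry > /dev/null 2>&1; then
  echo "pull produced shallow-restore placeholders, aborting" >&2
  exit 1
fi
for file in *; do
  pg_restore --list "${file}" > /dev/null || { echo "unreadable dump: ${file}" >&2; exit 1; }
done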

So as I see it I have three options:

  1. Fix the nil pointer when using tar archives
  2. Get real files instead of placeholders when NOT using tar archives
  3. Find a different way to do the entire thing (see the sketch below)

Unfortunately I’m all out of ideas, so I’m turning to you for help.
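The closest thing I have to option 3 is the pattern from the Kanister example blueprints, which pipe data through kando’s stdin/stdout ("-") instead of pushing a file from disk. I haven’t tried it with a tar stream against a Kopia profile yet, so this is only a sketch:

# backup: stream the tar straight into kando; "-" reads from stdin
tar cf - "${backup_folder}" \
  | kando location push --profile '{{ toJson .Profile }}' --path "${backup_archive}" --output-name "kopiaOutput" -

# restore: stream it back out; "-" writes to stdout, tar unpacks on the fly
kando location pull --profile '{{ toJson .Profile }}' --path "${backup_archive}" --kopia-snapshot "${kopia_snap}" - \
  | tar xf -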


4 comments

Userlevel 7
Badge +3

Tough question; I’d suggest logging a support call.

Userlevel 7
Badge +7

@jaiganeshjk 

Userlevel 6
Badge +2

@schnapsidee Thanks for posting your question.

This might need some effort, including digging in and recreating the error locally in a test environment, to find out whether there is a limitation here.

Would you be able to open a case with us through https://my.veeam.com/?

If you don’t have an active enterprise subscription, you could select `Kasten by Veeam K10 Trial` under the products category.

Also, please upload the blueprint you are using (after obfuscating sensitive information), along with debug logs, to the case.

Once we find the root cause of this issue through the case, we can write up a summary and add it as an answer here.

Userlevel 1

Thank you, I’ve created a case with logs and blueprint attached.
