Successful database merge! #4759
Replies: 1 comment
-
Thanks for sharing this, it was very useful for a similar situation I encountered. To make it even easier for others to follow, here is a summary of what happened and what I did to solve it.

Situation: Teslamate1, running in a container, was stopped by an inadvertent shutdown of the host. Since I was remote, I could not restart the host. I started TeslaMate from the latest backup on a second host (Teslamate2), but lost the data between that backup and the shutdown.

How I resolved it:

1. Added the needed configuration to the docker-compose.yaml of both TeslaMate instances.
2. Reindexed Teslamate2.
3. Using pgAdmin, identified the last OK timestamp on Teslamate1 in the `states` table (teslamate1/Databases/teslamate/public/states, right click View/Edit Data, show last 100 rows) and used that as T1_CRASH_STR.
4. Looked up the last good ID in the `states` table (a state that both started and stopped before the crash) and used that as S1_OLD_LAST_STATE_ID. A rough SQL sketch of steps 2-4 follows after this list.
5. Ran the script.
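For anyone repeating this, here is roughly what steps 2-4 can look like in pgAdmin's query tool. This is only a sketch under my assumptions: I am guessing that "reindexed" means a plain Postgres `REINDEX`, and the `states` columns (`id`, `state`, `start_date`, `end_date`) are how I understand TeslaMate's schema, so double-check against your own database.

```sql
-- Sketch only; adjust names to your setup.

-- On Teslamate2: rebuild the indexes of the teslamate database
-- (assumption: this is what "reindexed" refers to).
REINDEX DATABASE teslamate;

-- On Teslamate1: list the most recent states. The last row whose start_date
-- AND end_date both lie before the crash gives S1_OLD_LAST_STATE_ID (its id)
-- and T1_CRASH_STR (its end_date).
SELECT id, state, start_date, end_date
FROM states
ORDER BY start_date DESC
LIMIT 100;
```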
Similar to eliluong, I missed some locations and geolocations, but these I copied from the complete backup of Teslamate2 (made before I removed the old data), and now everything looks OK. TeslaMate thinks it is missing some recorded mileage, but otherwise it is a complete dataset. Thanks for this!
-
This past week, while I was on a roadtrip, my server with TeslaMate crashed (`server_1`). I had a second server still running, so I spun up a new instance of TeslaMate (`server_2`) to save my trip. When I came back, I shut down the new TeslaMate and started the old one again. Then I decided to attempt to merge the databases.

I previously tried a different method where I would find the cutoff where new data started being written, insert the data from `server_2` into `server_1`, and have it update the `id` on all the tables and then apply offsets to the new and old data. But I ran into many problems updating data this way (database mumbo-jumbo I do not understand in depth) and decided not to pursue it.

The best method I found was to simply append the new data. The first step is to find the exact time the old server was restarted and new data started being written (use pgAdmin to investigate). Then I made a backup of `server_2` and `server_1`, and I created another instance of Postgres (`server_3`) to import all data from `server_1` into `server_3` (`server_3` is essentially `server_1`). You want to work with a copy of your data. Then copy all the data from that time forward to `server_2`, delete that new data from `server_3`, and finally copy all the data from `server_2` and append it to `server_3` (a rough SQL sketch of these last two steps is below).
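To make the delete-then-append idea concrete, here is a hedged sketch for a single table. The names are assumptions on my part: I am pretending `server_2`'s dump has been restored into a staging schema called `server_2_dump` on `server_3`, the timestamp literal is a placeholder for the restart time, and the column list is trimmed down. The attached script is what actually walks every table and keeps references such as `drive_id` consistent.

```sql
-- Sketch only, shown for positions; the real script covers all tables.
-- '2024-05-01 00:00:00+00' stands in for the moment server_1 was restarted.

-- 1) On server_3 (the copy of server_1), drop the rows written after the restart.
DELETE FROM positions
WHERE date >= '2024-05-01 00:00:00+00';

-- 2) Append everything recorded on server_2 (the trip plus the post-restart rows
--    copied into it earlier). Omitting id lets server_3 assign fresh ids, so they
--    cannot collide with existing rows.
INSERT INTO positions (car_id, date, latitude, longitude, speed, odometer)
SELECT car_id, date, latitude, longitude, speed, odometer
FROM server_2_dump.positions
ORDER BY date;
```

Appending by date and letting the database hand out new ids is what avoids the id-offset juggling that made the first approach so painful.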
I had Gemini develop a script for me, and it seems to work! The only issue was that some of the `start_address_id` and `end_address_id` values in `drives` did not update correctly (for the new data since the `server_1` restart). But these reference existing `address_id` values, so I only needed to look them up in the `drives` table on `server_1` and manually update them on `server_3`. You can figure out which drive is which by going to the Grafana Drives dashboard and finding the `drive_id` (see the sketch below for what such a manual fix looks like).
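For illustration, a manual fix like that is just a small UPDATE on `server_3`; all three numbers below are made-up placeholders, so look up the real `address_id` values on `server_1` and the matching drive `id` via Grafana first.

```sql
-- Hypothetical example: drive 1234 on server_3 gets the address ids
-- that server_1 shows for the same drive. Replace all three numbers.
UPDATE drives
SET start_address_id = 111,
    end_address_id   = 222
WHERE id = 1234;
```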
I had a copy of Grafana running on `server_3`, pointing to its Postgres, to check that the data looks okay, and I used pgAdmin to connect to all three Postgres databases.

Finally, back up the data on `server_3` and restore it to `server_1`. So far everything looks good; I will find out whether it breaks something the next time I drive. I hope this helps someone. I have attached the Python script. I am no database expert, so I am unlikely to be able to help with specific problems, but maybe the method I used will be helpful as a guide. And remember to back up!
teslamate_merge.zip