

I don’t buy it… Surely the kitchens in the White House have a whole security process to guarantee the procurement chain isn’t tainted. And even then, he could do that at any food place that doesn’t display his name.


I don’t know, can’t speak for the devs. It is weird that if you don’t implement these API calls, buried a bit deep in the wiki, you end up storing every meme and screenshot anybody posted on any instance for the rest of time. But I found these through issue reports, with many people asking for them to be implemented by default with, for instance, a simple setting “purge after X days” and a list of rooms to include or exclude from the history clean-up.


I purge media older than 2 weeks using these. Then I purge the largest rooms’ history events using these. Then I compress the DB using this.
It looks like this:
export PGPASSWORD=$DB_PASS
export MYTOKEN="mytokengoeshere"
export TIMESTAMP=$(date --date='2 weeks ago' '+%s%N' | cut -b1-13)
echo "DB size:"
psql --host core -U synapse_user -d synapse -c "SELECT pg_size_pretty(pg_database_size('synapse'));"
echo "Purging remote media"
curl \
-X POST \
--header "Authorization: Bearer $MYTOKEN" \
"http://localhost:8008/_synapse/admin/v1/purge_media_cache?before_ts=${TIMESTAMP}"
echo ''
echo 'Purging local media'
curl \
-X POST \
--header "Authorization: Bearer $MYTOKEN" \
"http://localhost:8008/_synapse/admin/v1/media/delete?before_ts=${TIMESTAMP}"
echo ''
echo 'Purging room Arch Linux'
export ROOM='!usBJpHiVDuopesfvJo:archlinux.org'
curl \
-X POST \
--header "Authorization: Bearer $MYTOKEN" \
--data-raw '{"purge_up_to_ts":'${TIMESTAMP}'}' \
"http://localhost:8008/_synapse/admin/v1/purge_history/${ROOM}"
echo ''
echo 'Purging room Arch Offtopic'
export ROOM='!zGNeatjQRNTWLiTpMb:archlinux.org'
curl \
-X POST \
--header "Authorization: Bearer $MYTOKEN" \
--data-raw '{"purge_up_to_ts":'${TIMESTAMP}'}' \
"http://localhost:8008/_synapse/admin/v1/purge_history/${ROOM}"
echo ''
echo 'Compressing db'
/home/northernlights/scripts/synapse_auto_compressor -p postgresql://$DB_USER:$DB_PASS@$DB_HOST/$DB_NAME -c 500 -n 100
echo "DB size:"
psql --host core -U synapse_user -d synapse -c "SELECT pg_size_pretty(pg_database_size('synapse'));"
unset PGPASSWORD
And periodically I run VACUUM;
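One detail worth calling out from the script above: the Synapse admin endpoints take before_ts / purge_up_to_ts as a Unix timestamp in milliseconds, which the date | cut line produces (epoch seconds plus nanoseconds, truncated to the first 13 digits). A quick sanity check:

```shell
# GNU date prints epoch seconds + nanoseconds (19 digits today);
# keeping the first 13 characters yields a millisecond timestamp
TIMESTAMP=$(date --date='2 weeks ago' '+%s%N' | cut -b1-13)
echo "${#TIMESTAMP}"   # prints 13 (a current-era ms timestamp is 13 digits)
```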


And, importantly, run the DB on PostgreSQL, not SQLite, and implement the regular DB maintenance steps explained in the wiki. I’ve been running mine like that in a small VM for about 6 months; I join large communities, run whatsapp, gmessages and discord bridges, and my DB is 400 MB.
Before, when I was still testing and hadn’t implemented the regular DB maintenance, it ballooned up to 10 GB in 4 months.
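To keep that maintenance from lapsing, the whole thing can be scheduled. A minimal sketch of a wrapper, assuming the purge script above is saved as synapse-maintenance.sh (the wrapper name and the env variables are my placeholders, not from the post):

```shell
#!/bin/sh
# Hypothetical weekly wrapper: run the purge/compress script, then a
# plain VACUUM to reclaim dead tuples. (VACUUM FULL rewrites tables and
# takes exclusive locks, so save that for planned downtime.)
set -eu
/home/northernlights/scripts/synapse-maintenance.sh
PGPASSWORD="$DB_PASS" psql --host "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -c "VACUUM;"
```

Dropping a line like `0 4 * * 0 /path/to/this-wrapper.sh` into crontab would run it every Sunday night.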

Lol, surreal.