Backup

There are various ways of backing up your wiki.

HTML Archive

The recommended solution is to click on the Administration link at the bottom of any page in your wiki and click on Export HTML. This should give you an archive containing all the files such that you can read it offline.

Producing this archive takes a while. You will need some patience. :)

The HTML files and the archive will get deleted whenever the maintenance job runs.
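
If you fetch the archive from the command line or just want to read it offline, unpacking it could look like the sketch below. The file name and archive format are only guesses here; use whatever the Export HTML link actually gives you.

mkdir wiki-html
tar xzf NameOfYourWiki-html.tar.gz -C wiki-html   # or unzip, if it turns out to be a zip file
# then point your browser at the HTML files inside wiki-html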

Text Files

The recommended solution is to click on the Administration link at the bottom of any page in your wiki and click on Export Text. This should give you an archive containing all the raw wiki text files, as a backup of your site.

The text files and the archive will get deleted whenever the maintenance job runs.

Once you have these text files, you can make a lot of changes offline and upload it all back to the wiki, if you know how to use certain command line tools like curl and a decent shell like bash. Just make your changes before restoring the “backup”.
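
As a sketch of what such an offline change could look like, here is a bulk search and replace across all the pages before uploading them again; the names are placeholders, of course.

# hypothetical example: rename something across every page before restoring
# (on BSD or macOS sed, use -i '' instead of -i)
for p in *.txt; do
  sed -i 's/OldName/NewName/g' "$p"
done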

Restoring a Backup

Here’s how to restore your wiki from the text files in the current directory:

for p in *.txt; do
  base=${p%.txt}                       # page name = file name without the .txt extension
  echo "$base"                         # some feedback while the loop runs
  curl --form frodo=1 --form title="$base" --form text="<$p" \
    "http://www.campaignwiki.org/wiki/NameOfYourWiki"
  sleep 5                              # avoid fetching too many pages in a row
done

The echo statement is just to give you some feedback. The sleep statement is required to prevent the wiki from telling you that you are fetching too many pages.

Restoring uploaded files such as images is a bit trickier. Here's a little shell script that skips the text files (assuming you already uploaded them using the simple for loop above) and uploads everything else as a file. This requires us to determine the MIME type of each file and strip the extension to get the target page name.

for p in *; do
  extension=${p##*.}
  if [[ $extension != "txt" ]]; then
    type=$(file --brief --mime-type "$p")   # MIME type to send along with the upload
    base=${p%.*}                            # target page name = file name without the extension
    echo "$base" "$type"
    curl --silent --form frodo=1 --form title="$base" --form "file=@$p;type=$type" \
      "http://www.campaignwiki.org/wiki/NameOfYourWiki" > /dev/null
    sleep 5                                 # again, don't fetch too many pages in a row
  fi
done
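
If you want to see what this loop is going to upload before letting it loose, a small dry run like the following prints each non-text file together with the MIME type it would be sent with, without talking to the wiki at all.

for p in *; do
  if [[ ${p##*.} != "txt" ]]; then
    echo "$p -> $(file --brief --mime-type "$p")"
  fi
done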

Huh?

Talk to me if the instructions above are utterly confusing. I’m sure we’ll figure something out. :) – Alex