Sunday, March 16, 2014

Installing IPVanish openvpn on a E4200v1 running DD-WRT

The idea of this setup is to send a specific device through the IPVanish tunnel while still routing the other devices through the regular internet connection.
The reason is that the internet connection through the VPN is slower, so you may not want to send all your traffic through the tunnel.


The script below was tested on an E4200v1 Linksys router running
Firmware: DD-WRT v24-sp2 (10/31/12) mega

The script needs to run as a startup script (Administration -> Commands -> startup).
One special feature of this script is that if anything happens to openvpn, the router will not fall back to the default connection but will stop routing instead. As a result, you will never send packets through your regular internet connection if openvpn goes down for any reason: if a packet is delivered, it is delivered through openvpn.

#!/bin/sh

USERNAME="XXXXX"
PASSWORD="XXXXX"
VPNHOST="sto-a01.ipvanish.com"
IPTOVPN="192.168.1.102"

#### DO NOT CHANGE below this line unless you know exactly what you're doing ####

CA_CRT='-----BEGIN CERTIFICATE-----
MIIErTCCA5WgAwIBAgIJAMYKzSS8uPKDMA0GCSqGSIb3DQEBBQUAMIGVMQswCQYD
VQQGEwJVUzELMAkGA1UECBMCRkwxFDASBgNVBAcTC1dpbnRlciBQYXJrMREwDwYD
VQQKEwhJUFZhbmlzaDEVMBMGA1UECxMMSVBWYW5pc2ggVlBOMRQwEgYDVQQDEwtJ
UFZhbmlzaCBDQTEjMCEGCSqGSIb3DQEJARYUc3VwcG9ydEBpcHZhbmlzaC5jb20w
HhcNMTIwMTExMTkzMjIwWhcNMTcwMTEwMTkzMjIwWjCBlTELMAkGA1UEBhMCVVMx
CzAJBgNVBAgTAkZMMRQwEgYDVQQHEwtXaW50ZXIgUGFyazERMA8GA1UEChMISVBW
YW5pc2gxFTATBgNVBAsTDElQVmFuaXNoIFZQTjEUMBIGA1UEAxMLSVBWYW5pc2gg
Q0ExIzAhBgkqhkiG9w0BCQEWFHN1cHBvcnRAaXB2YW5pc2guY29tMIIBIjANBgkq
hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAt9DBWNr/IKOuY3TmDP5x7vYZR0DGxLbX
U8TyAzBbjUtFFMbhxlHiXVQrZHmgzih94x7BgXM7tWpmMKYVb+gNaqMdWE680Qm3
nOwmhy/dulXDkEHAwD05i/iTx4ZaUdtV2vsKBxRg1vdC4AEiwD7bqV4HOi13xcG9
71aQ55Mj1KeCdA0aNvpat1LWx2jjWxsfI8s2Lv5Fkoi1HO1+vTnnaEsJZrBgAkLX
pItqP29Lik3/OBIvkBIxlKrhiVPixE5qNiD+eSPirsmROvsyIonoJtuY4Dw5K6pc
NlKyYiwo1IOFYU3YxffwFJk+bSW4WVBhsdf5dGxq/uOHmuz5gdwxCwIDAQABo4H9
MIH6MB0GA1UdDgQWBBRL/RQliR3nwXCD1/afERwlThnurjCBygYDVR0jBIHCMIG/
gBRL/RQliR3nwXCD1/afERwlThnurqGBm6SBmDCBlTELMAkGA1UEBhMCVVMxCzAJ
BgNVBAgTAkZMMRQwEgYDVQQHEwtXaW50ZXIgUGFyazERMA8GA1UEChMISVBWYW5p
c2gxFTATBgNVBAsTDElQVmFuaXNoIFZQTjEUMBIGA1UEAxMLSVBWYW5pc2ggQ0Ex
IzAhBgkqhkiG9w0BCQEWFHN1cHBvcnRAaXB2YW5pc2guY29tggkAxgrNJLy48oMw
DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAho5ynpvtXISz3neqGXpL
BBlOM35pd1ZSNHLCb2yHQwAjZbfYqfX2MDs9ytH4Cf1OfaVqwe777QyyIC2XR2QK
kw4c2hCT8wPzWhmkLx8Q+jnKdOKkdz+L8+Ji9/vjtaFOcYjMDalI6CbjBiuMFWhB
IzOaYljmA2UeQCVIz9aW80BC8+sLQ6oeWVnFjx7zqK1gbbc2bNuy3slOMdyoEj2m
hkxfiffuHKV+GQoR7tFIr3M7KFFwYgkXeyLh1Pc0rZu7dGe4fUAbR1okB1DgelBd
n6rWTZ8XcNzT/YngtH4bXB9DM7pKWpDWc94va4hFrGgaOxjE861TdoDqHaMO9bW+
Pg==
-----END CERTIFICATE-----
'

OPVPNENABLE=`nvram get openvpncl_enable | awk '$1 == "0" {print $1}'`

if [ "$OPVPNENABLE" != 0 ]; then
nvram set openvpncl_enable=0
nvram commit
fi

sleep 10
mkdir /tmp/ipvanish; cd /tmp/ipvanish
echo -e "$USERNAME\n$PASSWORD" > userpass.conf
echo "$CA_CRT" > ca.crt
echo "$IPTOVPN" > policy_ips

# pre-emptively block the policy-routed clients; route-up.sh lifts these rules once the tunnel is up
for IP in `cat /tmp/ipvanish/policy_ips` ; do
iptables -A OUTPUT -d $IP -j DROP
done
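
# route-up.sh is executed by openvpn once the tunnel comes up: it NATs traffic over tun1,
# adds a default route to the VPN peer in routing table 10 and moves the policy-routed
# clients onto that table (removing their DROP rules)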

echo "#!/bin/sh
iptables -I INPUT -i tun1 -j logaccept
iptables -I POSTROUTING -t nat -o tun1 -j MASQUERADE
ifconfig_remote=\`ifconfig tun1 | sed -rn 's/.*r:([^ ]+) .*/\1/p'\`
ip route add default via \$ifconfig_remote table 10
echo \"ip route add default via \$ifconfig_remote table 10\" > toto
for IP in \`cat /tmp/ipvanish/policy_ips\` ; do
ip rule add from \$IP table 10
iptables -D OUTPUT -d \$IP -j DROP
done
" > route-up.sh
echo "#!/bin/sh
iptables -D INPUT -i tun1 -j logaccept
iptables -D POSTROUTING -t nat -o tun1 -j MASQUERADE
ip route flush table 10
for IP in \`cat /tmp/ipvanish/policy_ips\` ; do
iptables -A OUTPUT -d \$IP -j DROP
done
" > route-down.sh
chmod 644 ca.crt; chmod 600 userpass.conf; chmod 700 route-up.sh route-down.sh
sleep 10
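# write the openvpn client configuration; the dhcp-option DNS lines push Google public DNS (see the EDIT note below)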
echo "client
ca /tmp/ipvanish/ca.crt
management 127.0.0.1 5001
management-log-cache 50
verb 4
mute 3
log-append /var/log/openvpncl
writepid /var/run/openvpncl.pid
resolv-retry infinite
nobind
persist-key
persist-tun
script-security 2
mtu-disc yes
dev tun1
proto tcp-client
cipher aes-256-cbc
auth sha256
remote $VPNHOST 443
comp-lzo yes
redirect-private def1
route-noexec
tls-client
tun-mtu 1500
tls-cipher AES256-SHA
persist-remote-ip
keysize 256
tls-remote $VPNHOST
auth-user-pass /tmp/ipvanish/userpass.conf
script-security 3 system
dhcp-option DNS 8.8.8.8
dhcp-option DNS 8.8.4.4
" > ipvanish.conf
(/tmp/ipvanish/route-up.sh; killall openvpn; openvpn --config /tmp/ipvanish/ipvanish.conf --route-up /tmp/ipvanish/route-up.sh --down /tmp/ipvanish/route-down.sh) &
exit



EDIT 14/03/2015: added the Google public DNS servers in there as I was having DNS issues.
EDIT 18/11/2015: added /tmp/ipvanish/route-up.sh; on the last line as it looks like under some circumstances route-up was not executed, resulting in packets being sent outside of the VPN.
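
To check that the policy routing is actually in place once the tunnel is up, the commands below (run on the router over SSH or telnet) are a quick sanity check. They assume the client IP and table number used in the script above; the exact ip syntax and output may vary slightly depending on the busybox build:
ip rule                                         # should list "from 192.168.1.102 lookup 10"
ip route show table 10                          # should show a default route via the tun1 peer address
iptables -L OUTPUT -n | grep 192.168.1.102      # the DROP rule should be gone while the VPN is up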

Sunday, April 15, 2012

How to install rtorrent 0.9.1 and use magnet links on an Iomega Storcenter ix4-200d


This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! It should work (again, it is unsupported) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995

I explained in a previous post why I wanted to use rtorrent instead of the torrent client supplied with the storcenter.
There is a new development: thepiratebay.se switched to magnet links only for file sharing, and the version of rtorrent previously installed did not support magnets....
The good news is that the new rtorrent (0.9.1) does support IP filtering natively!

The problem is that it was difficult to compile for the storcenter as the gcc toolchain available on the storcenter is very old... but no worries, I compiled it for you!

1. SSH into your NAS
See my other post: How to ssh into your Iomega StorCenter ix4-200d

2. Install the software
See my other post here to set up at the minimum ipkg and ipkg-opt. Then:
ipkg-opt install lighttpd
ipkg-opt install screen

Then, get my pre-compiled version of rtorrent-0.9.1 (works on the Iomega Storcenter ix4-200d).
If you want to compile it yourself for a different architecture, you might want to look at section 3 of my other post How to solve the "undefined reference to '__sync_sub_and_fetch_4'" compilation problem.
Warning: this is going to overwrite the following files:
/opt/etc/rtorrent.conf
/opt/etc/init.d/S99rtorrent
/opt/lib/libtorrent.14.0.3
/opt/lib/libtorrent.14
/opt/bin/rtorrent
Make sure you saved everything that needed to be saved before running it!
cd /opt/tmp/
wget http://dl.dropbox.com/u/50398581/rtorrent-0.9.1/rtorrent-0.9.1-package.tar.gz
cd /
tar -xvf /opt/tmp/rtorrent-0.9.1-package.tar.gz


If you don't want to connect remotely to rtorrent to manage it from your computer, you can skip the rest of this section...
Install nTorrent on your computer http://code.google.com/p/ntorrent/
Install xml-rpc on the NAS:
ipkg install optware-devel
ipkg install libcurl-dev
cd /opt/tmp/
svn checkout http://xmlrpc-c.svn.sourceforge.net/svnroot/xmlrpc-c/stable xmlrpc-c    
cd xmlrpc-c/
./configure --prefix=/opt
make
make install
Note: you can choose something other than nTorrent. Please give me your feedback in the comments if you do.



3. Configure the software

Fix paths and other info in rtorrent.conf.
This is also where you can disable remote access: if you don't want it, comment out the line:
scgi_port = localhost:5000

If you want to use remote access, you need to edit the lighttpd configuration:
vi /opt/etc/lighttpd/lighttpd.conf
between
#                               "mod_rrdtool",
and
"mod_accesslog" )
add
"mod_scgi",
and at the end add:
scgi.server = (
"/RPC2" => ( 
    "127.0.0.1" => (
        "host" => "127.0.0.1",
        "port" => 5000,
        "check-local" => "disable"
        )
    )
)
Security warning: if you follow these steps, anybody who can access port 8081 of your NAS will be able to send commands to rtorrent! Make sure that this port is only accessible from your local network.

4. IP filtering
a. download the filter file
IP filtering support is built into rtorrent-0.9.1, but you still need to set up the download of the filter files:
vi /etc/cron.daily/rtorrent_ipfilter
#!/bin/sh
cd /mnt/pools/A/A0/torrents/rtorrent/ipfilter/
wget http://list.iblocklist.com/?list=bt_level1
mv index.html\?list\=bt_level1 level1new.gz
gunzip level1new.gz
sed 's/^.*:\([^:]*\)$/\1/g' level1new | grep -v '^#' > level1new_2
rm level1
mv level1new level1
rm level1_2
mv level1new_2 level1_2
then:
mkdir /mnt/pools/A/A0/torrents/rtorrent/ipfilter/
cd /etc/cron.daily/
chmod a+x rtorrent_ipfilter
./rtorrent_ipfilter

b. if not already done, make sure the cron daemon is started at boot
The cron daemon is not started at boot by default....

You can start it manually:
/etc/init.d/cron start

But to have it start up every time at boot, we need to add the line:
/etc/init.d/cron start >> /opt/init-opt.log
to our /opt/init-opt.sh script.

See my other post How to run a program at boot on the Iomega Storcenter NAS to see how it works!


5. Test your setup
/opt/bin/rtorrent -n -o import=/opt/etc/rtorrent.conf
if you get:
rtorrent: Fault occured while inserting xmlrpc call.
did you install xmlrpc correctly? is ld.so.conf updated correctly? did you run ldconfig?
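If in doubt, adding /opt/lib to the dynamic linker configuration and refreshing the cache usually fixes it. This assumes your firmware uses a standard ld.so.conf/ldconfig setup and that xmlrpc-c was installed with --prefix=/opt as above:
echo /opt/lib >> /etc/ld.so.conf
ldconfig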

to connect to the running instance:
/opt/bin/screen -r rtorrent
and press Ctrl-a d to detach, or just close the terminal (PuTTY) to exit.

For remote access: you can start lighttpd on the NAS
/opt/etc/init.d/S80lighttpd start
and then start nTorrent on your computer and connect to your NAS port 8081 (by default) on path /RPC2.




6. Get rtorrent to start automatically on reboot
Follow the tutorial How to run a program at boot on the Iomega Storcenter. You just need to add the following lines to the script:
/opt/etc/init.d/S80lighttpd start >> /opt/init-opt.log
/opt/etc/init.d/S99rtorrent start >> /opt/init-opt.log
If you have another brand of NAS (or a regular linux OS), just try to link the startup scripts to /etc/rc2.d/ like you would normally do on a linux box:
ln -s /opt/etc/init.d/S80lighttpd /etc/rc2.d/S80lighttpd
ln -s /opt/etc/init.d/S99rtorrent /etc/rc2.d/S99rtorrent


7. How to deal with magnet links
I suggest creating a /wherever/rtorrent/magnets folder alongside /wherever/rtorrent/torrents and /wherever/rtorrent/download.
And then:
cd /whereever/rtorrent
vi allmagnets.sh 
and add:
#!/bin/bash

for f in magnets/*
do
 echo "Processing $f"
 CONTENT=`cat "$f" | sed s/^URL=// | grep -v '\[InternetShortcut\]' | tr -d '\r'`
 [[ "$CONTENT" =~ xt=urn:btih:([^&/]+) ]] && echo "d10:magnet-uri${#CONTENT}:${CONTENT}e" > "torrents/meta-${BASH_REMATCH[1]}.torrent" && rm "$f"
done
chmod a+x allmagnets.sh
And then, add a cron to run this program every 5 minutes:
vi /etc/cron.d/magnets
and add:
# convert magnets to torrents every 5 minutes
1,6,11,16,21,26,31,36,41,46,51,56 *    * * *   root    cd /mnt/pools/A/A0/data/rtorrent/ && /mnt/pools/A/A0/data/rtorrent/allmagnets.sh 
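
To test the conversion by hand before relying on the cron job, you can drop a magnet link into the magnets folder and run the script once. The path matches the cron entry above (adapt it to your /wherever path) and the info-hash is just a placeholder:
cd /mnt/pools/A/A0/data/rtorrent/
echo "magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567&dn=test" > magnets/test.magnet
./allmagnets.sh
ls torrents/    # a meta-<hash>.torrent file should now be waiting for rtorrent to pick up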

Make sure the cron daemon is running!!! (see point 4.b above)

Enjoy!

How to solve the "undefined reference to '__sync_sub_and_fetch_4'" compilation problem

If you ran into the following compilation problems:
undefined reference to '__sync_sub_and_fetch_4' problem
or with any of the following functions:
__sync_fetch_and_add, __sync_fetch_and_sub, __sync_fetch_and_or, __sync_fetch_and_and, __sync_fetch_and_xor, __sync_fetch_and_nand,
__sync_add_and_fetch, __sync_sub_and_fetch, __sync_or_and_fetch, __sync_and_and_fetch, __sync_xor_and_fetch, __sync_nand_and_fetch,
__sync_val_compare_and_swap, __sync_bool_compare_and_swap,
__sync_lock_test_and_set, __sync_lock_release

Chances are that you are trying to compile for ARM (or an exotic architecture) and your GCC version is too old compared to the source code you are trying to compile!
There is an easy fix: upgrade your GCC.

If you can't upgrade your GCC for any reason (for example you are on an embedded hardware you don't have full control on), follow the steps below!

1. Find the source code file that's right for the architecture you are trying to compile on
You are going to find it inside a GCC source tarball.
To find it, go into your gcc source gcc/config and do
grep '__sync_fetch' */*
to find the right file.
For ARM, it is:
gcc/config/arm/linux-atomic.c

2. Compile the source code file and link in to the program you are compiling
libtool --tag=CC --mode=compile gcc -g -O2 -MT linux-atomic.lo -MD -MP -MF linux-atomic.Tpo -c -o linux-atomic.lo linux-atomic.c
libtool --tag=CC --mode=link gcc -g -O2 -o liblinux-atomic.la linux-atomic.lo
And add liblinux-atomic.la in the Makefile so it is linked to the other .la files (into a .so or a program).

3. Example to compile libtorrent 0.13.1 and rtorrent 0.9.1 for ARM with GCC 4.2.3
If you wonder, this is to compile rtorrent for my Iomega ix4-200d storcenter NAS.

Compile libtorrent:
PATH=$PATH:/opt/bin
wget http://libtorrent.rakshasa.no/downloads/libtorrent-0.13.1.tar.gz
tar -xvf libtorrent-0.13.1.tar.gz
cd libtorrent-0.13.1
vi configure
OPENSSL_CFLAGS='-I/opt/include/'
OPENSSL_LIBS='-L/opt/lib/ -lssl'
STUFF_LIBS='-L/opt/lib/ -lsigc-2.0'
STUFF_CFLAGS='-I/opt/usr/include/sigc++-2.0/ -I/opt/usr/lib/sigc++-2.0/include'

./configure --prefix=/opt/

Add linux-atomic:
cd src
wget http://dl.dropbox.com/u/50398581/rtorrent-0.9.1/linux_atomic.c
libtool --tag=CC --mode=compile gcc -g -O2 -MT linux_atomic.lo -MD -MP -MF linux_atomic.Tpo -c -o linux_atomic.lo linux_atomic.c
vi /opt/bin/libtool

And if necessary, modify libtool for the following entries:
AR="ar"
RANLIB="ranlib"
CC="g++"

libtool --tag=CC   --mode=link gcc  -g -O2  -o liblinux_atomic.la linux_atomic.lo
vi Makefile

add
liblinux_atomic.la
at the end of libtorrent_la_LIBADD

cd ..
make
strip .libs/libtorrent.so
make install


Compile rtorrent:
wget http://libtorrent.rakshasa.no/downloads/rtorrent-0.9.1.tar.gz
tar -xvf rtorrent-0.9.1.tar.gz
cd rtorrent-0.9.1
vi configure
And add:
sigc_LIBS='-L/opt/lib/ -lsigc-2.0 -L/lib/'
sigc_CFLAGS='-I/opt/usr/include/sigc++-2.0/ -I/opt/usr/lib/sigc++-2.0/include -I/opt/include/ncurses'
libcurl_LIBS='-L/opt/lib/ -lcurl'
libcurl_CFLAGS='-I/opt/include/'
libtorrent_LIBS='-L/opt/lib/ -ltorrent'
libtorrent_CFLAGS='-I/opt/include/'

Then:
./configure --prefix=/opt/ --with-xmlrpc-c=/opt/bin/xmlrpc-c-config  --with-ncurses=yes LDFLAGS='-L/opt/lib/' CPPFLAGS='-I/opt/include -I/opt/include/ncurses/' 
cd src
cp ../../libtorrent-0.13.1/.libs/liblinux_atomic.a .
vi Makefile
at the end of rtorrent_LDADD, add
liblinux_atomic.a

Then:
make
strip rtorrent
cd ..
make install

You are done!

Friday, March 2, 2012

How to backup my data (and why!)



It is the digital age, and the amount of personal data we produce keeps going up: digital pictures, HD movies and documents take an ever increasing amount of space.
That's a lot of memories and information we don't want to lose (or can't afford to).

Companies have had backup plans for a long time, but the concept is now entering homes through cloud storage (and other means). When your data is lost, it is too late: you need to devise a plan now!

I will focus on the needs of individuals and deal with 3 different data types:
- photos
- videos
- documents (excel, doc, text, pdf...)

Also, there are different risks to take into account when defining a backup plan:
- hardware failure (crashed hard drive)
- physical destruction of data at a physical location (think fire, theft ...)
- human error (oops! I deleted the file)


1. Put your data in the Cloud

The cloud will shield you from hardware failure and physical destruction, but might not protect you against human error.... An added bonus is that you can share your data with other people :)
You also take on additional risks, like the risk of your online account being hacked or the risk of your data becoming visible to everybody because of a misconfiguration on your side.

The good news is that there usually are free allocations for each service but you might have to pay for a feature you really need.

For pictures, you have:
- Picasa: 1GB free + free unlimited storage of pictures up to 800x800 pixels (additional storage available: cost of 20GB is 5USD/year, see all prices here)
- Google plus: free unlimited storage of pictures up to 2048x2048 pixels
- Flickr: free upload of 300MB worth of pictures every month; paid option for unlimited storage (original quality) & bandwidth (25 USD/year or 45 USD for 2 years, see all prices here)


For movies, you have:
- Youtube: Videos can be uploaded for free (up to 20GB per video)
The problem is that the videos are automatically edited (and reduced in quality) and it is not easy to download them once they are in the cloud!


For documents, you have:
- Google Docs: storage of documents, presentations and spreadsheets in Google format is unlimited; you get 1GB for other types of files. Additional space can be bought (and shared with Picasa). See the above link in the Picasa entry for pricing details. The problem is that there is no easy way to synchronize a local folder with Google Docs...
- Dropbox: 2GB free storage. Local folders can be synchronized with Dropbox.

2. Use a backup Service

This will shield you from hardware failure (but it might be slow to recover the data), physical destruction and human error.

The idea is to send your compressed and encrypted data to a remote server where it is stored. You can usually access your backups from a website and from dedicated software.
The problem is that all your data goes through the internet, and the initial upload can be very slow if you have a lot of data. For example, if you have 1TB of data to back up, it can take months to do the initial backup!
The same problem arises when you have to do a full restore: it will usually be an order of magnitude faster than the initial backup, but it can still take a few days.


If you also need to recover fast from a hard drive failure (the most common hardware failure), you can use a local redundant RAID configuration (like RAID 1 or 5). Please note that RAID alone will not prevent data loss: you are still vulnerable to other failures (like RAID controller failure, destruction of the device and human error).

Let's compare the different plans out there. I will focus on 3 providers: Mozy, Carbonite and Crashplan.
Crashplan:
- +10GB: 25 USD/year, 10 GB, 1 computer, Windows/Mac/Linux/Solaris, automated backup of all files
- Unlimited: 50 USD/year, unlimited space, 1 computer, Windows/Mac/Linux/Solaris, automated backup of all files
- Family Unlimited: 120 USD/year, unlimited space, 2-10 computers, Windows/Mac/Linux/Solaris, automated backup of all files

Carbonite:
- Home: 59 USD/year, unlimited space, 1 computer, Windows/Mac, automated backup of all files except video
- HomePlus: 99 USD/year, unlimited space, 1 computer, Windows, automated backup of all files
- HomePremier: 149 USD/year, unlimited space, 1 computer, Windows, automated backup of all files

Mozy:
- 50GB: 72 USD/year, 50 GB, 1 computer (extra computer +2 USD/month), Windows/Mac, automated backup of all files
- 125GB: 120 USD/year, 125 GB, 1 computer (extra computer +2 USD/month), Windows/Mac, automated backup of all files

Whatever your usage, Crashplan always seems cheaper and has more features. I use it myself and I am very satisfied with it....


The free Crashplan software also allows you to back up to a friend's computer (running Crashplan as well). This means that you can back up your data without paying anything, provided a friend is ready to give you some disk space for your backups.


3. Case studies

We just need to find the most cost effective combination of the above:

Profile A: 10GB of pictures and a few documents: 5 USD/year
Pay 5 USD/year for Picasa storage (20GB).
Use the Dropbox free allocation to store the documents.
Problem: the backup process is manual: if you forget to upload your pictures to Picasa, they are not backed up (unless you use the software I wrote to automatically upload to Picasa: see my post here). You are still vulnerable to human error.

Profile B: 100GB of pictures, 200GB of movies and 10GB of documents: 50 USD/year
The cheapest alternative is Crashplan Unlimited (50 USD/year).
The backup process is now automatic: no need to worry about forgetting to back up something. On top of that, you are protected against human error as you can retrieve former versions of a file.

If your data is spread across different computers, you can buy a NAS and run Crashplan on the NAS (see my post on how to install Crashplan on an Iomega NAS here). Alternatively, you have the simpler option of buying Crashplan Family Unlimited.


4. Conclusion

If you care about your data: take the time to devise your backup/data recovery plan now! You can always find a way that fits your budget.

You can get a reduced quality backup of your pictures and videos for free. Trust me, it is better to have a reduced quality backup than nothing!


Thursday, March 1, 2012

How to install Vuze on a NAS




The goal of this tutorial is to install Vuze headless (as a command line application). Most of the tutorials found on the web suggest doing the configuration of Vuze in the UI before starting it in headless mode. Unfortunately, this is not possible on a NAS, where you have no X server and no screen...



Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995 but uses unsupported features on the hardware. Please use at your own risk.


Since Vuze is a java program, the same steps should allow you to install Vuze as a headless client on any hardware running java.




Unfortunately, I ran into a lot of JVM crashes with Vuze headless and the oracle jvm ejre1.7.0 (for ARM). On top of that, Vuze is quite a heavy program in terms of CPU and memory usage, which is annoying for the type of hardware we are looking at (like a NAS). Therefore, I don't recommend installing Vuze on a NAS. I suggest you look at rtorrent, which is much more reliable (see my tutorial How to install rtorrent with IP filtering).





1. SSH into your NAS

See my other post:
How to ssh into your Iomega StorCenter ix4-200d if you have an IOMEGA NAS




2. Download and install

Steps adapted from the Console_UI Vuze wiki
cd /opt/tmp
wget http://sourceforge.net/projects/azureus/files/vuze/Vuze_4702/Vuze_4702_linux.tar.bz2
PATH=$PATH:/opt/bin/
tar -xvf Vuze_4702_linux.tar.bz2

wget http://ftp.heanet.ie/mirrors/www.apache.org/dist//commons/cli/binaries/commons-cli-1.2-bin.tar.gz
tar -xvf commons-cli-1.2-bin.tar.gz
mv commons-cli-1.2/commons-cli-1.2.jar vuze/

wget http://ftp.heanet.ie/mirrors/www.apache.org/dist//logging/log4j/1.2.16/apache-log4j-1.2.16.tar.gz
tar -xvf apache-log4j-1.2.16.tar.gz
mv apache-log4j-1.2.16/log4j-1.2.16.jar vuze

To install java, you can look at the java section of my other post: How to install crashplan on an Iomega NAS.

The installation is pretty straight forward...



Now, install the webUI plugin:
cd vuze
cd plugins
mkdir webui
cd webui
wget http://azureus.sourceforge.net/plugins/webui_1.7.0.zip
ipkg-opt install zip
ipkg-opt install unzip
unzip webui_1.7.0.zip
mkdir /opt/var/log/vuze
If you don't have ipkg-opt, see my other post: How to install software into your Iomega StorCenter NAS



3. Configure the Vuze installation

cd /opt/tmp/
mv vuze /opt/
cd /opt/vuze/
/mnt/pools/A/A0/NAS_Extension/ejre1.7.0/bin/java -Xmx128m -Dazureus.config.path=/opt/vuze/.azureus/ -cp "Azureus2.jar:commons-cli-1.2.jar:log4j-1.2.16.jar" org.gudy.azureus2.ui.common.Main --ui=console
Vuze should now be running; we now need to configure it. Adapt the paths to suit your needs and type at the Vuze console:
set "Default save path" "/mnt/pools/A/A0/torrents/vuze/download" string
set "Use default data dir" true boolean
set "Logger.Enabled" true boolean
set "Logging Enable" true boolean
set "Logging Dir" "/opt/var/log/vuze/" string
set "Ip Filter Autoload File" "http://list.iblocklist.com/?list=bt_level1" string
set Plugin.azhtmlwebui.User myusername
set Plugin.azhtmlwebui.Password mypassword password
set "Plugin.azhtmlwebui.Password Enable" true boolean
This is installing IP filtering as well. If you don't want that, just skip the set "Ip Filter Autoload File" command.

You are good to go now. To have Vuze automatically start at boot, you need to create a script in /etc/init.d (you can adapt the azureus script provided inside the install, or start from the sketch below).
If you have an Iomega NAS, look at this tutorial to see how to have the program run at boot.
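
For reference, here is a minimal start script sketch. The java path and memory setting are taken from step 3 above; everything else (the script name, the use of screen) is an assumption, so adapt it to your setup:
#!/bin/sh
# /etc/init.d/vuze - minimal, hypothetical start script for headless Vuze
# assumes screen is installed (ipkg-opt install screen) so the console UI keeps a terminal
JAVA=/mnt/pools/A/A0/NAS_Extension/ejre1.7.0/bin/java
cd /opt/vuze
/opt/bin/screen -dmS vuze $JAVA -Xmx128m -Dazureus.config.path=/opt/vuze/.azureus/ -cp "Azureus2.jar:commons-cli-1.2.jar:log4j-1.2.16.jar" org.gudy.azureus2.ui.common.Main --ui=console
You can then reattach to the Vuze console with /opt/bin/screen -r vuze, just like with rtorrent.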





You can now connect to the web Vuze UI from your web browser at
http://ip_of_nas:6883/. Please note that the web UI is not as rich as the regular UI (most options are not available in the web UI)




Please comment to let me know how stable your install is!

Thanks

Sunday, February 26, 2012

How to backup your google docs documents




I am a fan of google docs: I often need to access and edit my documents while I am away, and google docs offers a great way to do that.
The problem is: I have a lot of large pdfs there, and they can take a while to load: I would love to have a local copy when I am in the office...
On top of that, I always like to have a local copy of stuff... just in case! Call me paranoid, but what happens if your account is hacked? Or if google unilaterally closes your account because they consider you don't respect the terms of use? Better safe than sorry...


I couldn't find an application anywhere that would do what I want (get a local backup of my google documents and update it regularly).
There is the google "takeout" application, but you cannot schedule regular downloads...
A project like google-docs-fs seems promising, but it only supports google documents (and not any other file you may have uploaded if you have -like me- a premium account). Plus, my analysis is that there are too many possible points of failure if you rsync this file system... I need something more robust.


I decided to code what I need myself: a java command line application that can be used to schedule regular downloads of all your google docs documents.


1. Presentation of gdocsdownload.jar

The features implemented:
- re-use data from a previous backup to avoid re-downloading files that haven't changed
- rotating backups (for example, a maximum of 7 backups, backup.zip being the most recent one and backup.7.zip being the oldest one)
- zip archive or just a folder archive (takes more space but easier to access)
- configurable document export mode (export google spreadsheets as xls or as csv)
- download only once documents that are in multiple folders (gdocsbackup.removeduplicateddocuments)
- archive without folder structure (all documents in a zip, like google takeout) or with folder structure (much easier to navigate)
- support for any type of file.

TODO:
- use hard links on operating systems that support them (that would substantially reduce the amount of disk space needed for multiple backups with a lot of unchanged documents)
- fix the bug that forces you to use a temp directory on the same partition as the destination directory

In my setup, I want to install gdocsdownload.jar as a daily cron on my NAS, but you can install it anywhere.

The program is configured using the config file gdocsdownload.properties which reads as follows:
#use system defined proxy
gdocsbackup.usesystemproxy=true
#google account username and password
gdocsbackup.username=xxxx
gdocsbackup.password=xxx
#the path where we want to backup
gdocsbackup.backuppath=C:\\Users\\xxx\\Documents\\Data\\
#the name of the backup archive. 
#the zip archives will be named: backuprootname.zip backuprootname.1.zip
#the folder archives will be named: backuprootname/ backuprootname.1/
gdocsbackup.backuprootname=gdocs_backup
#the number of backup files to keep
gdocsbackup.nbbackupfiles=7
#TRUE if you want to store the backup as a zip file.
gdocsbackup.usezip=FALSE
#zip compression level (0-9) with 9 being the most compressed (and most CPU intensive)
gdocsbackup.zipcompresslevel=6
#use hard links to link new data identical to older data. This does save a lot of space (you can't use this option with usezip)
#not supported yet!
gdocsbackup.usehardlinks=FALSE
#document export format: one of doc html odt pdf png rtf txt zip
gdocsbackup.documentexportformat=doc
#presentation export format: one of pdf png ppt txt
gdocsbackup.presentationexportformat=ppt
#spreadsheet export format: one of xls csv pdf ods tsv html (NB: first sheet export only for csv and tsv)
gdocsbackup.spreadsheetexportformat=xls
#try to replicate the directory structure in the zip
gdocsbackup.keepdirectorystructure=TRUE
#show documents that appear at different places in the folder tree only once (in the first folder where it is found)
gdocsbackup.removeduplicateddocuments=TRUE
#log file (for linux, good practice is to put it in /var/log/ or /opt/var/log (and make sure logrotate works correctly))
gdocsbackup.logfile=C:\\gdocsbackup.log

All options are self explanatory. You can customize it as required by your setup.

As the program is java, it can be run on any OS / Architecture supporting Java.

The jar is available for download at http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload.jar
sample properties files is available at http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload.properties
and source code is available at: http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload-src.zip


Please note that in order to "rotate" backups, the program will delete the oldest backup! Don't modify the backups or store anything there!
The program only gets information from the google server: it does not update or delete anything there, so you are safe on that side.


To determine if a file was already downloaded, the last_update tag given by google is checked. I suggest you do a full backup from time to time to avoid an error propagating from backup to backup: to do that, just add the option fulldownload after the properties file when launching the jar (see the example below).
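
For example, a hypothetical invocation from the directory where the jar and properties file sit (the real paths I use are set up in section 2 below):
java -jar gdocsdownload.jar gdocsdownload.properties fulldownload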


2. Steps to install the gdocsbackup on a linux based NAS
The setup is easy to adapt to any machine running linux. I didn't do a tutorial for Windows or Mac as I lack some knowledge to do it, but it can of course be done... feel free to adapt it and post your results and hints in the comments!
This tutorial assumes some vi and linux knowledge...

This is how I installed the gdocsbackup.jar on my NAS (an Iomega Storcenter ix4-200d). Please note that the procedure is unsupported by Iomega! use at your own risk!

a. Download and setup of gdocsdownload
First, you need to ssh into your NAS (see my other post if you have an Iomega Storcenter)
Then:
mkdir /opt/usr/local
mkdir /opt/usr/local/gdocsdownload/
cd /opt/usr/local/gdocsdownload/
wget http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload.jar
wget http://dl.dropbox.com/u/50398581/gdocsbackup/gdocsdownload.properties
Don't forget to change the properties file to make it work for your setup (you at least need to change account information and paths):
vi gdocsdownload.properties

If you are concerned about security, you should put the properties file into your home folder...

If you haven't already done so, you need to install java on your NAS. See the java section of my previous post How to install Crashplan on an Iomega Storcenter to find out how to do it for an Iomega storcenter.

If you followed the java installation procedure of my other post, link java to a more usual location:
ln -s /mnt/pools/A/A0/NAS_Extension/ejre1.7.0/bin/java /opt/bin/java
The setup can already be tested by starting the command:
/opt/bin/java -jar /opt/usr/local/gdocsdownload/gdocsdownload.jar /opt/usr/local/gdocsdownload/gdocsdownload.properties
press Ctrl-C to stop the run


The program will need to be started from a script so that we can set the correct folder permissions and TMP folder.
You need to make sure there is enough space in your temp folder (my /tmp/ folder is way too small, that's why I use /opt/tmp/).
vi gdocsdownloader

and then type:
#!/bin/sh

#this is to have a backup that's readable by everybody
#but only writeable by the owner.
#change it to suit your needs
umask 022
#use a tmp file with enough space to fit all your docs
#NB: it seems like there is a bug somewhere and the tmp directory has to
#be on the same partition than the destination directory....
#please choose a tmp file respecting these conditions
#/opt/bin/java -Djava.io.tmpdir=/opt/tmp/ -jar /opt/usr/local/gdocsdownload/gdocsdownload.jar $@
/opt/bin/java -Djava.io.tmpdir=/mnt/pools/A/A0/data/perso/gdocs/ -jar /opt/usr/local/gdocsdownload/gdocsdownload.jar $@
make it an executable:
chmod a+x gdocsdownloader
And test with:
./gdocsdownloader /opt/usr/local/gdocsdownload/gdocsdownload.properties


b. Set up a cron job to backup google docs data
Create the gdocsdownloader cron (I don't use /etc/cron.daily/ because I want a full download once a week):
vi /etc/cron.d/gdocsdownload
and add:
# download google docs files at 3:45 AM

#full download on sunday
45 3    * * 0   root    /opt/usr/local/gdocsdownload/gdocsdownloader /opt/usr/local/gdocsdownload/gdocsdownload.properties fulldownload > /dev/null 2>&1
#regular download the other days
45 3    * * 1,2,3,4,5,6   root    /opt/usr/local/gdocsdownload/gdocsdownloader /opt/usr/local/gdocsdownload/gdocsdownload.properties > /dev/null 2>&1

The cron will run every day!
You may want to run the first batch manually by starting:
/opt/usr/local/gdocsdownload/gdocsdownloader /opt/usr/local/gdocsdownload/gdocsdownload.properties

c. start the cron daemon

The cron daemon is not started at boot by default....

You can start it manually:
/etc/init.d/cron start

But to have it start up every time at boot, we need to add the line:
/etc/init.d/cron start >> /opt/init-opt.log
to our /opt/init-opt.sh script.

See my other post How to run a program at boot on the Iomega Storcenter NAS to see how it works!

d. set up logrotate
Logrotate is the process that compresses and deletes old logs so that your logs don't eat all your disk space!
vi /etc/logrotate.d/gdocsdownload
and add:
/opt/var/log/gdocsdownload.log {
    rotate 4
    weekly
    compress
    delaycompress
    missingok
    notifempty
    prerotate
      while [ "`ps aux | grep gdocsdownloader.jar | grep -v grep | wc -l`" = "1" ]
        do
          sleep 10
        done
    endscript
}


This will rotate your gdocsdownload logs once a week and keep at least 4 weeks worth of logs. It is easy to modify these parameters in the config file above.

I try to make sure the gdocsdownload is done before rotating the logs to avoid conflict...

Don't forget to change the path if your log is somewhere else!
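
If you want to check the configuration without waiting a week, a dry run (assuming the logrotate binary is available on your NAS) will show what logrotate would do without actually touching the logs:
logrotate -d /etc/logrotate.d/gdocsdownload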

Saturday, February 11, 2012

How to automatically synchronize a picture folder with picasa (on a NAS or anywhere else)






I like to have a copy of my pictures on picasa to be able to share them with friends and family. I usually upload them in reduced resolution, to stay within the free storage space given by google.
The problem is that uploading them can be a pain: the picasa software can be very slow to upload them, especially if you are accessing the pictures on your NAS over a wireless network.

Instead of using the picasa software, I tried to use the googlecl tools (http://code.google.com/p/googlecl/), but it turns out I couldn't get them to do what I want (no sync folder option + no resizing of pictures on the fly).
There is an unsupported patch to synchronize folders with googlecl (http://code.google.com/p/googlecl/issues/detail?id=170) but that doesn't solve the problem of image resizing... I did not even test it...


1. Presentation of my solution: picasauploader.jar

To solve the problem, I wrote a small piece of java code (picasauploader.jar) that:
- creates a new album on picasa web when a new folder is created on the disk
- uploads (and resizes if necessary) new pictures on the disk to picasa web

In my setup, I want to install picasauploader.jar as a daily cron on my NAS, but you can install it anywhere.

You just need to organize your pictures as
/path/albumname/picture.jpg
and use /path in the picasauploader.properties, as in the example below.
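
For instance, with the (hypothetical) layout below, setting picasauploader.diskpaths=/mnt/pools/A/A0/pictures would create one album per first-level folder:
/mnt/pools/A/A0/pictures/2011_summer_holidays/IMG_0001.jpg
/mnt/pools/A/A0/pictures/2011_summer_holidays/IMG_0002.jpg
/mnt/pools/A/A0/pictures/christmas_2011/IMG_0100.jpg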

The jar is configured using the config file picasauploader.properties which reads as follows:
#use system defined proxy
picasauploader.usesystemproxy=true
#picasa/google account username and password
picasauploader.username=xxxx
picasauploader.password=xxx
#semicolon-separated directories
picasauploader.diskpaths=/xxx/yyyy;/aaaa/bbbb
#can be either:
# private: accessible to anybody with the direct link but not without the direct link
# protected: not accessible except from your account
# public: available for everybody to see
picasauploader.albumcreationaccess=private
#if you want to resize images before uploading (aspect ratio is kept)
#Note: only JPEG images are resized...
#max Height in px
picasauploader.maxheigt=1600
#max Width in px
picasauploader.maxwidth=1600
#jpg quality when resizing
picasauploader.resizequality=85
#log file (for linux, good practice is to put it in /var/log/ or /opt/var/log (and make sure logrotate works correctly))
picasauploader.logfile=/opt/var/log/picasaupload.log

All options are self explanatory. You can customize it as required by your setup.

As the program is java, it can be run on any OS / Architecture supporting Java.

The jar is available for download at http://dl.dropbox.com/u/50398581/picasauploader/picasauploader.jar
sample properties files is available at http://dl.dropbox.com/u/50398581/picasauploader/picasauploader.properties
and source code is available at: http://dl.dropbox.com/u/50398581/picasauploader/PicasaUploader.java

Please note that for safety, the program does not delete anything on picasa web (nor on the disk, of course). Therefore, it is very safe to use.

Known Limitations:
- only supports JPG, GIF, PNG and BMP image formats
- picture resizing is only supported for jpg images
- only the name is used to determine if a picture was already uploaded: if a picture was already uploaded and then changed on disk, it won't be uploaded again.


2. Steps to install the picasauploader on a linux based NAS
The setup is easy to adapt to any machine running linux. I didn't do a tutorial for Windows or Mac as I lack some knowledge to do it, but it can of course be done... feel free to adapt it and post your results and hints in the comments!
This tutorial assumes some vi and linux knowledge...

This is how I installed the picasauploader.jar on my NAS (an Iomega Storcenter ix4-200d). Please note that the procedure is unsupported by Iomega! use at your own risk!

a. Download and setup of picasauploader
First, you need to ssh into your NAS (see my other post if you have an Iomega Storcenter)
Then:
mkdir /opt/usr/local
mkdir /opt/usr/local/picasauploader
cd /opt/usr/local/picasauploader
wget http://dl.dropbox.com/u/50398581/picasauploader/picasauploader.jar
wget http://dl.dropbox.com/u/50398581/picasauploader/picasauploader.properties
Don't forget to change the properties file to make it work for your setup (you at least need to change account information and paths):
vi picasauploader.properties

If you haven't already done so, you need to install java on your NAS. See the java section of my previous post How to install Crashplan on an Iomega Storcenter to find out how to do it for an Iomega storcenter.

If you followed the java installation procedure of my other post, link java to a more usual location:
ln -s /mnt/pools/A/A0/NAS_Extension/ejre1.7.0/bin/java /opt/bin/java
The setup can already be tested by starting the command:
/opt/bin/java -jar /opt/usr/local/picasauploader/picasauploader.jar /opt/usr/local/picasauploader/picasauploader.properties

b. Set up a cron job to synchronize image folders with picasa
Create the picasauploader cron:
cd /etc/cron.daily/
vi picasauploader
and add:
#!/bin/sh
/opt/bin/java -jar /opt/usr/local/picasauploader/picasauploader.jar /opt/usr/local/picasauploader/picasauploader.properties
Then:
chmod a+x picasauploader
And test with:
./picasauploader

c. start the cron daemon

The cron daemon is not started at boot by default....

You can start it manually:
/etc/init.d/cron start

But to have it start up every time at boot, we need to add the line:
/etc/init.d/cron start >> /opt/init-opt.log
to our /opt/init-opt.sh script.

See my other post How to run a program at boot on the Iomega Storcenter NAS to see how it works!

d. set up logrotate
Logrotate is the process that compresses and deletes old logs so that your logs don't eat all your disk space!
vi /etc/logrotate.d/picasauploader
and add:
/opt/var/log/picasaupload.log {
    rotate 4
    weekly
    compress
    delaycompress
    missingok
    notifempty
    prerotate
      while [ "`ps aux | grep picasauploader.jar | grep -v grep | wc -l`" = "1" ]
        do
          sleep 10
        done
    endscript
}


This will rotate your picasauploader logs once a week and keep at least 4 weeks worth of logs. It is easy to modify these parameters in the config file above.

As logrotate is started by the same cron that starts the picasauploader (daily cron), you will notice that I try to make sure the picasauploader is done before rotating the logs...

Don't forget to change the path if your log is somewhere else!