
Friday, 15 July 2011

Setting up a home file server

We had been using a central file server for a while when I realized that a disk failure might be very costly. The server has two major roles: it is the central file server and the backend MythTV server where our TV shows are recorded. By combining those two roles we only need this one machine online 24/7 and can shut down all the other computers.

What hit me was that as I am using LVM spanning over several disks, the failure of one disk will drag the others along and the entire LVM volume will be lost; a particularly unpleasant prospect, especially as the MythTV recordings are not regularly backed up (mostly Simpsons, Stargate and Alias). I decided to dig into the world of RAID.

I started by analysing the disk usage to get a full picture of our needs.


                          Size [GB]   Used [GB]   Free [GB]

Volume Group lvm_video
  video                         931         867          65

Volume Group opus_main
  lvm_users                     200          24         177
  lvm_audio                     300         227          74
  lvm_lager                     500         359         142
  lvm_video                     900         570         331
  opus_home                      30          19          12

  Subtotal                     1930        1199         736

Totals                         2861        2066         801



The surprise was that by using different file systems for different purposes (video, music, etc.) we have a slack of 800GB of unused space, just sitting there in case it should be needed. On the other hand, making one big file system has to be well thought through: one disturbance and everything might be lost, and we wouldn't want that, would we? And when file systems reach terabyte sizes they start to behave strangely and slowly, so the choice of file system becomes important.

When I found some “green” 2TB hard disks at a campaign price I decided to make the plans happen. I purchased four 2TB disks for the array.


Solaris

Back in the 90s I was very fond of SunOS and Solaris, and my first thought was to set up the server using OpenSolaris and ZFS. Sorry to say, I found that OpenSolaris is no longer supported by Oracle, so I decided to go for OpenIndiana, the community-driven Solaris.

To solve my problems I planned the following setup:
  • The four new 2TB disks form one RAID-Z pool.
  • ZFS file systems are created in the pool and shared over NFS.
  • The virtual MythBackend server keeps its disk image on the pool.

I did some initial testing and I really liked ZFS. It just felt good. So did Solaris; it felt like a long-lost home. I decided to go for the Solaris solution.

First I installed OpenIndiana build 148 and tried to do a system update. It failed with “System is a LiveCD” or similar, and I had to find a workaround for that (sorry, no link; google the error message to find the solution).

Setting up ZFS was easy. The first step is to create a zpool. I made one pool of the four disks using RAID-Z, which is essentially RAID-5 but with the write hole closed. The write hole is a moment where a power loss will damage the array in any RAID-5 setup; by the sound of it, it is closed by integrating RAID and file system the way ZFS does.
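On the command line that boils down to a single command. A sketch, where the pool name and disk names are examples from my notes rather than anything universal:

    # One RAID-Z pool over the four disks; check `format` for your device names
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

    # Verify the pool layout and health
    zpool status tank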

Second, I created some ZFS file systems. These are kind of mount points that can be shared using NFS; ordinary directories and data reside inside them.
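Creating and sharing them looks roughly like this (the dataset names are made up for the example):

    # Create file systems inside the pool; each gets its own mount point
    zfs create tank/video
    zfs create tank/audio

    # Publish a file system over NFS
    zfs set sharenfs=on tank/video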

The third step was to start moving data from my Linux disks (ReiserFS and LVM) to the Solaris disks. This turned out to be more problematic than I expected. First, Solaris cannot mount an NFSv3 share from Linux, as Sun has decided to go for a non-standard security model. I made some feeble tries at enabling NFSv4 on the Linux server but gave up, and instead used a Linux workstation to rsync data onto an NFS-mounted ZFS file system. This was a very sloooow way of transferring 1.7TB of data. The storage rate was ridiculously slow; it took seconds to store a single small HTML file. I never got to the huge files like movies, but I did move the music.
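The transfer itself was nothing fancy; something like this, with host names and paths as placeholders:

    # On the Linux workstation: mount both ends over NFS...
    mount linuxserver:/data/audio /mnt/old
    mount solarisserver:/tank/audio /mnt/new

    # ...and let rsync do the copying
    rsync -av --progress /mnt/old/ /mnt/new/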

I also found that even though the NFS share had no_root_squash (or root=IP-of-client, as it is called in Solaris), it affected the files but not the directories. This resulted in a load of errors during an rsync, which I do NOT want, as I use rsync for backing up to the server and really need to be able to spot the real errors.
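For comparison, this is how root access is granted on each side; the client IP is of course an example:

    # Solaris: grant root on the client full access to the share
    zfs set sharenfs='rw,root=192.168.1.10' tank/audio

    # Linux: the same thing in /etc/exports
    # /data/audio 192.168.1.10(rw,no_root_squash)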
Besides this, I set up a MySQL server on the Solaris machine, which also turned out to be very slow. Just listing the tables in a database gave a noticeable response time (even the second and third time).

Solaris also turned out to need quite a bit more memory than Linux, and I only have 2GB in the machine. This was not enough for the virtual MythBackend server. I started the virtual server on another (Linux) host, but with the virtual disk on the original ZFS file system via NFS. Once again, things were very slow. This setup, however, is not new to me; I regularly run virtual machines whose virtual disks are mounted over NFS.

At this point I decided to reconsider the Solaris alternative. The drawbacks started to queue up:
  • The file system seems to be very slow
  • MySQL seems to be slow too
  • It cannot mount NFS shares from Linux

Linux

So I did reconsider, and decided to go for a Linux setup instead. This way I can reuse the old Gentoo installation. I went for the following setup:
  • The root partition is on the old 160GB disk.
  • The four new 2TB disks are installed as a RAID-5 array.
  • The RAID array results in a 5.5 TiB disk, about 6TB (four disks in RAID-5 give 3 × 2TB usable; TiB is 1024-based, TB is 1000-based).
  • The array is divided into 28 partitions of 200GiB each.
  • A reasonable number of partitions is added to an LVM volume group (lvm_nas).
  • Logical volumes are created for my needs and exported.
  • All logical volumes use ext4 as the file system.
There were some issues with the move, though.

The first and scariest issue was with the disks. There was something like a GPT partition table that fdisk cannot manage, and the GPT was corrupt. Trying out several disk utilities I stumbled on the gdisk utility (GPT fdisk) and learned that you need a GPT partition table for disks this big. I managed to clear out the Solaris partition information and create the one full-disk partition that is to be included in the RAID.
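gdisk itself is interactive, but its scriptable sibling sgdisk can express the clean-up in two lines. A sketch, with /dev/sdb standing in for each of the four disks:

    # Wipe the old (corrupt) GPT and MBR structures left over from Solaris
    sgdisk --zap-all /dev/sdb

    # One partition spanning the whole disk, typed as Linux RAID (fd00)
    sgdisk -n 1:0:0 -t 1:fd00 /dev/sdb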

Creating the RAID array was pretty easy, but I discovered that the array is actually built in the background. For my setup the build was estimated to take 18 hours; it did not take that long in the end. In Gentoo it seems that mdadm is used, while the official RAID guide refers to the older raidtools.
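The creation is a one-liner; watching the background build is another. Device names are examples:

    # Assemble the four partitions into a RAID-5 array
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    # The initial build runs in the background; follow the progress here
    cat /proc/mdstat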
 
Once the RAID array was in place I created a GPT partition table on it using gdisk. I also created 28 partitions of 200GiB each, to be used as physical volumes in LVM.
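Hooking a first batch of those partitions into LVM then looks something like this (the partition names assume the array is /dev/md0):

    # Register some of the 200GiB partitions as LVM physical volumes
    pvcreate /dev/md0p1 /dev/md0p2 /dev/md0p3

    # Gather them into the volume group
    vgcreate lvm_nas /dev/md0p1 /dev/md0p2 /dev/md0p3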

To the LVM volume group I added only the space I need right now. There is a lot of unallocated space to draw from when I need more.
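Growing later is then just a matter of handing the volume group another spare partition; a sketch, with sizes and names as examples:

    # Add one more of the prepared partitions to the volume group
    vgextend lvm_nas /dev/md0p4

    # Carve a new logical volume out of the new space
    lvcreate -L 300G -n lvm_audio lvm_nas
    mkfs.ext4 /dev/lvm_nas/lvm_audio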

Ext4 takes a longer time to create than ReiserFS. How long a file system check will take is yet to be discovered. I did, however, try to grow a mounted ext4 file system, and it worked well but took some extra time. Next time I will probably work on unmounted file systems.
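For the record, the online grow I tried is just these two steps (the volume name is an example):

    # Grow the logical volume...
    lvextend -L +100G /dev/lvm_nas/lvm_video

    # ...then grow the mounted ext4 file system to fill it (online grow works)
    resize2fs /dev/lvm_nas/lvm_video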

The MythBackend and MySQL work the same as before; it is the same Linux installation. This time MySQL has its own logical volume. Backups are mainly made using LVM snapshots: I take a snapshot of the volume, which is then copied out using rsync.
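The snapshot dance goes roughly like this; the names and snapshot size are examples:

    # Freeze a consistent view of the MySQL volume
    lvcreate -s -L 10G -n mysql_snap /dev/lvm_nas/lvm_mysql

    # Mount it read-only, copy it out, and throw the snapshot away
    mount -o ro /dev/lvm_nas/mysql_snap /mnt/snap
    rsync -a /mnt/snap/ /backup/mysql/
    umount /mnt/snap
    lvremove -f /dev/lvm_nas/mysql_snap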

Conclusion

Until the integration between Linux and Solaris works well, you will avoid some troubles by sticking to one or the other. I am pretty satisfied with the Linux solution, but it would have been nice to have the Solaris server in place and working well; I just had to admit the battle lost at some point. A homogeneous environment is likely to be easier to keep up to date in the long run.


If I need to play daredevil again I will try out some of the BSD clones. I am particularly interested in the HAMMER file system on DragonFly BSD. Sorry to say, it is very difficult to get an up-to-date indication of its maturity. The same goes for ZFS on Linux, which would be an alternative. But as long as the “stable” version of ZFS on Linux only supports read-only access, it feels a bit too immature for me. Write capability is kind of important on a file server ;)

 

 

Friday, 4 February 2011

The Jungle of Licensing

I ran across this article by The H today. It covers some of the issues around open source licences and the problems in combining them.

There is a lot of stuff that is not clearly covered by any licence. Take for example the Qt library, one of my favourites. It has a lot of smart functions, and the development is open and freely accessible to everyone.

So, what if I surf around the bug tracker, or more precisely the feature requests, and when I find a nice one for my own stuff, I steal it?

Take for example the idea of an “advanced rubber band”: the suggestion that the square used to change the size of an object is rotated together with the object.

The question is then: who owns the idea and the feature request, and under what license is it made available to the public?

The same goes the other way around: what if the idea is stolen and then presented as a suggestion and implemented? Who is then responsible for stealing the idea?

I have had the favour of working with mixed closed and open source on an earlier assignment. The conclusion is that this is an area where one shall take particular care. One is actually playing with the risk of having the entire application suite forcibly licensed as open source, with the requirement to supply full source code to anyone who asks. We had each and every usage of open-source-licensed software reviewed by lawyers and signed off by someone high in the hierarchy. Thinking of it, this was a reasonable procedure.

Just a disclaimer: I do favour open source software in general and Linux in particular. Meanwhile, I do software engineering for a living, which implies the usage of paid software, which in most cases means closed source software.

Saturday, 15 January 2011

Exploring Android

Now that I have two droids, a phone and an MP3 player, I spent some effort playing with Android development the other day. I am in deep need of an audiobook player that will not mix my huge amount of audiobooks with the music, and that does not have easily accessible functions like next track, as I only trigger them unintentionally (frustration!). Android comes with some nice design features that I think have been “concepts in mind” or slideware since the 90s, but Google has implemented them just nicely.

I am talking about Intents and Activities.

From my point of view (mainly Qt development in C++ when it comes to end-user applications) I compare an Activity to a full-screen dialog with a special purpose. This is not really the same as an ordinary main window; activities are expected to come with an intention, the Intent. The Intent describes the intention of an Activity.

So far nothing special. I design my main window as an activity, and the next thing is to open an audio file from the book. Now I need to start another activity where I can browse files. At first glance I thought this to be an easy task. Then I stumbled on the Intent and things got complicated. After some reading I was impressed instead.

Here goes:
The framework is built so that when I need to have something done, I find the corresponding Intent (picking a file, in my case) and ask for an activity that will provide me with that intended function. The system will try to match this, and if there are alternatives, the user will be asked to choose. The proposed way to make sure my own Activity is started is to create a unique Intent identifier and fire it off.
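You can even poke at this resolution from the outside. A sketch using the adb shell, assuming an app such as the OpenIntents file manager (which answers the org.openintents.action.PICK_FILE action) is installed:

    # Fire an implicit intent; Android resolves it to whatever installed
    # activity has declared a matching intent filter
    adb shell am start -a org.openintents.action.PICK_FILE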

So, why is this really cool?
Take a look at http://www.openintents.org/. There you have a database of intents, and you can download or purchase the module supplying the activity. This means that using Intents you can use and reuse not only open source code, by copying it into your application, but closed source modules as well.

Experimenting with Intents and Activities as Use Cases
I like to express myself using UML or other graphical notations. I started to experiment with how to model my audiobook player (named Narrator, with a so-far empty project page at http://gitorious.org/narrator ) using intents. The figure shows how I ended up modelling the Intents as existing and reusable Use Cases. This is the point where I really start liking the concept: it enables software reuse rather than just code reuse. Use Case reuse is on a higher level than code reuse, and I think it is on this level that reuse must happen if it is to make sense. This way I can be reasonably flexible with the requirements around the use case Open File, and thus allow for alternative solutions.