Archive for the ‘Solaris’ Category

Copying directory trees, continued

November 2, 2009

As mentioned earlier, I am using rsync to copy complete directory trees:

$ cd /source_dir
$ rsync -avz . /dest_dir

It copies all files, including hidden ones, in directory /source_dir into the destination directory /dest_dir. If /dest_dir does not already exist, rsync will create it (at least on my Nevada 72 system).

There may be cases where rsync does not work correctly, for example with very long file names. It is good to know that there is another easy way to do it:

$ cd /source_dir
$ find . -print -depth | cpio -pdm /dest_dir

The -depth parameter makes find process each directory's contents before the directory itself (a depth-first traversal, so directories can be recreated with correct modification times), and -pdm with cpio means (from man cpio):
-p: reads a list of file path names from standard input and copies those files to the destination tree.
-d: creates directories as needed
-m: keeps file modification times

Destination directory /dest_dir must exist before starting the above commands.

Firefox 3.5 for Solaris!

July 10, 2009

These are the direct links for downloading Firefox 3.5 for Solaris:

Version x86 SPARC
Firefox 3.5 OpenSolaris pkg  |  tar pkg  |  tar
Firefox 3.5 Solaris 10 pkg  |  tar pkg  |  tar

You can still find the links to the latest Firefox 2 version in this blog entry, and to Firefox 3.0 in this blog entry.

If you are visiting the Mozilla web site from a Solaris system, your system will be automatically detected, and you will see a link which shows "Download Firefox – Free" and "3.5 for SunOS". That link directs you to the Mozilla development page. Unfortunately, the Solaris versions are not yet mentioned on the "Other Systems and Languages" web page (actually, that’s why I am maintaining the Firefox for Solaris links in my blog), but who knows – maybe we’ll see them there in a while.

Firefox 3 for Solaris – latest versions as of June 2009

July 10, 2009

These are the direct links for downloading the latest versions of Firefox 3 for Solaris:

Version x86 SPARC
Firefox 3.0.11 OpenSolaris pkg  |  tar pkg  |  tar
Firefox 3.0.11 Solaris 10 pkg  |  tar pkg  |  tar

You can still find the links to the latest Firefox 2 version in this blog entry.

If you would like to install multiple versions of Firefox on your Solaris system, you can use the tarballs or my script for renaming a Firefox package.

Oh – and if you are running Solaris or OpenSolaris on x86, I suggest installing the recently released Adobe Reader 9.1 for Solaris x86!

Finally: Adobe Reader 9 for Solaris x86 is available for download!!!

March 26, 2009

Believe it or not: Adobe has finished and made available Adobe Reader 9 for Solaris x86! You can download it from here (in .bin (self-extracting executable), .tar, and .pkg formats). System requirements are mentioned here (the minimum OS level is s10u5 or OpenSolaris 2008.11).

Local backups with rsync (forget tar -cvf - . | (cd /dest; tar -xpf -))

February 9, 2009

Recently, I came across the rsync man page by accident (maybe as one of the top 10 Google search results?) and was quite surprised to find rsync examples without a remote host in any of the arguments. Doesn’t rsync stand for something like "remote synchronization"?

So here’s how it works:

If you want to copy all files in a directory and all directories and files below to another directory (for example on another file system on a different disk), use the following command:

$ rsync -avz /source_dir/ /dest_dir

Note the added slash after /source_dir. This command recursively copies all files and directories in /source_dir to /dest_dir (and creates it if it doesn’t exist). If you omit the trailing slash, rsync will instead create a new directory /dest_dir/source_dir. rsync copies symbolic links as links, not as the files they point to (similar to the default behavior of Solaris or GNU tar). If the command has been run at least once before, only changed or newly added files are copied. Files that have been removed from the source directory are not removed from the destination.

Example: Copy all directories and files in directory /tmp/1 to empty directory /tmp/2:

  • Using the cp command (option P will copy links as links; note the /. so that the contents of /tmp/1, and not the directory itself, are copied):
    $ cp -Ppr /tmp/1/. /tmp/2
  • Using the tar command (Solaris or GNU. Solaris tar will report that a link has been created while GNU tar will only mention the file name of the link):
    $ mkdir /tmp/2
    $ cd /tmp/1
    $ tar -cvf - . | ( cd /tmp/2; tar -xpf -)
  • Using the rsync command:
    $ rsync -avz /tmp/1/ /tmp/2

Using ZFS as (an iSCSI) target for Mac OS X Time Machine

January 27, 2009

Inspired by this and then this blog entry, I thought it was now time for me to get my own experience with iSCSI.

Here’s the result:

  1. On my eco-friendly server running OpenSolaris 2008.11, I created a new ZFS volume (not a ZFS file system!) with iSCSI sharing switched on:
    $ zfs create -o shareiscsi=on -V 180G pool2/mac-tm
    cannot share 'pool2/mac-tm': iscsitgtd failed request to share
    filesystem successfully created, but not shared
  2. Well, that did not work. Better to search for and install the iSCSI packages first:
    $ pkg search -rl iscsi | nawk '{print $NF}' | \
    nawk 'BEGIN{FS="@"}{print $1}' | sort -u
    $ pkg install SUNWiscsi SUNWiscsitgt
    DOWNLOAD                                    PKGS       FILES     XFER (MB)
    Completed                                    2/2       18/18     0.86/0.86
    PHASE                                        ACTIONS
    Install Phase                                  74/74
    PHASE                                          ITEMS
    Reading Existing Index                           9/9
    Indexing Packages                                2/2
  3. Then, I wanted to delete (destroy, in ZFS speak) and create the zvol again:
    $ zfs destroy pool2/mac-tm
    cannot destroy 'pool2/mac-tm': volume has children
    use '-r' to destroy the following datasets:
  4. OK, I understand that an automated snapshot had already been created in the meantime. Destroy the zvol with its snapshots, and create the zvol again:
    $ zfs destroy -r pool2/mac-tm
    $ zfs create -o shareiscsi=on -V 180G pool2/mac-tm
  5. Check if the shareiscsi property is on for our volume:
    $ zfs get shareiscsi pool2/mac-tm
    NAME          PROPERTY    VALUE         SOURCE
    pool2/mac-tm  shareiscsi  on            local
  6. List all defined iSCSI targets:
    $ iscsitadm list target
    Target: pool2/mac-tm
    iSCSI Name:
    Connections: 0
  7. Looks great! On the MacBook Pro running Mac OS X 10.5.6, I installed the globalSAN iSCSI initiator software (version from Studio Network Solutions, after downloading from this link.
  8. Then I rebooted the Mac (as required by the globalSAN iSCSI software).
  9. Next step was to mount the iSCSI drive:
    Mac OS X System Preferences
    a) Click on the globalSAN iSCSI icon to display its control panel:
    GlobalSAN iSCSI control panel #1
    b) Click on the + symbol in the lower left corner to get the following popup:
    GlobalSAN iSCSI control panel #2
    c) Enter the IP address or host name of the OpenSolaris server, leave the port number as it is, and enter the target name (the last column in the line starting with iSCSI Name: in the output of the iscsitadm list target command on your OpenSolaris server – in our case, it’s ), and press the OK button. The iSCSI control panel will then look like:
    GlobalSAN iSCSI control panel #3
    d) Click the Connected switch at the end of the iSCSI target line (the line which starts with iqn) to get the following popup:
    GlobalSAN iSCSI control panel #4
    e) Press the Connect button to connect to that iSCSI target. As we did not specify CHAP or Kerberos authentication, the connect will work without user and password. For a walkthrough and more on CHAP authentication, click this link.
    After pressing the Connect button, the control panel will look like:
    GlobalSAN iSCSI control panel #5
    At this time, the newly created volume will show up in Disk Utility. Note that I clicked on the Persistent button to build the connection again after a reboot – I didn’t try rebooting to check, but believe it will work.
  10. Then, I created a Mac OS X volume in Disk Utility.
    Disk Utility #1
    a) Click on the disk drive and then on the Erase tab, enter a new name for the volume (or leave it as it is), and press the Erase… button. The following screen will be displayed to show the progress:
    Disk Utility #2
    After the erase is completed, the new volume will show up in the left part of Disk Utility (for this screen shot, I created the volume again after giving it the name ZFS-180GB; I’m not sure whether it’s possible to rename a volume without formatting it):
    Disk Utility #3
  11. Now the volume is usable in Time Machine.
    a) Click on the Time Machine icon in System Preferences to start its control panel:
    TM control panel #1
    b) Click on Change Disk to change the destination volume for Time Machine (the lock in the lower left corner has to be unlocked first to allow for the change):
    TM control panel #2
    c) Select the new volume and press Use for Backup. Then, just start the backup (or wait 120 seconds until it starts automatically):
    TM control panel #3
    Mac OS X Time Machine has started its first backup on a ZFS volume!

However, as always in my blog entries, this is no guarantee that it will always work as described, or that the backup and restore will also work after your next Mac OS X upgrade, or that there will be no errors or problems with such a setup. What I can tell you is that a simple restore attempt worked for me just as if I had done it from a USB disk!

Up to now, I have always disconnected the USB disk drive before closing the Mac’s lid so that a Time Machine backup would not be interrupted in the middle. Not sure what would happen if a Time Machine backup is running while you close the lid, so better read the docs and test it, or just always unmount Time Machine’s active volume before letting your Mac sleep.

And I discovered that if an iSCSI volume is mounted before closing the lid, the MacBook Pro cannot transition into deep sleep mode with a power consumption similar to the switched-off state. It somehow sleeps, but with the fan spinning and a steadily lit front LED. And in order to wake it up, I had to open and close the lid several times. So the steps to take before closing the Mac’s lid are:

  1. Eject (unmount) the volume (use the eject menu item after right-clicking on the volume’s icon on the desktop).
  2. Disconnect the iSCSI target (and all others) in the globalSAN iSCSI control panel in the Mac OS X System Preferences, by unmarking the tick in column Connected for all targets. A confirmation popup will be shown when unmarking the Connected tick.

After waking up your Mac next time, just tick the Connected mark in the globalSAN iSCSI control panel again and confirm the popup that will be shown. If you did not choose another destination disk for Time Machine in the meantime, Time Machine will recognize the iSCSI drive as a valid destination volume automatically and use it for its next scheduled backup.

BTW, for an interesting article on how to use ZFS iSCSI sharing with a Linux client, please click here.

DTrace at its best!

January 23, 2009

Using DTrace‘s destructive actions, you can perform actions on your operating system you never thought of before, like backing up files with a ZFS snapshot right before a user deletes them, or halting any process before it ends.

With the help of the files in directory /usr/demo/dtrace, the DTrace Toolkit, and my colleagues, who showed me the best probe for intervening before a process really ends, I wrote and tested the following short DTrace script to halt certain processes before they end. It can be very useful if you encounter a large number of short-lived processes which you could not analyze otherwise. In the following example, we are looking only for processes running with userid 4 (username adm) and with the executable name date. I saved it as stopper.d.

#!/usr/sbin/dtrace -ws

syscall::rexit:entry
/(execname == "date") && uid == 4/
{
    printf ("%d(%d): %d %d %d %d, %s, >%s<: %Y", pid, ppid, uid,
        curpsinfo->pr_projid, curpsinfo->pr_zoneid,
        curpsinfo->pr_dmodel, cwd, curpsinfo->pr_psargs, walltimestamp);
/*  stack(); */
/*  ustack(); */
/*  system ("pmap -x %d", pid); */
    printf ("\nStopping Process %d ...", pid);
    stop();
    printf (" done.");
    system ("ps -eo user,pid,ppid,s,zone,projid,pri,class,nice,args | nawk '$2==\"%d\"{print}'", pid);
}

Be warned! Adapt the filter rules carefully on a test system before using the script on the system on which you want to halt processes! Use the script at your own risk – I cannot guarantee anything!

For listing the stopped processes, you can use the following command:

$ ps -eo user,pid,ppid,s,zone,projid,pri,class,nice,args | \
nawk '$4=="T" && /date/{print}'

And for ending these stopped processes, you can use this one:

$ ps -eo user,pid,ppid,s,zone,projid,pri,class,nice,args | \
nawk '$4=="T" && /date/{system ("kill -9 "$2)}'

For testing, I created a script named start-50-date-processes.ksh to start 50 date processes roughly at the same time, then started the DTrace script above as user root:


and afterwards started the test script as user adm:


A sample output looks like:

$ ./stopper.d
dtrace: script './stopper.d' matched 1 probe
dtrace: allowing destructive actions
1   3413  rexit:entry 21922(5058): 4 3 0 1, /var/adm/bin, >date<: 2009 Jan 23 13:47:37
Stopping Process 21922 ... done.
adm 21922  5058 T   global     3  57   IA 24 date
1   3413  rexit:entry 22005(5058): 4 3 0 1, /var/adm/bin, >date<: 2009 Jan 23 13:47:41
Stopping Process 22005 ... done.
adm 22005  5058 T   global     3  47   IA 24 date
0   3413  rexit:entry 22090(22089): 4 3 0 1, /var/adm/bin, >date<: 2009 Jan 23 13:47:56
Stopping Process 22090 ... done.
adm 22090     1 T   global     3  44   IA 20 date
... (some more lines)

For stopping the DTrace script, just press <ctrl>c in the window where you started it. Stopping the script will not let the stopped processes continue – they remain in the "T" (stopped/traced) state until they are killed.

The DTrace script will run faster if you comment out its last line (where it executes the ps command for each stopped process).
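The stopped state itself is nothing DTrace-specific, by the way – you can reproduce it on any POSIX system with job-control signals. A small sketch using sleep as the victim (the single-character state column is called s on both Solaris and Linux ps):

```shell
# Start a harmless long-running process.
sleep 60 &
pid=$!

# Stop it, just like the stop() action in the D script does.
kill -STOP "$pid"
sleep 1
ps -o s= -p "$pid"     # prints T (stopped)

# Let it continue again (the DTrace script's victims stay stopped
# instead, until someone kills them).
kill -CONT "$pid"
sleep 1
ps -o s= -p "$pid"     # prints S (sleeping)

kill "$pid"
```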

And here’s the test script (for starting 50 date processes) which I executed as user adm:

$ cat start-50-date-processes.ksh
(( i = 50 ))
while [[ i -gt 0 ]]; do
date &
(( i = i - 1 ))
# or (( i-- )) with ksh93 on Solaris 10, or OpenSolaris
done

(n)awk: print matching lines and some more

January 21, 2009

I think some of you will find the following (n)awk one-liner useful. I am using it from time to time and thought I should document it so that I do not have to think about where to place the "a++" part, for example.

It will print out lines of a file that match a certain pattern, plus some of the lines that follow. In this example, I am searching for lines that contain the string "usb" in /var/adm/messages, and print each such line plus 4 more (for more lines, make the number in the first pair of curly brackets more negative, e.g. -10 for the matching line plus 9 more). A line number will also be printed for each line:

$ nawk '/usb/{a=-5}{if (a<0){print NR, $0};a++}' /var/adm/messages
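Here is the one-liner in action on a tiny sample file (a sketch with made-up log lines; on Linux, plain awk or gawk works the same way as nawk, and I use -2 so only the matching line plus one more is printed):

```shell
# Create a hypothetical five-line sample log.
cat > /tmp/demo.log <<'EOF'
line one
usb device attached
detail a
detail b
line five
EOF

# Print each matching line plus the next one (a=-2 -> 2 lines total).
awk '/usb/{a=-2}{if (a<0){print NR, $0};a++}' /tmp/demo.log
# -> 2 usb device attached
# -> 3 detail a
```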

How to collect relevant data for application core analysis

January 21, 2009

Not sure if you knew already – there is a tool available which collects all relevant data for analyzing application cores, for example by Sun’s service and support engineers. It’s called pkgapp (not pkgadd 😉 ) and can be directly downloaded by clicking this link.

When started (after extracting it into a directory like /opt/pkgapp) using:

$ /opt/pkgapp/pkgapp -c /path_to_core_file -p /path_to_executable

it collects all libraries for that executable, several proc tool outputs, and more, and packs everything into a tar file in a new directory below the current directory.

If you would like to test it, you can just create a core with the gcore command:

$ gcore PID

Where PID is the process ID of a currently running process for which you would like to create the core file. The process will not be killed; it will continue running.

A sample session looks like the following (hostname and hostid replaced by dummy entries):

$ id
uid=0(root) gid=0(root)
$ /usr/bin/sleep 30&
[1]     17825
$ gcore 17825
gcore: core.17825 dumped
$ /opt/pkgapp/pkgapp -c core.17825 -p /usr/bin/sleep
* ----------------------------------------------------------------------------------
* Sun Microsystems RSD pkgapp 3.0 Solaris                               [01/21/2009]
* ----------------------------------------------------------------------------------
* OS release                            [5.11]
* Platform                              [SUNW,Sun-Blade-1000]
* Checking [-c] is a core or pid        [using core /var/tmp/pkgapp/core.17825]
* Checking corefile for a valid pldd    [pldd is good with 4 elements]
* Process root                          [/usr/bin/sleep]
* Databin parameter [-s] checks         [reset to /var/tmp/pkgapp]
* Databin found                         [/var/tmp/pkgapp]
* Databin writable check                [success]
* Databin used/created is               [/var/tmp/pkgapp/pkgapp-012109-02]
* Creating temp area                    [/tmp/pkgapp.18627/]
* Checking if corefile is truncated     [core seems truncated - may not be useful]
* Checking if corefile cksum = filesz   [core cksum matches file size, may be fine]
* Process binary                        [sleep]
* Checking usage history                [not recently run]
* sleep binary bit version              [32]
* Checking path [-p] to binary name     [failed, path includes binary name]
* Resetting path [-p] parameter         [/usr/bin]
* Checking path [-p] is a directory     [success]
* Locating sleep                        [success]
* Checking located sleep is 32 bit      [success]
* Binary located                        [/usr/bin/sleep]
* Adding binary to pkgapp.pldd          [success]
* Grabbing pldd                         [success]
* Grabbing pstack                       [success]
* Grabbing pmap                         [success]
* Grabbing pcred                        [success]
* Grabbing pflags                       [success]
* Grabbing pargs                        [success]
* Not Including the core/gcore
* Javatools [-j] not set                [skipped]
* Grabbing /var/adm/messages            [success]
* Grabbing uname -a                     [success]
* Grabbing date/time                    [success]
* Grabbing showrev -p                   [success]
* Grabbing pkginfo -l                   [success]
* Grabbing /etc/release                 [success]
* Grabbing coreadm                      [success]
* Grabbing ulimit                       [success]
* Grabbing libs                         [success]
* Making lib paths app/                 [success]
* Making lib paths libs/                [success]
* Processing file 1 of 48
* Processing file 2 of 48
* Processing file 3 of 48
* Processing file 46 of 48
* Processing file 47 of 48
* Processing file 48 of 48
* Linking libraries                     [success]
* Libraries linked                      [48 ttl]
* Using hostid for naming .tar.gz       [12345678]
* Writing file                          [pkgapp-12345678-sol01-090121-094026.tar.gz]
* Done gathering files
* Writing dbxrc & files     [success]
* Writing manifest-090121-094026.log    [success]
* Writing pkgapp-args-090121-094026     [success]
* Creating final tarfile                [success]
* Compressing tarfile                   [success]
* End of runtime logging
* Saving history info                   [/var/tmp/pkgapp-history/history-090121-094026.log]
* Saving runtime log                    [/var/tmp/pkgapp-history/runtime-090121-094026.log]
* Removing [-r] temp area/files         [left alone]
* Operations Complete
Upload the following file(s) to your Cores Directory at Sun
1) File(s) located in directory /var/tmp/pkgapp/pkgapp-012109-02
[ pkgapp-12345678-sol01-090121-094026.tar.gz ]
2) File(s) located in directory /var/tmp/pkgapp/
[ core.17825 ]
Note: pkgapp has not included the core.17825 with the above pkgapp tar
Please rename the file appropriately and upload to the same location
Thank you.
Sun Software Technology Service Center (STSC)

Solaris booting from ZFS, explained by Lori Alt

January 15, 2009

Lori Alt, project lead for the ZFS boot project, explains in this video how booting Solaris from ZFS works.

The first 5 minutes cover the main features of ZFS. At about 05:45, Lori starts describing the Solaris boot process (SPARC and x86) and the special considerations when using ZFS for booting, including some remarks on GRUB. A great primer if you want to know what happens when Solaris is booting. The video is also available in m4v format for viewing on an iPod, for example.