Well… I installed and tried 360 Desktop on my Windows 7 system. Cute, but useless in terms of improving the UI. All it does is pan a panoramic photo on your desktop. You have the option of having desktop icons and application windows stay “pinned” to that wallpaper, or you can individually keep icons/windows in one place. But it’s slow. I kept my icons stationary (why would anyone want their desktop icons to move off screen?) but let the app windows move with the wallpaper. It takes too long to cycle all the way around, and you have no control over the speed of the movement. In Compiz Fusion, by contrast, you control the speed of the desktop cube/sphere/cylinder rotation with your own mouse movement. 360 Desktop does give you more space to work with, but if you can’t navigate that space quickly and efficiently, how useful is it? So I give it three stars out of five for cuteness factor. But it’s not useful to me. Someone suggested I try BumpTop. I’ll give it a shot, but it looks gimmicky to me.

I was looking around for a Windows Media Center equivalent for Linux. I’m not actually much of a fan of media centers because each one is a whole new UI to learn. That’s not a problem for me, but my wife and daughter need to be able to use the media center too, so I’ve kept it on the familiar Gnome desktop with custom scripts, button panels with custom icons and the like. But I got curious tonight and tried out XBMC (it started out as the Xbox Media Center, but it’s now available for the Linux, Windows, Mac and Apple TV platforms too). I installed it on my laptop (Ubuntu 9.10) and it works a treat. What I really like is that you don’t have to run it as the main shell. It does take over the screen, but it’s just an app. It does weather, photos, music, videos, plays DVDs (from other regions as well, though that requires libdvdcss, which is legally questionable in a few countries), etc… pretty much everything I’ve been doing with Gnome for the past five or six years. And it has a lot of eye candy. It’s missing a few features that are important to me (like Xine’s Alt-1 through Alt-9 keybindings to jump 10-90% into a program). But it’s quite usable and fairly simple to learn. So I will be throwing this on the media center once I migrate it from Gentoo to Ubuntu 9.10. Kudos to the XBMC people!


The Question

What is Linux? That question seems like it should have an easy answer. Most Linux fans think it has been answered well many times during Linux’s roughly eighteen years of existence. Most of the more common explanations begin by saying that Linux is a “Unix-like operating system”. That is technically accurate on the surface, but it does not answer what that rare but curious person is really asking when they wonder, “What is Linux?” For example, I’ve had people ask what version of Windows I’m running after I tell them that I’m running Linux and nothing else. In reality they tend to be asking what my graphical environment is, or maybe even where I got my “desktop wallpaper”. (A discussion of the alternative graphical environments will come later in this series.) That question alone illustrates how most definitions fail to answer what the person is asking in the first place.

For the average consumer, a typical PC comes with Microsoft Windows on it, and Windows is the PC to them. They are unaware that their PC is capable of running something other than Windows. To further complicate things, the Linux, Windows and Mac OS platforms are different enough in philosophy and approach that the users of each tend to see computers only through their own experience. Those differences make it harder to learn to do the same things across multiple computer platforms. And of course, whenever someone tries software they’ve never used before, there are new things to learn. Finally, in many cases the consumer doesn’t even have a clear understanding of what Windows actually is, which makes understanding any other computer platform difficult at best. In the eyes of many users, there is no separation between Windows and Word, for example. Grasping the difference can be very hard, since software isn’t something you can touch or that can touch you. This blog entry’s intention is to clear away those roadblocks for both the technically inclined and the interested computer user.

Hidden Questions

First, we’ll start with something I’ve noticed in my, as yet, relatively short career supporting computers and users: misunderstandings and miscommunication are the main sources of interference when trying to solve a computer problem. My wife, a confirmed non-tech, has commented many times that when she hears me providing technical support on the phone, or talking shop with friends, it sounds like a completely foreign language to her. The only parts of those conversations that make sense to her are the words in plain English. Sort of like a career-specific dialect! Computer support staff and computer users come from different backgrounds, each with vocabulary that is specific to their own work but second nature to them while they’re doing it. They simply don’t speak the same version of the native language. Because of this, problems communicating should come as no surprise to either side.

Because of this miscommunication, both sides of a technical support conversation will make many assumptions, which eventually lead people down the wrong path. In the case of our core question, “What is Linux?”, these misunderstandings appear when the technically inclined person hears the question but doesn’t ask the user for more details about what they are really asking. When someone who is honestly curious asks “What is Linux?”, they may be asking a lot more than the standard answers address. Here is a sample of some hidden questions that I’ve been able to coax out of people who have asked me, in one way or another, “What is Linux?” After reading these, it should be pretty clear that asking plenty of follow-up questions to understand what a user really wants to know is extremely important.

Q1. What is an OS (operating system)? What is Windows?
A1. This is the most basic question you’ll get from someone asking what Linux is. At this point, they may not be ready to try Linux yet. Or, if they insist that they are, it might be best to direct them to a simpler-to-use version of Linux, like Ubuntu.

Q2. I know what Windows is, but what is Linux? Is Linux another program for Windows kind of like Office or Quicken? And if so, how is it better than just using Windows and the programs I already have and know?
A2. Linux is just like Windows in that it’s a type of software called an “operating system” (OS). While it may look different, do things differently, and be based on older philosophies (with the benefits and drawbacks that come with them), it does nearly all of the same things that Windows can do. There are some things Windows can do that Linux can’t, but there are just as many things Linux can do that Windows can’t. And every year, the already small list of things Linux can’t do gets even smaller.

Q3. Does Microsoft make Linux? How much does it cost? Why would I want to spend more money on Linux when I already have Windows and a bunch of programs that all came free with my computer?
A3. A lot of confusion comes from the multitude of different Linux “brands”, which are officially known as distributions, or “distros” for short. These differ from Windows and Mac OS in that not only are many of them completely free of charge and include a complete working system (OS), but they tend to include a huge selection of additional applications. Each distribution is put together in a way that will hopefully integrate all of that software into a seamless experience. Distributions do this with differing levels of success, Ubuntu arguably being among the most successful. Spending the time to learn a Linux distribution might be worthwhile to your wallet, and will certainly increase how much you know about your computer and how it works.

Q4. I’ve heard of this thing called Linux, but it’s only for computer types and can’t really do a lot of the things Windows can do. Right?
A4. Linux is for anyone who is willing to trade some time and effort to learn it in exchange for quite a few different freedoms. The most commonly touted freedom is “free of charge”. Many people argue that the time spent learning Linux is expensive. That’s partly because they’ve forgotten that at one point they had to spend time learning the OS they use now, and partly because they assume that learning Linux will take more time than that did. With easier-to-use distributions like Ubuntu, Fedora or SUSE, the learning cost is about the same as with Windows or Mac OS X: you’re only learning a new set of approaches to things you’re already familiar with, not a completely new set of procedures you’ve never performed before. As for what you can do in Windows vs. Linux, that’s not the topic of this entry. Let’s just say that if you spend the time to learn Linux, you’ll find plenty of software that will most likely meet your needs.

There are actually a lot of answers to the question “What is Linux?”, because behind it there are usually several questions that may have nothing at all to do with Linux. To stay away from technical jargon, I’m going to try to answer some of these questions over the next few blog entries in a conversational style. Where I can’t avoid technical terms, I’ll explain them as clearly as possible in plain English, possibly with bad analogies. If nothing else, it might make you laugh.


…for someone with the artistic eye and the technical ability to pull off realism with computers.  I think this is a great example of where things could go.  Specifically, look at the “Teasers” section.  This is unlike any CGI work you’ve ever seen before:

http://www.thirdseventh.com/

This points to the fact that today’s artists are getting a better grasp of what the technology can do, and the technology is becoming easier and more manageable for artists to use as everyday tools.  This is less a study in engineering than it is a study in having a good aesthetic sense of reality.

Here’s a teaser:

T&S Teaser2 from Alex Roman on Vimeo.


Back in the late 90s I saw an ad for a free trial of something called VMware Workstation.  Since I’d been playing around with Wine and DOSEmu (better alternatives today are DOSBox for games and FreeDOS for legacy apps) on Linux to gain access to some legacy applications, I assumed this was just another emulator, like Bochs.  I downloaded the limited trial, installed it on my laptop, and my jaw dropped.

Until then, I was used to slow emulation, where doing something like playing a (software-rendered) 3D game or watching a video was just not reliable or usable.  But VMware Workstation was different.  It was also fairly inexpensive, since the company was new.  After spending a few weeks with it, I was convinced that it was not emulation but something completely different.  I paid the price to own a copy and was in OS-flipping nirvana.

My laptop running Red Hat 7 could also run Windows 2000 on VMware at what felt like at least 80% of native speed.  I was able to play Unreal, watch music videos in MPEG and AVI formats and… do it all without a hiccup while doing the same things in Linux at the same time.  I brought it into my then-new employer’s offices to show it around.  People were impressed.

As the price climbed, I couldn’t afford to keep up with newer versions, and then I discovered the freebies like QEMU and Xen.  Those worked nearly as well, but weren’t as slick.  No matter for me, because I like working closer to the metal anyway.  My focus eventually became Xen (installed and built from source on top of Fedora, then Gentoo).  When the time came to implement a new web-based groupware system, Zimbra, the free Xen wasn’t up to the task.  I did some more research and found an excellent commercial product based on Xen called Virtual Iron.

Virtual Iron really got everything right in its approach to management.  The management server could run on Windows or a few supported flavors of Linux.  It provided a central repository of ISOs for initially installing your VMs, plus a boot image that each of your x86_64 virtual machine hosts would boot from via TFTP.  The beauty of that is that you didn’t need to install anything on the virtual machine hosts at all.  They just boot up with the custom Xen and XenLinux kernels and are ready to host whatever VMs you throw at them.  Everything was there, including hiccup-free live migration.

This past summer, Oracle acquired Virtual Iron, and it currently looks like they plan to turn it into more of a virtual desktop hosting solution to complement their Oracle VM product.  At this time it is unclear to me how many of Virtual Iron’s former (very rockin’) support and engineering staff they retained.  At one point it even looked like they might bury the product entirely, but a recent issue of InformationWeek indicates that there might still be life left in it.  Either way, the time had come for me to re-evaluate virtualization.

Over the past few months, as time has allowed, I’ve been investigating many of the options available for virtualization.  Since my VMs are all Linux, I have a lot of options, ranging from free/libre (as in speech, not beer), to free of charge (but pay for support), to ungodly expensive.  I also have to factor in HP support for the virtualization platform, which narrows the field a little.  So right now it looks like it’s between Novell/SUSE Enterprise (which can host, or be virtualized with near-bare-metal performance on Hyper-V), Red Hat Enterprise, CentOS, Ubuntu Server 9 with KVM or Xen, and VMware.

This is where it gets interesting.  When I first went over to the Xen side of the table, VMware ESX and GSX were the server platforms, and they were extremely expensive.  ESX was unique when it came out because it was the first virtualization product for the x86 platform that didn’t require a full OS running under it: it was a hypervisor.  The Xen microkernel offered essentially the same thing, only it was free if you were willing to make the minimal investment of time to build it from source.  Seeing that kernel compilation is something of a breeze for me, I gave it a try and was duly impressed.

But VMware didn’t stop there.  They have an advantage that I don’t believe Xen is yet positioned to compete with: they can overcommit your server’s RAM in some very interesting ways.  Of course, anyone familiar with Xen is aware of how you can “balloon” [1] RAM for your Linux “guests”. [2]  If you assigned only 256 megs to a guest and now you need 512, you can use the ‘xm’ utility to resize the amount of RAM that guest has.  If the guest is paravirtualized, which I used a lot initially, the change in the guest is immediate.  As a result, you can move RAM between VMs as demand dictates.
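For the curious, here’s roughly what ballooning looks like from the Xen command line.  This is just a sketch; the domain name ‘web01’ and the sizes are made up for illustration:

xm list                # show running domains with their current RAM allocations
xm mem-set web01 512   # balloon the guest named ‘web01’ to 512 megs
xm list                # confirm the new allocation

Keep in mind that you can only balloon a guest up to its configured maximum (see ‘xm mem-max’ and the ‘maxmem’ setting in the guest’s config file).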

VMware can also do this.  But the coolest thing VMware can do that others can’t yet is share identical regions of RAM between multiple VMs.  As they point out on their marketing site, this allows for much more efficient use of RAM.  Where products like Citrix’s Xen-based offerings or MS Hyper-V are limited by how much RAM is on the host machine, VMware can run many more VMs on the same hardware.  On their example server with 4 gigs of RAM, those two could only run seven and six 512-meg Windows XP VMs respectively, compared to VMware’s 20.  This works because the system keeps a single copy of the memory pages that are identical across VMs, and then tracks each VM’s changes separately to maintain independent functionality.

They argue that while they might cost more, you get a lot more VMs for your dollar.  And they’re right.  So I’ll be eager to see this in action as I test the various VM technologies.  I want the best one I can find, and VMware is very interesting right now.

[1] The Obsessively Compulsive blog is really excellent for virtualization info.  I’ve followed it on and off for a while, and it never disappoints.

[2] “Guest” is the wrong terminology in Xen parlance (Xen calls them domains, or “domUs”), but it will suffice for this blog post.


Well that was a long break!  I’m going to give my blog another try and see if I can keep it going a bit better this time.  Sorry for that loooong intermission.  :)

Being someone who believes in using technology to make life better, I’ve always been interested in human/machine interaction, a topic formally known as human factors.  Having read Jef Raskin’s book, The Humane Interface, I know that there are ideas that work and ideas that just don’t.  He had some very interesting ideas, many of which have slowly been making their way into the software we use every day, whether installed on our computers or via the web.

In my own personal experience, I discovered quite some time ago that one size (or flavor) does not fit all when it comes to user interfaces.  One person might think that the latest Microsoft Windows 7 GUI advances are the best thing since sliced bread, while another might think they’re a cheap knock-off of the Mac OS.  Another might find both of those environments far too wasteful in terms of system resources and workflow efficiency.  And still another might feel that only the “one true OS” has the best GUI of all time.  Which OS that is, is left as an exercise for the reader.

My personal experience has allowed me to try nearly every GUI at one point or another, for extended periods of time, and there are definite, sometimes subtle, differences.  I’ve used everything from the earliest GUIs on Apple hardware to the latest in Linux land and everything in between.  What I’ve personally discovered is that each GUI approach has its benefits and drawbacks for certain tasks.  Another discovery is that until we get some more interesting input devices, GUIs are generally all limited to a few basic functions, no matter how much eye candy and window dressing is applied.

Having used Linux almost exclusively for many years, I was largely exposed to the Gnome and KDE environments, which both take aspects of the Windows and Mac OS desktops as well as inheriting attributes from a few Unix desktops.  The paradigm is purely windows, icons, menus and pointer (W.I.M.P.).  During this time I also tried out many other environments and found that I preferred Gnome for day-to-day use.  I also tried Enlightenment, which is just amazing, but since it’s perpetually in alpha it’s not very stable for daily use, and it had problems presenting some apps properly.  It also required a little more coding experience to customize than I had at the time.

About three or four years ago, I decided to give KDE a spin again, and spent about half a year with it.  I found that I loved their Konsole (terminal) app above all others.  But GUI-wise, it was still about as clunky as Windows 9x-2000.  I briefly returned to Gnome with the default Metacity window manager; then I discovered the amazing Beryl, which has since merged with Compiz to become Compiz-Fusion and now been renamed yet again to just Compiz.  This environment was packed with eye candy galore, some of it Mac-inspired, but much of it actually unique.  I used it consistently at work for over two years, and I loved every minute of it.  Being able to easily and visually zoom in and out of my sixteen virtual desktops was about as close as things get to Jef Raskin’s zooming user interface.  (Tufts University’s VUE mind-mapping application also has a pretty decent zoom.)

But, as usual, I got the itch to try something different, and last winter I found a blog post at codinghorror.com about how your desktop should not be a destination.  That is, your screen should never be focused on the desktop itself, since that means you’re not actually doing anything but looking at your wallpaper, and maybe a mess of icons if you’re not a minimalist like me.  ;)  I delved a little further into this notion and realized that it was time for me to try a keyboard-driven window manager.  It was time to step away from the mouse…

The first one I tried was Xmonad.  It’s a tiling window manager, meaning it automatically places application windows on your screen in the most efficient layout.  The general idea is to get away from having to move, place or resize windows.  Instead, the first application window takes up (nearly) the full screen.  There are no window widgets to close/minimize/maximize, because those concepts don’t exist in this environment.  When you open a second application window, Xmonad splits the screen in half, with the first app on the top half and the second on the bottom.  You can then switch into two other modes: side by side, and full screen, where you cycle between apps with a key combo.

There are also nine workspaces by default (kind of like virtual desktops), so it’s easy to group applications together.  I would use workspace 1 for my webmail session at work; workspace 2 for another browser with work-related links in it; workspace 3 for terminal windows holding the various shell sessions to *nix and VMS boxes; workspace 4 for a Windows remote desktop session to a virtual machine; workspace 7 for a text editor with my latest task list in it; and finally workspace 9 for a remote NX session (like remote desktop, but for Linux) to my app server at home via OpenVPN.  The ones I skipped would get various transient apps thrown onto them.
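For reference, here are the stock key combos I leaned on most.  These are the Xmonad defaults as best I remember them; your xmonad.hs can rebind all of this:

mod-shift-return    # launch a terminal
mod-space           # cycle through the layout modes
mod-j / mod-k       # move focus to the next / previous window
mod-return          # swap the focused window with the master window
mod-[1..9]          # switch to workspace 1 through 9
mod-shift-[1..9]    # send the focused window to workspace 1 through 9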

Spending about four or five months with it was interesting, but I wasn’t sure it really made that much of a difference.  Sometimes the tiling paradigm just didn’t work: the split screen, whether vertical or horizontal, didn’t provide enough space for either app, and using the apps in full-screen mode and cycling between them wasn’t very efficient.  So when I switched my workstation from Gentoo to Ubuntu Desktop this week, I went back to Gnome with Metacity, only to discover that the keyboard-driven window manager fans were right.  It’s a massive pain in the rear to have to move, place and resize windows all the time, even with multiple virtual desktops!  I could actually feel the impact on my workflow, and working with Gnome, Compiz-Fusion or Windows now just feels too slow.

So I promptly installed Xmonad as well as wmii.  As soon as I tried wmii, I was in heaven.  The key combos are similar to Xmonad’s, so it didn’t take much time to adapt.  Better still, wmii replaces the multiple-virtual-desktop approach with tags on application windows, which is much more powerful.  Tags appear to work the same as virtual desktops in that there are some defaults (0 through 9), which makes it look like you just have ten virtual desktops to move between.  The difference is that you can put more than one tag on an application window.  That lets a single application be grouped with different sets of applications under different tags, so it follows you around.  Even better, if you switch an app to floating mode (which is similar to what the standard GUIs do) under one tag, that change only applies there; when you switch to another tag, the application conforms to however it’s configured for that tag.  I’ve noticed a definite difference with wmii in my ability to save time during the day, simply because I can forget about window placement and resizing unless I actually want to do it.  Note that I haven’t yet tried Awesome or Ratpoison, which also let you ditch the mouse.  And as a second note, I’ll point out that some applications still demand a mouse, so you can’t always keep your hands on the home keys.  But wmii and Xmonad are definitely the way to go for serious work.
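To give you a taste of how tagging works under the hood: wmii exposes its state as a little filesystem that you drive with the ‘wmiir’ utility, which is exactly what its default config scripts do.  A quick sketch, assuming the default wmii 3.6 filesystem layout (the tag names here are made up):

wmiir xwrite /client/sel/tags work+mail   # give the focused window both the ‘work’ and ‘mail’ tags
wmiir xwrite /ctl view mail               # switch to the ‘mail’ view; that same window is there too
wmiir ls /tag                             # list the tags currently in use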

That said, wmii isn’t the answer everywhere: I use Compiz-Fusion on the media center at home, and standard Gnome on a few other machines.  Which is really the whole reason for this blog entry.  What I’ve discovered is that you can get a lot more done with a computer if you have the right UI for the job.  At work, where I can have anywhere from 15 to 30 windows open at a time and need to move between them quickly or group them in various ways, wmii is a clear win.  But on the laptop that I share with my family, I need something that isn’t as byzantine, so standard Gnome or Compiz-Fusion is the best choice.  When I’m working on music or editing video, I need maximal screen space combined with the occasional need to resize a window, so wmii it is again.

As you can see, there is no one approach that works well for all tasks, but you don’t realize that until you try the options.  So if you’re on the fence and curious about different UI approaches, go ahead and explore.  What you learn will be worth it.  Just make sure you give yourself enough time to really figure out how you feel about an environment; for me, that’s a minimum of six months.  Taking the time and honestly giving it your best effort might surprise you.  I remember when I first saw tiling window managers about ten years ago and wondered, “Why would anyone want that?”  Now I know.


I’m in the middle of reading Jef Raskin’s book, The Humane Interface.  (Raskin is the man who, in the late 70s, started the Apple project that became the Macintosh.)  It’s a terrific read that tells us all about what’s wrong with software as it stands today.  Although Jef died in 2005, his son Aza is carrying on the work because it’s just too good to leave unfinished.  Aza formed a company called Humanized (humanized.com), and their product is called Enso (Windows only for now).  It sits between the user and the system, and applies the speed, power and efficiency of the command line to the elegance of the GUI.  The closest thing I’ve seen to it is Firefox’s Ubiquity (also written by Aza Raskin), only Enso applies to the entire system.  While it’s not perfectly in line with Jef Raskin’s ideals, it’s a pretty nice step toward getting there.  Check out the demo video at the link above to get a peek into what could be possible.  Enso is a command line for the non-technical person.  Computers don’t have to be unfriendly.


Difficulty Level: Moderate

What is a Network Block Device and Why Would I Want One?

Let me start this entry by explaining just what a block device is, in case you’re a newer Linux user and aren’t sure. I didn’t know what one was at one point, and a quick explanation would have been helpful. In short, block devices are things like hard drives, flash drives, floppy disks, CD-ROMs and even DVDs. At a more technical level, they are devices that do their input/output in data blocks of a certain size in bits or bytes. For the sake of this discussion, we’ll just be thinking of the devices listed above.

The Linux kernel, among its many modules (which can be thought of as drivers), has a particular module called ‘nbd’, which stands for Network Block Device. What this means is that you can take almost any block device and present it to the network. This differs from standard Windows file sharing or Unix NFS in that you’re not presenting a set of files to the network, but the raw device itself. (iSCSI is another way of accomplishing the same thing with a greater degree of reliability, and it is accessible to Linux through the Linux iSCSI Target and Linux iSCSI Initiator projects.  NOTE: think of the target as the server and the initiator as the client.) Although there are other ways of doing this, network block device support is still nothing to ignore. It may not be robust, but it’s still highly useful for non-mission-critical tasks where the expense of a SAN is unwarranted.

I originally found out about nbd when I was first starting to work with Xen virtualization. The Xen project documentation suggested nbd as a convenient way to store virtual machine images on a server. Once I understood that this was the way to a “poor man’s SAN”, since Linux software RAID and LVM volumes can be exported with nbd, I wondered: what else could I export? I decided to experiment and find out. While it’s not perfect, because of some I/O control limitations, it’s still quite handy and simple to implement.

Making NBD Work for You:

The first step towards preparing your system for NBD is determining whether you need to recompile your kernel.  Fortunately, most of the popular Linux distributions build most of the optional support as kernel modules by default.  You can think of modules as “drivers” for different functionality and hardware support.  In most cases, you should already have access to NBD support.  The simplest way to find out is to get to a shell prompt and type ‘modprobe nbd’ (as root, or via sudo).  If you get no error and simply return to the prompt, then type ‘lsmod | grep nbd’.  If you see a line indicating that the nbd module is loaded, your kernel has NBD support built in.
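Here’s roughly what a successful check looks like.  The numbers in the lsmod output are purely illustrative and will vary with your kernel version:

modprobe nbd
lsmod | grep nbd
nbd                    17152  0

If that last line (or something like it) appears, you’re in business.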

However, if the ‘modprobe’ command gives an error about the module not existing, or the ‘lsmod’ command returns nothing, then it’s likely you’ll need to recompile your kernel.  Recompiling the kernel is beyond the scope of this blog entry, but I will link to some resources at the end of this entry to get you started in the right direction.

Three Components to NBD:

There are three components that make up the entirety of NBD for Linux.  The first is the kernel module, which lets the kernel present a network-imported device as a local block device.  The second component is the ‘nbd-server’ application, which handles exporting a device over TCP/IP.  And finally, the third part is the ‘nbd-client’ application, which imports the device on another machine and presents it as /dev/nbX, where ‘X’ is a number.  Depending on the distribution you use, you may be able to find a specific package that installs the applications.  If not, there is always the source code from the main NBD project site.

Once you have the kernel module loaded and the applications built, here is all you need to do to test it and see if it will work for what you need.  This example exports a raw, unpartitioned IDE hard drive over TCP/IP and then imports it on a remote system:

1. On the system containing the hard drive, run the nbd-server command as follows (syntax: nbd-server <tcp port> /dev/xxx; add the -r flag if you want the export to be read-only, though you won’t be able to partition or format it that way):

nbd-server 2000 /dev/hda

2. On the remote system where you wish to import the device, run the nbd-client command as follows (syntax: nbd-client <ip address of the system running nbd-server> <matching tcp port number> /dev/nbd0):

nbd-client 192.168.1.1 2000 /dev/nbd0

You should then be able to treat /dev/nbd0 on the importing system as if it were a local disk.  Use ‘fdisk’ to partition the device, format it for use as a file system, or even use it as a paging (swap) area.  I’ve used this successfully with remote flash drives, raw hard drives, partitions on hard drives, LVM logical volumes, and even DVD drives for playing movies on machines that don’t have one.
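For example, here’s what putting the imported device to work might look like.  This is just a sketch; the mount point is an assumption, and (as note 1 below explains) your device node may be named differently:

mkfs.ext3 /dev/nbd0           # put a file system on the whole imported device
mount /dev/nbd0 /mnt/remote   # then mount it like any local disk

mkswap /dev/nbd0              # or instead, use the device as paging space
swapon /dev/nbd0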

A few extra notes:

1. Depending on the kernel version, the NBD device nodes might be /dev/nbdX or /dev/nbX, where ‘X’ is always a number.

2. There is a one-to-one relationship between an exported device and its TCP port.  That is to say, if you use port 2000 for /dev/hda and want to export /dev/hdb simultaneously, you’ll need to increment to the next free port (see the sketch after these notes).

3. Before randomly choosing ports, it’s a good idea to take a look at the commonly used ports listed in /etc/services, as well as run ‘netstat -an’ to see which ports your particular systems are already using.

4. The performance of a DVD exported over an 802.11b/g link is quite good, after an initial buffering period, in a player like Xine.

5. It’s quite possible to use an imported NBD device as one half of a mirror set if you want a pseudo-instant copy on a separate machine, but it’s not highly recommended.
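As promised in note 2, here’s a sketch of exporting two devices simultaneously.  The port numbers are arbitrary; each device simply gets its own nbd-server instance on its own port:

nbd-server 2000 /dev/hda                # on the exporting machine
nbd-server 2001 /dev/hdb

nbd-client 192.168.1.1 2000 /dev/nbd0   # on the importing machine
nbd-client 192.168.1.1 2001 /dev/nbd1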

Recompiling the Linux kernel (which seems to be a dying art):

http://www.digitalhermit.com/linux/Kernel-Build-HOWTO.htm



