NOTE: For the less technical reader, some explanations are coming below. Persist!

For the past ten years or so, I’ve been thinking that RAM (a computer’s short term memory) and storage (what you think of as drive space, or “long term” memory) have to merge into one system. It’s very nearly happened in phones, tablets and modern laptops with the use of flash “NVRAM” or solid state disks (SSDs). In fact, a few years back I saw some strong support of this idea when HP revealed a new compute system they called “The Machine” at HP Discover 2015. Think of The Machine as a big server rack where each shelf is populated by multiple processors and nothing but RAM (instead of hard drives or solid state drives) serving as both long term and short term memory. All of that RAM was interconnected between shelves over very high speed fiber. In short, the memory lanes were slated to burst off the motherboard and into the interconnects.

HP’s design was predicated on the notion of a memristor with performance equal to standard RAM but with the ability to store data even when the power is off. The primary benefit was to be a cost similar to standard RAM in spite of these new features. Unfortunately, the technology wasn’t quite ready in 2015. Many pessimists probably see this as a failure with no real solution when, in reality, it was just a setback. I think HP’s proposal is right (partially because it matches my notion of RAM and storage becoming one!). A computer that is no longer hampered by the distinction between storage and RAM would be able to do some very fast processing indeed. That distinction has always been a necessary evil from the days when disk was far less expensive than RAM as a long term memory solution.

Today I read about a bold new chip design from Stanford using carbon nanotube field effect transistors (CNFETs). I believe this is one more lane being added to the road toward the RAM and storage merge. First, a simple (and admittedly imprecise) explanation of CNFETs:

You hear all the time that computers are “just ones and zeros”. But what does that actually mean? Believe it or not, any computer (smartphones, tablets, laptops, servers, even cheap media players) is mostly just a box of self-controlled light switches. Millions of light switches:

[Image: Buzz Lightyear]

The ones are when these switches are turned on. The zeros are when they are turned off. Right now as you use your web browser, there are switches turning on and off billions of times per second in your compute device’s processor, memory, graphics chip, and even the screen if you are using a flat screen of some kind. All those billions of ons and offs represent numbers which, in turn, represent the state of something: which letter of the alphabet you just typed, what color a specific pixel on Buzz Lightyear’s red button should be, or the packet of data coming in from the network to let you know that an e-mail just came in.
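To make that a little more concrete: if you’re comfortable opening a terminal, here is a quick way to see the switches behind a single letter (this assumes the xxd utility is installed, which it is on most Linux and Mac systems). The letter “A”, for instance, is stored as the number 65; run this and the 01000001 in the output is that same number written as eight tiny switches, ones on and zeros off:

    printf 'A' | xxd -b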

Those switches are special switches that can be turned on and off by applying power to them. They are known as transistors. One kind of transistor that works well in digital circuits is the field effect transistor (FET). Again, this is just a special kind of switch that can be turned on or off by applying electricity to it. The CNFETs talked about in the article are just the latest generation of FETs. What makes them different is that they use carbon nanotubes instead of traditional silicon. That means they can be smaller and much, much faster. That difference in size at the chip level translates to smaller and faster devices in our future!

Digging deeper into the article, the most exciting proposition is that RAM (which they include as data storage) will be part of the processor itself. Today, when a processor needs to access storage (RAM or disk), it needs to do so over some kind of interface. Think of that as having to take a bridge (which is sometimes crowded) to get to the grocery store. Imagine how excited you would be if a new grocery store was built at the end of your street. That’s what moving storage onto the processor does. The “bridge” is gone. Besides being smaller and faster because of the move to carbon nanotubes, these systems will also be faster simply because some barriers are being removed. It’s a win-win for users of compute devices everywhere, and it also supports the idea that RAM and storage are merging into a single structure, which is as it should be. Goodbye to “disk”!


Public note to all Democrat politicians: if you want my support, both in the voting booth and through donations, DO NOT use guilt to force me to donate to a cause I already support. Today I saw what looked like a petition to support Planned Parenthood, which my family already contributes to directly. I was happy to put my name out there because I support the health care access and contraception that Planned Parenthood provides to those who need it, and I am pro-choice.

I was VERY DISAPPOINTED that the web site supporting @SenGillibrand and her pro-Planned Parenthood stance appeared to REQUIRE a donation just to voice my support. It could be poor web design, but not knowing who Kirsten Gillibrand was, I did some research. That research led me to the conclusion that this is very much intentional, as she is a Blue Dog Democrat (read: fiscally conservative centrist, which in my book is a Reagan-era Republican).

Yes, she could support something I believe in, but given the requirement of a contribution just to be allowed into the party, and her apparent flexibility in moral alignment based on her past actions, I have strong reservations. In my view, Democrats who are that flexible lack the backbone needed to do what is necessary to strengthen the left at this point. It is one thing to reach across the aisle to work with your opponents. It is entirely something else to change your views based on who you hope will vote for you. (If Trump is like President Snow from The Hunger Games, any Democrat who changes views to court voters may as well be Coin.)

In spite of that, even if the donation to Gillibrand is required, I could accept it if declining to donate weren’t treated as flippant disregard for the cause. Must the statement requiring the donation be so horribly worded? See for yourself; below is the option to select if you aren’t able or willing to donate money to Gillibrand in support of Planned Parenthood:

“No, I don’t care about stopping Trump’s anti-women agenda.”

Wow. Really? That’s assigning a whole lot of incorrect attributes to a potential backer. Please rethink that if you don’t want to alienate people who might support you.

As I said, my family currently donates, and will continue to donate, a good deal of money directly to Planned Parenthood and many other causes that Trump is poised to try and destroy. Gillibrand would have gotten my signature on that petition if it weren’t for the guilt trip. But this is exactly the sort of posturing and behavior I’ve come to expect from the modern Democratic Party, and it leaves me disgusted with all politicians.

Not all Dems are like this, and I’ll give Gillibrand the benefit of the doubt that this was just a design oversight rather than the desperate and vicious money grab that it looks like. Hopefully I have her completely wrong and she will work at getting that wording changed.


So I’ve been really busy for the past few years and haven’t been able to spend as much time exploring and working on interesting things in Linux or posting to this blog. I don’t know if that will change in the near future. That was partially my reason for moving from Gentoo to Ubuntu. As fallout from all that, I ran into an inexplicable situation right around the time that Ubuntu 14.04 came out. For the most part, it worked as before, and the polish that the Unity desktop (not new in this release) brought was pretty nice, especially for the family. Easy to use and pretty.
 
At first things were great. But after a while I noticed some unusual performance issues. Nothing made sense. I would open more than four or five tabs and the computer would start swapping and performance would nosedive, while system load in ‘top’ sat in the 2-3 range. Since I tend to prefer Chrome, and I have a habit of keeping no fewer than 30-40 tabs open, this was becoming a major problem. But life intervened in the form of more and more work to keep me from investigating it, and I just slogged through. I’d typically just close all applications, then swapoff and swapon to get the system back to some sanity.
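For anyone curious, that reset is just two commands (this assumes your swap devices are all listed in /etc/fstab, so the second command can re-enable them):

    sudo swapoff -a   # push everything in swap back into RAM (needs enough free RAM to succeed)
    sudo swapon -a    # turn swap back on, now empty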
 
Coincidentally, last night I wanted to work on some sound design, and I wanted as many system resources as possible dedicated to the virtual modular synthesizer and the realtime audio subsystem. I decided, after a few years away, to go back to my most minimal configuration using the XMonad window manager. I far prefer controlling windows by keyboard rather than mouse since it’s just generally faster. At one point I needed to look something up on the web, so I opened Chrome without thinking much about it. I immediately noticed it was faster. Much faster.
 
Sure, I’m using a window manager that is much lighter than Ubuntu Unity, but come on… it’s not like there is THAT much of a difference in RAM and CPU utilization between heavier and lighter WMs. Now curious, I launched Firefox. Usually that was a recipe for disaster: both browsers would perform poorly and the rest of the system would tank. Indeed, a few times it was so bad I had to hard power cycle the laptop. But under XMonad, both Chrome and Firefox were speeding along just fine.
 
Why such a huge difference? I used to switch between twm, E, sawfish, metacity, beryl, compiz and kwm regularly and never saw this big of a difference. twm, being the most bare-bones, was guaranteed to work on the lowest and oldest of systems. compiz had all the eye candy and special effects, but as a rough personal quantification of the difference, I’d say that compiz was maybe two times heavier than metacity. And metacity ran pretty well on nearly all systems.
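If you want to put rough numbers on that kind of comparison yourself, the resident memory of a running window manager is easy to pull up (compiz and metacity here are just examples; substitute whatever WM is actually running):

    ps -C compiz -o rss=,comm=     # resident memory in KB, followed by the process name
    ps -C metacity -o rss=,comm=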
 
So, the big experiment this afternoon: I decided to hit the family media center and try out a lighter WM there too. I installed xfce4, which is somewhat reminiscent of the old Gnome 2 days. I threw it on, configured it, and… everything felt pretty snappy. Not bad for a 2009 laptop. But the real test: running the game Antichamber.
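For reference, getting Xfce onto an existing Ubuntu install is a one-liner (this pulls in the whole desktop; you then pick the Xfce session from the login screen):

    sudo apt-get install xfce4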
 
About two years ago the PC serving as our media center died. I cloned it onto my old laptop with the bad screen, which wasn’t being used. Everything worked fine, except Antichamber. It loaded, but the framerate was so low it was completely unusable. Even the game Fez had trouble keeping the audio in sync. I chalked it up to being an older laptop that was perhaps just under the requirements for those games. I was a little perplexed because NVidia cards, even in laptops, usually don’t perform this poorly even when they’re over five or six years old.
 
After switching to xfce4, I tried Antichamber, and… it played perfectly. WTF!? No change in hardware at all, just a switch of the window manager. Then I thought it through a bit and did some Googling. At this point, I think I have an idea what the problem is. Back in 2009, before Ubuntu had Unity, the only applications that made use of 3D acceleration were games, video editors, and 3D modeling/rendering software. Today, the landscape is different. I knew that Unity made use of 3D acceleration and would likely have some impact on some 3D applications. (Note: Unity runs on top of compiz.) But in the intervening time, web browsers have also been added to the mix. Like the games and Unity, they use GL today.
 
A little Googling also revealed that Unity will fall back to software-assisted rendering if your 3D accelerator can’t do something that it needs. All of that processing and data then gets shifted from the graphics processor to your plain old CPU and regular RAM. I think the reason that Chrome performs so poorly under Unity is that it spins up a new Chrome process for each tab. Each of those processes hooks into the GL subsystem and the computer’s GPU, or potentially uses up a LOT more CPU and RAM for software rendering. This is likely why Chrome is such a heavy load on a system even when it’s not being used. With Chrome doing nothing but holding 30+ open tabs, my CPU idle hovers around 50%. Swap gets hit pretty hard, especially if I hit a site with a lot of ads. imdb.com used to be horrible in Chrome. (There is also Flash, but we all know about that.)
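If you want to check whether your own system has fallen back to software rendering, a quick test (assuming the mesa-utils package, which provides glxinfo, is installed) is:

    glxinfo | grep "direct rendering"    # should say "Yes"
    glxinfo | grep "OpenGL renderer"     # "llvmpipe" or "software rasterizer" here means the CPU is doing the drawing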
Firefox fares better, but it runs as a single process, so it can’t have as many hooks into GL as Chrome potentially does. Right now, I’m typing this from within Chrome with a ton of tabs open. No lag. No loud fan. No hot laptop.

There is no single point of blame here. It’s not that “Unity sucks” or “Chrome sucks” or “Firefox sucks” or “Ubuntu/Linux sucks”. It’s that there is a resource on most systems that used to be used infrequently and is now drawn upon very heavily: the GPU. I suspect this is also true on other OSes. If you’re seeing crappy browser performance, maybe it’s time to turn off the eye candy, or look into disabling your browser’s reliance on GL. It’s also likely that if you run a desktop computer with a really good GPU and more RAM and CPU than you can pack into the typical laptop, you won’t notice this as much.
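As a starting point for that, Chrome will show you what it is and isn’t doing with the GPU if you open chrome://gpu in the address bar, and you can compare behavior with GPU usage turned off entirely by launching it with a standard flag:

    google-chrome --disable-gpu   # run Chrome without GPU hardware acceleration for comparison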
For my laptop, the difference is like night and day. I can live without the niceties of Unity for now, and I’m more than happy to go back to my beloved keyboard-driven window manager, XMonad, in exchange for vastly better performance. After showing my wife the improved performance on the family laptop, she said she doesn’t care about the eye candy and is willing to go through a few adjustments for xfce4. It’s not that different from Gnome 2, which she was used to a few years back. Long term… I think future laptops in this household will have the best GPU and the most video RAM that I can justify without going “gamer” level.

Well… I installed and tried 360 Desktop on my Windows 7 system. Cute, but useless in terms of improving the UI. All it does is pan a panoramic photo on your desktop. You have the option of having desktop icons and application windows stay “pinned” to that wallpaper, or you can individually keep icons/windows in one place. But it’s slow. I kept my icons stationary (why would anyone want their desktop icons to move off screen?) but let the app windows move with the wallpaper. It takes too long to cycle all the way around, and you don’t have control over the speed of the movement. In Compiz Fusion, by contrast, the rotation speed of the desktop cube/sphere/cylinder follows your mouse movement. 360 Desktop does give you more space to work with, but if you can’t navigate that space quickly and efficiently, how useful is it? So I give it three stars out of five for cuteness factor. But it’s not useful to me. Someone suggested I try Bumptop. I’ll give it a shot, but it looks gimmicky to me.

I was looking around for a Windows Media Center equivalent for Linux. I’m not actually much of a fan of media centers because each one is a new UI. That’s not a problem for me, but my wife and daughter need to be able to use the media center, so I’ve kept it on the familiar Gnome desktop with custom scripts, button panels with custom icons and the like. But I got curious tonight and tried out XBMC (it started out as an XBOX Media Center, but now it’s available for Linux, Windows, Mac and Apple TV platforms too). I installed it on my laptop (Ubuntu 9.10) and it works a treat. What I really like is that you don’t have to run it as the main shell. It does take over the screen, but it’s just an app. It does weather, photos, music, videos, plays DVDs (including discs from other regions, though that requires libdvdcss, which violates software patents in a few countries), etc… pretty much everything I’ve been doing with Gnome for the past five or six years. And it has a lot of eye candy. It’s missing a few features that are important to me (like Xine’s Alt-1 through 9 to jump 10-90% into a program, as well as other keybindings). But it’s quite usable and fairly simple to learn. So I will be throwing this on the media center once I migrate it from Gentoo to Ubuntu 9.10. Kudos to the XBMC people!


The Question

What is Linux? That question seems like it should have an easy answer. Most Linux fans think it has been answered well many times during Linux’s roughly eighteen years of existence. Most of the more common explanations begin by saying that Linux is a “Unix-like operating system”. That is technically accurate on the surface, but it does not answer what that rare but curious person is really asking when they wonder, “What is Linux?” For example, I’ve had people ask what version of Windows I’m running after I tell them that I’m running Linux and nothing else. In reality they tend to be asking what my graphical environment is, or maybe even where I got my “desktop wallpaper”. (A discussion of the alternate graphical environments will come later in this series.) That question alone illustrates how most definitions fail to answer what the person is asking in the first place.

For the average consumer, a typical PC comes with Microsoft Windows on it, and Windows is the PC to them. They are unaware that their PC is capable of running anything other than Windows. To further complicate things, the Linux, Windows and Mac OS platforms are different enough in philosophy and approach that the users of each tend to see computers only through their own experience. Those differences make it harder to learn to do a lot of the same things across multiple computer platforms. When someone tries out software they’ve never used before there are, of course, new things to learn. Finally, in many cases, the consumer doesn’t even have a clear understanding of what Windows actually is, which makes understanding any other computer platform difficult at best. In the eyes of many users, there is no separation between Windows and Word, for example. Understanding the difference can be very hard since software isn’t something you can touch or that can touch you. This blog entry’s intention is to lay those roadblocks out clearly for both the technically inclined and the interested computer user.

Hidden Questions

First, we’ll start with something I’ve noticed in my, as yet, relatively short career supporting computers and users: misunderstandings and miscommunication are the main sources of interference when trying to solve a computer problem. My wife, a confirmed non-tech, has commented many times that when she hears me providing technical support on the phone, or talking shop with friends, it sounds like a completely foreign language to her. The only parts of those conversations that make sense are the words in plain English. Sort of like a career-specific dialect! Computer support staff and computer users come from different backgrounds, each with vocabulary specific to their own jobs that has become second nature to them while working. They simply don’t speak the same version of the native language. Because of this, problems communicating should come as no surprise to either side.

Because of that miscommunication, both sides of a technical support conversation will make assumptions that eventually lead people down the wrong path. In the case of our core question, “What is Linux?”, these misunderstandings appear when the technically inclined person hears the question but doesn’t ask the user for more details about what the user is really asking. When someone who is honestly curious asks “what is Linux”, they may be asking a lot more than what the standard answers address. Here is a sample of the hidden questions that I’ve been able to coax out of people who have asked me, in one way or another, “What is Linux?” After reading a few of these, it should be pretty clear how important it is to ask more questions to find out what a user really wants to know.

Q1. What is an OS (operating system)? What is Windows?
A1. This is the most basic question you’ll get from someone asking you what Linux is. At this point, they may not be ready to try Linux yet. Or if they insist that they are, it might be better to direct them to a simpler-to-use version of Linux like Ubuntu.

Q2. I know what Windows is, but what is Linux? Is Linux another program for Windows kind of like Office or Quicken? And if so, how is it better than just using Windows and the programs I already have and know?
A2. Linux is just like Windows in that it’s a type of software called an “operating system” (OS). While it may look different, and do things differently, and be based on older philosophies including the benefits and drawbacks that come with them, it does nearly all of the same things that Windows can do. There are some things that Windows can do that Linux can’t, but it’s also true that there are just as many things that Linux can do that Windows can’t. Every year the already small number of things that Linux can’t do continues to get even smaller.

Q3. Does Microsoft make Linux? How much does it cost? Why would I want to spend more money on Linux when I already have Windows and a bunch of programs that all came free with my computer?
A3. A lot of confusion comes from the multitude of different Linux “brands”, which are officially known as distributions or “distros” for short. These differ from Windows and Mac OS in that not only are many of them completely free of charge and include a complete working system (OS), but they tend to include a huge selection of additional applications. Each distribution is put together in a way that will hopefully integrate all of the software into a seamless experience. Most distributions do this with differing levels of success, Ubuntu being the most successful. Spending the time to learn how to use a Linux distribution might be worthwhile for your wallet, and it will certainly increase how much you know about your computer and how it works.

Q4. I’ve heard of this thing called Linux, but it’s only for computer types and can’t really do a lot of the things Windows can do. Right?
A4. Linux is for anyone who is willing to trade some time and effort to learn it in exchange for quite a few different freedoms. The most commonly touted freedom is “free of charge”. Many people argue that the time spent to learn Linux is expensive. This is, in part, because they’ve forgotten that at one point they needed to spend time learning the OS they use now, and also because they assume that learning Linux will take more time than that did. With easier-to-use distributions like Ubuntu, Fedora or SuSE, the learning cost is comparable to Windows or Mac OS X: you’re only learning a new set of approaches to things you’re already familiar with, not a completely new set of procedures you’ve never performed before. As for what you can do in Windows vs. Linux, that’s not the topic of this entry. Let’s just say that if you spend the time to learn Linux, you’ll find plenty of software that will most likely meet your needs.

There are actually a lot of answers to the question “What is Linux?” because behind it there are usually multiple questions that may have absolutely nothing to do with Linux. To stay away from technical jargon, I’m going to answer some of these questions over the next few blog entries in a conversational style. Where I can’t avoid technical terms, I’ll try to explain them as clearly as possible in plain English, possibly with bad analogies. If nothing else it might make you laugh.


…for someone with the artistic eye and the technical ability to pull off realism with computers.  I think this is a great example of where things could go.  Specifically look at the “Teasers” section.  This is unlike any CGI work you’ve ever seen before:

http://www.thirdseventh.com/

This points to the fact that today’s artists are getting a better grasp of what the technology can do, and that the technology is becoming easier and more manageable for artists to use as a regular tool.  This is less a study in engineering than it is a study in having a good aesthetic sense of reality.

Here’s a teaser:

[Embedded video: T&S Teaser2 from Alex Roman on Vimeo]


Back in the late 90s I saw an ad for a free trial of something called VMWare Workstation.  Seeing that I’d been playing around with Wine and DOSEmu (better alternatives today are DOSBox for games and FreeDOS for legacy apps) on Linux to gain access to some legacy applications, I assumed this was just another emulator, more like Bochs.  I downloaded the limited trial, installed it on my laptop and my jaw dropped.

Until this time, I was used to slow emulation where doing something like playing a 3D game (software accelerated) or playing a video was just not reliable or usable.  But VMWare Workstation was different.  It was also fairly inexpensive since they were new.  After spending a few weeks with it, I was convinced that it was not emulation, but something completely different.  I paid the price to own a copy and was in OS flipping nirvana.

My laptop running Red Hat 7 could also run Windows 2000 on VMWare at what felt like at least 80% of native speed.  I was able to play Unreal, watch music videos in MPEG and AVI formats and… do it all without a hiccup while doing the same things in Linux at the same time.  I brought it into my then new employer’s offices to show it around.  People were impressed.

As the price climbed, I couldn’t afford to keep up with newer versions, and then I discovered free alternatives like QEMU and Xen.  Those worked nearly as well, but weren’t as slick.  No matter for me, because I like working closer to the metal anyway.  The focus eventually became Xen (installing and building from source on top of Fedora, then Gentoo).  When the time came to implement a new web based groupware system, Zimbra, the free Xen wasn’t up to the task.  I did some more research and found an excellent commercial product based on Xen called Virtual Iron.

Virtual Iron really got the management implementation right.  The management server could be run on Windows or a few supported flavors of Linux.  It provided a central repository of ISOs to use for initially installing your VMs, plus a boot image that each of your x86_64 virtual machine hosts would boot off of via TFTP.  The beauty of that is that you didn’t need to install anything on the virtual machine hosts at all.  They just booted up with the custom Xen and XenLinux kernels and were ready to host whatever VMs you threw at them.  Everything was there, including hiccup-free live migration.
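To give a flavor of what booting hosts over TFTP involves in general (this is a generic dnsmasq netboot sketch for illustration only, not Virtual Iron’s actual stack), the serving side needs surprisingly little configuration:

    # /etc/dnsmasq.conf -- hand diskless hosts a boot image over DHCP/TFTP
    enable-tftp
    tftp-root=/srv/tftp        # directory holding the kernel/boot loader files
    dhcp-boot=pxelinux.0       # file each host requests when it network boots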

This past summer, Oracle acquired Virtual Iron, and it currently looks like they plan to turn it more into a virtual desktop hosting solution to complement their Oracle VM product.  At this time it is unclear to me how many of Virtual Iron’s former (very rockin’) support and engineering staff they retained.  At one point it even looked like they might bury the product entirely, but a recent issue of InformationWeek indicates that there might still be life left in it.  However, the time came for me to re-evaluate virtualization.

Over the past few months as time has allowed, I’ve been investigating many of the options available for virtualization.  Since my VMs are all Linux, I have a lot of options, ranging from free/libre (as in speech, not beer), to free of charge (but pay for support), to ungodly expensive.  I also have to factor in HP support for the virtualization platform, which narrows the field a little.  So right now it looks like it’s between Novell/SUSE Enterprise (which can host VMs or be virtualized with near bare-metal performance on Hyper-V), Red Hat Enterprise, CentOS, Ubuntu Server 9 + KVM or Xen, and VMWare.
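Since KVM is on that list, one quick sanity check worth running on any candidate host is confirming that the CPU exposes hardware virtualization extensions (Intel VT-x shows up as “vmx”, AMD-V as “svm”); a non-zero count here means KVM can run guests at full speed:

    egrep -c '(vmx|svm)' /proc/cpuinfo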

This is where it gets interesting.  When I first went over to the Xen side of the table, VMWare ESX and GSX were the server platforms, and they were extremely expensive.  ESX was unique when it came out because it was the first virtualization product for the x86 platform that didn’t require a full OS running under it.  It was a hypervisor.  The Xen microkernel offered essentially the same thing, only it was free if you were willing to make the minimal investment of time to build it from source.  Seeing that kernel compilation is something of a breeze for me, I gave it a try and was duly impressed.

But VMWare didn’t stop there.  They have an advantage that I don’t believe Xen is yet positioned to compete with: they can overcommit your server’s RAM in some very interesting ways.  Of course, anyone familiar with Xen is aware of how you can “balloon” [1] RAM for your Linux “guests” [2].  If you assigned only 256 megs to a guest and now you need 512, you can use the ‘xm’ utility to resize the amount of RAM that guest has.  If the guest is paravirtualized, which I used a lot initially, the change in the guest is immediate.  As a result you can move RAM between VMs as demand dictates.
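For the curious, the ballooning dance with xm looks roughly like this (“web01” is a made-up guest name here, and the guest must have been created with a memory ceiling high enough to grow into):

    xm mem-set web01 512   # change the guest's RAM allocation to 512 MB on the fly
    xm list                # confirm the new memory figure for each running domain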

VMWare can also do this.  But the coolest thing VMWare can do that others can’t yet is share identical areas of RAM between multiple VMs.  As they point out on their marketing site, this allows for much more efficient use of RAM.  Where products like Citrix’s Xen-based offerings or MS Hyper-V are limited by how much physical RAM is in the host machine, VMWare can run many more VMs on the same hardware.  On their example server with 4 gigs of RAM, the others could only run six or seven 512 meg Windows XP VMs compared to VMWare’s 20.  This is because VMWare detects memory pages that are identical across running VMs, stores a single copy, and then tracks changes to each VM’s pages so they keep functioning independently.

They argue that while they might cost more, you get a lot more VMs for your dollar.  And they’re right.  So I’ll be eager to see this in action as I test the various VM technologies.  I want the best one I can find, and VMWare is very interesting right now.

[1] The Obsessively Compulsive blog is really excellent for virtualization info.  I’ve followed it on and off for a while and it never disappoints.

[2] Guest is the wrong terminology in Xen parlance, but it will suffice for this blog post.