I’m not a scientist, but I play one (badly) on the web.

[Science for old people] I’ve belonged to many different internet groups dating back to the 80s where people would exchange ideas and have conversations about various subjects. Eventually, as more people joined, the unavoidable tricksters would join too. Some were simply spammers looking to make a quick buck by peddling junk. Others were scammers who wanted to get valuable information from unsuspecting users in the group. Then there were the people who knowingly fed disinformation, rooted in fear of the unknown, into the group. That information would be passed along as fact until the original source was no longer truly known. It continues to this day on social media, and that type of misinformation is what I want to talk about in this post.

I’m sure you’ve recently seen posts or reposts of web articles or blog posts warning of the dangers of the new 5G mobile data technology that is just around the corner in various parts of the world. Before I get around to that, I want to make sure that you have a clear understanding of what electromagnetic field (EMF) energy is. Welcome to AltThink.

 

Child’s Play

Of course, everyone has played with a magnet at one time or another in their lives. As children, magnets seem nearly magical with their ability to remotely affect metal or other magnets. I’m sure you remember pushing the like poles (north to north or south to south) of two magnets together as a kid and feeling the squishy pushback of their repelling fields. You can’t see the magnetic field, but it’s there and you can feel it this way.

There are naturally occurring magnets called lodestones (iron ore that was naturally magnetized). The earliest mention of lodestones and their magnetic properties comes from Magnesia, an ancient region of Greece, and it is possible this is where the word magnet originates. The practical use of magnets dates back to 206 BC in China, where the earliest compass was invented. But what makes a magnet? Before we get around to that, let’s look at where the planet’s magnetic field comes from in the first place.

The Earth is So Metal

Deep underground is the Earth’s core, and it’s mostly made of iron. That iron is very hot, but due to the unbelievably high pressure of the weight of everything above it, it isn’t liquid, it’s solid. Surrounding this solid core is one of the planet’s “onion skin” layers, made of molten metals, mostly iron and nickel. These metals are in liquid form and move freely within that layer.

The Earth’s rotation on its axis spins this molten layer, somewhat like water spins in a bucket that you twirl by the handle. There are also differences in temperature that cause the molten mix to rise toward the next onion-skin layer and then drop back toward the core. This combination of movement creates whirlpools in the molten mix, which generate very strong electrical currents. Where there is electricity, there is a magnetic field.

All of these fields at that layer interact and generally herd together in a loosely similar direction. Just as the combined voices of a choir singing the same words are louder than a single voice, these combined fields produce the Earth’s very strong EM field. (This is a greatly oversimplified and somewhat inaccurate retelling of this article and this forum discussion.)

Metal Makeover

With a better understanding of the Earth’s magnetic field, we’re back to the question of what makes a magnet do its thing. We’ll assume that you, the reader, are aware that everything is made of the universe’s version of pixels: atoms. Carrying the pixel analogy a little farther, much like a single pixel is made of three subpixels (red, green and blue), atoms are made up of three subatomic particles: neutrons, protons and electrons.

Some materials are better at being magnets than others, and those materials tend to be certain metals known as ferromagnetic materials. Iron, nickel, and the ever popular but rare metal neodymium are used to make contemporary man-made permanent magnets. Naturally occurring lodestone, as mentioned earlier, is simply iron ore that was naturally magnetized. But what makes these metals magnetic?

The electrons in a ferromagnetic metal each have a north/south polar alignment. In a regular piece of iron, these electrons’ fields all point in different directions, so there is no unified strong field in that object. If something aligns the electrons’ fields, the ferromagnetic material becomes magnetic. In the case of the lodestone, the electrons were likely aligned by a strong natural field as the Earth cooled, and they retained that alignment.

Man-made permanent magnets are similarly made by exposing a ferromagnetic material to intense electromagnetic fields generated with electricity. Below are two simple ways of making your own magnet as a personal experiment:

  1. Repeatedly stroking a pin or needle against one pole of a permanent magnet will align the magnetic fields of the pin’s electrons turning it into a weak magnet.
  2. Place one end of a nail on a hard solid surface (garage floor, large rocky area) and repeatedly strike the top end with a hammer. Test for weak magnetic attraction with iron filings.

Fake N/S

You may have noticed that electrical currents and magnetic fields keep popping up in relation to each other in this post. They are absolutely linked to each other through their relationship to electrons. Because of this, many electrical devices (especially older devices) use magnets: generators, motors, speakers, earbuds, hard drives, ancient music cassettes, and so on. All of these use some variation of an electromagnet. Let’s talk about how an electromagnet is made.

As I said before, a magnet is made by aligning the fields of the electrons in iron or other materials with ferromagnetic potential. Since the link exists between electrical fields and magnetic fields, it is possible to use electricity to align the electrons’ fields. If you didn’t do this in science class, you can do it at home. You just need a nail, some small pins, about one foot of insulated wire, tape, and a 9-volt battery.

Strip about 1/4 inch of insulation off of each end of the wire. Tape the insulated part of one end of the wire to the nail, then wrap the remaining wire around the length of the nail until you only have about three or four inches left. Tape one end of the wire to one terminal of the battery. Tape the other end of the wire to the other terminal. Now hold one end of the nail over the pins. The pins should be attracted to the nail. If you are lucky enough to have a compass, you can also check what the pole orientation of each end of the magnet is. One will be south and the other north. The compass will point its north toward the south end of the nail, since opposites attract and vice-versa. If you reverse the connection to the battery, the north and south poles of the magnet will also reverse. This point is important to remember!

AC/DC (I told you this was metal)

We’ve now talked through the origins of the Earth’s magnetic field, the relationship between electricity and magnetic fields, and how they are bound together by their common component: electrons. Now we need to talk about electrical current as it applies to our everyday use in light bulbs, refrigerators, computers and mobile devices. There are two forms of electricity used in our electrical devices: AC and DC. DC is direct current and comes from sources like batteries, solar panels, and some simple generators. AC is alternating current and is what comes from the electrical outlets and light sockets in your home.

Electricity also has poles, somewhat like a magnetic field’s north and south poles. These electrical poles are called positive (+) and negative (-) which you’ve likely seen on batteries. Electrically speaking, current moves from the positive pole to the negative pole in an electrical circuit. If you did the electromagnetic nail experiment above, the electrical current from the 9-volt battery was flowing through the wire out from the positive pole and back to the negative pole. All DC devices operate this way.

AC is different. It’s called alternating current because the poles of the electrical current swap positions very quickly. In the United States, the alternating current swaps positive and negative poles back and forth 60 times per second. You can imagine this is like someone connecting the 9-volt battery from the experiment in one direction, then disconnecting it and reversing the connection, 60 times per second.

To measure the speed of that change we have a unit called Hertz, abbreviated Hz, just as we have inches to measure length or pounds to measure weight. This unit of measurement refers to the frequency of that alternation of poles. If you look at an American AC device, you’ll likely see something near the power cord that says (110VAC/60Hz). That’s 110 volts AC alternating 60 times a second. If you live in another country, you might have a different voltage (220VAC) or a different frequency (50Hz).

Side note: Nikola Tesla and Thomas Edison had a very nasty competition over AC and DC for long-distance electrical power. Tesla’s AC (among his other inventions) won and is in widespread use today, even though Edison was the better marketer and cemented himself in history as a genius. Tesla deserves more credit. Check this out as I thumb my nose at Edison.

The Golden Age of Wireless

We’ve had wireless radio transmission in one form or another for over 100 years now, and there were multiple contributors (Tesla included). Whether it was Morse code sent a short distance or the 4G mobile data you use on your phone or tablet today, the underlying principles are the same: alternating electrical current is used to disturb the electromagnetic field around you in a commonly agreed-upon way. That disturbance is picked up by your receiving device and reconstructed into a recreation of the sent message (music, voice, text). Again, massively oversimplified but generally correct.

All forms of radio transmission require electricity alternating at some frequency. Please note that this alternation doesn’t have any direct relation to the AC in your house other than both being alternating currents. When radio grew out of its infancy in the United States and AM and FM broadcasts were established, those forms of radio used electrical signals in the following frequency ranges:

AM: 540-1600 kHz (kilohertz, meaning a signal whose polarity changes 540,000 to 1,600,000 times per second)

FM: 88-108 MHz (megahertz, meaning the polarity changes 88,000,000 to 108,000,000 times per second)

The radio stations used varying amounts of power to send out their signals, which controlled how far away the signals could be received. (I won’t get into the differences between AM and FM radio technologies.) Today, with WiFi signals in the home, you don’t need much power. WiFi access points usually send signals at between 200 and 4000 mW (milliwatts, i.e. 0.2 to 4 watts) of transmission power. Compare this with the Chicago AM radio station WBBM at 8 kW (kilowatts, i.e. 8,000 watts) of transmission power. Remember what I said here about radio signal frequencies and transmission power.

Closer = Stronger, Farther = Weaker

Like any other form of energy, radio signals (electrical energy that is radiated out from a transmitter) lose strength the farther away you are from the source. But how much weaker do they become? It turns out there is a formula in physics called the inverse-square law that determines this. Assume a WiFi signal travels 40 feet and is still strong enough to be usable. If you go out 80 feet (two times the original distance), the inverse-square law says the signal strength will be 1 divided by 2 to the second power (1/2^2, or 1/4). If you go out 120 feet (three times the original distance), the signal strength will be 1 divided by 3 to the second power (1/3^2, or 1/9) and so on. So a 4 watt signal at 40 feet would be a 1 watt signal at 80 feet, and roughly a 0.44 watt signal at 120 feet.
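
If you want to play with that falloff yourself, here’s a quick back-of-the-envelope sketch using the made-up 4 watt / 40 foot figures from above:

    # inverse-square falloff for a hypothetical 4 W signal usable at 40 ft
    for mult in 1 2 3; do
      awk -v m="$mult" 'BEGIN { printf "%3d ft: %.2f W\n", 40 * m, 4 / (m * m) }'
    done
    # prints: 40 ft: 4.00 W, 80 ft: 1.00 W, 120 ft: 0.44 W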

Unlimited Data

Now that we’ve introduced all the needed information to talk about 5G, let’s talk about 5G. Since smartphones have popularized mobile data, most people are aware of the “cell phone” towers that dot our cities and highways. Those towers have been transmitting mobile signals since the 80s in one form or another, using different frequencies and formats over the decades.

The last time the mobile providers rolled out a new mobile technology, it was 4G LTE. If you’re not aware, the 4G stands for fourth generation. As with any kind of radio transmission, the signals that 4G uses have specific frequencies and power levels. They start at 700 MHz and go as high as 2.5 GHz. That would be 700,000,000 to 2,500,000,000 polarity changes per second. No different from the AC in your house or FM and AM radio transmissions, just much, much faster.

5G differs in two important ways (there are more differences, but I’m only focusing on two). 5G will operate in the ranges of 600 MHz to 6 GHz as well as 24-83 GHz. Converting these frequencies to Hz for comparison with AC power will give you some perspective:

600 MHz = 600,000,000 Hz

6 GHz = 6,000,000,000 Hz

24 GHz = 24,000,000,000 Hz

83 GHz = 83,000,000,000 Hz

Big numbers, right? But why use higher frequencies? There are multiple reasons. One of them is that consumers want faster data speeds, and higher frequencies allow for that. This link says that with 4G it might take 10 minutes to download an HD movie, whereas with 5G it might take one second. So the speed will be a huge benefit to users.

The other difference between 4G and 5G is the plan to use lower power. Since the higher frequencies have very short wavelengths (hence the name millimeter waves), antennas can be very small. With the smaller antenna size, a city can easily be populated with more of these devices without needing the current cell phone towers. Instead, these new 5G transceivers can be placed on top of phone poles and buildings. This widespread distribution will lead to better coverage with lower signal power. These devices will essentially be one step up in power from a typical WiFi access point.

Machinations of War

If you’ve followed me up to this point, it’s now time to discuss how millimeter waves can be used in weaponry. The U.S. military has a non-lethal weapon called the Active Denial System (ADS). The basic principle is very similar to the way a microwave oven works. When you “nuke” (scientifically inaccurate) your food, it gets hit with bursts of microwave energy. These bursts cause the molecules of water and fat in the food to move very quickly. The fast motion produces heat, and… voila, you have heated food. It should be noted that the frequencies used in microwave ovens are chosen because their wavelength will penetrate food to a depth of 3-6 inches, even though microwave wavelengths in general vary from one meter to one millimeter.

The Active Denial System uses millimeter waves at frequencies much higher than those used in our microwave ovens. Per this article, the ADS uses 95 GHz (95,000,000,000 Hz) waves. Why so high? In this case, that frequency limits the energy to penetrating only 1/64th of an inch of surface tissue. Exactly like the microwave oven, the ADS warms up the tissue very quickly and causes enough pain that the recipient will not be able to withstand it for more than three to five seconds. One person exposed to it said it feels like touching a very hot wire, though there is no actual heat, and the pain stops as soon as you’re out of the beam. Actual injury is very rare (0.1% of exposures), resulting in small blisters. It is an ugly use of technology, no doubt, and there are many who are concerned about its potential as a torture device since it leaves no visible damage.

Bringing it All Together

So if 5G uses millimeter waves as the ADS does, why can’t it be used against people in the same way? The difference lies in the amount of power required to produce the effects that ADS has. Per the Wikipedia article linked in the last paragraph, a portable ADS system uses 30 kilowatts (30,000 watts) of power to reach targets up to 250 feet away. As mentioned earlier, one of the benefits of the 5G system will be the use of less power. These units are likely to use no more than a watt or two. This is like the difference between an electrical shock from a 9-volt battery and a lightning bolt. 5G transceivers will never be capable of use as ADS systems.
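
To put numbers on that gap (assuming, for the sake of argument, a 2 watt 5G node, since the final figures aren’t set):

    # rough power ratio: a 30 kW portable ADS vs. a hypothetical 2 W 5G node
    awk 'BEGIN { printf "ADS transmits %.0f times more power\n", 30000 / 2 }'
    # prints: ADS transmits 15000 times more power

And that’s before the inverse-square falloff discussed earlier whittles the signal down further.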

What still remains an unanswered question for me is what the long-term effects are of living in a bath of very low power, very high frequency radio waves. There may be no ill effects at all, or there could be something we just haven’t had enough time to find out about. But until there is a definitive answer from a reliable, non-biased source, I’m going to say that radio waves in the diluted form we experience all the time are likely no more damaging than cosmic rays. When 5G goes into service near you, it’s likely you will complain about slow internet access a year later. Just like you did with 4G.

Sources:

https://en.wikipedia.org/wiki/5G

https://en.wikipedia.org/wiki/Active_Denial_System

https://en.wikipedia.org/wiki/Inverse-square_law

https://spectrum.ieee.org/video/telecom/wireless/everything-you-need-to-know-about-5g

https://en.wikipedia.org/wiki/History_of_the_compass

http://www.dowlingmagnets.com/blog/2017/how-are-magnets-made-and-what-are-they-made-of/

http://www.dowlingmagnets.com/blog/2015/how-do-magnets-work/

http://www.physics.org/article-questions.asp?id=64

https://physics.stackexchange.com/questions/385388/even-if-molten-iron-is-ionized-spins-how-does-it-make-a-mag-field

https://physics.stackexchange.com/questions/18340/can-you-magnetize-iron-with-a-hammer

https://w.wol.ph/2015/08/28/maximum-wifi-transmission-power-country/


MythTV error message: Error opening jump program file buffer

I’m posting this hoping it will help others who run into the same issue noted above. For the longest time, I’ve used local disk for TV recordings with MythTV on my media center builds (typically on Ubuntu, but my last build was on Korora) and never had any issues. But recently I lost a 1TB drive and decided that, as a band-aid, I’d just use an NFS volume on my central NFS backup server for the house. With 5 TB to spare, why not, right? To set this up, I simply renamed the original /var/lib/mythtv/recordings directory to /var/lib/mythtv/recordings.old, created a new directory with the original name, set the correct owner:group and mode, and then mounted the remotely exported volume.

On the remote side, I made sure that the right user and group IDs were assigned to the exported /capstore/mythtv directory, defined the export for read/write in /etc/exports, and ran ‘exportfs -av’. So far, so good. I then mounted the new NFS volume on the media center, verified that owner:group and mode looked correct, and then ran the MythTV frontend (note that the backend is also on the media center system). Watch TV worked, but as soon as I tried to change the channel, I got the dreaded “Error opening jump program file buffer” error. Assuming that others had encountered this before, I searched the web, which pointed me to some forums where people were asking the same question but getting various unhelpful answers. Some of them complained about how the community isn’t that helpful. Not a good sign for my search!
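
For reference, the server-side export amounted to something like this (the client subnet below is an example, not my actual network):

    # hypothetical /etc/exports entry on the backup server
    /capstore/mythtv  192.168.1.0/24(rw,sync,no_subtree_check)

    # re-export and list what is being served
    exportfs -av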

I went to the MythTV wiki and located the NFS section. Besides the things I’d already set up, I tried the other recommendations, like setting the actimeo=0 mount option on the client to turn off NFS attribute caching, and using ‘soft’ and ‘async’ to prevent issues with uninterruptible sleep states should the NFS server go offline or otherwise be unavailable. I tried switching between UDP and TCP protocols. I set the rsize and wsize to 32k. None of these things seemed to work. I also found that I could forego actimeo altogether and just use ‘noac’ to disable attribute caching, which improves response time for media purposes like MythTV. I double-checked the ACL on the directory as well, to no avail. It looked like everything was set right, but as soon as I tried to change the channel, I’d get that annoying error, and the forums had no answer.
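
Rolled together, the client-side mount I ended up testing looked roughly like this (the server hostname is a stand-in; rsize/wsize are the 32k values mentioned above):

    # one possible client-side mount combining the wiki's recommendations
    mount -t nfs -o rw,soft,async,noac,rsize=32768,wsize=32768 \
        nfs-server:/capstore/mythtv /var/lib/mythtv/recordings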

At one point I switched back to my local disk to make sure there was no issue there, and it definitely had no problem working from local disk. The NFS was the issue, but how? Why? Log files were vastly unhelpful other than confirming that the file didn’t exist. Looking in the directory, the file was definitely being created and was there. Each one even had a few kB or MB in it. Then I noticed something funny. I knew when I’d been working on this issue last night and today, and the time stamps on those files did not look correct at all. Nearly 12 hours off! I checked the system time on the media center and it was correct. I went to the NFS server and checked and… it was nearly 12 hours behind. The server is quite old (I won’t say how old, as it’s nearly embarrassing 🙂 ). The issue appears to be that the Linux distribution I used for it has time servers in /etc/ntp.conf that are no longer valid, so it had been drifting out of time sync for quite a while.

I corrected the defunct time server pools, stopped ntpd, synced with ntpdate and restarted ntpd, then verified that the time was correct as viewed from both sides. It was. I tried running MythTV again and changing channels. It finally worked. So what I learned is that EVERYTHING matters with NFS for MythTV (and other applications), including the server’s time. My guess is that MythFrontend created the file it needed but has some aspect of its functionality where it stats some portion of the time stamp, and if it doesn’t jibe with local time on the client, it treats the file as non-existent, hence the “file not found” errors. The other thing it reinforced in my mind… I have some updates to take care of. But that’s a task for when I actually have time back in my personal life.
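
For anyone in the same spot, the fix boiled down to a few commands on the server (a sketch; the service name and init commands vary by distribution, e.g. ‘ntp’ vs. ‘ntpd’):

    # after fixing the defunct pool entries in /etc/ntp.conf
    service ntpd stop
    ntpdate pool.ntp.org   # one-shot sync against a known-good pool
    service ntpd start
    date                   # verify the clock from both client and server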

Some of the useful links I found that got me partway there:

Most helpful for the MythTV specifics around NFS set up: https://www.mythtv.org/wiki/Optimizing_Performance#Network_File_Systems

Where to check backend logging: https://www.mythtv.org/wiki/Troubleshooting:Filesystem_Permissions

Useful tip for CIFS and NFS (note there is no ‘forcedirectio’ mount option in Linux; that is where ‘noac’ comes in): http://wildebeestplain.blogspot.com/2011/01/mythtv-and-network-mounts.html

I hope this saves someone else time, because although it might be obvious to some, not everyone would think to check time skew on the backend file server as part of troubleshooting NFS issues. I know I will from now on.

 


NOTE: For the less technical reader, some explanations are coming below. Persist!

For the past ten years or so, I’ve been thinking that RAM (a computer’s short-term memory) and storage (what you think of as drive space, or “long-term” memory) have to merge into one system. It’s very nearly happened in phones, tablets and modern laptops with the use of flash “NVRAM” or solid state disks (SSDs). In fact, a few years back I saw some strong support for this idea when HP revealed a new compute system they called “The Machine” at HP Discover 2015. Think of The Machine as a big server rack where each shelf is populated by multiple processors and nothing but RAM (instead of hard drives or solid state drives) serving as both long-term and short-term memory. All of that RAM had interconnections between each shelf using very high speed fiber. In short, the memory lanes were slated to burst off the motherboard and into the interconnects.

HP’s design was predicated on the notion of a memristor with performance equal to standard RAM but with the ability to store data even when the power is off. The primary benefit was to be a cost similar to standard RAM in spite of these new features. Unfortunately, the technology wasn’t quite ready in 2015. Many pessimists probably see this as a failure with no real solution, when in reality it was a setback. I think HP’s proposal is right (partially because it matches my notion of RAM and storage becoming one!). A computer that is no longer hampered by the distinction between storage and RAM would be able to do some very fast processing indeed. This distinction has always been a necessary evil from the days when disk was far less expensive than RAM as a long-term memory solution.

Today I read about a bold new chip design from Stanford using carbon nanotube field effect transistors (CNFETs). I believe this is one more lane being added to the road to the RAM and storage merge. First, a simple and inaccurate explanation of CNFETs:

You hear all the time that computers are “just ones and zeros”. But what does it actually mean? Believe it or not, any computer (smartphones, tablets, laptops, servers, even cheap media players) is mostly just a box of self controlled light switches. Millions of light switches:

[Image: Buzz Lightyear]

The ones are when these switches are turned on. The zeros are when they are turned off. Right now, as you use your web browser, there are switches turning on and off billions of times per second in your compute device’s processor, memory, graphics chip, and even the screen if you are using a flat screen of some kind. All those billions of ons and offs represent numbers which, in turn, represent the state of something: which letter of the alphabet you just typed, what color a specific pixel on Buzz Lightyear’s red button should be, or the packet of data coming in from the network to let you know that an e-mail just arrived.
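
If you want to see those ones and zeros for yourself, here’s a one-line sketch (assuming the common xxd tool is installed on your Linux system):

    # the letter 'A' as the ones and zeros the computer actually stores
    printf 'A' | xxd -b
    # prints something like: 00000000: 01000001  A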

Those switches are special switches that can be turned on and off by applying power to them. They are known as transistors. One kind of transistor that works well in digital circuits is the field effect transistor (FET). Again, this is just a special kind of switch that can be turned on or off by applying electricity to it. The CNFETs discussed in the article are just the latest generation of FETs. What makes them different is that they use carbon nanotubes instead of traditional silicon. That means they can be smaller and much, much faster. That difference in size at the chip level translates to smaller and faster devices in our future!

Digging deeper into the article, the most exciting proposition is that RAM (which they include as data storage) will be part of the processor itself. Today, when a processor needs to access storage (RAM or disk), it needs to do so over some kind of interface. Think of that as having to take a bridge (which is sometimes crowded) to get to the grocery store. Imagine how excited you would be if a new grocery store were built at the end of your street. That’s what moving storage onto the processor does. The “bridge” is gone. Besides being smaller and faster because of the move to carbon nanotubes, these systems will also be faster simply because some barriers are being removed. It’s a win-win for users of compute devices everywhere, and it also supports the idea that RAM and storage are merging into a single structure, which is as it should be. Goodbye to “disk”!


Public note to all Democratic politicians: if you want my support, both in the voting booth and through donations, DO NOT use guilt to force me to donate to a cause I already support. Today I saw what looked like a petition to support Planned Parenthood, which my family already contributes to directly. I was happy to put my name out there because I support the health care access and contraception that Planned Parenthood provides to those who need it, and I am pro-choice.

I was VERY DISAPPOINTED that the web site supporting @SenGillibrand and her pro-Planned Parenthood stance appeared to REQUIRE a donation just to voice my support. It could be poor web design, but not knowing who Kirsten Gillibrand is, I needed to do some research. That research led me to the conclusion that this is very much intentional, as she is a Blue Dog Democrat (read: fiscally conservative centrist, which in my book is a Reagan-era Republican).

Yes, she could support something I believe in, but given the requirement of a contribution to even be allowed into the party, and her apparent flexibility in moral alignment based on her past actions, I have strong reservations. In my view, Democrats who are flexible in this way lack the backbone needed to do what is necessary to strengthen the left at this point. It is one thing to reach across the aisle to work with your opponents. It is entirely something else to change your views based on who you hope will vote for you. (If Trump is like President Snow from The Hunger Games, any Democrat who changes views to court voters may as well be Coin.)

In spite of that, if the donation to Gillibrand is required, I could accept it if the act of not donating weren’t taken as such flippant disregard for the cause. Must the statement requiring the donation be so horribly worded? See for yourself; below is the option to select if you aren’t able or willing to donate money to Gillibrand in support of Planned Parenthood:

“No, I don’t care about stopping Trump’s anti-women agenda.”

Wow. Really? That assigns a whole lot of incorrectly assumed attributes to a potential backer. Please rethink that if you don’t want to alienate people who might support you.

As I said, my family currently donates and will continue to donate a good deal of money directly to Planned Parenthood and many other causes that Trump is poised to try to destroy. Gillibrand would have gotten my signature on that petition if it weren’t for the guilt trip. But this is exactly the sort of posturing and behavior within the modern Democratic party that I’ve come to expect, and it leaves me disgusted with all politicians.

Not all Dems are like this, and I’ll give Gillibrand the benefit of the doubt that this was just a design oversight rather than the desperate and vicious money grab that it looks like. Hopefully I have her completely wrong and she will work at getting that wording changed.


So I’ve been really busy for the past few years and haven’t been able to spend as much time exploring and working on interesting things in Linux or posting to this blog. I don’t know if that will change in the near future. That was partially my reason for moving from Gentoo to Ubuntu. As fallout from all that, I ran into a rather inexplicable situation right around the time that Ubuntu 14.04 came out. For the most part, it worked as before, and the polish that the Unity desktop brought (it wasn’t new in this version) was pretty nice, especially for the family. Easy to use and pretty.
 
At first things were great. But after a while I noticed some unusual performance issues. Nothing made sense. I would open more than four or five tabs and the computer would start swapping and performance would nosedive. System load in ‘top’ would be in the 2-3 range. Since I tend to prefer Chrome, and I have a habit of having no fewer than 30-40 tabs open, this was becoming a major problem. But life intervened in the form of more and more work to keep me from investigating it, and I just slogged through. I’d typically just close all applications and run swapoff and swapon to get the system back to some sanity.
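
(That band-aid, for the curious, is just the following, run as root; it can take a while if swap is heavily used:

    # force swapped-out pages back into RAM; needs enough free memory to hold them
    swapoff -a && swapon -a

Not a fix, but it buys back some sanity.)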
 
Coincidentally, last night I wanted to work on some sound design, and I wanted as many system resources as possible dedicated to the virtual modular synthesizer and realtime audio subsystem. I decided, after a few years away, to go back to my most minimal configuration using the XMonad window manager. I far prefer controlling windows by keyboard rather than mouse, since it’s just generally faster. At one point I needed to look something up on the web, so I opened Chrome without thinking much about it. I immediately noticed it was faster. Much faster.
 
Sure, I’m using a window manager that is much lighter than Ubuntu’s Unity, but come on… it’s not like there is THAT much of a difference in RAM and CPU utilization between heavier and lighter WMs. I was now curious and launched Firefox. Usually that was a recipe for disaster, as both browsers would perform poorly and the rest of the system would tank. Indeed, a few times it was so bad I had to hard power cycle the laptop. But right now, with XMonad, both Chrome and Firefox were speeding along just fine.
 
Why such a huge difference? I used to switch between twm, E, sawfish, metacity, beryl, compiz and kwm regularly and never saw this big of a difference. twm, being the most bare-bones, was guaranteed to work on the lowest and oldest of systems. compiz had all the eye candy and special effects, but as a rough personal quantification of the difference, I’d say that compiz was maybe two times heavier than metacity. And metacity ran pretty well on nearly all systems.
 
So the big test this afternoon. I decided to hit the family media center and try out a lighter WM. I installed xfce4. It’s somewhat reminiscent of the old Gnome 2 days. I threw it on, configured it, and… everything felt pretty snappy. Not bad for a 2009 laptop. But the big test: Running the game Antichamber.
 
About two years ago, the PC I had doing media center duty for us died. I cloned it to my old laptop with the bad screen, which wasn’t being used. Everything worked fine, except Antichamber. It loaded, but the framerate was so low it was completely unusable. Even the game Fez had trouble keeping the audio in sync. I chalked it up to being an older laptop that was perhaps just under the requirements for those games. I was a little perplexed, because NVidia cards, even in laptops, usually don’t perform this poorly even when they’re over five or six years old.
 
After switching to xfce4, I tried Antichamber, and… it played perfectly. WTF!? No change in hardware at all, just a switch of the window manager. Then I thought it through a bit and did some Googling. At this point, I think I have an idea what the problem is. Back in 2009, before Ubuntu had Unity, the only applications that made use of 3D acceleration were games, video editors, and 3D modeling/rendering software. Today, the landscape is different. I knew that Unity made use of 3D acceleration and would likely have some impact on some 3D applications. (NOTE: Unity uses compiz.) But in the intervening time, web browsers have also been added to the mix. They, like the games and Unity, are using GL today.
 
A little Googling also revealed that Unity will fall back to software-assisted rendering if your 3D accelerator can’t do something that it needs. So all of that processing and data gets shifted from the graphics processor to your plain old CPU and regular RAM. I think the reason that Chrome performs so poorly under Unity is that it spins up a new Chrome process for each tab. Each of those processes hooks into the GL subsystem and the computer’s GPU, or potentially uses up a LOT more CPU and RAM for software rendering. This is likely why Chrome is such a heavy load on a system even when it’s not being used. My CPU sits at around 50% with Chrome doing nothing and 30+ tabs open. Swap gets hit pretty hard, especially if I hit a site that has a lot of ads. imdb.com used to be horrible in Chrome. (There is also Flash, but we all know about that.)
Firefox fares better, but it only has one process, so not as many hooks into GL as Chrome can potentially have. Right now, I’m typing this from within Chrome and have a ton of tabs open. No lag. No loud fan. No hot laptop. There is no single point to blame here. It’s not that “Unity sucks” or “Chrome sucks” or “Firefox sucks” or “Ubuntu/Linux sucks”. It’s that there is a resource on most systems that used to be used infrequently and is now drawn upon very heavily: the GPU. I suspect that this is also true in other OSes. If you’re seeing crappy browser performance, maybe it’s time to turn off the eye candy, or look into disabling your browser’s reliance on GL. It’s also likely that if you run a desktop computer, you don’t notice this as much, since you can have a really good GPU in the system and more RAM and CPU than you can pack into the typical laptop.
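
If you want a quick way to test this theory on your own system, here’s a sketch (flag and preference names can change between browser versions):

    # launch Chrome with GPU compositing disabled and compare responsiveness
    google-chrome --disable-gpu &

    # Firefox has no equivalent flag; toggle hardware acceleration instead,
    # via its preferences UI or about:config (layers.acceleration.disabled)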
For my laptop, the difference is like night and day. I can live without the niceties of Unity for now, and I’m more than happy to go back to my beloved keyboard-driven window manager, XMonad, in exchange for vastly better performance. After showing my wife the improved performance on the family laptop, she said she doesn’t care about the eye candy and is willing to go through a few adjustments to xfce4. It’s not that different from Gnome 2, which she was used to a few years back. Long term… I think future laptops in this household will have the best GPU and video RAM that I can justify without going “gamer” level.

Well… I installed and tried 360 Desktop on my Windows 7 system. Cute, but useless in terms of improving the UI. All it does is pan a panoramic photo on your desktop. You have the option of having desktop icons and application windows stay “pinned” to that wallpaper, or you can individually keep icons/windows in one place. But it’s slow. I kept my icons stationary (why would anyone want their desktop icons to move off screen?) but let the app windows move with the wallpaper. It takes too long to cycle all the way around, and you don’t have control over the speed of the movement. In Compiz Fusion, you determine the speed of the rotation of the desktop cube/sphere/cylinder with your mouse movement. 360 Desktop does give you more space to work with, but if you can’t navigate that space quickly and efficiently, how useful is it? So I give it three stars out of five for cuteness factor. But it’s not useful to me. Someone suggested I try Bumptop. I’ll give it a shot, but it looks gimmicky to me.

I was looking around for a Windows Media Center equivalent for Linux. I’m not actually much of a fan of media centers because each one is a new UI. That’s not a problem for me, but my wife and daughter need to be able to access the media center, so I’ve kept it on the familiar Gnome desktop with custom scripts, button panels with custom icons, and the like. But I got curious tonight and tried out XBMC (it started out as an Xbox media center, but now it’s available for Linux, Windows, Mac and Apple TV platforms too). I installed it on my laptop (Ubuntu 9.10) and it works a treat. What I really like is that you don’t have to run it as the main shell. It does take over the screen, but it’s just an app. It does weather, photos, music, videos, plays DVDs (from other regions as well, though that requires libdvdcss, which violates software patents in a few countries), etc… pretty much everything I’ve been doing with Gnome for the past five or six years. And it has a lot of eye candy. It’s missing a few important features for me (like Xine’s Alt-1 through 9 keybindings to jump 10-90% into a program). But it’s quite usable and fairly simple to use. So I will be throwing this on the media center once I migrate it from Gentoo to Ubuntu 9.10. Kudos to the XBMC people!


The Question

What is Linux? That question seems like it should have an easy answer. Most Linux fans think that the question has been answered well many times during Linux’s roughly eighteen years of existence. Most of the more common explanations begin by saying that Linux is a “Unix-like operating system”. That is technically accurate on the surface, but it does not answer what that rare but curious person is really asking when they wonder “what is Linux”? For example, I’ve had people ask what version of Windows I’m running after I tell them that I’m running Linux and nothing else. In reality they tend to be asking what my graphical environment is, or maybe even where I got my “desktop wallpaper”. (A discussion about the alternate graphical environments will come later in this series.) That question alone illustrates how most definitions fail to answer what the person is asking in the first place.

For the average consumer, a typical PC comes with Microsoft Windows on it and Windows is the PC to them. They are unaware that their PC is capable of running something other than Windows. To further complicate things, the Linux, Windows and Mac OS platforms are different enough in philosophies and approaches, that the users of each tend to only see computers through their own experience. Those differences make it harder to learn to do a lot of the same things across multiple computer platforms. When someone tries out any software they’ve never used before there are, of course, new things to learn. Finally, in many cases, the consumer doesn’t really even have a clear understanding of what Windows actually is, which will make understanding any other computer platform difficult at best. In the eyes of many users, there is no separation between Windows and Word, for example. Trying to understand the difference can be very hard since software isn’t something you can touch or that can touch you. This blog entry’s intention is to try and make those roadblocks clearer to both the technically inclined and the interested computer user.

Hidden Questions

First, we’ll start with something I’ve noticed in my, as yet, relatively short career supporting computers and users: misunderstandings and miscommunication are the main sources of interference when trying to solve a computer problem. My wife, a confirmed non-tech, has commented many times that when she hears me providing technical support on the phone, or talking shop with some friends, it sounds like a completely foreign language to her. The only parts of those conversations that make sense are the words in plain English. Sort of like a career-specific dialect! Computer support staff and computer users come from different backgrounds, each with words that are specific to their jobs and second nature to them while working. They simply don’t speak the same version of the native language. Because of this, problems communicating should come as no surprise to either side.

Because of the miscommunication, both sides of a technical support conversation will make many assumptions, which eventually lead people down the wrong path. In the case of our core question, “What is Linux?”, these misunderstandings appear when the technically inclined person hears the question but doesn’t ask the user for more details about what they are really asking. When someone who is honestly curious asks “what is Linux”, they may be asking a lot more than what the standard answers address. Here is a sample of some hidden questions that I’ve been able to coax out of people who have asked me, in one way or another, “what is Linux?” After reading some of these, it should be pretty clear that asking a lot more questions to really understand what a user wants to know is extremely important.

Q1. What is an OS (operating system)? What is Windows?
A1. This is the most basic question you’ll get from someone asking you what Linux is. At this point, they may not be ready to try Linux yet. Or if they insist that they are, it might be better to direct them to a simpler to use version of Linux like Ubuntu.

Q2. I know what Windows is, but what is Linux? Is Linux another program for Windows kind of like Office or Quicken? And if so, how is it better than just using Windows and the programs I already have and know?
A2. Linux is just like Windows in that it’s a type of software called an “operating system” (OS). While it may look different, and do things differently, and be based on older philosophies including the benefits and drawbacks that come with them, it does nearly all of the same things that Windows can do. There are some things that Windows can do that Linux can’t, but it’s also true that there are just as many things that Linux can do that Windows can’t. Every year the already small number of things that Linux can’t do continues to get even smaller.

Q3. Does Microsoft make Linux? How much does it cost? Why would I want to spend more money on Linux when I already have Windows and a bunch of programs that all came free with my computer?
A3. A lot of confusion comes from the multitude of different Linux “brands” which are officially known as distributions or “distros” for short. These differ from Windows and Mac OS in that not only are many of them completely free of charge and include a complete working system (OS), but they tend to include a huge selection of additional applications. Each distribution is put together in a way that will hopefully integrate all of the software into a seamless experience. Most distributions do this with differing levels of success, Ubuntu being the most successful. Spending the time to try and learn how to use a Linux distribution might be worthwhile to your wallet and certainly to increasing how much you know about your computer and how it works.

Q4. I’ve heard of this thing called Linux, but it’s only for computer types and can’t really do a lot of the things Windows can do. Right?
A4. Linux is for anyone who is willing to trade some time and effort to learn it in exchange for quite a few different freedoms. The most commonly touted freedom is “free of charge”. Many people argue that the time spent to learn Linux is expensive. This is, in part, because they’ve forgotten that at one point they needed to spend time learning the OS they use now, and also because they assume that learning Linux will take more time than that. In the case of easier-to-use distributions like Ubuntu, Fedora or SuSE, the learning cost is the same as with Windows or Mac OS X. You’re only learning a new set of approaches to things you’re already familiar with, not a completely new set of procedures you’ve never performed before. As for what you can do in Windows vs. Linux, that’s not the topic of this entry. Let’s just say that if you spend the time to learn Linux, you’ll find plenty of software that will most likely meet your needs.

There are actually a lot of answers to the question, “What is Linux” because behind that question there are usually multiple questions that possibly have absolutely nothing to do with Linux. To try and stay away from technical jargon, I’m going to try and answer some of these questions over the next few blog entries in a conversational style. Where I can’t avoid technical terms, I’ll try to explain them as clearly as possible in plain English, possibly with bad analogies. If nothing else it might make you laugh.