Wednesday, August 26, 2009

10 habits of superstitious users

Author: Jaime Henriquez

For some users, the computer is unfathomable - leading them to make bizarre assumptions about technology and the effect of their own actions. Here are a few irrational beliefs such users develop.


Superstition: A belief, not based on human reason or scientific knowledge, that future events may be influenced by one's behavior in some magical or mystical way (Wiktionary).

In 1948, the psychologist B. F. Skinner reported a series of experiments in which hungry pigeons were given food at brief, regular intervals, no matter what they happened to be doing at the time. Skinner found, after a while, that some of the pigeons started acting oddly between feedings. One moved in counterclockwise circles, one repeatedly stuck its head into the upper corner of the cage, and two others would swing their heads back and forth in a sort of pendulum motion. He suggested that the birds had developed "superstitious behaviors" by associating getting the food with something they happened to be doing when it actually arrived — and they had wrongly concluded that if they did it again, they were more likely to get the pellet. Essentially, they were doing a sort of food-pellet dance to better their odds.

Although computer users are undoubtedly smarter than pigeons, users who really don't understand how a computer works may also wrongly connect some action of theirs with success (and repeat it), or associate it with failure (and avoid it like the plague). Here are some of the user superstitions I've encountered.

Note: This article is also available as a PDF download.

1: Refusing to reboot

Some users seem to regard a computer that's up and running and doing what they want as a sort of miracle, achieved against all odds, and unlikely ever to be repeated … certainly not by them. Reboot? Not on your life! If it ain't broke, don't fix it. Why take the risk?

2: Excessive fear of upgrades

Exercising caution when it comes to upgrades is a good idea. But some users go well beyond that, into the realm of the irrational. It may take only one or two bad experiences. In particular, if an upgrade causes problems that don't seem to be related to the upgrade itself, this can lead to a superstitious fear of change because it confirms their belief that they have no idea how the computer really works — and therefore no chance of correctly judging whether an upgrade is worth it or just asking for trouble. Better to stay away from any change at all, right?

3: Kneejerk repetition of commands

These are the people who, when their print command fails to produce output in a timely manner, start pounding the keys. They treat the computer like a recalcitrant child who just isn't paying attention or doesn't believe they really mean it. Users may get the impression that this superstition is justified because the computer sometimes does seem to be ignoring them — when it fails to execute a double-click because they twitched the mouse or when they have inadvertently dropped out of input mode. Or it may come from the tendency of knowledgeable helpers to make inconspicuous adjustments and then say, "Try it again."

4: Insisting on using particular hardware when other equally good hardware is available

Whenever you go to the trouble of providing your users with multiple options — computers, printers, servers, etc. — they will develop favorite choices. Some users will conclude, however, based on their previous experience (or sometimes just based on rumor), that only this particular piece of hardware will do. The beauty of interchangeability is wasted on them.

5: "I broke it!"

Many users blame the computer for any problems (or they blame the IT department). But some users assume when something goes wrong, they did it.

They don't think about all the tiny voltages and magnetic charges, timed to the nanosecond, all of which have to occur in the proper sequence for anything to succeed. In fact, there are plenty of chances for things to go wrong without the user's involvement, and things often do. But then, all those possible sources of error are hidden from the user — invisible by their nature and tucked away inside the box. The only place complexity isn't hidden is in the interface, and the most obviously fallible part of that is … them. It may take only a few cases of it actually being the user's fault to get this superstition rolling.

6: Magical thinking

These are the users who have memorized the formula for getting the computer to do what they want but have no clue how it works. As in magic, as long as you get the incantation exactly right, the result "just happens." The unforgiving nature of computer commands tends to feed this belief. The user whose long-running struggle to connect to the Web is resolved by, "Oh, here's your problem, you left out the colon…" is a prime candidate to develop this superstition.

Once on the path to magical thinking, some users give up trying to understand the computer as a tool to work with and instead treat it like some powerful but incomprehensible entity that must be negotiated with. For them, the computer works in mysterious ways, and superstitions begin to have more to do with what the computer is than how they use it.

7: Attributing personality to the machine

This is the user who claims in all honesty, "The computer hates me," and will give you a long list of experiences supporting their conclusion, or the one who refuses to use a computer or printer that had a problem earlier but which you have now fixed. No, no, it failed before and the user is not going to forget it.

8: Believing the computer sees all and knows all

Things this user says betray the belief that behind all the hardware and software there is a single Giant Brain that sees all and knows all — or should. They're surprised when things they've done don't seem to "stick," as in "I changed my email address; why does it keep using my old one?" or "Did you change it everywhere?"  "… Huh?" or "My new car always knows where I am, how come I have to tell Google Maps where I live?" or the ever-popular "You mean when you open up my document you see something different?"

9: Assuming the computer is always right

This user fails to recognize that the modern computer is more like television than the Delphic oracle. Even the most credulous people recognize that not everything they see on television is true, but some users think the computer is different. "There's something wrong with the company server." "What makes you think that?" "Because when I try to log in, it says server not found." … "Why did you click on that pop-up?" "It said I had a virus and that I had to."

10: "It's POSSESSED!!"

Users who are ordinarily rational can still succumb to superstition when the computer or its peripherals seem to stop paying any attention to them and start acting crazy — like when the screen suddenly fills with a code dump, or a keyboard problem overrides their input, or a newly revived printer spews out pages of gibberish. It serves to validate the secretly held suspicion that computers have a mind of their own — and that mind isn't particularly stable.

Magic?

We're used to seeing superstitions among gamblers and athletes, who frequently engage in high-stakes performances with largely unpredictable outcomes. That superstitions also show up when people use computers — algorithmic devices designed to be completely predictable — is either evidence of human irrationality or an interesting borderline case of Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic."

10 fundamental differences between Linux and Windows

By Jack Wallen

I have been around the Linux community for more than 10 years now. From the very beginning, I have known that there are basic differences between Linux and Windows that will always set them apart. This is not, in the least, to say one is better than the other. It's just to say that they are fundamentally different. Many people, looking from the view of one operating system or the other, don't quite get the differences between these two powerhouses. So I decided it might serve the public well to list 10 of the primary differences between Linux and Windows.

Full access vs. no access
Having access to the source code is probably the single most significant difference between Linux and Windows. The fact that Linux is licensed under the GNU General Public License (GPL) ensures that users (of all sorts) can access (and alter) the code to the very kernel that serves as the foundation of the Linux operating system. You want to peer at the Windows code? Good luck. Unless you are a member of a very select (and elite, to many) group, you will never lay eyes on the code making up the Windows operating system.
You can look at this from both sides of the fence. Some say giving the public access to the code opens the operating system (and the software that runs on top of it) to malicious developers who will take advantage of any weakness they find. Others say that having full access to the code helps bring about faster improvements and bug fixes to keep those malicious developers from being able to bring the system down. I have, on occasion, dipped into the code of one Linux application or another, and when all was said and done, was happy with the results. Could I have done that with a closed-source Windows application? No.

Licensing freedom vs. licensing restrictions
Along with access comes the difference between the licenses. I'm sure that every IT professional could go on and on about licensing of PC software. But let's just look at the key aspect of the licenses (without getting into legalese). With a GPL-licensed Linux operating system, you are free to modify the software, use it, and even republish or sell it (so long as you make the code available). Also, with the GPL, you can download a single copy of a Linux distribution (or application) and install it on as many machines as you like. With the Microsoft license, you can do none of the above. You are bound to the number of licenses you purchase, so if you purchase 10 licenses, you can legally install that operating system (or application) on only 10 machines.

Online peer support vs. paid help-desk support
This is one issue where most companies turn their backs on Linux. But it's really not necessary. With Linux, you have the support of a huge community via forums, online search, and plenty of dedicated Web sites. And of course, if you feel the need, you can purchase support contracts from some of the bigger Linux companies (Red Hat and Novell for instance).
However, when you rely on the peer support inherent in Linux, you are at the mercy of time. You could have an issue with something, send out e-mail to a mailing list or post on a forum, and within 10 minutes be flooded with suggestions. Or those suggestions could take hours or days to come in. It seems all up to chance sometimes. Still, generally speaking, most problems with Linux have been encountered and documented. So chances are good you'll find your solution fairly quickly.
On the other side of the coin is support for Windows. Yes, you can go the same route with Microsoft and depend upon your peers for solutions. There are just as many help sites/lists/forums for Windows as there are for Linux. And you can purchase support from Microsoft itself. Most corporate higher-ups easily fall victim to the safety net that having a support contract brings. But most higher-ups haven't had to depend upon said support contract. Of the various people I know who have used either a Linux paid support contract or a Microsoft paid support contract, I can't say one was more pleased than the other. This of course raises the question "Why do so many say that Microsoft support is superior to Linux paid support?"

Full vs. partial hardware support
One issue that is slowly becoming nonexistent is hardware support. Years ago, if you wanted to install Linux on a machine you had to make sure you hand-picked each piece of hardware or your installation would not work 100 percent. I can remember, back in 1997-ish, trying to figure out why I couldn't get Caldera Linux or Red Hat Linux to see my modem. After much looking around, I found I was the proud owner of a Winmodem. So I had to go out and purchase a US Robotics external modem because that was the one modem I knew would work. This is not so much the case now. You can grab a PC (or laptop) and most likely get one or more Linux distributions to install and work nearly 100 percent. But there are still some exceptions. For instance, hibernate/suspend remains a problem with many laptops, although it has come a long way.
With Windows, you know that most every piece of hardware will work with the operating system. Of course, there are times (and I have experienced this over and over) when you will wind up spending much of the day searching for the correct drivers for that piece of hardware you no longer have the install disk for. But you can go out and buy that 10-cent Ethernet card and know it'll work on your machine (so long as you have, or can find, the drivers). You also can rest assured that when you purchase that insanely powerful graphics card, you will probably be able to take full advantage of its power.

Command line vs. no command line
No matter how far the Linux operating system has come and how amazing the desktop environment becomes, the command line will always be an invaluable tool for administration purposes. Nothing will ever replace my favorite text-based editor, ssh, and any given command-line tool. I can't imagine administering a Linux machine without the command line. But for the end user -- not so much. You could use a Linux machine for years and never touch the command line. Same with Windows. You can still use the command line with Windows, but not nearly to the extent as with Linux. And Microsoft tends to hide the command prompt from users. Without going to Run and entering cmd (or command, or whichever it is these days), the user won't even know the command-line tool exists. And if a user does get the Windows command line up and running, how useful is it really?
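As a hedged illustration of the command line's pull for administrators, here is the kind of self-contained one-liner work described above. The log content is fabricated for the example; on a real system you would point grep at /var/log/auth.log or similar.

```shell
# Illustrative only: count failed SSH logins in a fabricated auth log --
# the sort of quick audit that is awkward in any GUI.
log=$(mktemp)
cat > "$log" <<'EOF'
Aug 26 10:01:02 host sshd[100]: Failed password for root
Aug 26 10:01:05 host sshd[101]: Accepted password for jack
Aug 26 10:02:10 host sshd[102]: Failed password for admin
EOF
failures=$(grep -c 'Failed password' "$log")   # count matching lines
echo "failed logins: $failures"
rm -f "$log"
```

One pipe and one pattern, and the question is answered; that economy is what keeps admins in the shell.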

Centralized vs. noncentralized application installation
The heading for this point might have thrown you for a loop. But let's think about this for a second. With Linux you have (with nearly every distribution) a centralized location where you can search for, add, or remove software. I'm talking about package management systems, such as Synaptic. With Synaptic, you can open up one tool, search for an application (or group of applications), and install that application without having to do any Web searching (or purchasing).
Windows has nothing like this. With Windows, you must know where to find the software you want, download it (or put the CD into your machine), and run setup.exe or install.exe with a simple double-click. For many years, it was thought that installing applications on Windows was far easier than on Linux. And for many years, that thought was right on target. Not so much now. Installation under Linux is simple, painless, and centralized.
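A hedged sketch of that centralized workflow, using the Debian/Ubuntu command-line equivalents of Synaptic ("gimp" is just an example package name). The commands are held in variables rather than executed, since installing software requires root privileges and a network connection.

```shell
# Debian-style package management, one tool for search/install/remove.
# Commands shown as strings for illustration only.
pkg="gimp"
search_cmd="apt-cache search $pkg"
install_cmd="apt-get install $pkg"
remove_cmd="apt-get remove $pkg"
printf '%s\n' "$search_cmd" "$install_cmd" "$remove_cmd"
```

Synaptic is a graphical front end to exactly these operations, which is why one tool can cover the whole software catalog.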

Flexibility vs. rigidity
I always compare Linux (especially the desktop) and Windows to a room where the floor and ceiling are either movable or not. With Linux, you have a room where the floor and ceiling can be raised or lowered, at will, as high or low as you want to make them. With Windows, that floor and ceiling are immovable. You can't go further than Microsoft has deemed it necessary to go.
Take, for instance, the desktop. Unless you are willing to pay for and install a third-party application that can alter the desktop appearance, with Windows you are stuck with what Microsoft has declared is the ideal desktop for you. With Linux, you can pretty much make your desktop look and feel exactly how you want/need. You can have as much or as little on your desktop as you want. From simple flat Fluxbox to a full-blown 3D Compiz experience, the Linux desktop is as flexible an environment as there is on a computer.

Fanboys vs. corporate types
I wanted to add this because even though Linux has reached well beyond its school-project roots, Linux users tend to be soapbox-dwelling fanatics who are quick to spout off about why you should be choosing Linux over Windows. I am guilty of this on a daily basis (I try hard to recruit new fanboys/girls), and it's a badge I wear proudly. Of course, this is seen as less than professional by some. After all, why would something worthy of a corporate environment have or need cheerleaders? Shouldn't the software sell itself? Because of the open source nature of Linux, it has to make do without the help of the marketing budgets and deep pockets of Microsoft. With that comes the need for fans to help spread the word. And word of mouth is the best friend of Linux.
Some see the fanaticism as the same college-level hoorah that keeps Linux in the basements for LUG meetings and science projects. But I beg to differ. Another company, thanks to the phenomenon of a simple music player and phone, has fallen into the same fanboy fanaticism, and yet that company's image has not been besmirched because of that fanaticism. Windows does not have these same fans. Instead, Windows has a league of paper-certified administrators who believe the hype when they hear the misrepresented market share numbers reassuring them they will be employable until the end of time.

Automated vs. nonautomated removable media
I remember the days of old when you had to mount your floppy to use it and unmount it to remove it. Well, those times are drawing to a close -- but not completely. One issue that plagues new Linux users is how removable media is handled. The idea of having to manually "mount" a CD drive to access the contents of a CD is completely foreign to new users. There is a reason things are the way they are. Because Linux has always been a multiuser platform, it was thought that requiring a user to mount media before using it would keep one user's files from being overwritten by another. Think about it: On a multiuser system, if everyone had instant access to a disk that had been inserted, what would stop them from deleting or overwriting a file you had just added to the media? Things have now evolved to the point where Linux subsystems are set up so that you can use removable devices the same way you do in Windows. But it's not the norm. And besides, who doesn't want to manually edit the /etc/fstab file?
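For readers who never lived through the mount/unmount dance, a hedged sketch of what it looks like. Device and mount-point names are illustrative, and the commands are held as strings because mounting real media requires root.

```shell
# The manual mount dance, shown as strings for illustration.
device="/dev/cdrom"
mountpoint="/mnt/cdrom"
mount_cmd="mount $device $mountpoint"    # attach the media
umount_cmd="umount $mountpoint"          # detach before ejecting
# And the sort of /etc/fstab line that lets ordinary users do the mounting:
fstab_line="$device  $mountpoint  iso9660  noauto,user,ro  0  0"
printf '%s\n' "$mount_cmd" "$umount_cmd" "$fstab_line"
```

The "user" option in the fstab line is the piece that hands mounting rights to non-root users; modern desktops automate all of this behind the scenes.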

Multilayered run levels vs. a single-layered run level
I couldn't figure out how best to title this point, so I went with a description. What I'm talking about is Linux's inherent ability to stop at different run levels. With this, you can work from either the command line (run level 3) or the GUI (run level 5). This can really save your socks when X Windows is fubared and you need to figure out the problem. You can do this by booting into run level 3, logging in as root, and finding/fixing the problem.
With Windows, you're lucky to get to a command line via safe mode -- and then you may or may not have the tools you need to fix the problem. In Linux, even in run level 3, you can still get and install a tool to help you out (hello apt-get install APPLICATION via the command line). Having different run levels is helpful in another way. Say the machine in question is a Web or mail server. You want to give it all the memory you have, so you don't want the machine to boot into run level 5. However, there are times when you do want the GUI for administrative purposes (even though you can fully administer a Linux server from the command line). Because you can run the startx command from the command line at run level 3, you can still start up X Windows and have your GUI as well. With Windows, you are stuck at the Graphical run level unless you hit a serious problem.
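A hedged sketch of the mechanics: on SysV-style systems the default run level is the "initdefault" line in /etc/inittab. The file content below is fabricated for the example, but the format is the real one; the run-level-switching commands are shown as strings since telinit requires root.

```shell
# Parse the default run level from a sample inittab (format is real,
# content fabricated for this sketch).
inittab=$(mktemp)
echo "id:5:initdefault:" > "$inittab"
default_rl=$(awk -F: '/initdefault/ {print $2}' "$inittab")
echo "default run level: $default_rl"
rm -f "$inittab"
# Switching at runtime (root only, shown as strings):
to_text="telinit 3"   # multi-user, command line only
to_gui="telinit 5"    # multi-user with X
printf '%s\n' "$to_text" "$to_gui"
```

From run level 3, running startx brings up the GUI on demand, which is the server trick described above.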

10 Windows XP services you should never disable

Author: Scott Lowe

    Disabling certain Windows XP services can enhance performance and security - but it's essential to know which ones you can safely turn off. Scott Lowe identifies 10 critical services and explains why they should be left alone.


    There are dozens of guides out there that help you determine which services you can safely disable on your Windows XP desktop. Disabling unnecessary services can improve system performance and overall system security, as the system's attack surface is reduced. However, these lists rarely indicate which services you should not disable. All of the services that run on a Windows system serve a specific purpose, and many of them are critical to the proper and expected functioning of the desktop computing environment. In this article, you'll learn about 10 critical Windows XP services you shouldn't disable (and why).

    Note: This article is also available as a PDF download. For a quick how-to video on the basics, see Disable and enable Windows XP services.
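Before deciding any service's fate, it helps to check its current state. A hedged sketch of the XP commands involved ("Dnscache" is the internal name of the DNS Client service); they are held as strings here because sc.exe exists only on Windows.

```shell
# Windows XP service management via sc.exe, shown as strings for
# illustration. Note the required space after "start=".
query_cmd="sc query Dnscache"                      # check current state
disable_cmd="sc config Dnscache start= disabled"   # the move this article warns against
enable_cmd="sc config Dnscache start= auto"        # the way back
printf '%s\n' "$query_cmd" "$disable_cmd" "$enable_cmd"
```

The same pattern (services.msc for the GUI-inclined) applies to every service discussed below.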

    1: DNS Client

    This service resolves and caches DNS names, allowing the system to communicate with canonical names rather than strictly by IP address. DNS is the reason that you can, in a Web browser, type http://www.techrepublic.com rather than having to remember that http://216.239.113.101 is the site's IP address.

    If you stop this service, your computer can still resolve names, but the results are no longer cached and the machine's name is no longer registered in dynamic DNS — every lookup goes back out over the network, making Web browsing noticeably slower and chattier.
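A hedged sketch of the XP commands that expose what the DNS Client's cache is doing for you, held as strings since ipconfig.exe is Windows-only.

```shell
# Inspecting the DNS Client resolver cache on Windows XP,
# shown as strings for illustration.
show_dns="ipconfig /displaydns"   # dump the current resolver cache
flush_dns="ipconfig /flushdns"    # clear it (useful after DNS changes)
printf '%s\n' "$show_dns" "$flush_dns"
```

Running the display command on a live machine makes the service's value concrete: every cached entry is a lookup that didn't have to leave the box.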

    2: Network Connections

    The Network Connections service manages the network and dial-up connections for your computer, including network status notification and configuration. These days, a standalone, non-networked PC is just about as useful as an abacus — maybe less so. The Network Connections service is the element responsible for making sure that your computer can communicate with other computers and with the Internet.

    If this service is disabled, network configuration is not possible. New network connections can't be created and services that need network information will fail.

    3: Plug and Play

    The Plug and Play service (formerly known as the "Plug and Pray" service, due to its past unreliability) is kicked off whenever new hardware is added to the computer. This service detects the new hardware and attempts to automatically configure it for use with the computer. The Plug and Play service is often confused with the Universal Plug and Play (UPnP) service, which is a way for the Windows XP computer to detect new network resources (as opposed to local hardware resources). The Plug and Play service is pretty critical because, without it, your system can become unstable and will not recognize new hardware. On the other hand, UPnP is not generally necessary and can be disabled without worry. Along with UPnP, disable the SSDP Discovery Service, as it goes hand-in-hand with UPnP.

    Historical note: Way back in 2001, UPnP was implicated in some pretty serious security vulnerabilities.

    If you disable Plug and Play, your computer will be unstable and incapable of detecting hardware changes.

    4: Print Spooler

    Just about every computer out there needs to print at some point. If you want your computer to be able to print, don't plan on disabling the Print Spooler service. It manages all printing activities for your system. You may think that lack of a printer makes it safe to disable the Print Spooler service. While that's technically true, there's really no point in doing so; after all, if you ever do decide to get a printer, you'll need to remember to re-enable the service, and you might end up frustrating yourself.

    When the Print Spooler service is not running, printing on the local machine is not possible.

    5: Remote Procedure Call (RPC)

    Windows is a pretty complex beast, and many of its underlying processes need to communicate with one another. The service that makes this possible is the Remote Procedure Call (RPC) service. RPC allows processes to communicate with one another, both locally and across the network. A ton of other critical services, including the Print Spooler and the Network Connections service, depend on the RPC service to function.

    Bad news. The system will not boot. Don't disable this service.

    6: Workstation

    As is the case for many services, the Workstation service is responsible for handling connections to remote network resources. Specifically, this service provides network connections and communications capability for resources found using Microsoft Network services. Years ago, I would have said that disabling this service was a good idea, but that was before the rise of the home network and everything that goes along with it, including shared printers, remote Windows Media devices, Windows Home Server, and much more. Today, you don't gain much by eliminating this service, but you lose a lot.

    Disable the Workstation service and your computer will be unable to connect to remote Microsoft Network resources.

    7: Network Location Awareness (NLA)

    As was the case with the Workstation service, disabling the Network Location Awareness service might have made sense a few years ago — at least for a standalone, non-networked computer. With today's WiFi-everywhere culture, mobility has become a primary driver. The Network Location Awareness service is responsible for collecting and storing network configuration and location information and notifying applications when this information changes. For example, as you make the move from the local coffee shop's wireless network back home to your wired docking station, NLA makes sure that applications are aware of the change. Further, some other services depend on this service's availability.

    Your computer will not be able to fully connect to and use wireless networks. Problems abound!

    8: DHCP Client

    Dynamic Host Configuration Protocol (DHCP) is a critical service that makes the task of getting computers on the network nearly effortless. Before the days of DHCP, poor network administrators had to manually assign network addresses to every computer. Over the years, DHCP has been extended to automatically assign all kinds of information to computers from a central configuration repository. DHCP allows the system to automatically obtain IP addressing information, WINS server information, routing information, and so forth; it's required to update records in dynamic DNS systems, such as Microsoft's Active Directory-integrated DNS service. This is one service that, if disabled, won't necessarily cripple your computer but will make administration much more difficult.

    Without the DHCP Client service, you'll need to manually assign static IP addresses to every Windows XP system on your network. If you use DHCP to assign other parameters, such as WINS information, you'll need to provide that information manually as well.
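A hedged sketch of the XP commands for working with a DHCP lease, held as strings since ipconfig.exe is Windows-only.

```shell
# DHCP lease management on Windows XP, shown as strings for illustration.
show_cfg="ipconfig /all"        # view current addressing, DHCP and WINS info
release="ipconfig /release"     # give the current lease back
renew="ipconfig /renew"         # ask the DHCP server for a fresh lease
printf '%s\n' "$show_cfg" "$release" "$renew"
```

With the DHCP Client service stopped, the release and renew commands have nothing to act on, and all of that addressing information becomes a manual chore.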

    9: Cryptographic Services

    Every month, Microsoft provides new fixes and updates on what has become known as "Patch Tuesday," because the updates are released on the second Tuesday of the month. Why do I bring this up? Well, one service supported by Cryptographic Services happens to be Automatic Updates. Further, Cryptographic Services provides three other management services: Catalog Database Service, which confirms the signatures of Windows files; Protected Root Service, which adds and removes Trusted Root Certification Authority certificates from this computer; and Key Service, which helps enroll this computer for certificates. Finally, Cryptographic Services also supports some elements of Task Manager.

    Disable Cryptographic Services at your peril! Automatic Updates will not function and you will have problems with Task Manager as well as other security mechanisms.

    10: Automatic Updates

    Keeping your machine current with patches is pretty darn important, and that's where Automatic Updates comes into play. When Automatic Updates is enabled, your computer stays current with new updates from Microsoft. When disabled, you have to manually get updates by visiting Microsoft's update site.
