Sunday, August 30, 2009

Three ways to gain programming experience

Author: Justin James 

Justin James offers advice to a reader who can't find work because he has very little on-the-job experience. Check out these recommendations for picking up programming experience — sometimes even without having a job in the field.

—————————————————————————————

A TechRepublic member is trapped in the chicken/egg situation that far too many entry-level IT programmers find themselves in: Businesses do not like to hire people without experience, and many businesses are not willing to train. If so many companies aren't open to hiring people without experience, how does someone get experience? Unfortunately, this scenario is a major issue for many IT pros.

In my long running, back-and-forth discussion with this member, here are three ways I suggested that he kick his career into high gear.
#1: Work for free (or close to it)

While the corporate world may not always be eager to hire people with little or no experience, the non-profit world is often delighted (or at least willing) to take volunteers with little or no experience. I got my start as a programmer in high school by volunteering for a local home for developmentally disabled adults. I worked on Excel spreadsheets to manage their finances, I put together a Web site for them, and so on. Was it glamorous? Heck, no. I was working for free on my afternoons and weekends. The only perk was that the place had a stocked pantry that I could hit whenever I wanted. Aside from the emotional satisfaction of doing something positive for the community, it gave me experience that I could put on a resume, and it gave me a reference. Some non-profits will be able to pay you a small amount of money.

And there are plenty of open source projects that can use some help. Or, you could pick up an "abandoned" open source project and revive it. Open source work is a great resume builder.

If you can't find a local charity or non-profit, maybe you can work for family. Perhaps a relative has a business that needs some programmer work. Offer to do it for free, and I bet that you will find that Uncle Jimmy or Aunt Betty would be delighted to have you on the team.
#2: Work like a dog


If you want to get ahead, you're going to have to hustle; I haven't met any developers who were handed opportunities on a silver platter. I suppose a few developers got lucky, and maybe a relative hired them at a very nice salary right out of school. And a few other developers managed to get great internships that led to other good opportunities. But for the vast majority of the people currently in college or just out of college, the only way to differentiate yourself and get the experience is to work, work, work. Period.

Your boss probably won't let you spend huge amounts of time writing code instead of manning the help desk. So, if you want to turn that help desk job into experience developing software, you're going to have to make the time. Code through lunch break? Check. Work after hours? Check. Plan and develop at home? Check.

I know, I know… working for free and working more than what is expected of you doesn't sound like much fun. It could be worse, though. Ever look into what doctors do during their residency (not to mention their pay)? Think of this period as your residency. You're going to bust your buns for a few months or years to get some experience, and though your next job may not be any easier (it won't be), it will likely pay better.

There are ways to get experience and get paid; the trick is to sneak in through the "back door" of employment. For example, I had a job where I was doing network management and monitoring. It had been a few years since I had been a professional programmer, and I knew I wanted to get back to it. But between the fact that most of my experience was in Perl (which was fairly dead by that point), and the years since I had been programming, I knew I needed to freshen my experience before I would be employable. So what did I do? I started writing applications to help my department in my downtime at work; on occasion, I would even write code off the clock — all to get some experience under my belt and a reference.

Maybe you can't get a job as a developer, but you might be able to get a job as, say, a desktop technician or in the help desk. From there, you can start flexing your coding muscles and either build up a good resume and leave or get promoted. In fact, working at a help desk or as a desktop technician (or a "computer operator") is one of the oldest ways of getting your feet wet in this industry.
#3: Work at home

Maybe you can't find anyone willing to let you code for free. Perhaps there is no way that you are able to fit programming into your nonprogramming job (such as an hourly worker who can't get authorization for overtime). That's where your home comes into play. If all else fails (or to supplement your existing efforts), do some work at home. Find an application you really like and write your own version of it. Or, think of an application you always wish you had and write it.

When you work at home, try to emulate software development in professional environments as much as possible. Write a project plan, create unit tests, set up a nightly build, and so on. I guarantee that you will become a better programmer for it, and you'll have something to show prospective employers, which is actually quite important.

I have never worked somewhere where I could take my labor and show it to potential employers. Not only would it violate my employment contract, but it would often violate my employer's contracts with their customers. But when I do something at home on my own time and on my own dime, it becomes something I can show to potential employers. For example, I wanted to get a job doing more Web development and less Webmaster work, so I put together a Flash presentation that had highlights from my resume, quotes from my references, and so on. I even packaged it in a nice CD case and gave it an Autorun launcher, so potential employers could just pop the CD in. The CD got me a job in the middle of the dot-com bust in an instant. It was a real game changer.

As someone who has been on both sides of the interview table many times, I can tell you that it's impressive to have a candidate come in and talk about work they're doing on their own. Does it get the same level of consideration as paid, professional work? Sometimes. From what I can tell, doing "real work" on a credible open source application is just as good as a paid job; the only time it can hurt you is if the application is awful, and you show it to the interviewer anyway. So, yes, this is another "work without pay" suggestion, but it's often the only differentiator between you and the two dozen other entry-level developers who apply for the job.


Wednesday, August 26, 2009

10 habits of superstitious users

Author: Jaime Henriquez

For some users, the computer is unfathomable - leading them to make bizarre assumptions about technology and the effect of their own actions. Here are a few irrational beliefs such users develop.


Superstition: A belief, not based on human reason or scientific knowledge, that future events may be influenced by one's behavior in some magical or mystical way (Wiktionary).

In 1947, the psychologist B. F. Skinner reported a series of experiments in which pigeons could push a lever that would randomly give them either a food pellet or nothing. Think of it as a sort of one-armed bandit that the pigeons played for free. Skinner found, after a while, that some of the pigeons started acting oddly before pushing the lever. One moved in counterclockwise circles, one repeatedly stuck its head into the upper corner of the cage, and two others would swing their heads back and forth in a sort of pendulum motion. He suggested that the birds had developed "superstitious behaviors" by associating getting the food with something they happened to be doing when they actually got it — and they had wrongly concluded that if they did it again, they were more likely to get the pellet. Essentially, they were doing a sort of food-pellet dance to better their odds.

Although computer users are undoubtedly smarter than pigeons, users who really don't understand how a computer works may also wrongly connect some action of theirs with success (and repeat it), or associate it with failure (and avoid it like the plague). Here are some of the user superstitions I've encountered.


1: Refusing to reboot

Some users seem to regard a computer that's up and running and doing what they want as a sort of miracle, achieved against all odds, and unlikely ever to be repeated … certainly not by them. Reboot? Not on your life! If it ain't broke, don't fix it. Why take the risk?

2: Excessive fear of upgrades

Exercising caution when it comes to upgrades is a good idea. But some users go well beyond that, into the realm of the irrational. It may take only one or two bad experiences. In particular, if an upgrade causes problems that don't seem to be related to the upgrade itself, this can lead to a superstitious fear of change because it confirms their belief that they have no idea how the computer really works — and therefore no chance of correctly judging whether an upgrade is worth it or just asking for trouble. Better to stay away from any change at all, right?

3: Kneejerk repetition of commands

These are the people who, when their print command fails to produce output in a timely manner, start pounding the keys. They treat the computer like a recalcitrant child who just isn't paying attention or doesn't believe they really mean it. Users may get the impression that this superstition is justified because the computer sometimes does seem to be ignoring them — when it fails to execute a double-click because they twitched the mouse or when they have inadvertently dropped out of input mode. Or it may come from the tendency of knowledgeable helpers to make inconspicuous adjustments and then say, "Try it again."

4: Insisting on using particular hardware when other equally good hardware is available

Whenever you go to the trouble of providing your users with multiple options — computers, printers, servers, etc. — they will develop favorite choices. Some users will conclude, however, based on their previous experience (or sometimes just based on rumor), that only this particular piece of hardware will do. The beauty of interchangeability is wasted on them.

5: "I broke it!"

Many users blame the computer for any problems (or they blame the IT department). But some users assume when something goes wrong, they did it.

They don't think about all the tiny voltages and magnetic charges, timed to the nanosecond, all of which have to occur in the proper sequence in order for success. In fact, there are plenty of chances for things to go wrong without them, and things often do. But then, all those possible sources of error are hidden from the user — invisible by their nature and tucked away inside the box. The only place complexity isn't hidden is in the interface, and the most obviously fallible part of that is … them. It may take only a few cases of it actually being the user's fault to get this superstition rolling.

6: Magical thinking

These are the users who have memorized the formula for getting the computer to do what they want but have no clue how it works. As in magic, as long as you get the incantation exactly right, the result "just happens." The unforgiving nature of computer commands tends to feed this belief. The user whose long-running struggle to connect to the Web is resolved by, "Oh, here's your problem, you left out the colon…" is a prime candidate to develop this superstition.

Once on the path to magical thinking, some users give up trying to understand the computer as a tool to work with and instead treat it like some powerful but incomprehensible entity that must be negotiated with. For them, the computer works in mysterious ways, and superstitions begin to have more to do with what the computer is than how they use it.

7: Attributing personality to the machine

This is the user who claims in all honesty, "The computer hates me," and will give you a long list of experiences supporting their conclusion, or the one who refuses to use a computer or printer that had a problem earlier but which you have now fixed. No, no, it failed before and the user is not going to forget it.

8: Believing the computer sees all and knows all

Things this user says betray the belief that behind all the hardware and software there is a single Giant Brain that sees all and knows all — or should. They're surprised when things they've done don't seem to "stick," as in "I changed my email address; why does it keep using my old one?" or "Did you change it everywhere?"  "… Huh?" or "My new car always knows where I am, how come I have to tell Google Maps where I live?" or the ever-popular "You mean when you open up my document you see something different?"

9: Assuming the computer is always right

This user fails to recognize that the modern computer is more like television than the Delphic oracle. Even the most credulous people recognize that not everything they see on television is true, but some users think the computer is different. "There's something wrong with the company server." "What makes you think that?" "Because when I try to log in, it says server not found." … "Why did you click on that pop-up?" "It said I had a virus and that I had to."

10: "It's POSSESSED!!"

Users who are ordinarily rational can still succumb to superstition when the computer or its peripherals seem to stop paying any attention to them and start acting crazy — like when the screen suddenly fills with a code dump, or a keyboard problem overrides their input, or a newly revived printer spews out pages of gibberish. It serves to validate the secretly held suspicion that computers have a mind of their own — and that mind isn't particularly stable.

Magic?

We're used to seeing superstitions among gamblers and athletes, who frequently engage in high-stakes performances with largely unpredictable outcomes. That superstitions also show up when people use computers — algorithmic devices designed to be completely predictable — is either evidence of human irrationality or an interesting borderline case of Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic."

10 fundamental differences between Linux and Windows

By Jack Wallen

I have been around the Linux community for more than 10 years now. From the very beginning, I have known that there are basic differences between Linux and Windows that will always set them apart. This is not, in the least, to say one is better than the other. It's just to say that they are fundamentally different. Many people, looking from the view of one operating system or the other, don't quite get the differences between these two powerhouses. So I decided it might serve the public well to list 10 of the primary differences between Linux and Windows.

Full access vs. no access
Having access to the source code is probably the single most significant difference between Linux and Windows. The fact that Linux is released under the GNU General Public License (GPL) ensures that users (of all sorts) can access (and alter) the code to the very kernel that serves as the foundation of the Linux operating system. You want to peer at the Windows code? Good luck. Unless you are a member of a very select (and elite, to many) group, you will never lay eyes on the code making up the Windows operating system.
You can look at this from both sides of the fence. Some say giving the public access to the code opens the operating system (and the software that runs on top of it) to malicious developers who will take advantage of any weakness they find. Others say that having full access to the code helps bring about faster improvements and bug fixes to keep those malicious developers from being able to bring the system down. I have, on occasion, dipped into the code of one Linux application or another, and when all was said and done, was happy with the results. Could I have done that with a closed-source Windows application? No.

Licensing freedom vs. licensing restrictions
Along with access comes the difference between the licenses. I'm sure that every IT professional could go on and on about licensing of PC software. But let's just look at the key aspect of the licenses (without getting into legalese). With a Linux GPL-licensed operating system, you are free to modify that software, use it, and even republish or sell it (so long as you make the code available). Also, with the GPL, you can download a single copy of a Linux distribution (or application) and install it on as many machines as you like. With the Microsoft license, you can do none of the above. You are bound to the number of licenses you purchase, so if you purchase 10 licenses, you can legally install that operating system (or application) on only 10 machines.

Online peer support vs. paid help-desk support
This is one issue where most companies turn their backs on Linux. But it's really not necessary. With Linux, you have the support of a huge community via forums, online search, and plenty of dedicated Web sites. And of course, if you feel the need, you can purchase support contracts from some of the bigger Linux companies (Red Hat and Novell for instance).
However, when you rely on the peer support inherent in Linux, response time is out of your hands. You could have an issue with something, send out e-mail to a mailing list or post on a forum, and within 10 minutes be flooded with suggestions. Or those suggestions could take hours or days to come in. It seems all up to chance sometimes. Still, generally speaking, most problems with Linux have been encountered and documented. So chances are good you'll find your solution fairly quickly.
On the other side of the coin is support for Windows. Yes, you can go the same route with Microsoft and depend upon your peers for solutions. There are just as many help sites/lists/forums for Windows as there are for Linux. And you can purchase support from Microsoft itself. Most corporate higher-ups easily fall victim to the safety net that having a support contract brings. But most higher-ups haven't had to depend upon said support contract. Of the various people I know who have used either a Linux paid support contract or a Microsoft paid support contract, I can't say one was more pleased than the other. This of course raises the question "Why do so many say that Microsoft support is superior to Linux paid support?"

Full vs. partial hardware support
One issue that is slowly becoming nonexistent is hardware support. Years ago, if you wanted to install Linux on a machine you had to make sure you hand-picked each piece of hardware or your installation would not work 100 percent. I can remember, back in 1997-ish, trying to figure out why I couldn't get Caldera Linux or Red Hat Linux to see my modem. After much looking around, I found I was the proud owner of a Winmodem. So I had to go out and purchase a US Robotics external modem because that was the one modem I knew would work. This is not so much the case now. You can grab a PC (or laptop) and most likely get one or more Linux distributions to install and work nearly 100 percent. But there are still some exceptions. For instance, hibernate/suspend remains a problem with many laptops, although it has come a long way.
With Windows, you know that most every piece of hardware will work with the operating system. Of course, there are times (and I have experienced this over and over) when you will wind up spending much of the day searching for the correct drivers for that piece of hardware you no longer have the install disk for. But you can go out and buy that 10-cent Ethernet card and know it'll work on your machine (so long as you have, or can find, the drivers). You also can rest assured that when you purchase that insanely powerful graphics card, you will probably be able to take full advantage of its power.

Command line vs. no command line
No matter how far the Linux operating system has come and how amazing the desktop environment becomes, the command line will always be an invaluable tool for administration purposes. Nothing will ever replace my favorite text-based editor, ssh, and any given command-line tool. I can't imagine administering a Linux machine without the command line. But for the end user -- not so much. You could use a Linux machine for years and never touch the command line. Same with Windows. You can still use the command line with Windows, but not nearly to the extent as with Linux. And Microsoft tends to obfuscate the command prompt from users. Without going to Run and entering cmd (or command, or whichever it is these days), the user won't even know the command-line tool exists. And if a user does get the Windows command line up and running, how useful is it really?

Centralized vs. noncentralized application installation
The heading for this point might have thrown you for a loop. But let's think about this for a second. With Linux you have (with nearly every distribution) a centralized location where you can search for, add, or remove software. I'm talking about package management systems, such as Synaptic. With Synaptic, you can open up one tool, search for an application (or group of applications), and install that application without having to do any Web searching (or purchasing).
Windows has nothing like this. With Windows, you must know where to find the software you want, download it (or put the CD into your machine), and run setup.exe or install.exe with a simple double-click. For many years, it was thought that installing applications on Windows was far easier than on Linux. And for many years, that thought was right on target. Not so much now. Installation under Linux is simple, painless, and centralized.
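To make the comparison concrete, here is a minimal sketch of what centralized installation looks like from the command line on an apt-based distribution such as Debian or Ubuntu (Synaptic is a graphical front end to the same package system, and the package name below is only an example):
# search the distribution's central repositories
apt-cache search audio editor
# download and install a package, dependencies included
sudo apt-get install audacity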

Flexibility vs. rigidity
I always compare Linux (especially the desktop) and Windows to a room where the floor and ceiling are either movable or not. With Linux, you have a room where the floor and ceiling can be raised or lowered, at will, as high or low as you want to make them. With Windows, that floor and ceiling are immovable. You can't go further than Microsoft has deemed it necessary to go.
Take, for instance, the desktop. Unless you are willing to pay for and install a third-party application that can alter the desktop appearance, with Windows you are stuck with what Microsoft has declared is the ideal desktop for you. With Linux, you can pretty much make your desktop look and feel exactly how you want/need. You can have as much or as little on your desktop as you want. From simple flat Fluxbox to a full-blown 3D Compiz experience, the Linux desktop is as flexible an environment as there is on a computer.

Fanboys vs. corporate types
I wanted to add this because even though Linux has reached well beyond its school-project roots, Linux users tend to be soapbox-dwelling fanatics who are quick to spout off about why you should be choosing Linux over Windows. I am guilty of this on a daily basis (I try hard to recruit new fanboys/girls), and it's a badge I wear proudly. Of course, this is seen as less than professional by some. After all, why would something worthy of a corporate environment have or need cheerleaders? Shouldn't the software sell itself? Because of the open source nature of Linux, it has to make do without the help of the marketing budgets and deep pockets of Microsoft. With that comes the need for fans to help spread the word. And word of mouth is the best friend of Linux.
Some see the fanaticism as the same college-level hoorah that keeps Linux in the basements for LUG meetings and science projects. But I beg to differ. Another company, thanks to the phenomenon of a simple music player and phone, has fallen into the same fanboy fanaticism, and yet that company's image has not been besmirched because of that fanaticism. Windows does not have these same fans. Instead, Windows has a league of paper-certified administrators who believe the hype when they hear the misrepresented market share numbers reassuring them they will be employable until the end of time.

Automated vs. nonautomated removable media
I remember the days of old when you had to mount your floppy to use it and unmount it to remove it. Well, those times are drawing to a close -- but not completely. One issue that plagues new Linux users is how removable media is used. The idea of having to manually "mount" a CD drive to access the contents of a CD is completely foreign to new users. There is a reason this is the way it is. Because Linux has always been a multiuser platform, it was thought that forcing a user to mount the media to use it would keep the user's files from being overwritten by another user. Think about it: On a multiuser system, if everyone had instant access to a disk that had been inserted, what would stop them from deleting or overwriting a file you had just added to the media? Things have now evolved to the point where Linux subsystems are set up so that you can use a removable device in the same way you use them in Windows. But it's not the norm. And besides, who doesn't want to manually edit the /etc/fstab file?
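For readers who have never seen it, this is roughly what the manual approach looks like; the device name and mount point are assumptions that vary from distribution to distribution:
# mount a CD manually, and unmount it before ejecting
mount /dev/cdrom /mnt/cdrom
umount /mnt/cdrom
# a sample /etc/fstab line that lets ordinary users mount the drive themselves
/dev/cdrom  /mnt/cdrom  iso9660  ro,noauto,user  0  0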

Multilayered run levels vs. a single-layered run level
I couldn't figure out how best to title this point, so I went with a description. What I'm talking about is Linux's inherent ability to stop at different run levels. With this, you can work from either the command line (run level 3) or the GUI (run level 5). This can really save your socks when X Windows is fubared and you need to figure out the problem. You can do this by booting into run level 3, logging in as root, and finding/fixing the problem.
With Windows, you're lucky to get to a command line via safe mode -- and then you may or may not have the tools you need to fix the problem. In Linux, even in run level 3, you can still get and install a tool to help you out (hello apt-get install APPLICATION via the command line). Having different run levels is helpful in another way. Say the machine in question is a Web or mail server. You want to give it all the memory you have, so you don't want the machine to boot into run level 5. However, there are times when you do want the GUI for administrative purposes (even though you can fully administer a Linux server from the command line). Because you can run the startx command from the command line at run level 3, you can still start up X Windows and have your GUI as well. With Windows, you are stuck at the Graphical run level unless you hit a serious problem.
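As a rough sketch (assuming a SysV-init distribution; run level numbering can vary), the workflow described above looks something like this:
# as root, switch the running system to run level 3 (command line, no GUI)
telinit 3
# fix the problem, install whatever tool you need, and so on
apt-get install APPLICATION
# start X Windows manually when you want the GUI back
startx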

10 Windows XP services you should never disable

Author: Scott Lowe

    Disabling certain Windows XP services can enhance performance and security - but it's essential to know which ones you can safely turn off. Scott Lowe identifies 10 critical services and explains why they should be left alone.


    There are dozens of guides out there that help you determine which services you can safely disable on your Windows XP desktop. Disabling unnecessary services can improve system performance and overall system security, as the system's attack surface is reduced. However, these lists rarely indicate which services you should not disable. All of the services that run on a Windows system serve a specific purpose, and many of them are critical to the proper and expected functioning of the desktop computing environment. In this article, you'll learn about 10 critical Windows XP services you shouldn't disable (and why).


    1: DNS Client

    This service resolves and caches DNS names, allowing the system to communicate with canonical names rather than strictly by IP address. DNS is the reason that you can, in a Web browser, type http://www.techrepublic.com rather than having to remember that http://216.239.113.101 is the site's IP address.

    If you stop this service, you will disable your computer's ability to resolve names to IP addresses, basically rendering Web browsing all but impossible.
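
    If you're curious about what this service is actually doing, you can inspect it from a command prompt; these commands are standard on Windows XP:

    REM show whether the DNS Client service is running
    sc query dnscache
    REM list the names the service has cached
    ipconfig /displaydns
    REM clear the cache without stopping the service
    ipconfig /flushdns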

    2: Network Connections

    The Network Connections service manages the network and dial-up connections for your computer, including network status notification and configuration. These days, a standalone, non-networked PC is just about as useful as an abacus — maybe less so. The Network Connections service is the element responsible for making sure that your computer can communicate with other computers and with the Internet.

    If this service is disabled, network configuration is not possible. New network connections can't be created and services that need network information will fail.

    3: Plug and Play

    The Plug and Play service (formerly known as the "Plug and Pray" service, due to its past unreliability) is kicked off whenever new hardware is added to the computer. This service detects the new hardware and attempts to automatically configure it for use with the computer. The Plug and Play service is often confused with the Universal Plug and Play (UPnP) service, which is a way that the Windows XP computer can detect new network resources (as opposed to local hardware resources). The Plug and Play service is pretty critical as, without it, your system can become unstable and will not recognize new hardware. On the other hand, UPnP is not generally necessary and can be disabled without worry. Along with UPnP, disable the SSDP Discovery Service, as it goes hand-in-hand with UPnP.

    Historical note: Way back in 2001, UPnP was implicated in some pretty serious security vulnerabilities.

    If you disable Plug and Play, your computer will be unstable and incapable of detecting hardware changes.

    4: Print Spooler

    Just about every computer out there needs to print at some point. If you want your computer to be able to print, don't plan on disabling the Print Spooler service. It manages all printing activities for your system. You may think that lack of a printer makes it safe to disable the Print Spooler service. While that's technically true, there's really no point in doing so; after all, if you ever do decide to get a printer, you'll need to remember to re-enable the service, and you might end up frustrating yourself.

    When the Print Spooler service is not running, printing on the local machine is not possible.
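
    Incidentally, restarting the Print Spooler (rather than disabling it) is the classic fix for a stuck print queue. From a command prompt:

    REM restart the Print Spooler to clear a jammed print queue
    net stop spooler
    net start spooler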

    5: Remote Procedure Call (RPC)

    Windows is a pretty complex beast, and many of its underlying processes need to communicate with one another. The service that makes this possible is the Remote Procedure Call (RPC) service. RPC allows processes to communicate with one another, both on the local machine and across the network. A ton of other critical services, including the Print Spooler and the Network Connections service, depend on the RPC service to function. So what happens if you disable it?

    Bad news. The system will not boot. Don't disable this service.

    6: Workstation

    As is the case for many services, the Workstation service is responsible for handling connections to remote network resources. Specifically, this service provides network connections and communications capability for resources found using Microsoft Network services. Years ago, I would have said that disabling this service was a good idea, but that was before the rise of the home network and everything that goes along with it, including shared printers, remote Windows Media devices, Windows Home Server, and much more. Today, you don't gain much by eliminating this service, but you lose a lot.

    Disable the Workstation service and your computer will be unable to connect to remote Microsoft Network resources.

    7: Network Location Awareness (NLA)

    As was the case with the Workstation service, disabling the Network Location Awareness service might have made sense a few years ago — at least for a standalone, non-networked computer. With today's WiFi-everywhere culture, mobility has become a primary driver. The Network Location Awareness service is responsible for collecting and storing network configuration and location information and notifying applications when this information changes. For example, as you make the move from the local coffee shop's wireless network back home to your wired docking station, NLA makes sure that applications are aware of the change. Further, some other services depend on this service's availability.

    Your computer will not be able to fully connect to and use wireless networks. Problems abound!

    8: DHCP Client

    Dynamic Host Configuration Protocol (DHCP) is a critical service that makes the task of getting computers on the network nearly effortless. Before the days of DHCP, poor network administrators had to manually assign network addresses to every computer. Over the years, DHCP has been extended to automatically assign all kinds of information to computers from a central configuration repository. DHCP allows the system to automatically obtain IP addressing information, WINS server information, routing information, and so forth; it's required to update records in dynamic DNS systems, such as Microsoft's Active Directory-integrated DNS service. This is one service that, if disabled, won't necessarily cripple your computer but will make administration much more difficult.

    Without the DHCP Client service, you'll need to manually assign static IP addresses to every Windows XP system on your network. If you use DHCP to assign other parameters, such as WINS information, you'll need to provide that information manually as well.
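
    To see what that manual work looks like, here's a sketch using XP's built-in tools; the connection name and addresses below are examples only:

    REM with the DHCP Client service running, renewing a lease takes two commands
    ipconfig /release
    ipconfig /renew
    REM without DHCP, you assign everything by hand (example values)
    netsh interface ip set address "Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1 1
    netsh interface ip set dns "Local Area Connection" static 192.168.1.2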

    9: Cryptographic Services

    Every month, Microsoft provides new fixes and updates on what has become known as "Patch Tuesday" because the updates are released on the second Tuesday of the month. Why do I bring this up? Well, one service supported by Cryptographic Services happens to be Automatic Updates. Further, Cryptographic Services provides three other management services: Catalog Database Service, which confirms the signatures of Windows files; Protected Root Service, which adds and removes Trusted Root Certification Authority certificates from this computer; and Key Service, which helps enroll this computer for certificates. Finally, Cryptographic Services also supports some elements of Task Manager.

    Disable Cryptographic Services at your peril! Automatic Updates will not function and you will have problems with Task Manager as well as other security mechanisms.

    10: Automatic Updates

    Keeping your machine current with patches is pretty darn important, and that's where Automatic Updates comes into play. When Automatic Updates is enabled, your computer stays current with new updates from Microsoft. When disabled, you have to manually get updates by visiting Microsoft's update site.
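
    If you suspect the service has been switched off, you can check it and set it back to automatic startup from a command prompt (wuauserv is the service's internal name on XP):

    REM check the Automatic Updates service and restore automatic startup
    sc query wuauserv
    sc config wuauserv start= auto
    net start wuauserv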

    Saturday, August 22, 2009

    Increase XP NTFS performance

    Takeaway: Make the NTFS file system perform faster and more efficiently.

    A lot of things go into making a workstation operate at peak performance. Much of it, such as the amount of RAM in the system, the CPU speed, or the speed of the system's hard drive, is hardware-controlled. However, there are other aspects of the operating system that can impact system performance as well.

    One of the mechanisms that can greatly affect a workstation's efficiency is the file system used by the operating system to save files. If the file system is inefficient, then no matter how fast a CPU or hard drive is, the system will waste time retrieving data. XP's default file system, NTFS, is more efficient than Windows 9x's old FAT system under normal circumstances, but you can do more to make it even faster.



    Danger!
    This article discusses making changes to your workstation's registry. Before using any technique in this article, make sure you have a complete backup of your workstation. If you make a mistake when making changes to your workstation's registry, you may cause your workstation to become unbootable, which would require a reinstallation of Windows to correct. Proceed with extreme caution.

    NTFS vs. FAT
    NTFS has been around since Microsoft introduced the first version of Windows NT. Its goal was to overcome the limitations of the venerable FAT file system, which had been around since the first version of DOS in 1981. Some of the key benefits of NTFS over FAT include:
    • Smaller cluster sizes on drives over 1 GB
    • Added security through permissions
    • Support for larger drive sizes
    • Better fault tolerance through logging and striping

    Windows XP supports both NTFS and FAT, as well as FAT's newer cousin, FAT32. Chances are that you'll never see an XP workstation running the FAT-related file systems. About the only time you'll find FAT on an XP workstation is if someone upgraded a Windows 9x workstation to Windows XP and didn't convert the file system.

    Last access time stamps
    XP automatically updates the date and time stamp with information about the last time you accessed a file. Not only does it mark the file, but it also updates the directory the file is located in as well as any directories above it. If you have a large hard drive with many subdirectories on it, this updating can slow down your system.

    To disable the updating, start the Registry Editor by selecting Run from the Start menu, typing regedit in the Open text box, and clicking OK. When the Registry Editor window opens, navigate through the left pane until you get to

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Filesystem

    In the right pane, look for the value named NtfsDisableLastAccessUpdate. If the value exists, it's probably set to 0. To change the value, double-click it. You'll then see the Edit DWORD Value screen. Enter 1 in the Value Data field and click OK.

    If the value doesn't exist, you'll need to add it. Select New | DWORD Value from the Edit menu. The new value will appear in the right pane, prompting you for a value name. Type NtfsDisableLastAccessUpdate and press [Enter]. Double-click the new value. You'll then see the Edit DWORD Value screen. Enter 1 in the Value Data field and click OK. When you're done, close Regedit. Your registry changes will be saved automatically. Reboot your workstation.
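
    If you'd rather script the change than click through Regedit, the same tweak can be made from a command prompt; this is simply an equivalent of the steps above:

    REM disable last-access time stamp updates (equivalent to the Regedit steps)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f
    REM XP's fsutil utility exposes the same setting
    fsutil behavior set disablelastaccess 1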

    The Master File Table
    The Master File Table (MFT) keeps track of files on disks. This file logs all the files that are stored on a given disk, including an entry for the MFT itself. It works like an index of everything on the hard disk in much the same way that a phone book stores phone numbers.

    NTFS keeps a section of each disk just for the MFT. This allows the MFT to grow as the contents of a disk change without becoming overly fragmented. This is because Windows NT didn't provide for the defragmentation of the MFT. Windows 2000 and Windows XP's Disk Defragmenter will defragment the MFT only if there's enough space on the hard drive to locate all of the MFT segments together in one location.

    As the MFT file grows, it can become fragmented. Fortunately, you can control the initial size of the MFT by making a change in the registry. Making the MFT file larger prevents it from fragmenting but does so at the cost of storage space. For every kilobyte that NTFS reserves for the MFT, there's one less kilobyte available for data storage.

    To limit the size of the MFT, start the Registry Editor by selecting Run from the Start menu, typing regedit in the Open text box, and clicking OK. When the Registry Editor window opens, navigate through the left pane until you get to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Filesystem.

    In the right pane, look for the value named NtfsMftZoneReservation. If the value doesn't exist, you'll need to add it. Select New | DWORD Value from the Edit menu. The new value will appear in the right pane, prompting you for a value name. Type NtfsMftZoneReservation and press [Enter]. Double-click the new value. You'll then see the Edit DWORD Value screen.

    The default value for this key is 1. This is good for a drive that will contain relatively few large files. Other options include:
    • 2—Medium file allocation
    • 3—Larger file allocation
    • 4—Maximum file allocation

    To change the value, double-click it. When the Edit DWORD Value screen appears, enter the value you want and click OK. Unfortunately, Microsoft doesn't give any clear guidelines as to what distinguishes Medium from Larger and Maximum levels of files. Suffice it to say, if you plan to store lots of files on your workstation, you may want to consider a value of 3 or 4 instead of the default value of 1.

    When you're done, close Regedit. Your registry changes will be saved automatically. Reboot your workstation. Unlike the other registry changes discussed here, which take effect (and deliver their full benefit) immediately, NtfsMftZoneReservation works best on freshly formatted hard drives. This is because XP will then create the MFT in one contiguous space. Otherwise, it will just modify the current size of the MFT, instantly fragmenting it. Therefore, it's best to use this if you plan to have one drive for data and another for applications.
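
    The equivalent command-line change looks like this; the value of 2 is only an example, chosen according to the guidelines above:

    REM reserve a medium-sized MFT zone (valid values are 1 through 4)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsMftZoneReservation /t REG_DWORD /d 2 /f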

    Short filenames
    Even though NTFS can support filenames of up to 256 characters, Windows XP stores filenames in the old 8.3 file format as well as in its native format in order to maintain backward compatibility with DOS and Windows 3.x workstations. For example, if this article is named "Increase XP NTFS performance.doc," Windows XP will save this file under that filename as well as INCREA~1.DOC.

    To change this in the registry, start the Registry Editor. When the Registry Editor window opens, navigate through the left pane until you get to

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Filesystem

    In the right pane, look for the value named NtfsDisable8dot3NameCreation. If the value exists, it's probably set to 0. To change the value, double-click it. In the Edit DWORD Value screen, enter 1 in the Value Data field and click OK.

    If the value doesn't exist, you'll need to add it. Select New | DWORD Value from the Edit menu. The new value will appear in the right pane, prompting you for a value name. Type NtfsDisable8dot3NameCreation and press [Enter]. Double-click the new value. You'll then see the Edit DWORD Value screen. Enter 1 in the Value Data field and click OK. When you're done, close Regedit. Your registry changes will be saved automatically. Reboot your workstation.
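
    Again, the same change can be scripted, which is handy if you're applying it to several workstations:

    REM stop generating 8.3 short filenames (equivalent to the Regedit steps)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisable8dot3NameCreation /t REG_DWORD /d 1 /f
    REM or use fsutil, which writes the same value
    fsutil behavior set disable8dot3 1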

    Other ways to speed drive access
    There are other ways to speed drive access that aren't NTFS-specific. These include:
    • Caching—If your XP workstation has more than 256 MB of RAM, you might be able to increase hard drive access speeds by tweaking the amount of RAM cache that XP uses. For more information about how to do this, see the article "Squeeze more performance out of Windows XP with CachemanXP 1.1."
    • Striping—If you have more than one hard drive on your system, you can use XP's striping feature to have the file system store data across multiple drives. This feature works best with SCSI drives, but it can work with multiple ATA drives as well. You'll make the change using the Logical Disk Management service in the Computer Management utility.
    • Defragmenting—Even though NTFS is more resistant to fragmentation than FAT, it can and does still fragment. You can either use XP's built-in defragmenter (a command-line example follows this list) or a third-party utility such as Diskeeper.
    • Disable Compression—Compressing files may save space on your workstation's hard drive, but compressing and decompressing files can slow down your system. With the relatively low cost of hard drives today, investing in an additional hard drive is better than compressing files on a workstation.
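
    As promised above, here's the command-line form of the defragmenter built into XP; the drive letter is just an example:

    REM analyze only, and report whether the volume needs defragmenting
    defrag C: -a
    REM defragment the volume
    defrag C: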

    Recover lost data with Disk Commander

    Takeaway: When a user mistakenly deletes a file, suffers a hard drive failure, or corrupts their OS, you need a tool to recover their data quickly and successfully. Find out why Disk Commander is one of the best data recovery tools available.


    Disk Commander from Winternals Software is one of the most comprehensive data recovery products that I've ever used. Disk Commander has helped me bail myself and my end users out of several tough situations. It has allowed me to recover data I thought was lost forever. No help desk should be without this tool.

    File recovery and a whole lot more
    Unlike many disk recovery utilities, Disk Commander isn't just a deleted-file recovery utility (although it can recover deleted files). Instead, the utility actually reconstructs damaged files. It can also rebuild a corrupt partition and recover data from a formatted hard disk, even if the disk is unbootable. While many other disk recovery utilities limit you to recovering data from a single hard disk, Disk Commander allows you to recover data from stripe sets, mirror sets, and volume sets. The only prerequisite is that the hard disk must be physically functional. If the disk has a problem, such as a dead motor, Winternals recommends shipping the drive to a data recovery lab.

    Disk Commander offers flexible installation. You can install it on a functioning hard disk, run it from a set of boot floppies, or run it from a floppy disk at a DOS prompt (Figure A).

    Figure A


    However, only running from a set of boot floppies gives you access to the product's full functionality. The hard disk installation and the DOS installation are both subject to restrictions of the underlying operating system. For example, when run from DOS, long filenames aren't supported and neither are normal RAID devices.

    Using Disk Commander
    Disk Commander is fairly simple and straightforward to use. The software wizard asks you several questions about your data recovery needs. You'll begin by selecting the drive letter associated with the damaged hard disk. You can then choose to try to salvage deleted files. If you need to perform any other type of repair on the volume, you must tell Disk Commander that no drive letter is associated with the hard disk.

    Next you must tell Disk Commander whether you want to recover regular and damaged files or files that have been deleted from the partition. If you choose to salvage deleted files, Disk Commander scans the hard disk for anything that can be recovered. It will then present you with a directory tree style view of salvageable files (Figure B). You must then select which files you want to recover along with a location for Disk Commander to copy those files to.

    Figure B


    If you tell Disk Commander that the damaged hard disk (or partition) doesn't have a drive letter, Disk Commander performs a thorough scan of the hard drive. The scan might take a while to complete, but the results are worth the wait.

    When the scan completes, Disk Commander will show you a report of the disk's partition scheme. The wizard then asks you whether the partition scheme accurately displays what should be on the disk. If you answer No, it will launch a more thorough scan, which can take an entire day to complete. At the end of the scan, Disk Commander will display a graphical representation of the partition table, including missing partitions and volumes.

    You may then select an area of the partition table and an action, and then click Next to perform the action. For example, you could select a damaged partition and click the Recover Entry button. Or you could select a damaged master boot record (MBR) and click the Rewrite MBR button. Also, the software includes a Volume Details button that allows you to gain detailed information on a partition or volume you're about to repair, which is a nice touch.

    Before executing any instruction that will modify the partition table, Disk Commander gives you the chance to copy the partition table to a floppy disk. That way, you can restore the partition table to its previous state should you make a mistake that damages it further.

    Well worth the cost
    While there are plenty of other data recovery tools out there, each technique you use unsuccessfully decreases your chances of a successful recovery through another method. Most disk recovery utilities modify the data on the hard disk, and once a utility has modified the already damaged hard disk, it becomes that much tougher for another utility to pick up the pieces. So, if your data is important to you, I recommend spending a few bucks for Disk Commander instead of risking further damage to your data with a lower-budget data recovery utility.

    Disk Commander is designed to work on a system with Windows 9x, NT, 2000, XP, or Me—although the operating system doesn't have to be functional. You can buy a copy of Disk Commander for $299 directly from Winternals Software. Volume licensing discounts are also available.

    Thursday, August 20, 2009

    Improve Windows XP's hard drive performance with disk striping

    Takeaway: Learn what disk striping is, how it can boost performance, and how to implement it.


    Some applications need a higher level of performance than a standard installation can generally provide. For example, the process of creating DVDs requires the hard disk to read information at a very high speed. Fortunately, there's a relatively easy way of ensuring that Windows XP's performance meets your needs: Boost your disk performance by implementing disk striping. In this article, I'll explain disk striping and show you how to implement it.

    What is disk striping?
    Disk striping is a technique by which data spans multiple hard drives. All hard drives involved in the stripe set are simultaneously read from and written to. For example, if a striped set of disks consists of three hard drives, then the data will be read and written about three times faster because Windows is distributing the workload among three hard drives. Creating a striped set is an inexpensive way of dramatically increasing performance.

    Before you begin
    In Windows XP, striped sets with parity aren't supported. This means that if any of the drives associated with the striped set has a problem, the entire volume (striped set) will be lost. Therefore, you'll have to back up frequently.

    Also, once you create a striped set, only Windows XP will be able to read that striped set. There's a way to enable Windows 2000 to read the set, but if you have a dual-boot system, you should generally assume that no other OS will be able to access the striped set.


    Creating a striped set
    To set up a striped set, first, install the hard drives. However, your primary hard drive cannot be included in the striped set because you can only create striped sets on empty hard drives. You need a minimum of two new hard drives to create a striped set, but you can use up to 32 hard drives in the set. Because this is a software-implemented striped set, there is no requirement as to what type of hard drive you must use. IDE and SCSI are both acceptable.

    Once you've physically installed the drives, boot Windows XP and log in as the Administrator. Next, enter the DISKMGMT.MSC command at the Run prompt to open the Disk Management console shown in Figure A.

    Figure A


    When the Disk Management console opens, locate the new disks and right-click them. Be sure to right-click the reference to the disk itself, not the space on the disk. Select the Convert To Dynamic Disk command from the context menu. When you do, a wizard will open, asking you to verify that you want to convert the disk into a dynamic disk. Click Yes. When the conversion completes, repeat the process for each disk in the striped set.

    To create the striped set, right-click in the empty space on one of your new disks and select the New Volume command from the context menu. Windows will then launch the New Volume wizard. When the wizard asks what type of volume you want to create, select Striped. Then, follow the instructions to complete the wizard. The process involves simply selecting which disks should be included in the striped set. Your striped set is now ready to use.
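
    If you prefer the command line, XP's diskpart utility can build the same thing from a script. This is only a sketch: the disk numbers and drive letter are examples, and the disks involved will be wiped. Save the following as stripe.txt and run diskpart /s stripe.txt from a command prompt:

    rem convert both new disks to dynamic disks
    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    rem create a striped volume across them and assign it a drive letter
    create volume stripe disk=1,2
    assign letter=S

    When the script finishes, format the new volume (for example, format S: /fs:ntfs /q) and it's ready to use.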

    Conclusion
    Creating a striped set is a low cost way of giving your PC a serious performance boost. Just remember to back up your striped set often, because it is more prone to failure than standard partitions due to the number of disks involved.

    Friday, August 14, 2009

    Creating a bootable USB flash drive for Windows XP

    Takeaway: A bootable flash drive can come in handy--but trying to create one might have you pulling out your hair. Windows expert Greg Shultz shares the method he followed, from configuring the BIOS to allow the USB port to act as a bootable device to creating a bootable image of Windows XP using the free PE Builder software (and a pair of Windows Server 2003 SP1 files) to formatting and copying the image onto a UFD.


    The ability to boot Windows XP from a USB Flash Drive (UFD) offers endless possibilities. For example, you might make an easy-to-use troubleshooting tool for booting and analyzing seemingly dead PCs. Or you could transport your favorite applications back and forth from home to work without having to install them on both PCs.

    However, before you can create a bootable UFD, you must clear a few hurdles. You saw that one coming, didn't you?

    The first hurdle is having a PC in which the BIOS will allow you to configure the USB port to act as a bootable device. The second hurdle is having a UFD that will work as a bootable device and that's large enough and fast enough to boot an operating system such as Windows XP. The third hurdle is finding a way to condense and install Windows XP on a UFD.

    If you have a PC that was manufactured in the last several years, chances are that its BIOS will allow you to configure the USB port to act as a bootable device. If you have a good quality UFD that's at least 512 MB and that was manufactured in the last couple of years, you've probably cleared the second hurdle. And once you've cleared those first two hurdles, the third one is a piece of cake. All you have to do is download and run some free software to create the bootable UFD.

    I'll start by showing you how to determine whether your PC's BIOS will support booting from USB and explain how to configure it to do so. Then, I'll show you how to download and use the free software to create a bootable UFD running Windows XP Professional.

    The UFD hurdle

    You probably noticed that I didn't mention how to determine if your UFD would support being configured as a bootable device, except that it must be a good quality unit of recent manufacture. Well, I've discovered that when it comes to the actual UFD, you'll just have to try it and see what happens. As long as you have a PC with a BIOS that will allow you to configure the USB port to act as a bootable device and you have configured the installation correctly, it should work. If it doesn't, you probably have a UFD that can't boot.

    I tested three UFDs on two new computers and had mixed success. First, I attempted to use a 128 MB PNY Attache but received an error message that said "Invalid or damaged Bootable partition" on both PCs. Next, I tried a 1GB Gateway UFD and it worked on both PCs. Then, I tried a 256 MB Lexar JumpDrive Pro and it worked on only one of the PCs. You can find lists of UFD brands that others have had success with on the Internet.

    Checking the BIOS

    Not every new BIOS will allow you to configure the USB port to act as a bootable device. And some that do allow it don't make it easy. On one of my example systems, it was a no-brainer. On the other, the UFD had to be connected to the USB port before it was apparent that I could configure it as a bootable device. Let's take a closer look.

    On the test system with a PhoenixBIOS version 62.04, I accessed the BIOS, went to the boot screen, and found that USB Storage Stick was one of the options. I then moved it to the top of the list, as shown in Figure A, thus making it the first device to check during the boot sequence. (This particular BIOS also allowed me to press the [F10] key during the boot sequence and select any one of the available bootable devices, so it really wasn't necessary to move it to the top.)

    Figure A

    The settings on the Boot Screen of the PhoenixBIOS made it a no-brainer to select the device.

    On the test system with an AMI BIOS version 2.59, I accessed the BIOS, went to the Boot Sequence screen, and didn't find a USB boot option, as shown in Figure B. I then went one step further and checked the Hard Disk Drives screen and still didn't find a USB boot option, as shown in Figure C.

    Figure B

    A USB boot option didn't appear on the Boot Sequence screen.

    Figure C

    The Hard Disk Drives screen only showed the SATA hard disk.

    I then plugged a UFD into the USB port, booted up the system, and accessed the BIOS. When I checked the Hard Disk Drives screen, the UFD appeared in the list and I could select it as the first drive (Figure D).

    Figure D

    With the UFD plugged into the USB port, I could configure the UFD as a bootable device.

    When I returned to the Boot Sequence screen, the UFD was indeed set as the first bootable device (Figure E).

    Figure E

    As the Boot Sequence screen indicates, the UFD was set to be the first bootable device.

    Rounding up the software

    To condense and install Windows XP on a UFD, you'll need a program called PE Builder by Bart Lagerweij. You'll also need two files from the Windows Server 2003 Service Pack 1. And of course, you need to have a Windows XP Professional CD.

    You can download PE Builder from Bart's Web site. At the time of this writing, the most current version of PE Builder was 3.1.10a.

    You can download Windows Server 2003 SP1 by following the link in the Knowledge Base article "How to obtain the latest service pack for Windows Server 2003." Be sure to get the 32-bit version!

    Keep in mind that at 329 MB, Windows Server 2003 SP1 will take some time to download. And although you need just two small files, the only way to get them is to download the entire package.


    Warning

    Do not run the Windows Server 2003 SP1 executable file! Doing so will completely corrupt Windows XP. We will use a set of special commands to extract the two files and then delete the rest of the package.


    Preparing the software

    Installing PE Builder is quick and easy. Just run the installation program and follow the onscreen instructions. To make things simpler, I installed the program in the root directory in a folder called PEBUILDER3110a.

    Once PE Builder is installed, you'll need to create a folder in C:\PEBUILDER3110a called SRSP1, as shown in Figure F. This is the folder in which PE Builder will look for the extracted Windows Server 2003 SP1 files.

    Figure F

    Once PE Builder is installed, you'll need to create a folder called SRSP1 in C:\PEBUILDER3110a.

    Now, you can begin extracting the two needed files from Windows Server 2003 SP1. When you download the Windows Server 2003 SP1, the executable file will have a long name: WindowsServer2003-KB889101-SP1-ENU.exe. To save on typing, you can rename the file to something shorter, such as WS-SP1.exe.

    To begin, open a Command Prompt window and use the CD command to change to the folder in which you downloaded the Windows Server 2003 SP1 executable file. I downloaded the file to a folder called Downloads. Now, to extract the files contained in SP1, type the command

    WS-SP1.exe -x

    You'll immediately see a dialog box prompting you to choose a folder in which to extract the files; you can type the name of the same folder that contains the executable, as shown in Figure G. Click OK to proceed with the extraction procedure. When the procedure is complete, just leave the Command Prompt window open.

    Figure G

    You can extract the files into the same folder containing the Windows Server 2003 SP1 executable file

    The extraction procedure will create a subdirectory called i386 and extract all the Windows Server 2003 SP1 files there. Use the CD command to change to the i386 folder and then copy the setupldr.bin file to the SRSP1 folder with the command:

    copy setupldr.bin c:\pebuilder3110a\srsp1

    Expand the ramdisk.sy_ file to the SRSP1 folder with the command:

    expand -r ramdisk.sy_ c:\pebuilder3110a\srsp1

    These three steps are illustrated in Figure H.

    Figure H

    You'll copy and expand the two necessary files to the SRSP1 folder.

    Now, using Windows Explorer, verify that the two necessary files are in the SRSP1 folder, as shown in Figure I. Once you do so, you can delete all the Windows Server 2003 SP1 files.

    Figure I

    You'll want to verify that the setupldr.bin and ramdisk.sys files are in the SRSP1 folder.

    Running PE Builder

    Now that you've extracted the necessary files from the Windows Server 2003 SP1 package, you're ready to use PE Builder to create a compressed version of Windows XP. To begin, place your Windows XP Professional CD into the drive and hold down the [Shift] key to prevent Autostart from launching the CD. Then, launch PE Builder.

    In the Source field on the main PE Builder screen, simply type the letter of the drive in which you put the Windows XP Professional CD, as shown in Figure J. Make sure that the Output box contains BartPE and that the None option is selected in the Media Output panel. Then, click the Build button.

    Figure J

    Fill in the Source field on the main PE Builder screen.

    As PE Builder compresses Windows XP Professional into a bootable image, you'll see a detailed progress dialog box. When the operation is complete, as shown in Figure K, click the Close button.

    Figure K

    PE Builder displays a detailed progress report.

    Preparing the UFD to boot Windows XP

    At this point, you're ready to format and copy the Windows XP Professional bootable image to the UFD with the BartPE USB Installer. To do so, open a Command Prompt window and use the CD command to change to the pebuilder3110a folder. Then, insert your UFD into a USB port and take note of the drive letter that it is assigned. On my example system, the UFD was assigned drive E.

    Now, type the command

    pe2usb -f e:

    You'll then be prompted to confirm this part of the operation, as shown in Figure L. While the operation is underway, you'll see progress indicators.

    Figure L

    You'll be prompted to confirm that you want to format your UFD.

    Once the BartPE USB Installer finishes its job, you'll be prompted to press any key to exit the program. Now you can use your UFD to boot your computer into the BartPE interface for Windows XP, as shown in Figure M.

    Figure M

    The BartPE interface provides you with a pared down version of Windows XP.

    Bart's Web site lists specialized applications that you can install on your UFD as plugins, such as Firefox and a McAfee command-line virus scanner.

    Conclusion

    Booting Windows XP from a UFD requires that your PC's BIOS support booting from USB and that you have a UFD that can be formatted as a bootable device. If you can meet these two requirements, all you need is PE Builder, a couple of files from the Windows Server 2003 Service Pack 1, and a little effort to configure a UFD to boot the BartPE interface to Windows XP.

    Tuesday, August 11, 2009

    Validate project status with periodic audits

    Author: Tom Mochal

    If you're working on a large project, it's a good idea to get an auditor involved to make sure the project is progressing as expected. Tom Mochal lists the seven steps that you can expect during the audit process.

    —————————————————————————————————————–

    Editor's note: This article was originally published October 16, 2006.

    The project manager is responsible for establishing a viable project workplan (schedule) and making sure that the project is progressing appropriately against this schedule. However, in many cases it makes sense to have an outside party double-check to make sure the project is progressing as expected. This is especially true with large, critical projects. If you have a two-month project that takes twice as long as expected, you may be upset, but it won't materially affect your organization. On the other hand, if a two-year program budgeted at 100 million dollars takes twice as long and costs twice as much to complete, it could have a devastating impact on your organization.

    It's not unusual for larger projects to be subject to periodic audits. The sponsor might call for a project audit if there's a concern about the state of the project. In some cases, periodic audits may be called for as a part of the overall charter. (In some organizations, these audits are referred to as Independent Verification and Validation, or IV&V.)

    For larger projects, the person performing the audit should be an experienced project auditor — either internal or external to the company. A project audit focuses on quality assurance — asking questions about the processes used to manage the project and build the deliverables. The audit can follow this process:

    1. Notify the parties. The auditor notifies the project manager of the upcoming audit and schedules a convenient time and place.
    2. Prepare for the audit. The auditor may request certain information upfront or ask the project manager to be prepared to discuss certain aspects of the project. This ensures that the actual meeting time is as productive as possible.
    3. Initial meeting. The auditor asks questions to ensure the project is on track. These questions are quality assurance related, verifying that good processes are being used, and then checking some of the outcomes of those processes. For instance, after verifying that the project manager is using good processes to update an accurate schedule, the auditor could review the actual schedule to validate the current project status against the schedule.
    4. Further analysis. On many projects, the investigative aspect of the audit might culminate after one meeting with the project manager. If the project is large or complex, the auditor might need to meet with other team members and clients and review further project documentation.
    5. Document the findings. The auditor documents the status of the project and the processes used on this project. The auditor should also make recommendations on areas that can be improved to provide more effective and proactive management of the project.
    6. Review draft audit report. The auditor and the project manager should meet again to go over the initial findings. The auditor describes any deficiencies and recommendations for changes. This review also provides an opportunity for the project manager to provide a rebuttal when necessary. In many cases, the initial findings of the auditor might be modified based on specific, targeted feedback from the project manager.
    7. Issue final report. The auditor issues a final report of findings and recommendations. The project manager may also issue a formal response to the audit. In the formal response, the project manager can accept points and discuss plans to implement them. The project manager may also voice his or her disagreement with certain audit points and explain his or her reason why.

    Project audits are very helpful for getting an outside opinion on the status of a larger project. In many cases, experienced auditors can spot potential problems much sooner than the project manager might recognize or communicate them.

    Use this process to estimate a project's effort hours

    Author: Tom Mochal

    Once you understand the effort that's required for a project, you can assign resources to determine how long the project will take and estimate labor and non-labor costs. Here's a process you can use to estimate the total effort required for your project.

    ——————————————————————————————————————-

    Editor's note: This article was originally published December 11, 2006.

    There are three early estimates that are needed for a project: effort, duration, and cost. Of the three, you must estimate effort hours first. Once you understand the effort that's required, you can assign resources to determine how long the project will take (duration), and then you can estimate labor and non-labor costs.

    Use the following process to estimate the total effort required for your project:

    1. Determine how accurate your estimate needs to be. Typically, the more accurate the estimate, the more detail and the more time that are needed. If you are asked for a rough order of magnitude (ROM) estimate (-25% to +75%), you might be able to complete the work quickly, at a high level, and with a minimum amount of detail. On the other hand, if you must provide an accurate estimate within 10%, you might need to spend quite a bit more time and understand the work at a low level of detail.
    2. Create the initial estimate of effort hours for each activity and for the entire project. There are many techniques you can use to estimate effort, including task decomposition (Work Breakdown Structure), expert opinion, analogy, PERT, etc.
    3. Add specialist resource hours. Make sure you include hours for part-time and specialty resources. For instance, this could include freelance people, training specialists, procurement, legal, administrative, etc.
    4. Consider rework (optional). In a perfect world, all project deliverables would be correct the first time. On real projects, that usually is not the case. Workplans that do not consider rework can easily end up underestimating the total effort involved with completing deliverables.
    5. Add project management time. This is the effort required to successfully and proactively manage a project. In general, add 15% of the effort hours for project management. For instance, if a project estimate is 12,000 hours (7 - 8 people), a full-time project manager (1,800 hours) is needed. If the project estimate is 1,000 hours, the project management time would be 150 hours.
    6. Add contingency hours. Contingency is used to reflect the uncertainty or risk associated with the estimate. If you're asked to estimate work that is not well defined, you may add 50%, 75%, or more to reflect the uncertainty. If you have done this project many times before, perhaps your contingency would be very small — perhaps 5%.
    7. Calculate the total effort by adding up all the detailed work components.
    8. Review and adjust as necessary. Sometimes when you add up all the components, the estimate seems obviously high or low. If your estimate doesn't look right, go back and adjust your estimating assumptions to better reflect reality. The estimate should also be able to withstand some initial pushback from your manager and sponsor. If your sponsor thinks the estimate is too high and you don't feel comfortable defending it, you have more work to do on the estimate. Make sure it seems reasonable to you and that you are prepared to defend it.
    9. Document all assumptions. You will never know all the details of a project for certain. Therefore, it is important to document all the assumptions you are making along with the estimate.

    This type of disciplined approach to estimating will help you to create as accurate an estimate as possible given the time and resources available to you.
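
    To make the arithmetic in steps 2 through 6 concrete, here is a minimal Python sketch. All of the task and specialist hours are hypothetical, and the rework and contingency percentages are only examples; only the 15% project management guideline and the ROM range come from the process above.

    # Effort-estimating sketch; all task hours below are hypothetical.
    task_hours = {
        "requirements": 200,
        "design": 250,
        "build": 400,
        "test": 150,
    }
    specialist_hours = 120       # part-time and specialty resources (step 3)
    rework_rate = 0.10           # optional rework allowance (step 4)
    contingency_rate = 0.25      # reflects the uncertainty of the estimate (step 6)

    base = sum(task_hours.values()) + specialist_hours
    base += base * rework_rate
    pm_hours = base * 0.15       # add 15% for project management (step 5)
    total = (base + pm_hours) * (1 + contingency_rate)

    print(f"Estimated total effort: {round(total)} hours")
    # A rough order of magnitude band around the estimate (-25% to +75%):
    print(f"ROM range: {round(total * 0.75)} to {round(total * 1.75)} hours")

    Whether contingency is applied before or after the project management allowance is a judgment call; the point of the sketch is simply that every component of the estimate is explicit, so it can be reviewed, adjusted, and defended.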

    Analyze your risks to determine which ones to manage

    Takeaway: Tips for determining project risks and their severity.

    No project is without some risk. The best way to identify risks is through a combination of checklists and brainstorming. Checklists allow you to catch the typical risks that might be inherent in projects like yours. A team brainstorming session allows you to find risks that are specific to your particular project. You might end up identifying dozens of risks through a combination of checklists and brainstorming.

    After you identify all the risks, you must figure out which ones are important enough to address (risk analysis). You want to classify each risk as high, medium, or low. You do that by looking at the likelihood that the risk will occur and the impact on the project if it does occur. For example, a risk that is highly likely to occur and has a high impact on the project would definitely be a high-level risk. On the other hand, a risk that is not likely to occur and would have a small impact on the project if it did occur would definitely be a low-level risk. All other combinations fall somewhere in the middle of this continuum. The following table shows the various combinations based on the impact to the project (high, medium, low) and the likelihood of the risk occurring (high, medium, low).

    Likelihood    Impact    Overall risk level
    High          High      High
    High          Medium    Medium
    High          Low       Low
    Medium        High      High
    Medium        Medium    Medium
    Medium        Low       Medium/Low
    Low           High      Medium/Low
    Low           Medium    Low
    Low           Low       Low

    This is one example of how you can categorize risks as high, medium, or low, based on the likelihood of occurrence and the impact. There are many other techniques as well. The high-level risks should be managed. The low-level risks can be ignored. The medium-level risks should be evaluated individually; you might need to respond to some, while others can be ignored for now.

    Analyzing each risk allows you to determine which ones are important enough to manage. This analysis saves you the time you would otherwise waste managing risks that are better off simply being documented and then ignored.
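
    If you want to make the classification repeatable, the table above is easy to encode. The following Python sketch is only an illustration: it hard-codes the nine likelihood/impact combinations from the table, and the function name is mine, not part of any standard.

    # Risk classification lookup based on the likelihood/impact table above.
    RISK_LEVELS = {
        ("high", "high"): "High",
        ("high", "medium"): "Medium",
        ("high", "low"): "Low",
        ("medium", "high"): "High",
        ("medium", "medium"): "Medium",
        ("medium", "low"): "Medium/Low",
        ("low", "high"): "Medium/Low",
        ("low", "medium"): "Low",
        ("low", "low"): "Low",
    }

    def risk_level(likelihood, impact):
        """Return the overall risk level for a likelihood/impact pair."""
        return RISK_LEVELS[(likelihood.lower(), impact.lower())]

    print(risk_level("Medium", "Low"))   # prints Medium/Low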

    Check your project for these four warning signs

    Takeaway: Sometimes a project that is on-schedule can still show warning signs that it is in jeopardy. Check for these warning signs to make sure that your seemingly perfect project isn't sidetracked down the road.


    TechRepublic columnist Tom Mochal receives dozens of e-mails each week from members with questions about project management problems. He shares his tips on a host of project management issues in this Q&A format.

    Question
    I'm managing a project that's more or less on track after three months, with six months to go until the deadline date. Perhaps I'm just being nervous, but I have a feeling the project is going to start slipping in the coming months. One reason I'm nervous is that we padded some of the work at the beginning of the schedule so that we could try to get ahead. However, we're simply on schedule now, and that makes me think we're actually behind where we wanted to be. Are there some things we can look for to know if we're in trouble?

    Answer
    Obviously your project is in trouble if you're missing deadlines and consistently exceeding the estimated effort and cost to get work done. However, your question provides a little twist. You have a project that actually appears to be on schedule, yet you're concerned about potential problems down the road.

    I believe there are specific warning signs you can look for that will give some sense of potential risks. At this point, you can't really call them issues or problems, but they can be identified as risks that have the potential to throw off your project in the future.

    Are you falling behind early in the project?
    Many project managers fall prey to the belief that if they fall behind in a project early on, they can make up the time through the remainder of the project. I always looked at this the other way around. I think there's a natural tendency to fall behind as the project progresses. First of all, the farther out you plan, the less accurate you will be. Second, there are always things that come up on your project that you don't expect, and these last-minute surprises always take time to sort out.

    I believe it is in your best interest to try to get ahead of schedule early on in a project. Don't do this with the expectation that you will actually finish ahead of schedule (although that would be nice). Do it with the expectation that you will need the extra time later on when things come up that you don't expect.

    Your dilemma is based on this approach. It sounds like you purposely put some buffer in the schedule to try to get ahead early. Since you are still on schedule, and not ahead of schedule, this may be a sign that work is taking longer than you think. This, in turn, increases the risk that your team will fall behind schedule as you begin the more aggressively estimated work.

    If you find that you're falling behind early in the project, your best remedy is to start putting corrective plans in place immediately. Don't sit back passively and hope you can make the time up later. Be proactive instead, and take action today to get back on schedule.

    Are you identifying more and more risks?
    In terms of this particular question, it appears that you don't have a multitude of issues that you are currently addressing, because, if you did, you probably would not still be on schedule. However, while you may be on schedule now, you could still face a number of identified risks in the future. Of course, all projects have some future risks, but if you see more and more risks as the project continues, your project could be in serious trouble.

    If you face this risk, the good news is that you have identified it while there is still time to address it. Even if you have a greater-than-normal number of these risks, you may still be okay if you focus on managing them successfully.

    Has client participation faded?
    Your client needs to be actively engaged during the planning process and while you are gathering the business requirements. If you cannot get the client excited to participate during this timeframe, then you're in trouble. However, many times the client begins to get disengaged when the project is a third completed and the work starts to turn more toward the project team. This, too, can be a major risk factor for project success.

    It's important to keep the client actively involved. The project manager needs to continue to communicate proactively and seek the client's input on all scope changes, issue resolution, and risk plans. The client absolutely needs to be actively involved in testing. The project manager needs to make sure that the client stays involved and enthusiastic. Otherwise, testing and implementation will be a problem down the road.

    Is morale declining?
    On the surface, if you're on schedule, there's no reason for morale to be going south. However, if you detect that morale is slipping, it could be a sign that your project is in trouble. Your next step is to determine the cause. Morale might be declining because people are being asked to work a lot of extra hours to keep the project on track. People on the team might also believe that the future schedule is unrealistic. Whatever the reasons, poor morale needs to be investigated and combated. If morale is low early in the project's timeframe, you should be concerned that it will continue to get worse as the deadline gets closer.

    One of the important responsibilities of the project manager is to continually update the work plan, identify risks, and manage expectations. As you know, a project that seems on track today could have major problems tomorrow. So, keep your eyes open for the warning signs that things are worse than they appear. If you recognize them ahead of time, they can all be classified as project risks, and can be managed and controlled in a manner that will allow your project to succeed.

    Managing project risk is easy with the right process

    Takeaway: A lot of project managers are intimidated by the notion of risk management. However, this simple five-step process will be more than adequate for most projects.

    A reactive project manager tries to resolve issues when they occur. A proactive project manager tries to resolve problems before they occur. Here's a process you can use to identify risks before they occur:

    1. Identify all risks

    Perform a complete assessment of project risk. The purpose of this step is to cast a wide net to uncover as many potential risks as possible.

    2. Analyze the risks

    In the prior step, you uncovered as many risks as possible. You'll find there are usually too many potential risks to manage successfully. In fact, many of them don't need to be managed since they have a low probability of occurring or they would have a low impact on your project. In this step, group the identified risks into high, medium, or low categories. For most projects this can be a subjective assignment based on your best estimates. On some projects, this would be based on rigorous risk models, simulations, and quantitative techniques.

    3. Respond to the high risks

    Create a response plan for each high-level risk that you identified to ensure the risk is managed successfully. This plan should include activities to manage the risk, as well as the people assigned, completion dates, and periodic dates to monitor progress. There are five major responses to a risk: leave it, monitor it, avoid it, move it to a third party, or mitigate it. The risk plan activities should be moved to your project schedule. You should also evaluate the medium-level risks to determine if the impact is severe enough that they should have a risk response plan created for them as well.
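
    One simple way to keep the pieces of a response plan together is to record each high-level risk as a structured entry. The Python sketch below is only an illustration; the field names are my own invention, and the five response types are taken from the list above.

    # Illustrative structure for a risk response plan entry; field names are hypothetical.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    RESPONSE_TYPES = ("leave", "monitor", "avoid", "move to a third party", "mitigate")

    @dataclass
    class RiskResponsePlan:
        risk: str                                  # description of the risk
        response: str                              # one of RESPONSE_TYPES
        activities: List[str] = field(default_factory=list)      # work to manage the risk
        owner: str = ""                            # person assigned to the risk
        completion_date: Optional[date] = None
        monitor_dates: List[date] = field(default_factory=list)  # periodic progress checks

    plan = RiskResponsePlan(
        risk="Key vendor may miss the integration deadline",
        response="mitigate",
        activities=["Hold a weekly checkpoint with the vendor", "Prepare a fallback interface"],
        owner="Project manager",
        completion_date=date(2009, 10, 30),
        monitor_dates=[date(2009, 9, 15), date(2009, 10, 1)],
    )

    The activities recorded in such an entry are the ones that should be moved onto the project schedule.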

    4. Create a Contingency Plan (optional)

    A Contingency Plan describes the consequences to the project if the risk plan fails and the risk actually occurs. In other words, identify what would happen to the project if the future risk turns into a current issue. This helps you ensure that the effort associated with the risk plan is proportional to the potential consequences. For instance, if the consequence of a potential risk occurring is that the project will need to be stopped, this should be a strong indication that the risk plan must be aggressive and comprehensive to ensure that the risk is managed successfully.

    5. Monitor risks

    You need to monitor the risks to ensure they are being executed successfully. You should add new risk plan activities if it looks like the risk is not being managed successfully.

    You also need to periodically evaluate risks throughout the project based on current circumstances. New risks may arise as the project is unfolding and some risks that were not identified early may become visible at a later date. You should perform this ongoing risk evaluation on a regular basis – say, monthly or at the completion of major milestones.

    10 things you should do to successfully manage your workplan

    Author: Tom Mochal

    Like much of project management, updating the workplan requires discipline and habit. See how these steps can help you stay on top of the process.


    Project managers are sometimes diligent about creating an initial workplan (schedule), but then they don't manage it during the project. Although the initial workplan will help you launch your project, issues will come up that require the workplan to be modified and updated. On most projects, you can follow this simple 10-step process to manage the workplan. If you do this weekly, you'll probably find it takes less than one hour per week, maybe only 30 minutes or so.

    Note: This list is based on the article "Proactively manage your workplan using this ten step process."

    1: Update and review the workplan with progress to-date

    This is probably a weekly process. For larger projects, the frequency might be every two weeks. A simple routine is to have the team members send you status updates on Friday with progress on the activities assigned to them during the week. The project manager then updates the workplan on Monday morning to reflect the current status.

    2: Capture and update actual hours (optional)

    If you're capturing actual effort hours and costs, update the workplan with this information.

    3: Reschedule the project

    Run your scheduling tool to see if the project will be completed within the original effort, cost, and duration estimates.

    4: Review your schedule situation

    See if you're trending past your due date. If you are, you will need to determine how you can get back on schedule.

    5: Review your budget situation

    Review how your project is performing against your budget. Because of the way financial reporting is done, you may need to manage the budget on a monthly basis.

    6: Look for other signs that the project may be in trouble

    These trouble signs could include team morale problems, quality problems, a pattern of late work, etc. Look for ways to remedy these problems once you discover them.

    7: Adjust the workplan and add more details to future work

    When the workplan was created, many of the activities that are further into the future may have been vague and placed into the workplan at a high level. On a monthly basis, this work needs to be defined in greater detail. You should always maintain a rolling three months of detailed activities on your workplan.

    8: Evaluate the critical path of the project and then keep your eye on it

    It's possible for the critical path to change during the project.

    9: Update your project forecast

    After you've updated your workplan to reflect the work remaining to complete the project, you should also estimate the cost of the remaining work. This is usually referred to as "forecasting."
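
    The forecasting arithmetic itself is simple: combine what the project has already consumed with your re-estimate of the remaining work. Here is a minimal Python sketch with made-up numbers.

    # Forecasting sketch; all numbers are hypothetical.
    actual_hours_to_date = 620    # effort already spent
    remaining_hours = 540         # re-estimated effort for the remaining work
    budget_hours = 1100           # original approved estimate

    forecast_at_completion = actual_hours_to_date + remaining_hours
    variance = forecast_at_completion - budget_hours

    print(f"Forecast at completion: {forecast_at_completion} hours")
    print(f"Variance against the original estimate: {variance:+} hours")

    The same calculation applies to cost: actual cost to date plus the estimated cost of the remaining work.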

    10: Communicate any schedule and budget risk

    If you're at risk of missing your budget or deadline, communicate this risk to the sponsor and management stakeholders. You don't have to state that you will miss your estimates for sure. However, you should start to communicate the risk while you implement actions to try to get the project back on track.

    Friday, August 7, 2009

    Seven tips on mentoring entry-level developers

    Author: Justin James

    Justin James has seen enough mentoring boondoggles to have a good idea of what does and doesn't work. He shares his ideas about how to have a successful software developer mentoring program.

    —————————————————————————————

    One of my recent TechRepublic polls covered the topic of why we hire entry-level programmers. According to the poll results, more than half of the respondents hire entry-level programmers so they can mentor them into the type of programmer they need.

    Schools alone can't prepare programmers for the real world; some sort of internship or apprenticeship is needed to complete a programmer's education. Unfortunately, few schools offer rigorous internship programs; even worse, most companies simply don't have anyone with the time to properly mentor an intern. (My latest download is an example of what a good training program for developers might entail.)

    If your organization is starting or revamping a mentorship program, read my ideas about how to have a successful software developer mentoring program. Before launching into my tips, it's important to note that not every senior developer makes a good mentor, and there's no shame in knowing your limitations. If you don't think you can fully commit to being a good mentor, or you don't think you have the necessary skills or traits to be one, then say something. It's better to admit that you aren't cut out for the task than to force yourself to do it and waste time and probably alienate a promising new employee.

    1. Make mentoring a priority

    I think the key ingredient in a successful mentoring relationship is giving the relationship priority above anything other than an emergency. It is the inability to give the relationship priority that makes true mentoring scenarios so rare. If you don't make the mentorship a priority, the new hire quickly senses that she is not important. She also quickly figures out that, when she goes to you for help, she is slowing you down from attending to your "real" priorities. The end result? She doesn't seek you for help, and she tries to do things on her own. Basically, you're no longer her mentor.

    2. Have a road map

    I've seen a number of mentoring programs sink because there is no plan. Someone is hired, and a more experienced developer is assigned to show that person the ropes. The experienced developer wasn't told about this new mentoring role until 9:05 AM on the new hire's first day. The would-be mentor takes the new hire on a tour of the building and introduces her to a few other teams — and that's the extent of "the ropes." The only thing the new employee usually learns is where to find the kitchen. You need to have a game plan with set goals (for the new hire and the mentor) and a list of topics to cover; otherwise, you'll both feel lost and give up before you even start.

    3. Be tolerant of mistakes

    Working with entry-level developers can be frustrating. They are not familiar with writing code in a real-world environment with version control, unit tests, and automated build tools. Also, they may have been taught outdated habits by a professor who last worked on actual code in 1987. Often, entry-level developers do not realize that the way they were taught to approach a problem may not be the only choice. But if your reaction to mistakes is to treat the developer like she is stupid or to blame (even if she is being stupid or is truly at fault), she probably won't respond well and won't be working with you much longer.

    4. Assign appropriate projects

    One of the worst things you can do is to throw an entry-level programmer at an extremely complex project and expect her to "sink or swim." Chances are, the programmer will sink; even worse, the programmer will add this project to her resume, and then she will run out of there as fast as she can just to get away from you. On the other hand, don't create busywork for the programmer; let her work on nagging issues in current products or internal projects that you never seem to have time to address. Once you gain confidence about what the programmer can accomplish, then you can assign a more difficult project.

    5. Give and accept feedback

    You can't successfully navigate a ship in the middle of an ocean without a compass. Likewise, the new employee will not achieve her goal of becoming a productive member of the team without knowing where she has been and where she is going. This means you need to give feedback on a regular basis, and the feedback needs to be appropriate. For instance, being sarcastic to someone who made an honest mistake is not helpful. Feedback has to be a two-way street as well; you need to listen to her to find out what her concerns and questions are, and then address them.

    6. Listen to the new employee's ideas

    Entry-level developers have far fewer ingrained prejudices and biases than experienced developers. Sometimes the saying "out of the mouths of babes" really applies. A number of times in my career, I've seen a less experienced employee point out an obvious answer that all of the more experienced employees overlooked. When you treat a new hire as a peer, it raises her confidence and makes her feel like part of the team.

    7. Treat the developer with respect

    Just because someone is entry-level, it doesn't mean that her job is to refill your coffee or pick up your lunch. She isn't rushing a sorority — she's trying to break into the development business. If you disrespect the developer, she might leave or go to HR about your behavior (and maybe still leave).

    Rewarding experiences

    A few years ago, I had the opportunity to work closely with someone who was not as experienced as me, and we both learned a lot in the process. While I may not have officially been his mentor, it was a good description of our relationship. I still keep up with him, and we frequently talk about business, programming, and so on. He laughs at a lot of my more traditional development techniques, and I get to share with him some of the painful and costly lessons I've learned along the way.

    If you're considering being a mentor, these relationships can be very rewarding. I hope the tips I presented will help you the next time an entry-level developer is assigned to your department.


    J.Ja
