Thursday, August 30, 2012

Effective design principles for web designers: Proximity

This is the final segment in our four-part series on effective web design principles, concluding with the topic of proximity. The previous segments in the effective web design principles series covered Contrast, Repetition, and Alignment. Guiding the user through your website with proper flow, effective use of white space, positioning similar content closer together, and providing clear structure are all facets of the proximity design standard.

Proximity [prok-sim-i-tee], noun: nearness in space, time, order, occurrence, or relation; closeness in a series; vicinity.

Spacing and relationships

Proximity for web design purposes means that similar or related elements should be grouped together, while those that are unrelated or dissimilar should be separated. The physical relationships and spaces between web design elements create a level of emphasis, and include other factors such as isolation, similarity, eye movement and direction, continuance, and persistence of vision.

When elements overlap or touch, the top layer typically gets the primary attention. Did you notice the "Proximity" piece of the puzzle above? Did your eye gravitate to the purple puzzle piece first and then move up and to the left to scan the remaining pieces? However, an overlapping object can suddenly be overshadowed if the other objects close by are in stark contrast; as objects move closer together, the contrasting elements stand out. Striking a balance between closeness and contrast, or even playing the two principles against each other, can achieve varied results. Take a look at Figure B below and see where your eye gravitates. Did you notice the "Repetition" puzzle piece first?

Every object or element exerts a gravitational pull in relation to the other objects nearest its center, and how close an object sits to another also affects its visual weight. Just as a planet's gravity affects its moon's orbit, the positions of elements relative to each other on a web page can change the weight given to each of them and to the other elements on the page.

White space

An additional proximity factor is the effective use of white space on the web page: spacing elements with well-chosen margins, gutters between columns, and padding creates a balance between the content and the space between elements. In general, too much white space leaves the web page looking irregular and void of content, with no direction. Of course, if your web design requires a level of artistic license to accentuate open space with an undeniable void of content for dramatic effect, then go for it.
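To make the idea concrete, here is a minimal CSS sketch of how margins, gutters between columns, and padding carve out that white space. The class names and values are hypothetical, not taken from any of the figures in this article:

    /* Hypothetical two-column layout, used only to illustrate spacing. */
    .page {
      max-width: 960px;
      margin: 0 auto;       /* outer white space: center the content area  */
      padding: 24px;        /* inner white space around all of the content */
    }
    .column {
      float: left;
      width: 48%;
      margin-right: 4%;     /* the gutter between the two columns          */
    }
    .column:last-child {
      margin-right: 0;      /* no trailing gutter after the last column    */
    }
    .column p {
      margin-bottom: 1em;   /* vertical breathing room between paragraphs  */
    }

Shrink those values and the layout starts to feel cramped; inflate them and the page drifts toward the empty, directionless look described above.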

Proximity and typography

Above, I talked about the negative effects of too much white space, but too little white space can make the web page appear cluttered and cramped. As a rule of thumb, balanced white space is generally more attractive and pleasing to the eye. Below are two examples that demonstrate both ends of the white space gamut: Figure C has too much white space, and Figure D has too little.

Figure C

Figure D

An intuitive flow of content comes from balancing white space with the typographic elements that make up the textual content. Take the first example of the IT Course List, shown below in Figure E, and try to step through the list of courses available.

Figure E

Now, take a look at the same list, which has each logical grouping defined with appropriate white space, headings, and unordered lists, as displayed in Figure F below.

Figure F

The second list is much easier to take in and gives the reader clear sections and sub-sections of the course list; each course sits in close proximity to its related sub-section heading.
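For readers who want to see the underlying pattern, here is a rough sketch of the markup behind a grouping like Figure F. Since the figure itself is an image, the section names and course titles below are hypothetical placeholders:

    <h2>IT Course List</h2>

    <h3>Networking</h3>
    <ul>
      <li>Introduction to TCP/IP</li>
      <li>Routing and Switching Basics</li>
    </ul>

    <h3>Web Development</h3>
    <ul>
      <li>HTML and CSS Fundamentals</li>
      <li>Introduction to JavaScript</li>
    </ul>

    <h3>Security</h3>
    <ul>
      <li>Network Security Essentials</li>
    </ul>

The white space separating one group from the next can come from the headings' default margins or from a simple rule such as h3 { margin-top: 2em; }, so that each course visually belongs to the heading closest to it.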

Employing the proximity principle of effective web design helps you organize content elements on the web page through space, order, size, relationships, color, and the effective use of white space and sectioning throughout your typographic elements.

The future of IT will be reduced to three kinds of jobs

There's a general anxiety that has settled over much of the IT profession in recent years. It's a stark contrast to the situation just over a decade ago. At the end of the 1990s, IT pros were the belles of the ball. The IT labor shortage regularly made headlines, and IT pros were able to command excellent salaries by getting training and certification, job hopping, and, in many cases, being the only qualified candidate for a key position in a thinly stretched job market. At the time, IT was held up as one of the professions of the future, where more and more of the best jobs would be migrating as computer-automated processes replaced manual ones.

Unfortunately, that idea of the future has disappeared, or at least morphed into something much different.

The glory days when IT pros could write their own ticket evaporated when the Y2K crisis passed and then the dot-com implosion happened. Suddenly, companies didn't need as many coders on staff. Suddenly, there were a lot fewer startups buying servers and hiring sysadmins to run them.

Around the same time, there was also a general backlash against IT in corporate America. Many companies had been throwing nearly endless amounts of money at IT projects in the belief that tech was the answer to all problems. Because IT had driven major productivity improvements during the 1990s, a lot of companies over-invested in IT and tried to take it too far too fast. As a result, there were a lot of very large, very expensive IT projects that crashed and burned.

When the recession of 2001 hit, these massively overbuilt IT departments were huge targets for budget cuts and many of them got hit hard. As the recession dragged out in 2002 and 2003, IT pros mostly told each other that they needed to ride out the storm and that things would bounce back. But, a strange thing happened. IT budgets remained flat year after year. The rebound never happened.

Fast forward to 2011. Most IT departments are a shadow of their former selves. They've drastically reduced the number of tech support professionals, or outsourced the help desk entirely. They have a lot fewer administrators running around to manage the network and the servers, or they've outsourced much of the data center altogether. These were the jobs that were at the center of the IT pro boom in 1999. Today, they haven't totally disappeared, but there certainly isn't a shortage of available workers or a high demand for those skill sets.

That's because the IT environment has changed dramatically. More and more traditional software has moved to the web, or at least to internal servers that serve it through a web browser. Many technophobic Baby Boomers have left the workforce and been replaced by Millennials who not only don't need as much tech support, but often want to choose their own equipment and view the IT department as an obstacle to productivity. In other words, today's users don't need as much help as they used to. Cynical IT pros will argue this until they are blue in the face, but it's true. Most workers have now been using technology for a decade or more and have become more proficient than they were a decade ago. Plus, the software itself has gotten better. It's still horribly imperfect, but it's better.

So where does that leave today's IT professionals? Where will the IT jobs of the future be?

1. Consultants

Let's face it: all but the largest enterprises would prefer not to have any IT professionals on staff, or at least as few as possible. It's nothing personal against geeks; it's just that IT pros are expensive, and when IT departments get too big and centralized they tend to become experts at saying, "No." They block more progress than they enable. As a result, we're going to see most traditional IT administration and support functions outsourced to third-party consultants. This includes a wide range, from huge multinational consultancies to the one-person consultancy that serves as the rented IT department for local SMBs. I'm also lumping in companies like IBM, HP, Amazon AWS, and Rackspace, which will rent out both data center capacity and IT professionals to help deploy, manage, and troubleshoot solutions. Many of the IT administrators and support professionals who currently work directly for corporations will transition to working for big vendors or consultancies in the future as companies switch to purchasing IT services on an as-needed basis in order to lower costs, get a higher level of expertise, and get 24/7/365 coverage.

2. Project managers

Most of the IT workers that survive and remain as employees in traditional companies will be project managers. They will not be part of a centralized IT department, but will be spread out in the various business units and departments. They will be business analysts who will help the company leaders and managers make good technology decisions. They will gather business requirements and communicate with stakeholders about the technology solutions they need, and will also be proactive in looking for new technologies that can transform the business. These project managers will also serve as the company's point of contact with technology vendors and consultants. If you look closely, you can already see a lot of current IT managers morphing in this direction.

3. Developers

By far, the area where the largest number of IT jobs is going to move is into developer, programmer, and coder jobs. While IT used to be about managing and deploying hardware and software, it's going to increasingly be about web-based applications that will be expected to work smoothly, be self-evident, and require very little training or intervention from tech support. The other piece of the pie will be mobile applications — both native apps and mobile web apps. As I wrote in my article, We're entering the decade of the developer, the current changes in IT are "shifting more of the power in the tech industry away from those who deploy and support apps to those who build them." This trend is already underway and it's only going to accelerate over the next decade.

Monday, August 27, 2012

10 things to keep in mind when improving processes

Many organizations want to harness the power of IT to improve existing processes or to solve vexing business problems. In this article, I will outline 10 items you should consider as you undertake business process improvement (BPI) projects in your own company.

1: Start at the top with executive support and good governance

Although organizations might begin a BPI initiative with the intent to correct a single issue, these initiatives can quickly take on a life of their own. Further, because change can be difficult for some, it is in the organization's best interests to ensure that BPI projects be chartered and blessed by its senior leadership. With this kind of visibility, there may still be angst, but the improvement group will have the authority it needs to make changes to the business.

2: Identify the problem(s)

When beginning a BPI project, don't just attack something that looks wrong. Carefully analyze the organization's current pain points — perhaps sales are down, customer satisfaction with support is poor, or costs to handle a certain function have skyrocketed — and then determine which problems deserve the most immediate attention.

3: Don't forget how processes interact — think global while acting local

While many processes stand alone, the chances are good that every process is part of a bigger whole. As your team begins to consider the process at hand, don't lose sight of how that process integrates with everything else. Plan for it. Make sure that you're not making something else worse in an effort to solve a different problem. In some cases, this may mean attacking multiple processes at once. As you plan for improvements, step back and, from a high level, try to determine what will happen once the proposed changes are made.

4: Look for immediate time savings

In one BPI project I led, we did a quick, high-level process mapping in our very first meeting to ensure that we had all of the process stakeholders in the room. During that meeting, we discovered that one of the process owners was spending about two days per month creating reports for the next process owner in the chain and had been doing so for years. The catch? The reports were never used. The person received them and simply discarded them. Without a second thought, we nixed that step of the process before we made any other changes. So there was an immediate, tangible benefit resulting from the time we spent simply talking about the process.

This brings up a related point: You might not have to be too formal in your efforts. Sometimes, just a bit of communication can yield huge time savings.

5: Make sure the right people are involved

This is a step that I can't stress enough: Make sure you include everyone who has a stake in the process. If you don't, your efforts will fail. Those excluded will know they've been excluded and will resist any proposed changes. Further, your efforts won't be as complete as they otherwise could be.

Again, another related point: Just because someone is involved doesn't mean that that person will cooperate. I've been involved in BPI efforts with people who were less than cooperative, and it really affects the possible outcomes. In every organization, I believe that people have a responsibility for improving the workplace, which should be included in annual performance reviews. If someone is truly combative just to resist the change, it should be reflected there. That said, if people have valid points and you simply don't agree, don't punish them! The goal here is inclusiveness, not divisiveness.

6: Formally map processes under review

This is another step I consider essential. A visual representation of a process helps everyone understand exactly how the process operates, who operates it at particular points along the line, and where that process intersects with other processes and services.

Visio has great templates for process mapping, but there are also excellent stand-alone tools designed for just this purpose, which may be better for particularly complex or involved processes.

With the process map, it becomes easier to make decisions with everyone on the same page.

7: Spend time on what-if scenarios

Don't just come up with a new process and lock it in. Consider every what-if scenario you can think of to try to break the process. Just like software testing, the goal here is to identify weaknesses so that you can shore things up. The more time you spend testing processes, the better the outcome will be.

8: Figure out your measuring stick

If you can't measure it, you can't fix it. You must identify the metrics by which you will gauge BPI project success. The "pain" metric was probably determined when you figured out which processes to attack first, but the success metric should also be targeted. For example, are you trying to reduce customer on-hold time for support to two minutes or less? Whatever your metric is, define it and measure success against it.

9: Don't assume automation

When people hear "business process improvement," they often just assume that is code for "IT is going to automate the process." That's certainly not always the case, although IT systems will often play a large role in these efforts. It's just as likely that non-IT-focused efforts will play as big a role as — or a bigger role than — IT-based systems.

I include this step so that you don't limit yourself. Think outside the system!

10: Look for common chokepoints between disparate processes

As processes intersect, look for places where many of them tend to break down. This is related to "thinking global" and requires people who can look at the organization from a very high level while, at the same time, diving deep into its guts to see what makes it tick.

10 compelling reasons to upgrade to Windows Server 2012

We've had a chance to play around a bit with the release preview of Windows Server 2012. Some have been put off by the interface-formerly-known-as-Metro, but with more emphasis on Server Core and the Minimal Server Interface, the UI is unlikely to be a "make it or break it" issue for most of those who are deciding whether to upgrade. More important are the big changes and new capabilities that make Server 2012 better able to handle your network's workloads and needs. That's what has many IT pros excited.

Here are 10 reasons to give serious consideration to upgrading to Server 2012 sooner rather than later.

1: Freedom of interface choice

A Server Core installation provides security and performance advantages, but in the past, you had to make a commitment: If you installed Server Core, you were stuck in the "dark place" with only the command line as your interface. Windows Server 2012 changes all that. Now we have choices.

The truth that Microsoft realized is that the command line is great for some tasks and the graphical interface is preferable for others. Server 2012 makes the GUI a "feature" — one that can be turned on and off at will. You do it through the Remove Roles Or Features option in Server Manager.

2: Server Manager

Speaking of Server Manager (Figure A), even many of those who dislike the new tile-based interface overall have admitted that the design's implementation in the new Server Manager is excellent.

Figure A

Server Manager

One of the nicest things about the new Server Manager is its multi-server capability, which makes it easy to deploy roles and features remotely to physical and virtual servers. It's easy to create a server group — a collection of servers that can be managed together. The remote administration improvements let you provision servers without having to make an RDP connection.

3: SMB 3.0

The Server Message Block (SMB) protocol has been significantly improved in Windows Server 2012 and Windows 8. The new version of SMB supports new file server features such as SMB transparent failover, SMB Scale Out, SMB Multichannel, SMB Direct, SMB encryption, VSS for SMB file shares, SMB directory leasing, and SMB PowerShell. That's a lot of bang for the buck. It works beautifully with Hyper-V, so VHD files and virtual machine configuration files can be hosted on SMB 3.0 shares. SQL Server databases can be stored on an SMB share as well, with improved performance. For more details about what's new in SMB 3.0, see this blog post.

4: Dynamic Access Control (DAC)

Even though some say Microsoft has shifted the focus away from security in recent years, it would be more accurate to say it has shifted the focus from separate security products to a more "baked in" approach of integrating security into every part of the operating system.

Dynamic Access Control is one such example, helping IT pros create more centralized security models for access to network resources by tagging sensitive data both manually and automatically, based on factors such as the file content or the creator. Claims-based access controls can then be applied. Read more about DAC in my "First Look" article over on Windowsecurity.com.

5: Storage Spaces

Storage is a hot — and complex — topic in the IT world these days. Despite the idea that we're all going to be storing everything in the public cloud one day, that day is a long way off (and for many organizations concerned about security and reliability, it may never happen). There are myriad solutions for storing data on your network in a way that provides better utilization of storage resources, centralized management, and better scalability, along with security and reliability. Storage area networks (SANs) and network attached storage (NAS) do that, but they can be expensive and difficult to set up.

Storage Spaces is a new feature in Server 2012 that lets you use inexpensive hard drives to create a storage pool, which can then be divided into spaces that are used like physical disks. They can include hot standby drives and use redundancy methods such as 2- or 3-way mirroring or parity. You can add new disks any time, and a space can be larger than the physical capacity of the pool. When you add new drives, the space automatically uses the extra capacity. Read more about Storage Spaces in this MSDN blog post.

6: Hyper-V Replica

Virtualization is the name of the game in the server world these days, and Hyper-V is Microsoft's answer to VMware. Although the latter had a big head start, Microsoft's virtualization platform has been working hard at catching up, and many IT pros now believe it has surpassed its rival in many key areas. With each iteration, the Windows hypervisor gets a little better, and Hyper-V in Windows Server 2012 brings a number of new features to the table. One of the most interesting is Hyper-V Replica.

This is a replication mechanism that will be a disaster recovery godsend to SMBs that may not be able to deploy complex and costly replication solutions. It logs changes to the disks in a VM and uses compression to save on bandwidth, replicating from a primary server to a replica server. You can store multiple snapshots of a VM on the replica server and then select the one you want to use. It works with both standalone hosts and clusters in any combination (standalone to standalone, cluster to cluster, standalone to cluster, or cluster to standalone). To find out more about Hyper-V Replica, see this TechNet article.

7: Improvements to VDI

Windows Terminal Services has come a long way, baby, since I first met it in Windows NT TS Edition. Renamed Remote Desktop Services, it has expanded to encompass much more than the ability to RDP into the desktop of a remote machine. Microsoft offered a centralized Virtual Desktop Infrastructure (VDI) solution in Windows Server 2008 R2, but it was still a little rough around the edges. Significant improvements have been made in Server 2012.

You no longer need a dedicated GPU in the server to use RemoteFX, which vastly improves the quality of graphics over RDP. Instead, you can use a virtualized GPU on standard server hardware. USB over RDP is much better, and the Fair Share feature can manage how CPU, memory, disk space, and bandwidth are allocated among users to thwart resource hogs. Read more about Server 2012 VDI and RDP improvements here.

8: DirectAccess without the hassle factor

DirectAccess was designed to be Microsoft's "VPN replacement," a way to create a secure connection from client to corporate network without the performance drain and with a more transparent user experience than a traditional VPN. Not only do users not have to deal with making the VPN work, but administrators get more control over the machines, with the ability to manage them even before users log in. You apply group policy using the same tools you use to manage computers physically located on the corporate network.

So why hasn't everyone been using DirectAccess with Server 2008 R2 instead of VPNs? One big obstacle was the dependency on IPv6. Plus, it couldn't be virtualized. Those obstacles are gone now. In Windows Server 2012, DirectAccess works with IPv4 without having to fool with conversion technologies, and the server running DirectAccess at the network edge can now be a Hyper-V virtual machine. The Server 2012 version of DA is also easier to configure, thanks to the new wizard.

9: ReFS

Despite the many advantages NTFS offers over early FAT file systems, it's been around since 1993, and Windows aficionados have been longing for a new file system for quite some time. Way back in 2004, we were eagerly looking forward to WinFS, but Vista disappointed us by not including it. Likewise, there was speculation early on that a new file system would be introduced with Windows 7, but it didn't happen.

Windows Server 2012 brings us our long-awaited new file system: ReFS, or the Resilient File System. It supports many of the same features as NTFS, although it leaves behind some others, perhaps most notably file compression, EFS, and disk quotas. In return, ReFS gives us data verification and auto-correction, and it's designed to work with Storage Spaces to create shrinkable/expandable logical storage pools. The new file system is all about maximum scalability, supporting up to 16 exabytes in practice. (This is the theoretical maximum in the NTFS specifications, but in the real world, NTFS is limited to 16 terabytes.) ReFS supports a theoretical limit of 256 zettabytes (more than 270 billion terabytes). That allows for a lot of scaling.

10: Simplified licensing

Anyone who has worked with server licenses might say the very term "simplified licensing" is an oxymoron. But Microsoft really has listened to customers who are confused and frustrated by the complexity involved in finding the right edition and figuring out what it's really going to cost. Windows Server 2012 is offered in only four editions: Datacenter, Standard, Essentials, and Foundation. The first two are licensed per-processor plus CAL, and the latter two (for small businesses) are licensed per-server with limits on the number of user accounts (15 for Foundation and 25 for Essentials). See the chart with licensing details for each edition on the Microsoft Web site.

Monday, August 13, 2012

10 things learned from working in IT

Sunday, August 5, 2012

10 highly valued soft skills for IT pros


Takeaway: Today's IT pro needs both technical expertise and soft skills — that's nothing new. But the scope of those in-demand soft skills just keeps growing.

Depending on which company you talk to, there are varying demands for IT technical skills. But there is one common need that most IT organizations share: soft skills. This need is nothing new. As early as three decades ago, corporate IT sought out liberal arts graduates to become business and systems analysts who could "bridge the communications gap" between programmers and end users. And if you look at the ranks of CIOs, almost half have backgrounds in liberal arts.

So what are the soft skills areas that companies want to see in IT professionals today?

1: Deal making and meeting skills

IT is a matchup of technology and people to produce the products that run the company's business. When people get involved, there are bound to be disagreements and a need to arrive at group consensus. IT'ers who can work with people, find common ground so that projects and goals can be agreed on, and swallow their own egos in the process if need be are in high demand.

2: Great communication skills

The ability to read, write, and speak clearly and effectively will never go out of style — especially in IT. IT project annals are filled with failed projects that were good ideas but poorly communicated.

3: A sixth sense about projects

There are formal project management programs that teach people PM methodology. But for most people, it takes several years of project management experience to develop an instinct for how a project is really going. Natural project managers have this sixth sense. In many cases, it is simply a talent that can't be taught. But when an IT executive discovers a natural project manager who can "read" the project in the people and the tasks, this person is worth his/her weight in gold.

4: Ergonomic sensitivity

Because its expertise is technical, it is difficult for IT to understand the point of view of a nontechnical user or the conditions in the field that end users face. A business analyst who can empathize with end users, understand the business conditions they work in, and design graphical user interfaces that are easy to learn and use is an asset in application development.

5: Great team player

It's easy for enclaves of IT professionals to remain isolated in their areas of expertise. Individuals who can transcend these technical silos and work for the good of the team or the project are valued for their ability to see the big picture. They are also viewed as candidates for promotions.

6: Political smarts

Not known as a particularly politically astute group, IT benefits when it hires individuals who can forge strong relationships with different constituencies throughout the company. This relationship building facilitates project cooperation and success.

7: Teaching, mentoring, and knowledge sharing

IT'ers able to teach new applications to users are invaluable in project rollouts. They are also an asset as teaching resources for internal IT. If they can work side by side with others and provide mentoring and support, they become even more valuable — because the "real" IT learning occurs on the job and in the trenches. Central to these processes is the willingness to share and the ability to listen and be patient with others as they learn.

8: Resolving "gray" issues

IT likes to work in binary (black and white). Unfortunately, many of the people issues that plague projects are "gray." There is no right or wrong answer, but there is a need to find a place that everyone is comfortable with. Those who can identify and articulate the problem, bring it out in the open, and get it solved are instrumental in shortening project snags and timelines.

9: Vendor management

Few IT or MA programs teach vendor management — and even fewer IT'ers want to do this. But with outsourcing and vendor management on the rise, IT pros with administrative and management skills who can work with vendors and ensure that SLAs (service level agreements) and KPIs (key performance indicators) are met bring value to performance areas where IT is accountable. They also have great promotion potential.

10: Contract negotiation

The growth of cloud-based solutions has increased the need for contract negotiation skills and legal knowledge. Individuals who bring this skills package to IT are both recognized and rewarded, often with highly paid executive positions.




ITWORLD
If you have any questions, please post them in the comments.

Please share your suggestions in the comments as well.