Monday, August 27, 2012

10 things to keep in mind when improving processes

Many organizations want to harness the power of IT to improve existing processes or to solve vexing business problems. In this article, I will outline 10 items you should consider as you undertake business process improvement (BPI) projects in your own company.

1: Start at the top with executive support and good governance

Although organizations might begin a BPI initiative with the intent to correct a single issue, these initiatives can quickly take on a life of their own. Further, because change can be difficult for some, it is in the organization's best interests to ensure that BPI projects be chartered and blessed by its senior leadership. With this kind of visibility, there may still be angst, but the improvement group will have the authority it needs to make changes to the business.

2: Identify the problem(s)

When beginning a BPI project, don't just attack something that looks wrong. Carefully analyze the organization's current pain points — perhaps sales are down, customer satisfaction with support is poor, or costs to handle a certain function have skyrocketed — and then determine which problems deserve the most immediate attention.

3: Don't forget how processes interact — think global while acting local

While some processes appear to stand alone, the chances are good that every process is part of a bigger whole. As your team begins to consider the process at hand, don't lose sight of how that process integrates with everything else. Plan for it. Make sure that you're not making something else worse in an effort to solve a different problem. In some cases, this may mean attacking multiple processes at once. As you plan for improvements, step back and, from a high level, try to determine what will happen once the proposed changes are made.

4: Look for immediate time savings

In one BPI project I led, in our very first meeting, we did a quick, high-level process mapping to ensure that we had all of the process stakeholders in the room. During that meeting, we discovered that one of the process owners was spending about two days per month creating reports for the next process owner in the chain and had been doing so for years. The catch? The reports were never used. The person received them and simply discarded them. Without a second thought, we nixed that step of the process before we made any other changes. So there was an immediate, tangible benefit resulting from the time we spent simply talking about the process.

This brings up a related point: You might not have to be too formal in your efforts. Sometimes, just a bit of communication can yield huge time savings.

5: Make sure the right people are involved

This is a step that I can't stress enough: Make sure you include everyone who has a stake in the process. If you don't, your efforts will fail. Those excluded will know they've been excluded and will resist any proposed changes. Further, your efforts won't be as complete as they otherwise could be.

Another related point: Just because someone is involved doesn't mean that person will cooperate. I've been involved in BPI efforts with people who were less than cooperative, and it really limits the possible outcomes. In every organization, I believe people share a responsibility for improving the workplace, and that responsibility should be reflected in annual performance reviews. If someone is combative simply to resist the change, it should show up there. That said, if people have valid points and you simply don't agree, don't punish them! The goal here is inclusiveness, not divisiveness.

6: Formally map processes under review

This is another step I consider essential. A visual representation of a process helps everyone understand exactly how the process operates, who operates it at particular points along the line, and where that process intersects with other processes and services.

Visio has great templates for process mapping, but there are also excellent stand-alone tools designed for just this purpose, which may be better for particularly complex or involved processes.

With the process map, it becomes easier to make decisions with everyone on the same page.

7: Spend time on what-if scenarios

Don't just come up with a new process and lock it in. Consider every what-if scenario you can think of to try to break the process. Just like software testing, the goal here is to identify weaknesses so that you can shore things up. The more time you spend testing processes, the better the outcome will be.

8: Figure out your measuring stick

If you can't measure it, you can't fix it. You must identify the metrics by which you will gauge the BPI project's success. The "pain" metric was probably determined when you figured out which processes to attack first, but you should also set a concrete target for success. For example, are you trying to reduce customer on-hold time for support to two minutes or less? Whatever your metric is, define it and measure success against it.

9: Don't assume automation

When people hear "business process improvement," they often assume that it's code for "IT is going to automate the process." That's certainly not always the case, although IT systems will often play a large role in these efforts. It's just as likely that non-IT-focused efforts will play as big a role as — or a bigger role than — IT-based systems.

I include this step so that you don't limit yourself. Think outside the system!

10: Look for common chokepoints between disparate processes

As processes intersect, look for places where many processes tend to break down. This is related to "thinking global" and requires people who can look at the organization from a very high level while, at the same time, diving deep into its guts to see what makes it tick.

10 compelling reasons to upgrade to Windows Server 2012

We've had a chance to play around a bit with the release preview of Windows Server 2012. Some have been put off by the interface-formerly-known-as-Metro, but with more emphasis on Server Core and the Minimal Server Interface, the UI is unlikely to be a "make it or break it" issue for most of those who are deciding whether to upgrade. More important are the big changes and new capabilities that make Server 2012 better able to handle your network's workloads and needs. That's what has many IT pros excited.

Here are 10 reasons to give serious consideration to upgrading to Server 2012 sooner rather than later.

1: Freedom of interface choice

A Server Core installation provides security and performance advantages, but in the past, you had to make a commitment: If you installed Server Core, you were stuck in the "dark place" with only the command line as your interface. Windows Server 2012 changes all that. Now we have choices.

Microsoft has recognized that the command line is great for some tasks, while the graphical interface is preferable for others. Server 2012 makes the GUI a "feature" that can be turned on and off at will, through the Remove Roles and Features option in Server Manager or through PowerShell.
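If you'd rather script it, here's a minimal sketch of flipping that switch from an elevated PowerShell prompt (the server reboots when you include -Restart):

    # Remove the graphical shell and management infrastructure,
    # dropping back to Server Core
    Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

    # Bring the full GUI back later
    Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

Removing only Server-Gui-Shell while leaving Server-Gui-Mgmt-Infra in place gives you the middle ground: the Minimal Server Interface.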

2: Server Manager

Speaking of Server Manager (Figure A), even many of those who dislike the new tile-based interface overall have admitted that the design's implementation in the new Server Manager is excellent.

Figure A: Server Manager

One of the nicest things about the new Server Manager is the multi-server capabilities, which make it easy to deploy roles and features remotely to physical and virtual servers. It's easy to create a server group — a collection of servers that can be managed together. The remote administration improvements let you provision servers without having to make an RDP connection.
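That remote provisioning is scriptable, too. As a quick sketch, assuming a remote server named FS01 (a placeholder for this example), you could add the File Server role without ever opening an RDP session:

    # Install the File Server role service on a remote machine
    Install-WindowsFeature -Name FS-FileServer -ComputerName FS01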

3: SMB 3.0

The Server Message Block (SMB) protocol has been significantly improved in Windows Server 2012 and Windows 8. The new version of SMB supports new file server features such as SMB transparent failover, SMB Scale Out, SMB Multichannel, SMB Direct, SMB encryption, VSS for SMB file sharing, SMB directory leasing, and SMB PowerShell. That's a lot of bang for the buck. It works beautifully with Hyper-V, so VHD files and virtual machine configuration files can be hosted on SMB 3.0 shares. SQL Server system databases can be stored on an SMB share as well, with improvements to performance. For more details about what's new in SMB 3.0, see this blog post.
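As a small taste of the SMB PowerShell module, here's a sketch of creating a share with SMB encryption turned on; the share name, path, and group below are placeholders for this example:

    # Create a share and require SMB encryption on its traffic
    New-SmbShare -Name "VMStore" -Path "D:\VMStore" -EncryptData $true `
        -FullAccess "CONTOSO\Hyper-V Admins"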

4: Dynamic Access Control (DAC)

Even though some say Microsoft has shifted the focus away from security in recent years, it would be more accurate to say it has shifted the focus from separate security products to a more "baked in" approach of integrating security into every part of the operating system.

Dynamic Access Control is one such example, helping IT pros create more centralized security models for access to network resources by tagging sensitive data, both manually and automatically, based on factors such as the file content or the creator. Claims-based access controls can then be applied. Read more about DAC in my "First Look" article over on Windowsecurity.com.
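At the PowerShell level, the moving parts are claim types, central access rules, and central access policies. A heavily simplified sketch follows; all names are placeholders, and a real rule would carry resource conditions and a conditional ACL, typically built in Active Directory Administrative Center:

    # Publish a claim type sourced from the AD 'department' attribute
    New-ADClaimType -DisplayName "Department" -SourceAttribute department

    # Create a skeleton central access rule and wrap it in a policy
    New-ADCentralAccessRule -Name "Finance Documents Rule"
    New-ADCentralAccessPolicy -Name "Finance Policy"
    Add-ADCentralAccessPolicyMember -Identity "Finance Policy" -Members "Finance Documents Rule"

The policy is then pushed out to file servers through group policy and applied on individual folders.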

5: Storage Spaces

Storage is a hot — and complex — topic in the IT world these days. Despite the idea that we're all going to be storing everything in the public cloud one day, that day is a long way off (and for many organizations concerned about security and reliability, it may never happen). There are myriad solutions for storing data on your network in a way that provides better utilization of storage resources, centralized management, and better scalability, along with security and reliability. Storage area networks (SANs) and network attached storage (NAS) do that, but they can be expensive and difficult to set up.

Storage Spaces is a new feature in Server 2012 that lets you use inexpensive hard drives to create a storage pool, which can then be divided into spaces that are used like physical disks. They can include hot standby drives and use redundancy methods such as 2- or 3-way mirroring or parity. You can add new disks any time, and a space can be larger than the physical capacity of the pool. When you add new drives, the space automatically uses the extra capacity. Read more about Storage Spaces in this MSDN blog post.
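In PowerShell terms, a pool and a thin-provisioned mirrored space can be stood up in a few lines. A sketch, with the friendly names as placeholders:

    # Gather every disk that's eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true

    # Create the pool, then carve out a two-way mirrored space that can
    # be larger than the pool's current physical capacity (thin provisioning)
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" `
        -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" `
        -ResiliencySettingName Mirror -Size 10TB -ProvisioningType Thin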

6: Hyper-V Replica

Virtualization is the name of the game in the server world these days, and Hyper-V is Microsoft's answer to VMware. Although the latter had a big head start, Microsoft's virtualization platform has been working hard at catching up, and many IT pros now believe it has surpassed its rival in many key areas. With each iteration, the Windows hypervisor gets a little better, and Hyper-V in Windows Server 2012 brings a number of new features to the table. One of the most interesting is Hyper-V Replica.

This is a replication mechanism that will be a disaster recovery godsend to SMBs that may not be able to deploy complex and costly replication solutions. It logs changes to the disks in a VM and uses compression to save on bandwidth, replicating from a primary server to a replica server. You can store multiple snapshots of a VM on the replica server and then select the one you want to use. It works with both standalone hosts and clusters in any combination (standalone to standalone, cluster to cluster, standalone to cluster, or cluster to standalone). To find out more about Hyper-V Replica, see this TechNet article.
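Replication is enabled per-VM. Assuming the replica server has already been configured to accept replication (via Set-VMReplicationServer), a minimal sketch with placeholder server and VM names looks like this:

    # Replicate a VM to a replica server over Kerberos-authenticated HTTP
    Enable-VMReplication -VMName "SQL01" -ReplicaServerName "replica.contoso.com" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos

    # Kick off the initial copy
    Start-VMInitialReplication -VMName "SQL01"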

7: Improvements to VDI

Windows Terminal Services has come a long way, baby, since I first met it in Windows NT TS Edition. Renamed Remote Desktop Services, it has expanded to encompass much more than the ability to RDP into the desktop of a remote machine. Microsoft offered a centralized Virtual Desktop Infrastructure (VDI) solution in Windows Server 2008 R2, but it was still a little rough around the edges. Significant improvements have been made in Server 2012.

You no longer need a dedicated GPU in the server to use RemoteFX, which vastly improves the quality of graphics over RDP. Instead, you can use a virtualized GPU on standard server hardware. USB over RDP is much better, and the Fair Share feature can manage how CPU, memory, disk space, and bandwidth are allocated among users to thwart resource hogs. Read more about Server 2012 VDI and RDP improvements here.
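Deployment is simpler, too. As a rough sketch, the RemoteDesktop module can stand up a basic VDI deployment across three machines in a single call (all server names below are placeholders):

    # Create a VDI deployment: connection broker, web access, and virtualization host
    New-RDVirtualDesktopDeployment -ConnectionBroker "broker.contoso.com" `
        -WebAccessServer "web.contoso.com" -VirtualizationHost "vhost.contoso.com"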

8: DirectAccess without the hassle factor

DirectAccess was designed to be Microsoft's "VPN replacement," a way to create a secure connection from client to corporate network without the performance drain and with a more transparent user experience than a traditional VPN. Not only do users not have to deal with making the VPN work, but administrators get more control over the machines, with the ability to manage them even before users log in. You apply group policy using the same tools you use to manage computers physically located on the corporate network.

So why hasn't everyone been using DirectAccess with Server 2008 R2 instead of VPNs? One big obstacle was the dependency on IPv6. Plus, it couldn't be virtualized. Those obstacles are gone now. In Windows Server 2012, DirectAccess works with IPv4 without having to fool with conversion technologies, and the server running DirectAccess at the network edge can now be a Hyper-V virtual machine. The Server 2012 version of DA is also easier to configure, thanks to the new wizard.
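Setup reflects that new simplicity. A hedged sketch, where da.contoso.com is a placeholder for the public name of your edge server:

    # Add the unified Remote Access role, then configure DirectAccess
    Install-WindowsFeature RemoteAccess -IncludeManagementTools
    Install-RemoteAccess -DAInstallType FullInstall -ConnectToAddress da.contoso.com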

9: ReFS

Despite the many advantages NTFS offers over early FAT file systems, it's been around since 1993, and Windows aficionados have been longing for a new file system for quite some time. Way back in 2004, we were eagerly looking forward to WinFS, but Vista disappointed us by not including it. Likewise, there was speculation early on that a new file system would be introduced with Windows 7, but it didn't happen.

Windows Server 2012 brings us our long-awaited new file system: ReFS, the Resilient File System. It supports many of the same features as NTFS, although it leaves behind some others, perhaps most notably file compression, EFS, and disk quotas. In return, ReFS gives us data verification and auto-correction, and it's designed to work with Storage Spaces to create shrinkable/expandable logical storage pools. The new file system is all about maximum scalability, supporting volumes of up to 16 exabytes in practice (that figure is the theoretical maximum in the NTFS specification, but real-world NTFS volumes are limited to 16 terabytes) and a theoretical limit of 256 zettabytes (more than 270 billion terabytes). That allows for a lot of scaling.
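Trying ReFS is as simple as formatting a data volume with it. A sketch (the drive letter is a placeholder; integrity streams back the data verification described above):

    # Format a volume with ReFS and enable integrity streams
    Format-Volume -DriveLetter E -FileSystem ReFS -SetIntegrityStreams $true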

10: Simplified licensing

Anyone who has worked with server licenses might say the very term "simplified licensing" is an oxymoron. But Microsoft really has listened to customers who are confused and frustrated by the complexity involved in finding the right edition and figuring out what it's really going to cost. Windows Server 2012 is offered in only four editions: Datacenter, Standard, Essentials, and Foundation. The first two are licensed per-processor plus CAL, and the latter two (for small businesses) are licensed per-server with limits on the number of user accounts (15 for Foundation and 25 for Essentials). See the chart with licensing details for each edition on the Microsoft Web site.

ITWORLD
If you have any questions or suggestions, please post them in the comments.