Saturday, May 28, 2011

Five tips for optimizing your Internet connection

By Brien Posey

Takeaway: If your Internet connection is sluggish, try these tricks to perk it up.

Few things in life are as frustrating as a slow Internet connection. Unfortunately, there isn't much that you can do about many of the things that cause the Internet to slow down. Things like switch congestion or segment congestion (outside your network) are beyond your control. Even so, you can take a few steps to make sure that your Internet connection is functioning optimally.

1: Avoid DNS bottlenecks
 If you have an Active Directory environment in place, you no doubt have an on-premise DNS server. Recently, I have seen several instances of organizations virtualizing their DNS servers and placing them on host machines that have little capacity remaining. The basic thought behind this is that DNS does not require many system resources, so DNS server placement can be treated almost as an afterthought.


 However, your DNS server's performance has a major impact on the amount of time it takes for users to access Web pages. So it's important to make sure that your DNS server has sufficient resources to prevent it from becoming a bottleneck.

2: Use DNS forwarders
 Another thing you can do to optimize your Internet connectivity is to make use of DNS forwarders. The idea behind a forwarder is that if your DNS server is unable to resolve a query, it forwards the query to another DNS server, which resolves it on your behalf and returns the result.

 It's common to point the forwarder to the DNS servers that are owned by a company's ISP. The problem is that these DNS servers can be located anywhere. For example, my ISP resides in South Carolina, but it uses a DNS server in France. If you really want to optimize your Internet connectivity, your DNS forwarder should point to a DNS server that is in close physical proximity to your geographic location.

 If you aren't sure where your ISP's DNS servers are, I recommend using one of the visual trace route applications to determine where the DNS servers reside. The Visual Trace Route Tool is one free option.

 If you do determine that you're forwarding DNS requests to servers that are far away, the forwarder should be redirected to a DNS server that is in closer geographic proximity. If you don't know of another DNS server you can use, try checking out OpenDNS.
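
 If you just want a rough sense of how quickly your current resolver is answering, a small console program can time a few lookups. This is only a sketch (the host names are arbitrary examples, and repeat runs will largely hit the local DNS cache), not a replacement for a visual trace route tool:

using System;
using System.Diagnostics;
using System.Net;

class DnsLatencyCheck
{
    static void Main()
    {
        // Resolve a few names through whatever DNS server this machine is configured to use.
        string[] names = { "www.microsoft.com", "www.google.com", "www.example.com" };

        foreach (string name in names)
        {
            Stopwatch watch = Stopwatch.StartNew();
            Dns.GetHostEntry(name);                 // resolve through the configured DNS server
            watch.Stop();
            Console.WriteLine("{0}: {1} ms", name, watch.ElapsedMilliseconds);
        }
    }
}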

3: Use a proxy cache
 You can also optimize your Internet connectivity by using a proxy cache, which offers two main benefits. First, it provides your network users with a degree of shielding because it is the cache, not the user, that ultimately contacts Web sites. More important, a proxy cache can dramatically speed up Internet access.


 When a user enters a Web URL, the request is sent to the proxy server, which then issues the request on behalf of the user (similar to the way a NAT device works). When the proxy server receives the requested content, it forwards it to the user, but it also stores a copy for itself. If another user requests the same content, the proxy server can deliver it without having to send the request out to the Web site. Cached content is delivered almost instantly, so the result is lightning-fast Internet access for your users (at least for any content that has been cached) and decreased Internet bandwidth usage.

 The good news is that you may already have everything that you need to set up a proxy server. Microsoft's Forefront TMG can easily be configured to act as a proxy cache.
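
 To make the caching idea concrete, here is a toy sketch of the lookup-then-store pattern a proxy cache follows. It is purely illustrative (no expiration handling, no thread safety, and nothing like how Forefront TMG is actually implemented):

using System.Collections.Generic;
using System.Net;

class TinyWebCache
{
    // Cached page content, keyed by URL.
    private static readonly Dictionary<string, string> cache = new Dictionary<string, string>();

    public static string Get(string url)
    {
        string content;
        if (cache.TryGetValue(url, out content))
            return content;                        // served from the cache; no Internet traffic

        using (WebClient client = new WebClient())
            content = client.DownloadString(url);  // first request goes out to the Web site

        cache[url] = content;                      // keep a copy for the next requester
        return content;
    }
}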

4: Secure your wireless access points
 You may be able to optimize your Internet connection by securing your wireless access points. I realize that this sounds ridiculous to anyone who is managing an enterprise class network, because all your access points should already be secure. But a tremendous number of small and midsize businesses are operating unsecured wireless access points.

 From an Internet optimization standpoint, the problem with unsecured wireless access points is that they allow an Internet connection to be used by anyone. A neighbor could potentially be consuming the majority of the available bandwidth.

5: Block streaming media sites
 You can further optimize your Internet connectivity by taking measures to prevent bandwidth from being wasted. One such measure is to block access to any streaming media sites for which there is not a legitimate business need. For example, you might block access to YouTube in an effort to prevent users from wasting Internet bandwidth by downloading viral videos.

Five tips for faster Web browsing

Takeaway: If you're wasting too much time waiting (and waiting and waiting) for Web pages to load, give these tips a try. You should see an immediate, noticeable boost in speed, making your browsing experience faster and more efficient.

Everyone wants faster Web browsing. After all, who has time to wait for Web pages to load these days? This is especially true if you're a tab-junkie like me. When you live with an open browser containing 10 to 15 tabs running at any given time, you know how crucial it is to have as fast a browsing experience as possible. But how do you manage this? Are there tricks to getting more speed when your pipe is maxed out already? You bet your sweet bits and bytes there are.

 Not every solution will work for every user, and not every solution should even be attempted by every user. However, if you like to eke out as much blood as you can from every turnip, let's see how you can squeeze a bit more speed from your browsing experience.


1: Use a fast browser
 Not all browsers are created equal. Some are simply faster than others. Of the current crop of browsers, the fastest is Google Chrome. If you've grown accustomed to Internet Explorer or Firefox, you'll notice a dramatic improvement in page rendering speed when you switch to Google Chrome. Of all the ways you can speed up your browsing experience, this is by far the most effective. Google Chrome also helps speed things up by letting you enter search strings directly in the address bar, so you don't have to add yet another toolbar that would slow the browser down even further.


2: Disable Flash
 Flash pretty much saturates Web sites now. It's almost impossible to get away from this technology. Problem is, Flash can be slow, so it directly affects the speed of your browsing experience. You can have Flash turned off by default and then re-enable it to view what you need to view. The biggest problem with this is that some browsers require an add-on to block Flash. For Chrome, you need the extension Flashblock. There's also a Flashblock extension for Firefox. Internet Explorer has a built-in tool you can access by clicking Tools | Manage Add-ons. In the Manage Add-ons dialog box, double-click Shockwave Flash Object. Then, click the Remove All Sites button. This will disable Flash for all sites.


3: Save your temporary Web files on a RAM disk
 I wrote an article a while back on using a RAM disk to help speed up disk-intensive applications. Since a RAM disk is much faster than a standard hard drive, using it to store your browser's temporary files will create a faster environment for your browser. However, this solution is not for the newbie, and you will need a third-party tool to accomplish this task.

4: Get rid of all those toolbars
 You've seen them in the wild: browsers so filled with toolbars they take up the majority of real estate in the browser window. Most users don't realize those toolbars tend to slow down the browser in many ways. Some toolbars simply take up precious computer memory, while others eat away at bandwidth by sending and receiving data in the background. The math here is quite simple: The more toolbars you have, the slower your browser will run. Some of those toolbars might seem essential. But if speed is really your top priority, you will want to jettison that extra baggage for the speed you will gain.


5: Use tabs, not windows
 Too many tabs can cause problems, but they're still your best bet for browsing efficiency. How do tabs speed up your experience? A couple of ways. The first is all about organization. With multiple tabs in a single window, it becomes quite a bit faster to locate the page you need to work on. You don't have to maximize a window, discover that it's not the right one, minimize it, maximize a new window… until you find the correct one. A single window open with multiple tabs is far easier to search. This is not the only way tabs can help you. Browsers like Chrome treat each tab as an individual process (instead of a child process of a parent). So when a Web site causes a tab to crash, you can close that one tab and not lose all the other tabs. This behavior is not a standard at the moment, so you'll need to switch over to the Chrome browser to take advantage of it.

10 things you can do to conserve Internet bandwidth

By Brien Posey

Takeaway: You can take a number of practical steps to reduce your organization's bandwidth consumption. Here's a rundown of some strategies to consider.

As organizations move more and more services to the cloud, it is becoming increasingly important to make efficient use of the available Internet bandwidth. Here are a few techniques you can use to conserve Internet bandwidth in your own organization.

1: Block access to content-streaming Web sites
 If your organization allows employees to use the Internet for personal use, the first thing you should do is block access to streaming media sites, such as Netflix, YouTube, and MetaCafe. Playing the occasional YouTube video probably isn't going to have a crippling effect on your Internet connection, but streaming videos do consume more bandwidth than many other Web-based services.


2: Throttle cloud backup applications
 If you're backing up your data to the cloud, check to see whether your backup application has a throttling mechanism. An unthrottled cloud backup solution will consume as much bandwidth as it can. This might not be a big deal if you're backing up small files (such as Microsoft Office documents) throughout the day. But when you first begin backing up data to the cloud, an initial backup must be created. I have seen this process last for months, and if left unchecked, it can saturate your Internet connection for that entire time.

3: Limit your use of VoIP
 VoIP is another bandwidth-intensive protocol. If you plan to use VoIP, you might implement a policy stating that phones are to be used for business calls only. While I will be the first to admit that employees sometimes need to make calls that aren't specifically related to work, almost everyone has a cell phone these days, so limiting the office phones to business use only shouldn't be a big deal.


4: Use a proxy cache
 A proxy cache can help limit the amount of traffic created by Web browsers. The basic idea is that when a user visits a Web site, the contents of the page are cached on a proxy server. The next time that person visits that Web page, the content does not have to be downloaded because it already exists in the cache. Using a proxy cache not only saves bandwidth, but it can give users the illusion that your Internet connection is much faster than it really is.

5: Centralize application updates
 Today, almost every application is designed to download periodic updates over the Internet. You can save a lot of bandwidth by centralizing the update process. For example, rather than let every PC in your office connect to the Microsoft Update Service, you should set up a WSUS server to download all the updates and then make them available to the individual PCs. That way, the same updates aren't being downloaded over and over again.


6: Use hosted filtering
 If you operate your own mail servers in-house, a great way to save bandwidth is to take advantage of hosted filtering. With hosted filtering, your MX record points to a cloud server rather than to your mail server. This server receives all the mail that's destined for your organization. The server filters out any spam or messages containing malware. The remaining messages are forwarded to your organization. You can save a lot of bandwidth (and mail server resources) because your organization is no longer receiving spam.

7: Identify your heaviest users
 In any organization, there will be some users who use the Internet more heavily than others. It's a good idea to identify your heaviest users and to determine what they are doing that's causing them to consume so much bandwidth. I have seen real-world situations in which a user was operating peer-to-peer file-sharing software even though the administrator thought that the users' desktops were locked down to make it impossible for anyone to do so.

8: Aggressively scan for malware
 Malware can rob your organization of a tremendous amount of bandwidth by turning PCs into bots. Be aggressive in your efforts to keep the desktops on your network clean. Here are some resources that can help: 
  • 10 ways to detect computer malware
  • 10 more ways to detect computer malware
  • The 10 faces of computer malware
  • Five tips for spotting the signs of malware
  • Rescue CDs: Tips for fighting malware
  • 10 free anti-malware tools worth checking out
  • Virus & Spyware Removal Checklist
9: Use QoS to reserve bandwidth
 QoS stands for quality of service. It is a bandwidth reservation mechanism that was first introduced in Windows 2000, and it's still around today. If you have applications that require a specific amount of bandwidth (such as a video conferencing application), you can configure QoS to reserve the required bandwidth for that application. The bandwidth reservation is in effect only when the application is actively being used. At other times, the bandwidth that is reserved for the application is available for other uses.

10: Make sure you're getting the bandwidth you're paying for
 A lot of factors affect Internet bandwidth, so you can't expect to connect to every Web site at your connection's maximum speed. Even so, your Internet connection should deliver performance that is reasonably close to what you are paying for.


 I haven't ever seen a situation in which an ISP intentionally gave someone a slower connection than they were paying for, but I have seen plenty of situations in which a connection was shared between multiple subscribers. In the case of a shared connection, a neighbor's online activity can directly affect your available bandwidth. If your Internet connection isn't as fast as it should be, talk to your ISP and find out if your connection is shared. You might pay a bit more for a non-shared connection, but the extra cost may be worth it.



Friday, May 20, 2011

How to import an Excel file into SQL Server 2005 using Integration Services

Takeaway: Integration Services, which replaces Data Transformation Services (DTS) in SQL Server 2005, is a wonderful tool for extracting, transforming, and loading data. This article describes how you can use the new features of Integration Services to load an Excel file into your database.

Integration Services, which replaces Data Transformation Services (DTS) in SQL Server 2005, is a wonderful tool for extracting, transforming, and loading data. Common uses for Integration Services include: loading data into the database; moving data into or out of your relational database structures; loading your data warehouse; and taking data out of your database and moving it to other databases or types of storage. This article describes how you can use the new features of SQL Server 2005 Integration Services (SSIS) to load an Excel file into your database.

Note: There are several wizards that come with SQL Server Management Studio to aid you in the import and export of data into and out of your database. I will not look at those wizards; I will focus on how you can build a package from scratch so that you don't have to rely on the wizards.

To begin the process, I open SQL Server Business Intelligence (BI) Development Studio, a front-end tool that is installed when you install SQL Server 2005. The BI Development Studio is a scaled down version of Visual Studio. Then I select New Integration Services Project and give the project a name. See Figure A.

Figure A

When the project opens, you will see an environment that may look familiar to you if you have used SQL Server DTS; some of the items in the Toolbox are the same. For the purposes of this project, I am interested in dragging the Data Flow task item from the Toolbox into the Control Flow tab. (The idea of a Data Flow task is one of the major differences between DTS and SSIS packages. In an SSIS package, you can control the manner in which your package logic flows inside of the Control Flow tab. When you need to manage the data aspects of your project, you will use the Data Flow task. You can have several different Data Flow tasks in your project — all of which will reside inside the Control Flow tab.) See Figure B.

Figure B

Double-click the Data Flow task that you have dragged onto the Control Flow tab. The available options in the Toolbox change; Data Flow Sources, Data Flow Destinations, and Data Flow Transformations are now available. Since I am going to import an Excel file into the database, I will drag the Excel Source item from the Toolbox onto the screen. See Figure C.

Figure C

The Excel Source item represents an Excel file that I will import from somewhere on my network. Now I need somewhere to put the data. Since my plan is to put the data into the database, I will need a Data Flow Destination. For the purposes of this example, I will choose SQL Server Destination from the Data Flow Destination portion of the toolbar and drag it onto my Data Flow tab. See Figure D.

Figure D

To designate which Excel file I want to import, I double-click the Excel Source item that I moved onto the screen. From there, I find the Excel file on the network that I want to import. See Figure E.

Figure E

I also need to designate the sheet from the Excel file that I want to import, along with the columns from the sheet that I want to use. Figures F and G depict these options.

Figure F

Figure G

Now that I have defined my Excel source, I need to define my SQL Server destination. Before doing that, I need to indicate the Data Flow Path from the Excel file to the SQL Server destination; this will allow me to use the structure of the data defined in the Excel Source to model my SQL Server table that I will import the data into. To do this, I click the Excel Source item and drag the green arrow onto the SQL Server Destination item. See Figure H.

Figure H

To define the database server and database into which the data will be imported, I double-click the SQL Server Destination item. There, I specify the server to which I will import the data, along with the database in which the data will reside. See Figure I.

Figure I

I also need to define the table that I will insert the Excel data into. I will create a new table named SalesHistoryExcelData. See Figure J.

Figure J

Under the Mappings section, I define the relationship between the Input Columns (the Excel data) and the Destination Columns (my new SQL Server table). See Figure K.

Figure K

Once I successfully define the inputs and outputs, my screen will look like the one below. All I need to do now is run the package and import the data into the new table by clicking the green arrow in the top-middle of the screen, which executes my package. See Figure L.

Figure L

Figure M shows that my package has successfully executed and that 30,000 records from my Excel Source item have been transferred to my SQL Server destination.

Figure M

You can download the Excel file I used for this article.  

Tasks in SSIS packages

Importing and exporting data are some of the simplest, most useful tasks to accomplish in SQL Server. However, there are literally hundreds of other tasks that can easily be accomplished in SSIS packages and that would take significantly more time to do by other means. I plan to take a look at several more of these tasks in future articles.

Monday, May 16, 2011

10 Tips for Writing High-Performance Web Applications

Rob Howard

Before becoming a workaholic, I used to do a lot of rock climbing. Prior to any big climb, I'd review the route in the guidebook and read the recommendations made by people who had visited the site before. But, no matter how good the guidebook, you need actual rock climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance Web applications when you're faced with either fixing performance problems or running a high-throughput site.
My personal experience comes from having been an infrastructure Program Manager on the ASP.NET team at Microsoft, running and managing www.asp.net, and helping architect Community Server, which is the next version of several well-known ASP.NET applications (ASP.NET Forums, .Text, and nGallery combined into one platform). I'm sure that some of the tips that have helped me will help you as well.
You should think about the separation of your application into logical tiers. You might have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added. There is, however, a performance hit associated with process and machine hopping, thus it should be avoided. So, whenever possible, run the ASP.NET pages and their associated components together in the same application.
Because of the separation of code and the boundaries between tiers, using Web services or remoting will decrease performance by 20 percent or more.
The data tier is a bit of a different beast since it is usually better to have dedicated hardware for your database. However, the cost of process hopping to the database is still high, thus performance on the data tier is the first place to look when optimizing your code.
Before diving in to fix performance problems in your applications, make sure you profile your applications to see exactly where the problems lie. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collections) are also very useful for finding out where applications are spending the majority of their time. Yet the places where time is spent are often quite unintuitive.
There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny optimizations that repeat themselves. These tiny optimizations are sometimes the most interesting. You make a small change to code that gets called thousands and thousands of times. With a big optimization, you might see overall performance take a large jump. With a small one, you might shave a few milliseconds on a given request, but when compounded across the total requests per day, it can result in an enormous improvement.


Performance on the Data Tier
When it comes to performance-tuning an application, there is a single litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test could be applied for code that uses Web services or remoting, too, but I'm not covering those in this article.
If you have a database request required in a particular code path and you see other areas such as string manipulations that you want to optimize first, stop and perform your litmus test. Unless you have an egregious performance problem, your time would be better utilized trying to optimize the time spent in and connected to the database, the amount of data returned, and how often you make round-trips to and from the database.
With that general information established, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.


Tip 1—Return Multiple Resultsets
Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple resultsets in a single database request, you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.
While you can return multiple resultsets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reduce the size of the dataset, time spent on the network, and not having to filter the data in the logic tier), it's a good thing.
Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the resultset pointer forward by calling NextResult. Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.

// read the first resultset
reader = command.ExecuteReader();

// read the data from that resultset
while (reader.Read())
{
    suppliers.Add(PopulateSupplierFromIDataReader(reader));
}

// read the next resultset
reader.NextResult();

// read the data from that second resultset
while (reader.Read())
{
    products.Add(PopulateProductFromIDataReader(reader));
}

Tip 2—Paged Data Access
The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time. Additionally, paging UI is also shown at the bottom of the DataGrid for navigating through the records. The paging UI allows you to navigate backwards and forwards through displayed data, displaying a fixed number of records at a time.
There's one slight wrinkle. Paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer will need to return all of the data and then the DataGrid will filter all the displayed records based on the current page. If 100,000 records are returned when you're paging through the DataGrid, 99,975 records would be discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer as more and more data must be sent on each request.
One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you're doing here is passing in the page index and the page size. The appropriate resultset is calculated and then returned.

CREATE PROCEDURE northwind_OrdersPaged
(
    @PageIndex int,
    @PageSize  int
)
AS
BEGIN
    DECLARE @PageLowerBound int
    DECLARE @PageUpperBound int
    DECLARE @RowsToReturn   int

    -- First set the rowcount
    SET @RowsToReturn = @PageSize * (@PageIndex + 1)
    SET ROWCOUNT @RowsToReturn

    -- Set the page bounds
    SET @PageLowerBound = @PageSize * @PageIndex
    SET @PageUpperBound = @PageLowerBound + @PageSize + 1

    -- Create a temp table to store the select results
    CREATE TABLE #PageIndex
    (
        IndexId int IDENTITY (1, 1) NOT NULL,
        OrderID int
    )

    -- Insert into the temp table
    INSERT INTO #PageIndex (OrderID)
    SELECT OrderID
    FROM Orders
    ORDER BY OrderID DESC

    -- Return total count
    SELECT COUNT(OrderID) FROM Orders

    -- Return paged results
    SELECT O.*
    FROM Orders O, #PageIndex PageIndex
    WHERE O.OrderID = PageIndex.OrderID
      AND PageIndex.IndexID > @PageLowerBound
      AND PageIndex.IndexID < @PageUpperBound
    ORDER BY PageIndex.IndexID
END

In Community Server, we wrote a paging server control to do all the data paging. You'll see that I am using the ideas discussed in Tip 1, returning two resultsets from one stored procedure: the total number of records and the requested data.
The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records to be returned must be known in order to calculate the total pages to be displayed in the paging UI. For example, if there are 1,000,000 total records and a WHERE clause is used that filters this to 1,000 records, the paging logic needs to be aware of the total number of records to properly render the paging UI.
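
For readers who want to see the calling side, here is a sketch of how the two resultsets from a procedure like the one in Figure 2 might be consumed with ADO.NET. The connection string handling and the decision to collect raw OrderID values are illustrative assumptions:

using System.Collections;
using System.Data;
using System.Data.SqlClient;

class OrderPager
{
    public static ArrayList GetOrdersPage(string connectionString, int pageIndex, int pageSize, out int totalRecords)
    {
        ArrayList orders = new ArrayList();

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("northwind_OrdersPaged", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@PageIndex", pageIndex);
            command.Parameters.AddWithValue("@PageSize", pageSize);

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                // First resultset: the total record count.
                reader.Read();
                totalRecords = reader.GetInt32(0);

                // Second resultset: the requested page of orders.
                reader.NextResult();
                while (reader.Read())
                {
                    orders.Add(reader["OrderID"]);   // populate a typed business class in real code
                }
            }
        }

        return orders;
    }
}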


Tip 3—Connection Pooling
Setting up the TCP connection between your Web application and SQL Server can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool where it remains connected to the database, as opposed to completely tearing down that TCP connection.
Of course you need to watch out for leaking connections. Always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection within the Microsoft® .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.
To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to (optimally you apply Tip 1) rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.
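
As a minimal illustration of the open-late, close-early pattern, here is a sketch in which every caller shares one connection string and the connection is disposed as soon as the work is done. The Northwind connection string is just an example:

using System.Data.SqlClient;

class OrderData
{
    // Every caller uses this exact string, so all requests share one connection pool.
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI";

    public static int GetOrderCount()
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();                    // grabs a pooled connection if one is free
            return (int)command.ExecuteScalar();  // Dispose returns the connection to the pool
        }
    }
}
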
Whenever your application is connecting to a resource, such as a database, running in another process, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.
The application tier contains the logic that connects to your data layer and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection, and apply business rules such as permissions; most importantly it is where the Caching logic is performed.


Tip 4—ASP.NET Cache API
One of the very first things you should do before writing a line of application code is architect the application tier to maximize and exploit the ASP.NET Cache feature.
If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).
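
Here is a small get-or-insert sketch against the Cache API. The supplier example, the LoadSuppliersFromDatabase helper, and the five-minute expiration are assumptions made for illustration:

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

class SupplierCache
{
    public static DataTable GetSuppliers()
    {
        const string cacheKey = "Suppliers";
        DataTable suppliers = HttpRuntime.Cache[cacheKey] as DataTable;

        if (suppliers == null)
        {
            suppliers = LoadSuppliersFromDatabase();    // hypothetical data access call
            HttpRuntime.Cache.Insert(
                cacheKey,
                suppliers,
                null,                                   // no cache dependency
                DateTime.Now.AddMinutes(5),             // absolute expiration
                Cache.NoSlidingExpiration);
        }

        return suppliers;
    }

    private static DataTable LoadSuppliersFromDatabase()
    {
        // Placeholder for the real database query.
        return new DataTable("Suppliers");
    }
}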

There are several rules for caching data. First, if data can be used more than once it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific, but is long lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally on an x86 machine, you want to run a process with no higher than 800MB of private bytes in order to reduce the chance of an out-of-memory error. Therefore, caching should be bounded. In other words, you may be able to reuse a result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, which will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.

Common Performance Myths

One of the most common myths is that C# code is faster than Visual Basic code. There is a grain of truth in this, as it is possible to take several performance-hindering actions in Visual Basic that are not possible to accomplish in C#, such as not explicitly declaring types. But if good programming practices are followed, there is no reason why Visual Basic and C# code cannot execute with nearly identical performance. To put it more succinctly, similar code produces similar results.
Another myth is that codebehind is faster than inline, which is absolutely false. It doesn't matter where your code for your ASP.NET application lives, whether in a codebehind file or inline with the ASP.NET page. Sometimes I prefer to use inline code as changes don't incur the same update costs as codebehind. For example, with codebehind you have to update the entire codebehind DLL, which can be a scary proposition.
Myth number three is that components are faster than pages. This was true in Classic ASP when compiled COM servers were much faster than VBScript. With ASP.NET, however, both pages and components are classes. Whether your code is inline in a page, within a codebehind, or in a separate component makes little performance difference. Organizationally, it is better to group functionality logically this way, but again it makes no difference with regard to performance.
The final myth I want to dispel is that all functionality you want to share between two apps should be implemented as a Web service. Web services should be used to connect disparate systems or to provide remote access to system functionality or behaviors. They should not be used internally to connect two similar systems. While they are easy to use, there are much better alternatives. The worst thing you can do is use Web services for communicating between ASP and ASP.NET applications running on the same server, which I've witnessed all too frequently.


Figure 3 ASP.NET Cache 
There are several great features of the Cache that you need to know about. The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a Cache purge (automatically removing unused items from the Cache) if memory is running low. Secondly, the Cache supports expiration dependencies that can force invalidation. These include time, key, and file. Time is often used, but with ASP.NET 2.0 a new and more powerful invalidation type is being introduced: database cache invalidation. This refers to the automatic removal of entries in the cache when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN® Magazine. For a look at the architecture of the cache, see Figure 3.


Tip 5—Per-Request Caching
Earlier in the article, I mentioned that small improvements to frequently traversed code paths can lead to big, overall performance gains. One of my absolute favorites of these is something I've termed per-request caching.
Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.
In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, the style sheet to use, as well as other personalization data. Some of this data can be cached for a long period of time, but some data, such as the skin to use for the controls, is fetched once on each request and reused multiple times during the execution of the request.
To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind this is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
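
A tiny sketch of that pattern follows, using a hypothetical skin lookup like the Forums example above; LoadSkinFromDatabase is a made-up stand-in for the expensive call:

using System.Collections;
using System.Web;

class PersonalizationHelper
{
    public static string GetCurrentSkin()
    {
        const string key = "CurrentSkin";
        IDictionary items = HttpContext.Current.Items;

        // Fetch once per request; later calls in the same request hit the Items collection.
        if (items[key] == null)
        {
            items[key] = LoadSkinFromDatabase();   // hypothetical expensive lookup
        }

        return (string)items[key];
    }

    private static string LoadSkinFromDatabase()
    {
        // Placeholder for the real personalization query.
        return "default";
    }
}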


Tip 6—Background Processing
The path through your code should be as fast as possible, right? There may be times when you find yourself performing expensive tasks on each request or once every n requests. Sending out e-mails or parsing and validating incoming data are just a few examples.
When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was pretty slow. Each time a post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "badword" filter, parse the post for emoticons, tokenize and index the post, add the post to the moderation queue when required, validate attachments, and finally, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.
It turns out that most of the time was spent in the indexing logic and sending e-mails. Indexing a post was a time-consuming operation, and it turned out that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, it would take longer and longer to perform the AddPost function.
Indexing and sending e-mail didn't need to happen on each request. Ideally, we wanted to batch this work together and index 25 posts at a time or send all the e-mails every five minutes. We decided to use the same code I had used to prototype database cache invalidation for what eventually got baked into Visual Studio® 2005.
The Timer class, found in the System.Threading namespace, is a wonderfully useful, but less well-known class in the .NET Framework, at least for Web developers. Once created, the Timer will invoke the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process too.
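
The following sketch shows the shape of that approach; the five-minute interval and the SendQueuedEmails work item are illustrative assumptions, not code from Community Server:

using System;
using System.Threading;

class EmailNotificationScheduler
{
    // Keep a reference so the timer isn't garbage collected.
    private static Timer timer;

    public static void Start()
    {
        // The callback runs on a ThreadPool thread, independent of any incoming request.
        timer = new Timer(SendQueuedEmails, null,
                          TimeSpan.FromMinutes(5),    // first run after five minutes
                          TimeSpan.FromMinutes(5));   // then every five minutes
    }

    private static void SendQueuedEmails(object state)
    {
        // Placeholder: drain the notification queue and send the batched e-mails here.
    }
}
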
There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where timers may not have threads to complete on and can be somewhat delayed. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and only using a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.
There is not enough room to go into the code here, but you can download a digestible sample at www.rob-howard.net. Just grab the slides and demos from the Blackbelt TechEd 2004 presentation.


Tip 7—Page Output Caching and Proxy Servers
ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching.
By simply adding this line to the top of your page
<%@ OutputCache Duration="60" VaryByParam="none" %>
you can effectively generate the output for this page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs. There are several configurable settings for output caching, such as the VaryByParam attribute just described. VaryByParam is required, but it allows you to specify the HTTP GET or HTTP POST parameters that vary the cache entries. For example, default.aspx?Report=1 or default.aspx?Report=2 could be output-cached by simply setting VaryByParam="Report". Additional parameters can be named by specifying a semicolon-separated list.
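
For reference, here is a sketch of roughly the same 60-second policy set through the lower-level HttpCachePolicy API from a page's code-behind. This is one possible approach under those assumptions, not the only way to do it:

using System;
using System.Web;
using System.Web.UI;

public class ReportPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Cache.SetCacheability(HttpCacheability.Public);   // allow downstream caching
        Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));    // 60-second lifetime
        Response.Cache.SetValidUntilExpires(true);                 // keep the policy until it expires
    }
}
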
Many people don't realize that when the Output Cache is used, the ASP.NET page also generates a set of HTTP headers that can be honored by downstream caching servers, such as those used by Microsoft Internet Security and Acceleration Server or by Akamai. When HTTP Cache headers are set, the documents can be cached on these network resources, and client requests can be satisfied without having to go back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it's downstream, you won't see the requests anymore and can't perform authentication to prevent access to it.


Tip 8—Run IIS 6.0 (If Only for Kernel Caching)
If you're not running IIS 6.0 (Windows Server 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request, and returns the contents from the Cache.
If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if cached, flushes the cached data to the response, and completes execution. This means that when you use kernel-mode caching with IIS and ASP.NET output caching, you'll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the magic, but I saw all the reports on a daily basis. The kernel mode caching results were always the most interesting. The common characteristic was network saturation by requests/responses and IIS running at about five percent CPU utilization. It was amazing! There are certainly other reasons for using IIS 6.0, but kernel mode caching is an obvious one.


Tip 9—Use Gzip Compression
While not necessarily a server performance tip (since you might see CPU utilization go up), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.
The good news is that gzip compression is built into IIS 6.0 and is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting on the properties dialog in IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it. To enable gzip compression, you have to spelunk into the innards of the XML configuration settings of IIS 6.0 (which isn't for the faint of heart). By the way, the credit goes to Scott Forsyth of OrcsWeb who helped me figure this out for the www.asp.net servers hosted by OrcsWeb.
Rather than include the procedure in this article, just read the article by Brad Wilson at IIS6 Compression. There's also a Knowledge Base article on enabling compression for ASPX, available at Enable ASPX Compression in IIS. It should be noted, however, that dynamic compression and kernel caching are mutually exclusive on IIS 6.0 due to some implementation details.


Tip 10—Server Control View State
View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
There are a number of drawbacks to the use of view state, however. First of all, it increases the total payload of the page both when served and when requested. There is also an additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. The default behavior of the ViewState property is enabled, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false, or you can set it globally within the page using this setting:
<%@ Page EnableViewState="false" %>
If you are not doing postbacks in a page or are always regenerating the controls on a page on each request, you should disable view state at the page level.
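
The same thing can be done for a single control from code-behind; in this sketch the resultsGrid field stands in for a DataGrid assumed to be declared in the page markup:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class ReadOnlyReportPage : Page
{
    protected DataGrid resultsGrid;   // assumed to be declared in the .aspx markup

    protected void Page_Load(object sender, EventArgs e)
    {
        // This grid is re-bound on every request, so persisting its view state is wasted payload.
        resultsGrid.EnableViewState = false;
    }
}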


Conclusion
I've offered you some tips that I've found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is more a preliminary guide than the last word on ASP.NET performance. (More information on improving the performance of ASP.NET apps can be found at Improving ASP.NET Performance.) Only through your own experience can you find the best way to solve your unique performance problems. However, during your journey, these tips should provide you with good guidance. In software development, there are very few absolutes; every application is unique.

ITWORLD
If you have any questions, please post them in the comments.

You can also share your suggestions in the comments.