Tuesday, August 5, 2008

Isolated Storage


When an application stores data in a file, the file name and storage location must be carefully chosen to minimize the possibility that the storage location will be known to another application and, therefore, vulnerable to corruption. Without a standard system in place to manage these problems, developing ad hoc techniques that minimize storage conflicts can be complex and the results can be unreliable.

With isolated storage, data is always isolated by user and by assembly. Credentials such as the origin or the strong name of the assembly determine assembly identity. Data can also be isolated by application domain, using similar credentials.

Administrators can limit how much isolated storage an application or a user has available, based on an appropriate trust level. In addition, administrators can remove all a user's persisted data. To create or access isolated storage, code must be granted the appropriate IsolatedStorageFilePermission.

System.IO.IsolatedStorage Namespace

The System.IO.IsolatedStorage namespace contains types that allow the creation and use of isolated stores. With these stores, you can read and write data that less trusted code cannot access and prevent the exposure of sensitive information that can be saved elsewhere on the file system. Data is stored in compartments that are isolated by the current user and by the assembly in which the code exists. Additionally, data can be isolated by domain. Roaming profiles can be used in conjunction with isolated storage so isolated stores will travel with the user's profile. The IsolatedStorageScope enumeration indicates different types of isolation. For more information about when to use isolated storage, see Performing Isolated Storage Tasks.

The IsolatedStorageFile class provides most of the necessary functionality for isolated storage. Use this class to obtain, delete, and manage isolated stores. The IsolatedStorageFileStream class handles reading and writing files in a store, much as the standard file I/O classes do. For more information about I/O, see the System.IO namespace.

Writing and reading – IsolatedStorage

Write

public void StoreDataIsolated(string strFilename, string strStoreValue)
{
    using (IsolatedStorageFile isoFile = IsolatedStorageFile.GetUserStoreForApplication())
    {
        using (IsolatedStorageFileStream isoStream =
            new IsolatedStorageFileStream(strFilename, FileMode.Create, isoFile))
        {
            using (StreamWriter sw = new StreamWriter(isoStream))
            {
                sw.Write(strStoreValue);
            }
        }
    }
}

Read

public string ReadDataIsolated(string strFilename)
{
    string strRetValue = "";
    using (IsolatedStorageFile isoFile = IsolatedStorageFile.GetUserStoreForApplication())
    {
        using (IsolatedStorageFileStream isoStream =
            new IsolatedStorageFileStream(strFilename, FileMode.OpenOrCreate, isoFile))
        {
            using (StreamReader sr = new StreamReader(isoStream))
            {
                strRetValue = sr.ReadToEnd();
            }
        }
    }
    return strRetValue;
}
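For completeness, here is a small sketch (not part of the original post) showing how you might check for and delete a file in the same application-scoped store; the helper name and parameter are illustrative.

public void DeleteDataIsolated(string strFilename)
{
    using (IsolatedStorageFile isoFile = IsolatedStorageFile.GetUserStoreForApplication())
    {
        // GetFileNames takes a search pattern; here we look for the exact file name.
        if (isoFile.GetFileNames(strFilename).Length > 0)
        {
            isoFile.DeleteFile(strFilename);
        }
    }
}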

Wednesday, June 25, 2008

How to build a SharePoint Silverlight Beta 2 Webpart?

If anyone is in need of the above topic, please add your comments.

Tuesday, June 10, 2008

How can I connect to the database in Silverlight project? (VS 2008 Beta 2)

Silverlight applications run on the client, so a Silverlight application cannot connect directly to a database that lives on the server - the application and the database are in different tiers.
So how do we access the database from Silverlight?
There are several ways to reach a database from a Silverlight application:
i) Web services (ASMX)
ii) WCF services
iii) A plain ASPX page that returns the data.
For the first and second options the service code is essentially the same. A sample ASMX web method is shown below.

[WebMethod]
public string RetrieveTexture()
{
    try
    {
        MySqlConnection _mysqlConnection = new MySqlConnection();
        _mysqlConnection.ConnectionString = ConfigurationManager.ConnectionStrings["mySqlConnection"].ToString();
        _mysqlConnection.Open();
        MySqlDataAdapter da = new MySqlDataAdapter();
        da.SelectCommand = new MySqlCommand("SELECT * FROM myTablename LIMIT 0, 20", _mysqlConnection);
        DataSet ds = new DataSet();
        da.Fill(ds);

        // Build an XML string. The element names (Textures/Texture/item/Artist/class1/class2)
        // are inferred from the Silverlight code further down, which parses exactly these
        // elements; the literal tags were stripped when the original post was published.
        StringBuilder sb = new StringBuilder();
        sb.Append("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
        sb.Append("<Textures>");
        foreach (DataRow dr in ds.Tables[0].Rows)
        {
            sb.Append("<Texture>");
            sb.Append("<item>");
            sb.Append(dr[0].ToString());
            sb.Append("</item>");
            sb.Append("<Artist>");
            sb.Append(dr[1].ToString());
            sb.Append("</Artist>");
            sb.Append("<class1>");
            sb.Append(dr[4].ToString());
            sb.Append("</class1>");
            sb.Append("<class2>");
            sb.Append(dr[5].ToString());
            sb.Append("</class2>");
            sb.Append("</Texture>");
        }
        sb.Append("</Textures>");
        _mysqlConnection.Close();
        return sb.ToString();
    }
    catch (Exception ex)
    {
        return string.Empty;
    }
}
C# Code:

public partial class Page : UserControl
{
public Page()
{
InitializeComponent();
Loaded += new RoutedEventHandler(UserControl_Loaded);
}
void UserControl_Loaded(object sender, RoutedEventArgs e)
{
    BasicHttpBinding bind = new BasicHttpBinding();
    bind.MaxReceivedMessageSize = 2147483647;
    bind.MaxBufferSize = 2147483647;
    EndpointAddress endpoint = new EndpointAddress("http://localhost:51103/serviceTest_Web/myService.asmx");
    serviceTest.theSercice.myServiceSoapClient textureSoapClient = new serviceTest.theSercice.myServiceSoapClient(bind, endpoint);
    // Subscribe to the completed event before starting the async call.
    textureSoapClient.RetrieveTextureCompleted +=
        new EventHandler<serviceTest.theSercice.RetrieveTextureCompletedEventArgs>(textureSoapClient_RetrieveTextureCompleted);
    textureSoapClient.RetrieveTextureAsync();
}

void textureSoapClient_RetrieveTextureCompleted(object sender, serviceTest.theSercice.RetrieveTextureCompletedEventArgs e)
{
if (e.Error == null)
displayData(e.Result);
}
void displayData(string xmlContent)
{
try
{
if (xmlContent != string.Empty)
{
XDocument xmlProducts = XDocument.Parse(xmlContent);
var textures = from texture in xmlProducts.Descendants("Texture")
select new
{
itemId = (string)texture.Element("item").Value.PadLeft(10, '0'),
txArtist = (string)texture.Element("Artist").Value.PadLeft(10, '0'),
txPath = (string)"http://www.texturearchive.com/thumbs/" + texture.Element("class1").Value + "/" + texture.Element("class2").Value + "/" + texture.Element("item").Value.PadLeft(10, '0') + "_" + texture.Element("Artist").Value.PadLeft(10, '0') + "_tn.jpg"
};
//Bug: http://silverlight.net/forums/t/11147.aspx
List<Texture> texturesList = new List<Texture>();
foreach (var t in textures)
{
Texture pdt = new Texture { itemId = t.itemId, txArtist = t.txArtist, txPath = t.txPath };
texturesList.Add(pdt);
}
texturesDataGrid.ItemsSource = texturesList;
}
else
{
//errMessage.Visibility = Visibility.Visible;
texturesDataGrid.ItemsSource = null;
}
}
catch (Exception ex)
{
Console.Write(ex.Message);
}
}
public class Texture
{
public string itemId { get; set; }
public string txArtist { get; set; }
public string txclass1 { get; set; }
public string txclass2 { get; set; }
public string txPath { get; set; }
}

This same logic can be achieved from an ASPX page that renders the above output in XML format.
For example, the ASPX page should have the following code in the Page_Load event:
protected void Page_Load(object sender, EventArgs e)
{
    Response.Clear();
    Response.ContentType = "text/xml";
    Response.Write("Any XML string");
    Response.End();
}
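For option (ii), the WCF version of the same operation might look like the sketch below; the contract name, service name, and helper are illustrative and not taken from the original project. Hosted over basicHttpBinding, the generated Silverlight proxy is consumed in exactly the same way as the ASMX proxy above.

using System.ServiceModel;

[ServiceContract]
public interface ITextureService
{
    [OperationContract]
    string RetrieveTexture();
}

public class TextureService : ITextureService
{
    public string RetrieveTexture()
    {
        // Reuse the same MySQL query and XML-building logic shown in the ASMX sample above.
        return BuildTextureXml();
    }

    private string BuildTextureXml()
    {
        // Placeholder for the StringBuilder code shown earlier.
        return "<Textures></Textures>";
    }
}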

Saturday, May 24, 2008

.NET Programming Standards and Naming Conventions

Common .NET Naming Conventions

These are the industry-accepted standard naming conventions for J#, C# and VB.NET programs. For additional information, please see the MSDN help documentation and FX Cop. While individual naming conventions at organizations may vary (Microsoft only suggests conventions for public and protected items), the list below is quickly becoming the de-facto standard in the industry. Please note the absence of Hungarian Notation except in visual controls. These naming standards should find their way into all of your .NET development, including ASP.NET Web applications and .NET Windows Forms applications.

Note that while this document predates the online and printed standards documentation from Microsoft, everything below which indicates it is based on .NET library standards is consistent with that documentation. In areas where Microsoft has not provided guidance (Microsoft generally doesn't care what you do in private/non-exposed code; in fact, they aren't even consistent in their internal code in the .NET Framework), de facto standards have emerged, and I have captured them here.

The "ux" naming convention for controls is something I have added and found to be helpful. It is not based on any official standards, but instead based upon a multitude of projects by my teams and others, as well as on-line discussions on the topic. While I strongly recommend that you follow Microsoft guidelines when present, I encourage you to try out the items marked as extensions below and see how they work for you before committing to them.






Each entry below gives the type, the standard or convention it follows, why, and examples.

Namespaces
Standard Based Upon Microsoft .NET Library Standards
Pascal Case, no underscores. Use CompanyName.TechnologyName as the root. If you don't have a company, use your domain name or your own initials. Note that any acronyms of three or more letters should be Pascal case (Xml instead of XML) instead of all caps.
Why: This convention is consistent with the .NET Framework and is easy to read.
Examples: AppliedIS.TimeCard.BusinessRules, IrritatedVowel.Controllers, PeteBrown.DotNetTraining.InheritanceDemo, PeteBrown.DotNetTraining.Xml

Assemblies
Standard Based Upon Microsoft .NET Library Standards
If the assembly contains a single namespace, or has an entire self-contained root namespace, name the assembly the same as the namespace.
Why: This convention is consistent with the .NET Framework and is easy to read. More importantly, it keeps your assembly names and namespaces lined up, making it easy to figure out what is in any particular assembly and which assembly you need to reference for any given class.
Examples: AppliedIS.TimeCard.BusinessRules.dll, IrritatedVowel.Controllers.dll

Classes and Structs
Standard Based Upon Microsoft .NET Library Standards
Pascal Case, no underscores or leading "C" or "cls". Classes may begin with an "I" only if the letter following the I is not capitalized; otherwise it looks like an interface. Classes should not have the same name as the namespace in which they reside. Any acronyms of three or more letters should be Pascal case, not all caps. Try to avoid abbreviations, and try to always use nouns.
Why: This convention is consistent with the .NET Framework and is easy to read.
Examples: Widget, InstanceManager, XmlDocument, MainForm, DocumentForm, HeaderControl, CustomerListDataSet (typed dataset)

Collection Classes
Standard Based Upon Microsoft .NET Library Standards
Follow class naming conventions, but add Collection to the end of the name.
Why: This convention is consistent with the .NET Framework and is easy to read.
Example: WidgetCollection

Delegate Classes
Standard Based Upon Microsoft .NET Library Standards
Follow class naming conventions, but add Delegate to the end of the name.
Why: This convention is consistent with the .NET Framework and is easy to read.
Example: WidgetCallbackDelegate

Exception Classes
Standard Based Upon Microsoft .NET Library Standards
Follow class naming conventions, but add Exception to the end of the name.
Why: This convention is consistent with the .NET Framework and is easy to read.
Example: InvalidTransactionException

Attribute Classes
Standard Based Upon Microsoft .NET Library Standards
Follow class naming conventions, but add Attribute to the end of the name.
Why: This convention is consistent with the .NET Framework and is easy to read.
Example: WebServiceAttribute

Interfaces
Standard Based Upon Microsoft .NET Library Standards
Follow class naming conventions, but start the name with "I" and capitalize the letter following the "I".
Why: This convention is consistent with the .NET Framework and is easy to read. It also distinguishes classes from interfaces, which (unlike in VB6) are truly different beings. It avoids name collisions as well, as it is quite common to have IFoo and a class named Foo that implements IFoo.
Example: IWidget

Enumerations
Standard Based Upon Microsoft .NET Library Standards
Follow class naming conventions. Do not add "Enum" to the end of the enumeration name. If the enumeration represents a set of bitwise flags, end the name with a plural.
Why: This convention is consistent with the .NET Framework and is easy to read.
Examples: SearchOptions (bitwise flags), AcceptRejectRule (normal enum)

Functions and Subs
Standard Based Upon Microsoft .NET Library Standards
Pascal Case, no underscores except in event handlers. Try to avoid abbreviations; many programmers have a nasty habit of overly abbreviating everything, and this should be discouraged. Functions and subs must differ by more than case to be usable from case-insensitive languages like Visual Basic .NET.
Why: This convention is consistent with the .NET Framework and is easy to read.
Examples: VB: Public Sub DoSomething(...)  /  C#: public void DoSomething(...)

Properties and Public Member Variables
Standard Based Upon Microsoft .NET Library Standards
Pascal Case, no underscores. Try to avoid abbreviations. Members must differ by more than case to be usable from case-insensitive languages like Visual Basic .NET.
Why: This convention is consistent with the .NET Framework and is easy to read.
Examples: VB: Public Property RecordID As Integer  /  C#: public int RecordID

Parameters
Standard Based Upon Microsoft .NET Library Standards
Camel Case. Try to avoid abbreviations. Parameters must differ by more than case to be usable from case-insensitive languages like Visual Basic .NET.
Why: This convention is consistent with the .NET Framework and is easy to read.
Examples: VB: ByRef recordID As Integer  /  C#: ref int recordID

Procedure-Level Variables
Standard Based Upon De Facto Industry-Accepted Practices
Camel Case.
Why: This convention is consistent with the .NET Framework and is easy to read. It also avoids naming collisions with class-level variables (see below).
Examples: VB: Dim recordID As Integer  /  C#: int recordID;

Class-Level Private and Protected Variables
Standard Based Upon De Facto Industry-Accepted Practices
Camel Case with a leading underscore. In VB.NET, always indicate "Protected" or "Private"; do not use "Dim". Use of "m_" is discouraged, as is use of a variable name that differs from the property by only case, especially with protected variables, since that violates CLS compliance and will make your life a pain if you program in VB.NET, as you would have to name your members something different from the accessor/mutator properties.
Of all the items here, the leading underscore is really the only controversial one. I personally prefer it over straight underscore-less camel case for my private variables so that I don't have to qualify variable names with "this." to distinguish them from parameters in constructors or elsewhere where I am likely to have a naming collision. With VB.NET's case insensitivity, this is even more important, as your accessor properties will usually have the same name as your private member variables except for the underscore.
As far as m_ goes, it is really just about aesthetics. I (and many others) find m_ ugly, as it looks like there is a hole in the variable name. It's almost offensive. I used to use it in VB6 all the time, but that was only because variables could not have a leading underscore. I couldn't be happier to see it go away.
Microsoft recommends against the m_ (and the straight _) even though they did both in their code. Also, prefixing with a straight "m" is right out. Of course, since they code mainly in C#, they can have private members that differ only in case from the properties. VB folks have to do something else. Rather than try to come up with language-by-language special cases, I recommend the leading underscore for all languages that will support it.
If I want my class to be fully CLS-compliant, I could leave off the prefix on any C# protected member variables. In practice, however, I never worry about this, as I keep all potentially protected member variables private and supply protected accessors and mutators instead.
Why: In a nutshell, this convention is simple (one character), easy to read (your eye is not distracted by other leading characters), and successfully avoids naming collisions with procedure-level variables and class-level properties.
Examples: VB: Private _recordID As Integer  /  C#: private int _recordID;

Controls on Forms
An Extension to the Standards
In recent projects (since 2002 or so), I have taken to a single prefix for all my UI controls. I typically use "ux" (I used to use "ui", but it wasn't set apart well in IntelliSense). "ux" comes from my usual design abbreviations, where it means "User eXperience", which has also since become a popular acronym. I have found this to be extremely helpful in that I get the desired grouping in IntelliSense even better than if I use "txt", "lbl", etc. It also allows you to change combo boxes to text boxes and so on without having to change the names - something that happens often during initial prototyping, or when programming using highly iterative agile/XP methodologies.
Why: This convention avoids problems with changing control types (text boxes to drop-down lists, or a simple text box to some uber text box, or a text box to a date picker, for example), and groups the items together in IntelliSense. It is also much shorter than most Hungarian conventions, and definitely shorter and less type-dependent than appending the control type to the end of the variable name. I will use generic suffixes which allow me enough freedom to change them around.
Examples: "ux" prefix - uxUserID, uxHeader, uxPatientDateOfBirth, uxSubmit

Constants
Standard Based Upon Microsoft .NET Library Standards
Same naming conventions as public/private member variables or procedure variables of the same scope. If exposed publicly from a class, use PascalCase. If private to a function/sub, use camelCase. Do not use SCREAMING_CAPS.
Why: This convention is consistent with the .NET Framework and is easy to read. A sizable section of the Framework Design Guidelines is dedicated to why they chose not to go the SCREAMING_CAPS route. Using SCREAMING_CAPS also exposes more of the implementation than is necessary. Why should a consumer need to know whether you have an enum or (perhaps because they are strings) a class exposing public constants? In the end, you often want to treat them the same way and black-box the implementation. This convention satisfies that criterion.
Examples: SomeClass.SomePublicConstant, localConstant, _privateClassScopedConstant


Use the Network Service Account to Access Resources in ASP.NET

This How To shows you how you can use the NT AUTHORITY\Network
Service machine account to access local and network resources. By default on
Windows Server 2003, ASP.NET applications run using this account's identity. It
is a least privileged account with limited user rights and permissions. It does
have network credentials. This means that you can use it to authenticate against
network resources in a domain. This How To describes how you can use the Network
Service account to access server resources such as the Windows event log,
Windows registry, file system, and local and remote SQL Server databases.


By default, Microsoft Internet Information Services (IIS) 6.0 on Windows Server 2003 runs ASP.NET applications in application pools that use the NT AUTHORITY\Network Service account identity. This account is a least privileged machine account with limited permissions. An application that runs using this account has restricted access to the event log, registry, and file system. The account does have network credentials, which means you can use it to access network resources and remote databases by using Windows authentication. The network resources must be in the same domain as your Web server or in a trusted domain.

In some scenarios, using a custom domain service account is a better approach than using the Network Service account. You should use a custom domain service account if:

  • You want to isolate multiple applications on a single server from one another.
  • You need different access controls for each application on local and remote resources. For example, other applications cannot access your application's databases if access is restricted to your application's account.
  • You want to use Windows auditing to track the activity of each application separately.
  • You want to prevent any accidental or deliberate changes to the access controls or permissions associated with the general purpose Network Service account from affecting your application.

This How To shows you how you can use the Network Service account to access a variety of resources types including the event log, registry, file system, and databases.

Event Log Access

Applications that run using the Network Service identity can write to the event log by using existing event sources, but they cannot create new event sources because of insufficient registry permissions. If you call EventLog.WriteEntry and the specified event source does not exist, the method attempts to create the event source and a security exception is thrown.

Note : It is useful to use application specific event sources so that your application's events can easily be differentiated from other applications' events.

To enable your ASP.NET application to write to the event log using an event source that does not already exist, you have two options:

  • Create new event sources at application install time
  • Manually create a new event source entry in the registry.

Creating a New Event Source at Install Time

With this option, you create a specialized installer class that you run by using the install utility to create a new event source at install time when administrator privileges are available. You run the install utility using an administrator account so it has permission to create the new event source.

To create an installer class to create event sources

  1. Use Visual Studio .NET 2005 to create a class library project named InstallerClass.dll. Add a reference of System.Configuration.Install to the InstallerClass project.
  2. Name the class CustomEventLogInstaller, and derive it from System.Configuration.Install.Installer.
  3. Set the RunInstaller attribute for the class to true.
  4. Create a System.Diagnostics.EventLogInstaller instance for each new event log your application needs, and call Installers.Add to add the instance to your project installer class. The following sample class adds one new event source named customLog to the Application Event Log.

using System;
using System.Configuration.Install;
using System.Diagnostics;
using System.ComponentModel;

[RunInstaller(true)]
public class CustomEventLogInstaller : Installer
{
    private EventLogInstaller customEventLogInstaller;

    public CustomEventLogInstaller()
    {
        // Create an instance of 'EventLogInstaller'.
        customEventLogInstaller = new EventLogInstaller();
        // Set the 'Source' of the event log entry to be created.
        customEventLogInstaller.Source = "customLog";
        // Set the event log that the source is created in.
        customEventLogInstaller.Log = "Application";
        // Add customEventLogInstaller to the 'Installers' collection.
        Installers.Add(customEventLogInstaller);
    }

    public static void Main()
    {
    }
}

5. Compile the code for the InstallerClass.dll library.
6. Use an account with administrative privileges to run the InstallUtil.exe utility, supplying the name of
the DLL on the command line. For example, open the Visual Studio command prompt and enter the following command.

InstallUtil.exe \InstallerClass.dll

When the install utility is called with the installer class, it examines the RunInstallerAttribute. If this is true, the utility installs all the items in the Installers collection. This creates the specified event sources for your ASP.NET application.
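Once the source exists, code running as Network Service can write to it with a single call. A minimal sketch, assuming the "customLog" source created by the installer above:

using System.Diagnostics;
...
// Writes an entry using the pre-created event source; this succeeds under
// Network Service because the source already exists in the registry.
EventLog.WriteEntry("customLog", "Something interesting happened.", EventLogEntryType.Information);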

Manually Creating New Event Source Entry in the Registry

If you are unable to create an event source at installation time and the application is already deployed, an administrator should manually create a new event source entry beneath the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\

To manually create a new event source entry beneath this registry key

  1. Start the Registry Editor tool Regedit.exe.
  2. Using the Application Event log, expand the outline list in the left panel to locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application
  3. Right-click the Application subkey, point to New, and then click Key.
  4. Type a new event source name for the key name and press Enter.

The Network Service account can use the new event source for writing events.

Note You should not grant write permission to the ASP.NET process account (or any impersonated account if your application uses impersonation) on the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\ registry key. If you allow write access to this key and the account is compromised, the attacker can modify any log-related setting, including access control to the log, for any log on the system.

Health Monitoring

ASP.NET version 2.0 health monitoring writes to the Windows application event log to report significant lifetime and security events, if configured to do so. You can raise custom events in your code to write to the event log by using ASP.NET health monitoring. This approach does not use EventLog.WriteEntry, but you are restricted to using a predefined event source. For more information about health monitoring, see How To: Use Health Monitoring in ASP.NET 2.0.
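As a rough sketch of this approach (the event class and message are illustrative, not taken from the referenced How To), a custom event derives from WebBaseEvent and is raised from application code; the configured providers then route it to the Application event log:

using System.Web.Management;

public class OrderFailureEvent : WebBaseEvent
{
    // WebEventCodes.WebExtendedBase is the base value reserved for custom event codes.
    public OrderFailureEvent(string message, object eventSource)
        : base(message, eventSource, WebEventCodes.WebExtendedBase + 1)
    {
    }
}

// Raising the event from page or business code:
// new OrderFailureEvent("Order processing failed.", this).Raise();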

Registry Access

The Network Service account does not have write access to the registry. If your application needs to write to the registry, you must configure the necessary access control lists (ACLs) on the required registry keys.

Granting Registry Access to Network Service

In the following example, an application needs to change and display the name of the Internet time server that Windows is automatically synchronized with. An operator can change this setting by using the Internet Time tab from the Date and Time item in the Control Panel.

Your application needs to modify the following registry key: HKLM\SOFTWARE\ Microsoft\Windows\CurrentVersion\DateTime\Servers

To allow the Network Service account write access to the preceding registry key

You need to use an administrator account with permission to alter the registry security to perform the following steps:


  1. On the taskbar, click Start, and then click Run. Type regedit in the Open box, and then click OK.
  2. Expand the outline list in the left panel to locate the DateTime folder icon at the preceding registry path.
  3. Right-click the DateTime folder, and then click Permissions.
  4. In the Permission for Servers dialog box, click the Add button.
  5. In the Select Users, Computers, or Groups dialog box, type NETWORK SERVICE in the text box, and then click Check Names. The Network Service name will be underlined; this indicates that it is a valid security principal. Click OK.
  6. In the Permissions for Servers dialog box, click the Network Service user name from the list, and in the Permissions for NETWORK SERVICE section, click Advanced.
  7. In the Advanced Security Settings for Servers dialog box, click Network Service, and then click Edit.
  8. In the Permission Entry for Servers dialog box, select the Set Value and Create Subkey check boxes in the Allow column to permit write access. Click OK several times until the Permissions dialog box closes.

Note You should be careful while editing the registry because any mistake can lead to system instability.

Your ASP.NET application could now use code similar to the following sample to change and display the name of the Internet time server.

using Microsoft.Win32;
...
protected void Button1_Click(object sender, EventArgs e)
{
    // Change the time server.
    RegistryKey rk = Registry.LocalMachine.OpenSubKey(
        @"SOFTWARE\Microsoft\Windows\CurrentVersion\DateTime\Servers",
        true); // writable - this will fail without proper access
    string sDefault = (String)rk.GetValue("");
    int iDefault = Convert.ToInt32(sDefault);

    // This is an array of all the server names (requires Enumerate Subkeys permission).
    string[] sServers = rk.GetValueNames();

    // Move to the next server in the list, wrapping back to the first entry.
    iDefault++;
    if (iDefault >= sServers.Length)
        iDefault = 1;
    rk.SetValue("", iDefault.ToString());

    // Update the display.
    Response.Write(rk.GetValue(sServers[iDefault]).ToString());
}

File Access

The Network Service account has Read and Execute permissions on the IIS server root folder by default. The IIS server root folder is named Wwwroot. This means that an ASP.NET application deployed inside the root folder already has Read and Execute permissions to its application folders. However, if your ASP.NET application needs to use files or folders in other locations, you must specifically enable access.

Granting File Access to Network Service

To provide access to an ASP.NET application running as Network Service, you must grant access to the Network Service account.

To grant read, write, and modify permissions to a specific file


  1. In Windows Explorer, locate and select the required file.
  2. Right-click the file, and then click Properties.
  3. In the Properties dialog box, click the Security tab.
  4. On the Security tab, examine the list of users. If the Network Service account is not listed, add it.
  5. In the Properties dialog box, click the Network Service user name, and in the Permissions for NETWORK SERVICE section, select the Read, Write, and Modify permissions.
  6. Click Apply, and then click OK.

Your ASP.NET application can now write to the specified file.
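For example, once Network Service has Write and Modify permissions on the file, ordinary file I/O works as usual; the path below is only an example and should be replaced with the file you granted access to.

using System;
using System.IO;
...
// Appends a line to a file that the Network Service account has been granted access to.
File.AppendAllText(@"C:\AppLogs\orders.log", "Order processed." + Environment.NewLine);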

Note If you need to allow the same level of access to a file resource for all accounts that run ASP.NET applications (Network Service or a custom service account), you can grant access to the IIS_WPG group instead of specifically to the Network Service account. Any account used to run ASP.NET is required to be a member of the IIS_WPG group.

For more information about creating a custom account to run an ASP.NET application, see How To: Create a Service Account for an ASP.NET 2.0 Application.

SQL Server

ASP.NET applications should use Windows authentication while connecting to a database. By using Windows authentication, you avoid storing database credentials in connection strings and you avoid passing passwords over the network to the database server.
With Windows authentication, your application's process account is used by default for authentication. To be able to access a database, your account requires:

  • A SQL Server login on the database server.
  • Permissions to the required objects (for example, stored procedures, views, or tables) in the required database.

Granting Access to a Local SQL Server
When the SQL Server is on the Web server, you must create a database login for the NT AUTHORITY\Network Service account.


To access a local SQL Server database using Network Service

  1. Start SQL Server Enterprise Manager.
  2. Expand the folders in the left panel and locate the Security folder for your local SQL Server.
  3. Right-click Logins in the Security folder, and then click New Login.
  4. In the SQL Server Login Properties - New Login dialog box, in the Name box, enter NT AUTHORITY\NETWORK SERVICE. Accept the defaults for the other settings, and then click OK.
  5. Expand the Databases folders, and then expand the Pubs (or equivalent) database.
  6. Right-click Users, and then click New Database User.
  7. In the Database User Properties - New User dialog box, select the NT AUTHORITY\NETWORK SERVICE account.
  8. In the Permit in Database Role list, select the db_datareader check box.
  9. Click OK, and then close the SQL Server Enterprise Manager.


The Network Service account now has permission to read the data in the tables of the designated database.

In practice, your application's requirements may be more complex. For example, you might want to allow read access to certain tables and allow update access to others. The recommended approach to help mitigate the risk posed by SQL injection is to grant execute permissions to the Network Service account on a selected set of stored procedures and provide no direct table access.
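With the login and database permissions in place, the application connects using integrated security, so no credentials appear in the connection string. A minimal sketch, assuming the Pubs sample database used in the steps above:

using System.Data.SqlClient;
...
// The connection runs as the process identity: NT AUTHORITY\NETWORK SERVICE locally,
// or DomainName\MachineName$ when the SQL Server is on another machine.
string connectionString = "Server=localhost;Database=Pubs;Integrated Security=SSPI;";
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM authors", conn))
{
    conn.Open();
    int authorCount = (int)cmd.ExecuteScalar();
}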

Granting Access to a Remote SQL Server

If you are accessing a database on another server in the same domain (or in a trusted domain), the Network Service account's network credentials are used to authenticate to the database. The Network Service account's credentials are of the form DomainName\AspNetServer$, where DomainName is the domain of the ASP.NET server and AspNetServer is your Web server name.

For example, if your ASP.NET application runs on a server named SVR1 in the domain CONTOSO, the SQL Server sees a database access request from CONTOSO\SVR1$.

To access a remote SQL Server using Network Service

To grant access to a remote database server in the same domain or a trusted domain, follow the steps described earlier for a local database, except in step 4, use the DomainName\AspNetServer$ account to create the database login.
Note In production environments, you should place the network service account into a Windows group and create a SQL Server login for the Windows group.

Wednesday, May 21, 2008

Reflection in .NET

Building an Extensible Application

In the sections that follow, you'll see a complete example that illustrates the process of building an extensible Windows Forms application that you can augment via external assemblies. To serve as a road map, the extensible sample application includes the following assemblies:

  • CommonSnappableTypes.dll—This assembly contains type definitions that will be implemented by each snap-in as well as referenced by the extensible Windows Forms application.
  • CSharpSnapIn.dll—A snap-in written in C# that leverages the types of CommonSnappableTypes.dll.
  • Vb2005SnapIn.dll—A snap-in written in Visual Basic 2005, which leverages the types of CommonSnappableTypes.dll.
  • MyPluggableApp.exe—This Windows Forms application (which also leverages CommonSnappableTypes.dll) will be the entity that may be extended by the functionality of each snap-in. This application will make use of dynamic loading, reflection, and late binding to dynamically discover the functionality of assemblies of which it has no prior knowledge.

Building CommonSnappableTypes.dll

First you'll need to create an assembly that contains the types a given snap-in must leverage to plug into your extensible Windows Forms application. The CommonSnappableTypes class library project defines two such types:

using System;

namespace CommonSnappableTypes
{
    // All snap-ins must implement this interface.
    public interface IAppFunctionality
    {
        void DoIt();
    }

    // Optionally, snap-in designers may supply company information.
    [AttributeUsage(AttributeTargets.Class)]
    public sealed class CompanyInfoAttribute : System.Attribute
    {
        private string companyName;
        private string companyUrl;

        public CompanyInfoAttribute() {}

        public string Name
        {
            get { return companyName; }
            set { companyName = value; }
        }

        public string Url
        {
            get { return companyUrl; }
            set { companyUrl = value; }
        }
    }
}


The IAppFunctionality type provides a polymorphic interface for all snap-ins that the extensible Windows Forms application can consume. Because this example is purely illustrative, it exposes a single method named DoIt(). In a more realistic example, imagine an interface (or a set of interfaces) that allows the snap-in to generate scripting code, render an image onto the application's toolbox, or integrate into the main menu of the hosting application.

The CompanyInfoAttribute type is a custom attribute that snap-in creators can optionally apply to their snap-in. As you can tell by the name of this class, [CompanyInfo] allows the snap-in developers to provide some basic details about the component's point of origin. Notice that you can create custom attributes by extending the System.Attribute base class, and you can annotate them with the [AttributeUsage] attribute to define valid targets where developers can apply your attribute (limited to class types in the preceding code example).

Building the C# Snap-In

Next, you need to create a type that supports the IAppFunctionality interface. Again, to focus on the overall design of an extensible application, a trivial implementation is in order. Create a new C# code library named CSharpSnapIn that defines a class type named "TheCSharpModule." Given that this class must make use of the types defined in CommonSnappableTypes, be sure to set a reference to that assembly (as well as System.Windows.Forms.dll so you can display a pertinent message for the example).

using System;
using CommonSnappableTypes;
using System.Windows.Forms;

namespace CSharpSnapIn
{
    [CompanyInfo(Name = "Intertech Training",
        Url = "www.intertechtraining.com")]
    public class TheCSharpModule : IAppFunctionality
    {
        // Using explicit interface implementation, as only the
        // extensible app will need to obtain IAppFunctionality.
        void IAppFunctionality.DoIt()
        {
            MessageBox.Show("You have just used the C# snap in!");
        }
    }
}


Notice that I chose to make use of explicit interface implementation when supporting the IAppFunctionality interface. This is not required; however, the idea is that the only part of the system that needs to directly interact with this interface type is the hosting Windows Forms application. Also note that the code uses named property syntax to specify the values used to set the Name and Url properties of the [CompanyInfo] attribute.

Building a Visual Basic 2005 Snap-In

Now, to simulate the role of a third-party vendor who prefers Visual Basic 2005 over C#, create a new Visual Basic 2005 code library (Vb2005SnapIn) that references the same assemblies as the previous CSharpSnapIn project. The code behind this class type is again intentionally simple:

Imports System.Windows.Forms
Imports CommonSnappableTypes

' The Name:= argument of the CompanyInfo attribute was lost when the original post
' was published (the angle brackets were stripped); only the Url argument survives.
<CompanyInfo(Url:="www.ChuckySoft.com")> _
Public Class TheVb2005Module
    Implements IAppFunctionality

    Public Sub DoIt() Implements CommonSnappableTypes.IAppFunctionality.DoIt
        MessageBox.Show("You have just used the VB 2005 snap in!")
    End Sub
End Class

Notice that applying attributes in Visual Basic 2005 requires angle bracket syntax (< >) rather than the C#-centric square brackets ([ ]). Additionally, note that the code uses the Implements keyword at both the class and method level to add support for interface types.
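The hosting application's code is not shown in this excerpt. As a rough sketch (the class, method, and variable names are illustrative), MyPluggableApp.exe might dynamically load a snap-in assembly, discover types that implement IAppFunctionality, invoke them through late binding, and read the optional [CompanyInfo] metadata like this:

using System;
using System.Reflection;
using CommonSnappableTypes;

static class SnapInLoader
{
    public static void LoadSnapIn(string assemblyPath)
    {
        // Dynamically load the external assembly (e.g. CSharpSnapIn.dll or Vb2005SnapIn.dll).
        Assembly snapInAsm = Assembly.LoadFrom(assemblyPath);

        foreach (Type t in snapInAsm.GetTypes())
        {
            // Only concrete classes that implement IAppFunctionality count as snap-ins.
            if (t.IsClass && !t.IsAbstract && typeof(IAppFunctionality).IsAssignableFrom(t))
            {
                IAppFunctionality snapIn = (IAppFunctionality)Activator.CreateInstance(t);
                snapIn.DoIt();

                // Report the optional [CompanyInfo] details, if the author supplied them.
                foreach (CompanyInfoAttribute info in
                         t.GetCustomAttributes(typeof(CompanyInfoAttribute), false))
                {
                    Console.WriteLine("Snap-in provided by {0} ({1})", info.Name, info.Url);
                }
            }
        }
    }
}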

Hints and tips about the process of finding and fixing bugs

Debugging tips
Answering questions on the newsgroups, I've noticed that several developers seem to find debugging very difficult - not the mechanics of it, so much as knowing the right place to start. This is not to say that they are lazy or stupid - just that debugging is an art unto itself (arguably more so than writing code in the first place - it certainly involves more intuition in my view), and that a few pointers could be useful.
Making use of the techniques discussed on this page won't make you an ace bug-finder in itself - a mixture of patience, experience, intuition and good practice is needed - but my hope is that it can get you started along the right path. Note that although the page title is "Debugging", a lot of the time you may well not need to step through your code in a debugger in order to fix your code. If I'm trying to find a problem in my own code, without external dependencies such as other whole systems being involved, I usually regard it as a failure on my part if I need to use the debugger. It indicates that my code isn't clear enough and my unit tests aren't robust enough.

Reproduce the problem
The worst kind of problem is the kind you can't reliably reproduce. This is often down to issues such as race conditions, external systems, or environmental issues which create different patterns of behaviour on deployed systems and developer boxes. Often you won't know to start with whether or not a problem will be reproducible - and conversely, sometimes a problem which has been hard to reliably reproduce during diagnosis becomes easy to reproduce when it is well understood. In the latter case, when you have found the problem but not yet implemented the solution, if you can work out how to provoke it reliably, you should verify your diagnosis by causing the problem repeatedly.
The more specific you can be in the manner of reproduction, the better. This is often down to testers as well as developers - a good tester will describe steps to reproduce a problem in enough detail to let the developer reproduce it immediately. If the tester can't reproduce it but the developer later finds a way of demonstrating the problem every time, it's worth putting those details into whatever bug tracking system you use - this will help to verify the fix when it has been applied.
The Subversion project team have a good name for the steps required to reproduce a problem - they call it a recipe. (I wouldn't be surprised to find that they weren't the first project to use the term, but I haven't encountered it elsewhere.) I will adopt the term for the rest of this article. A good recipe should be:

  • Simple - the steps involved should all be required to reproduce the problem; the more steps that are required, the more avenues of investigation are opened. If a problem occurs in a multi-stage system (such as a log file being generated by one process, consolidated by another, and then consumed by a third) it is helpful to identify which process is at fault as early as possible and restrict most of the recipe to that system. It may be useful to give an "overall" picture of when the problem would occur in the real world, but that is more useful to a support team answering customer concerns than a developer trying to fix a problem.
  • Specific in its steps - if user input is required, sample values are useful. If the values are too long to be included and the problem involves the length rather than the actual data, it's worth trying to find out exactly what length is involved - a recipe stating that everything is fine with 60 characters in a field but things go wrong with 61 is very suggestive in terms of where to look. Where user input isn't involved, a sample of whatever the system consumes is very useful if it's available.
  • Specific in its environment - this is particularly important for web applications, where different browsers may exhibit different behaviours. State the operating system, browser type and browser version. It's usually not necessary to know what plug-ins etc are installed, but if the problem involves a plug-in (e.g. a Flash video isn't rendering properly) then versions of the relevant plug-ins are useful too.
  • Specific in the problem description - just saying that "the text looks wrong" or "the system breaks" isn't useful. Crash dumps, logs, screenshots with areas of concern highlighted, or whatever accurately describes the difference between how you expect it to behave and how it actually behaves - that's what will be really helpful.

Convert the problem into an automated test

This step isn't always possible, and sometimes it's possible but not practical - but in other situations, it's well worth doing. The automated test may take the form of a unit test, an integration test, or anything else your development context (company, open source project, whatever) uses for its testing. Where possible, a unit test is the ideal - unit tests are meant to be run frequently, so you'll find out if the problem comes back really quickly. The speed of unit test runs also helps when diagnosing the code at fault - if a test runs to completion in a tenth of a second, it doesn't matter if you end up using the debugger and putting the breakpoint just after you intended to - it'll take you longer to move the breakpoint than it will to get the debugger back to the right line.
You may well find yourself writing several unit tests which pass in the course of trying to discover the cause of the problem. I would usually encourage you to leave those unit tests after you've written them, assuming they should pass. Occasionally you may end up writing tests which make very little sense outside the current problem, but even so the "default position" should be to keep a test once it's present. Just because it's not the current problem doesn't mean it won't identify a problem later on - and it also acts as further documentation of how the code should behave. The working unit tests are often a good way to find which layer within a particular system is causing the problem. In a well-layered system, you should often be able to take the input of one layer and work out what its interaction with the layer below should be. If that passes, use that interaction directly as the input to the tests for the layer below, working your way down the system until something misbehaves.
Refactor your unit tests as aggressively as you'd refactor your production code. In particular, if you have a large unit test which is actually testing a relatively large amount of code, try to break it down into several tests. If you're lucky, you'll find that a large failing unit test which is based on the original recipe can be broken down so that you end up with several passing tests and a single, small failing test.
Even if you spot the problem early and can fix it in a single line, write a unit test to verify it wherever practical. It's just as easy to make mistakes fixing code as it is writing code to start with, and the test-first principles still apply. (I'm assuming a leaning towards test-driven development here, because I happen to find it useful myself. If you don't do TDD, I wouldn't expect you to necessarily agree with this step.)

Don't assume things work the way they're meant to

A certain amount of paranoia is required when bug-fixing. Clearly something doesn't work as it's meant to, otherwise you wouldn't be facing a problem. Be open-minded about where the problem may be - while still bearing in mind what you know of the systems involved. It's unlikely (but possible) that you'll find that the cause of the problem is a commonly-used system class - if it looks like System.String is misbehaving, you should test your assumptions carefully against the documentation before claiming to have found a definite bug, but there's always a possibility of the problem being external.
Be wary of debuggers. Sometimes they lie. I personally trust the results of logging much more than the display in a debugger window, just because I've seen so many problems (mostly on newsgroups) caused by inaccurate or misleading debugger output. I know this makes me very old-fashioned, but you should at least be aware of the possibility that the debugger is lying. I believe this problem was much worse with Visual Studio 2002/2003 than it is with 2005, but even so a certain amount of care is appropriate. In particular, bear in mind that evaluating properties involves executing code - if retrieving a property changes some underlying values, life can get very tricky. (Such behaviour is far from ideal in the first place, of course, but that's a different matter.)
The main point of this section is to discourage you from ever saying, "The problem can't be there." If you believe it's not, prove it. If it looks like it might be, however unlikely that may seem, investigate further.

Be clear in your mind about correct behaviour


Working with code you're unclear about is like wading through a swamp. You should attempt to get yourself on solid ground as quickly as possible, even if you know the next step will take you into another swamp.
If the purpose of a piece of code (which may or may not be the cause of the problem) is unclear to you, consider taking some time to investigate it a bit. Put some tests around it to see (and document) how it behaves. If you're not sure whether the behaviour is correct or not, ask someone. Either write a test case or possibly a recipe to reproduce the questionable behaviour, and find out what it's meant to do and why. Once it's clear to you, document it somehow - whether with a test, XML documentation, an external document, or just a link from the code to a document which already explains the behaviour.
If the correct behaviour is currently not clearly defined, consider whether or not it should be. It might not need to be - in some situations it's perfectly acceptable to leave things somewhat vague - this gives more wiggle room for a later change. However, that wiggle room should itself be documented somewhere - make it clear what outcomes are acceptable, what outcomes aren't, and why the correct behaviour isn't more tightly specified.

Fix one problem at a time

If a piece of code is particularly bad, you may well spot other problems while you're fixing the original one. In this case, it's important to choose which problem you're going to tackle first, and tackle just that problem. In an ideal world, you should fix one problem, check your code into version control, then fix the next one, and so forth. This makes it easy to tell at a later date exactly which change in code contributed to which change in behaviour, which is very important when a fix to one problem turns out to break something else.
This is a very easy piece of advice to ignore, and I've done so many times in the past - and paid the price. This situation is often coupled with the previous one, where you really aren't clear about exactly what a piece of code is meant to do. It's a dangerous situation, and one which should encourage you to be extra careful in your changes, instead of blithely hacking away. Once when I was in the middle of just such a mammoth debugging session, a good friend compared my approach to taking great big swords and slashing through a jungle of problems, with his favoured approach being a more surgical one, delicately changing the code just enough to sort out the problem he was looking at, without trying to fix the world in the process. He was absolutely right, which is no doubt one of the reasons for one of his projects - PuTTY - being so widely used. High quality code doesn't happen by accident - it takes patience and precision.

Get your code to help you


Logs can be incredibly useful. They can give you information from the field that you couldn't possibly get with a debugger. However, all too often I see code in newsgroups which removes a lot of vital information. If an exception is thrown in a piece of code which really should be rock solid, don't just log the message of the exception - log the stack trace. If there's a piece of code which "reasonably regularly" throws exceptions (for instance, if one system in a multi-system deployment is offline) then at least try to provide some configuration mechanism for enabling stack traces to be logged when you really need them to be.
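As a small, self-contained sketch of that advice (the wrapper class and the Trace call are illustrative stand-ins for whatever logging framework you use), log the whole exception rather than just its message:

using System;
using System.Diagnostics;

public static class SafeRunner
{
    public static void Run(Action work)
    {
        try
        {
            work();
        }
        catch (Exception ex)
        {
            // ex.ToString() includes the exception type, message, inner exceptions and
            // the full stack trace - far more useful in a field log than ex.Message alone.
            Trace.TraceError(ex.ToString());
            throw; // don't silently swallow it
        }
    }
}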
Your code should almost never silently swallow exceptions without logging them. Sometimes exceptions are the simplest way of validating user input or configuration information, even though that leaves a bad taste in the mouth - and in those cases a default value is often used and there's no need to log anything. That kind of situation is what I think of as an "expected exception". I know that to many people the whole concept of an exception which you know about in advance is anathema, but in a practical world these things happen. Aside from these situations, exceptions should very rarely be swallowed except at the top level of a stack - a piece of code which has to keep going even if something has gone wrong lower down. At that point, logging is truly your friend, even if there are good reasons why it would produce too much information if you had it permanently enabled. There are plenty of ways of making logging configurable. Use one, and make sure your support engineers know how to turn the logging on appropriately and how to extract the logs produced.
Similarly, make your code defensive against bad input. The earlier bad input is caught, the easier it is to find. If you only get an exception once a value has been processed by several methods, it can be very hard to track down the source of the problem. I used to be more aggressive about defending against values being inappropriately null etc on entry to methods. Recently I've become laxer about this because of the additional overhead it produces in terms of testing and coding, as well as the code being clear about what it's really trying to achieve, rather than what it's trying to prevent. In an ideal world, contracts would be significantly easier to express. I believe this will become more mainstream eventually, and at that point my code is likely to become more widely saturated with it. Until that time, I'm likely to add validation when I think there could be a problem. Once it's there, I generally leave the validation in the code, because if someone passes an invalid value in at some point in time, it's quite possible that the same kind of problem will occur again - at which point we may be able to avoid a lot of investigative work by catching the bad input early.

Learn your debugger's capabilities


Modern debuggers have many features which I suspect are underused. Some of them, such as conditional breakpoints and user-defined representations of data structures, can make the difference between a debugging session which lasts a whole day and one which lasts 10 minutes. You don't need to know everything about it off by heart, but an idea of what it's capable of and a good knowledge of the shortcut keys involved for the basics (e.g. step over, step into, set breakpoint, resume execution) are pretty much vital. It helps if everyone on the team uses the same shortcuts - unless you have a good reason to change the defaults, don't. (One such reason may be to make two different IDEs use the same set of shortcuts. For example, I usually make Eclipse use the same shortcuts as Visual Studio.)
This is one area I'm probably weak in, due to my attempts to usually avoid working in the debugger in the first place. I still believe that it's worth knowing the debugger reasonably well, and I still believe that you should usually only start stepping through code when you really need to, having tried to understand what it will do through inspection and by adding investigative unit tests. The debugger may not quite be a last resort, but I believe it shouldn't be far off. It depends on the situation, of course - the more control you have over the system, the less necessity there should be for using a debugger. If you're interoperating with other systems, you may well need to be running debuggers on all of them in order to make sure that the appropriate code triggers fire at just the right time.


Rest often, and involve other people


Debugging is hard. I have the utmost respect for those whose day to day work involves fixing other people's code more than writing their own. It can easily be demoralising, and working with a poorly documented quagmire of code can bring on physical tiredness quite easily. Take this into account: if you're stuck, take a break, have a walk, have a coffee, a chocolate bar - whatever helps you.
Don't feel ashamed if you need to get someone else involved - if behaviour seems impossible, or if you just don't know where to look, ask. Watch what the other person is doing, both to learn from them and to tell them if it looks like they're going down a blind alley you've already seen. (If they are, it may still be productive to go down it again - you may have missed something - but you should at least let them know that you think it may not be useful.)


Test, test, test

Once you believe you've fixed the problem, do everything you can (within reason) to test it. Obviously your first port of call should be unit tests - the unit tests you've added to isolate the problem should now pass, but so should all the rest of the tests. In addition, if there's a "system level" recipe for the problem, do your best to go through the recipe to verify that you've really fixed the problem. Quite often, a high-level problem is caused by several low-level ones - it's easy to find the first low-level one, fix it, and assume that means the high-level problem will go away. Sometimes quite the reverse happens - the high-level behaviour becomes worse because a more serious low-level problem comes into play.
If the problem was originally raised by a tester, it can often be useful to demonstrate the fixed behaviour to them on your development box to check that you really have done what was anticipated. This is particularly true of aesthetic issues in the UI. Collaboration at this point can be quick and cheap - if you mark a problem as fixed in your bug-tracking system, wait for the tester to retest and fail the fix, reassign it, then pick it up again, you may well find you're no longer in nearly as good a position to fix the problem, quite aside from the time wasted in administrative overhead. It's important to have good relationships with your test engineers, and the more confidence they have that you're not going to waste their time, the more productive you're likely to be with them. If you find you do have to go back and forth a few times, be apologetic about it, even if it would have been hard to avoid the problem. (At the same time, do press testers for more details if their recipes aren't specific enough. Developing a productive code/test cycle is a two-way process.)


Consider the bigger picture

Problems often come in groups. For example, if you find that someone has failed to perform appropriate escaping/encoding on user input in one situation, the same problem may well crop up elsewhere. It's much cheaper to do a quick audit of potential errors in one go than it is to leave each error to be raised as a separate problem, possibly involving different developers going through the same investigations you've just performed.
In a somewhat similar vein, consider other effects your fix may have. Will it provoke a problem elsewhere which may not be fixable in the short term? Will it affect patching or upgrades? Is it a large fix for this point in your product lifecycle? It's almost always worth getting to the bottom of what a problem is, but sometimes the safe thing to do is leave the code broken - possibly with a "band-aid" type of fix which addresses the symptoms but not the cause - but leave the full fix for the next release. This does not mean you should just leave the bug report as it is, however. If you've got a suggested fix, consider producing a patch file and attaching it to the bug report (making it clear which versions of which files need patching, of course). At the very least, give a thorough account of your investigations in the bug report, so that time won't be wasted later. A bug status of "Fix next release" (or whatever your equivalent is) should mean just that - it shouldn't mean, "We're unlikely to ever actually fix this, but we'll defer it one release at a time."

Monday, May 19, 2008

C#'s Reflection to Retrieve Application Information

Just as you can reflect on your own behaviour, it's possible to have a C# program reflect upon itself.
For example, you can have a class reflect upon itself, and this will tell you the methods or properties it contains. You'll find that being able to reflect on a program, a class, a type, or any other item enables you to take better advantage of it and its attributes.
The first step in this kind of reflection is to get the type of a type. You get the type of a class (or other type) using the static method Type.GetType, which takes the fully qualified name of the type as a string and returns a value that can be assigned to a Type object. (If the type is known at compile time, you can also use the typeof operator.) For example, to get the type of a class named TestClass and assign it to a Type object named MyTypeObject, do the following:


Type MyTypeObject = Type.GetType("TestClass");

MyTypeObject then contains the type for TestClass. You can use MyTypeObject to get the members of TestClass. This is done using the GetMembers method, which returns an array of MemberInfo objects. To call GetMembers on MyTypeObject (which contains the type of TestClass in this example), do the following:

MemberInfo[] MyMemberArray = MyTypeObject.GetMembers();

An array of MemberInfo objects is created named MyMemberArray. It is assigned the return value of the call to GetMembers for the type stored in MyTypeObject.
Now MyMemberArray contains the members of your type. You can loop through this array and evaluate each member. If you are completely confused, don't worry; the following listing pulls all this together. For fun, this listing reflects on a reflection-related class: the System.Reflection.PropertyInfo class.
The MemberInfo type is part of the System.Reflection namespace. You need to include System.Reflection to use the shortened version of the name.

using System;
using System.Reflection;

class MyMemberInfo
{
    public static int Main()
    {
        // Get the Type and MemberInfo.
        string testclass = "System.Reflection.PropertyInfo";

        Console.WriteLine("\nFollowing is the member info for class: {0}", testclass);

        Type MyType = Type.GetType(testclass);

        MemberInfo[] MyMemberInfoArray = MyType.GetMembers();

        // Display how many members there are, then each member's name and type.
        Console.WriteLine("\nThere are {0} members in {1}",
            MyMemberInfoArray.GetLength(0),
            MyType.FullName);

        for (int counter = 0; counter < MyMemberInfoArray.GetLength(0); counter++)
        {
            // counter + 1 so the list starts at 1, matching the output below
            Console.WriteLine("{0}. {1} Member type - {2}",
                counter + 1,
                MyMemberInfoArray[counter].Name,
                MyMemberInfoArray[counter].MemberType.ToString());
        }
        return 0;
    }
}

Here's the output of this listing:

Following is the member info for class: System.Reflection.PropertyInfo
There are 36 members in System.Reflection.PropertyInfo

1. get_CanWrite Member type - Method
2. get_CanRead Member type - Method
3. get_Attributes Member type - Method
4. GetIndexParameters Member type - Method
5. GetSetMethod Member type - Method
6. GetGetMethod Member type - Method
7. GetAccessors Member type - Method
8. SetValue Member type - Method
9. SetValue Member type - Method
10. GetValue Member type - Method
11. GetValue Member type - Method
12. get_PropertyType Member type - Method
13. IsDefined Member type - Method
14. GetCustomAttributes Member type - Method
15. GetCustomAttributes Member type - Method
16. get_ReflectedType Member type - Method
17. get_DeclaringType Member type - Method
18. get_Name Member type - Method
19. get_MemberType Member type - Method
20. GetHashCode Member type - Method
21. Equals Member type - Method
22. ToString Member type - Method
23. GetAccessors Member type - Method
24. GetGetMethod Member type - Method
25. GetSetMethod Member type - Method
26. get_IsSpecialName Member type - Method
27. GetType Member type - Method
28. MemberType Member type - Property
29. PropertyType Member type - Property
30. Attributes Member type - Property
31. IsSpecialName Member type - Property
32. CanRead Member type - Property
33. CanWrite Member type - Property
34. Name Member type - Property
35. DeclaringType Member type - Property
36. ReflectedType Member type - Property

Friday, May 16, 2008

Best Practices in ASP.net Development

1.1 Configuration
1.1.1 Web.config Configurations
Database connection: authentication method, server name, etc.

Exception configurations: exception policies (rules, providers, etc.).

Logging configurations: enabled flag, log file location, etc.

Caching configurations: enabled flag, maximum cache size, etc. (a sketch of these settings follows below)
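
For illustration, a minimal sketch of how these settings might appear in web.config; the key names, connection string, and file path below are hypothetical placeholders rather than values defined by the project:

<configuration>
  <connectionStrings>
    <!-- hypothetical connection string name and server -->
    <add name="MainDb"
         connectionString="Server=DBSERVER;Database=AppDb;Integrated Security=SSPI;"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <appSettings>
    <!-- hypothetical keys for the logging and caching flags listed above -->
    <add key="Logging.Enabled" value="true" />
    <add key="Logging.FilePath" value="C:\Logs\app.log" />
    <add key="Caching.Enabled" value="true" />
    <add key="Caching.MaxSizeMB" value="64" />
  </appSettings>
</configuration>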

1.1.2 DataBase Configurations
All configurations that might need to change after deployment will be stored in a database table to ease the maintenance process. Examples include default selected values in some of the drop-downs, minimum and maximum limits, the default number of results per page, the maximum number of results returned from a search, etc.

1.2 State Management
HTTP is a stateless protocol, meaning that the Web server treats each HTTP request for a page as an independent request; by default, the server retains no knowledge of variable values used during previous requests. As a result, a mechanism for maintaining cross-request state information is important for virtually every web-based application.



1.2.1 Server Side Approach
Server-side state management stores user state in server resources such as memory, files, or a database. In general it consumes server-side resources, but it is more secure and reliable and produces less network traffic.

1.2.1.1 Application State
Application state stores data in the server's memory. It provides fast access and is suitable for small amounts of data that are shared between all users and not updated frequently. It cannot be shared across a web farm, it is reset whenever web.config is updated, and the data stored in it is lost if the application restarts or stops.

The objects we typically store in application state: Resource Manager.

1.2.1.2 Session State
Session state identifies requests received from the same browser during a limited period of time as a session, and provides the ability to persist variable values for the duration of that session. It is suitable for storing small amounts of per-user, sensitive data on the server side. You do not want to store a lot of data for each web user, since that would consume too many server-side resources.

There are three different session state implementations:

In-Proc: stores session data in the web server's process. It cannot be shared across a web farm and does not survive an application stop or restart, but it provides the fastest access of the three implementations.

Out-Of-Proc: a State Server serves all web servers as the session store. Session data survives a web server stop or restart, but the stored objects must be serializable, and session data is lost if the State Server itself stops or restarts.

SQL Server: session data is stored in a shared database, and all stored objects must be serializable. The data survives web server recycles.

The objects we typically store in session: none. The reasons are:

· Secure and sensitive user data is typically persisted in the database where appropriate.

· Sensitive but non-persisted data is sent back to the client side as in-memory cookies under SSL, with additional encryption (keys are managed only on the server side).

· Non-sensitive user data is sent back to the user as hidden fields, view state, control state, or cookies.

If you do need to use session state, please verify it with the architect before you start, and always use Out-Of-Proc during the development phase: your web applications will then be able to scale in the future, and there is no impact if you later decide to deploy as In-Proc. A sketch of the relevant web.config setting follows.
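
A minimal sketch of that setting; the state connection string below assumes the ASP.NET State Service is running locally on its default port:

<system.web>
  <!-- Out-Of-Proc (StateServer) session state; assumes the local ASP.NET State Service on its default port -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=127.0.0.1:42424"
                timeout="20" />
</system.web>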

1.2.1.3 Cache
Cache stores data that can be shared between users; it cannot be shared across the web servers of a web farm. It is useful for large data that is accessed during a certain period, or data that needs to be refreshed periodically or when triggered by an external event. Please refer to the Caching section for more detail.

1.2.1.4 HttpContext Items
The HttpContext Items collection is a key-value collection that can be used to organize and share data between an HttpModule and an HttpHandler during a single HTTP request. It is a good place to store objects that require expensive database access but are not suitable for caching. For example, you may have a user profile that requires a lot of database access and is used by global.asax and your other web pages/controls. Between requests, however, users may update their profiles and persist the updates to the database, so you cannot cache it. Storing this type of object in the HttpContext Items collection reduces the database access, and the object only lives during one request, so you won't have to worry about it becoming stale.

The objects we typically store in HttpContext Items: User Profile.
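
A minimal sketch of this pattern; the UserProfile class, the LoadUserProfileFromDatabase helper, and the key name are hypothetical placeholders, not types defined by this project:

// Loads the profile at most once per request and shares it via HttpContext.Items.
public static UserProfile GetCurrentUserProfile()
{
    const string key = "CurrentUserProfile"; // hypothetical key name
    HttpContext context = HttpContext.Current;

    UserProfile profile = context.Items[key] as UserProfile;
    if (profile == null)
    {
        // The expensive database call (hypothetical helper) happens at most once per request.
        profile = LoadUserProfileFromDatabase(context.User.Identity.Name);
        context.Items[key] = profile;
    }
    return profile;
}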

1.2.1.5 Class Static Members
Static class members can be accessed anywhere within an AppDomain. If you have common configuration or data storage that needs to be shared by multiple pages and components, you can implement it once and reuse it across all of your web applications. Static members are ideal for read-only or infrequently changed data or objects, and they provide faster access than any of the other approaches.
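
A minimal sketch of a read-only static holder; the DefaultPageSize key and fallback value are assumptions for illustration (requires a reference to System.Configuration):

public static class AppSettingsCache
{
    // Read once when the type is initialized, then shared read-only across the AppDomain.
    // "DefaultPageSize" is a hypothetical appSettings key.
    public static readonly int DefaultPageSize =
        int.Parse(ConfigurationManager.AppSettings["DefaultPageSize"] ?? "25");
}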

1.2.1.6 Database
A database is commonly used to store user data that requires permanent persistence. It is a common approach that we do not need to explain here.

1.2.2 Client Side Approach
Client-side approaches do not consume server-side resources, and the stored data is available to requests that hit different web servers in a web farm. However, they increase the traffic between client and server and are more vulnerable than server-side approaches.

1.2.2.1 Cookie
Cookies store small amounts of user data (less than 4096 bytes) at the client side, either in memory or persisted on the user's disk. Cookies can have an expiration date, so they can be reused between sessions. The number of cookies per web site is limited, and cookies can be turned off by users. Cookies are sent back to the server with every single request, even when the request is for an image, so developers should use cookies sparingly to reduce network traffic.

1.2.2.2 Query String
Query strings are commonly used for passing parameters between pages. They are the most visible to the end user, so they definitely must not be used for sensitive data. Typically, we use query strings for user language choices, product IDs, return URLs, etc.

1.2.2.3 Hidden Fields
Hidden fields were a common approach in the past for storing small amounts of value-type data on the client side. They only survive across postbacks of the page.

1.2.2.4 View State
View state is implemented by ASP.NET as a hidden field. It can become a performance issue if developers do not use it properly, for example by storing huge amounts of data or unnecessary data in it. It persists data between requests, and between pages in certain scenarios (such as cross-page posts).

It is suggested that developers turn off view state when it is not used.

1.2.2.5 Control State
Control state cannot be turned off like view state. It is suitable for storing custom control data between server round trips, but it requires more programming effort than view state. Developers should only use this option when absolutely necessary.
1.2.2.6 State Management Comparison

Technology | Scenario | Scope | Durability | Requires Serialization
Static Members | In process, high performance cache. | Single AppDomain | None | No
ASP.NET Cache | In process, high performance caching. Good for scenarios that require specific caching features. | Single AppDomain | None | No
HttpContext Items | In process, high performance caching. Good for request scope scenarios. | Request | None | No
ASP.NET Application State | In process, application scope cache. Good for small amounts of shared data that require access by all users and high performance. | Single AppDomain | None | No
ASP.NET Session State (InProc) | User session scope cache. Good for small amounts of session data that require high performance. | User | None | No
ASP.NET Session State (StateServer) | User session scope cache. Good for sharing session data in a Web farm scenario where SQL Server is not available. | User/Web farm | Survives process recycles | Yes
ASP.NET Session State (SQL Server) | User session scope cache. Good for sharing session data in a Web farm scenario where SQL Server is available. | User/Web farm | Survives process recycles and server reboots | Yes
Database Server | Can be used for any scope requirement that demands high durability. | Organization | Survives server reboots | Yes
View State | To store small amounts of information for a page that posts back to itself. Using view state provides basic security. | Request, single page, same window | Not applicable | Yes
Control State | To store small amounts of information for a control that posts back to itself. | Request, single page, same window | Not applicable | Yes
Hidden fields | To store small amounts of information for a page that posts back to itself or to another page when security is not an issue. You can use a hidden field only on pages that are submitted to the server. | Request, single page, same window | Not applicable | Yes
Cookies | To store small amounts of information on the client when security is not an issue. | User, multiple pages, multiple windows | Subject to expiration rules on the client | Yes
Query strings | To transfer small amounts of information from one page to another when security is not an issue. You can use query strings only if you are requesting the same page or another page using a link. | Request, single page | Not applicable | Not applicable

Table 3: State Management Comparison

1.3 Exception Handling
1.3.1 Exception Handling Policies
Exceptions will be handled by using the Exception Handling Application Block, which is part of Enterprise Library 2.0. The following policies will be created in the configuration file, with the equivalent constants in the common projects.
Policy | Constant | Description
Database | DATABASE | Handles all database-related exceptions. Used with all database calls.
FileSystem | FILE_SYSTEM | Handles input/output operation exceptions. Used with all references to file system objects (e.g. finding an XSL file, saving a log, etc.).
WebService | WEB_SERVICE | Handles all exceptions raised by a web service call (authentication issues, timeouts, invalid XML, etc.). Used with all calls to external services.
UIReport | USER_INTERFACE_REPORT | Handles User Interface layer exceptions and reports the error to the user (after replacing it with a meaningful or standard error message).
UISupress | USER_INTERFACE_SUPRESS | Also handles user interface layer errors; it does not report the error to the user, but suppresses it and resumes (or retries) the operation.
Default | DEFAULT | The default policy, used by the system when no policy is specified.

Table 4: Exceptions Policies

All exceptions should be handled in a way similar to the image below.


Figure 45: Exception Handling code
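
For reference, a minimal sketch of that pattern using the Exception Handling Application Block; the data-access call is a hypothetical placeholder, and the policy name matches the Database policy defined above:

// using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;
public DataSet GetCustomerList()
{
    try
    {
        return customerDao.SelectAll(); // hypothetical data-access call
    }
    catch (Exception ex)
    {
        // Let the configured policy decide whether to log, wrap, replace or rethrow.
        bool rethrow = ExceptionPolicy.HandleException(ex, "Database"); // or the project's DATABASE constant
        if (rethrow)
        {
            throw;
        }
        return null;
    }
}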



1.3.2 Generic Best Practices
Proper use of exception handling is key to delivering high-quality solutions. Here is a list of generic best practices that apply to any .NET project:

· Never use exception handling to control application flow.

· Design proper exception handling policies by leveraging MS Enterprise Library (EntLib only provides the framework; it cannot replace an architect designing policies that suit the solution).

· Use structured exception handling rather than returned error codes.

· Only catch exceptions when required; catching an exception and doing nothing but re-throwing it hurts performance. Do not swallow exceptions either.

· When handling exceptions, avoid producing further exceptions.

· Use a finally block to release any resources that were used.

· Reduce the usage of Server.Transfer, Response.End and Response.Redirect(url, true); in those cases a ThreadAbortException will be thrown. Use Response.Redirect(url, false), or JavaScript, if you have to redirect.

1.3.3 ASP.NET Exception Handling Approach
ASP.NET applications have multiple layers at which unhandled exceptions that propagate from back-end layers can be dealt with.

Developers should always handle exceptions according to the exception handling policies within the different components; the approach described here is for unhandled exceptions only.

1.3.3.1 Application
There are two ways to handle exceptions at the web application level: using web.config, and using the Application_Error event handler.

Here is an example in web.config. Developers should use this approach for common HTTP error codes, and ensure that the pages redirected to will absolutely NOT generate further exceptions:
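
A minimal sketch of such a customErrors section; the error page names are hypothetical placeholders and should point at simple pages that cannot themselves throw:

<system.web>
  <customErrors mode="RemoteOnly" defaultRedirect="GenericError.aspx">
    <error statusCode="404" redirect="NotFound.aspx" />
    <error statusCode="500" redirect="ServerError.aspx" />
  </customErrors>
</system.web>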







Here is an example of the Application_Error event handler; you can define a proper exception policy and add the implementation to it:

protected void Application_Error(Object sender, EventArgs e)
{
    // get the last error raised
    System.Exception exception = Server.GetLastError();

    GameTracker.ExceptionHandler exceptionHandler = new GameTracker.ExceptionHandler(
        GameTracker.ExceptionHandler.ErrorLevel.Severe,
        exception,
        APP.Common.Environment.EXCEPTION_SOURCE,
        true);

    // log the exception in the event log;
    // exceptionHandler will send an email alert to the admin using settings in web.config
    exceptionHandler.LogException();

    // clear the exception
    // remove the ClearError() call if you intend to use custom error pages
    Server.ClearError();
}

1.3.3.2 Page
There are two ways to handle exceptions within a web form page: using the ErrorPage property of the page (similar to the web.config approach), or using the Page_Error event handler (similar to the Application_Error event handler approach).
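
A minimal sketch of the Page_Error approach; the policy name follows the UIReport policy defined above, and the redirect target is a hypothetical error page:

// using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;
protected void Page_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // Hand the error to the configured UI policy ("UIReport" per the table above).
    ExceptionPolicy.HandleException(ex, "UIReport");

    // Clear the error so the default ASP.NET error page is not shown.
    Server.ClearError();
    Response.Redirect("GenericError.aspx", false); // hypothetical error page
}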

1.4 Caching
A proper cache implementation within an ASP.NET web application will bring significant improvements. Below are the recommended caching guidelines that we are going to follow in the implementation:

· Choosing proper caching technologies

· Separate dynamic data from static data in your pages

· Configure the memory limit

· Refresh cache appropriately

· Cache the appropriate form of data

· Use output caching to cache relatively static pages

· Use VaryBy attributes for selective caching

· Avoid caching personalized content; instead, personalize in an alternative way, such as using a cookie and client-side JavaScript to display the user name on pages whose content is identical for all users except for the user name.

We consider the following caching approaches:

1.4.1.1.1 ASP.NET Output Cache
You can use two types of output caching to cache information that is to be transmitted to and displayed in a Web browser: page output caching and page fragment caching. Page output caching caches an entire Web page and is suitable only when the content of that page is fairly static. If parts of the page are changing, you can wrap the static sections as user controls and cache the user controls using page fragment caching.

You can use the page output cache to cache a variety of information, including:

· Static pages that do not change often and that are frequently requested by clients.

· Pages with a known update frequency. For example, a page that displays a stock price where that price is updated at given intervals.

· Pages that have several possible outputs based on HTTP parameters, and those possible outputs do not often change—for example, a page that displays weather data based on country and city parameters.

· Results being returned from Web services.

Caching these types of output avoids the need to frequently process the same page or results.

Pages with content that varies (for example, based on HTTP parameters) are classed as dynamic, but they can still be stored in the page output cache. This is particularly useful when the range of outputs is small.

Caching multiple versions:

VaryByParam—Lets you cache different versions of the same page based on the input parameters sent through the HTTP GET/POST.

VaryByHeader—Lets you cache different versions of the page based on the contents of the page header.

VaryByCustom—Lets you customize the way the cache handles page variations by declaring the attribute and overriding the GetVaryByCustomString handler.

VaryByControl—Lets you cache different versions of a user control based on the value of properties of ASP objects in the control.

You can use page output caching to locate the cache on the originating server, the client, or a proxy server (via the Location attribute).
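
A minimal sketch of a page output cache directive using VaryByParam; the country and city parameter names come from the weather example above, and the 60-second duration is an arbitrary assumption:

<%@ OutputCache Duration="60" VaryByParam="country;city" Location="Server" %>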

Use page fragment caching when you cannot cache the entire Web page. There are many situations that can benefit from page fragment caching, including:

· Page fragments (controls) that require high server resources to create.

· Sections on a page that contain static data.

· Page fragments that can be used more than once by multiple users.

· Page fragments that multiple pages share, such as menu systems.

1.4.1.1.2 .NET Cache Object
The Cache object provides additional functionality specifically designed to store transient data, such as .NET Framework objects and custom business entities. Cache features such as dependencies and expiration policies extend the capabilities of the ASP.NET cache.

When you add an item to the cache, you can define dependency relationships that can force that item to be removed from the cache under specific circumstances.

· File dependency—Allows you to invalidate a specific cache item when a disk-based file or files change.

· Key dependency—Invalidates a specific cache item when another cached item changes.

· SQL Server dependency - Invalidates a specific cached item when the underlying data in SQL Server changes.

· Aggregate cache dependency - Invalidates a specific cached item when any of its combined dependencies change.

Time-based expiration—Invalidates a specific cached item at a predefined time. The time for invalidation can be absolute—such as Monday, December 1, at 18:00—or sliding, which resets the time to expire relative to the current time whenever the cached item is accessed.
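
A minimal sketch of adding an item to the ASP.NET cache with a file dependency and a sliding expiration, from page code-behind; the cache key, file path, and ten-minute window are assumptions for illustration:

// Cached product list, invalidated when the backing XML file changes
// or after 10 minutes without being accessed.
string productFile = Server.MapPath("~/App_Data/products.xml"); // hypothetical file

DataSet products = new DataSet();
products.ReadXml(productFile);

Cache.Insert(
    "ProductList",                        // hypothetical cache key
    products,
    new CacheDependency(productFile),     // file dependency
    Cache.NoAbsoluteExpiration,
    TimeSpan.FromMinutes(10));            // sliding expiration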

1.4.1.1.3 HttpContext Items
Please refer to HttpContext Items section in state management.

1.4.1.1.4 Class Static Members
Please refer to Class Static Members section in state management.

1.5 Security
1.5.1 Overview
Security is one of the principal challenges of designing distributed applications, especially applications that cross multiple physical tiers separated by firewalls. The recommended security strategy is the defense-in-depth approach, which implements security at each level of the architecture. This approach relies on the assumption that if one aspect of the architecture is compromised, the other components remain secure, which isolates the security breach.

The notion of security has many dimensions in the context of an application, including:

· Identity authentication such as credential checking and certificate-based non-repudiation.

· Restricted access to data sources such as SQL Server.

· Restricted access to file system objects (that is, NTFS security).

· Maintaining data integrity through the use of secured channels and signed messages.

Additional considerations are:

· Restrictions on execution of business processes based on identity, role, and data.

· Restrictions on access to subsets of data.

· Restrictions on component instantiation.

· Restrictions on component method calls.

· Restrictions on component method call remoting.

· Protection against security breaches such as Trojan horse and denial-of-service attacks.

· Validation of user input at each level, to prevent SQL injection.

The application infrastructure often implements security mechanisms in addition to those provided by traditional infrastructure security components such as firewalls, proxies, and VPN.

· Restricted network traffic, including closing ports on firewalls and filtering in proxies.

· Maintaining data privacy through the use of secured channels, user access through a virtual private network (VPN), and encrypted message passing.

1.5.2 Mechanism
Security mechanisms should be implemented at every layer of the solution. In this document, we only address the application layer.

1.5.2.1 Authentication
Authentication is the process of verifying that a person who is trying to access the system is a valid user of the system. All authentication will be carried out centrally by one component.

In addition, some parts of the internet applications are anonymously accessible to public users.

1.5.2.2 Authorization
After users are identified by the system, the system needs to know what they can do within it. The process of defining user permissions, assigning roles to users, and verifying user permissions is authorization management.

As discussed in the conceptual design, Role Based Authorization will be used across the system.



1.5.2.3 Input Validation
All end-user input should be validated before it is sent back to the server side for processing, and the server side should validate all data sent in client requests before processing it further.

Each layer should implement data validation; do not assume the data is always valid when it is passed from other components. The common places to implement validation are:

· Pages: Query Strings, Form Fields, Cookies, etc

· UI controls: format, type, length, or nullable/empty.

· Service classes: type, nullable or empty, format

· Data tiers: within stored procedures, constraints, etc.

The layer closest to the front end should always check that input is valid and meaningful before passing it back to the back-end layers; this reduces the workload on the back-end layers and avoids unnecessary network traffic.
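
A minimal sketch of server-side validation combined with a parameterized query to keep SQL injection out; the connection string name, the allowed range, and the table and column names are hypothetical placeholders:

// Validate the query-string value before using it, then query with a parameter
// rather than string concatenation.
int productId;
if (!int.TryParse(Request.QueryString["productId"], out productId) ||
    productId < 1 || productId > 999999) // hypothetical allowed range
{
    throw new ArgumentException("Invalid product id.");
}

string connectionString =
    ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString; // hypothetical name

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(
           "SELECT Name, Price FROM Products WHERE ProductId = @ProductId", connection))
{
    command.Parameters.Add("@ProductId", SqlDbType.Int).Value = productId;
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        // ... read the results ...
    }
}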