
Friday 22 July 2011

Web Hosting and Web Site Launch


A web site is one of the most effective, and cost effective, ways to advertise your product and company in the global market. Many web hosting companies provide a world-class Internet presence to their customers, along with related services such as website design and email hosting.

Why web hosting service?

A web hosting service launches your website on the World Wide Web and makes it available to online users. There are two kinds of web hosting services: those that are free of charge and those that are paid. A free web hosting service, as the name suggests, launches your company website without any fee to the service provider, but in return the site usually has to display the hosting company's advertisements and banners. Free hosting is popular with customers, but it comes with limitations that make it less attractive, chiefly limited disk space and limited data transfer.

A paid web hosting service is usually the better option. However, if you run a small organization and do not want to spend much on hosting, free web hosting may still be a reasonable choice.

Organizations with enough resources to invest in an extended website should opt for a paid web hosting service, since it provides more space and more advanced services from the provider. If you are going to spend a substantial amount of money on hosting, make sure you get a top-quality service for your organization. A good website is an impressive introduction to the company, so make sure your company website not only looks polished but is also well managed. A web hosting service should not only launch your site on the global Internet but also manage it well afterwards; website management is a task that requires real efficiency and skill.

Find the best web host for yourself!

Look for a company that gives you the best service at a modest price. Finding the right web hosting service for your company website is possible if you carefully read reviews of various web hosting companies, and a sound decision based on those reviews is your best route to a top web hosting service.

There are various kinds of web hosting arrangements, such as shared hosting, reseller hosting, and plans defined mainly by server space and bandwidth. Adequate server space is needed to hold all the pages and files that make up your website, while shared hosting means your site sits on a server alongside other customers' sites at a lower cost. So take enough time to decide which web hosting service is best for your organization's website.

Web Hosting for Small Companies


A small business's needs will be very different from those of a larger business. It may need less space and may not need some of the features that web hosting companies typically offer to larger customers. By looking at the different features that are available, a small business owner will be able to determine which features they need and which ones they consider unnecessary.

Technical support is something that needs to be considered when the business owner is looking at web hosting companies. Companies lose money when something goes wrong with their website, and this can be devastating to a small business, so it is important to know that the web host is available and will respond to any concerns quickly. Many web hosting companies have a twenty-four-hour support line that can be reached if trouble arises. It is also important to make sure that the technical support actually provided is helpful and useful.

Bundled software should also be considered when choosing a web hosting company. The type of specialized software the small business needs should be decided before entering into a commitment with any web host. These packages can include things such as a content management system or shopping cart software. Some web hosts provide control panel software that includes a component called Fantastico, which lets the user easily install many different types of software on the website.

Editing tool and script support can be very important for a small business’ website. Many web hosting companies offer easy to use design and editing tools. For companies that are using FrontPage for their website, it is important to make sure that the web hosting company they decide on supports FrontPage extensions. It’s also important to make sure that the web hosting company is also compatible with other script languages such as PHP, ASP, and Perl, to name just a few.

Small businesses should also consider uptime and speed when researching different web hosting companies. A good web hosting company should guarantee uptimes of ninety-nine percent. Small business owners should also check the information about their data centre. This is because it’s important to know that they have high-speed connections to the Internet backbone.

Due diligence is also important when the small business owner is searching for a web hosting company. The first thing that should be done by the small business owner is to check out the web host’s website. Some things that should be watched for are awards and seals of approval. These are sometimes given out by magazines. It’s also important to look for the Internet Better Business Bureau seal. One can also search the Internet to find reviews of the different web hosting companies. These reviews can often be found on web hosting forums, where other individuals will give a detailed version of their experience with a certain company. One or two bad reviews are okay but one should be wary of a company that only has bad reviews, and no good reviews to balance it out.

Web Hosting Guide


Choosing and buying a reliable web hosting solution is an important decision. Whether you are doing business online, providing important information or sharing views on a common interest, you need a reliable web hosting service that allows online visitors to browse your site effortlessly. Only solid hosting lets your website be loaded, browsed and updated in minimal time.

Trying to identify a web host can be a daunting task, especially when there are so many available nowadays and all of them promise one thing or another. Hence, it is crucial that before you jump in, you do your own homework and research to select the most appropriate web hosting company for your website.

With the changing trends in technology, web hosts are also changing, and most of them provide various services in addition to their basic ones. If you are running an e-commerce website, for example, you need high-end security and a medium through which you can manage your web content efficiently. There are many tools that facilitate this; however, if your web hosting service is not reliable, you can miss out on serious revenue and prospective clients.

Once you have determined and identified what web hosting services you require for your online business, it is then time to enlist certain web hosting features and options you must consider. You can find below some of the most important aspects of web hosting:

Disk space and bandwidth

You should know roughly how much space your website will need and how much data traffic it will generate. Disk space is the amount of storage assigned to you by the hosting provider, while bandwidth is the amount of traffic allowed to and from your website. If your website has a lot of graphics, you will need more storage and greater bandwidth.
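As a rough illustration of how these two quantities relate (every figure below is an assumption for illustration, not a quota from any real host), a back-of-the-envelope estimate of monthly transfer might look like this:

// Back-of-the-envelope bandwidth estimate; all numbers are illustrative assumptions.
double averagePageSizeMB = 1.5;   // average page weight including graphics, in MB
int dailyVisitors = 400;          // expected visitors per day
int pagesPerVisit = 5;            // average pages viewed per visit
double monthlyTransferGB = averagePageSizeMB * dailyVisitors * pagesPerVisit * 30 / 1024;
Console.WriteLine("Estimated monthly transfer: {0:F1} GB", monthlyTransferGB);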

Programming tools and the OS

You need to be sure that your website is hosted on secure servers running an up-to-date operating system. Most web hosts run on a UNIX-based operating system, usually Linux or BSD. If your web applications require ASP, .NET, MS SQL or SBS, you will need a Windows-based host.

Pricing Aspect

Compare pricing before you settle on a web hosting service. Some providers offer better service at lower prices; the best hosting is not always the most expensive. Do your research and then decide.

Support, Security, Guaranteed uptime and Backups

Security and backups are two very important aspects you need to consider. You should always choose a web hosting service with reliable telephone support; some also offer 24/7 support through local or toll-free numbers. If you are running an ecommerce website, security is one aspect you simply cannot ignore. Your provider should monitor things around the clock and make sure no intruder can compromise your site. After all, it is your website, and it is well worth looking into this aspect of web hosting.

Web Hosting in Today’s World


In the Internet-dominated commerce of the 21st century, it makes sense to expand your business online as well. For that, you need the help of a top web hosting service. Such a service gives you your own space on the World Wide Web on a web host's server and allows people using the Internet to visit your website and do business with you. You can host your own website and include every feature you need, but the major drawback of that approach is that it is expensive and requires a high level of technical skill to maintain the host server. For this reason, many people use other web hosts, which offer the required web space and features at lower prices. With innumerable hosts vying for customers these days, you have to be very careful in choosing the right service provider. It is highly recommended that you use a reputable web host even though it may be a bit more expensive than a local one.

You must assess whether your current business really needs a dedicated server, as this type of server is expensive and similar work can often be done on a shared server, which is usually the cheaper option. The uptime average of your web host should be 99.9%, as a lower rate would be detrimental to your business. Opt for a service provider that offers a money-back policy in case you decide to pull out of a particular venture. A simple way to check a web host's credentials is to compare the information given to you through different channels such as e-mail, telephone and fax; if there are conflicting statements, it is best to steer away from the company. The bandwidth your website requires is also of prime importance, since paying for more bandwidth than you actually need is a simple waste of money. For an average business, a bandwidth of between 500 MB and 1 GB will be more than enough. Unlimited data transfer at extremely low prices usually comes with clauses that we tend to ignore, so be careful when paying for this feature.

A company providing the best web hosting will offer 24/7 support for your site, and it is worth confirming this at the time of signing the contract. Reading web hosting reviews will also help you choose the right provider, so dedicate some time to it before finalizing your deal. The control panel offered for your site must be simple to use, and the reputation of the service provider must be carefully checked so that you do not end up suffering from a bad deal later.

TIPS ON WEB DESIGNING


Web designing involves creativity. Most people think it is all about how the design looks. However, a good web designer should not only consider how the finished product will look. Although looks are the first thing people notice, there are other essentials that web designers should think about, and prioritize, when designing a website.

In order to be sure that your web designing is going through the right path, make sure it is TUFF.

Time efficient

Do not cram your website with so many graphics and details that it causes loading problems. The last thing you want is a very beautiful website with no visitors. Admit it: people are impatient. They do not want to wait for your site to finish loading when they can view other sites in a matter of seconds, and no amount of graphics will make them stay. So keep the graphics and content lean to avoid slow loading.

Utilize CSS

Table-based layouts are on the way out; CSS is the way to go. It addresses accessibility, makes styles reusable and keeps file sizes relatively small. CSS gives you much finer control over how the website looks. Beginners, especially non-programmers, can start by learning how to style hyperlinks, bullets and numbered lists, and how to modify text.

Fits All

People should be able to access your website. Think about whether the graphics you are using will fit every screen resolution, or at least the majority of resolutions in use. Your design effort is wasted if visitors' screens cannot do justice to what you have done. Also keep in mind that technology evolves quickly, so your design should keep working as new technology arrives, or at least be easy to adjust when it does. And, most importantly, do not limit your website to just one or two browsers.

Friendly to users

This means the site should be easy to navigate and the layout easy to understand. After waiting, the thing people dislike most is not knowing what to do or where to go. Unless they really need your site badly, they will not spend any effort figuring out how it works if it is not obvious at first glance.

SIMPLE EXPLANATION FOR WEBHOSTING


Advertising is a huge factor for all kinds of businesses; after all, reaching potential customers is the key to their success. Many have turned to different forms of mass media, such as television and radio, to promote their products, and the Internet is no exception. Companies have built websites to further promote the goods and services they offer.

On the topic of putting up websites, web hosting needs a full explanation, because web hosting is what allows a publisher to put a website on the World Wide Web; there is no other way. No matter how small or big it is, every website needs to be hosted before it can appear on the net. Even if a website has only one page, web hosting is unavoidable if you want it to be viewable.

So, how does web hosting work? The principle is that a website must first be allotted space on a server. The server is physical equipment that stores data, much like a computer's own storage, only on a far larger scale. The server then connects the website stored on it to the Internet, which means a website cannot be hosted without a server. Of course, a website must first be built before it can be stored on a server. Small websites may contain only text and image files, whereas larger ones can include audio and video files. The larger the website, the more server space it occupies and the more bandwidth it consumes.

TOP ADVANTAGES OF VPS HOSTING


VPS or Virtual Private Server hosting is a technique in which a single physical server is partitioned into a number of virtual servers. In this way, one physical machine does the work of several servers.

To make this system work flawlessly, each virtual server is given dedicated resources that make it possible for it to run an Operating System independently.

VPS hosting is best suited to small and medium businesses, and it has several advantages for the right users. For the end user, the advantage is obviously one of cutting costs without compromising features. Since a single physical server is sliced into a number of virtual servers, the expense of server resources is shared. The host controls how much RAM, disk space, CPU time and so on is allotted to each virtual server, and each virtual server works independently. For small and medium businesses, the cost of a dedicated server may make it unviable; VPS hosting is the answer.

The strength of VPS hosting lies in each virtual server's ability to work independently even though there is only one physical server. Each virtual server can be loaded with its own resources, and because the servers are partitioned, a problem in one of them will not affect the others. If one of the virtual servers crashes, the others continue to function.

In VPS hosting, each virtual server is given its own resources, so there is no competition for them; this means VPS hosting can handle high volumes of traffic without compromising performance. Each virtual server is blocked from accessing the resources of other virtual servers.

Unlike shared hosting, where the resources of a single server are shared among users and heavy use by one of them slows data transfer for everyone, in VPS hosting the volume of traffic at one virtual server does not affect data transfer at another. Every virtual server works in complete isolation.

Due to these advantages, most small to medium businesses opt for VPS hosting services. These businesses are too large for shared hosting but cannot afford dedicated hosting, and in such cases VPS hosting is the best solution.

CREATING YOUR OWN INCREDIBLE WEBSITE


It wasn't that long ago that the only people who messed around online and with websites were total eggheads. That just isn't the case any more. The whole idea has become so much more user friendly, and every Tom, Dick, and Harry has their own website now. You don't need to be a computer big shot anymore; really, anyone with a working knowledge can get things going online.

This is an incredible opportunity for everyone to put themselves out there and let people know what they are selling or merely what they think about things. In this new world, web hosts tend to be super user friendly allowing all to create incredible looking web pages and even to increase their rankings in Google searches.

Your web host will help you with all the fun creative aspects if the thought of putting together a website makes you nervous. Your family will get a kick out of keeping up with your exploits on your cool new website. Now if you are thinking more along the lines of business then you may need to go for something a little more complex than a free web host. When you do pay it isn't too much, but it means everything.

You have to understand that when you pay that fee to your web host you are getting a guarantee of service that is nonexistent with the free ones. You want someone you can depend on. For a business this is crucial: sometimes you will need help immediately, and you need to get it. For a casual website it isn't such a big deal if things don't go exactly as you'd like; you can always do something different, or switch web hosts. Switching like that is not really an option for a business site.

You will be happy with a web host that is a good fit. You will get the website you dreamed of without pulling out your hair, and you will get past whatever stumbling blocks come up because you will not be alone.

FREE WEBSITE HOSTING


Websites are used by organizations, and also by individuals, to introduce new ideas, products and services to the public. They are mainly used to get more attention from surfers and easier access to future customers, and owners make their websites distinctive so that people recognize them easily. But running a website involves expenses, which is why many people prefer a free website host they can register with.

Free web hosting provides enough space for you to use, and you can personalize that space using the tools the host makes available. The host will assist and support you in building your website. FTP (File Transfer Protocol) support lets you copy files from your own machine across the network to the system hosting your website, which makes the site easy to create and manage. You can also extend your front page so you have enough room to introduce your website.
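As a hedged sketch of what FTP support makes possible (the host name, credentials, paths and file names below are placeholders, not details of any real provider), a file can be uploaded to a hosting account from .NET code:

// Minimal FTP upload sketch using System.Net.FtpWebRequest (requires System.Net and System.IO).
// The host, credentials and paths are placeholders.
FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example-host.com/public_html/index.html");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("username", "password");
byte[] fileContents = File.ReadAllBytes(@"C:\site\index.html");
using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(fileContents, 0, fileContents.Length);
}
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
Console.WriteLine("Upload finished: {0}", response.StatusDescription);
response.Close();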

Free web hosting often comes with unlimited bandwidth, so you can transfer as much information as you want to put into your website. It may include free site promotion, so your website can be submitted to popular search engines, and free stat and site tools that tell you where the bulk of your visitors are coming from. Another common feature is the absence of file-type and size limits, meaning there are no restrictions on the files you transfer to your website. Fast and reliable servers allow your website to load quickly and bring you more website traffic.

Security Authentication


Security is a broad topic. Secure communication is an integral part of securing your distributed application to protect sensitive data, including credentials, passed to and from your application, and between application tiers.
There are many technologies used to build .NET Web applications. To build effective application-level authentication and authorization strategies, you need to understand how to fine-tune the various security features within each product and technology area, and how to make them work together to provide an effective, defense-in-depth security strategy. This guide will help you do just that.
There are two general types of techniques for verifying that an object or artifact is authentic.
The first is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos.
The second type relies on documentation or other external affirmations. For example, the rules of evidence in criminal courts often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. External records have their own problems of forgery and perjury, and are also vulnerable to being separated from the artifact and lost.

Authentication VS Authorization:
The process of authorization is sometimes mistakenly thought to be the same as authentication; many widely adopted standard security protocols, obligatory regulations, and even statutes make this error. However, authentication is the process of verifying a claim made by a subject that it should be allowed to act on behalf of a given principal (person, computer, process, etc.). Authorization, on the other hand, involves verifying that an authenticated subject has permission to perform certain operations or access specific resources. Authentication, therefore, must precede authorization.
For example, when you show proper identification credentials to a bank teller, you are asking to be authenticated to act on behalf of the account holder. If your authentication request is approved, you become authorized to access the accounts of that account holder, but no others.
Even though authorization cannot occur without authentication, the former term is sometimes used to mean the combination of both.
To distinguish "authentication" from the closely related "authorization", the short-hand notations A1 (authentication), A2 (authorization) as well as AuthN / AuthZ (AuthR) or Au / Az are used in some communities.
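In ASP.NET terms, the distinction shows up as two separate checks. A minimal sketch, assuming Forms or Windows authentication is already configured and that an "AccountHolders" role exists (both are assumptions for illustration), inside a page's code-behind:

// Authentication answers "who is the caller?"; authorization answers "what may the caller do?".
if (!User.Identity.IsAuthenticated)
{
    // The subject has not yet proved who it is.
    Response.Redirect("~/Login.aspx");
}
else if (!User.IsInRole("AccountHolders"))
{
    // Authenticated, but not authorized for this resource.
    Response.StatusCode = 403;  // Forbidden
}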
IP address:

An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication.[1] An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there."[2]

The designers of the Internet Protocol defined an IP address as a 32-bit number,[1] and this system, known as Internet Protocol Version 4 (IPv4), is still in use today. However, due to the enormous growth of the Internet and the predicted depletion of available addresses, a new addressing system (IPv6), using 128 bits for the address, was developed in 1995,[3] standardized as RFC 2460 in 1998,[4] and has been deployed worldwide since the mid-2000s.

IP addresses are binary numbers, but they are usually stored in text files and displayed in human-readable notations, such as 172.16.254.1 (for IPv4) and 2001:db8:0:1234:0:567:8:1 (for IPv6).

The Internet Assigned Numbers Authority (IANA) manages the IP address space allocation globally and delegates to five regional Internet registries (RIRs) the allocation of IP address blocks to local Internet registries (Internet service providers) and other entities.
IPv4 private addresses
Early network design, when global end-to-end connectivity was envisioned for communications with all Internet hosts, intended that IP addresses be uniquely assigned to a particular computer or device. However, it was found that this was not always necessary as private networks developed and public address space needed to be conserved.

Computers not connected to the Internet, such as factory machines that communicate only with each other via TCP/IP, need not have globally-unique IP addresses. Three ranges of IPv4 addresses for private networks were reserved in RFC 1918. These addresses are not routed on the Internet and thus their use need not be coordinated with an IP address registry.
Any user may use any of the reserved blocks. Typically, a network administrator will divide a block into subnets; for example, many home routers automatically use a default address range of 192.168.0.0 - 192.168.0.255 (192.168.0.0/24).
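The three reserved blocks are 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. A small sketch that tests whether an IPv4 address falls inside one of them (IsPrivateIPv4 is an illustrative helper, not a framework API):

// Returns true if the given IPv4 address is in one of the RFC 1918 private ranges.
// Assumes an IPv4 address (4 bytes); IsPrivateIPv4 is our own helper, not part of .NET.
static bool IsPrivateIPv4(System.Net.IPAddress address)
{
    byte[] b = address.GetAddressBytes();               // e.g. 192.168.0.10 -> {192, 168, 0, 10}
    return b[0] == 10                                    // 10.0.0.0/8
        || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)     // 172.16.0.0/12
        || (b[0] == 192 && b[1] == 168);                 // 192.168.0.0/16
}

// Example: IsPrivateIPv4(System.Net.IPAddress.Parse("192.168.0.10")) returns true.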

Methods
          Static IP addresses are manually assigned to a computer by an administrator. The exact procedure varies according to platform. This contrasts with dynamic IP addresses, which are assigned either by the computer interface or host software itself, as in Zeroconf, or assigned by a server using Dynamic Host Configuration Protocol (DHCP). Even though IP addresses assigned using DHCP may stay the same for long periods of time, they can generally change. In some cases, a network administrator may implement dynamically assigned static IP addresses. In this case, a DHCP server is used, but it is specifically configured to always assign the same IP address to a particular computer. This allows static IP addresses to be configured centrally, without having to specifically configure each computer on the network in a manual procedure.
        In the absence or failure of static or stateful (DHCP) address configurations, an operating system may assign an IP address to a network interface using state-less auto-configuration methods, such as Zeroconf.

Uses of dynamic addressing
Dynamic IP addresses are most frequently assigned on LANs and broadband networks by Dynamic Host Configuration Protocol (DHCP) servers. They are used because they avoid the administrative burden of assigning specific static addresses to each device on a network, and they allow many devices to share limited address space if only some of them will be online at a particular time. In most current desktop operating systems, dynamic IP configuration is enabled by default, so a user does not need to manually enter any settings to connect to a network with a DHCP server. DHCP is not the only technology used to assign dynamic IP addresses; dial-up and some broadband networks use the dynamic address features of the Point-to-Point Protocol.

Modifications to IP addressing

IP blocking and firewalls
            Firewalls perform Internet Protocol blocking to protect networks from unauthorized access. They are common on today's Internet. They control access to networks based on the IP address of a client computer. Whether using a blacklist or a whitelist, the IP address that is blocked is the perceived IP address of the client, meaning that if the client is using a proxy server or network address translation, blocking one IP address may block many individual computers.
IP address translation
          Multiple client devices can appear to share IP addresses: either because they are part of a shared hosting web server environment or because an IPv4 network address translator (NAT) or proxy server acts as an intermediary agent on behalf of its customers, in which case the real originating IP addresses might be hidden from the server receiving a request. A common practice is to have a NAT hide a large number of IP addresses in a private network. Only the "outside" interface(s) of the NAT need to have Internet-routable addresses.[8]
Most commonly, the NAT device maps TCP or UDP port numbers on the outside to individual private addresses on the inside. Just as a telephone number may have site-specific extensions, the port numbers are site-specific extensions to an IP address.


Secure by SSL and Client Certificates:

SSL:

The Secure Sockets Layer (SSL) is a protocol designed to provide encrypted communications on the Internet. It uses a combination of symmetric-key and public-key cryptography to create a secure connection.
SSL secures transactions, preventing eavesdropping, tampering, and impersonation. It provides encryption, tampering detection, and authentication.
The Secure Sockets Layer protocol is a protocol layer which may be placed between a reliable connection-oriented network layer protocol (e.g. TCP/IP) and the application protocol layer (e.g. HTTP). SSL provides for secure communication between client and server by allowing mutual authentication, the use of digital signatures for integrity, and encryption for privacy.
The protocol is designed to support a range of choices for specific algorithms used for cryptography, digests, and signatures. This allows algorithm selection for specific servers to be made based on legal, export or other concerns, and also enables the protocol to take advantage of new algorithms. Choices are negotiated between client and server at the start of establishing a protocol session.

Session Establishment

The SSL session is established by following a handshake sequence between client and server, summarized below (Figure 1: Simplified SSL Handshake Sequence). This sequence may vary, depending on whether the server is configured to provide a server certificate or request a client certificate. Though cases exist where additional handshake steps are required for management of cipher information, this article summarizes one common scenario; see the SSL specification for the full range of possibilities.

The elements of the handshake sequence, as used by the client and server, are listed below:
1. Negotiate the Cipher Suite to be used during data transfer
2. Establish and share a session key between client and server
3. Optionally authenticate the server to the client
4. Optionally authenticate the client to the server
The first step, Cipher Suite Negotiation, allows the client and server to choose a Cipher Suite supportable by both of them. The SSL3.0 protocol specification defines 31 Cipher Suites. A Cipher Suite is defined by the following components:
Key Exchange Method
Cipher for Data Transfer
Message Digest for creating the Message Authentication Code (MAC)
These three elements are described in the sections that follow.
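Before looking at each component, here is a hedged client-side sketch of the handshake using the .NET SslStream class ("www.example.com" is a placeholder host); once AuthenticateAsClient completes, the negotiated cipher suite components can be inspected:

// Client-side handshake sketch; requires System.Net.Sockets and System.Net.Security.
TcpClient client = new TcpClient("www.example.com", 443);
SslStream ssl = new SslStream(client.GetStream());
ssl.AuthenticateAsClient("www.example.com");   // performs the SSL/TLS handshake
Console.WriteLine("Key exchange: " + ssl.KeyExchangeAlgorithm);
Console.WriteLine("Cipher: " + ssl.CipherAlgorithm + " (" + ssl.CipherStrength + " bit)");
Console.WriteLine("Hash (MAC): " + ssl.HashAlgorithm);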



Encryption

Encryption is the translation of readable information, called plaintext, into an unreadable form, called ciphertext. To read the ciphertext, you must have the key that translates or decrypts the ciphertext to the original plaintext.
Cryptography systems are largely classified as using either symmetric-key cryptography or public-key cryptography. The SSL protocol employs both techniques.

Symmetric-key Cryptography
Symmetric-key cryptography uses a single secret key that both the sender and recipient have. Symmetric-key systems are simple and fast, but their main drawback is that the two parties must somehow exchange the secret key in a secure way.
Public-key Cryptography

Public-key cryptography uses a pair of keys that work together to encrypt and decrypt information. One key is freely distributed (the public key). The sender uses the public key to encrypt messages to the recipient. The other key is kept secret (the private key). The recipient uses his or her private key to decrypt messages from the sender. The private key will only work with its corresponding public key. The public key and corresponding private key are sometimes referred to collectively as the key pair.
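A minimal sketch of both techniques using the .NET crypto classes (the key size and sample message are arbitrary choices for illustration):

// Requires System.Security.Cryptography and System.Text.
byte[] message = Encoding.UTF8.GetBytes("order #1234");

// Symmetric: one shared secret key both encrypts and decrypts.
using (Aes aes = Aes.Create())
{
    ICryptoTransform encryptor = aes.CreateEncryptor();
    byte[] symmetricCipherText = encryptor.TransformFinalBlock(message, 0, message.Length);
    // Decrypting requires the same aes.Key and aes.IV on the other side.
}

// Asymmetric: the public key encrypts, only the matching private key decrypts.
using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(1024))
{
    byte[] rsaCipherText = rsa.Encrypt(message, false);   // encrypt with the public key
    byte[] recovered = rsa.Decrypt(rsaCipherText, false); // decrypt with the private key
}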
Covalent SSL and Encryption

The SSL protocol uses both public-key and symmetric-key techniques to securely transfer information. Covalent SSL allows your Covalent Enterprise Ready Server (SSL-enabled server) and browsers (SSL-enabled clients) to use encryption to establish and conduct secure SSL sessions.

SSL Session

After the SSL handshake establishes the encrypted connection, the SSL session begins. During this phase, the server and client transmit the message contents. The faster symmetric session key encrypts the messages.
To detect whether the data was altered en route during the SSL session, a message digest helps verify the integrity of the message. The message digest is also protected by the encryption techniques negotiated during the handshake. (See "Tamper Detection" for more information.)
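As a rough sketch of the idea (this is not the exact MAC construction SSL uses internally), a keyed digest lets the receiver detect alteration:

// Requires System.Security.Cryptography and System.Text.
// sessionKey stands in for the shared secret agreed during the handshake.
byte[] sessionKey = Encoding.UTF8.GetBytes("shared-session-key");
byte[] payload = Encoding.UTF8.GetBytes("transfer $100 to account 42");
using (HMACSHA1 hmac = new HMACSHA1(sessionKey))
{
    byte[] mac = hmac.ComputeHash(payload);
    // The sender transmits payload + mac; the receiver recomputes the MAC over the
    // received payload and rejects the message if the two values differ.
}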


Ciphers and Cipher Suites
Mathematical algorithms called ciphers perform encryption. Each of the encrypted exchanges in the SSL handshake and SSL session can use different types of ciphers. These ciphers have an identifying algorithm name.
Encryption Strength

Encryption strength is defined in part by the length of the keys used to perform the encryption. Key length is measured in bits; a greater number of bits provides a higher level of security. A private key with 1024-bit encryption is stronger than a private key with 512-bit encryption. A session key with 128-bit encryption is significantly stronger than a session key with 40-bit encryption. Encoding with a 128-bit session key is commonly referred to as "strong encryption".
Because the client (browser) may or may not support higher levels of encryption, the client and server negotiate the strongest cipher suite available to both during the SSL handshake.
If you want to communicate only with browsers that support the strongest ciphers, you can exclude the weaker ciphers through the SSLCipherSuite and SSLProxyCipherSuite directives. If you do so, be sure that browsers accessing your site can support the ciphers you specify.
Server Certificates and Certificate Authorities
Private Key, Public Key and Temporary Server Certificate

You begin by using Covalent SSL to generate a temporary server certificate and its corresponding private key. The private key and temporary server certificate are always generated together because the certificate contains the corresponding public key.
The temporary certificate is signed with your server's private key (self-signed) and is valid for 30 days. Browsers won't automatically trust the temporary certificate, but you can use it to verify certificate contents and test secure HTTPS connections to your site.

Certificate Signing Request (CSR)

Covalent SSL also generates a Certificate Signing Request (CSR). The CSR is an unsigned version of the server certificate. You submit the CSR to the Certificate Authority of your choice for verification and signing.

CSR Processing and Certificate Installation

After the CA processes your CSR, which can take several business days, the CA will sign your server certificate with its private key. Use the Covalent SSL Certificate and Key Management Tool to install the signed certificate on your server.
After you install the CA-signed certificate, you are ready to conduct secure transactions. Browsers that access your Covalent SSL-secured site will examine your server certificate and authenticate your site, then proceed to transmit information safely and securely.

Certificate Expiration

When the CA signs your certificate, they also encode an expiration date. The certificate's expiration date is normally one year from the date of issue. To ensure that the certificate remains valid, be sure you renew the certificate with the CA prior to the expiration date.
SSL and Virtual Hosts
The Covalent Enterprise Ready Server allows you to secure multiple sites using Covalent SSL and its virtual host feature. To do so, you must generate a private key and install a server certificate for each host you want to secure. Because SSL negotiation occurs before the server host name is resolved, you must configure IP-based virtual hosts; name-based virtual hosts cannot be used with the SSL protocol.

SSL Handshake

The SSL handshake establishes the encrypted connection. This is accomplished in part by authenticating the server to the client. Authentication involves digital certificates, which employ public-key encryption techniques. (See "Authentication" for more information.)
During the SSL handshake, the server and client exchange a symmetric session key. The session key itself is encrypted using public-key techniques, so only the intended recipient can decrypt it.

An SSL handshake is performed to set up a secured channel. The main process is:
1.  The server presents a server certificate to the client, and the client authenticates the server.
2.  The client generates a premaster secret, encrypts it with the server's public key (contained in the server certificate), and sends it to the server.
3.  The server decrypts the encrypted premaster secret.
4.  Client and server both generate a master secret from the premaster secret, and then generate session keys from the master secret. The session keys are symmetric keys used to encrypt and decrypt information exchanged during the SSL session and to verify its integrity.
After this handshake, the server and client can communicate using encrypted messages that can only be decrypted by each other.

What Does a Client Need to Do to Use SSL?

A client using SSL could be a web browser or a web service proxy. For a browser client, just make sure the URL of the server starts with "https://" instead of "http://". For a web service client, make sure that when the proxy class is generated, say using wsdl.exe, the URL used starts with "https://"; otherwise wsdl.exe cannot find the WSDL of the web service and the generation will fail. Apart from these concerns, there is nothing else a client has to do to use SSL.
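For a generated proxy, for example, the only change is the endpoint URL (a hedged sketch; Service1 and the host reuse the placeholder names from the code later in this article):

// The generated proxy exposes a Url property inherited from SoapHttpClientProtocol;
// pointing it at an https:// address is enough for the call to travel over SSL.
Client.localhost.Service1 s = new Client.localhost.Service1();
s.Url = "https://www.example.com/Service1.asmx";
Console.WriteLine(s.Hello());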

Client Certificate

A client certificate is used for two purposes:
1.       Prove the identity of the client (more precisely, of his or her browser) to servers that accept client certificates. The client no longer needs a user name and password to log on to the server.
2.       Sign or encrypt the user's email. For most email applications, such as Outlook or Eudora, signing or encrypting a single email or all emails is just a matter of one button click or one security setting. If user A wants to send an encrypted email to user B, A should acquire B's public key (most email applications have a facility to store recipients' digital IDs) and encrypt the email with it. When the email reaches B, he can decrypt it with his private key.
The principle and authentication process of a client certificate are almost the same as for a server certificate. The differences are:
1.       The client browser sends the client certificate to the server, while the server sends the server certificate to the client browser;
2.       A server is stationary: it resides on a fixed IP address, which is stated in the server certificate. Therefore, if someone steals the server certificate of IBM and installs it on his own server, the client browser finds out that the originating IP address of the server is different from the one on the certificate.
In comparison, a client is mobile, and the email address in a client email's signature is also easily forgeable. Because of this, the client authentication process has one step that differs from server authentication: the client browser is required to sign a piece of randomly generated data and send it along with the client certificate. The server is then able to verify, using the public key contained in the client certificate, that this piece of data was signed by the matching private key. On the assumption that only the real client has this private key, the server can be sure that the connection came from the real client.

Using a Client Certificate

Installing your client certificate on your browser
You go through a similar process to apply for a client certificate from a CA; again, trial certificates are usually provided for free. To install it, go to "Tools | Internet Options | Content tab page | Certificates button | Personal tab page | Import" and browse to the certificate file.
Setting server to ignore, accept or require client certificate
IIS | “Default Web Sites” | right-click the virtual directory that needs to use SSL | “Properties” menu | “Directory Security” tab page | “Secure Communications” group | “Edit” button | “Client Certificates” group | select the corresponding button.
Setting web service proxy to send client certificate
A client browser that has a client certificate installed knows to send the certificate to the server, but a web service proxy does not know about the certificate. Therefore, you have to export the certificate to a file and then assign it to the proxy. To export the certificate to a file in IE 6.0: Tools | Internet Options | Content | Certificates | Personal tab page | select the certificate that you want to export | click the "Export..." button.
To assign it to the proxy:

// Requires System.Security.Cryptography.X509Certificates.
Client.localhost.Service1 s = new Client.localhost.Service1();
X509Certificate cert = X509Certificate.CreateFromCertFile("c:/TrialId.cer");
// Attach the exported client certificate so it is sent with each request.
s.ClientCertificates.Add(cert);
Console.WriteLine(s.Hello());
Acquiring certificate details at server side
By acquiring the details of the client certificate, the server can find out the identity of the user.

[WebMethod]
public string Hello()
{
    // Return the Subject field of the client certificate sent with the request.
    return Context.Request.ClientCertificate.Subject;
}
The output should be something like:

E = frank_liu_silan@hotmail.com
CN = Silan (Frank) Liu
OU = Digital ID Class 1 – Microsoft
OU = Persona Not Validated
OU = www.verisign.com/repository/RPA Incorp. by Ref.,LIAB.LTD98
OU = VeriSign Trust Network
O = VeriSign, Inc.
If we want to acquire the email address or the user name, for example, we should parse this string.
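A minimal parsing sketch (assuming the comma-separated "key = value" layout shown above, and run inside the web method where Context is available):

// Pull the e-mail (E) and common name (CN) fields out of the Subject string.
string email = null, userName = null;
string subject = Context.Request.ClientCertificate.Subject;
foreach (string part in subject.Split(','))
{
    string[] pair = part.Split(new char[] { '=' }, 2);
    if (pair.Length != 2) continue;
    string key = pair[0].Trim();
    if (key == "E") email = pair[1].Trim();
    if (key == "CN") userName = pair[1].Trim();
}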

Mapping client certificates to Windows accounts
We can map client certificates to Windows accounts so that we do not need to acquire the certificate details and parse them ourselves. To set up the server to map client certificates, go to IIS | "Default Web Sites" | right-click the virtual directory that needs to use SSL | "Properties" menu | "Directory Security" tab page | "Secure Communications" group | "Edit" button | tick the "Enable client certificate mapping" check box | "Edit" button.
There are two ways of mapping. One-to-one mapping maps one client certificate file to one user account; obviously you should have the client certificate file (.cer) at hand, and IIS does not check whether the user account actually exists or the password is correct. Many-to-one mapping maps any client certificate whose specified fields contain a given substring. For example, we can decide that any client certificate whose "Subject" field's "O" subfield begins with "Sealand Consulting" (by using the criterion "Sealand Consulting*") maps to the user account "fliu2000/Administrator".

Conclusion

With the ingenious invention of public/private key cryptography and the involvement of a trustworthy third party such as VeriSign, secure connections and identity verification are made possible across the Internet, which is the most important cornerstone of today's booming e-commerce industry.

ERROR HANDLING


ASP.NET provides rich support for handling and tracking errors that might occur while applications are running. When you run an ASP.NET application and an error occurs on the server, an HTML error page is generated and displayed in the browser. By default, a generic error message, "Application Error Occurred", is displayed to users.
To see the error details, one of the following needs to be done:
Access the page again from the local server.
Modify the configuration settings of the computer.
Modify the configuration settings of the application's Web.config file to enable remote access.
Following is a sample of the Web.config file that you can modify:
<configuration>
<system.web>
<customErrors mode="Off"/>
</system.web>
</configuration>
In this code, the <customErrors> tag has a mode attribute whose value is set to "Off". This value indicates that remote users always see the original error message that is generated on the server.
Using custom error pages:
An HTML error page is displayed to the user when an error occurs on the server. These generic messages are secure because they do not leak any sensitive information, but you can also create custom error pages to be displayed when errors occur. For example, you can create an error page that shows the company's branding along with the error messages you want to display. To implement custom error pages:
Create a web page that you want to display as the error message page. This can be a page with an .html or .aspx extension.
Modify the Web.config file of your application to point to the custom page in the event of any error. The configuration settings shown here point to a file called MyError.aspx:
<configuration>
<system.web>
<customErrors mode="RemoteOnly" defaultRedirect="MyError.aspx"/>
</system.web>
</configuration>
When you modify the Web.config file to set the defaultRedirect attribute, the user is directed to the same custom error page irrespective of the type of error. You can also specify particular error pages, such as "Page not found" or "Server error", for specific status codes, as shown in the following code:


<configuration>
<system.web>
<customErrors
defaultRedirect="http://host1/MyError.aspx" mode="RemoteOnly">
<error statusCode="500"
redirect="http://host1/pages/support.html"/>
<error statusCode="403"
redirect="http://host1/pages/access_denied.html"/>
</customErrors>
</system.web>
</configuration>
In this code, the error tag takes two attributes, statusCode and redirect. The statusCode attribute represents the HTTP status code, and the redirect attribute points to the error message file.
The ASP.NET Trace functionality:
In ASP.NET, the trace feature lets programmers instrument their applications by providing the means to monitor and examine program behavior and performance, either during development or after deployment. ASP.NET allows tracing at two levels:
Page-level tracing
Application-level tracing
Page-level Tracing:
ASP.NET makes it easy to debug and test applications by providing a trace capability. When tracing is enabled, ASP.NET automatically provides the following functionality:
Creates a table of performance data and appends it to the end of the ASP.NET page.
Allows a developer to add custom diagnostic messages in the code wherever required.
Basically, there are two ways to generate trace statements in a page:
1. Use code written within the page file.
2. Use an HTML editor.
While generating trace statements, you add custom trace messages to the trace log. Then, with the help of an HTML editor, you can present those messages and other trace information in a better manner.
You will now write an ASP.NET page that generates trace statements. Both Visual Studio .NET and Notepad can be used to write the code; in this case, Notepad is used to create the ASPX file.
Open Notepad and type the following code:


<%@ Page Language="VB" Trace="True" %>
<html>
<head>
<title>Trace Demo</title>
</head>

<Script runat="server">
Public Function Addition(FNum As Integer, SNum As Integer) As Integer
    ' Write a message and a warning (shown in red) to the trace log.
    Trace.Write("Inside Addition() FNum:", FNum.ToString())
    Trace.Warn("Inside Addition() SNum:", SNum.ToString())
    Return FNum + SNum
End Function
</Script>

<body>
Calling the Addition function: 10 + 5 = <%=Addition(10,5)%>
</body>
</html>

Save the file with an .aspx extension in a Web directory on the Web server; in this case, the file is named TraceStat.aspx. Execute TraceStat.aspx. The Trace.Write and Trace.Warn statements generate the trace output. The Addition function takes two integer values and returns their sum as an integer. In the calling statement, the Addition function is invoked between the <% and %> delimiters used for inline ASP.NET code.
Application-level tracing:
Application-level tracing is enabled through the Web.config file. This file is also used to enable the ASP.NET framework to gather HTTP request information for the entire application. Application-level tracing does not present the trace information in the browser; instead, it can be displayed in a web-based trace viewer application. The trace viewer shows trace information for a sequence of requests to the application, which makes it necessary to keep the data for each request in memory while tracing is enabled. This is done by a TraceContext class that participates in HTTP execution. Opening the root Web.config file and looking at the tracing section, the following code can be seen:
<configuration>
<system.web>
<trace enabled="false" requestLimit="10" pageOutput="false"
traceMode="SortByTime"/>
</system.web>
</configuration>

Asp Page Directives


ASP.NET page directives are part of every ASP.NET page. Page directives are instructions, inserted at the top of an ASP.NET page, that control the behavior of the page. In effect they are a mixed bag of settings describing how a page should be rendered and processed.

Here’s an example of the page directive.
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Sample.aspx.cs" Inherits="Sample" Title="Sample Page Title" %>

In total there are 11 types of page directives in ASP.NET 2.0. Some directives are essential, and without them we cannot develop web applications in ASP.NET; others are used only occasionally, as needed. Directives can be located anywhere in an .aspx or .ascx file, though standard practice is to include them at the beginning of the file. Each directive can contain one or more attributes (paired with values) that are specific to that directive.
The ASP.NET Web Forms page framework supports the following directives:

1. @Page
2. @Master
3. @Control
4. @Register
5. @Reference
6. @PreviousPageType
7. @OutputCache
8. @Import
9. @Implements
10. @Assembly
11. @MasterType



@Master Directive

The @Master directive is quite similar to the @Page directive. The @Master directive belongs to master pages, that is, .master files. A master page is used in conjunction with any number of content pages, so the content pages inherit the attributes of the master page. Even though the @Page and @Master directives are similar, the @Master directive has fewer attributes, as follows:

a. Language: This attribute tells the compiler about the language being used in the code-behind. Values can represent any .NET-supported language, including Visual Basic, C#, or JScript .NET.

b. AutoEventWireup: For every page there is an automatic way to bind events to methods in the same .master file or in the code-behind. The default value is True.
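A typical @Master directive, written in the same format as the @Page example earlier (the file and class names are placeholder assumptions):

<%@ Master Language="C#" AutoEventWireup="true" CodeFile="Site.master.cs" Inherits="SiteMaster" %>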

@Control Directive

The @Control directive is used when we build an ASP.NET user control. It helps us define the properties to be inherited by the user control, and these values are assigned to the user control as the page is parsed and compiled. The attributes of the @Control directive largely mirror those of the @Page directive, such as Language and AutoEventWireup.


@Register Directive

The @Register directive associates aliases with namespaces and class names for use in custom server control syntax. When you drag and drop a user control onto one of your .aspx pages, Visual Studio 2005 automatically creates an @Register directive at the top of the page. This registers the user control on the page so that the control can be accessed on the .aspx page by a specific name.
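For instance, a registration for a hypothetical user control might look like this (the TagPrefix, TagName and Src values are placeholders of the kind Visual Studio would generate):

<%@ Register TagPrefix="uc" TagName="Header" Src="~/Controls/Header.ascx" %>

The control can then be placed on the page as <uc:Header ID="Header1" runat="server" />.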

@Reference Directive

The @Reference directive declares that another ASP.NET page or user control should be compiled along with the current page or user control. The attributes for the @Reference directive are:

a. Control: User control that ASP.NET should dynamically compile and link to the current page at run time.

b. Page: The Web Forms page that ASP.NET should dynamically compile and link to the current page at run time.

c. VirtualPath: Specifies the location of the page or user control from which the active page will be referenced.
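For example, a hedged sketch of referencing a user control so it can later be loaded dynamically with LoadControl (the path is a placeholder):

<%@ Reference Control="~/Controls/Header.ascx" %>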

ADVANCED ISSUES


Email:

Email is a computer based method of sending messages from one computer user to another. These messages usually consist of individual pieces of text which you can send to another computer user even if the other user is not logged in (i.e. using the computer) at the time you send your message. The message can then be read at a later time. This procedure is analogous to sending and receiving a letter.

When mail is received on a computer system, it is usually stored in an electronic mailbox for the recipient to read later. Electronic mailboxes are usually special files on a computer which can be accessed using various commands. Each user normally has their individual mailbox.

Host-based mail systems

The original email systems allowed communication only between users who logged into the same host or "mainframe". This could be hundreds or even thousands of users within an organization.

By 1966 (or earlier, it is possible that the SAGE system had something similar some time before), such systems allowed email between different organizations, so long as they ran compatible operating systems.

Examples include BITNET, IBM PROFS, Digital Equipment Corporation ALL-IN-1 and the original Unix mail.

LAN-based mail systems

From the early 1980s, networked personal computers on LANs became increasingly important. Server-based systems similar to the earlier mainframe systems were developed. Again these systems initially allowed communication only between users logged into the same server infrastructure. Eventually these systems could also be linked between different organizations, as long as they ran the same email system and proprietary protocol.

Examples include cc:Mail, Lantastic, WordPerfect Office, Microsoft Mail, Banyan VINES and Lotus Notes - with various vendors supplying gateway software to link these incompatible systems.


Early interoperability among independent systems included:
uucp, which was used as an open "glue" between differing mail systems, primarily over dial-up telephone lines
ARPANET, which was the forerunner of today's Internet
CSNet, which used dial-up telephone access to link additional sites to the ARPANET and later the Internet

Working with IIS:
IIS (Internet Information Server) is a group of Internet servers (including a Web or Hypertext Transfer Protocol server and a File Transfer Protocol server) with additional capabilities for Microsoft's Windows NT and Windows 2000 Server operating systems. IIS is Microsoft's entry to compete in the Internet server market that is also addressed by Apache, Sun Microsystems, O'Reilly, and others. With IIS, Microsoft includes a set of programs for building and administering Web sites, a search engine, and support for writing Web-based applications that access databases. Microsoft points out that IIS is tightly integrated with the Windows NT and 2000 Servers in a number of ways, resulting in faster Web page serving.
A typical company that buys IIS can create pages for Web sites using Microsoft's FrontPage product (with its WYSIWYG user interface). Web developers can use Microsoft's Active Server Pages (ASP) technology, which means that applications - including ActiveX controls - can be embedded in Web pages that modify the content sent back to users. Developers can also write programs that filter requests and return the correct Web pages for different users by using Microsoft's Internet Server Application Program Interface (ISAPI). ASPs and ISAPI programs run more efficiently than common gateway interface (CGI) and server-side include (SSI) programs, two older technologies. (However, there are comparable interfaces on other platforms.)
Microsoft also includes special capabilities for server administrators designed to appeal to Internet service providers (ISPs): a single window (or "console") from which all services and users can be administered, the ability to add components as snap-ins that were not initially installed, and administrative windows that can be customized for access by individual customers.
Worker process isolation mode
Provides an easy way to insulate Web applications from each other, so that problems with one Web application don't impact the other Web applications on Microsoft Internet Information Services (IIS).
IIS 6.0 allows you to organize applications into application pools. Each application pool is a completely independent entity, served by one or more worker processes. Usually, a Windows administrator will create a separate application pool for each Web application that the server hosts -- but a single application pool can host multiple applications.
Of course, this raises the question of how application pools can isolate IIS Web applications from each other. True isolation is possible because Windows differentiates between code that is running in kernel mode vs. code that is running in user mode.
Windows runs the IIS HTTP listener, HTTP.SYS, in kernel mode, while the WWW service and the worker processes run in user mode. Each application pool has its own kernel-mode request queue, so HTTP.SYS can route inbound requests directly to the queue dedicated to a specific application pool. Application pools are separated from each other by process boundaries.
Worker processes are dedicated to a specific application pool and actually service the requests. If a failure occurs, it usually happens within a worker process. Because worker processes are bound to particular application pools, a worker process failure affects only the applications in its own pool and no others.
The really cool part is that IIS provides mechanisms for monitoring the health of a worker process. If a worker process fails, the process can be restarted without the end user even being aware of the failure.

Asp Page Directives

ASP.NET page directives are part of every ASP.NET page. Page directives are instructions, inserted at the top of an ASP.NET page, that control how the page is rendered and processed; in effect they are a collection of settings that apply to the page as a whole.

Here's an example of the @Page directive:
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Sample.aspx.cs" Inherits="Sample" Title="Sample Page Title" %>

In total there are 11 page directives in ASP.NET 2.0. Some directives are essential - without them we cannot develop any web application in ASP.NET - while others are used only occasionally, as the need arises. When used, directives can be located anywhere in an .aspx or .ascx file, though standard practice is to include them at the beginning of the file. Each directive can contain one or more attributes (paired with values) that are specific to that directive.
The ASP.NET Web Forms page framework supports the following directives (a couple of hedged examples follow the list):

1. @Page
2. @Master
3. @Control
4. @Register
5. @Reference
6. @PreviousPageType
7. @OutputCache
8. @Import
9. @Implements
10. @Assembly
11. @MasterType
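For instance, here is a small sketch of two of the occasionally used directives; the namespace and the cache duration are just illustrative values:

<%@ Import Namespace="System.Data" %>
<%@ OutputCache Duration="60" VaryByParam="None" %>

The first makes the System.Data types available without fully qualified names; the second caches the rendered output of the page for 60 seconds.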

Introduction to ADO.Net


ADO.NET is a group of libraries used to work with data from various sources, including Microsoft Access, Microsoft SQL Server, Oracle, XML, and so on. ADO.NET relies on the .NET Framework's classes to process requests and perform the transition between a database system and the user. The operations are typically handled through the DataSet class.
While ADO.NET as a whole is about creating and managing connections to database systems, the DataSet class serves as an intermediary between the database engine and the user interface.
ActiveX Data Objects (ADO) is simply a thin layer which sits on top of OLE DB and allows programs written in high-level languages such as Visual Basic to access OLE DB data.


The System.Data.OleDb namespace provides objects that enable us to connect to OLE DB providers. OLE DB is an open specification for data providers that allows flexible access to many Microsoft and third-party data sources. This gives us one data access technology for connecting to and manipulating data in several database products, without having to change libraries. The System.Data.OleDb namespace has been tested by Microsoft to work with Microsoft Access, Microsoft SQL Server, and Oracle. In theory, any data provider that has an OLE DB interface can be used from ADO.NET.

ODBC, or Open Database Connectivity, is part of the OLE-DB specification, but Microsoft did not include it with the Beta 2 release.
Some common classes in the System.Data.OleDb namespace are as follows:
OleDbConnection
OleDbCommand
OleDbDataAdapter
OleDbDataReader
Database Connection Class:
To support a connection to a database server, the .NET Framework provides the OleDbConnection class, which is defined in the System.Data.OleDb namespace. Before using this class, you should first include this namespace in your file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.OleDb;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {

    }
}
To connect to a database, you can first declare a variable of type OleDbConnection using one of its two constructors. Besides the default constructor, the second constructor takes as argument a string value. Its syntax is:
public OleDbConnection(string connectionString);
You can create the necessary (but appropriate) string in this constructor when declaring the variable. This would be done as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.OleDb;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        OleDbConnection connection = new OleDbConnection("Something");
    }
}
If you want, you can first create the string that would be used to handle the connection, then pass that string to this constructor.
To support the connection as an object, the OleDbConnection class is equipped with the ConnectionString property. If you use the default constructor, you can first define a string value, then assign it to this property.
The Attributes of a Connection String
To use an OleDbConnection object, you must provide various pieces of information joined into a single string and separated from each other with semicolons (";"). Each piece appears as Key=Value:
Key1=Value1;Key2=Value2;Key_n=Value_n
It can be passed as follows:
OleDbConnection connection = new OleDbConnection("Key1=Value1;Key2=Value2;Key_n=Value_n");
or assigned as a string to the OleDbConnection.ConnectionString property:
string strConnection = "Key1=Value1;Key2=Value2;Key_n=Value_n";
OleDbConnection connection = new OleDbConnection();

connection.ConnectionString = strConnection;

The Database Provider
To use the database, you must indicate its source. To do this, add an attribute named Data Source and assign the database file name to it. To help you locate the database file, the page's Server object provides a method named MapPath. Pass the name of the database (or the path to it) to this method and assign the whole expression to the Data Source attribute. Here is an
Example:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.OleDb;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        OleDbConnection connection =
            new OleDbConnection("Provider=Microsoft.Jet.OLE.4.0;" +
                    "Data Source=" + Server.MapPath("App_Data/exercise.mdb"));
    }
}
Transaction class
A transaction is a group of operations combined into a logical unit of work that is either guaranteed to be executed as a whole or rolled back. Transactions help the database satisfy the ACID properties (Atomicity, Consistency, Isolation, and Durability). Transaction processing is an indispensable part of ADO.NET. It guarantees that a block of statements will either be executed in its entirety or rolled back (i.e., none of the statements will be executed). Transaction processing improved considerably in ADO.NET 2.0. This article discusses how we can work with transactions in both ADO.NET 1.1 and 2.0.
Implementing Transactions in ADO.NET
In ADO.NET, the transactions are started by calling the BeginTransaction method of the connection class. This method returns an object of type SqlTransaction.
Other ADO.NET connection classes, like OleDbConnection and OracleConnection, have similar methods. Once you are done executing the necessary statements within the transaction block, call the Commit method of the SqlTransaction object to complete the transaction, or roll it back using the Rollback method if any error occurs while the block is being executed.
To work with transactions in ADO.NET, you require an open connection instance and a transaction instance. Then you need to invoke the necessary methods as stated later in this article.  Transactions are supported in ADO.NET by the SqlTransaction class that belongs to the System.Data.SqlClient namespace.
The two main properties of this class are as follows:
Connection: This indicates the SqlConnection instance that the transaction instance is associated with
IsolationLevel: This specifies the IsolationLevel of the transaction
The following are the methods of this class that are noteworthy:

Commit()  :  This method is called to commit the transaction.
Rollback()  : This method can be invoked to roll back a transaction. Note that a transaction can only be rolled back before it has been committed.
Save()        : This method creates a save point in the transaction. The save point can be used to roll back a portion of the transaction at a later point in time.
The following are the steps to implement transaction processing in ADO.NET (a hedged sketch follows the list):
Connect to the database
Create a SqlCommand instance with the necessary parameters
Open the database connection using the connection instance
Call the BeginTransaction method of the Connection object to mark the beginning of the transaction
Execute the sql statements using the command instance
Call the Commit method of the Transaction object to complete the
transaction, or the Rollback method to cancel or abort the transaction
Close the connection to the database
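Putting these steps together, here is a minimal sketch (assuming "using System.Data.SqlClient;"); the connection string and the two UPDATE statements are placeholder values, not taken from the text above:

SqlConnection con = new SqlConnection("data source=localhost; uid=sa; pwd=abc; database=Northwind");
SqlCommand cmd = new SqlCommand();
cmd.Connection = con;
con.Open();
SqlTransaction tran = con.BeginTransaction();   // step 4: mark the beginning of the transaction
cmd.Transaction = tran;                         // enlist the command in the transaction
try
{
    cmd.CommandText = "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1";
    cmd.ExecuteNonQuery();
    cmd.CommandText = "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2";
    cmd.ExecuteNonQuery();
    tran.Commit();                              // both statements take effect together
}
catch
{
    tran.Rollback();                            // or neither does
}
finally
{
    con.Close();                                // step 7: close the connection
}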
 Data Set Class
Datasets store a copy of data from the database tables. However, a DataSet cannot directly retrieve data from a database; DataAdapters are used to link databases with DataSets. Diagrammatically:
DataSets < ----- DataAdapters < ----- DataProviders < ----- Databases
DataSets and DataAdapters are used to display and manipulate data from databases.
Reading Data into a Dataset
To read data into Dataset, you need to:
Create a database connection and then a dataset object.
Create a DataAdapter object and point it to the DB connection already created. Note that every DataAdapter has to refer to a connection object; for example, a SqlDataAdapter refers to a SqlConnection.
The Fill method of DataAdapter has to be called to populate the Dataset object.
We elaborate the above mentioned steps by giving examples of how each step can be performed:

1)      As we said, our first task is to create a connection to the database. We will see later that there is no need to open and close the database connection explicitly when you work with DataAdapter objects. All you have to do is create a connection to the database using code like this:
SqlConnection con = new SqlConnection ("data source=localhost; uid= sa; pwd= abc; database=Northwind");
In this example we will use the Northwind database through an OleDbConnection. The code would look like:

OleDbConnection con= new OleDbConnection ("Provider =Microsoft.JET.OLEDB.4.0;" + "Data Source=C:\\Program Files\\Microsoft Office\\Office\\Samples\\Northwind.mdb");

2)      Now, create a Dataset object which would be used for storing and manipulating data. You would be writing something like
DataSet myDataSet = new DataSet ("Northwind");
Since the name of source database is Northwind, we have passed the same name in the constructor.
3)      The DataSet has been created, but as we said before, this DataSet object cannot directly interact with the database. We need to create a DataAdapter object that refers to the connection already created.
Data Adapter Class
OleDbDataAdapter myDataAdapter = new OleDbDataAdapter (CommandObject);
The above line demonstrates one of the many constructors of the OleDbDataAdapter class. This constructor takes a command object; the command object, in turn, carries the database connection. The purpose of the command object is to retrieve the data needed to populate the DataSet. Since SQL commands interact directly with database tables, a suitable command can be assigned to CommandObject:
OleDbCommand CommandObject = new OleDbCommand ("Select * from employee", con);

Whatever data you need in your DataSet should be retrieved by a suitable command here. The second argument of the OleDbCommand constructor is the connection object con.

Alternative approach for initializing the DataAdapter object:
Create the OleDbDataAdapter with its default constructor instead of passing the command object:

OleDbDataAdapter myDataAdapter = new OleDbDataAdapter ();

Then assign your command object (which already carries the query and the connection) to the adapter's SelectCommand property:

myDataAdapter.SelectCommand = CommandObject;


4)      Now the bridge between the DataSet and the database has been created. You can populate the dataset using the Fill method:

myDataAdapter.Fill (myDataSet, "EmployeeData");

The first argument to the Fill method is the DataSet object we want to populate. The second argument is the name of the DataTable created inside the DataSet; the results of the SQL query go into this DataTable. In this example, we have created a DataTable named EmployeeData whose values are the results of the SQL query "Select * from employee". In this way, we can use a dataset for storing data from many database tables.
5)      DataTables within a DataSet can be accessed through the Tables collection. To access EmployeeData, we write:

myDataSet.Tables["EmployeeData"]

To access the rows in each DataTable, you write:

myDataSet.Tables["EmployeeData"].Rows

The complete listing below puts these steps together in an .aspx page:

1.      <%@ Page Language="C#" %>
2.      <%@ Import Namespace="System.Data" %>
3.      <%@ Import Namespace="System.Data.OleDb" %>
4.      <html>
5.      <body>
6.      
7.      <table border="2">
8.      <tr>
9.      <td><b> Employee ID </b></td>
10.  <td><b> Employee Name </b></td>
11.  </tr>
12.  
13.  <% OleDbConnection con = new OleDbConnection ("Provider=" +
14.  "Microsoft.JET.OLEDB.4.0;" + "Data Source=C:\\Program Files\\Microsoft " +
15.  "Office\\Office\\Samples\\Northwind.mdb"); %>
16.  
17.  <%
18.  DataSet myDataSet = new DataSet();
19.  OleDbCommand CommandObject =
20.      new OleDbCommand ("Select * from employee", con);
21.
22.  OleDbDataAdapter myDataAdapter = new OleDbDataAdapter (CommandObject);
23.
24.  myDataAdapter.Fill (myDataSet, "EmployeeData");
25.
26.  foreach (DataRow dr in myDataSet.Tables["EmployeeData"].Rows)
27.  {
28.      Response.Write ("<tr>");
29.      for (int j = 0 ; j < 2 ; j++)
30.      {
31.          Response.Write ("<td>" + dr[j].ToString() + "</td>");
32.      }
33.      Response.Write ("</tr>");
34.  }
35.  %>
36.  </table>
37.  </body>
38.  </html>

The code above iterates over all rows of the Employee table and displays the ID and name of every employee. To display all columns of the Employee table, line # 29 would be replaced by:

for (int j = 0 ; j < dr.Table.Columns.Count ; j++)

ADVANTAGES OF SERVER CONTROL


Web developers coming from an ASP 3.0 or similar background may prefer to work with the HTML style of control. Developers can convert existing HTML tags to HTML server controls fairly easily, thus gaining some server-side programmatic access to the control.

HTML elements can be converted into HTML server controls. To do so, we need to add attributes such as ID and RUNAT to the tags used for the HTML controls. We can also add these controls to the page by using the HTML tab of the toolbox.

We can manipulate these controls on the server side. Before dispatching a form to the client, the ASP.NET engine converts them to the equivalent HTML elements. These controls are included in the System.Web.UI.HtmlControls namespace.
Using the HtmlAnchor Control

The HtmlAnchor control is used to control an <a> element. In HTML, the <a> element is used to create a hyperlink. The hyperlink may link to a bookmark or to another Web page.
We can use the HtmlAnchor control (<a>) to navigate from one page to another. It works almost exactly like the HTML anchor tag; the only difference is that it runs on the server.
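As a rough sketch of the idea (the control ID lnkHome and the target page Default.aspx are placeholder names, not from the text above):

<a id="lnkHome" runat="server">Go to the home page</a>

Because the tag carries runat="server", it becomes an HtmlAnchor object, and its target can be set in the code-behind:

protected void Page_Load(object sender, EventArgs e)
{
    lnkHome.HRef = "Default.aspx";   // set the hyperlink target on the server
    lnkHome.Title = "Home";
}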
REQUEST AND RESPONSE OBJECTS:
Request and Response objects represent information coming into the Web server from the browser and information going out from the server to the browser. The Request object is called the input object and the Response object is called the output object.
Request Object

The Request object represents an HTTP request before it has been sent to the server. Notable properties of this object are as follows:

Body: Gets/sets the HTTP request body
CodePage: Gets/sets the code page for the request body
EncodeBody: Gets/sets whether ACT automatically URL-encodes the request body
EncodeQueryAsUTF8: Gets/sets whether ACT automatically UTF-8 encodes the request's query string
Headers: Gets the HTTP Headers collection object
HTTPVersion: Gets/sets the HTTP version
Path: Gets/sets the HTTP path
ResponseBufferSize: Gets/sets the size of the buffer used to store the response body
Verb: Gets/sets the HTTP method verb
Response Object
 The Response object represents a valid HTTP response that was received from the server. The response header properties are read-only. Notable properties of the object are as follows:

Body: Gets the body of the HTTP response. Only the portion of the body stored in the response buffer is returned
BytesRecv: Gets the number of bytes the client received in the response
BytesSent: Gets the number of bytes sent in the HTTP request
CodePage: Gets or sets the code page used for setting the body of the HTTP response
ContentLength: Gets the size, in bytes, of the response body
Headers: Gets a collection of headers in the response
HeaderSize: Gets the combined size, in bytes, of all the response headers
HTTPVersion: Gets the HTTP version used by the server for this response
Path: Gets the path that was requested
Port: Gets the server port used for the request
ResultCode: Gets the server's response status code
Server: Gets the name of the server that sent the response
TTFB: Gets the number of milliseconds that have passed before the first byte of the response was received
TTLB: Gets the number of milliseconds that passed before the last byte of the response was received
UseSSL: Checks whether the server and client used an SSL connection for the request and response

Introduction To Html


HTML (Hypertext Markup Language) is used to create documents for the World Wide Web. It is simply a collection of certain keywords, called 'tags', that are used to write documents to be displayed by a browser on the Internet. It is a platform-independent language that can be used on any platform such as Windows, Linux, Macintosh, and so on. To display a document on the web, it is essential to mark up the different elements (headings, paragraphs, tables, and so on) of the document with HTML tags. To view a marked-up document, the user opens it in a browser. The browser understands and interprets the HTML tags, identifies the structure of the document (which parts are which) and makes decisions about its presentation (how the parts look). HTML also provides tags to make the document look attractive using graphics, font sizes and colors. The user can make a link to another document or to a different section of the same document.

 Document Structure
An HTML document consists of two main parts: the Head, and the Body.

General form
<HTML>
<Head> ... </Head>
<Body> ... </Body>
</HTML>

The Head contains information about the document, such as links to pages that could be preloaded, and the Body contains the document to be displayed. The main Head element you need to know about is the <TITLE> tag. Every document should have a title - it appears as a 'label' on the browser window, and when a user bookmarks the page or looks in their history list, it is the text they will see. E.g. <Title>A Basic Introduction to HTML</Title>. Another useful Head tag is the <META> tag, if you want to optimise your pages for search engines.
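Putting these pieces together, a minimal document might look like the following sketch (the title and body text are just illustrative):

<HTML>
<Head>
<Title>A Basic Introduction to HTML</Title>
<META name="description" content="A short HTML primer">
</Head>
<Body>
<H1>Welcome</H1>
<P>This paragraph is the visible content of the page.</P>
</Body>
</HTML>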
ASP.NET LANGUAGE STRUCTURE:
ASP.NET was developed by Microsoft.

.Net:
                .NET is a platform for designing Internet applications. It ships with three coding languages (VB.NET, C#, JScript .NET), and it also allows third-party developers to release additional languages for the platform.

ASP.NET:
                 It is a full-fledged, object-oriented application development platform. ASP.NET is integrated with Visual Studio .NET, which provides a GUI and an integrated debugger.

CLR:
The .NET runtime engine that executes all .NET programs and provides modern services such as automatic memory management, security, optimization, and garbage collection.
ASP.NET uses the Common Language Runtime (CLR), which is provided by the .NET Framework.
The CLR manages the execution of code and also allows the use of objects created in other languages.
The client requests a file such as default.aspx from the server. Every ASP.NET web page has the file extension .aspx. Because this file extension is registered with IIS (or known by the Visual Web Developer web server), the ASP.NET runtime and the ASP.NET worker process come into action. When default.aspx is requested, the ASP.NET parser is started, and the compiler compiles the file together with the VB.NET (or C#) code-behind file associated with the .aspx file, creating an assembly. The assembly is then compiled to native code by the JIT compiler of the .NET runtime. The assembly contains a page class that is invoked to return HTML to the client; the page object is then destroyed.
 PAGE STRUCTURE

ASP.NET pages are simply text files with the .aspx file name extension that can
be placed on an IIS server equipped with ASP.NET. When a browser requests
an ASP.NET page, the ASP.NET runtime parses and compiles the target file
into a .NET Framework class.

An ASP.NET page consists of the following elements:

• Directives
• Code declaration blocks
• Code render blocks
• ASP.NET server controls
• Server-side comments
• Server-side include directives
• Literal text and HTML tags

It’s important to remember that ASP.NET pages are just text files with an .aspx
extension that are processed by the runtime to create standard HTML, based on
their contents. Presentational elements within the page are contained within the
<body> tag, while application logic or code can be placed inside <script> tags.
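To make these elements concrete, here is a hedged sketch of a tiny .aspx page that uses most of them (the variable name greeting and the label ID lblNote are placeholders, not from the text above):

<%@ Page Language="C#" %>
<script runat="server">
    // code declaration block: members of the generated page class
    string greeting = "Hello from ASP.NET";
</script>
<html>
<body>
    <%-- this is a server-side comment; it never reaches the browser --%>
    <h1>Literal text and HTML tags</h1>
    <p><%= greeting %></p>
    <asp:Label id="lblNote" runat="server" Text="An ASP.NET server control" />
</body>
</html>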

PROPERTIES AND COMPILER
To access a property, we place the property name after the class object name. For example, to read the ItemName property into a new variable we would use the following code (VB.NET syntax; MyCartItem is assumed to be an existing object with an ItemName property):

Dim MyItemName As String
MyItemName = MyCartItem.ItemName

HTML SERVER CONTROLS
Html controls are server controls that map directly to common HTML tags. HTML server controls are simple HTML tags with a runat="server" attribute, which enables developers to access them programmatically and work with them in a similar way to Web controls.
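For instance, a plain HTML text box and button become server controls simply by adding an id and runat="server" and placing them inside a <form runat="server"> (the names txtName, btnGo and btnGo_Click below are placeholders, not from the text above):

<input type="text" id="txtName" runat="server" />
<input type="button" id="btnGo" runat="server" value="Go" onserverclick="btnGo_Click" />

In the code-behind these are available as HtmlInputText and HtmlInputButton objects:

protected void btnGo_Click(object sender, EventArgs e)
{
    // read the value the user typed into the converted HTML control
    string name = txtName.Value;
}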

Overview Of Tcp/Ip And Its Services


TCP/IP is an acronym for Transmission Control Protocol/Internet Protocol. TCP/IP is a collection of protocols, applications and services. The protocols in TCP/IP move data from one network layer to another.
There are five layers within TCP/IP:
• Application
• Transport
• Internet
• Data Link
• Physical

The Physical Layer
The Physical layer is pure hardware in any network infrastructure. This includes the cable, satellite, or any other connection medium, and the network interface card.
The Data Link Layer
This layer is responsible for splitting data into packets to be sent across the connection medium, such as cables, satellites, and so on. The Data Link Layer works hard to make sure that the physical link does not garble the electrical signals carrying the data.


The Network Layer
This layer gets packets from the Data Link Layer and sends them to the correct network address. If more than one possible route is available for the data to travel, the Network Layer figures out the best route.
The Transport Layer
Although the Network Layer routes data to its destination, it cannot guarantee that the packets holding the data will arrive in the correct order, or that they won't have picked up errors during transmission. The Transport Layer takes care of this, providing reliable, ordered delivery.
The Application Layer
This layer contains the applications that the user uses to send or receive data. Without this layer, the computer and its user would never be able to send data, and would not know what to do with data sent by another user.
Internet Protocol
The Internet Protocol is responsible for basic network connectivity. When mapped to the TCP/IP layers, the Internet Protocol, or IP, works at the Network Layer.
Why can't two computers have the same IP address?
An IP address can be compared to a postal address that identifies the exact location of a residence or corporate house. Just as two residences cannot have the same postal address, no two computers on a TCP/IP network can have the same IP address.

The Structure of an IP Address
The IP address is a set of numbers separated by periods. An IP address is a 32-bit number, divided into two sections: the network number and the host number. Addresses are written as four fields of eight bits each, separated by periods. Each field can be a number ranging from 0 to 255. This method of addressing is called dotted decimal notation. An IP address looks like:
field1.field2.field3.field4
For example, 192.168.1.10 is an address written in dotted decimal notation.

Transmission Control Protocol/Internet Protocol(Tcp/Ip)
TCP/IP uses IP to deliver packets to the upper-layer applications and provides a reliable stream of data among computers on the network. TCP/IP consists of protocols, applications, and services. Protocols enable a server application to offer services, and the client application to use those services. It is possible to design a new protocol and add it to TCP/IP. The Internet is a large worldwide network of computers which uses TCP/IP as the underlying communication protocol.

World Wide Web
The World Wide Web is a worldwide information service on the Internet. HyperText Transfer Protocol, or HTTP, is the protocol used by the WWW service to make communication possible between a Web server and a Web browser.

FTP
File Transfer Protocol is not just a protocol but also a service and an application. FTP provides the facility to transfer files between two computers running different operating systems such as UNIX, MS-DOS and Windows.





FTP As An Application
FTP is an application for copying files. A client application can be run on the local computer to contact the FTP server application on the remote computer. CuteFTP and Reachout are two very popular FTP applications, which provide excellent user interfaces and a wide range of FTP commands and functions.

FTP As A Service
FTP is a service for copying files from one computer to another. A connection can be made from one computer (Client) to another computer (Server) offering this service and files can be sent or received.

FTP As A Protocol
FTP is a protocol for copying files between two computers. The client and the server applications both use it for communication to ensure that the new copy of the file is identical to the original.

Telnet
Telnet is both a TCP/IP application and a protocol for connecting a local computer to a remote computer. The Telnet application acts as a terminal emulator: whatever commands are typed into the local computer are sent across the network for execution by the remote computer. Once the connection is established, the remote computer asks for a username and password before granting access.