Thursday, December 9, 2010

Email archiving in the Cloud

Our Email Archival Solution
CSS Corp Labs provides an affordable, reliable, and secure email archiving solution that helps clients meet their compliance, storage management, and best-practice requirements. CSS Corp leverages AWS (Amazon Web Services) cloud computing and storage infrastructure to provide clients with scalable solutions that also enable a smooth transition of email archives into a cloud environment. The solution's web interface lets users view their emails through an extensive search mechanism.

In today's business environment, email is a prominent mode of communication for executing day-to-day business activities, which drives continual growth in an organization's email infrastructure, resources, and staff.

Current archiving mechanisms

Desktop email archiving system
A desktop email archiving system lets users archive emails on their own desktops, where they are easily accessible via an email client. However, users cannot access these archived emails from any remote environment.

Server email archiving system
A server email archiving system lets users archive emails on a server dedicated to archival. Emails archived on the server are accessible from any remote environment using an email client.

Organizations using these email archival mechanisms face several challenges:


Users of the current email archival systems cannot access archived mails if the desktop or the archival server is down. They then resort to personal email accounts, independent of the corporate email system, to continue business transactions. This results in a loss of intellectual property that is expensive and extremely difficult to recover.


The growth in email data eventually forces the organization to augment its IT infrastructure, increasing capital expenditure on hardware and software. With email communication growing exponentially, the organization cannot budget with future trends in mind. A growing IT infrastructure requires more resources to manage it and training for existing staff, burdening both physical resources and the IT team. IT's focus also shifts from the business needs of the organization to unproductive management of IT resources.

With content archived in various locations (desktops, mail servers, etc.), the current email archival system does not allow rapid and efficient search of archived email content, reducing the reliability of the existing archival system. Attachments make matters worse: while the mailing system can search headers and message bodies, it cannot scan attached files. Searching for an email within a given time period also forces users to search multiple databases.

The current email archiving system is too rigid to adapt to complex enterprise models and the workflows associated with them.

The desktop archival mechanism is not secure, since it fails to identify trusted members of the emailing system and thus overrides corporate mail security policies. Loss of sensitive data in archived emails has a huge impact on corporate credibility.

CSS Corp Solution
Our solution ensures a smooth transition of email archival into a cloud environment. It offers indexing, search, and retrieval available as a service focused on end users.


How it works:
  • Archived email contents are extracted from the SMTP server
  • Email contents are stored in centralized cloud storage with a time stamp
  • Extracted data is encrypted, compressed, and indexed
  • Index information is spread across multiple index files
  • Users access the archived contents using a web client application
  • A load balancer balances the user load and directs each user to an appropriate server for accessing the data

Key features:
  • Centralized email archival tool
  • Role-based access privileges
  • Scheduled archival to cloud storage
  • Indexes all attachments
  • Uses Blowfish encryption
  • Simple and advanced search
  • Request-for-search with approval workflow
  • Single Sign-On integration
  • Search option for attachment contents
  • Attachments as links for download

Benefits:
  • Costs shift from capital expenditure to operational expenditure
  • Provides clarity in budgeting for IT infrastructure
  • Sheds burden on internal IT staff and physical resources
  • Efficient and quick inbuilt search capability
  • Leverages the expertise of other services offered in the cloud
  • Helps the organization focus on its core competency
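
The encrypt/compress/index step described above can be sketched in Java with the standard JCE, which ships a Blowfish implementation. This is a minimal illustration, not the product's actual code: the key handling, the index format, and the upload step are simplified assumptions.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.GZIPOutputStream;

// Sketch of the archival pipeline: compress the raw mail, encrypt it with
// Blowfish (available in the standard JCE), and record an index entry.
public class MailArchiver {

    private final SecretKey key;
    private final Map<String, Long> index = new HashMap<>(); // messageId -> archive timestamp

    public MailArchiver() throws Exception {
        // In a real deployment the key would come from a key store, not be generated per run.
        this.key = KeyGenerator.getInstance("Blowfish").generateKey();
    }

    public byte[] archive(String messageId, String rawMail) throws Exception {
        // 1. Compress the message
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(rawMail.getBytes(StandardCharsets.UTF_8));
        }
        // 2. Encrypt with Blowfish (default ECB/PKCS5Padding transformation)
        Cipher cipher = Cipher.getInstance("Blowfish");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(buf.toByteArray());
        // 3. Record an index entry with a time stamp
        index.put(messageId, System.currentTimeMillis());
        return encrypted; // this blob would then be uploaded to cloud storage (e.g. S3)
    }

    public boolean isArchived(String messageId) {
        return index.containsKey(messageId);
    }
}
```

In the real solution the index itself is sharded across multiple files and the blob lands in S3; the sketch only shows the per-message transformation.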

Tuesday, November 23, 2010

How cloud computing can help in disaster recovery


A DR process using the cloud gives enterprises the ability to augment IT infrastructure in ways not foreseen during the planning phase of DR.

Disaster recovery, a subset of business continuity, comprises the processes, policies, and procedures for recovering or continuing the technology infrastructure critical to an organization after a natural or human-induced disaster. In the earlier days of IT, an IT organization's reputation could easily be damaged by systems staying down for days, a common scenario with batch-oriented mainframes. Many organizations therefore came forward to provide backup IT infrastructure and support quick recovery for the business. Government regulation further accelerated the growth of data centers supporting disaster recovery.

The traditional way of doing DR

The first level of on-premises organizational set-up to support disaster recovery includes:
  • Usage of local mirrors of system / data (mostly using RAID)
  • Ensuring continuous supply of power (without surges / spikes)
  • Physical protection from unauthorized entry and access to systems
  • Safety measures to prevent and fight fire accidents
  • Effective implementation of IDS, IPS and other security measures

The next levels of organizational strategy to ensure disaster recovery are:
  • Backup of data using tapes
  • Backup of data using remote data centers
  • High availability of IT infrastructure using a replicated offsite location

The disadvantages of traditional DR

Organizations that want to set up a DR process incur heavy capital expenditure in setting up a DR center. Typically, organizations that do DR the traditional way are unable to size their IT infrastructure to match growing needs.

Other challenges include:
  • Lack of flexibility in defining the data backup routine
  • Limited support for geographic redundancy
  • Difficulty provisioning the latest infrastructure
  • Lack of flexible pricing and billing models

The cloud way of doing DR

Most enterprises are currently exploring various avenues to protect their data and duplicate their infrastructure, and a highly visible channel is the cloud. DR services delivered via the cloud give enterprises the choice and flexibility to perform on-premises or outsourced backup and DR, a reduction in capital and operational expenses, and reliable availability of IT systems for business continuity.

Advantages of a cloud DR process

Planning DR in a cloud environment lets the organization focus its investments on core business activities instead of a DR center. The cloud also provides easy ways to augment enterprise IT infrastructure in ways not foreseen during the planning phase of DR.

Current advancements in technology also provide the flexibility to utilize services from multiple cloud vendors, helping enterprises build a cloud-agnostic DR framework. Cloud services are offered from various geo-locations, ensuring redundancy of data across multiple geographies. Depending on a cloud provider also lets the enterprise leverage the provider's investments in the latest infrastructure and technologies.

The present cloud economic climate allows the enterprise to utilize cloud services under multiple pricing models (time-period, storage-based, service- or configuration-based, etc.) and billing models (monthly, or on hitting a ceiling). This flexibility helps the organization use cloud services and financial resources in an optimized manner.

What are the challenges?

The major challenge is information mobility, i.e., moving information to and from the cloud. Large multinationals can invest in high-bandwidth leased lines to overcome bottlenecks in transporting data, but this is a challenging issue for small enterprises, which will focus on incremental data transfers and may rely on the cloud provider's value-added services for major data transfers.

The current cloud environment, with multiple cloud providers, pushes the organization to identify resources with multiple skill sets who are exposed to multiple technologies and platforms. Organizations also have to invest in network redundancy for accessing the cloud environment, adaptable to the services offered by cloud providers. The lack of cloud interoperability adds complexity to building a DR solution using the services of multiple cloud vendors.


Though enterprises may succeed in structuring a DR plan, it is regular practice by the staff that ensures the plan's reliability. The changing IT scenario of the enterprise needs to be captured meticulously and mapped to the cloud DR environment to ensure a successful DR deployment.

Monday, August 30, 2010

CloudBuddy Personal Office

What is CSS CloudBuddy Personal Office?
CloudBuddy Personal Office is an add-in tightly coupled with the CloudBuddy Personal product, supporting the following applications in the MS Office suite:
• Microsoft Office Word
• Microsoft Office Excel
• Microsoft Office PowerPoint
• Microsoft Office Outlook

The latest version of the CloudBuddy Personal Office add-in supports 2003, 2007 & 2010 versions of MS-Office.

How does CloudBuddy Personal Office help its users?
• Anytime, anywhere access to files and mails present in S3
• Allows storing of MS Office files directly from the MS Office interface
• Access to backed-up MS Outlook emails using an email interface
• Serving mail attachments via URLs associated with Amazon S3

How to install the CloudBuddy Personal Office add-in?
Follow the steps below to install the CloudBuddy MS Office add-in:
Step 1: Download and install the latest version of CloudBuddy Personal
Step 2: Download the CloudBuddy Personal Office add-in and install it
Step 3: Complete the installation wizard

How to use CloudBuddy Personal Office add-in?
On successful installation, the CloudBuddy Personal Toolbar will be visible in Add-Ins tab of MS Office package.

CloudBuddy Personal MS Office Add-in provides the following functionality:

Save
Helps in saving files directly into S3
Save As
Helps in saving files with a different name in S3
Open
Helps in viewing files directly from S3
Insert Hyperlink
Helps in attaching files as URLs in another file
Share
Helps in sharing files via URLs (public / private)
(The above applies to MS Word; the same functionality is available in MS Excel and MS PowerPoint.)

CloudBuddy Personal add-in for “MS-Outlook”

On successful installation, the CloudBuddy Personal Toolbar will be visible in Add-Ins tab of MS Outlook.

CloudBuddy Personal MS Outlook Add-in provides the following functionality:

Save Mail
Helps in saving mails directly into S3; the saved mails can be viewed using the mail view option provided by CloudBuddy Personal
Mail Explorer
Helps in selecting and saving mails from the local MS Outlook folders
Insert Hyperlink
Helps in attaching files as URLs in a mail, enabling users to send attachments that exceed the quota limits set by the mail administrator
Send attachment as URL
Helps the user attach files while composing a mail. While sending the mail, a pop-up lets the user decide whether the attachment should be sent as a URL (served from Amazon S3) or as the attachment itself.
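
The attachment-as-URL decision can be reduced to a simple policy check. This is a hypothetical sketch (the add-in's actual quota logic is not public): if the message plus its attachment would exceed the administrator's quota, the attachment is uploaded to S3 and replaced with a link.

```java
// Hypothetical policy behind the "attachment as URL" pop-up: decide whether an
// attachment should travel inside the mail or be served from S3 as a link.
public class AttachmentPolicy {

    private final long quotaBytes; // per-message limit set by the mail administrator

    public AttachmentPolicy(long quotaBytes) {
        this.quotaBytes = quotaBytes;
    }

    /** Returns true when the attachment should be replaced by an S3 URL. */
    public boolean sendAsUrl(long messageBytes, long attachmentBytes) {
        return messageBytes + attachmentBytes > quotaBytes;
    }

    public static void main(String[] args) {
        // Assumed 10 MB quota; a 9 MB attachment on a 2 MB message exceeds it.
        AttachmentPolicy policy = new AttachmentPolicy(10L * 1024 * 1024);
        System.out.println(policy.sendAsUrl(2L * 1024 * 1024, 9L * 1024 * 1024)); // prints true
    }
}
```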

Wednesday, July 7, 2010

Windows Services and Windows Applications

This might seem an age-old story, but I thought my experience was still worth sharing for those who are yet to face it.
While we were customizing the CSS CloudBuddy Personal for our customer, there was a requirement to add an automated scheduler that alerts the users on the tasks that they define at a specified time. The tasks can include anything like alerts on
  1. Backing up data to the cloud,
  2. Retrieving backed up data,
  3. Manipulating the backed-up data, etc.
And so…
What I did:
I started working on a prototype model of this scheduler, which would do nothing but just throw a message box on the user’s desktop alerting him on the action that he might need to perform at that time. The goal was to have a working model which would do the tasks automatically upon the user’s approval and it involved processing of some complex GUI components.
What problems I experienced:
I had Windows XP installed while I developed this scheduler, and I had no problem at all invoking the message box at the specified time, of course with “Allow service to interact with the desktop” enabled (NOT RECOMMENDED). I used it just for an experiment; it was a prototype, after all. The reason I had to enable “Allow service to interact with the desktop” is that Windows services are configured not to allow any GUI components to be invoked from the service.
In Windows Vista and Windows 7, when my service tried to show a message box, the service controller popped up its own alert box prompting me whether or not to show the GUI component (my message box). I found it odd to have to accept this alert just to view the message my service wanted to show, because, down the line, my scheduler would be required to perform actions automatically without user interaction. So I started hunting the web for why a service behaves differently on Windows XP versus Windows Vista / Windows 7.
Alternative(s) that I could have implemented:
  1. Develop a normal Windows application that alerts the user at the scheduled times.
    1. Easy, but not ideal: this involves running the application in user space all the time.
    2. Have the application run at Windows startup. But this option could prove disastrous, because there is every chance the user may accidentally remove it from startup.
  2. Do not alert the user, just perform the tasks automatically! Never a great solution if the requirement IS to alert the user before performing the task.
What solution I found:
In Windows XP, there was only one session in which all applications would run, whether Windows services or Windows applications. From Windows Vista onward, Microsoft decided to isolate Windows services from other user-space applications. To achieve this, separate sessions are created (with subsequent sessions as more users log in to the same workstation):
  1. Session 0 – This is where the windows services would run.
  2. Session 1 (to n) – This is where the normal windows applications would run.
Hence, it is no longer easy to run a Windows service that needs to invoke GUI components. So what do we do? We need to invoke the GUI components as a desktop user. That might seem deadly difficult, and it would be, had we not had P/Invoke. Since C# can invoke system functions exposed by a set of native C++ DLLs, nothing is difficult at all: you just call a few of these native methods and they do the trick for you.
There is a method called CreateProcessAsUser() that can be invoked from a Windows service. It takes as input the token of the user under whom you wish to execute the process, the process name, and other parameters. A detailed C# implementation of this function can be found online.
How I implemented the solution:
I developed a small Windows application which, when executed, shows a message box with an appropriate message. From the Windows service, I used CreateProcessAsUser() to execute this application, passing the message I wanted to show as an argument to the process. Of course, I could have used the WTSSendMessage() function to draw a message box on the user's desktop, but as I wrote earlier, this is just a prototype of a much more complex scheduler which would, in future, involve many GUI components. Using the more versatile CreateProcessAsUser() saved me a lot of trouble.

Sunday, June 27, 2010

Registry update on Windows EC2 Instance

While auto-provisioning a Windows EC2 instance, which involves launching the instance, creating/attaching EBS volumes to it, installing applications, configuring application settings, etc., we at CSS Labs were unable to update the registry. The registry update lets the user communicate with the application, but Windows never loads user profiles under the path HKEY_USERS by default until the user logs in to the instance.

Auto-provisioning of Windows instances is facilitated by CloudSmart without human intervention while launching an instance. The CloudSmart tool is installed and bundled with the necessary AMIs. The tasks mapped to the AMIs are listed in XML format and stored in S3. While launching an instance for a client using a bundled AMI, the CloudSmart script is invoked and the matching XML file is pulled from S3 to the EC2 instance. On booting the instance, CloudSmart's Java Ant script is invoked and executes the planned tasks mapped to the AMI. Using CloudSmart, we accomplished most EC2 provisioning tasks. Provisioning a Windows EC2 instance is challenging, however, since Windows is not as open as Linux and restricts system-level updates at the core level.

While provisioning Windows-based EC2 instances for a leading CRM provider, we faced a few problems. The architecture needed to enable communication between two servers, an application server and a web server. The application server hosts the CRM tool, which must be installed and configured; it also needs SQL Server configured and EBS volumes mapped to the SQL data directories. All of the above tasks were achieved using CloudSmart scripts. But to complete the CRM setup, the registry needed to be updated with the private IP of the application server, the CRM database name, and its ports under the key path HKEY_USERS. The peculiarity of this key path is that a user's profile is loaded under it only when that user logs on to the machine. The problem is to update the registry values under this key path for all users when the EC2 instance comes up, during Windows boot. We first tried the following options:

  • Using the Windows utility library (advapi32.dll), we tried to log on to the machine through C# code. It was able to log on, but did not create the user's entry in the registry.
  • Updating the registry values using a logon script, which works, since by then Windows has the user's SID created/loaded in the registry.
  • Loading the user profile's .reg file into the registry, which had no effect.
After exhaustive research, we found that the problem could be solved using VBScript.

The steps we performed in the VBScript are:
  • Load the file ntuser.dat into the registry.
  • Update the corresponding registry values.
  • Unload all the profiles from the registry.

Monday, March 8, 2010

Joins using HQL


We use the Hibernate framework in our projects for ORM (Object Relational Mapping). In the beginning we faced some problems associating two or more tables through joins using Hibernate, but getting a clear picture of the ORM model, and of how Hibernate achieves it, made using joins simpler.

In this blog we explain how to execute a join using HQL, taking a simple example of mapping a department to multiple employees in an organization.

Create tables:

        The following tables need to be defined in the database

/* Table structure for `Department` */
CREATE TABLE `Department` (
  `dept_id` int(4) NOT NULL,
  `dept_name` varchar(30) NOT NULL,
  PRIMARY KEY (`dept_id`)
);

/* Table structure for `Employee` */
CREATE TABLE `Employee` (
  `emp_no` int(4) NOT NULL,
  `emp_name` varchar(30) NOT NULL,
  `dept_id` int(4) NOT NULL,
  `email` varchar(30) DEFAULT NULL,
  PRIMARY KEY (`emp_no`),
  KEY `FK_employee` (`dept_id`),
  CONSTRAINT `FK_employee` FOREIGN KEY (`dept_id`) REFERENCES `Department` (`dept_id`)
);

POJO file for Department table

import java.util.Set;

/** @author CSS Labs */
public class Department {

    private int deptId;
    private String deptName;
    private Set<Employee> emp;

    /** @return the deptId */
    public int getDeptId() {
        return deptId;
    }

    /** @param deptId the deptId to set */
    public void setDeptId(int deptId) {
        this.deptId = deptId;
    }

    /** @return the deptName */
    public String getDeptName() {
        return deptName;
    }

    /** @param deptName the deptName to set */
    public void setDeptName(String deptName) {
        this.deptName = deptName;
    }

    /** @return the emp */
    public Set<Employee> getEmp() {
        return emp;
    }

    /** @param emp the emp to set */
    public void setEmp(Set<Employee> emp) {
        this.emp = emp;
    }
}

Mapping file of Department table

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <!-- Class names below are unqualified; prefix them with your package as needed -->
    <class name="Department" table="Department">
        <id name="deptId" column="dept_id" type="java.lang.Integer">
            <generator class="increment" />
        </id>
        <property name="deptName" column="dept_name" type="java.lang.String" />
        <set name="emp" cascade="all" lazy="true">
            <key column="dept_id" />
            <one-to-many class="Employee" />
        </set>
    </class>
</hibernate-mapping>
POJO file for Employee table


/** @author CSS Labs */
public class Employee {

    private int empNo;
    private String empName;
    private int deptId;
    private String email;

    /** @return the empNo */
    public int getEmpNo() {
        return empNo;
    }

    /** @param empNo the empNo to set */
    public void setEmpNo(int empNo) {
        this.empNo = empNo;
    }

    /** @return the empName */
    public String getEmpName() {
        return empName;
    }

    /** @param empName the empName to set */
    public void setEmpName(String empName) {
        this.empName = empName;
    }

    /** @return the deptId */
    public int getDeptId() {
        return deptId;
    }

    /** @param deptId the deptId to set */
    public void setDeptId(int deptId) {
        this.deptId = deptId;
    }

    /** @return the email */
    public String getEmail() {
        return email;
    }

    /** @param email the email to set */
    public void setEmail(String email) {
        this.email = email;
    }
}

Mapping file of Employee table

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <class name="Employee" table="Employee">
        <id name="empNo" column="emp_no" type="java.lang.Integer">
            <generator class="increment" />
        </id>
        <property name="deptId" column="dept_id" type="java.lang.Integer" />
        <property name="empName" column="emp_name" type="java.lang.String" />
        <property name="email" column="email" type="java.lang.String" />
    </class>
</hibernate-mapping>

Configuration File [hibernate.cfg.xml]

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="connection.url">jdbc:mysql://&lt;&lt;server ip&gt;&gt;:3306/databasename</property>
        <property name="connection.username">username</property>
        <property name="connection.password">password</property>
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
        <!-- mapping files -->
        <mapping resource="com/css/labs/blog/map/employee.hbm.xml" />
        <mapping resource="com/css/labs/blog/map/department.hbm.xml" />
    </session-factory>
</hibernate-configuration>


The join query for getting all the employees mapped to one particular department will be
  “select emp.empName from Department dept join dept.emp emp where dept.deptId = 2”
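
To make the association concrete, here is a plain-Java model of the same Department–Employee mapping that evaluates the join above in memory, with no Hibernate required. The class shapes mirror the POJOs; the data values are illustrative only.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Plain-Java model of the Department -> Employee association, illustrating
// what the HQL join returns (no Hibernate required).
public class HqlJoinDemo {

    static class Employee {
        final int empNo; final String empName; final int deptId;
        Employee(int empNo, String empName, int deptId) {
            this.empNo = empNo; this.empName = empName; this.deptId = deptId;
        }
    }

    static class Department {
        final int deptId; final String deptName;
        final Set<Employee> emp = new LinkedHashSet<>(); // the mapped <set name="emp">
        Department(int deptId, String deptName) {
            this.deptId = deptId; this.deptName = deptName;
        }
    }

    // Equivalent of:
    //   select emp.empName from Department dept join dept.emp emp where dept.deptId = :id
    static List<String> employeeNamesOfDept(List<Department> departments, int deptId) {
        List<String> names = new ArrayList<>();
        for (Department dept : departments) {      // from Department dept
            if (dept.deptId == deptId) {           // where dept.deptId = :id
                for (Employee e : dept.emp) {      // join dept.emp emp
                    names.add(e.empName);          // select emp.empName
                }
            }
        }
        return names;
    }

    public static void main(String[] args) {
        Department labs = new Department(2, "Labs");
        labs.emp.add(new Employee(1, "Asha", 2));
        labs.emp.add(new Employee(2, "Ravi", 2));
        Department hr = new Department(3, "HR");
        hr.emp.add(new Employee(3, "Kumar", 3));
        System.out.println(employeeNamesOfDept(List.of(labs, hr), 2)); // prints [Asha, Ravi]
    }
}
```

Hibernate walks the mapped `emp` collection the same way when it translates the HQL into an SQL join on `dept_id`.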

Hope this blog helped you understand how to execute JOINs using HQL.

Friday, February 26, 2010

CloudCamp @ Chennai - 23rd Feb 2010

The camp started with registration at the welcome desk. A welcome note by Dave and Prem kicked off the events of CloudCamp. Lightning talks by the organizations that sponsored the event provided some basic information on cloud computing.
The camp then opened up for an unpanel discussion, where questions from the audience were listed on a white board. Dave invited volunteers from the registrants and sponsors to answer the posted queries. A query on multimedia was answered by Ezhilarasan Babaraj, Program Director, CSS Labs @ CSS Corp.
After all the posted questions were answered, the registrants were asked to propose presentation topics and points for clarification to be discussed in the unconference session that followed. Dave segmented those topics and delegated them to the venues allocated to CloudCamp at the Department of Management Studies, Anna University, Chennai.

Venue 1: Auditorium
  1. Introduction to Cloud by Dave Neilsen, Co-Founder, CloudCamp
  2. Virtualization 
  3. Startup business - Hosting of static websites in S3 by Ram Prasanna, Research Associate, CSS Labs, CSS Corp

Venue 2: Class room 1
  1. Private Cloud
  2. Hybrid Cloud by Prashant Vivek, Research Associate, CSS Labs, CSS Corp
  3. Data security

Venue 3: Class room 2
  1. Storage as a Service
  2. Key-Value Store – RIAK by Samuel, Lead - Research, CSS Labs, CSS Corp
  3. Auto scaling & auto deployment of applications in the Cloud by Nagarajan Vedachalam, Technical Architect, CSS Labs, CSS Corp

After attending the sessions of their choice, the participants assembled back at the Auditorium, and a vote of thanks was delivered by the organizers.

Snapshots of CloudCamp, Chennai

Tuesday, February 9, 2010

Secure Media Streaming using CloudFront & Progressive Download Method

One of the biggest challenges digital content providers face today is protecting and securing media content while streaming it. Streaming media servers like Adobe Flash Media Server, Wowza, etc. address this issue in a big way, but hosting these media servers on your own is an expensive proposition, and serving a large customer base makes it even more complex. Amazon CloudFront recently announced media streaming support using Adobe Flash Media Server. As I posted in my previous article, Amazon CloudFront's media support is a good service for serving media publicly, but it lacks a mechanism to protect digital content and serve it privately. As a stop-gap, until Amazon addresses this properly, let's look at the technique below to stream media privately and protect the content from bot downloads, etc.

Proposed Method for Serving Private Media Content
Some of you might be aware that Amazon CloudFront supports private objects. One of the oldest and still widely used techniques for serving streaming media is the progressive download method. With the help of CloudFront private objects and progressive download, you can serve private media content to a closed user base.

The major problems
There are two major drawbacks to the progressive download method:
1. The content is copied to the local system and played from there.
2. There is no way to disallow other media players (bots) from copying the content.
Of course, advanced features like optimal bandwidth usage and customizing the stream at runtime are not considered here; those issues are addressed by a streaming media server, which you definitely need for a comprehensive solution.

Disclaimer: This solution is still at the conceptual stage; if you are interested, try it at your own risk.

1. Upload all your media files to a specific S3 bucket.

2. Enable CloudFront for that bucket.

3. Enable private content support for the bucket (refer to the "Enabling Private Content" section of the CloudFront documentation).

4. Develop or customize any available open source media player to play content from CloudFront. S3 supports partial content download, so you can build a sophisticated player that downloads content in a multi-threaded fashion as well.

5. You can implement a mechanism like Adobe's SWF verification to protect your content from being copied by other players.

6. Upon verifying the SWF file, the player can ask the server to generate a URL that is valid only for a short time (for example, 10 or 15 seconds) and then start the download. This ensures that once the download begins, the next call to that resource is denied.
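
Step 6's short-lived URL can be sketched with the classic S3 query-string authentication scheme of that era: HMAC-SHA1 over `GET\n\n\n<expires>\n/bucket/key`, base64-encoded and appended as query parameters. The bucket, key, and credentials below are placeholders, and a production setup should follow Amazon's current signing documentation.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of a short-lived (expiring) URL in the style of S3 query-string
// authentication: sign "GET\n\n\n<expires>\n<resource>" with HMAC-SHA1.
public class ExpiringUrl {

    public static String sign(String accessKey, String secretKey,
                              String bucket, String key, long expiresEpochSec) throws Exception {
        String resource = "/" + bucket + "/" + key;
        String stringToSign = "GET\n\n\n" + expiresEpochSec + "\n" + resource;

        // HMAC-SHA1 over the canonical string, then base64-encode the digest
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));

        return "http://" + bucket + "" + "/" + key
                + "?AWSAccessKeyId=" + accessKey
                + "&Expires=" + expiresEpochSec
                + "&Signature=" + URLEncoder.encode(signature, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical credentials and bucket; the URL expires 15 seconds from now.
        long expires = System.currentTimeMillis() / 1000 + 15;
        System.out.println(sign("AKEXAMPLE", "secret", "media-bucket", "video.flv", expires));
    }
}
```

The server rejects any request whose `Expires` has passed or whose signature does not match, which is what denies the second call to the resource.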

In a nutshell,
A customized media player, S3, CloudFront private object support, and a web services application, preferably running in EC2, will help you protect and serve digital content in a cost-effective way. If running an application in EC2 seems a little expensive, you can choose alternative PaaS platforms like Google App Engine or Azure to host your SWF verification / URL generation web services application.
In addition, you can use third-party encoding services to encode your content into various formats. Combining this with the above solution helps you support multi-mode access such as mobile (iPhone/Android), desktop, web, etc.


Friday, January 29, 2010

Windows Communication Foundation

In this posting, let us share our experience of working with WCF (Windows Communication Foundation), an API introduced with .NET Framework 3.0. Earlier we used to work with various features like
• Web Services
• .NET Remoting
• Distributed Transactions and
• Message Queues.

WCF, designed on SOA (Service-Oriented Architecture) principles to support distributed computing, addresses all of the above features as a single entity.

The major advantage of WCF is that it supports synchronous, asynchronous, and REST-style communication over various bindings such as BasicHttpBinding and NetNamedPipeBinding. We used NetNamedPipeBinding for on-machine communication, where the endpoint is defined based on the binding type.

        Various steps to create a WCF Service are:

        Step 1: Create an interface, decorate it with the ServiceContract attribute, and decorate its methods with the OperationContract attribute.

        namespace CalculatorService
        {
            [ServiceContract(Namespace = "http://CalculatorService", SessionMode = SessionMode.Required)]
            public interface ICalculator
            {
                [OperationContract] double Add(double n1, double n2);
                [OperationContract] double Subtract(double n1, double n2);
                [OperationContract] double Multiply(double n1, double n2);
                [OperationContract] double Divide(double n1, double n2);
            }
        }

        Step 2: Create a service class that implements the interface defined with the ServiceContract attribute, and decorate the class with the ServiceBehavior attribute. The ServiceBehavior attribute takes two parameters: InstanceContextMode and ConcurrencyMode. InstanceContextMode specifies whether the service instance is a singleton or is created per call or per session. ConcurrencyMode specifies whether the service is single-threaded or multi-threaded.

        namespace CalculatorService
        {
            [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
            public class CalculatorService : ICalculator
            {
                public double Add(double n1, double n2)      { return n1 + n2; }
                public double Subtract(double n1, double n2) { return n1 - n2; }
                public double Multiply(double n1, double n2) { return n1 * n2; }
                public double Divide(double n1, double n2)   { return n1 / n2; }
            }
        }

        Step 3: Host the service as a console application using ServiceHost.

        internal static ServiceHost myServiceHost = null;

        internal static void StartService()
        {
            try
            {
                myServiceHost = new ServiceHost(typeof(CalculatorService.CalculatorService));
                myServiceHost.Open();
            }
            catch (Exception ex) { Console.WriteLine(ex.Message); }
        }

        internal static void StopService()
        {
            try
            {
                if (myServiceHost.State != CommunicationState.Closed)
                    myServiceHost.Close();
            }
            catch (Exception ex) { Console.WriteLine(ex.Message); }
        }

        Step 4: Define the configuration in the host's App.config with the binding and endpoint address.

        <system.serviceModel>
          <services>
            <service name="CalculatorService.CalculatorService" behaviorConfiguration="metadataSupport">
              <host>
                <baseAddresses>
                  <add baseAddress="net.pipe://localhost/CalculatorService" />
                </baseAddresses>
              </host>
              <endpoint address="" binding="netNamedPipeBinding" contract="CalculatorService.ICalculator" />
              <endpoint address="mex" binding="mexNamedPipeBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="metadataSupport">
                <serviceMetadata />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>
        Step 5: Generate the proxy. Before generating the proxy, run the service from a command prompt. The following command generates the service proxy class and configuration file.

        svcutil net.pipe://localhost/CalculatorService /config:App.Config

        The output will be as follows:

        C:\Documents and Settings\WCFDeveloper>svcutil net.pipe://localhost/CalculatorService /config:App.Config
        Microsoft (R) Service Model Metadata Tool
        [Microsoft (R) Windows (R) Communication Foundation, Version 3.0.4506.648]
        Copyright (c) Microsoft Corporation.  All rights reserved.

        Attempting to download metadata from 'net.pipe://localhost/CalculatorService' using
        WS-Metadata Exchange. This URL does not support DISCO.
        Generating files...
        C:\Documents and Settings\WCFDeveloper\CalculatorService.cs
        C:\Documents and Settings\WCFDeveloper\App.Config

        The client connects to the service using a proxy object, which is bound to the specified endpoint of the service. Both generated files need to be added to the client project.

        Step 6: Call the service from the client through the service proxy object.

        Here CalculatorClient is the generated service proxy object.

        CalculatorClient client = new CalculatorClient();

        private void uxAdd_Click(object sender, EventArgs e)
        {
            double num1 = Convert.ToDouble(uxNumber1.Text.Trim());
            double num2 = Convert.ToDouble(uxNumber2.Text.Trim());
            uxResult.Text = client.Add(num1, num2).ToString();
        }

        Now both the client and the service are ready for deployment.