Thursday, 28 March 2013

ASP.NET - A simple Question

Question:


I have a StudentManagement web application. In this application there is an admin module. The admin wants to send a personalized mail only to those students who have scored less than 30 in Physics.
For this purpose the admin has a list of student IDs with him.

How can the admin retrieve a student's email ID when he enters the student ID in the To field?

ASP.NET

// Use a parameterized query instead of string concatenation to avoid SQL injection.
using (SqlConnection objCon = new SqlConnection("Data Source=servername;Initial Catalog=TestCon;User ID=sa;Password=sa;Pooling=False"))
using (SqlCommand objCmd = new SqlCommand("SELECT emailid FROM Information WHERE num = @num", objCon))
{
    objCmd.Parameters.AddWithValue("@num", txtnum.Text);
    objCon.Open();
    using (SqlDataReader dr = objCmd.ExecuteReader())
    {
        if (dr.Read())
        {
            txtEmailId.Text = dr[0].ToString();
        }
    }
}

ASP.NET MVC


View
<input type="text" id="EmployeeID" />
<input type="text" id="EmployeeEmail" />
<input type="button" id="GetEmail" value="Get Email" />
<script type="text/javascript">
    $(function () {
        $("#GetEmail").click(function () {
            var empID = $("#EmployeeID").val();
            $.ajax({
                url: '@Url.Action("GetEmailID")',
                type: 'post',
                data: { EmployeeID: empID },
                // An input element takes .val(), not .text()
                success: function (msg) { alert(msg.result); $("#EmployeeEmail").val(msg.email); },
                error: function (msg) { }
            });
        });
    });
</script>

Controller:
// Declare a DbContext object as db....
public ActionResult Index()
{
    return View();
}

[HttpPost]
public ActionResult GetEmailID(int EmployeeID) // parameter name must match the posted data key
{
    // You get data from the database table Employee, which has columns EmployeeID and EmailID
    var emailFromDb = db.Employee
                        .Where(x => x.EmployeeID == EmployeeID)
                        .Select(x => x.EmailID)
                        .FirstOrDefault();

    return Json(new { email = emailFromDb, result = "success" });
}

Search string in CSV

Question is:

I have a string called inputstring. I'd like to search a CSV file to find all instances of the input string. How is this done in C#?

Ans:

using System;
using System.IO;

namespace CSV_search_string
{
    class Program
    {
        static void Main(string[] args)
        {
            string csvFile = "SciFi Books.csv";
            string searchString = "Fred Hoyle";
            char csvSeparator = ',';

            foreach (string line in File.ReadLines(csvFile))
                foreach (string value in line.Replace("\"", "").Split('\r', '\n', csvSeparator))
                    if (value.Trim() == searchString.Trim()) // case sensitive
                        Console.WriteLine("[ {0} ] found in: {1}", value, line);

            Console.ReadKey();
        }
    }
}

Sunday, 24 March 2013

Getting Exception 1

Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack


Happens when :

If you use the Response.End, Response.Redirect, or Server.Transfer method, a ThreadAbortException exception occurs. You can use a try-catch statement to catch this exception.

Why does this happen??

The Response.End method ends the page execution and shifts the execution to the Application_EndRequest event in the application's event pipeline.
The line of code that follows Response.End is not executed.
 This problem occurs in the Response.Redirect and Server.Transfer methods because both methods call Response.End internally.


What Should I do?

Set the second parameter of Response.Redirect / Server.Transfer to false:

 Response.Redirect("nextpage.aspx", false);
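
A common pattern (a sketch, assuming a Web Forms code-behind; the condition is hypothetical) is to pass false so Response.End is not called, then tell the runtime to skip the rest of the pipeline:

protected void Page_Load(object sender, EventArgs e)
{
    if (needsRedirect) // hypothetical condition
    {
        // false = do not call Response.End, so no ThreadAbortException
        Response.Redirect("nextpage.aspx", false);
        // Skip the remaining pipeline events and jump to EndRequest
        Context.ApplicationInstance.CompleteRequest();
        return; // code after Redirect would otherwise still execute
    }
}
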
Noticed in:

  • Microsoft ASP.NET 4.5
  • Microsoft ASP.NET 4
  • Microsoft ASP.NET 3.5
  • Microsoft ASP.NET 2.0
  • Microsoft ASP.NET 1.1
  • Microsoft ASP.NET 1.0
Wednesday, 20 March 2013

    Continuous Integration

    These are excerpts from Martin Fowler's Continuous Integration. Please visit Continuous Integration to view the complete article. Some lines were added by me to make the article read better.

    What is CI?


    Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.

    Integration is often thought of as a long and unpredictable process.

    But with CI the above is not true.

    Any integration errors are found rapidly and can be fixed rapidly. This contrast isn't the result of an expensive and complex tool. The essence of it lies in the simple practice of everyone on the team integrating frequently, usually daily, against a controlled source code repository.

    The term 'Continuous Integration' originated with the Extreme Programming development process, as one of its original twelve practices.

    Although Continuous Integration is a practice that requires no particular tooling to deploy, it is useful to use a Continuous Integration server.

    Before understanding CI we need to understand source control.

    What is Source Control?
    A source code control system keeps all of a project's source code in a repository. The current state of the system is usually referred to as the 'mainline'. At any time a developer can make a controlled copy of the mainline onto their own machine, this is called 'checking out'. The copy on the developer's machine is called a 'working copy'.

    In source control you can both alter the production code, and also add or change automated tests.

    Continuous Integration assumes a high degree of automated testing built into the software.



    Steps that are followed for CI

    1. I begin by taking a copy of the current integrated source onto my local development machine. I do this by using the source code management system to check out a working copy from the mainline.
    2. I make changes to my local development copy.
    3. Once I'm done (and usually at various points when I'm working) I carry out an automated build on my development machine. This takes the source code in my working copy, compiles and links it into an executable, and runs the automated tests. Only if it all builds and tests without errors is the overall build considered to be good.
    4. With a good build, I can then think about committing my changes into the repository. The twist, of course, is that other people may, and usually have, made changes to the mainline before I get chance to commit. So first I update my working copy with their changes and rebuild. If their changes clash with my changes, it will manifest as a failure either in the compilation or in the tests. In this case it's my responsibility to fix this and repeat until I can build a working copy that is properly synchronized with the mainline.
    5. Once I have made my own build of a properly synchronized working copy I can then finally commit my changes into the mainline, which then updates the repository.
    6. At this point we build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that my changes are done. There is always a chance that I missed something on my machine and the repository wasn't properly updated. Only when my committed changes build successfully on the integration machine is my job done. This integration build can be executed manually by me, or done automatically.
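
    The commit cycle above can be sketched as a command sequence (illustrative only; the tool names assume Subversion and Ant, typical of the article's era):

    svn update            # step 4: pull others' changes into the working copy
    ant build test        # step 3: automated local build plus tests
    svn commit -m "..."   # step 5: commit once the local build is good
    # step 6: the integration machine then rebuilds from the mainline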
    Everything should be in the repository.

     Everything you need to do a build should be in there including: test scripts, properties files, database schema, install scripts, and third party libraries.

    One of the features of version control systems is that they allow you to create multiple branches, to handle different streams of development.

    Keep your use of branches to a minimum.

     In general you should store in source control everything you need to build anything, but nothing that you actually build. Some people do keep the build products in source control, but I consider that to be a smell - an indication of a deeper problem, usually an inability to reliably recreate builds.[I liked this line. :)]

    Automated environments for builds are a common feature of systems.
     The Unix world has had make for decades,
     the Java community developed Ant,
    the .NET community has had Nant and now has MSBuild.

    Make sure you can build and launch your system from these scripts with a single command.

    The build should include getting the database schema out of the repository and firing it up in the execution environment.

    Rule of Thumb:
    Anyone should be able to bring in a virgin machine, check the sources out of the repository, issue a single command, and have a running system on their machine.

    Depending on what you need, you may need different kinds of things to be built. You can build a system with or without test code, or with different sets of tests. Some components can be built stand-alone. A build script should allow you to build alternative targets for different cases.

    Build Tools vs. IDEs
     Many of us use IDEs, and most IDEs have some kind of build management process within them. However, these build files are always proprietary to the IDE and often fragile, and they need the IDE to work. It's okay for IDE users to set up their own project files and use them for individual development. However, it's essential to have a master build that is usable on a server and runnable from other scripts. So on a Java project we're okay with having developers build in their IDE, but the master build uses Ant to ensure it can be run on the development server.

    XP and TDD
    In particular the rise of Extreme Programming (XP) and Test Driven Development (TDD) have done a great deal to popularize self-testing code and as a result many people have seen the value of the technique.

    Both of these approaches make a point of writing tests before you write the code that makes them pass - in this mode the tests are as much about exploring the design of the system as they are about bug catching. This is a Good Thing, but it's not necessary for the purposes of Continuous Integration, where we have the weaker requirement of self-testing code. (Although TDD is my preferred way of producing self-testing code.)

    For self-testing code you need a suite of automated tests that can check a large part of the code base for bugs. The tests need to be able to be kicked off from a simple command and to be self-checking. The result of running the test suite should indicate if any tests failed. For a build to be self-testing the failure of a test should cause the build to fail.

    Tests don't prove the absence of bugs. However perfection isn't the only point at which you get payback for a self-testing build. Imperfect tests, run frequently, are much better than perfect tests that are never written at all.

    The one prerequisite for a developer committing to the mainline is that they can correctly build their code. This, of course, includes passing the build tests. As with any commit cycle the developer first updates their working copy to match the mainline, resolves any conflicts with the mainline, then builds on their local machine. If the build passes, then they are free to commit to the mainline

    The key to fixing problems quickly is finding them quickly. Conflicts that stay undetected for weeks can be very hard to resolve.

    The more frequently you commit, the fewer places you have to look for conflict errors, and the more rapidly you fix conflicts.

    Frequent commits encourage developers to break down their work into small chunks of a few hours each. This helps track progress and provides a sense of progress.

    Regular builds happen on an integration machine and only if this integration build succeeds should the commit be considered to be done. Since the developer who commits is responsible for this, that developer needs to monitor the mainline build so they can fix it if it breaks. A corollary of this is that you shouldn't go home until the mainline build has passed with any commits you've added late in the day.

    There are two main ways to ensure this: using a manual build or a continuous integration server.

    The manual build approach is the simplest one to describe. Essentially it's a similar thing to the local build that a developer does before the commit into the repository. The developer goes to the integration machine, checks out the head of the mainline (which now houses his last commit) and kicks off the integration build. He keeps an eye on its progress, and if the build succeeds he's done with his commit.

    A continuous integration server acts as a monitor to the repository. Every time a commit against the repository finishes the server automatically checks out the sources onto the integration machine, initiates a build, and notifies the committer of the result of the build. The committer isn't done until she gets the notification - usually an email.

    1. The whole point of continuous integration is to find problems as soon as you can.
    2. The whole point of working with CI is that you're always developing on a known stable base.
    3. The whole point of Continuous Integration is to provide rapid feedback. Nothing sucks the blood of a CI activity more than a build that takes a long time.

    A build that takes an hour is totally unreasonable, because every minute you shave off the build time is a minute saved for each developer every time they commit.

    Introduce some automated testing into your build. Try to identify the major areas where things go wrong and get automated tests to expose those failures. Particularly on an existing project it's hard to get a really good suite of tests going rapidly - it takes time to build tests up. You have to start somewhere though - all those cliches about Rome's build schedule apply.

    If you are starting a new project, begin with Continuous Integration from the beginning. Keep an eye on build times and take action as soon as you start going slower than the ten minute rule. By acting quickly you'll make the necessary restructurings before the code base gets so big that it becomes a major pain.

    On the whole I think the greatest and most wide ranging benefit of Continuous Integration is reduced risk.

    The trouble with deferred integration is that it's very hard to predict how long it will take to do, and worse, it's very hard to see how far you are through the process. The result is that you are putting yourself into a complete blind spot right at one of the tensest parts of a project - even if you're one of the rare cases where you aren't already late.

    Bugs are cumulative. The more bugs you have, the harder it is to remove each one.

    As a result, projects with Continuous Integration tend to have dramatically fewer bugs, both in production and in process.

    If you have continuous integration, it removes one of the biggest barriers to frequent deployment. Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features, and generally become more collaborative in the development cycle. This helps break down the barriers between customers and development - barriers which I believe are the biggest barriers to successful software development.

    So, these are my notes on CI from Continuous Integration.

    Thursday, 14 March 2013

    SQL -Tip

    The following line resets the Identity value for the Customer table to 0 so that the next record added starts at 1.

    DBCC CHECKIDENT('Customer', RESEED, 0)
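
    For example (a sketch; the table and column names are illustrative):

    DELETE FROM Customer;                          -- clear the table
    DBCC CHECKIDENT('Customer', RESEED, 0);        -- reset the identity seed to 0
    INSERT INTO Customer (Name) VALUES ('First');  -- this row gets identity value 1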

    Tuesday, 12 March 2013

    OAuth - An Overview


    What is OAuth?

    OAuth, a.k.a. RFC 5849, is an open standard for authorization.

    First published on 4 December 2007, it is one of the fastest growing open web specifications.

    In the words of OAuth.net,

    OAuth is an open protocol to allow secure authorization in a simple and standard method from web, mobile and desktop applications.

    So, in short: you get keys to use the authentication mechanism of some other API. The key works only for the place you authenticate; with the OAuth key you cannot do anything else.
    It is just like the valet key that comes with your car. With the valet key you can only drive the car to a limited extent and cannot use any other feature.

    To make it more understandable , let me give another example.

    In traditional client-server systems, the user passes a username and password to the server, and the server grants or denies access based on the validity of that username and password.



    Nowadays there are so many websites - the user is using a mail server, social networking, banking and so on.

    Now, if a new username and password have to be remembered for each and every site, it becomes a really tiresome exercise for the user.

    On the other hand, suppose I use only one username and password for all my different accounts - that is also a problem, because then data security becomes a question.

    What OAuth does is introduce a new role called the resource owner. The client talks to the resource owner, the resource owner takes care of getting an authorization from the host, and access is granted to the user.
    So OAuth introduces a third role to this model: the identity provider comes into play along with the client and the server. Here the server acts as the initiator of the authentication instead of the client.

    The server holds the resources and is completely unaware of the identities themselves; these are handled through the identity provider.

    The following picture from MSDN depicts yet another way of implementing this.


    1. The client submits an authentication request to the authentication broker.
    2. The authentication broker contacts the identity store to validate the client's credentials.
    3. The authentication broker responds to the client, and if authentication is successful, it issues a security token. The client can use the security token to authenticate with the service. The security token can be used by the client for a period of time that is defined by the authentication broker. The client can then use the issued security token to authenticate requests to the service throughout the lifetime of the token.
    4. A request message is sent to the service; it contains the security token that is issued by the authentication broker.
    5. The service authenticates the request by validating the security token that was sent with the message.
    6. The service returns the response to the client.


    So here, in order for the client to access resources, it first obtains permission from the resource owner.  This permission is expressed in the form of a token and matching shared-secret.  The purpose of the token is to make it unnecessary for the resource owner to share its credentials with the client.  Unlike the resource owner credentials, tokens can be issued with a restricted scope and limited lifetime, and revoked independently. Once the tokens are issued, the resources can be accessed independently.

    Now that we have a better understanding of OAuth, we will see in the next article how it can be implemented in .NET.

    Wednesday, 27 February 2013

    WCF : An introduction



    1. What is WCF?
    •  WCF stands for Windows Communication Foundation.
    •  It is considered Microsoft's Service-Oriented Architecture (SOA) platform for building distributed and interoperable applications.
    •  WCF unifies the ASMX, Remoting, and Enterprise Services stacks and provides a single programming model.
    •  WCF services are interoperable and support all the core web services standards.
    •  A WCF service also provides extension points to quickly adapt to new protocols and updates, and integrates very easily with earlier Microsoft technologies like Enterprise Services, COM and MSMQ.


    2. Why should I use WCF?
    •  WCF is interoperable with other services, unlike .NET Remoting, where both the client and the service have to be .NET.
    •  WCF services provide better reliability and security compared to ASMX web services.
    •  In WCF, there is no need to change much code to implement the security model or change the binding; small changes in the configuration will meet your requirements.
    •  WCF has an integrated logging mechanism - changing the configuration file settings provides this functionality, whereas in other technologies the developer has to write the code.
    3.    What is the difference between WCF and Web Services?
    •  Protocol: Web services (traditional .asmx) can only be invoked over HTTP, while a WCF service can be invoked over any protocol (HTTP, TCP, etc.) and any transport type.
    •  Flexibility: Web services are not flexible; WCF services are. If you make a new version of the service, you just need to expose a new endpoint, so services are agile - a very practical approach given current business trends.
    •  Ease of development: We develop WCF as contracts, interfaces, operations, and data contracts. As developers we can focus on the business logic and need not worry about the channel stack. WCF is a unified programming API for any kind of service, so we create the service and use configuration information to set up the communication mechanism (HTTP/TCP/MSMQ etc.).
    •  XmlSerializer and DataContractSerializer: Web services use XmlSerializer, but WCF uses DataContractSerializer, which performs better than XmlSerializer.

    Key issues with using XmlSerializer to serialize .NET types to XML:

    * Only public fields or properties of .NET types can be translated into XML.
    * Only classes that implement the IEnumerable interface are serialized as collections.
    * Classes that implement the IDictionary interface, such as Hashtable, cannot be serialized.

    The DataContractAttribute can be applied to a class or a structure. The DataMemberAttribute can be applied to a field or a property, and these fields or properties can be either public or private. A practical benefit of the design of the DataContractSerializer is better performance over XmlSerializer.




    4. What are a service and a client in the perspective of data communication?
    •  A service is a unit of functionality exposed to the world.
    •  The client of a service is merely the party consuming the service.
    5. What is an SOA service?
    •  SOA is Service-Oriented Architecture. An SOA service is the encapsulation of a high-level business concept, and is composed of three parts.
    •  Any service that fulfills these three requirements is an SOA service.
    6. What are the core components of WCF?
    •  Like any other SOA service, the three core components of WCF are:
    a.       A service class
    b.       A hosting environment
    c.       Endpoints to expose the service

    7. What is ABC in WCF?
    •  ABC stands for Address, Binding and Contract - the three elements that make up an endpoint.



    8.    What is an endpoint?
    •  A WCF service is a program that exposes a collection of endpoints. Each endpoint is a portal for communicating with the world.
    •  All WCF communication takes place through endpoints. An endpoint consists of three components:
    a.       Address
    b.       Binding
    c.       Contract
    •  The endpoint is the fusion of address, binding and contract.
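
    As a sketch of how the three come together (the service and contract names are illustrative), an endpoint in configuration looks like this:

    <system.serviceModel>
      <services>
        <service name="MyApp.GreetingService">
          <!-- Address + Binding + Contract = one endpoint -->
          <endpoint address="http://localhost:8000/Greeting"
                    binding="basicHttpBinding"
                    contract="MyApp.IGreetingService" />
        </service>
      </services>
    </system.serviceModel>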









    Silverlight Deployment


    1.       What is the single unit for deploying Silverlight applications?


    Silverlight applications are downloaded by the browser in XAP files.

    2.       What are xap files?


                    XAP files are essentially .zip files that contain an assembly manifest file and one or more assemblies.

    3.       Can an APACHE server host a Silverlight application?


                  Silverlight applications can be hosted on most types of Web servers, like Internet Information Server (IIS) or Apache. However, most Web servers are usually configured to serve only a short list of well-known file extensions.

    4.       What are the MIME types your web server should support to host Silverlight application?


    Extension    MIME type
    .xaml        application/xaml+xml
    .xap         application/x-silverlight-app


    IIS 7, included in Windows Server 2008, already includes all the relevant MIME types for both WPF and Silverlight, including both .xap and .xaml extensions, so if you're using Windows Server 2008, you're all set.
    5.       How can you optimize xap file size for increasing download speed?

    a.       Set Copy Local to false

    By default, any non-system assemblies that you reference will be added to the XAP file generated by Silverlight applications. If you want to optimize your XAP files for download speed, this behavior may not be efficient for some of your modules' references.

    Consider the following example. You have created an application with several remote modules. Each module gets its own XAP file. You have also created a shared Common assembly that contains shared services, common interfaces, and so on. Each module references that common assembly. By default, each XAP file will now also contain the Common assembly; this makes the XAP files larger than they need to be.

    b.      To solve this, change the references to the Common assembly in all of the modules by setting Copy Local to false. This ensures that the Common assembly is not added to the XAP files.

    When deploying Silverlight applications created with the Composite Application Library, some common assemblies that can typically be excluded from the XAP files include Composite and Composite.Presentation assemblies and infrastructure assemblies, among others.


    6.       How can you deploy a Silverlight application?

    a.       Using an SMS server

    b.      Manually

    c.       Group policy

    Group policy is ideal to deploy Silverlight in small to medium sized organizations or when it is not being deployed to a large number of users simultaneously.  For large organizations, Silverlight is best deployed using SMS or another third-party software distribution tool.  A limitation of the group policy deployment method is that it applies only to Microsoft operating systems, ignoring Apple operating system clients. 


    7.       How do you know if the problem which has occurred is a web browser problem and not a Silverlight issue?


    To isolate browser issues that might be related to the Silverlight add-on, you can selectively disable the add-on in Internet Explorer 7.

    To disable a browser add-on

    a.       Click the Tools menu, click Manage Add-ons, and then click Enable or Disable Add-ons.
    b.      Change the “Show” Drop-down box to “Add-ons that have been used by Internet Explorer”
    c.       Click AgControl Class, click Disable, and then click OK.

    Alternatively, you can turn off all add-ons temporarily in Internet Explorer 7 by starting in No add-ons mode.

    To start Internet Explorer 7 in No add-ons mode

    a.       Click Start, click All Programs, and then click Accessories.
    b.      Click System Tools, and then click Internet Explorer (No Add-ons).

    You can also start Internet Explorer without add-ons by right-clicking the Internet Explorer icon on the desktop and then clicking Start Without Add-ons. Or start Internet Explorer with no add-ons or toolbars by running the command iexplore.exe -extoff.


    8.        What is difference between asp.net application hosting and Silverlight hosting?

    Deploying Silverlight applications is as easy as deploying ASP.NET applications, because Silverlight is normally embedded in a site.


    9.       To install a Silverlight RIA Services application, do we need RIA services installed in the server machine?

    To run a Silverlight RIA Services application we need .NET 4 installed on the server machine. RIA Services must also be available on the Web server.



    The RIA Services assemblies must be available on the Web server. It is recommended that RIA Services be installed on the Web server that will host your application. If this is not an option, due to lack of permissions or some other issue, you can also make them available on the Web server by either including them in the bin folder of your project when it is published or by installing them in the global assembly cache (GAC).

     To install the RIA Services RC on your server, download the MSI locally and then run it like this:

    msiexec /i RIAServices.msi /SERVER=true

     If you have access to a .NET 4 RC server but do not have permissions to install RIA Services on it, you can choose to carry the RIA Services bits in the Web Applications BIN folder.

     10.   What should you consider while preparing a WCF RIA Services application for deployment?

    Following is what is needed:

    ·         System.ServiceModel.DomainServices.Server.dll

    ·         System.ServiceModel.DomainServices.Hosting.dll

    ·         If you are using Entity Framework to access a database, then you will also need to add a reference to the System.ServiceModel.DomainServices.EntityFramework.dll assembly.

    ·         If you are using LINQ to SQL to access data, then you will need to add a reference to the Microsoft.ServiceModel.DomainServices.LinqToSql.dll assembly

    If you are using the Visual Studio Build->Publish option to deploy your application, make sure the following three assemblies under the Web Application->References have been marked as Copy Local = True


    11.   Ideally, where should my RIA Services DLLs lie - in the GAC or in bin?

    Instead of copying the RIA Services assemblies in the Bin folder of every project that uses them, you can install the assemblies in the GAC. Any assemblies in the GAC are available to every application on the server. This approach is easier to maintain because an assembly only needs to be updated in the GAC instead of every Bin folder.

    If you are copying the bits over manually to your deployment server, copy the above three assemblies to the Web Applications BIN folder right next to your [WebAppName].dll


    12.   What is the need to make the assemblies Copy Local = True?

    Setting these property values to True results in the assemblies getting copied to the bin folder the next time you build the solution. When the assemblies are copied to the bin folder, they will be copied to the Web server when you publish the site.
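
    In the project file, the Copy Local setting corresponds to the <Private> element on a reference (the assembly name here is illustrative):

    <Reference Include="System.ServiceModel.DomainServices.Server">
      <!-- Copy Local = True: copy the assembly to bin on build -->
      <Private>True</Private>
    </Reference>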


    13.   Why my installation works in some PCs and fails to run in other PCs?

    Generally this happens because Copy Local = True makes zero sense if an assembly is installed in the GAC: the GAC is always searched first, so the local copy will never be used. Leaving the setting unchanged causes confusion, and changing it causes confusion too - something that could perhaps have been addressed with a message box at solution load time.

    To overcome this problem you can also add a PostBuild event to manually copy the assemblies into the output directory.
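
    Such a post-build event might look like this (the paths are illustrative):

    xcopy /y "$(SolutionDir)Libs\*.dll" "$(TargetDir)"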


    14.   Why is Copy Local true by default for some DLLs and false for others?

    The project-assigned value of CopyLocal is determined in the following order:

    1.       If the reference is another project, called a project-to-project reference, then the value is true.

    2.       If the assembly is found in the global assembly cache, the value is false.

    3.       As a special case, the value for the mscorlib.dll reference is false.

    4.       If the assembly is found in the Framework SDK folder, then the value is false. Otherwise, the value is true.


    15.   What are the things you need to take care in web.config file while deploying a RIA service?