Friday, November 27, 2015

TypeScript configurations in Visual Studio Code

Some time ago I blogged about why it's the right time to learn TypeScript, and that post got a lot of buzz. TypeScript is getting more popular every day for the reasons I mentioned there. So in this post we are going to learn how we can use TypeScript with the new Visual Studio Code editor.

TypeScript support in Visual Studio Code:


As you may know, Visual Studio Code is in beta right now and provides great support for TypeScript. It can work with TypeScript in two modes.

1) File Scope:

In this mode Visual Studio Code treats each TypeScript file as a separate unit. So unless you reference other TypeScript files manually with reference comments or external modules, it will not provide IntelliSense across files, and there is no common project context.

2) Explicit Scope:

In this mode we create a tsconfig.json file, which indicates that the folder it lives in is the root of a TypeScript project. Now you get full IntelliSense across files, as well as a place for common configuration in the tsconfig.json file.

You can create a new file via File->New File in Visual Studio Code and add the following TypeScript configuration under compilerOptions.
{
    "compilerOptions": {
        "target": "ES5",
        "module": "amd",
        "sourceMap": true
    }
}

Converting TypeScript into JavaScript files automatically (transpiling):

As we all know, TypeScript is a superset of JavaScript, and we cannot put TypeScript files directly into an HTML page, because the browser will not understand TypeScript itself. We have to convert it into JavaScript first. To convert TypeScript files into JavaScript automatically, we need to configure Visual Studio Code's built-in task runner.

To configure the task runner, press F1 or Ctrl+Shift+P and type "task runner"; the Configure Task Runner option will pop up.

task-runner-vs-code

Once you press Enter, it will create a .vscode folder with a file called tasks.json containing the following code.
{
    "version": "0.1.0",

    // The command is tsc. Assumes that tsc has been installed using npm install -g typescript
    "command": "tsc",

    // The command is a shell script
    "isShellCommand": true,

    // Show the output window only if unrecognized errors occur.
    "showOutput": "silent",

    // args is the HelloWorld program to compile.
    "args": ["HelloWorld.ts"],

    // use the standard tsc problem matcher to find compile problems
    // in the output.
    "problemMatcher": "$tsc"
}
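One tweak worth knowing: since TypeScript 1.5, running tsc with no input files makes it pick up the tsconfig.json in the current directory. So if you want the task to build the whole project according to our tsconfig.json instead of a single file, you can replace the args line with an empty array (this change is my suggestion, not part of the generated file).

"args": []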

Now whenever you build the project, it will create the JavaScript output automatically. Let's create a TypeScript file like the following.
class HelloWorld {
    PrintMessage(name: string) {
        console.log("Hello world:" + name);
    }
}

Here you can see that right now there is only one file in the explorer section.

typescript-file-in-visual-studio-code

Now when you build the project with Ctrl+Shift+B, it will create a JavaScript file.

typescript-and-javascript-both-vscode

And following is the generated code.
var HelloWorld = (function () {
    function HelloWorld() {
    }
    HelloWorld.prototype.PrintMessage = function (name) {
        console.log("Hello world:" + name);
    };
    return HelloWorld;
})();
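To actually see some output (for example by running the compiled HelloWorld.js with node), you can add usage lines at the bottom of HelloWorld.ts. These two lines are my addition, not part of the original sample, so the compiled output shown above does not include them.

var greeter = new HelloWorld();
greeter.PrintMessage("Visual Studio Code");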

Hiding JavaScript files from the explorer area in Visual Studio Code:


I don't like my JavaScript files showing up in the explorer area; since we already have the TypeScript files, we don't need to worry about the generated JavaScript. There is a way to hide JavaScript files: add the following code to your user settings file, which you can open via File->Preferences->User Settings.
"files.exclude": {
    "**/.git": true,
    "**/.DS_Store": true,
    "**/*.js": {
        "when": "$(basename).ts"
    }
}
In the above, I have written a custom filter that excludes a JavaScript file whenever a TypeScript file with the same base name is present. Now the explorer will not show the JavaScript files even though they exist on disk.


typescript-file-in-visual-studio-code

That's it. Hope you like it. Stay tuned for more! In forthcoming posts we are going to learn a lot more about TypeScript.
Sunday, November 8, 2015

My Entity framework blog series

Saturday, November 7, 2015

Working with transaction in Entity Framework 6

In any relational database, maintaining the integrity of the data is very important, and transactions are one way of doing that. When you need to insert data into multiple tables and an insert into one of the tables fails, you should roll back the other inserts; this is where transactions become very useful. The same scenario can occur for update or delete operations. Without transactions you would end up with lots of junk data in your tables. Entity Framework is one of the most popular ORMs in the Microsoft .NET world, so in this post we are going to learn how we can use transactions with Entity Framework 6.

Transaction and Entity Framework 6:


Entity Framework internally maintains a transaction when you call the SaveChanges() method, so all insert and update operations under a single SaveChanges() call are already in one transaction. But when you want to wrap multiple SaveChanges() calls in a single transaction, there was no built-in functionality in earlier versions of Entity Framework; we used to use the TransactionScope class for that.
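For reference, the old approach looked roughly like the following sketch. This is my illustration, not code from the original post; it assumes the ProductDbContext, Category and Product classes defined later in this article.

using System.Transactions;

using (var scope = new TransactionScope())
using (var db = new ProductDbContext())
{
    db.Categories.Add(new Category { CategoryName = "Clothes" });
    db.SaveChanges();

    // ...more SaveChanges() calls can participate in the same scope...

    // If Complete() is not called, everything is rolled back
    // when the scope is disposed.
    scope.Complete();
}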

But now, with Entity Framework 6.0, we have two built-in APIs for transactions.

DbContext.Database.BeginTransaction:

It allows us to begin a transaction spanning multiple SaveChanges() calls. You can combine as many operations as you want under a single transaction: either all of them succeed and the transaction is committed, or, if any exception occurs, the transaction is rolled back.

DbContext.Database.UseTransaction:

Sometimes we need to use a transaction which was started outside of Entity Framework. In that case, this option allows us to use that existing transaction with Entity Framework as well.

In this blog post, we are going to use BeginTransaction. I will write a separate blog post about how to use an existing transaction with Entity Framework.
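As a quick preview, a minimal sketch of UseTransaction could look like the following. This is an assumption-laden illustration, not the full treatment: it presumes ProductDbContext gains an extra constructor overload that forwards to DbContext(DbConnection existingConnection, bool contextOwnsConnection); the version shown later in this post does not have one.

using System.Data.SqlClient;

using (var connection = new SqlConnection("your connection string"))
{
    connection.Open();
    using (var sqlTransaction = connection.BeginTransaction())
    {
        // Hypothetical overload; see the note above.
        using (var context = new ProductDbContext(connection, contextOwnsConnection: false))
        {
            // Tell Entity Framework to enlist in the externally started transaction.
            context.Database.UseTransaction(sqlTransaction);
            context.Categories.Add(new Category { CategoryName = "Clothes" });
            context.SaveChanges();
        }
        sqlTransaction.Commit();
    }
}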

So, enough theory; let's create a console application to understand it better.

ef6-with-transactions-console-application

We need entity framework. So I have added it via NuGet package.

entity-framework-transaction-nuget-package

In this application, we are going to use two model classes, Category and Product. We will save them both in a single transaction and try to understand how transactions work with Entity Framework.
namespace EFWithTransactions
{
    public class Category
    {
        public int CategoryId { get; set; }
        public string CategoryName { get; set; }
    }
}
And here is the Product model.
using System.ComponentModel.DataAnnotations.Schema;

namespace EFWithTransactions
{
    public class Product
    {
        public int ProductId { get; set; }
        public string ProductName { get; set; }
        [ForeignKey("Category")]
        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }
}

Here you can see that Product has a CategoryId, so a product belongs to a category. I have created my DbContext class like the following.
using System.Data.Entity;

namespace EFWithTransactions
{
    public class ProductDbContext : DbContext
    {
        public ProductDbContext()
            : base("ProductConnectionString")
        {

        }
        public DbSet<Category> Categories { get; set; }
        public DbSet<Product> Products { get; set; }
    }
}

And the following code is the Main method of the console application, which illustrates a real-world scenario where an exception might occur between multiple SaveChanges() calls.
using System;

namespace EFWithTransactions
{
    class Program
    {
        static void Main(string[] args)
        {
            using (ProductDbContext productDbContext = new ProductDbContext())
            {
                using (var transaction = productDbContext.Database.BeginTransaction())
                {
                    try
                    {
                        //saving category
                        Category category = new Category
                        {
                            CategoryName = "Clothes"
                        };
                        productDbContext.Categories.Add(category);
                        productDbContext.SaveChanges();

                        // Throw some error to check transaction
                        // Comment this out to make the transaction successful
                        // throw new Exception("Custom Exception");

                        //saving product
                        Product product = new Product
                        {
                            ProductName = "Blue Denim Shirt",
                            CategoryId = category.CategoryId
                        };
                        productDbContext.Products.Add(product);
                        productDbContext.SaveChanges();
                        Console.Write("Category and Product both saved");
                        transaction.Commit();
                    }
                    catch (Exception exception)
                    {
                        transaction.Rollback();
                        Console.WriteLine("Transaction Roll backed due to some exception");
                    }
                }

            }
            Console.ReadKey();
        }
    }
}
If you look at the above code carefully, you can see I have two SaveChanges() calls: one for the category and another for the product. I have used the BeginTransaction method to initiate a new transaction, and I have thrown a custom exception to simulate something going wrong inside the transaction. A try/catch block is there so that if any exception occurs, the catch block rolls back the transaction; if everything goes well, the transaction is committed.

Now let's run this application. Following is the output; as expected, the transaction is rolled back because we have thrown an exception.

Transaction-roll-back-entity-framework

And no data was inserted in the database either.

sql-server-database-with-no-data

Now let's comment out the throw statement.
//throw new Exception("Custom Exception");

Now let's run our application again; here is the output, as expected.

transaction-committed-sql-server

And now we have the data in the database.

sql-server-database-with-data-transaction-commited

So now we have a really good way to use transactions in Entity Framework. Hope you like it. Stay tuned for more!
You can find the complete source code for this blog post on GitHub at https://github.com/dotnetjalps/EF6WithTransaction
Friday, November 6, 2015

How to use stored procedure with Entity Framework Code First

I'm getting lots of requests from readers of my blog to write about how to use stored procedures with Entity Framework Code First. So in this blog post we're going to learn exactly that.

To demonstrate it, we are going to create a table called Employee like the following.

table-entity-framework-code-first-stored-procedure
And here is the create table script for the same.
CREATE TABLE [dbo].[Employee](
    [EmployeeId] [int] NOT NULL,
    [FirstName] [nvarchar](50) NULL,
    [LastName] [nvarchar](50) NULL,
    [Designation] [nvarchar](50) NULL,
    CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED
    (
        [EmployeeId] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
        ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

And following is the data I have inserted there.

entity-framework-table-data

And here is the stored procedure we are going to use.
CREATE PROCEDURE usp_GetAllEmployees
AS
SELECT EmployeeId,FirstName,LastName,Designation FROM Employee

Now that we are done with the database side, it's time to write some C# code. I'm going to use a simple console application for this as well.

console-application-ef-code-first-stored-procedure

Now it's time to add Entity Framework via its NuGet package.

nuget-package-entity-framework-code-first

Here is the model class I have created.
namespace CodeFirstStoredProcedure
{
    public class Employee
    {
        public int EmployeeId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Designation { get; set; }
    }
}
Then I have created the Entity Framework DbContext like the following.
using System.Data.Entity;

namespace CodeFirstStoredProcedure
{
    public class EmployeeDbContext : DbContext
    {
        public EmployeeDbContext()
            : base("EmployeeConnectionString")
        {
        }

        public DbSet<Employee> Employees { get; set; }
    }
}
And the following is the code that uses the stored procedure to get data from the database via Entity Framework Code First.
using System;
using System.Linq;

namespace CodeFirstStoredProcedure
{
    class Program
    {
        static void Main(string[] args)
        {
            using (EmployeeDbContext dbContext = new EmployeeDbContext())
            {
                string commandText = "[dbo].[usp_GetAllEmployees]";
                var employees = dbContext.Database.SqlQuery<Employee>(commandText).ToList();

                Console.WriteLine("Printing Employee");
                foreach (var employee in employees)
                {
                    Console.WriteLine(employee.EmployeeId);
                    Console.WriteLine(employee.FirstName);
                    Console.WriteLine(employee.LastName);
                    Console.WriteLine(employee.Designation);
                    Console.WriteLine("-----------------");
                }
            }
            Console.ReadKey();
        }
    }
}

If you look at the above code carefully, you'll see I have used the SqlQuery method to execute the stored procedure, which returns a list of employees. Then I have printed that list with Console.WriteLine. Now when you run this application, the output is as expected.

entity-framework-code-first-stored-procedure
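By the way, SqlQuery also accepts parameters, so parameterized stored procedures work the same way. Here is a hedged sketch, assuming a hypothetical usp_GetEmployeesByDesignation procedure that takes a @Designation parameter:

using System.Data.SqlClient;

string commandText = "EXEC [dbo].[usp_GetEmployeesByDesignation] @Designation";
var managers = dbContext.Database
    .SqlQuery<Employee>(commandText, new SqlParameter("@Designation", "Manager"))
    .ToList();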

That's it. It's very easy to use stored procedures with Entity Framework Code First. Hope you like it. Stay tuned for more!
You can find the complete source code of this sample application at https://github.com/dotnetjalps/EFCodeFirstStoredProcedure
Sunday, October 18, 2015

The Famous MongoDB Document Database: Operations and Benefits

This is a guest post from Jenny Richards. Jenny is a content developer for RemoteDBA.com, one of the leading companies in the country providing remote DBA support services. You can find more about her at the following social profiles.
https://plus.google.com/u/0/114743495388960667319/posts https://www.linkedin.com/in/richardsjenny

Over the last few years, a lot of enterprise attention has been focused on data requirements and data management and administration. The reason for this is clear: proper management of an enterprise’s information assets is critical to effective operation. Enterprises stand to gain a lot if they have in place systems that facilitate proper management of data streaming in from all sources.

Improper storage and management of data also presents a challenge, particularly for the subsequent analysis that informs decision-making and business-critical processes. As a result, countless technologies are being developed and churned into the market to enable data storage, analytics and management. These technologies are also constantly being upgraded according to user requirements to make them better suited to different enterprises' needs.

image
Image credit: slidesharecdn.com

One such technology is MongoDB, which is just about the best NoSQL, document-storage-based database available on the market. If you are looking to implement a data storage solution, you will be interested in a few important qualities, which explain why one database is suited to your needs while another is not.

Among the most important considerations are the performance level of the database and its level of availability. You want a database that delivers the highest performance and is available the vast majority of the time; these are databases which offer replicated services, including master failover systems. In addition, enterprises need databases that offer seamless scalability, so that the database can easily grow as organizations and data volumes increase.

Enterprise requirements for a database solution


Every database must include the capability to perform a few key functions. It should also have a fast learning curve, since the enterprise may have to provide initial and ongoing training for its IT staff if it uses in-house teams. Organizations that invest in new database systems without accounting for the training of the people who will manage them are simply setting themselves up for failure.

Every database, MongoDB included, comes with a unique set of features and operational and maintenance requirements. This means that to derive maximum benefit and optimal performance from each, you must allocate time and resources for training the database handlers. If you are unable to, you can instead go with remote DBA service experts, who are already trained and can provide maintenance and administration support for a fixed periodic fee.

Of course, you should also consider how much data the enterprise needs to store, both in the short and longer term. You don’t want to invest in hardware and applications that will be obsolete just a few years into your investment. As the business grows and its data requirements increase, you need a database solution that can scale accordingly and allow you to manage the content effectively.

MongoDB offers easy, seamless scalability on implementation. In addition, since it is a NoSQL database that stores data in BSON format, there is no need for schema definition, which means faster data storage and retrieval. Binary JSON also makes queries fast, which helps with effective content management.
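To make the schema-free point concrete, here is a minimal sketch (an editorial addition, not the author's; it assumes the official MongoDB C# driver and a server on localhost) inserting two differently shaped documents into the same collection:

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var products = client.GetDatabase("shop").GetCollection<BsonDocument>("products");

// Two documents with different fields in the same collection -- no schema required.
products.InsertOne(new BsonDocument { { "name", "Blue Denim Shirt" }, { "size", "M" } });
products.InsertOne(new BsonDocument { { "name", "Phone" }, { "ramGb", 4 } });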

Why MongoDB is superior to other document-based databases:


There are many features that make MongoDB a preferred NoSQL database solution compared with other similar databases on the market:
  • The document-based storage method enables seamless object mapping onto various data types and programming languages. It includes documents and embedded arrays which significantly reduce the need for connectors.
  • Polymorphism is made simpler owing to its dynamic schema storage method. Dynamic schema in database terms means that documents found within one collection needn’t have the same structure, or a similar set of fields. In turn, similar/common fields within the same document collection can have different data types.
  • MongoDB is optimized for high performance. Read/write operations are faster because of embedding capability. In addition, keys from embedded arrays and documents can be included within indexes.
  • The database offers high availability owing to replicated servers and automated master failover implementation.
  • High scalability, made possible by auto-sharding features that distribute collection data across machines. Consistent reads can also be dispersed throughout the replicated servers.
image
Image credit: blogspot.com

Key features of MongoDB:


The main features of MongoDB are defined by its power, flexibility, ease of use and speed.

1. Power

MongoDB offers most of the features included in traditional RDBMSs including: dynamic queries, secondary indexes, upsert operations (update existing documents, insert non-existent documents), rich updates, easy aggregation and sorting. This means you can enjoy the same level of functionality you would using an RDBMS, while still taking advantage of the scalability and flexibility benefits presented by non-relational models.
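For example, an upsert with the C# driver (continuing the hypothetical products collection from the earlier sketch) updates a document if it exists and inserts it otherwise:

var filter = Builders<BsonDocument>.Filter.Eq("name", "Phone");
var update = Builders<BsonDocument>.Update.Set("ramGb", 8);

// IsUpsert = true inserts the document when no match is found.
products.UpdateOne(filter, update, new UpdateOptions { IsUpsert = true });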

2. Flexibility

MongoDB stores data in JSON document format (JavaScript Object Notation), which is then serialized as BSON (Binary JSON). JSON offers a rich data model that maps flawlessly onto programming-language types for native apps. In addition, its dynamic schema feature allows easier evolution of data models compared with RDBMS systems and their rigid, predefined schemas.

3. Ease of use

MongoDB was designed by its creators for easy installation, configuration, maintenance and use. In this regard, MongoDB comes with few configuration options, instead opting for the logically “right thing” for the circumstance. As a result, MongoDB can be used straight out of the box, allowing you to concentrate on application development rather than spending countless hours tweaking obscure database configurations.

4. Scalability/speed

In MongoDB, related data is grouped into documents, which means queries can run much faster than in relational databases, where related data may be separated into multiple tables by the schema definition and must be joined back together before processing.

The auto-sharding feature in MongoDB allows for easy outward scaling: clusters can be scaled linearly by increasing the number of machines, and the database can be scaled out without any downtime. This is extremely important for enterprises with a web presence, since it eliminates the need to take the website down for maintenance, during which time the business would sacrifice huge amounts of actual and potential revenue.

Three ways to activate a state in UI-Router with Angular.js

Some time ago I wrote a blog post about routing - Routing with UI-Router and Angular.js. I have received a few questions from readers by email; one of them is how to activate a state in UI-Router, since UI-Router works with states instead of URLs. In this blog post we are going to explain the different ways to activate a state in UI-Router.

There are three ways of activating state in UI-Router.

  1. Call $state.go
  2. ui-sref directive as link
  3. Navigate to the URL associated with a state
We are going to learn about these three ways in detail.

Call $state.go:


This is one of the most convenient methods of changing state. It returns a promise representing the state transition. Internally it calls the underlying $state.transitionTo method, but it automatically sets the options location: true, inherit: true, relative: $state.$current and notify: true. This lets you easily update parameters by passing arguments. You can find more information about it at the following location.

https://github.com/angular-ui/ui-router/wiki/Quick-Reference#stategoto--toparams--options
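As a small illustration (a sketch, assuming your module variable is app and a state named 'contactus' is already defined in your state configuration), you could trigger the transition from a controller like this:

app.controller('NavController', ['$state', function ($state) {
    this.goToContact = function () {
        // Activates the 'contactus' state, updating the URL and loading its template.
        $state.go('contactus');
    };
}]);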

UI-sref directive as link:


We have already used this method in a previous example. ui-sref is a directive that binds a link to a state. If the state has an associated URL, the directive automatically generates and updates the href attribute via the $state.href() method. Clicking the link will trigger a state transition with parameters.
You can use it in the following ways (see the template snippet after the list).
  • ui-sref='stateName' - Navigate to state, no params. 'stateName' can be any valid absolute or relative state, following the same syntax rules as $state.go()
  • ui-sref='stateName({param: value, param: value})' - Navigate to state, with params.
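For instance, in a template it could look like the following (contactus is the state assumed earlier, and productdetail is a hypothetical state with a productId parameter):

<li><a ui-sref="contactus">Contact us</a></li>
<li><a ui-sref="productdetail({ productId: 42 })">Product detail</a></li>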
You can find more information about this at the following location.

https://github.com/angular-ui/ui-router/wiki/Quick-Reference#ui-sref

Navigate to the URL associated with State:


As we have seen in the previous blog post, most states have a URL associated with them. Now when the user accesses index.html/contactus, it will activate the contactus state and load the template associated with that state. You can also pass parameters along. Following is the documentation for the same.

https://github.com/angular-ui/ui-router/wiki/Quick-Reference#ui-sref

Hope you like it. Stay tuned for more!
Friday, October 9, 2015

Best way to do code review with Git

Recently we had a discussion about when you should do code review when using Git. As a general rule, you should do code review often. There are two ways to do code review with Git.

1) Classic way:

We all know that Git is famous for its lightweight feature branches. Each developer working on a feature creates a feature branch, and that branch is accessible to all the peers working on the same project. When a developer completes his changes and pushes them to the feature branch, any of his/her peers can check out the branch and perform a code review. The problem with this method is that we don't have any record of the code review in source control itself.
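In command form the flow might look like this (the branch name is illustrative):

git checkout -b feature/login      # developer starts a feature branch
git push -u origin feature/login   # publishes it so peers can fetch it

# a peer then reviews it locally:
git fetch origin
git checkout feature/login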

2) Integration way:

This is the better method for doing code review. Git hosting services such as GitHub provide a pull request feature. So we have two types of branches: a master branch for the project, and feature branches for the developers working on different features. Once a developer completes his feature and feels it is ready, he sends a pull request to the master branch. The person or lead managing the master branch then reviews the pull request and either accepts it and merges the code, or, if he finds an issue, rejects it with proper comments. This way we can easily perform code review with Git, and all the review data is preserved. Most open source projects on GitHub are maintained this way.

Hope you like it. Stay tuned for more!
