SOLID Principles

There are five SOLID principles of Object-Oriented Design:

  1. Single Responsibility Principle (SRP)
  2. Open Closed Principle (OCP)
  3. Liskov Substitution Principle (LSP)
  4. Interface Segregation Principle (ISP)
  5. Dependency Inversion Principle (DIP)

These five design principles are used to make software designs more understandable, flexible, and maintainable.

1. Single Responsibility Principle (SRP)
Definition: A class should have only one reason to change.
This means that a class should not be loaded with multiple responsibilities, and a single responsibility should not be spread across multiple classes or mixed with other responsibilities. A class with a single responsibility is easier to change, which reduces the number of bugs, improves development speed, and, most importantly, makes the developer's life a lot easier.
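For example, the responsibilities of calculating an invoice total and persisting the invoice can be split into two classes, each with a single reason to change (a minimal C# sketch; the Invoice and InvoiceRepository names are hypothetical):

```csharp
public class Invoice
{
    public decimal Amount { get; set; }
    public decimal TaxRate { get; set; }

    // Only reason to change: how the total is calculated.
    public decimal CalculateTotal() => Amount * (1 + TaxRate);
}

public class InvoiceRepository
{
    // Only reason to change: how invoices are stored.
    public void Save(Invoice invoice)
    {
        // persistence logic (database, file, etc.)
    }
}
```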

2. Open Closed Principle (OCP)
This principle suggests that a class should be easy to extend without changing its core implementation. The OCP states that the behavior of a system can be extended without modifying its existing implementation: new features should be implemented in new code, not by changing existing code. The main benefit of adhering to OCP is that it streamlines code maintenance and reduces the risk of breaking the existing implementation.
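A common way to sketch OCP in C# is an abstraction that new code extends while existing code stays untouched (the Shape types below are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public abstract class Shape
{
    public abstract double Area();
}

public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area() => Width * Height;
}

public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area() => Math.PI * Radius * Radius;
}

public class AreaCalculator
{
    // Closed for modification: adding a Triangle later requires no change here.
    public double TotalArea(IEnumerable<Shape> shapes) => shapes.Sum(s => s.Area());
}
```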

3. Liskov Substitution Principle (LSP)
The Liskov Substitution Principle (LSP) states that "you should be able to use any derived class instead of a parent class and have it behave in the same manner without modification".
It ensures that a derived class does not affect the behavior of the parent class.
In other words, that a derived class must be substitutable for its base class.
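The classic counter-example is a Square derived from Rectangle: the derived class changes the base-class behavior, so code written against Rectangle breaks when given a Square (a hypothetical sketch):

```csharp
public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
}

public class Square : Rectangle
{
    // Keeping the sides equal changes the base-class behavior...
    public override int Width
    {
        get => base.Width;
        set { base.Width = value; base.Height = value; }
    }
    public override int Height
    {
        get => base.Height;
        set { base.Width = value; base.Height = value; }
    }
}

// A caller written for Rectangle now misbehaves for Square:
//   rect.Width = 4; rect.Height = 5;
// expects an area of 20, but a Square reports 25, so Square is not
// substitutable for Rectangle and LSP is violated.
```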

4. Interface Segregation Principle (ISP)
Definition: No client should be forced to depend on methods it does not use, and large contracts should be broken down into thinner ones.
When all the tasks are done by a single class, in other words, when one class is used by almost every other class in the application, it becomes an overburdened "fat" class. Using ISP, we create separate interfaces for each operation or requirement rather than having a single interface do all the work.
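A small hypothetical sketch: splitting a fat worker contract into thin interfaces, so a Robot is not forced to implement Eat():

```csharp
public interface IWorkable { void Work(); }
public interface IFeedable { void Eat(); }

public class Human : IWorkable, IFeedable
{
    public void Work() { /* ... */ }
    public void Eat()  { /* ... */ }
}

public class Robot : IWorkable   // no longer forced to implement Eat()
{
    public void Work() { /* ... */ }
}
```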

5. Dependency Inversion Principle (DIP)
This principle says that there should not be tight coupling among the components of a software system; to avoid it, the components should depend on abstractions. The terms Dependency Injection (DI) and Inversion of Control (IoC) are generally used interchangeably to express the same design pattern.
Inversion of Control (IoC) is a technique to implement the Dependency Inversion Principle in C#.
Inversion of control can be implemented using either an abstract class or an interface. The rule is that the lower-level entities should implement a single interface (the contract), and the higher-level entities should use only entities that implement that interface. This technique removes direct dependencies between the entities.
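A minimal sketch of the idea (the IMessageSender, EmailSender, SmsSender, and Notification names are hypothetical): the high-level class depends only on the abstraction, and the concrete sender is injected through the constructor.

```csharp
public interface IMessageSender
{
    void Send(string message);
}

public class EmailSender : IMessageSender
{
    public void Send(string message) { /* send e-mail */ }
}

public class SmsSender : IMessageSender
{
    public void Send(string message) { /* send SMS */ }
}

public class Notification
{
    private readonly IMessageSender _sender;

    // Constructor injection: Notification is decoupled from any one implementation.
    public Notification(IMessageSender sender) => _sender = sender;

    public void Notify(string message) => _sender.Send(message);
}
```

Swapping EmailSender for SmsSender requires no change to Notification, which is exactly the decoupling DIP is after.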

Understanding file sizes | Bytes, KB, MB, GB, TB, PB, EB, ZB, YB, BB

Bit:
A Bit is the smallest unit of data that a computer uses. It can be used to represent two states of information, such as Yes or No.

Byte:
A Byte is equal to 8 Bits. A Byte can represent 256 states of information, for example, numbers or a combination of numbers and letters. 1 Byte could be equal to one character. 10 Bytes could be equal to a word. 100 Bytes would equal an average sentence.

Kilobyte:
A Kilobyte is approximately 1,000 Bytes (exactly 1,024 Bytes under the binary definition). 1 Kilobyte would be equal to this paragraph you are reading, whereas 100 Kilobytes would equal an entire page.

Megabyte:
A Megabyte is approximately 1,000 Kilobytes. In the early days of computing, a Megabyte was considered a large amount of data. These days, with a 500 Gigabyte hard drive on a computer being common, a Megabyte doesn't seem like much anymore. One of those old 3-1/2 inch floppy disks can hold 1.44 Megabytes, or the equivalent of a small book. 100 Megabytes might hold a couple of volumes of an encyclopedia. 600 Megabytes is about the amount of data that will fit on a CD-ROM disc.

Gigabyte:
A Gigabyte is approximately 1,000 Megabytes. A Gigabyte is still a very common term used these days when referring to disk space or drive storage. 1 Gigabyte of data is almost twice the amount of data that a CD-ROM can hold. But it’s about one thousand times the capacity of a 3-1/2 floppy disk. 1 Gigabyte could hold the contents of about 10 yards of books on a shelf. 100 Gigabytes could hold the entire library floor of academic journals.

Terabyte:
A Terabyte is approximately one trillion bytes, or 1,000 Gigabytes. There was a time that I never thought I would see a 1 Terabyte hard drive, now one and two terabyte drives are the normal specs for many new computers. To put it in some perspective, a Terabyte could hold about 3.6 million 300 Kilobyte images or maybe about 300 hours of good quality video. A Terabyte could hold 1,000 copies of the Encyclopedia Britannica. Ten Terabytes could hold the printed collection of the Library of Congress. That’s a lot of data.

Petabyte:
A Petabyte is approximately 1,000 Terabytes or one million Gigabytes. It’s hard to visualize what a Petabyte could hold. 1 Petabyte could hold approximately 20 million 4-door filing cabinets full of text. It could hold 500 billion pages of standard printed text. It would take about 500 million floppy disks to store the same amount of data.

Exabyte:
An Exabyte is approximately 1,000 Petabytes. Another way to look at it is that an Exabyte is approximately one quintillion bytes or one billion Gigabytes. There is not much to compare an Exabyte to. It has been said that 5 Exabytes would be equal to all of the words ever spoken by mankind.

Zettabyte:
A Zettabyte is approximately 1,000 Exabytes. There is nothing to compare a Zettabyte to but to say that it would take a whole lot of ones and zeroes to fill it up.

Yottabyte:
A Yottabyte is approximately 1,000 Zettabytes. It would take approximately 11 trillion years to download a Yottabyte file from the Internet using high-speed broadband. You can compare it to the World Wide Web, as the entire Internet takes up about a Yottabyte.

Brontobyte:
A Brontobyte is (you guessed it) approximately 1,000 Yottabytes. The only thing there is to say about a Brontobyte is that it is a 1 followed by 27 zeroes!

Geopbyte:
A Geopbyte is approximately 1,000 Brontobytes! It is not clear why this term was created, and it is doubtful that anyone alive today will ever see a Geopbyte hard drive. One way of looking at a Geopbyte is 1,267,650,600,228,229,401,496,703,205,376 (that is, 2^100) bytes!


Processor or Virtual Storage
· 1 Bit = Binary Digit
· 8 Bits = 1 Byte
· 1024 Bytes = 1 Kilobyte
· 1024 Kilobytes = 1 Megabyte
· 1024 Megabytes = 1 Gigabyte
· 1024 Gigabytes = 1 Terabyte
· 1024 Terabytes = 1 Petabyte
· 1024 Petabytes = 1 Exabyte
· 1024 Exabytes = 1 Zettabyte
· 1024 Zettabytes = 1 Yottabyte
· 1024 Yottabytes = 1 Brontobyte
· 1024 Brontobytes = 1 Geopbyte

Disk Storage
· 1 Bit = Binary Digit
· 8 Bits = 1 Byte
· 1000 Bytes = 1 Kilobyte
· 1000 Kilobytes = 1 Megabyte
· 1000 Megabytes = 1 Gigabyte
· 1000 Gigabytes = 1 Terabyte
· 1000 Terabytes = 1 Petabyte
· 1000 Petabytes = 1 Exabyte
· 1000 Exabytes = 1 Zettabyte
· 1000 Zettabytes = 1 Yottabyte
· 1000 Yottabytes = 1 Brontobyte
· 1000 Brontobytes = 1 Geopbyte
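The binary (1024-based) table above can be turned into a small formatting helper. This is a hypothetical sketch, not part of any library:

```csharp
public static class ByteFormatter
{
    private static readonly string[] Units =
        { "Bytes", "KB", "MB", "GB", "TB", "PB", "EB" };

    // Repeatedly divide by 1024 until the value fits the next unit.
    public static string Format(double bytes)
    {
        int unit = 0;
        while (bytes >= 1024 && unit < Units.Length - 1)
        {
            bytes /= 1024;
            unit++;
        }
        return $"{bytes:0.##} {Units[unit]}";
    }
}

// ByteFormatter.Format(1536)       -> "1.5 KB"
// ByteFormatter.Format(1073741824) -> "1 GB"
```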

AutoMapper

AutoMapper is an open-source library hosted on GitHub.

AutoMapper is a productivity tool designed to help you write less repetitive mapping code. It maps objects to objects, using both convention and configuration. AutoMapper is flexible enough that it can be overridden so that it will work with even the oldest legacy systems.
AutoMapper is a popular object-to-object mapping library that can be used to map objects belonging to dissimilar types.
As an example, you might need to map the DTOs (Data Transfer Objects) in your application to the model objects.
To start working with AutoMapper, you should create a project in Visual Studio and then install AutoMapper.
Install AutoMapper from NuGet.
AutoMapper is a simple, reusable component which helps copy data from one object type to another.

1) Projection - working:
var config = new MapperConfiguration(cfg => {
    cfg.CreateMap<AuthorModel, AuthorDTO>();
});
IMapper iMapper = config.CreateMapper();
var source = new AuthorModel();
source.Id = 1;
source.FirstName = "Sibin";
source.LastName = "Thomas";
source.Address = "Kodiyan(H)";
var sourcedto = new AuthorDTO();
sourcedto.City = "Thrissur";
sourcedto.State = "Kerala";
sourcedto.Country = "India";
var destination = iMapper.Map(source, sourcedto);
Console.WriteLine("FirstName: " + destination.FirstName + ", City: " + destination.City);

2) Flattening - working:
var customer = new Customer
{
    Name = "George Costanza"
};
var order = new Order
{
    Customer = customer
};
var bosco = new Product
{
    Name = "Bosco",
    Price = 4.99m
};
order.AddOrderLineItem(bosco, 15);
var configuration = new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>());
IMapper mapper = configuration.CreateMapper();
OrderDto dto = mapper.Map<OrderDto>(order);
dto.CustomerName.Equals("George Costanza");
dto.Total.Equals(74.85m);
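The flattening sample assumes source and destination types along these lines. This sketch follows AutoMapper's flattening convention, where Order.Customer.Name flattens to OrderDto.CustomerName and GetTotal() flattens to Total:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public string Name { get; set; }
}

public class Order
{
    private readonly IList<OrderLineItem> _orderLineItems = new List<OrderLineItem>();

    public Customer Customer { get; set; }

    public OrderLineItem AddOrderLineItem(Product product, int quantity)
    {
        var item = new OrderLineItem(product, quantity);
        _orderLineItems.Add(item);
        return item;
    }

    // Flattens to OrderDto.Total by convention.
    public decimal GetTotal() => _orderLineItems.Sum(li => li.GetTotal());
}

public class OrderLineItem
{
    public OrderLineItem(Product product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }

    public Product Product { get; private set; }
    public int Quantity { get; private set; }
    public decimal GetTotal() => Quantity * Product.Price;
}

public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class OrderDto
{
    public string CustomerName { get; set; }   // from Order.Customer.Name
    public decimal Total { get; set; }         // from Order.GetTotal()
}
```

With the data above, 15 items at 4.99m give a Total of 74.85m, matching the assertions in the sample.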

Advantages Of Microservice Architecture

A microservice design breaks a system down into small modules that can be developed, deployed, and maintained separately. In technical terms, a microservice system allows the development of single-function modules.

Let's have a look at some major advantages of microservice architecture:
Technological Flexibility
While a monolithic architecture leaves developers hunting for the one "right tool for the job," a microservice architecture allows multiple technologies to coexist under one cover. Different decoupled services can be written in different programming languages. This not only enables developers to experiment but also lets them scale their product by adding additional features and functionality.
Increased Efficiency
Microservice architecture speeds up the entire process of creation. Unlike with a single unit, teams can work simultaneously on multiple components of a software system. This, in addition to increasing productivity, makes it easier to locate specific components and focus on them. The malfunctioning of a single component will not affect the entire system, and it also eases error location and maintenance.
Products Not Projects
According to Martin Fowler, microservice architecture helps businesses create "products instead of projects." In simpler terms, the use of microservice architecture allows teams to come together and create functionality for the business rather than simply writing code. The entire team contributes to different functionalities, which can further be used for different lines of business if applicable. In addition, it also creates autonomous, cross-functional teams.

Following are some of the steps that you can take to define your microservice:-

The first step is to identify the pieces of code that are replicated across various modules. How often do they repeat, and how much effort goes into setting them up each time in a different module? If the answers to these questions are high, then the scope of the microservice should be to handle just those repeating pieces of code.
Another step is to check whether a module is independent of other modules or, in simpler terms, whether it is loosely coupled with the rest of the services. If so, then the scope of the microservice will be the scope of the entire module.
Another very important metric to consider while defining the scope is to check whether the features will be used under heavy load. This tells you whether the microservice will have to be scaled up in the near future. If it will, then it is a good idea to define the scalable bits as the scope of a microservice rather than combine them with other features.

The main motive of any microservice is to have services independent of each other. This means one can edit, update, or deploy a new service without hampering any other services present. This is possible if interdependence is low. A loosely coupled system is one where each service knows little or nothing about the others.

It is important for any service to be the unique source of identification for the rest of the system. Let us take an example to understand this scenario.
After an order is placed on an e-commerce website, the user is provided with an order ID. This order ID once generated contains all the information regarding the order. As a microservice, the order ID is the only source for any information regarding the order service. So, if any other service seeks information regarding the order service, the order ID acts as the source of information rather than its actual attributes.


API Integration
Breaking down the monolithic design into multiple services means these services will coordinate and work together to form the system. But how do these services communicate? Imagine using multiple technologies to create different services. How do they relate to each other?
Well, the simple answer would be the use of an API (Application Programming Interface). The fundamental of microservice design is using the correct API. This is crucial to maintaining communication between the service and the client calls. Easy transition and execution are important for proper functioning.
Another important thing to consider while creating an API is the domain of the business. Defining the domain eases the process of differentiating the functionality. There are several clients which are external to the system; these clients could be other applications or users. Whenever business logic is called, the request is handled by an adapter (a message gateway or web controller), which returns the response and makes changes to the database.

Data Storage Segregation
Any data stored for a specific service should be made private to that specific service. This means any access to the data should be owned by the service. This data can be shared with any other service only through an API. This is very important to maintain limited access to data and avoid "service coupling."

Traffic Management
Once the APIs have been set and the system is up and running, traffic to different services will vary. The traffic is the calls sent to specific services by the client. In the real world scenario, a service may run slowly, thus, causing calls to take more time. Or a service may be flooded with calls. In both the cases, the performance will be affected even causing a software or hardware crash.
This high traffic demand needs management. A specific way of calling and being called is the answer to a smooth flow of traffic. The services should be able to terminate any such instances which cause delay and affect the performance.
This can also be achieved using a process known as 'auto-scaling' which includes constant tracking of services with prompt action whenever required. In some cases, a 'circuit breaker pattern' is important to supply whatever incomplete information is available in case of a broken call or an unresponsive service.

Minimal Database Tables (Preferably Isolated Tables)
Accessing database tables to fetch data can be a lengthy process. It can take up time and energy. While designing a microservice, the main motive should revolve around the business function rather than the database and its working. To ensure this, even with data entries running into millions, a microservice design should have only a couple of tables. In addition to minimum numbers, focus around the business is key.

Constant Monitoring
The microservice monitoring tools will monitor individual services and later combine the data by storing it in a centralized location. This is a necessary step while following micro-services design principles.
API performance monitoring is crucial to any microservice architecture, given the central part an API plays in its success, to make sure the functionality stays up to the mark in terms of speed, responsiveness, and the overall performance of the product.


Microservice

Microservices are more about applying a certain number of principles and architectural patterns than about any one architecture. Each microservice lives independently, yet together the services make up the whole system. All microservices in a project get deployed to production at their own pace, on-premise or in the cloud, independently, living side by side.


There are various components in a microservices architecture apart from the microservices themselves.

Management : Maintains the nodes for the service.

Identity Provider : Manages the identity information and provides authentication services within a distributed network.

Service Discovery : Keeps track of services and service addresses and endpoints.

API Gateway : Serves as client’s entry point. The single point of contact from the client which, in turn, returns responses from underlying microservices and sometimes an aggregated response from multiple underlying microservices.

CDN : A content delivery network to serve static resources. For example, pages and web content in a distributed network.

Static Content : The static resources like pages and web content.


Microservices are deployed independently with their own database per service so the underlying microservices look as shown in the following picture:


A microservice is an approach to creating small services, each running in its own space, that can communicate via messaging. These are independent services, each directly accessing its own database.

The following is the diagrammatic representation of microservices architecture.


Xunit - test framework for Unit Testing

There are three test frameworks for unit testing supported by ASP.NET Core: MSTest, xUnit, and NUnit. They allow us to test our code in a consistent way.

Instead of creating a separate test for each input, we can use two attributes, Theory and InlineData, to create a single data-driven test.
using UnitTest.Controllers;
using Xunit;
namespace TestProject1
{
    public class UnitTest1
    {
        [Theory]
        [InlineData(1, "Jignesh")]
        [InlineData(2, "Rakesh")]
        [InlineData(3, "Not Found")]
        public void Test3(int empId, string name)
        {
            HomeController home = new HomeController();
            string result = home.GetEmployeeName(empId);
            Assert.Equal(name, result);
        }
    }
}
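The test above assumes a HomeController along these lines (a hypothetical sketch; the names come from the InlineData rows, and only the GetEmployeeName behavior matters):

```csharp
public class HomeController
{
    // Returns the employee name for a known id, "Not Found" otherwise.
    public string GetEmployeeName(int empId)
    {
        switch (empId)
        {
            case 1: return "Jignesh";
            case 2: return "Rakesh";
            default: return "Not Found";
        }
    }
}
```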


To unit test a controller that has a dependency on the ILogger service, we have to pass an ILogger object.
private readonly ILogger _logger;
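In the test itself, a no-op logger can be supplied instead of a real one. This sketch assumes the controller takes an ILogger<HomeController> via constructor injection and uses NullLogger from the Microsoft.Extensions.Logging.Abstractions package:

```csharp
using Microsoft.Extensions.Logging.Abstractions;

// NullLogger<T>.Instance is a do-nothing logger, handy in unit tests
// where the log output is irrelevant.
var controller = new HomeController(NullLogger<HomeController>.Instance);
```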



If you want, you can override that by setting the DisplayName property on your Fact attribute -- often a good idea to improve readability. This example changes the name of the test in Test Explorer to "Customer Name Update":
[Fact(DisplayName = "Customer Name Update")]
public void ChangesCustomerName()
{}


To turn off a test method, set the Skip property on the Fact attribute to the reason that you've turned the test off (unfortunately the reason isn't currently displayed in Test Explorer).
[Fact(Skip ="Test data not available")]
public void ChangesCustomerName()
{}


The Trait attribute lets you organize tests into groups by creating category names and assigning values to them.
This example creates a category called Customer with the value "Update":
[Fact(DisplayName = "Change Name2")]
[Trait("Customer", "Update")]
public void ChangesCustomerName()
{}

By default, xUnit runs tests in different test classes in parallel, which can significantly shorten the time to run all your tests. It also means that xUnit effectively ignores the Run Tests in Parallel setting at the top of the Test Explorer window.
You assign tests to a collection using the Collection attribute, passing a name for the collection.
This code assigns both the PremiumCustomerTests and the CashOnlyCustomerTests test classes to a collection called Customer Updates, ensuring that the tests in the two classes aren't run in parallel:
[Collection("Customer Updates")]
public class PremiumCustomerTests
{
   ... //test methods ...
}
[Collection("Customer Updates")]
public class CashOnlyCustomerTests
{
 ... //test methods ...
}


xUnit is essentially a testing framework which provides a set of attributes and methods we can use to write the test code for our applications. Some of the attributes we are going to use are:
[Fact] – attribute states that the method should be executed by the test runner
[Theory] – attribute implies that we are going to send some parameters to our testing code. It is similar to the [Fact] attribute in that it states that the method should be executed by the test runner, but it additionally indicates that parameters will be sent to the test method
[InlineData] – attribute provides those parameters we are sending to the test method. If we are using the [Theory] attribute, we have to use the [InlineData] as well.

Command vs Immediate Window in Visual Studio


Immediate window:

The Immediate window is used at design time to debug and evaluate expressions, execute statements, print variable values, and so forth. It allows you to enter expressions to be evaluated or executed by the development language during debugging.

Command window:

The Command window is used to execute commands or aliases directly in the Visual Studio integrated development environment (IDE). You can execute both menu commands and commands that do not appear on any menu.


Immediate window: also provides all the functionality of the Command window, as long as the input starts with ">".

Command window: does not provide the Immediate window's expression-evaluation functionality.
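For example (counter is a hypothetical local variable in the program being debugged):

```
? counter           // Immediate window: evaluate a variable while debugging
>Edit.Find "TODO"   // Immediate window: the ">" prefix runs a Command-window command
Edit.Find "TODO"    // Command window: the same command, typed directly
```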

Difference between ASP.NET Core and ASP.NET MVC 5

Difference 1 -
Single aligned web stack for ASP.NET Core MVC and Web APIs.
A Web stack is the collection of software required for Web development.
ASP.NET MVC 5 gives us the 'option' of choosing MVC or Web API or both while creating a web application.
ASP.NET Core MVC now has a single aligned web stack for MVC and Web APIs.


Difference 2 -
Project(Solution) Structure Changes
If you look at the ASP.NET Core MVC solution explorer, there is no Web.config or Global.asax.
appsettings.json and custom configuration files now do the work of those files missing from ASP.NET MVC 5.


Difference 3 -
ASP.NET Core targets Full .NET and .NET Core
.NET Core is a general purpose development platform maintained by Microsoft and the .NET community on GitHub. It is cross-platform, supporting Windows, macOS, and Linux, and can be used in device, cloud, and embedded/IoT scenarios.
We can develop not only on Windows but also on Linux and macOS, using Visual Studio Code or any other code editor such as Vim, Atom, or Sublime.


Difference 4 -
ASP.NET Core apps don’t need IIS for hosting
The goal of ASP.NET Core is to be cross-platform using .NET Core. With this in mind,
Microsoft decided that ASP.NET Core applications need not be hosted only on IIS; they can also be self-hosted or use the Nginx web server on Linux.


Difference 5 -
wwwroot folder for static files
Static files are served only from wwwroot; files like config.json, which are not in wwwroot, will never be accessible, so there is no need to create special rules to block access to sensitive files.
These static files might be plain HTML, JavaScript, CSS, images, libraries, etc.
In addition to the security benefits, the wwwroot folder also simplifies common tasks like bundling and minification, which can now be more easily incorporated into a standard build process and automated.
The “wwwroot” folder name can be changed too.


Difference 6 -
New approach to Server side and client side dependency management of packages
We work in the Visual Studio IDE and deploy ASP.NET Core applications on Windows, Linux, or Mac using .NET Core; that is the server-side management of dependencies.
The client side has quite different packages from the server side: it will typically have jQuery, Bootstrap, Grunt, JavaScript frameworks like AngularJS or Backbone, images, and style files.


Difference 7 -
Server-side packages save space in ASP.NET Core
We have been using the NuGet package manager to add references to assemblies, libraries, frameworks, or any third-party packages.
They would be downloaded from NuGet into a "packages" folder in the project structure, so each project needed extra disk space to store the packages even though they were all the same.
ASP.NET Core instead stores all the packages related to its development in the Users folder, and while creating ASP.NET Core applications, Visual Studio references them from there.
This feature is called Runtime Store for .NET Core 2.


Difference 8 -
Inbuilt Dependency Injection (DI) support for ASP.NET Core

Delegates and Events in C# .NET

A delegate is a way of telling C# which method to call when an event is triggered. For example, if you click a Button on a form, the program calls a specific method; it is this pointer to the method that is a delegate. Delegates are useful because you can notify several methods that an event has occurred, if you wish.
Delegates form the basis of event handling in C#.
A delegate declaration specifies a particular method signature. References to one or more methods can be added to a delegate instance.

Delegates types are declared with the delegate keyword. They can appear either on their own or nested within a class, as shown below.
namespace DelegateArticle
{
    public delegate string FirstDelegate (int x);
 
    public class Sample
    {
        public delegate void SecondDelegate (char a, char b);
    }
}

There are three steps in defining and using delegates:
Declaration
Instantiation
Invocation
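The three steps can be sketched using the FirstDelegate type declared earlier (the Square method here is a hypothetical example):

```csharp
using System;

public delegate string FirstDelegate(int x);   // 1. Declaration

public class Program
{
    static string Square(int x) => (x * x).ToString();

    public static void Main()
    {
        FirstDelegate del = Square;   // 2. Instantiation: point it at a matching method
        Console.WriteLine(del(5));    // 3. Invocation: prints "25"
    }
}
```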




An event is a notification by the .NET framework that an action has occurred. Each event contains information about the specific event, e.g., a mouse click would say which mouse button was clicked where on the form.

Let's say you write a program reacting only to a Button click. Here is the sequence of events that occurs:
User presses the mouse button down over a button
The .NET framework raises a MouseDown event
User releases the mouse button
The .NET framework raises a MouseUp event
The .NET framework raises a MouseClick event
The .NET framework raises a Clicked event on the Button
Since the program has subscribed only to the button's Click event, the other events are ignored by the program, and your delegate tells the .NET framework which method to call now that the event has been raised.
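A minimal sketch of the delegate/event pairing described above (the Button class here is hypothetical, not the Windows Forms Button):

```csharp
using System;

public class Button
{
    // An event based on the built-in EventHandler delegate type.
    public event EventHandler Clicked;

    public void SimulateClick() => Clicked?.Invoke(this, EventArgs.Empty);
}

public class Program
{
    public static void Main()
    {
        var button = new Button();

        // Subscribing: the delegate tells .NET which method to call.
        button.Clicked += (sender, e) => Console.WriteLine("Button was clicked");

        button.SimulateClick();   // prints "Button was clicked"
    }
}
```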


Differences Between Delegates and Events in C#
A delegate is an object used as a function pointer to hold the reference of a method. Events, on the other hand, provide an abstraction over delegates.
The keyword required to declare a delegate is delegate, whereas the keyword required to declare an event is event.
A delegate type can be declared outside a class, whereas an event must be declared inside a class.
To invoke a method using a delegate object, the method has to be referred to by the delegate object. Likewise, to invoke a method using an event, the method has to be subscribed to the event.
Covariance and contravariance provide extra flexibility to delegate objects, while events have no such concept.
An event accessor handles the list of event handlers, whereas a delegate has no such concept.
Delegates are independent of events, but events cannot be created without delegates.

Multithreading in C#

Multithreading is a feature provided by the operating system that enables your application to have more than one execution path at the same time. Technically, multithreaded programming requires a multitasking operating system.

Let's understand this concept with a very basic example. Everyone has used Microsoft Word: it takes input from the user and displays it on the screen in one thread, while it continues to check spelling and grammar in another thread, and at the same time yet another thread saves the document automatically at regular intervals.


The following are the most common instance members of the System.Threading.Thread class:

Name 
A property of string type used to get/set the friendly name of the thread instance.

Priority 
A property of type System.Threading.ThreadPriority, used to schedule the priority of threads.

IsAlive 
A Boolean property indicating whether the thread is alive or terminated.

ThreadState 
A property of type System.Threading.ThreadState, used to get the value containing the state of the thread.

Start()
Starts the execution of the thread.

Abort()
Allows the current thread to stop the execution of the thread permanently (deprecated in .NET Core and later).

Suspend()
Pauses the execution of the thread temporarily (deprecated; synchronization primitives are recommended instead).

Resume()
Resumes the execution of a suspended thread (deprecated).

Join()
Makes the current thread wait for another thread to finish.
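A small sketch exercising Name, IsAlive, Start(), and Join() (the loop body is an arbitrary example):

```csharp
using System;
using System.Threading;

public class Program
{
    public static void Main()
    {
        var worker = new Thread(() =>
        {
            for (int i = 1; i <= 3; i++)
            {
                Console.WriteLine($"{Thread.CurrentThread.Name}: step {i}");
                Thread.Sleep(100);
            }
        });

        worker.Name = "Worker";
        worker.Start();                        // begin execution on a new thread
        Console.WriteLine(worker.IsAlive);     // likely True while the worker runs
        worker.Join();                         // wait here until the worker finishes
    }
}
```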

What is .NET Reflection ?

Reflection enables you to use code that is not available at compile time. .NET Reflection allows an application to collect information about itself and also to manipulate itself. It can be used effectively to find all the types in an assembly and/or to dynamically invoke methods in an assembly. This includes information about the type, properties, methods, and events of an object. With Reflection, we can dynamically create an instance of a type, bind the type to an existing object, or get the type from an existing object and invoke its methods or access its fields and properties. Using Reflection, you can get any kind of information you would see in a class viewer; for example, information on the methods, properties, fields, and events of an object.
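A short sketch of the ideas above: inspect a type's methods and invoke one dynamically (the Calculator class is a hypothetical example):

```csharp
using System;
using System.Reflection;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class Program
{
    public static void Main()
    {
        // Get the type and create an instance of it at runtime.
        Type type = typeof(Calculator);
        object instance = Activator.CreateInstance(type);

        // Enumerate the public methods (Add plus the Object methods;
        // the order is not guaranteed).
        foreach (MethodInfo method in type.GetMethods())
            Console.WriteLine(method.Name);

        // Invoke a method dynamically by name.
        MethodInfo add = type.GetMethod("Add");
        object result = add.Invoke(instance, new object[] { 2, 3 });
        Console.WriteLine(result);   // prints 5
    }
}
```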

Repository Pattern With ASP.NET MVC And Entity Framework

The Repository Pattern is used to create an abstraction layer between the data access layer and the business logic layer of an application. The repository communicates directly with the data access layer [DAL], gets the data, and provides it to the business logic layer [BAL]. The main advantage of using the Repository Pattern is that it isolates the data access logic from the business logic, so that a change in one does not directly affect the other.
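A minimal generic repository sketch over Entity Framework 6 (the interface shape is a common convention, not a fixed standard; DbContext, Set<T>, Find, Add, Remove, and SaveChanges are the standard EF API):

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    T GetById(int id);
    void Add(T entity);
    void Remove(T entity);
}

public class Repository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;

    public Repository(DbContext context) => _context = context;

    public IEnumerable<T> GetAll() => _context.Set<T>().ToList();

    public T GetById(int id) => _context.Set<T>().Find(id);

    public void Add(T entity)
    {
        _context.Set<T>().Add(entity);
        _context.SaveChanges();
    }

    public void Remove(T entity)
    {
        _context.Set<T>().Remove(entity);
        _context.SaveChanges();
    }
}
```

The business logic layer depends only on IRepository<T>, so swapping EF for another data source (or a test double) does not ripple into the business code.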