When to use Response.Redirect and Server.Transfer?

Recently I had to respond to an issue that was the result of using these two incorrectly in Web Forms. Each of them serves a specific purpose and is not simply an alternative to the other. So let's see when to use each of them and in what situations.

They do have notable differences. One of them is that with Response.Redirect the browser URL changes to the target page, while with Server.Transfer the URL remains the same.

With Response.Redirect an HTTP 302 message is sent to the browser, while with Server.Transfer everything happens on the server without the browser knowing anything; it simply ends up receiving different content from what it originally requested.

Another difference is that Server.Transfer consumes more server resources in comparison to Response.Redirect, since the whole transfer is processed on the server.

Server.Transfer can't send a user to an external site, while Response.Redirect can.

When to use Response.Redirect:

  • we want to redirect the request to some plain HTML pages on our server or to some other web server
  • we don’t care about causing additional roundtrips to the server on each request
  • we do not need to preserve Query String and Form Variables from the original request
  • we want our users to see the new redirected URL in their browser (and be able to bookmark it if necessary)

When to use Server.Transfer:

  • we want to transfer current page request to another .aspx page on the same server
  • we want to preserve server resources and avoid the unnecessary roundtrips to the server
  • we want to preserve Query String and Form Variables (optionally)
  • we don't need to show the real URL where we transferred the request in the user's web browser (see the sketch after this list)
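To make the distinction concrete, here is a minimal sketch of both calls in a Web Forms code-behind; the page and control names are hypothetical and only for illustration.

using System;
using System.Web.UI;

public partial class CheckoutPage : Page
{
    protected void btnContinue_Click(object sender, EventArgs e)
    {
        // Response.Redirect: sends an HTTP 302 to the browser, the address bar
        // changes, and the target may even be on another site.
        Response.Redirect("http://www.example.com/ThankYou.aspx");

        // Server.Transfer: handled entirely on the server, the address bar keeps
        // the original URL, and the target must be an .aspx page on the same
        // server. The second argument preserves the QueryString and Form
        // collections of the original request.
        // Server.Transfer("~/ThankYou.aspx", true);
    }
}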

I do hope this will help someone out there who needs to decide which one to use and when. This might not be exhaustive, but it's worthy of your consideration.

Happy coding!

Backbone.js demystified

This is the first post in a Backbone.js series that aims to give an overview of what Backbone is. In this series we will look at various Backbone aspects like Models, Views, Collections, the eventing system, the event aggregator and its uses. And I do promise that by the end of the series you will be able to decide when to use Backbone.js.

What is backbone.js?

The backbone.js home page describes it as giving structure to web applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connecting it all to your existing API over a RESTful JSON interface.

I would also describe backbone.js as more of a JavaScript library providing basic and helpful types for building and organizing rich JavaScript interfaces, which qualifies it as a library rather than a framework.

The big difference between a library and a framework is that a library is something you call from your program, while a framework is something that calls into your program. Put differently, frameworks largely control how you design your code or applications: they specify what you can do with them and how to do it. Libraries, on the other hand, provide you with some useful features which you can implement or even extend to build your application. Frameworks contain key distinguishing features that separate them from normal libraries, like inversion of control: in a framework, unlike in libraries or normal user applications, the overall program's flow of control is not dictated by the caller, but by the framework.

There have been conflicting opinions on whether Backbone is MVC or not, a library or a framework, and so on. But I would like to take some time to simply look at what MVC is.

What is MVC?

In short MVC stands for Model-View-Controller.

Model – represents application data, perhaps for a specific domain that an application is dealing with. In short, models are at the heart of any JavaScript application, containing the interactive data as well as a large part of the logic surrounding it: conversions, validations, computed properties, and access control. This is how Backbone treats a model.

View – a visual representation of the model. The view depends on the model, and if the model changes then the view should update accordingly. The user usually interacts with the view, setting and changing it and hence the model. In most cases, or in the ideal scenario, it is not the job of the view to update the model but the controller's, e.g. when a click occurs the information should be passed to the controller, which updates the model accordingly.

Controller – coordinates the views and the model. Controllers are the mediators between the models and the views. They update the views when the model changes and update the models when any change occurs on the view.

It is around controllers that most JavaScript frameworks and libraries cause confusion, as their authors tend to map one-to-one to server-side MVC frameworks, which contradicts what the C means in client-side frameworks. This is subjective, however, and mostly causes issues with understanding the classical MVC pattern and the role of controllers in modern frameworks.

With respect to Backbone, then, it truly and distinctly has models and views, but it does not have true controllers, since its views and routers each act somewhat like a controller but neither can act as a controller on its own. In my opinion this disqualifies Backbone as an MVC framework, and I consider it a member of the MV* family with its own implementation.

Backbone originally had its own Backbone.Controller, which did not make sense in the context in which it was used, and so it was renamed to Backbone.Router. Backbone therefore shares the responsibility of a controller between the view and the router. The router handles a little more of the controller work, as you can bind events and models there along with any other customizations you may like.

A few points to note regarding Backbone for this post:

  • Backbone's core components are the Model, View, Collection and Router. So I would not be wrong to say it has borrowed from the MVC pattern.
  • You have more control over what is happening. Backbone has one of the best eventing systems, which works great between views and models. You can even attach an event to any attribute of a model and be notified when that property changes.
  • It supports data binding through manual events.
  • It has great support for RESTful interfaces, so models can easily be tied to back-end APIs like ASP.NET Web API.
  • It uses the underscore.js templating engine, which is great.
  • It has clear and flexible conventions for structuring applications. It's not opinionated, so it does not force you to use all of its components; you can work with only those you need.
  • Initially you might have to write more JavaScript code, but it is very easy to implement complex user interactions.

Feel free to engage me with discussion, questions, recommendations or even your input. There is more to come on specific Backbone internals.

Happy coding 🙂

What really are Katana and OWIN?

What really is Katana? How about OWIN? After playing around with Katana, I found it worthwhile to post about what I have found regarding them. OWIN (short for Open Web Interface for .NET) is a set of specifications that defines a standard interface through which .NET web servers and web applications communicate. Katana, on the other hand, is an implementation of OWIN.

A little history: ASP.NET was released in early 2002, or sometime around then, with .NET Framework 1.0, and it was meant to bring a web experience to customers who included classic ASP developers and desktop line-of-business application developers coming from the likes of VB6.

There were a lot of features and concerns brought forward by these two groups of customers, which meant that the framework became monolithic, leading to the inclusion of all concerns in one single package, System.Web. The package included, just to mention a few features, modules, handlers, session state, caching, Web Forms and controls. And all of this was meant to run on IIS.

With time the System.Web package became complex, because every new requirement was added to the package. Most of the features were turned on by default for the sake of seamless integration.

Back then IIS was the only host for ASP.NET, and most of the features of the ASP.NET runtime mapped one-to-one to those found in IIS. This was a problem, and around 2008 ASP.NET MVC was released, which at least enabled rapid development and was distributed via NuGet. Come 2012, ASP.NET Web API was released, and unlike ASP.NET MVC, Web API does not rely on System.Web in any way. Web API was initially started by the WCF team and later handed over to the ASP.NET team, who added capabilities like self-hosting.

That's a little background and history of a few players that led to the idea behind Katana. In a modern web application you can expect to have at least some static files, a web API, a rendering engine (perhaps MVC Razor), and perhaps SignalR for some real-time communication. And the question is: wouldn't it be awesome to be able to compose all of these, or other multiple frameworks, together to create a single server?

And there comes Katana, which is a set of components for building and running web applications on a common abstraction, which is OWIN.

The primary interface in OWIN is the application delegate, or simply AppFunc, which is a delegate that takes an IDictionary&lt;string, object&gt; and returns a Task; its signature is Func&lt;IDictionary&lt;string, object&gt;, Task&gt;.
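As a minimal sketch (the class and field names are mine, purely for illustration), an application delegate can be as small as this:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

internal static class AppFuncSample
{
    // The "application delegate" (AppFunc): it receives the OWIN environment
    // dictionary, which carries all request and response state, and returns a
    // Task that completes when the application has finished with the request.
    internal static readonly Func<IDictionary<string, object>, Task> Noop =
        environment => Task.FromResult(0);
}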

Katana is meant to offer the following:

Portability -> achieved by reducing everything to the fewest primitives possible.

Composability -> it is really easy to compose modules together that participate in every aspect of the incoming requests.

Performance and scalability -> it uses async throughout and there is total decoupling of the web application from the host.

A typical OWIN-enabled application has the following layers.

1. Application -> This is your application, which could be something like ASP.NET MVC.

2. Application framework -> These are regular OWIN component implementations that give developers a simple API for plugging into the pipeline. For example we have SignalR and Web API, though currently SignalR uses OwinHost.

3. Server -> This is responsible for binding to a port and channeling requests into the pipeline for processing, e.g. SystemWeb, HttpListener or the new WebListener.

4. Host -> This is the process manager that spawns the process running your code and puts the pipeline in place. This could be IIS if you will be hosting on IIS, OwinHost.exe which is the Katana implementation, or even your own custom process.

Currently Katana embraces convention over configuration. So when your code runs, it goes through your assembly via reflection looking for a class named Startup with a method named "Configuration" taking an IAppBuilder as the only parameter. IAppBuilder is an interface used to compose modules together. There are a lot of extension methods built on top of IAppBuilder, which we will explore in a later post.

To recap, let's write a simple application and host it in a console application. I am using Visual Studio 2013. Open VS 2013, File -> New Project -> Console Application, give it a nice name and click OK. This is a plain console application with nothing to do with OWIN yet. So open the Package Manager Console and install the following NuGet packages:

Install-Package Microsoft.Owin

Install-Package Microsoft.Owin.Host.HttpListener

Install-Package Microsoft.Owin.Hosting

And at the end your package list should look close to this (screenshot: package installation).

With that in place we will now add the basic code to host our application. As we said, we need to add a class named "Startup" with a method named "Configuration". Let's quickly get to that. After setting up the NuGet packages, copy and paste the following code into your Program.cs file as is:

using System;
using Microsoft.Owin.Hosting;
using Owin;

namespace KatanaTest
{
    class Program
    {
        static void Main(string[] args)
        {
            // The URL the self-hosted server will listen on
            string uri = "http://localhost:8080";

            // WebApp.Start finds the Startup class below by convention
            using (WebApp.Start(uri))
            {
                Console.WriteLine("Server is starting");
                Console.ReadKey();
                Console.WriteLine("Stopping the server");
            }
        }
    }

    /// <summary>
    /// Note the name "Startup"
    /// </summary>
    public class Startup
    {
        /// <summary>
        /// Note the method name and its single parameter
        /// </summary>
        /// <param name="app">Used to compose the OWIN pipeline</param>
        public void Configuration(IAppBuilder app)
        {
            // Simply write to the response stream here
            app.Run(x =>
                x.Response.WriteAsync("We are just testing this application"));
        }
    }
}

Run the code, open your favourite browser, and paste in http://localhost:8080/. The result should be "We are just testing this application" printed in the browser. This means you have successfully written your first OWIN application hosted in a console application.
Katana is used in the ASP.NET MVC 5 project template for authentication. In our case we have used the simplest approach possible to write something to the response stream, but there is a lot of extensibility in Katana: you can plug your own middleware or component into the pipeline, which is exactly how authentication is plugged into ASP.NET MVC 5. Another post regarding this is coming soon.

The main point to note is that all you really need to write a Katana module is a class whose constructor takes one argument of type Func&lt;IDictionary&lt;string, object&gt;, Task&gt;; this represents the next module in the pipeline, which you call manually from your component.

Just to demonstrate, let's write a middleware component that prints a message to the response.

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Owin.Hosting;
using Owin;

namespace KatanaTest
{
    class Program
    {
        static void Main(string[] args)
        {
            // The URL the self-hosted server will listen on
            string uri = "http://localhost:8080";

            using (WebApp.Start(uri))
            {
                Console.WriteLine("Server is starting");
                Console.ReadKey();
                Console.WriteLine("Stopping the server");
            }
        }
    }

    /// <summary>
    /// Note the name "Startup"
    /// </summary>
    public class Startup
    {
        /// <summary>
        /// Note the method name and its single parameter
        /// </summary>
        /// <param name="app">Used to compose the OWIN pipeline</param>
        public void Configuration(IAppBuilder app)
        {
            // Plug our component into the pipeline here
            app.Use(typeof(HelloWorld));

            // Because HelloWorld writes the response and never calls the next
            // module, this terminal delegate is never reached.
            app.Run(x =>
                x.Response.WriteAsync("We are just testing this application"));
        }
    }

    public class HelloWorld
    {
        // The next module in the pipeline; this sample never forwards to it.
        private readonly Func<IDictionary<string, object>, Task> _next;

        public HelloWorld(Func<IDictionary<string, object>, Task> next)
        {
            _next = next;
        }

        public async Task Invoke(IDictionary<string, object> environment)
        {
            // "owin.ResponseBody" is the response stream defined by the OWIN spec
            var response = (Stream)environment["owin.ResponseBody"];

            // The stream is owned by the host, so flush rather than dispose it
            var writer = new StreamWriter(response);
            await writer.WriteAsync("Hello!!");
            await writer.FlushAsync();
        }
    }
}

It's as easy as that, and now when you run the application you see "Hello!!" in the browser. Something worth noting here is "owin.ResponseBody", which is one of the keys in the environment dictionary. For now, think of that dictionary as the HttpContext containing all the information about a request. There are more OWIN-specific keys in the dictionary which we will explore in a later post.
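As a hedged sketch of how those other keys can be used (the RequestLogger name and this particular logging idea are mine, not part of the original sample), a pass-through middleware could read the request method and path and then hand the request on to the next module:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace KatanaTest
{
    // Logs the request method and path, then passes the request along
    public class RequestLogger
    {
        private readonly Func<IDictionary<string, object>, Task> _next;

        public RequestLogger(Func<IDictionary<string, object>, Task> next)
        {
            _next = next;
        }

        public Task Invoke(IDictionary<string, object> environment)
        {
            // Standard keys defined by the OWIN specification
            var method = (string)environment["owin.RequestMethod"];
            var path = (string)environment["owin.RequestPath"];
            Console.WriteLine("{0} {1}", method, path);

            // Hand control to the next component in the pipeline
            return _next(environment);
        }
    }
}

It would be plugged in the same way, e.g. app.Use(typeof(RequestLogger)), registered before any component that terminates the pipeline.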

That's it for now; there is more to come on this same topic, especially on how to write and hook up your own modules in the pipeline. You may also be interested in Checking-out-the-Helios-IIS-Owin-Web-Server-Host, which I recommend for more information on the topic.

Happy coding and kindly get involved via comments, questions or suggestions. Thanks for reading 🙂

Abstract classes or interfaces?

Many times I have seen the question of which factors one should consider when implementing a functionality in terms of an abstraction, and this narrows down to whether one should employ an abstract class or an interface. A few days ago I was surprised when, reviewing some code and just to brainstorm with the developer, I asked him a simple question: why did you choose an abstract class rather than an interface? Guess the response: "I just thought it's good to use an abstract class rather than an interface." I didn't have any other question for him, but I found it useful to demystify this here for him, and for others who would give the same kind of response to such a question.

I do agree that choosing between them can be a daunting task. I will try to summarize what I know and what I have found useful on the internet in this post. But let's start by looking at the definition of each:

An abstract class is a class that cannot be instantiated but must be inherited. The class can contain some default implementation for the child classes to use, and these members should be non-abstract methods. If an abstract method needs to exist, it should be marked with the abstract keyword and must not contain a body.

An interface, on the other hand, is a type definition similar to a class, except that it purely represents a contract between an object and its consumer. It is just a collection of member definitions, e.g. methods, properties, events, etc., without any implementation.

This brings us to the question of which supports or does what. Here are the do's and don'ts for each of them.

  1. Instantiation: Abstract classes cannot be instantiated apart from their derived classes, meaning their constructors are called only by their derived classes. Interfaces, on the other hand, cannot be instantiated at all.
  2. Abstract classes can provide abstract members which derived classes MUST implement, while all interface members MUST be implemented by the implementing class. So you cannot implement an interface partially; if you don't need an implementation of some interface member, or will never use it, you may have to leave a stub, e.g. the ConvertBack method of IValueConverter is usually not needed, yet most implementations of the interface leave a stub. This brings us to the Interface Segregation Principle.
  3. Extensibility: Abstract classes are more extensible than interfaces. You can alter an abstract class without breaking version compatibility, and note that this extensibility applies to non-abstract members. With an interface, on the other hand, if you have to extend it you will have to create a new interface, otherwise you break the existing clients. Consider a situation where you are employed and after some time you get a pay rise: you sign a new contract that looks the same as the previous one but with the new salary figures.
  4. Virtual members: Abstract classes allow virtual members with default implementations for the deriving classes, while for interfaces all members are automatically virtual and cannot contain any implementation.
  5. Accessibility modifiers: You can control the accessibility of members in an abstract class, while all members of an interface are public by default.
  6. Inheritance: As a rule of the C# specification, multiple class inheritance is not supported, meaning a class cannot inherit from more than one class. On the contrary, multiple inheritance is supported with interfaces, as the sketch after this list illustrates.
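A small hypothetical sketch (the types below are made up just for illustration) shows points 4 and 6 side by side: a virtual member with a default implementation in an abstract base, and a class combining single class inheritance with multiple interfaces.

using System;

public abstract class Animal
{
    // Point 4: a virtual member with a default implementation
    public virtual string Describe()
    {
        return "Some animal";
    }
}

public interface ISwimmer { void Swim(); }

public interface IRunner { void Run(); }

// Point 6: a class may inherit only one class, but implement many interfaces
public class Dog : Animal, ISwimmer, IRunner
{
    public override string Describe() { return "A dog"; }

    public void Swim() { Console.WriteLine("Paddling"); }

    public void Run() { Console.WriteLine("Running"); }
}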

With that in place, here are some guidelines you should follow when deciding which one to use and when to use it.

  1. If you anticipate creating multiple versions of a component, use an abstract class, the reason being that it is easy to create and version your components. For example, by changing or updating the base class, all inheriting classes are automatically updated, whereas interfaces do not support versioning: once an interface is created it cannot be changed; you will have to create a new one. The analogy again: once you sign a contract with your employer and you get a pay rise, you sign a new contract; you don't manipulate the previous one.
  2. If the functionality you are creating will be useful across a wide range of disparate objects, use an interface. Abstract classes should be used primarily for objects that are closely related, whereas interfaces are best suited for providing common functionality to unrelated classes.
  3. If you are designing small, concise bits of functionality, use interfaces. If you are designing large functional units, use an abstract class.
  4. If you want to provide common, implemented functionality among all implementations of your component, use an abstract class. Abstract classes allow you to partially implement your class, whereas interfaces contain no implementation for any members.

So, enough theory; let's dive in and see how we can establish some, if not all, of the aforementioned points. We are going to work with polygons. Basically we have a number of polygons, each with a different way of calculating its area. We will employ both concepts, an interface and an abstract class.

public interface IRegularPolygon
{
    int NumberOfSides { get; set; }
    int SideLength { get; set; }

    double GetArea();
}

public class Octagon : IRegularPolygon
{
    public int NumberOfSides { get; set; }
    public int SideLength { get; set; }

    public Octagon(int length)
    {
        NumberOfSides = 8;
        SideLength = length;
    }

    public double GetArea()
    {
        return SideLength * SideLength * (2 + 2 * Math.Sqrt(2));
    }
}

We also have an abstract class approach to the same thing.

public abstract class AbstractRegularPolygon
{
    public int NumberOfSides { get; set; }
    public int SideLength { get; set; }

    public AbstractRegularPolygon(int sides, int length)
    {
        NumberOfSides = sides;
        SideLength = length;
    }

    public abstract double GetArea();
}

public class Triangle : AbstractRegularPolygon
{
    public Triangle(int length) :
        base(3, length) { }

    public override double GetArea()
    {
        return SideLength * SideLength * Math.Sqrt(3) / 4;
    }
}

Assuming you have implemented the above, you now get a requirement to add a way to calculate the perimeter of the polygons. We know that calculating the perimeter of a regular polygon is just multiplying the number of sides by the length of one side. With the abstract class implementation it is really straightforward; all you need to do is add the following to the base class:

public double GetPerimeter()
{
    return NumberOfSides * SideLength;
}

With the interface implementation you cannot just add the method declaration, since that would break the existing clients. One way is to create a new interface with all the previous declarations plus the new one, or better, to employ interface inheritance, where you create a new interface containing the perimeter method that inherits from IRegularPolygon. I would go with the latter.
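As a sketch of that approach (the interface and class names here are my own), the new interface inherits from IRegularPolygon, so existing clients keep compiling while new code can ask for the perimeter:

public interface IRegularPolygonWithPerimeter : IRegularPolygon
{
    double GetPerimeter();
}

public class Square : IRegularPolygonWithPerimeter
{
    public int NumberOfSides { get; set; }
    public int SideLength { get; set; }

    public Square(int length)
    {
        NumberOfSides = 4;
        SideLength = length;
    }

    public double GetArea()
    {
        return SideLength * SideLength;
    }

    public double GetPerimeter()
    {
        return NumberOfSides * SideLength;
    }
}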

To keep the post short, we end by looking at what in the framework employs abstract classes and interfaces. In the BCL we have the System.IO.Stream class, which is an abstract class. It contains implementations of various methods as well as abstract and virtual methods which deriving classes implement (see the small sketch after the list). Some of the deriving classes include:

  • MemoryStream
  • FileStream
  • BufferedStream
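As a small hypothetical sketch, any of those derived types can be handled through the abstract base class, which is exactly the benefit of the shared implementation:

using System.IO;

internal static class StreamSample
{
    // The parameters are typed as the abstract Stream, so any derived stream works
    internal static void Copy(Stream source, Stream destination)
    {
        source.CopyTo(destination);
    }

    internal static void Demo()
    {
        using (Stream source = new MemoryStream(new byte[] { 1, 2, 3 }))
        using (Stream destination = new MemoryStream())
        {
            Copy(source, destination);
        }
    }
}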

For interfaces, we have the List&lt;T&gt; class, which implements a list of interfaces that also shows interface inheritance and a good application of the Interface Segregation Principle. Some of the interfaces it implements are:

IList<T>, ICollection<T>, IList, ICollection, IReadOnlyList<T>, IReadOnlyCollection<T>, IEnumerable<T>, IEnumerable

So we end the post here to make sure it's not too long. Having demonstrated a couple of the points, I leave it to you to try the rest and to poke me if any issue arises, if you need some clarity, or for any other technical engagement.

It's my hope that you will consider the above-mentioned points next time you find yourself in a dilemma over which one to use, abstract classes or interfaces.

Again, happy coding and bye for now 🙂

Delegates, Events and Event Args

In our previous post on delegates, events and event args we discussed what each is and its role in event-driven programming. In this post we present the code counterpart of that post. So, without wasting too much time or keeping you waiting, we start right away on the code based on our previous discussion.

EventArgs are used to contain the additional information that should be passed from the event to the event handler. To encapsulate this additional information we will extend the EventArgs class and add the information we want the doctor to know about the patient. To keep it simple we will only need the first and last name of the patient.


internal class PatientArgs : EventArgs
{
    public PatientArgs(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    public string FirstName { get; set; }
    public string LastName { get; set; }
}

So far so good, huh? We now need to have our Patient class in place. Remember it is the patient who should raise the alert that he has arrived. Also note the event declaration: in my case I am using the generic delegate EventHandler&lt;TEventArgs&gt;, which represents a method that will handle an event that provides data.

internal class Patient
{
    // Event raised when the patient arrives
    public event EventHandler<PatientArgs> Arrived;

    public void OnPatientArrived()
    {
        if (Arrived != null)
        {
            // Sample patient details passed to the subscribers
            Arrived(this, new PatientArgs("John", "Doe"));
        }
    }
}

So clients (doctors) interested in knowing when a patient arrives should subscribe to the Arrived event, which you can see done in the next snippet. You can see clearly that Program provides a handler for the event for when a patient arrives. What happens is that when the patient arrives, "p_Arrived" runs. Note that one of the parameters of p_Arrived is PatientArgs, which carries the FirstName and LastName of the patient. You are not limited in the amount of information you can pass to the event handler in the EventArgs.


class Program
{
    public static void p_Arrived(object sender, PatientArgs e)
    {
        Console.WriteLine(string.Format("Attending {0} {1}", e.FirstName, e.LastName));
    }

    static void Main(string[] args)
    {
        Patient p = new Patient();

        // Subscribe the handler, then raise the event
        p.Arrived += p_Arrived;
        p.OnPatientArrived();
        Console.Read();
    }
}

There you have it. Happy coding :).

Understanding delegates, events and event handlers

This is the first post in a series focusing on events, delegates and event handlers. In the series we will discuss what each is and its role in event-driven programming. This first post will be theory, where we bring into focus what each is, and we will try to explain using a simple example.

In this post we are going to look at delegates, events and event handlers and understand how each of them depends on the others to work. A delegate in C# is much like a function pointer in C++: the delegate points to a function, so it knows which function to call at runtime. Basically, a delegate is a specialized .NET class. Behind the scenes, a class is generated for every delegate you declare, and this generated class inherits from the MulticastDelegate class, which tracks the subscribers.
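As a minimal sketch of that idea (the names here are mine, purely for illustration):

using System;

internal class DelegateSample
{
    // The compiler turns this declaration into a class deriving from
    // System.MulticastDelegate behind the scenes.
    internal delegate void Notify(string message);

    private static void PrintToConsole(string message)
    {
        Console.WriteLine(message);
    }

    internal static void Demo()
    {
        // The delegate instance "points to" PrintToConsole...
        Notify notify = PrintToConsole;

        // ...so invoking the delegate calls that method at runtime.
        notify("Delegates behave much like type-safe function pointers.");
    }
}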

Delegates play a major role in event-driven programming, as they act as the glue or pipeline through which the event communicates with the event handler. It is through delegates that you pass the event arguments to event handlers.

Event arguments encapsulate the actual data passed from the event to the event handler. For example, in a Windows application we have useful events like mouse click, which passes a lot of data, including the x and y coordinates of the clicked point. We have others, like the button click event, which have minimal data to pass.

An event handler is a method that gets data from a delegate and processes it appropriately. The difference from other methods is that its signature is dictated by the delegate itself. Most event handlers and delegates take two parameters, a sender and an EventArgs. The sender represents whoever raised or owns the event; e.g. for OnClick, the sender is the button, as it is the one being clicked. The EventArgs is the extra data which the handler of the event might find useful.

Events are a way of notification and are used everywhere in the .NET Framework. They provide a way to trigger notifications and alerts from the interaction with the program, either by a user or by other programs. In short, they signal the occurrence of an action. Once an event is raised, it passes data (EventArgs) which is very important to the method responding to it. And so between an event and an event handler there is a delegate; otherwise you are just calling a function like any other. You can raise an event in two ways:

  1. calling the event like a method
  2. accessing the event's delegate and invoking it directly (see the sketch below)
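Inside the class that declares the event, the two approaches look roughly like this (a minimal sketch with made-up names):

using System;

internal class Alarm
{
    public event EventHandler Triggered;

    public void RaiseLikeAMethod()
    {
        // 1. Calling the event like a method (only possible inside the declaring class)
        if (Triggered != null)
        {
            Triggered(this, EventArgs.Empty);
        }
    }

    public void RaiseThroughTheDelegate()
    {
        // 2. Copying the underlying delegate and invoking it directly
        EventHandler handler = Triggered;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}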

The best way to explain this is a hospital setup: we have a doctor, a patient and a receptionist. Once a new patient arrives he provides information to a receptionist. We can have more than one receptionist, doctor and patient. The receptionist takes the paperwork containing the patient's information as well as his (the receptionist's) name, so that the doctor can know who attended to the patient. The hospital protocol is that a patient should not go straight to the doctor, so the medium for passing information to the doctor is the receptionist. With reference to the explanation so far, we can deduce the following:

-Patient: the event raiser; he owns the event.

-Doctor: the event handler. He knows how to treat the patient.

-Receptionist: the delegate. Given a patient, he knows the doctor who will treat the patient.

-Patient information: the event args.

We can define our event as PatientArrived. When the doctor arrives at the hospital he prepares himself and alerts the receptionist that he can now attend to any patient who arrives. This means he can handle the PatientArrived event.

Hope this makes it easier to understand the concepts behind events, delegates and event handlers.

In the next post we will present this information in code, and you will see how each comes into play.

Happy coding 🙂

Strategy design pattern demystified

By definition from Wikipedia, the strategy pattern, also called the policy pattern, is a software design pattern whereby an algorithm's behaviour can be selected at runtime. Using the pattern, you define a family of algorithms and make them interchangeable.

With the strategy pattern the specific behaviours should not be inherited; instead you use an interface to abstract the behaviour, which each of the strategy implementations then implements. This is in harmony with the open/closed principle, which states that classes should be open for extension but closed for modification.

The pattern is really simple, a nice pattern to get started on design patterns, and very applicable since we implement algorithms on a daily basis.

The purpose of this design pattern includes, but is not limited to, the following:

  • Encapsulating a family of related algorithms
  • Allowing the algorithms to vary and evolve separately from the context (the context being the class using them)
  • Allowing a class to maintain a single purpose

Some of the red flags suggesting that you might need this pattern in your code include:

  • Switch statements in code
  • Adding a new way of implementing a certain algorithm means adding a new file for that specific implementation.

In this post we are going to consider a situation where we are required to calculate both the weighted and the simple average mean of some given ratings, e.g. game ratings, restaurant ratings, etc. Just from the requirement you can easily see that we will probably need different ways of implementing each of them, since you can hardly get one from the other. This prompts us to use the strategy pattern to implement the different algorithms.

Assuming we have the following Review class, which contains a rating property:

public class Review
{
    public int Rating { get; set; }
}

For us to be in harmony with the open/closed principle, and since both algorithms compute a rating, we will have an interface with a single method, as simple as:

internal interface IRatingAlgorithm
{
    int Compute(IList<Review> reviews);
}

Therefore any algorithm that needs to be in this family of algorithms MUST implement this interface. If we ever need to introduce another algorithm, we just create a new class that implements the interface, without modifying the existing ones (hopefully you can now see the open/closed principle at work). Our first implementation is the simple average mean, which uses the LINQ Average extension method:

internal class SimpleRatingAlgorithm : IRatingAlgorithm
{
    public int Compute(IList<Review> reviews)
    {
        return (int)reviews.Average(r => r.Rating);
    }
}

Easy, huh? The weighted version goes like this:

internal class WeightedRatingAlgorithm : IRatingAlgorithm
{
    public int Compute(IList<Review> reviews)
    {
        var counter = 0;
        var total = 0;

        // Reviews in the first half of the list carry double weight
        for (int i = 0; i < reviews.Count; i++)
        {
            if (i < reviews.Count / 2)
            {
                counter += 2;
                total += reviews[i].Rating * 2;
            }
            else
            {
                counter += 1;
                total += reviews[i].Rating;
            }
        }

        return total / counter;
    }
}

And our context class is here:

internal class Rater
{
    private readonly List<Review> _reviews;

    public Rater(List<Review> reviews)
    {
        _reviews = reviews;
    }

    public int ComputeResult(IRatingAlgorithm algorithm)
    {
        // The strategy is injected per call, so the context never changes
        // when a new algorithm is added
        return algorithm.Compute(_reviews);
    }
}

Our client will now use the context class to call either of our algorithms. Here is how I am doing it, with a small static helper method to build some sample reviews.

class Program
{
    static void Main(string[] args)
    {
        var reviews = SampleReviews(new[] { 4, 8 });

        var rater = new Rater(reviews);

        var simpleResult = rater.ComputeResult(new SimpleRatingAlgorithm());

        var weighted = rater.ComputeResult(new WeightedRatingAlgorithm());

        Console.WriteLine("Simple average mean: {0}, Weighted mean: {1}", simpleResult, weighted);
        Console.ReadLine();
    }

    public static List<Review> SampleReviews(params int[] rating)
    {
        return rating.Select(x => new Review { Rating = x }).ToList();
    }
}

Advantages

  • Strategies cannot use members of the containing class, so they have to be self-contained
  • Tests may now be written for individual concrete strategies
  • Strategies may be mocked when testing the context class
  • Adding a new strategy does not violate the open/closed principle

Happy coding 🙂
