If you have come across the TPL (Task Parallel Library) in .NET 4.5, it is important that you also understand the two kinds of parallelism described below.
Task parallelism is the simultaneous execution on multiple cores of many different functions across the same or different datasets.
Data parallelism (aka SIMD) is the simultaneous execution on multiple cores of the same function across the elements of a dataset.
http://msdn.microsoft.com/en-us/library/dd460717(v=vs.110).aspx
http://msdn.microsoft.com/en-us/library/ff963548.aspx
http://www.zdnet.com/understanding-task-and-data-parallelism_p2-3039289129/
Data parallelism is pretty simple. It is the concept that you have a lot of data that you want to process — perhaps a lot of pixels in an image, perhaps you have a whole lot of payroll cheques to update. Taking that data and dividing it up among multiple processors is a method of getting data parallelism. This is an area that supercomputers have excelled at for years. It is a type of problem that is pretty well understood and more emphasis has been put on data parallelism in the past than on task parallelism.
Task parallelism, on the other hand, is where you have multiple tasks that need to be done. So perhaps you have a large data set and you want to know the minimum value and you want to know the maximum and you want to know the average. This is a rather trivial example but you could have different processors each look at the same data set and compute different answers. So task parallelism is a different way of looking at things. Instead of dividing up the data and doing the same work on different processors, in task parallelism what you're doing is dividing up the task to apply.
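The min/max/average idea above can be sketched with three tasks that each examine the same data set. This is a minimal illustration; the array contents are arbitrary:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class MinMaxAvgDemo
{
    static void Main()
    {
        int[] data = Enumerable.Range(1, 1000000).ToArray();

        // Task parallelism: three different computations run
        // concurrently over the same data set.
        Task<int> min = Task.Run(() => data.Min());
        Task<int> max = Task.Run(() => data.Max());
        Task<double> avg = Task.Run(() => data.Average());

        Task.WaitAll(min, max, avg);
        Console.WriteLine($"Min={min.Result}, Max={max.Result}, Avg={avg.Result}");
    }
}
```

Note that each task performs a *different* function; this is what distinguishes it from the data-parallel case, where every worker runs the same function on a different slice of the data.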
The most common form of task parallelism is something called "pipelining," in which a sequence of tasks is arranged in stages and the output of one stage becomes the input of the next, so that different stages work on different items at the same time.
Data Parallelism
http://msdn.microsoft.com/en-us/library/dd460720(v=vs.110).aspx
Parallel.ForEach(sourceCollection, item =>
{
    Process(item);
});
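A complete, runnable data-parallel example might look like the following sketch, using `Parallel.For` to split an index range across cores (the square-root work is purely illustrative):

```csharp
using System;
using System.Threading.Tasks;

class DataParallelDemo
{
    static void Main()
    {
        double[] values = new double[1000000];

        // Data parallelism: the same body runs on every core,
        // each core handling a different slice of the index range.
        Parallel.For(0, values.Length, i =>
        {
            values[i] = Math.Sqrt(i);
        });

        Console.WriteLine(values[values.Length - 1]);
    }
}
```

Each iteration must be independent of the others for this to be safe; here every iteration writes only its own array element.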
Task Parallelism
static void Main(string[] args)
{
    // Two unrelated units of work run concurrently.
    // TaskA and TaskB are user-defined classes with an Activate() method.
    Task[] tasks = new Task[2]
    {
        Task.Factory.StartNew(() =>
        {
            TaskA objTaskA = new TaskA();
            // objTaskA.MessageSent += MessageSent;
            // objTaskA.TaskCompleted += TaskCompleted;
            objTaskA.Activate();
        }),
        Task.Factory.StartNew(() =>
        {
            TaskB objTaskB = new TaskB();
            objTaskB.Activate();
        })
    };

    // Block until both tasks have finished.
    Task.WaitAll(tasks);
}