Parallel computation is a programming technique that executes commands simultaneously and concurrently on a single processor or on multiple processor cores inside a CPU. Parallel computation is useful for improving a computer's performance: the more processes that can be carried out at the same time, the faster the overall work completes.
Parallel Concept
The concept of parallelism is a processor's ability to perform one or more tasks simultaneously or concurrently; in other words, the processor is able to handle several tasks at the same time.
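As a minimal sketch of this idea (standard C++; the task names are illustrative, not from the original), the program below starts two tasks that can run at the same time on a multi-core processor.

```
#include <iostream>
#include <thread>

// Two illustrative tasks; on a multi-core CPU they can run at the same time.
void taskA() { std::cout << "task A running\n"; }
void taskB() { std::cout << "task B running\n"; }

int main() {
    std::thread a(taskA);   // start task A on its own thread
    std::thread b(taskB);   // start task B concurrently
    a.join();               // wait for both tasks to finish
    b.join();
    return 0;
}
```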
Distributed Processing
Distributed processing is parallel processing that is spread across multiple machines. In other words, it is the ability of several computers, running simultaneously, to work together and solve a problem quickly.
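The original does not name a particular framework, so the sketch below is only one possible illustration, using MPI (Message Passing Interface), a common library for distributed processing: every process, typically running on a different machine, computes a partial result, and the results are combined on one process.

```
#include <mpi.h>
#include <cstdio>

// Hedged sketch: MPI is only one possible framework for distributed processing.
// Each process (typically on a different machine) computes a partial result,
// then the partial results are combined with a reduction.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's id
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes

    // Illustrative work: every process contributes its own rank.
    int local = rank;
    int total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("sum of ranks over %d processes = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```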
Parallel Computer Architecture
According to Flynn's taxonomy, proposed by processor designer Michael J. Flynn, computer architectures are divided into four categories.
- SISD (Single Instruction, Single Data Stream): a computer that has only one processor and executes a single instruction stream serially.
- SIMD (Single Instruction, Multiple Data Stream): a computer that has more than one processing element but executes a single instruction in lock step on different data elements in parallel (a short sketch contrasting SISD and SIMD follows this list).
- MISD (Multiple Instruction, Single Data Stream): a computer in which multiple instruction streams operate in parallel on a single data stream. In practice no computer has been built with this architecture, because the model is hard to apply; to date no machine uses it.
- MIMD (Multiple Instruction, Multiple Data Stream): a computer that has more than one processor and executes more than one instruction stream in parallel. This is the type most widely used to build parallel computers, and many supercomputers implement it, because the model and its concepts are not too complicated to understand.
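To make the first two categories concrete, here is a hedged sketch (array names and sizes are illustrative assumptions) contrasting an SISD-style serial loop with a CUDA kernel whose threads all execute the same instruction on different data elements, which is close in spirit to the SIMD model.

```
// SISD-style: one instruction stream, one data element at a time.
void add_serial(const float* a, const float* b, float* c, int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

// SIMD/SIMT-style: every GPU thread executes the same instruction
// (an addition) on a different data element, in parallel.
__global__ void add_parallel(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}
```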
Introduction to Thread Programming
A thread in computer programming is the information a single program keeps so that it can serve multiple users at the same time. Threads allow the program to keep track of which user is currently being served and then switch, in turn, to a different user. Multiple threads of one process run concurrently and share that process's resources, such as memory, while separate processes do not share these resources.
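As a hedged illustration of this resource sharing (standard C++; the counter and the thread count are arbitrary choices, not from the original), the sketch below lets several threads of one process update the same counter in shared memory.

```
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    // All threads of this process share the same address space,
    // so they can all see and update this counter.
    std::atomic<int> counter{0};

    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([&counter] {
            for (int i = 0; i < 1000; ++i)
                ++counter;              // shared memory, updated atomically
        });
    }
    for (auto& w : workers)
        w.join();                       // wait for every thread

    std::cout << "counter = " << counter << "\n";  // prints 4000
    return 0;
}
```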
Introduction to CUDA GPU Programming
A GPU is a specialized processor that accelerates rendering and rapidly manipulates memory to speed up image processing. The GPU is usually located on the graphics card of a desktop or laptop computer. CUDA (Compute Unified Device Architecture) is a scheme created by NVIDIA that enables the GPU (Graphics Processing Unit) to perform not only graphics processing but also general-purpose computation. With CUDA we can take advantage of the many processing cores on an NVIDIA GPU to carry out heavy calculations.
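As a minimal sketch of this workflow (the kernel, array size, and launch configuration are illustrative assumptions, not taken from the original), the host program below allocates GPU memory, copies data to the device, launches a kernel that runs on many GPU cores at once, and copies the result back.

```
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread squares one element of the input array.
__global__ void square(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * in[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host data.
    float h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = static_cast<float>(i);

    // Allocate device memory and copy the input to the GPU.
    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel: 4 blocks of 256 threads cover all 1024 elements.
    square<<<(n + 255) / 256, 256>>>(d_in, d_out, n);

    // Copy the result back and release device memory.
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_in);
    cudaFree(d_out);

    std::printf("h_out[10] = %.1f\n", h_out[10]);  // expect 100.0
    return 0;
}
```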