
How Multithreading improves performance

 
raj talatam
Ranch Hand
The fundamental rule of a processor is that it processes a single operation at a time. If that is so, how does threading improve performance?
Say my program has a for loop with 80 iterations (1 thread --> 80 sec) and it is run by 2 threads (2 threads --> 40 sec); this is how people usually explain the thread concept (sketched below).
But the processor still takes one instruction at a time (80 instructions --> 80 sec), so there is no change in performance, right?
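Roughly, the splitting that people describe looks something like the sketch below, where doWork is a hypothetical stand-in for an iteration that takes about a second:

public class SplitLoop {
    // hypothetical stand-in for one iteration's worth of work (~1 sec)
    static void doWork(int i) {
        // ... pretend this takes about a second
    }

    public static void main(String[] args) throws InterruptedException {
        Thread first  = new Thread(() -> { for (int i = 0;  i < 40; i++) doWork(i); });
        Thread second = new Thread(() -> { for (int i = 40; i < 80; i++) doWork(i); });
        first.start();
        second.start();
        first.join();   // wait for both halves to finish
        second.join();
    }
}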



 
Tim Cooke
Marshal
Fundamentally that is true. A single processor core can only do one thing at a time. But modern processors have multiple cores.
 
Campbell Ritchie
Sheriff
You can increase throughput if you can send equal amounts of work to each core. On a 4-core machine you will get the fastest execution if you can send 25% of the work to each core.

But that is a dreadful oversimplification. Start by searching for Amdahl's Law.
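For the curious, Amdahl's Law puts a ceiling on that speed-up: if a fraction p of the work can be parallelised across N cores, the best possible speed-up is

    speedup = 1 / ((1 - p) + p / N)

For example (numbers invented purely for illustration), if 90% of a task parallelises perfectly on a 4-core machine, the ceiling is 1 / (0.1 + 0.9/4), which is about 3.1x, not 4x.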
 
Paul Clapham
Sheriff
If you're only talking about operations which affect memory, then I can understand why you don't see where the gains in performance come from. But consider what happens when your loop involves reading data from a file. In this case the computer spends a lot of time waiting for data to be read from a disk, so while that's happening it could be doing other things. So even with a single processor you can achieve gains in performance by accessing several files at once.

(Of course if you try to read too many files at once you may have disk contention, where the read head has to jump back and forth across the disk between File 1 and File 2 and so on. There's nothing simple about performance.)
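A minimal sketch of that idea, using an ExecutorService to read several files at once (the file names are placeholders):

import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

public class ConcurrentReads {
    public static void main(String[] args) throws Exception {
        // placeholder file names -- substitute your own
        List<String> files = Arrays.asList("file1.txt", "file2.txt", "file3.txt");
        ExecutorService pool = Executors.newFixedThreadPool(files.size());
        List<Future<Integer>> results = new ArrayList<>();
        for (String name : files) {
            // each task reads one file; while it is waiting on the disk,
            // the CPU is free to run one of the other tasks
            results.add(pool.submit(() -> Files.readAllBytes(Paths.get(name)).length));
        }
        for (Future<Integer> result : results) {
            System.out.println(result.get() + " bytes read");
        }
        pool.shutdown();
    }
}

Whether this actually beats reading the files one after another depends entirely on the disk, for the contention reasons above.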
 
raj talatam
Ranch Hand
Tim Cooke wrote:Fundamentally that is true. A single processor core can only do one thing at a time. But modern processors have multiple cores.


So if that is true, multithreading concepts are not valid in this scenario, right?
 
Campbell Ritchie
Sheriff
The concepts do affect your scenario. Very few new computers have single-core chips nowadays. You can virtually always divide a task in two and let each half of your chip deal with half of the task. There is, however, an overhead in creating new threads and dividing the task, so the longer the whole task takes to run, the more likely you are to benefit from multi-threading. Your 80-iteration loop probably won't benefit, but my 80,000,000-iteration loop will take approximately half the time if I divide it into two tasks of 40,000,000 each. The easiest way to do that is probably to use a Stream in parallel; you need to install Java 8 to do that.
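Something along these lines, as a sketch (the i % 7 is just a stand-in for whatever work the real loop body does):

import java.util.stream.IntStream;

public class ParallelLoop {
    public static void main(String[] args) {
        // the parallel stream splits the 80,000,000 iterations across the available cores
        long total = IntStream.range(0, 80_000_000)
                              .parallel()
                              .mapToLong(i -> i % 7)   // stand-in for real per-iteration work
                              .sum();
        System.out.println(total);
    }
}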
 
Tim Holloway
Bartender
The generally accepted figure (probably out of date) for multi-threading is that the overhead of creating, switching, and maintaining multiple threads is approximately 10% per thread. So if your task is strictly linear and requires only one thread, or if it could optionally be done on multiple threads but only one thread can be processing at a time (with the other threads waiting on it), then you are actually LESS efficient by multi-threading. This fact was abused by partisans of the Atari ST back around 1986 when they tried to promote themselves at the expense of the Commodore Amiga (which was the first mass-market computer shipped with a fully preemptive multi-tasking OS).

Interestingly, this 10% switching overhead is also what's quoted as the amount of efficiency lost when people attempt to multi-task.

Multi-tasking is actually only efficient if most of the tasks are spending most of their time waiting on some external event, so that when one thread is waiting, another can be using the otherwise-idle CPU.

Things are more complicated these days, since we have multiple physical processors (both in terms of multi-chip and multi-core processors), but the typical computer running a multi-tasking OS has many more threads running than it has cores. Using a GPU can help, but the GPU processors aren't as general-purpose as the main CPUs are. And finally, there are things like instruction pipelining which transparently allow what is effectively multi-threading at micro-scale.
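For what it's worth, you can ask the JVM how many processors it sees, and compare that to a rough thread count, with a sketch like this:

public class CoreCount {
    public static void main(String[] args) {
        // number of processors (cores, or hardware threads) the JVM can use
        int cores = Runtime.getRuntime().availableProcessors();
        // estimate of active threads in this thread group and its subgroups
        int threads = Thread.activeCount();
        System.out.println(cores + " processors visible, about " + threads + " live threads in this group");
    }
}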

Still, the bottom line is that like every other Silver Bullet, there are best cases and worst cases for multi-threading. Knowing the difference is important.
 