
Author Topic: Explain the concept of CPU pipelines? What are they and how do they help speed things up?  (Read 2102 times)


pllaybuoy

  • Guest
I don't understand the pipeline logic. What exactly are pipelines in a CPU? Suppose a CPU has a register size of 32 bits; does that mean it has 32 internal data/system bus lines? Then how do pipelines speed it up? Are the 32 electrical lines subdivided further, or do they add another set of 32 lines (and if that's the case, why don't they call it 64-bit)? I really can't get my head around the pipeline structure.

I'm reading the book "pc-repair" just to improve my knowledge of hardware, but no matter how hard I try, I can't understand the pipeline concept/structure of a CPU. If anybody knows anything about this, please help. I've been searching for about a week now: Google and many other sources, but no answer yet. I hope there is somebody here who knows a lot about hardware. Please don't criticize me for not searching first; I've already looked for an answer everywhere.

*sorry for bad grammar*

Offline ande

  • Owner
  • Titan
  • *
  • Posts: 2664
  • Cookies: 256
http://en.wikipedia.org/wiki/Pipeline_%28computing%29
http://www.youtube.com/watch?v=1kpfgXHabD4
http://en.wikipedia.org/wiki/Instruction_pipeline

Don't quote me on this, but I think the point is to work on multiple instructions in one go, instead of doing one instruction, reading the next, doing that instruction, reading the next, and so on.

Take the example of computing ((1+2)/3)+10 in assembly. That would be, I don't know, maybe 10 lines of instructions. But if you had a pipeline that could work through all of those in one go, the performance would be much greater (I guess). Not a very realistic scenario, but you get the idea.
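
To put rough numbers on that idea, here's a toy sketch (an illustration only, not real assembly for any particular chip, and the instruction mix is made up): assume the expression compiles to a handful of simple instructions and that a classic 5-stage pipeline (fetch, decode, execute, memory, writeback) can start a new instruction every cycle.

Code:
# Toy model of why pipelining helps; the instruction mix below is
# made up for illustration, not real assembly for any particular CPU.
instructions = [
    "LOAD r1, 1",    # r1 = 1
    "LOAD r2, 2",    # r2 = 2
    "ADD  r1, r2",   # r1 = 3
    "LOAD r2, 3",    # r2 = 3
    "DIV  r1, r2",   # r1 = 1
    "ADD  r1, 10",   # r1 = 11
]

STAGES = 5  # fetch, decode, execute, memory, writeback
n = len(instructions)

# No pipeline: each instruction walks through all 5 stages alone.
sequential_cycles = n * STAGES

# Ideal pipeline: a new instruction enters every cycle, so the last
# one finishes after (n - 1) + STAGES cycles (ignoring hazards).
pipelined_cycles = (n - 1) + STAGES

print(n, "instructions:", sequential_cycles, "cycles unpipelined,",
      pipelined_cycles, "cycles pipelined")

Same instruction count either way; the speedup comes purely from overlapping the stages.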
if($statement) { unless(!$statement) { // Very sure } }
https://evilzone.org/?hack=true

Offline centizen

  • Peasant
  • *
  • Posts: 70
  • Cookies: 8
  • Certified Evil Genius
Quote from: ande
Don't quote me on this, but I think the point is to work on multiple instructions in one go, instead of doing one instruction, reading the next, doing that instruction, reading the next, and so on.

Bang on. I'll elaborate, though.


In a CPU, very large instructions can sometimes take multiple clock cycles to complete. This was a lot more prevalent back when registers were only 8 or 16 bits wide and the operations were large by comparison.


These instructions would need to be broken down into multiple smaller instructions and executed separately. What designers started to realize was that while this happened, most of the chip's capacity went unused: even though the CPU usually had registers and L* cache far larger than what it could execute at once, it would still pull in only one instruction at a time, work on it, store the result, load up the next one, and so on. What they had it do instead was load as many instructions (along with their memory and output pointers) into the cache at once and keep the CPU working through all of them in one big execution run.

A 32-bit CPU usually has far more than just 32 bits of cache, so there is usually ample space for storing these pipelined instructions.
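
Here's a minimal sketch of that overlap, cycle by cycle (assuming an idealized 5-stage pipeline with no stalls, which real hardware only approximates):

Code:
# Cycle-by-cycle occupancy of an idealized 5-stage pipeline.
# Purely illustrative: real CPUs stall on hazards, cache misses, etc.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
N_INSTR = 4

total_cycles = N_INSTR - 1 + len(STAGES)
print("cycle " + " ".join(f"{s:>4}" for s in STAGES))
for cycle in range(total_cycles):
    # Instruction i sits in stage s during cycle i + s.
    cells = []
    for s in range(len(STAGES)):
        i = cycle - s
        cells.append(f"  i{i}" if 0 <= i < N_INSTR else "   .")
    print(f"{cycle:>5} " + " ".join(cells))

Once the pipeline fills, one instruction completes every cycle, which is the whole trick.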

Basically, you could think of it a bit like buffering a video: you're preloading the things you'll need just in time for when you need them, instead of fetching each one only as you need it. This doesn't work in all situations, of course, and it presents some problems, like added latency and, in some cases, taking longer than the original approach, but in operations like blits on large bitmaps or heavy mathematics it can seriously speed up an application.
« Last Edit: June 23, 2012, 06:02:23 AM by centizen »

Offline techb

  • Soy Sauce Feeler
  • Global Moderator
  • King
  • *
  • Posts: 2350
  • Cookies: 345
  • Aliens do in fact wear hats.
^This.

The buffer example was a good analogy. It's also loosely comparable to threading, I guess.
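
Taking that threading comparison loosely, here's a toy sketch (an analogy only, not how the hardware actually does it): two threads act as pipeline "stages" connected by a queue, so the "fetch" stage keeps feeding work while the "execute" stage is still busy.

Code:
import queue
import threading

# Two "stages" joined by a queue, like workers on an assembly line:
# stage 1 keeps fetching work while stage 2 executes earlier items.
work = queue.Queue()
DONE = object()  # sentinel marking the end of the stream

def fetch_stage():
    for i in range(5):
        work.put(f"instruction {i}")  # hand off to the next stage
    work.put(DONE)

def execute_stage():
    while True:
        item = work.get()
        if item is DONE:
            break
        print("executing", item)

t1 = threading.Thread(target=fetch_stage)
t2 = threading.Thread(target=execute_stage)
t1.start(); t2.start()
t1.join(); t2.join()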
>>>import this