# MPSchedulerPerformance


Benchmark topologies:

<pre>
1 pipeline x 3 stages
3 pipelines x 1 stage
3 pipelines x 3 stages
</pre>

Each FIR has 256 taps. Since these are implemented as a dot product, we count 2 floating-point operations (FLOPs) per tap, per sample. For each run of the benchmark, we measure the user, system, and real time. In addition, we know the number of samples processed by the graph and the topology, so we can compute the total number of floating-point operations. We compute GFLOPS as total FLOPs / real time / 1e9.
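The computation above can be sketched as a small helper. This is a minimal illustration, not code from the benchmark itself; the function name and parameters (`pipelines`, `stages`, `samples`, `real_time_s`) are assumptions, and it assumes every stage of every pipeline is one 256-tap FIR:

```python
TAPS = 256
FLOP_PER_TAP = 2  # one multiply and one add per tap in the dot product

def gflops(pipelines, stages, samples, real_time_s):
    """GFLOPS = total FLOPs across all FIR blocks / real time / 1e9.

    Assumes each of the pipelines * stages blocks is a 256-tap FIR
    that processes every sample.
    """
    n_firs = pipelines * stages
    total_flops = n_firs * samples * TAPS * FLOP_PER_TAP
    return total_flops / real_time_s / 1e9

# Example: the 3 pipelines x 3 stages topology, 10 M samples in 5 s
# of real time, gives 9 * 1e7 * 256 * 2 / 5 / 1e9 = 9.216 GFLOPS.
print(gflops(3, 3, 10_000_000, 5.0))
```

Real time (wall clock) rather than user + system time is the right denominator here, since the point of the multiprocessor scheduler is to overlap work across cores: user time can exceed real time on a parallel run.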