GNU Radio 4.0 Summary of Proposed Features

==High Level Design Goals==

GNU Radio 4.0 seeks to make major changes to the core GNU Radio code in order to achieve the following goals:

* Modular Runtime Components
* Improved Support of Heterogeneous Architectures
* Support for Distributed Architectures

In addition, there are many things we are able to improve "while we're at it" that aren't related to performance but are aimed more at the Developer and User Experience. These include:

* Separating the Block API from the runtime
* YAML based block design methodology [https://github.com/gnuradio/greps/blob/main/grep-0021-YAML%20driven%20block%20implementation.md]
* Consolidated Parameter Access Mechanisms
* Ports as a first class construct
* Port size deduction (e.g. the copy block doesn't need to specify sizeof(float))
* Multiple implementations per block with a common interface

==Main Features==

=== Modularity ===

GNU Radio 3.x uses a fixed runtime that is intended to support operation on GPP-only platforms. The scheduler, which uses one thread per block (TPB), has been generally effective, but it is not suitable for all applications. Rather than try to solve the problem for every potential user, GR 4.0 will provide a modular architecture for the major runtime components so that application-specific versions can be used when appropriate.

The currently proposed modular components are:

* Scheduler
* Runtime
* Custom Buffers

In addition to allowing custom schedulers, the default CPU scheduler that replaces TPB has some major upgrades:

* Messaging-based framework built on the single actor model
** Each thread has a mailbox through which it is notified that data is available to read or space is available to write (see the sketch after this list)
* Messages and streams are treated the same
* Ability to place multiple blocks in one thread
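As a rough illustration of the single-actor idea (each worker thread sleeps on its mailbox until it is notified), here is a minimal sketch; all of the names in it (mailbox, scheduler_message, etc.) are made up for the example and are not the GR 4.0 scheduler API.

<syntaxhighlight lang="cpp">
// Hypothetical sketch of a per-thread mailbox in a single-actor style scheduler.
// None of these names come from GR 4.0; they only illustrate the pattern.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <variant>

struct data_available {};  // upstream produced items
struct space_available {}; // downstream freed output space
struct stop {};            // shut the worker down
using scheduler_message = std::variant<data_available, space_available, stop>;

class mailbox {
    std::queue<scheduler_message> q_;
    std::mutex m_;
    std::condition_variable cv_;

public:
    void push(scheduler_message msg)
    {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    scheduler_message pop()
    {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        auto msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
};

int main()
{
    mailbox mb;
    std::thread worker([&] {
        for (;;) {
            auto msg = mb.pop(); // sleep until notified
            if (std::holds_alternative<stop>(msg))
                return;
            // a real scheduler would now call work() on the blocks owned by
            // this thread while input data and output space are available
            std::cout << "woken up, would call work()\n";
        }
    });
    mb.push(data_available{});
    mb.push(stop{});
    worker.join();
}
</syntaxhighlight>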

=== Heterogeneous Architectures ===

GNU Radio 3.10 introduced a Custom Buffers feature for streamlined data movement to and from hardware accelerators. GR 4.0 seeks to extend this capability by not being constrained by the GR 3.x API, which allows more flexible custom buffers to be specified rather than being tied to the block. For instance, a block might have a CUDA implementation that assumes the data passed to the work() method is already in GPU memory. Depending on the platform, this could be handled more effectively with the data in device memory, pinned memory, or managed memory. By separating the buffer abstraction from the block, one block implementation can be used on different platforms.
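To illustrate the separation, here is a heavily simplified sketch of a buffer interface that a block could consume without caring where the memory lives; the interface and class names are assumptions for the example, and the CUDA variants are only indicated in a comment.

<syntaxhighlight lang="cpp">
// Hypothetical sketch: the block only sees a buffer interface, so the same
// block implementation can run over host, pinned, or device memory depending
// on which buffer the runtime attaches to the edge.
#include <cstddef>
#include <memory>
#include <vector>

struct buffer {
    virtual ~buffer() = default;
    virtual void* write_ptr() = 0;             // where the upstream block writes
    virtual const void* read_ptr() const = 0;  // where the downstream block reads
    virtual std::size_t size_bytes() const = 0;
};

// Plain host memory implementation
class host_buffer : public buffer {
    std::vector<unsigned char> mem_;

public:
    explicit host_buffer(std::size_t n) : mem_(n) {}
    void* write_ptr() override { return mem_.data(); }
    const void* read_ptr() const override { return mem_.data(); }
    std::size_t size_bytes() const override { return mem_.size(); }
};

// A CUDA-aware variant would allocate with cudaMalloc / cudaMallocHost /
// cudaMallocManaged instead while exposing the same interface, so the block's
// work() code would not change across platforms.

std::unique_ptr<buffer> make_edge_buffer(std::size_t n)
{
    return std::make_unique<host_buffer>(n); // the runtime picks per platform
}

int main()
{
    auto buf = make_edge_buffer(4096);
    return buf->size_bytes() == 4096 ? 0 : 1;
}
</syntaxhighlight>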

Scheduler and runtime modularity is also intended to be useful for heterogeneous architectures. For instance, consider a multi-GPU server. The current CPU scheduler with GPU custom buffers can handle a single GPU effectively, but probably can't adequately utilize multi-GPU resources without a custom scheduling component.

=== Distributed Architectures ===

Sometimes it is useful to run a flowgraph across multiple host processors. One example could be a distributed DSP problem where channels of filtered data are sent to different machines for computationally intensive signal processing. This can currently be done manually in GR 3.x by using the ZMQ or networking blocks and setting up orchestration scripts to control the flow between flowgraphs running on different machines.

The goal for 4.0 is to integrate this behavior by means of a modular runtime that can automatically handle the serialization and configuration of graph edges that cross host boundaries.

There are a few main components to this feature:

# Serialization of stream and message data
# RPC control of the runtime
# A custom runtime that can integrate with orchestration tools like Kubernetes


=== Streamlined Developer Experience ===

See [https://github.com/gnuradio/greps/blob/main/grep-0021-YAML%20driven%20block%20implementation.md] for more details.

The goal of this feature is to make the process of creating and maintaining blocks less painful by:

* Getting rid of boilerplate through code generation
* Organizing the code files in one folder
* Getting as much "for free" as possible when making a block

For instance, as of GR 3.10, if you want to add a parameter to the constructor of a block, you have to:

* Add it to the public header
* Update the impl header
* Update the impl.cc file
* Update the GRC yaml
* Update the Python bindings
* Update the documentation (either the wiki or Doxygen or both)

This is a lot of effort for a minimal change, so the idea here is to have a top-level .yml file that drives the generation of all the boilerplate. In general, all you as a developer should need to worry about is the work() function.
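As an illustration of how little hand-written code that could leave, here is a minimal sketch of the one piece a developer might still write for a multiply_const-style block; the work_io structure and return type are placeholders for the generated scaffolding, not the actual GR 4.0 API.

<syntaxhighlight lang="cpp">
// Hypothetical sketch: everything except the work function would be generated
// from the block's yaml. The types below stand in for that scaffolding.
#include <cstddef>
#include <vector>

template <class T>
struct work_io {                 // placeholder for generated input/output views
    const T* in;
    T* out;
    std::size_t n_items;
};

enum class work_return_t { OK }; // placeholder return code

// Hypothetically, this is the only piece the developer writes by hand.
template <class T>
work_return_t multiply_const_work(work_io<T>& io, const T& k)
{
    for (std::size_t i = 0; i < io.n_items; i++)
        io.out[i] = io.in[i] * k;
    return work_return_t::OK;
}

int main()
{
    std::vector<float> in{ 1, 2, 3 }, out(3);
    work_io<float> io{ in.data(), out.data(), in.size() };
    multiply_const_work(io, 2.0f); // out becomes {2, 4, 6}
}
</syntaxhighlight>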

=== Improved PMT library ===

A new PMT effort is underway (led by John Sallay) that seeks to modernize the PMT API and make it more performant [https://www.youtube.com/watch?v=x4-JMsXjmFY]. This will have many benefits, including faster processing of message/PDU based flowgraphs.

GR 4.0 will not use the legacy PMT API, but will instead use the new PMT library [https://github.com/gnuradio/pmt].

=== Parameter Access Mechanisms ===

Throughout GR 3.x, the ways of changing the parameters of a block are inconsistent and/or manual. The goal is to make all of the possible access mechanisms consistent and consolidated. These include:

# Block Constructor
# Setters and Getters
# Tags
# Message Ports
# RPC

Each of these ways of changing a variable in a block must currently be implemented manually and is handled inconsistently across blocks in the library. By consolidating the access, we can also pipe changes through the scheduler while the flowgraph is running, so there is no conflict between, say, the setters and the work() function, removing the need for mutexes. Let's look at how we can bring all this together (as currently done in newsched).

First, we utilize the new PMT library to represent each "parameter" of a block as a PMT. The block class keeps these PMTs accessible by an id that is autogenerated as an enum (and mappable to/from a string) in the top-level myblock.h header. This gives us a way to store a generic object (a PMT) that is accessible from many different places.
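A rough sketch of what that autogenerated bookkeeping might look like is shown below; the enum, the map helpers, and the stand-in pmt_t type are illustrative assumptions, not the generated code itself.

<syntaxhighlight lang="cpp">
// Hypothetical sketch of generated parameter bookkeeping for a multiply_const
// block. "pmt_t" stands in for the new PMT type; all names are illustrative.
#include <cstddef>
#include <map>
#include <string>
#include <variant>

using pmt_t = std::variant<float, std::size_t, std::string>; // stand-in for a real PMT

// Autogenerated from the block yaml: one id per parameter
enum class params : int { k = 0, vlen = 1 };

// Autogenerated string <-> id mapping, e.g. for tags, message ports, and RPC
inline const std::map<std::string, params> param_ids = {
    { "k", params::k },
    { "vlen", params::vlen },
};

// The base block would hold one PMT per parameter id, accessible from the
// constructor, setters/getters, tags, message handlers, and the work function.
struct param_store {
    std::map<params, pmt_t> values;

    void set(params id, pmt_t v) { values[id] = std::move(v); }
    const pmt_t& get(params id) const { return values.at(id); }
};

int main()
{
    param_store store;
    store.set(params::k, 2.0f);
    store.set(params::vlen, std::size_t{ 1 });
}
</syntaxhighlight>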

We will use the parameters from a multiply_const block to show how parameters can be accessed. This block has 2 parameters defined in the yaml:

<syntaxhighlight lang="yaml">
parameters:
-   id: k
    label: Constant
    dtype: T
    settable: true
-   id: vlen
    label: Vec. Length
    dtype: size_t
    default: 1
</syntaxhighlight>

==== Block Constructor ====

The first change we have made to the block constructor is to lump all the parameters that will be set into a struct (which is autogenerated in the top-level header). The yaml above generates the following struct:

<syntaxhighlight lang="cpp">
struct block_args {
    T k;
    size_t vlen = 1;
};
</syntaxhighlight>

So the block constructor (and <code>make</code> factory method) can just use this struct. This prevents changes to the yaml from forcing the developer to change the constructor in several places.

It is no longer necessary in the constructor to set private member variables for simple values, since the PMTs in the base class hold the values and can be accessed from the work function.
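For illustration, constructing a block could then look something like the following; the make() signature and the designated-initializer style are assumptions for the example, not a confirmed GR 4.0 API.

<syntaxhighlight lang="cpp">
// Hypothetical usage sketch: the generated make() takes the block_args struct,
// so adding a parameter with a default to the yaml does not break existing
// call sites. Designated initializers require C++20.
#include <cstddef>
#include <memory>

// Stand-ins for the generated code of a multiply_const<float> block
struct block_args {
    float k;
    std::size_t vlen = 1;
};

struct multiply_const {
    explicit multiply_const(block_args args) : d_args(args) {}
    static std::shared_ptr<multiply_const> make(block_args args)
    {
        return std::make_shared<multiply_const>(args);
    }
    block_args d_args;
};

int main()
{
    auto a = multiply_const::make({ .k = 2.0f });            // vlen keeps its default of 1
    auto b = multiply_const::make({ .k = 2.0f, .vlen = 8 }); // override the default
}
</syntaxhighlight>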

==== Setters and Getters ====

For each parameter that is "settable" (not the default) and/or "gettable" (the default), setter and getter methods are autogenerated in the base class. These call the block methods <code>request_parameter_change</code> and <code>request_parameter_query</code> respectively, and if the scheduler is running, they trigger a callback so the change or query happens between work() calls.

The setters and getters (or the on_parameter_change callbacks) can be overridden in the block implementation to do things like updating an NCO.
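Here is a hedged sketch of how a generated setter, request_parameter_change, and an overridable on_parameter_change callback might fit together; the class layout and member names are assumptions, and the change is applied immediately rather than deferred for brevity.

<syntaxhighlight lang="cpp">
// Hypothetical sketch: a generated base class exposes set_k(), which routes
// the change through request_parameter_change; the block implementation only
// overrides the callback when it needs side effects.
#include <iostream>

struct multiply_const_base {
    virtual ~multiply_const_base() = default;

    // autogenerated setter for the "k" parameter
    void set_k(float k) { request_parameter_change(k); }

protected:
    // In the real design this would queue the change so the scheduler applies
    // it between work() calls; here we apply it immediately for illustration.
    void request_parameter_change(float k)
    {
        d_k = k;
        on_parameter_change(k);
    }
    virtual void on_parameter_change(float /*new_k*/) {}
    float d_k = 1.0f;
};

struct my_block : multiply_const_base {
    void on_parameter_change(float new_k) override
    {
        // a side effect that can't be expressed as just storing the PMT
        std::cout << "recompute derived state for k=" << new_k << "\n";
    }
};

int main()
{
    my_block b;
    b.set_k(3.0f); // goes through request_parameter_change, then the callback
}
</syntaxhighlight>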


==== Tags ====

This is not currently implemented, but a common tag should be able to trigger the update of parameters. Since parameters are PMTs, a tag that carries the name of the parameter and its new value should be able to do the job.

Any mechanism to update parameters via tags should consider the following:

* The block developer shouldn't have to implement the parameter changes in the work function
** Doing so would defeat the purpose of common access mechanisms
** Updates should happen in the scheduler prior to work() being called (see the sketch after this list)
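Since this mechanism is explicitly not implemented yet, the following is a purely hypothetical sketch of a tag that carries a parameter name and value, applied by the scheduler before work() is called; the tag layout and the pmt_t stand-in are assumptions.

<syntaxhighlight lang="cpp">
// Purely hypothetical: a stream tag whose key names a block parameter and
// whose value is the new PMT; the scheduler would apply any such tags before
// invoking work(), reusing the common request_parameter_change path.
#include <cstdint>
#include <functional>
#include <string>
#include <variant>
#include <vector>

using pmt_t = std::variant<float, std::size_t, std::string>; // stand-in for a real PMT

struct tag {
    uint64_t offset;  // absolute item index the tag is attached to
    std::string key;  // e.g. "k", matching the parameter's string id
    pmt_t value;      // the new parameter value
};

// Sketch of the scheduler-side step, not block code
void apply_param_tags(const std::vector<tag>& tags,
                      const std::function<void(const std::string&, const pmt_t&)>&
                          request_parameter_change)
{
    for (const auto& t : tags)
        request_parameter_change(t.key, t.value);
}

int main()
{
    std::vector<tag> tags{ { 100, "k", 2.0f } };
    apply_param_tags(tags, [](const std::string& /*name*/, const pmt_t& /*value*/) {
        // a real scheduler would defer the change until just before work()
    });
}
</syntaxhighlight>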

==== Message Ports ====

Each block by default (via autogenerated code) instantiates a message port named "param_update"; when it receives a PMT, it updates the associated parameter.
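Below is a sketch of what the autogenerated handler behind "param_update" might do, assuming the incoming message is a map from parameter name to new value; the message layout and all type names are assumptions for the example.

<syntaxhighlight lang="cpp">
// Hypothetical handler behind the autogenerated "param_update" message port.
// Assumes the incoming PMT is a map of {parameter name -> new value}.
#include <map>
#include <string>
#include <variant>

using pmt_t = std::variant<float, std::size_t, std::string>; // stand-in PMT
using pmt_map_t = std::map<std::string, pmt_t>;               // stand-in PMT map

struct block_stub {
    std::map<std::string, pmt_t> params; // parameter store keyed by name

    // would be registered as the port handler by the generated code
    void handle_param_update(const pmt_map_t& msg)
    {
        for (const auto& [name, value] : msg)
            request_parameter_change(name, value);
    }

    void request_parameter_change(const std::string& name, const pmt_t& value)
    {
        params[name] = value; // real code would defer this to between work() calls
    }
};

int main()
{
    block_stub b;
    b.handle_param_update({ { "k", 2.0f } });
}
</syntaxhighlight>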

==== RPC ====

Via the Python bindings, a general block_method call can invoke the method (setter or getter) on the specified block.

==Dependencies==

===New Dependencies===

=====meson/ninja=====

Meson is a powerful and user-friendly build system that uses a Python-like syntax.

Originally intended as a placeholder build system (replacing CMake) because it is easier to get things up and running quickly, it has turned out to be quite powerful and less mind-boggling. We should consider sticking with it.

=====yaml-cpp=====

yaml is used for preferences and for configuration of plugin components with a public factory method.
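For context, reading a preferences file with yaml-cpp could look like the following minimal sketch; the file name and keys are made up for the example, not an actual GR 4.0 preferences schema.

<syntaxhighlight lang="cpp">
// Minimal yaml-cpp usage sketch; "gnuradio_prefs.yml" and its keys are
// hypothetical examples. LoadFile throws if the file is missing.
#include <iostream>
#include <string>
#include <yaml-cpp/yaml.h>

int main()
{
    YAML::Node prefs = YAML::LoadFile("gnuradio_prefs.yml");

    // fall back to a default when a key is absent
    std::string scheduler =
        prefs["default_scheduler"] ? prefs["default_scheduler"].as<std::string>()
                                   : std::string("default");

    int buffer_size = prefs["buffer_size"] ? prefs["buffer_size"].as<int>() : 32768;

    std::cout << "scheduler: " << scheduler << ", buffer_size: " << buffer_size << "\n";
}
</syntaxhighlight>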

=====gtest=====

Replaces Boost.Test for C++ unit tests.
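As a point of reference, a GoogleTest unit test is as small as the following; the function under test is a stand-in, not a GR block API.

<syntaxhighlight lang="cpp">
// Minimal GoogleTest example (link against gtest_main); multiply_const_ref is
// a stand-in for whatever block behavior would actually be under test.
#include <gtest/gtest.h>
#include <vector>

static std::vector<float> multiply_const_ref(const std::vector<float>& in, float k)
{
    std::vector<float> out(in.size());
    for (size_t i = 0; i < in.size(); i++)
        out[i] = in[i] * k;
    return out;
}

TEST(MultiplyConst, BasicScaling)
{
    std::vector<float> in{ 1.0f, 2.0f, 3.0f };
    auto out = multiply_const_ref(in, 2.0f);
    ASSERT_EQ(out.size(), in.size());
    EXPECT_FLOAT_EQ(out[1], 4.0f);
}
</syntaxhighlight>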

===Removed Dependencies===

* Boost (having no Boost dependency is a hard requirement)

===Vendorized Dependencies===

The following dependencies are added as submodules (actually using Meson's wrap functionality):

* CLI11 (replaces Boost program_options)
* cppzmq
* nlohmann-json
* moodycamel
* pmtf

===Coding Standard===

Changes to the coding standard that will be applied to 4.0 and beyond.

===Wishlist===