![Drone Processing Pipeline logo]( =100x100)

# Drone Pipeline #

A processing pipeline for drone-captured plant phenomic information. This document is intended to provide high-level context on the drone pipeline and information on how to use it.

## Overview ##
--------------

The goals motivating the drone pipeline are (in no particular order):

- common processing: provide components that are reusable in multiple environments
- dynamic workflows: mix common and unique processing components to create meaningful processing pipelines
- scalable workflows: use scalable architecture as needed, in the right places, to return results faster

The drone pipeline effort is part of the larger TERRA REF project and the source code resides on the [TERRA REF]( "TerraRef GitHub repository link") GitHub site. Additional code and information can be found on the [Drone Pipeline]( "Drone Pipeline GitHub repository link") portion of our [Digital Agriculture]( "Digital Agriculture GitHub organization") GitHub site.

Below is an overview diagram of the data processing pipelines.

![Drone Processing Pipeline Overview][1]

## Processing Steps ##
----------------------

At its simplest level, the drone pipeline has the following steps:

1. Make captured and configuration data available
2. Trigger the start of processing
3. Process data until the workflow is complete
4. Make the result of the workflow available

Of course, there are many details the above steps gloss over, such as: how does one make data available, and what does it mean to make it available?

In our approach we use the [Scientific Filesystem]() based on Docker to package up the workflows and other functionality. This approach requires the data processed and produced to be local to the running Docker image. This can be done by using Docker volumes to make data available. See the [Docker volumes]() pages for more information.
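As a minimal sketch of making data available with Docker volumes, the command below mounts a local capture directory into a container read-only and a results directory read-write. The image name `example/drone-pipeline` and the `/data/input` and `/data/output` paths are illustrative assumptions, not the project's actual image or layout:

```shell
# Assemble a docker run command that makes local data available inside
# the container via volume mounts. The image name and container paths
# below are illustrative, not the project's actual values.
INPUT_DIR="$(pwd)/captures"
OUTPUT_DIR="$(pwd)/results"

# -v HOST:CONTAINER[:ro] bind-mounts a host directory into the container;
# :ro makes the capture data read-only so the workflow cannot modify it.
DOCKER_CMD="docker run --rm -v ${INPUT_DIR}:/data/input:ro -v ${OUTPUT_DIR}:/data/output example/drone-pipeline:latest"

echo "$DOCKER_CMD"
```

Because the mounts are plain bind mounts, any results the workflow writes under `/data/output` appear in the local `results` directory after the container exits.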
## Transformers ##
------------------

Transformers are worker processes that perform specific units of work in our workflows. There are many alternative names for transformers, such as "nodes". We use the term transformers to indicate their specific [conceptual basis]() and that they are used in our workflows.

All workflows consist of one or more transformers. Transformers work in sequence, and possibly together, to complete a workflow. Most transformers have different input requirements that need to be satisfied before they can be successfully run.

## Workflows ##
---------------

Information on [workflows]() can be found here, on this site.

A workflow defines the order of transformer events. Workflows can have implicit or explicit components. An implicit workflow component is one or more transformers that are automatically triggered by the right conditions, an uploaded file for example. An explicit workflow component is one that is configured to be started after a separate component instance has completed its work.

## Future Work ##
-----------------

There is a [roadmap on Zenhub]() for this pipeline's planned development.

## Acknowledgements ##
----------------------

Many people have put a lot of work into making this project happen. Our [Acknowledgement]() page lists people and projects that have made contributions. If you feel that you have been left out of our Acknowledgements, or if you want to be removed, [let us know]()!

[1]:
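The explicit workflow sequencing described in the Workflows section above — each transformer starting only after the previous component has completed its work — can be sketched in shell with a conditional chain. The transformer names here (`orthomosaic`, `plot_clip`, `canopy_cover`) are purely illustrative assumptions, and the helper function stands in for a real container invocation:

```shell
# Sketch of an explicit workflow component chain: each transformer runs
# only if the previous one exited successfully. Transformer names are
# illustrative; in the real pipeline each call would launch a container,
# e.g. "docker run --rm -v $PWD/data:/data <image> run <name>".
run_transformer() {
  echo "running transformer: $1"
}

run_transformer orthomosaic \
  && run_transformer plot_clip \
  && run_transformer canopy_cover
```

The `&&` chaining means a failed transformer short-circuits the workflow, which matches the "started after a separate component instance has completed its work" behavior; an implicit component would instead be triggered by an event such as a file upload.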