<p><img alt="Drone Processing Pipeline logo" src="https://github.com/az-digitalag/Drone-Processing-Pipeline/raw/07b1edc34a1faea501c80f583beb07f9d6b290bb/resources/drone-pipeline.png" width="100" height="100"></p> <h1>Drone Pipeline</h1> <p>A processing pipeline for drone-captured plant phenomic information.</p> <p>This document is intended to provide high-level context on the drone pipeline and information on how to use it.</p> <h2>Overview</h2> <hr> <p>The goals motivating the drone pipeline are (in no particular order):</p> <ul> <li>common processing: provide components that are reusable in multiple environments</li> <li>dynamic workflows: mix common and unique processing components to create meaningful processing pipelines</li> <li>scalable workflows: use scalable architecture as needed, in the right places, to return results faster</li> </ul> <p>The drone pipeline effort is part of the larger TERRA REF project, and the source code resides on the <a href="https://github.com/terraref/drone-pipeline" rel="nofollow" title="TerraRef GitHub repository link">TERRA REF</a> GitHub site. 
Additional code and information can be found on the <a href="https://github.com/az-digitalag/Drone-Processing-Pipeline" rel="nofollow" title="Drone Pipeline GitHub repository link">Drone Pipeline</a> portion of our <a href="https://github.com/az-digitalag" rel="nofollow" title="Digital Agriculture GitHub organization">Digital Agriculture</a> GitHub site.</p> <p>Below is an overview diagram of the data processing pipelines.</p> <p><img alt="Overview diagram of the data processing pipelines" src="https://mfr.osf.io/export?url=https://osf.io/w9n74/?action=download&mode=render&direct&public_file=True&initialWidth=684&childId=mfrIframe&parentTitle=OSF%20%7C%20CIMMYT%20Drone%20Pipeline.png&parentUrl=https://osf.io/w9n74/&format=2400x2400.jpeg"></p> <h2>Processing Steps</h2> <hr> <p>At its simplest level, the drone pipeline has the following steps:</p> <ol> <li>Make captured and configuration data available</li> <li>Trigger the start of processing</li> <li>Process data until the workflow is complete</li> <li>Make the result of the workflow available</li> </ol> <p>Of course, there are many details the above steps gloss over. Details such as: how does one make data available, and what does it mean to make it available? [TBD]</p> <h3>Clowder</h3> <p>Before beginning processing, it's necessary to have a space created to contain the results of the processing. 
Extractors running with Clowder as their data repository create collections and datasets under an identified space.</p> <p>The name or ID of this Clowder space is used in the <a href="https://osf.io/xdkcy/wiki/Configuration%20YAML/" rel="nofollow">Configuration YAML</a> file that is placed within the set of data to be processed.</p> <h4>Create a space</h4> <p>There are two ways to create a space.</p> <p>One way is to click the Create Space link right after logging in: <img alt="Clowder Create Space link" src="https://files.osf.io/v1/resources/xdkcy/providers/osfstorage/5cc0daed20c6e00018cc4cd0?mode=render" style="width:10%"></p> <p>The other is to use the <code>you -&gt; spaces</code> menu to see your Spaces page and click on the Create button: <img alt="Clowder create button" src="https://files.osf.io/v1/resources/xdkcy/providers/osfstorage/5cc0dad6870f9f001820c87f?mode=render" style="width:10%"></p> <h2>Extractors</h2> <hr> <p>Information on specific <a href="https://osf.io/xdkcy/wiki/Extractors/" rel="nofollow">extractors</a> can be found here, on this site.</p> <p>All workflows consist of one or more containerized worker processes known as extractors. Extractors work in sequence, and possibly together, to complete a workflow. Most extractors have input requirements that must be satisfied before they can run successfully.</p> <h2>Workflows</h2> <hr> <p>Information on <a href="https://osf.io/xdkcy/wiki/Workflows/" rel="nofollow">workflows</a> can be found here, on this site.</p> <p>A workflow defines the order of extractor events. Workflows can have implicit or explicit components. An implicit workflow component is one or more extractors that are automatically triggered by the right conditions, such as an uploaded file. 
An explicit component is one that is configured to be started after a separate component instance has completed its work.</p> <h2>Acknowledgements</h2> <hr> <p>Many people have put a lot of work into making this project happen. Our <a href="https://osf.io/xdkcy/wiki/Acknowledgements/" rel="nofollow">Acknowledgement</a> page lists people and projects that have made contributions. If you feel that you have been left out of our Acknowledgements, or if you want to be removed, <a href="mailto:schnaufer@email.arizona.edu">let us know</a>!</p>
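<p>To make the Configuration YAML mentioned earlier more concrete, a file of that kind might look like the sketch below. This is an illustration only: every key name here is hypothetical, and the authoritative schema is described on the Configuration YAML wiki page linked above.</p>

```yaml
# Hypothetical sketch of a drone-pipeline configuration file.
# Key names are illustrative, not the actual schema; consult the
# Configuration YAML wiki page for the real field names.
clowder_space: "Season 9 Drone Flights"   # name or ID of the target Clowder space
experiment: "CIMMYT maize trial"          # label describing the captured data
capture_date: "2019-05-01"                # when the drone flight took place
```

<p>The file travels alongside the captured data so that, when processing is triggered, the extractors know which Clowder space should receive the resulting collections and datasets.</p>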