<h1>README</h1> <p>for the DART Project component: Whitmire, Amanda L., Jake Carlson, Brian Westra, Patricia Hswe, and Susan Parham. 2016. “Data from: Using Data Management Plans to Explore Variability in Research Data Management Practices across Domains.” Open Science Framework. August 29. <a href="http://osf.io/ewmsy" rel="nofollow">osf.io/ewmsy</a>.</p><h2>Project Name</h2> <p>“Analysis of data management plans as a means to inform and empower academic librarians in providing research data support.”</p> <h2>Project Short Name</h2> <p>Data Management Plans as a Research Tool: The DART Project</p> <h2>Funding</h2> <p>Institute of Museum and Library Services grant number LG-07-13-0328</p> <h2>Lead PI name and contact information</h2> <p>Amanda L. Whitmire, Head Librarian & Bibliographer, Harold A. Miller Library & Assistant to the Director, Hopkins Marine Station of Stanford University | <a href="http://orcid.org/0000-0003-2429-8879" rel="nofollow">ORCID</a></p> <h2>Co-Principal Investigators</h2> <ul> <li><strong>Jake Carlson</strong>, Research Data Services Manager, University of Michigan Library | <a href="http://orcid.org/0000-0003-2733-0969" rel="nofollow">ORCID</a></li> <li><strong>Patricia M. Hswe</strong>, Program Officer in Scholarly Communications, The Andrew W. Mellon Foundation | <a href="http://orcid.org/0000-0003-0013-2655" rel="nofollow">ORCID</a></li> <li><strong>Susan Wells Parham</strong>, Head, Scholarly Communication & Digital Curation, Georgia Institute of Technology Library</li> <li><strong>Brian Westra</strong>, Lorry I. Lokey Science Data Services Librarian, University of Oregon | <a href="http://orcid.org/0000-0003-0898-078X" rel="nofollow">ORCID</a></li> <li><strong>Lizzy Rolando</strong>, now with MailChimp</li> </ul> <h2>Project Description</h2> <p>This National Leadership Grants for Libraries Demonstration Project proposal will facilitate a multi-university study of faculty data management plans (DMPs).
The primary deliverables of this project are an analytic rubric that standardizes the assessment of the content and quality of a DMP, and a multi-institutional comparative analysis of DMPs that demonstrates the rubric, as a means to inform targeted expansion or development of research data services at academic libraries. Our rubric will give librarians a means to use DMPs as a research tool that can inform decisions about which research data services they should provide. This tool will enable librarians who may have no direct experience in applied research or data management to become better informed about researchers’ data practices and how library services can support them. An analysis of DMPs can identify common gaps and weaknesses in faculty understanding of data management principles and practices, and identify barriers for faculty in applying best practices. These findings could highlight areas where libraries may be able to provide services and/or training. A structured review of DMPs would also identify the range and types of obligations for library resources and services that are listed in DMPs, thereby assisting libraries in targeting or expanding the most critical support services and making more efficient use of limited resources. The overall goals are to enable academic librarians to offer support in the area of DMP consultation, an important service area, and to facilitate their institution’s development or improvement of research data services as a whole.</p> <h2>Dataset Description</h2> <p>This dataset contains data from assessments of 500 National Science Foundation (NSF) data management plans (DMPs). The assessment rubric developed for this project was translated into a Qualtrics survey for the purpose of data collection.
The data are:</p> <ol> <li>Assessment scores across the range of performance criteria established in the rubric (e.g., "Describes what types of data will be captured, created or collected" rated as "Complete/detailed", "Addressed issue, but incomplete", or "Did not address issue").</li> <li>Categorical data that we gathered to add context to the assessment data (e.g., "Where did they say they would archive the data?").</li> </ol> <p>A description of each file follows.</p> <h3>File names and description</h3> <ol> <li> <p>DART_Data_ResponseText_Cleaned.txt (raw/cleaned)</p> <p>DMP assessment data from the Qualtrics survey, with responses in plain text (see example below).</p> </li> <li> <p>DART_Data_ResponseCoded_Cleaned.txt (raw/cleaned)</p> <p>DMP assessment data from the Qualtrics survey, with responses in coded form. The answer codes are shown in "DART_SurveyRubric.docx" as the number in parentheses after each question response.</p> <p>For example, in Question 4 as shown in "DART_SurveyRubric.docx":</p> </li> </ol> <blockquote> <p>Q4 NSF Division - ACI - Advanced Cyberinfrastructure (CISE) (1) - AGS - Atmospheric & Geospace Sciences (GEO) (2) - AST - Astronomical Sciences (MPS) (3)</p> </blockquote> <p>the file "DART_Data_ResponseText_Cleaned.txt" would have</p> <blockquote> <ul> <li>ACI - Advanced Cyberinfrastructure (CISE)</li> <li>AGS - Atmospheric & Geospace Sciences (GEO)</li> <li>AST - Astronomical Sciences (MPS)</li> </ul> </blockquote> <p>while the file "DART_Data_ResponseCoded_Cleaned.txt" would have</p> <blockquote> <ul> <li>1</li> <li>2</li> <li>3</li> </ul> </blockquote> <ol> <li> <p>DART_SurveyRubric.docx (survey instrument)</p> <p>The complete survey text, with codes for responses included (as described above).</p> </li> <li>DART_DataSummary_ResponseText_Cleaned.txt (raw/cleaned): Qualtrics gives users the option to download a summarized version of the survey data. This is that file.
In many cases, it was much easier to look at the summarized data rather than wade through the full exported dataset. #protip</li> </ol> <p>--- stopping for now --- 2016-10-04</p> <p>Definitions of acronyms, site abbreviations, or other project-specific designations used in the data file names or documentation files, if applicable.</p> <p>Description of the parameters/variables (column headings in the data files) and units of measure for each parameter/variable, including special codes, variable classes, GIS coverage attributes, etc. used in the data files themselves, including codes for missing data values, if applicable:</p> <ul> <li>column headings for any tabular data</li> <li>the units of measurement used</li> <li>what symbols are used to record missing data</li> <li>any specialized formats or abbreviations used</li> <li>any additional related data collected that was not included in the current data package</li> </ul> <ol> <li> <p>Uncertainty, precision, and accuracy of measurements, if known</p> </li> <li> <p>Environmental or experimental conditions, if appropriate (e.g., cloud cover, atmospheric influences, etc.)</p> </li> <li> <p>Method(s) for collecting the data, as well as the methods for processing data, if data other than raw data are being contributed</p> </li> <li> <p>Standards or calibrations that were used</p> </li> <li> <p>Specialized software (including version number) used to produce, prepare, render, compress, analyze and/or needed to read the dataset, if applicable</p> </li> <li> <p>Quality assurance and quality control that have been applied, if applicable</p> </li> <li> <p>Known problems that limit the data's use or other caveats (e.g., uncertainty, sampling problems, blanks, QC samples)</p> </li> <li> <p>Date dataset was last modified</p> </li> <li> <p>Relationships with any ancillary datasets outside of this dataset, if applicable</p> </li> </ol> <p>Optional information:</p> <ol> <li> <p>Resources, such as books, articles, serials, and/or data files, if any, that served as the source of this data collection</p> </li> <li> <p>Methodology for sample treatment and/or analysis, if applicable</p> </li> <li> <p>Example records for each data file (or file type)</p> </li> <li> <p>File names of other documentation that are being submitted along with the data and that would be helpful to a secondary data user, such as pertinent field notes or other companion files, publications, etc.</p> </li> </ol> <p>NOTE: Use standardized formats and follow scientific conventions in your Cataloging Metadata Form, ReadMe file, and in your data files:</p> <ul> <li>Dates: we recommend that you follow the W3C/ISO 8601 date standard, which specifies the international standard notation of YYYYMMDD. EXAMPLE: 20090824 is 24 August 2009. Times can be appended as YYYYMMDDThhmmss; for example, 3 seconds after 1:05 pm on March 18, 2002 is 20020318T130503. Punctuation can be added to improve readability: 2009-08-24 or 2002-03-18T13:05:03.</li> <li>Taxonomic, geospatial, and geologic names: we recommend using terms from standardized taxonomies and vocabularies. EXAMPLE: Torellia vallonia (the scientific name for the acorn hairysnail); example from ITIS (Integrated Taxonomic Information System). EXAMPLE: Yardley Village (inhabited place); example from TGN (Getty Thesaurus of Geographic Names).</li> </ul>
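<p>A minimal Python sketch of the ISO 8601 convention recommended in the note above, converting the compact YYYYMMDDThhmmss notation to the punctuated extended form (the timestamp is the example value from the note; this is an illustration, not part of the dataset):</p>

```python
from datetime import datetime

# Parse the compact ISO 8601 basic form (YYYYMMDDThhmmss)...
compact = "20020318T130503"  # 3 seconds after 1:05 pm on March 18, 2002
parsed = datetime.strptime(compact, "%Y%m%dT%H%M%S")

# ...and re-emit it in the punctuated extended form for readability.
print(parsed.isoformat())         # 2002-03-18T13:05:03
print(parsed.date().isoformat())  # 2002-03-18
```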
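<p>For readers working with the coded export described in the file list above, a small illustrative Python sketch of how responses in "DART_Data_ResponseCoded_Cleaned.txt" line up with the plain-text responses in "DART_Data_ResponseText_Cleaned.txt". The <code>Q4_CODES</code> mapping is transcribed from the Question 4 example above; the helper name <code>decode</code> is ours, not part of the dataset:</p>

```python
# Hypothetical codebook for Question 4 (NSF Division), transcribed from
# "DART_SurveyRubric.docx" as excerpted in this README.
Q4_CODES = {
    "1": "ACI - Advanced Cyberinfrastructure (CISE)",
    "2": "AGS - Atmospheric & Geospace Sciences (GEO)",
    "3": "AST - Astronomical Sciences (MPS)",
}

def decode(code: str, codebook: dict) -> str:
    """Return the text response for a numeric code, or the code itself if unknown."""
    return codebook.get(code, code)

print(decode("2", Q4_CODES))  # AGS - Atmospheric & Geospace Sciences (GEO)
```

<p>The same pattern extends to any question: build one codebook per question from the parenthesized numbers in "DART_SurveyRubric.docx".</p>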