Publication of site-specific and raw data will continue through June 2020. All field data for public use went through the following creation steps:

1. Margo Gustina, Hope Decker, and Eli Guinnee recorded interviews with library-involved people and general community residents using Sony UX560 Digital Voice Recorders.
2. Raw audio was uploaded over wireless internet connections in field locations to the secure servers of the Trint automated transcription service.
3. Each interview transcript was reviewed for errors and edited by hand by Margo Gustina, Hope Decker, and Eli Guinnee. More errors occurred where interviewees had accents the software was not equipped to handle, and where the recording was interrupted by, or co-occurred with, environmental noise (e.g., sudden construction, other human noise, and background conversation).
4. Transcripts were exported from Trint as MS Word documents and distributed for coding to research team members not involved in that interview.
5. Margo, Hope, and Eli coded transcripts using MS Word, MS Excel, and Atlas.ti.
6. Margo Gustina imported all transcripts into a desktop version of Atlas.ti 8 qualitative data analysis software and transferred all externally created codes into the relevant transcripts by hand.
7. Atlas.ti was used to organize and analyze all project assets and to produce every report displayed in the Field Data component.
8. Reports were read for information that would make the speaker's identity readily apparent. Replacement text in brackets ([]) signifies that the text was not spoken. Where deemed appropriate, three dashes (---) were used rather than descriptive replacement text.

For additional information about the field research methodology, ethics protocols, and inter-rater reliability steps used, see the methodology components associated with this project.
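The bracketed-replacement convention from the final step above can be sketched in a few lines. This is an illustration only, not the project's actual tooling (the anonymization was done by reading reports by hand); the phrases, replacements, and sample quote below are hypothetical.

```python
def redact(text: str, replacements: dict) -> str:
    """Replace identifying phrases with bracketed descriptive text.

    Bracketed text ([...]) signals that the words were not spoken;
    a value of None maps the phrase to "---", used where no
    descriptive replacement is appropriate.
    """
    for phrase, replacement in replacements.items():
        label = "---" if replacement is None else f"[{replacement}]"
        text = text.replace(phrase, label)
    return text

# Hypothetical example quote and replacements:
quote = "I talked to Jane Smith at the Smithville branch."
print(redact(quote, {"Jane Smith": "a library director", "Smithville": None}))
# -> I talked to [a library director] at the --- branch.
```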
The compilation of all anonymized coded quotes into a single searchable MS Excel spreadsheet was created for fast use by public users who may not be entirely comfortable manipulating tables or searching MS Excel workbooks. It went through the following additional handling steps before publication:

1. Reports on two-element co-occurrence (e.g., Economic Development & 1.Contribution) were run and exported as MS Excel workbooks.
2. All workbook contents were copied and pasted into a single long running sheet.
3. Data in the Atlas.ti-generated "Codes" column was expanded into three separate columns ("Dimension Category", "Codes", and "Description") to allow for data filtering. This was done by hand.
4. The more than 3,000 quotes were re-examined for unintentional identifiers.
5. Using MS Excel's "Format as a Table" function, filter buttons were added to all column data.
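The column expansion in step 3 was done by hand; purely as an illustration, a script along these lines could automate it if the combined field followed a consistent delimiter. The " - " delimiter and the sample values below are assumptions for the sketch, not the project's actual Atlas.ti export format.

```python
def split_codes(rows: list, delimiter: str = " - ") -> list:
    """Expand a combined 'Codes' field into three filterable columns.

    Assumes (hypothetically) each value reads
    'Dimension Category - Code - Description'. Entries with fewer
    parts are padded with empty strings rather than dropped.
    """
    out = []
    for row in rows:
        parts = row["Codes"].split(delimiter, 2)
        parts += [""] * (3 - len(parts))  # pad short entries to 3 columns
        out.append({
            "Dimension Category": parts[0],
            "Codes": parts[1],
            "Description": parts[2],
        })
    return out

# Hypothetical combined value:
rows = [{"Codes": "Economic Development - 1.Contribution - libraries add value"}]
print(split_codes(rows))
```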