# Creating a SHARE Harvester with scrAPI
There are two ways you can create a harvester for the SHARE project.
If you're creating a harvester for an OAI-PMH feed, see the section on creating an OAI-PMH harvester using classes in the scrapi library.
If you're creating a harvester for a custom data format, see the section on creating a custom harvester.
For more information, see the wiki section on [scrAPI](https://osf.io/wur56/wiki/scrAPI/).
---
## Making a class based harvester
If you're creating a harvester for a service that uses OAI-PMH, you can create a harvester using scrapi classes that will harvest data and send normalized data through the SHARE pipeline.
You can automate this process by using this [auto oai tool](https://github.com/erinspace/autooai).
Your harvester will live in the [scrapi harvesters directory](https://github.com/CenterForOpenScience/scrapi/tree/develop/scrapi/harvesters) along with the other harvesters.
This class-based harvester will make calls to the specified OAI-PMH service using the ListRecords verb and the oai_dc metadata prefix, with a date range of one day in the past.
You can find the base class definition for the OAI-PMH class in the scrapi code, [available here](https://github.com/CenterForOpenScience/scrapi/blob/develop/scrapi/base/__init__.py).
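For example, a harvester whose base URL is `http://calhoun.nps.edu/oai/request` (the provider used in the example below) would issue a request along these lines, with the `from` and `until` dates filled in at run time (the dates here are just placeholders):
```
http://calhoun.nps.edu/oai/request?verb=ListRecords&metadataPrefix=oai_dc&from=2015-02-01&until=2015-02-02
```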
To create a class-based harvester, follow these steps:
1. Fork the [scrapi](https://github.com/CenterForOpenScience/scrapi/) repo, and create your own harvester in a folder with the same name under the [scrapi/harvesters directory](https://github.com/CenterForOpenScience/scrapi/tree/develop/scrapi/harvesters).
- See the [GitHub help page on forking](https://help.github.com/articles/fork-a-repo) for detailed instructions.
2. Follow the setup instructions in the [scrapi](https://github.com/CenterForOpenScience/scrapi/) repo README, and set your local settings to the baseline defaults:
    `cp scrapi/settings/local-dist.py scrapi/settings/local.py`
3. To see a harvester run:
    `invoke harvester [harvester name here]`
    If you'd like to run a harvester over a certain number of days back, run:
    `invoke harvester [harvester name here] --days=[number of days back]`
    See the [list of provider names](https://github.com/CenterForOpenScience/SHARE/wiki/Provider-Names) or the [names of the current providers](https://github.com/CenterForOpenScience/scrapi/tree/develop/scrapi/harvesters) in scrapi for examples of harvesters you can invoke.
4. Within your new harvester folder, create a file named `yourharvester.py` where you will create an instance of the harvester class.
Your `yourharvester.py` file will have three main parts:
- The imports section at the top, where you'll import the base OAI harvester class
- The schema transformer, which defines each main element and where in the source API that item can be found.
- Your instance of the harvester class, with some key areas defined:
    + the name of your provider (as it will show up in the source field). *Note: this is the official name of your provider and the name you will use to invoke it later when running it! It has to be unique and must not collide with any other provider already in the system.*
    + the base URL where you will make your OAI requests. This should include everything before the ? in the request URL.
    + a property list of elements that don't fit into the set schema - see the "Property List" section below for more.
    + a list of "approved sets" - if your provider has certain items with a particular "setSpec" entry that should make their way into the notification service, list the approved "setSpec" values here. Only entries whose setSpec appears in the approved list will be normalized and sent to the notification service.
    + timeout - the time in seconds to wait between subsequent requests when gathering resources.
    + timezone_granularity - how much time detail to include in the OAI request. Setting timezone_granularity to True will add 'T00:00:00Z' to the date request.
Here's an example of what your `yourharvester.py` file might look like:
```python
"""
A harvester for Calhoun: The NPS Institutional Archive for the SHARE project

An example API call: http://calhoun.nps.edu/oai/request?verb=ListRecords&metadataPrefix=oai_dc
"""
from __future__ import unicode_literals

from scrapi.base import OAIHarvester


class CalhounHarvester(OAIHarvester):
    short_name = 'calhoun'
    long_name = 'Calhoun: Institutional Archive of the Naval Postgraduate School'
    url = 'http://calhoun.nps.edu'

    # Skip SSL certificate verification when making requests to this provider
    verify = False

    # Everything before the '?' in the request URL
    base_url = 'http://calhoun.nps.edu/oai/request'

    # Elements that don't fit into the base schema
    property_list = [
        'type', 'source', 'format', 'setSpec', 'date', 'rights'
    ]

    # Only records with these setSpec values will be normalized
    approved_sets = [
        'com_10945_7075', 'com_10945_6', 'col_10945_17'
    ]
```
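With this file in place, you can run your new harvester locally with `invoke harvester calhoun`, or harvest further back with `invoke harvester calhoun --days=30`.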
5. Add your provider's favicon to the [favicon folder](https://github.com/CenterForOpenScience/scrapi/tree/develop/img/favicons)
6. From the root directory, run `invoke provider_map`
7. Test your harvester locally by running `invoke harvester harvester_short_name_here`
    - This should be the (unique) name that you gave your harvester in the "short_name" variable when creating it.
8. Create a pull request to add your new harvester to the [scrapi](https://github.com/CenterForOpenScience/scrapi/) repo
### Creating an OAI-PMH Harvester Property List
Property lists are created from the elements that don't match the [base schema](https://osf.io/wur56/wiki/Schema/), along with elements that the base schema will save only one of. For example, many sources have more than one description or identifier field. To make sure this metadata is still captured, include those fields in the property list so that all elements show up in the normalized data. This way, the first description or identifier is saved in the primary schema field, and the rest are included in the otherProperties field.
Also make sure to include items from the record header that might not fit into our standard schema, such as 'setSpec' for OAI harvesters.
Here's an example of a property list:
```python
property_list = ['date', 'identifier', 'setSpec', 'description']
```
If you're creating an OAI-PMH harvester, the [auto oai tool](https://github.com/erinspace/autooai) will automatically create a property list from the items that don't match the base schema.
## Making a Custom Harvester
Many harvesters for the SHARE project are written for providers with an OAI-PMH endpoint and can be written very quickly by creating an instance of the OAI harvester class. However, many other providers have a custom data output that requires a more customized implementation.
Here's how to create a custom harvester using tools provided within [scrapi](https://github.com/CenterForOpenScience/scrapi). For more information about scrapi, see the [GitHub repo](https://github.com/CenterForOpenScience/scrapi).
To create a harvester, first fork the [scrapi repo](https://github.com/CenterForOpenScience/scrapi). You'll add your new harvester in the [harvesters folder](https://github.com/CenterForOpenScience/scrapi/tree/develop/scrapi/harvesters).
Here's what a typical custom harvester looks like:
```python
"""
A CrossRef harvester for the SHARE project

Example API request: http://api.crossref.org/v1/works?filter=from-pub-date:2015-02-02,until-pub-date:2015-02-02&rows=1000
"""
from __future__ import unicode_literals

import json
import logging
from datetime import date, timedelta

from six.moves import xrange
from nameparser import HumanName

from scrapi import requests
from scrapi import settings
from scrapi.base import JSONHarvester
from scrapi.linter.document import RawDocument
from scrapi.base.helpers import build_properties, compose, datetime_formatter

logger = logging.getLogger(__name__)


def process_contributor(author, orcid):
    """Split a contributor's name into its parts and attach any ORCID."""
    name = HumanName(author)
    return {
        'name': author,
        'givenName': name.first,
        'additionalName': name.middle,
        'familyName': name.last,
        'sameAs': [orcid] if orcid else []
    }


def process_sponsorships(funder):
    """Build SHARE sponsorship entries from the raw funder field."""
    sponsorships = []
    for element in funder:
        sponsorship = {}
        if element.get('name'):
            sponsorship['sponsor'] = {
                'sponsorName': element['name']
            }
        if element.get('award'):
            sponsorship['award'] = {
                'awardName': ', '.join(element['award'])
            }
            if element.get('DOI'):
                sponsorship['award']['awardIdentifier'] = 'http://dx.doi.org/{}'.format(element['DOI'])
        sponsorships.append(sponsorship)
    return sponsorships


class CrossRefHarvester(JSONHarvester):
    short_name = 'crossref'
    long_name = 'CrossRef'
    url = 'http://www.crossref.org'

    DEFAULT_ENCODING = 'UTF-8'
    record_encoding = None

    @property
    def schema(self):
        return {
            'title': ('/title', lambda x: x[0] if x else ''),
            'description': ('/subtitle', lambda x: x[0] if (isinstance(x, list) and x) else x or ''),
            'providerUpdatedDateTime': ('/issued/date-parts', compose(datetime_formatter, lambda x: ' '.join([str(part) for part in x[0]]))),
            'uris': {
                'canonicalUri': '/URL'
            },
            'contributors': ('/author', compose(lambda x: [
                process_contributor(*[
                    '{} {}'.format(entry.get('given'), entry.get('family')),
                    entry.get('ORCID')
                ]) for entry in x
            ], lambda x: x or [])),
            'sponsorships': ('/funder', lambda x: process_sponsorships(x) if x else []),
            'otherProperties': build_properties(
                ('journalTitle', '/container-title'),
                ('volume', '/volume'),
                ('tags', ('/subject', '/container-title', lambda x, y: [tag.lower() for tag in (x or []) + (y or [])])),
                ('issue', '/issue'),
                ('publisher', '/publisher'),
                ('type', '/type'),
                ('ISSN', '/ISSN'),
                ('ISBN', '/ISBN'),
                ('member', '/member'),
                ('score', '/score'),
                ('issued', '/issued'),
                ('deposited', '/deposited'),
                ('indexed', '/indexed'),
                ('page', '/page'),
                ('referenceCount', '/reference-count'),
                ('updatePolicy', '/update-policy'),
                ('depositedTimestamp', '/deposited/timestamp')
            )
        }

    def harvest(self, start_date=None, end_date=None):
        start_date = start_date or date.today() - timedelta(settings.DAYS_BACK)
        end_date = end_date or date.today()
        base_url = 'http://api.crossref.org/v1/works?filter=from-pub-date:{},until-pub-date:{}&rows={{}}&offset={{}}'.format(start_date.isoformat(), end_date.isoformat())

        # Ask for zero rows first just to learn how many records are available
        total = requests.get(base_url.format('0', '0')).json()['message']['total-results']
        logger.info('{} documents to be harvested'.format(total))

        # Page through the results 1000 records at a time
        doc_list = []
        for i in xrange(0, total, 1000):
            records = requests.get(base_url.format(1000, i)).json()['message']['items']
            logger.info('Harvested {} documents'.format(i + len(records)))
            for record in records:
                doc_id = record['DOI']
                doc_list.append(RawDocument({
                    'doc': json.dumps(record),
                    'source': self.short_name,
                    'docID': doc_id,
                    'filetype': 'json'
                }))

        return doc_list
```
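As with the class-based harvesters, once this file is in place you can run it locally with `invoke harvester crossref`.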
__scrapi__
Scrapi has a few custom tools to help with making requests, transforming your provider's schema to match the SHARE schema, and linting the harvested documents to make sure the normalized results match the SHARE schema.
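For example, the CrossRef harvester above makes its API calls through scrapi's requests wrapper rather than the standard requests library; the wrapper mirrors the familiar requests API:
```python
from scrapi import requests

# Excerpt from the harvest() method above: ask for zero rows to get the total count
response = requests.get(base_url.format('0', '0'))
total = response.json()['message']['total-results']
```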
__HarvesterClass(FormatHarvester)__
Your harvester class inherits from a base harvester type, either `JSONHarvester` or `XMLHarvester`. The harvester class specifies the schema transformation that turns raw provider records into normalized SHARE documents.
__HarvesterClass Methods__
__schema()__
Returns a dictionary where the outer keys are the elements of the SHARE schema, and the values point to the equivalent entry in the target provider's output, along with any functions that should be run on the result to format it properly. Use XPath expressions for XML target schemas, and the key path itself for JSON schemas.
For more information, see the section on [schema transformers](https://osf.io/wur56/wiki/Transformers/).
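As a minimal sketch, the schema for a JSON provider might look like this (the field paths and the process_contributors helper here are illustrative, not a real provider's fields):
```python
@property
def schema(self):
    return {
        # SHARE schema element -> where it lives in the provider's records
        'title': ('/title', lambda x: x[0] if x else ''),
        'uris': {
            'canonicalUri': '/URL'
        },
        # A helper function reformats the raw author entries (see below)
        'contributors': ('/author', process_contributors),
    }
```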
__harvest()__
This function will be very similar for all custom harvesters. Request a group of records from your provider, and add each of those records to a list of RawDocuments (a type defined by scrAPI). A RawDocument consists of the raw record itself (stored in 'doc'), the 'source' (the short name of the provider), a 'docID' (a unique identifier for the document), and the 'filetype' of the provider API's output.
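For example, the CrossRef harvester above builds each of its RawDocuments like this:
```python
RawDocument({
    'doc': json.dumps(record),   # the raw record itself
    'source': self.short_name,   # the short name of the provider
    'docID': record['DOI'],      # a unique identifier for the document
    'filetype': 'json'           # the format the provider returns
})
```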
__helper functions__
Each harvester will have smaller helper functions that format the return values of the provider API. These are passed to the schema transformer to properly format the information received from the provider. These functions can include the following (see the sketch after the list):
* process_contributors()
* process_tags()
* Any other helper functions that go in the schema transformer
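As a minimal sketch, a hypothetical process_tags() helper might look like this (the exact cleanup depends on your provider's output):
```python
def process_tags(tags):
    # Normalize raw subject entries into lowercase, trimmed tag strings
    return [tag.lower().strip() for tag in tags or []]
```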
## Creating Tests
Tests are mostly auto-generated, but you do have to make a slight modification to generate a test for the first time.
Inside of `scrapi/tests/test_harvesters.py`, change the 'record_mode' on line 22 to 'once'. It should now read:
```python
with vcr.use_cassette('tests/vcr/{}.yaml'.format(harvester_name), match_on=['host'], record_mode='once'):
```
Run your test with:
`py.test tests/test_harvesters.py::test_harvester\[your_shortname_here\]`
There is a chance that your automatically created test will fail when run for the first time. If that's the case, you can create a new vcr file that will hopefully work.
Delete the old vcr file at `scrapi/tests/vcr/shortname.yaml`.
Change the date within the `freeze_time` decorator above `def test_harvester()` to a date where you know the harvester had results (don't forget to `from freezegun import freeze_time`). For example: `@freeze_time("2014-03-15")`
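A minimal sketch of where the decorator sits (the test function signature here is illustrative, not scrapi's exact one):
```python
from freezegun import freeze_time

@freeze_time("2014-03-15")
def test_harvester(harvester_name):
    # While time is frozen, date.today() inside harvest() returns 2014-03-15
    ...
```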
Inside of `scrapi/tests/test_harvesters.py`, make sure the record mode is still 'once'.
Re-run your test with `py.test tests/test_harvesters.py::test_harvester\[your_shortname_here\]`
To run a test for a specific harvester, you can also run `invoke one_test shortname`
Make sure not to commit these changes to test_harvesters.py!