This OSF project accompanies the SIGMOD 2020 paper "Database Benchmarking for Supporting Real-Time Interactive Querying of Large Multi-Dimensional Data".

# Files

This project contains:

* the full benchmark codebase (workflows, experiment scripts and code, and results analysis code);
* the input datasets for the 1M and 10M dataset cases (unfortunately, the 100M files were too large to include on any available anonymous platform);
* a technical report version of the submission with more details (e.g., all task prompts for the user study, experiment results for the laptop environment setup, etc.).

# Wiki table of contents

These wiki pages explain how to set up the Docker image we created for running our benchmark experiments, and how to execute our experiments using our codebase. Below is the table of contents.

## docker_setup

This wiki page explains how to set up the Docker image we created to facilitate replication of our experiments.

## running_the_benchmark

Given a running Docker setup, this wiki page explains how to run our experiment code.

## GitHub repository

The code needed to run our benchmark is also available through our [GitHub repository](https://github.com/leibatt/crossfilter-benchmark-public). However, due to file size constraints, our datasets and Docker environment are only available on OSF.