**The 3D-LEX Project**
----------------------
Dear user,
We are currently preparing and uploading the 3D-LEX dataset. If the data you seek is not yet available, please revisit our site next week or contact the corresponding author at: oline.ranum@student.uva.nl.
This directory contains the codebase and dataset for [the 3D-LEX Project][1].
[1]: https://www.sign-lang.uni-hamburg.de/lrec/pub/24030.html
In this project, we introduce the 3D-LEX dataset, featuring high-resolution, three-dimensional lexicons of prevalent signs in American Sign Language and Sign Language of the Netherlands. The collection includes 1,000 signs from each language, encompassing handshapes, body poses, and facial features captured with three distinct motion capture techniques.

Our data acquisition method is highly efficient, achieving an average capture time of 10 seconds per sign. This includes the time for sign demonstration, performance, and the automated triggering and storage of the recording.

3D-LEX supports in-depth analysis of sign features and facilitates the generation of 2D projections from any viewpoint. Additionally, the collection has been aligned with existing sign language benchmarks and resources, supporting research into how 3D data can be leveraged to strengthen existing video benchmarks.
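As a rough illustration of what a viewpoint-dependent 2D projection involves, the sketch below rotates a set of 3D keypoints with a virtual camera and drops the depth axis. The function name, keypoint count, and the simple orthographic camera model are illustrative assumptions only; they do not reflect this project's actual API or data format.

```python
import numpy as np

def project_to_2d(keypoints_3d: np.ndarray,
                  azimuth_deg: float = 0.0,
                  elevation_deg: float = 0.0) -> np.ndarray:
    """Orthographically project (N, 3) keypoints onto a 2D plane after
    rotating the virtual camera by the given azimuth and elevation."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)

    # Rotation about the vertical (y) axis controls the azimuth.
    r_y = np.array([[np.cos(az), 0.0, np.sin(az)],
                    [0.0,        1.0, 0.0       ],
                    [-np.sin(az), 0.0, np.cos(az)]])
    # Rotation about the horizontal (x) axis controls the elevation.
    r_x = np.array([[1.0, 0.0,        0.0       ],
                    [0.0, np.cos(el), -np.sin(el)],
                    [0.0, np.sin(el),  np.cos(el)]])

    rotated = keypoints_3d @ (r_x @ r_y).T
    return rotated[:, :2]  # drop the depth axis to obtain the 2D view


# Example: a frontal view and a 45-degree side view of the same pose.
pose = np.random.rand(67, 3)            # placeholder for one captured frame
front_view = project_to_2d(pose)
side_view = project_to_2d(pose, azimuth_deg=45.0)
```

A perspective camera or the dataset's own calibration could be substituted for the orthographic projection above; the point is only that any viewpoint can be rendered from the same underlying 3D capture.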