This training data was used to train the Surface model and the Neural Network model of HyperBlend. For re-training, download `training_data.zip` and unzip it into HyperBlend's directory structure at `HyperBlend/leaf_measurement_sets/`. Note that the name of this directory may change in future versions of HyperBlend.

This data is used in the paper *HyperBlend leaf simulator: improvements on simulation speed, generalizability, and parameterization* (https://doi.org/10.1117/1.JRS.17.038505).

The new iterative method of training is now also included (except the training data itself). In iterative training, the old system, where the starting guess is obtained from curve fitting, is only used in the first iteration. Later iterations use surface fitting as the starting guess, and each iteration relaxes the similarity requirement for the training data to incrementally adapt to more asymmetric R and T. The goodness criterion for accepted training points is also relaxed from 1% error to 2%. The iterative training results were run with the code in commit `322cb36d0f245b4ec9e017ca01639b76142a9a70`.
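The unzipping step described above can be sketched in plain Python. This is a minimal sketch, not HyperBlend code: the archive name and the `leaf_measurement_sets` target directory come from this page, while the function name and the location of your downloaded archive are assumptions.

```python
import zipfile
from pathlib import Path


def unpack_training_data(archive: Path, hyperblend_root: Path) -> Path:
    """Extract training_data.zip into HyperBlend/leaf_measurement_sets/.

    `archive` is the downloaded training_data.zip; `hyperblend_root` is the
    root of your HyperBlend checkout. Returns the extraction directory.
    """
    target = hyperblend_root / "leaf_measurement_sets"
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    return target


# Example (paths are hypothetical):
# unpack_training_data(Path("~/Downloads/training_data.zip").expanduser(),
#                      Path("~/code/HyperBlend").expanduser())
```

Remember that the directory name may change in future HyperBlend versions, so check the current repository layout before extracting.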
For a quick glance, here is the snippet used to run training:

```python
# Iterative train manually
set_name_iter_1 = "train_iter_1v4"
LI.train_models(set_name=set_name_iter_1, generate_data=True, starting_guess_type='curve',
                train_points_per_dim=30, similarity_rt=0.25, train_surf=True, train_nn=False,
                data_generation_diff_step=0.01)

set_name_iter_2 = "train_iter_2_v4"
surf_model_name = FN.get_surface_model_save_name(set_name_iter_1)
LI.train_models(set_name=set_name_iter_2, generate_data=True, starting_guess_type='surf',
                surface_model_name=surf_model_name, similarity_rt=0.5, train_surf=True,
                train_nn=False, train_points_per_dim=50, data_generation_diff_step=0.001)

set_name_iter_3 = "train_iter_3_v4"
surf_model_name = FN.get_surface_model_save_name(set_name_iter_2)
LI.train_models(set_name=set_name_iter_3, generate_data=True, starting_guess_type='surf',
                surface_model_name=surf_model_name, similarity_rt=0.75, train_surf=True,
                train_nn=False, train_points_per_dim=70, data_generation_diff_step=0.001)

set_name_iter_4 = "train_iter_4_v4"
surf_model_name = FN.get_surface_model_save_name(set_name_iter_3)
LI.train_models(set_name=set_name_iter_4, generate_data=False, starting_guess_type='surf',
                surface_model_name=surf_model_name, similarity_rt=1.0, train_surf=False,
                train_nn=True, train_points_per_dim=200, dry_run=False,
                data_generation_diff_step=0.001, show_plot=True, learning_rate=0.0005)
```

and the tests can be run with:

```python
nn_name = "lc5_lw1000_b32_lr0.000_split0.10.pt"
surf_model_name = FN.get_surface_model_save_name('train_iter_4_v4')
resolution = 5
LI.solve_leaf_material_parameters(clear_old_results=True, resolution=resolution,
                                  set_name="iterative_specchio_nn", copyof="specchio",
                                  solver="nn", solver_model_name=nn_name,
                                  plot_resampling=False, use_dumb_sampling=True)
LI.solve_leaf_material_parameters(clear_old_results=True, resolution=resolution,
                                  set_name="iterative_specchio_surf", copyof="specchio",
                                  solver="surf", solver_model_name=surf_model_name,
                                  plot_resampling=False, use_dumb_sampling=True)
LI.solve_leaf_material_parameters(clear_old_results=True, resolution=resolution,
                                  set_name="iterative_prospect_nn", copyof="prospect_randoms",
                                  solver="nn", solver_model_name=nn_name,
                                  plot_resampling=False, use_dumb_sampling=True)
LI.solve_leaf_material_parameters(clear_old_results=True, resolution=resolution,
                                  set_name="iterative_prospect_surf", copyof="prospect_randoms",
                                  solver="surf", plot_resampling=False,
                                  solver_model_name=surf_model_name, use_dumb_sampling=True)
```
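The four `LI.train_models` calls above follow a fixed relaxation schedule. The following sketch restates that schedule as plain data (values copied from the snippet; this is not HyperBlend API code) to make the pattern explicit: curve fitting supplies the starting guess only in the first iteration, later iterations chain the previous iteration's surface model, and `similarity_rt` rises toward 1.0 while the training grid densifies.

```python
# (set_name, similarity_rt, train_points_per_dim) per iteration,
# copied from the training snippet above.
SCHEDULE = [
    ("train_iter_1v4", 0.25, 30),
    ("train_iter_2_v4", 0.50, 50),
    ("train_iter_3_v4", 0.75, 70),
    ("train_iter_4_v4", 1.00, 200),
]


def starting_guess(i: int, set_names: list) -> tuple:
    """Return (starting_guess_type, source_set) for iteration i.

    Iteration 0 uses curve fitting; every later iteration uses the
    surface model trained on the previous iteration's set.
    """
    if i == 0:
        return ("curve", None)
    return ("surf", set_names[i - 1])
```

Note that only the final iteration trains the neural network (`train_nn=True`); the earlier ones train just the surface model that seeds the next round.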