BYOM walkthrough: DEBtox2019 package
- Author: Tjalling Jager
- Date: November 2021
- Web support: http://www.debtox.info/byom.html
Step-by-step walkthrough of the code of the BYOM DEBtox2019 package. This package accompanies the publication of the DEBtox update (Jager, 2020), which is based on the DEBkiss model. The walkthrough was made with Matlab's 'publish' option, which is also very convenient for keeping track of your own work (as a modeller's log book).
Note that this package contains three (closely related) model versions. They are explained in detail in the text file in the package (about_the_DEBtox2019_package.txt). This walkthrough uses the 'simple_compound' model.
This walkthrough provides an example for Daphnia magna, using a strategy of first fitting the basic parameters to the control treatment only, and next fitting the tox parameters on all treatments while keeping the basic parameters fixed. This makes the model analysis more straightforward (and easier to calculate), since simultaneously fitting more than six parameters is generally a bad idea.
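In BYOM, this two-stage strategy amounts to toggling the 'fit' flag in the parameter structure between the two fitting rounds. The sketch below is illustrative only: the parameter names and starting values are made up, and it assumes the usual BYOM convention of par.X = [value fit-flag minimum maximum], with an optional fifth element for the fitting scale.

```matlab
% Stage 1: fit basic parameters on the control; keep tox parameters fixed.
% Format assumed: [initial-value fit(0/1) lower-bound upper-bound]
par.L0 = [0.5 1 0 1e6];  % initial body length (hypothetical value)
par.Lp = [2.5 1 0 1e6];  % length at puberty (hypothetical value)
par.Lm = [4.5 1 0 1e6];  % maximum length (hypothetical value)
par.kd = [0.1 0 0 1e6];  % dominant rate constant, fixed in stage 1
% ... run the optimisation on the control data set ...

% Stage 2: fix the basic parameters at their fitted values and
% free the tox parameters; refit on the full data set.
par.L0(2) = 0; par.Lp(2) = 0; par.Lm(2) = 0;  % basic parameters now fixed
par.kd(2) = 1;                                 % tox parameter now fitted
% ... run the optimisation again, on all treatments ...
```

The actual parameter names, bounds, and the calls that run the optimisation are defined in byom_debtox_daphnia.m itself.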
This walkthrough consists of the following files:
- byom_debtox_daphnia.m: the script is set up to fit data for growth, reproduction and survival of Daphnia exposed to fluoranthene. First, the basic model parameters are fitted on the controls only. Next, the full data set is fitted, fixing the basic parameters and fitting only the tox parameters.
- derivatives.m: the actual model in the form of a system of ordinary differential equations (ODEs).
- call_deri.m: calls derivatives.m or simplefun.m to calculate the model output.
- pathdefine.m: a piece of code that searches for the engine directory and adds it to the Matlab path. No need to make changes here; just make sure a copy is present in every directory from which you run BYOM scripts.
- byom_debtox_daphnia.m: this is the version in the ERA_special directory, an extended version of the script above that also allows running through a sequence of configurations.
- data_FLU_Dmagna.m: the data are contained and prepared in a separate function. This is handy when working with multiple data sets, as they are then easy to combine and to use for calibration or validation.
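To illustrate what such a data function looks like, here is a minimal sketch with made-up numbers (not the actual fluoranthene data). It assumes the standard BYOM data-matrix layout: the first row holds the scenario identifiers (exposure concentrations), the first column holds the time points, and the top-left element is a data-type/weight code, where a negative value flags survival data for the multinomial likelihood.

```matlab
function DATA = data_example_sketch
% Sketch only: illustrative numbers in the assumed BYOM layout.

% Survival data: top-left code -1 (multinomial likelihood assumed)
DATA{1} = [-1   0    0.1  0.2     % control and two concentrations
            0  10   10   10       % animals alive at t = 0 d
           21  10    9    5 ];    % animals alive at t = 21 d

% Body length (mm): continuous data, top-left code 1 (weight factor)
DATA{2} = [ 1   0    0.1  0.2
            0   0.8  0.8  0.8
           21   4.2  3.9  3.1];
end
```

The real data_FLU_Dmagna.m returns the full fluoranthene data set in this form, ready to be combined with other data sets in the main script.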