The following model was part of the research article:
Developing an Interpretable Machine Learning Model for the Detection of Mimosa Grazing in Goats
You can test the app using an example dataset available here.
A dataset is already preloaded in the app for demonstration purposes.
In recent years, several machine learning approaches for detecting animal behaviors have been proposed. However, despite their successful application, their complexity and lack of explainability have hindered their adoption in real-world scenarios. The article presents a machine learning model for differentiating between mimosa grazing and other activities (resting, walking, and grazing) in goats using sensor data. Two fundamental components of the methodology are Boruta, an algorithm for selecting the most relevant features, and SHAP, a technique for interpreting the decisions of a machine learning model. The resulting model, a gradient boosting classifier with 15 selected features, proved highly accurate in detecting grazing activities. The study demonstrates the fundamental role of model explainability in identifying model weaknesses and errors, thereby creating a path for future improvements. In addition, the simplicity of the resulting model not only reduces computational complexity and processing time but also enhances interpretability and facilitates deployment in real-life scenarios.
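The core idea behind Boruta can be illustrated with a short sketch (this is a simplified illustration, not the implementation used in the article): each real feature is compared against "shadow" features, i.e. permuted copies of the original columns, and a feature is kept only if its importance exceeds that of the strongest shadow.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for the goat sensor features (names and sizes are illustrative).
X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           n_redundant=0, random_state=0)

# Shadow features: each original column permuted independently, destroying any
# relationship with the target while preserving each column's distribution.
rng = np.random.default_rng(0)
shadows = rng.permuted(X, axis=0)
X_aug = np.hstack([X, shadows])

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_aug, y)
importances = forest.feature_importances_
real, shadow = importances[:8], importances[8:]

# Boruta's acceptance rule (simplified): keep a feature only if its importance
# beats the best-performing shadow feature.
selected = np.flatnonzero(real > shadow.max())
print("selected feature indices:", selected)
```

The real Boruta algorithm repeats this comparison over many iterations with statistical testing; the single pass above only conveys the shadow-feature intuition.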
This application allows users to test the pre-trained machine learning model that predicts goat behavior based on input sensor data. The input data should be a tab-separated value (.tsv) file containing specific sensor data related to the goat's activity.
The application then generates predictions, provides a confusion matrix, and offers the option to download the predictions. In addition, you can explore the model's decisions via SHAP analysis.
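The prediction workflow described above can be sketched in Python. This is a minimal stand-in, not the app's actual code: the data, column names, and model here are synthetic placeholders (the real app loads a pre-trained model and the user's uploaded .tsv file).

```python
import io
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the uploaded sensor data (class names assumed for illustration).
rng = np.random.default_rng(42)
classes = ["Resting", "Walking", "Grazing", "GrazingMimosa"]
n = 200
df = pd.DataFrame({
    "Steps": rng.integers(0, 50, n),
    "HeadDown": rng.random(n),
    "Standing": rng.random(n),
    "Activity": rng.choice(classes, n),   # ground-truth label column (assumed name)
})

# Round-trip through a tab-separated buffer, mirroring how the app ingests a .tsv file.
buf = io.StringIO()
df.to_csv(buf, sep="\t", index=False)
buf.seek(0)
data = pd.read_csv(buf, sep="\t")

X, y = data[["Steps", "HeadDown", "Standing"]], data["Activity"]
# Stand-in for the pre-trained gradient boosting model shipped with the app.
model = GradientBoostingClassifier(random_state=0).fit(X, y)
preds = model.predict(X)

cm = confusion_matrix(y, preds, labels=classes)
result = data.assign(Predicted=preds)  # the table offered for download
print(cm)
```

In the real application the model is already trained, so only the read, predict, and confusion-matrix steps apply.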
The key features expected in the dataset are:
| No | Feature | Definition |
|---|---|---|
| 1 | Steps | Number of steps |
| 2 | HeadDown | % of time with head down |
| 3 | Standing | % of time standing |
| 4 | Active | % of time active |
| 5 | MeanXY | Arithmetic mean between X and Y positions |
| 6 | Distance | Distance in meters |
| 7 | prev_steps1 | Number of steps one step backward |
| 8 | X_Act | X position actuator |
| 9 | prev_Active1 | % of time active, one step backward |
| 10 | prev_Standing1 | % of time standing, one step backward |
| 11 | DFA123 | Cumulative Euclidean distance from the current position to three positions forward |
| 12 | prev_headdown1 | % of time with head down, one step backward |
| 13 | Lying | % of time lying |
| 14 | Y_Act | Y position actuator |
| 15 | DBA123 | Cumulative Euclidean distance from the current position to three positions backward |
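Before running predictions, an uploaded .tsv file can be checked against the 15 expected feature columns listed above. The helper below is a hypothetical sketch (the function name and error handling are not part of the app):

```python
import pandas as pd

# The 15 feature columns the model expects, taken from the table above.
EXPECTED = ["Steps", "HeadDown", "Standing", "Active", "MeanXY", "Distance",
            "prev_steps1", "X_Act", "prev_Active1", "prev_Standing1",
            "DFA123", "prev_headdown1", "Lying", "Y_Act", "DBA123"]

def validate_columns(df: pd.DataFrame) -> list[str]:
    """Return the expected feature columns that are missing from df."""
    return [c for c in EXPECTED if c not in df.columns]

# Example: a frame missing the last two required features.
partial = pd.DataFrame(columns=EXPECTED[:-2])
missing = validate_columns(partial)
print("missing:", missing)  # -> ['Y_Act', 'DBA123']
```

A non-empty result signals that the file cannot be scored and which columns need to be added.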