# This file describes each column of the detected vocalisations.
# Please note that detections are made using an 8-second analysis window (no overlap).

# This section presents the raw output of the neural network, which is not required for further processing.
file : Name of the file, with the start time of the analysed window appended (e.g. '40.0' = 40 seconds to 48 seconds).
class : Class of the detection.
x : Centre of the bounding box, as a ratio along the x axis (e.g. 0.5 = 50% = middle).
y : Centre of the bounding box, as a ratio along the y axis.
w : Width of the bounding box, as a ratio along the x axis.
h : Height of the bounding box, as a ratio along the y axis.
conf : Confidence score that the model associates with the detection.
offset : Time, in seconds, at the beginning of the analysis window.

# Converted columns, for Raven and analysis
Annotation : Vocalisation unit according to the vocal repertoire.
pos : Time, in seconds, of the centre of the detection.
Low Freq (Hz) : Lower frequency of the bounding box.
High Freq (Hz) : Upper frequency of the bounding box.
Begin Time (s) : Start time of the detection.
End Time (s) : End time of the detection.
duration : Total duration of the detection.
filename : Name of the file in which the detection was made.
date : Date of the detection, in the format YYYY-mm-dd.

########### RAVEN files ###########
To visualise detections on an audio file, open the file in Raven and download the corresponding annotation file (in .Table1.txt format) from the RAVEN folder (https://sabiod.lis-lab.fr/pub/CARIMAM/ANTIGUA_2025/RAVEN/). Then load the audio file into Raven and add the annotations via the 'File' menu, followed by 'Open Sound Selection Table'.
/!\ Please note that the confidence threshold for detection is set at 0.01.
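# The mapping from the raw network output (x, y, w, h ratios within an 8-second window,
# plus the window offset) to the converted time/frequency columns can be sketched as below.
# This is an illustrative reconstruction, not the project's actual conversion code: the
# spectrogram's maximum frequency (MAX_FREQ_HZ) and the downward-running y axis (y = 0 at
# the top of the spectrogram, as in the YOLO convention) are assumptions, not stated here.

```python
WINDOW_S = 8.0          # analysis window length, as stated above
MAX_FREQ_HZ = 24_000.0  # hypothetical: Nyquist frequency of a 48 kHz recording

def convert_detection(x, y, w, h, offset):
    """Map box-centre ratios to absolute time (s) and frequency (Hz)."""
    begin_time = offset + (x - w / 2) * WINDOW_S
    end_time = offset + (x + w / 2) * WINDOW_S
    # Assumed orientation: y = 0 is the top of the spectrogram (highest frequency).
    high_freq = (1 - (y - h / 2)) * MAX_FREQ_HZ
    low_freq = (1 - (y + h / 2)) * MAX_FREQ_HZ
    return {
        "Begin Time (s)": begin_time,
        "End Time (s)": end_time,
        "Low Freq (Hz)": low_freq,
        "High Freq (Hz)": high_freq,
        "pos": offset + x * WINDOW_S,       # centre of the detection, in seconds
        "duration": end_time - begin_time,
    }

# Example: a box centred mid-window, spanning half the window and the upper
# half of the spectrogram, detected in the 40-48 s window:
det = convert_detection(x=0.5, y=0.25, w=0.5, h=0.5, offset=40.0)
# det["Begin Time (s)"] == 42.0, det["End Time (s)"] == 46.0, det["pos"] == 44.0
```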
/!\ /!\ Please be aware that some vocalisations may be new to the repertoire and may therefore be missed or misinterpreted. /!\

# Should you require any further information, or if you experience any problems, please contact: stephane.chavin@lis-lab.fr