In Task 1, the models are expected to extract the monophonic voice signal from a 3D mixture containing various background noises. The evaluation metric for this task (T1) combines the short-time objective intelligibility (STOI), which estimates the intelligibility of the output speech signal, and the word error rate (WER), computed to assess the effect of the enhancement on speech recognition. Specifically, T1 = (STOI + (1 - WER)) / 2, so it lies in the range [0, 1], where higher is better (see the sketch after the table below).
The table below shows the L3DAS23 Challenge ranking for Task 1, with all scores included for comparison.
Rank | Team Name | WER | STOI | T1 Metric |
---|---|---|---|---|
1 | SEU Speech | 0.101 | 0.902 | 0.901 |
2 | JLESS | 0.174 | 0.836 | 0.831 |
3 | CCA Speech | 0.240 | 0.831 | 0.796 |
- | Baseline | 0.567 | 0.673 | 0.553 |
4 | SpeechLab410 | 0.643 | 0.608 | 0.483 |
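For reference, the T1 scores in the table are reproduced exactly by averaging STOI with (1 - WER). Below is a minimal Python sketch of that computation; the function name `t1_metric` is illustrative and not taken from the official evaluation code.

```python
def t1_metric(stoi: float, wer: float) -> float:
    """Task 1 metric: the mean of STOI and (1 - WER), both in [0, 1]."""
    return (stoi + (1.0 - wer)) / 2.0

# Sanity check against the table above: SEU Speech reported WER = 0.101
# and STOI = 0.902, giving a T1 metric of 0.901.
assert abs(t1_metric(stoi=0.902, wer=0.101) - 0.901) < 1e-3
```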
In Task 2, the models are expected to predict a list of the active sound events and their respective locations at regular intervals of 100 milliseconds. The evaluation metric for this task (T2) is a location-sensitive detection score computed on each time frame: the Cartesian distance between predicted and true events is measured, a prediction counting as a true positive only when it lies close enough to the reference, and the F score is then computed from the resulting precision and recall (see the sketch after the table below). The T2 metric lies in the range [0, 1], where higher is better.
The table below shows the L3DAS23 Challenge ranking for Task 2, with all scores included for comparison.
Rank* | Team Name | Precision | Recall | T2 Metric |
---|---|---|---|---|
1 | JLESS | 0.288 | 0.204 | 0.239 |
2 | NERCSLIP-USTC | 0.275 | 0.216 | 0.242 |
- | Baseline | 0.182 | 0.140 | 0.158 |
* Both the quality of the produced results and the T2 metric were considered in defining the ranking order.
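The T2 scores in the table match the standard F score (the harmonic mean) of the reported precision and recall. Below is a minimal Python sketch under that assumption; the function name `t2_metric` is illustrative and not taken from the official evaluation code.

```python
def t2_metric(precision: float, recall: float) -> float:
    """Location-sensitive F score: harmonic mean of precision and recall."""
    if precision + recall == 0.0:
        return 0.0  # avoid division by zero when both scores are zero
    return 2.0 * precision * recall / (precision + recall)

# Sanity check against the table above: JLESS reported precision = 0.288
# and recall = 0.204, giving a T2 metric of 0.239.
assert abs(t2_metric(precision=0.288, recall=0.204) - 0.239) < 1e-3
```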
The full list of results, organized by task, track, and 1-mic/2-mic configuration, is available at this link.
Based on the challenge results, the following benefits are awarded.
The top five ranked teams are invited to submit a paper to ICASSP 2023. Given the number of submissions across the two tasks, we accept papers from all teams, as listed in the following table. Papers will undergo the regular peer-review process and must follow the ICASSP 2-page paper format. The submission deadline is February 20 at 11:59 p.m. AoE (strict deadline).
Ranking Position | Task | Team |
---|---|---|
1st | 1: 3D SE | SEU Speech |
2nd | 1: 3D SE + 2: 3D SELD | JLESS |
3rd | 1: 3D SE | CCA Speech |
4th | 2: 3D SELD | NERCSLIP-USTC |
5th | 1: 3D SE | SpeechLab410 |
The following table shows the training hours and the associated CO2 emissions (kg CO2 eq.) reported by participants at submission time.
Team Name | Best Model CO2 | Best Model Training Hours | Total CO2 | Total Training Hours |
---|---|---|---|---|
SEU Speech | 14.11 kg CO2 eq. | 90 h | 14.11 kg CO2 eq. | approx. 90 h |
CCA Speech | 11.19 kg CO2 eq. | 74 h | 22.9 kg CO2 eq. | approx. 180 h |
SpeechLab410 | 12.20 kg CO2 eq. | 121 h | 13.96 kg CO2 eq. | approx. 139 h |
NERCSLIP-USTC | 0.54 kg CO2 eq. | 5 h | 16.74 kg CO2 eq. | approx. 155 h |
JLESS | 12.20 kg CO2 eq. | 121 h | 18.15 kg CO2 eq. | approx. 120 h |