Technologies

Speech Recognition

Trained on massive amounts of data with deep learning, Fano Labs’ Automatic Speech Recognition technology accurately recognizes Mandarin, English, and minority languages, as well as dialects such as Cantonese and Sichuanese. The engine can be customized for different scenarios, tuning for variables such as accents, industry jargon, and background noise to significantly improve its accuracy and stability across environments.
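
As a purely illustrative sketch of what such scenario-specific customization could look like, the snippet below writes out a domain profile covering the variables named above (accent, industry jargon, and background noise). The schema, field names, and values are all assumptions invented for this example, not Fano Labs’ actual API.

    import json

    # Hypothetical domain-customization profile; every field name here is an
    # assumption made for illustration, not a real product schema.
    profile = {
        "language": "yue",               # Cantonese
        "accent_hint": "hk",
        "custom_vocabulary": [           # industry jargon to boost
            {"phrase": "margin call", "boost": 4.0},
            {"phrase": "IPO", "boost": 3.0},
        ],
        "noise_profile": "call_center",  # background-noise conditioning
    }

    with open("asr_profile.json", "w") as f:
        json.dump(profile, f, indent=2, ensure_ascii=False)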

Features

  • Supports recognition of various dialects and minority languages
  • Continuously improves recognition accuracy through self-learning
  • Personalizes the model to accurately recognize industry jargon
  • Supports customized development and flexible deployment

Application Scenarios

  • Voice-enabled Chatbot
  • Voice Transcription
  • Speech Analytics
  • Voiceprint Recognition

Research Papers

  • Incorporating Prior Knowledge Into Speaker Diarization and Linking for Identifying Common Speaker

    Tsun-Yat Leung, Lahiru Samarakoon, and Albert Y.S. Lam, in Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019), Dec. 2019.

    Speaker diarization and linking discovers “who spoke when” across recordings without any speaker enrollment. Diarization is performed on each recording separately, and linking combines clusters of the same speaker across recordings. However, this two-step approach suffers from propagating errors from the diarization step to the linking step. In a situation where a unique speaker appears in a given set of recordings, this paper aims at locating the common speaker using the prior knowledge of his or her existence; that is, there is no enrollment data for this common speaker. We propose the Pairwise Common Speaker Identification (PCSI) method, which takes the existence of a common speaker into account, in contrast to the two-step approach. We further show that PCSI can be used to reduce the errors introduced in the diarization step of the two-step approach. Our experiments are performed on a corpus synthesised from the AMI corpus as well as an in-house conversational telephony Sichuanese corpus mixed with Mandarin. We show relative improvements of up to 7.68% in time-weighted equal error rate over a state-of-the-art x-vector diarization and linking system.
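
    As a minimal numpy sketch of the two-step baseline described above (diarize each recording, then link clusters across recordings; this is the baseline the paper improves on, not PCSI itself), the snippet below links clusters by cosine similarity of their speaker embeddings and flags the anchor cluster linked to most often as the common-speaker candidate. The embeddings are synthetic and every dimension is illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def l2norm(x):
          return x / np.linalg.norm(x, axis=-1, keepdims=True)

      # Three recordings; each row is the centroid embedding (e.g. an x-vector)
      # of one diarized cluster. Cluster 0 of every recording is the common
      # speaker, simulated as a shared direction plus noise.
      common = l2norm(rng.normal(size=16))
      recordings = [
          l2norm(np.vstack([common + 0.1 * rng.normal(size=16),
                            rng.normal(size=(2, 16))]))
          for _ in range(3)
      ]

      # Linking step: match each cluster in recordings 1..N to its most
      # cosine-similar cluster in recording 0 (rows are unit-norm, so the
      # dot product is cosine similarity).
      anchor = recordings[0]
      counts = np.zeros(len(anchor), dtype=int)
      for rec in recordings[1:]:
          links = (rec @ anchor.T).argmax(axis=1)
          for a in links:
              counts[a] += 1

      # The anchor cluster linked to from the most recordings is the
      # common-speaker candidate; mistakes here illustrate how diarization
      # errors propagate into the linking step.
      print("link counts per anchor cluster:", counts)
      print("candidate common speaker: anchor cluster", counts.argmax())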

  • Domain Adaptation of End-to-end Speech Recognition in Low-resource Settings

    Lahiru Samarakoon, Brian Mak, and Albert Y.S. Lam. IEEE Workshop on Spoken Language Technology (IEEE SLT 2018), Athens, Greece, Dec. 2018.

    End-to-end automatic speech recognition (ASR) has simplified the traditional ASR system-building pipeline by eliminating the need for multiple components and for expert linguistic knowledge to create pronunciation dictionaries. Therefore, end-to-end ASR fits well when building systems for new domains. However, one major drawback of end-to-end ASR is that it requires a larger amount of labeled speech than traditional methods. In this paper, we therefore explore domain adaptation approaches for end-to-end ASR in low-resource settings. We show that joint domain identification and speech recognition by inserting a symbol for the domain at the beginning of the label sequence, factorized hidden layer adaptation, and a domain-specific gating mechanism improve performance for a low-resource target domain. Furthermore, we show the robustness of the proposed adaptation methods to an unseen domain when only 3 hours of untranscribed data are available, with relative improvements of up to 8.7%.
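
    To make the first adaptation idea concrete, here is a small self-contained Python sketch of inserting a symbol for the domain at the beginning of each label sequence, so that a single end-to-end model jointly learns domain identification and recognition. The toy vocabulary and domain names are invented for illustration.

      # Base output vocabulary of the end-to-end model (toy words standing in
      # for the usual character or subword units).
      vocab = {"<blank>": 0, "hello": 1, "price": 2, "quote": 3}

      # Extend the vocabulary with one symbol per domain.
      for d in ["<dom:finance>", "<dom:telecom>"]:
          vocab[d] = len(vocab)

      def encode(transcript, domain):
          """Prepend the domain symbol, then map tokens to label IDs."""
          return [vocab[domain]] + [vocab[w] for w in transcript.split()]

      # Training targets now begin with the domain label, so at test time the
      # model's first output token doubles as a domain prediction.
      print(encode("price quote", "<dom:finance>"))  # [4, 2, 3]
      print(encode("hello", "<dom:telecom>"))        # [5, 1]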

  • Subspace Based Sequence Discriminative Training of LSTM Acoustic Models with Feed-Forward Layers

    Lahiru Samarakoon, Brian Mak, and Albert Y.S. Lam. ISCSLP, Taipei, Taiwan, Nov. 2018.

    State-of-the-art automatic speech recognition (ASR) systems use sequence discriminative training for improved performance over the frame-level cross-entropy (CE) criterion. Even though sequence discriminative training improves long short-term memory (LSTM) recurrent neural network (RNN) acoustic models (AMs), it is not clear whether these systems achieve the optimal performance due to overfitting. This paper investigates the effect of state-level minimum Bayes risk (sMBR) training on LSTM AMs and shows that the conventional way of performing sMBR by updating all LSTM parameters is not optimal. We investigate two methods to improve the performance of sequence discriminative training of LSTM AMs. First, more feed-forward (FF) layers are included between the last LSTM layer and the output layer, so that these additional FF layers may benefit more from sMBR training. Second, a subspace is estimated as an interpolation of rank-1 matrices when performing sMBR for the LSTM layers of the AM. Our methods are evaluated on the benchmark AMI single distant microphone (SDM) task. We find that the proposed approaches provide a 1.6% absolute improvement over a strong sMBR-trained LSTM baseline.
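
    As a rough PyTorch sketch of the second method, the layer below keeps a pretrained weight matrix W0 fixed and learns only an update expressed as an interpolation of rank-1 matrices, W = W0 + sum_k alpha_k u_k v_k^T. The rank, dimensions, and surrounding training loop are assumptions for illustration, not the paper's exact configuration.

      import torch
      import torch.nn as nn

      class Rank1SubspaceLinear(nn.Module):
          """Linear layer whose discriminative-training update is confined to
          a subspace of rank-1 matrices: W = W0 + sum_k alpha_k * u_k v_k^T."""

          def __init__(self, w0: torch.Tensor, rank: int = 4):
              super().__init__()
              out_dim, in_dim = w0.shape
              self.register_buffer("w0", w0)  # frozen CE-trained weights
              self.u = nn.Parameter(0.01 * torch.randn(rank, out_dim))
              self.v = nn.Parameter(0.01 * torch.randn(rank, in_dim))
              self.alpha = nn.Parameter(torch.zeros(rank))  # interpolation weights

          def forward(self, x):
              # Build sum_k alpha_k * u_k v_k^T and add it to the fixed W0.
              delta = torch.einsum("k,ko,ki->oi", self.alpha, self.u, self.v)
              return x @ (self.w0 + delta).T

      # Usage: wrap a pretrained layer's weight; only u, v, and alpha receive
      # gradients during the sequence discriminative training stage.
      layer = Rank1SubspaceLinear(torch.randn(8, 16), rank=2)
      print(layer(torch.randn(4, 16)).shape)  # torch.Size([4, 8])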