Speech Recognition - National Design System

A voice-to-text plugin that lets users dictate into any text input, with automatic language detection for Arabic and English, audio feedback, and a programmatic API for custom integrations.

Voice Input on a Form Field

Add nds-voice-input to any action button inside a form container to activate speech-to-text. The button toggles listening on and off, and the transcript is written directly into the input.
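A minimal markup sketch of this pattern. The container and input class names (`nds-form-control`, `nds-input`) are assumptions based on typical NDS form markup, and `nds-voice-input` is shown as a class although this page does not specify whether it is a class or an attribute; only `nds-voice-input` and the `nds-hgi-mic-01` icon are documented here:

```html
<!-- Hypothetical NDS form markup; only nds-voice-input and
     nds-hgi-mic-01 are documented on this page -->
<div class="nds-form-control">
  <input type="text" class="nds-input" placeholder="Describe your request" />
  <button type="button" class="nds-voice-input" aria-label="Voice input">
    <span class="nds-hgi-mic-01" aria-hidden="true"></span>
  </button>
</div>
```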

Search with Voice Input

Built-in Features

Auto-initialization

Any button with nds-voice-input inside an NDS form container is wired automatically when the form module loads. No extra JS required.

Automatic Language Detection

Reads the page language at recognition start and sets ar-SA for Arabic pages or en-US for English, with no manual configuration needed.
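The detection step can be sketched as a pure mapping from the page's `lang` attribute to a recognition locale. This is a sketch of the behavior described above, not the plugin's actual source, and `localeForPage` is a hypothetical name:

```javascript
// Map a page language tag to the recognition locale the plugin would use.
// Per the docs: Arabic pages get 'ar-SA', English pages get 'en-US'.
function localeForPage(pageLang) {
  const base = (pageLang || "").toLowerCase().split("-")[0];
  return base === "ar" ? "ar-SA" : "en-US";
}

// In the browser, the page language would come from the root element:
//   const locale = localeForPage(document.documentElement.lang);
```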

Live Interim Transcripts

Partial results appear in the field as you speak, styled in italic to distinguish them from committed text. The final transcript replaces them when speech ends.

Audio Feedback Tones

A short tone plays when the microphone opens (high pitch), closes (low pitch), or encounters an error (very low pitch), giving clear non-visual feedback during dictation.
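Tones like these can be produced with the Web Audio API. The sketch below is illustrative, not the plugin's implementation: the actual frequencies and durations are not documented, so the values here are placeholders chosen to match the high/low/very-low description:

```javascript
// Placeholder pitches (Hz) for the three feedback events; the plugin's
// real values are not published.
const TONE_HZ = { start: 880, end: 440, error: 220 };

// Play a short sine beep for the given event and return the pitch used.
// When no AudioContext is supplied (e.g. outside a browser), only the
// lookup runs, which keeps the mapping testable.
function playTone(kind, ctx, durationMs = 150) {
  const hz = TONE_HZ[kind];
  if (hz === undefined) throw new Error("unknown tone: " + kind);
  if (ctx) {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = hz;
    osc.connect(gain);
    gain.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + durationMs / 1000);
  }
  return hz;
}
```

In a browser this would be invoked as `playTone("start", new AudioContext())` when the microphone opens.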

30-Second Timeout

Listening automatically stops after 30 seconds of inactivity and shows a localized timeout message in the input placeholder, preventing the microphone from staying open indefinitely.
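The inactivity timeout can be modeled as a small timer that is reset on every speech result and fires after 30 seconds of silence. A sketch under a manual clock so the logic is visible; the real plugin presumably uses `setTimeout` internally, and `InactivityTimer` is a hypothetical name:

```javascript
// Inactivity timer: fires onTimeout once no reset() arrives for limitMs.
// tick(ms) advances a manual clock; a browser version would use setTimeout.
class InactivityTimer {
  constructor(limitMs, onTimeout) {
    this.limitMs = limitMs;
    this.onTimeout = onTimeout;
    this.idleMs = 0;
    this.fired = false;
  }
  reset() {
    // Call on every interim/final result to keep the session alive.
    this.idleMs = 0;
  }
  tick(ms) {
    // Advance the clock; fire once when the idle limit is reached.
    if (this.fired) return;
    this.idleMs += ms;
    if (this.idleMs >= this.limitMs) {
      this.fired = true;
      this.onTimeout(); // e.g. stop recognition, show timeout placeholder
    }
  }
}
```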

Programmatic Control

Access NDS.VoiceRecognition directly to create recognition instances, handle transcript callbacks, and wire voice input to custom UI elements outside the standard form container.

Usage Guidelines

Best Practices

  • Use voice input on search fields and long free-text inputs where typing is burdensome. Short, constrained fields like phone numbers, postcodes, or PIN codes are not good candidates.
  • Use the search input variant (nds-search-input) for search fields. It includes the microphone button slot alongside the clear button by design.
  • Do not add voice input to password fields, OTP fields, or other security-sensitive inputs where dictation could expose credentials to bystanders or screen-recording software.
  • Do not add voice input to <select>, <textarea>, or read-only inputs. The plugin targets the primary text input inside the form control and relies on setting .value directly.
  • Always provide a visible microphone icon in the button so users can identify it without reading the aria-label. Use the nds-hgi-mic-01 UI icon for consistency with the rest of NDS.
  • Graceful degradation is automatic: if the browser does not support the Web Speech API, the button is hidden and the input works as a normal text field. Do not write your own feature detection.
  • The plugin requires microphone permission from the browser. Pair it with a visible permission explanation or tooltip when the feature is prominent in a service flow, so users understand why they are being prompted.
  • In Arabic layouts the plugin sets lang="ar-SA" on the recognition instance automatically. You do not need to set dir or lang on the input itself.
  • For custom integrations using the VoiceRecognition API directly, always call isSupported() first and guard the rest of your code behind it.
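The last point can be sketched as a guard that degrades to a plain text field. `attachMic` and the button/input arguments are hypothetical; only `isSupported`, `create`, `startListening`, and `stopListening` come from the API documented below:

```javascript
// Guard a custom integration behind feature support. Returns false (and
// hides the mic button) when the Web Speech API is unavailable.
function attachMic(VR, micButton, input) {
  if (!VR || !VR.isSupported()) {
    micButton.hidden = true; // input keeps working as a plain text field
    return false;
  }
  const recognition = VR.create({ interimResults: true });
  let listening = false;
  micButton.addEventListener("click", () => {
    if (listening) {
      VR.stopListening(recognition); // onEnd flips `listening` back
    } else {
      listening = true;
      VR.startListening(recognition, {
        onResult: (r) => { input.value = r.isFinal ? r.final : r.interim; },
        onEnd: () => { listening = false; }
      });
    }
  });
  return true;
}
```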

Error Messages

When recognition fails, the plugin sets a localized message as the input placeholder for 3 seconds. These messages are bilingual and chosen automatically by page language.

Error Code             | English Message                | Arabic Message
-----------------------|--------------------------------|---------------------------
no-speech              | No speech detected             | لم يتم اكتشاف صوت
not-allowed            | Microphone permission required | مطلوب إذن الميكروفون
audio-capture          | Microphone access denied       | تم رفض الوصول للميكروفون
network                | Network error                  | خطأ في الشبكة
aborted                | Voice input cancelled          | تم إلغاء إدخال الصوت
language-not-supported | Language not supported         | اللغة غير مدعومة
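For a custom UI, the same bilingual mapping can be reused as data with a small lookup helper. The messages are copied from the table above; `VOICE_ERRORS` and `errorMessage` are hypothetical names, not part of the NDS API:

```javascript
// Bilingual error messages, keyed by Web Speech error code, as listed above.
const VOICE_ERRORS = {
  "no-speech":              { "en-US": "No speech detected",             "ar-SA": "لم يتم اكتشاف صوت" },
  "not-allowed":            { "en-US": "Microphone permission required", "ar-SA": "مطلوب إذن الميكروفون" },
  "audio-capture":          { "en-US": "Microphone access denied",       "ar-SA": "تم رفض الوصول للميكروفون" },
  "network":                { "en-US": "Network error",                  "ar-SA": "خطأ في الشبكة" },
  "aborted":                { "en-US": "Voice input cancelled",          "ar-SA": "تم إلغاء إدخال الصوت" },
  "language-not-supported": { "en-US": "Language not supported",         "ar-SA": "اللغة غير مدعومة" }
};

// Pick the localized message for an error code, falling back to English.
function errorMessage(code, locale) {
  const entry = VOICE_ERRORS[code];
  if (!entry) return null;
  return entry[locale] || entry["en-US"];
}

// Mirroring the plugin's behavior in a custom UI would look roughly like:
//   input.placeholder = errorMessage(code, "ar-SA");
//   setTimeout(() => { input.placeholder = previous; }, 3000);
```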

JavaScript API

The NDS.VoiceRecognition module is exposed for custom integrations. Use it to attach voice input to elements outside the standard NDS form container, or to build custom recording UI with full control over transcript handling.

const VR = NDS.VoiceRecognition;

// ── Browser support check ─────────────────────────────
if (!VR.isSupported()) {
  // Web Speech API not available — hide mic UI
}

// ── Detected language ─────────────────────────────────
// Returns 'ar-SA' on Arabic pages, 'en-US' on English pages
const lang = VR.getLanguage();

// ── Create a recognition instance ─────────────────────
// Accepts any SpeechRecognition property as an override
const recognition = VR.create({
  continuous: false,     // stop after first utterance (default)
  interimResults: true,  // stream partial transcripts (default)
  maxAlternatives: 1     // number of alternatives (default)
});

// ── Start listening ───────────────────────────────────
VR.startListening(recognition, {
  onStart: function() {
    // Microphone is open, audio tone plays automatically
  },
  onResult: function(result) {
    // result.final   — committed transcript so far
    // result.interim — partial transcript in progress
    // result.isFinal — true when the utterance is complete
    myInput.value = result.isFinal ? result.final : result.interim;
  },
  onError: function(errorCode) {
    // errorCode: 'no-speech' | 'not-allowed' | 'audio-capture'
    //          | 'network' | 'aborted' | 'language-not-supported'
    console.warn('Voice error:', errorCode);
  },
  onEnd: function(finalTranscript) {
    // Called when recognition session ends (normally or on error)
    // finalTranscript — the full committed text
  }
});

// ── Stop listening ────────────────────────────────────
VR.stopListening(recognition);

// ── Audio feedback tones (called automatically by the plugin) ─
VR.audioFeedback.start();  // high-pitch tone — microphone open
VR.audioFeedback.end();    // low-pitch tone — session ended
VR.audioFeedback.error();  // very low tone — error occurred