V9Latest_Service

A list of all methods in the V9Latest_Service service. Click on the method name to view detailed information about that method.

| Method | Description |
| --- | --- |
| scoreAWordOrSentence | Score pronunciation of a word or sentence, returning an overall speechace_score plus a quality_score for each word, syllable, and phoneme. |
| scoreAPhonemeList | Score a term expressed as an Arpabet phoneme list, for words not in the dictionary. |
| validateText | Check whether all the words in a text exist in the Speechace lexicon before scoring. |
| scoreTask | Score a speaking task of type describe-image, retell-lecture, or answer-question. |
| transcribeScore | Transcribe free speech and return estimated IELTS, PTE, CEFR, TOEIC, and Speechace scores for fluency, pronunciation, grammar, vocabulary, and coherence. |

scoreAWordOrSentence

In this example we score the pronunciation of a word or sentence. The overall score is returned as a speechace_score on a scale of 0 to 100. In addition, the API returns a quality_score on a scale of 0 to 100 for each word, syllable, and phoneme in the utterance, which allows pin-pointed feedback on pronunciation mistakes made by the speaker.

### Overall Score

The API result contains the following score field under the text_score node:

| Field | Description |
| --- | --- |
| speechace_score.pronunciation | An overall pronunciation score for the entire utterance on a scale of 0 to 100. See guide for detail on the score rubric. |

### Feedback Subscores

In addition, the API returns the following lists of elements and subscores:

| Field | Description |
| --- | --- |
| word_score_list[] | A list of words in the utterance, each with its own quality_score. |
| syllable_score_list[] | A list of syllables in each word in the word_score_list[], each with its own quality_score. |
| phone_score_list[] | A list of phonemes in each word in the word_score_list[], each with its own quality_score. |

Each element has its own quality_score (see guide), its extent information marking its begin and end in time (see guide), and additional fields. For example, each element in phone_score_list[] identifies the expected phone in Arpabet phonetic notation and the actual sound_most_like phone based on the speaker's attempt. It also includes the word_extent for that phone to enable mapping to its corresponding letters in the word. This enables applications to visually demonstrate pronunciation errors to the speaker.

```json
"word_score_list": [
  {
    "word": "Some",
    "quality_score": 100,
    "phone_score_list": [
      {
        "phone": "s",
        "stress_level": null,
        "extent": [10, 27],
        "quality_score": 99.05882352941177,
        "sound_most_like": "s"
      },
      {
        "phone": "ah",
        "stress_level": 1,
        "extent": [27, 36],
        "quality_score": 100,
        "stress_score": 100,
        "sound_most_like": "ah"
      },
      ...
    ]
  }
]
```

  • HTTP Method: POST
  • Endpoint: /api/scoring/text/v9/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | ScoreAWordOrSentenceRequest | | The request body. |
| key | string | | API key issued by Speechace. |
| dialect | string | | The dialect to use for scoring. Supported values are "en-us" (US English) and "en-gb" (UK English). |
| userId | string | | Optional: A unique anonymized identifier for the end-user who spoke the audio. |
| __ | string | | |

Return Type

ScoreAWordOrSentenceOkResponse

Example Usage Code Snippet

```typescript
import { ScoreAWordOrSentenceRequest, Speechaceapi } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  // user_audio_file holds the audio content to score,
  // e.g. a stream or buffer read from a WAV/MP3 recording.
  const input: ScoreAWordOrSentenceRequest = {
    includeFluency: '1',
    includeUnknownWords: '1',
    noMc: '1',
    text: 'Yo vivo en Granada, una ciudad pequeña que tiene monumentos muy importantes como la Alhambra. Aquí la comida es deliciosa y son famosos el gazpacho y el salmorejo.',
    userAudioFile: user_audio_file,
  };

  const { data } = await speechaceapi.v9Latest_.scoreAWordOrSentence(input, {
    key: '{{speechacekey}}',
    dialect: 'en-us',
    userId: 'XYZ-ABC-99001',
    __: 'laborum culpa',
  });

  console.log(data);
})();
```
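The per-phone subscores lend themselves to pin-pointed feedback. The sketch below walks a word_score_list of the shape shown above and collects phones scoring under an arbitrary threshold; the interfaces are hand-written for the example, not SDK exports:

```typescript
// Minimal shapes for the documented text_score.word_score_list fields.
interface PhoneScore {
  phone: string;
  quality_score: number;
  sound_most_like: string | null;
}

interface WordScore {
  word: string;
  quality_score: number;
  phone_score_list: PhoneScore[];
}

// Collect phones whose quality_score falls below a chosen threshold,
// pairing the expected phone with what the attempt sounded most like.
function findWeakPhones(words: WordScore[], threshold = 60): string[] {
  const issues: string[] = [];
  for (const w of words) {
    for (const p of w.phone_score_list) {
      if (p.quality_score < threshold) {
        issues.push(`${w.word}: expected /${p.phone}/, sounded like /${p.sound_most_like}/`);
      }
    }
  }
  return issues;
}

const sample: WordScore[] = [
  {
    word: 'Some',
    quality_score: 100,
    phone_score_list: [
      { phone: 's', quality_score: 99.1, sound_most_like: 's' },
      { phone: 'ah', quality_score: 45, sound_most_like: 'eh' },
      { phone: 'm', quality_score: 98, sound_most_like: 'm' },
    ],
  },
];

console.log(findWeakPhones(sample));
// [ 'Some: expected /ah/, sounded like /eh/' ]
```

An application would typically highlight the corresponding letters via word_extent rather than print strings, but the traversal is the same.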

scoreAPhonemeList

In this example we score the term "gotcha" (/g/ao1/ch/ah0). Since "gotcha" is American vernacular and not a valid dictionary word, we use the phoneme list API to score it. The phoneme list uses a different URL endpoint and expects the list of phonemes in Arpabet notation. Note that we specify phoneme stress as 0, 1, or 2 per Arpabet notation. This API allows you to score any word or sentence that can be phonetically expressed in Arpabet.

  • HTTP Method: POST
  • Endpoint: /api/scoring/phone_list/v9/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | ScoreAPhonemeListRequest | | The request body. |
| key | string | | API key issued by Speechace. |
| userId | string | | Optional: A unique anonymized identifier for the end-user who spoke the audio. |
| dialect | string | | The dialect to use for scoring. Supported values are "en-us" (US English) and "en-gb" (UK English). |

Return Type

ScoreAPhonemeListOkResponse

Example Usage Code Snippet

```typescript
import { ScoreAPhonemeListRequest, Speechaceapi } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  // user_audio_file holds the audio content to score,
  // e.g. a stream or buffer read from a WAV/MP3 recording.
  const input: ScoreAPhonemeListRequest = {
    phoneList: 'g|ao|ch|ah',
    questionInfo: "'u1/q1'",
    userAudioFile: user_audio_file,
  };

  const { data } = await speechaceapi.v9Latest_.scoreAPhonemeList(input, {
    key: '{{speechacekey}}',
    userId: 'XYZ-ABC-99001',
    dialect: 'en-us',
  });

  console.log(data);
})();
```
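The snippet above passes the phonemes pipe-delimited ('g|ao|ch|ah'), while the description writes them slash-delimited with stress digits (/g/ao1/ch/ah0). A small helper (hypothetical, not part of the SDK) can build the pipe-delimited string and attach the 0/1/2 stress digit where one is given:

```typescript
// Join Arpabet phones into a pipe-delimited phone list string,
// appending the stress digit (0, 1, 2) only when one is specified.
function toPhoneList(phones: Array<{ phone: string; stress?: 0 | 1 | 2 }>): string {
  return phones
    .map((p) => (p.stress === undefined ? p.phone : `${p.phone}${p.stress}`))
    .join('|');
}

// "gotcha" as described above: g, stressed ao, ch, unstressed ah.
const gotcha = toPhoneList([
  { phone: 'g' },
  { phone: 'ao', stress: 1 },
  { phone: 'ch' },
  { phone: 'ah', stress: 0 },
]);
console.log(gotcha); // g|ao1|ch|ah0
```

Whether the endpoint accepts stress digits inside phoneList, as opposed to the bare phones shown in the snippet, should be confirmed against the Speechace API reference.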

validateText

In this example we validate whether all the words in the text exist in the Speechace lexicon. This API allows you to quickly check whether authored content can be scored with Speechace. It is useful at text-authoring time to avoid errors later on. Out-of-lexicon terms can be reported to [email protected] for inclusion, or you can use the phoneme list API as an alternative. You may also opt to let the Speechace API handle unknown words automatically by finding the most likely phonetic mapping for the term.

  • HTTP Method: POST
  • Endpoint: /api/validating/text/v9/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | any | | The request body. |
| key | string | | API key issued by Speechace. |
| text | string | | A sentence or sequence of words to validate. |
| dialect | string | | The dialect to use for validation. Default is "en-us". Supported values are "en-us" (US English) and "en-gb" (UK English). |

Return Type

ValidateTextOkResponse

Example Usage Code Snippet

```typescript
import { Speechaceapi } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  const input = {};

  const { data } = await speechaceapi.v9Latest_.validateText(input, {
    key: '{{speechacekey}}',
    text: '"Validate these words existeee."',
    dialect: 'en-us',
  });

  console.log(data);
})();
```
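At authoring time, the validation result typically gates whether content is accepted. The sketch below assumes a hypothetical response shape with an unknownWords list; check the actual ValidateTextOkResponse type for the real field names:

```typescript
// Hypothetical response shape -- the real ValidateTextOkResponse may differ.
interface ValidationResult {
  unknownWords: string[];
}

// Accept authored text only if every word is in the lexicon,
// otherwise report the out-of-lexicon terms for correction.
function gateAuthoredText(result: ValidationResult): { ok: boolean; report: string } {
  if (result.unknownWords.length === 0) {
    return { ok: true, report: 'all words in lexicon' };
  }
  return { ok: false, report: `out-of-lexicon: ${result.unknownWords.join(', ')}` };
}

console.log(gateAuthoredText({ unknownWords: ['existeee'] }));
// { ok: false, report: 'out-of-lexicon: existeee' }
```

Terms flagged here are the ones to report to [email protected] or to score via the phoneme list API instead.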

scoreTask

This section contains a full description of the Score Task request type. In the subsequent sections you can find specific request examples for the following task types:

- describe-image
- retell-lecture
- answer-question

  • HTTP Method: POST
  • Endpoint: /api/scoring/task/v9/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | ScoreTaskRequest | | The request body. |
| key | string | | API key issued by Speechace. |
| taskType | string | | The task_type to score. Supported types are: describe-image, retell-lecture, answer-question. |
| dialect | string | | The dialect to use for scoring. Supported values are: en-us, en-gb, fr-fr, fr-ca, es-es, es-mx. |

Example Usage Code Snippet

```typescript
import { ScoreTaskRequest, Speechaceapi } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  const input: ScoreTaskRequest = {
    includeSpeechScore: '0',
    taskQuestion:
      'What do you call a system of government in which people vote for the people who will represent them?',
    userAudioText: 'elections',
  };

  // 'task_type' and 'dialect' below are placeholders; substitute one of the
  // supported values listed in the parameter table above.
  const { data } = await speechaceapi.v9Latest_.scoreTask(input, {
    key: '{{speechace_premiumkey}}',
    taskType: 'task_type',
    dialect: 'dialect',
  });

  console.log(data);
})();
```
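The task_type values form a closed set, so a small client-side guard can catch typos before a request is sent. This helper is a sketch, not part of the SDK:

```typescript
// The three documented task types for /api/scoring/task/v9/json.
const TASK_TYPES = ['describe-image', 'retell-lecture', 'answer-question'] as const;
type TaskType = (typeof TASK_TYPES)[number];

// Narrow an arbitrary string to a valid TaskType, or fail fast.
function assertTaskType(value: string): TaskType {
  if (!(TASK_TYPES as readonly string[]).includes(value)) {
    throw new Error(`Unsupported task_type: ${value}`);
  }
  return value as TaskType;
}

console.log(assertTaskType('retell-lecture')); // retell-lecture
```

Validating locally avoids a round trip to the API for a request that can only fail.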

transcribeScore

In this example we transcribe free-speaking audio and score the response, providing an estimated IELTS and CEFR score for each of the following aspects:

- Fluency
- Pronunciation
- Grammar
- Vocabulary
- Coherence

The API accepts the user audio and a relevance_context as inputs. The relevance_context is typically a question prompt provided to the user and is used to assess whether the user's response is relevant or not. Irrelevant answers have the overall IELTS score automatically set to zero, and a warning is returned in the score_issue_list[] in the API result.

### Overall Scores

The overall score is returned in 5 formats. See the Scoring Rubrics Guide to understand how the different formats map to each other:

- A speechace_score on a scale of 0 to 100
- An ielts_score on a standard IELTS scale of 0 to 9.0
- A pte_score on a standard PTE scale of 10 to 90
- A cefr_score on a standard scale of A0 to C2
- A toeic_score on a standard scale of 0 to 200

In addition, the API returns segment (sentence) level scores to provide feedback on the speaker's weaknesses on the whole and at specific segments within the passage.

The API result contains the following score fields under the speech_score node:

| Field | Description |
| --- | --- |
| transcript | The speech-to-text transcript of what the user has said. |
| speechace_score | An overall score on a scale of 0 to 100, in addition to subscores for: Fluency, Pronunciation, Grammar, Vocabulary, Coherence. |
| ielts_score | An overall score on an IELTS scale of 0 to 9.0, in addition to subscores for: Fluency, Pronunciation, Grammar, Vocabulary, Coherence. |
| pte_score | An overall score on a PTE scale of 10 to 90, in addition to subscores for: Fluency, Pronunciation, Grammar, Vocabulary, Coherence. |
| cefr_score | An overall score on a CEFR scale of A0 to C2, in addition to subscores for: Fluency, Pronunciation, Grammar, Vocabulary, Coherence. |
| toeic_score | An overall score on a TOEIC scale of 0 to 200, in addition to subscores for: Fluency, Pronunciation, Grammar, Vocabulary, Coherence. |
| relevance.class | TRUE or FALSE indicating whether the response was relevant given the relevance_context passed as input to the API. |

The following example snippet from the API results demonstrates the overall scores:

```json
{
  "status": "success",
  "speech_score": {
    "transcript": "But the residents have felt the strain, they to launched a healthy streets program, opening up, select streets to just walking and cycling. Now, this action proved valuable in helping residents life and broaden the benefit of their tax dollars. That typically pay to serve cars. New designs were implemented on South Congress. The iconic Main Street of Texas, Inn, Downtown Austin, the stretch of road has changed character overtime evolve e with advances in technology Civic priorities or public preferences with City council's Direction. This stretch of road now has just two fewer Lanes of car traffic. A third of the street space was given over to people bicycling and rolling on scooters. Taking them off the busy sidewalks better suited for dining under the oak trees and give them increased comfort and safety.",
    "relevance": {
      "class": "TRUE"
    },
    "ielts_score": { "pronunciation": 8.5, "fluency": 9, "grammar": 8.5, "coherence": 9, "vocab": 9, "overall": 9 },
    "pte_score": { "pronunciation": 86, "fluency": 87, "grammar": 86, "coherence": 90, "vocab": 89, "overall": 87 },
    "speechace_score": { "pronunciation": 97, "fluency": 98, "grammar": 97, "coherence": 100, "vocab": 99, "overall": 98 },
    "toeic_score": { "pronunciation": 190, "fluency": 200, "grammar": 190, "coherence": 200, "vocab": 200, "overall": 200 },
    "cefr_score": { "pronunciation": "C2", "fluency": "C2", "grammar": "C2", "coherence": "C2", "vocab": "C2", "overall": "C2" }
    ...
  }
}
```

### Feedback Metrics

In addition, the API returns the following feedback nodes:

| Node | Description |
| --- | --- |
| fluency | This node contains fluency metrics and subscores for the overall utterance and for each segment (sentence) within the utterance. |
| word_score_list[] | This node contains pronunciation scores and metrics for each word, syllable, and phoneme within the utterance. |
| grammar | This node contains grammar metrics, errors, and feedback for the overall utterance. |
| vocab | This node contains vocabulary metrics, errors, and feedback for the overall utterance. |
| coherence | This node contains coherence metrics, errors, and feedback for the overall utterance. |

  • HTTP Method: POST
  • Endpoint: /api/scoring/speech/v9/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | TranscribeScoreRequest | | The request body. |
| key | string | | API key issued by Speechace. |
| dialect | string | | The dialect to use for scoring. Supported values are: en-us (US English), en-gb (UK English), fr-fr (French France), fr-ca (French Canada), es-es (Spanish Spain), es-mx (Spanish Mexico). |
| userId | string | | Optional: A unique anonymized identifier for the end-user who spoke the audio. |

Return Type

TranscribeScoreOkResponse

Example Usage Code Snippet

```typescript
import { Speechaceapi, TranscribeScoreRequest } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  // user_audio_file holds the audio content to transcribe and score,
  // e.g. a stream or buffer read from a WAV/MP3 recording.
  const input: TranscribeScoreRequest = {
    includeIeltsFeedback: '1',
    questionInfo: "'u1/q1'",
    userAudioFile: user_audio_file,
  };

  const { data } = await speechaceapi.v9Latest_.transcribeScore(input, {
    key: '{{speechace_premiumkey}}',
    dialect: 'en-us',
    userId: 'XYZ-ABC-99001',
  });

  console.log(data);
})();
```
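As described above, an irrelevant response has relevance.class set to FALSE and its overall IELTS score forced to zero, so applications typically branch on that flag. A minimal sketch over the documented speech_score fields (the interface here is hand-written for the example, not the SDK's TranscribeScoreOkResponse type):

```typescript
// Just the documented speech_score fields this sketch needs.
interface SpeechScore {
  transcript: string;
  relevance: { class: 'TRUE' | 'FALSE' };
  ielts_score: { overall: number };
}

// Surface the IELTS overall score, flagging irrelevant responses
// (whose overall score the API auto-zeroes) instead of reporting 0.
function summarize(score: SpeechScore): string {
  if (score.relevance.class === 'FALSE') {
    return 'Response judged irrelevant to the prompt; IELTS overall set to 0.';
  }
  return `IELTS overall: ${score.ielts_score.overall.toFixed(1)}`;
}

console.log(
  summarize({
    transcript: '...',
    relevance: { class: 'TRUE' },
    ielts_score: { overall: 9 },
  }),
); // IELTS overall: 9.0
```

The same branch applies to the other score formats, since the relevance gate is reported once per response.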
