
V1V7OlderVersions_Service

A list of all methods in the V1V7OlderVersions_Service service. Click on the method name to view detailed information about that method.

| Method | Description |
| --- | --- |
| scoreAWordOrSentence1 | Score the pronunciation of a word or sentence, with a quality score for the entire utterance and for each word, syllable, and phoneme. |
| scoreAPhonemeList1 | Score an arbitrary word or phrase expressed as an Arpabet phoneme list, e.g. the term "gotcha" (/g/ao1/ch/ah0). |
| validateText1 | Validate whether all the words in a text exist in the Speechace lexicon before scoring. |
| transcribeScore1 | Transcribe free-speaking audio and score it, with estimated IELTS scores for fluency, pronunciation, grammar, vocabulary, and coherence. |

scoreAWordOrSentence1

In this example we score the pronunciation of a word or sentence. Scoring pronunciation provides a quality score for the speaker's pronunciation for the entire utterance and for each word, syllable, and phoneme. This allows overall activity scoring and pinpointed feedback on pronunciation mistakes. The JSON result for this request includes the following fields:

| Field | Description |
| --- | --- |
| quality_score | An overall pronunciation score for the entire utterance on a scale of 0 to 100. See the guide for detail on the score rubric. |
| word_score_list[] | A list of words in the utterance, each with its own quality_score. |
| syllable_score_list[] | A list of syllables in each word in the word_score_list[], each with its own quality_score. |
| phone_score_list[] | A list of phonemes in each word in the word_score_list[], each with its own quality_score. |
| extent[] | Start and end boundaries of a syllable or phoneme in units of 10 msec. |

  • HTTP Method: POST
  • Endpoint: /api/scoring/text/v0.5/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | ScoreAWordOrSentence1Request | | The request body. |
| key | string | | API key issued by Speechace. |
| dialect | string | | The dialect to use for scoring. Supported values are "en-us" (US English) and "en-gb" (UK English). en-gb requires setting v0.1 in the URL path, i.e. https://api.speechace.co/api/scoring/text/v0.1/json? |
| userId | string | | A unique anonymized identifier for the end-user who spoke the audio. Structure this field to include as much info as possible to aid in reporting and analytics. For example, user_id=XYZ-ABC-99001, where XYZ is an id for your Product or App, ABC is an id for the customer/site/account, and 99001 is an id for the end-user. Ensure user_id is unique and anonymized, containing no personally identifiable information. |
| __ | string | | |

Return Type

ScoreAWordOrSentence1OkResponse

Example Usage Code Snippet

import { ScoreAWordOrSentence1Request, Speechaceapi } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  const input: ScoreAWordOrSentence1Request = {
    includeFluency: '1',
    includeIeltsSubscore: '1',
    questionInfo: "'u1/q1'",
    text: 'Yes, I do. Travel today is vastly different than what it used to be. In the past, a traveller had little idea about what to expect when they arrived at their destination. These days, the internet connects our world in ways previous generations could only dream about. We can instantly review destination information and make travel arrangements. Also, in the past, people could only travel by land or sea and travelling was often long and unsafe.',
    userAudioFile: user_audio_file, // supply the user's audio file here
  };

  const { data } = await speechaceapi.v1V7OlderVersions_.scoreAWordOrSentence1(input, {
    key: '{{speechacekey}}',
    dialect: 'en-us',
    userId: 'XYZ-ABC-99001',
    __: 'sunt ',
  });

  console.log(data);
})();
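The word, syllable, and phoneme score lists described above nest inside one another, and extent values are reported in 10 msec units. As a minimal sketch of consuming that result (the interfaces below are assumptions reconstructed from the field table, not the SDK's typed response), here is one way to pull out per-word scores and convert an extent to seconds:

```typescript
// Hypothetical minimal shape, based on the field table above.
interface PhoneScore { phone: string; quality_score: number; extent: [number, number]; }
interface WordScore { word: string; quality_score: number; phone_score_list: PhoneScore[]; }

// Extents are in units of 10 msec; divide by 100 to get seconds.
function extentToSeconds(extent: [number, number]): [number, number] {
  return [extent[0] / 100, extent[1] / 100];
}

// Collect (word, quality_score) pairs for quick feedback display.
function wordScores(words: WordScore[]): Array<[string, number]> {
  return words.map((w) => [w.word, w.quality_score]);
}

// Mocked data in the assumed shape:
const words: WordScore[] = [
  { word: 'yes', quality_score: 92, phone_score_list: [{ phone: 'y', quality_score: 90, extent: [12, 25] }] },
];
console.log(wordScores(words));                                    // [['yes', 92]]
console.log(extentToSeconds(words[0].phone_score_list[0].extent)); // [0.12, 0.25]
```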

scoreAPhonemeList1

In this example we score the term "gotcha" (/g/ao1/ch/ah0). Since "gotcha" is American vernacular and not a valid dictionary word, we use the phoneme list API to score it. The phoneme list API uses a different URL endpoint and expects the list of phonemes in Arpabet notation. Note that we specify phoneme stress as 0, 1, or 2 per Arpabet notation. This API allows you to score any word or sentence that can be phonetically expressed in Arpabet. Copy the example code and be sure to:

1. Add your Speechace API key.
2. Add a valid file path in the user_audio_file parameter. For example, in curl you would add something like @/tmp/gotcha_16k.wav.

You can download a sample gotcha_16k.wav file here.

  • HTTP Method: POST
  • Endpoint: /api/scoring/phone_list/v0.5/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | ScoreAPhonemeList1Request | | The request body. |
| key | string | | API key issued by Speechace. |
| userId | string | | A unique anonymized identifier for the end-user who spoke the audio. Structure this field to include as much info as possible to aid in reporting and analytics. For example, user_id=XYZ-ABC-99001, where XYZ is an id for your Product or App, ABC is an id for the customer/site/account, and 99001 is an id for the end-user. Ensure user_id is unique and anonymized, containing no personally identifiable information. |
| dialect | string | | The dialect to use for scoring. Supported values are "en-us" (US English) and "en-gb" (UK English). en-gb requires setting v0.1 in the URL path, i.e. https://api.speechace.co/api/scoring/text/v0.1/json? |

Return Type

ScoreAPhonemeList1OkResponse

Example Usage Code Snippet

import { ScoreAPhonemeList1Request, Speechaceapi } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  const input: ScoreAPhonemeList1Request = {
    phoneList: 'g|ao1|ch|ah0',
    questionInfo: "'u1/q1'",
    userAudioFile: user_audio_file, // supply the user's audio file here
  };

  const { data } = await speechaceapi.v1V7OlderVersions_.scoreAPhonemeList1(input, {
    key: '{{speechacekey}}',
    userId: 'XYZ-ABC-99001',
    dialect: 'en-us',
  });

  console.log(data);
})();
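The phone_list value is a pipe-delimited Arpabet string, with optional stress digits 0, 1, or 2 on vowels (as in /g/ao1/ch/ah0). As a small hedged sketch (toPhoneList is a hypothetical helper, and its validation pattern only checks the general Arpabet symbol shape, not the full inventory), here is one way to assemble it:

```typescript
// Join Arpabet phonemes (optionally carrying a trailing stress digit)
// into the pipe-delimited string the phone_list parameter expects.
function toPhoneList(phonemes: string[]): string {
  for (const p of phonemes) {
    // Arpabet symbols here: one or two letters plus an optional stress digit 0-2.
    if (!/^[a-z]{1,2}[0-2]?$/.test(p)) {
      throw new Error(`Not an Arpabet phoneme: ${p}`);
    }
  }
  return phonemes.join('|');
}

console.log(toPhoneList(['g', 'ao1', 'ch', 'ah0'])); // g|ao1|ch|ah0
```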

validateText1

In this example we validate whether all the words in the text exist in the Speechace lexicon. This API allows you to quickly check whether authored content can be scored with Speechace. This is useful at text-authoring time to avoid errors later on. Out-of-lexicon terms can be reported to [email protected] for inclusion, or you can use the phoneme list API as an alternative. Copy the example code and be sure to:

1. Add your Speechace API key.
2. Replace text with the text you wish to validate.
3. Set the dialect parameter to the dialect you will use when scoring. If you are not sure which dialect will be used, validate once using each available dialect.

  • HTTP Method: POST
  • Endpoint: /api/validating/text/v0.5/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | any | | The request body. |
| key | string | | API key issued by Speechace. |
| text | string | | A sentence or sequence of words to validate. |
| dialect | string | | The dialect to use for validation. Default is "en-us". Supported values are "en-us" (US English) and "en-gb" (UK English). |

Return Type

ValidateText1OkResponse

Example Usage Code Snippet

import { Speechaceapi } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  const input = {};

  const { data } = await speechaceapi.v1V7OlderVersions_.validateText1(input, {
    key: '{{speechacekey}}',
    text: '"Validate these words existeee."',
    dialect: 'en-us',
  });

  console.log(data);
})();
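As the description notes, if you don't yet know which dialect will be used at scoring time, validate once per dialect. A minimal sketch of that pattern (dialectsToValidate is a hypothetical helper; the commented SDK call mirrors the example above):

```typescript
// The two dialects the validation endpoint supports.
const SUPPORTED_DIALECTS = ['en-us', 'en-gb'];

// If the scoring dialect is already decided, validate only that one;
// otherwise validate against every supported dialect.
function dialectsToValidate(chosen?: string): string[] {
  return chosen ? [chosen] : SUPPORTED_DIALECTS;
}

// Usage with the SDK (same call shape as the example above):
// for (const dialect of dialectsToValidate()) {
//   const { data } = await speechaceapi.v1V7OlderVersions_.validateText1({}, {
//     key: '{{speechacekey}}',
//     text: 'Validate these words.',
//     dialect,
//   });
//   console.log(dialect, data);
// }

console.log(dialectsToValidate());        // ['en-us', 'en-gb']
console.log(dialectsToValidate('en-us')); // ['en-us']
```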

transcribeScore1

In this example we transcribe free-speaking audio and score the response, providing an estimated IELTS score for each of the following aspects:

- Fluency
- Pronunciation
- Grammar
- Vocabulary
- Coherence

The API accepts the user audio and a relevance context as inputs. The relevance context is typically a question prompt provided to the user, and it is used to assess whether the user's response is relevant or not. Irrelevant answers have the overall IELTS score automatically set to zero. The JSON result for this request includes the following fields:

| Field | Description |
| --- | --- |
| transcript | The speech-to-text transcript of what the user has said. |
| relevance.class | Boolean. Whether the user response is relevant to the relevance context passed to the API. |
| ielts_estimate | An estimate of the IELTS Speaking Fluency of the speaker. |
| ielts_subscore.vocab | An estimate of the IELTS Vocabulary level of the speaker's response. |
| ielts_subscore.grammar | An estimate of the IELTS Grammar level of the speaker's response. |
| ielts_subscore.coherence | An estimate of the IELTS Coherence level of the speaker's response. |
| quality_score | An overall pronunciation score for the utterance on a scale of 0 to 100. See the guide for detail on the score rubric. |
| duration | Total length of speech in seconds. |
| articulation | Total length of articulation (speech minus pauses, hesitations, and non-speech events such as laughter). Excludes beginning silence on the very first segment and ending silence on the very last segment. |
| speech_rate | Speaking rate in syllables per second. |
| articulation_rate | Articulation rate in syllables per second. |
| syllable_count | Count of syllables in this segment. |
| word_count | Count of words in this segment. |
| correct_syllable_count | Count of correctly spoken syllables in this segment. |
| correct_word_count | Count of correctly spoken words in this segment. |
| syllable_correct_per_minute | correct_syllable_count / duration in minutes. |
| word_correct_per_minute | correct_word_count / duration in minutes. |
| all_pause_count | Count of all pauses (filled and unfilled) which are longer than the minimum pause threshold. |
| all_pause_duration | Total duration of all pauses (filled and unfilled) in seconds. |
| all_pause_list[] | A list of all the pauses, with begin/end markers for each in extents of 10 msec. |
| mean_length_run | Mean length of run in syllables between pauses. |
| max_length_run | Max length of run in syllables between pauses. |
| segment_metrics_list[] | A list of segments within the overall text/audio, with the IELTS scores, subscores, and fluency metrics for each segment. |
| word_score_list[] | A list of words in the utterance, each with its own quality_score. |
| syllable_score_list[] | A list of syllables in each word in the word_score_list[], each with its own quality_score. |
| phone_score_list[] | A list of phonemes in each word in the word_score_list[], each with its own quality_score. |
| extent[] | Start and end boundaries of a syllable or phoneme in units of 10 msec. |

  • HTTP Method: POST
  • Endpoint: /api/scoring/speech/v0.5/json

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| body | TranscribeScore1Request | | The request body. |
| key | string | | API key issued by Speechace. |
| dialect | string | | The dialect to use for scoring. Supported values are "en-us" (US English) and "en-gb" (UK English). |
| userId | string | | A unique anonymized identifier for the end-user who spoke the audio. Structure this field to include as much info as possible to aid in reporting and analytics. For example, user_id=XYZ-ABC-99001, where XYZ is an id for your Product or App, ABC is an id for the customer/site/account, and 99001 is an id for the end-user. Ensure user_id is unique and anonymized, containing no personally identifiable information. |

Return Type

TranscribeScore1OkResponse

Example Usage Code Snippet

import { Speechaceapi, TranscribeScore1Request } from 'speechaceapi';

(async () => {
  const speechaceapi = new Speechaceapi({});

  const input: TranscribeScore1Request = {
    includeFluency: '1',
    includeIeltsSubscore: '1',
    includeUnknownWords: '1',
    relevanceContext: 'Describe the healthy streets program and its impact on the residents of Austin Texas.',
    userAudioFile: user_audio_file, // supply the user's audio file here
  };

  const { data } = await speechaceapi.v1V7OlderVersions_.transcribeScore1(input, {
    key: '{{speechace_premiumkey}}',
    dialect: 'en-us',
    userId: 'XYZ-ABC-99001',
  });

  console.log(data);
})();
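The rate fields in the table above are simple ratios over the raw counts and duration. As a hedged sketch of that arithmetic (these helpers restate the field definitions for sanity-checking results; they are not the service's implementation):

```typescript
// speech_rate: syllables per second over the spoken duration.
function speechRate(syllableCount: number, durationSeconds: number): number {
  return syllableCount / durationSeconds;
}

// word_correct_per_minute: correct_word_count divided by duration in minutes.
function wordCorrectPerMinute(correctWordCount: number, durationSeconds: number): number {
  return correctWordCount / (durationSeconds / 60);
}

// e.g. 90 syllables and 45 correctly spoken words over 30 seconds:
console.log(speechRate(90, 30));           // 3 syllables/sec
console.log(wordCorrectPerMinute(45, 30)); // 90 words/min
```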
