
Automatic live captioning for meetings that matter

Bring world-class AI live captioning to your audiences in Teams, Zoom, or even your auditorium to make your meetings more accessible.


World-class AI, refined by the human touch

Make sure the AI doesn't miss hard-to-catch but crucial words like unique names, technical terms, and abbreviations. We use best-in-class speech-to-text technology and customize the engine to the terminology of your event.


Unlock the full potential of live captioning

Engine customization

We customize the ASR engine to capture key names, abbreviations, and phrases that are relevant to your session and your particular subject matter.

Customizing the engine with customer-specific terminology or profanity filters increases accuracy for particularly challenging terms, producing the best ASR captions possible.


Worry-free experience

Project management, live monitoring, and remote support expertise to guarantee a smooth, hassle-free event experience for all users.


30+ languages supported

Our speech-to-text engines support over 30 languages, including the world's most widely spoken.

List of supported languages

Enjoy end-to-end captioning services

From set-up to monitoring and reporting, we help you create a more engaging, memorable, inclusive and accessible event experience for everyone.


Highest compatibility

We can stream AI captions anywhere: your event stage, your online meeting platform, our mobile app, and even the metaverse. 


Transcribe interpretation

If you have interpreters in the meeting, easily transcribe their speech into live text.


AI evaluation

Not all AI engines produce the same results. Our AI experts benchmark the engines available on the market, so you can be confident that you are using the best one for your language combination.


Engine customization

Increase accuracy with our AI customization feature that programs the engine to recognize specific names, acronyms, or industry-specific terminology.

LET'S TALK

Want to learn more?

Book a 15-minute introduction meeting with us today to learn more about how we can help you add automatic captions and translated subtitles to your upcoming meetings.

Frequently Asked Questions

Find more answers in our knowledge base

How does Automatic Speech Recognition (ASR) work?

As a visual aid for following the speech, live audio is transcribed into text through AI-powered Automatic Speech Recognition (ASR), often also referred to as "speech-to-text" technology.

Interprefy Captions are generated from the audio of each speaker (and interpreter, if active) using Automatic Speech Recognition (ASR) technology powered by Artificial Intelligence (AI).

This speech-to-text processing turns the words being spoken directly into text. Just like interpretation, the captions follow as a live transcription slightly after the speaker has delivered their words.

The diagram below shows how ASR works when an English speaker and a Spanish interpreter are connected:

Diagram: how automatic captioning works

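For readers who want a more concrete picture of the flow in the diagram, here is a minimal, purely illustrative Python sketch: audio from the floor (and, if active, the interpreter channel) is fed to a speech-to-text engine chunk by chunk, and the recognized text is emitted as caption segments. The names below (transcribe_chunk, CaptionSegment, caption_stream) are assumptions for illustration, not Interprefy's actual API.

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class CaptionSegment:
    channel: str    # "floor" for the original speaker, "interpreter-es" for the Spanish interpreter
    language: str   # e.g. "en" or "es"
    text: str

def transcribe_chunk(chunk: bytes, language: str) -> str:
    # Placeholder for the ASR engine call: a real deployment would stream the
    # chunk to a speech-to-text service and return the recognized text.
    return ""

def caption_stream(audio_chunks: Iterable[bytes], channel: str, language: str) -> Iterator[CaptionSegment]:
    # Feed each chunk of live audio to the engine and emit caption segments.
    # Captions trail the speaker slightly because the engine needs enough
    # audio context before it can return stable text.
    for chunk in audio_chunks:
        text = transcribe_chunk(chunk, language=language)
        if text:
            yield CaptionSegment(channel=channel, language=language, text=text)

# The floor audio (English speaker) and the interpreter audio (Spanish) are
# simply two independent streams feeding the same pipeline.
floor_captions = caption_stream([], channel="floor", language="en")
interpreter_captions = caption_stream([], channel="interpreter-es", language="es")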

How accurate are Interprefy Captions?

We only support languages that have been tested thoroughly in collaboration with a linguistic partner and meet rigorous quality standards. Our system uses human-curated custom glossaries to add a layer of human refinement to raw ASR output, leading to better accuracy for you and your audience. Audio input factors such as heavy accents or low audio quality can impact captioning accuracy.

What are custom glossaries and how do they work?

Custom glossaries are lists of terms and phrases specific to your sessions that our project team uses to guide the ASR engine so that it produces the words correctly when it 'hears' them. After collecting and compiling key names and terms, professional linguists use their in-depth language knowledge to program phonetic pronunciations into the system. This process makes the automatic live captions more accurate.
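As an illustration only, a custom glossary can be thought of as a small table of terms and pronunciation hints that is turned into the phrase hints most ASR engines accept. The data structure and function names below are assumptions made for this sketch, not Interprefy's actual tooling.

from dataclasses import dataclass

@dataclass
class GlossaryEntry:
    term: str           # how the word should appear in the captions
    phonetic_hint: str  # how speakers are likely to pronounce it

event_glossary = [
    GlossaryEntry(term="Interprefy", phonetic_hint="in-TER-preh-fee"),
    GlossaryEntry(term="ASR", phonetic_hint="ay-ess-ar"),
    GlossaryEntry(term="EBITDA", phonetic_hint="ee-bit-dah"),
]

def build_phrase_hints(glossary: list[GlossaryEntry]) -> list[str]:
    # Many ASR engines accept a plain list of phrases (often called
    # "speech context" or "phrase hints") that biases recognition
    # towards these terms when the audio is ambiguous.
    return [entry.term for entry in glossary]

print(build_phrase_hints(event_glossary))
# ['Interprefy', 'ASR', 'EBITDA']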

How does the speed of Interprefy Captions compare to human-generated live captions?

Interprefy Captions have a similar or even shorter time delay than human-generated live captions. The delay between the audio and the Interprefy Captions is usually around two to four seconds, depending on the preferred setting, whereas human-generated captions are usually delayed by around four to seven seconds.

Interprefy Captions can be enabled in two different modes. By default, the text will appear within four seconds of the speaker having completed a sentence. If 'instant mode' is activated, the text will appear in real time with rapid auto-correction.
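A rough sketch of the difference between the two modes, assuming a hypothetical handler that receives partial and finalized recognition results (the is_final flag and function names are illustrative, not part of any Interprefy API):

def show_caption(text: str) -> None:
    # Stand-in for updating the caption overlay on screen.
    print(text)

def handle_result(text: str, is_final: bool, instant_mode: bool) -> None:
    # Default mode: display only finalized sentences, so the text appears a
    # few seconds after the speaker completes a sentence.
    # Instant mode: display partial results right away and let later,
    # auto-corrected results overwrite them, trading stability for speed.
    if instant_mode or is_final:
        show_caption(text)

# The same utterance in instant mode: partial text first, corrected text after.
handle_result("thank you all for joi", is_final=False, instant_mode=True)
handle_result("Thank you all for joining today.", is_final=True, instant_mode=True)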

How is Interprefy's ASR solution different from others?

We are a language technology company with a strong belief that humans and machines working together lead to outstanding results. We use best-of-breed ASR technology and continuously benchmark existing and new ASR systems to offer customers the solutions best suited to their needs. Our system draws on linguistic expertise to improve these engines and guide them to capture customer-specific terms and phrases that are typically difficult to recognize.

What's more, we provide end-to-end services to support you every step of the way and make sure the process and delivery are as smooth as possible.

Can Interprefy Captions be combined with simultaneous interpretation?

Absolutely. We can deliver simultaneous interpretation alongside live captions, and also turn audio interpretation into text for your audience to follow.

Which languages does Interprefy's ASR solution support?

We can automatically recognize and transcribe speech in over 30 languages. Click here for the latest list of supported languages.