
This startup can soften a speaker's accent, or swap it for another, in real time during calls

Speaking with a regional accent can carry a financial penalty of up to 20% in salary. No accent is better than another, but in the Anglo-Saxon world, not having an American or British accent when speaking English can be grounds for discrimination on certain occasions, such as when seeking a job offer or making sales calls.

Trying to address this issue is Sanas, a US startup that has recently raised a $5.5 million funding round to continue developing its artificial intelligence platform, the first to apply real-time algorithms to "neutralize" accents.

Transforming accents in real time

Sanas is a "real-time translation technology that allows whoever is speaking to do so with the accent they want, without any noticeable delay," the company explains.

Sanas.AI is capable of algorithmically recognizing an accent, such as Spanish, and applying a modification so that the voice on the other end of the call sounds with another specific accent, mainly standard American.


It is software installed locally on the device, which means it does not have to connect to the platform's servers in order to work.

As described, it integrates into the operating system's sound system, so the company believes it can be compatible with any audio or video tool. That said, so far the system has only been tested in a pilot program with thousands of people in the United States, the Philippines, and India. Following its success in various call centers, the company secured the funding round to continue the work.
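The integration described above, a local process sitting in the OS audio path that transforms audio frame by frame, can be sketched in outline. Everything below is a hypothetical illustration: the function names, frame size, and pass-through "model" are assumptions for the sketch, not Sanas code, which is proprietary.

```python
# Hypothetical sketch of frame-by-frame processing in the OS audio path.
# None of these names come from Sanas; the converter here is a no-op stand-in.

FRAME_SIZE = 960  # samples per frame: 20 ms at an assumed 48 kHz rate

def convert_accent(frame):
    """Placeholder for the accent-conversion model: identity pass-through."""
    return list(frame)

def process_stream(samples, frame_size=FRAME_SIZE):
    """Split the incoming stream into fixed-size frames, transform each one,
    and re-emit it, the way a virtual audio device would."""
    out = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        out.extend(convert_accent(frame))
    return out

# A 100 ms burst of silence at the assumed 48 kHz rate:
audio = [0.0] * 4800
processed = process_stream(audio)
print(len(processed))  # 4800: same number of samples out as in
```

Because the transformation happens frame by frame on the local audio path, any application that reads from the system's audio device sees the converted stream, which is consistent with the compatibility claim above.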

By the end of the year they hope to be able to soften and "translate" accents such as American, Spanish, British, Indian, Filipino, and Australian.

According to company data, its system increases the clarity of conversations, has improved fluency by 40%, and has even reduced errors with Google TTS by 20.5%.

The latency introduced by this audio modification is about 200 milliseconds: noticeable compared to a standard call, but far from an insurmountable barrier.
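To put the 200 ms figure in concrete terms, it can be translated into the amount of audio held in flight at a given sample rate. The 48 kHz rate below is an assumption for illustration; Sanas has not published its internal processing rate.

```python
# How much audio a ~200 ms delay holds in flight, at an assumed 48 kHz rate.
SAMPLE_RATE = 48_000   # Hz (assumption, typical for call audio)
LATENCY_MS = 200       # latency figure quoted in the article

delay_samples = SAMPLE_RATE * LATENCY_MS // 1000
print(delay_samples)   # 9600 samples buffered between speaker and listener
```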


As TechCrunch describes, the result still leaves room for improvement: although the original accent is lost, the output sounds somewhat mechanical, and the cadence and personality of the voice are blurred.

Sanas was created by a team of Stanford engineering students and several experts in speech machine learning. "We want to make communication easy and friction-free, so that people can speak confidently and understand each other, wherever they are and whomever they are trying to communicate with," explains Maxim Serebryakov, founder of the tool.