UXShareLab… everything you need to know about UX and more…
for the user experience design community


Blindness and multilingual text messaging

BACKGROUND:

I have long wondered whether it is practical, or realistic, to think that blind people might want to use, or might have a need for, text messaging, as opposed to voice messaging.

After watching a few videos, it became apparent to me that most blind people, in many situations, might prefer voice messaging.

But that might just be because the technology to conveniently compose and send text messages via voice input, perhaps also including voice-driven messaging app selection and blindness support on the part of such text messaging apps, is not yet mature.

RATIONALE:

Text messages are more impersonal. Sometimes we may want to communicate while omitting details related to our voice: for instance, we might be feeling sad, or have a permanent or temporary voice handicap, but we might also be angry, or simply want to follow the “keep it simple” philosophy. We might not know who we are talking to, and want to figure out some common ground on a rational level before getting down to communicating on an emotional level. Or we might want our partner to “think more”, by having them focus on our words rather than on our tone of voice.

I don’t know for sure, and it would be interesting to find out, but my intuition tells me that these basic principles apply to blind people as well.

But only once the technology is there will we know how this works in practice, and how we can improve such «text messaging for the blind» technology to make it more effective.

THE PROBLEM:

Besides the background and rationale for this post, here is the deeper problem I want to address with this question.

A blind person could be multilingual, and I am sure several are. Suppose they receive a text message. How does the TTS system (which would consist of one or more TTS subsystems for each language the user speaks or wants their phone to speak, with a default TTS voice for each language) know what language the message is in?

Even with Unicode messages (luckily we live in a Unicode texting era), you don’t know the language, or languages, of the encoded text, and without this information the reading of the text will be unintelligible to the user. I’ve tried it, and despite my level of proficiency I could not make out a single word when the message was read in the wrong language.
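
To make the problem concrete, here is a minimal sketch of the statistical guessing a TTS front-end is forced into today when a message carries no language information. It assumes the third-party langdetect package is installed; the function name guess_tts_language is purely illustrative, not an existing API.

```python
# Minimal sketch: Unicode tells us the *script* of each character, but not
# the *language*, so a TTS front-end can only guess statistically.
# Assumes: pip install langdetect (third-party library).
from langdetect import detect, DetectorFactory
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make the statistical detector deterministic


def guess_tts_language(message: str, default_lang: str = "en") -> str:
    """Guess which TTS voice to use for an untagged text message."""
    try:
        return detect(message)  # e.g. "en", "pt", "de"
    except LangDetectException:
        return default_lang     # emoji-only or empty messages

# Short, ambiguous messages are exactly where this breaks down:
print(guess_tts_language("A casa é azul."))  # likely "pt"
print(guess_tts_language("ok, 10h"))         # anyone's guess
```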

QUESTION:

How would you address this issue? Here are two possible solutions:

Solution I: design, in some Unicode plane, a set of language-code byte sequences, which would work as “escape sequences”, signalling to the TTS system (and subsystems) what language the text that follows is in.

With this solution, when the user desires to do so, these special byte sequences are input at the beginning of the text they enter on the keyboard, as well as whenever the user switches language at the keyboard interface. When using voice to send text, either some AI figures out from the voice what language is being spoken, or the user can dictate the special escape sequences, to be inserted into the text.
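
As it happens, Unicode already set aside a block of invisible “tag characters” in Plane 14 (U+E0001 LANGUAGE TAG followed by tag copies of ASCII at U+E0020–U+E007E) for exactly this kind of in-band language tagging, although it has since been deprecated for that purpose. Here is a minimal sketch of Solution I, assuming we reused or re-specified that block; the function names are mine and purely illustrative:

```python
# Sketch of Solution I using Unicode's Plane 14 tag characters.
# Note: Unicode deprecated these for language tagging, so a real design
# might instead reserve a private-use sequence; the mechanism is the same.
LANGUAGE_TAG = "\U000E0001"
TAG_BASE = 0xE0000  # tag character = ASCII code point + TAG_BASE


def tag_language(text: str, lang: str) -> str:
    """Prepend an invisible in-band language tag, e.g. 'pt-BR', to a message."""
    tag = "".join(chr(TAG_BASE + ord(c)) for c in lang.lower())
    return LANGUAGE_TAG + tag + text


def split_language(message: str, default_lang: str = "en"):
    """Recover (language, plain text) on the receiving/TTS side."""
    if not message.startswith(LANGUAGE_TAG):
        return default_lang, message
    i = 1
    lang_chars = []
    while i < len(message) and 0xE0020 <= ord(message[i]) <= 0xE007E:
        lang_chars.append(chr(ord(message[i]) - TAG_BASE))
        i += 1
    return "".join(lang_chars), message[i:]


print(split_language(tag_language("A casa é azul.", "pt")))
# -> ('pt', 'A casa é azul.')
```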

It is even possible, since some keyboards allow you to type in two (or perhaps even more) languages without switching keyboards, that there could be special Unicode language-setter keys on the keyboard, provided the keyboard was designed this way; the special language keys could be visible, to make checking the message (and reading it back) clearer. There could also be special Unicode characters to indicate that the following text is to be read out letter by letter, “spelling-wise”, if the blind (or sighted) user so desired.

Solution II: telephone providers lower the costs associated with sending MMS messages (as opposed to SMS messages), and a special file format with language codes and possibly “voice quality/emotion codes” is sent. The MMS message file could also combine audiobook-like portions, for those portions of text where we did want to send some personalized sounds or voice, just to make the message more interesting (and I can see this working well for both blind and non-blind users who wanted to get semi-personal).
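
Here is a purely hypothetical sketch of what such an MMS payload could look like, with every field name invented for illustration rather than taken from any existing standard (a real design might instead build on SSML, which already supports per-fragment language and prosody hints):

```python
import json

# Hypothetical Solution II payload: a list of segments, each carrying its own
# language code, optional delivery hints, or a reference to an embedded clip.
message = {
    "version": 1,
    "segments": [
        {"type": "text", "lang": "en", "emotion": "neutral",
         "content": "Hi! Here is the address you asked for:"},
        {"type": "text", "lang": "pt-BR", "spell_out": True,
         "content": "Rua das Flores 27"},
        {"type": "audio", "mime": "audio/amr",
         "content_ref": "part-2.amr",  # clip embedded elsewhere in the MMS body
         "transcript": "personal voice note"},
    ],
}

print(json.dumps(message, ensure_ascii=False, indent=2))
```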

So, my question is: how would you solve the problem of multilingual/multimedia text message sending, and how would you design the entire encompassing voice system to make it accessible, usable, and fun to use for blind people?

Thanks.


Why is it that all caps text looks like SHOUTING, but all caps handwriting is easier to read?

Specifically for UI: handwritten mockups, comics, etc. are always better in all caps, but the same is never true for printed text. Why is that?

Visual Studio 2012’s all-caps menus are generally despised.
http://blogs.msdn.com…





UXShareLab. Copyright © 2018. All rights reserved.
