Using an on/off switch with a Save button
Quick question: I am debating this with a fellow UXA and would love to hear your thoughts.
Does it make sense to pair an on/off switch with a save action?
Toggling the switch would enable the save button, which would then trigger a confirmation modal.
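To make the debated interaction concrete, here is a minimal TypeScript sketch of one way it could behave; the showConfirmationModal helper and the state shape are hypothetical and not taken from any actual implementation.

```typescript
// Minimal sketch: an on/off switch whose change is only persisted via Save,
// with a confirmation modal before the change is committed.
interface SettingsFormState {
  featureEnabled: boolean; // current position of the on/off switch
  savedValue: boolean;     // last persisted value
}

const state: SettingsFormState = { featureEnabled: false, savedValue: false };

// The Save button is only actionable once the switch differs from what is saved.
function isSaveEnabled(s: SettingsFormState): boolean {
  return s.featureEnabled !== s.savedValue;
}

function onToggle(): void {
  state.featureEnabled = !state.featureEnabled;
  // The UI would re-render here, enabling or disabling the Save button.
}

// `showConfirmationModal` is a hypothetical helper that resolves to the user's choice.
async function onSave(
  showConfirmationModal: (message: string) => Promise<boolean>
): Promise<void> {
  if (!isSaveEnabled(state)) return;
  const confirmed = await showConfirmationModal(
    `Turn this feature ${state.featureEnabled ? "on" : "off"}?`
  );
  if (confirmed) {
    state.savedValue = state.featureEnabled; // persist only after confirmation
  } else {
    state.featureEnabled = state.savedValue; // revert the switch on cancel
  }
}
```

With this model the switch alone never commits anything: Save stays disabled until the switch differs from the persisted value, and cancelling the modal reverts the switch.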
How to Display Pass/Fail/Missing Data
I’m trying to create a better experience in a mobile application that tracks a user’s fitness activity. When looking at a calendar view, the user will see 1 of 3 options:
A Colored Dot – Indicates the user reached their goal
A C…
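Going by the question title, the three states are pass, fail, and missing data. A rough TypeScript sketch of the mapping is below; the indicators chosen for “fail” and “missing” are assumptions, since only the colored dot for a reached goal is described above.

```typescript
// Sketch: mapping a day's tracked state to a calendar indicator.
type DayStatus = "pass" | "fail" | "missing";

interface DayIndicator {
  symbol: string;          // what to render in the calendar cell
  accessibleLabel: string; // label for screen readers
}

function indicatorFor(status: DayStatus): DayIndicator {
  switch (status) {
    case "pass":
      return { symbol: "●", accessibleLabel: "Goal reached" };
    case "fail": // assumed treatment, not from the question
      return { symbol: "○", accessibleLabel: "Goal not reached" };
    case "missing": // assumed treatment, not from the question
      return { symbol: "·", accessibleLabel: "No data recorded" };
  }
}
```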
Blindness and multilingual text messaging
BACKGROUND:
I have long wondered whether it is practical, or realistic, to think that blind people might want to use, or might have a need for, text messaging, as opposed to voice messaging.
After watching a few videos, it became apparent to me that most blind people, in many situations, might prefer voice messaging.
But that might just be because the technology to conveniently compose and send text messages via voice input, perhaps also including voice-driven messaging-app selection and blindness support within such text messaging apps, is not yet mature.
RATIONALE:
Text messages are quite impersonal. At times, we may want to communicate while omitting the details carried by our voice: we might be feeling sad, have a permanent or temporary voice impairment, or be angry, and we might simply want to follow the “keep it simple” philosophy. We might not know who we are talking to, and want to find some common ground on a rational level before getting down to communicating on an emotional level. Or we might want our partner to “think more”, by having them focus on our words rather than on our tone of voice.
I don’t know, and it would be interesting to find out, but my intuition tells me that these principles apply to blind people as well.
But only once the technology is there will we know how this works in practice, and how we can improve such «text messaging for the blind» technology to make it more effective.
THE PROBLEM:
Beyond the background and rationale above, here is the deeper problem I want to address with this question.
A blind person could be multilingual, and I am sure many are. Suppose they receive a text message. How does the TTS system (which would consist of one or more TTS engines for each language the user speaks or wants their phone to speak, with a default TTS voice for each language) know what language the message is in?
Even with Unicode messages (luckily, we live in a Unicode texting era), you don’t know the language, or languages, of the encoded text, and without this information the reading of the text will be unintelligible. I have tried it, and despite my level of proficiency I could not make out a single word when a message was read in the wrong language.
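To illustrate why this is hard without metadata: a crude heuristic over the text can at best narrow it down to a script, not a language, so Latin-script languages such as English and Italian remain ambiguous. A purely illustrative TypeScript sketch, with assumed code-point ranges and no claim to completeness:

```typescript
// Guess the dominant script of a message, as a weak fallback for picking a
// default TTS voice when no language metadata is available.
function guessScript(text: string): "latin" | "cyrillic" | "cjk" | "unknown" {
  let latin = 0, cyrillic = 0, cjk = 0;
  for (const ch of text) {
    const cp = ch.codePointAt(0)!;
    if (cp >= 0x0041 && cp <= 0x024f) latin++;         // Latin and Latin Extended
    else if (cp >= 0x0400 && cp <= 0x04ff) cyrillic++; // Cyrillic
    else if (cp >= 0x4e00 && cp <= 0x9fff) cjk++;      // CJK Unified Ideographs
  }
  const max = Math.max(latin, cyrillic, cjk);
  if (max === 0) return "unknown";
  if (max === latin) return "latin";      // could still be English, Italian, ...
  if (max === cyrillic) return "cyrillic";
  return "cjk";
}
```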
QUESTION:
How would you address this issue:
Solution I: design, in some Unicode plane, a set of language-code byte sequences which would work as “escape sequences”, signalling to the TTS system (and its subsystems) what language the text that follows is in.
With this solution, when the user desires to do so, these special byte sequences are inserted at the beginning of the text they type on the keyboard, as well as whenever they switch language at the keyboard interface. When using voice to compose a text, either some AI works out from the voice what language is being spoken, or the user can dictate special escape sequences to be inserted into the text.
It might even be possible, since some keyboards allow you to type in two (or perhaps even more) languages without switching keyboards, to add special Unicode language-setter keys to the keyboard, provided the keyboard was designed this way; the special language keys could be visible, to make checking the message (and reading it back) clearer. There could also be special Unicode characters to indicate that the following text is to be read “spelling-wise”, if the blind (or sighted) user so desired.
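As a point of reference for Solution I, Unicode already reserves tag characters in Plane 14 (U+E0001 LANGUAGE TAG followed by tag letters U+E0020–U+E007E) that were originally intended for this kind of in-band language tagging, although the mechanism is now deprecated for plain text. The TypeScript sketch below uses those characters purely to illustrate the idea of in-band language escapes; it is not an existing messaging feature.

```typescript
// Mark a span of text with an in-band language escape, then split a tagged
// message back into (language, text) spans that a TTS front end could route
// to per-language engines.
const LANGUAGE_TAG = 0xe0001; // Unicode LANGUAGE TAG (deprecated)
const TAG_OFFSET = 0xe0000;   // tag letter = ASCII code point + TAG_OFFSET

function tagSpan(lang: string, text: string): string {
  const tagLetters = [...lang]
    .map((c) => String.fromCodePoint(c.codePointAt(0)! + TAG_OFFSET))
    .join("");
  return String.fromCodePoint(LANGUAGE_TAG) + tagLetters + text;
}

interface Span { lang: string; text: string }

function parseSpans(message: string, defaultLang = "und"): Span[] {
  const spans: Span[] = [];
  const cps = [...message]; // iterate by code point, not UTF-16 unit
  let lang = defaultLang;
  let buffer = "";
  for (let i = 0; i < cps.length; i++) {
    const cp = cps[i].codePointAt(0)!;
    if (cp === LANGUAGE_TAG) {
      if (buffer) spans.push({ lang, text: buffer });
      buffer = "";
      let tag = "";
      // Collect the tag letters that follow the LANGUAGE TAG character.
      while (i + 1 < cps.length) {
        const next = cps[i + 1].codePointAt(0)!;
        if (next < 0xe0020 || next > 0xe007e) break;
        tag += String.fromCodePoint(next - TAG_OFFSET);
        i++;
      }
      lang = tag || defaultLang;
    } else {
      buffer += cps[i];
    }
  }
  if (buffer) spans.push({ lang, text: buffer });
  return spans;
}

// parseSpans(tagSpan("en", "Hello! ") + tagSpan("it", "Ciao!"))
// → [{ lang: "en", text: "Hello! " }, { lang: "it", text: "Ciao!" }]
```

A TTS front end could then hand each span to the engine for that span’s language, falling back to the default voice when no tag is present.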
Solution II: telephone providers lower the costs associated with sending MMS messages (as opposed to SMS messages), and a special file format carrying language codes and possibly “voice quality/emotion codes” is sent. The MMS file could also combine audiobook-like recorded portions with the text, for those parts where we do want to send personalized sounds or voice, just to make the message more interesting (and I can see this working well for both blind and non-blind users who want to get semi-personal).
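For Solution II, the payload might look something like the structure below. Every field name is an assumption made for illustration; it is not an existing MMS or carrier format.

```typescript
// Hypothetical rich-message format mixing language-tagged text and recorded audio.
interface RichMessagePart {
  kind: "text" | "audio";
  lang?: string;                                     // BCP 47 tag such as "en" or "it"
  emotion?: "neutral" | "sad" | "angry" | "playful"; // optional voice/emotion hint
  text?: string;                                     // present when kind === "text"
  audioUrl?: string;                                 // present when kind === "audio"
}

interface RichMessage {
  defaultLang: string;
  parts: RichMessagePart[];
}

const example: RichMessage = {
  defaultLang: "en",
  parts: [
    { kind: "text", lang: "en", emotion: "playful", text: "Guess what?" },
    { kind: "audio", audioUrl: "recording-0001.amr" }, // a personal voice snippet
    { kind: "text", lang: "it", text: "A presto!" },
  ],
};
```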
So, my question is: how would you solve the problem of multilingual/multimedia text message sending, and how would you design the entire encompassing voice system to make it accessible, usable, and fun to use for blind people?
Thanks.
Is it a strong anti-pattern to use a Floating Action Button in an iOS app?
Floating Action Buttons – invented by the Google Material Design Team – are rather popular on Android. Apple’s closest equivalents are perhaps an action icon at top-right or the bottom Tool Bar pattern – which won’t play nicel…
Web-design: "Paging" versus "Dynamic content loading" [duplicate]
This question already has an answer here:
Is infinite scrolling justifiable?
I apologize for the badly formed title, bu…
Default Date for Birthdate
When the user is asked for their date of birth with a Date Picker on mobile, what should be the default date shown on the Date Picker? The current date makes no sense to me, as it is probably too far from the date the user has to sele…
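One common approach, and a judgment call rather than anything stated in the question, is to anchor the default a plausible distance in the past so that most users mainly adjust the year. A small TypeScript sketch, where the 25-year offset is an assumption that depends on the product’s audience:

```typescript
// Default the birthdate picker to 1 January, N years before today.
function defaultBirthdate(assumedAgeYears = 25, today: Date = new Date()): Date {
  return new Date(
    today.getFullYear() - assumedAgeYears,
    0, // January (months are 0-indexed)
    1  // the 1st, so the year is usually the only wheel the user scrolls far
  );
}

// e.g. called in 2024, this starts the picker at 1 January 1999.
```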