UXShareLab… everything you need to know about UX and more…
for the user experience design community


How to emphasize/mark one option over the other?

I’m attaching a screenshot of the current section.

The left option is the free option (obviously), while the right one is the paid option, marked with a “locked” emoji.

I’m trying to increase the “awareness” of the right (pai…


Naming convention for attributes/functions/menus

In programming there are conventions such as PascalCase and camelCase, among others.

In the SIGCHI Conference Proceedings Format, subsections and sub-subsections start with initial letters capitalized, but a word like “the” or “of” is no…
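The heading style the excerpt describes can be sketched as a small function. This is a rough sketch, not the official SIGCHI rule: the list of minor words to keep lowercase is an assumption, and `heading_case` is a made-up name.

```python
# Sketch: title-case a heading while keeping short function words lowercase,
# similar in spirit to the SIGCHI heading style mentioned in the post.
# The MINOR_WORDS set is an assumption, not the official list.
MINOR_WORDS = {"the", "of", "a", "an", "and", "or", "in", "on", "for", "to"}

def heading_case(title: str) -> str:
    words = title.lower().split()
    out = []
    for i, w in enumerate(words):
        # First and last words are always capitalized.
        if i == 0 or i == len(words) - 1 or w not in MINOR_WORDS:
            out.append(w.capitalize())
        else:
            out.append(w)
    return " ".join(out)
```

For example, `heading_case("the power and danger of persuasive design")` yields "The Power and Danger of Persuasive Design".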


In need of best font colour [on hold]

Can someone help with the font colours for my poster, which has an image combining white, shadow, and different shades of black?


Reasons for preferring listening to "up to date" podcast episodes

Not sure this makes a good post for this site, but I am looking for use cases that would keep the podcasting model viable nowadays, in view of the following considerations. I wanted to know, from a usability pe…


Survey tool to ask questions on individual pages - what are they called?

Is there an off-the-shelf survey tool that can ask simple questions on specific pages? Along the lines of the “Was this information helpful?” on Microsoft’s support pages (example). The user gets asked a simple question and i…


Switch on/off use with button

Quick question: I am debating with a fellow UXA and would love to hear your thoughts.

Does it make sense to have an on/off switch paired with a save button?
Toggling the switch would enable the save button, which then triggers a confirmation modal.
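The interaction being debated can be modelled as a tiny state machine. This is one possible sketch of the flow described above, not a recommendation either way; the class and method names are made up for illustration.

```python
# Sketch of the debated flow: toggling the switch enables Save, and
# pressing Save opens a confirmation modal before anything is committed.
from dataclasses import dataclass

@dataclass
class SettingsForm:
    switch_on: bool = False
    save_enabled: bool = False
    modal_open: bool = False
    saved_value: bool = False

    def toggle_switch(self) -> None:
        self.switch_on = not self.switch_on
        # Any unsaved change enables the Save button.
        self.save_enabled = self.switch_on != self.saved_value

    def press_save(self) -> None:
        if self.save_enabled:
            self.modal_open = True  # ask for confirmation first

    def confirm(self) -> None:
        if self.modal_open:
            self.saved_value = self.switch_on
            self.modal_open = False
            self.save_enabled = False
```

Writing it out this way makes the debate concrete: the modal adds a third step between intent (toggle) and commit (save), which is the cost being weighed against accidental changes.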


How to Display Pass/Fail/Missing Data

I’m trying to create a better experience in a mobile application that tracks a user’s fitness activity. When looking at a calendar view, the user will see 1 of 3 options:

A Colored Dot – Indicates the user reached their goal
A C…
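A day-to-glyph mapping like the one being described can be sketched as a lookup table. Note that only the first state (colored dot = goal reached) comes from the post; the excerpt is cut off, so the other two state names and glyphs below are guesses for illustration.

```python
# Sketch: mapping each calendar day to one of three assumed states.
# "goal_met" is from the post; "goal_missed" and "no_data" are
# hypothetical labels, since the original excerpt is truncated.
STATUS_GLYPH = {
    "goal_met": "●",     # colored dot: user reached their goal
    "goal_missed": "○",  # hypothetical: open dot for a miss
    "no_data": "-",      # hypothetical: dash for missing data
}

def render_week(days: list[str]) -> str:
    """Render a row of day statuses as calendar glyphs."""
    return " ".join(STATUS_GLYPH.get(d, "?") for d in days)
```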



Blindness and multilingual text messaging

BACKGROUND:

I have long wondered whether it is practical, or realistic, to think that blind people might want to use, or might have a need for, text messaging, as opposed to voice messaging.

After watching a few videos, it became apparent to me that most blind people, in many situations, might prefer voice messaging.

But that might be just because the technology to conveniently compose and send text messages via voice input, perhaps also including voice messaging app selection and blindness support on behalf of such text messaging apps, is not yet mature.

RATIONALE:

Text messages are quite impersonal. Often, we may want to communicate while omitting details carried by our voice: we might be feeling sad, have a permanent or temporary voice handicap, or be angry, and might want to follow the “keep it simple” philosophy. We might not know who we are talking to, and want to establish some common ground on a rational level before getting down to communicating on an emotional level. Or we might want our partner to “think more”, by having them focus on our words rather than on our tone of voice.

I don’t know, and it would be interesting to know, but my basic intuition seems to be telling me that these basic principles apply to blind people as well.

But only once the technology is there will we know how this works in practice, and how we can improve such «text messaging for the blind» technology to make it more effective.

THE PROBLEM:

Besides the background and rationale for this post, here is the deeper problem I want to address with this question.

A blind person could be multilingual, and I am sure several are. Suppose they receive a text message. How does the TTS system (which would consist of one or more TTS engines, one for each language the user speaks or wants their phone to speak, with a default TTS per language) know what language the message is in?

Even with Unicode messages (luckily we live in a Unicode texting era), you don’t know the language, or languages, of the encoded text, and without this information the reading of the text will be unintelligible to all users. I’ve tried it, and despite my level of proficiency I could not make out a single word when the message was read in the wrong language.
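As an aside, Unicode’s Plane 14 once reserved “tag characters” (U+E0001 plus U+E0020–U+E007E) for roughly this kind of inline language tagging, though they are now deprecated for that use. Absent explicit tags, a TTS front end can only guess. Below is a minimal Python sketch of such a guess, using the character script as a crude stand-in for language; the voice names and the script-to-voice mapping are made up for illustration, and a real system would use a trained language identifier.

```python
# Sketch: pick a TTS voice for an incoming message with no language
# metadata, by majority vote over the Unicode script of its letters.
import unicodedata

# Hypothetical mapping from a coarse script guess to a default TTS voice.
VOICE_FOR_SCRIPT = {
    "LATIN": "en-US-voice",     # could equally be es-ES, it-IT, ...
    "CYRILLIC": "ru-RU-voice",
    "GREEK": "el-GR-voice",
    "CJK": "zh-CN-voice",
}

def guess_script(text: str) -> str:
    """Very rough script detector: majority vote over character names."""
    counts: dict = {}
    for ch in text:
        if not ch.isalpha():
            continue
        name = unicodedata.name(ch, "")
        if name.startswith("CJK"):
            script = "CJK"
        else:
            script = name.split()[0] if name else "UNKNOWN"
        counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "UNKNOWN"

def pick_voice(message: str) -> str:
    return VOICE_FOR_SCRIPT.get(guess_script(message), "en-US-voice")
```

The limitation of this approach is exactly the one the post’s escape sequences would solve: a script is not a language, so English, Spanish, and Italian all look like “LATIN” here and would get the same voice.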

QUESTION:

How would you address this issue?

Solution I: design, in some Unicode plane, a set of language-code byte sequences that would work as “escape sequences”, signalling to the TTS system (and subsystems) what language the text that follows is in.

With this solution, when the user desires it, these special byte sequences are inserted at the beginning of the text as they type, as well as whenever the user switches language at the keyboard interface. When using voice to send text, either some AI figures out from the voice what language is being spoken, or the user can dictate special escape sequences to be inserted into the text.

It is even possible, since some keyboards allow you to type in two (or perhaps more) languages without switching keyboards, that there could be special Unicode language-setter keys on the keyboard, provided the keyboard was designed this way; the special language keys could be visible, to make checking the message (and reading it back) clearer. There could also be special Unicode characters to indicate that the following text is to be read out letter by letter, if the blind (or sighted) user so desired.

Solution II: telephone providers lower the costs associated with sending MMS messages (as opposed to SMS messages), and a special file format with language and possibly “voice quality/emotion” codes is sent. The MMS file could also combine audiobook-like portions, for those parts of the text where we want to send some personalized sounds or voice, just to make the message more interesting (and I can see this working well for both blind and non-blind users who want to get semi-personal).

So, my question is: how would you solve the problem of multilingual/multimedia text message sending, and how would you design the entire encompassing voice system to make it accessible, usable, and fun to use by blind people?

Thanks.





UXShareLab. Copyright © 2018. All rights reserved.
