Intelligent User Interface Designing for Better Communication
The age we live in is often called the era of the smartphone, and the number of social media users grows every second. People across the globe communicate with each other for all kinds of reasons through instant messages. Statistics show that mobile usage is growing at a rapid rate: roughly one million new active mobile social users are added every day, about 2.1 billion people worldwide have social media accounts, and 1.7 billion access social networks from a mobile device. A business of 100 employees spends, on average, 17 hours a week clarifying poor communication.
One problem in particular can result in huge losses: miscommunication, where the receiver is unable to understand what the sender intended to say.
In the workplace, 57% of projects fail due to communication breakdown, and $37 billion is lost yearly to employee misunderstandings. A major cause of miscommunication in text messaging is the absence of visual cues.
Nonverbal behavior, i.e. gestures, facial displays, body posture and movement, plays an important role in face-to-face communication. It is well understood that the lack of aural and visual cues leads to misinterpretation of words. Further damage is done by assumptions, where the sender is so sure that the other person will perceive a message exactly as intended. Receivers contribute to miscommunication as well: the meaning they take from a message is shaped by their mood at that moment, by their relationship with the sender and the image of the sender in their mind, and by good old stereotypes.
To solve this issue of miscommunication, we propose an intelligent user interface: an Android application whose operation is based on machine learning. The application has three components: a user interface, cloud storage, and a machine learning model.
The cloud storage holds the messages in a cloud database along with their associated emotions. The user interface renders color-coded messages, where each color represents a specific emotion. The machine learning model runs inference locally to predict the user's emotion from the camera feed.
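The record stored in the cloud database pairs each message with its predicted emotion. A minimal sketch of such a record is below; the class and field names (`ChatMessage`, `sender_id`, etc.) are hypothetical, since the source does not specify the schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class ChatMessage:
    """One message as stored in the cloud database: the text
    plus the emotion predicted on the sender's device."""
    sender_id: str
    text: str
    emotion: str        # predicted label, e.g. "happy"
    timestamp_ms: int

    def to_record(self) -> dict:
        # Serialize for upload to the cloud database
        return asdict(self)

msg = ChatMessage("user_42", "See you at 5", "happy", 1700000000000)
record = msg.to_record()
```

The receiver's client reads the same record back and uses the `emotion` field to pick the message's background color.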
We incorporate color theory with emotion: a specific color is assigned to each emotion. The Android application acquires frames from the camera, then performs face detection and cropping to remove background clutter. Each cropped image is passed to the machine learning model, which runs locally on the user's device, to perform inference. Inference is run over five consecutive frames and the results are averaged to avoid misclassifications that may occur due to transitioning frames and other factors. The final emotion is sent along with the message to the cloud database, and from there it is delivered to the receiver. Since the receiver knows which colors are assigned to which emotions, he or she receives the verbal information from the words and the facial-emotion information from the background color of the message.
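The five-frame averaging step can be sketched as follows. This is a minimal illustration, not the app's actual code: it assumes the model emits a per-class probability vector for each frame, and the label set here is an example.

```python
import numpy as np

# Example label set; the actual set depends on the trained model.
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

def smooth_prediction(frame_probs):
    """Average per-class probabilities over consecutive frames,
    then take the argmax as the final emotion. Averaging damps
    outlier frames (e.g. a face mid-transition between expressions)."""
    probs = np.asarray(frame_probs)      # shape: (n_frames, n_classes)
    mean_probs = probs.mean(axis=0)
    return EMOTIONS[int(mean_probs.argmax())]

# Five per-frame probability vectors; the third frame is a
# transient misread during an expression change.
frames = [
    [0.05, 0.80, 0.10, 0.03, 0.02],
    [0.04, 0.85, 0.06, 0.03, 0.02],
    [0.60, 0.20, 0.10, 0.05, 0.05],  # transitioning frame
    [0.05, 0.78, 0.12, 0.03, 0.02],
    [0.06, 0.82, 0.07, 0.03, 0.02],
]
final_emotion = smooth_prediction(frames)  # the outlier is averaged out
```

Averaging probabilities (rather than taking a per-frame majority vote) keeps information about how confident each frame's prediction was, which is one common way to realize the smoothing the text describes.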
By incorporating color theory to show conversational context and assigning a color to every emotion, the receiver is in a much better position to interpret the message from the sender. The idea has many other uses as well, for example in automatic feedback systems, security systems, and more.
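The emotion-to-color assignment can be as simple as a lookup table. The palette below is a hypothetical placeholder; the actual colors would be chosen according to the color-theory design the text mentions.

```python
# Hypothetical palette: each emotion maps to a light background
# color (hex RGB) for the message bubble.
EMOTION_COLORS = {
    "happy":     "#FFF9C4",  # light yellow
    "sad":       "#BBDEFB",  # light blue
    "angry":     "#FFCDD2",  # light red
    "neutral":   "#EEEEEE",  # light gray
    "surprised": "#E1BEE7",  # light purple
}

def bubble_color(emotion: str) -> str:
    """Return the background color for a message bubble,
    falling back to neutral gray for unknown labels."""
    return EMOTION_COLORS.get(emotion, EMOTION_COLORS["neutral"])
```

Light, desaturated backgrounds keep the message text readable while still signaling the sender's emotion at a glance.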