The following are notes on observations of AI design patterns across prominent mobile apps today, their applications, and their implications for addressing health-related problems.
Acquiring feedback is key to improving a product's services and to better tailoring it to, and evolving with, users' needs.
Acquiring feedback from users is a significant AI pattern seen across many major mobile applications. It allows companies to learn from the data collected and make necessary changes to their platform to improve user experience and ultimately serve their bottom line (e.g. profit based on increased and consistent engagement). This suggests that user feedback is a significant driving force for what the company offers, the services it provides, and how it evolves over time. A successful AI design pattern needs to allow for the evolving needs and challenges of users. Users, after all, are dynamic, and their behaviors are often a product of the environment and culture that surround them. For example, Facebook lets users personalize and curate their newsfeed by reporting what they find offensive and choosing to see less of a particular kind of ad or someone's off-tune memes.
How do companies account for feedback while they grow and attract new users? Gathering feedback is a common pattern applied to first-time users (e.g. Netflix, LinkedIn, Pinterest) as a way to provide them more relevant content from the get-go. The assumption is that users are more likely to be engaged when they are presented with relevant content from the beginning. This feedback can also teach the platform, over time, what is most and least relevant to the user, learnings the user can see in action when they filter content for relevance (YouTube). With enough feedback and time, some companies even offer predictions about what may happen when the user or site exhibits certain patterns and behaviors, as in the case of stocks (Robinhood), what the price of airline tickets could be 3 weeks from now (Google Flights), and how busy a place might be at 5pm (Google Maps). Though AI may seem to ease some anxiety when it comes to planning things in advance or curating the perfect newsfeed, it's important to note that the results of the AI are dynamic, meaning that AI design patterns should both ensure control for the user and help them be at ease with the inevitable non-zero amount of uncertainty.
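The prediction-with-uncertainty idea above can be sketched in miniature. The naive forecast below (made-up ticket prices, not any company's actual model) projects a recent trend forward and attaches a widening uncertainty band, making that non-zero uncertainty explicit rather than hiding it:

```python
def forecast_with_band(history, horizon=3):
    """Naive forecast: project the recent average change forward and
    attach a widening uncertainty band based on observed volatility."""
    changes = [b - a for a, b in zip(history, history[1:])]
    avg_change = sum(changes) / len(changes)
    # The worst deviation from the trend so far bounds our confidence.
    spread = max(abs(c - avg_change) for c in changes)
    last = history[-1]
    out = []
    for step in range(1, horizon + 1):
        point = last + avg_change * step
        # The band widens the further out we predict.
        out.append((point, point - spread * step, point + spread * step))
    return out

# Hypothetical daily airline ticket prices.
prices = [200, 210, 205, 215]
predictions = forecast_with_band(prices, horizon=2)
```

Surfacing the band, not just the point estimate, is one concrete way a design can help users sit with uncertainty instead of overtrusting a single number.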
We should expect outputs to be dynamic and hold ourselves accountable for outputs that are incorrect, misleading, or off-base.
Another very common AI pattern is displaying suggestions and recommendations catered to the user based on the feedback and data the company has already acquired from them. Reasons for particular suggestions vary: some are based on who you know (LinkedIn), activity and interests (Facebook), purchase history (Amazon), or geography (Airbnb).
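A minimal sketch of how such signals might rank content, using made-up interest tags rather than any platform's real algorithm:

```python
def score_items(user_interests, items):
    """Rank candidate items by how many of the user's interest
    tags each item shares (a crude content-based signal)."""
    scored = []
    for item_id, tags in items.items():
        overlap = len(set(tags) & set(user_interests))
        scored.append((item_id, overlap))
    # Highest overlap first; ties broken alphabetically for stability.
    scored.sort(key=lambda pair: (-pair[1], pair[0]))
    return scored

# Hypothetical user signals and catalog.
user = ["hiking", "cooking", "jazz"]
catalog = {
    "trail-guide": ["hiking", "outdoors"],
    "recipe-feed": ["cooking", "baking", "jazz"],
    "stock-news":  ["finance"],
}
ranking = score_items(user, catalog)
```

Real recommenders blend many such signals (social graph, purchase history, geography) with learned weights, but the shape is the same: score, sort, surface.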
Not all search results are based on closely observed interests and history. In the case of navigation apps, search results are often based on a few parameters (e.g. preferred mode of travel, desire to avoid traffic, arrival time, etc.), and the results are live suggestions of optimal routes based on those parameters (Google Maps). While you're driving, the route can change depending on live input from satellites identifying traffic and other hazards ahead. These solutions aren't perfect given the dynamic nature of the technology. Sometimes the directions lag, and sometimes they're plainly a less efficient way to get somewhere (tried-and-true knowledge of backroad shortcuts from point A to point B is something navigation apps often fail to account for).
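Under the hood, this kind of parameterized routing is typically a shortest-path search over a road graph. A sketch using Dijkstra's algorithm, with a hypothetical traffic multiplier standing in for live satellite input (the road network and times below are invented):

```python
import heapq

def best_route(graph, start, goal, traffic=None):
    """Dijkstra's shortest path where each edge's base travel time is
    scaled by a live traffic multiplier (1.0 = free-flowing)."""
    traffic = traffic or {}
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            break
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, minutes in graph.get(node, {}).items():
            weight = minutes * traffic.get((node, nbr), 1.0)
            new_cost = cost + weight
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(heap, (new_cost, nbr))
    # Walk back from the goal to reconstruct the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

roads = {
    "A": {"B": 10, "C": 4},
    "B": {"D": 5},
    "C": {"B": 3, "D": 12},
}
# Heavy congestion on the C->B shortcut flips the optimal route.
route, eta = best_route(roads, "A", "D", traffic={("C", "B"): 4.0})
```

The traffic multiplier is exactly why routes change mid-drive: the graph stays fixed, but the edge weights are live, so the "optimal" answer is inherently dynamic.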
AI design patterns should augment suggestions on how to solve problems; as such, those suggestions are not definitive truths. The user should always feel they can exercise their own discretion.
While these first few examples largely showcase patterns based on pre-generated text and button inputs from the user, another significant pattern utilizes the camera's ability to scan images for text as a point of input that isn't always predefined. In some cases the camera is used as a way to extract text for search (Facebook allows for text extraction into its comments feature), while in others it is used to identify common patterns in the objects it is intended to scan (e.g. ID cards for Airbnb). The challenge for companies is not having the same control as with a more traditional text-based input, and many platforms account for that by providing educational remarks on how to achieve the ideal photograph. This supports the suggestion that more data is better, even if the data acquired is not necessarily relevant or accurate, because even inaccurate data is a signal the AI can learn from about what's not relevant.
In a similar vein, cameras can also scan or photograph objects as an input for search, including food and even math equations (Google), and can even group what they see. For example, some photo apps (like Apple Photos) can identify similarities across photos (faces, textures, geometries, locations, patterns) with enough intelligence to auto-group them into photo albums. Though the search results for these may not be perfect (e.g. you may not get exactly the same object in return), the assumption is that the more data and feedback is fed into the AI, the more accurate the results will become. Much depends on the company's ultimate goal for accuracy, why that goal exists, and how it's useful to their bottom line, because it's these qualities that define the way the algorithm is designed. Camera technology has greatly improved the way we identify and augment reality, as seen in TikTok, Instagram, and Snap through various filters, paving the way for democratizing how people create and express themselves online. However, we also know that bias is a significant issue when it comes to using AI to identify faces and people. This can have tremendous repercussions on people's livelihoods, how they're represented, and therefore the resources made available to them.
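Auto-grouping can be sketched as clustering over image feature vectors. The 3-number "embeddings" below are toy stand-ins for the rich features a real photo app would compute, and the greedy threshold rule is the simplest possible version of the pattern:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def group_photos(features, threshold=0.9):
    """Greedily assign each photo to the first album whose
    representative (first member) it resembles closely enough."""
    albums = []
    for name, vec in features.items():
        for album in albums:
            rep = features[album[0]]
            if cosine(vec, rep) >= threshold:
                album.append(name)
                break
        else:
            albums.append([name])  # no close album: start a new one
    return albums

# Toy 3-dim "embeddings" standing in for real image features.
photos = {
    "beach1": [0.9, 0.1, 0.0],
    "beach2": [0.85, 0.15, 0.0],
    "cat1":   [0.0, 0.2, 0.95],
}
albums = group_photos(photos)
```

The threshold is the design lever: set it too loose and strangers end up in the same "face" album, which is exactly where the bias concerns above become concrete harms.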
You see this phenomenon of AI improving features in the way search results have improved in the past couple of decades since the invention of the search engine. With just a single word as input, search results can now aim to predict what you may be searching for based on a myriad of data they have on you (Google). If there's a typo, or you may have meant a different search, search engines can try to correct and predict what you meant based on similarity, proximity, and history. The same applies to autocomplete, which is seen in popular e-mail clients today (Gmail, Outlook, Apple).
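The "did you mean" behavior can be approximated with edit distance over a known vocabulary; real engines layer the similarity, proximity, and history signals mentioned above on top of something like this:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def did_you_mean(query, vocabulary, max_distance=2):
    """Return the closest known term, or the query itself if
    nothing in the vocabulary is within max_distance edits."""
    best = min(vocabulary, key=lambda w: edit_distance(query, w))
    return best if edit_distance(query, best) <= max_distance else query

# Hypothetical vocabulary; a search engine's would come from query logs.
vocab = ["weather", "whether", "feather", "leather"]
suggestion = did_you_mean("wether", vocab)
```

Note the tie here ("whether" is equally close): this is precisely where history and context data break the tie in production systems, and why the correction is a suggestion rather than a silent rewrite.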
Voice search and transcription is another AI pattern that continues to improve. Though by no means perfect, it has proven useful for those with dexterity concerns and has cut the time spent transcribing meetings from scratch (Otter). Voice/sound search has also been implemented as a way to search for music (Google) and shop (Alexa). Perhaps this suggests that we need to continue to strive for accessibility: the influence this has on accessible design trumps the slight annoyance of having to repeat or revise the results of a transcription to perfection.
Chatbots have also become a popular way to engage with users' most common concerns, especially for those who prefer to feel they can get immediate support. However, they tend to feel impersonal and may not always answer the more pressing concerns you have. From the company's perspective, however, chatbots can reduce resource costs and double as a way to gather feedback from users more efficiently, all day. Should AI always be in favor of a better user experience? It's a matter of balance and aligning with the company's priorities: how much of the user experience should be sacrificed, and should AI be an agent of an experience that is neither helpful nor delightful?
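At their simplest, such chatbots match an incoming question against canned FAQ entries, and the fallback to a human agent is the escape hatch for the impersonal feel noted above. A sketch with hypothetical FAQ content:

```python
def answer(question, faq):
    """Pick the FAQ entry sharing the most words with the question,
    falling back to a handoff message when nothing overlaps."""
    q_words = set(question.lower().split())
    best, best_overlap = None, 0
    for known_q, reply in faq.items():
        overlap = len(q_words & set(known_q.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = reply, overlap
    return best or "Let me connect you with a support agent."

# Hypothetical support FAQ.
faq = {
    "how do i reset my password":
        "Use the 'Forgot password' link on the sign-in page.",
    "where is my order":
        "Check the tracking link in your confirmation email.",
}
reply = answer("how do I reset my password please", faq)
```

Whether the fallback actually reaches a human, and how quickly, is where the company's priorities and the user's experience either align or collide.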
Does a successful AI design pattern necessarily mean good user experience?
Another common AI pattern is auto-identifying and tagging content. For example, some posts can be tagged as being related to COVID, which links to credible information about how to protect yourself from it (Facebook and Instagram), while other platforms bold the text of featured content for easier scannability (Yelp). In the case of Facebook and Instagram, the tag became a needed educational opportunity to bring more awareness to the pandemic. Both achieve the goal of calling out information that may be relevant and helpful to the user.
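Auto-tagging can be sketched as matching a post against topic keyword sets. Production systems use learned classifiers rather than the hypothetical keyword lists below, but the shape of the pattern is the same: detect a topic, attach the tag, link to resources:

```python
def auto_tag(post, topic_keywords):
    """Tag a post with every topic whose keywords appear in its text.
    Real systems use trained classifiers; keyword matching is the
    simplest version of the same pattern."""
    words = set(post.lower().split())
    return sorted(topic for topic, keywords in topic_keywords.items()
                  if words & set(keywords))

# Hypothetical topic-to-keyword mapping.
topics = {
    "covid-19": {"covid", "vaccine", "pandemic"},
    "food":     {"restaurant", "recipe", "brunch"},
}
tags = auto_tag("new brunch spot reopened after the pandemic", topics)
```

Even this toy version shows the failure mode worth designing for: a post merely mentioning the pandemic gets the health tag, so the linked resources must be helpful even when the tag is only loosely relevant.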
One of the major takeaways as it relates to health is that there is a lot we can apply from existing AI design patterns and best practices to enhance people's health needs. It's evident that AI can be very useful when trying to achieve a specific target goal, or when enhancing and personalizing the experience for a user based on health parameters they control and can define for themselves. There's a lot of opportunity to use the camera and microphone to identify things about the human body and experience (both positive and negative), and with enough data, AI can at best begin to provide suggestions and instructions on how to lead a healthier life without the supervision of a medical doctor. How do we ensure equity in a technology that's been shown to be biased against the people who often need the most care? How do we use AI to help facilitate healing in a caring, equitable way? How can we predict and suggest without prescribing? How do we course-correct when the feedback we give to the user is incorrect or off base? How do we call it when we realize that the people we aim to care for are actually being hurt in the process?