What risks do AI-infused healthcare products bring? Examples of data ethics problems and bias when designing AI products

I worked on an AI-infused healthcare product as a UI/UX designer three years ago. Over the course of that experience and the product's development, I noticed problems with data surveillance and other risks that AI can bring. I would like to share some awareness of these risks for anyone working on healthcare products with AI and UX.

Chao-Ling Chyou
5 min read · Sep 17, 2023

What is an AI-infused product?

An AI-infused product is a product that embeds AI in its functionality and features. For instance, Netflix uses AI in some of its recommendations, such as the match rate presented underneath each movie or series.

Example: Netflix's 'Match' score comes from AI and depends on the user's preferences
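Netflix has not published the exact algorithm behind that percentage, so treat the following as a rough sketch only: one common way to turn preferences into a 0-100 'match' score is cosine similarity between a user's taste profile and a title's profile. All names and numbers here are my own assumptions for illustration.

```python
import math

def match_percentage(user_prefs: dict[str, float],
                     title_profile: dict[str, float]) -> int:
    """Hypothetical sketch: cosine similarity between a user's genre
    preferences and a title's genre profile, scaled to 0-100.
    This is an illustration, not Netflix's actual algorithm."""
    keys = set(user_prefs) | set(title_profile)
    dot = sum(user_prefs.get(k, 0.0) * title_profile.get(k, 0.0) for k in keys)
    norm_u = math.sqrt(sum(v * v for v in user_prefs.values()))
    norm_t = math.sqrt(sum(v * v for v in title_profile.values()))
    if norm_u == 0 or norm_t == 0:
        return 0
    return round(100 * dot / (norm_u * norm_t))

# A user who mostly watches dramas and thrillers vs. a drama/thriller title
user = {"drama": 0.9, "thriller": 0.7, "comedy": 0.1}
title = {"drama": 1.0, "thriller": 0.5}
print(f"{match_percentage(user, title)}% match")  # prints a high match score
```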

How does AI work in a healthcare product? A child development system as an example

While working as a UI/UX designer three years ago at a Silicon Valley start-up focused on child development, I learned that such a system first needs a 'detection' feature. It collects information through a variety of media and digital channels, such as audio, video, and on-screen interactions, and uses that data to detect potential issues, giving an AI classification result that indicates whether your child may need further support. But hold on, there would be some problems...
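Before getting to those problems, here is a minimal and entirely hypothetical sketch of what that detection step could look like: milestone scores extracted from the collected media are compared against age norms, and the system flags whether follow-up might be needed. None of this is the start-up's actual code; every name, norm, and threshold is my own assumption.

```python
from dataclasses import dataclass

# Hypothetical age norms: expected milestone scores (0-1) per age in months.
# A real product would need clinically validated reference data here.
AGE_NORMS = {
    24: {"speech": 0.6, "motor": 0.7, "social": 0.65},
    36: {"speech": 0.75, "motor": 0.8, "social": 0.75},
}

@dataclass
class ScreeningResult:
    flagged: bool            # True = suggest a professional follow-up
    lagging_areas: list[str]

def screen(age_months: int, observed: dict[str, float],
           tolerance: float = 0.15) -> ScreeningResult:
    """Compare observed milestone scores (e.g. extracted from audio and
    video by upstream models) against the norms for the child's age."""
    norms = AGE_NORMS.get(age_months, {})
    lagging = [area for area, expected in norms.items()
               if observed.get(area, 0.0) < expected - tolerance]
    return ScreeningResult(flagged=bool(lagging), lagging_areas=lagging)

print(screen(24, {"speech": 0.3, "motor": 0.75, "social": 0.7}))
# ScreeningResult(flagged=True, lagging_areas=['speech'])
```

The point of the sketch is that the 'AI result' is just a comparison against human-chosen norms and thresholds, which is exactly where the risks below come in.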

So, what issues come up when working on products with AI?

What are the risks of an AI-infused children's healthcare product?

Because of the relationships between stakeholders and product operations, the data the platform collects may be handed to hospitals, or fed into data analysis that opens up other product development opportunities for the company. And that healthcare information is collected from parents as well as from children.

From my journey creating interfaces for this system, I became aware of several risks in operating it with those stakeholders.

The data relationship with stakeholders

First, on the company side: where will the healthcare data be held, and what will the company use it for?

While I was working on interfaces with sensitive questions, such as questions about other diseases, I wondered how that information would be used or stored if the company has no clear policy or instructions for handling data. It could happen that the company claimed the right to use the information improperly, for example by passing health information to business partners such as hospitals.

Second, why do users need to hand over information about their health problems? Do they have rights over that information?

While I was working on interfaces with sensitive questions, such as hereditary diseases or family medical history, I realised that this information is not really relevant to the child development questions themselves. If the company cannot be clear about how that information is protected, it may not be necessary to collect it at all, or it should at least be optional for users to provide.

On the product side, users have the right to share their information on their own terms, especially since the child's development issues may have no connection to the parents' health problems, or even to child development at all, and the company is neither a hospital nor a team of professional health workers. Yet the company may claim the privilege to use the information, giving (or sharing) data with business partners or other stakeholders to build further opportunities for business development. This means users need to hold the company to its data policy, or take back control of their information.
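One concrete way a design team could push back is to make every sensitive field explicitly optional and tied to a stated purpose and a consent flag in the form schema. Here is a hedged sketch of that idea; the field names and structure are my own inventions, not the product's:

```python
from dataclasses import dataclass

@dataclass
class FormField:
    label: str
    purpose: str                  # why the product needs this answer
    required: bool = False        # sensitive fields default to optional
    consent_needed: bool = False  # explicit opt-in before storing

# Hypothetical intake form: only the development questions are required;
# family medical history is optional and gated behind explicit consent.
INTAKE_FORM = [
    FormField("Child's age in months", "Select the right milestone norms",
              required=True),
    FormField("Words your child says", "Speech development screening",
              required=True),
    FormField("Family hereditary diseases", "Not needed for screening itself",
              consent_needed=True),
]

def submittable(answers: dict[str, str], consents: set[str]) -> bool:
    """A form can be submitted when all required fields are answered and
    any consent-gated field is either left empty or explicitly consented to."""
    for f in INTAKE_FORM:
        if f.required and not answers.get(f.label):
            return False
        if f.consent_needed and answers.get(f.label) and f.label not in consents:
            return False
    return True

# Required answers present, sensitive field left empty: submission is fine.
print(submittable({"Child's age in months": "24",
                   "Words your child says": "mama, ball"}, consents=set()))  # True
```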

The bias AI may introduce into product development: the responsibility for accurate and inaccurate results

If the AI's first assessment says a child's development is, or is not, on par with their age, what side effects could that cause, and what responsibilities does the company need to take on?

First, when the result is inaccurate, the child may need a clear, professional assessment of their developmental condition, or a specific check with a therapist. The parents then have to pay extra fees to verify their child's situation and make sure the result aligns with same-age development expectations.

Second, worrying about their child's development may trigger mental health problems in parents. Whether the result says the child is on track or not, it still creates concerns. When I observed end users in Asia, some mentioned that the result left them deeply concerned about their child's development, so they spent more time researching child development to make sure their child would keep up with other children. Parents then spend more time consulting professionals, scrolling through online information, and studying child development, just in case their child falls behind their peers, or to look for external ways to reinforce their child's strengths. All of this creates more concern and anxiety for parents, driven by social perspectives and expectations placed on their children.

If all of this happens, how will the company deal with the side effects, and where does the company's responsibility lie?

If the company claims it has no responsibility, that means it refuses to answer for the results its AI gives or the harm the AI may cause. Instead, you need alternatives to help the users who are struggling with the side effects of the results: offering professional consulting, being transparent with the public about how you use AI to calculate the result and which assessment metrics it relies on, and even explaining how the data is used in the AI's calculation. A company that takes responsibility for this information, and takes action when harm occurs, is far more likely to be accountable to its users, its product, and its AI.
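As one hedged illustration of that kind of transparency, every AI result could ship with a small disclosure record: which inputs were used, which model version produced the result, which metric it was judged against, and a plain-language caveat. The names below are invented for the sketch, not taken from any real product.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ResultDisclosure:
    """Hypothetical transparency record attached to every AI result, so
    users can see how it was produced and what it does NOT mean."""
    model_version: str
    inputs_used: list[str]    # which data fed the calculation
    assessment_metric: str    # how the score was judged
    caveat: str               # plain-language limits of the result

disclosure = ResultDisclosure(
    model_version="screening-v2.3",
    inputs_used=["audio samples", "parent questionnaire"],
    assessment_metric="milestone score vs. same-age norms",
    caveat=("Screening aid only, not a diagnosis. "
            "Please consult a healthcare professional."),
)
print(json.dumps(asdict(disclosure), indent=2))
```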

The bias of AI is human-made, rooted in human mindsets, and not suitable for every user's situation, especially health conditions

When I read user feedback on a parenting forum, some users had previously tried a competing product, but its result did not align with the hospital's professional analysis. They were really upset with the calculator and argued that the platform had issues, was not professional, and could not reliably produce accurate results. From this experience, I realised that not every healthcare situation can be recognised accurately. Once AI starts to calculate healthcare conditions, it will take in as many of the problems as the user provides, but AI cannot judge your healthcare condition from a few features or a simplified version of professional expertise attached to an interface. You need a deep understanding of the child's problems, a variety of assessment methods, and even a professional's face-to-face observation to establish the healthcare condition. These AI detection products are an aid to the result, not the professional, especially in healthcare.

So, in the next article, I would like to cover possible solutions, and how to create responsible machine learning disciplines that protect users and their rights when using AI.

Thank you.


Chao-Ling Chyou

Previously a UX designer/researcher, now a product/project manager. Based in London/Taipei.