
Building a responsible AI roadmap for your organization: HR and health benefits case study

February 12, 2024


The acceleration of AI innovation has generated a sense of urgency for employers to “figure out AI” in a way that can transform everyday tasks, enhance productivity, and optimize costs. At the same time, there is concern about the risks associated with nascent AI technologies, particularly in handling sensitive employee data.

In conversations with many employers over the past two years about their practical use of AI, we identified two recurring strategies to mitigate this risk. 

First, focusing on smaller experiments with clear use cases rather than attempting to overhaul an entire function with AI. Implementation has been most successful for repetitive and time-intensive tasks, freeing attention for more strategic work.

Second, taking a deliberate and strategic approach to AI governance and the responsible use of these technologies. Companies must ensure that any third parties handling employee data or using AI for high-stakes decisions have robust governance mechanisms. This is especially critical in sensitive areas like employee health benefits, where misuse can have life-altering consequences.

In this blog, we explain the concept of AI governance and share insights from its implementation at Fijoya, addressing point solution vendor fatigue in the realm of employee health benefits.

The challenge: point solution vendor fatigue

The pandemic prompted a spike in telehealth innovation, and employers faced growing employee expectations for better healthcare access. Vendors launched a wave of new solutions, which employers distributed as ‘point solution’ health benefits.

However, this rapid expansion led to a complex and bloated benefits stack. Today, benefits teams are overwhelmed - or ‘fatigued’ - managing numerous contracts, each with unique payment, support, and reporting requirements. This complexity raises data privacy risks, confuses employees, and ultimately undermines the value these benefits are meant to deliver.

The repetitive, time-consuming nature of this challenge is an ideal use case for AI, if coupled with robust AI governance protocols. 

Why AI governance matters from day one

Deploying AI into the daily operations of employee health benefits comes with significant challenges and responsibilities. AI systems, while powerful, can be flawed, biased, and produce inaccurate responses if not properly managed. Without the correct guardrails, these systems can lead to incorrect and potentially harmful decisions. Therefore, it's crucial to understand AI governance, an emerging field that establishes guidelines, principles, and frameworks to support the responsible development and deployment of AI systems. A grounded perspective can help mitigate potential risks and ensure AI is used safely, ethically, and transparently.

Governance should be part of your AI strategy from the outset rather than an afterthought, and this means working with responsible innovators who have long left behind the age of ‘move fast and break things’. 

Case study: Implementing AI governance at Fijoya

Fijoya's software is powered by AI on multiple levels. The product uses AI to provide personalized recommendations to users - including tailoring the app experience on the homepage, ‘explore’ tab and search. This is core to the company's unique value: matching healthcare consumers with the most beneficial services their employers have made available. Additionally, AI is used for a support chatbot that triages and assists users before they reach a human agent.

Here's how Fijoya has implemented AI governance:

Finding the right framework

Since AI governance is relatively new, standards and frameworks are still evolving. Companies are exploring approaches that satisfy internal guidelines while keeping pace with a quickly forming compliance landscape.

Fijoya chose to rely on Deloitte's Trustworthy AI framework, which provides a comprehensive approach to managing the unique risks associated with AI. The framework consists of six key dimensions:

  1. Fair and unbiased: AI systems should be designed and trained to make fair decisions, free from discriminatory bias.
  2. Transparent and explainable: AI systems' decision-making process should be open to inspection, and their outputs should be explainable.
  3. Responsible and accountable: Clear policies should be in place to establish responsibility and accountability for AI system outputs.
  4. Robust and reliable: AI systems should be consistent and reliable and perform well even in less-than-ideal conditions.
  5. Respectful of privacy: AI must comply with data regulations and use data only for agreed-upon purposes.
  6. Safe and secure: AI systems should be protected from cybersecurity risks that could lead to physical or digital harm.

Using this framework, Fijoya set out to build AI systems that are trustworthy, ethical, and aligned with the company’s values of promoting health equity and enhancing the employee healthcare experience. Let’s now look at how that was achieved.

Fair and unbiased automated recommendations

In Fijoya’s context, fairness and impartiality are particularly relevant because the company uses an automated recommendation system to match healthcare solutions to consumers. Without proper governance, such systems could perpetuate biases or unfairly favor specific solutions.

Fijoya validates its recommendation engine using bias and fairness tests, ensuring that it provides equitable suggestions across various demographics. Additionally, Fijoya employs human-in-the-loop oversight to maintain fairness across genders, ages, races, and other characteristics. The company also commits to never endorsing vendors based on financial incentives, thus maintaining an unbiased marketplace.
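To make "bias and fairness tests" concrete, one common check is demographic parity: comparing how often a recommendation is served across demographic groups. The sketch below is a minimal illustration of that idea under assumed inputs; it is not Fijoya's actual test suite, and the function names and data shapes are hypothetical.

```python
from collections import defaultdict

def recommendation_rate_by_group(recommendations: dict, group_of: dict) -> dict:
    """Share of users in each group who received a given recommendation.
    `recommendations` maps user_id -> bool, `group_of` maps user_id -> group
    label; both are hypothetical inputs for this sketch."""
    totals, hits = defaultdict(int), defaultdict(int)
    for user_id, recommended in recommendations.items():
        group = group_of[user_id]
        totals[group] += 1
        hits[group] += int(recommended)
    return {g: hits[g] / totals[g] for g in totals}

def demographic_parity_gap(rates: dict) -> float:
    """Largest difference in recommendation rate between any two groups.
    A human reviewer might flag the model if this exceeds a chosen threshold."""
    return max(rates.values()) - min(rates.values())
```

A test like this runs offline over anonymized cohorts, and a gap above an agreed threshold would route the model back to the human-in-the-loop review described above.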

Safety, security, and privacy in handling sensitive employee health data

Fijoya operates in the healthcare domain and handles sensitive patient data. As such, the company needs to take extensive action to prevent that data from being exposed or misused, as any breach could have severe consequences for patients' privacy and enterprise liability.

Fijoya applied various measures in this regard:

  • Achieving SOC 2 and HIPAA compliance early on.
  • Using only pre-approved, anonymized user labels for AI recommendations (a minimal sketch of this filtering appears after this list).
  • Minimizing data storage and implementing strict access controls to sensitive information.
  • Deploying in a private cloud with limited data sharing.
  • Adhering to secure engineering and architecture best practices. 
  • Conducting regular adversarial testing of AI systems to identify and address potential vulnerabilities.
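The label-filtering bullet above can be pictured as an allow-list applied at the boundary between member records and any AI component. The sketch below is an assumption-laden illustration: the field names and allow-list are invented for this example and are not Fijoya's actual schema.

```python
# Hypothetical allow-list of pre-approved, non-identifying labels that may
# reach the recommendation model; everything else is dropped at the boundary.
APPROVED_LABELS = {"age_band", "region", "benefit_category", "plan_tier"}

def to_model_features(member_record: dict) -> dict:
    """Strip a raw member record down to approved, anonymized labels before
    it is passed to any AI component."""
    return {k: v for k, v in member_record.items() if k in APPROVED_LABELS}

raw = {
    "name": "Jane Doe",           # never forwarded to the model
    "email": "jane@example.com",  # never forwarded to the model
    "age_band": "30-39",
    "region": "northeast",
    "benefit_category": "mental_health",
}
assert "name" not in to_model_features(raw)
assert to_model_features(raw)["age_band"] == "30-39"
```

Enforcing this in one place, rather than trusting each downstream component to filter its own inputs, is what makes the "minimizing data storage and strict access controls" bullet auditable.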

Transparency, explainability, and accountability for AI decisions

Showing the reasoning behind AI decisions matters because Fijoya’s AI is making decisions that can have significant financial impacts on enterprises and potential health impacts on consumers. Stakeholders need to understand how the AI arrives at its recommendations and have recourse if something goes wrong.

Fijoya takes full accountability for AI decisions by designating team members to monitor AI performance and ethics. The company provides user feedback channels to report AI discrepancies. It ensures traceability and logging of AI system processes for auditability, enabling thorough investigation and resolution of any issues.
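Traceability of this kind usually means every AI decision is written to an append-only, structured log with an identifier that support staff and auditors can follow end to end. The sketch below shows one minimal way to do that; the field set and logger name are assumptions for illustration, not a documented Fijoya schema.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_ai_decision(model_version: str, inputs: dict, output: dict) -> str:
    """Write a structured, traceable record of one AI decision so it can be
    audited or investigated later."""
    trace_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "trace_id": trace_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,   # assumed to be anonymized upstream
        "output": output,
    }))
    return trace_id  # surfaced to the UI so user feedback can reference it
```

Returning the trace ID to the interface is what connects the feedback channels mentioned above to a specific, investigable decision.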

Regarding transparency, Fijoya notifies users when AI is being used. The company integrates user feedback to refine AI outputs and explains recommendations within the user interface, ensuring users can understand the rationale behind the AI's suggestions.

Responsible use 

As a marketplace connecting healthcare solutions with consumers, Fijoya is responsible for ensuring that the products and services it recommends are safe, effective, and ethically sound.

Fijoya demonstrates social responsibility by partnering with suppliers and partners who align with sustainable practices. The company also conducts regular ethical compliance audits of AI systems to ensure that they are being used responsibly and beneficially.

Robust and reliable performance

The platform's AI is a mission-critical service for enterprises that rely on Fijoya to connect their employees with health and wellness benefits. The system needs to provide a high level of service consistently and function as expected so that employees have access to the care they need.

Fijoya ensures consistent, error-resistant AI outputs by conducting risk analysis and mitigation for AI interactions. The company monitors rule adherence, detects critical anomalies, and implements fallback mechanisms for uninterrupted service. Fijoya focuses on the reliability and reproducibility of AI outputs, enabling effective troubleshooting and maintaining user trust.
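A common pattern behind "fallback mechanisms for uninterrupted service" is to validate every AI output against rules and degrade to deterministic logic when the output is missing or anomalous. The sketch below illustrates that pattern only; all the callables are hypothetical stand-ins, not Fijoya's implementation.

```python
def recommend_with_fallback(features: dict, ai_recommender, rule_based_recommender,
                            is_valid) -> dict:
    """Serve an AI recommendation only when it passes validation; otherwise
    fall back to deterministic rules so the service stays available.
    `ai_recommender`, `rule_based_recommender`, and `is_valid` are
    hypothetical callables for this sketch."""
    try:
        result = ai_recommender(features)
        if is_valid(result):  # e.g. schema, allowed-vendor, and sanity checks
            return {"source": "ai", "recommendation": result}
        # Anomalous output falls through to the rule-based path below.
    except Exception:
        pass  # timeouts or model errors also trigger the fallback path
    return {"source": "rules", "recommendation": rule_based_recommender(features)}
```

Tagging the response with its source also feeds the monitoring described above: a spike in rule-based responses is itself an anomaly worth investigating.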

Tying governance with impact 

It's important to note that responsible AI governance doesn't limit companies. In fact, quite the opposite is true.

By providing a structured framework, AI governance enables innovative teams to generate and test new ideas confidently, making significant strides towards solving important problems. Responsible governance ensures that AI development is unbiased, safe, ethical, and transparent, unleashing the full potential of AI technology in a way that is trustworthy and compliant.

At the end of the day, AI governance must be met with innovative product development that truly makes a difference to the organization. In the case of Fijoya, this means working on strong ethical foundations and architecture to ensure that employees engage more with their health benefits and benefits teams spend less time on repetitive administrative work. We can already see the impact of Fijoya in action - employee engagement with point solutions has increased from 5% to 63.5%, with a 50% cost savings for the employer.

Get in touch

If point solution vendor fatigue sounds familiar in your organization and you would like to learn more about Fijoya, you can get in touch here.

About the authors

Assaf Mischari, Managing Partner at Team8 Health

With 25 years of experience in cybersecurity and technology, Assaf is a founding member and the former Chief Technological Officer (CTO) of Team8. During his ten-year journey with the fund, Assaf has successfully established and scaled 12 companies by leveraging the rigorous Team8 company-building and scaling process. His extensive cybersecurity and data perspective, coupled with access to top-tier talent, plays a critical role in the success of Team8’s founding teams and startups. Before his tenure at Team8, Assaf held several leadership positions within the Israel Defense Forces’ Technology & Intelligence Unit 8200, Israel’s elite military intelligence unit. His distinguished service includes acting as the CTO of the Cyber Division. Assaf holds a B.Sc. in Electrical Engineering from Tel-Aviv University.

Yael Oshri Balla, VP R&D at Fijoya

With extensive experience in engineering management, Yael is the Vice President of R&D at Fijoya. She has proven expertise in leading engineering groups in enterprise settings as well as early-stage startups, including building organizations from the ground up. Her career includes roles at prominent companies such as Mercury HP, Ski, and Balance, and she has a strong background from Mamram, Israel's elite computer unit.

