Responsible AI at Corti

Mitigating risks and preventing bias in Corti products

Overview of Strategies and Efforts for Mitigating AI Risks at Corti

The adoption of artificial intelligence (AI) in business operations brings numerous benefits, but it also introduces significant risks, including bias, data privacy concerns, security vulnerabilities, and unintended consequences. Effective risk mitigation strategies are essential to ensure that AI deployment is ethical, safe, aligned with the goals of each concrete use case, and consistent with societal values in pursuit of beneficial outcomes for people.

While the advent of sophisticated generative AI tools has brought more attention to AI risks, such risks are not new. And they are not new to us. At Corti, we are committed to building and deploying AI technologies responsibly, focusing at all times on ethical considerations and the well-being of those in need.

Here, we describe the strategies and concrete efforts Corti undertakes to mitigate AI risks.


Robust governance frameworks

AI ethics committees:

Corti has formed interdisciplinary AI ethics committees composed of data scientists, ethicists, legal experts, and business leaders, and conducts regular reviews of AI projects to ensure they adhere to ethical standards and regulatory requirements.

Clear policies and guidelines:

Corti has developed comprehensive internal policies and guidelines that outline acceptable AI practices, data usage, and decision-making processes, and ensures these guidelines are aligned with legal standards and ethical norms.

Accountability mechanisms:

Corti assigns clear accountability for the outcomes of its AI systems and implements regular audits and impact assessments to monitor their performance and identify potential risks early.

Humans in control:

Corti designs its products so that humans remain in control and make the final decisions.


Transparency and explainability

Explainable models:

Corti gives preference to explainable and interpretable AI models over black-box models, especially in high-stakes decisions. Corti utilizes techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to interpret model decisions and performs active in-house research in explainable AI. In doing so, we empower users to interpret the system's output accurately and use it appropriately, fostering trust and confidence in our AI solutions.
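
As an illustration of how a post-hoc explanation can be produced with one of these techniques, the sketch below uses the open-source shap library to attribute a single prediction to its input features. The model and dataset are public placeholders chosen for the example, not Corti's production systems.

```python
# Minimal sketch (illustrative only): attributing one model prediction to its
# input features with SHAP. The model and dataset are public placeholders,
# not Corti's production models.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # explain the first prediction

# Each value is the feature's contribution (in units of the model output)
# relative to the model's average prediction over the training data.
for feature, value in zip(X.columns, shap_values[0]):
    print(f"{feature:10s} {value:+.3f}")
```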

Transparent development:

Corti ensures a transparent development process for our AI systems, providing overviews of how datasets are collected, cleaned, and quality-assured; which input modalities models are trained on; which outputs they produce; and which objectives they are trained to optimize for.

Transparent user experience:

Corti ensures that it is always transparent to users when they are interacting with an AI system and which suggestions, predictions, and automations are made using AI models. Model explanations and the development process are made available to users wherever relevant for their safe and transparent use of the software.

Documentation and reporting:

Corti maintains thorough documentation of AI development processes, the datasets used, and decision rationale, and produces regular reports on AI systems' performance, including metrics on fairness, accuracy, and impact, for internal and external stakeholders.


Continuous monitoring and evaluation

Bias monitoring:

Corti deploys bias detection tools and metrics to continuously monitor AI systems in production for discriminatory patterns. Corti regularly validates AI models against updated datasets to ensure ongoing fairness. This includes the use of diverse and representative data, pre-processing techniques, and in-processing and post-processing approaches to adjust model outputs and ensure fairness.
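
To make the idea of a bias metric concrete, the sketch below computes two common group-fairness measures, demographic parity difference and equal opportunity difference, from a hypothetical prediction log. The column names, groups, and values are illustrative assumptions, not Corti's monitoring pipeline.

```python
# Minimal sketch (illustrative, not Corti's production tooling): two common
# group-fairness metrics computed from logged predictions and outcomes.
import pandas as pd

def demographic_parity_difference(df, group_col, pred_col):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.max() - rates.min()

def equal_opportunity_difference(df, group_col, pred_col, label_col):
    """Largest gap in true-positive rate between any two groups."""
    positives = df[df[label_col] == 1]
    tpr = positives.groupby(group_col)[pred_col].mean()
    return tpr.max() - tpr.min()

# Hypothetical monitoring log: model predictions, ground truth, and a
# sensitive attribute recorded for audit purposes.
log = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "prediction": [1, 0, 1, 0, 0, 1],
    "label": [1, 0, 1, 1, 0, 1],
})

print("Demographic parity gap:", demographic_parity_difference(log, "group", "prediction"))
print("Equal opportunity gap:", equal_opportunity_difference(log, "group", "prediction", "label"))
```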

Performance monitoring:

Corti has established key performance indicators (KPIs) for AI systems, focusing on accuracy, fairness, and ethical considerations. Corti uses automated monitoring systems to track AI performance in real time and flag anomalies.
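
A minimal sketch of such an automated check is shown below: a tracked KPI (here, a daily accuracy series) is compared against a rolling baseline and flagged when it drops by more than a threshold. The window size, threshold, and data are assumptions for illustration and do not reflect Corti's internal monitoring configuration.

```python
# Minimal sketch (illustrative only): flagging an anomalous drop in a tracked
# KPI, here a daily accuracy series, against a rolling baseline.
import pandas as pd

def flag_anomalies(daily_accuracy: pd.Series, window: int = 14, max_drop: float = 0.05):
    """Return the days where accuracy fell more than `max_drop` below
    the trailing `window`-day mean."""
    baseline = daily_accuracy.rolling(window, min_periods=window).mean().shift(1)
    drop = baseline - daily_accuracy
    return daily_accuracy[drop > max_drop]

# Hypothetical KPI log with a sudden degradation on the last day.
dates = pd.date_range("2024-01-01", periods=30, freq="D")
accuracy = pd.Series([0.93] * 29 + [0.80], index=dates)

print(flag_anomalies(accuracy))
```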

Fairness and human value alignment:

Corti's fairness mechanisms encompass a comprehensive approach to preventing unfair identification, profiling, or statistical singling out of any population segment based on sensitive characteristics such as race, gender identity, nationality, religion, disability, or any other politically charged identifier. We believe in providing equal opportunities and fair outcomes for every person touched by our AI systems, irrespective of their background, ethnicity, or any other characteristic.


Data privacy and security

Secure and robust:

Corti employs mechanisms to ensure the secure and resilient design, development, testing, implementation, and maintenance of AI systems. It ensures that AI systems remain resilient against errors, faults, inconsistencies, and malicious actions that could compromise system security. Corti's systems are compliant with the SOC 2 Type 2 security framework.

Data anonymization and encryption:

Corti implements strong data anonymization techniques to protect individuals' identities in datasets and uses encryption to safeguard data during storage and transmission.
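
For illustration, the sketch below combines keyed pseudonymization of a direct identifier with symmetric encryption of record contents before storage. It is a minimal example of the general techniques mentioned above, assuming the widely used Python cryptography package; it is not Corti's implementation, and real keys would be managed in a dedicated secrets store.

```python
# Minimal sketch (illustrative only, not Corti's implementation): keyed
# pseudonymization of an identifier plus symmetric encryption of record
# contents before storage. Keys are generated inline for the example;
# in practice they would live in a secrets manager.
import hashlib
import hmac
from cryptography.fernet import Fernet

PSEUDONYM_KEY = b"example-secret-key"   # assumption: managed externally
storage_key = Fernet.generate_key()
fernet = Fernet(storage_key)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {
    "patient_id": pseudonymize("jane.doe@example.com"),
    "note": fernet.encrypt(b"free-text clinical note"),
}

# Only holders of the storage key can recover the note.
print(record["patient_id"][:16], "...")
print(fernet.decrypt(record["note"]).decode())
```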

Access controls:

Corti enforces strict access controls to limit who can view or modify sensitive data and regularly reviews and updates access permissions to prevent unauthorized access.

Incident response plans:

Corti has developed and maintains incident response plans to address data breaches or AI system failures promptly. Corti also conducts regular drills to ensure preparedness for potential incidents.

Privacy rights:

Corti is compliant with the GDPR and HIPAA privacy frameworks and has policies, processes, and procedures to adhere to their requirements, e.g. providing user rights and obtaining consent. Data is processed only for a specific purpose and legal basis documented before data collection and processing, and is used only according to the documented instructions. Data is never used, for example, for profiling or personalized marketing.


A culture of ethical AI use

Training and education:

Corti provides ongoing training for employees on AI ethics, bias mitigation, and responsible AI use. Corti encourages a culture of continuous learning and ethical awareness.

Stakeholder engagement:

Corti engages with stakeholders, including customers, employees, and community representatives, to understand their concerns and perspectives on AI use and incorporates stakeholder feedback into AI development and deployment processes.

Collaboration and industry standards:

Corti participates in academic and industry collaborations to share best practices and stay updated on the latest advancements in AI risk mitigation. Corti also contributes to the development of industry standards, guidelines, and academic research for responsible AI use.


Conclusion

Our trustworthy AI approach mitigates AI risks in a comprehensive and proactive way, integrating governance, transparency, technical solutions, and cultural change within the organization. By adopting these strategies and workflows, Corti is able to harness the capabilities of AI while minimizing potential risks, ensuring ethical and equitable outcomes, and maintaining trust with stakeholders.

As we forge ahead in the realm of AI, we remain committed to responsible and ethical practices. We recognize the immense potential of AI to transform industries and improve lives, and we are dedicated to realizing that potential responsibly, guided by our robust policies and a commitment to making AI a force for good.
