Gesund is the world’s first compliant AI factory, on a mission to help bring clinical-grade AI solutions to market. To help customers comply with regulatory requirements, our platform audits and validates third-party medical AI solutions for safety, effectiveness, and equity. Backed by marquee investors including Merck, McKesson, Northpond, and 500, Gesund orchestrates the entire AI/ML lifecycle for all stakeholders by bringing models, data, and experts together in a no-code environment.
The model owner shares a clinical study with Gesund.ai for curation of appropriate dataset(s), then uploads their model onto Gesund.ai's federated validation platform, which resides on hospital premises or in a private cloud.
The model runs against a previously unseen validation dataset curated on the hospital side.
Model accuracy metrics are produced and displayed on the Gesund.ai platform for further examination with respect to patient characteristics, scenario analyses, and stress testing.
The model insights are exported into a report that the model owner can use to supplement their regulatory submission.
Gesund.ai Is Now Proudly Compliant with SOC 2 Type II Standards!
Gesund.ai applies enterprise-grade best practices to protect our customers’ data and works with independent experts to verify its security, privacy, and compliance controls. It has achieved a SOC 2 Type II report against stringent standards.
SOC 2 Report
We work with an independent auditor to maintain a SOC 2 Type II report, which objectively certifies our controls to ensure the continuous security of our customers' data.
Developed by the Assurance Services Executive Committee (ASEC) of the AICPA, the Trust Services Criteria is the set of control criteria to be used when evaluating the suitability of the design and operating effectiveness of controls relevant to the security, availability, or processing integrity of information and systems, or the confidentiality or privacy of the information processed by the systems at an entity, a division, or an operating unit of an entity.
Continuous Security Control Monitoring
Gesund.ai uses Drata’s automation platform to continuously monitor 100+ security controls across the organization. Automated alerts and evidence collection allow Gesund.ai to confidently prove its security and compliance posture any day of the year, while fostering a security-first mindset and culture of compliance across the organization.
Employee Training
Security is a company-wide endeavor. All employees complete an annual security training program and employ best practices when handling customer data.
Penetration Tests
Gesund.ai works with industry leading security firms to perform annual network and application layer penetration tests.
Secure Software Development
Gesund.ai utilizes a variety of manual and automatic data security and vulnerability checks throughout the software development lifecycle.
Data Encryption
Data is encrypted both in-transit using TLS and at rest.
Vulnerability Disclosure Program
If you believe you’ve discovered a bug in Gesund.ai’s security, please get in touch at security@gesund.ai. Our security team promptly investigates all reported issues.
Can an automated approach help identify clinically significant biases within machine learning prognostic models?
Of the 1,343 patients included in the study, 179 (13%) died. The final model's overall accuracy on the validation cohort was 80% (minority-class F1 score = 0.39, AUC = 0.663). However, our automated tool identified numerous clinically significant differences in model accuracy across patient subsets. For example, the model was considerably more accurate for patients requiring the intensive care unit (86% accuracy) but considerably worse for other sub-cohorts such as current smokers (60% accuracy) and male patients (78% accuracy). Moreover, many sub-cohorts had insufficient patient data to support a meaningful analysis.
These data demonstrate the high risk of model performance discrepancies on subsets of patients with different characteristics. A standardized, automated approach to systematic model validation is instrumental in minimizing model biases before a machine learning model is implemented in a clinical setting.
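The kind of automated subgroup audit the study describes can be sketched in a few lines: compute accuracy overall and on each patient subset, then flag subsets whose accuracy diverges from the overall figure, skipping subsets with too little data. The toy cohort, the flagging threshold, and the function names below are illustrative assumptions, not Gesund.ai's actual data or implementation.

```python
# Illustrative sketch of automated subgroup performance auditing.
# Records, thresholds, and function names are hypothetical,
# not Gesund.ai's actual data or logic.

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    return sum(p == y for p, y in pairs) / len(pairs)

def audit_subgroups(records, min_n=2, gap=0.10):
    """Compare each subgroup's accuracy to the overall accuracy.

    records: list of dicts with 'pred', 'label', and attribute keys.
    Returns (flagged, overall), where flagged maps "attribute=value"
    to that subgroup's accuracy for subgroups with at least min_n
    patients whose accuracy differs from overall by more than gap.
    """
    overall = accuracy([(r["pred"], r["label"]) for r in records])
    groups = {}
    for r in records:
        for key, val in r.items():
            if key in ("pred", "label"):
                continue
            groups.setdefault((key, val), []).append((r["pred"], r["label"]))
    flagged = {}
    for (key, val), pairs in groups.items():
        if len(pairs) < min_n:
            continue  # insufficient data for a meaningful analysis
        acc = accuracy(pairs)
        if abs(acc - overall) > gap:
            flagged[f"{key}={val}"] = round(acc, 2)
    return flagged, round(overall, 2)

# Toy cohort: the model systematically misclassifies smokers.
cohort = [
    {"pred": 1, "label": 1, "smoker": "no"},
    {"pred": 0, "label": 0, "smoker": "no"},
    {"pred": 1, "label": 1, "smoker": "no"},
    {"pred": 0, "label": 0, "smoker": "no"},
    {"pred": 1, "label": 0, "smoker": "yes"},
    {"pred": 0, "label": 1, "smoker": "yes"},
]
flagged, overall = audit_subgroups(cohort)
# flagged reports both smoker subgroups, since each deviates from
# the overall accuracy by more than the 10-point gap.
```

A production tool would add further metrics (F1, AUC), confidence intervals, and multiple-comparison corrections, but the core loop — stratify, score, compare — is the same.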