Gesund is the world’s first compliant AI factory, on a mission to help bring clinical-grade AI solutions to market. To help meet regulatory requirements, our platform audits and validates third-party medical AI solutions for safety, effectiveness, and equity. Backed by marquee investors including Merck, McKesson, Northpond, and 500, Gesund orchestrates the entire AI/ML lifecycle for all stakeholders by bringing models, data, and experts together in a no-code environment.
1. The model owner shares a clinical study with Gesund.ai for curation of the appropriate dataset(s), and uploads their model onto Gesund.ai's federated validation platform, which resides on hospital premises or in a private cloud.
2. The model runs against a previously unseen validation dataset that has been curated on the hospital side.
3. Model accuracy metrics are produced and displayed on the Gesund.ai platform for further examination with respect to patient characteristics, scenario analyses, and stress testing.
4. The model insights are exported into a report that the model owner can use to supplement their regulatory submission.
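The validation stage above can be sketched in code. The sketch below is a hypothetical illustration only, assuming a simple callable classifier and a dataset of labeled records tagged with patient-characteristic groups; the names `run_validation`, `export_report`, and `ValidationResult` are not part of Gesund.ai's actual API.

```python
from dataclasses import dataclass

@dataclass
class ValidationResult:
    overall_accuracy: float
    subgroup_accuracy: dict  # accuracy per patient characteristic

def run_validation(model, dataset):
    """Run the model against a previously unseen, hospital-curated dataset,
    tracking accuracy overall and per patient subgroup."""
    correct = 0
    by_group = {}  # group -> (n_records, n_correct)
    for record in dataset:
        pred = model(record["features"])
        hit = int(pred == record["label"])
        correct += hit
        for group in record["groups"]:  # e.g. "smoker", "icu"
            n, c = by_group.get(group, (0, 0))
            by_group[group] = (n + 1, c + hit)
    overall = correct / len(dataset)
    per_group = {g: c / n for g, (n, c) in by_group.items()}
    return ValidationResult(overall, per_group)

def export_report(result: ValidationResult) -> str:
    """Summarize the insights for the model owner's regulatory submission."""
    lines = [f"Overall accuracy: {result.overall_accuracy:.0%}"]
    for group, acc in sorted(result.subgroup_accuracy.items()):
        lines.append(f"  {group}: {acc:.0%}")
    return "\n".join(lines)
```

In a federated deployment, only `run_validation` would execute on the hospital side; the exported summary, not the patient data, leaves the premises.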
Gesund.ai Is Now Proudly Compliant with SOC 2 Type II Standards!
Gesund.ai uses enterprise-grade best practices to protect our customers’ data, works with independent experts to verify its security, privacy, and compliance controls, and has achieved a SOC 2 Type II report against stringent standards.
SOC 2 Report
We work with an independent auditor to maintain a SOC 2 Type II report, which objectively attests that our controls ensure the continuous security of our customers' data.
Developed by the Assurance Services Executive Committee (ASEC) of the AICPA, the Trust Services Criteria is the set of control criteria to be used when evaluating the suitability of the design and operating effectiveness of controls relevant to the security, availability, or processing integrity of information and systems, or the confidentiality or privacy of the information processed by the systems at an entity, a division, or an operating unit of an entity.
Continuous Security Control Monitoring
Gesund.ai uses Drata’s automation platform to continuously monitor 100+ security controls across the organization. Automated alerts and evidence collection allow Gesund.ai to confidently prove its security and compliance posture any day of the year, while fostering a security-first mindset and culture of compliance across the organization.
Security is a company-wide endeavor. All employees complete an annual security training program and employ best practices when handling customer data.
Gesund.ai works with industry leading security firms to perform annual network and application layer penetration tests.
Secure Software Development
Gesund.ai utilizes a variety of manual and automatic data security and vulnerability checks throughout the software development lifecycle.
Data is encrypted both in-transit using TLS and at rest.
Vulnerability Disclosure Program
If you believe you’ve discovered a security vulnerability in Gesund.ai’s products or services, please get in touch at email@example.com. Our security team promptly investigates all reported issues.
“Gesund.ai joins White House CancerX initiative to end cancer as we know it.”
“With AI embedding itself ever more broadly into healthtech, machine learning operations (MLOps) are vital for rapidly maintaining, monitoring and scaling ML models. ”
“The human civilization didn’t ban electricity, but implemented intelligent mechanisms to wield it; we expect AI guardrails to be collectively architected by all stakeholders.” Dr. Enes Hosgor
Can an automated approach help identify clinically significant biases within machine learning prognostic models?
Of the 1,343 total patients included in the study, 179 (13%) died. The final model accuracy on the overall validation cohort was 80% (minority-class F1 score = 0.39, AUC = 0.663). However, our automated tool identified numerous clinically significant differences in model accuracy across patient subsets. For example, the model was considerably more accurate for patients requiring the intensive care unit (86% accuracy) but worse for other subcohorts such as current smokers (60% accuracy) and male patients (78% accuracy). Moreover, many subcohorts had insufficient patient data to support a meaningful analysis.
These data demonstrate the high risk of model performance discrepancies on subsets of patients with different characteristics. Using a standardized, automated approach to systematic model validation is instrumental in minimizing model biases before implementing a machine learning model in a clinical setting.
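The subgroup check described in the abstract can be sketched as follows. This is an illustrative sketch only, not the study's actual tool: the function name `audit_subgroups`, the deviation threshold, and the minimum-sample cutoff are assumptions introduced for the example.

```python
def audit_subgroups(overall_acc, subgroup_results, min_n=30, max_delta=0.05):
    """Flag subcohorts whose accuracy deviates from the overall accuracy
    by more than max_delta, and separately list subcohorts with too few
    patients for a meaningful estimate.

    subgroup_results maps subcohort name -> (n_patients, accuracy).
    """
    flagged, insufficient = {}, []
    for name, (n, acc) in subgroup_results.items():
        if n < min_n:
            insufficient.append(name)  # too little data to judge
        elif abs(acc - overall_acc) > max_delta:
            flagged[name] = acc - overall_acc  # signed deviation
    return flagged, insufficient
```

With the abstract's figures (80% overall, 86% for ICU patients, 60% for current smokers, 78% for male patients) and a 5-point threshold, the ICU and smoker subcohorts would be flagged while male patients would not.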
Develop, maintain, and execute test plans, test cases, and test scripts for our frontend web application using Python Selenium automation.
Work closely with the DevOps team to integrate test automation into the continuous integration and continuous deployment (CI/CD) pipeline using tools such as Jenkins.
Work with the team to design and implement tools and APIs for a centralized system with distributed agents/workers
Build supplementary software components that enable data scientists to interact with the platform
Solid experience in RESTful API design and development
Support integration with existing ML/DL/FL libraries
Develop highly scalable machine learning (computer vision) models to solve problems such as medical image classification and segmentation
Develop in-house machine learning tools and pipelines to support fast experimentation of machine learning models