The doctor prescribes video games and virtual reality rehab [Andy Coravos | WIRED]
Recognizing such advances in medical technology, last year the FDA outlined a more streamlined path to clearance for digital health devices. This year, the FDA is deep into building a pre-certification program, paving the way for software products—including AI and adaptive algorithms—to come to market faster.
Digital Health Software Precertification (Pre-Cert) Program [FDA]
The FDA envisions that the future regulatory model will provide more streamlined and efficient regulatory oversight of software-based medical devices developed by manufacturers who have demonstrated a robust culture of quality and organizational excellence, and who are committed to monitoring real-world performance of their products once they reach the U.S. market. This proposed approach aims to look first at the software developer and/or digital health technology developer, rather than primarily at the product, which is what we currently do for traditional medical devices.
The FDA is currently basing the Pilot Program’s criteria on five excellence principles: patient safety, product quality, clinical responsibility, cybersecurity responsibility, and proactive culture. It is also considering two levels of precertification, based on how a company meets the excellence principles and whether it has a demonstrated track record of delivering software products.
Startup Roadshow: FDA Regulation of Artificial Intelligence Used in Healthcare 2019 Multi-City Tour [Clinical Decision Support Coalition]
Stops at University of Michigan, UCSD, Georgia Tech, Carnegie Mellon, Johns Hopkins, Rutgers, Berkeley, and a startup incubator in Chicago.
One of the fathers of AI is worried about its future [Will Knight | MIT Technology Review]
If you have a good causal model of the world you are dealing with, you can generalize even in unfamiliar situations. That’s crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don’t have these causal models.
We can hand-craft them, but that’s not enough. We need machines that can discover causal models. To some extent it’s never going to be perfect. We don’t have a perfect causal model of reality; that’s why we make a lot of mistakes. But we are much better at this than other animals.
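Bengio's distinction between observing correlations and having a causal model can be made concrete with a toy structural causal model. The sprinkler/rain example below is a classic illustration and entirely my own assumption, not from the interview: an intervention (forcing the sprinkler on) puts the system in a regime the observational data never exhibits, yet the causal model still predicts the outcome.

```python
import random

# Toy structural causal model (SCM), purely illustrative:
#   rain -> sprinkler (sprinkler rarely runs when it rains)
#   rain, sprinkler -> wet grass

def sample(intervene_sprinkler=None):
    """Draw one sample; optionally intervene on the sprinkler (do-operator)."""
    rain = random.random() < 0.3
    if intervene_sprinkler is None:
        # Observational regime: sprinkler depends on rain.
        sprinkler = (not rain) and random.random() < 0.5
    else:
        # do(sprinkler = value): cut the incoming causal edge from rain.
        sprinkler = intervene_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

random.seed(0)

# Observationally, "sprinkler on" and "rain" never co-occur here, so a purely
# correlational learner has no data about rainy days with the sprinkler running.
observational = [sample() for _ in range(10_000)]
assert not any(rain and sprinkler for rain, sprinkler, _ in observational)

# The causal model still answers the out-of-distribution question: under
# do(sprinkler=True) the grass is always wet, including on rainy days.
interventional = [sample(intervene_sprinkler=True) for _ in range(10_000)]
assert all(wet for _, _, wet in interventional)
```

The point of the sketch is the second block: the intervention creates situations absent from the observational data, and only the structural model, not the observed joint distribution alone, generalizes to them.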
From machine learning to machine reasoning [Leon Bottou | arXiv]
This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.
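Bottou's "mere concatenation of trainable learning systems" can be sketched in code: if each module exposes the same forward/backward interface, then a chain of modules is itself a module with that interface, so compositions can be enriched and trained end to end. The module classes below are an illustrative sketch of that closure property, not Bottou's actual formulation.

```python
import numpy as np

class Linear:
    """A trainable affine module (bias omitted for brevity)."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
    def forward(self, x):
        self.x = x
        return x @ self.W
    def backward(self, grad, lr):
        grad_x = grad @ self.W.T          # gradient w.r.t. input, using pre-update W
        self.W -= lr * self.x.T @ grad    # gradient step on the parameters
        return grad_x

class ReLU:
    """A fixed (parameter-free) nonlinearity with the same interface."""
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask
    def backward(self, grad, lr):
        return grad * self.mask

class Chain:
    """Concatenation of modules is itself a module: same interface, trainable."""
    def __init__(self, *modules):
        self.modules = modules
    def forward(self, x):
        for m in self.modules:
            x = m.forward(x)
        return x
    def backward(self, grad, lr):
        for m in reversed(self.modules):
            grad = m.backward(grad, lr)
        return grad

# Train the composed system end to end on a toy regression: y = x0 + x1.
rng = np.random.default_rng(0)
net = Chain(Linear(2, 8, rng), ReLU(), Linear(8, 1, rng))
X = rng.standard_normal((256, 2))
y = X.sum(axis=1, keepdims=True)

losses = []
for _ in range(200):
    pred = net.forward(X)
    losses.append(float(np.mean((pred - y) ** 2)))
    net.backward(2 * (pred - y) / len(X), lr=0.05)

assert losses[-1] < losses[0]  # the concatenated system trains as one module
```

`Chain` is the whole point: nothing distinguishes a composite from a primitive module, so the set of "manipulations applicable to training systems" can be grown algebraically without stepping outside the trainable-module framework.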