Tuesday, November 22, 2016

How to Make Medical Devices More Secure

Q&A with Medtronic’s Retired Director of Product Security Bill Aerts

By Nikki McDonald

Former Medtronic Director of Product Security Bill Aerts took some time recently to discuss the new security challenges arising as medical devices join the Internet of Things (IoT), how to put together a strong security program, and the current state of medical device security (and how we can fix it).

Aerts will be hosting a training session on How to Set up a Medical Device Security Program for Manufacturers at the Archimedes Medical Device Security 101 Conference this January. 

Describe your experience in the medical device security field and how it’s led to the work you’re doing now.

I’ve had the opportunity to start and develop IT security programs at a number of large companies over my career, including the program at Medtronic. As time moved on, it became clear that the products and services Medtronic sells faced some of the same IT security challenges, as well as many unique challenges and situations.

As my wife has benefitted from many heart devices, I’ve always been very interested in making sure that products are secure, so I jumped at the opportunity to build a medical device security program at Medtronic. It has been a great experience and the program is really having an impact.

More recently, I realized that 30+ years working in large corporations was enough, and that I wanted to try something new, so I retired from Medtronic. Now, I’m excited about any kind of work I can do in this field to help all of the players in the medical device security industry create better and more secure products. There is so much opportunity and challenge ahead.

As more medical devices have become wirelessly connected, what new security challenges have arisen?

The list is long: asset management is difficult because of the wide variety of vendors and unique devices connected to a hospital network; protecting the storage and use of personal information as it is sent anywhere in the world; and the lack of physical control over the device.

Secure communications, including authentication and encryption, is also a real challenge. Being connected to the Internet is an even higher risk for medical devices than for a typical laptop or mobile device. It will be difficult to secure IoT devices as they multiply.

How serious is the risk to patients?

Real security risk does exist in connected medical devices, especially in older ones. Any security risk needs to be taken very seriously to protect patient safety, but the key question to me is always, “Does the therapy that the device provides outweigh the risk of a security problem?”

In the majority of current cases, the risk is relatively low, and the benefit is very high. That said, there are too many devices out there that have poor security, and they need to be addressed as quickly as possible. The risk to patients is growing quickly as more connected devices are used and the IoT becomes full of medical-related “things.”

What can medical device manufacturers do to create more secure products?

They need to build a program around device security and have strong commitment from the top, as well as assignment of accountability. This may require that they find or buy more expertise on security, specifically in medical devices.

Manufacturers need to leverage new security technologies and build security into the development of new products from the beginning. As part of this process, they must engage heavily with their healthcare customers to really understand what needs to be improved with their products, and then support the security functions in their products when they’re in the field.

What are the key components of a strong security program for device manufacturers?

A successful security program should have strong leadership and governance, security built into the entire product lifecycle, training and education on security for those people developing products, independent assessment and security testing in products, a repeatable coordinated response capability, and heavy engagement with the communities outside of the company, including patients, providers, researchers, regulatory agencies, industry groups, and the press.

What are some of the common struggles manufacturers have in implementing a security program?

One of the biggest issues many face is simply getting the support and funding they need from leaders to build a new capability and hiring the right people to do it. Manufacturers have to educate engineers about the real threats that exist for these products and secure their understanding and support as well. To get a security program in place, it’s essential to bring together IT people with R&D engineering people and help them understand that they need each other to take on this challenge.

It can also be difficult to find the right expertise from inside or outside the company and to get Legal and Regulatory onboard without being too cautious and slowing things down.

What can manufacturers do to overcome these issues?

Gaining support from executive leadership, including the board of directors, is essential. Sell it to them based on patient safety, regulatory requirements, and requirements coming from the healthcare customers that are buying the products. Provide training and education on the risks and remedies, delivered by outside expert groups. Invite Legal and Regulatory into the discussion early, and expose them to what the industry and other competitors are doing. Put deliberate effort into bringing the IT experts together with the engineering experts so they can learn each other’s language and build productive relationships. If needed, have security assessments done on core legacy products to be sure there is a good understanding of the risks.

St. Jude Medical was in the news recently when a report indicated that its pacemakers could be hacked, and Johnson & Johnson has released a warning that its insulin pumps could be vulnerable to hackers. How widespread is this problem? What is the state of medical device security today?

We know there are large numbers of devices out there right now that are not secure. There will be more events like these recent examples in the future, and they could involve any manufacturer or connected device. Many of the devices in use today were designed years ago when the only requirement was patient safety.

A lot has been accomplished in the last three to four years by manufacturers, healthcare providers, and regulators, as well as security researchers working with the community to help improve security. I hope we’ll see the benefit of that collaboration in the next few years as newer, more secure devices are rolled out. In the meantime, we need to put mitigations in place, and continue to measure security risk against therapy benefit.

You’re teaching at the Medical Device Security 101 Conference this January. Who do you think should attend and what are the most important things they’ll learn?

My session will be on building a strong medical device security program. I believe that anyone who has or desires responsibility to ensure their medical devices are safe and secure would have great interest in this, regardless of how big or small their company is, or how new or mature their program is.

They will learn about the importance of taking a programmatic approach, getting executive support and creating governance, integrating security into the product development process, engaging the right people, coordinated response, and the importance of being connected within the industry, among other topics.


The Medical Device Security 101 Conference takes place January 15-17, 2017 at Disney’s Yacht & Beach Club Resorts in Lake Buena Vista, Florida. Register today to get access to over 20 expert speakers at this highly selective event attended by medical device manufacturers and healthcare delivery organizations.

Email archimedes@umich.edu to learn about individual discounts or group rates. 

Professor to Congress: 'Internet of Things security is woefully inadequate'

From: Nicole Casal Moore
Michigan Engineering

As the Internet of Things grows around us, so does the threat of cybersecurity breaches severe enough to shut down hospitals and other vital infrastructure, a Michigan Engineering professor told federal lawmakers this week.

Kevin Fu, associate professor of computer science and engineering, and director of the Archimedes Center for Medical Device Security, was one of several experts who called for federal security regulation of the Internet of Things (IoT). He spoke to the House Energy and Commerce Committee at the Nov. 16 hearing, “Understanding the Role of Connected Devices in Recent Cyber Attacks.”

On Oct. 21, many high-traffic sites including PayPal, Twitter, Amazon and Netflix went down for several hours due to an IoT-powered attack on DNS provider Dyn. Hackers carried out the attack by taking advantage of vulnerabilities in connected consumer devices like webcams and digital video recorders—perhaps millions of them.

While the consequences of the Dyn breach were not major, Fu warned that it demonstrates a gaping security hole as more and more consumer technologies—appliances, thermostats, cars, airplanes, and medical devices—become connected.

“I fear for the day every hospital system is down,” CNN quoted him as saying. “This will require some kind of governmental mandate.”

Companies don’t have enough incentive to do it on their own, he argued.

“We are in this sorry and deteriorating state because there’s almost no cost for a manufacturer to deploy products with poor cybersecurity,” CIO quoted him as saying.

He called on a variety of sectors to help put safeguards in place.

“Universities, industry and government must find the strength and resolve to invest in embedded cybersecurity with interdisciplinary science and engineering, industrial partnerships for research and education, and service to the nation," he said.

Read Fu’s full testimony or watch a video of the hearing at the E&C committee websites of the Republican majority or the Democratic minority.

U-M’s Archimedes Center for Medical Device Security offers a Medical Device Security 101 training for healthcare organizations, device manufacturers, and regulators in Orlando Jan. 15-17, 2017. The center is a multidisciplinary team of medical and computer science experts who focus on research, education and on advising industry leaders on methods for improving medical device security.

Ensuring the security of our society is a top priority for the U-M College of Engineering's transformational campaign currently underway. Find out more about supporting the security of our future in the Victors for Michigan campaign.

Thursday, September 15, 2016

Commentary: Hospitals need better cybersecurity, not more fear

By Kevin Fu, Dr. John Halamka, Jack Kufahl and Mary Logan | September 14, 2016

We've seen unprecedented attention to medical-device security after an unorthodox report was recently released by short-selling investment research firm Muddy Waters Capital and MedSec, which alleged security vulnerabilities in St. Jude Medical's pacemakers. An independent research team subsequently raised doubts about some of the clinical claims made by the report. St. Jude Medical, meanwhile, has filed a lawsuit disputing the allegations in the same report.

Cybersecurity risks associated with medical devices must be weighed against the often life-saving benefits of these devices. Hospitals struggle in assessing those risks: They may not know which medical-device assets are exposed to cybersecurity threats or get meaningful responses from vendors, and there is no national testing facility for medical-device security. There are different schools of thought on how to safely and effectively share information regarding medical-device security vulnerabilities. However, we should agree that vulnerability reporting should not be done in a manner that causes people to make decisions based on fear, rather than on clinically relevant data.

Read the complete article, which originally posted on Modern Healthcare.

Tuesday, August 30, 2016

Correlation is Not Causation: Electrical Analysis of St. Jude Implant Shows Normal Pacing

St. Jude Merlin Error Indicators Are Not Evidence of Malfunction

Battle of the bands: Here's what we listened to while writing this summary.
Here's an abbreviated technical analysis of some claims by Muddy Waters and St. Jude regarding pacemaker/defibrillator security. We will show you why correlation is not causation in the sense that a scary-looking screen is not a reliable indicator of a clinically relevant security problem. We did this analysis based on our experience over the last ten years analyzing pacemaker and defibrillator security and our experience building cardiac arrhythmia simulators for humanitarian pacemaker reuse. Read more at our ancient research website. Or see our index of previous blog posts on medical device security. This is a fun extracurricular activity for our team at the University of Michigan and Virta Labs, and we may post more thoughts before we return to our regular lives baking hearth breads and helping hospitals with cybersecurity risks.

The Muddy Waters report of August 25 showed a screenshot which they say shows an “apparent malfunction.” They also say that red error marks “are also indicators that the device is malfunctioning.” We were curious about these claims and decided to see if we could produce the same onscreen displays without causing any malfunction. This summary shows the screenshot is correlated with normal pacing and sensing, suggesting that the Muddy Waters report misinterprets the clinical relevance of the screenshot.

Figure 1: Our experiment shows that a Merlin programmer screenshot from p. 17 of the Muddy Waters report is not supportive evidence of a successful attack. The top photo shows our reproduction of the Merlin programmer screen photo, but without causing changes to the pacing pulses. Our end-to-end oscilloscope measurements (bottom photo) show that pacing pulses continue normally despite the three benign alerts that are expected when not connected to cardiac tissue. 
Hypothesis: The Merlin programmer screen photo on page 17 of the Muddy Waters report is not supportive evidence of appearing “to have caused the device to pace at a rapid rate.”

Approach: Produce the same on-screen output, and externally measure electrical signals to test the safety and effectiveness of pacing and sensing.

Result: We reliably produced the same screen output while the implant continued to pace normally.

Material: St. Jude Medical Fortify Assura ICD, Merlin programmer (software version 22.0.1 rev1)

Clinical validation: Verified by Dr. Thomas Crawford, a cardiologist and a clinical electrophysiologist at the University of Michigan Health System's Frankel Cardiovascular Center.

To verify pacing, we configured the device to emit 40 bpm pacing pulses at 2.5 V, then connected a clipped lead (~20 cm) to the V (IS-1 Bi) sense/pace port, connected an oscilloscope to the clipped lead with 50 Ω probes, and visually confirmed that the device was emitting 40 pulses per minute (Figure 1 bottom). To verify sensing, we used a signal generator to produce a 0.5 Hz square wave (consisting of 2 events, a rising then a falling edge, for a total of 1 event per second or 60 bpm) at 2 mV, which we fed into the sense/pace port via the same lead; the programmer recognized a 60 bpm beat as expected. We tested other square-wave frequencies between 0.5 Hz and 2 Hz to verify that the sensing worked as expected.
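The rate arithmetic behind these checks can be sketched in a few lines (a minimal illustration; the function names are ours, not part of any device or programmer API):

```python
def pacing_interval_s(rate_bpm: float) -> float:
    """Seconds between pacing pulses for a given rate in beats per minute."""
    return 60.0 / rate_bpm

def square_wave_bpm(freq_hz: float) -> float:
    """Apparent heart rate when each edge of a square wave (two per cycle,
    one rising and one falling) is sensed as a single ventricular event."""
    events_per_second = 2 * freq_hz
    return events_per_second * 60.0

# 40 bpm pacing -> one pulse every 1.5 s on the oscilloscope trace
assert pacing_interval_s(40) == 1.5
# 0.5 Hz square wave -> 1 sensed event per second -> 60 bpm on the programmer
assert square_wave_bpm(0.5) == 60.0
```

This is why a 0.5 Hz test signal shows up as a normal resting heart rate: the device counts edges, not cycles.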

To reproduce the markers that the Muddy Waters report highlights as indicators of a successful attack, we introduced benign electrical noise on the sense/pace port via the clipped lead by connecting the lead to a separately grounded oscilloscope (i.e., not grounded to the “can” of the device, which typically acts as ground). This noise was sufficient to trigger the “VS2” markers on the programmer screen, indicating that the device sensed a “ventricular beat.” While sampling the 40 bpm pacing output as described above, we reproduced the count of three alerts visible in the Muddy Waters report’s screen photo: two alerts from high impedance on two leads (since those were not connected to cardiac tissue), and one indicating “ventricular noise reversion.” The pacing and sensing continued to function normally. ■

The team from the University of Michigan and Virta Labs is continuing to investigate the contrasting claims by Muddy Waters and St. Jude Medical. To receive notifications of updates, follow the Archimedes Center for Medical Device Security @ARC_MedSec and @DrKevinFu on Twitter. Virta Labs also plans to issue a separate white paper.

Study on St. Jude medical device security deemed “inconclusive” by University of Michigan researchers

A recent report that alleged security flaws in St. Jude Medical’s pacemakers and other life-saving medical devices has major flaws of its own. That’s according to a team of University of Michigan researchers who say they’ve reproduced the experiments that led to the allegations, and come to strikingly different conclusions.

The U-M team is composed of several leading medical device security researchers and a cardiologist from the U-M Health System's Frankel Cardiovascular Center. “Hyperbolic” and “sloppy” are words they use to describe the unorthodox report, which was released last week by short-selling investment research firm Muddy Waters Capital and medical device security firm MedSec, Ltd.

The U-M team reproduced the error messages the report cites as evidence of a successful “crash attack” into a home-monitored implantable cardiac defibrillator. But they showed that the messages are actually the same set of errors you’d get if you didn’t have the device properly plugged in.

When it’s implanted, a defibrillator’s electrodes are connected to heart tissue via wires that are woven through blood vessels, explains Kevin Fu, associate professor of computer science and engineering at U-M and director of the Archimedes Center for Medical Device Security. Fu is also co-founder of medical device security startup Virta Labs.

Through these wires, implantable defibrillators can perform sensing operations and also send shocks if necessary.

“When these wires are disconnected, the device generates a series of error messages: two indicate high impedance, and a third indicates that the pacemaker is interfering with itself,” said Denis Foo Kune, former U-M postdoctoral researcher and co-founder of Virta Labs.

On page 17 of the Muddy Waters report, a screenshot cites these very error messages as proof of a security breach.

“But really the pacemaker is acting correctly,” Fu said. “To the armchair engineer it may look startling, but to a clinician it just means you didn’t plug it in. In layman’s terms, it’s like claiming that hackers took over your computer, but then later discovering that you simply forgot to plug in your keyboard.”

Added Foo Kune, “While there still could be security problems, the screenshot is anything but supportive of the claim. When researchers with limited medical training go public with unvetted claims, it’s easy to jump to conclusions.”

Ethicists and other researchers have criticized MedSec’s technique of teaming with a short-seller to publicize its preliminary findings—and benefit financially, no less.

Short-selling is an investment practice that essentially involves betting that a particular stock will decline in value. If it does, then the investment firm profits. In this case, MedSec made a deal with Muddy Waters to receive a share of those profits. St. Jude’s stock fell sharply over the weekend.

“It was the irresponsible thing to do. Think about whether you believe everything a used car dealer claims when deciding whether to buy,” said Wenyuan Xu, a visiting professor of electrical engineering and computer science at U-M and an expert in automotive and medical device security. She recently hacked into Tesla’s autopilot system to demonstrate its vulnerabilities.

To conduct the experiments, the U-M team used a new and properly functioning model of the same defibrillator that the Muddy Waters study used—the Fortify Assura VR. In several additional instances, they found that the device operated properly.

Even while the U-M research team finds fault with the Muddy Waters report, they don’t mean to suggest that these medical devices—or any medical devices for that matter—are necessarily secure. They stress the importance of establishing security workflows early on in the design process of medical devices.

“While medical device manufacturers must improve the security of their products, claiming the sky is falling is counterproductive,” Fu said. “Healthcare cybersecurity is about safety and risk management and patients who are prescribed a medical device are far safer with the device than without it.”

Thomas Crawford, an assistant professor of medicine and a clinical electrophysiologist at U-M, agrees. Crawford implants and follows patients with pacemakers and implantable defibrillators.

“Given the significant benefits from home monitoring, patients should continue to engage in it via St. Jude Medical Merlin, and other companies’ respective proprietary home monitoring systems, before independent research can substantiate the claims made by MedSec and their financial partner Muddy Waters Capital, LLC,” Crawford said.

Crawford adds that home monitoring has been shown to reduce a variety of adverse events, with some studies even showing a reduction in overall mortality compared with periodic checks of devices in the doctor’s office. The devices can send actionable alerts to a central monitoring service, which are then forwarded to the physician so that they can be dealt with immediately if necessary. Alerts include low battery status, potential malfunction of the device, or changes in heart rhythm that may require treatment.

The Archimedes Center for Medical Device Security offers a Medical Device Security 101 training in Orlando Jan. 15-17, 2017. Details will be forthcoming online. In the meantime, for more information, email archimedes@umich.edu.

Monday, May 2, 2016

Archimedes Circular Podcast 0x01: Co-Chairs of the AAMI Medical Device Security Working Group

Not an official logo of the AAMI Medical Device Security Working Group, but it may become a T-shirt after members catch up.
Welcome to the inaugural Archimedes Circular Podcast. Today, Dr. Kevin Fu interviews the co-chairs of the AAMI Working Group on Medical Device Security ahead of the release of its Technical Information Report 57 to medical device manufacturers on specific security engineering methods designed to help satisfy regulatory expectations of cybersecurity in the 510(k) and PMA processes.

Ken Hoyme and Geoffrey Pascoe are co-chairs of the AAMI Medical Device Security Working Group. AAMI is the Association for the Advancement of Medical Instrumentation. Founded in 1967, AAMI is a non-profit organization of 7,000 professionals dedicated to the development, management, and use of safe and effective healthcare technology. AAMI consists of over 100 technical committees and working groups that produce Standards, Recommended Practices, and Technical Information Reports for medical devices. The medical device security co-chairs are interviewed by Kevin Fu, a professor at the University of Michigan and director of the Archimedes Center for Medical Device Security.

For several years, the AAMI Medical Device Security Working Group has been toiling tirelessly on Technical Information Report #57 (Principles for medical device information security risk management). Its members fondly call it TIR 57. The document provides advice to front-line medical device engineers on how to begin integrating security engineering into the design and implementation of medical devices. TIR 57 is based on the input and consensus vote of medical device manufacturers, health delivery organizations, security engineering experts, and faculty.

Kevin:                      Welcome to the Inaugural Archimedes Broadcast. My name is Kevin Fu. I direct the Archimedes Center for Medical Device Security. Today, we’re going to talk about consensus standards and guidance documents for manufacturers to meet FDA expectations on medical device security. Today, I am interviewing Ken Hoyme and Geoff Pascoe, the co-chairs of the Medical Device Security Working Group of AAMI, which is considered the most respected standards body in the medical devices arena. I am also joined by Wil Vargas who is the director of standards at AAMI, so welcome, Ken, Geoff and Wil.
Wil:                         Thank you.
Ken:                          Thank you. Thank you for having us.
Geoff:                      Thanks.

Saturday, April 23, 2016

Comments on Postmarket Cybersecurity Guidance: The FDA Awakens

FDA's draft postmarket guidance on cybersecurity greatly improves beyond past approaches, but the devil is in the details.
The deadline to submit comments on FDA's draft postmarket cybersecurity guidance came and went last week. Below is a copy of my comments to FDA.

My major recommendation pertains to language choice when describing postmarket risks, so as to monitor for postmarket problems without falling victim to the streetlight effect. While network-based threats are a significant part of the problem, they are just one of many postmarket problems. There's a reason we don't write guidance on how to avoid flu spread by sneezing, then write a different guidance document on how to avoid flu spread by coughing. By focusing instead on exposure to cybersecurity risk, the industry can better prepare for shifting threats, whether by network, USB drive, telephone social engineering, or whatever fancy technology next comes out of Silicon Valley. To ensure that the postmarket guidance remains relevant as technology and threats change, focus on overarching exposure rather than streetlight modalities.

I also advise manufacturers and HDOs (healthcare delivery organizations) to follow the NIST cybersecurity guidance for critical infrastructure. For example: (1) enumerate cybersecurity risks, because deploying technology without understanding risk is counterproductive; (2) deploy cybersecurity controls that match the specific risks; and (3) continuously measure the effectiveness of those controls, because threats, vulnerabilities, and misconfigurations can bypass a previously effective control within seconds. For instance, if you just look for threats against your core reactor, you might forget about your thermal oscillator.
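The three steps above amount to keeping a living risk register: enumerate risks, map each to a control, and periodically re-check whether the control still works. A toy sketch (the risk names, controls, and flags here are invented for illustration, not drawn from any NIST document):

```python
# Step 1: enumerate risks; step 2: record the matching control.
# The "effective" flag is what step 3's continuous measurement updates.
risks = {
    "unsigned_firmware_update": {"control": "code signing", "effective": True},
    "usb_malware": {"control": "port lockdown + scanning", "effective": True},
    "default_credentials": {"control": None, "effective": False},
}

def unmitigated(register):
    """Risks with no control at all, or whose control has stopped working."""
    return sorted(name for name, entry in register.items()
                  if entry["control"] is None or not entry["effective"])

# Step 3: after each measurement cycle, this list is the remediation queue.
print(unmitigated(risks))  # -> ['default_credentials']
```

The point of the sketch is the loop, not the data structure: a control marked effective today can silently stop being effective, so the register is only useful if the flags are re-measured continuously.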

My letter is downloadable here.

Sunday, January 31, 2016

White House Roundtable on Cybersecurity of Hospitals and Medical Devices

The White House convened a leadership roundtable on the topic of cybersecurity of hospitals and medical devices.
Last month, the White House quietly convened a group of medical device security stakeholders and domain experts to discuss the cybersecurity challenges faced by healthcare delivery organizations and medical device manufacturers. There were actually multiple meetings. Here I summarize just one that I attended in my role as a professor leading the Archimedes Center for Medical Device Security at the University of Michigan, and in my role as a member of the Computing Research Association's Computing Community Consortium (CCC) Council.

Convened by the President's Office of Science and Technology Policy (OSTP), we sat together in the elegant Diplomatic Room in the Old Executive Office Building. I was invited because of my expertise in medical device security and FDA regulatory affairs dating back to when I briefed the FDA in October 2006 on looming cybersecurity risks and when I worked in hospital IT in the early 1990s. I was probably not invited for my bread making skills.

The room was packed with people from a diverse set of backgrounds: techies, physicians, policy wonks, CISOs, lawyers, and more. I noticed that the group roughly divided into three parts, like Gaul:
  • visitors like myself who responded to questions, 
  • special assistants to the President who asked questions, and 
  • leaders from various parts of the executive branch who listened attentively.
White House Chief Data Scientist DJ Patil chaired the meeting. White House Cybersecurity Czar Michael Daniel asked many questions. There were a large number of federal representatives from 
  • various HHS agencies (FDA, CMS, OCR, ONC) plus the HHS CISO,
  • the U.S. Digital Service, 
  • DOD, 
  • DHS, 
  • FBI, 
  • NIH, 
  • the National Security Council, and 
  • a guy from the Secret Service who offered just his first name.
One notable techie in the room was Mina Hsiang, a fellow engineer from MIT who served in the tech surge team to rescue healthcare.gov.

We talked about the NIST cybersecurity framework, collaboration across agencies and industry, regulatory matters to incentivize better cybersecurity, information sharing so that hospitals and manufacturers need not be in the dark about threats, incident and vulnerability response, leadership, and medical devices in general.

Prof. Kevin Fu and Dr. David Klonoff
Michael Daniel expressed concern that the Internet was becoming a liability, but also that security problems can slow innovation. He pointed out that the median number of days to detect an intrusion has improved to an embarrassing 209 days across all industries. So what happens during those 209 days as the intrusion spreads its tentacles through a hospital? He also expressed hope that computer scientists can find a way to decouple and better layer security into operating systems (sounds right up the alley for an SOSP paper). Multiple speakers brought up the topic of Medicare/Medicaid reimbursement policies, and how CMS ought to use the power of the purse to incentivize purchasing of more secure, safe, and effective products. Separately reached for comment, a representative from CMS explained that they do routinely realign their reimbursement policies, especially when FDA issues new guidance (ahem, cue the new FDA pre-market and post-market guidance). A CMS representative explained that it's not uncommon to set policies more strict than FDA requirements by pointing to industry standards (cue AAMI TIR 57 on medical device security).

It's the Simple Stuff, Stupid

I spoke about cybersecurity problems at hospitals and medical device manufacturers, why the problems exist in the first place, and how stakeholders are genuinely working on the problems. The good news is that many (but not all) manufacturers and hospitals genuinely want to find a way to mitigate cybersecurity risks. In contrast to sensationalist media reports, I emphasized that the greatest near-term risks are dirt simple: the delivery of patient care is disrupted when medical devices get compromised by garden variety, decade-old malware by accident. These devices are no longer safe and effective, and often require downtime to clean up the cybermess. My longer manifesto on this subject appears in the National Academy of Engineering Winter 2015 newsletter and as part of a workshop at the Institute of Medicine.

The feds had many questions about NIST guidance documents on cybersecurity, and the invited guests from industry heaped praise on NIST for documents that actually get used in practice. Footnote: NIST is about to celebrate the grand opening of its new National Cybersecurity Center of Excellence (NCCoE). I've been asked to spread the word about their recently posted call on tools to protect the security of medical devices.

One of the more interesting conversations involved culture shock. When I spoke about the security problems that hospitals face and the sometimes adversarial relationship between IT and biomedical groups, the counsels from the American Hospital Association nodded, smiled, and sighed in agreement. They know what I am talking about: the IT security people who lock down computers to the point that clinicians can't get their jobs done, or the clinician who accidentally infects a cathlab with a virus transferred by a USB stick from a Yahoo account on a nursing workstation. Having worked in a community hospital installing computers in patient rooms, back offices such as medical records, and administrative areas such as the CEO's office, I had firsthand experience observing effective and ineffective ways of deploying technology in clinical areas. IT security people: thou shalt not interrupt clinical workflow! Period!

For the academics

I'd like to encourage my fellow computer science faculty to get out of their dingy offices and educate leaders in government. Conference and journal publications are not the end point of research, but rather the beginning of impact on society at large. For faculty who might participate in future White House roundtables, here's a bit of advice. Come prepared with a single request, not a long, annoying list, of how the government can help rather than get in the way. My request was simple: use the force. That is, use the convening force of the government to bring stakeholders together. I asked them to convene medical device manufacturer CEOs, Boards of Directors, and hospital executives to ask how they are meaningfully addressing medical device security risks.

Final thoughts

The higher-ranking people in federal government are just beginning to wrestle with the problem of medical device security. It's clear that the government isn't going to sit idly by as hospitals continue to get infected with cybersecurity problems (three hospitals hit last week [1, 2, 3]) and manufacturers continue to produce difficult-to-secure devices (remote buffer overflows in drug infusion pumps last week). At the end of the day, hands were shaken, business cards were exchanged, speaking invitations were offered, and other passive voice events.

The government is a meta-organization, and you should not expect them to directly solve your problems. They will not do your homework for you, and they won't debug your software for you. But they will set expectations and desired outcomes, and they will take action against medical device companies that prefer to bury cybersecurity problems. Expect to hear about the outcomes of these types of ongoing meetings at the 4th Annual Archimedes Workshop on Medical Device Security at the University of Michigan. Ok, all for now!

Kevin Fu is Associate Professor of EECS at the University of Michigan and Chief Scientist of Virta Labs, Inc.

Tuesday, January 19, 2016

Postmarket Management of Cybersecurity in Medical Devices: FDA Releases #2 Draft Guidance Document

In the end, post-market medical device
security is about people and responsibility. Photo
taken today outside the Washington Convention
Center meeting on automotive cybersecurity.
FDA has unleashed its long-awaited #2 guidance document on cybersecurity: its draft post-market guidance on medical device security. Ok, ok, I conclude my Secure Health IT humor here.

I'd like to commend FDA for releasing this difficult-to-write document. To the armchair engineer, this might seem like easy stuff. Wrong. While the already finalized pre-market guidance primarily focuses on basic engineering practices to build security into medical device designs, the post-market guidance is mostly about people and effective communication. Why is writing post-market guidance so difficult? Because it's more about people than technology.

There's a lot that the FDA guidance gets right, and most of my criticism pertains to word choice (and lack of puns) that can be solved by editing. The preamble of the document (which focuses on networked medical devices) does not entirely match the body of the document (which talks about all devices, not just networked ones). The terms "networked devices" and "connected" are red herrings. A network is not necessary for a cybersecurity exploit; malware gets in just fine by unhygienic USB drives carried by unsuspecting personnel. Social engineers still use telephones to trick personnel into enabling unauthorized remote access. The final post-market guidance will need to more deliberately draw attention to outcomes of compromise and risks of vulnerabilities rather than the constantly evolving modality of delivery of exploits. After all, when we talk about surveilling for the spread of flu, we don't limit discussions to spread by cough versus spread by sneeze. Should the document list networked and connected devices as examples of infection vectors? Yes. Should it mention only networked and connected devices? No. Outcomes, not modalities.

What's my opinion on important post-market activities in general? Stakeholders need to communicate vulnerabilities more effectively and monitor for shifting threats. Medical device manufacturers should create workflows to receive outside input on potential vulnerabilities.

Security folks who discover potential problems need to be aware of the timescales for responses to responsible security vulnerability disclosures. As Allan Friedman explains, even if you think you're the most important person on the planet, don't expect a medical device manufacturer to simply drop everything they are doing to fix a security flaw overnight. On the other hand, it boggles the mind why a manufacturer might wait a year to meaningfully respond to a clinically significant vulnerability reported by a security researcher.

I expect to see more FDA actions on the less noble manufacturers who do not catch up with this basic medical device security post-market guidance.