Health Datapalooza 2015: NO, YOU CAN’T ALWAYS GET WHAT YOU WANT: GETTING WHAT YOU NEED FROM HHS
Real-time notes from day 2 of Health Datapalooza 2015 (#HDPalooza) in Washington, DC.
While more data is better than less, pushing out any ol’ data isn’t good enough. As the Data Liberation movement matures, the folks releasing the data face a major challenge: determining what’s the most valuable stuff to put out. How do they move from smorgasbord to intentionally curated data releases that prioritize the highest-value data? Folks at HHS are wrestling with this, going out of their way to understand what you want and to ensure you get the yummy data goodies you’re craving. Learn how HHS is using your requests and feedback to share data differently. This session explores HHS’s new initiative, Demand-Driven Open Data (DDOD): a lean-startup approach to public-private collaboration. A new initiative out of the HHS IDEA Lab, DDOD is bold and ambitious, intending to change the fundamental data-sharing mindset throughout HHS agencies, from the quantity of datasets published to the actual value delivered.
Moderator: Damon Davis, U.S. Department of Health & Human Services
Phil Bourne, National Institutes of Health (NIH);
Niall Brennan, Centers for Medicare & Medicaid Services;
Jim Craver, MMA, Centers for Disease Control & Prevention;
Chris Dymek, EdD, U.S. Department of Health & Human Services;
Taha Kass-Hout, Food & Drug Administration;
Brian Lee, MPH, Centers for Disease Control & Prevention;
David Portnoy, MBA, U.S. Department of Health & Human Services
David Portnoy calls out the sterling backroom work that Damon Davis does in the Healthdata.gov world.
Demand-Driven Open Data (#DDOD)
External requesters submit use cases for data, so there is a customer in mind before the data is released.
Phil Bourne – NIH
NIH is establishing a Commons: a virtual shared space spanning public and private clouds. Participating resources are Commons-compliant, and all research objects have unique identifiers.
Creating a marketplace for research data. Also building data-level metrics.
An objective is to drive more structure into the data. At a minimum, critical metadata is associated with each data object.
Brian Lee – CDC Public Health
The Ebola epidemic revealed that asking for the same information in different ways creates operational overhead. Need to develop a harmonized set of data elements.
Open Data mandates have raised awareness.
Think about small projects that enable change, e.g. security assignments to simplify data release (such as FOIA requests), or producing synthetic datasets so data submitters can practice and build tools to deliver data more accurately.
Jim Craver – CDC
Health Indicators Warehouse.
The National Center for Health Statistics has a population health focus, going deeper than the surveillance organizations at CDC, e.g. looking at health conditions like diabetes and asthma.
Generates annual surveys covering 40,000–50,000 people.
Sample size becomes an issue when you slice and dice the data.
Also has mobile clinics that go into a community and collect samples from 4,000–5,000 people per year. Costly to scale.
To deal with requests for specific datasets, created a platform, the Health Indicators Warehouse, so that requested datasets are made available to everyone: http://2.healthca.mp/1JikwEy
Delivers detailed data down to the geographic level while preserving privacy, which can limit granularity.
Has a Research Data Center. Pre- and post-disclosure reviews protect the confidentiality of survey participants.
“Machine readable” in data terms does NOT mean Section 508-compliant screen readers!
Self-reported data does not work with mortality data.
Datasets are becoming publications in their own right.
Taha Kass-Hout – FDA
OpenFDA has simplified the release of data.
Prior to this it took significant effort to link data from different parts of the agency.
Tools are published on GitHub. The APIs see 100 hits/second and have 6,000 registered users.
Uses StackExchange to crowdsource support and interpretation.
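To give a sense of what those API hits look like, here is a minimal sketch of querying openFDA. The drug adverse-event endpoint (`drug/event.json`) is part of the public API at api.fda.gov; no API key is needed at low request volumes. The helper function name is my own, not part of openFDA.

```python
# Minimal sketch of building an openFDA query (api.fda.gov).
# The drug adverse-event endpoint is public; no key needed at low volumes.
from urllib.parse import urlencode

def openfda_url(endpoint, search, limit=1):
    """Build a query URL for an openFDA endpoint such as 'drug/event'."""
    query = urlencode({"search": search, "limit": limit})
    return f"https://api.fda.gov/{endpoint}.json?{query}"

url = openfda_url("drug/event", 'patient.drug.medicinalproduct:"ASPIRIN"')
# Fetching this URL returns JSON with a "meta" block (disclaimer, result
# counts) and a "results" list of adverse-event reports.
```

Fetching the URL is a one-liner, e.g. `json.load(urllib.request.urlopen(url))`.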
Niall Brennan – CMS
CMS has been rolling out datasets at the Palooza. A deep commitment to transparency.
Biggest challenge: people want all of the data for free, but much of it is privacy-protected for vulnerable sections of the population.
CMS has removed restrictions on granular access to CMS claims data. Aggregated results can be taken out of the Virtual Research Data Center (VRDC); no PHI can be extracted.
The datasets in the VRDC will be updated on a quarterly basis.
CMS has a mature data ecosystem.
Chris Dymek – HHS (PCOR Trust Fund) ASPE
The Patient-Centered Outcomes Research Institute is funded by the PCOR Trust Fund.
HHS has a small part of the Trust Fund that funds internal activities to help release data for PCOR.
PCOR TF is working out how to make data that is valuable to PCOR more liquid.
Shout out to the Office of Enterprise Data Analytics (Niall’s team) and the creation of the VRDC, and also to the re-purposing of the BlueButton platform, pointing to my work at CMS with BlueButton on FHIR to make it easier for beneficiaries to connect their data with applications and services that they trust.
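To illustrate what "connecting their data" could look like through a FHIR API, here is a hedged sketch of a beneficiary claims request. The base URL below is hypothetical; FHIR servers expose typed REST resources, and ExplanationOfBenefit is the standard FHIR resource for claims data, searchable by patient.

```python
# Hedged sketch of a BlueButton-on-FHIR style request.
# BASE is a hypothetical sandbox endpoint, not a real CMS URL.
from urllib.parse import urlencode

BASE = "https://sandbox.example.gov/fhir"  # hypothetical FHIR server

def eob_search_url(patient_id, count=10):
    """URL for a FHIR ExplanationOfBenefit search for one beneficiary."""
    query = urlencode({"patient": patient_id, "_count": count})
    return f"{BASE}/ExplanationOfBenefit?{query}"

print(eob_search_url("12345"))
# https://sandbox.example.gov/fhir/ExplanationOfBenefit?patient=12345&_count=10
```

In a real deployment the request would carry an OAuth token so a beneficiary only sees their own records.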
Mark is available for challenging assignments at the intersection of health and technology using big data, mobile, and cloud technologies. If you need help to move, or create, your health applications in the cloud, let’s talk.
Stay up-to-date: Twitter @ekivemark
I am currently an HHS Entrepreneur-in-Residence working on an assignment to update BlueButton for Medicare beneficiaries. This involves creating a data API. Watch out for more about BlueButton on FHIR.
The views expressed on this blog are my own.
I am also a Patient Engagement Advisor, CTO, and Co-Founder of Medyear.com. Medyear is a powerful free tool that helps you collect, organize, and securely share health information, however you want. Manage your own health records today.
Medyear: Less Hassle, Better Care.