Technology Accelerators to Support Medicaid Innovation

Speed Up Delivery, Reduce Risk, and Deliver a Great User Experience: At the recent Medicaid Innovations Forum in Orlando, Florida, CMA’s Brian Dougherty (Chief Technology Officer) and Joe Chiarella (Medicaid Practice Lead) gave a presentation on technology accelerators used for Medicaid innovation.

Many of the accelerators developed by CMA over the past 15 years are packaged into our Secure Healthcare Analytics & Research Platform (SHARP). The first half of the presentation went through these features, while the second half focused on how the system supports advanced funding requirements from CMS (The Seven Conditions and Standards).

You can find the presentation in its entirety on our YouTube channel, and we’ve summarized the technology accelerator portion of the presentation below. Further explanation of the advanced funding requirements support is detailed in a later blog entry.

What is an Accelerator?

CMA uses two criteria to define a technology accelerator that supports Medicaid innovation:

  • It reduces delivery time for large projects
  • It reduces risk

Medicaid data warehouses can house hundreds of servers and support petabytes of storage. It’s difficult to overstate the complexity involved in these systems. Many different constituents and agencies, both internal and external, use the data. Components include business intelligence (BI) features such as dashboards and reports, metadata, data delivery, ETL functionality, and more. Building such a warehouse involves a great deal of risk, so speed and efficiency are vital. And because so many users are involved, so is security.

This is where CMA comes in. We’ve focused on implementing a streamlined, secure environment for the data warehouses we support. In doing so, the accelerators we built have also created a friendlier, more useful experience for warehouse users.

The Technology Services Layer

We build software that we can’t find on the market.

CMA has developed many accelerators that function at the technology services layer of the warehouse. They include both hardware and software. We’ve bundled our accelerators into a series of reusable components, including:

  • Run Time Security Framework
  • User Provisioning and Access Management System
  • Integrated Consumer Portal
  • Analytics and Reporting Framework
  • Data Warehouse Assistant for Ad Hoc Queries, Templates, and Custom SQL
  • Integrated Metadata Reporting
  • ETL Master Scheduler and Event Broker
  • Secure Data Exchange Framework (DART and Data Delivery Portal)

We’ll explain each of these accelerators below, keeping real-world situations in mind and noting the shortcomings of some of the products currently on the market.

Run Time Security Framework

Security is a crucial aspect to delivering technology today, especially in the healthcare field.

When you register a user in a data warehouse, the first questions that need answering are:

  • Which of these components does the user need to use?
  • What can they see and what can’t they see (down to the column level)?

In the CMA Security Framework, when users log in, all their information (context and data) follows them around wherever they go in the warehouse.

Warehouse products include ad hoc query tools, reporting tools, and all kinds of analytics tools. Our framework eliminates the need to log into each product separately: behind the scenes, we make sure users always have the authorization they need.
It’s always the same security context, using a single sign-on, no matter which product the user is working in.
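The idea of one security context that follows the user everywhere can be sketched roughly as follows. This is an illustrative sketch only (the actual CMA framework is proprietary, and all names here are hypothetical): a context is established once at sign-on, and every warehouse product consults it, down to the column level.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SecurityContext:
    """A user's authorization, established once at single sign-on."""
    user_id: str
    allowed_tools: frozenset                          # e.g. {"adhoc", "reports"}
    visible_columns: dict = field(default_factory=dict)  # table -> allowed columns

def can_use(ctx: SecurityContext, tool: str) -> bool:
    """Every warehouse product consults the same context; no re-login."""
    return tool in ctx.allowed_tools

def filter_row(ctx: SecurityContext, table: str, row: dict) -> dict:
    """Return only the columns this user may see (column-level security)."""
    allowed = ctx.visible_columns.get(table, set())
    return {col: val for col, val in row.items() if col in allowed}
```

The key design point is that authorization is attached to the session context rather than to each tool, which is what makes single sign-on across many products possible.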

User Provisioning and Access Management System

When thousands of users use a warehouse, it is no easy task to provision those users and get them up to speed. You need to keep track of what they can and can’t see.

CMA created a workflow engine: an elaborate suite of user provisioning software. It is HIPAA and HITECH compliant. It considers the whole process used to gather information about the user: not only security considerations, but training and education information as well. There are real-time alerts for supervisors and/or the individual.

This accelerator is also built with the future in mind. If a new provisioning requirement comes along, we can easily plug it into the workflow.

It can be a 10-15 step process to provision a user. But once they are provisioned, we have everything we need to apply that information across the warehouse.
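The pluggable, multi-step shape of provisioning can be sketched like this. The steps and field names below are illustrative, not CMA's actual workflow: each step validates or enriches the request, and a new requirement is added by appending a step to the list.

```python
# Sketch of a pluggable provisioning workflow (illustrative names only).

def check_identity(req: dict) -> dict:
    if not req.get("user_id"):
        raise ValueError("missing user_id")
    return req

def check_training(req: dict) -> dict:
    # Provisioning also gathers training/education info, not just security.
    if not req.get("hipaa_training_complete"):
        raise ValueError("HIPAA training not complete")
    return req

def assign_roles(req: dict) -> dict:
    req["roles"] = ["report_viewer"]  # default role; adjusted by supervisors
    return req

# A new provisioning requirement plugs in by appending another step here.
PROVISIONING_STEPS = [check_identity, check_training, assign_roles]

def provision(request: dict) -> dict:
    """Run the request through every step; any failure halts provisioning."""
    for step in PROVISIONING_STEPS:
        request = step(request)
    return request
```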

Integrated Consumer Portal

CMA believes there should be a single entry point into the warehouse. A warehouse is not one discrete object; it delivers many platforms. When a user sits down to use the warehouse, we want one pane of glass: a single view into the warehouse that simplifies navigation, no matter how complicated (or simple) the user’s needs are.

This uniform implementation not only helps speed and security, it creates a very user-friendly experience.

Analytics and Reporting Framework

User personalization is an important aspect of a good Medicaid data warehouse. All our technology accelerators aim to provide a personalized user experience. This personalization (and security) carries over to the analytics and reporting framework. A user will only see the reports and tools they have access to, and nothing they don’t.

Our implementation of this framework includes a queuing environment. A user sees all his or her reports that are currently executing, future reports that are “in line” to be run, as well as all the reports in his or her history.

A typical Medicaid warehouse could have 10,000 tables, eight different databases, and just a lot of business intelligence (BI) content. The normal person sitting down to that becomes overwhelmed. Our goal is a simple and streamlined experience.
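The queuing behavior described above can be sketched as a simple per-user queue. This is a toy sketch under the assumption of a fixed concurrency limit; the real framework manages execution on the warehouse side.

```python
from collections import deque

class ReportQueue:
    """Per-user view of reports: running, waiting "in line", and history."""
    def __init__(self, max_concurrent=2):
        self.max_concurrent = max_concurrent
        self.running = []
        self.waiting = deque()
        self.history = []

    def submit(self, report):
        if len(self.running) < self.max_concurrent:
            self.running.append(report)
        else:
            self.waiting.append(report)   # queued, "in line" to be run

    def finish(self, report):
        self.running.remove(report)
        self.history.append(report)       # visible in the user's history
        if self.waiting:                  # promote the next queued report
            self.running.append(self.waiting.popleft())
```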

Data Warehouse Assistant for Ad Hoc Queries, Templates, and Custom SQL

As we mentioned, we build software that doesn’t already exist in the market. Some of the areas where we saw gaps were ad hoc querying, templates, and custom SQL, so we built the Data Warehouse Assistant to address them.

Ad Hoc Queries

To explain ad hoc queries, let’s use the New York State Medicaid Data Warehouse [link] as a real-world example. In New York, about six billion Medicaid claims are active right now, and thousands of people use the Medicaid data warehouse. One of the things people liked to do was grab 100 million claims and bring that information to another system, or do something with it on their desktops. This was a problem, because it’s neither efficient nor secure to bring that many rows of data down to a desktop computer.

So why did the users do it?

Simply put, they had other problems to solve. They may have had 80 percent of the data they needed in the warehouse, but they never had all of it. They may have had a subpopulation or a cohort on their desktop that they’d like to inject into the warehouse and use to direct their querying.

To address this need, CMA built the concept of “my private data” into the warehouse. In addition to all the data available in the warehouse, every user can upload and store their own private data. To them, it looks like it’s right there on the platform, and no other user can see it.

So instead of bringing data down (and all the associated security risks that go along with that), they can push their data up to examine in an environment where it is protected and easily usable.
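A minimal sketch of the “my private data” idea, assuming uploads live in a per-user namespace keyed by owner (the class and naming below are illustrative, not CMA's implementation):

```python
# Sketch: shared warehouse tables are visible to everyone; each user's
# private uploads are keyed by owner and visible only to that owner.

class Warehouse:
    def __init__(self):
        self.shared = {}    # table name -> rows, visible to all users
        self.private = {}   # (owner, table) -> rows, visible to owner only

    def upload_private(self, user, table, rows):
        """Push data up into the protected environment instead of down."""
        self.private[(user, table)] = rows

    def visible_tables(self, user):
        """Shared tables plus this user's own uploads; never anyone else's."""
        mine = [t for (owner, t) in self.private if owner == user]
        return sorted(self.shared) + sorted(mine)
```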


Templates

Templates were another area where we saw gaps in the available software. There are many common patterns of query and analysis. We’ve allowed users to build their own templates and store them in the warehouse itself.

Custom SQL

A third component of our Data Warehouse Assistant is the ability to register custom SQL. There are essentially three types of users in a warehouse.

  • Power users who write their own SQL.
  • Users who run reports and queries.
  • Executive users who use reports, dashboards, and drill-downs.

But some users fall between these tiers: they want the flexibility of custom SQL, but they aren’t comfortable writing it from scratch. So we allow these users to register their own SQL, which they cut and paste from other tools and inject into the warehouse. Our software parses it semantically and makes sure it’s valid. Then they can execute it successfully.

In other words, if someone has a report they feel needs some additional work, they can now extend it with their own SQL without being a typical power user.
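As a rough illustration of checking registered SQL before execution, here is a toy validator. It is a simplified stand-in only: real semantic parsing is far more thorough, and the table list here is hypothetical.

```python
import re

ALLOWED_TABLES = {"claims", "providers"}   # tables this user may query (illustrative)

def validate_custom_sql(sql: str) -> bool:
    """Toy validation of registered SQL: accept a single SELECT statement
    that references only authorized tables."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                          # one statement only
        return False
    if not re.match(r"(?i)select\b", stripped):  # reads only, no DDL/DML
        return False
    tables = re.findall(r"(?i)\b(?:from|join)\s+(\w+)", stripped)
    return all(t.lower() in ALLOWED_TABLES for t in tables)
```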

Integrated Metadata Reporting

Metadata has historically been a trendy word when applied to databases. CMA believes you need enough metadata to deliver context to the data delivered, but you don’t want to spend all of your time and effort on metadata.

To address this, we have a set of accelerators that allow all producers of metadata to push all of that metadata into one integrated repository. So, the ETL produces metadata, the BI produces metadata, etc.
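The shape of the integrated repository can be sketched as follows (a sketch only, with illustrative names): every producer pushes records into one store, tagged by source, so a user can get the full context for any piece of data.

```python
# Sketch of an integrated metadata repository: ETL, BI, and other
# producers all push into one store, queryable by subject.

class MetadataRepository:
    def __init__(self):
        self.records = []

    def push(self, producer, subject, detail):
        self.records.append({"producer": producer,
                             "subject": subject,
                             "detail": detail})

    def context_for(self, subject):
        """All metadata about one subject, regardless of who produced it."""
        return [r for r in self.records if r["subject"] == subject]
```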

ETL Master Scheduler and Event Broker

A large warehouse will take in thousands of files weekly, with many streams running simultaneously. It helps to think of the warehouse as a wholesaler of data: we take data in from all over the place, integrate and homogenize it, and then publish it all over the place. So the Extract, Transform, and Load (ETL) process is critical.

The Event Broker component has been called a “traffic cop on steroids.” It is a big help in orchestrating all this data.
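Stripped to its core, the broker's job is dependency-aware ordering: a job runs only when everything it depends on has finished. A minimal sketch (the real Event Broker and master scheduler also handle calendars, retries, and parallel streams):

```python
def run_order(jobs):
    """Topologically order ETL jobs, given job -> list of prerequisites."""
    done, order = set(), []
    while len(done) < len(jobs):
        ready = [j for j, deps in jobs.items()
                 if j not in done and all(d in done for d in deps)]
        if not ready:
            raise ValueError("circular dependency among ETL jobs")
        for j in sorted(ready):   # deterministic order for the sketch
            done.add(j)
            order.append(j)
    return order
```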

Secure Data Exchange Framework (DART and Data Delivery Portal)

We encountered two more problems for which we built our own software.

When you build a large warehouse, you have many different nodes between which data needs to move, point to point. Again, you will have thousands of files, many different data marts, and many external entities to distribute data to. We couldn’t find software on the market that could move this data fast enough at that volume. In the New York warehouse, we process terabytes a night.

So we built DART, a high-speed data transport with endpoint intelligence. That just means we can plug it into any source and move data to any target, configured graphically.

The Data Delivery Portal allows external consumers (agencies, entities, CMS) to subscribe to data. We’ve moved from a resource-heavy “extract” method to a publish and subscribe model. So all the consumers have to do is put a subscription in. And we make an automated delivery of that data.
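The publish-and-subscribe model can be sketched like this (class and names hypothetical): a consumer registers a subscription once, and each new release is delivered to every matching subscriber automatically, with no per-request extract work.

```python
# Sketch of publish-and-subscribe data delivery (illustrative names only).

class DataDeliveryPortal:
    def __init__(self):
        self.subscriptions = {}   # dataset name -> set of subscriber ids

    def subscribe(self, consumer, dataset):
        """A consumer puts a subscription in, once."""
        self.subscriptions.setdefault(dataset, set()).add(consumer)

    def publish(self, dataset):
        """On each release, return the consumers who receive it automatically."""
        return sorted(self.subscriptions.get(dataset, set()))
```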

We built these products in Java, so they work in any J2EE environment. We can drop them in and they run with minimal effort.

A Wholesaler of Medicaid Data

In summary, we like to think of a Medicaid data warehouse as a wholesale environment. In addition to the sheer size and complexity of the information involved, there are many internal and external users who have very real needs to address with the data.

For example, there may be users who want to add Medicaid data to their own Mental Health or Disabilities data.

You need technology that makes it easy for people to request large amounts of data and then inject that into their own internal systems. The only alternative is to call up a programmer and wait weeks for the information. And that isn’t efficient.

We strive to provide Medicaid data warehouses that address speed, security, and the user experience. These technology accelerators are a tool to help us achieve that.

To learn about how SHARP supports enhanced funding requirements from CMS, please visit our next blog entry.

Fill out the form or give us a call and one of our experts will be in touch with you soon.