Business analysts, doctors, and industry researchers simply must understand and trust their models and modeling results.

Even though linear models are not always the most accurate predictors, their elegant simplicity makes the results they generate easy to interpret.
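To make that concrete, here is a minimal sketch of what "easy to interpret" means in practice: each coefficient of a fitted linear model is a single, global estimate of how the prediction changes per unit change in one input. The feature names and synthetic data below are purely illustrative assumptions, not from this article.

```python
# Illustrative sketch: interpreting a linear model via its coefficients.
# Assumes scikit-learn and NumPy; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # two hypothetical inputs
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

# Each coefficient reads directly as "effect of a one-unit increase",
# holding the other inputs fixed -- the same everywhere in the input space.
for name, coef in zip(["age", "income"], model.coef_):
    print(f"{name}: a one-unit increase changes the prediction by {coef:+.2f}")
```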

While understanding and trusting models and results is a general requirement for good (data) science, model interpretability is a serious legal mandate in regulated verticals such as banking, insurance, and other industries.

Blockchain is one of the top emerging technologies revolutionizing today’s business models.

Fundamentally, blockchain enables participants to exchange value without the need for intermediaries. So what capabilities make it so attractive to enterprises?

However, as you can see in the following figure, the trend is changing if we consider only existing engagements and the current pipeline.

On the other hand, when we looked more closely at the solutions related to banking and capital markets, we discovered that 60% of the blockchain implementations involved at least one participant from a second industry, such as manufacturing, government, or retail.

Machine learning can even drive cars and predict elections. Although it is possible to enforce monotonicity constraints (a relationship that only changes in one direction) between independent variables and a machine-learned response function, machine learning algorithms tend to create nonlinear, non-monotonic, non-polynomial, and even non-continuous functions that approximate the relationship between independent and dependent variables in a data set.
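As a hedged illustration of the monotonicity constraints mentioned above, the sketch below uses scikit-learn's HistGradientBoostingRegressor, which accepts a monotonic_cst argument (1 for a non-decreasing relationship, -1 for non-increasing, 0 for unconstrained). The dataset, the constraint directions, and the library choice are my assumptions for demonstration; the article does not prescribe a specific implementation.

```python
# Illustrative sketch: enforcing monotonicity constraints on a
# gradient-boosted model. Assumes scikit-learn >= 1.0 and NumPy;
# the data and constraint directions are hypothetical.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
# Noisy target that rises with the first feature and falls with the second.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.3, size=500)

# Constrain the learned response to be non-decreasing in feature 0 and
# non-increasing in feature 1, regardless of what the noise suggests locally.
model = HistGradientBoostingRegressor(monotonic_cst=[1, -1]).fit(X, y)

# With the constraint, increasing feature 0 (second row) cannot lower
# the prediction relative to the first row.
print(model.predict([[0.1, 0.5], [0.9, 0.5]]))
```

Without such constraints, a tree ensemble is free to learn the nonlinear, non-monotonic, even non-continuous response functions described above, which is precisely what makes its predictions harder to explain.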

I believe it can, but these recent high-profile hiccups should leave everyone who works with data (big or not) and machine learning algorithms asking themselves some very hard questions: Do I understand my data?

This approach is helping us define reference architectures and develop IP that reduces the time-to-market of blockchain solutions.