From #datalakes to #datapersonalislands - #consentdata

Published on 2019-01-25 | Updated on 2019-04-03



Usually, a mantra for new initiatives in our complex society is "nobody is an island".

And that might well be true- in our current economy and our society.

But in a data-centric future?

I am not the only one saying that, in the future, online social networking might not require "central gatekeepers".

I was born in 1965, and if there is one constant I have observed since I first approached what in Italian was called "an electronic brain" while I was in high school, it is that "incumbent gurus" and "leading figures in the industry", when talking about the future, are routinely thinking in terms of incremental innovation.

Going back to before I was born, right after WWII, there is the famous statement about how many computers the world would ever need (see https://www.theguardian.com/technology/2008/feb/21/computing.supercomputers).

Or, a few years later, what another "leader" in the computer industry said about the lack of interest in PCs for individuals (https://www.pcworld.com/article/155984/worst_tech_predictions.html).

I would rather be wrong by projecting too far than claim a "95% chance" by doing what an American colleague called a research company: "experts in forecasting the past".

And, anyway, once you add a social element to any new technology, even the wildest projections routinely turn out to be underestimations of our collective ability to eventually build something new.

We need to reach a threshold for change to gain momentum- and that is also why we often re-discover that our current innovations are actually "déjà-vu" (or, at least, something already attempted).

The risk is that the accumulated "stock" of experience and knowledge creates a huge incentive to think about the future while staying within your own "safety area", i.e. what you can still recognize and, thanks to your experience, feel you can control.

It is just human- and it is true also for corporate cultures.

One of the training courses/conferences/webinars that I followed earlier this week presented a coherent, systemic view on personal data that delivers, out-of-the-box, full GDPR compliance (you can see it at https://open.sap.com/courses/c4h1/ - my positive view on GDPR can be read at https://robertolofaro.com/gdpr).

Now, that is coherent and welcome, but it still assumes that data are within reach of corporate systems or data sources- our current technological and social architecture is based on replicating data in multiple locations.

The concept of the "data lake" as a "staging area" where you pile up data from a gazillion devices and then discover what you need is our current logic- which is already a step forward vs. a not-so-distant past.
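As a rough illustration of that logic, here is a minimal sketch in Python (the folder layout and device names are hypothetical, with a local directory standing in for a cloud object store): every device just dumps raw events into a date-partitioned staging area, and schema and purpose are only worked out afterwards.

import json
import datetime
import pathlib

# Hypothetical local folder standing in for a cloud object store ("data lake")
LAKE_ROOT = pathlib.Path("datalake/raw")

def stage_event(device_id: str, payload: dict) -> pathlib.Path:
    """Pile a raw device event into a date-partitioned staging area,
    without deciding upfront what it will be used for."""
    now = datetime.datetime.utcnow()
    folder = LAKE_ROOT / f"ingest_date={now:%Y-%m-%d}" / f"device={device_id}"
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{now:%H%M%S%f}.json"
    path.write_text(json.dumps(payload))
    return path

# Example: every device dumps whatever it measures; the "discover what you need" step comes later
stage_event("fridge-42", {"temp_c": 4.1, "door_open": False})
stage_event("phone-07", {"walking_speed_kmh": 5.2})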

When I started working on decision support in the late 1980s, stating that we should also be able to use data that we could not control, and assess how reliable they were, was heresy.

At the time, databases were becoming common- even on the limited PCs that we had back then.

Example: can you imagine, today, a mobile phone with 512KB of memory (KB, not MB, i.e. 1/1000th of a MB), no hard disk, and just two 5.25" floppy drives with 360KB each of storage capacity (https://en.wikipedia.org/wiki/Floppy_disk)?

Still, at the time, I was able to help customers build useful business models showing them how their business was going, or to produce a forecast.

On a PC that was a 10Kg "luggable brick".

Actually: just one picture taken with my smartphone would require more than twice that amount of memory.

Even in the 1990s, we stored data in databases, but I remember how, within the retail industry, "large databases" were just a few GBs in size (i.e. less than the storage of one of the latest smartphones), and we had to make decisions that would now sound funny when preparing management reporting.

Such as: we will store detailed data for the last X days, and then progressively remove product-level detail, rolling it up the product hierarchy, as we move further back in time.

As an example, look at the picture at this link: https://goo.gl/images/yuUckB

In the late 1990s, my business intelligence customers would store soft-drinks data at category level for one year, but detail further down, toward specific products or Stock Keeping Units (SKUs, https://en.wikipedia.org/wiki/Stock_keeping_unit), was kept, eventually, only on a daily basis- anything more was too expensive.
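A minimal sketch of that kind of retention rule, in Python with pandas and a purely hypothetical SKU-level sales table: SKU detail is kept only for the last X days, while older rows are rolled up the product hierarchy to category totals.

import pandas as pd

# Hypothetical daily sales at SKU level (date, category, sku, units)
sales = pd.DataFrame({
    "date": pd.to_datetime(["2019-01-02", "2019-01-02", "2019-01-20", "2019-01-20"]),
    "category": ["soft-drinks", "soft-drinks", "soft-drinks", "soft-drinks"],
    "sku": ["COLA-33CL", "ORANGE-1L", "COLA-33CL", "ORANGE-1L"],
    "units": [120, 45, 130, 50],
})

DETAIL_DAYS = 14  # keep SKU-level detail only for the last X days
cutoff = sales["date"].max() - pd.Timedelta(days=DETAIL_DAYS)

recent_detail = sales[sales["date"] > cutoff]                   # full SKU detail kept
older_rollup = (sales[sales["date"] <= cutoff]                  # older data: drop SKU,
                .groupby(["date", "category"], as_index=False)  # keep category totals
                ["units"].sum())

print(recent_detail)
print(older_rollup)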

Then, databases started growing. And growing. And growing.

And memory costs (both disk storage and the various types of RAM) started going down.

And with interactive websites and smartphones, to say nothing of cars and appliances sending data continuously, more data were collected than we had uses for.

Or: if provider A asks for your personal data in order to provide a service or product, any ensuing processing by another party usually involves shuttling some of those data to the receiving end.

And all the parties involved are inclined to store both what they receive and what they send to others.

Over the last few days I already shared parts of this commentary on LinkedIn and in some face-to-face discussions, but I re-share it here for future reference and reuse by others.

As part of my "next step bibliography", the preliminary step toward any publication (i.e. crossing the Ts and dotting the Is on my experience by updating with current data and comparing with others), after releasing #consentdata I decided to follow an interesting short course on "Liquid Legal" and LegalTech (https://open.sap.com/courses/lim1-tl).

Why interesting? Have a look (free online, if you do not want to buy it on Amazon) at https://robertolofaro.com/consentdata (and forthcoming articles).

As shown within the journey started in 2014 with ConnectingTheDots #1 (SYNSPEC, on talent management), "unbundling" our business models to move toward what we now call the "data-centric" society, business 4.0, etc., requires differentiating between what can be devolved to applications, data, etc., and what will stay with humans.

And, at the same time, it requires considering what can be "centralized within the cloud", what can be "privately shared within the cloud", what can be "shared locally", and what can be "shared on-demand and upon consent".
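To make that distinction concrete, a minimal sketch (the data categories and their assignments are purely hypothetical, mirroring the four options above):

from enum import Enum

class SharingTier(Enum):
    CLOUD_CENTRALIZED = "centralized within the cloud"
    CLOUD_PRIVATE = "privately shared within the cloud"
    LOCAL = "shared locally"
    ON_DEMAND_CONSENT = "shared on-demand and upon consent"

# Hypothetical mapping of personal data categories to a sharing tier
data_policy = {
    "aggregated_usage_statistics": SharingTier.CLOUD_CENTRALIZED,
    "household_energy_profile": SharingTier.CLOUD_PRIVATE,
    "home_automation_events": SharingTier.LOCAL,
    "walking_speed": SharingTier.ON_DEMAND_CONSENT,
}

for item, tier in data_policy.items():
    print(f"{item}: {tier.value}")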

I am currently experimenting with various apps on Android and online services that require explicit consent to access detailed personal data, including e.g. walking speed.

And if you look at Fintech and all the "smart" (city, energy, mobility, etc) elements, you cannot ignore that those and similar interactions will create a continuous stream of contracts and risks.

In our current model, you have to give and revoke consent manually, but in the future you will be surrounded by devices (including in your own home) delivering services based upon your own data.

So, if you are into #datacentric, #dataprivacy, #fintech, #smartcity, you also have to consider a different way of managing e.g. crowdsourcing- to avoid turning it into a "corralling of risks" (i.e. importing risks through your continuously evolving supply chain based on micro-transactions) and, on the other side, to extract value from your own data.

Already available on the market are "personal cloud" computers to install in your own home, so that they can interface with and store all the data produced by your own devices- both while you are at home, and while you are e.g. working or spending your spare time elsewhere.

Frankly, those "personal cloud" computers remind me of two things.

Technical historical digression.

The first, on the business side: in the 1990s, PC servers were installed everywhere, e.g. in offices, bank branches, etc.- all connected to a central server.

The issue? You then needed to have service staff going around, manage a stock of backup units, keep a warehouse of spare parts (or, at least, contracts with suppliers to that end), etc.

The alternative? Inability to provide business continuity.

On the business side, this soon evolved into something that looked like pre-PC computing, e.g. having data centres and storage only in the centre, while leaving diskless PCs around, all connected through a private high-speed network.

If a PC failed, you just needed a lower level of support locally, e.g. boxes with spare diskless PCs.

The second: in the mid-2000s, first Windows XP and later Windows Vista added a "media center" element, i.e. a kind of entertainment hub in your home.

Issue: as with businesses almost a decade before, you ended up having to be your own computer and network technician.

It wasn't really a great success- and Media Center was eventually phased out of Windows.

So, I think that only a few enthusiasts will install a "private cloud" at home to store all the "Big Data" that their devices will generate on a daily basis, at least until fixed-line Internet reaches the same speed as 5G.

As for corporate uses: currently companies still have to receive raw information from, say, a mobile phone or a fridge or a car, store it in an online "data lake" (e.g. on Amazon or Google), and access it later to extract consolidated information.

Purpose: e.g. to enable improving products or services.

Of course: after receiving consent.

Consent, for the time being, still implies more often than not "consent to copy".

Looking into the future, when a "private cloud" will probably be available online for everyone, it could make sense to have your own "filter/negotiating" point, hosted directly on the personal device that you will always have with you.

Instead of replicating data around, into "corporate data lakes", "consent" could become just "access"- and your device could actually negotiate consent with different parties.

Then, the data will be accessed, but not copied; processed, but not stored; and only derivative data (e.g. a KPI) will be stored.
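A minimal sketch of that idea, with hypothetical party and KPI names: a "personal data island" keeps raw sensor samples on the device, tracks consent per requesting party, and answers queries only with derived values- the raw data are accessed and processed locally, but never copied out.

import statistics

class PersonalDataIsland:
    """Sketch of a 'filter/negotiating' point on a personal device:
    raw sensor data never leave the device; only consented derivative
    data (a KPI) are returned to the requesting party."""

    def __init__(self):
        self._walking_speed_samples = [4.8, 5.1, 5.3, 4.9]  # raw data, kept locally
        self._consents = set()  # (party, kpi) pairs granted by the owner

    def grant(self, party: str, kpi: str) -> None:
        self._consents.add((party, kpi))

    def revoke(self, party: str, kpi: str) -> None:
        self._consents.discard((party, kpi))

    def query(self, party: str, kpi: str) -> float:
        """Consent means 'access': compute the KPI on-device and return
        only the derived value, never the underlying samples."""
        if (party, kpi) not in self._consents:
            raise PermissionError(f"No consent granted to {party} for {kpi}")
        if kpi == "avg_walking_speed_kmh":
            return round(statistics.mean(self._walking_speed_samples), 2)
        raise ValueError(f"Unknown KPI: {kpi}")

island = PersonalDataIsland()
island.grant("insurer-A", "avg_walking_speed_kmh")
print(island.query("insurer-A", "avg_walking_speed_kmh"))  # derived KPI only
island.revoke("insurer-A", "avg_walking_speed_kmh")        # consent withdrawn at any time

Revoking consent then becomes a local operation on your own device, rather than a request to delete copies scattered across corporate data lakes.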

This would allow a further step on the demand-and-supply side: e.g. the value of the data collected by the sensors within your own device at time zero could be X, but the same data at peak time could have a value of 2X or 3X.
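A toy sketch of that demand-based valuation, with purely hypothetical peak hours; a real negotiating device would presumably derive the multiplier from live demand signals rather than a fixed schedule.

# Toy illustration of demand-based pricing for the same data:
# base value X off-peak, 2X or 3X at (hypothetical) peak hours.
BASE_VALUE = 1.0  # "X", in whatever unit the negotiation uses

def data_value(hour: int) -> float:
    if 18 <= hour <= 21:      # evening peak: value 3X
        return 3 * BASE_VALUE
    if 7 <= hour <= 9:        # morning peak: value 2X
        return 2 * BASE_VALUE
    return BASE_VALUE         # time zero / off-peak: value X

print([data_value(h) for h in (3, 8, 20)])  # [1.0, 2.0, 3.0]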

Over a decade ago, while living in Brussels, I wrote (and I was not the first) that technology made it viable also for smaller states to provide services to their citizens that just a few decades before (e.g. between WWI and WWII) would have required huge bureaucracies.

Look at Singapore or Estonia: if you were to remove computers, how many of their current services to citizens would still be viable, even if they were to add people?

Five years from now, this could scale down to cities, once they benefit from the opportunities offered by, say, Machine Learning and the robotization of processes.

Then, a few years down the road, further advances would be possible, if not for all, at least for many citizens within advanced data-centric economies.

So, we will be able to move from data lakes to automated data personal islands, all continuously negotiating access to and exchange of data with other devices.

It doesn't really matter now which services: remember how many cases of "incremental innovation thinking" simply ignored possibilities, and looked just at extensions of existing activities and roles.

What matters now is which opportunities new infrastructures will open.

Stay tuned... eventually we will converge on what should be inside a "personal data island manifesto"- maybe embedded within the next generation of GDPR, which might, who knows, require providers to automatically "embed" in their networks "gatekeeping" mechanisms at the individual customer level.