Arya Stark would love the iPhone X Face ID technology.

This is not a review of the new iPhone X; for that there are zillions of websites to inform you, the same ones I read myself yesterday. One thing is surprisingly new in the most recent Apple flagship phone, and it’s not the full screen, it’s not the “ears”, it’s not the battery life, nor the wireless charger. It’s Face ID.

It’s funny to see that when a company tries to innovate by changing things in a well-established phone design (in the pre-iPhone era, mainly Nokia was bold enough to design crazy phones), there are a bunch of side effects that sometimes create something unique. Since Samsung launched the S6 Edge, the race for the best screen-to-body ratio has been on. And that meant a lot for the iPhone, since the home button had been an iconic element from the very beginning. Taking the home button off the front would mean putting it elsewhere (like on the back, as many LG phones have done for three or four years), or innovating and removing it completely, like they finally did. The original functionality of the home button could easily be replaced by on-screen gestures, but in recent years that button also held the fingerprint reader, which had evolved in terms of security to the point where Apple based payment authentication on it.

By sacrificing the fingerprint reader, Apple creates (yet) a new playground for phones

The only possible, obvious identification mechanism left was the front camera. Samsung explored iris scanning (a long-standing way to authenticate people in secure areas), but it seems it’s not 100% secure, and a flat picture could easily fake it. So Apple had to “innovate” using another well-known technology: 3D mapping via an infrared dot grid.

It’s well known because the first version of the Microsoft Kinect used it, although it has since been replaced by newer, more accurate technologies.

Future app developers could integrate 3D mapping via the infrared grid for a myriad of usages, not only identification or payments

And this has a lot of implications:

  • If it doesn’t already exist (I didn’t have much time to investigate), a new picture format will be created, containing not only the image itself but also the 3D depth for each dot.
  • Panoramic pictures will also be possible, scanning an entire object or person and building the complete 3D model.
  • With this new 3D layer, AIs could become exponentially better at identifying objects. This has a lot of implications in object recognition, such as in shopping environments.
  • 3D models of people could be an input for online fashion retailers. No need to give your pants size anymore.
  • 3D models could be sent directly to 3D printers, and Amazon Prints will send you the piece.
  • Hackers will steal 3D models of faces and use them for malicious purposes.
  • Arya Stark wouldn’t need to kill people to have their faces; with the iPhone X, she would be able to print them at home.

Finally Apple innovates again, even if it’s with existing technology. Double merit.


Join the conversation! If you like what you read, leave me your comments below, and feel free to follow me on Twitter @olopezprat or LinkedIn.


Your Customer Data is Useless. Here is why.


These are the 5 biggest limitations in the use of customer data

You can store billions of bytes of customer data, millions of transactions, thousands of data points, hundreds of attributes. All that data could be useless, and most of the time it is. Storage costs are plummeting, true, so storing everything will be close to free, but what will you do with it? Not much if…

You spent zillions in tools

General purpose BI tools are useless if you don’t have a purpose, or a set of specific, well-defined purposes. It’s sometimes easier to justify a bold investment to the Board of Directors in a single will-solve-everything-top-right-Magic-Quadrant tool than to simply ask for a bunch of good analysts and start obtaining quick, actionable insights. And the latter could be the right way to grow organically while obtaining tangible results.

Tools don’t solve everything unless they have a specific purpose; save some money for people, internal or external.

Have you considered who will be the users of the tools? How much time will they have to “play” with them? How many training sessions will you have to run in order to be really up and running? What happens with newcomers? How do you ensure the right people are using the right reports? Starting your customer data strategy by implementing a BI tool is often a mistake, because it will eat a big chunk of budget and attention, and will not solve all the problems tomorrow (which will be expected, given the investment made). Focus first on the people who will think and execute the strategy; no tool will do that. People inside your organization, or outside, if you’re lacking the expertise and can’t hire.

You didn’t spend enough on data protection policies.

And you are already late. Before even creating a database to put data in, you need to be sure you can actually store that data, and that you have permission from your customers to keep it and use it in specific ways (e.g. communication vs. analytics). And the matter is tricky, because you can obtain thousands of different data points directly from the customer (name, email…) or indirectly (geolocation, ad networks, third-party data providers…). Reflect on the usage you are going to make of them, current and future.

Think ahead, and hire a good data protection lawyer.

Once you start, it’s much more difficult to change privacy policies, which will inevitably produce legacy customers covered by old versions that you can’t activate or even analyze the way you want. If you are in the EU, or working with EU customers, take a look at the EU General Data Protection Regulation (GDPR) website to understand more about what is coming in less than a year from now. And don’t forget the lawyer, really; it will save you lots of headaches.

You don’t consider your data infrastructure a profit center.

Today there are several options for data storage. You can build your own data center, outsource it, or put everything in the cloud. Very reliable providers such as Amazon or Microsoft can give you the service on a ‘pay per use’ model, which is great for unknown volume growth curves. But be careful: you plan your budget with a cost based on a maximum storage capacity, your service suddenly explodes in number of customers or data points per customer, and then you have a problem. You can’t afford to stop gathering data and will have to knock on the door of your CFO for more money before the fiscal year is even finished.

Whatever storage model you choose, it will probably have three components of cost: space, bandwidth and processing. Don’t forget the last two, or you will have thousands of petabytes of data without the ability to extract any value at a decent speed.
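As a rough illustration of those three cost components, here is a minimal sketch of a monthly bill estimate. All the unit prices are made-up placeholders, not any real provider’s rates:

```python
# Toy monthly cost model for cloud data infrastructure.
# Every unit price below is a hypothetical placeholder, not a real rate.

def monthly_cost(stored_tb, egress_tb, compute_hours,
                 price_storage=23.0,   # $/TB stored per month (assumed)
                 price_egress=90.0,    # $/TB transferred out (assumed)
                 price_compute=0.20):  # $/compute-hour (assumed)
    """Space + bandwidth + processing: forgetting the last two is easy."""
    space = stored_tb * price_storage
    bandwidth = egress_tb * price_egress
    processing = compute_hours * price_compute
    return {"space": space, "bandwidth": bandwidth,
            "processing": processing,
            "total": space + bandwidth + processing}

# Even a modest analytics workload can make bandwidth and processing
# rival the storage line of the bill:
bill = monthly_cost(stored_tb=50, egress_tb=10, compute_hours=5000)
```

With these placeholder numbers, storage is only about a third of the total, which is the point: budgeting on space alone understates the bill.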

Data infrastructure is very often considered a cost center, so you’ll have to fight for the cost of every byte and FLOP.

The good news is that the cost halves every two years, which will help you keep your budget, but ideally data infrastructure should be considered a profit center. For that, you need to demonstrate the tangible value the data is generating, which you can use for revenue recognition against your costs. Then the whole problem changes. If you are not able to assign revenues (or cost reductions) to your data, maybe you are using the data in the wrong way.

You are looking for the perfect data match

Every business is different: you might have just an email and an IP address, or, on the contrary, lots of anonymous behavior linked only to a cookie. Whatever your case, don’t be too strict when choosing your primary ID, desperately trying to attach every data point to it, because you might end up with very little. Instead, choose wisely which ID(s) you want to use, accept different levels of data completion for different customers, and infer the missing information. You might not have perfect information on a customer’s geo-behavior, but you can try to create a loose link with other clients showing similar transactional behavior in order to explore hypotheses you can then validate. An IP address might then serve to cluster clients or target them in a meaningful way. Not ideal, but better than nothing. Don’t let perfection limit you, because one day you could have the data, and your thinking process would already be done. Going through the process even without the data can help you acquire better customer information, so confront it even if the data you have today is not perfect.
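That “loose link” between similar customers can be sketched in a few lines. Everything here (the field names, cosine similarity as the distance, k=2 neighbors, the toy data) is my own illustrative assumption, not a prescribed method:

```python
# Infer a missing customer attribute (a segment) from the customers with
# the most similar transactional profile. All names and data are
# illustrative assumptions.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two spend-per-category vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def infer_segment(target_vector, known_customers, k=2):
    """Vote the segment of the k known customers closest to the target."""
    ranked = sorted(known_customers,
                    key=lambda c: cosine(target_vector, c["spend"]),
                    reverse=True)
    votes = [c["segment"] for c in ranked[:k]]
    return max(set(votes), key=votes.count)

# Spend per category: [grocery, fashion, electronics]
known = [
    {"spend": [120, 5, 0],  "segment": "family"},
    {"spend": [100, 10, 5], "segment": "family"},
    {"spend": [5, 80, 60],  "segment": "young-urban"},
]
# An anonymous cookie profile with no declared segment:
guess = infer_segment([110, 8, 2], known)
```

The inferred segment is a hypothesis to validate, not a fact; the point is that an imperfect link still lets you cluster and target in a meaningful way.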

You are not starting from the customer

This is probably the most important one. You have your infrastructure, your data, your lawyer, your team (even your tools). What do you do now? Sometimes you have specific pain points to start with (e.g. losing customers, declining purchase frequency or basket size), and that’s a very valid approach. But the ideal is to start from the customer.

Focus on how you can make use of data to improve your customers’ experience

Let’s take an example. Most grocery ecommerce sites allow you to reuse past purchases as input for your next shopping list. It’s a very crude approach, absolutely not data driven, because your average repurchase period for each product is different from the rest, and also different from your overall current purchase frequency. So in some cases you will overstock a product, but most of the time there will be a gap in consumption until your next, let’s say, weekly purchase. You end up removing half the products from your past purchase list.

Another approach is possible, since all the data points are there. I know your average repurchase period for every product, and I know when you last bought each one, so I know when you are going to buy it next. For each product in your past purchases (not only the last one), I’ll see which ones are going to “expire” within your next overall purchase cycle (say, next week) and include them automatically in your shopping list. Providing intelligent shopping lists is a great service to the shopper and ensures there are no consumption gaps, so in the long run the average frequency increases, and therefore the yearly spend.
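The logic above fits in a few lines. The field names and the toy purchase history are my own assumptions; a real implementation would compute the averages from actual order data:

```python
# Build an "intelligent" shopping list: include a product if, based on its
# average repurchase period, it will run out before the next overall
# purchase cycle. Field names and data are illustrative assumptions.
from datetime import date, timedelta

def smart_shopping_list(purchases, today, overall_cycle_days=7):
    """purchases maps product -> (last_purchase_date, avg_repurchase_days)."""
    next_shop = today + timedelta(days=overall_cycle_days)
    basket = []
    for product, (last_bought, avg_days) in purchases.items():
        expected_runout = last_bought + timedelta(days=avg_days)
        if expected_runout <= next_shop:   # will "expire" before next trip
            basket.append(product)
    return sorted(basket)

history = {
    "milk":      (date(2017, 9, 10), 7),   # bought roughly weekly
    "olive oil": (date(2017, 9, 1), 45),   # bought every ~45 days
    "bread":     (date(2017, 9, 12), 3),   # bought every few days
}
basket = smart_shopping_list(history, today=date(2017, 9, 13))
```

With this history, milk and bread make the list because they run out within the coming week, while the olive oil, bought on a ~45-day cycle, stays off it.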

Working the data with specific, customer-centric use cases gives you focus, brings order to the big data chaos, and, if well implemented, generates incremental revenues you can then bring to your CFO. Starting from the customer always pays off.



Shopper Analytics vs. Free Will. How much can we predict people’s behavior?

The future is here. The huge amount of shopper data generated every minute(*) in retailers all over the world is allowing Watson-like machines to predict what we are going to buy, where and when, pushing us to buy more, more often. And this is not going to slow down. Collected data will include different types of behaviors (not only transactions, but digital interactions, social influence, physical movements in and out of the store…), and machines will increase their power to the point where, by 2030, a $1000 computer will be a thousand times more powerful than a single human brain.

7 Reasons Why Great Content is Not Enough



2015 will be the year of many interesting things, and one of them is content. Since advertising effectiveness gets lower every year, “Content” is the word now used for information delivered by brands that is more than a simple ad. The objective of any content strategy is to create a deeper link with consumers and provide spaces for conversation with the brands and between consumers. Content that can be shared, viralized, and shared again, to the pride of the creative agency that produced it. But remember: very few ideas are powerful enough to go viral on their own, and if the content doesn’t work, brands, not creatives, pay the price of failure.