The first aspect of advanced AI to be adopted in business was the concept of machine learning. Rather than gathering data and programming a computer as was done traditionally, machine learning allows the computer to gather its own data and, in a sense, program itself. By making complex connections between data, the computer can create new useful information.
A common misconception is that machine learning is simply about automation. That’s not really the case. We have had computer automation for decades, but only through extensive programming of each “expert system” scenario, use case and situation that the automation would encounter.
Machine learning allows automation to act correctly in the face of an unfamiliar situation and make a decision based on its previously ingested data set.
Machine learning often begins with a basic decision tree, a simple model that can later give way to a full neural network trained on connected data sets. Dealing with large amounts of data and acting on programmed rules is where machine learning outpaces the human mind.
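To ground the idea, a decision tree is at heart a cascade of data-driven if/else splits. Here is a minimal hand-built sketch; the feature names and thresholds are purely illustrative, and a learned tree would infer such splits from data rather than have them hard-coded:

```python
# A toy hand-built decision tree. A trained model would learn these
# splits automatically from historical customer data.
def discount_decision(customer):
    """Decide whether to offer a discount based on two illustrative features."""
    if customer["visits_per_month"] > 4:
        if customer["avg_basket"] > 50:
            return "loyal big spender: no discount needed"
        return "frequent browser: offer discount"
    return "infrequent visitor: offer discount"
```

Each branch point is one question about the data; chaining many such questions, and learning them automatically, is what separates machine learning from hand-coded expert systems.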
A particularly effective application is seen in Facebook’s learning algorithms. By tracking a user’s clicks, searches, likes and time spent reading articles and posts, the algorithm creates a dynamic, constantly updated picture of the person’s interests, hobbies, political views, travel plans and many other data points. The system then takes that user profile and curates custom content and advertising designed to appeal directly to the user.
Each user profile is then combined with the larger set of other user profiles to produce an ongoing, constantly refined AI system that learns from user activity and responds with ever-improving suggestions for content or custom-tailored advertising.
Under less-advanced modeling, there would still be results, but they wouldn’t be as refined or intuitive. For example, if a person clicks like on a post about Game of Thrones, it is simple for a computer program to look up the metadata on Game of Thrones and determine that it is a TV show in the fantasy genre. This simple algorithm might then recommend Lord of the Rings.
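That simple metadata-lookup approach can be sketched in a few lines. The catalog and genre tags below are illustrative stand-ins; a real system would query a metadata service rather than a hard-coded table:

```python
# Minimal sketch of genre-based recommendation via metadata lookup.
# The catalog is illustrative, not any real service's data.
CATALOG = {
    "Game of Thrones": {"type": "TV show", "genre": "fantasy"},
    "Lord of the Rings": {"type": "film", "genre": "fantasy"},
    "Breaking Bad": {"type": "TV show", "genre": "crime drama"},
}

def recommend_by_genre(liked_title):
    """Recommend other titles sharing a genre with the liked title."""
    genre = CATALOG[liked_title]["genre"]
    return [title for title, meta in CATALOG.items()
            if meta["genre"] == genre and title != liked_title]
```

This is the kind of rule a programmer writes by hand; the deeper systems described next learn such associations on their own.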
But a more complex machine learning system can go much deeper. Facebook’s newest tracking algorithm tracks every interaction with the site and grades them according to the type of interaction and how “meaningful” they are. While that sounds like a nebulous concept, it’s really just a distillation of a large range of data, including likes, comments, clicks, sharing, time spent reading, and dozens of other factors.
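That distillation can be pictured as a weighted score over interaction counts. The sketch below uses entirely hypothetical weights, since Facebook’s actual grading formula is not public:

```python
# Hypothetical weights for a "meaningfulness" score; the real factors
# and their weights are proprietary and far more numerous.
WEIGHTS = {"like": 1.0, "click": 0.5, "comment": 3.0, "share": 5.0,
           "seconds_reading": 0.1}

def meaningfulness(interactions):
    """Distill a user's interactions with a post into a single score."""
    return sum(WEIGHTS.get(kind, 0.0) * count
               for kind, count in interactions.items())
```

A share here counts five times as much as a like, reflecting the intuition that some interactions signal far more interest than others.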
A system extending our Game of Thrones-to-Lord of the Rings example might learn that people who like both of those things are also likely to like other, unrelated things. Perhaps a substantial number of people who are fans of these also tend to drive Honda Accords. The real systems obviously employ far more variables than the three in this example, but bridging the obvious connections with the not-so-obvious is the key to successful application of this type of algorithm.
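One simple way such non-obvious links surface is by counting how often interests co-occur across user profiles. A toy sketch, in which the profiles and the Honda Accord connection are illustrative:

```python
from collections import Counter
from itertools import combinations

# Toy profiles; real systems aggregate millions of these.
profiles = [
    {"Game of Thrones", "Lord of the Rings", "Honda Accord"},
    {"Game of Thrones", "Lord of the Rings", "Honda Accord"},
    {"Game of Thrones", "Cooking"},
]

def co_occurrences(profiles):
    """Count how often each pair of interests appears in the same profile."""
    counts = Counter()
    for p in profiles:
        counts.update(combinations(sorted(p), 2))
    return counts
```

Pairs that appear together unusually often, like fantasy fandom and a particular car model, are exactly the not-so-obvious bridges the text describes.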
Stumbling Blocks of Early Adopters
Learning about users and customers is an incredibly valuable and important part of business. But at this point, the early adopters are focusing on information aggregation and delivery. Acting intelligently on that information is the next step. Target found this out the hard way. Marketing personnel at the big box retailer knew that new parents are a gold mine of opportunity for their type of store. Their ability to cater to new parents can literally lead to a lifetime of customer loyalty from parent and child. Studies have also shown that this time in an adult’s life is when brand loyalty is most flexible. So Target’s machine learning was set to the task of determining when a customer became pregnant.
Purchases such as pregnancy tests and prenatal vitamins could easily be tied to future purchases. This means a customer who buys maternity clothes in July will probably be shopping for car seats in September, diapers in October, one-year-old clothes a year later, and so on. This carries forward to school supplies five to six years later and eventually toys, electronics and everything else a family with children buys.
Using the first few indicators as a baseline, Target began offering ads to customers who appeared to be expecting. The results were good, but ran the company into a little trouble over privacy concerns when it mailed a pregnancy-related promotional ad to a teenage girl who had not yet told her father that she was pregnant.
The ability to capture a customer profile and tailor products directly is an important step in improving customer satisfaction and sales, but acting on that information correctly is something that can still benefit from the human touch. That’s one reason why AI and machine learning are not so much autopilots as co-pilots that still require extensive human guidance.
Analyzing the Real Costs and Savings
The rapidly improving ability of computers to analyze and act on new data is changing things everywhere in the business world. However, the real measure of a successful application of machine learning comes in the benefit per dollar spent.
While the eventual goal of deploying machine learning is that you can do the same job faster or better, the decision must be based on total cost, not just speed and quality of work. Imagine an AI program that can automate a job but actually takes 10 times longer to complete the work than a human would. That sounds like a poor investment and a waste of resources.
But what if you factor in cost? If the machine learning program and all its maintenance and hardware costs amount to just one twentieth of the total cost of a human doing the job, then there is still a relevant use case that applies to many business problems. In this scenario, you can actually deploy 10 instances of the program and complete the same work for half the cost.
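The arithmetic behind that scenario can be checked directly; all the figures below are illustrative:

```python
# Worked numbers for the cost comparison above (all figures illustrative).
human_cost = 100_000            # annual cost of one human doing the job
machine_cost = human_cost / 20  # each instance costs one twentieth as much
slowdown = 10                   # one instance takes 10x longer than a human

# Deploy enough instances to match the human's throughput.
instances = slowdown
total_machine_cost = instances * machine_cost
print(total_machine_cost / human_cost)  # 0.5: same work at half the cost
```

Ten instances at one twentieth of the cost each comes to 10/20, or half the human cost, which is the break-even logic of the paragraph above.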
What Machine Learning Can’t Do (Yet)
Practical application of the principles of AI and machine learning are still in their infancy, but the possibilities seem endless.
That’s where some businesses run into problems. Machine learning programs can do things no human can, like analyze years of data in a few seconds and display results and patterns. But humans are much better at determining whether the results make sense in context. Humans can also use intuition to act on small data sets, something computers have no equivalent for.
The solution to a suboptimal machine learning framework is often to inject more high-quality data to help the system see more patterns. In one example, teams competing to create a new movie-recommendation system for Netflix found that their models worked best when they were able to combine the data sets each team was using. This works well in areas where a large dataset already exists, but it slows the adoption of machine learning in a brand-new field of study.
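In essence, the competing teams found that combining their approaches beat any single model. A minimal sketch of that blending idea, assuming each model outputs one predicted rating per item:

```python
# Minimal sketch of model blending: average several models' predictions.
# Netflix Prize teams famously did this at far larger scale.
def blend(predictions_per_model, weights=None):
    """Weighted average of per-item predictions from several models."""
    n = len(predictions_per_model)
    weights = weights or [1.0 / n] * n
    num_items = len(predictions_per_model[0])
    return [sum(w * preds[i] for w, preds in zip(weights, predictions_per_model))
            for i in range(num_items)]
```

Averaging smooths out each model’s individual mistakes, which is why more data and more models together tend to outperform any one well-tuned system.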
Optimized processes done by humans may not experience much improvement when an AI attempts to do the same job. The cost of “teaching” the system by gathering and inputting data and programming the rules for the data could be cost-prohibitive.
One famous recent example of machine learning in action was when IBM’s Watson machine learning program faced off against Jeopardy! champions Ken Jennings and Brad Rutter. Armed with an extensive set of encyclopedias and other text, Watson did very well against two of the game’s best human champions and defeated them in a two-game tournament in 2011. But from a pure business perspective, the multi-year effort of a team of more than 15 data scientists was not a successful venture. Why? Because Watson’s winnings totaled just $1 million, far less than the cost of the project.
Obviously, in the Watson case money was not the goal. The $1 million went to charity and the win provided extensive marketing value to IBM while introducing the world to a new level of AI. But that type of project is not in the cards when making a case for real-world business AI.
The reality is that aside from a few mature applications such as monitoring large IT systems and data centers, email and spam filtering, travel time and traffic analysis by apps such as Waze or Uber, and other applications of large amounts of data, many AI and machine learning tools are still in the pilot-program stage.
At this point in the product lifecycle, they can do a few interesting things, but they cannot yet produce the total bang for the buck that will revolutionize business. Other existing tools work in a limited fashion but still need extensive human oversight, often in the form of supervised learning, where humans label the training data and verify the results. These tools often produce only marginal cost savings. These products exist because they fill a niche for companies looking to leverage the latest buzzword (“machine learning” in this case) before they have a complete AI-focused business enterprise strategy in place.