Making Your Models Work For the Long Haul

Too often, engineers are brainwashed into thinking they can create an impeccable artificial intelligence (AI) model — a blank slate they release into the wild for independent learning. They think: “If I create flawless math on top of the right infrastructure, I’ll have the perfect model.” Train the algorithm, let it run free, and that’s the end of the story, right?

Unfortunately, no. Just like human intelligence, artificial intelligence requires continuous learning to advance its expertise.

Instead, great ML/AI innovators need to plan ahead, putting the right people in the right loops to train, test, validate, and continuously improve their models. The best results come from a carefully tuned merger of computer and human cognition. Here's how we think about the data flows at Mighty AI:

Our Training Data as a Service™ platform enables us to guarantee accurate human insights, across domains, at scale and on demand.

Train, test, validate, repeat

Training a commercially applied AI is not a one-and-done exercise. It requires regular validation to understand whether the AI is working as it should. Otherwise, you’re practically begging for bias to worm its way in. Look no further than the troubling example of an AI designed to predict criminal recidivism that turned out to be biased against black people. And who can forget the now-infamous fiasco that was Microsoft’s Tay, a well-intentioned chatbot experiment that quickly soured? These and countless other examples underscore the need for continuous human validation of AI to keep it on its intended trajectory.

In addition to mitigating bias, human validation helps AI keep up with changing knowledge. Take language, for example. The meanings of words constantly evolve. As the father of a teenager, I can personally attest to the fact that by the time a new slang term goes mainstream (“lit!”), a trendier alternative has already replaced it (“savage!”). If the only education we provide chatbots is the initial data sets we train them on, how will they keep up with the changing ways people talk to them? Like human intelligence, the only way artificial intelligence can adapt to accommodate a growing body of knowledge is if we continually educate it.

The AI value chain

As humans retrain them, AIs get smarter. And once an AI has achieved its initial ambition, it can continue learning and growing.

Imagine you’re a retailer that sells clothes and shoes online, and you’ve just built a recommendation engine. In its infancy, the AI is a form of visual search. When a customer searches your website for women’s brown boots, the shopper gets back results of brown boots from your catalog. Once your AI has mastered visual search, its next ambition might be association. Instead of simply returning results for brown boots, it begins serving up images of models wearing outfits that pair well with the boots. Once it’s conquered association, it moves to even more personalized intelligence. The AI knows you’re a software engineer who lives in Seattle and is shopping in March, so it begins personalizing recommendations based on a dress code that is decidedly casual and a climate that is frequently wet.

This progression from search to association to personalization is the AI value chain in action. Behind the scenes, humans are in the loop, validating the AI’s performance and retraining it with new data sets. The AI’s advancement up the value chain is only possible with the aid of human intelligence.
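To make the progression concrete, here is a minimal sketch of the three stages in code. Everything here is illustrative: the tiny catalog, the field names, and the scoring rule are invented for this example, not a real retailer's system or anything Mighty AI ships.

```python
# Hypothetical sketch of the search -> association -> personalization
# value chain. Catalog, fields, and scoring are all made up.

CATALOG = [
    {"sku": "B101", "name": "brown leather boots", "category": "boots", "color": "brown"},
    {"sku": "B102", "name": "black ankle boots", "category": "boots", "color": "black"},
    {"sku": "O201", "name": "denim jacket", "pairs_with": ["B101"]},
]

def visual_search(color, category):
    """Stage 1: plain search -- return catalog items matching the query."""
    return [item for item in CATALOG
            if item.get("color") == color and item.get("category") == category]

def associate(results):
    """Stage 2: association -- append items that pair well with the results."""
    skus = {r["sku"] for r in results}
    extras = [item for item in CATALOG
              if skus & set(item.get("pairs_with", []))]
    return results + extras

def personalize(results, profile):
    """Stage 3: personalization -- rerank using shopper context."""
    def score(item):
        # Toy rule: in a wet climate, favor weather-appropriate materials.
        return 1 if profile.get("climate") == "wet" and "leather" in item["name"] else 0
    return sorted(results, key=score, reverse=True)

profile = {"city": "Seattle", "climate": "wet"}
hits = personalize(associate(visual_search("brown", "boots")), profile)
print([h["sku"] for h in hits])  # boots first, then the jacket that pairs with them
```

Each stage wraps the previous one, which mirrors the point above: the engine doesn't replace its earlier capability as it climbs the value chain, it builds on it.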

The right humans in the right loops

So you’re sold on integrating humans into your AI training loop. Now what? It’s time to identify the right humans with the specialized knowledge your business needs. Take our retail example. If your target customer is a millennial American woman, at the end of the day, her opinions — what she perceives as fashionable, what she wants to wear — are what matters. You want individuals like her annotating your data, helping to make your recommendation engine as relevant as possible for your customers. The same is true for “expert” AIs, which need to integrate the latest human knowledge in fields such as accounting, education, law, and medicine.

But it doesn’t end there. You’ve got the right humans, and now you need to think about the right loops. Remember, the initial training data set is just the first loop. The validation loop, which is where you determine whether your AI worked as intended, is also a critical juncture for incorporating that specialized knowledge your customers bring to the table. The validation loop is as much about improving the AI as it is about improving your human intelligence engine. It’s about getting smarter about who should perform annotation tasks, how those individuals perform, and whether the results are accurate.
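The train/validate/retrain cycle described above can be sketched in a few lines. This is a toy model, not a real annotation pipeline: the dictionary "model," the audit set, and the slang labels are all hypothetical, chosen to echo the chatbot example earlier.

```python
# Minimal sketch of a human-in-the-loop train -> validate -> retrain cycle.
# The "model" is a toy lookup table; all data and labels are invented.

def train(model, examples):
    """Fold newly human-labeled examples into the model's knowledge."""
    model["known"].update((e["input"], e["label"]) for e in examples)
    return model

def predict(model, x):
    return model["known"].get(x, "unknown")

def human_validate(model, audit_set):
    """Humans review predictions on an audit set: measure accuracy and
    collect corrections to feed the next training loop."""
    corrections, correct = [], 0
    for x, label in audit_set:
        if predict(model, x) == label:
            correct += 1
        else:
            corrections.append({"input": x, "label": label})
    return correct / len(audit_set), corrections

model = {"known": {}}
audit = [("lit", "slang:positive"), ("savage", "slang:positive")]

accuracy, fixes = human_validate(model, audit)  # first pass: model knows nothing
model = train(model, fixes)                     # retrain on human corrections
accuracy, fixes = human_validate(model, audit)  # second pass: corrections absorbed
print(accuracy)
```

The loop never terminates in practice: as the audit set drifts (new slang, new products, new customers), accuracy dips again and the human corrections flow back in, which is exactly the continuous-education point made above.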

Just like no one says you’re done learning once you’ve graduated from college, an AI isn’t finished once it’s trained. Training is merely the first step.

The good folks at VentureBeat originally published this article: http://venturebeat.com/2017/03/12/sorry-but-your-ai-needs-to-go-back-to-school/

Matthew Bencke

Matt Bencke is an entrepreneur, leader and change agent who drives new business and product strategies based on deep analysis, inspired leadership and focused execution. He has strong successes across technology, strategy, business development, design, e-commerce, marketing and manufacturing. His passion is attracting great talent, fostering a meaningful team culture, and taking performance to new levels. During his tenures at Microsoft, Getty Images and Boeing he has created, advised, led and grown businesses ranging from several millions to billions in size.