Ideas advanced in an artificial intelligence (AI) program unveiled this week could have big implications over time for enterprise software. Here are the details from MIT's Technology Review:

Taking inspiration from the way humans seem to learn, scientists have created AI software capable of picking up new knowledge in a far more efficient and sophisticated way.

The new AI program can recognize a handwritten character about as accurately as a human can, after seeing just a single example. The best existing machine-learning algorithms, which employ a technique called deep learning, need to see many thousands of examples of a handwritten character in order to learn the difference between an A and a Z.

The software was developed by Brenden Lake, a researcher at New York University, together with Ruslan Salakhutdinov, an assistant professor of computer science at the University of Toronto, and Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences at MIT. Details of the program, and the ideas behind it, are published today in the journal Science.

The researchers used a technique they call the Bayesian program learning framework, or BPL. Essentially, the software generates a unique program for every character using strokes of an imaginary pen. A probabilistic programming technique is then used to match a program to a particular character, or to generate a new program for an unfamiliar one. The software is not mimicking the way children acquire the ability to read and write but, rather, the way adults, who already know how, learn to recognize and re-create new characters.
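The core idea — represent each character as a small generative "program" of pen strokes, then score how plausibly each stored program could have produced a new drawing — can be sketched in a few lines. The following is a toy illustration of that matching step only, not the paper's actual model; the stroke representation, the Gaussian noise assumption, and all coordinates here are hypothetical simplifications.

```python
import math

def log_likelihood(program, observed, noise=0.5):
    """Log-probability (up to a constant) that `program` -- a list of
    stroke endpoints (x, y) -- generated the `observed` strokes,
    assuming independent Gaussian pen noise on each endpoint."""
    if len(program) != len(observed):
        return float("-inf")  # different stroke count: implausible
    ll = 0.0
    for (px, py), (ox, oy) in zip(program, observed):
        d = math.hypot(px - ox, py - oy)
        ll += -d * d / (2 * noise ** 2)  # unnormalized Gaussian log-density
    return ll

def classify(observed, programs):
    """Return the character whose stored stroke program best explains
    the observed strokes (maximum likelihood over one example each)."""
    return max(programs, key=lambda c: log_likelihood(programs[c], observed))

# One stored "program" per character, learned from a single example each.
programs = {
    "A": [(0, 0), (1, 2), (2, 0)],  # up-stroke apex and down-stroke endpoints
    "L": [(0, 2), (0, 0), (2, 0)],  # vertical stroke, then horizontal
}

# A noisy new drawing of an "L" is matched to the right program.
noisy_L = [(0.1, 1.9), (-0.1, 0.1), (2.1, 0.0)]
print(classify(noisy_L, programs))  # → L
```

The point of the sketch is the one-shot property: because each class is a compact generative description rather than a statistical summary of thousands of images, a single stored example per character is enough to score an unfamiliar drawing.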

Beyond recognizing handwritten characters, the software also drew its own as part of a visual Turing test. Most of the judges could not tell which characters had been drawn by a machine and which by a person.

AI and Enterprise Apps

"The implication [of BPL] is not just better machine vision—OCR, et cetera—but also better AI that doesn't require lots of training," says Constellation Research VP and principal analyst Doug Henschen. "The pattern could be not just a visual character but a pattern in data, which has broad applicability."

A fascinating report from Defense One on the BPL project reveals that part of its funding came from multiple U.S. military branches, which have high hopes for its potential:

Consider the plight of the sensor operator on a drone team flying combat air patrols over, say, Afghanistan. Today, such a team might fly for 6,000 hours before striking a specific target. That’s time spent watching, waiting, collecting intelligence, and making determinations about what people on the ground are doing before finally launching a Hellfire missile. A machine that can recognize objects and, more importantly, behaviors could help with that.

“Meaning making” in the context of sensor operators for drones equals understanding what the subject, say a fellow on the roadside outside of Mosul, is doing. Is he digging a hole for an IED or planting vegetables? If he meets another man and embraces him, is he transferring weapons or welcoming home his son? It’s the sort of categorization job that involves some ability to place yourself in the man’s shoes, and ask obvious questions. “If I were an insurgent, would I bury an IED here? If I were farmer, would I be here at all?”

If you ask Pentagon leaders, they’ll say that they have no interest in leaving the ultimate kill decision to drones. But a computer program that could do some of the watching, waiting, categorizing, and tagging could reduce strain on operators and perhaps enable even more patrols, intelligence-gathering, and targeted strikes.

It's not hard to see the value of the last point—having a machine handle some of the workload while leaving decisions to humans—as applied to advanced analytics and BI, customer-service and support applications, or content management systems, to name just a few examples. It's not clear how quickly BPL's approach will be commercialized, but at the least it should set the software industry's AI arms race on a new course.

Still, it's important to retain some healthy skepticism on the topic, in Henschen's view.

"We've been hearing a lot about AI this year, but for all the progress, it's still terribly difficult to even define what 'intelligence' is," he says. "I think it's a leap to predict that chips that process a series of ones and zeros—no matter how quickly—will gain the 'intelligence' to take on a variety of tasks with the same flexibility of the human brain."

Rather, over the coming years expect to see increasingly sophisticated, dedicated applications that, much as BPL does, will provide examples of "superhuman" performance, Henschen says.

"But I don't think we'll see autonomous, artificially 'intelligent' computers taking on a variety of applications and matching human versatility," he adds. "Machines will continue to do what we program them to do, with increasingly amazing feats of performance."