
Discussion Thread on AI

 
Lynette Vey 
Artificial Intelligence
Friday, July 7, 2017 6:32:49 AM EDT

In this week's slides it was suggested what machine learning is not: machines are not smart. They need to be trained or taught before they can do anything. The Turing Test has been used as the test that machines need to pass in order to be termed "Human-Like". But to what extent is this test justified if Description Logic is used?

 
 
 
Raleigh Douglas Herbert 
RE: Artificial Intelligence
 

Great question. If humans provide machines with the DL in order to infer, it seems to me that we still have the upper hand over machines when it comes to making logical connections between abstract concepts like classifying a whale as a mammal and not a fish. However, once we give the machines the DL to pass the Turing Test, have we lost the ability to reverse it?    
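The whale-versus-fish example can be sketched in a few lines. This is a toy sketch, not a real DL reasoner; the class names and subclass axioms are invented for illustration:

```python
# Toy sketch of DL-style subsumption: a transitive-closure walk over
# hand-written subclass axioms. Class names here are made up.

SUBCLASS_AXIOMS = {
    "Whale": {"Cetacean"},
    "Cetacean": {"Mammal"},
    "Mammal": {"Animal"},
    "Fish": {"Animal"},
}

def is_subclass(cls, ancestor, axioms=SUBCLASS_AXIOMS):
    """Return True if `cls` is (transitively) a subclass of `ancestor`."""
    frontier = [cls]
    seen = set()
    while frontier:
        current = frontier.pop()
        if current == ancestor:
            return True
        if current in seen:
            continue
        seen.add(current)
        frontier.extend(axioms.get(current, ()))
    return False

print(is_subclass("Whale", "Mammal"))  # True: Whale -> Cetacean -> Mammal
print(is_subclass("Whale", "Fish"))    # False: no axiom path exists
```

The machine classifies the whale "correctly" only because a human wrote the axioms, which is exactly the upper hand described above.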

 
 

Nguyen Khanh Trang Dang 
RE: Artificial Intelligence
 

Great question, Lynette! 

Artificial Intelligence is a fascinating area of study and more often than not incurs heated debates over whether or not "machines can think." I am not particularly well versed in this topic, so I don't have a direct answer to this question. As you mentioned, machines are not smart and need to be taught. Description Logic is a way to help them understand the relationships between facts within a particular domain or set of concepts. I think the more facts and connections the machine acquires, the more accurate its deductions and the better its decision-making. Moreover, a machine's memory capacity can be enormous, and its "thinking" speed can even surpass human capability. So does that mean a machine could be more "Human-Like" than a human? Strictly speaking, that would hold only if our thinking process were based entirely on logic; in reality, of course, multiple aspects and cognitive functions are involved. This is just my thought, and it may well be incorrect.

 
 
Nicholas Houlahan 
RE: Artificial Intelligence

Interesting question! It seems to me that Description Logic is one tool in the toolbox to train machines to think like a human, in particular that part of human thinking where inferences are made based on a certain knowledge base. The Turing test could still be justified in order to see whether the inferences made on the basis of the description logic accurately reflect a human's way of thinking. It seems to me that we still have a long way to go, given the complexity of human thinking and the ways it accommodates exceptions to systematic inferencing, as brought out by Brachman's article. In addition, the description logic has to be maintained, unless a computer can maintain itself to imitate reality, or human thinking about reality, so there's always a need for correction and refinement.

I also really like Nguyen's point about computers surpassing human capability: computers can be great at volume and speed, and if we include the kinds of inferencing we've been reading about this week, it seems that computers have the ability to become even more "human-like" in the way they interpret "facts" (i.e. triples). A classic example is diagnosis bias where doctors sometimes frame their diagnoses based on what they know and their initial experiences, rather than all available information. Computers are better equipped to process higher volumes of data to avoid such biases. This is certainly departing from human-like thinking, but isn't that the point in this case? 
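The triple-based inferencing mentioned above can be illustrated with a minimal forward-chaining loop. This is a hypothetical sketch, not any particular reasoner; the predicates mimic rdfs:subClassOf and rdf:type but are plain strings, and "Moby" is an invented individual:

```python
# Forward chaining over RDF-style triples to a fixed point, using two
# rules: subClassOf is transitive, and types propagate up subClassOf.

triples = {
    ("Moby", "type", "Whale"),
    ("Whale", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
}

def infer(triples):
    """Repeatedly apply both rules until no new triple is derived."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (c, p2, d) in list(inferred):
                if b != c:
                    continue
                if p1 == "subClassOf" and p2 == "subClassOf":
                    new = (a, "subClassOf", d)
                elif p1 == "type" and p2 == "subClassOf":
                    new = (a, "type", d)
                else:
                    continue
                if new not in inferred:
                    inferred.add(new)
                    changed = True
    return inferred

facts = infer(triples)
print(("Moby", "type", "Animal") in facts)  # True: derived, not asserted
```

Note that every derived fact is only as good as the asserted triples, which is the maintenance problem raised above: if the knowledge base drifts from reality, the inferences drift with it.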

 

Andrew Janco 
RE: Artificial Intelligence

I have a lot to say in response to your thought-provoking post. First of all, why do you say that machines aren't smart just because they have to be trained? Isn't intelligence a measure of a being's ability to learn? In that case, machines are extremely intelligent (superhuman, even) when given the right model and features to learn from. At this point, we can't explicitly tell a machine what to learn, so we have to give it lots and lots of supervised training data. It's the only way we know to describe what we want it to learn in terms that the machine will understand. With one-shot learning and semi-supervised learning, models are learning from less and less data. Recent experiments have demonstrated that a model trained on one kind of task can be trained effectively to accomplish other tasks through transfer learning. While a general intelligence is still in the offing, how certain are we that humans have that same "general intelligence" that we ask of AI? There's still much to learn about both human neuroscience and the possibilities of artificial neural networks and machine learning.

    
As far as the Turing Test is concerned, I'd agree that machines are very good at Description Logic, while relatively few humans are. If only description logic is allowed, machines will have to learn to make mistakes in ways that seem human. The game itself is not unlike a Generative Adversarial Network: one side generates answers, while the other distinguishes whether they are real or not. Picture an interrogator talking to both a robot and a person. We reach equilibrium when the discriminator is no longer able to tell the difference between a robot response and a human response. In this game, the robot is most likely to get caught when it makes robot-like statements, particularly those that use descriptive or relational logic, since these are things that we know computers are good at. It will get better results when it does or says things that we don't think computers are able to do. Researchers at Rutgers recently created a creative generative model that not only generates art, but generates art that doesn't fit into existing categories and genres (arxiv.org/abs/1706.07068). By selecting for the innovative and unconventional, the machine is able to make art that, when hung in a gallery, patrons consistently identified as more unique and creative than many human-made works. Applied to the Turing Test, this gives machines the ability to surprise and act in ways that we don't expect of computers.
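The adversarial framing can be caricatured in a few lines. This is a toy simulation, not a real GAN: the reply lists, the single-cue discriminator, and all probabilities are invented for illustration.

```python
# Toy imitation game: a "generator" mixes robot-like and human-like
# replies, a "discriminator" guesses by one mechanical cue (caps/digits).
import random

random.seed(0)

HUMAN_REPLIES = ["hmm, not sure", "ha, good one", "wait, what?"]
ROBOT_REPLIES = ["ANSWER: 42.000000", "QUERY NOT UNDERSTOOD"]

def generator(p_humanlike):
    """Emit a human-like reply with probability p_humanlike."""
    pool = HUMAN_REPLIES if random.random() < p_humanlike else ROBOT_REPLIES
    return random.choice(pool)

def discriminator(reply):
    """Guess 'robot' on an obviously mechanical cue, else 'human'."""
    mechanical = reply.isupper() or any(ch.isdigit() for ch in reply)
    return "robot" if mechanical else "human"

def detection_rate(p_humanlike, trials=10_000):
    """Fraction of generator replies the discriminator flags as robotic."""
    hits = sum(discriminator(generator(p_humanlike)) == "robot"
               for _ in range(trials))
    return hits / trials

# As the generator learns to avoid robot-like statements, the
# discriminator's detection rate falls toward zero.
for p in (0.0, 0.5, 0.9):
    print(p, round(detection_rate(p), 2))
```

The equilibrium point described above is where the detection rate offers no information, i.e. the discriminator can do no better than guessing.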

 
 
Scott Jordan 
RE: Artificial Intelligence
 

I think artificial intelligence can aid humans in finding information and even provide the clues for making sense of it. But in the end, it is humans who actually do the sense-making. Even after we train machines to “reason,” categorize, describe, or perform some other function to identify objects and reveal relationships to other entities or classes, humans determine meaning based on many facets of experience and knowledge. In other words, computers simply cannot say definitively and completely what the “Italian Renaissance” is, but humans build that knowledge, extensively if imperfectly, in ways that go beyond computer logic. It seems with the Turing Test that machines can be programmed to be “human-like,” or in other words, to perform support functions for humans as they build and share knowledge. But there are many inferencing hurdles to jump. For instance, the logical statement that a “whale is a mammal” means that any whale is a mammal, but “John is a bachelor” may lead to ambiguity because not all people named John are bachelors. As Brachman (1983) noted, the semantic intent could include a variety of ideas and characteristics describing an object. This is still a fair question; after all, to what extent can machines be more like humans?
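The whale/bachelor contrast can be made concrete with a toy terminology/assertion split. This is my own illustrative encoding, not Brachman's formalism; the identifiers (john_42, john_7) are made up:

```python
# "A whale is a mammal" is a class-level (TBox) axiom that quantifies
# over every whale; "John is a bachelor" is an instance-level (ABox)
# assertion about one individual, and cannot be generalized to a name.

tbox = {("Whale", "subClassOf", "Mammal")}       # holds for ALL whales
abox = {
    ("john_42", "type", "Bachelor"),
    ("john_42", "name", "John"),
    ("john_7", "name", "John"),                  # same name, no marital status asserted
}

def all_instances_are(cls, super_cls):
    """A TBox axiom licenses a universal claim about every instance."""
    return (cls, "subClassOf", super_cls) in tbox

def bachelors_named(name):
    """An ABox assertion covers only the asserted individuals."""
    ids = {s for (s, p, o) in abox if p == "name" and o == name}
    return {s for (s, p, o) in abox
            if p == "type" and o == "Bachelor" and s in ids}

print(all_instances_are("Whale", "Mammal"))  # True: every whale qualifies
print(bachelors_named("John"))               # {'john_42'}: john_7 is not entailed
```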

 
 
 

Maria Victoria Fernandez 
RE: Artificial Intelligence

When it comes to description logic and machine learning, I think the article by Paul H. Cleverley and Simon Burnett, "The best of both worlds: Highlighting the synergies of combining manual and automatic knowledge organization methods to improve information search and discovery in oil and gas enterprises" (2015), provided a great case study of how, despite the increased use of machine learning to automate information search, discovery, and classification practices, it is the combination of manual classification and automated techniques that provides the best results. The nuances of manual, human classification practices have not yet been fully replicated using machine learning. 

Reading this article, I also thought about the role of machine learning in natural language processing and the limits of providing training data to machines. The tide is currently shifting from teaching machines to make predictions based on patterns in data using training models to expanding the capacities of artificial neural networks. Andrew also touched on this in his post, and it made me think of the advances Google has made in improving its translation capacities by using neural networks instead of copious amounts of training data. A New York Times article from last December titled "The Great A.I. Awakening" did a great job of explaining to a lay audience how Google used artificial intelligence to transform Google Translate and what some of the greater implications are for the future of machine learning. Just as neural networks are transforming natural language processing, this new computer science paradigm is undoubtedly affecting linked data practices in a variety of contexts.