A key goal of AI research is to develop agent systems that can make decisions and complete tasks without direct human supervision.
Agent systems research focuses on closing the perception-action loop: given the results of perception, how should an agent act in order to reach its goal, maximize its utility, and minimize its costs? Autonomous robots are a prototypical example of such systems, though an agent system can also be a computer that plays board games like chess and Go, or a search engine that meets information needs and offers recommendations.
An interesting setting is that of multiple agents that collaborate and communicate to behave intelligently. This involves understanding each other's goals and perceptions, and planning actions collaboratively.
In Computational Intelligence (CI), you research techniques that achieve intelligence, or at least intelligent behavior, by considering the behavior that emerges from the interaction between relatively simple components in large collectives.
The algorithms are often bio-inspired: they imitate aspects of behavior found in nature. Examples include algorithms that mimic trail formation in ant colonies for optimal path planning, evolutionary algorithms that mimic natural evolution to optimize robot control, and neural networks for predicting the course of a disease. CI employs these algorithms to develop systems such as swarm robotic systems, smart environments, health systems with interactive sensing devices, and smart vehicles, all of which are adaptive, collective, autonomous, and self-organizing.
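To give a flavor of such bio-inspired algorithms, here is a minimal sketch of a (1+1) evolutionary algorithm. The OneMax fitness function (count the ones in a bit string) and all parameters are illustrative toy choices, not part of any specific CI course or system:

```python
import random

def one_max(bits):
    """Toy fitness function: the number of ones in the bit string."""
    return sum(bits)

def one_plus_one_ea(n=20, generations=500, seed=0):
    """Minimal (1+1) evolutionary algorithm: mutate a single parent and
    keep the offspring only if it is at least as fit."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        # Flip each bit independently with probability 1/n.
        child = [b ^ (rng.random() < 1 / n) for b in parent]
        if one_max(child) >= one_max(parent):
            parent = child
    return parent

best = one_plus_one_ea()
print(one_max(best), "ones out of 20")
```

Even this bare-bones variant exhibits the CI theme the text describes: globally useful behavior (near-optimal bit strings) emerging from simple local operations (random bit flips plus selection).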
Given the inherent complexity of CI systems and the difficulties in analytically predicting the behavior that emerges, CI has a strong focus on experiments. As a student in CI you can gain real-life experience with applying and/or researching CI techniques.
This can be through an internship at a company that exploits CI techniques, or you can become involved in research projects in Evolutionary Robotics, Artificial Life and adaptive health systems, leading you to conduct proper scientific research aiming at a publication, typically at a prestigious conference.
Computer vision focuses on techniques and models for acquiring and analyzing images in order to understand objects and scenes in the real world.
Computer vision is important for the construction of intelligent methods and techniques for (autonomous) systems that interpret sensory information and use that information to generate intelligent and goal-directed behavior.
Computer vision methods include image segmentation, object recognition and profiling, motion estimation, event detection, 3D scene reconstruction, human-behavior analysis, and face and gesture recognition. These methods are studied using elements of geometry, physics, and statistics.
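As a taste of the simplest of these methods, image segmentation, here is a sketch of Otsu's thresholding, a classic statistical technique that splits grayscale pixels into background and foreground. The flat pixel list below stands in for an image and is invented toy data:

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the grey-level threshold that maximizes the
    between-class variance of the background/foreground split."""
    n = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        bg = [p for p in pixels if p < t]
        fg = [p for p in pixels if p >= t]
        if not bg or not fg:
            continue
        # Between-class variance: weighted squared distance of class means.
        var = (len(bg) / n) * (len(fg) / n) * \
              ((sum(bg) / len(bg)) - (sum(fg) / len(fg))) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# A toy "image": dark background around 30, a bright object around 200.
image = [28, 30, 32, 25, 31, 198, 205, 200, 202, 199]
t = otsu_threshold(image)
mask = [p >= t for p in image]
```

Real segmentation pipelines operate on 2D arrays and far richer models, but the statistical core, optimizing a separation criterion over pixel intensities, is the same.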
The way we access, provide, and exchange information has changed dramatically with the rise of the Internet.
Information retrieval studies and invents methods and techniques for the design, implementation, and use of information processing technology in the context of a variety of Internet applications, ranging from search engines to text analysis.
Information Retrieval has developed from a number of research areas, including Computer Science, Library Science, Artificial Intelligence, Data Mining, and Natural Language Processing.
While Information Retrieval builds on techniques from a variety of research areas, there are a number of research problems that are specific to Web applications, such as the design of Internet search engines, efficient linking of related information across the Web, improving information extraction from social networking sites, and access to foreign-language information.
In addition, the sheer scale of the Internet opens up tremendous opportunities for data mining approaches, while at the same time posing interesting research challenges with respect to robustness and scalability.
Information Retrieval courses familiarize you with several data mining, natural language processing, and link-based techniques. They cover the well-established techniques within the area but also look forward, discussing the science behind cutting-edge technologies and anticipating Web technologies that have yet to be fully realized.
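The best-known link-based technique is PageRank, which ranks pages by how link mass flows through the Web graph. The sketch below is a textbook power-iteration version on an invented three-page toy graph, not the implementation of any real search engine:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank: each page spreads its rank over its
    outgoing links, with a damping factor modelling random jumps."""
    nodes = list(links)
    ranks = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if outs:
                share = damping * ranks[n] / len(outs)
                for m in outs:
                    new[m] += share
            else:  # dangling page: spread its rank evenly over all pages
                for m in nodes:
                    new[m] += damping * ranks[n] / len(nodes)
        ranks = new
    return ranks

# Toy web graph: page "a" links to "b" and "c", etc.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
```

Here page "c" ends up highest because it receives links from both other pages; the same principle, scaled to billions of pages, underlies link-based Web search.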
When humans reason about the world, we identify objects, we make categories of such objects, and we reason about the relations between the things in the world around us. How can we represent such knowledge in a computer, in such a way that a computer could reason about the world around it in a similar way?
The field of Knowledge Representation and Reasoning aims to represent knowledge in such a form that a computer system can use it to solve complex tasks such as diagnosing a medical condition or having an intelligent dialog in a natural language.
Knowledge representation and reasoning uses logic as its main mathematical tool and tries to answer questions such as: how can we design logics that can efficiently reason with very large amounts of knowledge? Which logics are suited for reasoning about space and time? How can we deal with uncertainty and vagueness? How can we reason about changes in the world around us? Knowledge Representation techniques are used in many practical applications.
Examples are expert systems for medical diagnosis, decision support systems for judges, and intelligent dialogue systems such as Siri on the iPhone.
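The rule-based reasoning behind such expert systems can be sketched as naive forward chaining: keep firing rules whose premises hold until no new facts are derived. The diagnostic rules below are invented, purely illustrative toy data (and certainly not medical advice):

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly fire rules whose premises all
    hold, adding their conclusions, until a fixpoint is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy diagnostic rules: (premises, conclusion).
rules = [
    (["fever", "cough"], "flu_suspected"),
    (["flu_suspected", "short_of_breath"], "see_doctor"),
]
derived = forward_chain(["fever", "cough", "short_of_breath"], rules)
```

Note how the second rule only fires because the first one derived `flu_suspected`: chains of such inferences are what let a knowledge-based system reach conclusions not stated directly in its input.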
In Machine Learning, you develop algorithms that can improve their performance by learning from experience. Experience often comes in the form of very large amounts of data, or “Big Data”.
The resulting algorithms and models are used for making predictions and for improving decisions. Machine learning has become a core technology for a wide variety of applications, such as text and image classification, information retrieval, robot control, discovering causal explanations, social network analysis, customer intelligence, anomaly detection, recommendation systems, fraud detection, and forecasting.
Due to the increased availability of data from sensors (Internet-of-Things), the range of applications is growing fast. The emphasis in this area is on algorithms and statistical models that explain why and when algorithms work. You also work on a number of algorithms in detail, such as clustering, dimensionality reduction, regression and classification, graphical models and deep learning. This area has a strong mathematical component, but there is also an emphasis on developing the skills to implement machine learning algorithms through project assignments.
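Of the algorithms mentioned, clustering is perhaps the easiest to sketch. Below is a minimal k-means on one-dimensional toy data; the data points and parameters are invented for illustration:

```python
import random

def k_means(points, k=2, iters=20, seed=0):
    """Minimal k-means on 1-D data: alternate between assigning each
    point to its nearest centroid and recomputing centroids as means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups of points, around 1.0 and around 10.0.
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centers = k_means(data)
```

The two recovered centers land on the means of the two groups. The same assign-then-update alternation, generalized to high-dimensional feature vectors, is how k-means is used in practice.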
As a student in Machine Learning you can do your master's thesis on a fundamental topic, e.g. developing a new general algorithm, but also on a more applied topic, e.g. developing an innovative application. Many students conduct their thesis research as an intern with a company.
Natural Language Processing
Over the past few years, research in natural language processing has shown strong evidence for the effectiveness of models that combine hierarchical structure with statistical learning from corpora. In this area you will study state-of-the-art statistical models for complex language processing tasks such as parsing, language modeling, and machine translation.
A characteristic of some of these models is that they involve defining probability measures over hierarchical structures such as trees and graphs. The profile covers supervised as well as unsupervised methods for learning these models directly from large training corpora and provides the necessary background for research in Computational Linguistics and Natural Language Processing.
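The simplest example of learning a language model directly from a corpus is the maximum-likelihood bigram model: estimate P(w2 | w1) as count(w1 w2) / count(w1). The three-sentence corpus below is an invented toy; real systems train on millions of sentences and add smoothing:

```python
from collections import Counter

def train_bigram(corpus):
    """Maximum-likelihood bigram model from a list of sentences,
    with <s> and </s> marking sentence boundaries."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(tokens[:-1])          # contexts w1
        bigrams.update(zip(tokens, tokens[1:]))
    return lambda w1, w2: bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

corpus = ["the cat sat", "the cat ran", "the dog sat"]
p = train_bigram(corpus)
```

For this corpus, `p("the", "cat")` is 2/3, since "the" is followed by "cat" in two of its three occurrences. Hierarchical models such as probabilistic grammars replace these word-pair counts with counts over tree fragments, but the estimation idea is the same.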
Data Science
This specialization focuses on understanding, analyzing, and working with large amounts of data. You can study the entire Data Science lifecycle, from data acquisition and management to analysis and visualization.
These techniques include machine learning and data mining, large scale data management, information visualization and reasoning over web data.
There is a strong emphasis on applying artificial intelligence techniques to Data Science problems, in particular on setting up experiments and performing informative analyses.
You will have the opportunity to apply your knowledge to large real-world datasets such as those from social media or the Web. During the final Master's project, you will bring together all facets of your education to tackle a data science problem.
AI and the Web
Since its invention in the early '90s, the Web has become the largest information space ever constructed. The Web is not only a very large environment but also a very diverse one: it combines text, images, video, and data, and it is highly dynamic and noisy.
That makes the Web a natural “habitat” for intelligent systems, and many typical AI problems can be investigated in this habitat:
Can we build computers that can reason about the information in websites?
Can we build search engines that really understand our question, and that give us intelligent answers?
Can we build smart agents that travel across the web to collect personalized information?
Can we use natural language processing to build computers that can read and understand web pages?
Can we use machine learning techniques to automatically categorize web pages, or even to learn which web pages are trustworthy, and which ones are not?
The interdisciplinary field of “AI on the Web” combines techniques from such diverse sub-fields as machine learning, natural language processing, knowledge representation and intelligent agents to tackle these challenging problems.
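One of the questions above, using machine learning to categorize web pages, can be sketched with a tiny Naive Bayes text classifier. The documents and labels below are invented toy data, and the classifier is a generic textbook technique, not a specific system from any course:

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes with add-one smoothing.
    docs is a list of (text, label) pairs."""
    word_counts = {}          # label -> Counter of words
    label_counts = Counter()  # label -> number of documents
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts.setdefault(label, Counter()).update(words)
        label_counts[label] += 1
        vocab.update(words)

    def classify(text):
        words = text.lower().split()
        best, best_score = None, -math.inf
        for label in label_counts:
            total = sum(word_counts[label].values())
            # Log prior plus smoothed log likelihood of each word.
            score = math.log(label_counts[label] / sum(label_counts.values()))
            for w in words:
                score += math.log((word_counts[label][w] + 1)
                                  / (total + len(vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

    return classify

docs = [
    ("cheap pills buy now", "spam"),
    ("limited offer buy cheap", "spam"),
    ("meeting agenda project report", "ham"),
    ("project deadline and report review", "ham"),
]
classify = train_nb(docs)
```

Replace "spam"/"ham" with page categories (or "trustworthy"/"untrustworthy") and the words with page content, and this is, in miniature, the categorization task the question describes.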