University of Siena
Department of Information Engineering and Mathematics
Franco Scarselli

Final exam and assignment of the course on Advanced Database Systems, 2013-2014

Grading
Grades are based on a final exam (75%) and a final assignment (25%).

Final exam
The final exam is oral. The questions may concern any of the topics discussed in the course slides. Students are expected to understand the main concepts behind the technologies and methods presented during the lectures. Remembering all the details and exercises of the more complex topics is not mandatory, but students should be able to reconstruct most of those details and exercises from their own knowledge.

Dates
Students can take the final examination and submit the assignment on the official exam dates or (upon request) during office hours (Wednesday, 14:30-16:30).

Final assignment
The goal of the final assignment is to study a topic in data mining or information retrieval in depth. Students can choose between two types of assignment: studying a research paper, or carrying out a small project that includes an experimental part. In both cases, students should prepare a short slide presentation (10-15 minutes) describing their work.

The final assignment can be carried out in groups of up to 3 students. While students work on the assignment, the teacher is available for hints and to review preliminary versions of the assignment.

The topic of the project must be discussed with the teacher. Some examples are listed below. Students may also propose their own topics.
  • Research papers
    • Web spam detection is the problem of recognizing pages that contain spam. A survey of spam detection methods is given in [1]. The two papers [2], [3] demonstrate the importance of using web connectivity to detect spam.
    • Learning to rank is the application of machine learning to the construction of ranking models for information retrieval systems. The work [8] provides a general description of how such a problem can be addressed in search engines. Several tutorials about learning to rank are available online. Three of the most well-known methods are RankNet [4] (which is said to be used in Microsoft's Bing), AdaRank [5] and ListNet [6]. Finally, the method [9] was developed at our department.
    • Computer clusters are required when an information retrieval system serves a huge number of users. MapReduce, a programming model for processing and generating large data sets, is used by Google and described in [7] (a minimal sketch of the map/reduce pattern is given after this list).
    • Text categorization is the problem of classifying a text according to its content. A survey of the methods used for text categorization is in [12]. The most common approaches are based on naive Bayes, e.g. [10], and on support vector machines, e.g. [11] (a toy naive Bayes sketch is given after this list).
    • Web link analysis allows search engines to measure the importance of pages. PageRank was introduced in [13]; a deep analysis of PageRank properties is in [14]. Extensions of PageRank include ranks based on document contents [15], [16], HITS [17] and TrustRank (currently used by Google). A small PageRank example is sketched after this list.
  • Experimental projects
    • WEBSPAM-UK2006 is a benchmark that has been used in one of the competitions whose goal was to compare web spam detection algorithms (see the competition website for details). The dataset contains a web snapshot with 18 million pages extracted from 11,400 hosts. Such a benchmark can be used to experiment with spam detection methods.
    • LETOR has been extensively used to test query ranking algorithms. LETOR includes several benchmarks, each one containing information about a set of queries and a set of documents. The purpose is to design algorithms that are able to sort the documents according to their relevance to the queries.
    • Recently, a research group of our university has constructed a large dataset of about 160,000 proteins. In the dataset, each pattern describes the amino acid distribution in the core of a protein and the classification of the protein. Such a dataset can be used to study the relationships between the characteristics of the proteins and the characteristics of their cores.
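
To give an idea of the MapReduce programming model mentioned above, the following is a minimal, purely illustrative sketch of the map/reduce pattern applied to word counting, written in plain Python rather than in Google's actual framework; the function names and example documents are our own.

    from collections import defaultdict

    # Toy illustration of the MapReduce pattern (word count) in plain Python;
    # this is not Google's framework, only the two-phase idea described in [7].

    def map_phase(documents):
        """Emit (word, 1) pairs for every word in every document."""
        for doc in documents:
            for word in doc.lower().split():
                yield word, 1

    def reduce_phase(pairs):
        """Group the pairs by key and sum the counts for each word."""
        counts = defaultdict(int)
        for word, count in pairs:
            counts[word] += count
        return dict(counts)

    if __name__ == "__main__":
        docs = ["the cat sat on the mat", "the dog chased the cat"]
        print(reduce_phase(map_phase(docs)))  # e.g. {'the': 4, 'cat': 2, ...}

In a real cluster the map and reduce phases run in parallel on many machines; here both phases are sequential and serve only to show the data flow.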
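For text categorization, the following is a toy sketch of a multinomial naive Bayes classifier with Laplace smoothing; it illustrates the general approach only, not the specific method of [10], and the training examples are invented.

    import math
    from collections import Counter, defaultdict

    # Minimal multinomial naive Bayes for text categorization (toy data,
    # Laplace smoothing); a sketch of the general approach, not of [10].

    def train(labelled_docs):
        class_docs = defaultdict(int)        # number of documents per class
        word_counts = defaultdict(Counter)   # word frequencies per class
        vocab = set()
        for text, label in labelled_docs:
            class_docs[label] += 1
            for w in text.lower().split():
                word_counts[label][w] += 1
                vocab.add(w)
        return class_docs, word_counts, vocab

    def classify(text, class_docs, word_counts, vocab):
        total_docs = sum(class_docs.values())
        best, best_score = None, float("-inf")
        for label, n_docs in class_docs.items():
            # log P(class) + sum over words of log P(word | class)
            score = math.log(n_docs / total_docs)
            n_words = sum(word_counts[label].values())
            for w in text.lower().split():
                score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

    if __name__ == "__main__":
        data = [("cheap pills buy now", "spam"), ("meeting agenda attached", "ham")]
        model = train(data)
        print(classify("buy cheap pills", *model))  # -> 'spam'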
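Finally, a small PageRank example computed by power iteration on a hypothetical four-page link graph, using the standard formulation with damping factor 0.85; the graph and the parameters are invented for illustration and do not come from [13] or [14].

    # Toy PageRank by power iteration on a small hypothetical link graph,
    # following the standard formulation with damping factor d = 0.85.

    def pagerank(links, d=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - d) / n for p in pages}
            for p, outlinks in links.items():
                if outlinks:
                    share = d * rank[p] / len(outlinks)
                    for q in outlinks:
                        new_rank[q] += share
                else:
                    # dangling page: spread its rank uniformly over all pages
                    for q in pages:
                        new_rank[q] += d * rank[p] / n
            rank = new_rank
        return rank

    if __name__ == "__main__":
        graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
        print(pagerank(graph))  # page C should receive the highest rank
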
Bibliography