The goal of this course is to continue and build upon the theory covered in Knowledge Discovery and Data Mining 1. In KDDM2 the emphasis lies on practical aspects, and the practical exercise is therefore an integral part of this course. In addition, a number of algorithmic approaches related to the project topics will be covered in detail. Participants may choose one of a number of proposed projects, covering different stages of the Knowledge Discovery process and different data sets.
The main instructor of the course is Roman Kern. If there are open questions, please feel free to send an e-mail with the prefix [KDDM2] in the subject.
Knowledge Discovery and Data Mining are closely related to the concepts of Big Data, Data Science and Data Analytics. Data science encompasses a number of diverse skills, ranging from software engineering to data mining and statistics, following a scientific, evaluation-driven approach. This course aims to develop some of these important skills, with a diverse set of focus areas. In addition, strong skills in analysing and preprocessing big data sets are a necessary prerequisite for many Knowledge Discovery applications.
The slides and resources from the previous years are available here: 2020, 2019, 2018, 2017, 2016, 2015, 2014
Course topics include:
In this course the students will learn about:
At the end of this course the students will be able to apply:
The lectures take place via videos and slides, which can be downloaded directly from this web site. In addition, there are online Q&A sessions on Thursdays, 14:00 - 15:00, via WebEx, where the lecturer will be present and available for questions regarding the topics and projects (feel free to join).
Topic | Notes |
---|---|
Course Organization Videos: Slides: Course Organisation | Introduction to the course and the administrative aspects. |
Ensemble Methods Videos: Slides: Ensemble Methods | Combination of multiple learning algorithms/models/hypotheses. Each learning algorithm may have different strengths and weaknesses. The idea of an ensemble is to combine the (weak) learners (e.g., base classifiers) into a combination that eliminates some of the weaknesses and combines some of the strengths. Today, ensemble algorithms like Random Forests or Gradient Boosting are go-to methods for many data science tasks. |
Time Series Data Analysis Videos: | Time series data requires specific preprocessing and analysis. |
Anomalies in Data Videos: Slides: Anomalies in Data | Outliers/Anomalies/Surprise/Noise - all the same, or different? |
Causal Data Science Videos: Slides: Causal Data Science | How does causality help in Data Science? |
Privacy-Preserving Data Science Video: | Privacy protection and confidentiality in Data Science. Introduction to the main concepts, including k-Anonymity, Differential Privacy and Federated Learning. |
Bias & Assumptions in Data Science Videos: | What assumptions do we have to make to effectively conduct data science? Is our data biased, or our algorithms? And why? And what can we do about that? |
Projects Videos: | All practical projects, together with a teaser video for an overview! |
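The core ensemble idea from the lecture, combining several weak learners by majority vote, can be sketched in a few lines. The decision stumps and the toy 1-D data below are purely illustrative, not one of the lecture's algorithms:

```python
# Majority-vote ensemble of decision stumps (toy 1-D data, illustrative
# thresholds only).

def make_stump(threshold):
    """A weak learner: predict class 1 if x >= threshold, else 0."""
    return lambda x: 1 if x >= threshold else 0

def majority_vote(learners, x):
    votes = sum(learner(x) for learner in learners)
    return 1 if votes > len(learners) / 2 else 0

# Points below 5 belong to class 0, points above 5 to class 1.
data = [(1, 0), (2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (8, 1), (9, 1)]

# Three deliberately imperfect stumps with different thresholds.
stumps = [make_stump(3), make_stump(4), make_stump(7)]

def accuracy(predict):
    return sum(predict(x) == y for x, y in data) / len(data)

individual = [accuracy(s) for s in stumps]
ensemble = accuracy(lambda x: majority_vote(stumps, x))
```

Here the vote matches the best single stump and corrects the weakest one; with more diverse learners the combination typically outperforms every individual member.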
Created with GanttProject: kddm2-project-plan-ws2020.gan
There are several practical projects from different phases of the Knowledge Discovery process to choose from. The work on the projects will be conducted by students on their own (groups of one), but there is also the possibility to form groups of two, where the project scope is then expanded appropriately (see advanced). The students are expected to present their work as a video presentation.
For all projects the evaluation is considered an integral part. One needs to be able to state how well a solution works and what its expected limitations are.
There are data sets proposed for each of the projects, but participants are free to come up with data sets of their own, or to make their own project proposals.
Topic | Notes |
---|---|
TUG Data Team | Task: Take part in a scientific challenge (shared task), or a Kaggle challenge of your choice. More details about the data team and how to get in touch can be found on the Data-Team Homepage. |
Topic | Notes |
---|---|
Outlier Detection | Task: Given some observations (data), find the instances that do not conform to the remainder of the observations. Approach: Select a data-set and setting. Research appropriate algorithmic approaches, implement selected outlier detection algorithms, apply them to the selected dataset and report their performance. Suggested data-sets:
Advanced: Implement your own outlier detection algorithm and compare its performance against the baseline. |
Privacy Protection | Task: Given a dataset which contains some sensitive information, transform it into a representation (e.g., a modified version of that dataset) that no longer contains the sensitive information. The type of sensitive information is defined beforehand and could concern either membership (e.g., whether a certain person is part of the dataset) or some attribute (e.g., the income). |
Sensor Analytics (Studentlab) | Task: Collect sensor data and detect certain states (only for teams). Option A: Biosensors (e.g. heart rate, ...), detect positions. Option B: Industrial sensors (e.g. temperature, ...), estimate the number of people within a room. Option C: Fluid & gas sensors (e.g. CO2, ...), detect certain liquids. Data-Set: Needs to be collected in the course of the project, or alternatively use the one from the "Databases 2" course. |
Detect Causality | Task: Identification of causal relationships (i.e., cause and effect) directly from the data. This is an advanced data science task with big potential. Data-Set: Data from a past challenge (complete with prior research): Causality Challenge #1. Papers:
|
Dataset Collection | Task: Collect datasets (e.g., open-government datasets), analyse these and assess their suitability for a number of application scenarios. Approach: Research datasets, assess their key characteristics, apply data science methods to assess their usefulness. Advanced: Build a database (or similar) that allows collecting and updating the relevant key parameters of each dataset. |
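As a starting point for the Outlier Detection project above, a plain z-score rule already serves as a baseline against which the researched algorithms can be compared. The sensor readings below are invented for illustration:

```python
# z-score outlier baseline: flag values more than `threshold` standard
# deviations away from the mean (invented sensor readings).

def zscore_outliers(values, threshold=3.0):
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    if std == 0:
        return []
    return [v for v in values if abs(v - mean) / std > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 35.0]
outliers = zscore_outliers(readings, threshold=2.0)
```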
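For the Privacy Protection project above, one standard tool is the Laplace mechanism from differential privacy: a counting query has sensitivity 1, so adding Laplace(1/ε) noise makes the released count ε-differentially private. The incomes and the choices of epsilon below are illustrative assumptions:

```python
import math
import random

# Laplace mechanism sketch: a counting query has sensitivity 1, so adding
# Laplace(1/epsilon) noise gives epsilon-differential privacy.
# The incomes and epsilon values are illustrative assumptions.

def laplace_noise(scale, rng):
    # Inverse-transform sampling from Laplace(0, scale).
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon, rng):
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
incomes = [30_000, 55_000, 120_000, 80_000, 45_000]
# True answer to "how many earn over 50k?" is 3; the release is noisy.
noisy = private_count(incomes, lambda v: v > 50_000, epsilon=0.5, rng=rng)
# With a huge epsilon (almost no privacy) the noise becomes negligible.
almost_exact = private_count(incomes, lambda v: v > 50_000, epsilon=1e6, rng=rng)
```

The smaller the epsilon, the wider the noise and the stronger the privacy guarantee; picking the budget is part of the project's evaluation.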
Topic | Notes |
---|---|
Machine Learning | Task: Automatic tagging of resources, using either an unsupervised or a supervised approach. The goal is to apply tags to an unseen resource. Approach: For this project the approach may vary widely. A supervised approach requires a training dataset and may include classification algorithms. Unsupervised approaches may either look at a set of resources or a single resource at a time. Suggested data-sets:
Advanced: Implement an unsupervised and a supervised approach, and then compare the two approaches. Measure their differences in accuracy as well as discuss their individual strengths and weaknesses. |
Explainable AI | Task: Compare two machine learning models with each other and explain/interpret what they have learnt. For example, which features have been picked by the respective models. Approach: Train two separate machine learning models - they should be distinctively different, for example a CNN and a linear model. Advanced: Compare approaches that are predominantly physics-driven (i.e., the features are derived from expert knowledge or physical laws) with data-driven approaches (i.e., no explicit feature engineering). |
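A minimal unsupervised take on the tagging task above is to assign each resource its top TF-IDF terms as tags. The three-document corpus and the naive whitespace tokenizer are toy stand-ins for a real dataset and tokenizer:

```python
import math

# Unsupervised tagging sketch: tag each document with its top TF-IDF terms.
# The three-document corpus and whitespace tokenizer are toy stand-ins.

docs = [
    "data mining finds patterns in data",
    "time series data requires preprocessing",
    "graz is a city in austria",
]

def tokenize(text):
    return text.lower().split()

def tfidf_tags(doc_index, docs, k=2):
    tokens = [tokenize(d) for d in docs]
    doc = tokens[doc_index]
    scores = {}
    for term in set(doc):
        tf = doc.count(term) / len(doc)
        df = sum(1 for t in tokens if term in t)
        idf = math.log(len(docs) / df)
        scores[term] = tf * idf
    return sorted(scores, key=scores.get, reverse=True)[:k]

tags = tfidf_tags(0, docs)
```

Note how the IDF factor suppresses "data", which has the highest raw frequency in the first document but also occurs in a second one.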
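For the Explainable AI project, permutation importance is one model-agnostic way to compare what two models have learnt: shuffle one feature and measure how much the accuracy drops. The hand-made "model" (which only looks at feature 0) and the toy data below merely illustrate the mechanics:

```python
import random

# Permutation importance sketch: shuffle one feature, measure the accuracy
# drop. The hard-coded "model" (uses only feature 0) and data are toys.

def model(x):
    return 1 if x[0] > 0.5 else 0

data = [([0.9, 0.1], 1), ([0.8, 0.7], 1), ([0.2, 0.9], 0), ([0.1, 0.3], 0)]

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(feature, rows, seed=0):
    rng = random.Random(seed)
    shuffled = [x[feature] for x, _ in rows]
    rng.shuffle(shuffled)
    permuted = [(x[:feature] + [v] + x[feature + 1:], y)
                for (x, y), v in zip(rows, shuffled)]
    return accuracy(rows) - accuracy(permuted)

importance = [permutation_importance(f, data) for f in (0, 1)]
```

A feature the model ignores gets an importance of exactly zero, which is precisely the kind of difference the project asks you to surface between two models.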
Topic | Notes |
---|---|
Timeseries Prediction | Task: Given a set of sensor data for multiple streams, e.g. temperature, power consumption, the goal is to predict the future values of these signals, optimally including a confidence range. Approach: Take the stream of data and build a prediction algorithm that is able to predict the future values of the streams as accurately as possible. Suggested data-sets:
Advanced: Try to detect events (e.g. meetings) within the data. This is a hard task, as there is no ground truth to evaluate against; thus it is part of the project to come up with strategies on how to measure the quality of the algorithms. |
Time Series Classification | Task: Classification of time series data. Approach: Pick a dataset from the linked repository of time series datasets and try to reproduce (or surpass) the posted performance values. Suggested data-sets:
Advanced: Analyse how the performance (i.e., classification accuracy) drops as less data is used (fewer parts of the time series), to simulate an early classification problem. |
Pattern Mining in Time Series | Task: Given a set of sensor data for multiple streams (e.g. temperature, power consumption), apply sequential pattern mining on a time series data-set (optionally applying SAX beforehand). Suggested data-sets:
Advanced: Pre-process the data via piecewise linear approximation. |
Seasonality in Time Series | Task: Detect the seasonality (= number of observations of the dominant repeating pattern in the time series). Approach: Take the stream of data and build an algorithm that detects the length of the dominant repeating pattern as reliably as possible. Suggested data-sets:
|
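For the Timeseries Prediction project above, a persistence forecast (predict the last observed value) with a crude confidence range derived from historical one-step changes is a common first baseline. The temperature readings are invented:

```python
# Persistence baseline: predict the next value as the last observation and
# attach a +/- band of twice the mean absolute one-step change.
# The temperature series is invented.

def persistence_forecast(series):
    changes = [abs(b - a) for a, b in zip(series, series[1:])]
    mad = sum(changes) / len(changes)
    last = series[-1]
    return last, (last - 2 * mad, last + 2 * mad)

temperatures = [21.0, 21.4, 21.2, 21.6, 21.5, 21.8]
prediction, (low, high) = persistence_forecast(temperatures)
```

Any learned model proposed in the project should at least beat this baseline, and its confidence range should be better calibrated than this crude band.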
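For the Time Series Classification project, 1-nearest-neighbour with Euclidean distance is the usual baseline that published performance values are measured against. A minimal version, with toy series and labels:

```python
# 1-nearest-neighbour with Euclidean distance, the classic baseline for
# time series classification (toy series and labels).

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def one_nn(train, query):
    """train: list of (series, label) pairs; return the nearest label."""
    return min(train, key=lambda item: euclidean(item[0], query))[1]

train = [([0, 1, 0, 1], "wave"), ([0, 1, 2, 3], "ramp")]
label = one_nn(train, [0, 1, 0, 2])
```

Swapping Euclidean distance for an elastic measure such as Dynamic Time Warping is a typical next step when trying to surpass posted results.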
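The SAX preprocessing mentioned in the Pattern Mining project can be sketched as z-normalisation, Piecewise Aggregate Approximation, then mapping segment means to symbols. The breakpoints -0.43/0.43 are the standard Gaussian breakpoints for a 3-symbol alphabet; the input series is a toy example:

```python
# SAX sketch: z-normalise, apply Piecewise Aggregate Approximation (PAA),
# then map each segment mean to a symbol. The breakpoints -0.43/0.43 are
# the standard Gaussian breakpoints for a 3-symbol alphabet.

def sax(series, segments=4, breakpoints=(-0.43, 0.43), alphabet="abc"):
    n = len(series)
    mean = sum(series) / n
    std = (sum((v - mean) ** 2 for v in series) / n) ** 0.5
    z = [(v - mean) / std for v in series]
    size = n // segments  # assumes n is a multiple of `segments`
    paa = [sum(z[i * size:(i + 1) * size]) / size for i in range(segments)]
    return "".join(alphabet[sum(v > bp for bp in breakpoints)] for v in paa)

word = sax([1, 1, 2, 2, 8, 8, 9, 9])
```

The resulting symbolic word is what the sequential pattern mining step would then operate on.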
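For the Seasonality project, a simple estimator picks the lag with the highest autocorrelation as the dominant period. The toy signal below repeats every 4 observations:

```python
# Seasonality sketch: report the lag with the highest autocorrelation as
# the dominant period. The toy signal repeats every 4 observations.

def autocorrelation(series, lag):
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

def dominant_period(series):
    lags = range(2, len(series) // 2 + 1)
    return max(lags, key=lambda lag: autocorrelation(series, lag))

signal = [0, 1, 2, 1] * 6
period = dominant_period(signal)
```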
Topic | Notes |
---|---|
Query Completion | Scenario: A user is starting to search by typing in some words... Task: The system should automatically suggest word completions, depending on the already entered words. Suggested data-sets:
Advanced #1: Provide an estimate of the number of hits for each completion suggestion. Advanced #2: Provide not only completions, but also similar queries (yielding similar search results); this may also include synonyms. |
Blog Search | Task: Provide a list of matching resources for a given piece of text. The goal is to produce a ranked list of items relevant to a context. Use-Case: Consider a user writing a text, for instance a blog post. While typing, the user is presented with a list of suggested items, which might be relevant or helpful. Suggested data-sets: Same as previous task. Advanced: Identify Wikipedia concepts within the written text. For example, if the text contains the word Graz it should be linked to the corresponding Wikipedia page. |
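For the Query Completion project above, a minimal approach ranks logged queries that extend the typed prefix by their frequency. The query log below is hypothetical:

```python
from collections import Counter

# Prefix completion sketch: rank logged queries extending the typed prefix
# by frequency. The query log is hypothetical.

query_log = [
    "data mining", "data mining", "data science",
    "data structures", "database systems",
]

def complete(prefix, log, k=3):
    counts = Counter(q for q in log if q.startswith(prefix) and q != prefix)
    return [q for q, _ in counts.most_common(k)]

suggestions = complete("data ", query_log)
```

A production system would use a trie or precomputed index instead of a linear scan, but the ranking idea stays the same.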