Tutorial overview

ICWE 2012 hosts five tutorials:

  • The Web of Data for E-Commerce in Brief
  • Engineering Evaluation Approaches to fit different Web Project/Organization Needs
  • Epidemic Intelligence for the Crowd, by the Crowd
  • An Introduction to SPARQL and Queries over Linked Data
  • Natural Language Processing for the Web

Tutorial 1: The Web of Data for E-Commerce in Brief (full-day)

This tutorial is a hands-on introduction to the GoodRelations ontology, Schema.org, RDFa and Microdata authoring, Google Rich Snippets for Products, Yahoo, Bing, and Linked Open Commerce. The GoodRelations ontology (http://purl.org/goodrelations/) is one of the great success stories of applying Semantic Web technology to business challenges. In this tutorial, we will (1) give a comprehensive overview and hands-on training on the conceptual structures of the GoodRelations ontology, including patterns for ownership and demand; (2) present the full tool chain for producing and consuming GoodRelations-related data; (3) explain the long-term vision of linked open commerce; (4) describe the main challenges for future research in the field; and (5) discuss advanced topics such as access control, identity and authentication (e.g., with WebID), micropayment services (e.g., PaySwarm), and data management issues from the publisher and consumer perspectives.
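
To make this concrete, here is a minimal sketch, using Python and the rdflib library, of how a product offer could be described with GoodRelations terms; the shop URIs and product details are hypothetical, and in practice such data would typically be embedded in web pages as RDFa or Microdata rather than serialized separately.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, XSD

    GR = Namespace("http://purl.org/goodrelations/v1#")

    g = Graph()
    g.bind("gr", GR)

    # Hypothetical identifiers for an offer and its price specification
    offer = URIRef("http://shop.example.com/offers/1")
    price = URIRef("http://shop.example.com/offers/1#price")

    # gr:Offering is the central class for something offered for sale
    g.add((offer, RDF.type, GR.Offering))
    g.add((offer, GR.name, Literal("Espresso machine", lang="en")))

    # Attach a unit price specification: 199.00 EUR
    g.add((offer, GR.hasPriceSpecification, price))
    g.add((price, RDF.type, GR.UnitPriceSpecification))
    g.add((price, GR.hasCurrency, Literal("EUR")))
    g.add((price, GR.hasCurrencyValue, Literal("199.00", datatype=XSD.float)))

    print(g.serialize(format="turtle"))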

Bio:

Martin Hepp  URL: http://www.heppnetz.de/ 

Martin Hepp is a professor of General Management and E-business at Bundeswehr University Munich in Germany and a professor of Computer Science at the University of Innsbruck, Austria, where he leads the research group “Semantics in Business Information Systems”. Martin holds a Master’s degree in Business Management and Business Information Systems and a Ph.D. in Business Information Systems from the University of Würzburg, Germany. He has organized more than fifteen workshops and conference tracks on conceptual modeling, Semantic Web topics, and information systems, and has served on more than sixty conference and workshop program committees, including ASWC, ESWC, IEEE CEC/EEE, and ECIS. Martin has taught more than 30 courses at the graduate and undergraduate level at universities in Germany, Austria, and the USA. He is also a frequent speaker at academic and business conferences.

Additional tutorial presenters: Alex Stolz (http://www.unibw.de/ebusiness/team/alex-stolz/) and Laszlo Török (http://www.unibw.de/ebusiness/team/laszlo-toeroek/)

Tutorial 2: Engineering Evaluation Approaches to fit different Web Project/Organization Needs (half-day)

Measurement, evaluation, analysis, and recommendation are support processes for the primary web engineering processes; they also help address information needs at different project and organizational levels. In addition, quality is one of the four main dependent variables in managing web projects. For each engineered project, and independently of the development/maintenance lifecycle adopted, quality levels for its entities and attributes should be agreed upon, specified, measured, and evaluated so that they can be analyzed and improved. To ensure repeatability and consistency of results for better analysis and decision making, a well-defined yet customizable evaluation approach is necessary. In this tutorial, we discuss a general measurement and evaluation (M&E) approach that rests on two main pillars, namely: i) a quality modeling framework; and ii) M&E strategies, which in turn are grounded on three principles, viz. an M&E conceptual framework, a well-established M&E process, and evaluation methods and tools. This general M&E approach can be adapted, in a flexible yet structured manner, to fit different organizational information needs and levels, for different quality focuses, and for entity categories such as resource, product, system, and system in use.

The tutorial draws on both theoretical and practical background. From the practical point of view, we have so far developed two M&E strategies, namely GOCAME (Goal-Oriented Context-Aware Measurement and Evaluation) and SIQinU (Strategy for understanding and Improving Quality in Use), the latter of which was used in an industrial testing case. These strategies can be instantiated with respect to the quality modeling framework and specific information needs. For illustration purposes, the tutorial uses concrete examples of entities, quality models, relationships, and strategies, together with their processes and methods.
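
As a toy illustration of one common evaluation method, the sketch below (in Python) aggregates normalized elementary indicators into a global quality indicator by weighted linear scoring; the attribute names and weights are hypothetical, and a real M&E strategy would derive them from an agreed quality model and calibrated elementary evaluation functions.

    # Toy sketch: weighted linear aggregation of elementary indicators.

    indicators = {  # elementary indicators, normalized to [0, 1]
        "broken_links": 0.95,
        "page_load_time": 0.70,
        "form_error_feedback": 0.60,
    }
    weights = {  # relative importance of each attribute, summing to 1.0
        "broken_links": 0.40,
        "page_load_time": 0.35,
        "form_error_feedback": 0.25,
    }

    global_indicator = sum(indicators[a] * weights[a] for a in indicators)
    print(f"Global quality indicator: {global_indicator:.2f}")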

Bio:

Luis Olsina URL: http://gidis.ing.unlpam.edu.ar/home/ingles/home/personas/olsina/olsina.htm

Luis Olsina is a Full Professor in the Engineering School at the National University of La Pampa, Argentina, where he heads the Software and Web Engineering R&D group (GIDIS_Web). His research interests include Web engineering, particularly Web quality strategies, quality improvement, measurement and evaluation processes, evaluation methods and tools, and domain ontologies. He earned a PhD in the area of Software/Web Engineering and an MSE from the National University of La Plata, Argentina. Over the last 16 years, he has published over 90 refereed papers and participated in numerous regional and international events as program committee chair and member. In particular, he co-chaired the Web Engineering Workshop held in the USA in the framework of ICSE 2002 (Int’l Conference on Software Engineering), the ICWE 2002 congress (held in Argentina) and ICWE 2003 (held in Spain), the 2005 and 2008 editions of LA-Web, and the WE track at WWW’06 (held in Edinburgh, UK). He has been an invited speaker at several conferences and professional meetings and has presented tutorials, for instance at ICWE’05 (Int’l Conference on Web Engineering, held in Australia), ICWE’11 (held in Cyprus), and CEE SEC’10 (held in Moscow), as well as graduate courses in different countries. Luis and his colleagues also co-edited the book Web Engineering: Modeling and Implementing Web Applications (Springer, HCIS Series, 2008).

Tutorial 3: Epidemic Intelligence for the Crowd, by the Crowd (half-day)

Event-based Epidemic Intelligence (e-EI) encompasses activities related to early warnings and their assessment as part of the outbreak investigation task. Recently, modern disease surveillance systems have started to also monitor social media streams, with the objective of improving their timeliness in detecting disease outbreaks and producing warnings against potential public health threats. In this tutorial we show how social media analysis can be exploited for two important stages of e-EI, namely: (i) Early Outbreak Detection, and (ii) Outbreak Analysis and Control. We discuss techniques and methods for detecting health-related events from unstructured text, and outline both the approaches used in and the challenges faced by social media-based surveillance. In particular, we will show how Twitter can help us to find early cases of an outbreak and to understand the potential causes of contamination and spread from the perspective of field practitioners.
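
To give a flavor of the simplest possible detection step, the sketch below (in Python) filters short messages against a hand-made symptom lexicon; the lexicon and messages are hypothetical, and the systems discussed in the tutorial rely on much richer statistical and linguistic models.

    # Hypothetical symptom lexicon; real surveillance systems use
    # statistical models rather than raw keyword matching.
    SYMPTOMS = {"fever", "diarrhea", "vomiting", "nausea"}

    tweets = [  # made-up example messages
        "Half my office is out with fever today",
        "Great concert last night!",
        "Terrible vomiting and nausea since lunch at the festival",
    ]

    def mentions_symptom(text):
        words = {w.strip(".,!?").lower() for w in text.split()}
        return bool(words & SYMPTOMS)

    for t in tweets:
        if mentions_symptom(t):
            print("Possible health-related report:", t)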

Bio:

Ernesto Diaz-Aviles URL: http://l3s.de/web/diazaviles

Ernesto Diaz-Aviles is a research scientist at the L3S Research Center and a PhD candidate at the Leibniz University of Hannover, Germany. He holds an M.Sc. in Computer Science from the University of Freiburg, Germany, and a B.Sc. in Electrical Engineering from the Central American University (UCA), El Salvador. He has also served as an IT consultant, providing advice to a range of public and private sector clients. His current research interests are in supervised machine learning, social computing, and applications to problems in recommender systems and social media.

Avaré Stewart URL: http://l3s.de/web/stewart

Avaré Stewart is a PhD candidate at the L3S Research Center / Leibniz University of Hannover, Germany. Before joining the L3S Research Center, she worked at Fraunhofer IPSI, Darmstadt, an applied research institute, where she led the personalization and semantic services work package in the EU-funded project VIKEF. Her main research interests are in text mining for open-source intelligence.

Tutorial 4: An Introduction to SPARQL and Queries over Linked Data

Nowadays, more and more datasets are published on the Web adhering to the Linked Data principles. The availability of this data, including the existence of data-level connections between datasets, presents exciting opportunities for the next generation of Web-based applications. As a consequence, consuming Linked Data is a highly relevant topic in the context of Web engineering. Our introductory tutorial aims to provide participants with an understanding of one of the basic aspects of Linked Data consumption, that is, querying Linked Data.

The tutorial consists of three main parts: First, we briefly introduce the concept of Linked Data and its underlying data model, the Resource Description Framework (RDF). The second and largest part provides a comprehensive introduction to SPARQL, the de facto query language for RDF. Participants will learn how to express basic queries with SPARQL and how to use the more complex features of the language. Finally, in the third part of the tutorial, we discuss several approaches for executing SPARQL queries over multiple, interlinked datasets. The tutorial is intended as a beginners' introduction. The prerequisites for participation are a broad technical understanding of querying databases and a basic conceptual understanding of the architecture of the World Wide Web.
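
As a taste of the second part, the sketch below issues a basic SPARQL SELECT query against DBpedia's public endpoint using Python and the SPARQLWrapper library; the endpoint is the well-known DBpedia one, while the particular query is only an illustrative example.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Public DBpedia SPARQL endpoint
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?name WHERE {
            ?person dbo:birthPlace <http://dbpedia.org/resource/Berlin> ;
                    rdfs:label ?name .
            FILTER (lang(?name) = "en")
        }
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    # Execute the query and print the bound names
    results = sparql.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["name"]["value"])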

Bio:

Olaf Hartig URL: http://olafhartig.de

Olaf is a research assistant with the Database and Information Systems Group at Humboldt-Universität zu Berlin. His research focuses on querying Linked Data on the Web; his aim is to develop concepts that allow users to query the Web of Linked Data as if it were a giant global database. As project maintainer and lead developer, he implements these concepts in the free software project SQUIN. In addition to his hands-on experience in the field of Linked Data queries, he has recently also published important results on the theoretical properties of such queries. Olaf has presented several Linked Data-related tutorials at major international conferences such as ISWC 2008, ISWC 2009, and WWW 2010.

Tutorial 5: Natural Language Processing for the Web (half-day)

Slides of the tutorial: http://goo.gl/qDrex

The web offers huge amounts of unstructured textual data that are not readily processable using computational resources. Indeed, the ambiguity of natural language is the main obstacle to its understanding by computers. Yet dialogue with artificial intelligences has been a human goal since the Turing test, and the first conversational machines appeared in the 1960s with ELIZA, the Rogerian psychotherapist.

Fifty years later, we have designed machines able to win TV game shows such as Jeopardy! by giving more correct answers to complicated questions than the all-time top participants; we are able to search the web via spoken interfaces on our phones thanks to billion-word phonetic models; and we can make sense of user-contributed data thanks to tagging and what is called the “social web”. How did we get this far?

The Natural Language Processing for the Web tutorial will focus on challenging and interesting aspects of natural language Web applications. The audience will be introduced to NLP as a discipline and gain basic knowledge of its different methods, particularly statistical ones, and of its evaluation metrics. State-of-the-art applications of natural language research will then be discussed in detail, including information extraction from the social web and web crawling, with particular focus on question answering systems and natural language querying of data services.
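
For a minimal sense of what such methods look like in code, the sketch below tokenizes and part-of-speech tags a sentence with the NLTK toolkit in Python (one widely used option; the tutorial itself is not tied to any particular library).

    import nltk

    # One-time model downloads (resource names as in current NLTK releases)
    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    sentence = "Natural language is ambiguous, which makes it hard for computers."
    tokens = nltk.word_tokenize(sentence)  # split text into word tokens
    tagged = nltk.pos_tag(tokens)          # assign part-of-speech tags
    print(tagged)  # e.g. [('Natural', 'JJ'), ('language', 'NN'), ...]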

This tutorial is composed of two parts:
I) Introduction to NLP: a) motivations & definitions, b) levels of natural language understanding, c) methods, d) evaluation;
II) Applications of NLP: a) overview of natural language applications, b) question answering systems, c) querying data services.

Bio:

Silvia Quarteroni URL: http://home.dei.polimi.it/quarteroni/

Silvia Quarteroni holds a PhD in Computer Science from the University of York, UK. After a Senior Marie Curie fellowship at the University of Trento, she is now a research associate at Politecnico di Milano, Italy. Her main interests lie in the fields of question answering, dialogue interfaces, and machine learning. She has authored about 50 publications in international journals and conferences and has actively contributed to a number of program committees, editorial boards, and workshop organizations in various research fields.