You can find a preliminary workshop schedule in Conftool.
Heyworth, Gregory (1); Van Duzer, Chet Adam (1); Boydston, Ken (1,3); Phelps, Michael (2); Easton, Roger (1)
1: The Lazarus Project, University of Mississippi, United States of America; 2: Early Manuscripts Electronic Library; 3: MegaVision
A Demonstration of Multispectral Imaging
Multispectral imaging is a powerful tool to recover text from manuscripts affected by fading, palimpsesting, water, fire, or overpainting. The Lazarus Project is offering a workshop in which its portable multispectral imaging system will be demonstrated. These demonstrations will take place in the Jagiellonian Library, and manuscripts from the library will be imaged. Participants in the workshop will see how multispectral imaging works, from the imaging of the object to the processing of the images and results. Each session will include a description of the equipment and process, a few cycles of imaging of leaves of a manuscript, and a look at what goes into processing the images—with time for questions. There will be four 90-minute workshops, with a maximum of 17 participants permitted in each session due to space constraints in the room in the Jagiellonian Library. We look forward to seeing you at the demonstration.
Brando, Carmen (1); Frontini, Francesca (2)
1: Institut National de l’information géographique et forestière (IGN), France; 2: Istituto di Linguistica Computazionale “A.Zampolli”, Italy
A place for Places: Current Trends and Challenges in the Development and Use of Geo-Historical Gazetteers
The proposed full-day workshop aims to investigate the latest developments in geo-historical gazetteers and their impact on natural language processing and digital humanities studies. In particular, the workshop will deal with crucial problems concerning geo-spatial models of representation for ancient places and the management of temporal information for geographic features in general. Current projects on the publication of geo-historical data as Linked Open Data, as well as their exploitation for annotating and enriching texts, will also be discussed, alongside more theoretical issues concerning vocabularies and ontologies. A group of well-known invited scholars will give 15-minute presentations discussing different approaches to the topic, ranging from engineering, data models, standards and publication to corpus annotation models and visualization. Finally, an interactive discussion with workshop attendees and a summing-up panel will give the community an occasion to gather and harmonize efforts.
Kleppe, Martijn (1); Scagliola, Stef (2); Henderson, Clara (3); Oomen, Johan (4)
1: Vrije Universiteit Amsterdam, The Netherlands; 2: Erasmus University Rotterdam; 3: Indiana University, Bloomington; 4: Netherlands Institute for Sound and Vision
Audiovisual Data And Digital Scholarship: Towards Multimodal Literacy
In many online platforms and websites, audiovisual data is gradually playing an equal or greater role than text. Similarly, in multiple disciplines such as anthropology, ethnomusicology, folklore, media studies, film studies, history, and English, scholars are relying more and more on audiovisual data for richer analysis of their research and for accessing information not available through textual analysis. Developing aural and visual literacy has therefore become increasingly essential for 21st century digital scholarship. While audiovisual data allows for research to be disseminated and displayed linearly, within one modality, it also allows for non-linear discovery and analysis, within multiple modalities. This workshop will address both the challenges of analyzing audiovisual data in digital humanities scholarship, as well as the challenges of educating contemporary digital humanists on how to access, analyze, and disseminate an entire century of information generated with audiovisual media.
Ridge, Mia (1); Ferriter, Meghan (4); Henshaw, Christy (2); Brumfield, Ben (3)
1: British Library, United Kingdom; 2: Wellcome Library, United Kingdom; 3: Independent, USA; 4: Smithsonian Transcription Center, USA
Beyond The Basics: What Next For Crowdsourcing?
Applications are now open for an expert workshop to be held in Kraków, Poland, on 12 July 2016, 9:30am – 4:00pm, as part of the Digital Humanities 2016 conference (http://dh2016.adho.org/). We anticipate 30 participants. We welcome applications from all, but please note that we will aim to balance expertise, disciplinary backgrounds, experience with different types of projects, and institutional and project affiliations when finalising our list of participants. This workshop is not suitable for beginners. Participants should have some practical knowledge of running crowdsourcing projects or expertise in human computation, machine learning or other relevant topics. You can apply to attend at https://docs.google.com/forms/d/1l05Rba3EqMyy-X4UVmU9z7hQ-jlK2x2kLGvNtJfgtgQ/viewform. On notification of acceptance, participants should register via the DH2016 website.
Crowdsourcing – asking the public to help with inherently rewarding tasks that contribute to a shared, significant goal or research interest related to cultural heritage collections or knowledge – is reasonably well established in the humanities and cultural heritage sector. The success of projects such as Transcribe Bentham, Old Weather and the Smithsonian Transcription Center in processing content and engaging participants, and the subsequent development of crowdsourcing platforms that make launching a project easier, have increased interest in this area. While emerging best practices have been documented in a growing body of scholarship, including a recent report from the Crowd Consortium for Libraries and Archives symposium, this workshop looks to the next 5 – 10 years of crowdsourcing in the humanities, the sciences and in cultural heritage. The workshop will gather international experts and senior project staff to document the lessons to be learnt from projects to date and to discuss issues we expect to be important in the future.
Discussion topics will include organisational and project management issues.
The timetable will include a brief round of introductions, a shared agenda-setting exercise, four or so discussion sessions, and a final session for closing remarks and to agree next steps.
The discussion and emergent guidelines documented during the workshop would help future projects benefit from the collective experiences of participants. Outcomes from the workshop might include a whitepaper and/or the further development of or support for a peer network for humanities crowdsourcing.
The workshop is organised by Mia Ridge (British Library), Meghan Ferriter (Smithsonian Transcription Centre), Christy Henshaw (Wellcome Library) and Ben Brumfield (FromThePage). For more information, please contact Ben.
Kretzschmar, William (1); Burkette, Allison (2); Hettel, Jacqueline (3)
1: Department of English, University of Georgia, United States of America; 2: Department of Modern Languages, University of Mississippi, United States of America; 3: Nexus Lab, Arizona State University, United States of America
Big Data: Complex Systems and Text Analysis
A complex system (CS) is a system in which large networks of components with no central control and simple rules of operation give rise to complex collective behavior, sophisticated information processing, and adaptation via learning or evolution. The order that emerges in human language is simply the configuration of components, whether particular words, pronunciations, or constructions, that comes to occur in our communities and occasions for speech and writing. Nonlinear frequency profiles (A-curves) always emerge for linguistic features at every level of scale. In this workshop we wish to introduce some basic ideas about complex systems, including A-curves and scaling; talk about corpus creation with either a whole population or with random sampling; and talk about quantitative methods: why “normal” statistics don’t work, and how to use the assumption of A-curves for document identification and for comparing language in whole-to-whole or part-to-whole situations, such as authors or text types.
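For readers new to A-curves, a minimal sketch of the idea (the toy text and function name are ours, not workshop materials): ranking word frequencies from most to least frequent yields the nonlinear asymptotic profile the organizers describe.

```python
from collections import Counter

def ranked_profile(tokens):
    """Return word frequencies ordered from most to least frequent.

    Plotted against rank, linguistic features typically show the
    'A-curve' shape: a few very frequent variants and a long tail
    of rare ones.
    """
    counts = Counter(tokens)
    return [freq for _, freq in counts.most_common()]

text = "the cat sat on the mat and the dog sat by the door".split()
profile = ranked_profile(text)
print(profile)  # frequencies fall away steeply from the top rank
```

Even this tiny sample already produces the characteristic steep drop followed by a long flat tail.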
Fokkens, Antske (1); Wandl-Vogt, Eveline (2); Declerck, Thierry (3); ter Braake, Serge (4); Hyvönen, Eero (5); Bosse, Arno (6)
1: VU University, The Netherlands; 2: Austrian Academy of Sciences, Austria; 3: German Research Center for Artificial Intelligence, Germany; 4: University of Amsterdam, The Netherlands; 5: Aalto University, Finland; 6: Oxford University, United Kingdom
Biographical Data Workshop: modeling, sharing and analyzing people’s lives
This workshop brings together researchers from various domains working on biographical data. In addition to sharing the latest progress, it has the specific aim of initiating efforts to share data and data models, and knowledge about them. The workshop directly contributes to the efforts of the DARIAH workgroup on biographical data and aims to involve new researchers in this collaboration. The workshop consists of two main components: dedicated sessions about data and data models, followed by a discussion on this topic, and a poster session where researchers can share their latest work on biographical data and computational analysis. Descriptions of data models and data samples will be distributed to participants in advance and studied by a panel. The panel will present its findings and support the discussion on sharing data. Direct involvement in several projects and strong relations with other international partners guarantee an interesting set of data and data models.
Pike, Alan Gilchrist (1); Childress, Dawn (2); Antonijević, Smiljana (3); McGrath, Jim (4); Gil, Alex (5); Collins, Brennan (6)
1: Emory University, United States of America; 2: University of California at Los Angeles (UCLA), United States of America; 3: Pennsylvania State University, United States of America; 4: Brown University, United States of America; 5: Columbia University, United States of America; 6: Georgia State University, United States of America
Building Capacity with Care: Graduate Students and DH work in the Library
Graduate students are valuable members of digital humanities teams in a variety of institutional and library settings around the world. This full-day workshop will bring together individuals, groups, and institutions that employ these students to discuss the variety of institutional arrangements for getting graduate students involved in digital scholarship labor. What are some current best practices for structuring graduate student employment? What models are in place for this work, and what are the benefits and challenges of fellowships vs. part time employment or RAships? We want this workshop to present helpful practical advice on this topic, but also serve as a starting point for a broader discussion about the place of student labor in DH work, specifically how libraries and DH organizations can make an ethic of care the foundation upon which their graduate student labor arrangements are built as they look to expand capacity within their institutions and beyond.
Sinclair, Stéfan (1); Brown, Susan (2); Rockwell, Geoffrey (2)
1: McGill University, Canada; 2: University of Alberta, Canada
CWRC & Voyant Tools: Text Repository Meets Text Analysis
This workshop combines two large, robust projects: the Canadian Writing Research Collaboratory (CWRC) and Voyant Tools. The workshop will provide core training with each of these platforms respectively and also demonstrate how to build bridges between the text repository environment of CWRC and text analysis tools of Voyant.
Johnsen, Lars Gunnarsønn; Birkenes, Magnus Breder; Lindstad, Arne Martinus
National Library of Norway, Norway
Data mining digital libraries
The central theme of this workshop is data mining and the connection between metadata and data in the context of digital libraries. Digital resources and search engines raise several questions about the relationship between metadata and the data they describe. For example, what is the relation between metadata keywords and classification categories? How do we label the topics found by topic modeling algorithms? With readily available search engine technology that ranks document relevance by content words, is there any need at all for library classification systems like Dewey or UDC?
We invite papers on topics such as:
The structure of subject headings and descriptors used in book classification (e.g. in building thesauri)
The relationship between topic words and library classification systems
The relationship between content words and topic words (of existing metadata, or as output from topic modeling algorithms)
Automatic classification of digital documents
Development of computational services for research and the general public
Legal issues arising with different data mining practices
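As a toy illustration of the topic-labeling question raised above, a common (if crude) answer is to label each topic by its highest-weighted words. The weights and the `label_topic` helper below are hypothetical, not part of any workshop materials.

```python
def label_topic(word_weights, n=3):
    """Label a topic by its n highest-weighted words."""
    top = sorted(word_weights.items(), key=lambda kv: kv[1], reverse=True)
    return ", ".join(word for word, _ in top[:n])

# Hypothetical topic-word weights, e.g. from a topic-model run
# over a library catalogue
topic = {"ship": 0.12, "sea": 0.10, "voyage": 0.07,
         "harbour": 0.02, "the": 0.01}
print(label_topic(topic))  # -> ship, sea, voyage
```

Mapping such automatic labels onto Dewey or UDC classes is exactly the kind of open problem the workshop invites papers on.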
Please send an abstract of max. 500 words situated within the above context to Lars.Johnsen(at)nb.no.
Szabo, Victoria; Jacobs, Hannah; Triplett, Edward
Duke University, United States of America
Digital Archiving & Storytelling in the Classroom with Omeka & CurateScape
This tutorial is an intensive introduction to archive development and storytelling within the Omeka content management and exhibition system (http://omeka.org/). We will also demonstrate the use of the CurateScape plugin (http://curatescape.org), which allows users to create location-based itineraries, optimized for mobile devices, drawn from Omeka items. Over the course of the tutorial we will introduce participants to the principles of digital archive collection development using exercises developed for the Duke University Wired! Lab for Digital Art History & Visual Culture tutorials. Content types may include digital images, audio files, 3D models, video, text facsimiles, and other source materials. These items may be annotated with descriptive elements, locations, and other metadata relevant to search and presentation. Items in Omeka may also be organized into location-based tours using the CurateScape framework, a set of freely available themes and plug-ins.
Herrmann, J. Berenike (1); Frontini, Francesca (2); Gemma, Marissa (3)
1: Göttingen University, Germany; 2: Istituto di Linguistica Computazionale “A. Zampolli”, CNR, Pisa, Italy; 3: Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
Digital Literary Stylistics Workshop
We propose a workshop on Digital Literary Stylistics in its multiple dimensions, from computational stylistics and authorship attribution to corpus linguistics and digital hermeneutics, encompassing technical aspects, empirical studies, and methodological and epistemological issues. The full-day workshop will feature invited talks by a range of active and distinguished researchers in the field. It will conclude with a panel intended as a discussion ground for the proposal of an ADHO SIG on related topics. More details at http://digitalliterarystylistic.github.io/
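For a flavour of the computational stylistics and authorship attribution mentioned above, here is a minimal sketch of Burrows' Delta, a classic attribution measure. The toy corpora and function names are ours, not workshop materials: real studies use hundreds of marker words and much longer texts.

```python
from collections import Counter
import statistics

def relative_freqs(tokens, vocab):
    """Relative frequencies of the chosen marker words in one text."""
    n = len(tokens)
    counts = Counter(tokens)
    return [counts[w] / n for w in vocab]

def burrows_delta(test_doc, candidates, vocab):
    """Minimal Burrows' Delta: z-score marker-word frequencies across
    the candidate authors, then rank candidates by mean absolute
    z-score distance from the disputed text."""
    profiles = {name: relative_freqs(toks, vocab)
                for name, toks in candidates.items()}
    means = [statistics.mean(p[i] for p in profiles.values())
             for i in range(len(vocab))]
    sds = [statistics.pstdev([p[i] for p in profiles.values()]) or 1.0
           for i in range(len(vocab))]
    def z(p):
        return [(p[i] - means[i]) / sds[i] for i in range(len(vocab))]
    zt = z(relative_freqs(test_doc, vocab))
    deltas = {name: statistics.mean(abs(a - b) for a, b in zip(zt, z(p)))
              for name, p in profiles.items()}
    return min(deltas, key=deltas.get)

author_a = "the the the of and".split()
author_b = "of of of the and".split()
disputed = "the the of and and".split()
print(burrows_delta(disputed, {"A": author_a, "B": author_b},
                    ["the", "of", "and"]))  # attributes to A
```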
Meier, Wolfgang (2); Turska, Magdalena (2); Czmiel, Alexander (1)
1: Berlin-Brandenburg Academy of Sciences and Humanities; 2: eXist Solutions
eXistdb: More Than a Database. A Basic Introduction to eXistdb and XQuery
eXistdb can be used to store any type of data, including XML, texts, HTML, images, etc. The stored data can be retrieved in various ways through numerous interfaces; in the tutorial we will mainly use the REST interface. The main programming language for developing with eXistdb is XQuery, a query language used to query, extract, and manipulate XML documents. The current version of the W3C standard is 3.1. Although it is not expressed in XML syntax, XQuery is an important member of the X-technologies family.
This workshop aims to introduce the participants to basic features of eXist-db:
* schema-less database organization
* uploading data and installing application packages
* eXide editor
* querying data with XPath and XQuery
Nugues, Pierre (1); Borin, Lars (4); Fargier, Nathalie (2); Johansson, Richard (4); Reiter, Nils (5); Tonelli, Sara (3)
1: Lund University; 2: Persée (Université de Lyon, ENS de Lyon, CNRS); 3: Fondazione Bruno Kessler; 4: University of Gothenburg; 5: Universität Stuttgart
From Digitization to Knowledge: Resources and Methods for Semantic Processing of Digital Works/Texts
The goal of this workshop is twofold. The first is to provide a venue for researchers to describe and discuss practical methods and tools used in the construction of semantically annotated text collections. We expect such tools to include lexical and semantic resources, with a focus on the interlinking of concepts and entities and their integration into corpora.
The second goal is to report on the ongoing development of new tools for providing access to the rich information contained in large text collections. Semantic tools and resources are reaching a quality that makes them fit for building practical applications; they include ontologies, framenets, syntactic and semantic parsers, entity linkers, etc. We are interested in examples of cases that make use of such tools, and in their evaluation in the field of digital humanities, with a specific interest in multilingual and cross-lingual aspects of the semantic processing of text.
Bürgermeister, Martina (1); Fellegi, Zsófia (2); Palkó, Gábor (2); Schneider, Gerlinde (1); Scholger, Martina (1); Steiner, Elisabeth (1); Vasold, Gunter (1)
1: Centre for Information Modelling – Austrian Centre for Digital Humanities, University of Graz, Austria; 2: Petőfi Literary Museum, Hungary
GAMS and Cirilo: research data preservation and presentation
Modern infrastructures for the management and dissemination of humanities data face various challenges. On the one hand, sustainability and availability for long-term preservation have to be guaranteed. On the other hand, flexibility and the possibility of individual data usage play a major role in this field. The FEDORA-based asset management system GAMS (Geisteswissenschaftliches Asset Management System – Humanities’ Asset Management System) and its corresponding client Cirilo, developed by the Centre for Information Modelling – Austrian Centre for Digital Humanities (ZIM-ACDH), address both demands by combining long-term preservation with a presentation and management layer. In the workshop, presentations and hands-on sessions will alternate. First, the repository with its concrete workflows and functionalities will be introduced and set in the context of theoretical considerations regarding long-term preservation and accessibility.
Düring, Marten (1); Jatowt, Adam (2); van den Bosch, Antal (3); Preiser-Kapeller, Johannes (4)
1: CVCE, Luxembourg; 2: Kyoto University, Japan; 3: Radboud University Nijmegen, The Netherlands; 4: Austrian Academy of Sciences
The HistoInformatics workshop series brings together researchers in the historical disciplines, computer science, and associated disciplines, as well as the cultural heritage sector. Historians, like other humanists, show keen interest in computational approaches to the study and processing of digitized sources (usually text, images, or audio). In computer science, experimental tools and methods face the challenge of being validated for their relevance to real-world questions and applications. The HistoInformatics workshop series is designed to bring researchers in both fields together to discuss best practices as well as possible future collaborations.
Franzini, Greta; Franzini, Emily; Büchler, Marco; Moritz, Maria
Georg-August-Universität Göttingen, Germany
Historical Text Reuse Tutorial
The tutorial addresses participants who are interested in finding text reuse between two or more texts (in the same language). It teaches them how to explore, use and semi-automatically run the TRACER tool, a suite of state-of-the-art Natural Language Processing (NLP) algorithms and functions aimed at discovering text reuse in multifarious corpora from multiple genres. It is designed to work with historical text, such as Ancient Greek, Latin, Classical Arabic or medieval German, and provides researchers with a powerful engine to identify and display different types of text reuse ranging from verbatim quotations to paraphrase. To do so, TRACER implements basic NLP measures and operations, such as shingling and winnowing, and supplies an inbuilt, step-wise pipeline which breaks down the challenging task of reuse detection into smaller sub-tasks. A human-readable and editable configuration file gives the user full control over the parameters during every step, thus accommodating specific needs.
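To illustrate the shingling step mentioned above, here is a generic n-gram/Jaccard sketch of reuse scoring. This is a textbook illustration of the operation, not TRACER's actual implementation, and the Latin snippets are just toy input.

```python
def shingles(tokens, n=3):
    """Overlapping word n-grams ('shingles'), one of the basic NLP
    operations reuse detection builds on."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    """Shingle-set overlap as a crude reuse score between passages."""
    return len(a & b) / len(a | b) if a | b else 0.0

quote = "arma virumque cano troiae qui primus ab oris".split()
reuse = "arma virumque cano et cetera".split()
score = jaccard(shingles(quote, 2), shingles(reuse, 2))
print(round(score, 3))  # nonzero overlap flags a candidate reuse pair
```

Winnowing then thins the shingle sets so that only representative fingerprints need to be compared at scale.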
Tilton, Lauren; Arnold, Taylor
Yale University, United States of America
Introduction to Natural Language Processing
This workshop will introduce the basic components of natural language processing. Techniques include tokenization, lemmatization, part of speech tagging, and coreference detection. These will be introduced by way of examples on small snippets of text before being applied to a larger collection of short stories. Applications to stylometric analysis, document clustering, and topic detection will be briefly mentioned along the way. Our focus will be on a high-level, conceptual understanding of these techniques and the potential benefits of using them over models commonly employed for text analysis within humanities research. The workshop will be accessible to those new to the method or versed in text analysis but unfamiliar with natural language processing.
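As a first taste of the techniques listed, a minimal tokenization sketch (illustrative only, not workshop material): real pipelines follow this step with lemmatization, part-of-speech tagging, and coreference detection.

```python
import re
from collections import Counter

def tokenize(text):
    """A first tokenization pass: lowercase and extract word forms,
    keeping simple contractions like "didn't" intact."""
    return re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())

sentence = "The dog didn't bark; the dogs barked."
tokens = tokenize(sentence)
print(tokens)
print(Counter(tokens).most_common(2))
```

Note that even here "dog" and "dogs" count as different types, which is exactly what lemmatization would later collapse.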
Rubio-Campillo, Xavier (1); Romanowska, Iza (2); Alcaina, Jonas (3)
1: Barcelona Supercomputing Centre, Spain; 2: Institute for Complex Systems Simulation and Centre for the Archaeology of Human Origins, University of Southampton, UK; 3: Institució Milà i Fontanals – CSIC, Spain
Introduction to Simulation: Complex social dynamics in a few lines of code
Simulation comprises a family of powerful and versatile quantitative techniques for investigating complex, non-linear processes such as social change, human-environmental interaction, or cultural transmission and the diffusion of ideas. Using modelling frameworks derived from mathematics requires explicit definition of concepts, promotes systemic thinking, and provides a tool for formal testing of conceptual models. Despite the common belief that simulation techniques require significant technical expertise, simple yet useful models can be constructed in a matter of hours even by absolute beginners. The aim of this workshop is to provide practical, hands-on training in one of the most popular simulation techniques for exploring social processes: system dynamics. The workshop is tailored to the needs of researchers with a humanities background and limited or no computational experience. It is particularly aimed at historians, archaeologists, and other researchers dealing with past and present social dynamics in their research.
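The "few lines of code" claim can be made concrete with a sketch: Euler integration of a one-stock system-dynamics model, here logistic growth toward a carrying capacity. All parameter values are illustrative.

```python
def simulate_logistic(pop, capacity, rate, steps, dt=0.1):
    """Euler integration of a one-stock system-dynamics model:
    a population stock with a net growth flow limited by a
    carrying capacity. Produces the classic S-shaped trajectory."""
    history = [pop]
    for _ in range(steps):
        inflow = rate * pop * (1 - pop / capacity)  # net growth flow
        pop += inflow * dt
        history.append(pop)
    return history

trajectory = simulate_logistic(pop=10, capacity=1000, rate=0.5, steps=200)
print(round(trajectory[-1]))  # approaches the carrying capacity
```

Swapping the single flow equation for others (adoption of an idea, resource depletion, migration) is how such toy models are extended to social questions.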
Sayers, Jentery (1); Gil, Alex (2); Martin, Kim (3); Rosenblum, Brian (4); Chan, Tiffany (1)
1: University of Victoria, Canada; 2: Columbia University, USA; 3: University of Guelph, Canada; 4: University of Kansas, USA
Minimal Computing: A Workshop
We use “minimal computing” to refer to computing done under some set of significant constraints, including constraints of hardware, software, education, network capacity, infrastructure, and power. Minimal computing is also used to capture the maintenance, refurbishing, and use of machines to do digital humanities work out of necessity, along with the choice to use new, streamlined computing hardware, such as Raspberry Pi or Arduino. While minimal computing has gained traction in various fields, it remains largely unexplored by digital humanities practitioners. Additionally, conversations about minimal computing become opportunities to discuss and share the material conditions in which digital humanities research is being conducted. We are proposing a single, full-day workshop at Digital Humanities 2016. This workshop will explore the practice and influence of minimal computing from both a practical and a theoretical perspective.
Organisciak, Peter; Downie, J. Stephen
University of Illinois at Urbana-Champaign, United States of America
Mining Texts with the Extracted Features Dataset
The Extracted Features (EF) dataset from the HathiTrust Research Center (HTRC) provides page-level features for 4.8 million volumes, with information such as part-of-speech tagged word frequencies. The HTRC Feature Reader is a software library that simplifies the use of this dataset, providing an access point for scholars to perform text analysis at culture- and era-spanning scales. This tutorial offers an introduction to text analysis with the EF dataset through the HTRC Feature Reader.
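To give a sense of what working with page-level features involves, the sketch below aggregates part-of-speech tagged counts. The records mimic the spirit of EF data only; the field layout is our simplification, not the actual EF schema or the Feature Reader API.

```python
from collections import Counter

# Illustrative page-level records: (page, token, POS tag, count).
# A simplified stand-in for the kind of data EF provides per page.
pages = [
    (1, "whale", "NN", 3), (1, "the", "DT", 21),
    (2, "whale", "NN", 5), (2, "sea", "NN", 2),
]

def volume_counts(records, pos_prefix="NN"):
    """Collapse page-level counts into volume-level noun frequencies."""
    totals = Counter()
    for _, token, pos, count in records:
        if pos.startswith(pos_prefix):
            totals[token] += count
    return totals

print(volume_counts(pages).most_common())
```

Because EF ships counts rather than full text, this style of aggregation is possible even for in-copyright volumes.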
Webb, Sharon; Kiefer, Chris; Jackson, Ben; Eldridge, Alice; Baker, James
Sussex Humanities Lab, University of Sussex
Music Information Retrieval Algorithms for Oral History Collections
Digital humanities, as a largely text-based domain, often treats audio files as texts, retrieving semantic information in order to categorise, sort, and discover audio. This workshop will enable participants to treat audio as audio. Taking oral history collections from the University of Sussex Resistance Archive as a test case, participants will be led through the use of Music Information Retrieval (MIR) approaches to categorise, sort, and support their discovery of an audio collection. Participants will also be supported in planning the extension of these approaches to audio collections that they know or work with.
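As a minimal example of treating audio as audio rather than text, the sketch below computes framewise RMS energy, a basic MIR feature that can, for instance, separate speech from silence in an oral-history recording. This is a generic illustration, not the workshop's actual toolchain; the synthetic signal stands in for real samples.

```python
import math

def frame_rms(samples, frame_size=256):
    """Root-mean-square energy per fixed-size frame of audio samples."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

# Synthetic signal: a silent half followed by a 220 Hz sine burst
signal = [0.0] * 512 + [math.sin(2 * math.pi * 220 * t / 8000)
                        for t in range(512)]
energy = frame_rms(signal)
print([round(e, 2) for e in energy])  # low frames, then high frames
```

Features like this, stacked with spectral measures, are what MIR systems use to categorise and sort audio collections.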
Constantopoulos, Panos (1,2); Dallas, Costis (2,3); Hughes, Lorna (4); Ross, Seamus (1,3,4)
1: Athens University of Economics and Business, Greece; 2: Digital Curation Unit / IMIS – Athena Research Centre; 3: University of Toronto; 4: University of Glasgow
Ontology-Based Recording and Discovery of Research Patterns in the Humanities
This workshop aims to engage participants in the process of developing and analyzing ontology-based, structured documentations of scholarly research practices, predominantly those in the (digital) humanities. The workshop will employ NeMO, the ontology for digital research methods in the arts and humanities developed in the context of the NeDiMAH project. Participants will be invited to contribute one or more examples of their own research work, from which they will produce structured descriptions according to NeMO, to explore the formulation of complex associative queries that aim at discovering patterns of work or resource usage and to share feedback regarding their experience from using NeMO.
Please visit call for participation at
Almas, Bridget May (1); Fortun, Kim (2); Harrower, Natalie (3); Wandl-Vogt, Eveline (4)
1: Tufts University, United States of America; 2: Rensselear Polytechnic Institute, United States of America; 3: Digital Repository of Ireland, Ireland; 4: Austrian Academy of Sciences, Austria
RDA/ADHO Workshop: Evaluating Research Data Infrastructure Components and Engaging in their Development
The purpose of this workshop is to conduct a meaningful examination of the data fabric and infrastructure components being defined by the Research Data Alliance, to test their relevance and applicability to the needs of the digital humanities community, and to discuss opportunities for humanities engagement in further solutions development. This will be a full-day workshop in the format of a hands-on round-table and open discussion. We will work with real humanities data use cases, provided by participants, to understand how best to shape RDA outputs to enable better data sharing and management in the humanities. The focus will be on proposed solutions for working with Persistent Identifiers and machine-actionable Data Types, but other relevant RDA outputs and in-progress efforts will be considered as well.
An Foras Feasa, Maynooth University, Ireland
Reflectance Transformation Imaging (RTI) for Cultural Heritage Artefacts
This tutorial will introduce participants to Reflectance Transformation Imaging (RTI), a method based on low-cost photography equipment, free and open-source software, and a portable light source. RTI enhances objects’ subtle surface details and creates new interactive relightable models, bringing out surface characteristics that cannot be captured with conventional photographic techniques. The method has been used on a broad range of cultural objects, including manuscripts, paintings, and inscriptions, and can be applied to virtually any material, from metal and stone to paper and clay. During this 3-hour tutorial, participants will learn and practice all the different stages involved in RTI: capturing, processing, viewing, and online sharing. They will primarily work with pre-captured datasets (processing and viewing stages), but they can also perform RTI data capturing using objects brought by the instructor. Participants can also bring their own objects.
Meier, Wolfgang; Turska, Magdalena
TEI Processing Model Toolbox: Power To The Editor
Crossing the divide between encoded XML sources and a published digital edition has always been a weak spot for the TEI community. The Toolbox, an eXist-db based application, bridges that gap with its implementation of the processing model, allowing editors to create standalone digital editions out of the box. Publishing an edition so far involved tedious work on complex stylesheets and significant effort in building an application on top of them. Using the Toolbox, customising the appearance of the text is all done in TEI ODD. This approach easily saves thousands of lines of code for media-specific stylesheets. eXistdb and its application framework, on the other hand, take care of all the other core features, such as browsing, search, and navigation. The proposed workshop intends to introduce the concepts of the TEI Processing Model and provide a tutorial on how to generate a custom standalone edition using the Toolbox.
Brase, Jan (3); Hegel, Philipp (1); Kollatz, Thomas (2); Rapp, Andrea (1); Schmid, Oliver (1); Schmunk, Stefan (3); Söring, Sibylle (3)
1: Technische Universität Darmstadt, Germany; 2: Salomon Ludwig Steinheim-Institut für deutsch-jüdische Geschichte Essen; 3: Niedersächsische Staats- und Universitätsbibliothek Göttingen
The proposed workshop will give interested researchers the chance to explore the diversity of the digital research environment TextGrid, the tools of the research infrastructure DARIAH-DE, and their possibilities for digital analysis. First, a short introduction will be given to the history of the projects, the different components and extensions of the software, and digital collaborative work in TextGrid. Examples from various projects that are using the virtual research environment and infrastructure will also be presented. After this review of the range of tools and services, participants can try the different tools and services for themselves in simultaneous half-hour presentations with TextGrid and DARIAH staff members at a number of “islands”, moving from subject to subject and island to island according to their own interests.
Renaud, Clément (1); Bahde, Grégory (2)
1: Telecom ParisTech, Centre Norbert Elias; 2: ENSSIB Lyon, UMR 5600, Université Jean Monnet, Saint-Étienne
Topogram: a Web Toolkit for Spatio-Temporal Network Mapping
Topogram is an open-source web toolkit to map social, semantic, and spatio-temporal dynamics from large sets of data. It answers the growing need for interactive mapping of complex online and offline interactions. Topogram provides ways to represent and explore relationships in data along different dimensions: words (lexical analysis), relationships (networks), time (changes and evolution), and space (maps). The software is divided into two parts: 1) a Python mining library to extract and format networks of words, citations, and places from text data, and 2) a collaborative web interface to visualize, edit, annotate, and publish graphs. During this workshop, we will introduce Topogram from installation through to publishing annotated graphs online from raw sets of data (typically from online social networks, sensors, or machine logs).
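A toy sketch of the "networks of words" idea: weighted edges between words that co-occur in the same sentence, the kind of lexical network a tool like Topogram renders. The corpus and helper below are illustrative, not part of Topogram's API.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(sentences):
    """Weighted word-word edges from sentence-level co-occurrence."""
    edges = Counter()
    for sentence in sentences:
        words = sorted(set(sentence.lower().split()))
        for a, b in combinations(words, 2):
            edges[(a, b)] += 1
    return edges

corpus = ["paris loves maps", "maps describe paris", "time changes maps"]
edges = cooccurrence_edges(corpus)
print(edges.most_common(2))  # ('maps', 'paris') is the strongest edge
```

Attaching timestamps or coordinates to sentences would extend the same edge list into the temporal and spatial dimensions the abstract describes.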
Potvin, Sarah (1); Ortega, Élika (2); Galina, Isabel (3); Gil, Alex (4); O’Donnell, Daniel Paul (5); Williams, Patrick (6); Borovsky, Zoe (7); Shirazi, Roxanne (8); Coble, Zach (9); Worthey, Glen (10)
1: Texas A&M University, United States of America; 2: University of Kansas, United States of America; 3: National University of Mexico (UNAM), Mexico; 4: Columbia University, United States of America; 5: University of Lethbridge, Canada; 6: Syracuse University, United States of America; 7: University of California Los Angeles, United States of America; 8: City University of New York, United States of America; 9: New York University, United States of America; 10: Stanford University, United States of America
Translation Hack-a-thon!: Applying the Translation Toolkit to a Global dh+lib
How can we move beyond a monolingual DH, and promote exchange of works between linguistic communities? And how can we ensure this exchange is ongoing and sustainable? This hack-a-thon brings together practitioners from two ADHO SIGs (Global Outlook::Digital Humanities and the Libraries and DH SIGs), a primarily monolingual dh community project (dh+lib), and the newly-created GO::DH Translation Toolkit in an attempt to hack a solution. While focused on a pilot that models a translation process for a particular publication, dh+lib, our goal is generalizable to other scholarly communication vehicles and venues. We will prepare attendees to engage in translation work and will continue conversations around translation practices and existing workflows, offering participants practical and adaptable approaches to developing comfort with and practices around translation in their own institutions and endeavors.
Blackwell, Christopher William
Furman University, United States of America
Tutorial: Getting Started with a CITE/CTS Digital Library
This three-hour tutorial will help participants get up and running with CITE/CTS digital library services. These services allow identification and retrieval of text, image, and other data at arbitrary levels of granularity based on concise URN formatted citations. Participants will download a Virtual Machine description file which, combined with free cross-platform software, will allow them to boot a Linux virtual machine pre-configured with a CITE/CTS digital library, prose and poetic texts in Greek, Latin, and English, and a small collection of manuscript images. The tutorial will cover the philosophy of CITE/CTS, this particular implementation of the services, importing new texts and images into a digital library, and possibilities for end-user applications based upon it. Participants will leave with a working digital library environment, and access to all source code.
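The CITE/CTS citations mentioned above follow the documented CTS URN form `urn:cts:<namespace>:<work hierarchy>:<passage>`, e.g. `urn:cts:greekLit:tlg0012.tlg001.msA:1.1` for Iliad 1.1 in the Venetus A version. A minimal sketch of splitting such a URN into its components (the parser itself is illustrative, not part of the CITE/CTS services):

```python
def parse_cts_urn(urn):
    """Split a CTS URN into its documented components.

    Form: urn:cts:<namespace>:<work hierarchy>[:<passage>]
    The work hierarchy is dot-separated (textgroup.work.version),
    so the passage reference can be arbitrarily fine-grained.
    """
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"]:
        raise ValueError("not a CTS URN: " + urn)
    namespace = parts[2]
    work = parts[3].split(".")        # e.g. textgroup, work, version
    passage = parts[4] if len(parts) > 4 else None
    return namespace, work, passage

ns, work, passage = parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001.msA:1.1")
```

Because every level of the hierarchy is optional from the right, the same scheme can cite a whole work, one version, or a single passage.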
Stevens Institute of Technology, United States of America
View Source: Reading the Hidden Texts of the Web
Jacomy, Mathieu (1); Grandjean, Martin (2); Girard, Paul (1)
1: Sciences Po, médialab, France; 2: University of Lausanne, Switzerland
Visual Network Analysis with Gephi Workshop
Gephi is a free and open-source network analysis software used, among other things, in social network analysis. It has existed for 10 years, and a new version was published in December 2015. This workshop is intended for beginners as well as experienced users. We first propose an introduction to Gephi basics, then explore collectively, through practice, the question of visual network interpretation. For the workshop, we provide example networks based on datasets of Twitter hashtags and hyperlinked websites on various topics related to the DH community.
Jacomy, Mathieu; Girard, Paul; Ooghe-Tabanou, Benjamin
Sciences Po, France
Web Communities Mapping With Hyphe
This workshop introduces a curation-oriented web crawler called Hyphe. This software, developed with and for Social Sciences and Humanities scholars, aims to provide a method and a tool to select and harvest web contents (web pages and HTTP links) for use in a research project. It provides web mining tools wrapped in a user interface, with the curation features (defining web-page aggregates, filtering contents, expansion methods) that Social Sciences and Humanities scholars require in order to use the web as a research field.
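The web-page aggregates mentioned above group crawled URLs into larger units of analysis ("web entities") defined by URL prefixes. The sketch below shows that grouping idea in its simplest form; the real Hyphe handles reordered URL stems, filtering, and expansion, so this is illustrative only.

```python
def group_by_web_entity(urls, prefixes):
    """Assign crawled page URLs to 'web entities' defined by URL
    prefixes, so that e.g. all pages under one blog count as a single
    actor in the resulting network. (Simplified sketch of the concept;
    not Hyphe's actual implementation.)
    """
    entities = {p: [] for p in prefixes}
    for url in urls:
        for p in prefixes:
            if url.startswith(p):
                entities[p].append(url)
                break
    return entities

pages = [
    "http://example.org/blog/post1",
    "http://example.org/blog/post2",
    "http://other.net/home",
]
entities = group_by_web_entity(
    pages, ["http://example.org/blog/", "http://other.net/"]
)
```

Aggregating pages this way is what turns a raw crawl into a network of research-relevant actors rather than a tangle of individual URLs.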
Scholz, Martin; Merz, Dorian; Goerz, Guenther
University Erlangen-Nürnberg, Germany
Working with WissKI – A Virtual Research Environment for Object Documentation and Object-Based Research
WissKI is a ready-to-use, web-based virtual research environment that relies on Semantic Web technologies and the CIDOC Conceptual Reference Model (CRM) to manage curated knowledge. Data acquisition and presentation intentionally borrow from traditional modes, while the user profits from linked and semantically enriched data without having to deal with semantic modelling issues. WissKI addresses the needs of object-centered documentation and research, as is typical for many memory institutions and for cultural heritage research fields such as art history, biodiversity, architecture, and epigraphy. WissKI is used by several academic and memory institutions in Germany in national and international research projects.
This half-day workshop
– gives a short introduction to the (technical) approach of WissKI,
– presents current use cases and modes of use,
– shows how to install, configure, and use WissKI, and
– includes a hands-on session on semantic modelling and data acquisition with WissKI and CIDOC CRM.
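CIDOC CRM modelling of the kind used in WissKI is event-centric: an object is linked to its maker through a Production event rather than by a direct "creator" attribute. The sketch below builds such statements as plain triples using real CRM property identifiers (P108i, P14); the instance URIs are invented for illustration and not taken from any WissKI system.

```python
CRM = "http://www.cidoc-crm.org/cidoc-crm/"

def triple(subject, prop, obj):
    """Build one (subject, predicate, object) statement, expanding the
    CRM property name to its full URI, roughly as a semantic store
    would hold it."""
    return (subject, CRM + prop, obj)

# Event-centric pattern: object -> Production event -> maker.
# (Hypothetical instance URIs, for illustration only.)
statements = [
    triple("ex:vase42", "P108i_was_produced_by", "ex:production42"),
    triple("ex:production42", "P14_carried_out_by", "ex:potter7"),
]
```

Modelling through events is what lets the same data later answer questions about time, place, and participants of the production, not just "who made it".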