Creating and Using Image Graphs


This talk was part of the DBpedia-sponsored Knowledge Graphs in Action event, held as part of the Autumn Semantics Conference – you can view the talk here. Most of the talk was also a demo, so I will try to make a video recording to accompany the slides soon.

A description of the ImageSnippets system

ImageSnippets is a multi-faceted system for creating, publishing and managing images described with structured linked data. Here are two videos illustrating some of the utility of using linked-data descriptions with images:


Here is a blog post that discusses how ImageSnippets uses DBpedia:

I’m also pleased that – at long last – all of the galleries I am currently creating on this site are being generated from SPARQL queries pulling triples which have been used to tag images in ImageSnippets. More posts on this should be forthcoming as I get around to writing them.
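As a rough illustration, a gallery query of this kind might look something like the SPARQL sketch below. The prefix, property name (`imgss:depicts`) and example concept are hypothetical placeholders for illustration only, not the actual ImageSnippets vocabulary:

```sparql
# Hypothetical sketch: select the URL and title of every image
# tagged as depicting a given concept, for rendering as a gallery.
PREFIX imgss: <http://example.org/imagesnippets/terms#>
PREFIX dbr:   <http://dbpedia.org/resource/>
PREFIX dc:    <http://purl.org/dc/terms/>

SELECT ?image ?title WHERE {
  ?image imgss:depicts dbr:Redwood_National_and_State_Parks ;
         dc:title      ?title .
}
ORDER BY ?title
```

Because the tags are DBpedia entities rather than free-text keywords, one query like this can drive a whole gallery without any per-image curation.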

The paper ‘Bounding Ambiguity: Experiences with an Image Annotation System’ was presented at the Human Computation Conference (HCOMP 2018) in Zurich in June 2018.

You can read the paper here: BoundingAmbiguity_warren_hayes. In it, we discuss at some length many of the theories and experiences we have been working with over the years in developing ImageSnippets.

Here are the slides from the lightning talk: 



A summation of knowledge graph ideas

This post is a summation of a few ideas posted by Pat Hayes as part of an exchange that can be found in much greater detail on the W3C Semantic Web mailing list:

These thoughts were in response to a previous e-mail in the thread – but I think they express some very succinct ideas that could be separated out and perhaps expanded upon by Pat in this post:

Most (all?) of the KR (Knowledge Representation) proposals put forward in AI or cognitive science work have been some subset of first-order predicate logic, using a variety of surface notations. There are some fairly deep results which suggest that any computably effective KR notation will not be /more/ expressive than FO logic. So FOL seems like a good ‘reference’ benchmark for KR expressivity.

Avoiding KR silos was one of the primary goals of the entire semantic-web linked-data initiative. But this has many aspects. First, we need to agree to all use a common basic notation. Triples (=RDF =Knowledge Graph =JSON-LD) has emerged as the popular choice. Getting just this much agreement has taken 15 years and thousands of man-hours of strenuous effort and bitterly contested compromises, so let us not try to undo any of that, no matter what the imperfections are of the final choice.

The next stage, which we are just getting started on, involves agreeing on a common vocabulary for referring to things, or perhaps a universal mechanism for clearly indicating that your name for something means the same as my name for that same thing. This seems to be much harder than the semantic KR pioneers anticipated.

The third stage involves having a global agreement on the ontological foundations of our descriptions, what used to be called the ‘upper level ontology’. This is where we get into actual metaphysical disagreements about the nature of reality (are physical objects extended in time? How do we handle vague boundaries? What are the relationships between written tokens, images, symbols, conventions and the things they represent? What is a ‘background’? What is a ‘shape’? Is a bronze statue the same kind of thing as a piece of bronze? What changes when someone signs a contract? Etc. etc., etc.) This is where AI-KR and more recently, applied ontology engineering (not to mention philosophy) has been working for the past 40 or 50 years, and I see very little hope of any clear agreements acceptable to a large percentage of the world’s users.

My Semantic/AI challenge for the world

In my interview with Teodora Petkova, published in January of this year, I wrote a section about my semantic challenge to the Semantic/AI world. You can read about it in more depth in the interview. But basically, my challenge is this:

Can we search for and find a list of all movies, stories and/or books in which a plot device (either the main plot point or a smaller plot device used to move the narrative along) involves a tethered telephone of some sort? (Tethered meaning that it has an actual LINE attached to it and can’t be moved far – in other words, NOT wireless.)

The concept is a question to all researchers in these fields: can we get the kind of meaning in our recorded media formalized in such a way that this question is discoverable before tethered phones have been completely forgotten from our collective memory? The plot device could be that someone could not answer a ringing phone, could not find a working phone to make a call, could not find currency to put in the phone to make it work, or found that when they got there the device was dead or the line had been cut. OR it could be that the plot was moved along BECAUSE they WERE able to complete a call (say, they found a phone booth in time and connected with someone just before that person was about to leave the house) – but had the phone not been working, they could not have made the call.

One important part of this challenge is that it has to somehow involve a phone with a tethered land line. The other is to see if this query can be made BEFORE most humans no longer recall, have access to, or have experienced the angst (or lack of angst) and issues that arose before wireless hand-held telephones were in the hands of almost everyone – especially babies and young children, who are now considered to be born digital natives and who have always known and understood wireless communications and cell phones.
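To make the challenge concrete, here is a purely hypothetical sketch of what such a query might look like if narrative meaning were ever formalized this way. Every term in it (`narr:hasPlotDevice`, `narr:involvesArtifact`, `narr:TetheredTelephone`) is invented for illustration; no such vocabulary exists today, which is exactly the point of the challenge:

```sparql
# Hypothetical sketch, assuming a (non-existent) narrative-plot vocabulary.
PREFIX narr: <http://example.org/narrative#>
PREFIX dc:   <http://purl.org/dc/terms/>

SELECT DISTINCT ?work ?title WHERE {
  ?work narr:hasPlotDevice ?device ;     # main plot point or minor device
        dc:title           ?title .
  ?device narr:involvesArtifact ?phone .
  ?phone a narr:TetheredTelephone .      # a land line: the phone cannot be moved
}
```

Writing the query is the easy part; the hard part of the challenge is getting the triples that would answer it extracted and agreed upon before the experience they describe disappears.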

Meanwhile, here is a WORKING telephone booth on a trail in a redwood forest near the Chabot Space and Science Center above Oakland, California.


So, if Daniel Schwabe doesn’t mind, I am going to use one of his beautiful photos to illustrate Dr. Robert MacFarlane‘s ‘word of the day’ from Twitter, and also give a little more info about things you can do with ImageSnippets. I’m doing a project where I share photos from ImageSnippets each day that match his words of the day – so I needed to figure out how I was going to express this concept with our triple tags, which let you add quite deep metadata as linked data to images.

MacFarlane’s word of the day was: ‘Zohar’

In ImageSnippets, we select from a range of meanings to create a triple-tag around a defined word or expression. All of the datasets we queried have a number of results for ‘zohar’ – including a few albums and the name of the foundational book of Kabbalah. But none of them are actually the defined ‘concept’ of this kind of quality of brightness. It is possible that Hebrew DBpedia or Wikidata might have the concept as well as the book, but in English, the concept itself is not defined in any dataset in which I searched for entities.

Luckily, in ImageSnippets I can create an entity fairly easily, using Dr. MacFarlane’s definition. From his Twitter post, I created a simple entity that describes ‘Zohar’ as: The quality of brightness, of radiance from within, of being aglow with possibility (Hebrew; זֹהַר). “Zohar” connotes an openness to & connection with natural beauty; it is also the name of the foundational work of Kabbalah. I added the provenance of where this definition originated – [Definition from Dr. Robert G. MacFarlane, Author] – and then said that this image ‘conveys’ ‘zohar’. [I can discuss how this works if anyone is interested.] Or just enjoy this beautiful image of Daniel’s and see if you agree with me that it beautifully illustrates the concept that MacFarlane defined.
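As a rough sketch of what happens under the hood, the triples created above might look something like the following. The prefixes, property names (`imgss:conveys`, `imgss:definitionSource`) and URIs are hypothetical placeholders, not the actual ImageSnippets schema:

```sparql
# Hypothetical sketch of the triples behind the 'zohar' tag.
PREFIX imgss: <http://example.org/imagesnippets/terms#>
PREFIX skos:  <http://www.w3.org/2004/02/skos/core#>

INSERT DATA {
  # A newly minted entity for the concept, carrying the quoted definition
  # and its provenance.
  imgss:Zohar a skos:Concept ;
      skos:prefLabel  "Zohar"@en ;
      skos:definition "The quality of brightness, of radiance from within, of being aglow with possibility"@en ;
      imgss:definitionSource "Dr. Robert G. MacFarlane, Author" .

  # The image is then linked to the concept it conveys.
  <http://example.org/images/daniel-schwabe-photo> imgss:conveys imgss:Zohar .
}
```

Minting the concept once means every later image can reuse the same entity, which is what makes the search at the end of this post possible.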

I have now triple-tagged a few more images in ImageSnippets with the word ‘zohar’, so if you search ImageSnippets for ‘zohar’ you can find them.