
I feel a disturbance in the Semantic Web - as if a million palms slapped into faces all at once and €100m got spent on nonsense.


To explain - many people have spent many years trying to reason with RDF, OWL and so on, and it hasn't gone too well. The problem is that writing logical statements is very hard, writing knowledge bases is harder, and writing distributed knowledge bases is hardest of all - and none of it has solved the knowledge engineering bottleneck.

And encoding it in XML does not help.


In what sense hasn't it gone well? Many well-known companies outside the technology space, including Fortune 100 and 500 firms as well as various government agencies, are successfully leveraging semantic web technologies for knowledge management. There are also many interesting public databases, widely used for research, that are built on semantic technologies. JSON serializations of RDF exist too. See: https://dvcs.w3.org/hg/rdf/raw-file/default/rdf-json/index.h...

Public database examples: http://flybase.org/ , http://dbpedia.org/ , http://disease-ontology.org/


Ok, how about

http://www.itjobswatch.co.uk/jobs/uk/semantic%20web.do

A ranking of 845 doesn't seem to me to cut it.


About 10 or 12 years ago I spent a lot of time experimenting with the semantic web library for SWI-Prolog, and it was time well spent. These days, commercial/community versions of tools like Stardog are probably a better place to start experimenting with what OWL reasoning (or RDFS or RDFS+) can do. One great use is being able to mix and match different RDF data sets without translating them to a common ontology.
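The mix-and-match property comes from RDF's uniform triple model: every dataset is just a set of (subject, predicate, object) statements, so merging is set union. A minimal sketch of the idea in plain Python, with tuples standing in for triples (all names and values here are made-up examples, not real data):

```python
# RDF models everything as (subject, predicate, object) triples.
# Two hypothetical datasets describing the same subject with
# different vocabularies:
dataset_a = {
    ("ex:paris", "ex:population", "2148000"),
    ("ex:paris", "rdfs:label", "Paris"),
}
dataset_b = {
    ("ex:paris", "geo:lat", "48.8566"),
    ("ex:lyon", "rdfs:label", "Lyon"),
}

# Because both sets use the same subject identifiers, a plain set
# union "merges" them -- no schema translation step required.
merged = dataset_a | dataset_b

# Query: everything known about ex:paris, regardless of which
# vocabulary (ex:, rdfs:, geo:) each source used.
about_paris = {(p, o) for (s, p, o) in merged if s == "ex:paris"}
print(sorted(about_paris))
```

Real triple stores do the same thing at scale, with IRIs instead of short strings; the point is that merging needs no up-front agreement on an ontology.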

I have written a few semantic web books and I guess I am still a fanboy, but I also agree with you that "classic" semantic web apps using the W3C standards have seen only partial acceptance into mainstream use, while similar graph-based systems like Google's Knowledge Graph affect how many people use the web (at least for search, Google Now, etc.).


I've used RDF and SPARQL for a couple of years, so I have some thoughts on it. The problem is that while there are standards for it, there are too many of them, and they often aren't followed. If metadata is missing, the structure of the data has to be inferred, and there is often no reliable way to do that. Our product was meant to be able to communicate with internal tools as well as external ones because it used RDF. What we ended up doing was hard-coding assumptions about the internal tools, because certain pieces of information weren't available in the RDF.

The goal of RDF and the semantic web was to make things easier, but in my experience it makes things much more complicated.
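To make the failure mode concrete: a SPARQL query only matches the exact vocabulary it names, so when a data source uses its own properties instead of the ones you assumed, the query silently returns nothing. An illustrative query (the vocabulary choice here is a hypothetical example):

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Find names of people -- but only if the data really uses foaf:name.
# If a source invented its own "hasName" property instead, this
# matches nothing, with no error to tell you why.
SELECT ?person ?name
WHERE {
  ?person a foaf:Person ;
          foaf:name ?name .
}
```

This is why "too many standards, often not followed" bites: the triples are all well-formed RDF, but nothing forces two sources to describe the same thing the same way.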

Finally, on the topic of encoding: the old de facto standard for encoding RDF was XML. More recently, Turtle[1] has become more popular, and it improves the human readability of RDF by an order of magnitude.

[1] http://www.w3.org/TeamSubmission/turtle/
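For comparison, here is a single triple ("Alice's name is Alice") in both serializations; the IRIs are standard illustrative examples, not from any real dataset:

```xml
<!-- RDF/XML: one triple, buried in markup -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <rdf:Description rdf:about="http://example.org/alice">
    <foaf:name>Alice</foaf:name>
  </rdf:Description>
</rdf:RDF>
```

```turtle
# Turtle: the same triple, close to how you'd say it aloud
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" .
```

Both parse to the identical graph; only the surface syntax differs.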


I probably use Semantic X more than most people do, and I have absolutely never understood RDF/OWL/whatever. My use case: I run a fairly large wiki, we have Semantic MediaWiki installed, and literally the only thing we use it for is setting "facts" on pages that you can then query to find the pages that satisfy them. It's just metadata for pages, essentially.
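For readers unfamiliar with Semantic MediaWiki, this "facts on pages" usage looks roughly like the following (property names and values are invented for illustration): an annotation on one page, and an inline `#ask` query elsewhere that finds matching pages.

```wikitext
<!-- On a city's wiki page: annotate a fact -->
[[Population::2500000]]

<!-- On any other page: query for pages satisfying a condition -->
{{#ask: [[Category:City]] [[Population::>1000000]]
 | ?Population
}}
```

Under the hood SMW stores these as subject-property-value triples, but as the commenter says, you can use it purely as page metadata without ever touching RDF or OWL.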

I don't think I'm alone in this. None of the Semantic Web stuff ever really made sense to me.


Off-topic but related:

Freebase, the largest open collaborative knowledge base, will be closed on March 31, 2015: http://www.freebase.com , http://en.wikipedia.org/wiki/Freebase

Freebase has 2,751,614,700 facts, Wikidata has 13,788,746 facts.

Freebase was acquired by Google, and their internal Google Knowledge Graph is based on it. Wikidata may import some of Freebase's data, but due to its stricter guidelines (notability...), many facts of minor notability will be lost or never migrated. A Freebase dump won't age well; in a lot of cases, up-to-date facts about the real world are required (e.g. "Who is the current president of the USA?" "Bill Clinton").

Maybe some community project like Archive.org, the Apache Foundation, Wikipedia.org, or Mozilla can rescue the Freebase community project before it is too late?


I believe the current intention is to move all the Freebase facts to Wikidata. It's been noted on the Wikidata mailing list that the notability requirements for Wikidata are much less strict than for Wikipedia.




