Reposted from Strategic Technologies by Oleg Svet
“You can’t unread this sentence.”
I remember seeing that on a website once, after spending hours on the computer researching a topic, mostly through publicly available search engines like Google. Ironically, I was researching the positive sides of Web 2.0, trying to see what products the next wave in the frenzied gold rush of Web 2.0 would bring to our society. Through applications like Facebook and YouTube, Web 2.0 has given users the ability to upload and share their own unique products (videos, music, writing, and so on) on the Internet. Easier access to a greater pool of information has obvious benefits, from spurring innovative technologies to providing users with a greater variety of entertainment. Individuals have unquestionably been empowered by these technologies.
But Web 2.0 has also had its costs. I do not mean this in the sense that Web 2.0 applications can be used by nefarious actors as easily as by anyone else (although that is also a problem). I’m talking about the cognitive costs imposed by these new applications: a user researching a topic on-line runs into the problem of cognitive overload. Much (if not most) of the information users come across is useless, which brings me back to the first line of this blog: “You can’t unread this sentence.” All of the information that consumers read and digest on-line stays in their brains. We sit for hours at our desks, taking in countless bytes of information from countless sources. The cognitive burden of this process leads us to develop mechanisms for skimming every headline, so that we never fall out of the loop or miss something important. The unfortunate consequence is that we rarely get any depth, and we develop a kind of attention deficit disorder. Because of the cognitive overload the Internet imposes on our brains, our attention has been reduced to the 140 characters of a Twitter feed.
This is not something inherent in the Internet itself. It is, rather, a product of Web 2.0’s relative youth. Today we are witnessing the frantic gold rush of Web 2.0, but with time the Web will mature. My guess is that Web 3.0, and whatever phase follows it (some call it “the Semantic Web”), will filter all of that information so that users no longer face today’s cognitive overload. Information will be narrowed down and simplified. Web 3.0 will smooth out the rough edges of Web 2.0.
EPN, a Dutch think tank that studies the impact of information technologies on society, released an interesting video on the evolution of Web 1.0, Web 2.0, and Web 3.0. According to EPN, in the next step of the web’s evolution, technologies will become invisibly present in everyday appliances. For example, as you travel in your car, different bits of information (travel times, GPS locations, multiple itineraries, restaurant sites, and weather reports) will all be synchronized in real time. Those appliances will communicate with each other through the web to meet our individual needs (they will not form an all-knowing computer that surpasses human intelligence). In some ways, this version of Web 3.0 is already happening, although in the next step of the web’s development, less human direction will supposedly be needed. For example, I recently purchased an iMac and got a wireless printer with it. The printer is completely disconnected from my computer (as are my wireless keyboard and mouse), yet any document I create on my computer can be sent wirelessly to the printer; both my iMac and my work laptop can print to it. In the world of Web 3.0, there will be more appliances like that, and they will probably be able to recognize and communicate with each other more easily. The web will become more present in everyday appliances, augmenting our reality, but it will also be less visibly present.
Eric Schmidt, the CEO of Google, delivered an interesting response when asked what Web 3.0 means. Schmidt couldn’t define precisely what it will be, but he listed the characteristics he expects Web 3.0 to have: applications will be pieced together; applications will be small; data will be stored in the cloud; applications will run on any device (PCs or mobile phones); applications will be fast and customizable; and applications will be distributed virally (sent from person to person). Perhaps we will start trading products on-line (e.g., Kindle users will be able to “loan” on-line books to friends).
Professor Abraham Bernstein of the University of Zurich, who explores natural language processing on the web, delivered a Google TechTalk in which he described how these new technologies can make the web more accessible. His vision of the Semantic Web is a place where semi-structured information can be processed by machines, which use inductive and deductive reasoning to derive answers. Bernstein’s notion of the Semantic Web is simple yet powerful: rather than publishing pages of information (as you do on Web 2.0), on Web 3.0 you will publish discrete assertions or statements, and algorithms will piece those assertions together into a cognitively simple and factually correct product.
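The idea of machine-processable assertions can be sketched in miniature. The snippet below is a loose illustration (not Bernstein’s actual system): facts are stored as subject-predicate-object triples, roughly in the spirit of RDF, and a single deductive rule pieces separate assertions together into a new conclusion that no one explicitly asserted. All the names and facts are invented for the example.

```python
# A toy "assertion store": each fact is a (subject, predicate, object) triple,
# loosely in the spirit of RDF. The data below is invented for illustration.
facts = {
    ("Zurich", "locatedIn", "Switzerland"),
    ("Switzerland", "locatedIn", "Europe"),
    ("AbrahamBernstein", "worksIn", "Zurich"),
}

def infer_transitive(facts, predicate):
    """Deductive step: if A-p->B and B-p->C, conclude A-p->C.
    Repeats until no new facts can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(facts):
            for (b2, p2, c) in list(facts):
                if p1 == p2 == predicate and b == b2 and (a, predicate, c) not in facts:
                    facts.add((a, predicate, c))
                    changed = True
    return facts

closed = infer_transitive(facts, "locatedIn")
# "Zurich locatedIn Europe" was never asserted, only derived:
print(("Zurich", "locatedIn", "Europe") in closed)  # True
```

The point of the sketch is that each publisher only contributes small, local statements; the "cognitively simple" answer comes from the machine combining them.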
Many technologists call the step after Web 3.0 the Semantic Web, predicting 2020 as the year in which the web will take the next step in its evolution.
The Semantic Web
First off, what is meant by “semantic”? As a short and useful YouTube video on the Semantic Web points out, syntax is how you say something, whereas semantics is the meaning of what you say. Both are parts of communication. For example, the statements “I love technology” and “I Heart technology” have different syntax but the same semantics. Though phrased differently, they share the same meaning. Reading both statements in a Twitter post, a human will recognize that they mean the same thing. We have not yet reached the point, however, where a computer can pick up the semantics of statements.
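The syntax-versus-semantics distinction can be made concrete with a deliberately crude sketch: collapse different surface words into one shared concept, so that two syntactically different sentences reduce to the same "meaning". The synonym table and concept name are invented for the example; real semantic technologies are far more sophisticated than a lookup table.

```python
# A minimal, purely illustrative sketch of "same semantics, different syntax".
# The synonym table is hand-made; real systems do nothing this naive.
SYNONYMS = {"love": "LIKE_STRONGLY", "heart": "LIKE_STRONGLY", "<3": "LIKE_STRONGLY"}

def meaning(sentence):
    """Reduce a sentence to a crude 'semantic' form: lowercase its words
    and collapse known synonyms into a single concept."""
    return tuple(SYNONYMS.get(word, word) for word in sentence.lower().split())

# Different syntax, same semantics:
print(meaning("I love technology") == meaning("I Heart technology"))  # True
```

Both sentences reduce to `('i', 'LIKE_STRONGLY', 'technology')`, which is the toy version of what a semantic processor must do: see past how something is said to what is said.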
The Internet gave computers a medium to communicate with each other, but computers merely mimic human communication. They were not designed to teach human beings what information means, only to provide a tool for sharing it. The web created a storage-and-retrieval database from which we can quickly fetch information, using HTML as the syntax.
So how are the Internet, the web, and HTML related to the Semantic Web? Wikipedia defines the Semantic Web as “a group of methods and technologies to allow machines to understand the meaning – or ‘semantics’ – of information on the World Wide Web.” The Internet allowed us to communicate with each other, the web lets us store and retrieve documents, and search engines gave us a way to find that data. But computers do not understand the meaning; they only understand the syntax. The Semantic Web will, in theory, make our lives easier by developing complex algorithms, attuned to human factors, that help computers get us what we want.
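The gap between syntax-only retrieval and meaning-aware retrieval can be illustrated with a toy contrast: a string search (what a computer does today) versus a query over machine-readable metadata (what the Semantic Web proposes). The pages, tags, and the "jaguar" ambiguity are invented for the sketch.

```python
# Invented example data: two pages that both mention "Jaguar", with
# machine-readable metadata stating what each page is actually about.
pages = [
    {"text": "Jaguar cubs were born at the zoo this spring.",
     "meta": {"topic": "animal"}},
    {"text": "The new Jaguar model has a quieter engine.",
     "meta": {"topic": "car"}},
]

def syntactic_search(query):
    """Today's web: match the raw string; meaning is ignored."""
    return [p["text"] for p in pages if query.lower() in p["text"].lower()]

def semantic_search(topic):
    """Semantic-Web style: query the machine-readable meaning tags."""
    return [p["text"] for p in pages if p["meta"]["topic"] == topic]

print(len(syntactic_search("jaguar")))  # 2 (the zoo page and the car page)
print(len(semantic_search("car")))      # 1 (only the page about the car)
```

A string matcher cannot tell the animal from the automobile; a machine that can read the semantics, even in this crude tagged form, returns only what the user actually meant.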
Accounting for human factors will undoubtedly make our lives easier, much as ergonomics has enabled the development of car seats that are better for our backs. The Semantic Web will never fully account for human differences, but it will simplify the process of storing, sharing, and using information on the web, making our lives easier.