Deep Web Technologies’ fearless leader, Abe Lederman, will travel from New Mexico, USA to Coventry, UK to attend the first ever Resource Discovery Tools for Health Libraries meeting on September 11. The growing use of discovery tools in the healthcare sector prompts discussion about suitable technologies and the nature of search for health librarians.
The event is hosted by the University Hospitals of Coventry and Warwickshire NHS Trust. Founded in 1948, the NHS comprises four individual systems, one each for England, Scotland, Wales and Northern Ireland.
The NHS aims to provide a wide range of free health services in response to the needs and requirements of the population.
This event is free to all health librarians. DWT, as one of the event sponsors, will have an opportunity to present how Explorit Everywhere! applications further search and discovery in the healthcare industry.
University Hospital, Coventry, United Kingdom
Friday, 11 September 2015, from 10:00 to 16:00 (BST)
If you plan on attending the conference and would like to meet with Abe, please let us know as soon as possible. His schedule is quickly filling up!
As much as we like to think that Explorit Everywhere! is simple to use, it still holds the junior heavyweight championship title for feature-rich technologies. Throw on top of that the concepts of “federated search”, “Deep Web”, and “discovery services”, and it’s easy to get lost in a maze of information. Since Deep Web Technologies is all about pulling the needle out of the haystack for you, we thought it was time to create an easy reference post on the world of Explorit Everywhere!. Want to know where you can find out about how we rank results? How about federated search or the Deep Web? Take a gander through some of these posts:
Explorit Everywhere! Features
The Deep Web
You may also enjoy reading this post from the Federated Search Blog enumerating informational posts about federated search and discovery services. A couple of key posts include:
WorldWideScience.org has received a tremendous amount of press so far in 2015. On January 8th, Microsoft published a case study on WorldWideScience.org and Deep Web Technologies:
“WorldWideScience.org is the result of years of research and innovation. Although the underlying technology itself is exciting, Deep Web Technologies and the WorldWideScience Alliance are most interested in what it enables for users. “This solution increases access to worldwide information, which is the biggest benefit,” explains Johnson. “We search approximately 100 repositories that we estimate include more than 500 million pages of science and technology information. So instead of having to go to 100 different sources to find content, WorldWideScience.org using Microsoft Translator offers the ability to search all of them with a single query.”
Then, the April/May issue of MultiLingual magazine published an article entitled, “Advancing Science by Overcoming Language Barriers.” The article discussed the rise of WorldWideScience.org and its role in bridging language barriers using Microsoft’s machine translation.
In late June, Deep Web Technologies updated WorldWideScience.org, just in time for the WorldWideScience Alliance meeting in Germany. Responsive design is now an integral part of the application making it much easier to add new features now and in the future. The spotlight enhancements include:
- Mobility: WorldWideScience.org can now be accessed from any device. When a user visits the application on a mobile device, the interface automatically adjusts to their screen size, making it easier to search and view results.
- Localization: WorldWideScience.org has been a multilingual application for years, allowing users to translate results into their language of choice. Now, when a user chooses English, Spanish, French or Portuguese, WorldWideScience.org automatically updates the interface text to the selected language too.
There are a host of other small improvements to WorldWideScience.org. This upgrade is setting the stage for future enhancements such as MyLibrary, the ability to save results for future reference, and additional language localizations. Take a look from your smartphone or tablet and let us know what you think!
WorldWideScience.org isn’t the only application recently updated. Science.gov received a facelift recently as well.
People tend to think of Google as the authority in search. Increasingly, we hear people use “google” as a verb, as in, “I’ll just google that.” General users, students and even professional researchers are using Google more and more for their queries, both mundane and scholarly, perpetuating the Google myth: If you can’t find it on Google, it probably doesn’t exist. Google’s ease of use, fast response time and simple interface gives users exactly what they need…or does it?
Teachers say that 94% of their students equate “Research” with “Google”. (Search Engine Land)
“Another concern is the accuracy and trustworthiness of content that ranks well in Google and other search engines. Only 40 percent of teachers say their students are good at assessing the quality and accuracy of information they find via online research. And as for the teachers themselves, only five percent say ‘all/almost all’ of the information they find via search engines is trustworthy — far less than the 28 percent of all adults who say the same.”
Do teachers have a point here? Is it possible that information found via search engines is less than trustworthy, and if so, where do teachers and other serious researchers need to go to find quality information? Deep Web Technologies did a little research of our own to see just how results on Google vs. popular Explorit Everywhere! search engines differ in the quality of science sources.
How Google Works
Google, and other popular search engines such as Bing and Yahoo, search the surface web for information. The surface web, as opposed to the Deep Web, consists of public websites that are open to crawlers, which read each website’s information and store it in a giant database called an index. When users search for information, they are actually searching the index, not the websites themselves. The results that are returned are the ones people seemed to like in the past, that is, the most popular results for the query. That’s right…the most popular…not necessarily the most relevant information or quality resources.
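The crawl-and-index model described above can be sketched in a few lines of Python. This is a deliberately tiny illustration, not Google’s actual implementation: the page URLs and text are made up, and the “index” is just an in-memory inverted index that answers queries without ever touching the live pages.

```python
# Toy sketch of crawl-and-index search: pages are crawled once into an
# inverted index, and queries are answered from the index, not the web.
from collections import defaultdict

# Hypothetical "crawled" pages (URL -> page text).
pages = {
    "example.org/a": "climate change and ocean temperature data",
    "example.org/b": "climate policy news and opinion",
}

# Build the inverted index: word -> set of URLs containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    """Return URLs containing every query word, via index lookup only."""
    words = query.lower().split()
    results = set(index.get(words[0], set()))
    for w in words[1:]:
        results &= index[w]
    return sorted(results)

print(search("climate change"))  # -> ['example.org/a']
```

A real engine layers ranking (popularity, links, click history) on top of this lookup, which is exactly where the “most popular, not most relevant” effect enters.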
We should probably also mention those sneaky ads at the top of the page that look informative, but can be quite deceptive. A JAMA article states this about medical search ads:
“Many of the ads, the researchers noted, are very informational — with ‘graphs, diagrams, statistics and physician testimonials’ — and therefore not identifiable to patients as promotional material.
This kind of ‘incomplete and imbalanced information’ is particularly dangerous, they note, because of its deceptively professional appearance: ‘Although consumers who are bombarded by television commercials may be aware that they are viewing an advertisement, hospital websites often have the appearance of an education portal.'”
Researchers who think Google reads their mind and magically returns the right information on the first page of results should think again. The #1 position on a Google results page gets 33% of the traffic, so it is a highly sought-after spot. Unfortunately, with SEO tricks inflating page rank on Google and ads vying for the top spot, that number one result, or even the top page of results, may not be entirely germane or even contain much scholarly content. Those results rank high because they’ve worked the Google system.
So, a search performed on Google may return educational results, but the source itself may be unreliable, pure opinion or even company marketing as in the example above. For those needing credible information from recognized, authoritative sources, Google results just don’t cut it. For example, searching for the term “Climate Change” and organizing the top 25 results into categories – Opinions, News, Government, Ads, Wiki Sources, Peer Reviewed and Education – we find that the two biggest categories are News and Opinions. This doesn’t support Google as an authoritative source of information for scientific research.
Where are Quality Science Sources?
Scholarly researchers may need some publicly available information, but more often than not they need information that is not publicly available, i.e., not findable through Google. Much of what they look for is in password-protected repositories, subscription databases, or part of an organization’s internal collection of information. These sources are not available to Google’s crawlers, so they are not available through Google. Databases and sources of information like these are part of what is known as the Deep Web. The Deep Web contains an estimated 95% of the information on the Internet, such as scientific reports, medical records, academic information, subscription information and multilingual databases. You can read more about the Deep Web here.
How is a Deep Web Search Better than Google for Scholars?
For scholars needing to go deeper into their research, Deep Web databases often contain key information and current data unavailable through Google.
Deep Web sources must be searched through specialized search engines, like Explorit Everywhere! by Deep Web Technologies. Explorit Everywhere! combines all of the Deep Web resources, making them available to search from a single search box, kind of like Google. But there are no gimmicks, no SEO tactics to push results higher up on the page, and no sly ranking systems that websites can use to maneuver themselves into the number one position. It’s a simple matter of good sources and good results, aggregated and ranked so the best results are at the top. Don’t worry about wading through ads or junky opinions; if you’re searching through Explorit Everywhere!, you are searching high-quality, relevant sources.
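The single-search-box idea above boils down to fanning one query out to many sources at once and merging the ranked answers. Here is a minimal sketch of that pattern; the source names, the `search_source` stub, and the scoring are all invented for illustration, and Explorit Everywhere!’s real connectors and ranking are far more involved.

```python
# Minimal federated-search fan-out: query every source concurrently,
# then merge the per-source results into one relevance-ordered list.
from concurrent.futures import ThreadPoolExecutor

def search_source(source, query):
    # Placeholder connector: a real one would query the source's API live.
    return [{"source": source, "title": f"{query} result {i}", "score": 1.0 - i * 0.1}
            for i in range(3)]

def federated_search(sources, query):
    """Search all sources in parallel and merge by normalized score."""
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        result_lists = pool.map(lambda s: search_source(s, query), sources)
    merged = [r for results in result_lists for r in results]
    merged.sort(key=lambda r: r["score"], reverse=True)
    return merged

hits = federated_search(["PubMed", "OSTI", "NASA"], "climate change")
print(hits[0]["source"], hits[0]["score"])  # highest-ranked result first
```

The key design point is the merge step: because every source returns its own notion of relevance, a federated engine must normalize and re-rank across sources before presenting one list.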
Explorit Everywhere! outperforms Google by eliminating the clutter and providing dependable, scholarly sources of current information to the user. Time and again, Explorit Everywhere! has proven itself to find the needle in the haystack for serious researchers.
Do Your Own Comparison – Google vs. Science.gov
Most Deep Web search engines are, well, deep. They aren’t freely available because the sources themselves are private or only available to registered users. Most academic libraries subscribe to premium sources of information, for example, and those databases are considered part of the Deep Web since they aren’t available to search through Google. And while some reputable sources of information that once existed only on the Deep Web, such as PubMed and NASA, are now publicly available through Google, these sources tend to get buried amidst other results, so they aren’t always easy to find. Many libraries feature these authoritative databases in guides, links or search portals like Explorit Everywhere! simply to highlight the source rather than forcing users to wade through irrelevant results.
There are a few publicly available search engines where you can test drive a Deep Web search and see the difference for yourself. Science.gov, developed and maintained by the DOE Office of Scientific and Technical Information, uses Explorit Everywhere! to search over 60 databases and over 2200 selected websites from 15 federal agencies. The results are from authoritative, government sources, and extraordinarily relevant. When you perform a search on Science.gov, there is no question about the sources you are searching. Explore the difference!
Whether you are a student or scientist, knowing where to start your science search is very important. In most cases, serious research doesn’t start with Google. A 2014 IDC study shows that only 56% of the time do knowledge workers find the information required to do their jobs. Having the right sources available through an efficient Deep Web search like Explorit Everywhere! is critical to finding significant scientific information and staying ahead of the game.
I recently had a conversation with a VC who brought up the acronym “SMAC”. He explained that SMAC stands for Social, Mobile, Analytics and Cloud, and pointed out that these four areas are red-hot with investors right now.
In a May 2014 Forbes blog article, Ravi Puri, Senior Vice President, North America Oracle Consulting Services, defined SMAC and wrote: “The convergence of these trends is creating a coming wave of disruption that will let companies drive improved customer satisfaction, sustainable competitive advantage and significant growth in enterprise value—but only if you are ready for it.”
More recently, Casey Galligan, Morgan Stanley Wealth Management Market Strategist, advised investors not to shy away from this sector but to invest in leading SMAC companies, writing: “We believe that companies levered to these key secular growth areas will continue to be differentiators.”
It is an exciting time to be Deep Web Technologies, as we have been working in a number of these areas for a while now and are poised to make significant contributions, directly and through partners, to advance the state of the art in all SMAC technology areas in the years ahead. Let me give you some examples:
- Social – At its heart, Explorit Everywhere! connects people to information. That’s one reason that Explorit Everywhere! naturally integrates well with social networking sites. These sites offer rich information to end users in the form of opinions, rants, new developments, scientific breakthroughs and more. An organization may have a variety of social networks supporting their philosophy and marketing their brand, such as Twitter, Facebook, LinkedIn, Pinterest, and blogs. These networks are rife with interesting and useful tidbits for marketing folks, researchers, students and other professionals alike. Explorit Everywhere! can search all of these networks for relevant information in five seconds or less. To follow things up, Explorit Everywhere! lets users share what they’ve found back to their own networks, completing the number one rule of thumb for social networks: share and share alike. Social integration engages users and simplifies searching and posting to multiple networks.
- Mobile – The mobile wave is more than just a fad; it’s the future. As we mentioned in our previous post, Explorit Everywhere! Goes Mobile, by the year 2020 we may see around 50 billion connected devices slinging information around the world. When it comes to mobility, we needed Explorit Everywhere! to be flexible and device-aware, with an ultra-sleek user interface. Advances in mobile technology require that we stay up to date, and Explorit Everywhere! accomplishes this through responsive design and vigilance about the new devices searching our application.
- Analytics – Explorit Everywhere!’s statistics package has been collecting usage statistics for years, enabling our clients to maximize the ROI of the content they license. Deep Web Technologies is an expert at gathering information from multiple sources, aggregating the results and categorizing them into concepts that expand the breadth of a researcher’s information. Beyond that, Explorit Everywhere! can feed the pinpoint information it retrieves into best-of-breed analytical tools and software for further filtering and sifting. Explorit Everywhere! complements big data dashboards by funneling a broad swath of relevant material down the pipe for further analysis. On the front end, Explorit Everywhere! can also enhance what the user sees in the dashboard with complementary information drawn from a variety of sources, both internal and external to an organization.
- Cloud – Enterprise search is moving toward the cloud, and with that comes silos of information lost in the cloud. Explorit Everywhere! performs a real-time search of multiple databases across multiple clouds of information, together with information residing in corporate silos that have not been moved to the cloud. These sources may sit behind a firewall or outside of it, and they often stump indexers because of the nature of the resources. Explorit Everywhere! connects to the databases wherever they are, making the world a much smaller place.
Explorit Everywhere!’s integrated SMAC features create a holistic search experience, ensuring that our clients are at the forefront of technology, not trailing behind the curve. With the best of this generation and next-generation technology, Explorit Everywhere! clients are part of the changing technology scene. We’re not just riding the mobile wave; we’re regularly improving connections to social networks, tuning our analytics and simplifying our cloud-based technology. And the process of finding the most current information will shift as the future unfurls. Explorit Everywhere! will leverage SMAC and other next-generation technologies to embrace new concepts, connect with data wherever it may sit, and engage our users. Explorit Everywhere! is state-of-the-search.
Data Planet was reviewed in April by the Charleston Advisor, a highly regarded critical review resource for libraries. Deep Web Technologies and Data Planet teamed up several years ago to create Data-Planet Statistical Ready Reference, designed to be a more user-friendly interface for finding and extracting data from Data Planet’s extensive repository and flagship product, Data-Planet Statistical Datasets.
“Data-Planet Statistical Ready Reference is designed to allow users to quickly navigate the 18.9 billion points of data contained in the repository, representing 3.9 billion time series covering thousands of geographic entities. With Data-Planet Statistical Ready Reference, users can quickly search and view charts, maps, and rankings of time series at the country, state, county, MSA, postal code, and census-tract/block group levels. All of the data are drawn from authoritative sources and are citable. The product provides high-level summary information as well as detailed line item views.”
Deep Web Technologies worked closely with Data Planet to create the Data Planet Statistical Ready Reference application and, later, the Data Planet Related Content application (not referenced in the review). Data Planet Statistical Ready Reference is an Explorit Everywhere! custom user interface that retrieves results via the Data Planet API. Extensive work was done to create a simple UI to search the data, and to present results with accompanying information such as graphs, charts and statistics. To enhance Ready Reference, an additional application, Related Content, was created to perform a Deep Web search for users to research topics beyond the Data Planet database. For example, a Ready Reference Geographical search for “New York” and a Subject search for “Airports” returns results from only the Data Planet database. Clicking on a result link will take you to a Data Sheet with Sources, Dataset, graphs, charts and Subject Terms. From the Data Sheet, however, you can continue to research your topic by clicking on the Related Content section – News or Scholarly – which opens a federated search application of selected Deep Web resources to retrieve related results.
Data Planet received a composite score of 4 1/8 stars out of 5. Both Data-Planet Statistical Ready Reference and Data-Planet Statistical Datasets were included in the review, which judged Content, User Interface/Searchability, Pricing, and Contract Options. The reviewer, Jennifer Starkey of Oberlin College in Ohio, notes, “Data-Planet rates highly in comparison, with its broad coverage of subjects, focus on time series data, provision of raw data that can be downloaded or viewed using the analytical tools, and the overall number of data sources available.”
This isn’t the first time that the Charleston Advisor has taken a close look at DWT. In 2012, Grace Baysinger, Head Librarian and Bibliographer at the Swain Chemistry and Chemical Engineering Library, and Tom Cramer, Chief Technology Strategist at Stanford University Libraries and Academic Information Resources, gave Deep Web Technologies 4 3/8 out of 5 stars, based on their experience with Deep Web Technologies’ product. We’re still going strong!
Data Planet plans on rolling out improvements to their Data-Planet Statistical Datasets over the next month. See the Data Planet blog to find out more information and where you can see their products in action.
Nowadays, everyone seems to have a mobile device. Over 80% of internet users who own a smartphone use it to access the internet. Almost 60% of the total digital users in the U.S. visit the web through a variety of mobile devices (comScore). We’re in a global, mobile wave, expected to continue with a projected 50 billion connected devices by 2020 (Cisco). That’s 10 zeros of mobile connection worldwide!
Deep Web Technologies is on board and ready for the mobile wave. We believe that Explorit Everywhere! clients should have the best possible viewing experience, no matter what device they are coming from – their desktop computer, smartphone, or tablet. With mobility in mind, we recently transformed Explorit Everywhere! so that access is easy and mobile friendly, wherever you are and from whatever device you are using.
Our revamped Explorit Everywhere! applications take advantage of responsive design. Responsive websites adjust the layout and resize content to optimize the user’s experience, regardless of the device they are coming from. With over 100 different device screen resolutions worldwide (and growing), Deep Web Technologies developed Explorit Everywhere! to detect the screen resolution our users are coming from and automatically adapt the content to fit the device. Of course, a little extra magic takes place behind the scenes to tailor the robust Explorit Everywhere! features to the various devices and screen resolutions. We want to make sure that usability for all of Explorit Everywhere! is optimized for every size and shape.
The updated Explorit Everywhere! uses a single URL shared across smartphones, tablets, and desktop computers. Some mobile strategies require developers to create both a mobile site and a desktop site, each with its own URL. That approach does not automatically detect the device and adapt the page to fit it. For example, if a user shares a link from a mobile-only site with a friend on a desktop, the desktop browser will load the mobile URL it received and bring up the mobile website, which likely won’t display well on a desktop screen. Explorit Everywhere! solves this problem by automatically identifying the screen – mobile or desktop – so users can share the same link across all devices.
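The contrast between the two strategies above can be sketched in a few lines. This is purely illustrative (the URLs and user-agent keywords are made up, and real device detection is more nuanced): the old approach hands different users different URLs, while the responsive approach gives everyone one URL and adapts the layout client-side.

```python
# Illustrative contrast between separate mobile/desktop URLs and a
# single responsive URL. All URLs and keywords here are hypothetical.

MOBILE_KEYWORDS = ("iphone", "android", "mobile")

def separate_url_strategy(user_agent):
    """Old approach: redirect by device, so shared links can break."""
    if any(k in user_agent.lower() for k in MOBILE_KEYWORDS):
        return "http://m.example.org/search"
    return "http://example.org/search"

def responsive_strategy(user_agent):
    """Responsive approach: one URL for every device; the layout
    adapts in the browser, so shared links work everywhere."""
    return "http://example.org/search"

# A mobile user sharing separate_url_strategy's link sends a friend
# on a desktop to the mobile site; responsive_strategy never does.
```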
Want to see Explorit Everywhere! on your mobile device? Mednar.com, a public medical search portal, offers a peek at Explorit Everywhere! with the new, responsive design. When viewed from your desktop computer, you’ll see the full screen with clusters on the left, tabs, filter options and tools. However, from a mobile device, those features are tucked into neat icons at the top of the results, maximizing screen space and simplifying your viewing experience. Once you see results you can view them as they appear, grouped thematically (corresponding to desktop tabs, such as Medical or Patents), or by topic clusters. You can still filter the results by rank, date, title or author, email the results, view the search summary or select results to email, save, print or export just as you would on the desktop screen.
With responsive design and a few nifty optimizations, Deep Web Technologies is surfing the mobile wave and ensuring Explorit Everywhere! users can access their application of choice from wherever their mobile connection takes them.
A question we hear regularly is, “Why doesn’t Explorit Everywhere! return all of the results from every source that is searched?” For example, if a user goes directly to a source to search, they may find thousands of results for their query. But, performing the same search on an Explorit Everywhere! application may only return 100 results from that source. Why aren’t we returning the thousands of results like the source does?
DWT specifically returns up to 100 of the top results (unless our customer specifies that we return more) from each source to ultimately avoid overloading the user with information that may not be relevant to their search. Because the majority of Explorit Everywhere! applications have at least 10 sources of information, if each of those 10 sources returns 100 results, then the user will see 1000 results for their query, divided into a default of 20 results per page, or 50 pages of results total. Each of those results has been ranked as relevant to the user’s query with the most relevant results across all of the sources on page 1. Of course, we hope that the gold nugget is right at the top, front and center. (You can read more about how DWT ranks results on this post – Ranking: The Secret Sauce for Searching the Deep Web.) But, if we returned all of the results from all of the sources, then the total number of results and total number of pages increases to a dizzying number.
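The arithmetic above (10 sources × up to 100 results each ÷ 20 per page = 50 pages) is easy to make concrete. This sketch caps each source at its top results before merging, mirroring the defaults described in the text; the score field is a stand-in for Explorit Everywhere!’s real relevance ranking.

```python
# Sketch of the per-source cap and pagination math described above.
import math

def cap_and_merge(result_lists, cap=100):
    """Keep each source's top `cap` results, then merge by score."""
    merged = []
    for results in result_lists:
        top = sorted(results, key=lambda r: r["score"], reverse=True)[:cap]
        merged.extend(top)
    merged.sort(key=lambda r: r["score"], reverse=True)
    return merged

def total_pages(num_results, per_page=20):
    """Number of result pages at `per_page` results per page."""
    return math.ceil(num_results / per_page)

# 10 sources x 100 results each = 1000 results = 50 pages of 20.
assert total_pages(10 * 100) == 50
```

Without the cap, a source returning thousands of hits would balloon the page count into the hundreds, which is exactly the overload the cap is meant to prevent.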
We know from the countless SEO studies on Google’s results placement that the majority of consumers using Google rarely click to the second page of results. In fact, page 2 and 3 may get only around 6% of the clicks on any given search. (Marketing Land). While the likelihood of next page clicks does go up with age and education, we’ve found that it is unlikely even for our erudite Explorit Everywhere! users to click through 50 pages of results. Rather, most researchers will perform a new search or refine their search if they don’t find what they are looking for within the first couple of pages.
And while we’re being honest, although Google may say that they have found millions of results for your search, they actually don’t return all of those results for you to view. If you set up Google to display 100 results per page, and then perform a search for “climate change” you may see that Google found about 142,000,000 results, and that there are 7 pages available to scroll through. However, once you get to page 4, you are unable to scroll further, and will see this message: “In order to show you the most relevant results, we have omitted some entries very similar to the 354 already displayed. If you like, you can repeat the search with the omitted results included.” Try clicking that link to see how many results you actually get – probably around 900 results, or 9 pages. So much for the millions that are available! Even Google tries to avoid overloading users by limiting the vast number of results for broad queries.
For those researchers preferring to narrow their research to just one or two relevant sources on Explorit Everywhere!, it’s possible that 100 results from each source may not be enough. If a source has particularly relevant results, then we suggest capitalizing on that information by going directly to that source to continue searching. Use Explorit Everywhere! as a tool to not only find relevant results, but to find relevant sources of information to further your information discovery.
African research is on the rise, with the amount of research in the Science, Technology, Engineering and Math (STEM) fields doubling between 2003 and 2012 (World Bank). While African researchers currently produce only 1 percent of the world’s research, the quality of that research is measurably improving. And that’s where the United Nations Economic Commission for Africa (UNECA) is stepping in to help. UNECA launched their Explorit Everywhere! federated search application this month, aiming to improve opportunities for scientific discovery in Africa as part of their ASKIA Initiative:
“The Access to Scientific and Socio-economic Information in Africa (ASKIA) Initiative is under the Public Information & Knowledge Management Division of the United Nations Economic Commission for Africa. It defines a framework for bringing together scientific and socio-economic information for the African community, including scientists, researchers, academics, students, economists, and policy-makers, over an interactive online portal acting as a one-stop shop to such knowledge and associated information from/on Africa. The overall goal of the initiative is to strengthen knowledge discovery and access by tapping into global scientific and socio-economic knowledge on and from Africa.”
The launch of UNECA’s customized Explorit Everywhere! application offers users a rich digital search experience from any screen, and the ability to search and translate results into four different languages. The next-generation ASKIA portal, based on responsive design, offers an innovative multilingual search experience to African users.
With an estimated 120 million French speakers in Africa, another six African countries speaking Portuguese, and English serving as the dominant language of scientific research, the ASKIA application offers the ability to search and display results in these three languages, as well as Spanish, with the click of a button, even if the sources are in different languages. Queries are automatically translated into the language of each source being searched, and results are translated back into the user’s language when they are returned. Other languages may be considered for future upgrades.
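The round-trip translation flow described above can be sketched simply. Here `translate` is a placeholder standing in for a real machine-translation service (ASKIA’s portal uses Microsoft’s machine translation); the bracketed tags just make the query-out, results-back flow visible.

```python
# Sketch of multilingual federated search: translate the query into each
# source's language, search, then translate results back to the user.

def translate(text, src, dst):
    # Placeholder for a real MT service; tags the text to show the flow.
    return text if src == dst else f"[{src}->{dst}] {text}"

def multilingual_search(query, user_lang, sources):
    """`sources` is a list of (name, language) pairs."""
    results = []
    for name, source_lang in sources:
        native_query = translate(query, user_lang, source_lang)
        hit = f"{name}: result for '{native_query}'"  # stand-in for a real search
        results.append(translate(hit, source_lang, user_lang))
    return results

for r in multilingual_search("solar energy", "en", [("Source A", "fr"), ("Source B", "en")]):
    print(r)
```

The design point is that translation happens twice per foreign-language source: once outbound on the query and once inbound on the results, so the user never leaves their own language.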
In order to reach the ever-growing group of mobile users in Africa, UNECA’s application was built with responsive design. The application automatically detects the device a user is coming from and adapts the display appropriately, loading a mobile-friendly version for tablets and smartphones and a desktop page otherwise. With smartphone and tablet use expected to increase 20-fold in Africa over the next five years, surpassing online activity on laptops and desktop computers, this was an essential update to broaden the reach of the application to users across Africa.
Also included in the new launch is the MyASKIA feature. MyASKIA implements the Explorit Everywhere! MyLibrary feature, allowing users to save and tag selected results under their account. At any time, users can email, export, print or download their results.
UNECA’s vision is far reaching and they plan on integrating other features in the future to help capture, manage and disseminate local content. Deep Web Technologies is proud to support UNECA with their ASKIA initiative, promoting information dissemination and discovery across Africa.
Explore their search here: http://askia.uneca.org/askia/
In a highly cited September 2001 article, The Deep Web: Surfacing Hidden Value, Michael Bergman coined the term “Deep Web” and wrote:
Searching on the Internet today can be compared to dragging a net across the surface of the ocean. While a great deal may be caught in the net, there is still a wealth of information that is deep, and therefore, missed. The reason is simple: Most of the Web’s information is buried far down on dynamically generated sites, and standard search engines never find it.
In February 2002, just a few months after Michael Bergman published this article, I saw the huge potential of the “Deep Web” for providing access to a wealth of high-quality content not available via search engines such as Google, so I incorporated Deep Web Technologies that year. The “Deep Web” was a more accurate term for what had been referred to for a number of prior years as the “Hidden Web” or the “Invisible Web”. I’m not sure who eventually coined the term “Dark Web” or when. One early reference I found was a chapter in a book on Intelligence and Security Informatics published in 2005: “The Dark Web Portal Project: Collecting and Analyzing the Presence of Terrorist Groups on the Web.”
Everything was mostly good until October 2013, when the FBI shut down the Silk Road website, a Dark Web eBay-style marketplace for selling illegal drugs, stolen credit cards and other nefarious items. Since the takedown of Silk Road, a plethora of articles have been published that refer to the Dark Web as the Deep Web, leading to a lot of confusion and heartache for the CEO of one company in particular, Deep Web Technologies.
In November 2013, following a Time Magazine cover story on the Secret Web, which soon came to be referenced as the Deep Web, I wrote a letter to the editor of Time and followed it with the blog article “The Deep Web isn’t all drugs, porn and murder”, to no avail.
In the past few months, following the announcement of DARPA’s Memex project, which states as its goal, “Creation of a new domain-specific indexing and search paradigm will provide mechanisms for improved content discovery, information extraction, information retrieval, user collaboration, and extension of current search capabilities to the deep web, the dark web, and nontraditional (e.g. multimedia) content,” many more articles have been published equating the “deep web” and the “dark web”, such as the following article about NASA’s efforts to leverage the Memex work: NASA has big plans for DARPA’s scary “Deep Web”.
What prompted me to write this blog article is that I learned a few days ago that Epix has produced a documentary, to be released on May 31, 2015, titled Deep Web.
“Extending far beyond the confines of Google and Facebook, there is a vast section of the World Wide Web that is a hidden alternate internet. Appropriately named the Deep Web, this mysterious and complex cyberspace serves as an outlet for anonymous communication and was home to Silk Road, the online black market notorious for drug trafficking. The intricacies of this concealed cyber realm caught the attention of the general public with the October 2013 arrest of Ross William Ulbricht – the convicted 30-year-old entrepreneur accused to be ‘Dread Pirate Roberts,’ the online pseudonym of the Silk Road leader. Making its World Television Premiere this spring, Deep Web – an EPIX Original Documentary written, directed and produced by Alex Winter (Downloaded) – seeks to unravel this tangled web of secrecy, accusations, and criminal activity, and explores how the outcome of Ulbricht’s trial will set a critical precedent for the future of technological freedom around the world.”
Clearly Dark Web would be a more appropriate title for this documentary and might attract a bigger audience than Deep Web, but I’m not so fortunate. What am I to do?