I recently had a conversation with a VC who brought up the acronym “SMAC”. SMAC, he explained, stands for Social, Mobile, Analytics and Cloud, and these four areas are red-hot with investors right now.
In a May 2014 Forbes blog article, Ravi Puri, Senior Vice President of North America Oracle Consulting Services, defined SMAC and observed: “The convergence of these trends is creating a coming wave of disruption that will let companies drive improved customer satisfaction, sustainable competitive advantage and significant growth in enterprise value—but only if you are ready for it.”
More recently, Casey Galligan, Morgan Stanley Wealth Management Market Strategist, advises investors not to shy away from this sector and to invest in leading SMAC companies, writing: “We believe that companies levered to these key secular growth areas will continue to be differentiators.”
It is an exciting time to be at Deep Web Technologies. We have been working in a number of these areas for a while now and are poised to make significant contributions to advancing the state of the art in all four SMAC technology areas, both directly and through partners, in the years ahead. Let me give you some examples:
- Social – At its heart, Explorit Everywhere! connects people to information. That’s one reason Explorit Everywhere! integrates naturally with social networking sites. These sites offer rich information to end users in the form of opinions, rants, new developments, scientific breakthroughs and more. An organization may have a variety of social networks supporting its philosophy and marketing its brand, such as Twitter, Facebook, LinkedIn, Pinterest, and blogs, all rife with interesting and useful tidbits for marketing folks, researchers, students and other professionals alike. Explorit Everywhere! can search all of these networks for relevant information in five seconds or less. To close the loop, Explorit Everywhere! lets users share what they’ve found back to their own networks, fulfilling the number one rule of thumb for social networks: share and share alike. Social integration engages users and simplifies searching and posting across multiple networks.
- Mobile – The mobile wave is more than a fad; it’s the future. As we mentioned in our previous post, Explorit Everywhere! Goes Mobile, by the year 2020 we may see around 50 billion connected devices slinging information around the world. When it comes to mobility, we needed Explorit Everywhere! to be flexible and device-aware, with an ultra-sleek user interface. Advances in mobile technology require that we stay up to date, and Explorit Everywhere! accomplishes this through responsive design and vigilance about the new devices accessing our application.
- Analytics – Explorit Everywhere!’s statistics package has been collecting usage statistics for years, enabling our clients to maximize the ROI of the content they license. Deep Web Technologies is an expert at gathering information from multiple sources, aggregating the results and categorizing them into concepts that expand the breadth of a researcher’s information. Beyond that, Explorit Everywhere! can feed the pinpoint information it retrieves into best-of-breed analytical tools and software for further filtering and sifting. Explorit Everywhere! complements big data dashboards by funneling a broad swath of relevant material down the pipe for further analysis. On the front end, Explorit Everywhere! can also enhance what the user sees in the dashboard with complementary information drawn from a variety of sources, both internal and external to an organization.
- Cloud – Enterprise search is moving toward the cloud, and with it, silos of information get lost in the cloud. Explorit Everywhere! performs a real-time search of multiple databases across multiple clouds of information, together with information residing in corporate silos that have not yet moved to the cloud. These clouds may be inside or outside the firewall, and they often stump indexers because of the nature of their resources. Explorit Everywhere! connects to the databases wherever they are, making the world a much smaller place.
Explorit Everywhere!’s integrated SMAC features create a holistic search experience, ensuring that our clients are at the forefront of technology, not trailing behind the curve. With the best of this generation and next-generation technology, Explorit Everywhere! clients are part of the changing technology scene. We’re not just riding the mobile wave; we’re regularly improving connections to social networks, tuning our analytics and simplifying our cloud-based technology. The process of finding the most current information will keep shifting as the future unfurls. Explorit Everywhere! will leverage SMAC and other next-generation technologies to embrace new concepts, connect with data wherever it may sit, and engage our users. Explorit Everywhere! is state-of-the-search.
Data Planet was reviewed in April by the Charleston Advisor, a highly regarded critical review resource for libraries. Deep Web Technologies and Data Planet teamed up several years ago to create Data-Planet Statistical Ready Reference, designed to be a more user-friendly interface for finding and extracting data from their extensive repository and flagship product, Data-Planet Statistical Datasets.
“Data-Planet Statistical Ready Reference is designed to allow users to quickly navigate the 18.9 billion points of data contained in the repository, representing 3.9 billion time series covering thousands of geographic entities. With Data-Planet Statistical Ready Reference, users can quickly search and view charts, maps, and rankings of time series at the country, state, county, MSA, postal code, and census-tract/block group levels. All of the data are drawn from authoritative sources and are citable. The product provides high-level summary information as well as detailed line item views.”
Deep Web Technologies worked closely with Data Planet to create the Data Planet Statistical Ready Reference application and later, the Data Planet Related Content application (not referenced in the review). Data Planet Statistical Ready Reference is an Explorit Everywhere! custom user interface that retrieves results via the Data Planet API. Extensive work was done to create a simple UI for searching the data, and to present results with accompanying information such as graphs, charts and statistics. To enhance Ready Reference, an additional application, Related Content, was created to perform a Deep Web search so users can research topics beyond the Data Planet database. For example, a Ready Reference Geographical search for “New York” and a Subject search for “Airports” return results from only the Data Planet database. Clicking on a result link takes you to a Data Sheet with Sources, Dataset, graphs, charts and Subject Terms. From the Data Sheet, however, you can continue to research your topic by clicking on the Related Content section – News or Scholarly – which opens a federated search application of selected Deep Web resources to retrieve related results.
Data Planet received a composite score of 4 1/8 stars out of 5. Both Data-Planet Statistical Ready Reference and Data-Planet Statistical Datasets were included in the review, which judged Content, User Interface/Searchability, Pricing, and Contract Options. The reviewer, Jennifer Starkey of Oberlin College in Ohio, writes: “Data-Planet rates highly in comparison, with its broad coverage of subjects, focus on time series data, provision of raw data that can be downloaded or viewed using the analytical tools, and the overall number of data sources available.”
This isn’t the first time that the Charleston Advisor has taken a close look at DWT. In 2012, Grace Baysinger, Head Librarian and Bibliographer at the Swain Chemistry and Chemical Engineering Library, and Tom Cramer, Chief Technology Strategist at Stanford University Libraries and Academic Information Resources, gave Deep Web Technologies 4 3/8 out of 5 stars, based on their experience with Deep Web Technologies’ product. We’re still going strong!
Data Planet plans to roll out improvements to Data-Planet Statistical Datasets over the next month. See the Data Planet blog for more information and to find out where you can see their products in action.
Nowadays, everyone seems to have a mobile device. Over 80% of internet users who own a smartphone use it to access the internet, and almost 60% of all digital users in the U.S. visit the web through a variety of mobile devices (comScore). We’re in a global mobile wave, expected to continue with a projected 50 billion connected devices by 2020 (Cisco). That’s a 5 followed by ten zeros of mobile connections worldwide!
Deep Web Technologies is on board and ready for the mobile wave. We believe that Explorit Everywhere! clients should have the best possible viewing experience, no matter what device they are coming from – their desktop computer, smartphone, or tablet. With mobility in mind, we recently transformed Explorit Everywhere! so that access is easy and mobile friendly, wherever you are and from whatever device you are using.
Our revamped Explorit Everywhere! applications take advantage of responsive design. Responsive design websites adjust the layout and resize content to optimize the user’s experience, regardless of the device they are coming from. With over 100 different device screen resolutions worldwide (and growing), Deep Web Technologies developed Explorit Everywhere! to detect the screen resolution our users are coming from and automatically adapt the content to fit the screen of the device. Of course, there is a little extra magic that takes place behind the scenes to tailor the robust Explorit Everywhere! features to the various devices and screen resolutions. We want to make sure that the usability of all of Explorit Everywhere! is optimized for every size and shape.
The updated Explorit Everywhere! uses a single URL across smartphones, tablets, and desktop computers. Some mobile strategies require developers to maintain both a mobile site and a desktop site, each with its own URL. That approach cannot automatically adapt a page to the device: if a user shares a link from a mobile-only site with a friend on a desktop, the friend’s browser loads the mobile URL it received and brings up the mobile website, which likely won’t display well on a desktop screen. Explorit Everywhere! solves this problem by automatically identifying the screen – mobile or desktop – so users can share the same link across all devices.
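In spirit, responsive design boils down to mapping a detected screen width to a layout tier, so that a single URL can serve every device. Here is a minimal sketch of that idea; the breakpoint values and tier names are hypothetical illustrations, not Explorit Everywhere!’s actual configuration:

```python
# Illustrative only: map a viewport width to a layout tier, the way a
# responsive site picks a layout for phones, tablets, and desktops.
# Breakpoint values here are hypothetical.

BREAKPOINTS = [
    (480, "phone"),    # up to 480px wide: single column, features tucked into icons
    (1024, "tablet"),  # up to 1024px wide: condensed layout
]

def layout_for(width_px: int) -> str:
    """Pick a layout tier for the given screen width; anything wider is desktop."""
    for max_width, tier in BREAKPOINTS:
        if width_px <= max_width:
            return tier
    return "desktop"
```

Because the decision happens at display time rather than in the URL, a link shared from a phone renders the full desktop layout when opened on a desktop, and vice versa.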
Want to see Explorit Everywhere! on your mobile device? Mednar.com, a public medical search portal, offers a peek at Explorit Everywhere! with the new, responsive design. When viewed from your desktop computer, you’ll see the full screen with clusters on the left, tabs, filter options and tools. However, from a mobile device, those features are tucked into neat icons at the top of the results, maximizing screen space and simplifying your viewing experience. Once you see results you can view them as they appear, grouped thematically (corresponding to desktop tabs, such as Medical or Patents), or by topic clusters. You can still filter the results by rank, date, title or author, email the results, view the search summary or select results to email, save, print or export just as you would on the desktop screen.
With responsive design and a few nifty optimizations, Deep Web Technologies is surfing the mobile wave and ensuring Explorit Everywhere! users can access their application of choice from wherever their mobile connection takes them.
A question we hear regularly is, “Why doesn’t Explorit Everywhere! return all of the results from every source that is searched?” For example, if a user goes directly to a source to search, they may find thousands of results for their query. But, performing the same search on an Explorit Everywhere! application may only return 100 results from that source. Why aren’t we returning the thousands of results like the source does?
DWT specifically returns up to 100 of the top results (unless our customer specifies that we return more) from each source to ultimately avoid overloading the user with information that may not be relevant to their search. Because the majority of Explorit Everywhere! applications have at least 10 sources of information, if each of those 10 sources returns 100 results, then the user will see 1000 results for their query, divided into a default of 20 results per page, or 50 pages of results total. Each of those results has been ranked as relevant to the user’s query with the most relevant results across all of the sources on page 1. Of course, we hope that the gold nugget is right at the top, front and center. (You can read more about how DWT ranks results on this post – Ranking: The Secret Sauce for Searching the Deep Web.) But, if we returned all of the results from all of the sources, then the total number of results and total number of pages increases to a dizzying number.
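The per-source cap and merged ranking described above can be sketched in a few lines. This is an illustrative simplification, not DWT’s actual ranking code; the scoring field and data are made up, and real relevance ranking is far more involved:

```python
# Sketch of capping, merging, and paginating federated results.
# The per-source cap of 100 and the 20-results-per-page default mirror
# the behavior described in the post; everything else is hypothetical.

from itertools import islice

PER_SOURCE_CAP = 100
PAGE_SIZE = 20

def merge_results(results_by_source):
    """Take up to 100 results from each source, then rank the merged list
    so the most relevant results across all sources land on page 1."""
    merged = []
    for source, results in results_by_source.items():
        merged.extend(islice(results, PER_SOURCE_CAP))
    return sorted(merged, key=lambda r: r["score"], reverse=True)

def page(merged, n):
    """Return page n (1-indexed) of the merged result list."""
    start = (n - 1) * PAGE_SIZE
    return merged[start:start + PAGE_SIZE]
```

With ten sources capped at 100 results each, the merged list tops out at 1,000 results, or 50 pages, which is exactly why the cap matters.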
We know from the countless SEO studies on Google’s results placement that the majority of consumers using Google rarely click to the second page of results. In fact, page 2 and 3 may get only around 6% of the clicks on any given search. (Marketing Land). While the likelihood of next page clicks does go up with age and education, we’ve found that it is unlikely even for our erudite Explorit Everywhere! users to click through 50 pages of results. Rather, most researchers will perform a new search or refine their search if they don’t find what they are looking for within the first couple of pages.
And while we’re being honest, although Google may say that they have found millions of results for your search, they actually don’t return all of those results for you to view. If you set up Google to display 100 results per page, and then perform a search for “climate change” you may see that Google found about 142,000,000 results, and that there are 7 pages available to scroll through. However, once you get to page 4, you are unable to scroll further, and will see this message: “In order to show you the most relevant results, we have omitted some entries very similar to the 354 already displayed. If you like, you can repeat the search with the omitted results included.” Try clicking that link to see how many results you actually get – probably around 900 results, or 9 pages. So much for the millions that are available! Even Google tries to avoid overloading users by limiting the vast number of results for broad queries.
For those researchers preferring to narrow their research to just one or two relevant sources on Explorit Everywhere!, it’s possible that 100 results from each source may not be enough. If a source has particularly relevant results, then we suggest capitalizing on that information by going directly to that source to continue searching. Use Explorit Everywhere! as a tool to not only find relevant results, but to find relevant sources of information to further your information discovery.
African research is on the rise: output in the science, technology, engineering and math (STEM) fields doubled between 2003 and 2012 (World Bank). While African researchers currently produce only 1 percent of the world’s research, the quality of that research is measurably improving. And that’s where the United Nations Economic Commission for Africa (UNECA) is stepping in to help. UNECA launched their Explorit Everywhere! federated search application this month, aiming to improve opportunities for scientific discovery in Africa as part of their ASKIA Initiative:
“The Access to Scientific and Socio-economic Information in Africa (ASKIA) Initiative is under the Public Information & Knowledge Management Division of the United Nations Economic Commission for Africa. It defines a framework for bringing together scientific and socio-economic information for the African community, including scientists, researchers, academics, students, economists and, policy-makers, over an interactive online portal acting as a one-stop shop to such knowledge and associated information from/on Africa. The overall goal of the initiative is to strengthen knowledge discovery and access by tapping into global scientific and socio-economic knowledge on and from Africa.”
The launch of UNECA’s customized Explorit Everywhere! application offers users a rich digital search experience from any screen, and the ability to search and translate results into four different languages. The next-generation ASKIA portal, based on responsive design, offers an innovative multilingual search experience to African users.
With an estimated 120 million people in Africa speaking French, another six African countries speaking Portuguese, and English the dominant language of scientific research, the ASKIA application offers the ability to search and display results in these three languages, as well as Spanish, with the click of a button, even if the sources are in different languages. Queries are automatically translated into the language of each source being searched, and results are translated back into the language of the user when they are returned. Other languages may be considered for future upgrades.
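The query/result round trip behind that multilingual search can be sketched as follows. This is a conceptual illustration only: `translate` is a stand-in for a real machine-translation service, and the source structure is invented for the example:

```python
# Sketch of multilingual federated search: translate the query into each
# source's language, search, then translate hits back to the user's language.
# `translate` is a hypothetical placeholder for a real translation API.

def translate(text, src, dst):
    # Placeholder: tag the text so the round trip is visible in the output.
    return f"[{src}->{dst}] {text}"

def multilingual_search(query, user_lang, sources):
    """Search each source in its own language and return results
    translated back into the user's language."""
    results = []
    for source in sources:
        native_query = translate(query, user_lang, source["lang"])
        for hit in source["search"](native_query):
            results.append(translate(hit, source["lang"], user_lang))
    return results
```

The user types one query in one language and, as in ASKIA, never has to know which language each underlying source speaks.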
In order to reach the ever-growing group of mobile users in Africa, UNECA’s application was built with responsive design. Using this platform, the application will automatically detect the browser a user is coming from and adapt the display appropriately, loading a mobile friendly version of the application for tablets and smartphones and a desktop page otherwise. With smartphone and tablet growth expected to increase 20-fold in Africa over the next five years, trumping online activities normally performed on laptops or desktop computers, this was an essential update to broaden the reach of the application to users across Africa.
Also included in the new launch is the MyASKIA feature. MyASKIA implements the Explorit Everywhere! MyLibrary feature allowing users to save and tag selected results under their account. At any time users can email, export, print or download their results.
UNECA’s vision is far reaching and they plan on integrating other features in the future to help capture, manage and disseminate local content. Deep Web Technologies is proud to support UNECA with their ASKIA initiative, promoting information dissemination and discovery across Africa.
Explore their search here: http://askia.uneca.org/askia/
In a highly cited September 2001 article, The Deep Web: Surfacing Hidden Value, Michael Bergman coined the term “Deep Web” and wrote:
Searching on the Internet today can be compared to dragging a net across the surface of the ocean. While a great deal may be caught in the net, there is still a wealth of information that is deep, and therefore, missed. The reason is simple: Most of the Web’s information is buried far down on dynamically generated sites, and standard search engines never find it.
In February 2002, just a few months after Michael Bergman published this article, I saw the huge potential of the “Deep Web” for providing access to a wealth of high-quality content not available via search engines such as Google, so I incorporated Deep Web Technologies that year. “Deep Web” was a more accurate term for what had been referred to in prior years as the “Hidden Web” or the “Invisible Web”. I’m not sure who eventually coined the term “Dark Web”, or when. One early reference I found was a chapter in a book on Intelligence and Security Informatics published in 2005: “The Dark Web Portal Project: Collecting and Analyzing the Presence of Terrorist Groups on the Web.”
Everything was mostly good until October 2013, when the FBI shut down the Silk Road website, a Dark Web eBay-style marketplace for selling illegal drugs, stolen credit cards and other nefarious items. Since the take-down of Silk Road, a plethora of articles have referred to the Dark Web as the Deep Web, leading to a lot of confusion and heartache for the CEO of one company in particular, Deep Web Technologies.
In November 2013, following a Time Magazine cover story on the Secret Web, which was soon being referred to as the Deep Web, I wrote a letter to the editor of Time and followed it with the blog article The Deep Web isn’t all drugs, porn and murder, to no avail.
In the past few months, following the announcement of DARPA’s Memex project – which states as its goal, “Creation of a new domain-specific indexing and search paradigm will provide mechanisms for improved content discovery, information extraction, information retrieval, user collaboration, and extension of current search capabilities to the deep web, the dark web, and nontraditional (e.g. multimedia) content” – many more articles have been published equating the “deep web” and the “dark web,” such as this article about NASA’s efforts to leverage Memex: NASA has big plans for DARPA’s scary “Deep Web”.
What prompted me to write this blog article is that I learned a few days ago that Epix has produced a documentary, titled Deep Web, that is going to be released on May 31, 2015.
“Extending far beyond the confines of Google and Facebook, there is a vast section of the World Wide Web that is a hidden alternate internet. Appropriately named the Deep Web, this mysterious and complex cyberspace serves as an outlet for anonymous communication and was home to Silk Road, the online black market notorious for drug trafficking. The intricacies of this concealed cyber realm caught the attention of the general public with the October 2013 arrest of Ross William Ulbricht – the convicted 30-year-old entrepreneur accused to be ‘Dread Pirate Roberts,’ the online pseudonym of the Silk Road leader. Making its World Television Premiere this spring, Deep Web – an EPIX Original Documentary written, directed and produced by Alex Winter (Downloaded) – seeks to unravel this tangled web of secrecy, accusations, and criminal activity, and explores how the outcome of Ulbricht’s trial will set a critical precedent for the future of technological freedom around the world.”
Clearly, Dark Web would be a more appropriate title for this documentary, and might even attract a bigger audience than Deep Web, but I’m not so fortunate. What am I to do?
The Beagle Research Group Blog posted “Apple iWatch: What’s the Killer App” on March 10, including this line: “An alert might also come from a pre-formed search that could include a constant federated search and data analysis to inform the wearer of a change in the environment that the wearer and only a few others might care about, such as a buy or sell signal for a complex derivative.” While this enticing suggestion is just a snippet in a full post, we thought we’d consider the possibilities this one-liner presents. Could federated search become the next killer app?
Well no, not really. Federated search in and of itself isn’t an application; it’s more of a supporting technology. It supports real-time searching, rather than indexing, and provides current information on fluctuating data such as weather, stocks, flights, etc. And that is exactly why it’s killer: federated search finds new information of any kind, anywhere, singles out the most precise data to display, and notifies the user to take a look.
In other words, it’s a great technology for mobile apps to use. Federated search connects directly to the source of the information, whether medical, energy, academic journals, social media, weather, etc., and finds information as soon as it’s available. Rather than storing information away, federated search links a person to the data circulating that minute, passing on the newest details as soon as they are available, which makes a huge difference with need-to-know information. In addition, alerts can be set up to notify the person, researcher, or iWatch wearer of critical data such as a buy or sell signal, as The Beagle Research Group suggests.
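A mobile app built on this idea might query several live sources in parallel and fire an alert when a hit matches the wearer’s criteria. Here is a minimal sketch of that pattern; the connector functions and alert condition are invented for illustration, and a real deployment would query live databases over the network:

```python
# Sketch of real-time federated search with an alert callback.
# Connectors here are plain functions standing in for live source queries.

from concurrent.futures import ThreadPoolExecutor

def federated_search(query, connectors, alert=None, alert_if=None):
    """Query every connector in parallel; fire `alert` for matching hits."""
    with ThreadPoolExecutor(max_workers=len(connectors)) as pool:
        futures = [pool.submit(connector, query) for connector in connectors]
        hits = [hit for future in futures for hit in future.result()]
    if alert and alert_if:
        for hit in hits:
            if alert_if(hit):
                alert(hit)  # e.g. push a notification to the watch
    return hits
```

Because the sources are queried at the moment of the search rather than from a stale index, the alert reflects the information circulating right now, which is the whole point for stock signals or weather.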
Of course, there’s also the issue of real estate to keep in mind – the iWatch wraps less than two inches of display on a wrist. That’s not much room for a hefty list of information, much less junky results. What’s important is that the single most accurate piece of information, hand-picked (so to speak) just for you, pops up on the screen. Again, federated search can make that happen quite easily... it has connections.
There is a world of possibility when it comes to using federated search technology to build applications, whether mobile or for desktop uses. Our on-demand lifestyles require federating, analyzing, and applying all sorts of data, from health, to environment, to social networking. Federated search is not just for librarians finding subscription content anymore. The next-generation federated search is for everyone in need of information on-the-fly. Don’t worry about missing information (you won’t). Don’t worry if information is current (it is). In fact, don’t worry at all. Relax, sit back and get alert notifications to buy that stock, watch the weather driving home, or check out an obscure tweet mentioning one of your hobbies. Your world reports to you what you need to know. And that, really, is simply killer.
Editor’s Note: This is a guest article by Lisa Brownlee. The 2015 edition of her book, “Intellectual Property Due Diligence in Corporate Transactions: Investment, Risk Assessment and Management”, originally published in 2000, will dive into discussions about using the Deep Web and the Dark Web for Intellectual Property research, emphasizing its importance and usefulness when performing legal due-diligence.
Lisa M. Brownlee is a private consultant and has become an authority on the Deep Web and the Dark Web, particularly as they apply to legal due-diligence. She writes and blogs for Thomson Reuters. Lisa is an internationally-recognized pioneer on the intersection between digital technologies and law.
In this blog post I will delve in some detail into the Deep Web. This expedition will focus exclusively on that part of the Deep Web that excludes the Dark Web. I cover both Deep Web and Dark Web legal due diligence in more detail in my blog and book, Intellectual Property Due Diligence in Corporate Transactions: Investment, Risk Assessment and Management. In particular, in this article I will discuss the Deep Web as a resource of information for legal due diligence.
When Deep Web Technologies invited me to write this post, I initially intended to primarily delve into the ongoing confusion regarding Deep Web and Dark Web terminology. The misuse of the terms Deep Web and Dark Web, among other related terms, are problematic from a legal perspective if confusion about those terms spills over into licenses and other contracts and into laws and legal decisions. The terms are so hopelessly intermingled that I decided it is not useful to even attempt untangling them here. In this post, as mentioned, I will specifically cover the Deep Web excluding the Dark Web. The definitions I use are provided in a blog post I wrote on the topic earlier this year, entitled The Deep Web and the Dark Web – Why Lawyers Need to Be Informed.
Deep Web: a treasure trove of data and other information
The Deep Web is populated with vast amounts of data and other information that are essential to investigate during a legal due diligence in order to find information about a company that is a target for possible licensing, merger or acquisition. A Deep Web (as well as Dark Web) due diligence should be conducted in order to ensure that information relevant to the subject transaction and target company is not missed or misrepresented. Lawyers and financiers conducting the due diligence have essentially two options: conduct the due diligence themselves by visiting each potentially relevant database and running each search individually (potentially ad infinitum), or hire a specialized company such as Deep Web Technologies to design and set up such a search. Hiring an outside firm to conduct the search saves time and money.
Deep Web data mining is a science that cannot be mastered by lawyers or financiers in a single transaction or a handful of them. Using a specialized firm such as DWT has the added benefit of being able to replicate the search on demand and/or have ongoing updated searches performed. Additionally, DWT can bring multilingual search capabilities to investigations—a feature that very few, if any, other data mining companies provide and that would most likely be deficient or entirely missing in a search conducted entirely in-house.
What information is sought in a legal due diligence?
A legal due diligence will investigate a wide and deep variety of topics, from real estate to human resources, to basic corporate finance information, industry and company pricing policies, and environmental compliance. Due diligence nearly always also investigates intellectual property rights of the target company, in a level of detail that is tailored to specific transactions, based on the nature of the company’s goods and/or services. DWT’s Next Generation Federated Search is particularly well-suited for conducting intellectual property investigations.
In sum, the goal of a legal due diligence is to identify and confirm basic information about the target company and determine whether there are any undisclosed infirmities in the target company’s assets and information as presented. In view of these goals, the investing party will require the target company to produce a checklist full of items covering the various aspects of the business (and more) discussed above. An abbreviated correlation between the information typically requested in a due diligence and the information available in the Deep Web is provided in the chart attached below. Absent assistance from Deep Web Technologies, either someone within the investor company or its outside counsel will need to search each of the databases listed, in addition to others, to confirm that the information provided by the target company is correct and complete. While the target company typically gives representations and warranties as to the accuracy and completeness of the information provided, it is also typical for the investing company to confirm all or part of that information, depending on the sensitivities of the transaction and the areas in which value, and possible risks, might be uncovered.
Deep Web Legal Due-Diligence Resource List
The April/May 2015 issue of Multilingual.com Magazine features a new article, “Advancing science by overcoming language barriers,” co-authored by DWT’s own Abe Lederman and Darcy Katzman. The article discusses the Deep Web versus the Dark Web, and the technology needed to find results in scientific and technical, multilingual Deep Web databases. It also describes the efforts of the WorldWideScience Alliance to address the global need for multilingual search through the creation of the WorldWideScience.org federated search application.
Think of the Deep Web as more academic — used by knowledge workers, librarians and corporate researchers to access the latest scientific and technical reports, gather competitive intelligence or gain insights from the latest published government data. Most of this information is hidden simply because it has not been “surfaced” to the general public by Google or Bing spiders, or is not available globally because of language barriers. If a publication reaches Google Scholar, chances are it now floats in the broad net of the shallow web, no longer submerged in the Deep Web. A large number of global science publications reside in the Deep Web, accessible only through passwords or subscriptions, or only to native-language speakers. These publications, hiding in the Deep Web, limit the spread of science and discovery.
The current issue of Multilingual.com Magazine is free to view at the time of this post.
March 20th marked the first day of spring. Here in northern New Mexico we have seen signs of spring (and allergies) for over a month. The crocus stretched out of the soil in February marking both a celebratory moment for my family, and one of concern. The weather is already warm and beautiful causing the apricot, plum, and juniper trees to bloom like mad. But because they’ve bloomed so early, will a late freeze wipe out our delicate fruit? And will we all sniffle and sneeze longer from the thick pollen collecting on our cars and sidewalks?
My questions took me to three different federated search engines to see if I could see what “spring” topics were circulating.
On Biznar, a social media and business search engine, I couldn’t help but search out how others were handling their spring allergies. Some dive into the Claritin box, while others go for a Kettlebell workout. My family claims to have zero allergies, although we slyly keep a tissue box handy once the juniper pollen begins to circulate. However, it looks like some research indicates that dairy may offer relief. I shall eat more yogurt from here forth.
Speaking of pollen, Environar, a federated search portal dedicated to life, medical and earth sciences, had excellent research on pollen through the ages. Pollen has been used to document climate cycles, and indicate many other factors such as temperature and precipitation during the past 140,000 years or so. Pollen, atchoo!, is scientifically important.
I particularly enjoyed browsing the government portal, Science.gov, on the effects of climate change on allergies. I thought this interesting from the Annals of the American Thoracic Society found in PubMed regarding a survey on climate change and health: “A majority of respondents indicated they were already observing health impacts of climate change among their patients, most commonly as increases in chronic disease severity from air pollution (77%), allergic symptoms from exposure to plants or mold (58%), and severe weather injuries (57%).” I shall buy more tissue.
While my questions may not have precise answers, I can at least plan ahead at the grocery store when I see high pollen counts – yogurt and tissues. And perhaps I’ll have a new appreciation for the contributions pollen has made to our scientific community.
Explore your Pollen Allergy Forecast at Pollen.com: http://www.pollen.com/allergy-forecast.asp. Happy Spring and Happy Searching!